Spatial visualization ability or visual-spatial ability is the ability to mentally manipulate 2-dimensional and 3-dimensional figures. It is typically measured with simple cognitive tests and is predictive of user performance with some kinds of user interfaces.
The cognitive tests used to measure spatial visualization ability include mental rotation tasks like the Mental Rotations Test or mental cutting tasks like the Mental Cutting Test, and cognitive tests like the VZ-1 (Form Board), VZ-2 (Paper Folding), and VZ-3 (Surface Development) tests from the Kit of Factor-Referenced Cognitive Tests produced by Educational Testing Service. Though the descriptions of spatial visualization and mental rotation sound similar, mental rotation is a particular task that can be accomplished using spatial visualization.[1]
The Minnesota Paper Form Board Test involves giving participants a large shape and a set of smaller shapes; participants must then determine which combination of the small shapes will fill the larger shape completely without overlapping. The Paper Folding test involves showing participants a sequence of folds in a piece of paper, through which a set of holes is then punched. The participants must choose which of a set of unfolded papers with holes corresponds to the one they have just seen.
The Surface Development test involves giving participants a flat shape with numbered sides and a three-dimensional shape with lettered sides and asking the participants to indicate which numbered side corresponds to which lettered side.
The construct of spatial visualization ability was first identified as separate from general intelligence in the 20th century, and its implications for computer system design were identified in the 1980s.
In 1987, Kim Vicente and colleagues ran a battery of cognitive tests on a set of participants and then determined which cognitive abilities correlated with performance on a computerized information search task. They found that the only significant predictors of performance were vocabulary and spatial visualization ability, and that those with high spatial visualization ability were twice as fast to perform the task as those with lower levels of spatial visualization ability.[2]
Older adults tend to perform worse on measures of spatial visualization ability than younger adults, and this effect seems to occur even among people who use spatial visualization frequently on the job, such as architects and surveyors (though they still perform better on the measures than others of the same age). It is, however, possible that the types of spatial visualization used by architects are not measured accurately by the tests.[which?]
According to certain studies, men on average have one standard deviation higher spatial intelligence quotient than women.[3] This domain is one of the few where clear sex differences in cognition appear. Researchers at the University of Toronto say that differences between men and women on some tasks that require spatial skills are largely eliminated after both groups play a video game for only a few hours.[4] Although Herman Witkin had claimed women are more "visually dependent" than men,[5] this has recently been disputed.[6]
The gender difference in spatial ability has been attributed to morphological differences between male and female brains. The parietal lobe is a part of the brain recognized to be involved in spatial ability, especially in 2D and 3D mental rotation. Researchers at the University of Iowa found that the thicker grey matter in the parietal lobe of females led to a disadvantage in mental rotations, and that the larger surface area of the parietal lobe of males led to an advantage in mental rotations. The results found by the researchers support the notion that gender differences in spatial abilities arose during human evolution such that both sexes cognitively and neurologically developed to behave adaptively. However, the effect of socialization and environment on the difference in spatial ability is still open for debate.[7]
Other studies suggest gender differences in spatial thinking may be explained by a stereotype threat effect: the fear of fulfilling stereotypes negatively affects performance, resulting in a self-fulfilling prophecy.[8] | https://en.wikipedia.org/wiki/Spatial_visualization_ability |
A visual language is a system of communication using visual elements. Speech as a means of communication cannot strictly be separated from the whole of human communicative activity, which includes the visual,[1] and the term 'language' in relation to vision is an extension of its use to describe the perception, comprehension and production of visible signs.
An image which dramatizes and communicates an idea presupposes the use of a visual language. Just as people can 'verbalize' their thinking, they can 'visualize' it. A diagram, a map, and a painting are all examples of uses of visual language. Its structural units include line, shape, colour, form, motion, texture, pattern, direction, orientation, scale, angle, space and proportion.
The elements in an image represent concepts in a spatial context, rather than the linear form used for words. Speech and visual communication are parallel and often interdependent means by which humans exchange information.
Visual units in the form of lines and marks are constructed into meaningful shapes and structures or signs. Different areas of the cortex respond to different elements such as colour and form. Semir Zeki[2] has shown the responses in the brain to the paintings of Michelangelo, Rembrandt, Vermeer, Magritte, Malevich and Picasso.
What we have in our minds in a waking state and what we imagine in dreams is very much of the same nature.[3] Dream images might be with or without spoken words, other sounds or colours. In the waking state there is usually, in the foreground, the buzz of immediate perception, feeling and mood, as well as fleeting memory images.[4] In a mental state between dreaming and being fully awake is a state known as 'day dreaming' or a meditative state, during which "the things we see in the sky when the clouds are drifting, the centaurs and stags, antelopes and wolves" are projected from the imagination.[5] Rudolf Arnheim[6] has attempted to answer the question: what does a mental image look like? In Greek philosophy, the School of Leucippus and Democritus believed that a replica of an object enters the eye and remains in the soul as a memory as a complete image. Berkeley explained that parts, for example, a leg rather than the complete body, can be brought visually to the mind. Arnheim considers the psychologist Edward B. Titchener's account to be the breakthrough in understanding something of how the vague incomplete quality of the image is 'impressionistic' and carries meaning as well as form.
Abstract art has shown that the qualities of line and shape, proportion and colour convey meaning directly without the use of words or pictorial representation. Wassily Kandinsky[7] showed how drawn lines and marks can be expressive without any association with a representational image. From the most ancient cultures and throughout history visual language has been used to encode meaning: "The Bronze Age Badger Stone on Ilkley Moor is covered in circles, lines, hollow cups, winged figures, a spread hand, an ancient swastika, an embryo, a shooting star? … It's a story-telling rock, a message from a world before (written) words."[8] Richard Gregory suggests that, "Perhaps the ability to respond to absent imaginary situations," as our early ancestors did with paintings on rock, "represents an essential step towards the development of abstract thought."[9]
The sense of sight operates selectively. Perception is not a passive recording of all that is in front of the eyes, but is a continuous judgement of scale and colour relationships,[10] and includes making categories of forms to classify images and shapes in the world.[11] Children of six to twelve months come to be able, through experience and learning, to discriminate between circles, squares and triangles. The child from this age onwards learns to classify objects, abstracting essential qualities and comparing them to other similar objects. Before objects can be perceived and identified the child must be able to classify the different shapes and sizes that a single object may appear to have when it is seen in varying surroundings and from different aspects.[12]
The perception of a shape requires the grasping of the essential structural features, to produce a "whole" or gestalt. The theory of the gestalt was proposed by Christian von Ehrenfels in 1890. He pointed out that a melody is still recognisable when played in different keys and argued that the whole is not simply the sum of its parts but a total structure. Max Wertheimer researched von Ehrenfels' idea, and in his "Theory of Form" (1923) – nicknamed "the dot essay" because it was illustrated with abstract patterns of dots and lines – he concluded that the perceiving eye tends to bring together elements that look alike (similarity groupings) and will complete an incomplete form (object hypothesis). An array of random dots tends to form configurations (constellations).[13] All these innate abilities demonstrate how the eye and the mind are seeking pattern and simple whole shapes. When we look at more complex visual images such as paintings we can see that art has been a continuous attempt to "notate" visual information.
Thought processes are diffused and interconnected and are cognitive at a sensory level. The mind thinks at its deepest level in sense material, and the two hemispheres of the brain deal with different kinds of thought.[14] The brain is divided into two hemispheres, and a thick bundle of nerve fibres enables these two halves to communicate with each other.[15][16] In most people the ability to organize and produce speech is predominantly located in the left side. Appreciating spatial perceptions depends more on the right hemisphere, although there is a left hemisphere contribution.[17] In an attempt to understand how designers solve problems, L. Bruce Archer proposed "that the way designers (and everybody else, for that matter) form images in their mind's eye, manipulating and evaluating ideas before, during and after externalising them, constitutes a cognitive system comparable with but different from, the verbal language system. Indeed we believe that human beings have an innate capacity for cognitive modelling, and its expression through sketching, drawing, construction, acting out and so on, that is fundamental to human thought."[18]
The visual language begins to develop in babies as the eye and brain become able to focus and to recognize patterns. Children's drawings show a process of increasing perceptual awareness and range of elements to express personal experience and ideas.[19] The development of the visual aspect of language communication in education has been referred to as graphicacy,[20] a parallel discipline to literacy and numeracy. The ability to think and communicate in visual terms is part of the learning process, and of equal importance with literacy and numeracy. The visual artist, as Michael Twyman[21] has pointed out, has developed the ability to handle the visual language to communicate ideas. This includes both the comprehension of concepts and their production in a visual form. | https://en.wikipedia.org/wiki/Visual_language |
The tables below compare features of notable note-taking software. | https://en.wikipedia.org/wiki/Comparison_of_note-taking_software |
This is a list of wiki software programs. They are grouped by use case: standard wiki programs, personal wiki programs, hosted-only wikis, wiki-based content management software, and wiki-based project management software. They are further subdivided by the language of implementation: JavaScript, Java, PHP, Python, Perl, Ruby, and so on.
There are also wiki applications designed for personal use,[3] apps for mobile use,[4] and apps for use from USB flash drives.[5] They often include more features than traditional wikis, including:
A list of such software: | https://en.wikipedia.org/wiki/Desktop_wiki |
A personal data manager (PDM) is a portable hardware tool enabling secure storage and easy access to user data.[1] It can also be an application located on a portable smart device or PC, enabling novice end-users to directly define, classify, and manipulate a universe of information objects.[1] Usually PDMs include password management software, web-browser favorites and cryptographic software.
An advanced PDM can also store settings for VPN and Terminal Services, address books, and other features. A PDM can also store and launch several portable software applications.
Companies such as Salmon Technologies, with their SalmonPDM application, have been innovative in creating personalized directory structures to aid/prompt individuals where to store key typical pieces of information, such as legal documents, education/schooling information, medical information, property/vehicle bills, service contracts, and more. The process of creating directory structures that map to individual/family unit types, such as Child, Adult, Couple, or Family with Children/Dependents, is referred to as Personal Directory Modeling.
The Databox Project is academic research into developing "an open-source personal networked device, augmented by cloud-hosted services, that collates, curates, and mediates access to an individual's personal data by verified and audited third party applications and services."[2]
| https://en.wikipedia.org/wiki/Personal_data_manager |
A personal organizer, also known as a datebook, date log, daybook, day planner, personal analog assistant, book planner, year planner, or agenda (from Latin agenda – things to do), is a portable book or binder designed for personal management. It typically includes sections such as a diary, calendar, address book, blank paper, checklists, and additional useful information like maps and telephone codes.[1] It is related to the separate desktop stationery items that have one or more of the same functions, such as appointment calendars, rolodexes, notebooks, and almanacs.
By the end of the 20th century, paper-and-binder personal organizers started to be replaced by electronic devices such as personal digital assistants (PDAs), personal information manager software, and online organizers. This process has accelerated in the beginning of the 21st century with the advent of smartphones, tablet computers, smartwatches, and a variety of mobile apps which enhance the potential for personal organisation and productivity.
They were sometimes referred to as a filofax,[2] after the UK-based company Filofax that produces a popular range of personal organiser wallets. | https://en.wikipedia.org/wiki/Personal_organizer |
This is a list of wiki software programs. They are grouped by use case: standard wiki programs, personal wiki programs, hosted-only wikis, wiki-based content management software, and wiki-based project management software. They are further subdivided by the language of implementation: JavaScript, Java, PHP, Python, Perl, Ruby, and so on.
There are also wiki applications designed for personal use,[3] apps for mobile use,[4] and apps for use from USB flash drives.[5] They often include more features than traditional wikis, including:
A list of such software: | https://en.wikipedia.org/wiki/Personal_wiki |
A personal information manager (often referred to as a PIM tool or, more simply, a PIM) is a type of application software that functions as a personal organizer. The acronym PIM is now more commonly used in reference to personal information management as a field of study.[1] As an information management tool, a PIM tool's purpose is to facilitate the recording, tracking, and management of certain types of "personal information".
Personal information can include any of the following:[2]
Some PIM/PDM software products are capable of synchronizing data over a computer network, including mobile ad hoc networks (MANETs). This feature typically stores the personal data on cloud drives, allowing for continuous concurrent data updates/access on the user's computers, including desktop computers, laptop computers, and mobile devices such as personal digital assistants or smartphones.[3]
Prior to the introduction of the term "personal digital assistant" ("PDA") by Apple in 1992, handheld personal organizers such as the Psion Organiser and the Sharp Wizard were also referred to as "PIMs".[4][5]
The time management and communications functions of PIMs largely migrated from PDAs to smartphones, with Apple, RIM (Research In Motion, now BlackBerry), and others all manufacturing smartphones that offer most of the functions of earlier PDAs. | https://en.wikipedia.org/wiki/Personal_information_manager |
The following is a list of personal information managers (PIMs) and online organizers. | https://en.wikipedia.org/wiki/List_of_personal_information_managers |
In computer science, the semantic desktop is a collective term for ideas related to changing a computer's user interface and data handling capabilities so that data are more easily shared between different applications or tasks, and so that data that once could not be automatically processed by a computer could be. It also encompasses some ideas about being able to share information automatically between different people. This concept is very much related to the Semantic Web, but is distinct insofar as its main concern is the personal use of information.
The vision of the semantic desktop can be considered as a response to the perceived problems of existing user interfaces.
Without good metadata, computers cannot easily learn many commonly needed attributes about files. For example, suppose one downloads a document by a particular author on a particular subject – though the document will likely clearly indicate its subject, author, source and possibly copyright information, there may be no easy way for the computer to obtain this information and process it across applications like file managers, desktop search engines, and other services. This means the computer cannot search, filter or otherwise act upon the information as effectively as it otherwise could. This is very much the problem that the Semantic Web is concerned with.
Researchers in the iMemex project provide the following query examples:[1]
Both of these queries need to parse the file structure: the first to find a section in a LaTeX document, the second to find figures and their labels in documents of any format. Current operating systems can do neither.
A user might want to relate in a single query information that is maintained by the file system, such as placement in a folder, and information that is inside a file. With current technology, this query cannot be issued in one single request.
In query example 1 above, the project information is only materialized in the folder hierarchy; the rest of the filters relate to the inside of the file, and some of them need to parse the file structure (see above). This leads to performing a first query in the file system and a further search inside the files.
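To make this concrete, the following is a minimal sketch in Python, with a hypothetical project folder and query: it answers a query like example 1 only by combining a file-system walk with ad-hoc parsing of file contents, two separate mechanisms that current operating systems do not expose as a single query.

```python
# Hypothetical example: find LaTeX sections mentioning a keyword inside a
# given project folder. The folder placement comes from the file system;
# the section titles require parsing the files' internal structure.
import os
import re

PROJECT_DIR = "~/projects/thesis"               # made-up project folder
SECTION = re.compile(r"\\section\{([^}]*)\}")   # LaTeX \section{...} titles

def find_sections(root, keyword):
    """Yield (path, section title) for .tex sections mentioning keyword."""
    for dirpath, _dirnames, filenames in os.walk(os.path.expanduser(root)):
        for name in filenames:
            if not name.endswith(".tex"):
                continue
            path = os.path.join(dirpath, name)
            with open(path, encoding="utf-8", errors="ignore") as f:
                for title in SECTION.findall(f.read()):
                    if keyword.lower() in title.lower():
                        yield path, title

for path, title in find_sections(PROJECT_DIR, "evaluation"):
    print(path, "->", title)
```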
There is also the problem of relating different files with each other. For example, on operating systems such as Unix, e-mails are stored separately from files. Neither has anything to do with tasks, notes or planned activities that may be stored in a calendar program. Contacts might be stored in another program. However, all these forms of information might simultaneously be relevant and necessary for a particular task.
Related to this, a user will often access a lot of data from the Internet, which are segregated from the data stored locally on the computer and accessed through a browser or other program. Researchers in the iMemex project provide the example of searching both in the local folder hierarchy and also in email attachments, which are located on an IMAP server[1] (see above, query example 2). In addition, the folder hierarchies are often different on both systems.
As well as accessing data, a user has to share data, often through e-mail or separate file transfer programs.
The semantic desktop is an attempt to solve some or all of these problems by extending the operating system's capabilities to handle all data using Semantic Web technologies. Based on this data integration, improved user interfaces (or plugins to existing applications) can give the user an integrated view on stored knowledge.
Sauermann et al. proposed a definition of Semantic Desktop in 2005:
A Semantic Desktop is a device in which an individual stores all her digital information like documents, multimedia and messages. These are interpreted as Semantic Web resources, each is identified by a Uniform Resource Identifier (URI) and all data is accessible and queryable as Resource Description Framework (RDF) graph. Resources from the web can be stored and authored content can be shared with others. Ontologies allow the user to express personal mental models and form the semantic glue interconnecting information and systems. Applications respect this and store, read and communicate via ontologies and Semantic Web protocols. The Semantic Desktop is an enlarged supplement to the user's memory.[2]
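As a minimal sketch of this definition, the Python snippet below uses the rdflib library; the namespace, file path, and metadata values are illustrative assumptions. A local document becomes an RDF resource identified by a URI, and the desktop's metadata can then be queried as an RDF graph with SPARQL.

```python
# A local file treated as a Semantic Web resource with queryable metadata.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import DC, RDF

EX = Namespace("http://example.org/desktop/")        # hypothetical ontology

g = Graph()
doc = URIRef("file:///home/user/papers/report.pdf")  # local file as resource
g.add((doc, RDF.type, EX.Document))
g.add((doc, DC.creator, Literal("Jane Doe")))        # made-up metadata
g.add((doc, DC.subject, Literal("semantic desktop")))

# Query the desktop-wide graph: which documents cover a subject, and by whom?
results = g.query("""
    SELECT ?d ?author WHERE {
        ?d <http://purl.org/dc/elements/1.1/subject> "semantic desktop" ;
           <http://purl.org/dc/elements/1.1/creator> ?author .
    }
""")
for d, author in results:
    print(d, author)
```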
There are various interpretations of the semantic desktop. At its most limited, it might be interpreted as adding mechanisms for relating machine-readable metadata to files. At its most expansive, it could be viewed as a complete replacement for existing user interfaces, one which unifies all forms of data and provides a consistent single interface. There are many degrees between these two, depending on which of the above problems are being dealt with.
To foster interoperability between different implementations and publish standards, the community around the Nepomuk project founded the OSCA Foundation (OSCAF)[3] in 2008. Since June 2009, developers from the Nepomuk-KDE and Xesam communities have collaborated with OSCAF to help standardize the data formats for KDE, GNOME and freedesktop.org. The Nepomuk/OSCAF standards are taken up by these projects and Nokia's Maemo Platform.[4]
The Semantic Web is mainly concerned with making machine-readable metadata to enable computers to process shared information, and the creation of formats and standards related to this. As such the aims of allowing more of a user's data to be processed by a computer and allowing data to more easily be shared could be considered as a subset of those of the Semantic Web, but extended to a user's local computer, rather than just files stored on the Internet.
However, the aims of creating a unified interface and allowing data to be accessed in a format-independent way are not really the concerns of the Semantic Web.
In practice most projects related to the semantic desktop make use of Semantic Web protocols for storing their data. In particular, RDF's concepts and the format itself are used.
Semantic file systems allow the user to query files by semantic metadata. As such, they can be considered a part of the semantic desktop.
Some operating systems such as BeOS include a semantic file system, which is a move towards a more semantic desktop. | https://en.wikipedia.org/wiki/Semantic_desktop |
Information overload (also known as infobesity,[1][2] infoxication,[3] or information anxiety[4]) is the difficulty in understanding an issue and effectively making decisions when one has too much information (TMI) about that issue,[5] and is generally associated with the excessive quantity of daily information.[6] The term "information overload" was first used as early as 1962 by scholars in management and information studies, including in Bertram Gross' 1964 book The Managing of Organizations[7][8] and was further popularized by Alvin Toffler in his bestselling 1970 book Future Shock.[9] Speier et al. (1999) said that if input exceeds the processing capacity, information overload occurs, which is likely to reduce the quality of the decisions.[10]
In a newer definition, Roetzel (2019) focuses on time and resource aspects. He states that when a decision-maker is given many sets of information that vary in complexity, amount, and degree of contradiction, the quality of the decision decreases because the individual's scarce resources are insufficient to process all the information and make the optimal decision.[11]
The advent of modern information technology has been a primary driver of information overload on multiple fronts: in quantity produced, ease of dissemination, and breadth of the audience reached. Longstanding technological factors have been further intensified by the rise of social media, including the attention economy, which facilitates attention theft.[12][13] In the age of connective digital technologies, informatics, the Internet culture (or the digital culture), information overload is associated with over-exposure, excessive viewing of information, and input abundance of information and data.
Even though information overload is linked to digital cultures and technologies, Ann Blair notes that the term itself predates modern technologies, as indications of information overload were apparent as soon as humans began collecting, recording, and preserving manuscripts and information.[14] One of the first social scientists to notice the negative effects of information overload was the sociologist Georg Simmel (1858–1918), who hypothesized that the overload of sensations in the modern urban world caused city dwellers to become jaded and interfered with their ability to react to new situations.[15] The social psychologist Stanley Milgram (1933–1984) later used the concept of information overload to explain bystander behavior.
Psychologists have recognized for many years that humans have a limited capacity to store current information in memory. Psychologist George Armitage Miller was very influential in this regard, proposing that people can process about seven chunks of information at a time. Miller says that under overload conditions, people become confused and are likely to make poorer decisions based on the information they have received as opposed to making informed ones.
A quite early example of the term "information overload" can be found in an article by Jacob Jacoby, Donald Speller and Carol Kohn Berning, who conducted an experiment on 192 housewives which was said to confirm the hypothesis that more information about brands would lead to poorer decision making.
Long before that, the concept was introduced by Diderot, although not under the term "information overload":
As long as the centuries continue to unfold, the number of books will grow continually, and one can predict that a time will come when it will be almost as difficult to learn anything from books as from the direct study of the whole universe. It will be almost as convenient to search for some bit of truth concealed in nature as it will be to find it hidden away in an immense multitude of bound volumes.
In the internet age, the term "information overload" has evolved into phrases such as "information glut", "data smog", and "data glut" (Data Smog, Shenk, 1997).[16] In his abstract, Kazi Mostak Gausul Hoq commented that people often experience an "information glut" whenever they struggle with locating information from print, online, or digital sources.[17] What was once a term grounded in cognitive psychology has evolved into a rich metaphor used outside the world of academia.
Information overload has been documented throughout periods where advances in technology have increased the production of information. As early as the 3rd or 4th century BC, people regarded information overload with disapproval. Around this time, in Ecclesiastes 12:12, the writer commented "of making books there is no end", and in the 1st century AD, Seneca the Elder commented that "the abundance of books is distraction". In 1255, the Dominican Vincent of Beauvais also commented on the flood of information: "the multitude of books, the shortness of time and the slipperiness of memory."[14] Similar complaints around the growth of books were also mentioned in China. There were also information enthusiasts. The Library of Alexandria, established around the 3rd century BCE, introduced acts of preserving historical artifacts. Museums and libraries established universal grounds of preserving the past for the future, but much like books, libraries granted only limited access.
Renaissance humanists always had a desire to preserve their writings and observations,[14] but were only able to record ancient texts by hand because books were expensive and only the privileged and educated could afford them. Humans experienced an overload of information through the extensive copying of ancient manuscripts and replication of artifacts, creating libraries and museums that have remained to the present.[14] Around 1453 AD, Johannes Gutenberg invented the printing press, and this marked another period of information proliferation. As a result of lowering production costs, printed materials ranging from pamphlets and manuscripts to books were made available to the average person.
Following Gutenberg's invention, mass printing was introduced in Western Europe. Information overload was often experienced by the affluent, but books were becoming rapidly printed and circulated at a lower cost, allowing the educated to purchase them. Information became recordable by hand and could be easily stored for future accessibility. This era marked a time when inventive methods were established to practice information accumulation. Aside from printing books and passage recording, encyclopedias and alphabetical indexes were introduced, enabling people to save and bookmark information for retrieval. These practices marked both present and future acts of information processing.
Swiss scientist Conrad Gessner commented on the increasing number of libraries and printed books,[14] and was most likely the first academic who discussed the consequences of information overload, as he observed how "unmanageable" information became after the creation of the printing press.[18]
Blair notes that while scholars were elated with the number of books available to them, they also later experienced fatigue with the amount of excessive information that was readily available and overwhelmed them. Scholars complained about the abundance of information for a variety of reasons, such as the diminishing quality of text as printers rushed to print manuscripts and the supply of new information being distracting and difficult to manage. Erasmus, one of the many recognized humanists of the 16th century, asked, "Is there anywhere on earth exempt from these swarms of new books?".[19]
Many grew concerned with the rise of books in Europe, especially in England, France, and Germany. From 1750 to 1800, there was a 150% increase in the production of books. In 1795, German bookseller and publisher Johann Georg Heinzmann said "no nation printed as much as the Germans" and expressed concern that Germans were reading others' ideas and no longer creating original thoughts and ideas of their own.[20]
To combat information overload, scholars developed their own information records for easier and simpler archival access and retrieval. Early modern European compilers used paper and glue to cut specific notes and passages from books and paste them onto new sheets for storage. Carl Linnaeus developed paper slips, often called his botanical paper slips, from 1767 to 1773, to record his observations. Blair argues that these botanical paper slips gave birth to the "taxonomical system" that has endured to the present, influencing both the mass invention of the index card and the library card catalog.[19]
In his book The Information: A History, A Theory, A Flood, published in 2011, author James Gleick notes that engineers began taking note of the concept of information and quickly associated it with a technical sense: information was both quantifiable and measurable. He discusses how information theory was created to bridge mathematics, engineering, and computing, creating an information code between the fields. European speakers of English often equated "computer science" with "informatique, informatica, and Informatik".[21] This leads to the idea that all information can be saved and stored on computers, even if information experiences entropy. At the same time, the term information, and its many definitions, have changed.[citation needed]
In the second half of the 20th century, advances in computer and information technology led to the creation of the Internet.
In the modern Information Age, information overload is experienced as distracting and unmanageable information such as email spam, email notifications, instant messages, Tweets and Facebook updates in the context of the work environment.[22] Social media has resulted in "social information overload", which can occur on sites like Facebook, and technology is changing to serve our social culture.
In today's society, day-to-day activities increasingly involve the technological world, where information technology exacerbates the number of interruptions that occur in the work environment.[23] Management may be even more disrupted in their decision making, which may result in more poor decisions. Thus, the PIECES framework mentions information overload as a potential problem in existing information systems.[24]
As the world moves into a new era of globalization, an increasing number of people connect to the internet to conduct their own research[25] and are given the ability to contribute to publicly accessible data. This has elevated the risk for the spread of misinformation.[according to whom?]
In a 2018 literature review, Roetzel indicates that information overload can be seen as a virus—spreading through (social) media and news networks.[11]
The latest research hypothesizes that information overload is a multilevel phenomenon: there are different mechanisms responsible for its emergence at the individual, group, and whole-society levels; however, these levels are interlinked.[26]
In a piece published by Slate, Vaughan Bell argues that "Worries about information overload are as old as information itself"[18] because each generation and century will inevitably experience a significant impact with technology. In the 21st century, Frank Furedi describes how an overload in information is metaphorically expressed as a flood, which is an indication that humanity is being "drowned" by the waves of data coming at it.[27] This includes how the human brain continues to process information whether digitally or not. Information overload can lead to "information anxiety", which is the gap between the information that is understood and the information that it is perceived must be understood. The phenomenon of information overload is connected to the field of information technology (IT). IT corporate management implements training to "improve the productivity of knowledge workers". Ali F. Farhoomand and Don H. Drury note that employees often experience an overload in information whenever they have difficulty absorbing and assimilating the information they receive to efficiently complete a task because they feel burdened, stressed, and overwhelmed.[28]
At New York's Web 2.0 Expo in 2008, Clay Shirky's speech indicated that information overload in the modern age is a consequence of a deeper problem, which he calls "filter failure",[29] where humans continue to overshare information with each other. This is due to the rapid rise of apps and unlimited wireless access. As people view increasing amounts of information in the form of news stories, emails, blog posts, Facebook statuses, Tweets, Tumblr posts and other new sources of information, they become their own editors, gatekeepers, and aggregators of information.[30] Social media platforms create a distraction as users' attention spans are challenged once they enter an online platform. One concern in this field is that massive amounts of information can be distracting and negatively impact productivity, decision-making, and cognitive control. Another concern is the "contamination" of useful information with information that might not be entirely accurate (information pollution).
The general causes of information overload include:
Email remains a major source of information overload, as people struggle to keep up with the rate of incoming messages. As well as filtering out unsolicited commercial messages (spam), users also have to contend with the growing use of email attachments in the form of lengthy reports, presentations, and media files.[31]
A December 2007 New York Times blog post described email as "a $650 billion drag on the economy",[32] and the New York Times reported in April 2008 that "email has become the bane of some people's professional lives" due to information overload, yet "none of [the current wave of high-profile Internet startups focused on email] really eliminates the problem of email overload because none helps us prepare replies".[33]
In January 2011, Eve Tahmincioglu, a writer for NBC News, wrote an article titled "It's Time to Deal With That Overflowing Inbox". Compiling statistics with commentary, she reported that there were 294 billion emails sent each day in 2010, up from 50 billion in 2009. Quoted in the article, workplace productivity expert Marsha Egan stated that people need to differentiate between working on email and sorting through it. This meant that rather than responding to every email right away, users should delete unnecessary emails and sort the others into action or reference folders first. Egan then went on to say "We are more wired than ever before, and as a result need to be more mindful of managing email or it will end up managing us."[34]
The Daily Telegraph quoted Nicholas Carr, former executive editor of the Harvard Business Review and the author of The Shallows: What The Internet Is Doing To Our Brains, as saying that email exploits a basic human instinct to search for new information, causing people to become addicted to "mindlessly pressing levers in the hope of receiving a pellet of social or intellectual nourishment". His concern is shared by Eric Schmidt, chief executive of Google, who stated that "instantaneous devices" and the abundance of information people are exposed to through email and other technology-based sources could be having an impact on the thought process, obstructing deep thinking and understanding, impeding the formation of memories, and making learning more difficult. This condition of "cognitive overload" results in a diminished ability to retain information and a failure to connect remembrances to experiences stored in long-term memory, leaving thoughts "thin and scattered".[35] This is also manifest in the education process.[36]
In addition to email, the World Wide Web has provided access to billions of pages of information. In many offices, workers are given unrestricted access to the Web, allowing them to manage their own research. The use of search engines helps users to find information quickly. However, information published online may not always be reliable, due to the lack of authority-approval or a compulsory accuracy check before publication. Internet information lacks credibility as the Web's search engines do not have the ability to filter and manage information and misinformation.[37] This results in people having to cross-check what they read before using it for decision-making, which takes up more time.[citation needed]
Viktor Mayer-Schönberger, author of Delete: The Virtue of Forgetting in the Digital Age, argues that everyone can be a "participant" on the Internet, where they are all senders and receivers of information.[38] On the Internet, trails of information are left behind, allowing other Internet participants to share and exchange information. Information becomes difficult to control on the Internet.
The BBC reports that "every day, the information we send and receive online – whether that's checking emails or searching the internet – amount to over 2.5 quintillion bytes of data."[39]
Social media are applications and websites with an online community where users create and share content with each other, and they add to the problem of information overload because so many people have access to them.[40] Social media present many different views and outlooks on subject matters, so that one may have difficulty taking it all in and drawing a clear conclusion.[41] Information overload may not be the core reason for people's anxieties about the amount of information they receive in their daily lives. Instead, information overload can be considered situational. Social media users tend to feel less overloaded by information when using their personal profiles than when their work institutions expect them to gather a mass of information. Most people see information through social media in their lives as an aid to help manage their day-to-day activities and not an overload.[42] Depending on what social media platform is being used, it may be easier or harder to stay up to date on posts from people. Facebook users who post and read more than others tend to be able to keep up. On the other hand, Twitter users who post and read a lot of tweets still feel like it is too much information (or none of it is interesting enough).[11] Another problem with social media is that many people make a living by creating content for either their own or someone else's platform, which can drive creators to publish an overload of content.
In the context of searching for information, researchers have identified two forms of information overload: outcome overload, where there are too many sources of information, and textual overload, where the individual sources are too long. This form of information overload may cause searchers to be less systematic. Disillusionment when a search is more challenging than expected may result in an individual being less able to search effectively. Information overload when searching can result in a satisficing strategy.[43]: 7
Savolainen identifies filtering and withdrawal as common responses to information overload. Filtering involves quickly working out whether a particular piece of information, such as an email, can be ignored based on certain criteria. Withdrawal refers to limiting the number of sources of information with which one interacts. Savolainen distinguishes between "pull" and "push" sources of information, a "pull" source being one where one seeks out relevant information, and a "push" source one where others decide what information might be interesting, noting that "pull" sources can avoid information overload, but by only "pulling" information one risks missing important information.[44]
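A minimal sketch of the filtering response, with made-up senders and keywords standing in for whatever criteria a person actually applies:

```python
# Rule-based triage: decide quickly whether an incoming message can be
# ignored, mirroring the "filtering" strategy described above.
IGNORED_SENDERS = {"newsletter@example.com", "promo@example.com"}   # made up
PRIORITY_KEYWORDS = ("deadline", "urgent", "action required")       # made up

def triage(sender, subject):
    """Classify a message as 'ignore', 'priority', or 'normal'."""
    if sender in IGNORED_SENDERS:
        return "ignore"        # filtered out without further reading
    if any(k in subject.lower() for k in PRIORITY_KEYWORDS):
        return "priority"      # surfaced for immediate attention
    return "normal"

print(triage("promo@example.com", "Huge sale!"))        # ignore
print(triage("boss@example.com", "Deadline tomorrow"))  # priority
```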
There have been many solutions proposed for how to mitigate information overload. Research examining how people seek to control an overloaded environment has shown that people purposefully use different coping strategies.[45][46][47] In general, overload coping strategies consist of two excluding (ignoring and filtering) and two including (customizing and saving) approaches.[47][46] The excluding approaches focus on managing the quantity of information, while the including approaches are geared towards complexity management.
Johnson advises discipline, which helps mitigate interruptions, and the elimination of push notifications. He explains that notifications pull people's attention away from their work and into social networks and emails. He also advises that people stop using their iPhones as alarm clocks, since the phone is then the first thing people see when they wake up, leading them to check their email right away.[51]
Clay Shirky states:[29]
What we're dealing with now is not the problem of information overload, because we're always dealing (and always have been dealing) with information overload... Thinking about information overload isn't accurately describing the problem; thinking about filter failure is.
Consider the use of Internet applications and add-ons such as the Inbox Pause add-on for Gmail.[52] This add-on does not reduce the number of emails that people get, but it pauses the inbox. Burkeman, in his article, argues that the feeling of being in control is the way to deal with information overload, which might involve self-deception. He advises fighting irrationality with irrationality, by using add-ons that allow you to pause your inbox or produce similar results. Reducing large amounts of information is key.
Dealing with IO from a social network site such as Facebook, a study done by Humboldt University[53] showed some strategies that students take to try and alleviate IO while using Facebook. Some of these strategies included: prioritizing updates from friends who were physically farther away in other countries, hiding updates from less-prioritized friends, deleting people from their friends list, narrowing the amount of personal information shared, and deactivating the Facebook account.
Decision makers performing complex tasks have little if any excess cognitive capacity. Narrowing one's attention as a result of the interruption is likely to result in the loss of information cues, some of which may be relevant to completing the task. Under these circumstances, performance is likely to deteriorate. As the number or intensity of the distractions/interruptions increases, the decision maker's cognitive capacity is exceeded, and performance deteriorates more severely. In addition to reducing the number of possible cues attended to, more severe distractions/interruptions may encourage decision-makers to use heuristics, take shortcuts, or opt for a satisficing decision, resulting in lower decision accuracy.
Some cognitive scientists and graphic designers have emphasized the distinction between raw information and information in a form that can be used in thinking. In this view, information overload may be better viewed as organization underload. That is, they suggest that the problem is not so much the volume of information but the fact that it cannot be discerned how to use it well in the raw or biased form it is presented. Authors who have taken this view include graphic artist and architect Richard Saul Wurman and statistician and cognitive scientist Edward Tufte. Wurman uses the term "information anxiety" to describe humanity's attitude toward the volume of information in general and their limitations in processing it.[55] Tufte primarily focuses on quantitative information and explores ways to organize large complex datasets visually to facilitate clear thinking. Tufte's writing is important in such fields as information design and visual literacy,[56] which deal with the visual communication of information. Tufte coined the term "chartjunk" to refer to useless, non-informative, or information-obscuring elements of quantitative information displays, such as the use of graphics to overemphasize the importance of certain pieces of data or information.[57]
In a study conducted by Soucek and Moser (2010),[58] the researchers investigated what impact a training intervention on how to cope with information overload would have on employees. They found that the training intervention did have a positive impact on IO, especially for those who struggled with work impairment and media usage, and for employees who had a higher amount of incoming email.[58]
Recent research suggests that an "attention economy" of sorts will naturally emerge from information overload,[59] allowing Internet users greater control over their online experience with particular regard to communication mediums such as email and instant messaging. This could involve some sort of cost being attached to email messages. For example, managers could charge a small fee for every email received – e.g. $1.00 – which the sender must pay from their budget. The aim of such charging is to force the sender to consider the necessity of the interruption. However, such a suggestion undermines the entire basis of the popularity of email, namely that emails are free of charge to send.
Economics often assumes that people are rational in that they have knowledge of their preferences and an ability to look for the best possible ways to maximize their preferences. People are seen as selfish and focused on what pleases them. Looking at the various parts on their own results in neglect of the other parts that work alongside them to create the effect of IO. Lincoln suggests possible ways to look at IO in a more holistic approach by recognizing the many possible factors that play a role in IO and how they work together to achieve IO.[60]
It would be impossible for an individual to read all the academic papers published in a narrow speciality, even if they spent all their time reading. A response to this is the publishing of systematic reviews such as the Cochrane Reviews. Richard Smith argues that it would be impossible for a general practitioner to read all the literature relevant to every individual patient they consult with, and suggests one solution would be an expert system for use by doctors while consulting.[61]
| https://en.wikipedia.org/wiki/Information_overload |
Relevance is the connection between topics that makes one useful for dealing with the other. Relevance is studied in many different fields, including cognitive science, logic, and library and information science. Epistemology studies it in general, and different theories of knowledge have different implications for what is considered relevant.
"Something (A) is relevant to a task (T) if it increases the likelihood of accomplishing the goal (G), which is implied byT."[1]
A thing, a document, or a piece of information may be relevant. Relevance does not depend on whether we speak of "things" or "information".
If you believe that schizophrenia is caused by bad communication between mother and child, then family interaction studies become relevant. If, on the other hand, you subscribe to a genetic theory of schizophrenia, then the study of genes becomes relevant. If you subscribe to the epistemology of empiricism, then only intersubjectively controlled observations are relevant. If, on the other hand, you subscribe to feminist epistemology, then the sex of the observer becomes relevant.
In formal reasoning, relevance has proved an important but elusive concept. It is important because the solution of any problem requires the prior identification of the relevant elements from which a solution can be constructed. It is elusive, because the meaning of relevance appears to be difficult or impossible to capture within conventional logical systems. The obvious suggestion that q is relevant to p if q is implied by p breaks down because under standard definitions of material implication, a false proposition implies all other propositions. However, though 'iron is a metal' may be implied by 'cats lay eggs', it doesn't seem to be relevant to it in the way that 'cats are mammals' and 'mammals give birth to living young' are relevant to each other. If one states "I love ice cream", and another person responds "I have a friend named Brad Cook", then these statements are not relevant. However, if one states "I love ice cream", and another person responds "I have a friend named Brad Cook who also likes ice cream", this statement now becomes relevant because it relates to the first person's idea.
Another proposal defines relevance or, more accurately, irrelevance information-theoretically.[2] It is easiest to state in terms of variables, which might reflect the values of measurable hypotheses or observation statements. The conditional entropy of an observation variable e, conditioned on a variable h characterizing alternative hypotheses, provides a measure of the irrelevance of the observation variable e to the set of competing hypotheses characterized by h. It is useful combined with measures of the information content of the variable e in terms of its entropy. One can then subtract the content of e that is irrelevant to h (given by its conditional entropy conditioned on h) from the total information content of e (given by its entropy) to calculate the amount of information the variable e contains about the set of hypotheses characterized by h. Relevance (via the concept of irrelevance) and information content then characterize the observation variable and can be used to measure its sensitivity and specificity (respectively) as a test for alternative hypotheses.
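A minimal sketch in Python of these quantities over a toy joint distribution (the probabilities are illustrative assumptions): the entropy H(e) is the total information content of the observation variable, the conditional entropy H(e|h) is its irrelevance to the hypotheses, and the difference H(e) - H(e|h) is the information e carries about h.

```python
import math

# Made-up joint distribution P(h, e) over a binary hypothesis variable h
# and a binary observation variable e.
joint = {
    ("h0", "e0"): 0.40, ("h0", "e1"): 0.10,
    ("h1", "e0"): 0.15, ("h1", "e1"): 0.35,
}

def entropy(dist):
    """Shannon entropy in bits of a distribution given as {outcome: prob}."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

# Marginal distributions P(e) and P(h).
p_e, p_h = {}, {}
for (h, e), p in joint.items():
    p_e[e] = p_e.get(e, 0.0) + p
    p_h[h] = p_h.get(h, 0.0) + p

h_e = entropy(p_e)                             # H(e): content of e
h_e_given_h = entropy(joint) - entropy(p_h)    # H(e|h) = H(h,e) - H(h)

print(f"H(e)   = {h_e:.3f} bits")
print(f"H(e|h) = {h_e_given_h:.3f} bits  (part of e irrelevant to h)")
print(f"I(e;h) = {h_e - h_e_given_h:.3f} bits  (information e carries about h)")
```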
More recently a number of theorists[who?] have sought to account for relevance in terms of "possible world logics" in intensional logic. Roughly, the idea is that necessary truths are true in all possible worlds, contradictions (logical falsehoods) are true in no possible worlds, and contingent propositions can be ordered in terms of the number of possible worlds in which they are true. Relevance is argued to depend upon the "remoteness relationship" between an actual world in which relevance is being evaluated and the set of possible worlds within which it is true.
In 1986, Dan Sperber and Deirdre Wilson drew attention to the central importance of relevance decisions in reasoning and communication. They proposed an account of the process of inferring relevant information from any given utterance. To do this work, they used what they called the "Principle of Relevance": namely, the position that any utterance addressed to someone automatically conveys the presumption of its own optimal relevance. The central idea of Sperber and Wilson's theory is that all utterances are encountered in some context, and the correct interpretation of a particular utterance is the one that allows most new implications to be made in that context on the basis of the least amount of information necessary to convey it. For Sperber and Wilson, relevance is conceived as relative or subjective, as it depends upon the state of knowledge of a hearer when they encounter an utterance.
Sperber and Wilson stress that this theory is not intended to account for every intuitive application of the English word "relevance". Relevance, as a technical term, is restricted to relationships between utterances and interpretations, and so the theory cannot account for intuitions such as the one that relevance relationships obtain in problems involving physical objects. If a plumber needs to fix a leaky faucet, for example, some objects and tools are relevant (e.g. a wrench) and others are not (e.g. a waffle iron). And, moreover, the latter seems to be irrelevant in a manner which does not depend upon the plumber's knowledge, or the utterances used to describe the problem.
A theory of relevance that seems to be more readily applicable to such instances of physical problem solving has been suggested by Gorayska and Lindsay in a series of articles published during the 1990s. The key feature of their theory is the idea that relevance is goal-dependent. An item (e.g., an utterance or object) is relevant to a goal if and only if it can be an essential element of some plan capable of achieving the desired goal. This theory embraces both propositional reasoning and the problem-solving activities of people such as plumbers, and defines relevance in such a way that what is relevant is determined by the real world (because what plans will work is a matter of empirical fact) rather than the state of knowledge or belief of a particular problem solver.
The economist John Maynard Keynes saw the importance of defining relevance to the problem of calculating risk in economic decision-making. He suggested that the relevance of a piece of evidence, such as a true proposition, should be defined in terms of the changes it produces in estimations of the probability of future events. Specifically, Keynes proposed that new evidence e is irrelevant to a proposition x, given old evidence q, if and only if x/eq = x/q; otherwise, the evidence is relevant.
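Restated in modern probability notation (a paraphrase, not Keynes's own formulation):

```latex
e \text{ is irrelevant to } x \text{ given } q
\quad\Longleftrightarrow\quad
P(x \mid e \wedge q) = P(x \mid q).
```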
There are technical problems with this definition, for example, the relevance of a piece of evidence can be sensitive to the order in which other pieces of evidence are received.
The meaning of "relevance" in U.S. law is reflected in Rule 401 of theFederal Rules of Evidence. That rule defines relevance as "having any tendency to make the existence of any fact that is of consequence to the determinations of the action more probable or less probable than it would be without the evidence". In other words, if a fact were to have no bearing on the truth or falsity of a conclusion, it would be legally irrelevant.
This field has considered when documents (or document representations) retrieved from databases are relevant or non-relevant. Given a conception of relevance, two measures have been applied: precision and recall.
Recall = a / (a + c), where a is the number of relevant documents retrieved and c is the number of relevant documents that were not retrieved. Recall is thus an expression of how exhaustive a search for documents is.
Precision = a / (a + b), where b is the number of non-relevant documents retrieved. Precision is thus a measure of the amount of noise in document retrieval.
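As a minimal sketch (function and variable names are ours, not from the retrieval literature), both measures can be computed directly from sets of retrieved and relevant document identifiers:

```python
def precision_recall(retrieved, relevant):
    """Compute precision and recall from sets of document IDs."""
    retrieved, relevant = set(retrieved), set(relevant)
    a = len(retrieved & relevant)   # relevant documents retrieved
    b = len(retrieved - relevant)   # non-relevant documents retrieved
    c = len(relevant - retrieved)   # relevant documents not retrieved
    precision = a / (a + b) if retrieved else 0.0
    recall = a / (a + c) if relevant else 0.0
    return precision, recall

# Example: four documents returned, three relevant overall, one of them found.
print(precision_recall({"d1", "d2", "d3", "d4"}, {"d1", "d5", "d6"}))
# -> (0.25, 0.3333...)
```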
Relevance itself has in the literature often been based on what is termed "the system's view" and "the user's view". Hjørland (2010) criticizes these two views and defends a "subject knowledge view of relevance".
During the 1960s, relevance became a fashionable buzzword, meaning roughly 'relevance to social concerns', such as racial equality, poverty, social justice, world hunger, world economic development, and so on. The implication was that some subjects, e.g., the study of medieval poetry and the practice of corporate law, were not worthwhile because they did not address pressing social issues.[citation needed] | https://en.wikipedia.org/wiki/Relevance
In machine learning and information retrieval, the cluster hypothesis is an assumption about the nature of the data handled in those fields, which takes various forms. In information retrieval, it states that documents that are clustered together "behave similarly with respect to relevance to information needs".[1] In terms of classification, it states that if points are in the same cluster, they are likely to be of the same class.[2] There may be multiple clusters forming a single class.
The cluster hypothesis was first formulated by van Rijsbergen:[3] "closely associated documents tend to be relevant to the same requests". Thus, theoretically, a search engine could try to locate only the appropriate cluster for a query, and then allow users to browse through this cluster. Although experiments showed that the cluster hypothesis as such holds, exploiting it for retrieval did not lead to satisfying results.[4]
The cluster assumption is made by many machine learning algorithms such as the k-nearest neighbor classification algorithm and the k-means clustering algorithm. As the word "likely" appears in the definition, there is no clear border differentiating whether the assumption holds or does not hold; instead, the degree to which the data adhere to the assumption can be quantitatively measured.
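One simple way to quantify that adherence (a sketch of our own, not a standard measure from the cited literature; it assumes scikit-learn is available) is to cluster labelled data and measure how often points in a cluster share its majority class:

```python
from collections import Counter
from sklearn.cluster import KMeans
from sklearn.datasets import load_iris

X, y = load_iris(return_X_y=True)
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

# Purity: fraction of points whose class matches the majority class of their cluster.
total = sum(
    max(Counter(y[clusters == c]).values())
    for c in set(clusters)
)
print("cluster purity:", total / len(y))  # close to 1.0 => hypothesis holds well
```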
The cluster assumption is equivalent to the low density separation assumption, which states that the decision boundary should lie in a low-density region. To see this, suppose the decision boundary crosses one of the clusters. Then this cluster will contain points from two different classes, and so the cluster assumption is violated on this cluster. | https://en.wikipedia.org/wiki/Cluster_hypothesis
Linear discriminant analysis (LDA), normal discriminant analysis (NDA), canonical variates analysis (CVA), or discriminant function analysis is a generalization of Fisher's linear discriminant, a method used in statistics and other fields, to find a linear combination of features that characterizes or separates two or more classes of objects or events. The resulting combination may be used as a linear classifier, or, more commonly, for dimensionality reduction before later classification.
LDA is closely related to analysis of variance (ANOVA) and regression analysis, which also attempt to express one dependent variable as a linear combination of other features or measurements.[2][3] However, ANOVA uses categorical independent variables and a continuous dependent variable, whereas discriminant analysis has continuous independent variables and a categorical dependent variable (i.e. the class label).[4] Logistic regression and probit regression are more similar to LDA than ANOVA is, as they also explain a categorical variable by the values of continuous independent variables. These other methods are preferable in applications where it is not reasonable to assume that the independent variables are normally distributed, which is a fundamental assumption of the LDA method.
LDA is also closely related to principal component analysis (PCA) and factor analysis in that they both look for linear combinations of variables which best explain the data.[5] LDA explicitly attempts to model the difference between the classes of data. PCA, in contrast, does not take into account any difference in class, and factor analysis builds the feature combinations based on differences rather than similarities. Discriminant analysis is also different from factor analysis in that it is not an interdependence technique: a distinction between independent variables and dependent variables (also called criterion variables) must be made.
LDA works when the measurements made on independent variables for each observation are continuous quantities. When dealing with categorical independent variables, the equivalent technique is discriminant correspondence analysis.[6][7]
Discriminant analysis is used when groups are known a priori (unlike in cluster analysis). Each case must have a score on one or more quantitative predictor measures, and a score on a group measure.[8] In simple terms, discriminant function analysis is classification - the act of distributing things into groups, classes or categories of the same type.
The original dichotomous discriminant analysis was developed by Sir Ronald Fisher in 1936.[9] It is different from an ANOVA or MANOVA, which is used to predict one (ANOVA) or multiple (MANOVA) continuous dependent variables by one or more independent categorical variables. Discriminant function analysis is useful in determining whether a set of variables is effective in predicting category membership.[10]
Consider a set of observations $\vec{x}$ (also called features, attributes, variables or measurements) for each sample of an object or event with known class $y$. This set of samples is called the training set in a supervised learning context. The classification problem is then to find a good predictor for the class $y$ of any sample of the same distribution (not necessarily from the training set) given only an observation $\vec{x}$.[11]: 338
LDA approaches the problem by assuming that the conditional probability density functions $p(\vec{x}|y=0)$ and $p(\vec{x}|y=1)$ are both normal distributions with mean and covariance parameters $(\vec{\mu}_0, \Sigma_0)$ and $(\vec{\mu}_1, \Sigma_1)$, respectively. Under this assumption, the Bayes-optimal solution is to predict points as being from the second class if the log of the likelihood ratios is bigger than some threshold $T$, so that:

$(\vec{x} - \vec{\mu}_0)^T \Sigma_0^{-1} (\vec{x} - \vec{\mu}_0) + \ln|\Sigma_0| - (\vec{x} - \vec{\mu}_1)^T \Sigma_1^{-1} (\vec{x} - \vec{\mu}_1) - \ln|\Sigma_1| > T$
Without any further assumptions, the resulting classifier is referred to as quadratic discriminant analysis (QDA).
LDA instead makes the additional simplifying homoscedasticity assumption (i.e. that the class covariances are identical, so $\Sigma_0 = \Sigma_1 = \Sigma$) and that the covariances have full rank.
In this case, several terms cancel:

$\vec{x}^T \Sigma_0^{-1} \vec{x} = \vec{x}^T \Sigma_1^{-1} \vec{x}$

$\vec{x}^T \Sigma_i^{-1} \vec{\mu}_i = \vec{\mu}_i^T \Sigma_i^{-1} \vec{x}$ (because $\Sigma_i$ is Hermitian)

and the above decision criterion becomes a threshold on the dot product

$\vec{w} \cdot \vec{x} > c$

for some threshold constant $c$, where

$\vec{w} = \Sigma^{-1} (\vec{\mu}_1 - \vec{\mu}_0)$

$c = \tfrac{1}{2} \left( T + \vec{w} \cdot (\vec{\mu}_1 + \vec{\mu}_0) \right)$
This means that the criterion of an input $\vec{x}$ being in a class $y$ is purely a function of this linear combination of the known observations.
It is often useful to see this conclusion in geometrical terms: the criterion of an input $\vec{x}$ being in a class $y$ is purely a function of the projection of the multidimensional-space point $\vec{x}$ onto the vector $\vec{w}$ (thus, we only consider its direction). In other words, the observation belongs to $y$ if the corresponding $\vec{x}$ is located on a certain side of a hyperplane perpendicular to $\vec{w}$. The location of the plane is defined by the threshold $c$.
The assumptions of discriminant analysis are the same as those for MANOVA. The analysis is quite sensitive to outliers and the size of the smallest group must be larger than the number of predictor variables.[8]
It has been suggested that discriminant analysis is relatively robust to slight violations of these assumptions,[12]and it has also been shown that discriminant analysis may still be reliable when using dichotomous variables (where multivariate normality is often violated).[13]
Discriminant analysis works by creating one or more linear combinations of predictors, creating a new latent variable for each function. These functions are called discriminant functions. The number of functions possible is either $N_g - 1$, where $N_g$ is the number of groups, or $p$ (the number of predictors), whichever is smaller. The first function created maximizes the differences between groups on that function. The second function maximizes differences on that function, but also must not be correlated with the previous function. This continues with subsequent functions, with the requirement that the new function not be correlated with any of the previous functions.
Given group $j$, with $\mathbb{R}_j$ sets of sample space, there is a discriminant rule such that if $x \in \mathbb{R}_j$, then $x \in j$. Discriminant analysis then finds "good" regions of $\mathbb{R}_j$ to minimize classification error, therefore leading to a high percent correct classified in the classification table.[14]
Each function is given a discriminant score[clarification needed] to determine how well it predicts group placement.
An eigenvalue in discriminant analysis is the characteristic root of each function.[clarification needed] It is an indication of how well that function differentiates the groups, where the larger the eigenvalue, the better the function differentiates.[8] This, however, should be interpreted with caution, as eigenvalues have no upper limit.[10][8] The eigenvalue can be viewed as a ratio of $SS_{\text{between}}$ to $SS_{\text{within}}$, as in ANOVA when the dependent variable is the discriminant function and the groups are the levels of the IV[clarification needed].[10] This means that the largest eigenvalue is associated with the first function, the second largest with the second, and so on.
Some suggest the use of eigenvalues as effect size measures; however, this is generally not supported.[10] Instead, the canonical correlation is the preferred measure of effect size. It is similar to the eigenvalue, but is the square root of the ratio of $SS_{\text{between}}$ to $SS_{\text{total}}$. It is the correlation between groups and the function.[10] Another popular measure of effect size is the percent of variance[clarification needed] for each function. This is calculated by $(\lambda_x / \Sigma \lambda_i) \times 100$, where $\lambda_x$ is the eigenvalue for the function and $\Sigma \lambda_i$ is the sum of all eigenvalues. This tells us how strong the prediction is for that particular function compared to the others.[10] Percent correctly classified can also be analyzed as an effect size; the kappa value can describe this while correcting for chance agreement.[10] Kappa normalizes across all categories rather than being biased by significantly well or poorly performing classes.[clarification needed][17]
Canonical discriminant analysis (CDA) finds axes ($k - 1$ canonical coordinates, $k$ being the number of classes) that best separate the categories. These linear functions are uncorrelated and define, in effect, an optimal $k - 1$ space through the $n$-dimensional cloud of data that best separates (the projections in that space of) the $k$ groups. See "Multiclass LDA" below for details.
Because LDA uses canonical variates, it was initially often referred to as the "method of canonical variates"[18] or canonical variates analysis (CVA).[19]
The terms Fisher's linear discriminant and LDA are often used interchangeably, although Fisher's original article[2] actually describes a slightly different discriminant, which does not make some of the assumptions of LDA such as normally distributed classes or equal class covariances.
Suppose two classes of observations have means $\vec{\mu}_0, \vec{\mu}_1$ and covariances $\Sigma_0, \Sigma_1$. Then the linear combination of features $\vec{w}^T \vec{x}$ will have means $\vec{w}^T \vec{\mu}_i$ and variances $\vec{w}^T \Sigma_i \vec{w}$ for $i = 0, 1$. Fisher defined the separation between these two distributions to be the ratio of the variance between the classes to the variance within the classes:

$S = \frac{\sigma_{\text{between}}^2}{\sigma_{\text{within}}^2} = \frac{(\vec{w} \cdot \vec{\mu}_1 - \vec{w} \cdot \vec{\mu}_0)^2}{\vec{w}^T \Sigma_1 \vec{w} + \vec{w}^T \Sigma_0 \vec{w}} = \frac{(\vec{w} \cdot (\vec{\mu}_1 - \vec{\mu}_0))^2}{\vec{w}^T (\Sigma_0 + \Sigma_1) \vec{w}}$
This measure is, in some sense, a measure of the signal-to-noise ratio for the class labelling. It can be shown that the maximum separation occurs when

$\vec{w} \propto (\Sigma_0 + \Sigma_1)^{-1} (\vec{\mu}_1 - \vec{\mu}_0)$
When the assumptions of LDA are satisfied, the above equation is equivalent to LDA.
Note that the vector $\vec{w}$ is the normal to the discriminant hyperplane. As an example, in a two-dimensional problem, the line that best divides the two groups is perpendicular to $\vec{w}$.
Generally, the data points to be discriminated are projected onto $\vec{w}$; then the threshold that best separates the data is chosen from analysis of the one-dimensional distribution. There is no general rule for the threshold. However, if projections of points from both classes exhibit approximately the same distributions, a good choice would be the hyperplane between the projections of the two means, $\vec{w} \cdot \vec{\mu}_0$ and $\vec{w} \cdot \vec{\mu}_1$. In this case the parameter $c$ in the threshold condition $\vec{w} \cdot \vec{x} > c$ can be found explicitly:

$c = \vec{w} \cdot \tfrac{1}{2} (\vec{\mu}_0 + \vec{\mu}_1)$
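As a numerical sketch of the two-class case (synthetic data and our own variable names, not a definitive implementation), the direction $\vec{w}$ and the threshold $c$ above can be computed directly with NumPy:

```python
import numpy as np

rng = np.random.default_rng(0)
X0 = rng.multivariate_normal([0, 0], [[1.0, 0.3], [0.3, 1.0]], size=200)  # class 0
X1 = rng.multivariate_normal([2, 2], [[1.0, 0.3], [0.3, 1.0]], size=200)  # class 1

mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
# Pooled (shared) covariance, per the homoscedasticity assumption.
Sigma = 0.5 * (np.cov(X0, rowvar=False) + np.cov(X1, rowvar=False))

w = np.linalg.solve(Sigma, mu1 - mu0)   # w = Sigma^{-1} (mu1 - mu0)
c = w @ (mu0 + mu1) / 2                 # threshold halfway between projected means

def predict(X):
    """Assign class 1 when the projection exceeds the threshold."""
    return (X @ w > c).astype(int)

labels = np.r_[np.zeros(200), np.ones(200)]
accuracy = (np.r_[predict(X0), predict(X1)] == labels).mean()
print(f"training accuracy: {accuracy:.3f}")
```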
Otsu's method is related to Fisher's linear discriminant, and was created to binarize the histogram of pixels in a grayscale image by optimally picking the black/white threshold that minimizes intra-class variance and maximizes inter-class variance within/between the grayscales assigned to the black and white pixel classes.
In the case where there are more than two classes, the analysis used in the derivation of the Fisher discriminant can be extended to find a subspace which appears to contain all of the class variability.[20] This generalization is due to C. R. Rao.[21] Suppose that each of C classes has a mean $\mu_i$ and the same covariance $\Sigma$. Then the scatter between class variability may be defined by the sample covariance of the class means

$\Sigma_b = \frac{1}{C} \sum_{i=1}^{C} (\mu_i - \mu)(\mu_i - \mu)^T$
where $\mu$ is the mean of the class means. The class separation in a direction $\vec{w}$ in this case will be given by

$S = \frac{\vec{w}^T \Sigma_b \vec{w}}{\vec{w}^T \Sigma \vec{w}}$
This means that when $\vec{w}$ is an eigenvector of $\Sigma^{-1} \Sigma_b$ the separation will be equal to the corresponding eigenvalue.
If $\Sigma^{-1} \Sigma_b$ is diagonalizable, the variability between features will be contained in the subspace spanned by the eigenvectors corresponding to the $C - 1$ largest eigenvalues (since $\Sigma_b$ is of rank $C - 1$ at most). These eigenvectors are primarily used in feature reduction, as in PCA. The eigenvectors corresponding to the smaller eigenvalues will tend to be very sensitive to the exact choice of training data, and it is often necessary to use regularisation as described in the next section.
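A sketch of the multiclass construction (again synthetic data and invented names): form the between-class scatter from the class means and keep the top eigenvectors of $\Sigma^{-1} \Sigma_b$ as the discriminant directions:

```python
import numpy as np

rng = np.random.default_rng(1)
means = [np.array([0, 0, 0]), np.array([3, 0, 1]), np.array([0, 3, -1])]
X = np.vstack([rng.normal(m, 1.0, size=(100, 3)) for m in means])
y = np.repeat([0, 1, 2], 100)

mu = X.mean(axis=0)
class_means = np.array([X[y == k].mean(axis=0) for k in range(3)])

# Between-class scatter: sample covariance of the class means.
Sigma_b = sum(np.outer(m - mu, m - mu) for m in class_means) / 3
# Shared within-class covariance (pooled over the classes).
Sigma = sum(np.cov(X[y == k], rowvar=False) for k in range(3)) / 3

# Eigenvectors of Sigma^{-1} Sigma_b; keep the C - 1 = 2 largest eigenvalues.
eigvals, eigvecs = np.linalg.eig(np.linalg.solve(Sigma, Sigma_b))
order = np.argsort(eigvals.real)[::-1]
W = eigvecs.real[:, order[:2]]   # projection onto the discriminant subspace

X_reduced = X @ W                # data projected to 2 discriminant dimensions
print(X_reduced.shape)           # (300, 2)
```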
If classification is required, instead of dimension reduction, there are a number of alternative techniques available. For instance, the classes may be partitioned, and a standard Fisher discriminant or LDA used to classify each partition. A common example of this is "one against the rest", where the points from one class are put in one group and everything else in the other, and then LDA applied. This will result in C classifiers, whose results are combined. Another common method is pairwise classification, where a new classifier is created for each pair of classes (giving $C(C-1)/2$ classifiers in total), with the individual classifiers combined to produce a final classification.
The typical implementation of the LDA technique requires that all the samples are available in advance. However, there are situations where the entire data set is not available and the input data are observed as a stream. In this case, it is desirable for the LDA feature extraction to have the ability to update the computed LDA features by observing the new samples without running the algorithm on the whole data set. For example, in many real-time applications such as mobile robotics or on-line face recognition, it is important to update the extracted LDA features as soon as new observations are available. An LDA feature extraction technique that can update the LDA features by simply observing new samples is an incremental LDA algorithm, and this idea has been extensively studied over the last two decades.[22] Chatterjee and Roychowdhury proposed an incremental self-organized LDA algorithm for updating the LDA features.[23] In other work, Demir and Ozmehmet proposed online local learning algorithms for updating LDA features incrementally using error-correcting and Hebbian learning rules.[24] Later, Aliyari et al. derived fast incremental algorithms to update the LDA features by observing new samples.[22]
In practice, the class means and covariances are not known. They can, however, be estimated from the training set. Either the maximum likelihood estimate or the maximum a posteriori estimate may be used in place of the exact value in the above equations. Although the estimates of the covariance may be considered optimal in some sense, this does not mean that the resulting discriminant obtained by substituting these values is optimal in any sense, even if the assumption of normally distributed classes is correct.
Another complication in applying LDA and Fisher's discriminant to real data occurs when the number of measurements of each sample (i.e., the dimensionality of each data vector) exceeds the number of samples in each class.[5] In this case, the covariance estimates do not have full rank, and so cannot be inverted. There are a number of ways to deal with this. One is to use a pseudo inverse instead of the usual matrix inverse in the above formulae. However, better numeric stability may be achieved by first projecting the problem onto the subspace spanned by $\Sigma_b$.[25] Another strategy to deal with small sample size is to use a shrinkage estimator of the covariance matrix, which can be expressed mathematically as

$\Sigma_{\text{shrunk}} = (1 - \lambda) \Sigma + \lambda I$

where $I$ is the identity matrix and $\lambda$ is the shrinkage intensity or regularisation parameter.
This leads to the framework of regularized discriminant analysis[26] or shrinkage discriminant analysis.[27]
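A small sketch of the shrinkage idea (the blend weight 0.1 is arbitrary): regularizing the sample covariance toward the identity makes it invertible even when the dimensionality exceeds the number of samples:

```python
import numpy as np

def shrink_covariance(X, lam=0.1):
    """Return (1 - lam) * sample covariance + lam * I."""
    S = np.cov(X, rowvar=False)
    return (1 - lam) * S + lam * np.eye(S.shape[0])

rng = np.random.default_rng(2)
X = rng.normal(size=(10, 50))   # 10 samples, 50 dimensions: S is rank-deficient
S = np.cov(X, rowvar=False)
print(np.linalg.matrix_rank(S))                      # < 50: S cannot be inverted
print(np.linalg.matrix_rank(shrink_covariance(X)))   # 50: invertible after shrinkage
```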
Also, in many practical cases linear discriminants are not suitable. LDA and Fisher's discriminant can be extended for use in non-linear classification via the kernel trick. Here, the original observations are effectively mapped into a higher-dimensional non-linear space. Linear classification in this non-linear space is then equivalent to non-linear classification in the original space. The most commonly used example of this is the kernel Fisher discriminant.
LDA can be generalized to multiple discriminant analysis, where $c$ becomes a categorical variable with $N$ possible states, instead of only two. Analogously, if the class-conditional densities $p(\vec{x} \mid c = i)$ are normal with shared covariances, the sufficient statistics for $P(c \mid \vec{x})$ are the values of $N$ projections, which are the subspace spanned by the $N$ means, affine projected by the inverse covariance matrix. These projections can be found by solving a generalized eigenvalue problem, where the numerator is the covariance matrix formed by treating the means as the samples, and the denominator is the shared covariance matrix. See "Multiclass LDA" above for details.
In addition to the examples given below, LDA is applied in positioning and product management.
In bankruptcy prediction based on accounting ratios and other financial variables, linear discriminant analysis was the first statistical method applied to systematically explain which firms entered bankruptcy and which survived. Despite limitations, including the known nonconformance of accounting ratios to the normal distribution assumptions of LDA, Edward Altman's 1968 model[28] is still a leading model in practical applications.[29][30][31]
In computerised face recognition, each face is represented by a large number of pixel values. Linear discriminant analysis is primarily used here to reduce the number of features to a more manageable number before classification. Each of the new dimensions is a linear combination of pixel values, which form a template. The linear combinations obtained using Fisher's linear discriminant are called Fisher faces, while those obtained using the related principal component analysis are called eigenfaces.
In marketing, discriminant analysis was once often used to determine the factors which distinguish different types of customers and/or products on the basis of surveys or other forms of collected data. Logistic regression or other methods are now more commonly used. The use of discriminant analysis in marketing can be described by the following steps:
The main application of discriminant analysis in medicine is the assessment of severity state of a patient and prognosis of disease outcome. For example, during retrospective analysis, patients are divided into groups according to severity of disease – mild, moderate, and severe form. Then results of clinical and laboratory analyses are studied to reveal statistically different variables in these groups. Using these variables, discriminant functions are built to classify disease severity in future patients. Additionally, Linear Discriminant Analysis (LDA) can help select more discriminative samples for data augmentation, improving classification performance.[32]
In biology, similar principles are used in order to classify and define groups of different biological objects, for example, to define phage types of Salmonella enteritidis based on Fourier transform infrared spectra,[33] or to detect the animal source of Escherichia coli by studying its virulence factors,[34] etc.
This method can be used to separate the alteration zones.[clarification needed] For example, when different data from various zones are available, discriminant analysis can find the pattern within the data and classify it effectively.[35]
Discriminant function analysis is very similar to logistic regression, and both can be used to answer the same research questions.[10] Logistic regression does not have as many assumptions and restrictions as discriminant analysis. However, when discriminant analysis' assumptions are met, it is more powerful than logistic regression.[36] Unlike logistic regression, discriminant analysis can be used with small sample sizes. It has been shown that when sample sizes are equal and homogeneity of variance/covariance holds, discriminant analysis is more accurate.[8] Despite all these advantages, logistic regression has nonetheless become the common choice, since the assumptions of discriminant analysis are rarely met.[9][8]
Geometric anomalies in higher dimensions lead to the well-known curse of dimensionality. Nevertheless, proper utilization of concentration of measure phenomena can make computation easier.[37] An important case of these blessing of dimensionality phenomena was highlighted by Donoho and Tanner: if a sample is essentially high-dimensional then each point can be separated from the rest of the sample by a linear inequality, with high probability, even for exponentially large samples.[38] These linear inequalities can be selected in the standard (Fisher's) form of the linear discriminant for a rich family of probability distributions.[39] In particular, such theorems are proven for log-concave distributions including the multidimensional normal distribution (the proof is based on the concentration inequalities for log-concave measures[40]) and for product measures on a multidimensional cube (this is proven using Talagrand's concentration inequality for product probability spaces). Data separability by classical linear discriminants simplifies the problem of error correction for artificial intelligence systems in high dimension.[41] | https://en.wikipedia.org/wiki/Linear_discriminant_analysis
A database index is a data structure that improves the speed of data retrieval operations on a database table at the cost of additional writes and storage space to maintain the index data structure. Indexes are used to quickly locate data without having to search every row in a database table every time said table is accessed. Indexes can be created using one or more columns of a database table, providing the basis for both rapid random lookups and efficient access of ordered records.
An index is a copy of selected columns of data, from a table, that is designed to enable very efficient search. An index normally includes a "key" or direct link to the original row of data from which it was copied, to allow the complete row to be retrieved efficiently. Some databases extend the power of indexing by letting developers create indexes on column values that have been transformed by functions or expressions. For example, an index could be created on upper(last_name), which would only store the upper-case versions of the last_name field in the index. Another option sometimes supported is the use of a partial index, where index entries are created only for those records that satisfy some conditional expression. A further aspect of flexibility is to permit indexing on user-defined functions, as well as expressions formed from an assortment of built-in functions.
Most database software includes indexing technology that enables sub-linear time lookup to improve performance, as linear search is inefficient for large databases.
Suppose a database contains N data items and one must be retrieved based on the value of one of the fields. A simple implementation retrieves and examines each item according to the test. If there is only one matching item, this can stop when it finds that single item, but if there are multiple matches, it must test everything. This means that the number of operations in the average case is O(N) or linear time. Since databases may contain many objects, and since lookup is a common operation, it is often desirable to improve performance.
An index is any data structure that improves the performance of lookup. There are many different data structures used for this purpose. There are complex design trade-offs involving lookup performance, index size, and index-update performance. Many index designs exhibit logarithmic (O(log(N))) lookup performance, and in some applications it is possible to achieve flat (O(1)) performance.
Indexes are used to police database constraints, such as UNIQUE, EXCLUSION, PRIMARY KEY and FOREIGN KEY. An index may be declared as UNIQUE, which creates an implicit constraint on the underlying table. Database systems usually implicitly create an index on a set of columns declared PRIMARY KEY, and some are capable of using an already-existing index to police this constraint. Many database systems require that both referencing and referenced sets of columns in a FOREIGN KEY constraint are indexed, thus improving performance of inserts, updates and deletes to the tables participating in the constraint.
Some database systems support an EXCLUSION constraint that ensures that, for a newly inserted or updated record, a certain predicate holds for no other record. This can be used to implement a UNIQUE constraint (with equality predicate) or more complex constraints, like ensuring that no overlapping time ranges or no intersecting geometry objects would be stored in the table. An index supporting fast searching for records satisfying the predicate is required to police such a constraint.[1]
The data is present in arbitrary order, but the logical ordering is specified by the index. The data rows may be spread throughout the table regardless of the value of the indexed column or expression. The non-clustered index tree contains the index keys in sorted order, with the leaf level of the index containing the pointer to the record (page and the row number in the data page in page-organized engines; row offset in file-organized engines).
In a non-clustered index, the physical order of the rows is not the same as the index order. There can be more than one non-clustered index on a database table.
Clustering alters the data block into a certain distinct order to match the index, resulting in the row data being stored in order. Therefore, only one clustered index can be created on a given database table. Clustered indices can greatly increase overall speed of retrieval, but usually only where the data is accessed sequentially in the same or reverse order of the clustered index, or when a range of items is selected.
Since the physical records are in this sort order on disk, the next row item in the sequence is immediately before or after the last one, and so fewer data block reads are required. The primary feature of a clustered index is therefore the ordering of the physical data rows in accordance with the index blocks that point to them. Some databases separate the data and index blocks into separate files, others put two completely different data blocks within the same physical file(s).
When multiple databases and multiple tables are joined, it is called a cluster (not to be confused with the clustered index described previously). The records for the tables sharing the value of a cluster key shall be stored together in the same or nearby data blocks. This may improve the joins of these tables on the cluster key, since the matching records are stored together and less I/O is required to locate them.[2] The cluster configuration defines the data layout in the tables that are parts of the cluster. A cluster can be keyed with a B-tree index or a hash table. The data block where the table record is stored is defined by the value of the cluster key.
The order in which the index definition lists the columns is important. It is possible to retrieve a set of row identifiers using only the first indexed column. However, it is not possible or efficient (on most databases) to retrieve the set of row identifiers using only the second or greater indexed column.
For example, in a phone book organized by city first, then by last name, and then by first name, in a particular city, one can easily extract the list of all phone numbers. However, it would be very tedious to find all the phone numbers for a particular last name. One would have to look within each city's section for the entries with that last name. Some databases can do this, others just won't use the index.
In the phone book example with a composite index created on the columns (city, last_name, first_name), if we search by giving exact values for all three fields, the search time is minimal. But if we provide the values for city and first_name only, the search uses only the city field to retrieve all matched records. Then a sequential lookup checks the matching with first_name. So, to improve performance, one must ensure that the index is created in the order of the search columns.
Indexes are useful for many applications but come with some limitations. Consider the following SQL statement: SELECT first_name FROM people WHERE last_name = 'Smith';. To process this statement without an index the database software must look at the last_name column on every row in the table (this is known as a full table scan). With an index the database simply follows the index data structure (typically a B-tree) until the Smith entry has been found; this is much less computationally expensive than a full table scan.
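As an illustration (a sketch using SQLite from Python; the table mirrors the hypothetical people table above), the query planner reports the switch from a full table scan to an index search once the index exists:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE people (id INTEGER PRIMARY KEY, first_name TEXT, last_name TEXT)")
conn.executemany(
    "INSERT INTO people (first_name, last_name) VALUES (?, ?)",
    [("Ada", "Lovelace"), ("Alan", "Turing"), ("Jane", "Smith")] * 1000,
)

query = "SELECT first_name FROM people WHERE last_name = 'Smith'"

# Without an index: the plan shows a full table scan, e.g. 'SCAN people'.
print(conn.execute("EXPLAIN QUERY PLAN " + query).fetchall())

conn.execute("CREATE INDEX idx_last_name ON people (last_name)")

# With the index: e.g. 'SEARCH people USING INDEX idx_last_name (last_name=?)'.
print(conn.execute("EXPLAIN QUERY PLAN " + query).fetchall())
```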
Consider this SQL statement: SELECT email_address FROM customers WHERE email_address LIKE '%@wikipedia.org';. This query would yield an email address for every customer whose email address ends with "@wikipedia.org", but even if the email_address column has been indexed the database must perform a full index scan. This is because the index is built with the assumption that words go from left to right. With a wildcard at the beginning of the search-term, the database software is unable to use the underlying index data structure (in other words, the WHERE-clause is not sargable). This problem can be solved through the addition of another index created on reverse(email_address) and a SQL query like this: SELECT email_address FROM customers WHERE reverse(email_address) LIKE reverse('%@wikipedia.org');. This puts the wild-card at the right-most part of the query (now gro.aidepikiw@%), which the index on reverse(email_address) can satisfy.
When wildcard characters are used on both sides of the search word, as in %wikipedia.org%, the index available on this field is not used. Rather, only a sequential search is performed, which takes $O(N)$ time.
A bitmap index is a special kind of indexing that stores the bulk of its data as bit arrays (bitmaps) and answers most queries by performing bitwise logical operations on these bitmaps. The most commonly used indexes, such as B+ trees, are most efficient if the values they index do not repeat or repeat a small number of times. In contrast, the bitmap index is designed for cases where the values of a variable repeat very frequently. For example, the sex field in a customer database usually contains at most three distinct values: male, female or unknown (not recorded). For such variables, the bitmap index can have a significant performance advantage over the commonly used trees.
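A toy sketch of the idea (column values invented; Python integers serve as the bit arrays): one bitmap per distinct value, with bit i set when row i holds that value, so a conjunctive query reduces to a single bitwise AND:

```python
from collections import defaultdict

def build_bitmaps(column):
    """One bitmap (an int used as a bit array) per distinct column value."""
    bitmaps = defaultdict(int)
    for i, value in enumerate(column):
        bitmaps[value] |= 1 << i
    return bitmaps

sex = ["male", "female", "female", "unknown", "male", "female"]
status = ["single", "married", "single", "single", "married", "married"]

sex_bm = build_bitmaps(sex)
status_bm = build_bitmaps(status)

# Query: sex = 'female' AND status = 'married' via one bitwise AND.
hits = sex_bm["female"] & status_bm["married"]
print([i for i in range(len(sex)) if hits >> i & 1])  # -> [1, 5]
```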
A dense index in databases is a file with pairs of keys and pointers for every record in the data file. Every key in this file is associated with a particular pointer to a record in the sorted data file.
In clustered indices with duplicate keys, the dense index points to the first record with that key.[3]
A sparse index in databases is a file with pairs of keys and pointers for every block in the data file. Every key in this file is associated with a particular pointer to the block in the sorted data file. In clustered indices with duplicate keys, the sparse index points to the lowest search key in each block.
A reverse-key index reverses the key value before entering it in the index. E.g., the value 24538 becomes 83542 in the index. Reversing the key value is particularly useful for indexing data such as sequence numbers, where new key values monotonically increase.
An inverted index maps a content word to the document containing it, thereby allowing full-text searches.
The primary index contains the key fields of the table and a pointer to the non-key fields of the table. The primary index is created automatically when the table is created in the database.
It is used to index fields that are neither ordering fields nor key fields (there is no assurance that the file is organized on key field or primary key field). One index entry for every tuple in the data file (dense index) contains the value of the indexed attribute and pointer to the block or record.
A hash index is one of the most commonly used indexes in data management. It is typically created on a column that contains unique values, such as a primary key or an email address.
Another type of index used in database systems is linear hashing.
Indices can be implemented using a variety of data structures. Popular indices include balanced trees, B+ trees and hashes.[4]
In Microsoft SQL Server, the leaf node of the clustered index corresponds to the actual data, not simply a pointer to data that resides elsewhere, as is the case with a non-clustered index.[5] Each relation can have a single clustered index and many unclustered indices.[6]
An index is typically being accessed concurrently by several transactions and processes, and thus needs concurrency control. While in principle indexes can utilize the common database concurrency control methods, specialized concurrency control methods for indexes exist, which are applied in conjunction with the common methods for a substantial performance gain.
In most cases, an index is used to quickly locate the data records from which the required data is read. In other words, the index is only used to locate data records in the table and not to return data.
A covering index is a special case where the index itself contains the required data fields and can answer the query from the index alone.
Consider a table with an ID column and a Name column (other fields omitted).
To find the Name for ID 13, an index on (ID) is useful, but the record must still be read to get the Name. However, an index on (ID, Name) contains the required data field and eliminates the need to look up the record.
Covering indexes are each specific to a single table. Queries which JOIN or access multiple tables may potentially consider covering indexes on more than one of these tables.[7]
A covering index can dramatically speed up data retrieval but may itself be large due to the additional keys, which slow down data insertion and update. To reduce such index size, some systems allow including non-key fields in the index. Non-key fields are not themselves part of the index ordering but only included at the leaf level, allowing for a covering index with less overall index size.
This can be done in SQL with CREATE INDEX my_index ON my_table (id) INCLUDE (name);.[8][9]
No standard defines how to create indexes, because the ISO SQL standard does not cover physical aspects. Indexes are one of the physical parts of database conception, among others like storage (tablespaces or filegroups).[clarify] RDBMS vendors all provide a CREATE INDEX syntax with some specific options that depend on their software's capabilities. | https://en.wikipedia.org/wiki/Database_index
In text retrieval, full-text search refers to techniques for searching a single computer-stored document or a collection in a full-text database. Full-text search is distinguished from searches based on metadata or on parts of the original texts represented in databases (such as titles, abstracts, selected sections, or bibliographical references).
In a full-text search, a search engine examines all of the words in every stored document as it tries to match search criteria (for example, text specified by a user). Full-text-searching techniques appeared in the 1960s, for example IBM STAIRS from 1969, and became common in online bibliographic databases in the 1990s.[verification needed] Many websites and application programs (such as word processing software) provide full-text-search capabilities. Some web search engines, such as the former AltaVista, employ full-text-search techniques, while others index only a portion of the web pages examined by their indexing systems.[1]
When dealing with a small number of documents, it is possible for the full-text-search engine to directly scan the contents of the documents with each query, a strategy called "serial scanning". This is what some tools, such as grep, do when searching.
However, when the number of documents to search is potentially large, or the quantity of search queries to perform is substantial, the problem of full-text search is often divided into two tasks: indexing and searching. The indexing stage will scan the text of all the documents and build a list of search terms (often called an index, but more correctly named a concordance). In the search stage, when performing a specific query, only the index is referenced, rather than the text of the original documents.[2]
The indexer will make an entry in the index for each term or word found in a document, and possibly note its relative position within the document. Usually the indexer will ignore stop words (such as "the" and "and") that are both common and insufficiently meaningful to be useful in searching. Some indexers also employ language-specific stemming on the words being indexed. For example, the words "drives", "drove", and "driven" will be recorded in the index under the single concept word "drive".
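A minimal sketch of such an indexer (with a deliberately crude suffix-stripping stemmer of our own; real systems use e.g. the Porter algorithm): it maps each indexed term to the documents and positions where it occurs:

```python
from collections import defaultdict

STOP_WORDS = {"the", "and", "a", "of", "to"}

def stem(word):
    # Crude illustrative stemmer; a real indexer would use a proper algorithm.
    for suffix in ("ing", "es", "s", "ed"):
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)]
    return word

def build_index(docs):
    """Map each indexed term to a list of (doc_id, position) entries."""
    index = defaultdict(list)
    for doc_id, text in docs.items():
        for pos, word in enumerate(text.lower().split()):
            if word not in STOP_WORDS:
                index[stem(word)].append((doc_id, pos))
    return index

docs = {1: "the cat drives the car", 2: "she drove home and parked the car"}
index = build_index(docs)
print(index["car"])   # [(1, 4), (2, 6)]
```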
Recall measures the quantity of relevant results returned by a search, while precision is the measure of the quality of the results returned. Recall is the ratio of relevant results returned to all relevant results. Precision is the ratio of the number of relevant results returned to the total number of results returned.
The diagram at right represents a low-precision, low-recall search. In the diagram the red and green dots represent the total population of potential search results for a given search. Red dots represent irrelevant results, and green dots represent relevant results. Relevancy is indicated by the proximity of search results to the center of the inner circle. Of all possible results shown, those that were actually returned by the search are shown on a light-blue background. In the example only 1 relevant result of 3 possible relevant results was returned, so the recall is a very low ratio of 1/3, or 33%. The precision for the example is a very low 1/4, or 25%, since only 1 of the 4 results returned was relevant.[3]
Due to the ambiguities of natural language, full-text-search systems typically include options like filtering to increase precision and stemming to increase recall. Controlled-vocabulary searching also helps alleviate low-precision issues by tagging documents in such a way that ambiguities are eliminated. The trade-off between precision and recall is simple: an increase in precision can lower overall recall, while an increase in recall lowers precision.[4]
Full-text searching is likely to retrieve many documents that are not relevant to the intended search question. Such documents are called false positives (see Type I error). The retrieval of irrelevant documents is often caused by the inherent ambiguity of natural language. In the sample diagram to the right, false positives are represented by the irrelevant results (red dots) that were returned by the search (on a light-blue background).
Clustering techniques based on Bayesian algorithms can help reduce false positives. For a search term of "bank", clustering can be used to categorize the document/data universe into "financial institution", "place to sit", "place to store" etc. Depending on the occurrences of words relevant to the categories, search terms or a search result can be placed in one or more of the categories. This technique is being extensively deployed in the e-discovery domain.[clarification needed]
The deficiencies of full-text searching have been addressed in two ways: by providing users with tools that enable them to express their search questions more precisely, and by developing new search algorithms that improve retrieval precision.
The PageRank algorithm developed by Google gives more prominence to documents to which other Web pages have linked.[6] See Search engine for additional examples.
The following is a partial list of available software products whose predominant purpose is to perform full-text indexing and searching. Some of these are accompanied with detailed descriptions of their theory of operation or internal algorithms, which can provide additional insight into how full-text search may be accomplished. | https://en.wikipedia.org/wiki/Full-text_search |
Key Word In Context (KWIC) is the most common format for concordance lines. The term KWIC was coined by Hans Peter Luhn.[1] The system was based on a concept called keyword in titles, which was first proposed for Manchester libraries in 1864 by Andrea Crestadoro.[2]
A KWIC index is formed by sorting and aligning the words within an article title to allow each word (except the stop words) in titles to be searchable alphabetically in the index.[3] It was a useful indexing method for technical manuals before computerized full text search became common.
For example, a search query including all of the words in an example definition ("KWIC is an acronym for Key Word In Context, the most common format for concordance lines") and the Wikipedia slogan in English ("the free encyclopedia"), searched against a Wikipedia page, might yield a KWIC index as follows. A KWIC index usually uses a wide layout to allow the display of maximum 'in context' information (not shown in the following example).
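The construction of such an index can be sketched as follows (simplified alignment and a stop-word list of our own choosing; a real KWIC index would use a much wider layout):

```python
STOP_WORDS = {"a", "an", "the", "is", "in", "for", "of", "most"}

def kwic_lines(title, width=30):
    """Return (keyword, aligned line) pairs, one per non-stop word."""
    words = title.split()
    lines = []
    for i, word in enumerate(words):
        if word.lower() in STOP_WORDS:
            continue
        left = " ".join(words[:i])[-width:]
        right = " ".join(words[i + 1:])[:width]
        lines.append((word.lower(), f"{left:>{width}}  {word.upper()}  {right}"))
    return lines

# Sort alphabetically by keyword, as a KWIC index does.
for _, line in sorted(kwic_lines("KWIC is an acronym for Key Word In Context")):
    print(line)
```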
A KWIC index is a special case of a permuted index.[4] This term refers to the fact that it indexes all cyclic permutations of the headings. Books composed of many short sections with their own descriptive headings, most notably collections of manual pages, often ended with a permuted index section, allowing the reader to easily find a section by any word from its heading. This practice, also known as Key Word Out of Context (KWOC), is no longer common. | https://en.wikipedia.org/wiki/Key_Word_in_Context
A selection-based search system is a search engine system in which the user invokes a search query using only the mouse.[1] A selection-based search system allows the user to search the internet for more information about any keyword or phrase contained within a document or webpage in any software application on their desktop computer using the mouse.
Traditional browser-based search systems require the user to launch a web browser, navigate to a search page, type or paste a query into a search box, review a list of results, and click a hyperlink to view these results. Three characteristic features of a selection-based search system are that the user can invoke search using only their mouse from within the context of any application on their desktop (for example Microsoft Office, Adobe Reader, Mozilla Firefox, etc.), receive categorized suggestions which are based on the context of the user-selected text (or in some cases the wisdom of crowds), and view the results in floating information boxes which can be sized, shared, docked, closed and stacked on top of the document that has the user's primary focus.
In its simplest form, selection-based search enables users to launch a search query by selecting text on any application on their desktop. It is commonly believed that selection-based search lowers the user barrier to search and permits an incremental number of searches per user per day.[2]Selection-based search systems also operate on the premise that users value information in context. They may save the user from having to juggle multiple applications, multiple web browsers or use multiple search engines separately.[3]
The term selection-based search is frequently used to classify a set of search engine systems, including a desktop client and a series of cloud computing services, but is also used to describe the paradigm of categorizing a keyword and searching multiple data sources using only the mouse. The National Information Standards Organization (NISO) uses the terms selection-based search and mouse-based search interchangeably to describe this web search paradigm.
Selection-based search systems create what is known as a semantic database of trained terms. They do not compile a physical database or catalogue of the web on the user's desktop computer. Instead, they take a user's selected keyword or keywords, pass it to several heterogeneous online cloud services, categorize the keyword(s), and then compile the results in a homogeneous manner based on a specific algorithm.[4]
No two selection-based search systems are alike. Some simply provide a list of links in a context menu to other websites, such as the proposed Internet Explorer 8 Accelerators feature. Others only allow the user to search their desktop files, such as Macintosh Spotlight, or to search a popular search engine such as Google or Yahoo!, while others only search lesser-known search engines, newsgroups, and more specialized databases. Selection-based search systems also differ in how the results are presented and in the quality of the semantic categorization which is used. Some will open links to content in a new browser window. Others return content in floating information boxes which can be sized, shared, docked, etc.
A key challenge for selection-based search is that a long or nested list of categories quickly becomes unwieldy for the user. Therefore, it is incumbent upon the selection-based search system both to categorize the user-selected text and to identify those online services which most naturally apply to the selected text. For example, when the user selects an address, the system needs to identify the address as most suitable for an online mapping service such as Google Maps. When the user selects a movie title, the system needs to identify the selection as suitable for a movie database such as the Internet Movie Database. When the user selects the name of a company, the system needs to identify the concordant stock symbol and an appropriate financial database such as Yahoo! Finance.
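That categorization step can be sketched as a simple dispatcher (the patterns and service table below are invented for illustration; real systems use trained semantic categorizers rather than regular expressions):

```python
import re

# Invented heuristics: map a category test to a suitable kind of service.
SERVICES = [
    (re.compile(r"\d+\s+\w+\s+(street|st|avenue|ave|road|rd)\b", re.I), "mapping service"),
    (re.compile(r"^[A-Z]{1,5}$"), "financial database"),   # looks like a stock ticker
    (re.compile(r"\(\d{4}\)$"), "movie database"),         # e.g. "Alien (1979)"
]

def categorize(selection):
    """Return the kind of service suggested for the user's selected text."""
    for pattern, service in SERVICES:
        if pattern.search(selection.strip()):
            return service
    return "general web search"   # fall back to the user's search engine

for text in ["221 Baker Street", "IBM", "Alien (1979)", "spatial ability"]:
    print(f"{text!r} -> {categorize(text)}")
```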
Usability can vary widely between selection-based search systems, depending on a large number of variables. Even the most basic selection-based search systems allow more of the web to be searched by the user in the context of their work than any one stand-alone search engine. On the other hand, the process is sometimes said to be redundant if the system applies no intelligence to categorizing the selected text and matching it to an online service, and simply provides a link for the user to their preferred search engine(s). | https://en.wikipedia.org/wiki/Selection-based_search
A sitemap is a list of pages of a web site within a domain.
There are three primary kinds of sitemap:
Sitemaps may be addressed to users or to software.
Many sites have user-visible sitemaps which present a systematic view, typically hierarchical, of the site. These are intended to help visitors find specific pages, and can also be used by crawlers. They also act as a navigation aid[1] by providing an overview of a site's content at a single glance.
Alphabetically organized sitemaps, sometimes called site indexes, are a different approach.
For use by search engines and other crawlers, there is a structured format, the XML Sitemap, which lists the pages in a site, their relative importance, and how often they are updated.[2] This is pointed to from the robots.txt file and is typically called sitemap.xml. The structured format is particularly important for websites which include pages that are not accessible through links from other pages, but only through the site's search tools or by dynamic construction of URLs in JavaScript.
Google introduced the Sitemap protocol so web developers can publish lists of links from across their sites. The basic premise is that some sites have a large number of dynamic pages that are only available through the use of forms and user entries. The Sitemap files contain URLs to these pages so that web crawlers can find them. Bing, Google, Yahoo and Ask now jointly support the Sitemaps protocol.
Since the major search engines use the same protocol,[3] having a Sitemap lets them have the updated page information. Sitemaps do not guarantee all links will be crawled, and being crawled does not guarantee indexing.[4] Google Webmaster Tools allow a website owner to upload a sitemap that Google will crawl, or they can accomplish the same thing with the robots.txt file.[5]
Below is an example of a validated XML sitemap for a simple three-page website. Sitemaps are a useful tool for making sites searchable, particularly those written in non-HTML languages. | https://en.wikipedia.org/wiki/Site_map |
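A minimal sitemap of the kind described, following the sitemaps.org 0.9 schema (the URLs and dates are placeholders), looks like this:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://www.example.com/</loc>
    <lastmod>2024-01-01</lastmod>
    <changefreq>monthly</changefreq>
    <priority>1.0</priority>
  </url>
  <url>
    <loc>https://www.example.com/about</loc>
    <lastmod>2024-01-01</lastmod>
  </url>
  <url>
    <loc>https://www.example.com/contact</loc>
    <lastmod>2024-01-01</lastmod>
  </url>
</urlset>
```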
Document retrieval is defined as the matching of some stated user query against a set of free-text records. These records could be any type of mainly unstructured text, such as newspaper articles, real estate records or paragraphs in a manual. User queries can range from multi-sentence full descriptions of an information need to a few words.
Document retrieval is sometimes referred to as, or as a branch of, text retrieval. Text retrieval is a branch of information retrieval where the information is stored primarily in the form of text. Text databases became decentralized thanks to the personal computer. Text retrieval is a critical area of study today, since it is the fundamental basis of all internet search engines.
Document retrieval systems find information to given criteria by matching text records (documents) against user queries, as opposed to expert systems that answer questions by inferring over a logical knowledge database. A document retrieval system consists of a database of documents, a classification algorithm to build a full text index, and a user interface to access the database.
A document retrieval system has two main tasks:
Internet search engines are classical applications of document retrieval. The vast majority of retrieval systems currently in use range from simple Boolean systems through to systems using statistical or natural language processing techniques.
There are two main classes of indexing schemata for document retrieval systems: form based (or word based), and content based indexing. The document classification scheme (or indexing algorithm) in use determines the nature of the document retrieval system.
Form based document retrieval addresses the exact syntactic properties of a text, comparable to substring matching in string searches. The text is generally unstructured and not necessarily in a natural language; the system could, for example, be used to process large sets of chemical representations in molecular biology. A suffix tree algorithm is an example of form based indexing.
The content based approach exploits semantic connections between documents and parts thereof, and semantic connections between queries and documents. Most content based document retrieval systems use an inverted index algorithm.
A signature file is a technique that creates a quick and dirty filter, for example a Bloom filter, that will keep all the documents that match the query and, hopefully, only a few that do not. The way this is done is by creating for each file a signature, typically a hash coded version. One method is superimposed coding. A post-processing step is done to discard the false alarms. Since in most cases this structure is inferior to inverted files in terms of speed, size and functionality, it is not used widely. However, with proper parameters it can beat the inverted files in certain environments.
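A toy sketch of superimposed coding (the hash choices, signature width, and document texts are all invented): each document's words are hashed into one integer signature; the query's signature selects candidates, and a verification pass discards the false alarms:

```python
def signature(words, bits=64, k=2):
    """Superimpose k hash-derived bits per word into one integer signature."""
    sig = 0
    for word in words:
        for seed in range(k):
            sig |= 1 << (hash((word, seed)) % bits)
    return sig

docs = {
    1: "the cat sat on the mat",
    2: "dogs chase cats in the park",
    3: "databases index documents for retrieval",
}
sigs = {doc_id: signature(text.split()) for doc_id, text in docs.items()}

query = "index retrieval".split()
q_sig = signature(query)

# Filter: candidates are documents whose signature contains all query bits.
candidates = [d for d, s in sigs.items() if s & q_sig == q_sig]
# Post-processing: verify candidates to discard the false alarms.
matches = [d for d in candidates if all(w in docs[d].split() for w in query)]
print(candidates, matches)   # matches is [3]; candidates may include false alarms
```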
The PubMed[1] form interface features the "related articles" search, which works through a comparison of words from the documents' title, abstract, and MeSH terms using a word-weighted algorithm.[2][3] | https://en.wikipedia.org/wiki/Text_retrieval
The Association of College and Research Libraries defines information literacy as a "set of integrated abilities encompassing the reflective discovery of information, the understanding of how information is produced and valued and the use of information in creating new knowledge and participating ethically in communities of learning".[1][2][3][4] In the United Kingdom, the Chartered Institute of Library and Information Professionals' definition also makes reference to knowing both "when" and "why" information is needed.[5]
The 1989 American Library Association (ALA) Presidential Committee on Information Literacy formally defined information literacy (IL) as attributes of an individual, stating that "to be information literate, a person must be able to recognize when information is needed and have the ability to locate, evaluate and use effectively the needed information".[6][7] In 1990, academic Lori Arp published a paper asking, "Are information literacy instruction and bibliographic instruction the same?"[8] Arp argued that neither term was particularly well defined by theoreticians or practitioners in the field. Further studies were needed to lessen the confusion and continue to articulate the parameters of the question.[8]
The Alexandria Proclamation of 2005 defined the term as a human rights issue: "Information literacy empowers people in all walks of life to seek, evaluate, use and create information effectively to achieve their personal, social, occupational and educational goals. It is a basic human right in a digital world and promotes social inclusion in all nations."[9] The United States National Forum on Information Literacy defined information literacy as "the ability to know when there is a need for information, to be able to identify, locate, evaluate, and effectively use that information for the issue or problem at hand."[10][11] Meanwhile, in the UK, the library professional body CILIP defines information literacy as "the ability to think critically and make balanced judgements about any information we find and use. It empowers us as citizens to develop informed views and to engage fully with society."[12]
A number of other efforts have been made to better define the concept and its relationship to other skills and forms of literacy. Other pedagogical outcomes related to information literacy include traditional literacy, computer literacy, research skills and critical thinking skills. Information literacy as a sub-discipline is an emerging topic of interest and a countermeasure among educators and librarians, given the prevalence of misinformation, fake news, and disinformation.
Scholars have argued that in order to maximize people's contributions to a democratic andpluralisticsociety, educators should be challenging governments and the business sector to support and fund educational initiatives in information literacy.[13]
The phrase "information literacy" first appeared in print in a 1974 report written on behalf of theNational Commission on Libraries and Information Scienceby Paul G. Zurkowski, who was at the time president of the Information Industry Association (now theSoftware and Information Industry Association). Zurkowski used the phrase to describe the "techniques and skills" learned by the information literate "for utilizing the wide range of information tools as well as primary sources in molding information solutions to their problems" and drew a relatively firm line between the "literates" and "information illiterates."[14]
The concept of information literacy appeared again in a 1976 paper by Lee Burchinal presented at the Texas A&M University library's symposium. Burchinal identified a set of skills needed to locate and use information for problem solving and decision making.[15] In another 1976 article in Library Journal, M.R. Owens applied the concept to political information literacy and civic responsibility, stating, "All [people] are created equal but voters with information resources are in a position to make more intelligent decisions than citizens who are information illiterates. The application of information resources to the process of decision-making to fulfill civic responsibilities is a vital necessity."[16]
In a literature review published in an academic journal in 2020, Oral Roberts University professor Angela Sample cites several conceptual waves of information literacy definitions, which variously frame information literacy as a way of thinking, a set of skills, and a social practice.[17][18][19] The introduction of these concepts led to the adoption of a mechanism called metaliteracy and the creation of threshold concepts and knowledge dispositions, which led to the creation of the ALA's Information Literacy Framework.[18][17]
The American Library Association's Presidential Committee on Information Literacy released a report on January 10, 1989. Titled the Presidential Committee on Information Literacy: Final Report,[20] it outlines the importance of information literacy, opportunities to develop it, and the idea of an Information Age School. The recommendations of the Committee led to the establishment of the National Forum on Information Literacy, a coalition of more than 90 national and international organizations.[10]
In 1998, the American Association of School Librarians and the Association for Educational Communications and Technology published Information Power: Building Partnerships for Learning, which further established specific goals for information literacy education, defining nine standards in the categories of "information literacy," "independent learning," and "social responsibility."[21]
Also in 1998, the Presidential Committee on Information Literacy updated its final report.[22]The report outlined six recommendations from the original report, and examined areas of challenge and progress.
In 1999, the Society of College, National and University Libraries (SCONUL) in the UK publishedThe Seven Pillars of Information Literacyto model the relationship between information skills and IT skills, and the idea of the progression of information literacy into the curriculum of higher education.
In 2003, the National Forum on Information Literacy, along withUNESCOand theNational Commission on Libraries and Information Science, sponsored an international conference in Prague.[23]Representatives from twenty-three countries gathered to discuss the importance of information literacy in a global context. The resulting Prague Declaration[24]described information literacy as a "key to social, cultural, and economic development of nations and communities, institutions and individuals in the 21st century" and declared its acquisition as "part of the basichuman rightof lifelong learning".[24]
In the United States specifically, information literacy was prioritized in 2009 during President Barack Obama's first term. In an effort to stress the value information literacy has on everyday communication, he designated October as National Information Literacy Awareness Month in his proclamation.[25]
TheAmerican Library Association's Presidential Committee on Information Literacy defined information literacy as the ability "to recognize when information is needed and have the ability to locate, evaluate, and use effectively the needed information" and highlighted information literacy as a skill essential forlifelong learningand the production of an informed and prosperous citizenry.[20]
The committee outlined six principal recommendations. Included were recommendations like "Reconsider the ways we have organized information institutionally, structured information access, and defined information's role in our lives at home in the community, and in the work place"; to promote "public awareness of the problems created by information illiteracy"; to develop a national research agenda related to information and its use; to ensure the existence of "a climate conducive to students' becoming information literate"; and to include information literacy concerns in teacher education.[26]
In the updated report, the committee ended with an invitation, asking the National Forum and regular citizens to recognize that "the result of these combined efforts will be a citizenry which is made up of effective lifelong learners who can always find the information needed for the issue or decision at hand. This new generation of information literate citizens will truly be America's most valuable resource," and to continue working toward an information literate world.[27]
The Presidential Committee on Information Literacy resulted in the creation of the National Forum on Information Literacy.
In 1983, the United States published "A Nation at Risk: The Imperative for Educational Reform", a report declaring that a "rising tide of mediocrity" was eroding the foundation of the American educational system.[28] The report has been regarded as the genesis of the current educational reform movement within the United States.[citation needed]
This report, in conjunction with the rapid emergence of the information society, led the American Library Association (ALA) to convene a panel of educators and librarians in 1987. The Forum, UNESCO and the International Federation of Library Associations and Institutions (IFLA) collaborated to organize several "experts meetings" that resulted in the Prague Declaration (2003) and the Alexandria Proclamation (2005). Both statements underscore the importance of information literacy as a basic, fundamental human right, and consider IL as a lifelong learning skill.
IFLAhas established an Information Literacy Section. The Section has, in turn, developed and mounted an Information Literacy Resources Directory, called InfoLit Global. Librarians, educators and information professionals may self-register and upload information-literacy-related materials. (IFLA, Information Literacy Section, n.d.) According to the IFLA website, "The primary purpose of the Information Literacy Section is to foster international cooperation in the development of information literacy education in all types of libraries and information institutions."[29]
The International Alliance for Information Literacy (IAIL) was created from the recommendation of the Prague Conference of Information Literacy Experts in 2003. One of its goals is to allow for the sharing of information literacy research and knowledge between nations. The IAIL also sees "lifelong learning" as a basic human right, and its ultimate goal is to use information literacy as a way to allow everyone to participate in the "Information Society" as a way of fulfilling this right.[30] A number of national and international organizations are among IAIL's founding members.
UNESCO's media and information literacy (MIL) programme aims to promote critical thinking and to enhance an individual's ability to access, evaluate, use and create media in various forms.
According to the UNESCO website, its goal is "action to provide people with the skills and abilities for critical reception, assessment and use of information and media in their professional and personal lives".[36] UNESCO aims to create information literate societies by creating and maintaining educational policies for information literacy. It works with teachers around the world, training them in the importance of information literacy and providing resources for them to use in their classrooms.
UNESCO publishes studies in multiple countries, looking at how information literacy is currently taught, how it differs in different demographics, and how to raise awareness. They also publish tools and curricula for school boards and teachers to implement.[37]
In "Information Literacy as a Liberal Art," Jeremy J. Shapiro and Shelley K. Hughes (1996) advocated a more holistic approach to information literacy education, one that encouraged not merely the addition of information technology courses as an adjunct to existing curricula, but rather a radically new conceptualization of "our entire educational curriculum in terms of information."
Drawing upon Enlightenment ideals like those articulated by the philosopher Condorcet, Shapiro and Hughes argued that information literacy education is "essential to the future of democracy, if citizens are to be intelligent shapers of the information society rather than its pawns, and to humanistic culture, if information is to be part of a meaningful existence rather than a routine of production and consumption."
To this end, Shapiro and Hughes outlined a "prototype curriculum" that encompassed the concepts of computer literacy, library skills, and "a broader, critical conception of a more humanistic sort," suggesting seven important components of a holistic approach to information literacy: tool literacy, resource literacy, social-structural literacy, research literacy, publishing literacy, emerging technology literacy, and critical literacy.
Ira Shor further defines critical literacy as "[habits] of thought, reading, writing, and speaking which go beneath surface meaning, first impressions, dominant myths, official pronouncements, traditional clichés,received wisdom, and mere opinions, to understand the deep meaning, root causes, social context, ideology, and personal consequences of any action, event, object, process, organization, experience, text, subject matter, policy, mass media, or discourse."[40]
Big6 (Eisenberg and Berkowitz 1990) is a six-step process that provides support in the activities required to solve information-based problems: task definition, information seeking strategies, location and access, use of information, synthesis, and evaluation.[41][42] The Big6 skills have been used in a variety of settings to help those with a variety of needs. For example, the library of Dubai Women's College in Dubai, United Arab Emirates, an English-as-a-second-language institution, uses the Big6 model for its information literacy workshops. According to Story-Huffman (2009), using Big6 at the college "has transcended cultural and physical boundaries to provide a knowledge base to help students become information literate" (para. 8). In primary grades, Big6 has been found to work well with the variety of cognitive and language levels found in the classroom.
Differentiated instruction and the Big6 appear to be made for each other. While it seems as though all children will be on the same Big6 step at the same time during a unit of instruction, there is no reason students cannot work through steps at an individual pace. In addition, the Big6 process allows for seamless differentiation by interest.[43]
Issues to consider in the Big6 approach have been highlighted by Philip Doty:
This approach is problem-based, is designed to fit into the context of Benjamin Bloom's taxonomy of cognitive objectives, and aims toward the development of critical thinking. While the Big6 approach has a great deal of power, it also has serious weaknesses. Chief among these are the fact that users often lack well-formed statements of information needs, as well as the model's reliance on problem-solving rhetoric. Often, the need for information and its use are situated in circumstances that are not as well-defined, discrete, and monolithic as problems.[44]
Eisenberg (2004) has recognized that there are a number of challenges to effectively applying the Big6 skills, not the least of which isinformation overloadwhich can overwhelm students. Part of Eisenberg's solution is for schools to help students become discriminating users of information.
This conception, used primarily in thelibrary and information studiesfield, and rooted in the concepts oflibrary instructionand bibliographic instruction, is the ability "to recognize when information is needed and have the ability to locate, evaluate and use effectively the needed information."[45]In this view, information literacy is the basis for lifelong learning. It is also the basis for evaluating contemporary sources of information.
In the publicationInformation Power: Building Partnerships for Learning(AASL and AECT, 1998), three categories, nine standards, and twenty-nine indicators are used to describe the information literate student.
The categories and their standards are framed as follows: under information literacy, "The student who is information literate ..."; under independent learning, "The student who is an independent learner is information literate and ..."; and under social responsibility, "The student who contributes positively to the learning community and to society is information literate and ...". Each stem is completed by the individual standards enumerated in the publication.
Since information may be presented in a number of formats, the term "information" applies to more than just the printed word. Otherliteraciessuch as visual, media, computer, network, and basic literacies are implicit in information literacy.
Many of those who are in most need of information literacy are often amongst those least able to access the information they require:
Minority and at-risk students, illiterate adults, people with English as a second language, and economically disadvantaged people are among those most likely to lack access to the information that can improve their situations. Most are not even aware of the potential help that is available to them.[47]
As the Presidential Committee report points out, members of these disadvantaged groups are often unaware that libraries can provide them with the access, training and information they need. As Osborne (2004) describes, many libraries around the country are finding numerous ways to reach many of these disadvantaged groups by discovering their needs in their own environments (including prisons) and offering them specific services in the libraries themselves.
The rapidly evolving information landscape has demonstrated a need for education methods and practices to evolve and adapt accordingly. Information literacy is a key focus of educational institutions at all levels and in order to uphold this standard, institutions are promoting a commitment to lifelong learning and an ability to seek out and identify innovations that will be needed to keep pace with or outpace changes.[48]
Educational methods and practices, within our increasingly information-centric society, must facilitate and enhance a student's ability to harness the power of information. Key to harnessing the power of information is the ability to evaluate information, to ascertain among other things its relevance, authenticity and modernity. The information evaluation process is a crucial life skill and a basis for lifelong learning.[49] According to Lankshear and Knobel, what is needed in our education system is a new understanding of literacy, information literacy, and literacy teaching. Educators need to learn to account for the context of our culturally and linguistically diverse and increasingly globalized societies. We also need to take account of the burgeoning variety of text forms associated with information and multimedia technologies.[50]
Evaluation consists of several component processes including metacognition, goals, personal disposition, cognitive development, deliberation, and decision-making. This is both a difficult and complex challenge and underscores the importance of being able to think critically.
Critical thinking is an important educational outcome for students.[49] Educational institutions have experimented with several strategies to help foster critical thinking, as a means to enhance information evaluation and information literacy among students. When evaluating evidence, students should be encouraged to practice formal argumentation.[51] Debates and formal presentations should also be encouraged as ways to analyze and critically evaluate information.
Education professionals must underscore the importance of high information quality. Students must be trained to distinguish between fact and opinion. They must be encouraged to use cue words such as "I think" and "I feel" to help distinguish between factual information and opinions. Information related skills that are complex or difficult to comprehend must be broken down into smaller parts. Another approach would be to train students in familiar contexts. Education professionals should encourage students to examine "causes" of behaviors, actions and events. Research shows that people evaluate more effectively if causes are revealed, where available.[48]
Information in any format is produced to convey a message and is shared via a selected delivery method. The iterative processes of researching, creating, revising, and disseminating information vary, and the resulting product reflects these differences (Association of College and Research Libraries, p. 5).
Some call for increased critical analysis in information literacy instruction. Smith (2013) identifies this as beneficial "to individuals, particularly young people during their period of formal education. It could equip them with the skills they need to understand the political system and their place within it, and, where necessary, to challenge this" (p. 16).[52]
National content standards, state standards and information literacy skills terminology may vary, but all have common components relating to information literacy.
Information literacy skills are critical to several of the National Education Goals outlined in the Goals 2000: Educate America Act, particularly in the act's aims to increase "school readiness," "student achievement and citizenship," and "adult literacy and lifelong learning."[53] Of specific relevance are the "focus on lifelong learning, the ability to think critically, and on the use of new and existing information for problem solving," all of which are important components of information literacy.[54]
In 1998, theAmerican Association of School Librariansand theAssociation for Educational Communications and Technologypublished "Information Literacy Standards for Student Learning," which identified nine standards that librarians and teachers in K–12 schools could use to describe information literate students and define the relationship of information literacy to independent learning and social responsibility:
In 2007 AASL expanded and restructured the standards that school librarians should strive for in their teaching. These were published as "Standards for the 21st Century Learner" and address several literacies: information, technology, visual, textual, and digital. These aspects of literacy were organized within four key goals: that learners use skills, resources, and tools to "inquire, think critically, and gain knowledge"; to "draw conclusions, make informed decisions, apply knowledge to new situations, and create new knowledge"; to "share knowledge and participate ethically and productively as members of our democratic society"; and to "pursue personal and aesthetic growth."[55]
In 2000, the Association of College and Research Libraries (ACRL), a division of the American Library Association (ALA), released "Information Literacy Competency Standards for Higher Education," describing five standards and numerous performance indicators considered best practices for the implementation and assessment of postsecondary information literacy programs.[56] The five standards hold that the information literate student: determines the nature and extent of the information needed; accesses needed information effectively and efficiently; evaluates information and its sources critically and incorporates selected information into his or her knowledge base and value system; uses information effectively to accomplish a specific purpose; and understands the economic, legal, and social issues surrounding the use of information, and accesses and uses information ethically and legally.
These standards were meant to span from the simple to the more complicated, or in terms of Bloom's Taxonomy of Educational Objectives, from the "lower order" to the "higher order." Lower order skills would involve, for instance, being able to use an online catalog to find a book relevant to an information need in an academic library. Higher order skills would involve critically evaluating and synthesizing information from multiple sources into a coherent interpretation or argument.[57]
In 2016, the Association of College and Research Libraries (ACRL) rescinded the Standards and replaced them with the Framework for Information Literacy for Higher Education, which is organized around the set of core ideas, or "frames," described below.
The Framework is based on a cluster of interconnected core concepts, with flexible options for implementation, rather than on a set of standards or learning outcomes, or any prescriptive enumeration of skills. At the heart of this Framework are conceptual understandings that organize many other concepts and ideas about information, research, and scholarship into a coherent whole.[58][59]
Today, instruction methods have changed drastically from the mostly one-directional teacher-student model to a more collaborative approach in which the students themselves feel empowered. Much of this challenge is now being informed by the American Association of School Librarians, which published new standards for student learning in 2007.
Within the K–12 environment, effective curriculum development is vital to imparting information literacy skills to students. Given the already heavy load on students, efforts must be made to avoid curriculum overload.[60] Eisenberg strongly recommends adopting a collaborative approach to curriculum development among classroom teachers, librarians, technology teachers, and other educators. Staff must be encouraged to work together to analyze student curriculum needs, develop a broad instruction plan, set information literacy goals, and design specific unit and lesson plans that integrate the information skills and classroom content. These educators can also collaborate on teaching and assessment duties.
Educators are selecting various forms of resource-based learning (authentic learning, problem-based learning and work-based learning) to help students focus on the process and to help students learn from the content. Information literacy skills are necessary components of each. Within a school setting, it is very important that a student's specific needs as well as the situational context be kept in mind when selecting topics for integrated information literacy skills instruction. The primary goal should be to provide frequent opportunities for students to learn and practice information problem solving.[60] To this extent, it is also vital to facilitate repetition of information seeking actions and behavior. The importance of repetition in information literacy lesson plans cannot be overstated, since we tend to learn through repetition. A student's proficiency will improve over time if they are afforded regular opportunities to learn and to apply the skills they have learnt.
The process approach to education requires new forms of student assessment. Students demonstrate their skills, assess their own learning, and evaluate the processes by which this learning has been achieved by preparing portfolios, learning and research logs, and using rubrics.
Information literacy efforts are underway on individual, local, and regional bases.
Many states have either fully adopted AASL information literacy standards or have adapted them to suit their needs.[48] States such as Oregon (OSLIS, 2009)[61] increasingly rely on these guidelines for curriculum development and setting information literacy goals. Virginia,[62] on the other hand, chose to undertake a comprehensive review, involving all relevant stakeholders, and formulated its own guidelines and standards for information literacy. At an international level, UNESCO and the IFLA (International Federation of Library Associations and Institutions) jointly produced framework documents that laid the foundations for defining the educational role to be played by school libraries, notably the School Library Manifesto (1999).[63]
Another immensely popular approach to imparting information literacy is the Big6 set of skills.[60] Eisenberg claims that the Big6 is the most widely used model in K–12 education. This set of skills seeks to articulate the entire information seeking life cycle. The Big6 is made up of six major stages, with two sub-stages under each major stage. It defines the six steps as: task definition, information seeking strategies, location and access, use of information, synthesis, and evaluation. Such approaches seek to cover the full range of information problem-solving actions that a person would normally undertake when faced with an information problem, or with making a decision based on available resources.
Information literacy instruction in higher education can take a variety of forms: stand-alone courses or classes, online tutorials, workbooks, course-related instruction, or course-integrated instruction.
The six regional accreditation boards have added information literacy to their standards.[64] Librarians often are required to teach the concepts of information literacy during "one shot" classroom lectures. There are also credit courses offered by academic librarians to instruct college students in becoming more information literate. Additionally, information literacy instruction is usually tailored to specific disciplines. One such attempt in the area of physics was published in 2009, though many more have been published.[65]
In 2016, the Association of College and Research Libraries (ACRL, part of the American Library Association) adopted a new "Framework for Information Literacy for Higher Education,"[66] replacing the ACRL's "Information Literacy Standards for Higher Education" that had been approved in 2000. Those standards had been criticized by proponents of critical information literacy, a concept deriving from critical pedagogy, for being too prescriptive.[67] The new document is termed a "framework" because it consists of interconnected core concepts designed to be interpreted and implemented locally depending on the context and needs of the audience. The framework draws on recent research around threshold concepts, or the ideas that are gateways to broader understanding or skills in a given discipline.[68] It also draws on newer research around metaliteracy, and assumes a more holistic view of information literacy that includes creation and collaboration in addition to consumption, so is appropriate for current practices around social media and Web 2.0.[69] The six concepts, or frames, are: authority is constructed and contextual; information creation as a process; information has value; research as inquiry; scholarship as conversation; and searching as strategic exploration.
This draws from the concept of metaliteracy,[69] which offers a renewed vision of information literacy as an overarching set of abilities in which students are consumers and creators of information who can participate successfully in collaborative spaces (Association of College and Research Libraries, p. 2).
There is a growing body of scholarly research describing faculty-librarian collaboration to bring information literacy skills practice into higher education curriculum, moving beyond "one shot" lectures to an integrated model in which librarians help design assignments, create guides to useful course resources, and provide direct support to students throughout courses.[70][71][72][73][74][75]A recent literature review indicates that there is still a lack of evidence concerning the unique information literacy practices of doctoral students, especially within disciplines such as the health sciences.[76]
There have also been efforts in higher education to highlight issues of data privacy as they relate to information literacy. For example, at the University of North Florida in 2021, data privacy was added to the Library and Information Studies curriculum. The history of data privacy was included in this change, as well as topics such as "data collection, data brokers, browser fingerprinting, cookies, data security, IP ranges, SSO, http vs https, anonymization, encryption, opt out vs opt in". These are all areas in which information professionals can improve information literacy, through understanding data privacy, practicing good techniques for data privacy, and teaching patrons about the importance of and techniques for data privacy.[77] Additionally, information literacy instruction has been offered focusing on fake news.[78]
Now that information literacy has become a part of the core curriculum at many post-secondary institutions, the library community is charged to provide information literacy instruction in a variety of formats, includingonline learningand distance education. TheAssociation of College and Research Libraries(ACRL) addresses this need in its Guidelines for Distance Education Services (2000):
Library resources and services in institutions of higher education must meet the needs of all their faculty, students, and academic support staff, wherever these individuals are located, whether on a main campus, off campus, in distance education or extended campus programs—or in the absence of a campus at all, in courses taken for credit or non-credit; in continuing education programs; in courses attended in person or by means of electronic transmission; or any other means of distance education.
Within the e-learning anddistance educationworlds, providing effective information literacy programs brings together the challenges of both distance librarianship and instruction. With the prevalence of course management systems such asWebCTandBlackboard, library staff are embedding information literacy training within academic programs and within individual classes themselves.[79]
In October 2013, the National Library of Singapore (NLB) created the S.U.R.E. (Source, Understand, Research, Evaluate) campaign.[80] The objectives and strategies of the S.U.R.E. campaign were first presented at the 2014 IFLA WLIC.[81] It is summarised by NLB as simplifying information literacy into four basic building blocks, to "promote and educate the importance of Information Literacy and discernment in information searching."[82]
Public events furthering the S.U.R.E. campaign were organised in 2015. Called the "Super S.U.R.E. Show", these involved speakers engaging the public with anecdotes and other learning points, for example the ability to separate fact from opinion.[83]
Information literacy is taught by librarians at institutes of higher education. Some components of information literacy are embedded in the undergraduate curriculum at theNational University of Singapore.[84]
Many academic libraries are participating in a culture of assessment, and attempt to show the value of their information literacy interventions to their students. Librarians use a variety of techniques for this assessment, some of which aim to empower students and librarians and resist adherence to unquestioned norms.[85]Information literacy instruction has been shown to improve student outcomes in higher education.[86]Oakleaf describes the benefits and dangers of various assessment approaches:fixed-choice tests, performance assessments, andrubrics.[87]
In public libraries, information literacy is connected tolifelong learning, development of employability skills, personal health management, andinformal learning.[88] | https://en.wikipedia.org/wiki/Information_literacy |
TheACMConference on Information and Knowledge Management(CIKM, pronounced/ˈsikəm/) is an annualcomputer scienceresearch conferencededicated toinformation management(IM) andknowledge management(KM). Since the first event in 1992, the conference has evolved into one of the major forums for research ondatabase management,information retrieval, and knowledge management.[1][2]The conference is noted for itsinterdisciplinarity, as it brings together communities that otherwise often publish at separate venues. Recent editions have attracted well beyond 500 participants.[3]In addition to the main research program, the conference also features a number of workshops, tutorials, and industry presentations.[4]
For many years, the conference was held in the US. Since 2005, venues in other countries have been selected as well. | https://en.wikipedia.org/wiki/Conference_on_Information_and_Knowledge_Management |
Document classificationordocument categorizationis a problem inlibrary science,information scienceandcomputer science. The task is to assign adocumentto one or moreclassesorcategories. This may be done "manually" (or "intellectually") oralgorithmically. The intellectual classification of documents has mostly been the province of library science, while the algorithmic classification of documents is mainly in information science and computer science. The problems are overlapping, however, and there is therefore interdisciplinary research on document classification.
The documents to be classified may be texts, images, music, etc. Each kind of document possesses its special classification problems. When not otherwise specified,text classificationis implied.
Documents may be classified according to theirsubjectsor according to other attributes (such as document type, author, printing year etc.). In the rest of this article only subject classification is considered. There are two main philosophies of subject classification of documents: the content-based approach and the request-based approach.
Content-based classification is classification in which the weight given to particular subjects in a document determines the class to which the document is assigned. It is, for example, a common rule for classification in libraries that at least 20% of the content of a book should be about the class to which the book is assigned.[1] In automatic classification, it could be the number of times given words appear in a document.
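A minimal sketch of content-based assignment by term counts; the classes, keyword sets, and the 20% threshold are illustrative assumptions echoing the library rule mentioned above:

```python
def classify(text, class_keywords, threshold=0.20):
    """Assign the document to every class whose keywords account for
    at least `threshold` of its tokens (cf. the 20% rule above)."""
    tokens = text.lower().split()
    if not tokens:
        return []
    labels = []
    for label, keywords in class_keywords.items():
        hits = sum(1 for token in tokens if token in keywords)
        if hits / len(tokens) >= threshold:
            labels.append(label)
    return labels

classes = {
    "chemistry": {"acid", "molecule", "reaction"},
    "music": {"melody", "rhythm", "harmony"},
}
print(classify("the reaction of the acid with the molecule", classes))
# -> ['chemistry'] (3 of 8 tokens match)
```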
Request-oriented classification(or -indexing) is classification in which the anticipated request from users is influencing how documents are being classified. The classifier asks themself: “Under which descriptors should this entity be found?” and “think of all the possible queries and decide for which ones the entity at hand is relevant” (Soergel, 1985, p. 230[2]).
Request-oriented classification may be classification that is targeted towards a particular audience or user group. For example, a library or a database for feminist studies may classify/index documents differently when compared to a historical library. It is probably better, however, to understand request-oriented classification aspolicy-based classification: The classification is done according to some ideals and reflects the purpose of the library or database doing the classification. In this way it is not necessarily a kind of classification or indexing based on user studies. Only if empirical data about use or users are applied should request-oriented classification be regarded as a user-based approach.
Sometimes a distinction is made between assigning documents to classes ("classification") versus assigningsubjectsto documents ("subject indexing") but asFrederick Wilfrid Lancasterhas argued, this distinction is not fruitful. "These terminological distinctions,” he writes, “are quite meaningless and only serve to cause confusion” (Lancaster, 2003, p. 21[3]). The view that this distinction is purely superficial is also supported by the fact that a classification system may be transformed into athesaurusand vice versa (cf., Aitchison, 1986,[4]2004;[5]Broughton, 2008;[6]Riesthuis & Bliedung, 1991[7]). Therefore, the act of labeling a document (say by assigning a term from acontrolled vocabularyto a document) is at the same time to assign that document to the class of documents indexed by that term (all documents indexed or classified as X belong to the same class of documents). In other words, labeling a document is the same as assigning it to the class of documents indexed under that label.
Automatic document classification tasks can be divided into three sorts: supervised document classification, where some external mechanism (such as human feedback) provides information on the correct classification for documents; unsupervised document classification (also known as document clustering), where the classification must be done entirely without reference to external information; and semi-supervised document classification,[8] where parts of the documents are labeled by the external mechanism. Several software products are available under various license models.[9][10][11][12][13][14]
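A minimal sketch contrasting the supervised and unsupervised cases with scikit-learn, assuming that library is installed; the tiny corpus and its labels are invented for illustration:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.cluster import KMeans

train_texts = ["acid reacts with base", "molecule bonds break",
               "melody and harmony", "rhythm drives the song"]
train_labels = ["chemistry", "chemistry", "music", "music"]

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(train_texts)

# Supervised: human-provided labels guide the model.
model = MultinomialNB().fit(X, train_labels)
print(model.predict(vectorizer.transform(["acid base molecule"])))
# expected: ['chemistry']

# Unsupervised (document clustering): no labels, only corpus structure.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(clusters.labels_)  # e.g., [0 0 1 1]
```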
Automatic document classification techniques include:
Classification techniques have been applied to | https://en.wikipedia.org/wiki/Document_classification |
An abstracting service is a service that provides abstracts of publications, often on a subject or group of related subjects, usually on a subscription basis.[1] An indexing service is a service that assigns descriptors and other kinds of access points to documents. The term "indexing service" today mostly refers to computer programs, but may also cover services providing back-of-the-book indexes, journal indexes, and related kinds of indexes.[2] An indexing and abstracting service is a service that provides shortening or summarizing of documents and the assignment of descriptors for referencing documents.[3]
The product is often anabstracts journalor abibliographic index, which may be a subject bibliography or abibliographic database.
Guidelines for indexing and abstracting, including the evaluation of such services, are given in the literature oflibrary and information science.[4] | https://en.wikipedia.org/wiki/Indexing_and_abstracting_service |
Metadata(ormetainformation) is "datathat provides information about other data",[1]but not the content of the data itself, such as the text of a message or the image itself.[2]There are many distinct types of metadata, including:
Metadata is not strictly bound to one of these categories, as it can describe a piece of data in many other ways.
Metadata has various purposes. It can help usersfind relevant informationanddiscover resources. It can also help organize electronic resources, provide digital identification, and archive and preserve resources. Metadata allows users to access resources by "allowing resources to be found by relevant criteria, identifying resources, bringing similar resources together, distinguishing dissimilar resources, and giving location information".[8]Metadata oftelecommunicationactivities includingInternettraffic is very widely collected by various national governmental organizations. This data is used for the purposes oftraffic analysisand can be used formass surveillance.[9]
Metadata was traditionally used in thecard catalogsoflibrariesuntil the 1980s when libraries converted their catalog data to digitaldatabases.[10]In the 2000s, as data and information were increasingly stored digitally, this digital data was described usingmetadata standards.[11]
The first description of "meta data" for computer systems is purportedly noted by MIT's Center for International Studies experts David Griffel and Stuart McIntosh in 1967: "In summary then, we have statements in an object language about subject descriptions of data and token codes for the data. We also have statements in a meta language describing the data relationships and transformations, and ought/is relations between norm and data."[12]
Unique metadata standards exist for different disciplines (e.g.,museumcollections,digital audio files,websites, etc.). Describing thecontentsandcontextof data ordata filesincreases its usefulness. For example, aweb pagemay include metadata specifying what software language the page is written in (e.g., HTML), what tools were used to create it, what subjects the page is about, and where to find more information about the subject. This metadata can automatically improve the reader's experience and make it easier for users to find the web page online.[13]ACDmay include metadata providing information about the musicians, singers, and songwriters whose work appears on the disc.
In many countries, government organizations routinely store metadata about emails, telephone calls, web pages, video traffic, IP connections, and cell phone locations.[14]
Metadata means "data about data". Metadata is defined as the data providing information about one or more aspects of the data; it is used to summarize basic information about data that can make tracking and working with specific data easier.[15]Some examples include:
For example, adigital imagemay include metadata that describes the size of the image, its color depth, resolution, when it was created, the shutter speed, and other data.[16]A text document's metadata may contain information about how long the document is, who the author is, when the document was written, and a short summary of the document. Metadata within web pages can also contain descriptions of page content, as well as key words linked to the content.[17]These links are often called "Metatags", which were used as the primary factor in determining order for a web search until the late 1990s.[17]The reliance on metatags in web searches was decreased in the late 1990s because of "keyword stuffing",[17]whereby metatags were being largely misused to trick search engines into thinking some websites had more relevance in the search than they really did.[17]
Metadata can be stored and managed in a database, often called a metadata registry or metadata repository.[18] However, without context and a point of reference, it might be impossible to identify metadata just by looking at it.[19] For example: by itself, a database containing several numbers, all 13 digits long, could be the results of calculations or a list of numbers to plug into an equation; without any other context, the numbers themselves can be perceived as the data. But if given the context that this database is a log of a book collection, those 13-digit numbers may now be identified as ISBNs: information that refers to the book, but is not itself the information within the book. The term "metadata" was coined in 1968 by Philip Bagley, in his book "Extension of Programming Language Concepts", where it is clear that he uses the term in the ISO 11179 "traditional" sense, which is "structural metadata", i.e. "data about the containers of data", rather than the alternative sense "content about individual instances of data content", or metacontent, the type of data usually found in library catalogs.[20][21] Since then the fields of information management, information science, information technology, librarianship, and GIS have widely adopted the term. In these fields, the word metadata is defined as "data about data".[22] While this is the generally accepted definition, various disciplines have adopted their own more specific explanations and uses of the term.
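As a concrete instance of the ISBN example, a short sketch that checks whether a 13-digit number is a plausible ISBN-13 by validating its check digit (the sample number is a well-known published ISBN):

```python
def is_valid_isbn13(isbn):
    """ISBN-13 check: digits weighted 1,3,1,3,... must sum to 0 mod 10."""
    digits = [int(ch) for ch in isbn if ch.isdigit()]
    if len(digits) != 13:
        return False
    total = sum(d * (3 if i % 2 else 1) for i, d in enumerate(digits))
    return total % 10 == 0

print(is_valid_isbn13("978-0-306-40615-7"))  # True
print(is_valid_isbn13("978-0-306-40615-0"))  # False (bad check digit)
```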
Slatereported in 2013 that the United States government's interpretation of "metadata" could be broad, and might include message content such as the subject lines of emails.[23]
While the metadata application is manifold, covering a large variety of fields, there are specialized and well-accepted models to specify types of metadata.Bretherton& Singley (1994) distinguish between two distinct classes: structural/control metadata and guide metadata.[24]Structural metadatadescribes the structure of database objects such as tables, columns, keys and indexes.Guide metadatahelps humans find specific items and is usually expressed as a set of keywords in a natural language. According toRalph Kimball, metadata can be divided into three categories:technical metadata(or internal metadata),business metadata(or external metadata), andprocess metadata.
NISOdistinguishes three types of metadata: descriptive, structural, and administrative.[22]Descriptive metadatais typically used for discovery and identification, as information to search and locate an object, such as title, authors, subjects, keywords, and publisher.Structural metadatadescribes how the components of an object are organized. An example of structural metadata would be how pages are ordered to form chapters of a book. Finally,administrative metadatagives information to help manage the source. Administrative metadata refers to the technical information, such as file type, or when and how the file was created. Two sub-types of administrative metadata are rights management metadata and preservation metadata.Rights management metadataexplainsintellectual property rights, whilepreservation metadatacontains information to preserve and save a resource.[8]
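A minimal sketch of how NISO's three metadata types might be laid out for a single resource; every field and value here is invented for illustration:

```python
resource_metadata = {
    "descriptive": {             # discovery and identification
        "title": "An Example Report",
        "author": "A. Author",
        "keywords": ["metadata", "cataloging"],
    },
    "structural": {              # how the components are organized
        "chapter_order": ["ch01", "ch02", "ch03"],
    },
    "administrative": {          # managing the source
        "file_type": "application/pdf",
        "created": "2021-05-01",
        "rights": "CC BY 4.0",                   # rights management metadata
        "preservation": "archival master: TIFF", # preservation metadata
    },
}
```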
Statistical data repositories have their own requirements for metadata in order to describe not only the source and quality of the data[6]but also what statistical processes were used to create the data, which is of particular importance to the statistical community in order to both validate and improve the process of statistical data production.[7]
An additional type of metadata beginning to be more developed isaccessibility metadata. Accessibility metadata is not a new concept to libraries; however, advances in universal design have raised its profile.[25]: 213–214Projects like Cloud4All and GPII identified the lack of common terminologies and models to describe the needs and preferences of users and information that fits those needs as a major gap in providing universal access solutions.[25]: 210–211Those types of information are accessibility metadata.[25]: 214Schema.orghas incorporated several accessibility properties based on IMS Global Access for All Information Model Data Element Specification.[25]: 214The Wiki pageWebSchemas/Accessibilitylists several properties and their values. While the efforts to describe and standardize the varied accessibility needs of information seekers are beginning to become more robust, their adoption into established metadata schemas has not been as developed. For example, while Dublin Core (DC)'s "audience" and MARC 21's "reading level" could be used to identify resources suitable for users with dyslexia and DC's "format" could be used to identify resources available in braille, audio, or large print formats, there is more work to be done.[25]: 214
Metadata (metacontent) or, more correctly, the vocabularies used to assemble metadata (metacontent) statements, is typically structured according to a standardized concept using a well-defined metadata scheme, includingmetadata standardsandmetadata models. Tools such ascontrolled vocabularies,taxonomies,thesauri,data dictionaries, andmetadata registriescan be used to apply further standardization to the metadata. Structural metadata commonality is also of paramount importance indata modeldevelopment and indatabase design.
Metadata (metacontent) syntax refers to the rules created to structure the fields or elements of metadata (metacontent).[26]A single metadata scheme may be expressed in a number of different markup or programming languages, each of which requires a different syntax. For example, Dublin Core may be expressed in plain text,HTML,XML, andRDF.[27]
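For instance, a minimal sketch building a Dublin Core description in XML with Python's standard library; the element values are invented, while the dc namespace URI is the standard one:

```python
import xml.etree.ElementTree as ET

DC = "http://purl.org/dc/elements/1.1/"
ET.register_namespace("dc", DC)

record = ET.Element("record")
for element, value in [("title", "An Example Resource"),
                       ("creator", "A. Author"),
                       ("date", "2021"),
                       ("format", "text/html")]:
    # Clark notation {namespace}tag attaches each element to the dc namespace.
    ET.SubElement(record, f"{{{DC}}}{element}").text = value

print(ET.tostring(record, encoding="unicode"))
# <record xmlns:dc="http://purl.org/dc/elements/1.1/">
#   <dc:title>An Example Resource</dc:title> ...
```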
A common example of (guide) metacontent is the bibliographic classification, the subject, theDewey Decimal class number. There is always an implied statement in any "classification" of some object. To classify an object as, for example, Dewey class number 514 (Topology) (i.e. books having the number 514 on their spine) the implied statement is: "<book><subject heading><514>". This is a subject-predicate-object triple, or more importantly, a class-attribute-value triple. The first 2 elements of the triple (class, attribute) are pieces of some structural metadata having a defined semantic. The third element is a value, preferably from some controlled vocabulary, some reference (master) data. The combination of the metadata and master data elements results in a statement which is a metacontent statement i.e. "metacontent = metadata + master data". All of these elements can be thought of as "vocabulary". Both metadata and master data are vocabularies that can be assembled into metacontent statements. There are many sources of these vocabularies, both meta and master data: UML, EDIFACT, XSD, Dewey/UDC/LoC, SKOS, ISO-25964, Pantone, Linnaean Binomial Nomenclature, etc. Using controlled vocabularies for the components of metacontent statements, whether for indexing or finding, is endorsed byISO 25964: "If both the indexer and the searcher are guided to choose the same term for the same concept, then relevant documents will be retrieved."[28]This is particularly relevant when considering search engines of the internet, such as Google. The process indexes pages and then matches text strings using its complex algorithm; there is no intelligence or "inferencing" occurring, just the illusion thereof.
Metadata schemata can be hierarchical in nature where relationships exist between metadata elements and elements are nested so that parent-child relationships exist between the elements.
An example of a hierarchical metadata schema is theIEEE LOMschema, in which metadata elements may belong to a parent metadata element.
Metadata schemata can also be one-dimensional, or linear, where each element is completely discrete from other elements and classified according to one dimension only.
An example of a linear metadata schema is theDublin Coreschema, which is one-dimensional.
Metadata schemata are often two-dimensional, or planar, where each element is completely discrete from other elements but classified according to two orthogonal dimensions.[29]
The degree to which the data or metadata is structured is referred to as"granularity". "Granularity" refers to how much detail is provided. Metadata with a high granularity allows for deeper, more detailed, and more structured information and enables a greater level of technical manipulation. A lower level of granularity means that metadata can be created for considerably lower costs but will not provide as detailed information. The major impact of granularity is not only on creation and capture, but moreover on maintenance costs. As soon as the metadata structures become outdated, so too is the access to the referred data. Hence granularity must take into account the effort to create the metadata as well as the effort to maintain it.
In all cases where the metadata schemata exceed the planar depiction, some type of hypermapping is required to enable display and view of metadata according to chosen aspect and to serve special views. Hypermapping frequently applies to layering of geographical and geological information overlays.[30]
International standards apply to metadata. Much work is being accomplished in the national and international standards communities, especiallyANSI(American National Standards Institute) andISO(International Organization for Standardization) to reach a consensus on standardizing metadata and registries. The core metadata registry standard isISO/IEC11179 Metadata Registries (MDR), the framework for the standard is described in ISO/IEC 11179-1:2004.[31]A new edition of Part 1 is in its final stage for publication in 2015 or early 2016. It has been revised to align with the current edition of Part 3, ISO/IEC 11179-3:2013[32]which extends the MDR to support the registration of Concept Systems.
This standard (see ISO/IEC 11179) specifies a schema for recording both the meaning and technical structure of the data for unambiguous usage by humans and computers. The ISO/IEC 11179 standard refers to metadata as information objects about data, or "data about data". In ISO/IEC 11179 Part-3, the information objects are data about Data Elements, Value Domains, and other reusable semantic and representational information objects that describe the meaning and technical details of a data item. This standard also prescribes the details for a metadata registry, and for registering and administering the information objects within a Metadata Registry. ISO/IEC 11179 Part 3 also has provisions for describing compound structures that are derivations of other data elements, for example through calculations, collections of one or more data elements, or other forms of derived data. While this standard describes itself originally as a "data element" registry, its purpose is to support describing and registering metadata content independently of any particular application, lending the descriptions to being discovered and reused by humans or computers in developing new applications, databases, or for analysis of data collected in accordance with the registered metadata content. This standard has become the general basis for other kinds of metadata registries, reusing and extending the registration and administration portion of the standard.
The Geospatial community has a tradition of specializedgeospatial metadatastandards, particularly building on traditions of map- and image-libraries and catalogs. Formal metadata is usually essential for geospatial data, as common text-processing approaches are not applicable.
The Dublin Core metadata terms are a set of vocabulary terms that can be used to describe resources for the purposes of discovery. The original set of 15 classic[33] metadata terms, known as the Dublin Core Metadata Element Set,[34] is endorsed in the following standards documents: IETF RFC 5013, ISO 15836, and NISO Z39.85.[35][36][37]
The W3C Data Catalog Vocabulary (DCAT)[38]is an RDF vocabulary that supplements Dublin Core with classes for Dataset, Data Service, Catalog, and Catalog Record. DCAT also uses elements from FOAF, PROV-O, and OWL-Time. DCAT provides an RDF model to support the typical structure of a catalog that contains records, each describing a dataset or service.
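A minimal sketch of a DCAT-style catalog entry serialized as RDF with the rdflib library (which bundles the DCAT namespace), assuming it is installed; the dataset IRI and literal values are invented:

```python
from rdflib import Graph, Literal, URIRef
from rdflib.namespace import DCAT, DCTERMS, RDF

g = Graph()
dataset = URIRef("http://example.org/dataset/42")

# Type the resource as a DCAT Dataset and describe it with Dublin Core terms.
g.add((dataset, RDF.type, DCAT.Dataset))
g.add((dataset, DCTERMS.title, Literal("Example Survey Data")))
g.add((dataset, DCTERMS.publisher, Literal("Example Agency")))
g.add((dataset, DCAT.keyword, Literal("survey")))

print(g.serialize(format="turtle"))
```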
Although not a standard,Microformat(also mentioned in the sectionmetadata on the internetbelow) is a web-based approach to semantic markup which seeks to re-use existing HTML/XHTML tags to convey metadata. Microformat follows XHTML and HTML standards but is not a standard in itself. One advocate of microformats,Tantek Çelik, characterized a problem with alternative approaches:
Here's a new language we want you to learn, and now you need to output these additional files on your server. It's a hassle. (Microformats) lower the barrier to entry.[39]
Most common types ofcomputer filescan embed metadata, including documents, (e.g.Microsoft Officefiles,OpenDocumentfiles,PDF) images, (e.g.JPEG,PNG) Video files, (e.g.AVI,MP4) and audio files. (e.g.WAV,MP3)
Metadata may be added to files by users, but some metadata is often automatically added to files by authoring applications or by devices used to produce the files, without user intervention.
While metadata in files are useful for finding them, they can be aprivacyhazard when the files are shared. Usingmetadata removal toolsto clean files before sharing them can mitigate this risk.
Metadata may be written into adigital photofile that will identify who owns it, copyright and contact information, what brand or model of camera created the file, along with exposure information (shutter speed, f-stop, etc.) and descriptive information, such as keywords about the photo, making the file or image searchable on a computer and/or the Internet. Some metadata is created by the camera such as, color space, color channels, exposure time, and aperture (EXIF), while some is input by the photographer and/or software after downloading to a computer.[40]Most digital cameras write metadata about the model number, shutter speed, etc., and some enable you to edit it;[41]this functionality has been available on most Nikon DSLRs since theNikon D3, on most new Canon cameras since theCanon EOS 7D, and on most Pentax DSLRs since the Pentax K-3. Metadata can be used to make organizing in post-production easier with the use of key-wording. Filters can be used to analyze a specific set of photographs and create selections on criteria like rating or capture time. On devices with geolocation capabilities likeGPS(smartphones in particular), the location the photo was taken from may also be included.
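A minimal sketch of reading, and then stripping, photo metadata with the Pillow library, assuming it is installed; the file names are placeholders:

```python
from PIL import Image
from PIL.ExifTags import TAGS

img = Image.open("photo.jpg")

# Read EXIF metadata, mapping numeric tag IDs to readable names.
for tag_id, value in img.getexif().items():
    print(TAGS.get(tag_id, tag_id), value)

# Strip metadata before sharing: copy only the pixels into a fresh image.
clean = Image.new(img.mode, img.size)
clean.putdata(list(img.getdata()))
clean.save("photo_clean.jpg")
```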
Photographic Metadata Standards are governed by organizations that develop the following standards. They include, but are not limited to:
Metadata is particularly useful in video, where information about its contents (such as transcripts of conversations and text descriptions of its scenes) is not directly understandable by a computer, but where an efficient search of the content is desirable. This is particularly useful in video applications such as Automatic Number Plate Recognition and Vehicle Recognition Identification software, wherein license plate data is saved and used to create reports and alerts.[43] There are two sources from which video metadata is derived: (1) operationally gathered metadata, that is, information about the content produced, such as the type of equipment, software, date, and location; and (2) human-authored metadata, created to improve search engine visibility, discoverability, and audience engagement, and to provide advertising opportunities to video publishers.[44] Avid's MetaSync and Adobe's Bridge are examples of professional video editing software with access to metadata.[45]
Information on the times, origins and destinations of phone calls, electronic messages, instant messages, and other modes of telecommunication, as opposed to message content, is another form of metadata. Bulk collection of this call detail record metadata by intelligence agencies has proven controversial after disclosures by Edward Snowden that certain intelligence agencies such as the NSA had been (and perhaps still are) keeping online metadata on millions of internet users for up to a year, regardless of whether or not they [ever] were persons of interest to the agency.
Geospatial metadata relates to Geographic Information Systems (GIS) files, maps, images, and other data that is location-based. Metadata is used in GIS to document the characteristics and attributes of geographic data, such as database files and data that is developed within a GIS. It includes details like who developed the data, when it was collected, how it was processed, and what formats it is available in, thereby providing the context for the data to be used effectively.[46]
Metadata can be created either by automated information processing or by manual work. Elementary metadata captured by computers can include information about when an object was created, who created it, when it was last updated, file size, and file extension. In this context, an object refers to any discrete item under management, such as a file, document, image, or dataset.
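As an illustration, the following Python sketch (standard library only) reads the elementary metadata that a file system records automatically for a file; the path is a placeholder:

```python
# A minimal sketch (Python standard library only) of the elementary
# metadata a file system records automatically; "report.pdf" is a
# placeholder path.
import datetime
from pathlib import Path

p = Path("report.pdf")
st = p.stat()  # this metadata lives in the file system, not the file content
print("size (bytes):", st.st_size)
print("extension:  ", p.suffix)
print("modified:   ", datetime.datetime.fromtimestamp(st.st_mtime))
```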
A metadata engine collects, stores and analyzes information about data and metadata in use within a domain.[47]
Data virtualization emerged in the 2000s as a new software technology to complete the virtualization "stack" in the enterprise. Metadata is used in data virtualization servers, which are enterprise infrastructure components alongside database and application servers. Metadata in these servers is saved in a persistent repository and describes business objects in various enterprise systems and applications. Structural metadata commonality is also important to support data virtualization.
Standardization and harmonization work has brought advantages to industry efforts to build metadata systems in the statistical community.[48][49] Several metadata guidelines and standards such as the European Statistics Code of Practice[50] and ISO 17369:2013 (Statistical Data and Metadata Exchange, or SDMX)[48] provide key principles for how businesses, government bodies, and other entities should manage statistical data and metadata. Entities such as Eurostat,[51] the European System of Central Banks,[51] and the U.S. Environmental Protection Agency[52] have implemented these and other such standards and guidelines with the goal of improving "efficiency when managing statistical business processes".[51]
Metadata has been used in various ways as a means of cataloging items in libraries in both digital and analog formats. Such data helps classify, aggregate, identify, and locate a particular book, DVD, magazine, or any object a library might hold in its collection.[53] Until the 1980s, many library catalogs used 3x5 inch cards in file drawers to display a book's title, author, subject matter, and an abbreviated alpha-numeric string (call number) which indicated the physical location of the book within the library's shelves. The Dewey Decimal System employed by libraries for the classification of library materials by subject is an early example of metadata usage. Each card in the early paper catalogs carried information about the item it described: title, author, subject, and a call number indicating where to find the item.[54] Beginning in the 1980s and 1990s, many libraries replaced these paper file cards with computer databases, which make it much easier and faster for users to do keyword searches. Another form of older metadata collection is the US Census Bureau's use of what is known as the "Long Form", which asks questions that are used to create demographic data to find patterns of distribution.[55] Libraries employ metadata in library catalogues, most commonly as part of an Integrated Library Management System. Metadata is obtained by cataloging resources such as books, periodicals, DVDs, web pages or digital images. This data is stored in the integrated library management system (ILMS) using the MARC metadata standard. The purpose is to direct patrons to the physical or electronic location of items or areas they seek, as well as to provide a description of the item(s) in question.
More recent and specialized instances of library metadata include the establishment of digital libraries, including e-print repositories and digital image libraries. While often based on library principles, the focus on non-librarian use, especially in providing metadata, means they do not follow traditional or common cataloging approaches. Given the custom nature of included materials, metadata fields are often specially created, e.g. taxonomic classification fields, location fields, keywords, or copyright statements. Standard file information such as file size and format is usually automatically included.[56] Library operation has for decades been a key topic in efforts toward international standardization. Standards for metadata in digital libraries include Dublin Core, METS, MODS, DDI, DOI, URN, the PREMIS schema, EML, and OAI-PMH. Leading libraries around the world publish hints about their metadata standards strategies.[57][58] The use and creation of metadata in library and information science also include scientific publications:
Metadata for scientific publications is often created by journal publishers and citation databases such as PubMed and Web of Science. The data contained within manuscripts or accompanying them as supplementary material is less often subject to metadata creation,[59][60] though they may be submitted to e.g. biomedical databases after publication. The original authors and database curators then become responsible for metadata creation, with the assistance of automated processes. Comprehensive metadata for all experimental data is the foundation of the FAIR Guiding Principles, or the standards for ensuring research data are findable, accessible, interoperable, and reusable.[61]
Such metadata can then be utilized, complemented, and made accessible in useful ways. OpenAlex is a free online index of over 200 million scientific documents that integrates and provides metadata such as sources, citations, author information, scientific fields, and research topics. Its API and open source website can be used for metascience, scientometrics, and novel tools that query this semantic web of papers.[62][63][64] Another project under development, Scholia, uses the metadata of scientific publications for various visualizations and aggregation features, such as providing a simple user interface summarizing literature about a specific feature of the SARS-CoV-2 virus using Wikidata's "main subject" property.[65]
For research labor, transparent metadata about authors' contributions to works has been proposed – e.g. the role played in the production of the paper, the level of contribution, and the responsibilities.[66][67]
Moreover, various metadata about scientific outputs can be created or complemented – for instance, some organizations attempt to track and link citations of papers as 'Supporting', 'Mentioning' or 'Contrasting' the study.[68] Other examples include developments of alternative metrics[69] – which, beyond providing help for assessment and findability, also aggregate many of the public discussions about a scientific paper on social media such as Reddit, citations on Wikipedia, and reports about the study in the news media[70] – and a call for showing whether or not the original findings are confirmed or could be reproduced.[71][72]
Metadata in a museum context is the information that trained cultural documentation specialists, such as archivists, librarians, museum registrars and curators, create to index, structure, describe, identify, or otherwise specify works of art, architecture, cultural objects and their images.[73][74][75] Descriptive metadata is most commonly used in museum contexts for object identification and resource recovery purposes.[74]
Metadata is developed and applied within collecting institutions and museums in order to:
Many museums and cultural heritage centers recognize that given the diversity of artworks and cultural objects, no single model or standard suffices to describe and catalog cultural works.[73][74][75] For example, a sculpted Indigenous artifact could be classified as an artwork, an archaeological artifact, or an Indigenous heritage item. The early stages of standardization in archiving, description and cataloging within the museum community began in the late 1990s with the development of standards such as Categories for the Description of Works of Art (CDWA), Spectrum, the CIDOC Conceptual Reference Model (CRM), Cataloging Cultural Objects (CCO) and the CDWA Lite XML schema.[74] These standards use HTML and XML markup languages for machine processing, publication and implementation.[74] The Anglo-American Cataloguing Rules (AACR), originally developed for characterizing books, have also been applied to cultural objects, works of art and architecture.[75] Standards such as the CCO are integrated within a museum's Collections Management System (CMS), a database through which museums are able to manage their collections, acquisitions, loans and conservation.[75] Scholars and professionals in the field note that the "quickly evolving landscape of standards and technologies" creates challenges for cultural documentarians, specifically non-technically trained professionals.[76][page needed] Most collecting institutions and museums use a relational database to categorize cultural works and their images.[75] Relational databases and metadata work to document and describe the complex relationships amongst cultural objects and multi-faceted works of art, as well as between objects and places, people, and artistic movements.[74][75] Relational database structures are also beneficial within collecting institutions and museums because they allow archivists to make a clear distinction between cultural objects and their images; an unclear distinction could lead to confusing and inaccurate searches.[75]
An object's materiality, function, and purpose, as well as its size (e.g., measurements such as height, width, and weight), storage requirements (e.g., a climate-controlled environment), and the focus of the museum and collection, influence the descriptive depth of the data attributed to the object by cultural documentarians.[75] The established institutional cataloging practices, the goals and expertise of cultural documentarians, and database structure also influence the information ascribed to cultural objects and the ways in which cultural objects are categorized.[73][75] Additionally, museums often employ standardized commercial collection management software that prescribes and limits the ways in which archivists can describe artworks and cultural objects.[76] Collecting institutions and museums also use controlled vocabularies to describe cultural objects and artworks in their collections.[74][75] The Getty Vocabularies and the Library of Congress controlled vocabularies are reputable within the museum community and are recommended by CCO standards.[75] Museums are encouraged to use controlled vocabularies that are contextual and relevant to their collections and that enhance the functionality of their digital information systems.[74][75] Controlled vocabularies are beneficial within databases because they provide a high level of consistency, improving resource retrieval.[74][75] Metadata structures, including controlled vocabularies, reflect the ontologies of the systems from which they were created. Often the processes through which cultural objects are described and categorized through metadata in museums do not reflect the perspectives of the maker communities.[73][77]
Metadata has been instrumental in the creation of digital information systems and archives within museums and has made it easier for museums to publish digital content online. This has enabled audiences who might not have had access to cultural objects due to geographic or economic barriers to have access to them.[74] In the 2000s, as more museums have adopted archival standards and created intricate databases, discussions about Linked Data between museum databases have come up in the museum, archival, and library science communities.[76] Collection Management Systems (CMS) and Digital Asset Management tools can be local or shared systems.[75] Digital Humanities scholars note many benefits of interoperability between museum databases and collections, while also acknowledging the difficulties of achieving such interoperability.[76]
Problems involving metadata in litigation in the United States are becoming widespread.[when?] Courts have looked at various questions involving metadata, including the discoverability of metadata by parties. The Federal Rules of Civil Procedure have specific rules for discovery of electronically stored information, and subsequent case law applying those rules has elucidated the litigant's duty to produce metadata when litigating in federal court.[78] In October 2009, the Arizona Supreme Court ruled that metadata records are public records.[79] Document metadata have proven particularly important in legal environments in which litigation has requested metadata that can include sensitive information detrimental to a certain party in court. Using metadata removal tools to "clean" or redact documents can mitigate the risks of unwittingly sending sensitive data. This process partially (see data remanence) protects law firms from the potentially damaging leaking of sensitive data through electronic discovery.
Opinion polls have shown that 45% of Americans are "not at all confident" in the ability of social media sites to ensure their personal data is secure and 40% say that social media sites should not be able to store any information on individuals. 76% of Americans say that they are not confident that the information advertising agencies collect on them is secure and 50% say that online advertising agencies should not be allowed to record any of their information at all.[80]
In Australia, the need to strengthen national security has resulted in the introduction of a new metadata storage law.[81] Under this law, both security and policing agencies are allowed to access up to two years of an individual's metadata, with the aim of making it easier to stop terrorist attacks and serious crimes from happening.
Legislative metadata has been the subject of some discussion in law.gov forums such as workshops held by the Legal Information Institute at the Cornell Law School on 22 and 23 March 2010. The documentation for these forums is titled "Suggested metadata practices for legislation and regulations".[82]
A handful of key points have been outlined by these discussions, section headings of which are listed as follows:
Australian medical research pioneered the definition of metadata for applications in health care. That approach offers the first recognized attempt to adhere to international standards in medical sciences instead of defining a proprietary standard under the World Health Organization (WHO) umbrella. However, the medical community has not yet approved the need to follow metadata standards, despite research that supported these standards.[83]
Research studies in the fields of biomedicine and molecular biology frequently yield large quantities of data, including results of genome or meta-genome sequencing, proteomics data, and even notes or plans created during the course of research itself.[84] Each data type involves its own variety of metadata and the processes necessary to produce these metadata. General metadata standards, such as ISA-Tab,[85] allow researchers to create and exchange experimental metadata in consistent formats. Specific experimental approaches frequently have their own metadata standards and systems: metadata standards for mass spectrometry include mzML[86] and SPLASH,[87] while XML-based standards such as PDBML[88] and SRA XML[89] serve as standards for macromolecular structure and sequencing data, respectively.
The products of biomedical research are generally realized as peer-reviewed manuscripts, and these publications are yet another source of data (see § Science).
A data warehouse (DW) is a repository of an organization's electronically stored data. Data warehouses are designed to manage and store data, while business intelligence (BI) systems differ in that they are designed to use data to create reports and analyze information, providing strategic guidance to management.[90] Metadata is an important tool in how data is stored in data warehouses. The purpose of a data warehouse is to house standardized, structured, consistent, integrated, correct, "cleaned" and timely data, extracted from various operational systems in an organization. The extracted data are integrated in the data warehouse environment to provide an enterprise-wide perspective, and are structured in a way that serves the reporting and analytic requirements. The design of structural metadata commonality using a data modeling method such as entity-relationship model diagramming is important in any data warehouse development effort; such models detail the metadata on each piece of data in the data warehouse. An essential component of a data warehouse/business intelligence system is the metadata and the tools to manage and retrieve it. Ralph Kimball[91] describes metadata as the DNA of the data warehouse, as metadata defines the elements of the data warehouse and how they work together.
Kimball et al.[92] refer to three main categories of metadata: technical metadata, business metadata and process metadata. Technical metadata is primarily definitional, while business metadata and process metadata are primarily descriptive. The categories sometimes overlap.
The HTML format used to define web pages allows for the inclusion of a variety of types of metadata, from basic descriptive text, dates and keywords to further advanced metadata schemes such as the Dublin Core, e-GMS, and AGLS[93] standards. Pages and files can also be geotagged with coordinates, and categorized or tagged, including collaboratively, such as with folksonomies.
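As an illustration, the following Python sketch (standard library only) extracts meta elements, including Dublin Core-style fields, from an HTML page; the sample markup is hypothetical:

```python
# A minimal sketch (Python standard library only) of extracting <meta>
# metadata from an HTML page; the markup in html_page is hypothetical.
from html.parser import HTMLParser

class MetaExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.meta = {}

    def handle_starttag(self, tag, attrs):
        if tag == "meta":
            a = dict(attrs)
            key = a.get("name") or a.get("property")
            if key:
                self.meta[key] = a.get("content")

html_page = ('<meta name="DC.title" content="Example">'
             '<meta name="keywords" content="metadata">')
parser = MetaExtractor()
parser.feed(html_page)
print(parser.meta)  # {'DC.title': 'Example', 'keywords': 'metadata'}
```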
When media has identifiers set, or when such identifiers can be generated, information such as file tags and descriptions can be pulled or scraped from the Internet – for example about movies.[94] Various online databases are aggregated and provide metadata for various data. The collaboratively built Wikidata has identifiers not just for media but also for abstract concepts, various objects, and other entities that can be looked up by humans and machines to retrieve useful information and to link knowledge in other knowledge bases and databases.[65]
Metadata may be included in the page's header or in a separate file. Microformats allow metadata to be added to on-page data in a way that regular web users do not see, but computers, web crawlers and search engines can readily access. Many search engines are cautious about using metadata in their ranking algorithms because of exploitation of metadata and the practice of search engine optimization (SEO) to improve rankings. See the Meta element article for further discussion. This cautious attitude may be justified, as people, according to Doctorow,[95] do not exercise care and diligence when creating their own metadata, and metadata is part of a competitive environment in which it is used to promote the metadata creators' own purposes. Studies show that search engines respond to web pages with metadata implementations,[96] and Google has an announcement on its site showing the meta tags that its search engine understands.[97] Enterprise search startup Swiftype recognizes metadata as a relevance signal that webmasters can implement for their website-specific search engine, even releasing their own extension, known as Meta Tags 2.[98]
In the broadcast industry, metadata is linked to audio and video broadcast media to:
This metadata can be linked to the video media thanks to video servers. Most major broadcast sporting events like the FIFA World Cup or the Olympic Games use this metadata to distribute their video content to TV stations through keywords. It is often the host broadcaster[99] who is in charge of organizing metadata through its International Broadcast Centre and its video servers. This metadata is recorded with the images and entered by metadata operators (loggers), who associate, live, the metadata available in metadata grids through software (such as Multicam (LSM) or IPDirector, used during the FIFA World Cup or Olympic Games).[100][101]
Metadata that describes geographic objects in electronic storage or format (such as datasets, maps, features, or documents with a geospatial component) has a history dating back to at least 1994. This class of metadata is described more fully in the geospatial metadata article.
Ecological and environmental metadata is intended to document the "who, what, when, where, why, and how" of data collection for a particular study. This typically means which organization or institution collected the data, what type of data, which date(s) the data was collected, the rationale for the data collection, and the methodology used for the data collection. Metadata should be generated in a format commonly used by the most relevant science community, such as Darwin Core, Ecological Metadata Language,[102] or Dublin Core. Metadata editing tools exist to facilitate metadata generation (e.g. Metavist,[103] Mercury, Morpho[104]). Metadata should describe the provenance of the data (where they originated, as well as any transformations the data underwent) and how to give credit for (cite) the data products.
When first released in 1982, Compact Discs only contained a Table Of Contents (TOC) with the number of tracks on the disc and their length in samples.[105][106] Fourteen years later in 1996, a revision of the CD Red Book standard added CD-Text to carry additional metadata,[107] but CD-Text was not widely adopted. Shortly thereafter, it became common for personal computers to retrieve metadata from external sources (e.g. CDDB, Gracenote) based on the TOC.
Digital audio files superseded physical music formats such as cassette tapes and CDs in the 2000s. Digital audio files can be labeled with more information than can be contained in just the file name. That descriptive information is called the audio tag, or audio metadata in general. Computer programs specializing in adding or modifying this information are called tag editors. Metadata can be used to name, describe, catalog, and indicate ownership or copyright for a digital audio file, and its presence makes it much easier to locate a specific audio file within a group, typically through use of a search engine that accesses the metadata. As different digital audio formats were developed, attempts were made to standardize a specific location within the digital files where this information could be stored.
As a result, almost all digital audio formats, including mp3, broadcast wav, and AIFF files, have similar standardized locations that can be populated with metadata. The metadata for compressed and uncompressed digital music is often encoded in the ID3 tag. Common tagging libraries such as TagLib support the MP3, Ogg Vorbis, FLAC, MPC, Speex, WavPack, TrueAudio, WAV, AIFF, MP4, and ASF file formats.
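As an illustration, the following Python sketch, assuming the mutagen tagging library and an existing MP3 file that already carries an ID3 header, reads and writes ID3 metadata; the file name is a placeholder:

```python
# A minimal sketch (assuming the mutagen tagging library) of reading
# and writing ID3 metadata in an MP3 file; "song.mp3" is a placeholder.
from mutagen.easyid3 import EasyID3

tags = EasyID3("song.mp3")
tags["title"] = "Example Title"    # descriptive metadata
tags["artist"] = "Example Artist"  # ownership/attribution metadata
tags.save()
print(tags.get("title"), tags.get("artist"))
```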
With the availability of cloud applications, which include those to add metadata to content, metadata is increasingly available over the Internet.
Metadata can be stored either internally,[108] in the same file or structure as the data (this is also called embedded metadata), or externally, in a separate file or field from the described data. A data repository typically stores the metadata detached from the data but can be designed to support embedded metadata approaches. Each option has advantages and disadvantages:
Metadata can be stored in either human-readable or binary form. Storing metadata in a human-readable format such as XML can be useful because users can understand and edit it without specialized tools.[109] However, text-based formats are rarely optimized for storage capacity, communication time, or processing speed. A binary metadata format enables efficiency in all these respects, but requires special software to convert the binary information into human-readable content.
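As an illustration, the following Python sketch (standard library only) contrasts a human-readable XML encoding with a binary encoding of the same two metadata fields; the field layout is hypothetical:

```python
# A minimal sketch contrasting human-readable (XML) and binary
# encodings of the same metadata record; the field layout is
# hypothetical.
import struct
import xml.etree.ElementTree as ET

# Human-readable: editable with any text editor.
record = ET.Element("metadata")
ET.SubElement(record, "width").text = "1920"
ET.SubElement(record, "height").text = "1080"
xml_bytes = ET.tostring(record)

# Binary: compact and fast to parse, but opaque without the layout spec.
bin_bytes = struct.pack("<HH", 1920, 1080)  # two unsigned 16-bit integers

print(len(xml_bytes), "bytes as XML vs", len(bin_bytes), "bytes as binary")
```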
Each relational database system has its own mechanisms for storing metadata. Examples of relational-database metadata include:
In database terminology, this set of metadata is referred to as the catalog. The SQL standard specifies a uniform means to access the catalog, called the information schema, but not all databases implement it, even if they implement other aspects of the SQL standard. For an example of database-specific metadata access methods, see Oracle metadata. Programmatic access to metadata is possible using APIs such as JDBC or SchemaCrawler.[110]
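As an illustration, the following Python sketch (standard library only) queries a database's own catalog. SQLite is an example of a system that does not implement the information schema, exposing its catalog instead through the sqlite_master table:

```python
# A minimal sketch (Python standard library only) of reading a
# database's own metadata catalog via SQLite's sqlite_master table.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE books (title TEXT, author TEXT)")
for name, sql in con.execute(
        "SELECT name, sql FROM sqlite_master WHERE type = 'table'"):
    print(name)  # -> books
    print(sql)   # -> the CREATE TABLE statement (structural metadata)
```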
One of the first satirical examinations of the concept of metadata as we understand it today is American science fiction author Hal Draper's short story "MS Fnd in a Lbry" (1961). Here, the knowledge of all mankind is condensed into an object the size of a desk drawer; however, the magnitude of the metadata (e.g. a catalog of catalogs of..., as well as indexes and histories) eventually leads to dire yet humorous consequences for the human race. The story, a cautionary tale, prefigures the modern consequences of allowing metadata to become more important than the real data it is concerned with, and the risks inherent in that eventuality.
Overcategorization, overcategorisation or category clutter is the process of assigning too many categories, classes or index terms to a given document. It is related to the Library and Information Science (LIS) concepts of document classification and subject indexing.
In LIS, the ideal number of terms that should be assigned to classify an item is measured by the variables precision and recall. Assigning the few category labels that are most closely related to the content of the item being classified will result in searches that have high precision, i.e., where a high proportion of the results are closely related to the query. Assigning more category labels to each item will reduce the precision of each search, but increase the recall, retrieving more relevant results. Related LIS concepts include exhaustivity of indexing and information overload.
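As an illustration, the following Python sketch computes precision and recall for two hypothetical indexing policies, showing the trade-off described above:

```python
# A minimal sketch of the precision/recall trade-off; the document
# identifiers are hypothetical.
def precision_recall(retrieved, relevant):
    hits = len(set(retrieved) & set(relevant))
    return hits / len(retrieved), hits / len(relevant)

relevant = {"d1", "d2", "d3", "d4"}
few_labels = ["d1", "d2"]                             # conservative indexing
many_labels = ["d1", "d2", "d3", "d9", "d10", "d11"]  # overcategorized

print(precision_recall(few_labels, relevant))   # (1.0, 0.5): precise, low recall
print(precision_recall(many_labels, relevant))  # (0.5, 0.75): higher recall, lower precision
```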
If too many categories are assigned to a given document, the implications for users depend on how informative the links are. If the user is able to distinguish between useful and not useful links, the damage is limited: the user only wastes time selecting links. In many cases, however, the user cannot judge whether or not a given link will turn out to be fruitful. In that case he or she has to follow the link and read or skim another document. In the worst case, of course, even after reading the new document the user is unable to decide whether or not it might be useful without investigating its subject matter thoroughly.
Overcategorization also has another unpleasant implication: it makes the system (for example in Wikipedia) difficult to maintain in a consistent way. If the system is inconsistent, then when the user considers the links in a given category, he or she will not find all documents relevant to that category.
Basically, the problem of overcategorization should be understood from the perspective of relevance and the traditional measures of recall and precision. If too few relevant categories are assigned to a document, recall may decrease. If too many non-relevant categories are assigned, precision becomes lower. The hard job is to say which categories are fruitful or relevant for future use of the document.
Thomas of Ireland (fl. 1295 – before 1338), also known as Thomas Hibernicus, Thomas Palmeranus, or Thomas Palmerstonus, was an Irish anthologist and indexer.[1]
Thomas was a Fellow of the College of Sorbonne and a Master of Arts by 1295, and was referred to as a former fellow in the first manuscripts of his Manipulus in 1306. He is believed to have died before 1338.
Lampen, a Franciscan, argues that Thomas Palmeranus, Thomas Hibernicus and Thomas Palmerstonus are the same person.
Thomas was the author of three short works on theology and biblical exegesis, and the compiler of the Manipulus florum ('A Handful of Flowers'). The latter, a Latin florilegium, has been described as a "collection of some 6,000 extracts from patristic and a few classical authors".[2] Thomas compiled this collection from books in the library of the Sorbonne, "and at his death he bequeathed his books and sixteen pounds Parisian to the college".[3]
The Manipulus florum survives in over one hundred and ninety manuscripts, and was first printed in 1483. It was printed twenty-six times in the 16th century and eleven times in the 17th. As late as the 19th century, editions were published in Vienna and Turin.
Although Thomas was apparently a member of the secular clergy, his anthology was highly successful because it was "well suited to the needs of the new mendicant preaching orders ... [to] ... locate quotations ... relevant to any subject they might wish to touch on in their sermons."[4] Indeed, Boyer has demonstrated that very soon after the Manipulus was completed a French Dominican used it to compose a series of surviving sermons.[5] However, Nighman has argued that, although it was surely used by preachers, Thomas did not actually intend his anthology as a reference tool for sermon composition, as argued by the Rouses, but rather as a learning aid for university students, especially those intending on a clerical career involving pastoral care.[6] Nighman has also demonstrated its reception in several non-sermon texts, including Walter Bower's Scotichronicon.[7]
Thomas was also among the earliest pioneers of medieval information technology, which included alphabetical subject indices and cross-references. "In his selection, and in the various indexing techniques he invented or improved on, he revealed true originality and inventiveness."[4] Those finding tools are preserved, and electronically enhanced, in Nighman's online critical edition of the Manipulus florum.
Thomas was also the author of three other works:
Document retrieval is defined as the matching of some stated user query against a set of free-text records. These records could be any type of mainly unstructured text, such as newspaper articles, real estate records or paragraphs in a manual. User queries can range from multi-sentence full descriptions of an information need to a few words.
Document retrieval is sometimes referred to as, or as a branch of, text retrieval. Text retrieval is a branch of information retrieval where the information is stored primarily in the form of text. Text databases became decentralized thanks to the personal computer. Text retrieval is a critical area of study today, since it is the fundamental basis of all internet search engines.
Document retrieval systems find information matching given criteria by comparing text records (documents) against user queries, as opposed to expert systems that answer questions by inferring over a logical knowledge database. A document retrieval system consists of a database of documents, a classification algorithm to build a full text index, and a user interface to access the database.
A document retrieval system has two main tasks:
Internet search engines are classical applications of document retrieval. The vast majority of retrieval systems currently in use range from simple Boolean systems through to systems using statistical or natural language processing techniques.
There are two main classes of indexing schemata for document retrieval systems: form based (or word based), and content based indexing. The document classification scheme (or indexing algorithm) in use determines the nature of the document retrieval system.
Form based document retrieval addresses the exact syntactic properties of a text, comparable to substring matching in string searches. The text is generally unstructured and not necessarily in a natural language; the system could, for example, be used to process large sets of chemical representations in molecular biology. A suffix tree algorithm is an example of form based indexing; a closely related structure is sketched below.
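As an illustration, the following Python sketch uses a suffix array, a structure closely related to the suffix tree named above, to answer substring queries over an unstructured string; the construction shown is naive and suitable only for small texts:

```python
# A minimal sketch of form-based (substring) indexing with a suffix
# array; naive O(n^2 log n) construction, fine for small texts.
def build_suffix_array(text):
    return sorted(range(len(text)), key=lambda i: text[i:])

def contains(text, sa, pattern):
    lo, hi = 0, len(sa)
    while lo < hi:  # binary search over the sorted suffixes
        mid = (lo + hi) // 2
        if text[sa[mid]:sa[mid] + len(pattern)] < pattern:
            lo = mid + 1
        else:
            hi = mid
    return lo < len(sa) and text[sa[lo]:sa[lo] + len(pattern)] == pattern

text = "CCOHCH3"  # e.g. a chemical representation, not natural language
sa = build_suffix_array(text)
print(contains(text, sa, "OHC"))  # True
print(contains(text, sa, "NH2"))  # False
```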
The content based approach exploits semantic connections between documents and parts thereof, and semantic connections between queries and documents. Most content based document retrieval systems use an inverted index algorithm, as sketched below.
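As an illustration, the following Python sketch builds a small inverted index and answers a conjunctive query; the toy documents are hypothetical:

```python
# A minimal sketch of an inverted index: each term maps to the set of
# documents containing it.
from collections import defaultdict

docs = {
    1: "metadata describes other data",
    2: "an index speeds up retrieval",
    3: "an inverted index maps terms to documents",
}

index = defaultdict(set)
for doc_id, text in docs.items():
    for term in text.lower().split():
        index[term].add(doc_id)

# Conjunctive query: documents containing every query term.
query = ["index", "documents"]
result = set.intersection(*(index[t] for t in query))
print(result)  # {3}
```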
A signature file is a technique that creates a quick and dirty filter, for example a Bloom filter, that will keep all the documents that match the query and, hopefully, only a few that do not. This is done by creating a signature for each file, typically a hash-coded version; one method is superimposed coding. A post-processing step is done to discard the false alarms. Since in most cases this structure is inferior to inverted files in terms of speed, size and functionality, it is not used widely. However, with proper parameters it can beat inverted files in certain environments.
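As an illustration, the following Python sketch implements a signature as a small Bloom filter with superimposed coding; the bit-array size and hash count are arbitrary choices, not parameters from any particular system:

```python
# A minimal sketch of a signature file built as a Bloom filter: it
# never misses a matching document but may admit a few false alarms,
# which a post-processing step then discards.
import hashlib

SIZE, HASHES = 64, 3  # arbitrary parameters for this sketch

def positions(term):
    for i in range(HASHES):
        h = hashlib.sha256(f"{i}:{term}".encode()).digest()
        yield int.from_bytes(h[:4], "big") % SIZE

def make_signature(terms):
    sig = 0
    for term in terms:
        for pos in positions(term):
            sig |= 1 << pos  # superimposed coding: OR the bits together
    return sig

def might_contain(sig, term):
    return all(sig & (1 << pos) for pos in positions(term))

sig = make_signature(["metadata", "index"])
print(might_contain(sig, "metadata"))   # True (guaranteed: no false negatives)
print(might_contain(sig, "retrieval"))  # probably False; rarely a false alarm
```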
The PubMed[1] form interface features the "related articles" search, which works through a comparison of words from the documents' title, abstract, and MeSH terms using a word-weighted algorithm.[2][3]
Information retrieval (IR) in computing and information science is the task of identifying and retrieving information system resources that are relevant to an information need. The information need can be specified in the form of a search query. In the case of document retrieval, queries can be based on full-text or other content-based indexing. Information retrieval is the science[1] of searching for information in a document, searching for documents themselves, and also searching for the metadata that describes data, and for databases of texts, images or sounds.
Automated information retrieval systems are used to reduce what has been called information overload. An IR system is a software system that provides access to books, journals and other documents; it also stores and manages those documents. Web search engines are the most visible IR applications.
An information retrieval process begins when a user enters a query into the system. Queries are formal statements of information needs, for example search strings in web search engines. In information retrieval, a query does not uniquely identify a single object in the collection. Instead, several objects may match the query, perhaps with different degrees of relevance.
An object is an entity that is represented by information in a content collection or database. User queries are matched against the database information. However, as opposed to classical SQL queries of a database, in information retrieval the results returned may or may not match the query, so results are typically ranked. This ranking of results is a key difference between information retrieval searching and database searching.[2]
Depending on the application, the data objects may be, for example, text documents, images,[3] audio,[4] mind maps[5] or videos. Often the documents themselves are not kept or stored directly in the IR system, but are instead represented in the system by document surrogates or metadata.
Most IR systems compute a numeric score on how well each object in the database matches the query, and rank the objects according to this value. The top ranking objects are then shown to the user. The process may then be iterated if the user wishes to refine the query.[6]
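As an illustration, the following Python sketch scores and ranks a toy corpus with TF-IDF weights, one simple instance of such numeric scoring; the documents and query are hypothetical:

```python
# A minimal sketch of scoring and ranking with TF-IDF weights and a
# simple dot-product score; the toy corpus is hypothetical.
import math
from collections import Counter

docs = {
    "d1": "retrieval of information from documents",
    "d2": "ranked retrieval returns the best documents first",
    "d3": "databases answer exact queries",
}
tokenized = {d: t.lower().split() for d, t in docs.items()}
df = Counter(term for toks in tokenized.values() for term in set(toks))
N = len(docs)

def score(query, toks):
    tf = Counter(toks)
    return sum(tf[t] * math.log(N / df[t]) for t in query.split() if t in df)

ranked = sorted(tokenized,
                key=lambda d: score("ranked retrieval", tokenized[d]),
                reverse=True)
print(ranked)  # d2 scores highest; d3 scores zero
```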
there is ... a machine called the Univac ... whereby letters and figures are coded as a pattern of magnetic spots on a long steel tape. By this means the text of a document, preceded by its subject code symbol, can be recorded ... the machine ... automatically selects and types out those references which have been coded in any desired way at a rate of 120 words a minute
The idea of using computers to search for relevant pieces of information was popularized in the article As We May Think by Vannevar Bush in 1945.[7] It would appear that Bush was inspired by patents for a 'statistical machine' – filed by Emanuel Goldberg in the 1920s and 1930s – that searched for documents stored on film.[8] The first description of a computer searching for information was given by Holmstrom in 1948,[9] detailing an early mention of the Univac computer. Automated information retrieval systems were introduced in the 1950s; one even featured in the 1957 romantic comedy Desk Set. In the 1960s, the first large information retrieval research group was formed by Gerard Salton at Cornell. By the 1970s several different retrieval techniques had been shown to perform well on small text corpora such as the Cranfield collection (several thousand documents).[7] Large-scale retrieval systems, such as the Lockheed Dialog system, came into use early in the 1970s.
In 1992, the US Department of Defense, along with the National Institute of Standards and Technology (NIST), cosponsored the Text Retrieval Conference (TREC) as part of the TIPSTER text program. The aim of this was to support the information retrieval community by supplying the infrastructure that was needed for evaluation of text retrieval methodologies on a very large text collection. This catalyzed research on methods that scale to huge corpora. The introduction of web search engines has boosted the need for very large scale retrieval systems even further.
By the late 1990s, the rise of the World Wide Web fundamentally transformed information retrieval. While early search engines such as AltaVista (1995) and Yahoo! (1994) offered keyword-based retrieval, they were limited in scale and ranking refinement. The breakthrough came in 1998 with the founding of Google, which introduced the PageRank algorithm,[10] using the web's hyperlink structure to assess page importance and improve relevance ranking.
During the 2000s, web search systems evolved rapidly with the integration of machine learning techniques. These systems began to incorporate user behavior data (e.g., click-through logs), query reformulation, and content-based signals to improve search accuracy and personalization. In 2009, Microsoft launched Bing, introducing features that would later incorporate semantic web technologies through the development of its Satori knowledge base. Academic analyses[11] have highlighted Bing's semantic capabilities, including structured data use and entity recognition, as part of a broader industry shift toward improving search relevance and understanding user intent through natural language processing.
A major leap occurred in 2018, when Google deployed BERT (Bidirectional Encoder Representations from Transformers) to better understand the contextual meaning of queries and documents. This marked one of the first times deep neural language models were used at scale in real-world retrieval systems.[12] BERT's bidirectional training enabled a more refined comprehension of word relationships in context, improving the handling of natural language queries. Because of its success, transformer-based models gained traction in academic research and commercial search applications.[13]
Simultaneously, the research community began exploring neural ranking models that outperformed traditional lexical-based methods. Long-standing benchmarks such as the Text REtrieval Conference (TREC), initiated in 1992, and more recent evaluation frameworks such as Microsoft's MS MARCO (MAchine Reading COmprehension) (2019)[14] became central to training and evaluating retrieval systems across multiple tasks and domains. MS MARCO has also been adopted in the TREC Deep Learning Tracks, where it serves as a core dataset for evaluating advances in neural ranking models within a standardized benchmarking environment.[15]
As deep learning became integral to information retrieval systems, researchers began to categorize neural approaches into three broad classes: sparse, dense, and hybrid models. Sparse models, including traditional term-based methods and learned variants like SPLADE, rely on interpretable representations and inverted indexes to enable efficient exact term matching with added semantic signals.[16] Dense models, such as dual-encoder architectures like ColBERT, use continuous vector embeddings to support semantic similarity beyond keyword overlap.[17] Hybrid models aim to combine the advantages of both, balancing the lexical (token) precision of sparse methods with the semantic depth of dense models. This way of categorizing models balances scalability, relevance, and efficiency in retrieval systems.[18]
As IR systems increasingly rely on deep learning, concerns around bias, fairness, and explainability have also come into the picture. Research now focuses not just on relevance and efficiency, but on transparency, accountability, and user trust in retrieval algorithms.
Areas where information retrieval techniques are employed include (the entries are in alphabetical order within each category):
Methods/Techniques in which information retrieval techniques are employed include:
In order to effectively retrieve relevant documents by IR strategies, the documents are typically transformed into a suitable representation. Each retrieval strategy incorporates a specific model for its document representation purposes. Common models can be categorized according to two dimensions: the mathematical basis and the properties of the model.
In addition to the theoretical distinctions, modern information retrieval models are also categorized on how queries and documents are represented and compared, using a practical classification distinguishing between sparse, dense and hybrid models.[19]
This classification has become increasingly common in both academic and real-world applications and is widely adopted in evaluation benchmarks for information retrieval models.[23][20]
The evaluation of an information retrieval system is the process of assessing how well a system meets the information needs of its users. In general, measurement considers a collection of documents to be searched and a search query. Traditional evaluation metrics, designed for Boolean retrieval[clarification needed] or top-k retrieval, include precision and recall. All measures assume a ground truth notion of relevance: every document is known to be either relevant or non-relevant to a particular query. In practice, queries may be ill-posed and there may be different shades of relevance.
Concept Searching Limited was a software company that specialized in information retrieval software. It created products for enterprise search, taxonomy management, and statistical classification.
Concept Searching was founded in 2002 in the UK, with offices in the USA and South Africa.[citation needed] In August 2003, the company introduced the idea of using compound term processing.[1][2]
The company's products ran on the Microsoft .NET platform. The products integrated with Microsoft SharePoint and many other platforms.[3]
Concept Searching developed the Smart Content Framework, a toolset that provides an enterprise framework to mitigate risk, automate processes, manage information, protect privacy, and address compliance issues. The Smart Content Framework was used by many large organizations, including 23,000 users at the NASA Safety Center.[4]
Concept Searching was acquired by Netwrix on 28 November 2018.[5]
Enterprise search is software technology for searching data sources internal to a company, typically intranet and database content. The search is generally offered only to users internal to the company.[1][2] Enterprise search can be contrasted with web search, which applies search technology to documents on the open web, and desktop search, which applies search technology to the content on a single computer.
Enterprise search systems index data and documents from a variety of sources such as file systems, intranets, document management systems, e-mail, and databases. Many enterprise search systems integrate structured and unstructured data in their collections.[3] Enterprise search systems also use access controls to enforce a security policy on their users.[4]
Enterprise search can be seen as a type of vertical search of an enterprise.
In an enterprise search system, content goes through various phases from source repository to search results:
Content awareness (or "content collection") usually follows either a push or a pull model. In the push model, a source system is integrated with the search engine in such a way that it connects to it and pushes new content directly to its APIs. This model is used when real-time indexing is important. In the pull model, the software gathers content from sources using a connector such as a web crawler or a database connector. The connector typically polls the source at certain intervals to look for new, updated or deleted content.[5]
Content from different sources may have many different formats or document types, such as XML, HTML, Office document formats or plain text. The content processing phase processes the incoming documents to plain text using document filters. It is also often necessary to normalize content in various ways to improve recall or precision. These may include stemming, lemmatization, synonym expansion, entity extraction, and part-of-speech tagging.
As part of processing and analysis, tokenization is applied to split the content into tokens, the basic matching units. It is also common to normalize tokens to lower case to provide case-insensitive search, as well as to normalize accents to provide better recall; a sketch of these steps follows.
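As an illustration, the following Python sketch (standard library only) performs the lower-casing and accent normalization described above:

```python
# A minimal sketch (Python standard library only) of token
# normalization: lower-casing for case-insensitive search and accent
# folding for better recall.
import unicodedata

def normalize(token):
    token = token.lower()
    decomposed = unicodedata.normalize("NFKD", token)
    # Drop combining marks (accents) left over after decomposition.
    return "".join(c for c in decomposed if not unicodedata.combining(c))

print([normalize(t) for t in "Résumé FILES".split()])  # ['resume', 'files']
```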
The resulting text is stored in an index, which is optimized for quick lookups without storing the full text of the document. The index may contain the dictionary of all unique words in the corpus as well as information about ranking and term frequency.
Using a web page, the user issues a query to the system. The query consists of any terms the user enters as well as navigational actions such as faceting and paging information.
The processed query is then compared to the stored index, and the search system returns results (or "hits") referencing source documents that match. Some systems are able to present the document as it was indexed.
Categorical perception is a phenomenon of perception of distinct categories when there is a gradual change in a variable along a continuum. It was originally observed for auditory stimuli but is now known to be applicable to other perceptual modalities.[1][2]
If one analyzes the sound spectrogram of [ba] and [pa], for example, [p] and [b] can be visualized as lying somewhere on an acoustic continuum based on their VOT (voice onset time). It is possible to construct a continuum of intermediate tokens lying between the [p] and [b] endpoints by gradually varying the voice onset time.
Alvin Liberman and colleagues[3] (he did not talk about voice onset time in that paper) reported that when people listen to sounds that vary along the voicing continuum, they perceive only /ba/s and /pa/s, nothing in between. This effect—in which a perceived quality jumps abruptly from one category to another at a certain point along a continuum, instead of changing gradually—he dubbed "categorical perception" (CP). He suggested that CP was unique to speech, that CP made speech special, and, in what came to be called "the motor theory of speech perception," he suggested that CP's explanation lay in the anatomy of speech production.
According to the (now abandoned) motor theory of speech perception, the reason people perceive an abrupt change between /ba/ and /pa/ is that the way we hear speech sounds is influenced by how people produce them when they speak. What is varying along this continuum is voice onset time: the "b" in [ba] has a shorter VOT than the "p" in [pa] (i.e. the vocal folds start vibrating around the time of the release of the occlusion for [b], but tens of milliseconds later for [p]; note, though, that different varieties of English may implement VOT in different ways to signal the contrast). Apparently, unlike the synthetic "morphing" apparatus, people's natural vocal apparatus is not capable of producing anything in between ba and pa. So when one hears a sound from the VOT continuum, the brain perceives it by trying to match it with what it would have had to do to produce it. Since the only thing it can produce is /ba/ or /pa/, any of the synthetic stimuli along the continuum will be perceived as either /ba/ or /pa/, whichever it is closer to. A similar CP effect is found with ba/da (or with any two speech sounds belonging to different categories); these too lie along a continuum acoustically, but vocally, /ba/ is formed with the two lips, /da/ with the tip of the tongue and the alveolar ridge, and our anatomy does not allow any intermediates.
The motor theory of speech perception explained how speech was special and why speech sounds are perceived categorically: sensory perception is mediated by motor production.
If motor production mediates sensory perception, then one assumes that this CP effect is a result of learning to produce speech. Eimas et al. (1971), however, found that infants already have speech CP before they begin to speak. Perhaps, then, it is an innate effect, evolved to "prepare" us to learn to speak.[4] But Kuhl (1987) found that chinchillas also have "speech CP" even though they never learn to speak, and presumably did not evolve to do so.[5] Lane (1965) went on to show that CP effects can be induced by learning alone, with a purely sensory (visual) continuum in which there is no motor production discontinuity to mediate the perceptual discontinuity.[6] He concluded that speech CP is not special after all, but merely a special case of Lawrence's classic demonstration that stimuli to which you learn to make a different response become more distinctive and stimuli to which you learn to make the same response become more similar.
It also became clear that CP was not quite the all-or-none effect Liberman had originally thought it was: it is not that all /pa/s are indistinguishable and all /ba/s are indistinguishable. We can hear the differences, just as we can see the differences between different shades of red. It is just that the within-category differences (pa1/pa2 or red1/red2) sound/look much smaller than the between-category differences (pa2/ba1 or red2/yellow1), even when the size of the underlying physical differences (voicing, wavelength) is actually the same.
The study of categorical perception often uses experiments involving discrimination and identification tasks in order to categorize participants' perceptions of sounds. Voice onset time (VOT) is measured along a continuum rather than as a binary feature. The English bilabial stops /b/ and /p/ are voiced and voiceless counterparts of the same place and manner of articulation, yet native speakers distinguish the sounds primarily by where they fall on the VOT continuum. Participants in these experiments establish clear phoneme boundaries on the continuum; two sounds with different VOT will be perceived as the same phoneme if they fall on the same side of the boundary.[7] Participants take longer to discriminate between two sounds falling in the same category of VOT than between two on opposite sides of the phoneme boundary, even if the difference in VOT is greater between the two in the same category.[8]
In a categorical perception identification task, participants often must identify stimuli, such as speech sounds. An experimenter testing the perception of the VOT boundary between /p/ and /b/ may play several sounds falling on various parts of the VOT continuum and ask volunteers whether they hear each sound as /p/ or /b/.[9]In such experiments, sounds on one side of the boundary are heard almost universally as /p/ and on the other as /b/. Stimuli on or near the boundary take longer to identify and are reported differently by different volunteers, but are perceived as either /b/ or /p/, rather than as a sound somewhere in the middle.[7]
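As an illustration (a modeling sketch, not a result from the studies cited here), the following Python snippet models identification responses with a logistic psychometric function, whose boundary and slope values are hypothetical; it shows how responses flip abruptly across the category boundary while barely changing within a category:

```python
# A minimal illustrative sketch of an identification task modeled with
# a logistic psychometric function; the 25 ms boundary and the slope
# are hypothetical parameters, not values from the cited experiments.
import math

def p_pa(vot_ms, boundary=25.0, slope=1.0):
    return 1.0 / (1.0 + math.exp(-slope * (vot_ms - boundary)))

for vot in range(0, 60, 10):
    print(f"VOT {vot:2d} ms -> P(/pa/) = {p_pa(vot):.2f}")
# Within-category steps (0 -> 10 ms) barely change the response;
# the step across the boundary (20 -> 30 ms) flips it almost completely.
```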
A simple AB discrimination task presents participants with two options and participants must decide if they are identical.[9]Predictions for a discrimination task in an experiment are often based on the preceding identification task. An ideal discrimination experiment validating categorical perception of stop consonants would result in volunteers more often correctly discriminating stimuli that fall on opposite sides of the boundary, while discriminating at chance level on the same side of the boundary.[8]
In an ABX discrimination task, volunteers are presented with three stimuli. A and B must be distinct stimuli and volunteers decide which of the two the third stimulus X matches. This discrimination task is much more common than a simple AB task.[9][8]
According to the Sapir–Whorf hypothesis (of which Lawrence's acquired similarity/distinctiveness effects would simply be a special case), language affects the way that people perceive the world. For example, colors are perceived categorically only because they happen to be named categorically: our subdivisions of the spectrum are arbitrary, learned, and vary across cultures and languages. But Berlin & Kay (1969) suggested that this was not so: not only do most cultures and languages subdivide and name the color spectrum the same way, but even for those who don't, the regions of compression and separation are the same.[10] We all see blues as more alike and greens as more alike, with a fuzzy boundary in between, whether or not we have named the difference. This view has been challenged in a review article by Regier and Kay (2009), who distinguish between the questions "1. Do color terms affect color perception?" and "2. Are color categories determined by largely arbitrary linguistic convention?". They report evidence that linguistic categories, stored in the left hemisphere of the brain for most people, do affect categorical perception, but primarily in the right visual field, and that this effect is eliminated with a concurrent verbal interference task.[11]
Universalism, in contrast to the Sapir-Whorf hypothesis, posits that perceptual categories are innate and unaffected by the language that one speaks.[12]
Support for the Sapir-Whorf hypothesis comes from instances in which speakers of one language demonstrate categorical perception in a way that differs from speakers of another language. Examples of such evidence are provided below:
Regier and Kay (2009) reported evidence that linguistic categories affect categorical perception primarily in the right visual field.[13]The right visual field is controlled by the left hemisphere of the brain, which also controls language faculties. Davidoff (2001) presented evidence that in color discrimination tasks, native English speakers discriminated more easily between color stimuli across a determined blue-green boundary than within the same side, but did not show categorical perception when given the same task with Berinmo "nol" and "wor"; Berinmo speakers performed oppositely.[14]
A popular theory in current research is "weak Whorfianism", the theory that although there is a strong universal component to perception, cultural differences still have an impact. For example, a 1998 study found that while there was evidence of universal perception of color between speakers of Setswana and English, there were also marked differences between the two language groups.[15]
The signature of categorical perception (CP) is within-category compression and/or between-category separation. The size of the CP effect is merely a scaling factor; it is this compression/separation "accordion effect" that is CP's distinctive feature. In this respect, the "weaker" CP effect for vowels, whose motor production is continuous rather than categorical, but whose perception is by this criterion categorical, is every bit as much of a CP effect as the ba/pa and ba/da effects. But, as with colors, it looks as if the effect is an innate one: our sensory category detectors for both color and speech sounds are born already "biased" by evolution: our perceived color and speech-sound spectrum is already "warped" with these compressions/separations.
The Lane/Lawrence demonstrations, lately replicated and extended by Goldstone (1994), showed that CP can be induced by learning alone.[16] There are also the countless categories cataloged in our dictionaries that, according to categorical perception, are unlikely to be inborn. Nativist theorists such as Fodor [1983] have sometimes seemed to suggest that all of our categories are inborn.[17] There are recent demonstrations that, although the primary color and speech categories may be inborn, their boundaries can be modified or even lost as a result of learning, and weaker secondary boundaries can be generated by learning alone.[18]
In the case of innate CP, our categorically biased sensory detectors pick out their prepared color and speech-sound categories far more readily and reliably than if our perception had been continuous.
Learning is a cognitive process that results in a relatively permanent change in behavior, and it can influence perceptual processing.[19] It does so by altering the way in which an individual perceives a given stimulus based on prior experience or knowledge: the way something is perceived is changed by how it was seen, observed, or experienced before. The effects of learning can be studied in categorical perception by looking at the processes involved.[20]
Learned categorical perception can be divided into different processes through some comparisons. The processes can be divided into between-category and within-category groups of comparison.[21] Between-category groups are those that compare between two separate sets of objects; within-category groups are those that compare within one set of objects. Between-category comparisons lead to a categorical expansion effect, which occurs when the classifications and boundaries for the category become broader, encompassing a larger set of objects; in other words, the "edge lines" for defining a category become wider. Within-category comparisons lead to a categorical compression effect, which corresponds to the narrowing of category boundaries to include a smaller set of objects (the "edge lines" are closer together).[21] Therefore, between-category comparisons lead to less rigid group definitions, whereas within-category comparisons lead to more rigid definitions.
Another method of comparison is to look at both supervised and unsupervised group comparisons. Supervised groups are those for which categories have been provided, meaning that the category has been defined previously or given a label; unsupervised groups are groups for which categories are created, meaning that the categories will be defined as needed and are not labeled.[22]
Themes are also important in learned categorical perception: their presence influences category learning and increases its quality, especially in cases where the existing themes are opposites.[22] In learned categorical perception, themes serve as cues for different categories, designating what to look for when placing objects into their categories. For example, when perceiving shapes, angles are a theme: the number of angles and their size provide more information about the shape and cue different categories. Three angles would cue a triangle, whereas four might cue a rectangle or a square. Opposite to the theme of angles would be the theme of circularity; the stark contrast between the sharp contour of an angle and the round curvature of a circle makes the categories easier to learn.
Similar to themes, labels are also important to learned categorical perception.[21] Labels are "noun-like" titles that can encourage categorical processing with a focus on similarities.[21] The strength of a label can be determined by three factors: analysis of affective (or emotional) strength, permeability (the ability to break through) of boundaries, and a judgment (measurement of rigidity) of discreteness.[21] Sources of labels differ and, similar to unsupervised/supervised categories, labels are either created or already exist.[21][22] Labels affect perception regardless of their source: peers, individuals, experts, cultures, and communities can all create labels, and the source appears to matter less than the mere presence of a label. There is a positive correlation between the strength of the label (the combination of the three factors) and the degree to which the label affects perception: the stronger the label, the more it affects perception.[21]
Cues used in learned categorical perception can foster easier recall and access of prior knowledge in the process of learning and using categories.[22] An item in a category can be easier to recall if the category has a cue for the memory. As discussed, labels and themes both function as cues for categories and, therefore, aid in the memory of these categories and the features of the objects belonging to them.
There are several brain structures at work that promote learned categorical perception. The areas and structures involved include neurons generally, the prefrontal cortex, and the inferotemporal cortex.[20][23] Neurons are linked to all processes in the brain and therefore facilitate learned categorical perception: they send the messages between brain areas and facilitate the visual and linguistic processing of the category. The prefrontal cortex is involved in "forming strong categorical representations."[20] The inferotemporal cortex has cells that code for different object categories and are tuned along diagnostic category dimensions, those distinguishing category boundaries.[20]
The learning of categories and categorical perception can be improved by adding verbal labels, making themes relevant to the self, making categories more separate, and targeting similar features that make it easier to form and define categories.
Learned categorical perception occurs not only in humans but has been demonstrated in other animal species as well. Studies have targeted categorical perception using humans, monkeys, rodents, birds, and frogs.[23][24] These studies have led to numerous discoveries. They focus primarily on learning the boundaries of categories, where inclusion begins and ends, and they support the hypothesis that categorical perception has a learned component.
Computational modeling (Tijsseling & Harnad 1997; Damper & Harnad 2000) has shown that many types of category-learning mechanisms (e.g. both back-propagation and competitive networks) display CP-like effects.[25][26] In back-propagation nets, the hidden-unit activation patterns that "represent" an input build up within-category compression and between-category separation as they learn; other kinds of nets display similar effects. CP seems to be a means to an end: inputs that differ among themselves are "compressed" onto similar internal representations if they must all generate the same output, and they become more separate if they must generate different outputs. The network's "bias" is what filters inputs onto their correct output category. The nets accomplish this by selectively detecting (after much trial and error, guided by error-correcting feedback) the invariant features that are shared by the members of the same category and that reliably distinguish them from members of different categories; the nets learn to ignore all other variation as irrelevant to the categorization.
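The compression/separation dynamic is easy to reproduce in a toy simulation. The sketch below is only illustrative (the one-dimensional stimulus continuum, network size, and learning rate are arbitrary choices, not parameters from the cited models), but it shows hidden-unit codes for same-category inputs drawing together and different-category inputs pulling apart as a back-propagation net learns a category boundary.

```python
import numpy as np

rng = np.random.default_rng(0)

# A 1-D stimulus continuum split into two categories at 0.5 (an arbitrary
# stand-in for something like a ba/pa voicing continuum).
x = np.linspace(0.0, 1.0, 20).reshape(-1, 1)
y = (x > 0.5).astype(float)
n = len(x)

# Tiny 1-4-1 back-propagation network.
W1, b1 = rng.normal(0, 1, (1, 4)), np.zeros(4)
W2, b2 = rng.normal(0, 1, (4, 1)), np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

def stats(h):
    """Mean pairwise distance between hidden codes, within vs. between categories."""
    d = np.linalg.norm(h[:, None, :] - h[None, :, :], axis=-1)
    same = (y == y.T) & ~np.eye(n, dtype=bool)
    return d[same].mean(), d[y != y.T].mean()

print("before training (within, between):", stats(sigmoid(x @ W1 + b1)))

lr = 1.0
for _ in range(5000):  # plain gradient descent with error-correcting feedback
    h = sigmoid(x @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    d2 = out - y                      # output delta (cross-entropy loss)
    d1 = (d2 @ W2.T) * h * (1 - h)    # back-propagated hidden delta
    W2 -= lr * h.T @ d2 / n; b2 -= lr * d2.mean(0)
    W1 -= lr * x.T @ d1 / n; b1 -= lr * d1.mean(0)

# After learning, within-category distances shrink relative to between-category
# distances: the compression/separation signature of CP described above.
print("after training (within, between):", stats(sigmoid(x @ W1 + b1)))
```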
Neural data provide correlates of CP and of learning.[27] Differences between event-related potentials recorded from the brain have been found to be correlated with differences in the perceived category of the stimulus viewed by the subject. Neuroimaging studies have shown that these effects are localized and even lateralized to certain brain regions in subjects who have successfully learned the category, and are absent in subjects who have not.[28][29]
Categorical perception of speech units has been identified with the left prefrontal cortex, which shows such perception, whereas posterior areas earlier in the processing stream, such as areas in the left superior temporal gyrus, do not.[30]
Both innate and learned CP are sensorimotor effects: the compression/separation biases are sensorimotor biases, and presumably had sensorimotor origins, whether during the sensorimotor life-history of the organism, in the case of learned CP, or the sensorimotor life-history of the species, in the case of innate CP. The neural net I/O models are also compatible with this fact: their I/O biases derive from their I/O history. But when we look at our repertoire of categories in a dictionary, it is highly unlikely that many of them had a direct sensorimotor history during our lifetimes, and even less likely in our ancestors' lifetimes. How many of us have seen a unicorn in real life? We have seen pictures of them, but what had those who first drew those pictures seen? And what about categories I cannot draw or see (or taste or touch): what about the most abstract categories, such as goodness and truth?
Some of our categories must originate from a source other than direct sensorimotor experience, and here we return to language and the Whorf Hypothesis: can categories, and their accompanying CP, be acquired through language alone? Again, there are some neural net simulation results suggesting that once a set of category names has been "grounded" through direct sensorimotor experience, they can be combined into Boolean combinations (man = male & human) and into still higher-order combinations (bachelor = unmarried & man) which not only pick out the more abstract, higher-order categories much the way the direct sensorimotor detectors do, but also inherit their CP effects, as well as generating some of their own. Bachelor inherits the compression/separation of unmarried and man, and adds a layer of separation/compression of its own.[31][32]
These language-induced CP effects remain to be directly demonstrated in human subjects; so far only learned and innate sensorimotor CP have been demonstrated.[33][34] The latter shows the Whorfian power of naming and categorization in warping our perception of the world. That is enough to rehabilitate the Whorf Hypothesis from its apparent failure on color terms (and perhaps also from its apparent failure on Eskimo snow terms[35]), but to show that it is a full-blown language effect, and not merely a vocabulary effect, it will have to be shown that our perception of the world can also be warped, not just by how things are named, but by what we are told about them.
Emotions are an important characteristic of the human species. An emotion is an abstract concept that is most easily observed by looking at facial expressions. Emotions and their relation to categorical perception are often studied using facial expressions.[36][37][38][39][40] Faces contain a large amount of valuable information.[38]
Emotions are divided into categories because they are discrete from one another: each emotion entails a separate and distinct set of reactions, consequences, and expressions. The feeling and expression of emotions is a natural occurrence and, for some emotions, a universal one. There are six basic emotions that are considered universal to the human species across age, gender, race, country, and culture, and that are considered to be categorically distinct: happiness, disgust, sadness, surprise, anger, and fear.[39] According to the discrete emotions approach, people experience one emotion and not others, rather than a blend.[39] Categorical perception of emotional facial expressions does not require lexical categories.[39] Of these six emotions, happiness is the most easily identified.
The perception of emotions through facial expressions reveals slight gender differences[36] based on the definition and boundaries (essentially, the "edge line" where one emotion ends and a subsequent emotion begins) of the categories. The emotion of anger is perceived more easily and quickly when it is displayed by males, whereas the same effect is seen for the emotion of happiness when portrayed by women.[36] These effects are observed because the categories of the two emotions (anger and happiness) are more closely associated with other features of these specific genders.
Although verbal labels are attached to emotions, they are not required in order to perceive them categorically: infants can distinguish emotional expressions before they acquire language. The categorical perception of emotions rests on a "hardwired mechanism".[39] Additional evidence comes from cultures that lack a verbal label for a specific emotion but whose members can still perceive it categorically as its own emotion, discrete and isolated from other emotions.[39] The perception of emotions into categories has also been studied using the tracking of eye movements, which revealed an implicit, purely motor response with no verbal requirement.[37]
The categorical perception of emotions is sometimes a result of joint processing, and other factors may be involved. Emotional expression and invariable features (features that remain relatively consistent) often work together.[38] Race is one of the invariable features that contribute to categorical perception in conjunction with expression; race can also be considered a social category.[38] Emotional categorical perception can also be seen as a mix of categorical and dimensional perception, where dimensional perception involves visual imagery; categorical perception occurs even when processing is dimensional.[40]
This article incorporates text by Stevan Harnad available under the CC BY-SA 3.0 license. The text and its release have been received by the Wikimedia Volunteer Response Team; for more information, see the talk page. | https://en.wikipedia.org/wiki/Categorical_perception
A color space is a specific organization of colors. In combination with color profiling supported by various physical devices, it supports reproducible representations of color – whether such representation entails an analog or a digital representation. A color space may be arbitrary, i.e. with physically realized colors assigned to a set of physical color swatches with corresponding assigned color names (including discrete numbers in – for example – the Pantone collection), or structured with mathematical rigor (as with the NCS System, Adobe RGB and sRGB). A "color space" is a useful conceptual tool for understanding the color capabilities of a particular device or digital file. When trying to reproduce color on another device, color spaces can show whether shadow/highlight detail and color saturation can be retained, and by how much either will be compromised.
A "color model" is an abstract mathematical model describing the way colors can be represented astuplesof numbers (e.g. triples inRGBor quadruples inCMYK); however, a color model with no associated mapping function to anabsolute color spaceis a more or less arbitrary color system with no connection to any globally understood system of color interpretation. Adding a specific mapping function between a color model and a reference color space establishes within the reference color space a definite "footprint", known as agamut, and for a given color model, this defines a color space. For example, Adobe RGB and sRGB are two different absolute color spaces, both based on the RGB color model. When defining a color space, the usual reference standard is theCIELABorCIEXYZcolor spaces, which were specifically designed to encompass all colors the average human can see.[1]
Since "color space" identifies a particular combination of the color model and the mapping function, the word is often used informally to identify a color model. However, even though identifying a color space automatically identifies the associated color model, this usage is incorrect in a strict sense. For example, although several specific color spaces are based on theRGB color model, there is no such thing as the singularRGB color space.
In 1802, Thomas Young postulated the existence of three types of photoreceptors (now known as cone cells) in the eye, each of which was sensitive to a particular range of visible light.[2] Hermann von Helmholtz developed the Young–Helmholtz theory further in 1850: that the three types of cone photoreceptors could be classified as short-preferring (blue), middle-preferring (green), and long-preferring (red), according to their response to the wavelengths of light striking the retina. The relative strengths of the signals detected by the three types of cones are interpreted by the brain as a visible color. But it is not clear that they thought of colors as being points in color space.
The color-space concept was likely due to Hermann Grassmann, who developed it in two stages. First, he developed the idea of vector space, which allowed the algebraic representation of geometric concepts in n-dimensional space.[3] Fearnley-Sander (1979) describes Grassmann's foundation of linear algebra as follows:[4]
The definition of a linear space (vector space)... became widely known around 1920, when Hermann Weyl and others published formal definitions. In fact, such a definition had been given thirty years previously by Peano, who was thoroughly acquainted with Grassmann's mathematical work. Grassmann did not put down a formal definition—the language was not available—but there is no doubt that he had the concept.
With this conceptual background, in 1853, Grassmann published a theory of how colors mix; it and its three color laws are still taught, as Grassmann's law.[5]
As noted first by Grassmann... the light set has the structure of a cone in the infinite-dimensional linear space. As a result, a quotient set (with respect to metamerism) of the light cone inherits the conical structure, which allows color to be represented as a convex cone in the 3-D linear space, which is referred to as the color cone.[6]
Colors can be created in printing with color spaces based on the CMYK color model, using the subtractive primary colors of pigment (cyan, magenta, yellow, and key [black]). To create a three-dimensional representation of a given color space, we can assign the amount of magenta color to the representation's X axis, the amount of cyan to its Y axis, and the amount of yellow to its Z axis. The resulting 3-D space provides a unique position for every possible color that can be created by combining those three pigments.
Colors can be created on computer monitors with color spaces based on the RGB color model, using the additive primary colors (red, green, and blue). A three-dimensional representation would assign each of the three colors to the X, Y, and Z axes. Colors generated on a given monitor will be limited by the reproduction medium, such as the phosphor (in a CRT monitor) or filters and backlight (LCD monitor).
Another way of creating colors on a monitor is with an HSL or HSV color model, based on hue, saturation, brightness (value/lightness). With such a model, the variables are assigned to cylindrical coordinates.
Many color spaces can be represented as three-dimensional values in this manner, but some have more or fewer dimensions, and some, such as Pantone, cannot be represented in this way at all.
Color space conversion is the translation of the representation of a color from one basis to another. This typically occurs in the context of converting an image that is represented in one color space to another color space, the goal being to make the translated image look as similar as possible to the original.
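As an illustration of such a conversion, the short sketch below maps an 8-bit sRGB triple into CIE XYZ using the standard sRGB transfer function and the published D65 sRGB-to-XYZ matrix (the matrix entries shown are the commonly quoted four-decimal approximations).

```python
import numpy as np

# Standard matrix mapping linear sRGB (D65 white point) to CIE XYZ.
SRGB_TO_XYZ = np.array([
    [0.4124, 0.3576, 0.1805],
    [0.2126, 0.7152, 0.0722],
    [0.0193, 0.1192, 0.9505],
])

def srgb8_to_xyz(r, g, b):
    """Convert an 8-bit sRGB triple to CIE XYZ (components roughly in 0..1)."""
    rgb = np.array([r, g, b]) / 255.0
    # Undo the sRGB transfer function ("gamma") to recover linear light.
    linear = np.where(rgb <= 0.04045, rgb / 12.92, ((rgb + 0.055) / 1.055) ** 2.4)
    return SRGB_TO_XYZ @ linear

print(srgb8_to_xyz(255, 255, 255))  # ~ [0.9505, 1.0000, 1.0890], the D65 white point
```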
The RGB color model is implemented in different ways, depending on the capabilities of the system used. The most common incarnation in general use as of 2021 is the 24-bit implementation, with 8 bits, or 256 discrete levels of color, per channel.[7] Any color space based on such a 24-bit RGB model is thus limited to a range of 256×256×256 ≈ 16.7 million colors. Some implementations use 16 bits per component for 48 bits total, resulting in the same gamut with a larger number of distinct colors. This is especially important when working with wide-gamut color spaces (where most of the more common colors are located relatively close together), or when a large number of digital filtering algorithms are used consecutively. The same principle applies for any color space based on the same color model, but implemented at different bit depths.
CIE 1931 XYZ color space was one of the first attempts to produce a color space based on measurements of human color perception (earlier efforts were by James Clerk Maxwell, König & Dieterici, and Abney at Imperial College)[8] and it is the basis for almost all other color spaces. The CIE RGB color space is a linearly-related companion of CIE XYZ. Additional derivatives of CIE XYZ include the CIELUV, CIEUVW, and CIELAB.
RGB uses additive color mixing, because it describes what kind of light needs to be emitted to produce a given color. RGB stores individual values for red, green and blue. RGBA is RGB with an additional channel, alpha, to indicate transparency. Common color spaces based on the RGB model include sRGB, Adobe RGB, ProPhoto RGB, scRGB, and CIE RGB.
CMYK uses the subtractive color mixing of the printing process, because it describes what kind of inks need to be applied so the light reflected from the substrate and through the inks produces a given color. One starts with a white substrate (canvas, page, etc.), and uses ink to subtract color from white to create an image. CMYK stores ink values for cyan, magenta, yellow and black. There are many CMYK color spaces for different sets of inks, substrates, and press characteristics (which change the dot gain or transfer function for each ink and thus change the appearance).
YIQ was formerly used in NTSC (North America, Japan and elsewhere) television broadcasts for historical reasons. This system stores a luma value roughly analogous to (and sometimes incorrectly identified as)[9][10] luminance, along with two chroma values as approximate representations of the relative amounts of blue and red in the color. It is similar to the YUV scheme used in most video capture systems[11] and in PAL (Australia, Europe, except France, which uses SECAM) television, except that the YIQ color space is rotated 33° with respect to the YUV color space and the color axes are swapped. The YDbDr scheme used by SECAM television is rotated in another way.
YPbPr is a scaled version of YUV. It is most commonly seen in its digital form, YCbCr, used widely in video and image compression schemes such as MPEG and JPEG.
xvYCC is an international digital video color space standard published by the IEC (IEC 61966-2-4). It is based on the ITU BT.601 and BT.709 standards but extends the gamut beyond the R/G/B primaries specified in those standards.
HSV (hue, saturation, value), also known as HSB (hue, saturation, brightness), is often used by artists because it is often more natural to think about a color in terms of hue and saturation than in terms of additive or subtractive color components. HSV is a transformation of an RGB color space, and its components and colorimetry are relative to the RGB color space from which it was derived.
HSL (hue, saturation, lightness/luminance), also known as HLS or HSI (hue, saturation, intensity), is quite similar to HSV, with "lightness" replacing "brightness". The difference is that the brightness of a pure color is equal to the brightness of white, while the lightness of a pure color is equal to the lightness of a medium gray.
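Because HSV and HSL are simple transformations of RGB, they are easy to compute; for instance, Python's standard-library colorsys module implements both (note that colorsys calls the HSL variant "HLS" and orders its components hue, lightness, saturation).

```python
import colorsys

r, g, b = 0.2, 0.6, 0.4                    # an RGB triple with components in 0..1
h, s, v = colorsys.rgb_to_hsv(r, g, b)
h2, l, s2 = colorsys.rgb_to_hls(r, g, b)   # HLS = hue, lightness, saturation

# The hue agrees between the two models; saturation and value/lightness differ,
# reflecting the brightness-vs-lightness distinction described above.
print(f"HSV: {h:.3f} {s:.3f} {v:.3f}")
print(f"HSL: {h2:.3f} {s2:.3f} {l:.3f}")
```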
Early color spaces had two components. They largely ignored blue light because the added complexity of a 3-component process provided only a marginal increase in fidelity when compared to the jump from monochrome to 2-component color.
In color science, there are two meanings of the term absolute color space: a color space in which the perceptual difference between colors is directly related to the distances between colors as represented by points in the space, or a color space in which colors are unambiguous, that is, where the interpretation of colors in the space is colorimetrically defined without reference to external factors.
In this article, we concentrate on the second definition.
CIEXYZ, sRGB, and ICtCp are examples of absolute color spaces, as opposed to a generic RGB color space.
A non-absolute color space can be made absolute by defining its relationship to absolute colorimetric quantities. For instance, if the red, green, and blue colors in a monitor are measured exactly, together with other properties of the monitor, then RGB values on that monitor can be considered as absolute. The CIE 1976 L*, a*, b* color space is sometimes referred to as absolute, though it also needs a white point specification to make it so.[16]
A popular way to make a color space like RGB into an absolute color is to define an ICC profile, which contains the attributes of the RGB. This is not the only way to express an absolute color, but it is the standard in many industries. RGB colors defined by widely accepted profiles include sRGB and Adobe RGB. The process of adding an ICC profile to a graphic or document is sometimes called tagging or embedding; tagging, therefore, marks the absolute meaning of colors in that graphic or document.
A color in one absolute color space can be converted into another absolute color space, and back again, in general; however, some color spaces may have gamut limitations, and converting colors that lie outside that gamut will not produce correct results. There are also likely to be rounding errors, especially if the popular range of only 256 distinct values per component (8-bit color) is used.
One part of the definition of an absolute color space is the viewing conditions. The same color, viewed under different natural or artificial lighting conditions, will look different. Those involved professionally with color matching may use viewing rooms, lit by standardized lighting.
Occasionally, there are precise rules for converting between non-absolute color spaces. For example, HSL and HSV spaces are defined as mappings of RGB. Both are non-absolute, but the conversion between them should maintain the same color. However, in general, converting between two non-absolute color spaces (for example, RGB to CMYK) or between absolute and non-absolute color spaces (for example, RGB to L*a*b*) is almost a meaningless concept.
A different method of defining absolute color spaces is familiar to many consumers as the swatch card, used to select paint, fabrics, and the like. This is a way of agreeing on a color between two parties. A more standardized method of defining absolute colors is the Pantone Matching System, a proprietary system that includes swatch cards and recipes that commercial printers can use to make inks that are a particular color. | https://en.wikipedia.org/wiki/Color_space
In artificial intelligence (AI), commonsense reasoning is a human-like ability to make presumptions about the type and essence of ordinary situations humans encounter every day. These assumptions include judgments about the nature of physical objects, taxonomic properties, and peoples' intentions. A device that exhibits commonsense reasoning might be capable of drawing conclusions that are similar to humans' folk psychology (humans' innate ability to reason about people's behavior and intentions) and naive physics (humans' natural understanding of the physical world).[1]
Different authors have offered various definitions and characterizations of common sense.
NYU professor Ernest Davis characterizes commonsense knowledge as "what a typical seven year old knows about the world", including physical objects, substances, plants, animals, and human society. It usually excludes book-learning, specialized knowledge, and knowledge of conventions; but it sometimes includes knowledge about those topics. For example, knowing how to play cards is specialized knowledge, not "commonsense knowledge"; but knowing that people play cards for fun does count as "commonsense knowledge".[7]
Compared with humans, existing AI lacks several features of human commonsense reasoning; most notably, humans have powerful mechanisms for reasoning about "naïve physics" such as space, time, and physical interactions. This enables even young children to easily make inferences like "If I roll this pen off a table, it will fall on the floor". Humans also have a powerful mechanism of "folk psychology" that helps them to interpret natural-language sentences such as "The city councilmen refused the demonstrators a permit because they advocated violence". (A generic AI has difficulty discerning whether the ones alleged to be advocating violence are the councilmen or the demonstrators.)[1][8][9] This lack of "common knowledge" means that AI often makes different mistakes than humans make, in ways that can seem incomprehensible. For example, existing self-driving cars cannot reason about the location or the intentions of pedestrians in the exact way that humans do, and instead must use non-human modes of reasoning to avoid accidents.[10][11][12]
Overlapping subtopics of commonsense reasoning include quantities and measurements, time and space, physics, minds, society, plans and goals, and actions and change.[13]
The commonsense knowledge problem is a current project in the sphere of artificial intelligence to create a database that contains the general knowledge most individuals are expected to have, represented in a way that is accessible to artificial intelligence programs[14] that use natural language. Due to the broad scope of commonsense knowledge, this issue is considered to be among the most difficult problems in AI research.[15] In order for any task to be done as a human mind would manage it, the machine is required to appear as intelligent as a human being. Such tasks include object recognition, machine translation and text mining. To perform them, the machine has to be aware of the same concepts that an individual who possesses commonsense knowledge recognizes.
In 1961, Bar-Hillel first discussed the need and significance of practical knowledge for natural language processing in the context of machine translation.[16] Some ambiguities are resolved by using simple, easily acquired rules; others require a broad acknowledgement of the surrounding world, and thus more commonsense knowledge. For instance, when a machine is used to translate a text, problems of ambiguity arise that could be easily resolved by attaining a concrete and true understanding of the context. Online translators often resolve ambiguities using analogous or similar words. For example, in translating the sentences "The electrician is working" and "The telephone is working" into German, the machine translates "working" correctly, in the sense of "laboring" in the first and as "functioning properly" in the second. The machine has seen and read in a body of texts that the German words for "laboring" and "electrician" are frequently used in combination and are found close together; the same applies for "telephone" and "function properly". However, the statistical proxy that works in simple cases often fails in complex ones. Existing computer programs carry out simple language tasks by manipulating short phrases or separate words, but they don't attempt any deeper understanding and focus on short-term results.
Issues of this kind arise in computer vision.[1][17] For instance, when looking at a photograph of a bathroom, some items that are small and only partly seen, such as facecloths and bottles, are recognizable due to the surrounding objects (toilet, wash basin, bathtub), which suggest the purpose of the room. In an isolated image they would be difficult to identify.
Movies are even more difficult. Some contain scenes and moments that cannot be understood by simply matching memorized templates to images. For instance, to understand the context of a movie, the viewer is required to make inferences about characters' intentions and make presumptions depending on their behavior. In the contemporary state of the art, it is impossible to build and manage a program that will perform such tasks as reasoning about, i.e. predicting, characters' actions. The most that can be done is to identify basic actions and track characters.
The need and importance of commonsense reasoning in autonomous robots that work in a real-life uncontrolled environment is evident. For instance, if a robot is programmed to perform the tasks of a waiter at a cocktail party, and it sees that the glass it has picked up is broken, it should not pour the liquid into the glass but instead pick up another one. Such tasks seem obvious to an individual with simple commonsense reasoning, but ensuring that a robot avoids such mistakes is challenging.[1]
Significant progress in the field of automated commonsense reasoning has been made in the areas of taxonomic reasoning, reasoning about actions and change, and reasoning about time. Each of these spheres has a well-acknowledged theory covering a wide range of commonsense inferences.[18]
Taxonomy is the collection of individuals and categories and their relations. Three basic relations are: an individual is an instance of a category, one category is a subset of another, and two categories are disjoint.
Transitivity is one type of inference in taxonomy. Since Tweety is an instance of robin and robin is a subset of bird, it follows that Tweety is an instance of bird. Inheritance is another type of inference. Since Tweety is an instance of robin, which is a subset of bird, and bird is marked with property canfly, it follows that Tweety and robin have property canfly.
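A minimal sketch of these two inference patterns (using a hypothetical three-line knowledge base, not any established system) might look like this:

```python
# Illustrative knowledge base for the Tweety example.
subset_of = {"robin": "bird", "bird": "animal"}
instance_of = {"Tweety": "robin"}
properties = {"bird": {"canfly"}}

def categories_of(individual):
    """Transitivity: all categories an individual belongs to, following subset links."""
    cats = []
    cat = instance_of.get(individual)
    while cat is not None:
        cats.append(cat)
        cat = subset_of.get(cat)
    return cats

def has_property(individual, prop):
    """Inheritance: an individual has a property if any category above it does."""
    return any(prop in properties.get(c, set()) for c in categories_of(individual))

print(categories_of("Tweety"))           # ['robin', 'bird', 'animal']
print(has_property("Tweety", "canfly"))  # True, inherited from bird
```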
When an individual taxonomizes more abstract categories, outlining and delimiting specific categories becomes more problematic. Simple taxonomic structures are frequently used in AI programs. For instance, WordNet is a resource including a taxonomy whose elements are meanings of English words. Web mining systems used to collect commonsense knowledge from Web documents focus specifically on gathering taxonomic relations.[1]
The theory of action, events and change is another range of commonsense reasoning.[19] There are established reasoning methods for domains that satisfy certain simplifying constraints.
Temporal reasoning is the ability to make presumptions about humans' knowledge of times, durations and time intervals. For example, if an individual knows that Mozart was born after Haydn and died earlier than him, they can use their temporal reasoning knowledge to deduce that Mozart died younger than Haydn. The inferences involved reduce to solving systems of linear inequalities.[20]
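To make the reduction concrete, the sketch below encodes the Mozart/Haydn facts as linear inequalities over birth and death years and asks a linear-programming solver whether the negation of the conclusion is feasible; the encoding (the variable order and the one-year epsilon standing in for strict inequality) is an illustrative choice, not a method from the cited work.

```python
import numpy as np
from scipy.optimize import linprog

# Variables: x = [haydn_born, haydn_died, mozart_born, mozart_died].
# We test whether "Mozart's age >= Haydn's age" is consistent with the facts;
# infeasibility means "Mozart died younger" necessarily follows.
eps = 1.0  # strict inequalities approximated as "at least one year apart"

# Rows of A_ub @ x <= b_ub:
#   haydn_born - mozart_born <= -eps            (Mozart born after Haydn)
#   mozart_died - haydn_died <= -eps            (Mozart died before Haydn)
#   (haydn_died - haydn_born) - (mozart_died - mozart_born) <= 0   (negated conclusion)
A_ub = np.array([
    [ 1,  0, -1,  0],
    [ 0, -1,  0,  1],
    [-1,  1,  1, -1],
])
b_ub = np.array([-eps, -eps, 0.0])

res = linprog(c=np.zeros(4), A_ub=A_ub, b_ub=b_ub,
              bounds=[(None, None)] * 4)
print(res.status)  # 2 = infeasible, so Mozart must have died younger than Haydn
```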
To integrate that kind of reasoning with concrete purposes, such as natural language interpretation, is more challenging, because natural language expressions have context-dependent interpretation.[21] Simple tasks such as assigning timestamps to procedures cannot be done with total accuracy.
Qualitative reasoning[22] is a form of commonsense reasoning that has been analyzed with some success. It is concerned with the direction of change in interrelated quantities. For instance, if the price of a stock goes up, the number of stocks sold will go down. If an ecosystem contains wolves and lambs and the number of wolves decreases, the death rate of the lambs will go down as well. This theory was first formulated by Johan de Kleer, who analyzed an object moving on a roller coaster.
The theory of qualitative reasoning is applied in many spheres such as physics, biology, engineering, and ecology. It serves as the basis for many practical programs, as well as for analogical mapping and text understanding.
As of 2014, there are some commercial systems trying to make significant use of commonsense reasoning. However, they use statistical information as a proxy for commonsense knowledge, and reasoning is absent. Current programs manipulate individual words but don't attempt or offer further understanding. According to Ernest Davis and Gary Marcus, five major obstacles interfere with producing a satisfactory "commonsense reasoner".[1]
Compared with humans, as of 2018 existing computer programs perform extremely poorly on modern "commonsense reasoning" benchmark tests such as the Winograd Schema Challenge.[23] The problem of attaining human-level competency at "commonsense knowledge" tasks is considered to probably be "AI-complete" (that is, solving it would require the ability to synthesize a human-level intelligence).[24][25] Some researchers believe that supervised learning data is insufficient to produce an artificial general intelligence capable of commonsense reasoning, and have therefore turned to less-supervised learning techniques.[26]
The study of commonsense reasoning is divided into knowledge-based approaches and approaches based on machine learning over large data corpora, with limited interactions between the two.[citation needed] There are also crowdsourcing approaches, which attempt to construct a knowledge base by linking the collective knowledge and input of non-expert people. Knowledge-based approaches can be further separated into those based on mathematical logic and informal ones.[citation needed]
In knowledge-based approaches, experts analyze the characteristics of the inferences that are required to do reasoning in a specific area or for a certain task. The knowledge-based approaches consist of mathematically grounded approaches, informal knowledge-based approaches, and large-scale approaches. The mathematically grounded approaches are purely theoretical, and the result is a printed paper rather than a program; the work is limited to the range of domains and reasoning techniques being reflected on. In informal knowledge-based approaches, theories of reasoning are based on anecdotal data and intuition resulting from empirical behavioral psychology. Informal approaches are common in computer programming. Two other popular techniques for extracting commonsense knowledge from Web documents involve Web mining and crowdsourcing.
COMET (2019), which uses both the OpenAI GPT language model architecture and existing commonsense knowledge bases such as ConceptNet, claims to generate commonsense inferences at a level approaching human benchmarks. Like many other current efforts, COMET over-relies on surface language patterns and is judged to lack deep human-level understanding of many commonsense concepts. Other language-model approaches include training on visual scenes rather than just text, and training on textual descriptions of scenarios involving commonsense physics.[6][27] | https://en.wikipedia.org/wiki/Commonsense_reasoning
Conceptual dependency theory is a model of natural language understanding used in artificial intelligence systems.
Roger Schank at Stanford University introduced the model in 1969, in the early days of artificial intelligence.[1] This model was extensively used by Schank's students at Yale University, such as Robert Wilensky, Wendy Lehnert, and Janet Kolodner.
Schank developed the model to represent knowledge for natural language input into computers. Partly influenced by the work of Sydney Lamb, his goal was to make the meaning independent of the words used in the input, i.e. two sentences identical in meaning would have a single representation. The system was also intended to draw logical inferences.[2]
The model uses a small set of basic representational tokens, including real-world objects (each with attributes), real-world actions (each with attributes), times, and locations.[3]
A set of conceptual transitions then acts on this representation; e.g. an ATRANS is used to represent a transfer such as "give" or "take", while a PTRANS is used to act on locations such as "move" or "go", and an MTRANS represents mental acts such as "tell".
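A data-structure sketch can make this concrete (a hypothetical layout, not Schank's own graph notation); for the sentence "John gave a book to Mary", discussed below, the ATRANS primitive carries the actor, the object, and the transfer of possession from donor to recipient:

```python
from dataclasses import dataclass

@dataclass
class PP:            # "picture producer": a real-world object
    name: str

@dataclass
class Act:
    primitive: str   # e.g. ATRANS (transfer of possession), PTRANS, MTRANS
    actor: PP
    obj: PP
    source: PP
    recipient: PP

john, mary, book = PP("John"), PP("Mary"), PP("book")
gave = Act(primitive="ATRANS", actor=john, obj=book, source=john, recipient=mary)

# "John gave a book to Mary" and "Mary took a book from John" would map onto
# the same ATRANS structure with roles adjusted, reflecting the theory's goal
# of meaning independent of the words used.
print(gave)
```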
A sentence such as "John gave a book to Mary" is then represented as the action of an ATRANS on two real world objects, John and Mary. | https://en.wikipedia.org/wiki/Conceptual_dependency_theory |
Face space is a theoretical idea in psychology: a multidimensional space in which recognizable faces are stored. Faces are represented within this space according to invariant features of the face itself.[1] However, it has recently been demonstrated theoretically that faces can also be stored in the face space according to their dynamic features, in which case the resulting space exhibits a twofold structure.[2]
The face space framework has been highly influential in recent face processing theory, cited in almost 1000 scientific articles and recently revisited in a special edition of the journal Quarterly Journal of Experimental Psychology featuring the top 10 ideas that have appeared in the journal's pages.[3]
Face space is useful for accounting for various aspects of face recognition, including the own-race bias,[4] distinctiveness, and caricature effects.[5] The framework has also provided useful applications in the design of forensic techniques for eyewitness identification, such as facial composites and police lineups.[3]
The face-space framework is a psychological model that explains how (adult) humans process and store facial information, which we use for facial recognition. It is multidimensional, with each dimension categorised by certain facial features, such as face shape, hair colour and length, distance between the eyes, age and masculinity.[1][3] However, these are not categorically identified, and face-space dimensions could theoretically include any distinguishing facial feature.[1] The model assumes that every face is mentally represented as a specific location within this psychological space (according to its dimensions) and that faces' likeness corresponds to the distance between them; similar faces are nearer to each other, and different faces further apart.[3]
Mathematical assumptions are also necessary to explain the features of face-space. The central point of face-space (i.e. the origin) represents the central tendency of all the dimensions of facial features,[1] with stored faces assumed to have a normal distribution in each of these dimensions. As such, faces are arranged most densely, and look the most typical, at the origin, and become sparser and more distinctive with greater distance from the origin.[3]
Storing a face in a specific location within face-space involves the encoding of facial data into the dimensions of the framework. However, encoding is never perfect; any factor that hinders face recognition can induce encoding error.[3] Factors like negative colours, minimal viewing time and inversion (viewing a face upside-down) can substantially increase encoding error,[3] whilst other factors like race, distinctiveness[1] and caricature effects[6] can circumstantially make encoding easier or more difficult.
Two slightly different models of face-space are standard: norm-based and exemplar-based face-spaces. Both models encode faces in a multidimensional psychological space and account for factors like race and inversion. However, they differ in terms of their explanation of a face's location in the space: either as a vector from a 'norm face', or as distance from other faces.[1]
In the norm-based model, the encoding of faces is relative to a central face at the origin: a 'norm face'.[1] Faces are arranged using vectors from this norm, with the vector's parameters of length and direction determined by the distinctiveness and features of the face respectively.[3]
In the exemplar-based model, faces are encoded as individual points in the space, rather than as vectors relative to a norm face. Similarity to other faces in this model is defined by the relative distance between the faces.[1]
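The difference between the two models can be sketched numerically. In the toy example below (the five dimensions and the random "faces" are arbitrary stand-ins, not data from any study), a norm-based encoding describes a probe face by its vector from the average face, while an exemplar-based encoding looks at its distances to the stored faces themselves:

```python
import numpy as np

rng = np.random.default_rng(1)
faces = rng.normal(0, 1, (100, 5))   # 100 stored faces in a 5-D face space
norm = faces.mean(axis=0)            # the "norm face" at the central tendency

probe = rng.normal(0, 1, 5)          # a newly encountered face

# Norm-based coding: a face is the vector from the norm; the vector's length
# is a simple measure of its distinctiveness.
vec = probe - norm
distinctiveness = np.linalg.norm(vec)

# Exemplar-based coding: similarity is governed by distances to stored faces;
# a low mean distance to neighbours (high exemplar density) means more scope
# for confusion and harder recognition.
dists = np.linalg.norm(faces - probe, axis=1)
exemplar_density = 1.0 / np.sort(dists)[:10].mean()  # density via 10 nearest faces

print(distinctiveness, exemplar_density)
```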
Multiple studies have found that faces with distinctive features are recognised more easily than typical faces.[6] The face-space framework is able to explain this finding, as it assumes that faces are distributed normally across its dimensions: numerous typical faces are found at the origin of the model, with increasingly distinctive, yet infrequent, faces found further away.[7] Since distinctive faces are located more distant from other faces in the face-space (low exemplar density), confusion with these other faces is less likely, leading to better recognition.[7]
A caricature effect denotes the finding that caricatured faces are more easily recognised than veridical (original) ones.[6] Caricatures compare individual faces with an 'average' face (a simplified version of the original) and exaggerate the facial differences that are found, thus giving the original face more distinctive features.[6] The increased distinctiveness induced by the caricature explains the caricature effect:[6] the original face is more typical than the caricature and therefore in a more crowded area of face-space, whereas the more distinctive caricature is further away, and thus has less scope for confusion.
An own-race bias is the tendency to more easily recognise faces of people of the same race as yourself, compared to those of different races. Numerous studies have provided evidence for this phenomenon,[8] and face-space presents multiple explanations.
An interpretation of the location of faces within face-space using multidimensional scaling reveals that other-race faces are densely packed within a more distant area of face-space, while own-race faces are distributed more evenly around the origin.[8] This is explained in terms of the difference in exemplar density (how close faces are positioned to other faces) between own- and other-race faces. Because other-race faces are encoded with less emphasis on distinguishing facial features and more on race (the opposite of own-race faces), they are grouped close together, yet are distant from the central point of face-space. Unlike distinctiveness, however, distance in this case does not facilitate facial recognition, due to the higher exemplar density (many faces close together) of other-race faces.[8][1]
Norm-based face-space, on the other hand, explains the own-race bias as a consequence of distance from the 'norm face'.[1] Own-race faces are located nearer the norm, whereas other-race faces are grouped further from it, making own-race faces quicker to process and recognise.[1]
The face-space framework has been very influential in the development of modern eyewitness identification techniques, in particular the fourth generation of facial composite systems[9] as well as fairer police lineups for suspects with distinguishing features.[10]
Face-space puts an emphasis on facial identification according to similarities or differences between whole faces, rather than individual facial features.[9] Accordingly, principal component analysis is used to derive certain dimensions or 'eigenfaces' from sample faces, which can be combined and encoded upon to construct new, whole faces.[9] This can be used to create more accurate facial composite systems, as holistic face representations are better recognised than representations using individual features,[11] such as those used by older facial composite systems.[9]
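The sketch below illustrates the idea with scikit-learn's PCA on synthetic data standing in for face images (the image size, number of faces, and choice of 20 components are arbitrary): each face becomes a set of weights over holistic "eigenface" dimensions, and new whole faces are produced by recombining those dimensions rather than by assembling individual features.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
faces = rng.normal(0, 1, (200, 32 * 32))  # 200 flattened 32x32 "face" images

pca = PCA(n_components=20)
weights = pca.fit_transform(faces)        # each face as 20 eigenface weights
eigenfaces = pca.components_              # holistic whole-face dimensions

# Reconstruct (or synthesize) a whole face by recombining eigenfaces; varying
# the weight vector explores the space of whole faces, which is the principle
# behind holistic facial-composite systems.
new_face = pca.mean_ + weights[0] @ eigenfaces
print(new_face.shape)  # (1024,), i.e. a full 32x32 image
```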
In a police lineup, choosing the sole suspect that has a distinguishing feature described by the eyewitness (such as a piercing) on the basis of that feature alone is not fair to a potentially innocent suspect.[3] To correct for this partiality, one can either cover up the feature on the individual or replicate the feature on all suspects.[10] Distinctive features replicated on multiple faces mean the faces are nearer in face-space, and therefore perceived as more similar, according to the hybrid-similarity model.[10] Consequently, this model correctly predicts replication as a more effective procedure than concealment for correct identification of target individuals, as a result of the more difficult decision induced by the lesser variance within the lineup.[10]
See also: Conceptual space | https://en.wikipedia.org/wiki/Face_space
Global workspace theory (GWT) is a framework for thinking about consciousness introduced by cognitive scientist Bernard Baars in 1988.[1][2] It was developed to qualitatively explain a large set of matched pairs of conscious and unconscious processes. GWT has been influential in modeling consciousness and higher-order cognition as emerging from competition and integrated flows of information across widespread, parallel neural processes.
Bernard Baars derived inspiration for the theory as the cognitive analog of the blackboard system of early artificial intelligence system architectures, where independent programs shared information.[3]
Global workspace theory is one of the leading theories of consciousness.[4][5][6][7] While aspects of GWT are matters of debate, it remains a focus of current research, including brain interpretations and computational simulations.[citation needed]
GWT uses the metaphor of a theater, with conscious thought being like material illuminated on the main stage. Attention acts as a spotlight, bringing some of this unconscious activity into conscious awareness on the global workspace. Baars wrote in his 1997 article "In the Theatre of Consciousness" in the Journal of Consciousness Studies that the concept describes:[8]
[A] stage, an attentional spotlight shining on the stage, actors to represent the contents of conscious experience, an audience, and a few invisible people behind the scenes, who exercise great influence on whatever becomes visible on stage.
The stage receives sensory and abstract information, but only events in the spotlight shining on the stage are completely conscious.
A review of Baars' 1997 book In the Theater of Consciousness: The Workspace of the Mind further described:[9]
Thus peripheral and central sensory stimuli, imagination, and intuition compete for the center of attention, from where they address the unconscious processes of memory, interpretation, automatic routines, and motivation which, in turn, affect the control and context operators running the show from behind the scenes.
In a discussion with Susan Blackmore in her book Conversations on Consciousness, Baars said:[10]
From my point of view, the metaphor that is useful for understanding consciousness is the theatre metaphor, which also happens to be quite ancient, going back at least to Plato in the West, and to the Vedanta scriptures in the East. The theatre metaphor, in a simple way, says that what’s conscious is like the bright spot cast by a spotlight on to the stage of a theatre. What’s unconscious is everything else: all the people sitting in the audience are unconscious components of the brain which get information from consciousness; and there are people sitting behind the scenes, the director and the playwright and so on, who are shaping the contents of consciousness, telling the actor in the light spot what to say. It’s a very simple metaphor, but it turns out to be quite useful.
Baars distinguishes this from the Cartesian theater: "You don't have a little self sitting in the theatre".[11]
The brain contains many specialized processes or modules that operate in parallel, much of which is unconscious. The global workspace is a functional hub of broadcast and integration that allows information to be disseminated across modules. As such, GWT can be classified as a functionalist theory of consciousness.[12]
When sensory input, memories, or internal representations receive attention, they enter the global workspace and become accessible to various cognitive processes. As elements compete for attention, those that succeed gain entry to the global workspace, allowing their information to be distributed and coordinated throughout the whole cognitive system.
GWT resembles the concept of working memory and is proposed to correspond to a 'momentarily active, subjectively experienced' event in working memory. It facilitates top-down control of attention, working memory, planning, and problem-solving through this information sharing.
GWT involves a fleeting memory with a duration of a few seconds (much shorter than the 10–30 seconds of classical working memory). GWT contents are proposed[citation needed] to correspond to what we are conscious of, and are broadcast to a multitude of unconscious cognitive brain processes, which may be called receiving processes. Other unconscious processes, operating in parallel with limited communication between them, can form coalitions which can act as input processes to the global workspace. Since globally broadcast messages can evoke actions in receiving processes throughout the brain,[citation needed] the global workspace may be used to exercise executive control to perform voluntary actions. Individual as well as allied processes compete for access to the global workspace,[13] striving to disseminate their messages to all other processes in an effort to recruit more cohorts and thereby increase the likelihood of achieving their goals. Incoming stimuli need to be stored temporarily in order to be able to compete for attention and conscious access. Kouider and Dehaene predicted the existence of a sensory memory buffer that maintains stimuli for "a few hundreds of milliseconds".[13] Recent research offers preliminary evidence for such a buffer store and indicates a gradual but rapid decay, with extraction of meaningful information severely impaired after 300 ms and most data completely lost after 700 ms.[14]
Baars asserts that working memory "is closely associated with conscious experience, though not identical to it."[15] Conscious events may involve more necessary conditions, such as interacting with a "self" system, and an executive interpreter in the brain, such as has been suggested by a number of authors including Michael S. Gazzaniga.
Nevertheless, GWT can successfully model a number of characteristics of consciousness, such as its role in handling novel situations, its limited capacity, its sequential nature, and its ability to trigger a vast range of unconscious brain processes. Moreover, GWT lends itself well to computational modeling. Stan Franklin's IDA model is one such computational implementation of GWT. See also Dehaene et al. (2003), Shanahan,[16] and Bao's "Global Workspace Network" model.[17]
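The flavor of such computational implementations can be conveyed by a deliberately minimal sketch (an invented structure, far simpler than IDA or the Global Workspace Network): parallel processes compete for the workspace on each cognitive cycle, and the winner's content is broadcast to all the others.

```python
import random

class Process:
    """One specialized, normally unconscious module."""
    def __init__(self, name):
        self.name = name
        self.inbox = []          # broadcasts received from the workspace
    def bid(self):
        return random.random()   # stand-in for salience/activation
    def receive(self, content):
        self.inbox.append(content)

processes = [Process(n) for n in ["vision", "audition", "memory", "planning"]]

for cycle in range(3):
    winner = max(processes, key=lambda p: p.bid())   # competition for access
    content = f"{winner.name} content @ cycle {cycle}"
    for p in processes:                              # global broadcast
        if p is not winner:
            p.receive(content)
    print("conscious:", content)
```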
GWT also specifies "behind the scenes" contextual systems, which shape conscious contents without ever becoming conscious, such as thedorsalcorticalstream of the visual system. This architectural approach leads to specific neural hypotheses.Sensoryevents in different modalities may compete with each other forconsciousnessif their contents are incompatible. For example, the audio and video track of a movie will compete rather than fuse if the two tracks are out of sync by more than 100 ms., approximately.[citation needed]The 100 ms time domain corresponds closely with the known brain physiology of consciousness, including brain rhythms in the alpha-theta-gamma domain, and event-related potentials in the 200–300 ms domain.[18][19]
However, much of this research is based on studies of unconscious priming, and recent studies show that many of the methods used for unconscious priming are flawed.[20]
Stanislas Dehaene extended the global workspace with the "neuronal avalanche", showing how sensory information gets selected to be broadcast throughout the cortex.[21] Many brain regions, including the prefrontal cortex, anterior temporal lobe, inferior parietal lobe, and the precuneus, send and receive numerous projections to and from a broad variety of distant brain regions, allowing the neurons there to integrate information over space and time. Multiple sensory modules can therefore converge onto a single coherent interpretation, for example, a "red sports car zooming by". This global interpretation is broadcast back to the global workspace, creating the conditions for the emergence of a single state of consciousness, at once differentiated and integrated.
Alternatively, the theory of practopoiesis suggests that the global workspace is achieved in the brain primarily through fast adaptive mechanisms of nerve cells.[22] According to that theory, connectivity does not matter much; what is critical is rather that neurons can rapidly adapt to the sensory context within which they operate. Notably, for achieving a global workspace, the theory presumes that these fast adaptive mechanisms have the capability to learn when and how to adapt.
J. W. Dalton has criticized the global workspace theory on the grounds that it provides, at best, an account of the cognitive function of consciousness, and fails even to address the deeper problem of its nature, of what consciousness is, and of how any mental process whatsoever can be conscious: the hard problem of consciousness.[23] However, the abstract of A. C. Elitzur's 1997 paper summarized that while GWT "does not address the 'hard problems,' namely, the very nature of consciousness, it constrains any theory that attempts to do so and provides important insights into the relation between consciousness and cognition".[24]
In Consciousness: A Very Short Introduction, Susan Blackmore said there are two possible interpretations of GWT and it is often hard to tell which people mean, but "in the first version, the hard problem remains: something magical happens to turn unconscious items into conscious ones. In the second, it disappears, but we have to give up the idea that some items are conscious and others not".[25] | https://en.wikipedia.org/wiki/Global_workspace_theory
An image schema (both schemas and schemata are used as plural forms) is a recurring structure within our cognitive processes which establishes patterns of understanding and reasoning. As an understudy to embodied cognition, image schemas are formed from our bodily interactions,[1] from linguistic experience, and from historical context. The term was introduced in Mark Johnson's book The Body in the Mind; in case study 2 of George Lakoff's Women, Fire and Dangerous Things; and further explained by Todd Oakley in The Oxford Handbook of Cognitive Linguistics; by Rudolf Arnheim in Visual Thinking; and by the collection From Perception to Meaning: Image Schemas in Cognitive Linguistics edited by Beate Hampe and Joseph E. Grady.
In contemporarycognitive linguistics, an image schema is considered an embodied prelinguistic structure of experience that motivatesconceptual metaphormappings. Learned in early infancy they are often described as spatiotemporal relationships that enable actions and describe characteristics of the environment. They exist both as static and dynamic version, describing both states and processes,[2]compare Containment vs. Going_In/Out, and they are learned from all sensorimodalities.
Evidence for image schemas is drawn from a number of related disciplines, including work on cross-modal cognition inpsychology, from spatial cognition in bothlinguisticsand psychology,[3]cognitive linguistics,[4]and fromneuroscience.[5]The influences of image schemas is not only seen in cognitive linguistics and developmental psychology, but also in interface design[6]and more recently, the theory has become of increased interest in artificial intelligence[7]and cognitive robotics[8]to help ground meaning.
Image schemas are dynamic embodied patterns—they take place in and through time. Moreover, they are multi-modal patterns of experience, not simply visual. For instance, consider how the dynamic nature of the containment schema is reflected in the various spatial senses of the English word out. Out may be used in cases where a clearly defined trajector (TR) leaves a spatially bounded landmark (LM), as in John went out of the room.
In the most prototypical of such cases the landmark is a clearly defined container. However, out may also be used to indicate those cases where the trajector is a mass that spreads out, effectively expanding the area of the containing landmark, as in She spread out the map.
Finally, out is also often used to describe motion along a linear path where the containing landmark is implied and not defined at all, as in The train started out for Chicago.
Experientially basic and primarily spatial image schemas such as the Containment schema and its derivatives, the Out schemas, lend their logic to non-spatial situations. For example, one may metaphorically use the term out to describe non-spatial experiences, as in He finally came out of his depression.
Johnson argues that more abstract reasoning is shaped by such underlying spatial patterns. For example, he notes that the logic of containment is not just a matter of being in or out of the container. If someone is in a deep depression, we know it is likely to be a long time before they are well: the deeper the trajector is in the container, the longer it will take for the trajector to get out of it. Similarly, Johnson argues that transitivity and the law of the excluded middle in logic are underlaid by preconceptual embodied experiences of the Containment schema.
In case study two of his book Women, Fire and Dangerous Things, Lakoff re-presented the analysis of the English word over done by Claudia Brugman in her (1981) master's thesis. Similar to the analysis of out given by Johnson, Lakoff argued that there were six basic spatial schemas for the English word over. Moreover, Lakoff gave a detailed accounting of how these schemas were interrelated in terms of what he called a radial category structure. For example, these six schemas could be further specified by other spatial schemas, such as whether the trajector was in contact with the landmark or not (as in the plane flew over the mountain vs. he climbed over the mountain). Furthermore, Lakoff identified a group of "transformational" image schemata, such as rotational schemas and path-to-object-mass, as in Spider-Man climbed all over the wall. This analysis raised profound questions about how image schemas could be grouped and transformed, and how sequences of image schemas could be chained together in language, mind, and brain.
Johnson indicates that his analysis of out drew upon a 1981 doctoral dissertation by Susan Lindner, completed in linguistics at UCSD under Ronald Langacker, and more generally on Langacker's theory of cognitive grammar.[9] For the force group of image schemas Johnson also drew on an early version of the force dynamics schemas put forth by Len Talmy, as used by linguists such as Eve Sweetser. Other influences include Max Wertheimer's gestalt structure theory and Kant's account of schemas in categorization, as well as studies in experimental psychology on the mental rotation of images.
In addition to the dissertation on over by Brugman, Lakoff's use of image schema theory also drew extensively on Talmy's and Langacker's theories of spatial relations terms. Other theories making use of similar conceptual primitives to capture meaning include Jean M. Mandler's spatial primitives, Anna Wierzbicka's semantic primes,[10] Leonard Talmy's conceptual primitives, Roger Schank's conceptual dependency theory, and Andrea A. diSessa's phenomenological primitives (p-prims).
Image schemas have also been proposed as descriptors of Gibsonian affordances. An object like a cup affords the image schema Containment to liquids, and an abstract concept like transportation offers the affordance of moving something from one point to another as an image-schematic combination of Source-Path-Goal and Containment (alternatively Support).
While originally a theory for cognitive linguistics, the theory of image schemas and the underlying ideas behind embodied cognition have attracted increased interest in artificial intelligence and cognitive robotics as a way to address problems of natural language comprehension and the application of affordances. Research on formal accounts (e.g.[13][14]) of these abstract patterns dates back several decades, and such accounts have been proposed as a way to deal with geographical information science,[15] natural language comprehension, automatic ontology generation[16] and computational conceptual blending.[17]
In direct relation to embodied cognition, and more specifically embodied construction grammar, formal approaches to image schemas often limit the research area by looking at image schemas exclusively as spatiotemporal relationships. This provides a feasible foundation for knowledge representation, since each individual image schema, as well as their interconnections, can be represented as relationships in a 3D space. One formal language to describe them is ISL (Image Schema Language), a logic language that combines different formal calculi with first-order logic and builds hierarchical families of logical micro-theories able to represent different degrees of specification of the image schemas.[14]
In artificial intelligence, image schemas are also used as an inspiration to advance natural language comprehension of metaphors, conceptual blending and creative language use. This extends to non-linguistic reasoning such as commonsense reasoning (e.g. see Davis' egg cracking problem and the approach made to describe it image-schematically[18]) and the formal structure of events,[19] both prototypical of some of the biggest challenges in AI.
While Johnson provided an initial list of image schemas inThe Body in the Mind(p. 126), his diagrams for them are scattered throughout his book and he only diagrammed a portion of those image schemas he listed. In his work, Lakoff also used several additional schemas.
A consolidated list of image schemas is given in the source.[20] | https://en.wikipedia.org/wiki/Image_schema
Phonetic space is the range of sounds that can be made by an individual.[1] There is some controversy over whether an individual's phonetic space is language dependent, or if there exists some common, innate, phonetic space across languages.[2]
Phonetic space is a concept pioneered by Martin Joos in 1948[3] and developed by Gordon E. Peterson in 1951[4] and Noam Chomsky in 1968.[5] Chomsky developed the idea that phonetic space is universal and every human is born with a discrete phonetic space.[5] The most cited rebuttal of Chomsky's proposal of a universal and discrete phonetic space is an article by Port and Leary titled "Against Formal Phonology".[6] Applications of phonetic space include interlanguage phonetic comparison and phonological analysis.[2]
A definition of phonetic space is not agreed upon, the concept varying in use and meaning depending on the author in question. Some similarities and constants can, however, be drawn. One recurring claim is that phonetic space is universal: every human that uses verbal communication obtains a discrete phonetic space.[1][2][5] This space is the distribution of vowels perceived by the speaker. The recognition of words, and specifically the vowels within these words, is achieved by noting a perceived difference between one sound and another. The act of comparing these competing sounds and categorizing them within the mind is the creation of a phonetic space.[7] The identity of each sound is a conglomerate of ideas and concepts composed of categories such as VOT (voice onset time), amplitude rise-time, formant frequency, bandwidth, formant transition, and energy-density maximum. Not all of these categories are used for every sound; however, in building an individual phonetic space, these attributes are often integral to the differentiation process used by the mind to successfully distinguish between any two competing sounds.[5] Based on these ideas, the vowel quadrilateral is used to show what the realization of these basic categories would look like, and helps to visually conceptualize the separation of competing phonetic space that occurs within the human mind.
In 2005, Robert F. Port and Adam P. Leary published an argument against the existence of a fixed phonetic inventory. They presented the idea of a phonetic space as unrealistic given the breadth of existing languages, and more specifically argued that languages are not consistent in distinctness, discreteness, or temporal patterns, even within the same language.[6] They argue that in order for a formal system to exist, it must have rules, and therefore each "phonetic atom" - in this case, all the phonetic sounds in the universe - "must be static and discretely different from each other," which means there can be no inconsistency in how each sound is produced. They argue that this is unrealistic because speakers of the same language often speak differently: the intonations of sounds and stresses on syllables depend on each person's style of speaking, not necessarily their accent.
Port and Leary claim that phonetics is filled with many asymmetries. The usual picture of the phonetic space assumes that its dimensions include voicing, height, and nasality, and that variations along these dimensions produce the many sounds of language. Port and Leary argue, however, that not all phonetic properties can be combined, such as vowel height and backness, and therefore the rules are asymmetric: it is not known which properties can exist together in one sound.
With regard to the concept of the phonetic space, Port and Leary essentially argue that, contrary to the research of Chomsky and Halle, there are too many inconsistencies and difficulties associated with the existence of a phonetic space; while their perspective is not widely accepted by other linguists, they contribute valid points to the idea that the infinite number of sounds cannot co-exist perfectly with a set of rules in one space.[6]
Phonetic space is rarely touched upon in linguistics, and therefore little research has been done on the topic; there are, however, a few things of note regarding the subject. The idea of phonetic space could not have developed until we had a working definition of phonetics and a way to place sound in space. While Grassmann's development of linear algebra set us on the conceptual path to placing values in space,[8][circular reference] it was C. G. Kratzenstein who first published detailed methods to synthesize speech in the 1700s. "Although when his principal phonetic work, was published in 1781 and 1782 there was no clear understanding of acoustic resonance, his accomplishment – via trial and error – was remarkable and contributed to accumulating "existence proofs" that speech could be understood in physical and physiological terms."[9]
While first mentioned in the 1700s, the idea was largely ignored until the 1940s, when the term was more officially coined by Martin Joos, an American linguist and professor of German. Joos contributed much to the realm of phonetics and phonology, writing the monograph that helped linguists come to a more unified theory regarding acoustics in phonetics.[10][circular reference][11] The concept would later be expanded on by Gordon E. Peterson[12] in his essay "The Phonetic Value of Vowels". Along with these contributions, Marshall McLuhan could be mentioned as well, as he was the one to truly consider acoustic space, which is very similar to phonetic space. The two are not exactly the same: acoustic space refers more to the environment that allows for the sound, while phonetic space is narrower, referring to the space between sounds.[13][circular reference][14] On a surface level they may not seem related, but the connection is worth mentioning even if nothing is directly attributed to McLuhan.
Martin Joos was an American linguist most commonly known for his study of language formality.[15] Though Joos did not solely study phonetic space, he contributed to the field of acoustic phonetics through his works Acoustic Phonetics and Readings in Linguistics.
Gordon E. Peterson was an American linguist whose fields of study ranged from acoustic analysis to phonemic theory and automatic speech recognition.[16] Though Peterson did not explicitly study phonetic space, in his study of phonetic value he concluded that the vowel diagram that linguists typically use is a two-dimensional representation of the vowels in the phonetic space, which is multi-dimensional.[17]
Noam Chomsky is a prominent American linguist who pioneered the idea of an innate universal grammar, which ties into his idea that phonetic space is also universally innate.[18][19]
In 2010, a study on phonetic space was done to determine whether phonetic spaces exist and differ from speaker to speaker.[20] Three groups of participants were tested: those who were born and raised in China, those who moved from China at an early age, and Americans who learned Chinese later in life. Subjects were recorded saying various sounds and analyzed through Praat, a computer program that measures sound frequencies in Hz. The various frequencies are grouped into formants, which correlate to certain sounds in the proposed phonetic space. The recorded values for the sounds of heritage speakers and non-heritage speakers differed greatly. The averages show that the phonetic space, or values of sound, differ between the three groups. | https://en.wikipedia.org/wiki/Phonetic_space
Semantic spaces[note 1][1] in the natural language domain aim to create representations of natural language that are capable of capturing meaning. The original motivation for semantic spaces stems from two core challenges of natural language: vocabulary mismatch (the fact that the same meaning can be expressed in many ways) and ambiguity of natural language (the fact that the same term can have several meanings).
The application of semantic spaces in natural language processing (NLP) aims at overcoming limitations of rule-based or model-based approaches operating on the keyword level. The main drawback of these approaches is their brittleness and the large manual effort required to create either rule-based NLP systems or training corpora for model learning.[2][3] Rule-based and machine-learning-based models are fixed on the keyword level and break down if the vocabulary differs from that defined in the rules or from the training material used for the statistical models.
Research in semantic spaces dates back more than 20 years. In 1996, two papers were published that attracted considerable attention around the general idea of creating semantic spaces: latent semantic analysis[4] and Hyperspace Analogue to Language.[5] However, their adoption was limited by the large computational effort required to construct and use those semantic spaces. A breakthrough with regard to the accuracy of modelling associative relations between words (e.g. "spider-web", "lighter-cigarette", as opposed to synonymous relations such as "whale-dolphin", "astronaut-driver") was achieved by explicit semantic analysis (ESA)[6] in 2007. ESA was a novel (non-machine-learning) approach that represented words in the form of vectors with 100,000 dimensions (where each dimension represents an article in Wikipedia). However, practical applications of the approach are limited due to the large number of required dimensions in the vectors.
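Whatever the representation, the common operational idea is that proximity of vectors stands in for similarity of meaning, which is how a semantic space sidesteps vocabulary mismatch. The following minimal Python sketch illustrates this with tiny hand-made vectors; the five dimensions and all the numbers are invented for illustration, and real systems such as ESA or word2vec learn vectors with far more dimensions from large corpora.

```python
import numpy as np

# Hypothetical 5-dimensional "semantic" vectors; values are invented for
# illustration only. Real semantic spaces learn these from large corpora.
vectors = {
    "boat":  np.array([0.9, 0.1, 0.0, 0.3, 0.0]),
    "ship":  np.array([0.8, 0.2, 0.1, 0.4, 0.0]),
    "pilot": np.array([0.1, 0.9, 0.7, 0.0, 0.2]),
}

def cosine(u, v):
    # Cosine similarity: 1.0 for identical directions, 0.0 for orthogonal ones.
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

print(cosine(vectors["boat"], vectors["ship"]))   # high: near-synonyms
print(cosine(vectors["boat"], vectors["pilot"]))  # lower: weakly related terms
```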
More recently, advances in neural network techniques in combination with other new approaches (tensors) led to a host of recent developments: Word2vec[7] from Google, GloVe[8] from Stanford University, and fastText[9] from Facebook AI Research (FAIR) labs. | https://en.wikipedia.org/wiki/Semantic_space
In computer science, a state space is a discrete space representing the set of all possible configurations of a system.[1] It is a useful abstraction for reasoning about the behavior of a given system and is widely used in the fields of artificial intelligence and game theory.
For instance, the toy problem Vacuum World has a discrete finite state space in which there is a limited set of configurations that the vacuum and dirt can be in. A "counter" system, where states are the natural numbers starting at 1 and incremented over time,[2] has an infinite discrete state space. The angular position of an undamped pendulum[3] is a continuous (and therefore infinite) state space.
State spaces are useful in computer science as a simple model of machines. Formally, a state space can be defined as a tuple [N, A, S, G], where N is a set of states (nodes), A is a set of arcs connecting the states, S is a nonempty subset of N containing start states, and G is a nonempty subset of N containing goal states.
A state space has some common properties: its size, its branching factor (the number of successors of a given state), the directionality of its arcs, and whether it forms a tree or a general graph.
For example, the Vacuum World has a branching factor of 4, as the vacuum cleaner can end up in 1 of 4 adjacent squares after moving (assuming it cannot stay in the same square nor move diagonally). The arcs of Vacuum World are bidirectional, since any square can be reached from any adjacent square, and the state space is not a tree since it is possible to enter a loop by moving between any 4 adjacent squares.
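As a concrete illustration, here is a minimal Python sketch of such a tuple. To keep it short it models only the vacuum's position on a 2x2 grid of squares A-D, so a full Vacuum World state, which would also track the dirt in each square, is simplified away; the names and layout are invented for illustration, which is also why the branching factor below is 2 rather than the 4 of the example above.

```python
# A state space as the tuple [N, A, S, G], for a vacuum moving on a 2x2 grid.
N = {"A", "B", "C", "D"}                               # states: the four squares
A = {("A", "B"), ("B", "A"), ("A", "C"), ("C", "A"),   # bidirectional arcs
     ("B", "D"), ("D", "B"), ("C", "D"), ("D", "C")}
S = {"A"}                                              # start state(s)
G = {"D"}                                              # goal state(s)

def successors(state):
    """States reachable from `state` by following one arc."""
    return {dst for (src, dst) in A if src == state}

# Branching factor of each state: here every square has two grid neighbours.
print({s: len(successors(s)) for s in sorted(N)})      # {'A': 2, 'B': 2, ...}
```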
State spaces can be either infinite or finite, and discrete or continuous.
The size of the state space for a given system is the number of possible configurations of the space.[3]
If the size of the state space is finite, calculating the size of the state space is a combinatorial problem.[4] For example, in the eight queens puzzle, the state space can be calculated by counting all possible ways to place 8 pieces on an 8x8 chessboard. This is the same as choosing 8 positions without replacement from a set of 64, or (64 choose 8) = 4,426,165,368.
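For readers who want to check the arithmetic, the count is a single binomial coefficient; a one-line Python verification:

```python
import math

# Ways to place 8 indistinguishable pieces on 64 squares: C(64, 8).
print(math.comb(64, 8))  # 4426165368
```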
This is significantly greater than the number of legal configurations of the queens, 92. In many games the effective state space is small compared to all reachable/legal states. This property is also observed inchess, where the effective state space is the set of positions that can be reached by game-legal moves. This is far smaller than the set of positions that can be achieved by placing combinations of the available chess pieces directly on the board.
All continuous state spaces can be described by a corresponding continuous function and are therefore infinite.[3] Discrete state spaces can also have (countably) infinite size, such as the state space of the time-dependent "counter" system,[2] similar to the system in queueing theory defining the number of customers in a line, which would have state space {0, 1, 2, 3, ...}.
Exploring a state space is the process of enumerating possible states in search of a goal state. The state space of Pacman, for example, contains a goal state whenever all food pellets have been eaten, and is explored by moving Pacman around the board.[5]
A search state is a compressed representation of a world state in a state space, and is used for exploration. Search states are used because a state space often encodes more information than is necessary to explore the space. Compressing each world state to only information needed for exploration improves efficiency by reducing the number of states in the search.[5]For example, a state in the Pacman space includes information about the direction Pacman is facing (up, down, left, or right). Since it does not cost anything to change directions in Pacman, search states for Pacman would not include this information and reduce the size of the search space by a factor of 4, one for each direction Pacman could be facing.
Standard search algorithms are effective in exploring discrete state spaces. The following algorithms exhibit both completeness and optimality in searching a state space:[5][6] breadth-first search (for uniform arc costs), uniform-cost search, iterative deepening depth-first search, and A* search with an admissible heuristic; a breadth-first sketch follows below.
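As one concrete instance, here is a minimal breadth-first search in Python over the small grid-shaped state space sketched earlier; the adjacency table is invented for illustration. Breadth-first search is complete on finite spaces and returns a shortest path when every arc has the same cost.

```python
from collections import deque

# Adjacency of four squares A-D arranged in a 2x2 grid (illustrative only).
GRID = {"A": {"B", "C"}, "B": {"A", "D"}, "C": {"A", "D"}, "D": {"B", "C"}}

def breadth_first_search(start, goal, successors):
    """Complete on finite state spaces; optimal when every arc costs the same."""
    frontier = deque([[start]])          # queue of paths, shortest first
    visited = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path                  # shortest path in number of arcs
        for nxt in successors[path[-1]]:
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None                          # goal unreachable

print(breadth_first_search("A", "D", GRID))  # e.g. ['A', 'B', 'D']
```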
These methods do not extend naturally to exploring continuous state spaces. Exploring a continuous state space in search of a given goal state is equivalent to optimizing an arbitrary continuous function, which is not always possible; see mathematical optimization. | https://en.wikipedia.org/wiki/State_space
Visual space is the experience of space by an aware observer. It is the subjective counterpart of the space of physical objects. There is a long history in philosophy, and later psychology, of writings describing visual space and its relationship to the space of physical objects; a partial list would include René Descartes, Immanuel Kant, Hermann von Helmholtz, and William James.
The location and shape of physical objects can be accurately described with the tools of geometry. For practical purposes the space we occupy is Euclidean. It is three-dimensional and measurable using tools such as rulers. It can be quantified using coordinate systems like the Cartesian x, y, z, or polar coordinates with angles of elevation, azimuth and distance from an arbitrary origin.
Percepts, the counterparts in the aware observer's conscious experience of objects in physical space, constitute an ordered ensemble, as Ernst Cassirer explained.[1] Visual space cannot be measured with rulers. Historically, philosophers used introspection and reasoning to describe it. With the development of psychophysics, beginning with Gustav Fechner, there has been an effort to develop suitable experimental procedures which allow objective descriptions of visual space, including geometric descriptions, to be developed and tested. An example illustrates the relationship between the concepts of object and visual space. Two straight lines are presented to an observer who is asked to set them so that they appear parallel. When this has been done, the lines are parallel in visual space. A comparison is then possible with the actual measured layout of the lines in physical space. Good precision can be achieved using these and other psychophysical procedures in human observers or behavioral ones in trained animals.[2]
The visual field, the area or extent of physical space that is being imaged on the retina, should be distinguished from the perceptual space in which visual percepts are located, which we call visual space. Confusion is caused by the use of Sehraum in the German literature for both. There is no doubt that Ewald Hering and his followers meant visual space in their writings.[3]
The fundamental distinction was made by Rudolf Carnap between three kinds of space, which he called formal, physical and perceptual.[4] Mathematicians, for example, deal with ordered structures, ensembles of elements for which rules of logico-deductive relationships hold, limited solely by being not self-contradictory. These are the formal spaces. According to Carnap, studying physical space means examining the relationship between empirically determined objects. Finally, there is the realm of what students of Kant know as Anschauungen, immediate sensory experiences, often translated as "apperceptions", which belong to perceptual spaces.
Geometry is the discipline devoted to the study of space and the rules relating the elements to each other. For example, in Euclidean space the Pythagorean theorem provides a rule to compute distances from Cartesian coordinates. In a two-dimensional space of constant curvature, like the surface of a sphere, the rule is somewhat more complex but applies everywhere. On the two-dimensional surface of a football, the rule is more complex still and has different values depending on location. In well-behaved spaces such rules used for measurement, called metrics, are classically handled by the mathematics invented by Riemann. Object space belongs to that class.
To the extent that it is reachable by scientifically acceptable probes, visual space as defined is also a candidate for such considerations. The first and remarkably prescient analysis was published by Ernst Mach[5] in 1901. Under the heading On Physiological as Distinguished from Geometrical Space, Mach states that "Both spaces are threefold manifoldnesses" but the former is "...neither constituted everywhere and in all directions alike, nor infinite in extent, nor unbounded." A notable attempt at a rigorous formulation was made in 1947 by Rudolf Luneburg, who preceded his essay on mathematical analysis of vision[6] by a profound analysis of the underlying principles. When features are sufficiently singular and distinct, there is no problem about a correspondence between an individual item A in object space and its correlate A' in visual space. Questions can be asked and answered such as "If visual percepts A', B', C' are correlates of physical objects A, B, C, and if C lies between A and B, does C' lie between A' and B'?" In this manner, the possibility of visual space being metrical can be approached. If the exercise is successful, a great deal can be said about the nature of the mapping of the physical space on the visual space.
On the basis of fragmentary psychophysical data of previous generations, Luneburg concluded that visual space was hyperbolic with constant curvature, meaning that elements can be moved throughout the space without changing shape. One of Luneburg's major arguments is that, in accord with a common observation, the transformation involving hyperbolic space renders infinity into a dome (the sky). The Luneburg proposition gave rise to discussions and attempts at corroborating experiments, which on the whole did not favor it.[7]
Basic to the problem, and underestimated by Luneburg the mathematician, is the question of whether a mathematically viable formulation of the relationship between objects in physical space and percepts in visual space is likely to succeed. Any scientific investigation of visual space is colored by the kind of access we have to it, and the precision, repeatability and generality of measurements. Insightful questions can be asked about the mapping of visual space to object space,[8] but answers are mostly limited in the range of their validity. If the physical setting that satisfies the criterion of, say, apparent parallelism varies from observer to observer, or from day to day, or from context to context, so does the geometrical nature of, and hence the mathematical formulation for, visual space.
All these arguments notwithstanding, there is a major concordance between the locations of items in object space and their correlates in visual space. It is adequately veridical for us to navigate very effectively in the world; deviations from such a situation are sufficiently notable to warrant special consideration. Visual space agnosia is a recognized neurological condition, and the many common distortions, called geometrical-optical illusions, are widely demonstrated but of minor consequence.
The founder of psychophysics, Gustav Theodor Fechner, defined the mission of the discipline as the functional relationship between the mental and material worlds—in this particular case, the visual and object spaces—but he acknowledged an intermediate step, which has since blossomed into the major enterprise of modern neuroscience. In distinguishing between inner and outer psychophysics, Fechner recognized that a physical stimulus generates a percept by way of an effect on the organism's sensory and nervous systems. Hence, without denying that its essence is the arc between object and percept, the inquiry can concern itself with the neural substrate of visual space.[citation needed]
Two major concepts dating back to the middle of the 19th century set the parameters of the discussion here. Johannes Müller emphasized that what matters in a neural path is the connection it makes,[citation needed] and Hermann Lotze, from psychological considerations, enunciated the principle of local sign[clarify].[citation needed] Put together in modern neuroanatomical terms, they mean that a nerve fiber from a fixed retinal location instructs its target neurons in the brain about the presence of a stimulus in the location in the eye's visual field that is imaged there. The orderly array of retinal locations is preserved in the passage from the retina to the brain, and provides what is aptly called a "retinotopic" mapping in the primary visual cortex. Thus in the first instance brain activity retains the relative spatial ordering of the objects and lays the foundations for a neural substrate of visual space.
Unfortunately, simplicity and transparency end here. Right at the outset, visual signals are analyzed not only for their position, but also, separately in parallel channels, for many other attributes such as brightness, color, orientation, and depth. No single neuron or even neuronal center or circuit represents both the nature of a target feature and its accurate location. The unitary mapping of object space into the coherent visual space, without internal contradictions or inconsistencies, that we as observers automatically experience demands concepts of conjoint activity in several parts of the nervous system that are at present beyond the reach of neurophysiological research.
Though the details of the process by which the experience of visual space emerges remain opaque, a startling finding gives hope for future insights. Neural units have been demonstrated in the brain structure called the hippocampus that show activity only when the animal is in a specific place in its environment.[10]
Only on an astronomical scale are physical space and its contents interdependent. This major proposition of the general theory of relativity is of no concern in vision. For us, distances in object space are independent of the nature of the objects.
But this is not so simple in visual space. At a minimum, an observer judges the relative location of a few light points in an otherwise dark visual field, a simplistic extension from object space that enabled Luneburg to make some statements about the geometry of visual space. In a more richly textured visual world, the various visual percepts carry with them prior perceptual associations which often affect their relative spatial disposition. Identical separations in physical space can look quite different (are quite different in visual space) depending on the features that demarcate them. This is particularly so in the depth dimension, because the apparatus by which values in the third visual dimension are assigned is fundamentally different from that for the height and width of objects.
Even in monocular vision, which physiologically has only two dimensions, cues of size, perspective, relative motion etc. are used to assign depth differences to percepts. Looked at as a mathematical/geometrical problem, expanding a 2-dimensional object manifold into a 3-dimensional visual world is "ill-posed," i.e., not capable of a rational solution, but is accomplished quite effectively by the human observer.
The problem becomes less ill-posed when binocular vision allows actual determination of relative depth by stereoscopy, but its linkage to the evaluation of distance in the other two dimensions is uncertain (see: stereoscopic depth rendition). Hence, the uncomplicated three-dimensional visual space of everyday experience is the product of many perceptual and cognitive layers superimposed on the physiological representation of the physical world of objects. | https://en.wikipedia.org/wiki/Visual_space
Autoassociative memory, also known as auto-association memory or an autoassociation network, is any type of memory that is able to retrieve a piece of data from only a tiny sample of itself. Such memories are very effective in de-noising or removing interference from the input, and can be used to determine whether the given input is "known" or "unknown".
In artificial neural networks, examples include the variational autoencoder, the denoising autoencoder, and the Hopfield network.
In reference to computer memory, the idea of associative memory is also referred to as content-addressable memory (CAM).
The net is said to recognize a "known" vector if the net produces a pattern of activation on the output units which is the same as one of the vectors stored in it.
Standard memories (data storage) are organized by being indexed by positional memory addresses, which are also used for data retrieval.
Autoassociative memories are organized in such a way that data is stored in a graph-like system, with connection weights based on the number of inherent associative connections between two memories. This makes it possible to query the memory using a stored item as the key, retrieving that memory and closely connected memories at the same time. Hopfield networks[1] have been shown[2] to act as autoassociative memory since they are capable of remembering data by observing a portion of that data.
In some cases, an auto-associative net does not reproduce a stored pattern the first time around, but if the result of the first showing is input to the net again, the stored pattern is reproduced.[3] Such nets are of three further kinds: the recurrent linear auto-associator,[4] the Brain-State-in-a-Box net,[5] and the discrete Hopfield net. The Hopfield network is the best-known example of an autoassociative memory.
Hopfield networks serve as content-addressable ("associative") memory systems with binary threshold nodes, and they have been shown to act as autoassociative since they are capable of remembering data by observing a portion of that data.[2]
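A minimal numpy sketch of this behavior, assuming bipolar (±1) patterns: the weights are set by one-shot Hebbian learning (a sum of outer products with a zeroed diagonal), and recall iterates the threshold update until the state settles on a stored pattern. All pattern values and sizes here are invented for illustration.

```python
import numpy as np

def train_hopfield(patterns):
    """Hebbian one-shot learning: W is the sum of outer products, zero diagonal."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)
    return W

def recall(W, probe, steps=5):
    """Repeated synchronous threshold updates; the state settles on a stored pattern."""
    s = probe.copy()
    for _ in range(steps):
        s = np.where(W @ s >= 0, 1, -1)
    return s

stored = np.array([[1, -1, 1, -1, 1, -1],
                   [1, 1, 1, -1, -1, -1]])
W = train_hopfield(stored)
noisy = np.array([1, -1, -1, -1, 1, -1])   # first pattern with one bit flipped
print(recall(W, noisy))                    # recovers [ 1 -1  1 -1  1 -1]
```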
Heteroassociative memories, on the other hand, can recall an associated datum from one category upon presentation of data from another category. For example, it is possible that the associative recall is a transformation from the pattern "banana" to the different pattern "monkey."[6]
Bidirectional associative memories (BAM)[7] are artificial neural networks that have long been used for performing heteroassociative recall.
For example, the sentence fragments presented below are sufficient for most English-speaking adult humans to recall the missing information.
Many readers will realize the missing information is in fact:
This demonstrates the capability of autoassociative networks to recall the whole by using some of its parts. | https://en.wikipedia.org/wiki/Autoassociative_memory |
The cerebellar model arithmetic computer (CMAC) is a type of neural network based on a model of the mammalian cerebellum. It is also known as the cerebellar model articulation controller. It is a type of associative memory.[2]
The CMAC was first proposed as a function modeler for robotic controllers by James Albus in 1975[1] (hence the name), but has been extensively used in reinforcement learning and for automated classification in the machine learning community. The CMAC is an extension of the perceptron model. It computes a function for n input dimensions. The input space is divided up into hyper-rectangles, each of which is associated with a memory cell. The contents of the memory cells are the weights, which are adjusted during training. Usually, more than one quantisation of input space is used, so that any point in input space is associated with a number of hyper-rectangles, and therefore with a number of memory cells. The output of a CMAC is the algebraic sum of the weights in all the memory cells activated by the input point.
A change of value of the input point results in a change in the set of activated hyper-rectangles, and therefore a change in the set of memory cells participating in the CMAC output. The CMAC output is therefore stored in a distributed fashion, such that the output corresponding to any point in input space is derived from the value stored in a number of memory cells (hence the name associative memory). This provides generalisation.
In the adjacent image, there are two inputs to the CMAC, represented as a 2D space. Two quantising functions have been used to divide this space with two overlapping grids (one shown in heavier lines). A single input is shown near the middle, and this has activated two memory cells, corresponding to the shaded area. If another point occurs close to the one shown, it will share some of the same memory cells, providing generalisation.
The CMAC is trained by presenting pairs of input points and output values, and adjusting the weights in the activated cells by a proportion of the error observed at the output. This simple training algorithm has a proof of convergence.[3]
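A minimal one-dimensional Python sketch of this scheme, with the class and parameter names (TinyCMAC, n_tilings, n_cells, lr) invented for illustration: several offset tilings quantise the input, the output is the sum of one weight per tiling, and training spreads a share of the output error over the activated cells, in the spirit of the simple rule described above.

```python
import numpy as np

class TinyCMAC:
    """Minimal 1-D CMAC: overlapping offset tilings over [0, 1); an input
    activates one cell per tiling, and the output is the sum of those weights."""
    def __init__(self, n_tilings=4, n_cells=10, lr=0.2):
        self.n_tilings, self.n_cells, self.lr = n_tilings, n_cells, lr
        self.w = np.zeros((n_tilings, n_cells))

    def active_cells(self, x):
        # Each tiling is shifted by a fraction of one cell width.
        return [int((x + t / (self.n_tilings * self.n_cells)) * self.n_cells)
                % self.n_cells for t in range(self.n_tilings)]

    def predict(self, x):
        return sum(self.w[t, c] for t, c in enumerate(self.active_cells(x)))

    def train(self, x, target):
        error = target - self.predict(x)
        for t, c in enumerate(self.active_cells(x)):   # spread the correction
            self.w[t, c] += self.lr * error / self.n_tilings

cmac = TinyCMAC()
for _ in range(200):
    for x in np.linspace(0.0, 0.99, 20):
        cmac.train(x, np.sin(2 * np.pi * x))   # learn one period of a sine
print(round(cmac.predict(0.25), 2))            # approximately sin(pi/2) = 1.0
```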
It is normal to add a kernel function to the hyper-rectangle, so that points falling towards the edge of a hyper-rectangle have a smaller activation than those falling near the centre.[4]
One of the major problems cited in practical use of CMAC is the memory size required, which is directly related to the number of cells used. This is usually ameliorated by using a hash function, and only providing memory storage for the actual cells that are activated by inputs.
Initially, the least mean squares (LMS) method was employed to update the weights of a CMAC. Convergence of LMS training for CMAC is sensitive to the learning rate and can lead to divergence. In 2004,[5] a recursive least squares (RLS) algorithm was introduced to train CMAC online. It does not need a tuned learning rate, its convergence has been proved theoretically, and it can be guaranteed to converge in one step. The computational complexity of this RLS algorithm is O(N^3).
Based on QR decomposition, an algorithm (QRLS) has been further simplified to have O(N) complexity. Consequently, this reduces memory usage and time cost significantly. A parallel pipeline array structure for implementing this algorithm has been introduced.[6]
Overall, by utilizing the QRLS algorithm, CMAC neural network convergence can be guaranteed, and the weights of the nodes can be updated using one step of training. Its parallel pipeline array structure offers great potential for hardware implementation in large-scale industrial use.
Since the rectangular shape of CMAC receptive field functions produces a discontinuous, staircase-like function approximation, integrating CMAC with B-spline functions allows continuous CMAC to obtain any order of derivatives of the approximated functions.
In recent years, numerous studies have confirmed that by stacking several shallow structures into a single deep structure, the overall system could achieve better data representation, and, thus, more effectively deal with nonlinear and high complexity tasks. In 2018,[7]a deep CMAC (DCMAC) framework was proposed and a backpropagation algorithm was derived to estimate the DCMAC parameters. Experimental results of an adaptive noise cancellation task showed that the proposed DCMAC can achieve better noise cancellation performance when compared with that from the conventional single-layer CMAC. | https://en.wikipedia.org/wiki/Cerebellar_model_articulation_controller |
There are manytypes of artificial neural networks(ANN).
Artificial neural networks are computational models inspired by biological neural networks, and are used to approximate functions that are generally unknown. Particularly, they are inspired by the behaviour of neurons and the electrical signals they convey between input (such as from the eyes or nerve endings in the hand), processing, and output from the brain (such as reacting to light, touch, or heat). The way neurons semantically communicate is an area of ongoing research.[1][2][3][4] Most artificial neural networks bear only some resemblance to their more complex biological counterparts, but are very effective at their intended tasks (e.g. classification or segmentation).
Some artificial neural networks are adaptive systems and are used, for example, to model populations and environments, which constantly change.
Neural networks can be hardware-based (neurons are represented by physical components) or software-based (computer models), and can use a variety of topologies and learning algorithms.
In feedforward neural networks the information moves only forward, from the input layer through any hidden layers to the output layer, without cycles or loops. Feedforward networks can be constructed with various types of units, such as binary McCulloch–Pitts neurons, the simplest of which is the perceptron. Continuous neurons, frequently with sigmoidal activation, are used in the context of backpropagation.
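As a minimal sketch of the simplest such unit, the following Python function implements a single binary threshold neuron; the weights and threshold are hand-picked for illustration so that the unit computes logical AND.

```python
# One binary threshold unit (a perceptron): fires iff the weighted
# sum of its inputs exceeds the threshold.
def perceptron(inputs, weights, threshold):
    return 1 if sum(i * w for i, w in zip(inputs, weights)) > threshold else 0

# With these hand-picked weights the unit computes logical AND:
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", perceptron((a, b), (1.0, 1.0), 1.5))
```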
The Group Method of Data Handling (GMDH)[5] features fully automatic structural and parametric model optimization. The node activation functions are Kolmogorov–Gabor polynomials that permit additions and multiplications. It uses a deep multilayer perceptron with eight layers.[6] It is a supervised learning network that grows layer by layer, where each layer is trained by regression analysis. Useless items are detected using a validation set, and pruned through regularization. The size and depth of the resulting network depend on the task.[7]
An autoencoder, autoassociator or Diabolo network[8]: 19 is similar to the multilayer perceptron (MLP) – with an input layer, an output layer and one or more hidden layers connecting them. However, the output layer has the same number of units as the input layer. Its purpose is to reconstruct its own inputs (instead of emitting a target value). Therefore, autoencoders are unsupervised learning models. An autoencoder is used for unsupervised learning of efficient codings,[9][10] typically for the purpose of dimensionality reduction and for learning generative models of data.[11][12]
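A toy numpy sketch of this idea, with all sizes, data, and learning-rate values invented for illustration: a 6-unit input is squeezed through a 2-unit hidden code and trained, with plain gradient descent on squared reconstruction error, to reproduce its own input; no labels are involved.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data drawn from two 6-bit prototypes, so a 2-unit code suffices.
prototypes = np.array([[1, 1, 1, 0, 0, 0],
                       [0, 0, 0, 1, 1, 1]], dtype=float)
X = prototypes[rng.integers(0, 2, 100)]          # 100 examples, 6 features

W1 = rng.normal(0, 0.5, (2, 6))                  # encoder weights
W2 = rng.normal(0, 0.5, (6, 2))                  # decoder weights
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(2000):                            # gradient descent, no labels
    H = sigmoid(W1 @ X.T)                        # 2 x 100 hidden code (bottleneck)
    Y = W2 @ H                                   # 6 x 100 linear reconstruction
    E = Y - X.T                                  # reconstruction error
    W2 -= 0.05 * E @ H.T / len(X)
    W1 -= 0.05 * ((W2.T @ E) * H * (1 - H)) @ X / len(X)

print(np.mean((W2 @ sigmoid(W1 @ X.T) - X.T) ** 2))   # small after training
```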
A probabilistic neural network (PNN) is a four-layer feedforward neural network. The layers are input, pattern, summation, and output. In the PNN algorithm, the parent probability distribution function (PDF) of each class is approximated by a Parzen window and a non-parametric function. Then, using the PDF of each class, the class probability of a new input is estimated, and Bayes' rule is employed to allocate it to the class with the highest posterior probability.[13] It was derived from the Bayesian network[14] and a statistical algorithm called kernel Fisher discriminant analysis.[15] It is used for classification and pattern recognition.
A time delay neural network (TDNN) is a feedforward architecture for sequential data that recognizes features independent of sequence position. In order to achieve time-shift invariance, delays are added to the input so that multiple data points (points in time) are analyzed together.
It usually forms part of a larger pattern recognition system. It has been implemented using a perceptron network whose connection weights were trained with backpropagation (supervised learning).[16]
A convolutional neural network (CNN, or ConvNet or shift invariant or space invariant) is a class of deep network, composed of one or more convolutional layers with fully connected layers (matching those in typical ANNs) on top.[17][18] It uses tied weights and pooling layers, in particular max-pooling.[19] It is often structured via Fukushima's convolutional architecture.[20] CNNs are variations of multilayer perceptrons that use minimal preprocessing.[21] This architecture allows CNNs to take advantage of the 2D structure of input data.
Its unit connectivity pattern is inspired by the organization of the visual cortex. Units respond to stimuli in a restricted region of space known as the receptive field. Receptive fields partially overlap, over-covering the entire visual field. Unit response can be approximated mathematically by a convolution operation.[22]
CNNs are suitable for processing visual and other two-dimensional data.[23][24]They have shown superior results in both image and speech applications. They can be trained with standard backpropagation. CNNs are easier to train than other regular, deep, feed-forward neural networks and have many fewer parameters to estimate.[25]
Capsule neural networks (CapsNet) add structures called capsules to a CNN and reuse output from several capsules to form more stable (with respect to various perturbations) representations.[26]
Examples of applications in computer vision include DeepDream[27] and robot navigation.[28] They have wide applications in image and video recognition, recommender systems[29] and natural language processing.[30]
A deep stacking network (DSN)[31] (deep convex network) is based on a hierarchy of blocks of simplified neural network modules. It was introduced in 2011 by Deng and Yu.[32] It formulates the learning as a convex optimization problem with a closed-form solution, emphasizing the mechanism's similarity to stacked generalization.[33] Each DSN block is a simple module that is easy to train by itself in a supervised fashion without backpropagation for the entire blocks.[8]
Each block consists of a simplified multi-layer perceptron (MLP) with a single hidden layer. The hidden layer h has logistic sigmoidal units, and the output layer has linear units. Connections between these layers are represented by weight matrix U; input-to-hidden-layer connections have weight matrix W. Target vectors t form the columns of matrix T, and the input data vectors x form the columns of matrix X. The matrix of hidden units is H = σ(WᵀX). Modules are trained in order, so lower-layer weights W are known at each stage. The function performs the element-wise logistic sigmoid operation. Each block estimates the same final label class y, and its estimate is concatenated with the original input X to form the expanded input for the next block. Thus, the input to the first block contains the original data only, while downstream blocks' input adds the output of preceding blocks. Then learning the upper-layer weight matrix U given other weights in the network can be formulated as a convex optimization problem:
which has a closed-form solution.[31]
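In the article's notation, minimizing the squared error ||UᵀH − T||² over U gives the standard least-squares solution U = (HHᵀ)⁻¹HTᵀ. A numpy sketch of one block follows; the sizes and random data are invented for illustration, and the small ridge term lam is a common practical addition for numerical stability rather than part of the formulation above.

```python
import numpy as np

rng = np.random.default_rng(0)

# One DSN block in the article's notation: columns of X are input vectors,
# columns of T are target vectors, H = sigma(W^T X) with W fixed (random here).
X = rng.normal(size=(10, 200))           # 10 input features, 200 examples
T = rng.normal(size=(3, 200))            # 3 targets per example
W = rng.normal(size=(10, 50))            # input-to-hidden weights, assumed given
H = 1.0 / (1.0 + np.exp(-(W.T @ X)))     # 50 x 200 hidden-unit matrix

# Closed-form upper-layer weights minimizing ||U^T H - T||^2
# (lam adds a small ridge penalty for numerical stability):
lam = 1e-3
U = np.linalg.solve(H @ H.T + lam * np.eye(50), H @ T.T)   # 50 x 3

print(np.mean((U.T @ H - T) ** 2))       # training error of this single block
```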
Unlike other deep architectures, such as DBNs, the goal is not to discover the transformed feature representation. The structure of the hierarchy of this kind of architecture makes parallel learning straightforward, as a batch-mode optimization problem. In purely discriminative tasks, DSNs outperform conventional DBNs.
This architecture is a DSN extension. It offers two important improvements: it uses higher-order information from covariance statistics, and it transforms the non-convex problem of a lower layer to a convex sub-problem of an upper layer.[34] TDSNs use covariance statistics in a bilinear mapping from each of two distinct sets of hidden units in the same layer to predictions, via a third-order tensor.
While parallelization and scalability are not considered seriously in conventional DNNs,[35][36][37] all learning for DSNs and TDSNs is done in batch mode, to allow parallelization.[32][31] Parallelization allows scaling the design to larger (deeper) architectures and data sets.
The basic architecture is suitable for diverse tasks such as classification and regression.
Such a neural network is designed for the numerical solution of mathematical equations, such as differential, integral, delay, fractional and other equations. As input parameters, a PINN[38] accepts variables (spatial, temporal, and others) and transmits them through the network block. At the output, it produces an approximate solution and substitutes it into the mathematical model, considering the initial and boundary conditions. If the solution does not satisfy the required accuracy, it is rectified using backpropagation.
Besides PINN, there exist the deep neural operator (DeepONet)[39] and the Fourier neural operator (FNO).[40]
Regulatory feedback networks account for feedback found throughout brain recognition processing areas. Instead of recognition-inference being feedforward (inputs-to-output) as in neural networks, regulatory feedback assumes inference iteratively compares inputs to outputs, and neurons inhibit their own inputs, collectively evaluating how important and unique each input is for the next iteration. This ultimately finds neuron activations minimizing mutual input overlap, estimating distributions during recognition and offloading the need for complex neural network training and rehearsal.[41]
Regulatory feedback processing suggests an important real-time recognition processing role for the ubiquitous feedback found between pre- and post-synaptic neurons in the brain, which is meticulously maintained by homeostatic plasticity: it is found to be kept in balance through multiple, often redundant, mechanisms. Regulatory feedback also inherently shows neuroscience phenomena such as excitation-inhibition balance, network-wide bursting followed by quieting, and the human cognitive search phenomena of difficulty with similarity and pop-out when multiple inputs are present, without additional parameters.
A regulatory feedback network makes inferences using negative feedback.[42] The feedback is used to find the optimal activation of units. It is most similar to a non-parametric method but is different from K-nearest neighbor in that it mathematically emulates feedforward networks.
Radial basis functions are functions that have a distance criterion with respect to a center. Radial basis functions have been applied as a replacement for the sigmoidal hidden layer transfer characteristic in multi-layer perceptrons. RBF networks have two layers: in the first, input is mapped onto each RBF in the 'hidden' layer. The RBF chosen is usually a Gaussian. In regression problems the output layer is a linear combination of hidden layer values representing mean predicted output. The interpretation of this output layer value is the same as a regression model in statistics. In classification problems the output layer is typically a sigmoid function of a linear combination of hidden layer values, representing a posterior probability. Performance in both cases is often improved by shrinkage techniques, known as ridge regression in classical statistics. This corresponds to a prior belief in small parameter values (and therefore smooth output functions) in a Bayesian framework.
RBF networks have the advantage of avoiding local minima, unlike multi-layer perceptrons. This is because the only parameters that are adjusted in the learning process are the linear mapping from hidden layer to output layer. Linearity ensures that the error surface is quadratic and therefore has a single easily found minimum. In regression problems this can be found in one matrix operation. In classification problems the fixed non-linearity introduced by the sigmoid output function is most efficiently dealt with using iteratively re-weighted least squares.
RBF networks have the disadvantage of requiring good coverage of the input space by radial basis functions. RBF centres are determined with reference to the distribution of the input data, but without reference to the prediction task. As a result, representational resources may be wasted on areas of the input space that are irrelevant to the task. A common solution is to associate each data point with its own centre, although this can expand the linear system to be solved in the final layer and requires shrinkage techniques to avoid overfitting.
Associating each input datum with an RBF leads naturally to kernel methods such as support vector machines (SVM) and Gaussian processes (the RBF is the kernel function). All three approaches use a non-linear kernel function to project the input data into a space where the learning problem can be solved using a linear model. Like Gaussian processes, and unlike SVMs, RBF networks are typically trained in a maximum likelihood framework by maximizing the probability (minimizing the error). SVMs avoid overfitting by maximizing instead a margin. SVMs outperform RBF networks in most classification applications. In regression applications they can be competitive when the dimensionality of the input space is relatively small.
RBF neural networks are conceptually similar to K-nearest neighbor (k-NN) models. The basic idea is that similar inputs produce similar outputs.
Assume that each case in a training set has two predictor variables, x and y, and the target variable has two categories, positive and negative. Given a new case with predictor values x=6, y=5.1, how is the target variable computed?
The nearest neighbor classification performed for this example depends on how many neighboring points are considered. If 1-NN is used and the closest point is negative, then the new point should be classified as negative. Alternatively, if 9-NN classification is used and the closest 9 points are considered, then the effect of the surrounding 8 positive points may outweigh the closest 9th (negative) point.
An RBF network positions neurons in the space described by the predictor variables (x,y in this example). This space has as many dimensions as predictor variables. The Euclidean distance is computed from the new point to the center of each neuron, and a radial basis function (RBF, also called a kernel function) is applied to the distance to compute the weight (influence) for each neuron. The radial basis function is so named because the radius distance is the argument to the function.
The value for the new point is found by summing the output values of the RBF functions multiplied by weights computed for each neuron.
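A minimal numpy sketch of that computation for the two-predictor example above; the three centers, the shared spread, and the output weights are hand-picked, hypothetical values (a trained network would have learned them, as described below).

```python
import numpy as np

# Hypothetical RBF network for two predictor variables (x, y).
centers = np.array([[2.0, 2.0], [6.0, 6.0], [9.0, 3.0]])   # neuron centers
weights = np.array([-1.0, 1.5, 0.7])                        # output-layer weights
spread = 2.0                                                # shared radius

def rbf_output(point):
    d = np.linalg.norm(centers - point, axis=1)   # Euclidean distance to centers
    phi = np.exp(-(d / spread) ** 2)              # Gaussian radial basis values
    return float(weights @ phi)                   # weighted sum = network output

print(rbf_output(np.array([6.0, 5.1])))           # value for the new case x=6, y=5.1
```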
The radial basis function for a neuron has a center and a radius (also called a spread). The radius may be different for each neuron, and, in RBF networks generated by DTREG, the radius may be different in each dimension.
With larger spread, neurons at a distance from a point have a greater influence.
RBF networks have three layers: an input layer with one neuron for each predictor variable, a hidden layer with one RBF neuron for each centre, and a summation layer that adds up the weighted outputs of the hidden neurons to form the network output.
The following parameters are determined by the training process: the number of neurons in the hidden layer, the coordinates of the centre of each hidden-layer RBF function, the radius (spread) of each RBF function in each dimension, and the weights applied to the RBF function outputs as they pass to the summation layer.
Various methods have been used to train RBF networks. One approach first uses K-means clustering to find cluster centers which are then used as the centers for the RBF functions. However, K-means clustering is computationally intensive and it often does not generate the optimal number of centers. Another approach is to use a random subset of the training points as the centers.
DTREG uses a training algorithm that uses an evolutionary approach to determine the optimal center points and spreads for each neuron. It determines when to stop adding neurons to the network by monitoring the estimated leave-one-out (LOO) error and terminating when the LOO error begins to increase because of overfitting.
The computation of the optimal weights between the neurons in the hidden layer and the summation layer is done using ridge regression. An iterative procedure computes the optimal regularization Lambda parameter that minimizes the generalized cross-validation (GCV) error.
A GRNN is an associative memory neural network that is similar to the probabilistic neural network, but it is used for regression and approximation rather than classification.
A deep belief network (DBN) is a probabilistic, generative model made up of multiple hidden layers. It can be considered a composition of simple learning modules.[43]
A DBN can be used to generatively pre-train a deep neural network (DNN) by using the learned DBN weights as the initial DNN weights. Various discriminative algorithms can then tune these weights. This is particularly helpful when training data are limited, because poorly initialized weights can significantly hinder learning. These pre-trained weights end up in a region of the weight space that is closer to the optimal weights than random choices. This allows for both improved modeling and faster ultimate convergence.[44]
Recurrent neural networks (RNN) propagate data forward, but also backwards, from later processing stages to earlier stages. RNNs can be used as general sequence processors.
This architecture was developed in the 1980s. Its network creates a directed connection between every pair of units. Each has a time-varying, real-valued (more than just zero or one) activation (output). Each connection has a modifiable real-valued weight. Some of the nodes are called labeled nodes, some output nodes, the rest hidden nodes.
For supervised learning in discrete time settings, training sequences of real-valued input vectors become sequences of activations of the input nodes, one input vector at a time. At each time step, each non-input unit computes its current activation as a nonlinear function of the weighted sum of the activations of all units from which it receives connections. The system can explicitly activate (independent of incoming signals) some output units at certain time steps. For example, if the input sequence is a speech signal corresponding to a spoken digit, the final target output at the end of the sequence may be a label classifying the digit. For each sequence, its error is the sum of the deviations of all activations computed by the network from the corresponding target signals. For a training set of numerous sequences, the total error is the sum of the errors of all individual sequences.
To minimize total error, gradient descent can be used to change each weight in proportion to its derivative with respect to the error, provided the non-linear activation functions are differentiable. The standard method is called "backpropagation through time" or BPTT, a generalization of back-propagation for feedforward networks.[45][46] A more computationally expensive online variant is called "Real-Time Recurrent Learning" or RTRL.[47][48] Unlike BPTT, this algorithm is local in time but not local in space.[49][50] An online hybrid between BPTT and RTRL with intermediate complexity exists,[51][52] with variants for continuous time.[53] A major problem with gradient descent for standard RNN architectures is that error gradients vanish exponentially quickly with the size of the time lag between important events.[54][55] The long short-term memory architecture overcomes these problems.[56]
In reinforcement learning settings, no teacher provides target signals. Instead a fitness function, reward function or utility function is occasionally used to evaluate performance, which influences its input stream through output units connected to actuators that affect the environment. Variants of evolutionary computation are often used to optimize the weight matrix.
TheHopfield network(like similar attractor-based networks) is of historic interest although it is not a general RNN, as it is not designed to process sequences of patterns. Instead it requires stationary inputs. It is an RNN in which all connections are symmetric. It guarantees that it will converge. If the connections are trained usingHebbian learningthe Hopfield network can perform as robustcontent-addressable memory, resistant to connection alteration.
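A minimal Hopfield sketch under the usual conventions (bipolar +/-1 patterns, Hebbian storage, symmetric weights with zero diagonal); the names and the fixed step count are assumptions:

import numpy as np

def hopfield_train(patterns):
    # Hebbian rule: accumulate outer products of the stored patterns.
    n = patterns.shape[1]
    W = sum(np.outer(p, p) for p in patterns) / n
    np.fill_diagonal(W, 0.0)              # symmetric, no self-connections
    return W

def hopfield_recall(W, state, steps=10):
    state = state.copy()
    for _ in range(steps):
        for i in range(len(state)):       # asynchronous updates converge
            state[i] = 1 if W[i] @ state >= 0 else -1
    return state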
TheBoltzmann machinecan be thought of as a noisy Hopfield network. It is one of the first neural networks to demonstrate learning oflatent variables(hidden units). Boltzmann machine learning was at first slow to simulate, but the contrastive divergence algorithm speeds up training for Boltzmann machines andProducts of Experts.
The self-organizing map (SOM) usesunsupervised learning. A set of neurons learn to map points in an input space to coordinates in an output space. The input space can have different dimensions and topology from the output space, and SOM attempts to preserve these.
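One illustrative SOM update step, assuming a 2-D output grid and a Gaussian neighborhood (grid shape, learning rate and neighborhood width are made-up values):

import numpy as np

def som_step(weights, grid, x, lr=0.1, sigma=1.0):
    # weights: (n_neurons, input_dim); grid: (n_neurons, 2) map coordinates.
    bmu = np.argmin(((weights - x) ** 2).sum(axis=1))   # best-matching unit
    d2 = ((grid - grid[bmu]) ** 2).sum(axis=1)          # distance on the map
    h = np.exp(-d2 / (2 * sigma ** 2))                  # neighborhood factor
    weights += lr * h[:, None] * (x - weights)          # pull toward input
    return weights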
Learning vector quantization(LVQ) can be interpreted as a neural network architecture. Prototypical representatives of the classes, together with an appropriate distance measure, parameterize a distance-based classification scheme.
Simple recurrent networks have three layers, with the addition of a set of "context units" in the input layer. These units connect from the hidden layer or the output layer with a fixed weight of one.[57]At each time step, the input is propagated in a standard feedforward fashion, and then a backpropagation-like learning rule is applied (not performinggradient descent). The fixed back connections leave a copy of the previous values of the hidden units in the context units (since they propagate over the connections before the learning rule is applied).
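A sketch of one Elman-style step, where the context units hold a verbatim copy of the previous hidden state (all weight names are assumptions):

import numpy as np

def srn_step(x, context, W_x, W_c, W_out):
    hidden = np.tanh(W_x @ x + W_c @ context)   # standard feedforward pass
    output = W_out @ hidden
    new_context = hidden.copy()                 # copied back with weight one
    return output, new_context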
Reservoir computing is a computation framework that may be viewed as an extension ofneural networks.[58]Typically an input signal is fed into a fixed (random)dynamical systemcalled areservoirwhose dynamics map the input to a higher dimension. Areadoutmechanism is trained to map the reservoir to the desired output. Training is performed only at the readout stage.Liquid-state machines[59]are a type of reservoir computing.[60]
The echo state network (ESN) employs a sparsely connected random hidden layer. The weights of output neurons are the only part of the network that are trained. ESN are good at reproducing certaintime series.[61]
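A compact echo-state sketch: a fixed sparse random reservoir is run over the input, and only a linear readout would be trained on the collected states (sizes, sparsity and spectral radius are illustrative assumptions):

import numpy as np

rng = np.random.default_rng(0)
N = 200                                                    # reservoir size
W = rng.normal(size=(N, N)) * (rng.random((N, N)) < 0.05)  # sparse weights
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))            # spectral radius < 1
W_in = rng.normal(size=N)

def run_reservoir(u_seq):
    x, states = np.zeros(N), []
    for u in u_seq:                                        # scalar input signal
        x = np.tanh(W @ x + W_in * u)
        states.append(x)
    return np.array(states)        # a linear readout is fitted on these states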
The long short-term memory (LSTM)[56] avoids the vanishing gradient problem. It works even with long delays between inputs and can handle signals that mix low- and high-frequency components. LSTM RNNs outperformed other RNNs and other sequence learning methods such as HMMs in applications such as language learning[62] and connected handwriting recognition.[63]
Bi-directional RNN, or BRNN, use a finite sequence to predict or label each element of a sequence based on both the past and future context of the element.[64]This is done by adding the outputs of two RNNs: one processing the sequence from left to right, the other one from right to left. The combined outputs are the predictions of the teacher-given target signals. This technique proved to be especially useful when combined with LSTM.[65]
Hierarchical RNN connects elements in various ways to decompose hierarchical behavior into useful subprograms.[66][67]
Distinct from conventional neural networks, stochastic artificial neural networks are used as an approximation to random functions.
An RNN (often an LSTM) in which a series is decomposed into a number of scales, where every scale informs the primary length between two consecutive points. A first-order scale consists of a normal RNN, a second-order scale consists of all points separated by two indices, and so on. The Nth-order RNN connects the first and last nodes. The outputs from all the scales are treated as aCommittee of Machinesand the associated scores are used genetically for the next iteration.
Biological studies have shown that the human brain operates as a collection of small networks. This realization gave birth to the concept ofmodular neural networks, in which several small networks cooperate or compete to solve problems.
A committee of machines (CoM) is a collection of different neural networks that together "vote" on a given example. This generally gives a much better result than individual networks. Because neural networks suffer from local minima, starting with the same architecture and training but using randomly different initial weights often gives vastly different results.[citation needed]A CoM tends to stabilize the result.
The CoM is similar to the generalmachine learningbaggingmethod, except that the necessary variety of machines in the committee is obtained by training from different starting weights rather than training on different randomly selected subsets of the training data.
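A sketch of the voting step; the committee members are assumed to be already-trained models (callables) that started from different random initial weights:

import numpy as np

def committee_predict(X, members):
    # Each member maps inputs to class scores; the committee averages
    # the votes, which stabilizes the result across local minima.
    votes = np.array([m(X) for m in members])
    return votes.mean(axis=0)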
The associative neural network (ASNN) is an extension of the committee of machines that combines multiple feedforward neural networks with the k-nearest neighbor technique. It uses the correlation between ensemble responses as a measure of distance among the analyzed cases for the kNN. This corrects the bias of the neural network ensemble. An associative neural network has a memory that can coincide with the training set. If new data become available, the network instantly improves its predictive ability and provides data approximation (self-learns) without retraining. Another important feature of ASNN is the possibility to interpret neural network results by analysis of correlations between data cases in the space of models.[68]
A physical neural network includes electrically adjustable resistance material to simulate artificial synapses. Examples include theADALINEmemristor-based neural network.[69]Anoptical neural networkis a physical implementation of anartificial neural networkwithoptical components.
Unlike static neural networks, dynamic neural networks adapt their structure and/or parameters to the input during inference[70]showing time-dependent behaviour, such as transient phenomena and delay effects.
Dynamic neural networks in which the parameters may change over time are related to thefast weightsarchitecture (1987),[71]where one neural network outputs the weights of another neural network.
Cascade correlation is an architecture andsupervised learningalgorithm. Instead of just adjusting the weights in a network of fixed topology,[72]Cascade-Correlation begins with a minimal network, then automatically trains and adds new hidden units one by one, creating a multi-layer structure. Once a new hidden unit has been added to the network, its input-side weights are frozen. This unit then becomes a permanent feature-detector in the network, available for producing outputs or for creating other, more complex feature detectors. The Cascade-Correlation architecture has several advantages: It learns quickly, determines its own size and topology, retains the structures it has built even if the training set changes and requires nobackpropagation.
A neuro-fuzzy network is a fuzzy inference system (FIS) in the body of an artificial neural network. Depending on the FIS type, several layers simulate the processes involved in a fuzzy inference, such as fuzzification, inference, aggregation and defuzzification. Embedding an FIS in the general structure of an ANN has the benefit of using available ANN training methods to find the parameters of a fuzzy system.
Compositional pattern-producing networks (CPPNs) are a variation of artificial neural networks which differ in their set ofactivation functionsand how they are applied. While typical artificial neural networks often contain onlysigmoid functions(and sometimesGaussian functions), CPPNs can include both types of functions and many others. Furthermore, unlike typical artificial neural networks, CPPNs are applied across the entire space of possible inputs so that they can represent a complete image. Since they are compositions of functions, CPPNs in effect encode images at infinite resolution and can be sampled for a particular display at whatever resolution is optimal.
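A toy CPPN-style sketch: a hand-picked composition of mixed activation functions (sine, Gaussian, sigmoid) defined over the whole input plane, which can therefore be sampled at any resolution (real CPPNs evolve the composition rather than fixing it by hand):

import numpy as np

def cppn(x, y):
    # Fixed composition of a sine, a Gaussian and a sigmoid.
    gauss = np.exp(-(x ** 2 + y ** 2))
    return np.sin(3 * x) * gauss + 1 / (1 + np.exp(-y))

def render(res):
    xs = np.linspace(-1, 1, res)
    return cppn(xs[None, :], xs[:, None])    # res-by-res image

low, high = render(32), render(1024)         # the same "image", two resolutions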
Memory networks[73][74]incorporatelong-term memory. The long-term memory can be read and written to, with the goal of using it for prediction. These models have been applied in the context ofquestion answering(QA) where the long-term memory effectively acts as a (dynamic) knowledge base and the output is a textual response.[75]
Insparse distributed memoryorhierarchical temporal memory, the patterns encoded by neural networks are used as addresses forcontent-addressable memory, with "neurons" essentially serving as address encoders anddecoders. However, the early controllers of such memories were not differentiable.[76]
This type of network can add new patterns without re-training. It is done by creating a specific memory structure, which assigns each new pattern to an orthogonal plane using adjacently connected hierarchical arrays.[77]The network offers real-time pattern recognition and high scalability; this requires parallel processing and is thus best suited for platforms such aswireless sensor networks,grid computing, andGPGPUs.
Hierarchical temporal memory (HTM) models some of the structural andalgorithmicproperties of theneocortex. HTM is abiomimeticmodel based onmemory-predictiontheory. HTM is a method for discovering and inferring the high-level causes of observed input patterns and sequences, thus building an increasingly complex model of the world.
HTM combines existing ideas to mimic the neocortex with a simple design that provides many capabilities. HTM combines and extends approaches used inBayesian networks, spatial and temporal clustering algorithms, while using a tree-shaped hierarchy of nodes that is common inneural networks.
Holographic Associative Memory (HAM) is an analog, correlation-based, associative, stimulus-response system. Information is mapped onto the phase orientation of complex numbers. The memory is effective forassociativememorytasks, generalization and pattern recognition with changeable attention. Dynamic search localization is central to biological memory. In visual perception, humans focus on specific objects in a pattern. Humans can change focus from object to object without learning. HAM can mimic this ability by creating explicit representations for focus. It uses a bi-modal representation of pattern and a hologram-like complex spherical weight state-space. HAMs are useful for optical realization because the underlying hyper-spherical computations can be implemented with optical computation.[78]
Apart fromlong short-term memory(LSTM), other approaches also added differentiable memory to recurrent functions. For example:
Neural Turing machines (NTM)[86]couple LSTM networks to external memory resources, with which they can interact by attentional processes. The combined system is analogous to aTuring machinebut is differentiable end-to-end, allowing it to be efficiently trained bygradient descent. Preliminary results demonstrate that neural Turing machines can infer simple algorithms such as copying, sorting and associative recall from input and output examples.
Differentiable neural computers (DNC) are an NTM extension. They outperformed neural Turing machines, long short-term memory systems and memory networks on sequence-processing tasks.[87][88][89][90][91]
Approaches that represent previous experiences directly and use a similar experience to form a local model are often called nearest neighbour or k-nearest neighbors methods.[92] Deep learning is useful in semantic hashing,[93] where a deep graphical model is trained on the word-count vectors[94] obtained from a large set of documents. Documents are mapped to memory addresses in such a way that semantically similar documents are located at nearby addresses. Documents similar to a query document can then be found by accessing all the addresses that differ by only a few bits from the address of the query document. Unlike sparse distributed memory, which operates on 1000-bit addresses, semantic hashing works on 32- or 64-bit addresses found in a conventional computer architecture.
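A lookup sketch for semantic hashing, assuming the short binary codes have already been produced by a trained model (random codes stand in for them here):

import numpy as np

rng = np.random.default_rng(1)
codes = rng.integers(0, 2, size=(1000, 32))     # 32-bit document addresses

def hamming_neighbors(query, codes, max_bits=2):
    # Retrieve documents whose addresses differ in only a few bits.
    dists = (codes != query).sum(axis=1)
    return np.where(dists <= max_bits)[0]

similar = hamming_neighbors(codes[0], codes)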
Deep neural networks can be potentially improved by deepening and parameter reduction, while maintaining trainability. While training extremely deep (e.g., 1 million layers) neural networks might not be practical,CPU-like architectures such as pointer networks[95]and neural random-access machines[96]overcome this limitation by using externalrandom-access memoryand other components that typically belong to acomputer architecturesuch asregisters,ALUandpointers. Such systems operate onprobability distributionvectors stored in memory cells and registers. Thus, the model is fully differentiable and trains end-to-end. The key characteristic of these models is that their depth, the size of their short-term memory, and the number of parameters can be altered independently.
Encoder–decoder frameworks are based on neural networks that map highlystructuredinput to highly structured output. The approach arose in the context ofmachine translation,[97][98][99]where the input and output are written sentences in two natural languages. In that work, an LSTM RNN or CNN was used as an encoder to summarize a source sentence, and the summary was decoded using a conditional RNNlanguage modelto produce the translation.[100]These systems share building blocks: gated RNNs and CNNs and trained attention mechanisms.
Instantaneously trained neural networks(ITNN) were inspired by the phenomenon of short-term learning that seems to occur instantaneously. In these networks the weights of the hidden and the output layers are mapped directly from the training vector data. Ordinarily, they work on binary data, but versions for continuous data that require small additional processing exist.
Spiking neural networks(SNN) explicitly consider the timing of inputs. The network input and output are usually represented as a series of spikes (delta functionor more complex shapes). SNN can process information in thetime domain(signals that vary over time). They are often implemented as recurrent networks. SNN are also a form ofpulse computer.[101]
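A leaky integrate-and-fire sketch of spike-based processing (time step, time constant, threshold and input weight are illustrative assumptions):

def lif(spike_train, dt=1e-3, tau=0.02, thresh=1.0, w=0.5):
    v, out = 0.0, []
    for s in spike_train:              # s is 0 or 1 at each time step
        v += dt * (-v / tau) + w * s   # membrane leak plus weighted input
        if v >= thresh:
            out.append(1)              # emit a spike...
            v = 0.0                    # ...and reset the potential
        else:
            out.append(0)
    return out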
Spiking neural networks with axonal conduction delays exhibit polychronization, and hence could have a very large memory capacity.[102]
SNNs, and the temporal correlations of neural assemblies in such networks, have been used to model figure/ground separation and region linking in the visual system.
Spatial neural networks (SNNs) constitute a supercategory of tailored neural networks (NNs) for representing and predicting geographic phenomena. They generally improve both the statistical accuracy and reliability of a-spatial/classic NNs when handling geo-spatial datasets, and also of other spatial (statistical) models (e.g. spatial regression models) whenever the geo-spatial datasets' variables depict non-linear relations.[103][104][105] Examples of SNNs are the OSFA spatial neural networks, SVANNs and GWNNs.
Theneocognitronis a hierarchical, multilayered network that was modeled after thevisual cortex. It uses multiple types of units, (originally two, calledsimpleandcomplexcells), as a cascading model for use in pattern recognition tasks.[106][107][108]Local features are extracted by S-cells whose deformation is tolerated by C-cells. Local features in the input are integrated gradually and classified at higher layers.[109]Among the various kinds of neocognitron[110]are systems that can detect multiple patterns in the same input by using back propagation to achieveselective attention.[111]It has been used forpattern recognitiontasks and inspiredconvolutional neural networks.[112]
Compound hierarchical-deep models compose deep networks with non-parametricBayesian models.Featurescan be learned using deep architectures such asDBNs,[113]deep Boltzmann machines(DBM),[114]deep auto encoders,[115]convolutional variants,[116][117]ssRBMs,[118]deep coding networks,[119]DBNs with sparse feature learning,[120]RNNs,[121]conditional DBNs,[122]denoising autoencoders.[123]This provides a better representation, allowing faster learning and more accurate classification with high-dimensional data. However, these architectures are poor at learning novel classes with few examples, because all network units are involved in representing the input (adistributed representation) and must be adjusted together (highdegree of freedom). Limiting the degree of freedom reduces the number of parameters to learn, facilitating learning of new classes from few examples.Hierarchical Bayesian (HB)modelsallow learning from few examples, for example[124][125][126][127][128]forcomputer vision,statisticsandcognitive science.
Compound HD architectures aim to integrate characteristics of both HB and deep networks. The compound HDP-DBM architecture uses a hierarchical Dirichlet process (HDP) as a hierarchical model, incorporating a DBM architecture. It is a full generative model, generalized from abstract concepts flowing through the model layers, which is able to synthesize new examples in novel classes that look "reasonably" natural. All the levels are learned jointly by maximizing a joint log-probability score.[129]
In a DBM with three hidden layers, the probability of a visible input $\nu$ is:

$$p(\nu \mid \psi) = \frac{1}{Z}\sum_{h}\exp\!\left(\sum_{ij}W_{ij}^{(1)}\nu_i h_j^{(1)} + \sum_{jl}W_{jl}^{(2)}h_j^{(1)}h_l^{(2)} + \sum_{lm}W_{lm}^{(3)}h_l^{(2)}h_m^{(3)}\right),$$

where $h=\{h^{(1)},h^{(2)},h^{(3)}\}$ is the set of hidden units and $\psi=\{W^{(1)},W^{(2)},W^{(3)}\}$ are the model parameters, representing visible-hidden and hidden-hidden symmetric interaction terms.

A learned DBM is an undirected model that defines the joint distribution $P(\nu,h^{(1)},h^{(2)},h^{(3)})$. One way to express what has been learned is the conditional model $P(\nu,h^{(1)},h^{(2)}\mid h^{(3)})$ together with a prior term $P(h^{(3)})$.

Here $P(\nu,h^{(1)},h^{(2)}\mid h^{(3)})$ represents a conditional DBM model, which can be viewed as a two-layer DBM whose bias terms are given by the states of $h^{(3)}$:

$$P(\nu,h^{(1)},h^{(2)}\mid h^{(3)}) = \frac{1}{Z(\psi,h^{(3)})}\exp\!\left(\sum_{ij}W_{ij}^{(1)}\nu_i h_j^{(1)} + \sum_{jl}W_{jl}^{(2)}h_j^{(1)}h_l^{(2)} + \sum_{lm}W_{lm}^{(3)}h_l^{(2)}h_m^{(3)}\right).$$
A deep predictive coding network (DPCN) is apredictivecoding scheme that uses top-down information to empirically adjust the priors needed for a bottom-upinferenceprocedure by means of a deep, locally connected,generative model. This works by extracting sparsefeaturesfrom time-varying observations using a linear dynamical model. Then, a pooling strategy is used to learn invariant feature representations. These units compose to form a deep architecture and are trained bygreedylayer-wiseunsupervised learning. The layers constitute a kind ofMarkov chainsuch that the states at any layer depend only on the preceding and succeeding layers.
DPCNs predict the representation of a layer by using a top-down approach, combining information from the upper layer with temporal dependencies from previous states.[130]
DPCNs can be extended to form aconvolutional network.[130]
Multilayer kernel machines (MKM) are a way of learning highly nonlinear functions by iterative application of weakly nonlinear kernels. They usekernel principal component analysis(KPCA),[131]as a method for theunsupervisedgreedy layer-wise pre-training step of deep learning.[132]
Layer $\ell+1$ learns the representation of the previous layer $\ell$, extracting the $n_\ell$ principal components (PC) of the projection layer $\ell$'s output in the feature domain induced by the kernel. To reduce the dimensionality of the updated representation in each layer, a supervised strategy selects the most informative features among those extracted by KPCA.
Some drawbacks accompany the KPCA method for MKMs.
A more straightforward way to use kernel machines for deep learning was developed for spoken language understanding.[133]The main idea is to use a kernel machine to approximate a shallow neural net with an infinite number of hidden units, then use adeep stacking networkto splice the output of the kernel machine and the raw input in building the next, higher level of the kernel machine. The number of levels in the deep convex network is ahyper-parameterof the overall system, to be determined bycross validation. | https://en.wikipedia.org/wiki/Types_of_artificial_neural_networks#Dynamic |
Holographic associative memory (HAM) is an information storage and retrieval system based on the principles of holography. Holograms are made by using two beams of light, called a "reference beam" and an "object beam". They produce a pattern on the film that contains them both. Afterwards, by reproducing the reference beam, the hologram recreates a visual image of the original object. In theory, one could use the object beam to do the same thing: reproduce the original reference beam. In HAM, the pieces of information act like the two beams. Each can be used to retrieve the other from the pattern. It can be thought of as an artificial neural network which mimics the way the brain uses information. The information is presented in abstract form by a complex vector which may be expressed directly by a waveform possessing frequency and magnitude. This waveform is analogous to electrochemical impulses believed to transmit information between biological neuron cells.
HAM is part of the family of analog, correlation-based, associative, stimulus-response memories, where information is mapped onto the phase orientation of complex numbers. It can be considered a complex-valued artificial neural network. The holographic associative memory exhibits some remarkable characteristics. Holographs have been shown to be effective for associative memory tasks, generalization, and pattern recognition with changeable attention. The ability of dynamic search localization is central to natural memory.[1] For example, in visual perception, humans always tend to focus on specific objects in a pattern. Humans can effortlessly change the focus from object to object without requiring relearning. HAM provides a computational model which can mimic this ability by creating explicit representations for focus. At the heart of this memory lies a novel bi-modal representation of pattern and a hologram-like complex spherical weight state-space. Besides the usual advantages of associative computing, this technique also has excellent potential for fast optical realization, because the underlying hyper-spherical computations can be naturally implemented with optical computation.
It is based on the principle of information storage in the form of stimulus-response patterns, where information is represented by the phase-angle orientations of complex numbers on a Riemann surface.[2] A very large number of stimulus-response patterns may be superimposed, or "enfolded", on a single neural element. Stimulus-response associations may be both encoded and decoded in one non-iterative transformation. The mathematical basis requires no optimization of parameters or error backpropagation, unlike connectionist neural networks. The principal requirement is for stimulus patterns to be made symmetric or orthogonal in the complex domain. HAM typically employs sigmoid pre-processing, where raw inputs are orthogonalized and converted to Gaussian distributions. | https://en.wikipedia.org/wiki/Holographic_associative_memory |
There are manytypes of artificial neural networks(ANN).
Artificial neural networks are computational models inspired by biological neural networks, and are used to approximate functions that are generally unknown. In particular, they are inspired by the behaviour of neurons and the electrical signals they convey between input (such as from the eyes or nerve endings in the hand), processing, and output from the brain (such as reacting to light, touch, or heat). The way neurons semantically communicate is an area of ongoing research.[1][2][3][4] Most artificial neural networks bear only some resemblance to their more complex biological counterparts, but are very effective at their intended tasks (e.g. classification or segmentation).
Some artificial neural networks areadaptive systemsand are used for example tomodel populationsand environments, which constantly change.
Neural networks can be hardware- (neurons are represented by physical components) orsoftware-based(computer models), and can use a variety of topologies and learning algorithms.
In feedforward neural networks the information moves directly from input to output, layer by layer, without cycles or loops. Feedforward networks can be constructed with various types of units, such as binary McCulloch–Pitts neurons, the simplest of which is the perceptron. Continuous neurons, frequently with sigmoidal activation, are used in the context of backpropagation.
The Group Method of Data Handling (GMDH)[5]features fully automatic structural and parametric model optimization. The node activation functions areKolmogorov–Gabor polynomialsthat permit additions and multiplications. It uses a deep multilayerperceptronwith eight layers.[6]It is asupervised learningnetwork that grows layer by layer, where each layer is trained byregression analysis. Useless items are detected using avalidation set, and pruned throughregularization. The size and depth of the resulting network depends on the task.[7]
An autoencoder, autoassociator or Diabolo network[8]:19 is similar to the multilayer perceptron (MLP) – with an input layer, an output layer and one or more hidden layers connecting them. However, the output layer has the same number of units as the input layer. Its purpose is to reconstruct its own inputs (instead of emitting a target value). Therefore, autoencoders are unsupervised learning models. An autoencoder is used for unsupervised learning of efficient codings,[9][10] typically for the purpose of dimensionality reduction and for learning generative models of data.[11][12]
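A minimal autoencoder sketch reconstructing its own input through a narrow hidden layer (dimensions, learning rate and the plain gradient loop are assumptions):

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(256, 8))                   # 256 samples, 8 features
W1 = rng.normal(size=(8, 3)) * 0.1              # encoder to a 3-dim code
W2 = rng.normal(size=(3, 8)) * 0.1              # decoder back to 8 dims

for _ in range(2000):
    H = np.tanh(X @ W1)                         # hidden code
    R = H @ W2                                  # reconstruction of X
    G = 2 * (R - X) / len(X)                    # gradient of mean squared error
    dW2 = H.T @ G                               # backpropagate to the decoder
    dW1 = X.T @ ((G @ W2.T) * (1 - H ** 2))     # and through tanh to the encoder
    W1 -= 0.1 * dW1
    W2 -= 0.1 * dW2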
A probabilistic neural network (PNN) is a four-layer feedforward neural network. The layers are input, pattern, summation, and output. In the PNN algorithm, the parent probability distribution function (PDF) of each class is approximated by a Parzen window and a non-parametric function. Then, using the PDF of each class, the class probability of a new input is estimated and Bayes' rule is employed to allocate it to the class with the highest posterior probability.[13] It was derived from the Bayesian network[14] and a statistical algorithm called Kernel Fisher discriminant analysis.[15] It is used for classification and pattern recognition.
A time delay neural network (TDNN) is a feedforward architecture for sequential data that recognizesfeaturesindependent of sequence position. In order to achieve time-shift invariance, delays are added to the input so that multiple data points (points in time) are analyzed together.
It usually forms part of a larger pattern recognition system. It has been implemented using aperceptronnetwork whose connection weights were trained with back propagation (supervised learning).[16]
A convolutional neural network (CNN, or ConvNet, or shift invariant or space invariant) is a class of deep network, composed of one or more convolutional layers with fully connected layers (matching those in typical ANNs) on top.[17][18] It uses tied weights and pooling layers, in particular max-pooling.[19] It is often structured via Fukushima's convolutional architecture.[20] CNNs are variations of multilayer perceptrons that use minimal preprocessing.[21] This architecture allows CNNs to take advantage of the 2D structure of input data.
Its unit connectivity pattern is inspired by the organization of thevisual cortex. Units respond to stimuli in a restricted region of space known as thereceptive field. Receptive fields partially overlap, over-covering the entirevisual field. Unit response can be approximated mathematically by aconvolutionoperation.[22]
CNNs are suitable for processing visual and other two-dimensional data.[23][24]They have shown superior results in both image and speech applications. They can be trained with standard backpropagation. CNNs are easier to train than other regular, deep, feed-forward neural networks and have many fewer parameters to estimate.[25]
Capsule Neural Networks(CapsNet) add structures called capsules to a CNN and reuse output from several capsules to form more stable (with respect to various perturbations) representations.[26]
Examples of applications in computer vision includeDeepDream[27]androbot navigation.[28]They have wide applications inimage and video recognition,recommender systems[29]andnatural language processing.[30]
A deep stacking network (DSN)[31](deep convex network) is based on a hierarchy of blocks of simplified neural network modules. It was introduced in 2011 by Deng and Yu.[32]It formulates the learning as aconvex optimization problemwith aclosed-form solution, emphasizing the mechanism's similarity tostacked generalization.[33]Each DSN block is a simple module that is easy to train by itself in asupervisedfashion without backpropagation for the entire blocks.[8]
Each block consists of a simplified multi-layer perceptron (MLP) with a single hidden layer. The hidden layer h has logistic sigmoidal units, and the output layer has linear units. Connections between these layers are represented by weight matrix U; input-to-hidden-layer connections have weight matrix W. Target vectors t form the columns of matrix T, and the input data vectors x form the columns of matrix X. The matrix of hidden units is $H=\sigma(W^{T}X)$, where $\sigma$ performs the element-wise logistic sigmoid operation. Modules are trained in order, so lower-layer weights W are known at each stage. Each block estimates the same final label class y, and its estimate is concatenated with the original input X to form the expanded input for the next block. Thus, the input to the first block contains the original data only, while downstream blocks' input adds the output of preceding blocks. Learning the upper-layer weight matrix U given the other weights in the network can then be formulated as a convex optimization problem:

$$\min_{U^{T}} f = \|U^{T}H - T\|_{F}^{2},$$

which has the closed-form (least-squares) solution $U=(HH^{T})^{-1}HT^{T}$.[31]
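A numerical check of this closed-form step under the symbol conventions above (random stand-in data; a real block would use trained lower-layer weights W):

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 100))          # input vectors as columns
T = rng.normal(size=(5, 100))           # target vectors as columns
W = rng.normal(size=(20, 30))           # known lower-layer weights
H = 1 / (1 + np.exp(-(W.T @ X)))        # hidden units, logistic sigmoid
# Least-squares solution of min_U ||U^T H - T||_F^2:
U = np.linalg.solve(H @ H.T, H @ T.T)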
Unlike other deep architectures, such asDBNs, the goal is not to discover the transformedfeaturerepresentation. The structure of the hierarchy of this kind of architecture makes parallel learning straightforward, as a batch-mode optimization problem. In purelydiscriminative tasks, DSNs outperform conventional DBNs.
This architecture is a DSN extension. It offers two important improvements: it uses higher-order information fromcovariancestatistics, and it transforms thenon-convex problemof a lower-layer to a convex sub-problem of an upper-layer.[34]TDSNs use covariance statistics in abilinear mappingfrom each of two distinct sets of hidden units in the same layer to predictions, via a third-ordertensor.
While parallelization and scalability are not considered seriously in conventional DNNs,[35][36][37] all learning for DSNs and TDSNs is done in batch mode, to allow parallelization.[32][31] Parallelization allows scaling the design to larger (deeper) architectures and data sets.
The basic architecture is suitable for diverse tasks such asclassificationandregression.
Such a neural network is designed for the numerical solution of mathematical equations, such as differential, integral, delay, fractional and other equations. As input, a PINN[38] accepts variables (spatial, temporal, and others) and transmits them through the network block. At the output, it produces an approximate solution and substitutes it into the mathematical model, taking into account the initial and boundary conditions. If the solution does not satisfy the required accuracy, backpropagation is used to rectify the solution.
Besides PINN, there exists deep neural operator (DeepONet)[39]and Fourier neural operator (FNO).[40]
Regulatory feedback networks account for feedback found throughout brain recognition processing areas. Instead of recognition-inference being feedforward (inputs-to-output) as in standard neural networks, regulatory feedback assumes that inference iteratively compares inputs to outputs and that neurons inhibit their own inputs, collectively evaluating how important and unique each input is for the next iteration. This ultimately finds the neuron activations that minimize mutual input overlap, estimating distributions during recognition and offloading the need for complex neural network training and rehearsal.[41]
Regulatory feedback processing suggests an important real-time recognition processing role for ubiquitous feedback found between brain pre and post synaptic neurons, which is meticulously maintained byhomeostatic plasticity: found to be kept in balance through multiple, often redundant, mechanisms. RF also inherently shows neuroscience phenomena such as Excitation-Inhibition balance, network-wideburstingfollowed by quieting, and human cognitive search phenomena ofdifficulty with similarityand pop-out when multiple inputs are present, without additional parameters.
A regulatory feedback network makes inferences usingnegative feedback.[42]The feedback is used to find the optimal activation of units. It is most similar to anon-parametric methodbut is different fromK-nearest neighborin that it mathematically emulates feedforward networks.
Radial basis functions are functions that have a distance criterion with respect to a center. Radial basis functions have been applied as a replacement for the sigmoidal hidden layer transfer characteristic in multi-layer perceptrons. RBF networks have two layers: In the first, input is mapped onto each RBF in the 'hidden' layer. The RBF chosen is usually a Gaussian. In regression problems the output layer is a linear combination of hidden layer values representing mean predicted output. The interpretation of this output layer value is the same as aregression modelin statistics. In classification problems the output layer is typically asigmoid functionof a linear combination of hidden layer values, representing a posterior probability. Performance in both cases is often improved byshrinkagetechniques, known asridge regressionin classical statistics. This corresponds to a prior belief in small parameter values (and therefore smooth output functions) in aBayesianframework.
RBF networks have the advantage of avoiding local minima in the same way as multi-layer perceptrons. This is because the only parameters that are adjusted in the learning process are the linear mapping from hidden layer to output layer. Linearity ensures that the error surface is quadratic and therefore has a single easily found minimum. In regression problems this can be found in one matrix operation. In classification problems the fixed non-linearity introduced by the sigmoid output function is most efficiently dealt with usingiteratively re-weighted least squares.
RBF networks have the disadvantage of requiring good coverage of the input space by radial basis functions. RBF centres are determined with reference to the distribution of the input data, but without reference to the prediction task. As a result, representational resources may be wasted on areas of the input space that are irrelevant to the task. A common solution is to associate each data point with its own centre, although this can expand the linear system to be solved in the final layer and requires shrinkage techniques to avoidoverfitting.
Associating each input datum with an RBF leads naturally to kernel methods such assupport vector machines(SVM) and Gaussian processes (the RBF is thekernel function). All three approaches use a non-linear kernel function to project the input data into a space where the learning problem can be solved using a linear model. Like Gaussian processes, and unlike SVMs, RBF networks are typically trained in a maximum likelihood framework by maximizing the probability (minimizing the error). SVMs avoid overfitting by maximizing instead amargin. SVMs outperform RBF networks in most classification applications. In regression applications they can be competitive when the dimensionality of the input space is relatively small.
RBF neural networks are conceptually similar toK-nearest neighbor(k-NN) models. The basic idea is that similar inputs produce similar outputs.
Assume that each case in a training set has two predictor variables, x and y, and the target variable has two categories, positive and negative. Given a new case with predictor values x=6, y=5.1, how is the target variable computed?
The nearest-neighbor classification performed for this example depends on how many neighboring points are considered. If 1-NN is used and the closest point is negative, then the new point should be classified as negative. Alternatively, if 9-NN classification is used and the closest nine points are considered, the effect of the surrounding eight positive points may outweigh the closest (negative) point.
An RBF network positions neurons in the space described by the predictor variables (x,y in this example). This space has as many dimensions as predictor variables. The Euclidean distance is computed from the new point to the center of each neuron, and a radial basis function (RBF, also called a kernel function) is applied to the distance to compute the weight (influence) for each neuron. The radial basis function is so named because the radius distance is the argument to the function.
The value for the new point is found by summing the output values of the RBF functions multiplied by weights computed for each neuron.
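A numerical sketch of that weighted sum for the worked example (two made-up neurons; the centers, spreads and output weights are illustrative values, not fitted ones):

import numpy as np

centers = np.array([[5.0, 5.0], [7.0, 4.0]])     # neuron centers (x, y)
spreads = np.array([1.0, 1.5])                   # per-neuron radii
out_w = np.array([1.0, -1.0])                    # summation-layer weights

p = np.array([6.0, 5.1])                         # new point x=6, y=5.1
d = np.linalg.norm(centers - p, axis=1)          # Euclidean distances
phi = np.exp(-(d / spreads) ** 2)                # radial basis values
score = (phi * out_w).sum()                      # summed weighted output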
The radial basis function for a neuron has a center and a radius (also called a spread). The radius may be different for each neuron, and, in RBF networks generated by DTREG, the radius may be different in each dimension.
With larger spread, neurons at a distance from a point have a greater influence.
RBF networks have three layers:
The following parameters are determined by the training process:
Various methods have been used to train RBF networks. One approach first usesK-means clusteringto find cluster centers which are then used as the centers for the RBF functions. However, K-means clustering is computationally intensive and it often does not generate the optimal number of centers. Another approach is to use a random subset of the training points as the centers.
DTREG uses a training algorithm that uses an evolutionary approach to determine the optimal center points and spreads for each neuron. It determines when to stop adding neurons to the network by monitoring the estimated leave-one-out (LOO) error and terminating when the LOO error begins to increase because of overfitting.
The computation of the optimal weights between the neurons in the hidden layer and the summation layer is done using ridge regression. An iterative procedure computes the optimal regularization Lambda parameter that minimizes the generalized cross-validation (GCV) error.
A GRNN is an associative memory neural network that is similar to theprobabilistic neural networkbut it is used for regression and approximation rather than classification.
A deep belief network (DBN) is a probabilistic,generative modelmade up of multiple hidden layers. It can be considered acompositionof simple learning modules.[43]
A DBN can be used to generatively pre-train a deep neural network (DNN) by using the learned DBN weights as the initial DNN weights. Various discriminative algorithms can then tune these weights. This is particularly helpful when training data are limited, because poorly initialized weights can significantly hinder learning. These pre-trained weights end up in a region of the weight space that is closer to the optimal weights than random choices. This allows for both improved modeling and faster ultimate convergence.[44]
Recurrent neural networks(RNN) propagate data forward, but also backwards, from later processing stages to earlier stages. RNN can be used as general sequence processors.
This architecture was developed in the 1980s. Its network creates a directed connection between every pair of units. Each has a time-varying, real-valued (more than just zero or one) activation (output). Each connection has a modifiable real-valued weight. Some of the nodes are called labeled nodes, some output nodes, the rest hidden nodes.
Forsupervised learningin discrete time settings, training sequences of real-valued input vectors become sequences of activations of the input nodes, one input vector at a time. At each time step, each non-input unit computes its current activation as a nonlinear function of the weighted sum of the activations of all units from which it receives connections. The system can explicitly activate (independent of incoming signals) some output units at certain time steps. For example, if the input sequence is a speech signal corresponding to a spoken digit, the final target output at the end of the sequence may be a label classifying the digit. For each sequence, its error is the sum of the deviations of all activations computed by the network from the corresponding target signals. For a training set of numerous sequences, the total error is the sum of the errors of all individual sequences.
To minimize total error,gradient descentcan be used to change each weight in proportion to its derivative with respect to the error, provided the non-linear activation functions aredifferentiable. The standard method is called "backpropagation through time" or BPTT, a generalization of back-propagation for feedforward networks.[45][46]A more computationally expensive online variant is called "Real-Time Recurrent Learning" or RTRL.[47][48]Unlike BPTT this algorithm islocal in time but not local in space.[49][50]An online hybrid between BPTT and RTRL with intermediate complexity exists,[51][52]with variants for continuous time.[53]A major problem with gradient descent for standard RNN architectures is that error gradients vanish exponentially quickly with the size of the time lag between important events.[54][55]TheLong short-term memoryarchitecture overcomes these problems.[56]
Inreinforcement learningsettings, no teacher provides target signals. Instead afitness functionorreward functionorutility functionis occasionally used to evaluate performance, which influences its input stream through output units connected to actuators that affect the environment. Variants ofevolutionary computationare often used to optimize the weight matrix.
TheHopfield network(like similar attractor-based networks) is of historic interest although it is not a general RNN, as it is not designed to process sequences of patterns. Instead it requires stationary inputs. It is an RNN in which all connections are symmetric. It guarantees that it will converge. If the connections are trained usingHebbian learningthe Hopfield network can perform as robustcontent-addressable memory, resistant to connection alteration.
TheBoltzmann machinecan be thought of as a noisy Hopfield network. It is one of the first neural networks to demonstrate learning oflatent variables(hidden units). Boltzmann machine learning was at first slow to simulate, but the contrastive divergence algorithm speeds up training for Boltzmann machines andProducts of Experts.
The self-organizing map (SOM) usesunsupervised learning. A set of neurons learn to map points in an input space to coordinates in an output space. The input space can have different dimensions and topology from the output space, and SOM attempts to preserve these.
Learning vector quantization(LVQ) can be interpreted as a neural network architecture. Prototypical representatives of the classes parameterize, together with an appropriate distance measure, in a distance-based classification scheme.
Simple recurrent networks have three layers, with the addition of a set of "context units" in the input layer. These units connect from the hidden layer or the output layer with a fixed weight of one.[57]At each time step, the input is propagated in a standard feedforward fashion, and then a backpropagation-like learning rule is applied (not performinggradient descent). The fixed back connections leave a copy of the previous values of the hidden units in the context units (since they propagate over the connections before the learning rule is applied).
Reservoir computing is a computation framework that may be viewed as an extension ofneural networks.[58]Typically an input signal is fed into a fixed (random)dynamical systemcalled areservoirwhose dynamics map the input to a higher dimension. Areadoutmechanism is trained to map the reservoir to the desired output. Training is performed only at the readout stage.Liquid-state machines[59]are a type of reservoir computing.[60]
The echo state network (ESN) employs a sparsely connected random hidden layer. The weights of output neurons are the only part of the network that are trained. ESN are good at reproducing certaintime series.[61]
Thelong short-term memory(LSTM)[56]avoids thevanishing gradient problem. It works even when with long delays between inputs and can handle signals that mix low and high frequency components. LSTM RNN outperformed other RNN and other sequence learning methods such asHMMin applications such as language learning[62]and connected handwriting recognition.[63]
Bi-directional RNN, or BRNN, use a finite sequence to predict or label each element of a sequence based on both the past and future context of the element.[64]This is done by adding the outputs of two RNNs: one processing the sequence from left to right, the other one from right to left. The combined outputs are the predictions of the teacher-given target signals. This technique proved to be especially useful when combined with LSTM.[65]
Hierarchical RNN connects elements in various ways to decompose hierarchical behavior into useful subprograms.[66][67]
A district from conventional neural networks, stochastic artificial neural network used as an approximation to
random functions.
A RNN (often a LSTM) where a series is decomposed into a number of scales where every scale informs the primary length between two consecutive points. A first order scale consists of a normal RNN, a second order consists of all points separated by two indices and so on. The Nth order RNN connects the first and last node. The outputs from all the various scales are treated as aCommittee of Machinesand the associated scores are used genetically for the next iteration.
Biological studies have shown that the human brain operates as a collection of small networks. This realization gave birth to the concept ofmodular neural networks, in which several small networks cooperate or compete to solve problems.
A committee of machines (CoM) is a collection of different neural networks that together "vote" on a given example. This generally gives a much better result than individual networks. Because neural networks suffer from local minima, starting with the same architecture and training but using randomly different initial weights often gives vastly different results.[citation needed]A CoM tends to stabilize the result.
The CoM is similar to the generalmachine learningbaggingmethod, except that the necessary variety of machines in the committee is obtained by training from different starting weights rather than training on different randomly selected subsets of the training data.
The associative neural network (ASNN) is an extension of committee of machines that combines multiple feedforward neural networks and the k-nearest neighbor technique. It uses the correlation between ensemble responses as a measure of distance amid the analyzed cases for the kNN. This corrects the Bias of the neural network ensemble. An associative neural network has a memory that can coincide with the training set. If new data become available, the network instantly improves its predictive ability and provides data approximation (self-learns) without retraining. Another important feature of ASNN is the possibility to interpret neural network results by analysis of correlations between data cases in the space of models.[68]
A physical neural network includes electrically adjustable resistance material to simulate artificial synapses. Examples include theADALINEmemristor-based neural network.[69]Anoptical neural networkis a physical implementation of anartificial neural networkwithoptical components.
Unlike static neural networks, dynamic neural networks adapt their structure and/or parameters to the input during inference[70]showing time-dependent behaviour, such as transient phenomena and delay effects.
Dynamic neural networks in which the parameters may change over time are related to thefast weightsarchitecture (1987),[71]where one neural network outputs the weights of another neural network.
Cascade correlation is an architecture andsupervised learningalgorithm. Instead of just adjusting the weights in a network of fixed topology,[72]Cascade-Correlation begins with a minimal network, then automatically trains and adds new hidden units one by one, creating a multi-layer structure. Once a new hidden unit has been added to the network, its input-side weights are frozen. This unit then becomes a permanent feature-detector in the network, available for producing outputs or for creating other, more complex feature detectors. The Cascade-Correlation architecture has several advantages: It learns quickly, determines its own size and topology, retains the structures it has built even if the training set changes and requires nobackpropagation.
A neuro-fuzzy network is afuzzyinference systemin the body of an artificial neural network. Depending on the FIS type, several layers simulate the processes involved in a fuzzy inference-likefuzzification, inference, aggregation anddefuzzification. Embedding an FIS in a general structure of an ANN has the benefit of using available ANN training methods to find the parameters of a fuzzy system.
Compositional pattern-producing networks (CPPNs) are a variation of artificial neural networks which differ in their set ofactivation functionsand how they are applied. While typical artificial neural networks often contain onlysigmoid functions(and sometimesGaussian functions), CPPNs can include both types of functions and many others. Furthermore, unlike typical artificial neural networks, CPPNs are applied across the entire space of possible inputs so that they can represent a complete image. Since they are compositions of functions, CPPNs in effect encode images at infinite resolution and can be sampled for a particular display at whatever resolution is optimal.
Memory networks[73][74]incorporatelong-term memory. The long-term memory can be read and written to, with the goal of using it for prediction. These models have been applied in the context ofquestion answering(QA) where the long-term memory effectively acts as a (dynamic) knowledge base and the output is a textual response.[75]
Insparse distributed memoryorhierarchical temporal memory, the patterns encoded by neural networks are used as addresses forcontent-addressable memory, with "neurons" essentially serving as address encoders anddecoders. However, the early controllers of such memories were not differentiable.[76]
This type of network can add new patterns without re-training. It is done by creating a specific memory structure, which assigns each new pattern to an orthogonal plane using adjacently connected hierarchical arrays.[77]The network offers real-time pattern recognition and high scalability; this requires parallel processing and is thus best suited for platforms such aswireless sensor networks,grid computing, andGPGPUs.
Hierarchical temporal memory (HTM) models some of the structural andalgorithmicproperties of theneocortex. HTM is abiomimeticmodel based onmemory-predictiontheory. HTM is a method for discovering and inferring the high-level causes of observed input patterns and sequences, thus building an increasingly complex model of the world.
HTM combines existing ideas to mimic the neocortex with a simple design that provides many capabilities. HTM combines and extends approaches used inBayesian networks, spatial and temporal clustering algorithms, while using a tree-shaped hierarchy of nodes that is common inneural networks.
Holographic Associative Memory (HAM) is an analog, correlation-based, associative, stimulus-response system. Information is mapped onto the phase orientation of complex numbers. The memory is effective forassociativememorytasks, generalization and pattern recognition with changeable attention. Dynamic search localization is central to biological memory. In visual perception, humans focus on specific objects in a pattern. Humans can change focus from object to object without learning. HAM can mimic this ability by creating explicit representations for focus. It uses a bi-modal representation of pattern and a hologram-like complex spherical weight state-space. HAMs are useful for optical realization because the underlying hyper-spherical computations can be implemented with optical computation.[78]
Apart fromlong short-term memory(LSTM), other approaches also added differentiable memory to recurrent functions. For example:
Neural Turing machines (NTM)[86]couple LSTM networks to external memory resources, with which they can interact by attentional processes. The combined system is analogous to aTuring machinebut is differentiable end-to-end, allowing it to be efficiently trained bygradient descent. Preliminary results demonstrate that neural Turing machines can infer simple algorithms such as copying, sorting and associative recall from input and output examples.
Differentiable neural computers(DNC) are an NTM extension. They out-performed Neural turing machines,long short-term memorysystems and memory networks on sequence-processing tasks.[87][88][89][90][91]
Approaches that represent previous experiences directly anduse a similar experience to form a local modelare often callednearest neighbourork-nearest neighborsmethods.[92]Deep learning is useful in semantic hashing[93]where a deepgraphical modelthe word-count vectors[94]obtained from a large set of documents.[clarification needed]Documents are mapped to memory addresses in such a way that semantically similar documents are located at nearby addresses. Documents similar to a query document can then be found by accessing all the addresses that differ by only a few bits from the address of the query document. Unlikesparse distributed memorythat operates on 1000-bit addresses, semantic hashing works on 32 or 64-bit addresses found in a conventional computer architecture.
Deep neural networks can be potentially improved by deepening and parameter reduction, while maintaining trainability. While training extremely deep (e.g., 1 million layers) neural networks might not be practical,CPU-like architectures such as pointer networks[95]and neural random-access machines[96]overcome this limitation by using externalrandom-access memoryand other components that typically belong to acomputer architecturesuch asregisters,ALUandpointers. Such systems operate onprobability distributionvectors stored in memory cells and registers. Thus, the model is fully differentiable and trains end-to-end. The key characteristic of these models is that their depth, the size of their short-term memory, and the number of parameters can be altered independently.
Encoder–decoder frameworks are based on neural networks that map highlystructuredinput to highly structured output. The approach arose in the context ofmachine translation,[97][98][99]where the input and output are written sentences in two natural languages. In that work, an LSTM RNN or CNN was used as an encoder to summarize a source sentence, and the summary was decoded using a conditional RNNlanguage modelto produce the translation.[100]These systems share building blocks: gated RNNs and CNNs and trained attention mechanisms.
Instantaneously trained neural networks (ITNN) were inspired by the phenomenon of short-term learning that seems to occur instantaneously. In these networks the weights of the hidden and the output layers are mapped directly from the training vector data. Ordinarily, they work on binary data, but versions for continuous data that require small additional processing exist.
Spiking neural networks (SNN) explicitly consider the timing of inputs. The network input and output are usually represented as a series of spikes (delta functions or more complex shapes). SNN can process information in the time domain (signals that vary over time). They are often implemented as recurrent networks. SNN are also a form of pulse computer.[101]
Spiking neural networks with axonal conduction delays exhibit polychronization, and hence could have a very large memory capacity.[102]
SNNs and the temporal correlations of neural assemblies in such networks have been used to model figure/ground separation and region linking in the visual system.
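Most SNN simulations are built from a neuron model driven by an input current over time. The sketch below simulates a single leaky integrate-and-fire neuron; the constants are arbitrary illustrative values, and practical SNNs add synapses, conduction delays and learning rules:

```python
# Leaky integrate-and-fire neuron: membrane voltage integrates input current,
# leaks toward rest, and emits a spike (then resets) on crossing threshold.
import numpy as np

dt, T = 1e-4, 0.5                  # time step and duration in seconds
tau, v_rest, v_thresh, v_reset = 0.02, -65.0, -50.0, -65.0
steps = int(T / dt)
i_in = 20.0 * (np.arange(steps) * dt > 0.1)   # step current switched on at 100 ms

v, spikes = v_rest, []
for t in range(steps):
    v += (-(v - v_rest) + i_in[t]) / tau * dt  # leaky integration
    if v >= v_thresh:                          # threshold crossing -> spike
        spikes.append(t * dt)
        v = v_reset                            # reset after the spike
print(f"{len(spikes)} spikes, first at {spikes[0]:.4f} s")
```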
Spatial neural networks (SNNs) constitute a supercategory of tailored neural networks (NNs) for representing and predicting geographic phenomena. They generally improve both the statistical accuracy and reliability of the a-spatial/classic NNs whenever they handle geo-spatial datasets, and also of other spatial (statistical) models (e.g. spatial regression models) whenever the geo-spatial datasets' variables depict non-linear relations.[103][104][105] Examples of SNNs are the OSFA spatial neural networks, SVANNs and GWNNs.
The neocognitron is a hierarchical, multilayered network that was modeled after the visual cortex. It uses multiple types of units (originally two, called simple and complex cells) as a cascading model for use in pattern recognition tasks.[106][107][108] Local features are extracted by S-cells, whose deformation is tolerated by C-cells. Local features in the input are integrated gradually and classified at higher layers.[109] Among the various kinds of neocognitron[110] are systems that can detect multiple patterns in the same input by using back propagation to achieve selective attention.[111] It has been used for pattern recognition tasks and inspired convolutional neural networks.[112]
Compound hierarchical-deep models compose deep networks with non-parametric Bayesian models. Features can be learned using deep architectures such as DBNs,[113] deep Boltzmann machines (DBM),[114] deep autoencoders,[115] convolutional variants,[116][117] ssRBMs,[118] deep coding networks,[119] DBNs with sparse feature learning,[120] RNNs,[121] conditional DBNs,[122] and denoising autoencoders.[123] This provides a better representation, allowing faster learning and more accurate classification with high-dimensional data. However, these architectures are poor at learning novel classes with few examples, because all network units are involved in representing the input (a distributed representation) and must be adjusted together (a high degree of freedom). Limiting the degrees of freedom reduces the number of parameters to learn, facilitating learning of new classes from few examples. Hierarchical Bayesian (HB) models allow learning from few examples, for example[124][125][126][127][128] in computer vision, statistics and cognitive science.
Compound HD architectures aim to integrate characteristics of both HB and deep networks. The compound HDP-DBM architecture combines a hierarchical Dirichlet process (HDP) as a hierarchical model with a DBM architecture. It is a full generative model, generalized from abstract concepts flowing through the model layers, which is able to synthesize new examples in novel classes that look "reasonably" natural. All the levels are learned jointly by maximizing a joint log-probability score.[129]
In a DBM with three hidden layers, the probability of a visible input ν is:

{\displaystyle p({\boldsymbol {\nu }},\psi )={\frac {1}{Z}}\sum _{h}\exp \left(\sum _{ij}W_{ij}^{(1)}\nu _{i}h_{j}^{1}+\sum _{j\ell }W_{j\ell }^{(2)}h_{j}^{1}h_{\ell }^{2}+\sum _{\ell m}W_{\ell m}^{(3)}h_{\ell }^{2}h_{m}^{3}\right)}
where {\displaystyle {\boldsymbol {h}}=\{{\boldsymbol {h}}^{(1)},{\boldsymbol {h}}^{(2)},{\boldsymbol {h}}^{(3)}\}} is the set of hidden units, and {\displaystyle \psi =\{{\boldsymbol {W}}^{(1)},{\boldsymbol {W}}^{(2)},{\boldsymbol {W}}^{(3)}\}} are the model parameters, representing visible–hidden and hidden–hidden symmetric interaction terms.
A learned DBM model is an undirected model that defines the joint distribution {\displaystyle P(\nu ,h^{1},h^{2},h^{3})}. One way to express what has been learned is the conditional model {\displaystyle P(\nu ,h^{1},h^{2}\mid h^{3})} and a prior term {\displaystyle P(h^{3})}.
Here {\displaystyle P(\nu ,h^{1},h^{2}\mid h^{3})} represents a conditional DBM model, which can be viewed as a two-layer DBM but with bias terms given by the states of {\displaystyle h^{3}}:

{\displaystyle P(\nu ,h^{1},h^{2}\mid h^{3})={\frac {1}{Z(\psi ,h^{3})}}\exp \left(\sum _{ij}W_{ij}^{(1)}\nu _{i}h_{j}^{1}+\sum _{j\ell }W_{j\ell }^{(2)}h_{j}^{1}h_{\ell }^{2}+\sum _{\ell m}W_{\ell m}^{(3)}h_{\ell }^{2}h_{m}^{3}\right)}
A deep predictive coding network (DPCN) is a predictive coding scheme that uses top-down information to empirically adjust the priors needed for a bottom-up inference procedure by means of a deep, locally connected, generative model. This works by extracting sparse features from time-varying observations using a linear dynamical model. Then, a pooling strategy is used to learn invariant feature representations. These units compose to form a deep architecture and are trained by greedy layer-wise unsupervised learning. The layers constitute a kind of Markov chain such that the states at any layer depend only on the preceding and succeeding layers.
DPCNs predict the representation of the layer in a top-down manner, using the information in the upper layer and temporal dependencies from previous states.[130]
DPCNs can be extended to form a convolutional network.[130]
Multilayer kernel machines (MKM) are a way of learning highly nonlinear functions by iterative application of weakly nonlinear kernels. They use kernel principal component analysis (KPCA)[131] as a method for the unsupervised greedy layer-wise pre-training step of deep learning.[132]
Layer {\displaystyle \ell +1} learns the representation of the previous layer {\displaystyle \ell }, extracting the {\displaystyle n_{\ell }} principal components (PCs) of the projection of layer {\displaystyle \ell }'s output in the feature domain induced by the kernel. To reduce the dimensionality of the updated representation in each layer, a supervised strategy selects the most informative features among those extracted by KPCA; the procedure then repeats for the next layer, as in the sketch below.
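The following sketch illustrates this layer-wise recipe with scikit-learn; KPCA plus an ANOVA-based filter stand in here for the exact feature-selection strategy of the cited work, and all constants are arbitrary:

```python
# Layer-wise MKM-style stacking: unsupervised KPCA followed by supervised
# feature selection, repeated per layer.
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.feature_selection import SelectKBest, f_classif

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 20))
y = (X[:, 0] * X[:, 1] > 0).astype(int)   # a nonlinear toy target

layer_input = X
for layer in range(3):
    kpca = KernelPCA(n_components=10, kernel="rbf")
    features = kpca.fit_transform(layer_input)         # unsupervised step
    selector = SelectKBest(f_classif, k=5)
    layer_input = selector.fit_transform(features, y)  # supervised pruning
    print(f"layer {layer + 1}: {layer_input.shape[1]} features kept")
```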
Some drawbacks accompany the KPCA method for MKMs.
A more straightforward way to use kernel machines for deep learning was developed for spoken language understanding.[133] The main idea is to use a kernel machine to approximate a shallow neural net with an infinite number of hidden units, then use a deep stacking network to splice the output of the kernel machine and the raw input in building the next, higher level of the kernel machine. The number of levels in the deep convex network is a hyper-parameter of the overall system, to be determined by cross-validation. | https://en.wikipedia.org/wiki/Types_of_artificial_neural_networks#Memory_networks
The memory-prediction framework is a theory of brain function created by Jeff Hawkins and described in his 2004 book On Intelligence. This theory concerns the role of the mammalian neocortex and its associations with the hippocampi and the thalamus in matching sensory inputs to stored memory patterns and how this process leads to predictions of what will happen in the future.
The theory is motivated by the observed similarities between the brain structures (especially neocortical tissue) that are used for a wide range of behaviours available to mammals. The theory posits that the remarkably uniform physical arrangement of cortical tissue reflects a single principle or algorithm which underlies all cortical information processing. The basic processing principle is hypothesized to be a feedback/recall loop which involves both cortical and extra-cortical participation (the latter from the thalamus and the hippocampi in particular).
The central concept of the memory-prediction framework is that bottom-up inputs are matched in a hierarchy of recognition, and evoke a series of top-down expectations encoded as potentiations. These expectations interact with the bottom-up signals to both analyse those inputs and generate predictions of subsequent expected inputs. Each hierarchy level remembers frequently observed temporal sequences of input patterns and generates labels or 'names' for these sequences. When an input sequence matches a memorized sequence at a given level of the hierarchy, a label or 'name' is propagated up the hierarchy, thus eliminating details at higher levels and enabling them to learn higher-order sequences. This process produces increased invariance at higher levels. Higher levels predict future input by matching partial sequences and projecting their expectations to the lower levels. However, when a mismatch between input and memorized/predicted sequences occurs, a more complete representation propagates upwards. This causes alternative 'interpretations' to be activated at higher levels, which in turn generates other predictions at lower levels.
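A deliberately simplified sketch of one such hierarchy level follows; it is an illustrative toy, not Hawkins' actual algorithm. The level memorizes fixed-length input sequences and, on a match, passes a compact 'name' upward instead of the raw details:

```python
# Toy hierarchy level: memorize sequences, propagate a 'name' on recognition,
# and propagate the full (unexpected) detail on a mismatch.
class Level:
    def __init__(self, seq_len):
        self.seq_len = seq_len
        self.names = {}      # memorized sequence -> name
        self.buffer = []

    def observe(self, symbol):
        """Consume one bottom-up symbol; return a name on a match,
        or the raw (unexpected) sequence on a mismatch."""
        self.buffer.append(symbol)
        if len(self.buffer) < self.seq_len:
            return None                    # still accumulating
        seq = tuple(self.buffer)
        self.buffer = []
        if seq in self.names:
            return self.names[seq]         # recognized: send the name upward
        name = f"seq{len(self.names)}"
        self.names[seq] = name             # learn the new sequence
        return seq                         # mismatch: full detail goes up

level = Level(seq_len=3)
for s in "abcabcabd":
    out = level.observe(s)
    if out is not None:
        print(out)
# prints ('a','b','c') the first time, 'seq0' the second, then the
# surprising ('a','b','d') propagates in full
```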
Consider, for example, the process of vision. Bottom-up information starts as low-level retinal signals (indicating the presence of simple visual elements and contrasts). At higher levels of the hierarchy, increasingly meaningful information is extracted, regarding the presence of lines, regions, motions, etc. Even further up the hierarchy, activity corresponds to the presence of specific objects, and then to behaviours of these objects. Top-down information fills in details about the recognized objects, and also about their expected behaviour as time progresses.
The sensory hierarchy induces a number of differences between the various levels. As one moves up the hierarchy, representations span greater spatial and temporal extent and become increasingly stable and invariant.
The relationship between sensory and motor processing is an important aspect of the basic theory. It is proposed that the motor areas of the cortex consist of a behavioural hierarchy similar to the sensory hierarchy, with the lowest levels consisting of explicit motor commands to musculature and the highest levels corresponding to abstract prescriptions (e.g. 'resize the browser'). The sensory and motor hierarchies are tightly coupled, with behaviour giving rise to sensory expectations and sensory perceptions driving motor processes.
Finally, it is important to note that all the memories in the cortical hierarchy have to be learnt; this information is not pre-wired in the brain. Hence, the process of extracting this representation from the flow of inputs and behaviours is theorized as a process that happens continually during cognition.
Hawkins has extensive training as an electrical engineer. Another way to describe the theory (hinted at in his book) is as a learning hierarchy of feed-forward stochastic state machines. In this view, the brain is analyzed as an encoding problem, not too dissimilar from future-predicting error-correction codes. The hierarchy is a hierarchy of abstraction, with the higher-level machines' states representing more abstract conditions or events, and these states predisposing lower-level machines to perform certain transitions. The lower-level machines model limited domains of experience, or control or interpret sensors or effectors. The whole system actually controls the organism's behavior. Since the state machine is "feed forward", the organism responds to future events predicted from past data. Since it is hierarchical, the system exhibits behavioral flexibility, easily producing new sequences of behavior in response to new sensory data. Since the system learns, the new behavior adapts to changing conditions.
That is, the evolutionary purpose of the brain is to predict the future, in admittedly limited ways, so as to change it.
The hierarchies described above are theorized to occur primarily in the mammalian neocortex. In particular, the neocortex is assumed to consist of a large number of columns (as surmised also by Vernon Benjamin Mountcastle from anatomical and theoretical considerations). Each column is attuned to a particular feature at a given level in a hierarchy. It receives bottom-up inputs from lower levels, and top-down inputs from higher levels. (Other columns at the same level also feed into a given column, and serve mostly to inhibit the activation of exclusive representations.) When an input is recognized, that is, when acceptable agreement is obtained between the bottom-up and top-down sources, a column generates outputs which in turn propagate to both lower and higher levels.
These processes map well to specific layers within the mammalian cortex. (The cortical layers should not be confused with different levels of the processing hierarchy: all the layers in a single column participate as one element in a single hierarchical level.) Bottom-up input arrives at layer 4 (L4), whence it propagates to L2 and L3 for recognition of the invariant content. Top-down activation arrives at L2 and L3 via L1 (the mostly axonal layer that distributes activation locally across columns). L2 and L3 compare bottom-up and top-down information, and generate either the invariant 'names' when sufficient match is achieved, or the more variable signals that occur when this fails. These signals are propagated up the hierarchy (via L5) and also down the hierarchy (via L6 and L1).
To account for storage and recognition of sequences of patterns, a combination of two processes is suggested. The nonspecific thalamus acts as a 'delay line': L5 activates this brain area, which re-activates L1 after a slight delay. Thus, the output of one column generates L1 activity, which will coincide with the input to a column which is temporally subsequent within a sequence. This time ordering operates in conjunction with the higher-level identification of the sequence, which does not change in time; hence, activation of the sequence representation causes the lower-level components to be predicted one after the other. (Besides this role in sequencing, the thalamus is also active as a sensory waystation; these roles apparently involve distinct regions of this anatomically non-uniform structure.)
Another anatomically diverse brain structure which is hypothesized to play an important role in hierarchical cognition is the hippocampus. It is well known that damage to both hippocampi impairs the formation of long-term declarative memory; individuals with such damage are unable to form new memories of an episodic nature, although they can recall earlier memories without difficulties and can also learn new skills. In the current theory, the hippocampi are thought of as the top level of the cortical hierarchy; they are specialized to retain memories of events that propagate all the way to the top. As such events fit into predictable patterns, they become memorizable at lower levels in the hierarchy. (Such movement of memories down the hierarchy is, incidentally, a general prediction of the theory.) Thus, the hippocampi continually memorize 'unexpected' events (that is, those not predicted at lower levels); if they are damaged, the entire process of memorization through the hierarchy is compromised.
In 2016 Hawkins hypothesized that cortical columns did not just capture a sensation, but also the relative location of that sensation, in three dimensions rather than two (situated capture), in relation to what was around it.[1] "When the brain builds a model of the world, everything has a location relative to everything else"[1] —Jeff Hawkins.
Some neuroscience research with animals supports the idea that the hippocampus integrates new information with existing memories to form predictive models. This process enables more efficient problem-solving and adaptation to new tasks.[2][3]
The memory-prediction framework explains a number of psychologically salient aspects of cognition. For example, the ability of experts in any field to effortlessly analyze and remember complex problems within their field is a natural consequence of their formation of increasingly refined conceptual hierarchies. Also, the procession from 'perception' to 'understanding' is readily understandable as a result of the matching of top-down and bottom-up expectations. Mismatches, in contrast, generate the exquisite ability of biological cognition to detect unexpected perceptions and situations. (Deficiencies in this regard are a common characteristic of current approaches to artificial intelligence.)
Besides these subjectively satisfying explanations, the framework also makes a number of testable predictions. For example, the important role that prediction plays throughout the sensory hierarchies calls for anticipatory neural activity in certain cells throughout sensory cortex. In addition, cells that 'name' certain invariants should remain active throughout the presence of those invariants, even if the underlying inputs change. The predicted patterns of bottom-up and top-down activity, with the former being more complex when expectations are not met, may be detectable, for example by functional magnetic resonance imaging (fMRI).
Although these predictions are not highly specific to the proposed theory, they are sufficiently unambiguous to make verification or rejection of its central tenets possible. See On Intelligence for details on the predictions and findings.
By design, the current theory builds on the work of numerous neurobiologists, and it may be argued that most of these ideas have already been proposed by researchers such as Grossberg and Mountcastle. On the other hand, the novel separation of the conceptual mechanism (i.e., bidirectional processing and invariant recognition) from the biological details (i.e., neural layers, columns and structures) lays the foundation for abstract thinking about a wide range of cognitive processes.
The most significant limitation of this theory is its current[when?] lack of detail. For example, the concept of invariance plays a crucial role; Hawkins posits "name cells" for at least some of these invariants. (See also Neural ensemble § Encoding for grandmother neurons, which perform this type of function, and mirror neurons for a somatosensory system viewpoint.) But it is far from obvious how to develop a mathematically rigorous definition which will carry the required conceptual load across the domains presented by Hawkins. Similarly, a complete theory will require credible details on both the short-term dynamics and the learning processes that will enable the cortical layers to behave as advertised.
IBM is implementing Hawkins' model.[citation needed]
The memory-prediction theory claims a common algorithm is employed by all regions in the neocortex. The theory has given rise to a number of software models aiming to simulate this common algorithm using a hierarchical memory structure. The year in the list below indicates when the model was last updated.
The following models use belief propagation or belief revision in singly connected Bayesian networks. | https://en.wikipedia.org/wiki/Memory-prediction_framework
Neural coding (or neural representation) is a neuroscience field concerned with characterising the hypothetical relationship between the stimulus and the neuronal responses, and the relationship among the electrical activities of the neurons in the ensemble.[1][2] Based on the theory that sensory and other information is represented in the brain by networks of neurons, it is believed that neurons can encode both digital and analog information.[3]
Neurons have an ability uncommon among the cells of the body to propagate signals rapidly over large distances by generating characteristic electrical pulses called action potentials: voltage spikes that can travel down axons. Sensory neurons change their activities by firing sequences of action potentials in various temporal patterns in the presence of external sensory stimuli, such as light, sound, taste, smell and touch. Information about the stimulus is encoded in this pattern of action potentials and transmitted into and around the brain. Beyond this, specialized neurons, such as those of the retina, can communicate more information through graded potentials. These differ from action potentials because information about the strength of a stimulus directly correlates with the strength of the neurons' output. The signal decays much faster for graded potentials, necessitating short inter-neuron distances and high neuronal density. The advantage of graded potentials is higher information rates capable of encoding more states (i.e. higher fidelity) than spiking neurons.[4]
Although action potentials can vary somewhat in duration, amplitude and shape, they are typically treated as identical stereotyped events in neural coding studies. If the brief duration of an action potential (about 1 ms) is ignored, an action potential sequence, or spike train, can be characterized simply by a series of all-or-none point events in time.[5] The lengths of interspike intervals (ISIs) between two successive spikes in a spike train often vary, apparently randomly.[6] The study of neural coding involves measuring and characterizing how stimulus attributes, such as light or sound intensity, or motor actions, such as the direction of an arm movement, are represented by neuron action potentials or spikes. In order to describe and analyze neuronal firing, statistical methods and methods of probability theory and stochastic point processes have been widely applied.
With the development of large-scale neural recording and decoding technologies, researchers have begun to crack the neural code and have already provided the first glimpse into the real-time neural code as memory is formed and recalled in the hippocampus, a brain region known to be central for memory formation.[7][8][9]Neuroscientists have initiated several large-scale brain decoding projects.[10][11]
The link between stimulus and response can be studied from two opposite points of view. Neural encoding refers to the map from stimulus to response. The main focus is to understand how neurons respond to a wide variety of stimuli, and to construct models that attempt to predict responses to other stimuli. Neural decoding refers to the reverse map, from response to stimulus, and the challenge is to reconstruct a stimulus, or certain aspects of that stimulus, from the spike sequences it evokes.[citation needed]
A sequence, or 'train', of spikes may contain information based on different coding schemes. In some neurons the strength with which a postsynaptic partner responds may depend solely on the 'firing rate', the average number of spikes per unit time (a 'rate code'). At the other end, a complex 'temporal code' is based on the precise timing of single spikes. They may be locked to an external stimulus, such as in the visual[12] and auditory systems, or be generated intrinsically by the neural circuitry.[13]
Whether neurons use rate coding or temporal coding is a topic of intense debate within the neuroscience community, even though there is no clear definition of what these terms mean.[14]
The rate coding model of neuronal firing communication states that as the intensity of a stimulus increases, the frequency or rate of action potentials, or "spike firing", increases. Rate coding is sometimes called frequency coding.
Rate coding is a traditional coding scheme, assuming that most, if not all, information about the stimulus is contained in the firing rate of the neuron. Because the sequence of action potentials generated by a given stimulus varies from trial to trial, neuronal responses are typically treated statistically or probabilistically. They may be characterized by firing rates, rather than as specific spike sequences. In most sensory systems, the firing rate increases, generally non-linearly, with increasing stimulus intensity.[15]Under a rate coding assumption, any information possibly encoded in the temporal structure of the spike train is ignored. Consequently, rate coding is inefficient but highly robust with respect to the ISI 'noise'.[6]
During rate coding, precisely calculating the firing rate is very important. In fact, the term "firing rate" has a few different definitions, which refer to different averaging procedures, such as an average over time (rate as a single-neuron spike count) or an average over several repetitions of the experiment (rate of the PSTH).
In rate coding, learning is based on activity-dependent synaptic weight modifications.
Rate coding was originally shown by Edgar Adrian and Yngve Zotterman in 1926.[16] In this simple experiment different weights were hung from a muscle. As the weight of the stimulus increased, the number of spikes recorded from sensory nerves innervating the muscle also increased. From these original experiments, Adrian and Zotterman concluded that action potentials were unitary events, and that the frequency of events, and not individual event magnitude, was the basis for most inter-neuronal communication.
In the following decades, measurement of firing rates became a standard tool for describing the properties of all types of sensory or cortical neurons, partly due to the relative ease of measuring rates experimentally. However, this approach neglects all the information possibly contained in the exact timing of the spikes. During recent years, more and more experimental evidence has suggested that a straightforward firing rate concept based on temporal averaging may be too simplistic to describe brain activity.[6]
The spike-count rate, also referred to as temporal average, is obtained by counting the number of spikes that appear during a trial and dividing by the duration of the trial.[14] The length T of the time window is set by the experimenter and depends on the type of neuron recorded from and on the stimulus. In practice, to get sensible averages, several spikes should occur within the time window. Typical values are T = 100 ms or T = 500 ms, but the duration may also be longer or shorter (Chapter 1.5 in the textbook 'Spiking Neuron Models'[14]).
The spike-count rate can be determined from a single trial, but at the expense of losing all temporal resolution about variations in neural response during the course of the trial. Temporal averaging can work well in cases where the stimulus is constant or slowly varying and does not require a fast reaction of the organism; this is the situation usually encountered in experimental protocols. Real-world input, however, is hardly stationary, but often changing on a fast time scale. For example, even when viewing a static image, humans perform saccades, rapid changes of the direction of gaze. The image projected onto the retinal photoreceptors therefore changes every few hundred milliseconds (Chapter 1.5 in[14]).
Despite its shortcomings, the concept of a spike-count rate code is widely used not only in experiments, but also in models of neural networks. It has led to the idea that a neuron transforms information about a single input variable (the stimulus strength) into a single continuous output variable (the firing rate).
There is a growing body of evidence that in Purkinje neurons, at least, information is not simply encoded in firing but also in the timing and duration of non-firing, quiescent periods.[17][18] There is also evidence from retinal cells that information is encoded not only in the firing rate but also in spike timing.[19] More generally, whenever a rapid response of an organism is required, a firing rate defined as a spike count over a few hundred milliseconds is simply too slow.[14]
The time-dependent firing rate is defined as the average number of spikes (averaged over trials) appearing during a short interval between times t and t+Δt, divided by the duration of the interval.[14] It works for stationary as well as for time-dependent stimuli. To experimentally measure the time-dependent firing rate, the experimenter records from a neuron while stimulating with some input sequence. The same stimulation sequence is repeated several times and the neuronal response is reported in a Peri-Stimulus-Time Histogram (PSTH). The time t is measured with respect to the start of the stimulation sequence. The Δt must be large enough (typically in the range of one or a few milliseconds) so that there is a sufficient number of spikes within the interval to obtain a reliable estimate of the average. The number of occurrences of spikes n_K(t; t+Δt) summed over all repetitions of the experiment, divided by the number K of repetitions, is a measure of the typical activity of the neuron between time t and t+Δt. A further division by the interval length Δt yields the time-dependent firing rate r(t) of the neuron, which is equivalent to the spike density of the PSTH (Chapter 1.5 in[14]).
For sufficiently small Δt, r(t)Δt is the average number of spikes occurring between times t and t+Δt over multiple trials. If Δt is small, there will never be more than one spike within the interval between t and t+Δt on any given trial. This means that r(t)Δt is also the fraction of trials on which a spike occurred between those times. Equivalently, r(t)Δt is the probability that a spike occurs during this time interval.
As an experimental procedure, the time-dependent firing rate measure is a useful method to evaluate neuronal activity, in particular in the case of time-dependent stimuli. The obvious problem with this approach is that it cannot be the coding scheme used by neurons in the brain. Neurons cannot wait for a stimulus to be repeatedly presented in exactly the same manner before generating a response.[14]
Nevertheless, the experimental time-dependent firing rate measure can make sense, if there are large populations of independent neurons that receive the same stimulus. Instead of recording from a population of N neurons in a single run, it is experimentally easier to record from a single neuron and average over N repeated runs. Thus, the time-dependent firing rate coding relies on the implicit assumption that there are always populations of neurons.
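As a concrete illustration of this trial-averaging, the following sketch computes r(t) from a PSTH over synthetic spike trains (an assumption: homogeneous Poisson spiking at roughly 40 spikes/s stands in for recorded data):

```python
# Time-dependent firing rate r(t) from a PSTH: bin spikes across K repeated
# trials, then divide the summed counts by K and by the bin width.
import numpy as np

rng = np.random.default_rng(0)
K, T, dt = 50, 1.0, 0.005            # trials, duration (s), bin width (s)
trials = [rng.uniform(0, T, size=rng.poisson(40)) for _ in range(K)]

edges = np.arange(0, T + dt, dt)
counts = sum(np.histogram(spikes, bins=edges)[0] for spikes in trials)
rate = counts / (K * dt)             # r(t) in spikes per second
print(rate[:10])                     # hovers around the 40 spikes/s baseline
```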
When precise spike timing or high-frequency firing-rate fluctuations are found to carry information, the neural code is often identified as a temporal code.[14][20] A number of studies have found that the temporal resolution of the neural code is on a millisecond time scale, indicating that precise spike timing is a significant element in neural coding.[3][21][19] Such codes, which communicate via the time between spikes, are also referred to as interpulse interval codes, and have been supported by recent studies.[22]
Neurons exhibit high-frequency fluctuations of firing rates which could be noise or could carry information. Rate coding models suggest that these irregularities are noise, while temporal coding models suggest that they encode information. If the nervous system only used rate codes to convey information, a more consistent, regular firing rate would have been evolutionarily advantageous, and neurons would have utilized this code over other less robust options.[23] Temporal coding supplies an alternate explanation for the "noise", suggesting that it actually encodes information and affects neural processing. To model this idea, binary symbols can be used to mark the spikes: 1 for a spike, 0 for no spike. Temporal coding allows the sequence 000111000111 to mean something different from 001100110011, even though the mean firing rate is the same for both sequences, at 6 spikes/10 ms.[24]
Until recently, scientists had put the most emphasis on rate encoding as an explanation forpost-synaptic potentialpatterns. However, functions of the brain are more temporally precise than the use of only rate encoding seems to allow.[19]In other words, essential information could be lost due to the inability of the rate code to capture all the available information of the spike train. In addition, responses are different enough between similar (but not identical) stimuli to suggest that the distinct patterns of spikes contain a higher volume of information than is possible to include in a rate code.[25]
Temporal codes (also called spike codes[14]) employ those features of the spiking activity that cannot be described by the firing rate. For example, time-to-first-spike after the stimulus onset, phase-of-firing with respect to background oscillations, characteristics based on the second and higher statistical moments of the ISI probability distribution, spike randomness, or precisely timed groups of spikes (temporal patterns) are candidates for temporal codes.[26] As there is no absolute time reference in the nervous system, the information is carried either in terms of the relative timing of spikes in a population of neurons (temporal patterns) or with respect to an ongoing brain oscillation (phase of firing).[3][6] One way in which temporal codes are decoded, in the presence of neural oscillations, is that spikes occurring at specific phases of an oscillatory cycle are more effective in depolarizing the post-synaptic neuron.[27]
The temporal structure of a spike train or firing rate evoked by a stimulus is determined both by the dynamics of the stimulus and by the nature of the neural encoding process. Stimuli that change rapidly tend to generate precisely timed spikes[28](and rapidly changing firing rates in PSTHs) no matter what neural coding strategy is being used. Temporal coding in the narrow sense refers to temporal precision in the response that does not arise solely from the dynamics of the stimulus, but that nevertheless relates to properties of the stimulus. The interplay between stimulus and encoding dynamics makes the identification of a temporal code difficult.
In temporal coding, learning can be explained by activity-dependent synaptic delay modifications.[29] The modifications can themselves depend not only on spike rates (rate coding) but also on spike timing patterns (temporal coding), i.e., they can be a special case of spike-timing-dependent plasticity.[30]
The issue of temporal coding is distinct and independent from the issue of independent-spike coding. If each spike is independent of all the other spikes in the train, the temporal character of the neural code is determined by the behavior of time-dependent firing rate r(t). If r(t) varies slowly with time, the code is typically called a rate code, and if it varies rapidly, the code is called temporal.
For very brief stimuli, a neuron's maximum firing rate may not be fast enough to produce more than a single spike. Due to the density of information about the abbreviated stimulus contained in this single spike, it would seem that the timing of the spike itself would have to convey more information than simply the average frequency of action potentials over a given period of time. This model is especially important for sound localization, which occurs within the brain on the order of milliseconds. The brain must obtain a large quantity of information based on a relatively short neural response. Additionally, if low firing rates on the order of ten spikes per second must be distinguished from arbitrarily close rate coding for different stimuli, then a neuron trying to discriminate these two stimuli may need to wait for a second or more to accumulate enough information. This is not consistent with numerous organisms which are able to discriminate between stimuli in the time frame of milliseconds, suggesting that a rate code is not the only model at work.[24]
To account for the fast encoding of visual stimuli, it has been suggested that neurons of the retina encode visual information in the latency time between stimulus onset and first action potential, also called latency to first spike or time-to-first-spike.[31] This type of temporal coding has been shown also in the auditory and somatosensory systems. The main drawback of such a coding scheme is its sensitivity to intrinsic neuronal fluctuations.[32] In the primary visual cortex of macaques, the timing of the first spike relative to the start of the stimulus was found to provide more information than the interval between spikes. However, the interspike interval could be used to encode additional information, which is especially important when the spike rate reaches its limit, as in high-contrast situations. For this reason, temporal coding may play a part in coding defined edges rather than gradual transitions.[33]
The mammalian gustatory system is useful for studying temporal coding because of its fairly distinct stimuli and the easily discernible responses of the organism.[34] Temporally encoded information may help an organism discriminate between different tastants of the same category (sweet, bitter, sour, salty, umami) that elicit very similar responses in terms of spike count. The temporal component of the pattern elicited by each tastant may be used to determine its identity (e.g., the difference between two bitter tastants, such as quinine and denatonium). In this way, both rate coding and temporal coding may be used in the gustatory system – rate for basic tastant type, temporal for more specific differentiation.[35]
Research on the mammalian gustatory system has shown that there is an abundance of information present in temporal patterns across populations of neurons, and this information is different from that which is determined by rate coding schemes. Groups of neurons may synchronize in response to a stimulus. In studies dealing with the front cortical portion of the brain in primates, precise patterns with short time scales only a few milliseconds in length were found across small populations of neurons which correlated with certain information-processing behaviors. However, little information could be determined from the patterns; one possible theory is that they represented the higher-order processing taking place in the brain.[25]
As with the visual system, in mitral/tufted cells in the olfactory bulb of mice, first-spike latency relative to the start of a sniffing action seemed to encode much of the information about an odor. This strategy of using spike latency allows for rapid identification of and reaction to an odorant. In addition, some mitral/tufted cells have specific firing patterns for given odorants. This type of extra information could help in recognizing a certain odor, but is not completely necessary, as average spike count over the course of the animal's sniffing was also a good identifier.[36] Along the same lines, experiments done with the olfactory system of rabbits showed distinct patterns which correlated with different subsets of odorants, and a similar result was obtained in experiments with the locust olfactory system.[24]
The specificity of temporal coding requires highly refined technology to measure informative, reliable, experimental data. Advances made in optogenetics allow neurologists to control spikes in individual neurons, offering electrical and spatial single-cell resolution. For example, blue light causes the light-gated ion channel channelrhodopsin to open, depolarizing the cell and producing a spike. When blue light is not sensed by the cell, the channel closes, and the neuron ceases to spike. The pattern of the spikes matches the pattern of the blue light stimuli. By inserting channelrhodopsin gene sequences into mouse DNA, researchers can control spikes and therefore certain behaviors of the mouse (e.g., making the mouse turn left).[37] Researchers, through optogenetics, have the tools to effect different temporal codes in a neuron while maintaining the same mean firing rate, and thereby can test whether or not temporal coding occurs in specific neural circuits.[38]
Optogenetic technology also has the potential to enable the correction of spike abnormalities at the root of several neurological and psychological disorders.[38] If neurons do encode information in individual spike timing patterns, key signals could be missed by attempting to crack the code while looking only at mean firing rates.[24] Understanding any temporally encoded aspects of the neural code and replicating these sequences in neurons could allow for greater control and treatment of neurological disorders such as depression, schizophrenia, and Parkinson's disease. Regulation of spike intervals in single cells more precisely controls brain activity than the addition of pharmacological agents intravenously.[37]
Phase-of-firing code is a neural coding scheme that combines the spike count code with a time reference based on oscillations. This type of code takes into account a time label for each spike according to a time reference based on the phase of local ongoing oscillations at low[39] or high frequencies.[40]
It has been shown that neurons in some cortical sensory areas encode rich naturalistic stimuli in terms of their spike times relative to the phase of ongoing network oscillatory fluctuations, rather than only in terms of their spike count.[39][41] The local field potential signals reflect population (network) oscillations. The phase-of-firing code is often categorized as a temporal code, although the time label used for spikes (i.e. the network oscillation phase) is a low-resolution (coarse-grained) reference for time. As a result, often only four discrete values for the phase are enough to represent all the information content in this kind of code with respect to the phase of oscillations in low frequencies. The phase-of-firing code is loosely based on the phase precession phenomenon observed in place cells of the hippocampus. Another feature of this code is that neurons adhere to a preferred order of spiking within a group of sensory neurons, resulting in a firing sequence.[42]
Phase code has been shown in visual cortex to also involve high-frequency oscillations.[42] Within a cycle of gamma oscillation, each neuron has its own preferred relative firing time. As a result, an entire population of neurons generates a firing sequence that has a duration of up to about 15 ms.[42]
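A phase-of-firing analysis can be sketched as follows; the LFP here is a synthetic sinusoid standing in for a band-pass-filtered recording, and the spike times are arbitrary illustrative values:

```python
# Assign each spike the instantaneous phase of an ongoing oscillation, then
# coarse-grain the phase into four discrete labels.
import numpy as np
from scipy.signal import hilbert

fs, T, f_osc = 1000.0, 2.0, 8.0              # sample rate (Hz), duration (s), theta
t = np.arange(0, T, 1 / fs)
lfp = np.sin(2 * np.pi * f_osc * t)          # stand-in for a filtered LFP
phase = np.angle(hilbert(lfp))               # instantaneous phase in [-pi, pi]

spike_times = np.array([0.11, 0.48, 0.97, 1.52])
spike_phases = phase[(spike_times * fs).astype(int)]
coarse = np.digitize(spike_phases, np.linspace(-np.pi, np.pi, 5)[1:-1])
print(spike_phases, coarse)                  # continuous phases, 4 discrete bins
```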
Population coding is a method to represent stimuli by using the joint activities of a number of neurons. In population coding, each neuron has a distribution of responses over some set of inputs, and the responses of many neurons may be combined to determine some value about the inputs. From the theoretical point of view, population coding is one of a few mathematically well-formulated problems in neuroscience. It grasps the essential features of neural coding and yet is simple enough for theoretic analysis.[43]Experimental studies have revealed that this coding paradigm is widely used in the sensory and motor areas of the brain.
For example, in the visual area medial temporal (MT), neurons are tuned to the direction of object motion.[44] In response to an object moving in a particular direction, many neurons in MT fire with a noise-corrupted and bell-shaped activity pattern across the population. The moving direction of the object is retrieved from the population activity, so as to be immune to the fluctuations existing in a single neuron's signal. When monkeys are trained to move a joystick towards a lit target, a single neuron will fire for multiple target directions. However, it fires fastest for one direction and more slowly depending on how close the target was to the neuron's "preferred" direction.[45][46] If each neuron represents movement in its preferred direction, and the vector sum of all neurons is calculated (each neuron has a firing rate and a preferred direction), the sum points in the direction of motion. In this manner, the population of neurons codes the signal for the motion.[citation needed] This particular population code is referred to as population vector coding.
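The population vector computation just described can be sketched directly; cosine tuning around each neuron's preferred direction and noiseless rates are simplifying assumptions:

```python
# Population vector decoding: each neuron 'votes' for its preferred direction,
# weighted by its firing rate; the vector sum points toward the true direction.
import numpy as np

rng = np.random.default_rng(0)
n = 100
preferred = rng.uniform(0, 2 * np.pi, n)       # preferred directions
true_dir = np.deg2rad(60)

# cosine tuning: response is maximal when motion matches the preferred direction
rates = np.maximum(0, np.cos(true_dir - preferred))

pop_vec = np.array([np.sum(rates * np.cos(preferred)),
                    np.sum(rates * np.sin(preferred))])
decoded = np.arctan2(pop_vec[1], pop_vec[0])
print(np.rad2deg(decoded))                     # close to 60 degrees
```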
Place-time population codes, termed the averaged-localized-synchronized-response (ALSR) code, have been derived for the neural representation of auditory acoustic stimuli. This exploits both the place or tuning within the auditory nerve and the phase-locking within each auditory nerve fiber. The first ALSR representation was for steady-state vowels;[47] ALSR representations of pitch and formant frequencies in complex, non-steady-state stimuli were later demonstrated for voiced pitch[48] and formant representations in consonant-vowel syllables.[49] The advantage of such representations is that global features such as pitch or formant transition profiles can be represented as global features across the entire nerve simultaneously via both rate and place coding.
Population coding has a number of other advantages as well, including reduction of uncertainty due to neuronal variability and the ability to represent a number of different stimulus attributes simultaneously. Population coding is also much faster than rate coding and can reflect changes in the stimulus conditions nearly instantaneously.[50] Individual neurons in such a population typically have different but overlapping selectivities, so that many neurons, but not necessarily all, respond to a given stimulus.
Typically an encoding function has a peak value such that activity of the neuron is greatest if the perceptual value is close to the peak value, and becomes reduced accordingly for values less close to the peak value.[citation needed] It follows that the actual perceived value can be reconstructed from the overall pattern of activity in the set of neurons. Vector coding is an example of simple averaging. A more sophisticated mathematical technique for performing such a reconstruction is the method of maximum likelihood based on a multivariate distribution of the neuronal responses. These models can assume independence, second-order correlations,[51] or even more detailed dependencies such as higher-order maximum entropy models[52] or copulas.[53]
The correlation coding model of neuronal firing claims that correlations between action potentials, or "spikes", within a spike train may carry additional information above and beyond the simple timing of the spikes. Early work suggested that correlation between spike trains can only reduce, and never increase, the total mutual information present in the two spike trains about a stimulus feature.[54] However, this was later demonstrated to be incorrect. Correlation structure can increase information content if noise and signal correlations are of opposite sign.[55] Correlations can also carry information not present in the average firing rate of two pairs of neurons. A good example of this exists in the pentobarbital-anesthetized marmoset auditory cortex, in which a pure tone causes an increase in the number of correlated spikes, but not an increase in the mean firing rate, of pairs of neurons.[56]
The independent-spike coding model of neuronal firing claims that each individual action potential, or "spike", is independent of each other spike within the spike train.[20][57]
A typical population code involves neurons with a Gaussian tuning curve whose means vary linearly with the stimulus intensity, meaning that the neuron responds most strongly (in terms of spikes per second) to a stimulus near the mean. The actual intensity could be recovered as the stimulus level corresponding to the mean of the neuron with the greatest response. However, the noise inherent in neural responses means that a maximum likelihood estimation function is more accurate.
This type of code is used to encode continuous variables such as joint position, eye position, color, or sound frequency. Any individual neuron is too noisy to faithfully encode the variable using rate coding, but an entire population ensures greater fidelity and precision. For a population of unimodal tuning curves, i.e. with a single peak, the precision typically scales linearly with the number of neurons. Hence, for half the precision, half as many neurons are required. In contrast, when the tuning curves have multiple peaks, as in grid cells that represent space, the precision of the population can scale exponentially with the number of neurons. This greatly reduces the number of neurons required for the same precision.[58]
Dimensionality reduction and topological data analysis have revealed that the population code is constrained to low-dimensional manifolds,[59] sometimes also referred to as attractors. The position along the neural manifold correlates with certain behavioral conditions, such as head direction neurons in the anterodorsal thalamic nucleus forming a ring structure,[60] grid cells encoding spatial position in entorhinal cortex along the surface of a torus,[61] or motor cortex neurons encoding hand movements[62] and preparatory activity.[63] The low-dimensional manifolds are known to change in a state-dependent manner, such as with eye closure in the visual cortex,[64] or breathing behavior in the ventral respiratory column.[65]
In a sparse code, each item is encoded by the strong activation of a relatively small set of neurons; for each item to be encoded, this is a different subset of all available neurons. In contrast to sensor-sparse coding, sensor-dense coding implies that all information from possible sensor locations is known.
As a consequence, sparseness may be focused on temporal sparseness ("a relatively small number of time periods are active") or on the sparseness in an activated population of neurons. In this latter case, this may be defined in one time period as the number of activated neurons relative to the total number of neurons in the population. This seems to be a hallmark of neural computations, since compared to traditional computers, information is massively distributed across neurons. Sparse coding of natural images produces wavelet-like oriented filters that resemble the receptive fields of simple cells in the visual cortex.[66] The capacity of sparse codes may be increased by simultaneous use of temporal coding, as found in the locust olfactory system.[67]
Given a potentially large set of input patterns, sparse coding algorithms (e.g. the sparse autoencoder) attempt to automatically find a small number of representative patterns which, when combined in the right proportions, reproduce the original input patterns. The sparse coding for the input then consists of those representative patterns. For example, the very large set of English sentences can be encoded by a small number of symbols (i.e. letters, numbers, punctuation, and spaces) combined in a particular order for a particular sentence, and so a sparse coding for English would be those symbols.
Most models of sparse coding are based on the linear generative model.[68] In this model, the symbols are combined in a linear fashion to approximate the input.
More formally, given a k-dimensional set of real-numbered input vectors {\displaystyle {\vec {\xi }}\in \mathbb {R} ^{k}}, the goal of sparse coding is to determine n k-dimensional basis vectors {\displaystyle {\vec {b_{1}}},\ldots ,{\vec {b_{n}}}\in \mathbb {R} ^{k}}, corresponding to neuronal receptive fields, along with a sparse n-dimensional vector of weights or coefficients {\displaystyle {\vec {s}}\in \mathbb {R} ^{n}} for each input vector, so that a linear combination of the basis vectors with proportions given by the coefficients results in a close approximation to the input vector: {\displaystyle {\vec {\xi }}\approx \sum _{j=1}^{n}s_{j}{\vec {b}}_{j}}.[69]
The codings generated by algorithms implementing a linear generative model can be classified into codings with soft sparseness and those with hard sparseness.[68] These refer to the distribution of basis vector coefficients for typical inputs. A coding with soft sparseness has a smooth Gaussian-like distribution, but peakier than Gaussian, with many zero values, some small absolute values, fewer larger absolute values, and very few very large absolute values. Thus, many of the basis vectors are active. Hard sparseness, on the other hand, indicates that there are many zero values, no or hardly any small absolute values, fewer larger absolute values, and very few very large absolute values, and thus few of the basis vectors are active. This is appealing from a metabolic perspective: less energy is used when fewer neurons are firing.[68]
Another measure of coding is whether it is critically complete or overcomplete. If the number of basis vectors n is equal to the dimensionality k of the input set, the coding is said to be critically complete. In this case, smooth changes in the input vector result in abrupt changes in the coefficients, and the coding is not able to gracefully handle small scalings, small translations, or noise in the inputs. If, however, the number of basis vectors is larger than the dimensionality of the input set, the coding is overcomplete. Overcomplete codings smoothly interpolate between input vectors and are robust under input noise.[70] The human primary visual cortex is estimated to be overcomplete by a factor of 500, so that, for example, a 14 × 14 patch of input (a 196-dimensional space) is coded by roughly 100,000 neurons.[68]
Other models are based on matching pursuit, a sparse approximation algorithm which finds the "best matching" projections of multidimensional data, and dictionary learning, a representation-learning method which aims to find a sparse matrix representation of the input data in the form of a linear combination of basic elements, as well as those basic elements themselves.[71][72][73]
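The linear generative model above can be fitted with off-the-shelf dictionary learning. In this sketch, scikit-learn's DictionaryLearning stands in for the many algorithms in the literature, with an overcomplete dictionary and hard sparseness enforced by orthogonal matching pursuit:

```python
# Sparse coding via dictionary learning: find basis vectors b_j and sparse
# coefficients s so that inputs are approximated by sums of a few atoms.
import numpy as np
from sklearn.decomposition import DictionaryLearning

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 16))                 # input vectors as rows

dl = DictionaryLearning(n_components=32,       # overcomplete: 32 atoms > 16 dims
                        transform_algorithm="omp",    # matching-pursuit family
                        transform_n_nonzero_coefs=4,  # hard sparseness
                        random_state=0)
codes = dl.fit(X).transform(X)                 # sparse coefficient vectors s
recon = codes @ dl.components_                 # xi ~= sum_j s_j * b_j
print(np.mean(np.abs(codes) > 0))              # fraction active ~= 4/32
print(np.linalg.norm(X - recon) / np.linalg.norm(X))  # relative error
```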
Sparse coding may be a general strategy of neural systems to augment memory capacity. To adapt to their environments, animals must learn which stimuli are associated with rewards or punishments and distinguish these reinforced stimuli from similar but irrelevant ones. Such tasks require implementing stimulus-specific associative memories in which only a few neurons out of a population respond to any given stimulus and each neuron responds to only a few stimuli out of all possible stimuli.
Theoretical work on sparse distributed memory has suggested that sparse coding increases the capacity of associative memory by reducing overlap between representations.[74] Experimentally, sparse representations of sensory information have been observed in many systems, including vision,[75] audition,[76] touch,[77] and olfaction.[78] However, despite the accumulating evidence for widespread sparse coding and theoretical arguments for its importance, a demonstration that sparse coding improves the stimulus-specificity of associative memory has been difficult to obtain.
In the Drosophila olfactory system, sparse odor coding by the Kenyon cells of the mushroom body is thought to generate a large number of precisely addressable locations for the storage of odor-specific memories.[79] Sparseness is controlled by a negative feedback circuit between Kenyon cells and GABAergic anterior paired lateral (APL) neurons. Systematic activation and blockade of each leg of this feedback circuit shows that Kenyon cells activate APL neurons and APL neurons inhibit Kenyon cells. Disrupting the Kenyon cell–APL feedback loop decreases the sparseness of Kenyon cell odor responses, increases inter-odor correlations, and prevents flies from learning to discriminate similar, but not dissimilar, odors. These results suggest that feedback inhibition suppresses Kenyon cell activity to maintain sparse, decorrelated odor coding and thus the odor-specificity of memories.[80] | https://en.wikipedia.org/wiki/Neural_coding
A neural Turing machine (NTM) is a recurrent neural network model of a Turing machine. The approach was published by Alex Graves et al. in 2014.[1] NTMs combine the fuzzy pattern matching capabilities of neural networks with the algorithmic power of programmable computers.
An NTM has a neural network controller coupled to external memory resources, which it interacts with through attentional mechanisms. The memory interactions are differentiable end-to-end, making it possible to optimize them using gradient descent.[2] An NTM with a long short-term memory (LSTM) network controller can infer simple algorithms such as copying, sorting, and associative recall from examples alone.[1]
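The content-based addressing at the heart of the NTM read operation can be sketched as below; the full model adds write heads, location-based interpolation, shifting and sharpening, all omitted here:

```python
# NTM-style content-based addressing: a softmax over cosine similarities
# between a controller-emitted key and the memory rows yields a
# differentiable read weighting.
import torch

def content_address(key, memory, beta):
    """Softmax over cosine similarities between a key and memory rows."""
    sim = torch.nn.functional.cosine_similarity(
        key.unsqueeze(0), memory, dim=1)      # one score per memory slot
    return torch.softmax(beta * sim, dim=0)   # beta sharpens the focus

memory = torch.randn(128, 20)                 # 128 slots of width 20
key = memory[7] + 0.1 * torch.randn(20)       # noisy probe of slot 7
w = content_address(key, memory, beta=torch.tensor(10.0))
read = w @ memory                             # differentiable read vector
print(w.argmax().item())                      # most likely 7
```

Because the read is a weighted sum rather than a discrete lookup, gradients flow through the addressing step, which is what makes the memory trainable end-to-end.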
The authors of the original NTM paper did not publish their source code.[1] The first stable open-source implementation was published in 2018 at the 27th International Conference on Artificial Neural Networks, receiving a best-paper award.[3][4][5] Other open-source implementations of NTMs exist, but as of 2018 they were not sufficiently stable for production use.[6][7][8][9][10][11][12] The developers either report that the gradients of their implementation sometimes become NaN during training for unknown reasons, causing training to fail;[10][11][9] report slow convergence;[7][6] or do not report the speed of learning of their implementation.[12][8]
Differentiable neural computers are an outgrowth of neural Turing machines, with attention mechanisms that control where the memory is active, improving performance.[13] | https://en.wikipedia.org/wiki/Neural_Turing_machine
Random indexing is a dimensionality reduction method and computational framework for distributional semantics, based on the insight that very-high-dimensional vector space model implementations are impractical, that models need not grow in dimensionality when new items (e.g. new terminology) are encountered, and that a high-dimensional model can be projected into a space of lower dimensionality without compromising L2 distance metrics if the resulting dimensions are chosen appropriately.
This is the original point of therandom projectionapproach to dimension reduction first formulated as theJohnson–Lindenstrauss lemma, andlocality-sensitive hashinghas some of the same starting points. Random indexing, as used in representation of language, originates from the work ofPentti Kanerva[1][2][3][4][5]onsparse distributed memory, and can be described as an incremental formulation of a random projection.[6]
It can also be verified that random indexing is a random projection technique for the construction of Euclidean spaces—i.e. L2 normed vector spaces.[7] In Euclidean spaces, random projections are elucidated using the Johnson–Lindenstrauss lemma.[8]
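A rough sketch of the incremental scheme follows (the dimensionality, sparsity, and window size are illustrative choices, not prescribed values): each term is assigned a fixed sparse ternary index vector on first encounter, and a term's context vector accumulates the index vectors of its neighbours, so the model never grows in dimensionality as new vocabulary arrives:

```python
import numpy as np

rng = np.random.default_rng(1)
DIM, NNZ = 2000, 10   # illustrative: dimensionality, non-zeros per index vector

index_vectors, context_vectors = {}, {}

def index_vector(term):
    """Fixed sparse ternary vector assigned on first encounter."""
    if term not in index_vectors:
        v = np.zeros(DIM)
        pos = rng.choice(DIM, size=NNZ, replace=False)
        v[pos] = rng.choice([-1.0, 1.0], size=NNZ)
        index_vectors[term] = v
    return index_vectors[term]

def train(tokens, window=2):
    """Incrementally add neighbours' index vectors to each token's context vector."""
    for i, tok in enumerate(tokens):
        ctx = context_vectors.setdefault(tok, np.zeros(DIM))
        for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
            if j != i:
                ctx += index_vector(tokens[j])

train("the cat sat on the mat the dog sat on the rug".split())
a, b = context_vectors["cat"], context_vectors["dog"]
print(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))  # similar contexts -> high cosine
```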
The TopSig technique[9]extends the random indexing model to producebit vectorsfor comparison with theHamming distancesimilarity function. It is used for improving the performance ofinformation retrievalanddocument clustering. In a similar line of research, Random Manhattan Integer Indexing (RMII)[10]is proposed for improving the performance of the methods that employ theManhattan distancebetween text units. Many random indexing methods primarily generate similarity from co-occurrence of items in a corpus. Reflexive Random Indexing (RRI)[11]generates similarity from co-occurrence and from shared occurrence with other items. | https://en.wikipedia.org/wiki/Random_indexing |
Semantic foldingtheory describes a procedure for encoding thesemanticsofnatural languagetext in a semantically groundedbinary representation. This approach provides a framework for modelling how language data is processed by theneocortex.[1]
Semantic folding theory draws inspiration fromDouglas R. Hofstadter'sAnalogy as the Core of Cognitionwhich suggests that the brain makes sense of the world by identifying and applyinganalogies.[2]The theory hypothesises that semantic data must therefore be introduced to the neocortex in such a form as to allow the application of asimilarity measureand offers, as a solution, thesparsebinary vectoremploying a two-dimensional topographicsemantic spaceas a distributional reference frame. The theory builds on the computational theory of the human cortex known ashierarchical temporal memory(HTM), and positions itself as a complementary theory for the representation of language semantics.
A particular strength claimed by this approach is that the resulting binary representation enables complex semantic operations to be performed simply and efficiently at the most basic computational level.
Analogous to the structure of the neocortex, Semantic Folding theory posits the implementation of a semantic space as a two-dimensional grid. This grid is populated by context-vectors[note 1]in such a way as to place similar context-vectors closer to each other, for instance, by using competitive learning principles. Thisvector space modelis presented in the theory as an equivalence to the well known word space model[3]described in theinformation retrievalliterature.
Given a semantic space (implemented as described above) a word-vector[note 2]can be obtained for any given wordYby employing the followingalgorithm:
The result of this process will be a word-vector containing all the contexts in which the word Y appears and will therefore be representative of the semantics of that word in the semantic space. It can be seen that the resulting word-vector is also in a sparse distributed representation (SDR) format [Schütze, 1993] & [Sahlgreen, 2006].[3][4]Some properties of word-SDRs that are of particular interest with respect tocomputational semanticsare:[5]
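Based on the description above, the construction can be sketched in a few lines of Python; the grid size is an illustrative assumption, and a real system would first have distributed the context snippets over the 2D grid by similarity:

```python
import numpy as np

GRID_SIDE = 128                      # illustrative 128 x 128 semantic map
N_CELLS = GRID_SIDE * GRID_SIDE

def word_sdr(word, contexts, cell_of_context):
    """Aggregate a word's contexts into a sparse binary word-vector:
    one bit per grid cell, set if that cell's context contains the word."""
    v = np.zeros(N_CELLS, dtype=bool)
    for tokens, cell in zip(contexts, cell_of_context):
        if word in tokens:
            v[cell] = True
    return v

def overlap(a, b):
    # Shared active bits as a simple semantic-similarity score.
    return int((a & b).sum())

contexts = [{"dog", "barks"}, {"dog", "leash"}, {"car", "engine"}]
cells = [0, 1, 4000]                 # grid cell assigned to each context
print(overlap(word_sdr("dog", contexts, cells),
              word_sdr("car", contexts, cells)))   # 0: no shared contexts
```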
Semantic spaces[note 3][6]in the natural language domain aim to create representations of natural language that are capable of capturing meaning. The original motivation for semantic spaces stems from two core challenges of natural language:Vocabulary mismatch(the fact that the same meaning can be expressed in many ways) andambiguityof natural language (the fact that the same term can have several meanings).
The application of semantic spaces innatural language processing(NLP) aims at overcoming limitations ofrule-basedor model-based approaches operating on thekeywordlevel. The main drawback with these approaches is their brittleness, and the large manual effort required to create either rule-based NLP systems or training corpora for model learning.[7][8]Rule-based andmachine learning-based models are fixed on the keyword level and break down if the vocabulary differs from that defined in the rules or from the training material used for the statistical models.
Research in semantic spaces dates back more than 20 years. In 1996, two papers were published that attracted a lot of attention to the general idea of creating semantic spaces: latent semantic analysis[9] from Microsoft and Hyperspace Analogue to Language[10] from the University of California. However, their adoption was limited by the large computational effort required to construct and use those semantic spaces. A breakthrough with regard to the accuracy of modelling associative relations between words (e.g. "spider-web", "lighter-cigarette", as opposed to synonymous relations such as "whale-dolphin", "astronaut-driver") was achieved by explicit semantic analysis (ESA)[11] in 2007. ESA was a novel (non-machine-learning) approach that represented words in the form of vectors with 100,000 dimensions (where each dimension represents an article in Wikipedia). However, practical applications of the approach are limited due to the large number of required dimensions in the vectors.
More recently, advances in neural networking techniques in combination with other new approaches (tensors) led to a host of new developments: Word2vec[12] from Google and GloVe[13] from Stanford University.
Semantic folding represents a novel, biologically inspired approach to semantic spaces where each word is represented as a sparse binary vector with 16,000 dimensions (a semantic fingerprint) in a 2D semantic map (the semantic universe). Sparse binary representations are advantageous in terms of computational efficiency, and allow for the storage of very large numbers of possible patterns.[5]
The topological distribution over a two-dimensional grid (outlined above) lends itself to abitmaptype visualization of the semantics of any word or text, where each active semantic feature can be displayed as e.g. apixel. As can be seen in the images shown here, this representation allows for a direct visual comparison of the semantics of two (or more) linguistic items.
Image 1 clearly demonstrates that the two disparate terms "dog" and "car" have, as expected, very obviously different semantics.
Image 2 shows that only one of the meaning contexts of "jaguar", that of "Jaguar" the car, overlaps with the meaning of Porsche (indicating partial similarity). Other meaning contexts of "jaguar", such as "jaguar" the animal, show no such overlap.
The visualization of semantic similarity using semantic folding bears a strong resemblance to the fMRI images produced in a research study conducted by A.G. Huth et al.,[14][15] where it is claimed that words are grouped in the brain by meaning. Voxels, small volume segments of the brain, were found to follow a pattern in which semantic information is represented along the boundary of the visual cortex, with visual and linguistic categories represented on the posterior and anterior sides respectively.[16][17][18] | https://en.wikipedia.org/wiki/Semantic_folding
Semantic memoryrefers to general worldknowledgethat humans have accumulated throughout their lives.[1]Thisgeneral knowledge(word meanings,concepts, facts, and ideas) is intertwined in experience and dependent onculture. New concepts are learned by applying knowledge learned from things in the past.[2]
Semantic memory is distinct fromepisodic memory—the memory of experiences and specific events that occur in one's life that can be recreated at any given point.[3]For instance, semantic memory might contain information about what a cat is, whereas episodic memory might contain a specific memory of stroking a particular cat.
Semantic memory and episodic memory are both types ofexplicit memory (or declarative memory), or memory of facts or events that can be consciously recalled and "declared".[4]The counterpart to declarative or explicit memory isimplicit memory(also known as nondeclarative memory).[5]
The idea of semantic memory was first introduced following a conference in 1972 betweenEndel Tulvingand W. Donaldson on the role of organization in human memory. Tulving constructed a proposal to distinguish betweenepisodic memoryand what he termed semantic memory.[6]He was mainly influenced by the ideas of Reiff and Scheers, who in 1959 made the distinction between two primary forms of memory.[7]One form was titledremembrances, and the othermemoria. The remembrance concept dealt with memories that contained experiences of an autobiographic index, whereas thememoriaconcept dealt with memories that did not reference experiences having an autobiographic index.[8]
Semantic memory reflects the knowledge of the world, and the termgeneral knowledgeis often used. It holds generic information that is more than likely acquired across various contexts and is used across different situations. According to Madigan in his book titledMemory, semantic memory is the sum of all knowledge one has obtained—vocabulary, understanding of math, or all the facts one knows. In his book titledEpisodic and Semantic Memory, Tulving adopted the termsemanticfrom linguists to refer to a system of memory for "words and verbal symbols, their meanings and referents, the relations between them, and the rules, formulas, or algorithms for influencing them".[9]
The use of semantic memory differs from episodic memory: semantic memory refers to general facts and meanings one shares with others, while episodic memory refers to unique and concrete personal experiences. Tulving's proposal of this distinction was widely accepted, primarily because it allowed the separate conceptualization of world knowledge.[10]Tulving discusses conceptions of episodic and semantic memory in his book titledPrécis of Elements of Episodic Memory,[11]in which he states that several factors differentiate between episodic memory and semantic memory in ways that include
In 2022, researchers Felipe De Brigard, Sharda Umanath, andMuireann Irishargued that Tulving conceptualized semantic memory to be different from episodic memory in that "episodic memories were viewed as supported via spatiotemporal relations while information in semantic memory was mediated through conceptual, meaning-based associations".[12]
Recent research[when?]has focused on the idea that when people access a word's meaning, sensorimotor information that is used to perceive and act on the concrete object the word suggests is automatically activated. In the theory of grounded cognition, the meaning of a particular word is grounded in the sensorimotor systems.[13]For example, when one thinks of a pear, knowledge of grasping, chewing, sights, sounds, and tastes used to encode episodic experiences of a pear are recalled through sensorimotor simulation.
A grounded simulation approach refers to context-specific re-activations that integrate the important features of episodic experience into a current depiction. Such research has challenged previously utilized amodal views. The brain encodes multiple inputs such as words and pictures to integrate and create a larger conceptual idea by using amodal views (also known asamodal perception). Instead of being representations in modality-specific systems, semantic memory representations had previously been viewed as redescriptions of modality-specific states. Some accounts of category-specific semantic deficits that are amodal remain even though researchers are beginning to find support for theories in which knowledge is tied to modality-specific brain regions. The concept that semantic representations are grounded across modality-specific brain regions can be supported by episodic and semantic memory appearing to function in different yet mutually dependent ways. The distinction between semantic and episodic memory has become a part of the broader scientific discourse. For example, researchers speculate that semantic memory captures the stable aspects of our personality while episodes of illness may have a more episodic nature.[14]
This study[15] was not designed solely to provide evidence for the distinction between semantic and episodic memory stores. However, it used the experimental dissociation method, which provides evidence for Tulving's hypothesis.
In the first part, subjects were presented with a total of 60 words (one at a time) and were asked different questions.
In the second phase of the experiment, 60 "old words" seen in stage one and 20 "new words" not shown in stage one were presented to the subjects one at a time.
The subjects were given one of two tasks:
Results showed that the percentage of correct answers in the semantic task (perceptual identification) did not change with the encoding conditions of appearance, sound, or meaning. The percentage of correct answers for the episodic task increased from the appearance condition (.50), to the sound condition (.63), to the meaning condition (.86). The effect was also greater for the "yes" encoding words than the "no" encoding words, which suggested a strong distinction of performance of episodic and semantic tasks, supporting Tulving's hypothesis.
Semantic memory's contents are not tied to any particular instance of experience, as in episodic memory. Instead, what is stored in semantic memory is the "gist" of experience, an abstract structure that applies to a wide variety of experiential objects and delineates categorical and functional relationships between such objects. There are numerous sub-theories related to semantic memory that have developed since Tulving initially posited his argument on the differences between semantic and episodic memory; an example is the belief in hierarchies of semantic memory, in which pieces of information learned at specific levels of related knowledge are associated with one another. According to this theory, brains are able to associate specific information with other disparate ideas despite not having unique memories that correspond to when that knowledge was stored in the first place.[16] This theory of hierarchies has also been applied to episodic memory, as in the case of work by William Brewer on the concept of autobiographical memory.[17]
Networksof various sorts play an integral part in many theories of semantic memory. Generally speaking, a network is composed of a set of nodes connected by links. The nodes may represent concepts, words, perceptual features, or nothing at all. The links may be weighted such that some are stronger than others or, equivalently, have a length such that some links take longer to traverse than others. All these features of networks have been employed in models of semantic memory.
One of the first examples of a network model of semantic memory is the teachable language comprehender (TLC).[18]In this model, each node is a word, representing a concept (likebird). Within each node is stored a set of properties (like "can fly" or "has wings") as well as links to other nodes (likechicken). A node is directly linked to those nodes of which it is either a subclass or superclass (i.e.,birdwould be connected to bothchickenandanimal). Properties are stored at the highest category level to which they apply; for example, "is yellow" would be stored withcanary, "has wings" would be stored withbird(one level up), and "can move" would be stored withanimal(another level up). Nodes may also store negations of the properties of their superordinate nodes (i.e., "NOT-can fly" would be stored with "penguin").
Processing in TLC is a form ofspreading activation.[19]When a node becomes active, that activation spreads to other nodes via the links between them. In that case, the time to answer the question "Is a chicken a bird?" is a function of how far the activation between the nodes forchickenandbirdmust spread, or the number of links between those nodes.
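A minimal sketch of such a network, using the property placements described above, makes the traversal-time prediction concrete (the particular nodes and properties are illustrative):

```python
NETWORK = {
    "canary":  {"isa": "bird",   "props": {"is yellow", "can sing"}},
    "penguin": {"isa": "bird",   "props": {"NOT-can fly", "can swim"}},
    "bird":    {"isa": "animal", "props": {"has wings", "can fly"}},
    "animal":  {"isa": None,     "props": {"can move", "has skin"}},
}

def verify(node, prop):
    """Walk up is-a links until the property (or its negation) is found.
    Traversal depth stands in for the response time that spreading
    activation over more links would predict."""
    steps = 0
    while node is not None:
        entry = NETWORK[node]
        if "NOT-" + prop in entry["props"]:
            return False, steps
        if prop in entry["props"]:
            return True, steps
        node, steps = entry["isa"], steps + 1
    return False, steps

print(verify("canary", "can fly"))   # (True, 1): stored one level up, at "bird"
print(verify("penguin", "can fly"))  # (False, 0): local negation overrides
```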
The original version of TLC did not put weights on the links between nodes. This version performed comparably to humans in many tasks, but failed to predict that people would respond faster to questions regarding more typical category instances than those involving less typical instances.[20]Allan Collinsand Quillian later updated TLC to include weighted connections to account for this effect,[21]which allowed it to explain both the familiarity effect and the typicality effect. Its biggest advantage is that it clearly explainspriming: information from memory is more likely to be retrieved if related information (the "prime") has been presented a short time before. There are still a number of memory phenomena for which TLC has no account, including why people are able to respond quickly to obviously false questions (like "is a chicken a meteor?") when the relevant nodes are very far apart in the network.[22]
TLC is an instance of a more general class of models known assemantic networks. In a semantic network, each node is to be interpreted as representing a specific concept, word, or feature; each node is a symbol. Semantic networks generally do not employ distributed representations for concepts, as may be found in aneural network. The defining feature of a semantic network is that its links are almost always directed (that is, they only point in one direction, from a base to a target) and the links come in many different types, each one standing for a particular relationship that can hold between any two nodes.[23]
Semantic networks see the most use in models ofdiscourseandlogicalcomprehension, as well as inartificial intelligence.[24]In these models, the nodes correspond to words or word stems and the links represent syntactic relations between them.[25]
Feature models view semantic categories as being composed of relatively unstructured sets of features. Thesemantic feature-comparison modeldescribes memory as being composed of feature lists for different concepts.[26]According to this view, the relations between categories would not be directly retrieved, and would be indirectly computed instead. For example, subjects might verify a sentence by comparing the feature sets that represent its subject and predicate concepts. Such computational feature-comparison models include the ones proposed by Meyer (1970),[27]Rips (1975),[28]and Smithet al.(1974).[26]
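A toy version of the comparison stage might score a sentence like "a robin is a bird" by the overlap of two unstructured feature lists (the feature sets here are made up for illustration):

```python
FEATURES = {
    "robin":  {"has wings", "has feathers", "flies", "perches in trees"},
    "bird":   {"has wings", "has feathers", "flies"},
    "mammal": {"has fur", "bears live young"},
}

def featural_similarity(a, b):
    """Unstructured feature overlap (Jaccard), standing in for the global
    comparison stage of feature-comparison models."""
    fa, fb = FEATURES[a], FEATURES[b]
    return len(fa & fb) / len(fa | fb)

print(featural_similarity("robin", "bird"))    # high -> fast "true"
print(featural_similarity("robin", "mammal"))  # low  -> fast "false"
```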
Early work in perceptual and conceptual categorization assumed that categories had critical features and that category membership could be determined by logical rules for the combination of features. More recent theories have accepted that categories may have an ill-defined or "fuzzy" structure[29]and have proposed probabilistic or global similarity models for the verification of category membership.[30]
The set ofassociationsamong a collection of items in memory is equivalent to the links between nodes in a network, where each node corresponds to a unique item in memory. Indeed, neural networks and semantic networks may be characterized as associative models of cognition. However, associations are often more clearly represented as anN×Nmatrix, whereNis the number of items in memory; each cell of the matrix corresponds to the strength of the association between the row item and the column item.
Learning of associations is generally believed to be a Hebbian process: whenever two items in memory are simultaneously active, the association between them grows stronger, making each item more likely to activate the other. See below for specific operationalizations of associative models.
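A minimal operationalization of this Hebbian rule is an N×N association matrix that is incremented for every pair of co-active items (the items and increment size are illustrative):

```python
import numpy as np

items = ["bread", "butter", "doctor", "nurse"]
idx = {w: i for i, w in enumerate(items)}
A = np.zeros((len(items), len(items)))  # A[i, j]: strength of the i-j association

def study(active, increment=1.0):
    """Hebbian update: strengthen the link between every pair of
    simultaneously active items."""
    for a in active:
        for b in active:
            if a != b:
                A[idx[a], idx[b]] += increment

study(["bread", "butter"])
study(["bread", "butter"])
study(["doctor", "nurse"])
print(A[idx["bread"], idx["butter"]], A[idx["bread"], idx["nurse"]])  # 2.0 0.0
```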
A standard model of memory that employs association in this manner is the search of associative memory (SAM) model.[31] Though SAM was originally designed to model episodic memory, its mechanisms are sufficient to support some semantic memory representations.[32] The model contains a short-term store (STS) and long-term store (LTS), where STS is a briefly activated subset of the information in the LTS. The STS has limited capacity and affects the retrieval process by limiting the amount of information that can be sampled and limiting the time the sampled subset is in an active mode. The retrieval process in LTS is cue-dependent and probabilistic: a cue initiates the retrieval process, and the information selected from memory is sampled at random, with the probability of being sampled dependent on the strength of association between the cue and the item being retrieved, so that stronger associations are more likely to be sampled first. The buffer size is defined as r rather than being a fixed number, and as items are rehearsed in the buffer the associative strengths grow linearly as a function of the total time inside the buffer.[33] In SAM, when any two items simultaneously occupy a working memory buffer, the strength of their association is incremented; items that co-occur more often are more strongly associated. Items in SAM are also associated with a specific context, where the strength of that association is determined by how long each item is present in a given context. In SAM, memories consist of a set of associations between items in memory and between items and contexts. The presence of a set of items and/or a context is more likely to evoke some subset of the items in memory. The degree to which items evoke one another—either by virtue of their shared context or their co-occurrence—is an indication of the items' semantic relatedness.
In an updated version of SAM, pre-existing semantic associations are accounted for using a semantic matrix. During an experiment, semantic associations remain fixed, reflecting the assumption that semantic associations are not significantly impacted by the episodic experience of a single experiment. The two measures used to assess semantic relatedness in this model are latent semantic analysis (LSA) and word association spaces (WAS).[34] The LSA method states that similarity between words is reflected through their co-occurrence in a local context.[35] WAS was developed by analyzing a database of free association norms, in which "words that have similar associative structures are placed in similar regions of space".[36]
The adaptive control of thought (ACT)[37](and laterACT-R(Adaptive Control of Thought-Rational)[38]) theory of cognition representsdeclarative memory(of which semantic memory is a part) as "chunks", which consist of a label, a set of defined relationships to other chunks (e.g., "this is a _", or "this has a _"), and any number of chunk-specific properties. Chunks can be mapped as a semantic network, given that each node is a chunk with its unique properties, and each link is the chunk's relationship to another chunk. In ACT, a chunk's activation decreases as a function of the time from when the chunk was created, and increases with the number of times the chunk has been retrieved from memory. Chunks can also receive activation fromGaussian noiseand from their similarity to other chunks. For example, ifchickenis used as a retrieval cue,canarywill receive activation by virtue of its similarity to the cue. When retrieving items from memory, ACT looks at the most active chunk in memory; if it is above threshold, it is retrieved; otherwise an "error of omission" has occurred and the item has been forgotten. There is also retrieval latency, which varies inversely with the amount by which the activation of the retrieved chunk exceeds the retrieval threshold. This latency is used to measure the response time of the ACT model and compare it to human performance.[39]
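The activation dynamics can be sketched as follows. The base-level equation used here is the standard ACT-R form (activation grows with the number of past retrievals and decays with their age), but the parameter values, noise level, and chunk names are illustrative, and spreading activation from similar chunks is omitted:

```python
import math, random

def base_level_activation(ages, d=0.5):
    """ACT-R-style base-level activation: decays with the age of each past
    retrieval, grows with their number. d is the decay parameter."""
    return math.log(sum(t ** -d for t in ages))

def retrieve(chunks, threshold=0.0, noise_sd=0.3):
    """Pick the most active chunk; below threshold -> error of omission."""
    best, best_act = None, float("-inf")
    for name, ages in chunks.items():
        act = base_level_activation(ages) + random.gauss(0.0, noise_sd)
        if act > best_act:
            best, best_act = name, act
    return best if best_act >= threshold else None  # None = "forgotten"

chunks = {"canary-is-bird": [2.0, 10.0, 30.0],   # ages (s) of past retrievals
          "chicken-is-bird": [120.0]}
print(retrieve(chunks))
```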
Some models characterize the acquisition of semantic information as a form ofstatistical inferencefrom a set of discrete experiences, distributed across a number ofcontexts. Though these models differ in specifics, they generally employ an (Item × Context)matrixwhere each cell represents the number of times an item in memory has occurred in a given context. Semantic information is gleaned by performing a statistical analysis of this matrix.
Many of these models bear similarity to the algorithms used insearch engines, though it is not yet clear whether they really use the same computational mechanisms.[40][41]
One of the more popular models islatent semantic analysis(LSA).[42]In LSA, a T × Dmatrixis constructed from atext corpus, where T is the number of terms in the corpus and D is the number of documents (here "context" is interpreted as "document" and only words—or word phrases—are considered as items in memory). Each cell in the matrix is then transformed according to the equation:
$$\mathbf{M}'_{t,d} = \frac{\ln\left(1+\mathbf{M}_{t,d}\right)}{-\sum_{i=0}^{D} P(i|t)\,\ln P(i|t)}$$

where $P(i|t)$ is the probability that context $i$ is active, given that item $t$ has occurred (this is obtained simply by dividing the raw frequency, $\mathbf{M}_{t,d}$, by the total of the item vector, $\sum_{i=0}^{D}\mathbf{M}_{t,i}$).
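A direct implementation of this cell transform might look like the following; note that a term occurring in only one document has zero entropy, which a real implementation would need to guard against:

```python
import numpy as np

def lsa_entropy_transform(M):
    """Apply the cell transform above to a term-by-document count matrix M
    (rows: terms, columns: documents)."""
    M = np.asarray(M, dtype=float)
    p = M / M.sum(axis=1, keepdims=True)             # P(i|t) for each term t
    with np.errstate(divide="ignore", invalid="ignore"):
        plogp = np.where(p > 0, p * np.log(p), 0.0)  # treat 0 log 0 as 0
    entropy = -plogp.sum(axis=1, keepdims=True)      # denominator, per term
    return np.log1p(M) / entropy                     # ln(1 + M) / entropy

# A term spread evenly over documents (high entropy) is down-weighted
# relative to a term concentrated in few documents (low entropy).
M = np.array([[10, 10, 10],
              [20,  8,  2]])
print(lsa_entropy_transform(M))
```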
The Hyperspace Analogue to Language (HAL) model[43][44] considers context only as the words that immediately surround a given word. HAL computes an N×N matrix, where N is the number of words in its lexicon, using a 10-word reading frame that moves incrementally through a corpus of text. Like SAM, any time two words are simultaneously in the frame, the association between them is increased, that is, the corresponding cell in the N×N matrix is incremented. The bigger the distance between the two words, the smaller the amount by which the association is incremented (specifically, $\Delta = 11 - d$, where $d$ is the distance between the two words in the frame).
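A sketch of the HAL counting scheme (the corpus and frame contents are illustrative; a real run would use the 10-word frame over a large corpus):

```python
import numpy as np

def hal_matrix(tokens, window=10):
    """Directional co-occurrence counts in the style of HAL: each pair of
    words inside the moving frame adds (window + 1) - d, where d is their
    distance (so Delta = 11 - d for the 10-word frame)."""
    vocab = sorted(set(tokens))
    idx = {w: i for i, w in enumerate(vocab)}
    A = np.zeros((len(vocab), len(vocab)))
    for i, w in enumerate(tokens):
        for d in range(1, window + 1):
            if i + d < len(tokens):
                A[idx[w], idx[tokens[i + d]]] += (window + 1) - d
    return vocab, A

vocab, A = hal_matrix("the barn cat chased the barn mouse".split())
# Each word's row (words it precedes) and column (words it follows)
# can be concatenated into its context vector.
print(A[vocab.index("barn")])
```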
Thecognitive neuroscienceof semantic memory is a controversial issue with two dominant views.
Many researchers and clinicians believe that semantic memory is stored by the samebrainsystems involved inepisodic memory, that is, the medialtemporal lobes, including thehippocampal formation.[45]In this system, the hippocampal formation "encodes" memories, or makes it possible for memories to form at all, and the neocortex stores memories after the initial encoding process is completed. Recently,[when?]new evidence has been presented in support of a more precise interpretation of this hypothesis. The hippocampal formation includes, among other structures: the hippocampus itself, theentorhinal cortex, and theperirhinal cortex. These latter two make up the parahippocampal cortices. Amnesiacs with damage to the hippocampus but some spared parahippocampal cortex were able to demonstrate some degree of intact semantic memory despite a total loss of episodic memory, which strongly suggests that information encoding leading to semantic memory does not have its physiological basis in the hippocampus.[46]
Other researchers believe thehippocampusis only involved inepisodic memoryand spatialcognition, which raises the question of where semantic memory may be located. Some believe semantic memory lives in thetemporal cortex, while others believe that it is widely distributed across all brain areas.[citation needed]
The hippocampal areas associate semantic memory with declarative memory. The left inferior prefrontal cortex and the left posterior temporal areas are other areas involved in semantic memory use. Temporal lobe damage affecting the lateral and medial cortices has been related to semantic impairments. Damage to different areas of the brain affects semantic memory differently.[47]
Neuroimaging evidence suggests that left hippocampal areas show an increase in activity during semantic memory tasks. During semantic retrieval, two regions in the right middle frontal gyrus and the area of the right inferior temporal gyrus similarly show an increase in activity.[47] Damage to areas involved in semantic memory results in various deficits, depending on the area and type of damage. For instance, Lambon Ralph, Lowe, & Rogers (2007) found that category-specific impairments can occur where patients have different knowledge deficits for one semantic category over another, depending on location and type of damage.[48] Category-specific impairments might indicate that knowledge may rely differentially upon sensory and motor properties encoded in separate areas (Farah and McClelland, 1991).[full citation needed]
Category-specific impairments can involve cortical regions where living and nonliving things are represented and where feature and conceptual relationships are represented. Depending on the damage to the semantic system, one type might be favored over the other. In many cases, there is a point where one domain is better than the other (such as the representation of living and nonliving things over feature and conceptual relationships or vice versa).[49]
Different diseases and disorders can affect the biological workings of semantic memory. A variety of studies have been done in an attempt to determine the effects on varying aspects of semantic memory. For example, Lambon Ralph, Lowe, & Rogers studied the different effects semantic dementia and herpes simplex virus encephalitis have on semantic memory. They found that semantic dementia produces a more generalized semantic impairment, while deficits in semantic memory resulting from herpes simplex virus encephalitis tend to be more category-specific.[48] Other disorders that affect semantic memory, such as Alzheimer's disease, have been observed clinically as errors in naming, recognizing, or describing objects; researchers have attributed such impairment to degradation of semantic knowledge.[50]
Various neuroimaging studies and other research point to semantic memory and episodic memory being supported by distinct areas in the brain. Other research suggests that both semantic memory and episodic memory are part of a singular declarative memory system, yet represent different sectors and parts within the greater whole. Different areas within the brain are activated depending on whether semantic or episodic memory is accessed.[51]
Category-specific semantic impairments are a neuropsychological occurrence in which an individual's ability to identify certain categories of objects is selectively impaired while other categories remain undamaged.[52] This condition can result from brain damage that is widespread, patchy, or localized. Research suggests that the temporal lobe, more specifically the structural description system, might be responsible for category-specific impairments of semantic memory disorders.[52]
Theories on category-specific semantic deficits tend to fall into two different groups based on their underlying principles. Theories based on the correlated structure principle, which states that conceptual knowledge organization in the brain is a reflection of how often an object's properties occur, assume that the brain reflects the statistical relation of object properties and how they relate to each other. Theories based on the neural structure principle, which states that the conceptual knowledge organization in the brain is controlled by representational limits imposed by the brain itself, assume that organization is internal. These theories assume that natural selective pressures have caused neural circuits specific to certain domains to be formed, and that these are dedicated to problem-solving and survival. Animals, plants, and tools are all examples of specific circuits that would be formed based on this theory.[52]
Category-specific semantic deficits tend to fall into two different categories, each of which can be spared or emphasized depending on the individual's specific deficit. The first category consists of animate objects, with animals being the most common deficit. The second category consists of inanimate objects, with two subcategories: fruits and vegetables (biological inanimate objects) and artifacts, the most common deficits. The type of deficit does not indicate a lack of conceptual knowledge associated with that category, as the visual system used to identify and describe the structure of objects functions independently of an individual's conceptual knowledge base.[52]
Most of the time, these two categories are consistent with case-study data. However, there are a few exceptions to the rule. Categories like food, body parts, and musical instruments have been shown to defy the animate/inanimate or biological/non-biological categorical division. In some cases, it has been shown that musical instruments tend to be impaired in patients with damage to the living-things category despite the fact that musical instruments fall in the non-biological/inanimate category. However, there are also cases of biological impairment where musical-instrument performance is at a normal level. Similarly, food has been shown to be impaired in those with biological category impairments. The category of food specifically can present irregularities, though, because it can be natural but also highly processed, as in a case study of an individual who had impairments for vegetables and animals while their category for food remained intact.[52]
Modality refers to a semantic category of meaning that has to do with necessity and probability expressed through language. In linguistics, certain expressions are said to have modal meanings. A few examples of this includeconditionals,auxiliary verbs, adverbs, and nouns. When looking at category-specific semantic deficits, there is another kind of modality that looks at word relationships which is much more relevant to these disorders and impairments.[53]
For category-specific impairments, there are modality-specific theories that are based on a few general predictions. These theories state that damage to the visual modality will result in a deficit of biological objects, while damage to the functional modality will result in a deficit of non-biological objects (artifacts). Modality-based theories assume that if there is damage to modality-specific knowledge, then all the categories that fall under it will be damaged. In this case, damage to the visual modality would result in a deficit for all biological objects with no deficits restricted to the more specific categories. For example, there would be no category specific semantic deficits for just "animals" or just "fruits and vegetables".[52]
Semantic dementiais a semantic memory disorder that causes patients to lose the ability to match words or images to their meanings.[54]It is fairly rare for patients with semantic dementia to develop category specific impairments, though there have been documented cases of it occurring. Typically, a more generalized semantic impairment results from dimmed semantic representations in the brain.[55]
Alzheimer's disease can present with symptoms similar to those of semantic dementia. The main difference between the two is that Alzheimer's is characterized by atrophy on both sides of the brain, while semantic dementia is characterized by loss of brain tissue in the front portion of the left temporal lobe.[54] With Alzheimer's disease in particular, interactions with semantic memory produce different patterns of deficits between patients and categories over time, caused by distorted representations in the brain.[56] For example, in the initial onset of Alzheimer's disease, patients have mild difficulty with the artifacts category. As the disease progresses, the category-specific semantic deficits progress as well, and patients see a more concrete deficit with natural categories. In other words, the deficit tends to be worse with living things as opposed to non-living things.[56]
Herpes simplexvirus encephalitis (HSVE) is a neurological disorder which causes inflammation of the brain. Early symptoms include headache, fever, and drowsiness, but over time symptoms including diminished ability to speak, memory loss, and aphasia develop. HSVE can also cause category-specific semantic deficits to occur.[57]When this happens, patients typically have temporal lobe damage that affects the medial and lateral cortex as well as the frontal lobe. Studies have also shown that patients with HSVE have a much higher incidence of category-specific semantic deficits than those with semantic dementia, though both cause a disruption of flow through the temporal lobe.[55]
Abrain lesionrefers to any abnormal tissue in or on the brain, most often caused by a trauma or infection. In one case study, a patient underwent surgery to remove an aneurysm, and the surgeon had to clip the anterior communicating artery which resulted in basal forebrain and fornix lesions. Before surgery, this patient was completely independent and had no semantic memory issues. However, after the operation and the lesions developed, the patient reported difficulty with naming and identifying objects, recognition tasks, and comprehension. The patient had a much more significant amount of trouble with objects in the living category which could be seen in the drawings of animals which the patient was asked to do and in the data from the matching and identification tasks. Every lesion is different, but in this case study researchers suggested that the semantic deficits presented themselves as a result of disconnection of the temporal lobe. The findings led to the conclusion that any type of lesion in the temporal lobe, depending on severity and location, has the potential to cause semantic deficits.[58]
The following table summarizes conclusions from theJournal of Clinical and Experimental Neuropsychology.[59]
These results give a baseline for the differences in semantic knowledge across gender for healthy subjects. Experimental data observes that males with category-specific semantic deficits are mainly impaired with fruits and vegetables while women with category specific semantic deficits are mainly impaired with animals and artifacts. It has been concluded that there are significant gender differences when it comes to category-specific semantic deficits, and that the patient will tend to be impaired in categories that had less existing knowledge to begin with.[59]
Semantic memory is also discussed in reference tomodality. Different components represent information from different sensorimotor channels. Modality specific impairments are divided into separate subsystems on the basis of input modality. Examples of different input modalities include visual, auditory, and tactile input. Modality-specific impairments are also divided into subsystems based on the type of information. Visual vs. verbal and perceptual vs. functional information are examples of information types.[60]
Semantic memory disorders fall into two groups. Semantic refractory access disorders are contrasted with semantic storage disorders according to four factors: temporal factors, response consistency, frequency, and semantic relatedness. A key feature of semantic refractory access disorders is temporal distortions, where decreases in response time to certain stimuli are noted when compared to natural response times. In access disorders there are inconsistencies in comprehending and responding to stimuli that have been presented many times. Temporal factors impact response consistency. In storage disorders, an inconsistent response to specific items is not observed. Stimulus frequency determines performance at all stages of cognition. Extreme word frequency effects are common in semantic storage disorders while in semantic refractory access disorders word frequency effects are minimal. The comparison of close and distant groups tests semantic relatedness. Close groupings have words that are related because they are drawn from the same category, such as a list of clothing types. Distant groupings contain words with broad categorical differences, such as unrelated words. Comparing close and distant groups shows that in access disorders semantic relatedness had a negative effect, which is not observed in semantic storage disorders. Category-specific and modality-specific impairments are important components in access and storage disorders of semantic memory.[61]
Positron emission tomography(PET) andfunctional magnetic resonance imaging(fMRI) allow cognitive neuroscientists to explore different hypotheses concerning the neural network organization of semantic memory. By using these neuroimaging techniques researchers can observe the brain activity of participants while they perform cognitive tasks. These tasks can include, but are not limited to, naming objects, deciding if two stimuli belong in the same object category, or matching pictures to their written or spoken names.[62]
A developing theory is that semantic memory, like perception, can be subdivided into types of visual information—color, size, form, and motion. Thompson-Schill (2003) found that the left or bilateral ventraltemporal cortexappears to be involved in retrieval of knowledge of color and form, the left lateral temporal cortex in knowledge of motion, and theparietal cortexin knowledge of size.[63]
Neuroimaging studies suggest a large, distributed network of semantic representations that are organized minimally by attribute, and perhaps additionally by category. These networks include "extensive regions of ventral (form and color knowledge) and lateral (motion knowledge) temporal cortex, parietal cortex (size knowledge), andpremotor cortex(manipulation knowledge). Other areas, such as more anterior regions of temporal cortex, may be involved in the representation of nonperceptual (e.g. verbal) conceptual knowledge, perhaps in some categorically-organized fashion."[64] | https://en.wikipedia.org/wiki/Semantic_memory |
A semantic network, or frame network, is a knowledge base that represents semantic relations between concepts in a network. This is often used as a form of knowledge representation. It is a directed or undirected graph consisting of vertices, which represent concepts, and edges, which represent semantic relations between concepts,[1] mapping or connecting semantic fields. A semantic network may be instantiated as, for example, a graph database or a concept map. Typical standardized semantic networks are expressed as semantic triples.
Semantic networks are used inneurolinguisticsandnatural language processingapplications such assemantic parsing[2]andword-sense disambiguation.[3]Semantic networks can also be used as a method to analyze large texts and identify the main themes and topics (e.g., ofsocial mediaposts), to reveal biases (e.g., in news coverage), or even to map an entire research field.[4]
The use of semantic networks in logic, and of directed acyclic graphs as a mnemonic tool, dates back centuries; the earliest documented use is the Greek philosopher Porphyry's commentary on Aristotle's categories in the third century AD.
Incomputing history, "Semantic Nets" for thepropositional calculuswere firstimplementedforcomputersbyRichard H. Richensof theCambridge Language Research Unitin 1956 as an "interlingua" formachine translationofnatural languages,[5]although the importance of this work and the Cambridge Language Research Unit was only belatedly realized.
Semantic networks were also independently implemented by Robert F. Simmons[6]and Sheldon Klein, using thefirst-order predicate calculusas a base, after being inspired by a demonstration ofVictor Yngve. The "line of research was originated by the first President of theAssociation for Computational Linguistics, Victor Yngve, who in 1960 had published descriptions ofalgorithmsfor using aphrase structure grammarto generate syntactically well-formed nonsense sentences. Sheldon Klein and I about 1962–1964 were fascinated by the technique and generalized it to a method for controlling the sense of what was generated by respecting the semantic dependencies of words as they occurred in text."[7]Other researchers, most notablyM. Ross Quillian[8]and others atSystem Development Corporationhelped contribute to their work in the early 1960s as part of the SYNTHEX project. It's these publications at System Development Corporation that most modern derivatives of the term "semantic network" cite as their background. Later prominent works were done byAllan M. Collinsand Quillian (e.g., Collins and Quillian;[9][10]Collins and Loftus[11]Quillian[12][13][14][15]). Still later in 2006, Hermann Helbig fully describedMultiNet.[16]
In the late 1980s, two universities in theNetherlands,GroningenandTwente, jointly began a project calledKnowledge Graphs, which are semantic networks but with the added constraint that edges are restricted to be from a limited set of possible relations, to facilitatealgebras on the graph.[17]In the subsequent decades, the distinction between semantic networks andknowledge graphswas blurred.[18][19]In 2012,Googlegave their knowledge graph the nameKnowledge Graph.
The semantic link network was systematically studied as a semantic social networking method. Its basic model consists of semantic nodes, semantic links between nodes, and a semantic space that defines the semantics of nodes and links and the reasoning rules on semantic links. The systematic theory and model were published in 2004.[20] This research direction can be traced to the definition of inheritance rules for efficient model retrieval in 1998[21] and the Active Document Framework (ADF).[22] Since 2003, research has developed toward social semantic networking.[23] This work is a systematic innovation for the age of the World Wide Web and global social networking, rather than an application or simple extension of the Semantic Net (Network); its purpose and scope are different from those of the Semantic Net.[24] The rules for reasoning and evolution and the automatic discovery of implicit links play an important role in the semantic link network.[25][26] More recently it has been developed to support Cyber-Physical-Social Intelligence[27] and was used for creating a general summarization method.[28] The self-organised semantic link network was integrated with a multi-dimensional category space to form a semantic space that supports advanced applications with multi-dimensional abstractions and self-organised semantic links.[29][30] It has been verified through text summarisation applications that semantic link networks play an important role in understanding and representation.[31][32] The semantic link network has been extended from cyberspace to cyber-physical-social space. Competition and symbiosis relations, as well as their roles in an evolving society, were studied under the emerging topic of Cyber-Physical-Social Intelligence.[33]
More specialized forms of semantic networks have been created for specific uses. For example, in 2008, Fawsy Bendeck's PhD thesis formalized the Semantic Similarity Network (SSN), which contains specialized relationships and propagation algorithms to simplify the semantic similarity representation and calculations.[34]
A semantic network is used when one has knowledge that is best understood as a set of concepts that are related to one another.
Most semantic networks are cognitively based. They consist of arcs (spokes) and nodes (hubs) which can be organized into a taxonomic hierarchy. Different semantic networks can also be connected by bridge nodes. Semantic networks contributed to the ideas ofspreading activation,inheritance, and nodes as proto-objects.
One process of constructing semantic networks, known also asco-occurrence networks, includes identifying keywords in the text, calculating the frequencies of co-occurrences, and analyzing the networks to find central words and clusters of themes in the network.[35]
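A minimal sketch of this process, skipping the keyword-filtering step (e.g. stopword removal) that a real pipeline would apply first:

```python
from collections import Counter
from itertools import combinations

def cooccurrence_network(sentences):
    """Nodes are words; edge weights count how often two words co-occur
    in the same sentence."""
    edges = Counter()
    for sentence in sentences:
        words = sorted(set(sentence.lower().split()))
        for pair in combinations(words, 2):
            edges[pair] += 1
    return edges

net = cooccurrence_network(["the pig ate at the farm",
                            "the pig slept in the mud",
                            "mud covered the farm"])
# Central words and theme clusters can then be read off the network,
# e.g. by degree: how many distinct neighbours each word has.
degree = Counter()
for (a, b), weight in net.items():
    degree[a] += 1
    degree[b] += 1
print(degree.most_common(3))
```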
In the field oflinguistics, semantic networks represent how the human mind handles associated concepts. Typically, concepts in a semantic network can have one of two different relationships: either semantic or associative.
If semantic in relation, the two concepts are linked by any of the following semantic relationships:synonymy,antonymy,hypernymy,hyponymy,holonymy,meronymy,metonymy, orpolysemy. These are not the only semantic relationships, but some of the most common.
If associative in relation, the two concepts are linked based on how frequently they occur together. These associations are accidental, meaning that nothing about their individual meanings requires them to be associated with one another, only that they typically are. Examples of this would be pig and farm, pig and trough, or pig and mud. While nothing about the meaning of pig forces it to be associated with farms, as pigs can be wild, the fact that pigs are so frequently found on farms creates an accidental associative relationship. These thematic relationships are common within semantic networks and are notable results in free association tests.
As the initial word is given, activation of the most closely related concepts begins, spreading outward to less closely associated concepts. An example of this would be the initial word pig prompting mammal, then animal, and then breathes. This example shows that taxonomic relationships are inherent within semantic networks. The most closely related concepts typically share semantic features, which are determinants of semantic similarity scores. Words with higher similarity scores are more closely related, and thus have a higher probability of being a close word in the semantic network.
These relationships can be suggested into the brain throughpriming, where previous examples of the same relationship are shown before the target word is shown. The effect of priming on a semantic network linking can be seen through the speed of the reaction time to the word. Priming can help to reveal the structure of a semantic network and which words are most closely associated with the original word.
Disruption of a semantic network can lead to a semantic deficit (not to be confused with semantic dementia).
There exists physical manifestation of semantic relationships in the brain as well. Category-specific semantic circuits show that words belonging to different categories are processed in circuits located in different places throughout the brain. For example, the semantic circuits for a word associated with the face or mouth (such as lick) are located in a different place in the brain than those for a word associated with the leg or foot (such as kick). This is a primary result of a 2013 study published by Friedemann Pulvermüller[citation needed]. These semantic circuits are directly tied to their sensorimotor areas of the brain. This is known as embodied semantics, a subtopic of embodied language processing.
If brain damage occurs, the normal processing of semantic networks could be disrupted, leading to preference into what kind of relationships dominate the semantic network in the mind.
The following code shows an example of a semantic network in theLisp programming languageusing anassociation list.
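A small network of this kind might be written as follows; the particular concepts and relations are illustrative, echoing the bird examples used earlier in this article:

```lisp
;; A small semantic network as an association list: each entry pairs a
;; concept with a list of relation/value links.
(defparameter *network*
  '((canary (is-a bird) (color yellow) (activity sing))
    (penguin (is-a bird) (movement swim))
    (bird (is-a animal) (has-part wings))
    (animal (can move))))
```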
To extract all the information about the "canary" type, one would use theassocfunction with a key of "canary".[36]
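Continuing the illustrative list above, the lookup returns the node's entire entry:

```lisp
;; assoc finds the entry whose head matches the given key.
(assoc 'canary *network*)
;; => (CANARY (IS-A BIRD) (COLOR YELLOW) (ACTIVITY SING))
```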
An example of a semantic network isWordNet, alexicaldatabase ofEnglish. It groups English words into sets of synonyms calledsynsets, provides short, general definitions, and records the various semantic relations between these synonym sets. Some of the most common semantic relations defined aremeronymy(A is a meronym of B if A is part of B),holonymy(B is a holonym of A if B contains A),hyponymy(ortroponymy) (A is subordinate of B; A is kind of B),hypernymy(A is superordinate of B),synonymy(A denotes the same as B) andantonymy(A denotes the opposite of B).
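These relations can be queried programmatically; assuming the NLTK interface to WordNet (with the wordnet corpus downloaded), a short session might look like this:

```python
from nltk.corpus import wordnet as wn  # requires nltk.download("wordnet")

dog = wn.synsets("dog")[0]       # first synset ("sense") of the word "dog"
print(dog.definition())          # the short, general definition
print(dog.hypernyms())           # superordinates of dog
print(dog.hyponyms()[:3])        # a few subordinates (kinds of dog)
print(dog.part_meronyms())       # parts of a dog
```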
WordNet properties have been studied from a network theory perspective and compared to other semantic networks created from Roget's Thesaurus and word association tasks. From this perspective, all three have a small-world structure.[37]
It is also possible to represent logical descriptions using semantic networks such as theexistential graphsofCharles Sanders Peirceor the relatedconceptual graphsofJohn F. Sowa.[1]These have expressive power equal to or exceeding standardfirst-order predicate logic. Unlike WordNet or other lexical or browsing networks, semantic networks using these representations can be used for reliable automated logical deduction. Some automated reasoners exploit the graph-theoretic features of the networks during processing.
Other examples of semantic networks areGellishmodels.Gellish Englishwith itsGellish English dictionary, is aformal languagethat is defined as a network of relations between concepts and names of concepts. Gellish English is a formal subset of natural English, just as Gellish Dutch is a formal subset of Dutch, whereas multiple languages share the same concepts. Other Gellish networks consist of knowledge models and information models that are expressed in the Gellish language. A Gellish network is a network of (binary) relations between things. Each relation in the network is an expression of a fact that is classified by a relation type. Each relation type itself is a concept that is defined in the Gellish language dictionary. Each related thing is either a concept or an individual thing that is classified by a concept. The definitions of concepts are created in the form of definition models (definition networks) that together form a Gellish Dictionary. A Gellish network can be documented in a Gellish database and is computer interpretable.
SciCrunchis a collaboratively edited knowledge base for scientific resources. It provides unambiguous identifiers (Research Resource IDentifiers or RRIDs) for software, lab tools etc. and it also provides options to create links between RRIDs and from communities.
Another example of semantic networks, based on category theory, is ologs. Here each type is an object, representing a set of things, and each arrow is a morphism, representing a function. Commutative diagrams also are prescribed to constrain the semantics.
In the social sciences people sometimes use the term semantic network to refer toco-occurrence networks.[38][39]The basic idea is that words that co-occur in a unit of text, e.g. a sentence, are semantically related to one another. Ties based on co-occurrence can then be used to construct semantic networks. This process includes identifying keywords in the text, constructing co-occurrence networks, and analyzing the networks to find central words and clusters of themes in the network. It is a particularly useful method to analyze large text andbig data.[40]
There are also elaborate types of semantic networks connected with corresponding sets of software tools used forlexicalknowledge engineering, like the Semantic Network Processing System (SNePS) of Stuart C. Shapiro[41]or theMultiNetparadigm of Hermann Helbig,[42]especially suited for the semantic representation of natural language expressions and used in severalNLPapplications.
Semantic networks are used in specialized information retrieval tasks, such asplagiarism detection. They provide information on hierarchical relations in order to employsemantic compressionto reduce language diversity and enable the system to match word meanings, independently from sets of words used.
The Knowledge Graph proposed by Google in 2012 is an application of semantic networks to search engines.
Modeling multi-relational data like semantic networks in low-dimensional spaces through forms ofembeddinghas benefits in expressing entity relationships as well as extracting relations from mediums like text. There are many approaches to learning these embeddings, notably using Bayesian clustering frameworks or energy-based frameworks, and more recently, TransE[43](NeurIPS2013). Applications of embedding knowledge base data includeSocial network analysisandRelationship extraction. | https://en.wikipedia.org/wiki/Semantic_network |
Visual indexing theory, also known asFINST theory, is a theory of earlyvisual perceptiondeveloped byZenon Pylyshynin the 1980s. It proposes apre-attentivemechanism (a ‘FINST’) whose function is to individuate salient elements of a visual scene, and track their locations across space and time. Developed in response to what Pylyshyn viewed as limitations of prominent theories of visual perception at the time, visual indexing theory is supported by several lines of empirical evidence.
'FINST' abbreviates ‘FINgers of INSTantiation’. Pylyshyn describes visual indexing theory in terms of this analogy.[1]Imagine, he proposes, placing your fingers on five separate objects in a scene. As those objects move about, your fingers stay in respective contact with each of them, allowing you to continually track their whereabouts and positions relative to one another. While you may not be able to discern in this way any detailed information about the items themselves, the presence of your fingers provides a reference via which you can access such information at any time, without having to relocate the objects within the scene. Furthermore, the objects' continuity over time is inherently maintained — you know the object referenced by your pinky finger at timetis the same object as that referenced by your pinky att−1, regardless of any spatial transformations it has undergone, because your finger has remained in continuous contact with it.
Visual indexing theory holds that the visual perceptual system works in an analogous way. FINSTs behave like the fingers in the above scenario, pointing to and tracking the location of various objects in visual space. Like fingers, FINSTs are:
FINSTs operate pre-attentively — that is, before attention is drawn or directed to an object in the visual field. Their primary task is to individuate certain salient features in a scene, conceptually distinguishing these from other stimuli. Under visual indexing theory, FINSTing is a necessary precondition for higher level perceptual processing.
Pylyshyn suggests that what FINSTs operate upon in a direct sense is 'feature clusters' on the retina, though a precise set of criteria for FINST allocation has not been defined. "The question of how FINSTs are assigned in the first instance remains open, although it seems reasonable that they are assigned primarily in a stimulus-driven manner, perhaps by the activation of locally distinct properties of the stimulus – particularly by new features entering the visual field."[1]
FINSTs are subject to resource constraints. Up to around five FINSTs can be allocated at any given time, and these provide the visual system with information about the relative locations of FINSTed objects with respect to one another.
Once an object has been individuated, its FINST then continues to index that particular feature cluster as it moves across the retina. "Thus distal features which are currently projected onto the retina can be indexed through the FINST mechanism in a way that is transparent to their retinal location."[1] By continually tracking an object's whereabouts as it moves about, FINSTs perform the additional function of maintaining the continuity of objects over time.
Under visual indexing theory, an object cannot be attended to until it has first been indexed. Once it has been allocated a FINST, the index provides the visual system with rapid and preferential access to the object for further processing of features such as colour, texture and shape.
While in this sense FINSTs provide the means for higher-level processing to occur, FINSTs themselves are "opaque to the properties of the objects to which they refer."[1] FINSTs do not directly convey any information about an indexed object, beyond its position at a given instant. "Thus, on initial contact, objects are not interpreted as belonging to a certain type or having certain properties; in other words, objects are initially detected without being conceptualised."[2] Like the fingers described above, FINSTs' role in visual perception is purely an indexical one.
Visual indexing theory was created partly in response to what Pylyshyn viewed as limitations of traditional theories of perception and cognition — in particular, the spotlight model of attention, and the descriptive view of visual representation.[1][3]
The traditional view of visual perception holds that attention is fundamental to visual processing. In terms of an analogy offered by Posner, Snyder and Davidson (1980): "Attention can be likened to a spotlight that enhances the efficiency of detection of events within its beam".[4] This spotlight can be controlled volitionally, or drawn involuntarily to salient elements of a scene,[5] but a key characteristic is that it can only be deployed to one location at a time. In 1986, Eriksen and St. James conducted a series of experiments which suggested that the spotlight of attention comes equipped with a zoom-lens. The zoom-lens allows the size of the area of attentional focus to be expanded (but due to a fixed limit on available attentional resources, only at the expense of processing efficiency).[6]
According to Pylyshyn, the spotlight/zoom-lens model cannot tell the complete story of visual perception. He argues that a pre-attentive mechanism is needed to individuate objects upon which a spotlight of attention could be directed in the first place. Furthermore, results of multiple object tracking studies (discussed below) are "incompatible with the proposal that items are accessed by moving around a single spotlight of attention."[7] Visual indexing theory addresses these limitations.
According to the classical view of mental representation, we perceive objects according to the conceptual descriptions they fall under. It is these descriptions, and not the raw content of our visual perceptions, that allow us to construct meaningful representations of the world around us, and determine appropriate courses of action. In Pylyshyn's words, "it is not the bright spot in the sky that determines which way we set out when we are lost, but the fact that we see it (or represent it) as the North Star".[3] The method by which we come to match a percept to its appropriate description has been the subject of ongoing investigation (for example, the way in which parts of objects are combined to represent their whole),[8] but there is a general consensus that descriptions are fundamental in this way to visual perception.[3]
Like the spotlight model of attention, Pylyshyn takes the descriptive model of visual representation to be incomplete. One issue is that the theory does not account for demonstrative, or indexical, references. "For example, in the presence of a visual stimulus, we can think thoughts such as 'that is red' where the term 'that' refers to something we have picked out in our field of view without reference to what category it falls under or what properties it may have."[3] Relatedly, the theory has problems accounting for how we are able to pick out a single token among several objects of the same type. For example, I may refer to a particular can of soup on a supermarket shelf sitting among a number of identical cans that answer to the same description. In both cases, a spatiotemporal reference is required in order to pick out the object within the scene, independently of any description that object may fall under. FINSTs, Pylyshyn suggests, provide just such a reference.
A deeper problem for this view, according to Pylyshyn, is that it cannot account for objects' continuity over time. "An individual remains the same individual when it moves about or when it changes any (or even all) of its visible properties."[3] If we refer to objects solely in terms of their conceptual descriptions, it is not clear how the visual system maintains an object's identity when those descriptions change. "The visual system needs to be able to pick out a particular individual regardless of what properties the individual happens to have at any instant of time."[3] Pylyshyn argues that FINSTs' detachment from the descriptions of the objects they reference overcomes this problem.
Three main types of experiments provide data that support visual indexing theory. Multiple tracking studies demonstrate that more than one object can be tracked within the visual field simultaneously, subitizing studies suggest the existence of a mechanism that allows small numbers of objects to be efficiently enumerated, and subset selection studies show that certain elements of a visual scene can be processed independently of other items. In all three cases, FINSTs provide an explanation of the phenomenon observed.[7][2]
Multiple object tracking describes the ability of human subjects to simultaneously track the movement of up to five target objects as they move across the visual field, usually in the presence of identical moving distractor objects of equal or greater number. The phenomenon was first demonstrated by Pylyshyn and Storm in 1988,[9] and their results have been widely replicated (see Pylyshyn, 2007 for a summary).[10]
Experimental setup
In a typical experiment, a number of identical objects (up to 10) are initially shown on a screen. Some subset of these objects (up to five) are then designated as targets — usually by flashing or changing colour momentarily — before returning to being indistinguishable from the non-target objects. All of the objects then proceed to move randomly around the screen for between 7 and 15 seconds. The subject's task is to identify, once the objects have stopped moving, which objects were the targets. Successful completion of the task thus requires subjects to continually track each of the target objects as they move, and ignore the distractors.
Results
Under such experimental conditions, it has been repeatedly found that subjects can track multiple moving objects simultaneously.[7] In addition to consistently observing a high rate of successful target tracking, researchers have shown that subjects can:
Two defining properties of FINSTs are their plurality, and their capacity to track indexed objects as they move around a visually cluttered scene. "Thus multiple-item tracking studies provide strong support for one of the more counterintuitive predictions of FINST theory — namely, that the identity of items can be maintained by the visual system even when the items are visually indiscriminable from their neighbors and when their locations are constantly changing."[7]
Subitizing refers to the rapid and accurate enumeration of small numbers of items. Numerous studies (dating back to Jevons in 1871)[19] have demonstrated that subjects can very quickly and accurately report the quantity of objects randomly presented on a display, when they number fewer than around five. While larger quantities require subjects to count or estimate — at great expense of time and accuracy — it seems that a different enumeration method is employed in these low-quantity cases. In 1949, Kaufman, Lord, Reese and Volkmann coined the term 'subitizing' to describe the phenomenon.[20]
In 2023, a study of single neuron recordings in the medial temporal lobe of neurosurgical patients judging numbers reported evidence of two separate neural mechanisms, with a boundary in neuronal coding around the number 4 that correlates with the behavioural transition from subitizing to estimation, supporting the old observation of Jevons.[21][22]
Experimental setup
In a typical experiment, subjects are briefly shown (for around 100 ms) a screen containing a number of randomly arranged objects. The subjects' task is to report the number of items shown, which can range between one and several hundred per trial.
Results
When the number of items to be enumerated is within the subitizing range, each additional item on the display adds around 40–120 ms to the total response time. Beyond the subitizing range, each additional item adds 250–350 ms to the total response time (so that when the number of items presented is plotted against reaction time, an 'elbow'-shaped curve results). Researchers generally take this to be evidence of there being (at least) two different enumeration methods at work — one for small numbers, and another for larger numbers.[23]
Trick and Pylyshyn (1993) argue that "subitizing can be explained only by virtue of a limited-capacity mechanism that operates after the spatially parallel processes of feature detection and grouping but before the serial processes of spatial attention."[23]In other words, by a mechanism such as a FINST.
A key assumption of visual indexing theory is that once an item entering the visual field has been indexed, that index provides the subject with rapid subsequent access to the object, which bypasses any higher level cognitive processes.[2] In order to test this hypothesis, Burkell and Pylyshyn (1997) designed a series of experiments to see whether subjects could effectively index a subset of items on a display, such that a search task could be undertaken with respect to only the selected items.[24]
Experimental setup
Burkell and Pylyshyn's experiments took advantage of a well-documented distinction between two types of visual search: feature search, in which the target differs from all distractors along a single dimension (such as colour) and appears to "pop out" of the display regardless of the number of distractors; and conjunction search, in which the target is defined by a combination of dimensions, so that items must typically be examined serially and response times grow with the number of distractors.
The experimental setup is similar to a typical conjunction search task: 15 items are presented on a screen, each of which has one of two colours, and one of two orientations. Three of these items are designated as the subset by late onset (appearing after the others). The subset contains the target item and two distractors.
The key independent variable in this experiment is the nature of the subset selected. In some cases, the subset comprises a feature search set — i.e. the target differs from the two distractors in one dimension only. In other cases, the subset is equivalent to a conjunction search, with the target differing from the distractors in both dimensions. Because the total display contains items that differ from the target in both dimensions, if subjects are quicker to respond to the feature search subsets, this would suggest they had taken advantage of the "pop out" method of target identification. This in turn would mean that they had applied their visual search to the subsetted items only.
Results
Burkell and Pylyshyn found that subjects were indeed quicker to identify the target object in the subset feature search condition than they were in the subset conjunction search condition, suggesting that the subsetted objects were successfully prioritised. In other words, the subsets "could, in a number of important ways, be accessed by the visual system as though they were the only items present".[7] Furthermore, the subsetted objects' particular positions within the display made no difference to subjects' ability to search across them — even when they were distally located.[24] Watson and Humphreys (1997) reported similar findings.[26] These results are consistent with the predictions of visual indexing theory: FINSTs provide a possible mechanism by which the subsets were prioritised. | https://en.wikipedia.org/wiki/Visual_indexing_theory
Concept mining is an activity that results in the extraction of concepts from artifacts. Solutions to the task typically involve aspects of artificial intelligence and statistics, such as data mining and text mining.[1][2] Because artifacts are typically a loosely structured sequence of words and other symbols (rather than concepts), the problem is nontrivial, but it can provide powerful insights into the meaning, provenance and similarity of documents.
Traditionally, the conversion of words to concepts has been performed using a thesaurus,[3] and for computational techniques the tendency is to do the same. The thesauri used are either specially created for the task, or a pre-existing language model, usually related to Princeton's WordNet.
The mappings of words to concepts[4] are often ambiguous. Typically each word in a given language will relate to several possible concepts. Humans use context to disambiguate the various meanings of a given piece of text, whereas available machine translation systems cannot easily infer context.
For the purposes of concept mining, however, these ambiguities tend to be less important than they are with machine translation, for in large documents the ambiguities tend to even out, much as is the case with text mining.
There are many techniques for disambiguation that may be used. Examples are linguistic analysis of the text and the use of word and concept association frequency information that may be inferred from large text corpora. Recently, techniques based on semantic similarity between the possible concepts and the context have appeared and gained interest in the scientific community.
One of the spin-offs of calculating document statistics in the concept domain, rather than the word domain, is that concepts form natural tree structures based on hypernymy and meronymy. These structures can be used to generate simple tree membership statistics that can be used to locate any document in a Euclidean concept space. If the size of a document is also considered as another dimension of this space, then an extremely efficient indexing system can be created. This technique is currently in commercial use for locating similar legal documents in a 2.5 million document corpus.
Standard numeric clustering techniques may be used in "concept space" as described above to locate and index documents by the inferred topic. These are numerically far more efficient than their text mining cousins, and tend to behave more intuitively, in that they map better to the similarity measures a human would generate. | https://en.wikipedia.org/wiki/Concept_mining
In bioinformatics, k-mers are substrings of length k contained within a biological sequence. Primarily used within the context of computational genomics and sequence analysis, in which k-mers are composed of nucleotides (i.e. A, T, G, and C), k-mers are used to assemble DNA sequences,[1] improve heterologous gene expression,[2][3] identify species in metagenomic samples,[4] and create attenuated vaccines.[5] Usually, the term k-mer refers to all of a sequence's subsequences of length k, such that the sequence AGAT would have four monomers (A, G, A, and T), three 2-mers (AG, GA, AT), two 3-mers (AGA and GAT) and one 4-mer (AGAT). More generally, a sequence of length L will have L − k + 1 k-mers, and there exist n^k total possible k-mers, where n is the number of possible monomers (e.g. four in the case of DNA).
k-mers are simply length-k subsequences. For example, all the possible k-mers of a DNA sequence are shown below:
A method of visualizing k-mers, the k-mer spectrum, shows the multiplicity of each k-mer in a sequence versus the number of k-mers with that multiplicity.[6] The number of modes in a k-mer spectrum for a species's genome varies, with most species having a unimodal distribution.[7] However, all mammals have a multimodal distribution. The number of modes within a k-mer spectrum can vary between regions of genomes as well: humans have unimodal k-mer spectra in 5' UTRs and exons but multimodal spectra in 3' UTRs and introns.
The frequency of k-mer usage is affected by numerous forces, working at multiple levels, which are often in conflict. It is important to note that k-mers for higher values of k are affected by the forces affecting lower values of k as well. For example, if the 1-mer A does not occur in a sequence, none of the 2-mers containing A (AA, AT, AG, and AC) will occur either, thereby linking the effects of the different forces.
When k = 1, there are four DNA k-mers, i.e., A, T, G, and C. At the molecular level, there are three hydrogen bonds between G and C, whereas there are only two between A and T. GC bonds, as a result of the extra hydrogen bond (and stronger stacking interactions), are more thermally stable than AT bonds.[8] Mammals and birds have a higher ratio of Gs and Cs to As and Ts (GC-content), which led to the hypothesis that thermal stability was a driving factor of GC-content variation.[9] However, while promising, this hypothesis did not hold up under scrutiny: analysis among a variety of prokaryotes showed no evidence of GC-content correlating with temperature as the thermal adaptation hypothesis would predict.[10] Indeed, if natural selection were to be the driving force behind GC-content variation, that would require that single nucleotide changes, which are often silent, alter the fitness of an organism.[11]
Rather, current evidence suggests that GC-biased gene conversion (gBGC) is a driving factor behind variation in GC content.[11] gBGC is a process that occurs during recombination which replaces As and Ts with Gs and Cs.[12] This process, though distinct from natural selection, can nevertheless exert selective pressure on DNA biased towards GC replacements being fixed in the genome. gBGC can therefore be seen as an "impostor" of natural selection. As would be expected, GC content is greater at sites experiencing greater recombination.[13] Furthermore, organisms with higher rates of recombination exhibit higher GC content, in keeping with the gBGC hypothesis's predicted effects.[14] Interestingly, gBGC does not appear to be limited to eukaryotes.[15] Asexual organisms such as bacteria and archaea also experience recombination by means of gene conversion, a process of homologous sequence replacement resulting in multiple identical sequences throughout the genome.[16] That recombination is able to drive up GC content in all domains of life suggests that gBGC is universally conserved. Whether gBGC is a (mostly) neutral byproduct of the molecular machinery of life or is itself under selection remains to be determined. The exact mechanism and evolutionary advantage or disadvantage of gBGC is currently unknown.[17]
Despite the comparatively large body of literature discussing GC-content biases, relatively little has been written about dinucleotide biases. What is known is that these dinucleotide biases are relatively constant throughout the genome, unlike GC-content, which, as seen above, can vary considerably.[18] This is an important insight that must not be overlooked. If dinucleotide bias were subject to pressures resulting from translation, then there would be differing patterns of dinucleotide bias in coding and noncoding regions, driven by some dinucleotides' reduced translational efficiency.[19] Because there is not, it can therefore be inferred that the forces modulating dinucleotide bias are independent of translation. Further evidence against translational pressures affecting dinucleotide bias is the fact that the dinucleotide biases of viruses, which rely heavily on translational efficiency, are shaped by their viral family more than by their hosts, whose translational machinery the viruses hijack.[20]
Counter to gBGC's increasing of GC-content is CG suppression, which reduces the frequency of CG 2-mers due to deamination of methylated CG dinucleotides, resulting in substitutions of CGs with TGs, thereby reducing the GC-content.[21] This interaction highlights the interrelationship between the forces affecting k-mers for varying values of k.
One interesting fact about dinucleotide bias is that it can serve as a "distance" measurement between phylogenetically similar genomes. The genomes of pairs of organisms that are closely related share more similar dinucleotide biases than those of more distantly related organisms.[18]
There are twenty natural amino acids that are used to build the proteins that DNA encodes. However, there are only four nucleotides. Therefore, there cannot be a one-to-one correspondence between nucleotides and amino acids. Similarly, there are 16 2-mers, which is also not enough to unambiguously represent every amino acid. However, there are 64 distinct 3-mers in DNA, which is enough to uniquely represent each amino acid. These non-overlapping 3-mers are called codons. While each codon only maps to one amino acid, each amino acid can be represented by multiple codons. Thus, the same amino acid sequence can have multiple DNA representations. Interestingly, each codon for an amino acid is not used in equal proportions.[22] This is called codon-usage bias (CUB). When k = 3, a distinction must be made between true 3-mer frequency and CUB. For example, the sequence ATGGCA has four 3-mer words within it (ATG, TGG, GGC, and GCA) while only containing two codons (ATG and GCA). However, CUB is a major driving factor of 3-mer usage bias (accounting for up to ⅓ of it, since ⅓ of the k-mers in a coding region are codons) and will be the main focus of this section.
The exact cause of variation between the frequencies of various codons is not fully understood. It is known that codon preference is correlated with tRNA abundances, with codons matching more abundant tRNAs being correspondingly more frequent,[22] and that more highly expressed proteins exhibit greater CUB.[23] This suggests that selection for translational efficiency or accuracy is the driving force behind CUB variation.
Similar to the effect seen in dinucleotide bias, the tetranucleotide biases of phylogenetically similar organisms are more similar than those of less closely related organisms.[4] The exact cause of variation in tetranucleotide bias is not well understood, but it has been hypothesized to be the result of the maintenance of genetic stability at the molecular level.[24]
The frequency of a set of k-mers in a species's genome, in a genomic region, or in a class of sequences can be used as a "signature" of the underlying sequence. Comparing these frequencies is computationally easier than sequence alignment and is an important method in alignment-free sequence analysis. It can also be used as a first stage analysis before an alignment.
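As an illustration of such signature comparison, here is a minimal Python sketch (not from the source; the function names and the choice of cosine similarity are ours) that compares two sequences by their k-mer frequency vectors, with no alignment step:

```python
from collections import Counter
from math import sqrt

def kmer_counts(seq, k):
    """Count all overlapping k-mers in seq."""
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

def cosine_similarity(seq_a, seq_b, k=4):
    """Alignment-free similarity of two sequences via their k-mer frequency vectors."""
    a, b = kmer_counts(seq_a, k), kmer_counts(seq_b, k)
    dot = sum(a[m] * b[m] for m in a.keys() & b.keys())
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

print(cosine_similarity("AGATCGATCGA", "AGATCGATGGA", k=3))
```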
In sequence assembly, k-mers are used during the construction of De Bruijn graphs.[25][26] In order to create a De Bruijn graph, the strings stored in each edge, with length L, must overlap another string in another edge by L − 1 in order to create a vertex. Reads generated from next-generation sequencing will typically have different read lengths. For example, reads by Illumina's sequencing technology capture reads of 100-mers. However, the problem with the sequencing is that only small fractions out of all the possible 100-mers that are present in the genome are actually generated. This is due to read errors, but more importantly, simple coverage holes that occur during sequencing. The problem is that these small fractions of the possible k-mers violate the key assumption of De Bruijn graphs that all the k-mer reads must overlap their adjoining k-mers in the genome by k − 1 (which cannot occur when all the possible k-mers are not present).
The solution to this problem is to break these k-mer sized reads into smaller k-mers, such that the resulting smaller k-mers will represent all the possible k-mers of that smaller size that are present in the genome.[27] Furthermore, splitting the k-mers into smaller sizes also helps alleviate the problem of different initial read lengths. In this example, the five reads do not account for all the possible 7-mers of the genome, and as such, a De Bruijn graph cannot be created. But, when they are split into 4-mers, the resultant subsequences are enough to reconstruct the genome using a De Bruijn graph.
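A minimal Python sketch of the construction just described (an illustration under simplified assumptions, not a production assembler): each k-mer becomes an edge connecting its (k − 1)-mer prefix and suffix vertices.

```python
from collections import defaultdict

def de_bruijn_edges(reads, k):
    """Build De Bruijn graph edges: each k-mer links its (k-1)-mer prefix to its suffix."""
    graph = defaultdict(list)
    for read in reads:
        for i in range(len(read) - k + 1):
            kmer = read[i:i + k]
            graph[kmer[:-1]].append(kmer[1:])
    return graph

# Reads split into 4-mers: vertices are 3-mers, edges are 4-mers.
reads = ["AGATTC", "GATTCG", "TTCGAA"]
for prefix, suffixes in de_bruijn_edges(reads, 4).items():
    print(prefix, "->", suffixes)
```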
Beyond being used directly for sequence assembly, k-mers can also be used to detect genome mis-assembly by identifying k-mers that are overrepresented, which suggest the presence of repeated DNA sequences that have been combined.[28] In addition, k-mers are also used to detect bacterial contamination during eukaryotic genome assembly, an approach borrowed from the field of metagenomics.[29][30]
The choice of the k-mer size has many different effects on the sequence assembly. These effects vary greatly between lower sized and larger sized k-mers. Therefore, an understanding of the different k-mer sizes must be achieved in order to choose a suitable size that balances the effects. The effects of the sizes are outlined below.
With respect to disease, dinucleotide bias has been applied to the detection of genetic islands associated with pathogenicity.[11] Prior work has also shown that tetranucleotide biases are able to effectively detect horizontal gene transfer in both prokaryotes[32] and eukaryotes.[33]
Another application of k-mers is in genomics-based taxonomy. For example, GC-content has been used to distinguish between species of Erwinia with moderate success.[34] Similar to the direct use of GC-content for taxonomic purposes is the use of Tm, the melting temperature of DNA. Because GC bonds are more thermally stable, sequences with higher GC content exhibit a higher Tm. In 1987, the Ad Hoc Committee on Reconciliation of Approaches to Bacterial Systematics proposed the use of ΔTm as a factor in determining species boundaries as part of the phylogenetic species concept, though this proposal does not appear to have gained traction within the scientific community.[35]
Other applications within genetics and genomics include:
k-mer frequency and spectrum variation is heavily used in metagenomics for both analysis[47][48] and binning. In binning, the challenge is to separate sequencing reads into "bins" of reads for each organism (or operational taxonomic unit), which will then be assembled. TETRA is a notable tool that takes metagenomic samples and bins them into organisms based on their tetranucleotide (k = 4) frequencies.[49] Other tools that similarly rely on k-mer frequency for metagenomic binning are CompostBin (k = 6),[50] PCAHIER,[51] PhyloPythia (5 ≤ k ≤ 6),[52] CLARK (k ≥ 20),[53] and TACOA (2 ≤ k ≤ 6).[54] Recent developments have also applied deep learning to metagenomic binning using k-mers.[55]
Other applications within metagenomics include:
Modifying k-mer frequencies in DNA sequences has been used extensively in biotechnological applications to control translational efficiency. Specifically, it has been used to both up- and down-regulate protein production rates.
With respect to increasing protein production, reducing unfavorable dinucleotide frequencies has been used to yield higher rates of protein synthesis.[61] In addition, codon usage bias has been modified to create synonymous sequences with greater protein expression rates.[2][3] Similarly, codon pair optimization, a combination of dinucleotide and codon optimization, has also been successfully used to increase expression.[62]
The most studied application of k-mers for decreasing translational efficiency is codon-pair manipulation for attenuating viruses in order to create vaccines. Researchers were able to recode dengue virus, the virus that causes dengue fever, such that its codon-pair bias differed more from mammalian codon-usage preferences than that of the wild type.[63] Though containing an identical amino-acid sequence, the recoded virus demonstrated significantly weakened pathogenicity while eliciting a strong immune response. This approach has also been used effectively to create an influenza vaccine,[64] as well as a vaccine for Marek's disease herpesvirus (MDV).[65] Notably, the codon-pair bias manipulation employed to attenuate MDV did not effectively reduce the oncogenicity of the virus, highlighting a potential weakness in the biotechnology applications of this approach. To date, no codon-pair deoptimized vaccine has been approved for use.
Two later articles help explain the actual mechanism underlying codon-pair deoptimization: codon-pair bias is the result of dinucleotide bias.[66][67] By studying viruses and their hosts, both sets of authors were able to conclude that the molecular mechanism that results in the attenuation of viruses is an increase in dinucleotides poorly suited for translation.
GC-content, due to its effect on the DNA melting point, is used to predict annealing temperature in PCR, another important biotechnology tool.
Determining the possible k-mers of a read can be done by simply cycling over the string length by one and taking out each substring of length k. The pseudocode to achieve this is as follows:
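The original pseudocode is not reproduced here; the following minimal Python equivalent of the described procedure (slide a window of length k one position at a time) stands in for it:

```python
def kmers(sequence, k):
    """Yield every substring of length k by sliding a window one position at a time."""
    for i in range(len(sequence) - k + 1):
        yield sequence[i:i + k]

print(list(kmers("AGAT", 2)))  # ['AG', 'GA', 'AT']
```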
Because the number of k-mers grows exponentially with the value of k, counting k-mers for large values of k (usually > 10) is a computationally difficult task. While simple implementations such as the above pseudocode work for small values of k, they need to be adapted for high-throughput applications or when k is large. To solve this problem, various tools have been developed: | https://en.wikipedia.org/wiki/K-mer
An n-gram is a sequence of n adjacent symbols in a particular order.[1] The symbols may be n adjacent letters (including punctuation marks and blanks), syllables, or rarely whole words found in a language dataset; or adjacent phonemes extracted from a speech-recording dataset, or adjacent base pairs extracted from a genome. They are collected from a text corpus or speech corpus.
If Latin numerical prefixes are used, then an n-gram of size 1 is called a "unigram", size 2 a "bigram" (or, less commonly, a "digram"), etc. If English cardinal numbers are used instead of the Latin prefixes, then they are called "four-gram", "five-gram", etc. Similarly, Greek numerical prefixes such as "monomer", "dimer", "trimer", "tetramer", "pentamer", etc., or English cardinal numbers, "one-mer", "two-mer", "three-mer", etc., are used in computational biology for polymers or oligomers of a known size, called k-mers. When the items are words, n-grams may also be called shingles.[2]
In the context of natural language processing (NLP), the use of n-grams allows bag-of-words models to capture information such as word order, which would not be possible in the traditional bag-of-words setting.
(Shannon 1951)[3] discussed n-gram models of English. For example:
Figure 1 shows several example sequences and the corresponding 1-gram, 2-gram and 3-gram sequences.
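As a concrete illustration (a minimal sketch of our own, not taken from the corpus), word-level n-grams can be produced with a short Python function:

```python
def ngrams(tokens, n):
    """Return all runs of n adjacent items from a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

words = "to be or not to be".split()
print(ngrams(words, 2))  # bigrams: ('to', 'be'), ('be', 'or'), ...
print(ngrams(words, 3))  # trigrams
```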
Here are further examples; these are word-level 3-grams and 4-grams (and counts of the number of times they appeared) from the Google n-gram corpus.[4]
3-grams
4-grams | https://en.wikipedia.org/wiki/N-gram |
A rolling hash (also known as recursive hashing or rolling checksum) is a hash function where the input is hashed in a window that moves through the input.
A few hash functions allow a rolling hash to be computed very quickly — the new hash value is rapidly calculated given only the old hash value, the old value removed from the window, and the new value added to the window — similar to the way a moving average function can be computed much more quickly than other low-pass filters; and similar to the way a Zobrist hash can be rapidly updated from the old hash value.
One of the main applications is the Rabin–Karp string search algorithm, which uses the rolling hash described below. Another popular application is the rsync program, which uses a checksum based on Mark Adler's adler-32 as its rolling hash. The Low Bandwidth Network Filesystem (LBFS) uses a Rabin fingerprint as its rolling hash. FastCDC (Fast Content-Defined Chunking) uses a compute-efficient Gear fingerprint as its rolling hash.
At best, rolling hash values are pairwise independent[1] or strongly universal. They cannot be 3-wise independent, for example.
The Rabin–Karp string search algorithm is often explained using a rolling hash function that only uses multiplications and additions:
H = c_1 a^(k-1) + c_2 a^(k-2) + ... + c_(k-1) a + c_k,
where a is a constant, and c_1, ..., c_k are the input characters (but this function is not a Rabin fingerprint, see below).
In order to avoid manipulating huge H values, all math is done modulo n. The choice of a and n is critical to get good hashing; in particular, the modulus n is typically a prime number. See linear congruential generator for more discussion.
Removing and adding characters simply involves adding or subtracting the first or last term. Shifting all characters by one position to the left requires multiplying the entire sum H by a. Shifting all characters by one position to the right requires dividing the entire sum H by a. Note that in modulo arithmetic, a can be chosen to have a multiplicative inverse a^(-1) by which H can be multiplied to get the result of the division without actually performing a division.
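A minimal Python sketch of this scheme (illustrative only; the constants a and n below are arbitrary choices, not prescribed by the source):

```python
class PolynomialRollingHash:
    """Rolling hash H = c_1*a^(k-1) + ... + c_k (mod n), updated in O(1) per shift."""

    def __init__(self, window, a=256, n=1_000_000_007):
        self.a, self.n, self.k = a, n, len(window)
        self.a_k1 = pow(a, self.k - 1, n)  # a^(k-1) mod n, used to remove the old character
        self.h = 0
        for c in window:  # Horner's rule over the initial window
            self.h = (self.h * a + ord(c)) % n

    def roll(self, old_char, new_char):
        """Remove old_char from the front, append new_char at the back."""
        self.h = (self.h - ord(old_char) * self.a_k1) % self.n
        self.h = (self.h * self.a + ord(new_char)) % self.n
        return self.h

text = "abcde"
h = PolynomialRollingHash(text[0:3])   # hash of "abc"
print(h.roll(text[0], text[3]))        # hash of "bcd", computed in constant time
```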
The Rabin fingerprint is another hash, which also interprets the input as a polynomial, but over the Galois field GF(2). Instead of seeing the input as a polynomial of bytes, it is seen as a polynomial of bits, and all arithmetic is done in GF(2) (similarly to CRC32). The hash is the result of the division of that polynomial by an irreducible polynomial over GF(2). It is possible to update a Rabin fingerprint using only the entering and the leaving byte, making it effectively a rolling hash.
Because it shares the same author as the Rabin–Karp string search algorithm, which is often explained with another, simpler rolling hash, and because this simpler rolling hash is also a polynomial, both rolling hashes are often mistaken for each other. The backup software restic uses a Rabin fingerprint for splitting files, with blob size varying between 512 KiB and 8 MiB.[2]
Hashing by cyclic polynomial[3] — sometimes called Buzhash — is also simple, and it has the benefit of avoiding multiplications, using barrel shifts instead. It is a form of tabulation hashing: it presumes that there is some hash function h from characters to integers in the interval [0, 2^L). This hash function might be simply an array or a hash table mapping characters to random integers. Let the function s be a cyclic binary rotation (or circular shift): it rotates the bits by 1 to the left, pushing the latest bit in the first position. E.g., s(101) = 011. Let ⊕ be the bitwise exclusive or. The hash values are defined as
H = s^(k-1)(h(c_1)) ⊕ s^(k-2)(h(c_2)) ⊕ ... ⊕ s(h(c_(k-1))) ⊕ h(c_k),
where the multiplications by powers of two can be implemented by binary shifts. The result is a number in [0, 2^L).
Computing the hash values in a rolling fashion is done as follows. Let H be the previous hash value. Rotate H once: H ← s(H). If c_1 is the character to be removed, rotate it k times: s^k(h(c_1)). Then simply set
H ← s(H) ⊕ s^k(h(c_1)) ⊕ h(c_(k+1)),
where c_(k+1) is the new character.
Hashing by cyclic polynomials is strongly universal or pairwise independent: simply keep the first L − k + 1 bits. That is, take the result H and dismiss any k − 1 consecutive bits.[1] In practice, this can be achieved by an integer division H → H ÷ 2^(k-1).
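A minimal Python sketch of this cyclic-polynomial (Buzhash-style) scheme, assuming L = 32 bits and a table of random integers standing in for the hash function h:

```python
import random

L = 32
MASK = (1 << L) - 1
random.seed(0)
TABLE = [random.getrandbits(L) for _ in range(256)]  # the function h, as a lookup table

def rotl(x, r):
    """Cyclic binary rotation of an L-bit value by r positions."""
    r %= L
    return ((x << r) | (x >> (L - r))) & MASK

def buzhash(window):
    """H = s^(k-1)(h(c_1)) xor ... xor h(c_k)."""
    h = 0
    for c in window:
        h = rotl(h, 1) ^ TABLE[ord(c)]
    return h

def roll(h, k, old_char, new_char):
    """O(1) update: rotate once, cancel the departing character, mix in the new one."""
    return rotl(h, 1) ^ rotl(TABLE[ord(old_char)], k) ^ TABLE[ord(new_char)]

h = buzhash("abc")
assert roll(h, 3, "a", "d") == buzhash("bcd")
```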
One of the interesting use cases of the rolling hash function is that it can create dynamic, content-based chunks of a stream or file. This is especially useful when it is required to send only the changed chunks of a large file over a network: a simple byte addition at the front of the file would normally cause all fixed size windows to become updated, while in reality, only the first "chunk" has been modified.[4]
A simple approach to making dynamic chunks is to calculate a rolling hash, and if the hash value matches an arbitrary pattern (e.g. all zeroes) in the lower N bits (with a probability of 1/2^N, given that the hash has a uniform probability distribution), then it's chosen to be a chunk boundary. Each chunk will thus have an average size of 2^N bytes. This approach ensures that unmodified data (more than a window size away from the changes) will have the same boundaries.
Once the boundaries are known, the chunks need to be compared by cryptographic hash value to detect changes.[5] The backup software Borg uses the Buzhash algorithm with a customizable chunk size range for splitting file streams.[6]
Such content-defined chunking is often used for data deduplication.[4][6]
Several programs, including gzip (with the --rsyncable option) and rsyncrypto, do content-based slicing based on this specific (unweighted) moving sum:[7]
H(n) = c_(n−W+1) + c_(n−W+2) + ... + c_n,
where the c_i are the input bytes and W is the window size (the exact window size, and any modulus applied to the sum, vary between implementations).
Shifting the window by one byte simply involves adding the new character to the sum and subtracting the oldest character (no longer in the window) from the sum.
For every n where H(n) == 0, these programs cut the file between n and n + 1.
This approach will ensure that any change in the file will only affect its current and possibly the next chunk, but no other chunk.
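A minimal Python sketch of such content-based slicing (the window size and cut condition below are illustrative assumptions; the actual values vary between programs):

```python
import os

def chunk_boundaries(data, window=4096, n=12):
    """Cut after position i when the moving sum of the last `window` bytes is 0 mod 2**n."""
    mask = (1 << n) - 1
    s = 0
    boundaries = []
    for i, byte in enumerate(data):
        s += byte
        if i >= window:
            s -= data[i - window]  # drop the byte that just left the window
        if s & mask == 0:
            boundaries.append(i + 1)
    return boundaries

print(chunk_boundaries(os.urandom(100_000)))  # average chunk size around 2**12 bytes
```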
Chunking is a technique to divide a data stream into a set of blocks, also called chunks. Content-defined chunking (CDC) is a chunking technique in which the division of the data stream is not based on fixed chunk size, as in fixed-size chunking, but on its content.
The content-defined chunking algorithm needs to compute the hash value of a data stream byte by byte and split the data stream into chunks when the hash value meets a predefined value. However, comparing a string byte-by-byte introduces heavy computation overhead. FastCDC[8] proposes a new and efficient content-defined chunking approach. It uses a fast rolling Gear hash algorithm,[9] skipping the minimum length, normalizing the chunk-size distribution, and, last but not least, rolling two bytes each time to speed up the CDC algorithm, which can achieve about 10X higher throughput than the Rabin-based CDC approach.[10]
The basic version pseudocode is provided as follows:
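The pseudocode itself is not reproduced here; the following Python sketch captures the basic Gear-based scheme as described (the mask value and the random Gear table are illustrative assumptions):

```python
import os
import random

random.seed(0)
GEAR = [random.getrandbits(64) for _ in range(256)]  # pre-calculated hashing array
MASK = (1 << 13) - 1  # expected average chunk size around 2**13 bytes (an assumption)

def next_cut(data, start):
    """Basic Gear-hash chunking: fp = (fp << 1) + Gear[byte]; cut when fp & MASK == 0."""
    fp = 0
    for i in range(start, len(data)):
        fp = ((fp << 1) + GEAR[data[i]]) & ((1 << 64) - 1)
        if fp & MASK == 0:
            return i + 1
    return len(data)

data = os.urandom(50_000)
pos, chunk_sizes = 0, []
while pos < len(data):
    cut = next_cut(data, pos)
    chunk_sizes.append(cut - pos)
    pos = cut
print(chunk_sizes)
```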
where the Gear array is a pre-calculated hashing array. Here FastCDC uses the Gear hashing algorithm, which can calculate the rolling hashing results quickly and keep the hashing results as uniformly distributed as Rabin hashing does. Compared with the traditional Rabin hashing algorithm, it achieves a much faster speed.
Experiments suggest that it can generate nearly the same chunk-size distribution in much shorter time (about 1/10 of that of Rabin-based chunking[10]) when segmenting the data stream.
All rolling hash functions can be computed in time linear in the number of characters and updated in constant time when characters are shifted by one position. In particular, computing the Rabin–Karp rolling hash of a string of length k requires O(k) modular arithmetic operations, and hashing by cyclic polynomials requires O(k) bitwise exclusive ors and circular shifts.[1] | https://en.wikipedia.org/wiki/Rolling_hash
The Rand index[1] or Rand measure (named after William M. Rand) in statistics, and in particular in data clustering, is a measure of the similarity between two data clusterings. A form of the Rand index may be defined that is adjusted for the chance grouping of elements; this is the adjusted Rand index. The Rand index is the accuracy of determining if a link belongs within a cluster or not.
Given a set of n elements S = {o_1, ..., o_n} and two partitions of S to compare, X = {X_1, ..., X_r}, a partition of S into r subsets, and Y = {Y_1, ..., Y_s}, a partition of S into s subsets, define the following:
a, the number of pairs of elements in S that are in the same subset in X and in the same subset in Y
b, the number of pairs of elements in S that are in different subsets in X and in different subsets in Y
c, the number of pairs of elements in S that are in the same subset in X and in different subsets in Y
d, the number of pairs of elements in S that are in different subsets in X and in the same subset in Y
The Rand index, R, is:[1][2]
R = (a + b) / (a + b + c + d) = (a + b) / C(n, 2)
Intuitively, a + b can be considered as the number of agreements between X and Y, and c + d as the number of disagreements between X and Y.
Since the denominator is the total number of pairs, the Rand index represents the frequency of occurrence of agreements over the total pairs, or the probability that X and Y will agree on a randomly chosen pair.
C(n, 2) is calculated as n(n − 1)/2.
Similarly, one can also view the Rand index as a measure of the percentage of correct decisions made by the algorithm. It can be computed using the following formula:
R = (TP + TN) / (TP + TN + FP + FN)
The Rand index has a value between 0 and 1, with 0 indicating that the two data clusterings do not agree on any pair of points and 1 indicating that the data clusterings are exactly the same.
In mathematical terms, a, b, c, d are defined as follows:
a = |{(o_i, o_j) : o_i, o_j ∈ X_k and o_i, o_j ∈ Y_l}|
b = |{(o_i, o_j) : o_i ∈ X_{k1}, o_j ∈ X_{k2} and o_i ∈ Y_{l1}, o_j ∈ Y_{l2}}|
c = |{(o_i, o_j) : o_i, o_j ∈ X_k and o_i ∈ Y_{l1}, o_j ∈ Y_{l2}}|
d = |{(o_i, o_j) : o_i ∈ X_{k1}, o_j ∈ X_{k2} and o_i, o_j ∈ Y_l}|
for some 1 ≤ i, j ≤ n, i ≠ j, 1 ≤ k, k1, k2 ≤ r, k1 ≠ k2, 1 ≤ l, l1, l2 ≤ s, l1 ≠ l2.
The Rand index can also be viewed through the prism of binary classification accuracy over the pairs of elements in S. The two class labels are "o_i and o_j are in the same subset in X and in Y" and "o_i and o_j are in different subsets in X and in Y".
In that setting, a is the number of pairs correctly labeled as belonging to the same subset (true positives), and b is the number of pairs correctly labeled as belonging to different subsets (true negatives).
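To make the pair-counting definition concrete, here is a small Python sketch (our own illustration, using brute-force enumeration of pairs):

```python
from itertools import combinations

def rand_index(labels_x, labels_y):
    """Fraction of element pairs on which the two clusterings agree."""
    agreements, pairs = 0, 0
    for i, j in combinations(range(len(labels_x)), 2):
        same_x = labels_x[i] == labels_x[j]
        same_y = labels_y[i] == labels_y[j]
        agreements += same_x == same_y  # a true positive or true negative pair
        pairs += 1
    return agreements / pairs

# Two clusterings of six elements:
print(rand_index([0, 0, 0, 1, 1, 1], [0, 0, 1, 1, 2, 2]))  # 0.666...
```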
The adjusted Rand index is the corrected-for-chance version of the Rand index.[1][2][3] Such a correction for chance establishes a baseline by using the expected similarity of all pair-wise comparisons between clusterings specified by a random model. Traditionally, the Rand index was corrected using the permutation model for clusterings (the number and size of clusters within a clustering are fixed, and all random clusterings are generated by shuffling the elements between the fixed clusters).[4] However, the premises of the permutation model are frequently violated; in many clustering scenarios, either the number of clusters or the size distribution of those clusters varies drastically. For example, consider that in K-means the number of clusters is fixed by the practitioner, but the sizes of those clusters are inferred from the data. Variations of the adjusted Rand index account for different models of random clusterings.[5]
Though the Rand Index may only yield a value between 0 and +1, the adjusted Rand index can yield negative values if the index is less than the expected index.[6]
Given a set S of n elements, and two groupings or partitions (e.g. clusterings) of these elements, namely X = {X_1, X_2, ..., X_r} and Y = {Y_1, Y_2, ..., Y_s}, the overlap between X and Y can be summarized in a contingency table [n_ij] where each entry n_ij denotes the number of objects in common between X_i and Y_j: n_ij = |X_i ∩ Y_j|.
The original adjusted Rand index using the permutation model is
ARI = ( Σ_ij C(n_ij, 2) − [Σ_i C(a_i, 2) Σ_j C(b_j, 2)] / C(n, 2) ) / ( (1/2)[Σ_i C(a_i, 2) + Σ_j C(b_j, 2)] − [Σ_i C(a_i, 2) Σ_j C(b_j, 2)] / C(n, 2) ),
where n_ij, a_i, b_j are values from the contingency table, with a_i = Σ_j n_ij the row sums and b_j = Σ_i n_ij the column sums. | https://en.wikipedia.org/wiki/Rand_index
Binary classification is the task of classifying the elements of a set into one of two groups (each called a class). Typical binary classification problems include:
When measuring the accuracy of a binary classifier, the simplest way is to count the errors. But in the real world, often one of the two classes is more important, so that the number of each of the different types of errors is of interest. For example, in medical testing, detecting a disease when it is not present (a false positive) is considered differently from not detecting a disease when it is present (a false negative).
Given a classification of a specific data set, there are four basic combinations of actual data category and assigned category: true positives TP (correct positive assignments), true negatives TN (correct negative assignments), false positives FP (incorrect positive assignments), and false negatives FN (incorrect negative assignments).
These can be arranged into a 2×2 contingency table, with rows corresponding to actual value – condition positive or condition negative – and columns corresponding to classification value – test outcome positive or test outcome negative.
From tallies of the four basic outcomes, there are many approaches that can be used to measure the accuracy of a classifier or predictor. Different fields have different preferences.
A common approach to evaluation is to begin by computing two ratios of a standard pattern. There are eight basic ratios of this form that one can compute from the contingency table, which come in four complementary pairs (each pair summing to 1). These are obtained by dividing each of the four numbers by the sum of its row or column, yielding eight numbers, which can be referred to generically in the form "true positive row ratio" or "false negative column ratio".
There are thus two pairs of column ratios and two pairs of row ratios, and one can summarize these with four numbers by choosing one ratio from each pair – the other four numbers are the complements.
The row ratios are:
The column ratios are:
In diagnostic testing, the main ratios used are the true column ratios – true positive rate and true negative rate – where they are known as sensitivity and specificity. In information retrieval, the main ratios are the true positive ratios (row and column) – positive predictive value and true positive rate – where they are known as precision and recall.
Cullerne Bown has suggested a flow chart for determining which pair of indicators should be used when.[1]Otherwise, there is no general rule for deciding. There is also no general agreement on how the pair of indicators should be used to decide on concrete questions, such as when to prefer one classifier over another.
One can take ratios of a complementary pair of ratios, yielding four likelihood ratios (two column ratios of ratios, two row ratios of ratios). This is primarily done for the column (condition) ratios, yielding likelihood ratios in diagnostic testing. Taking the ratio of one of these groups of ratios yields a final ratio, the diagnostic odds ratio (DOR). This can also be defined directly as (TP×TN)/(FP×FN) = (TP/FN)/(FP/TN); this has a useful interpretation – as an odds ratio – and is prevalence-independent.
There are a number of other metrics, most simply the accuracy or Fraction Correct (FC), which measures the fraction of all instances that are correctly categorized; the complement is the Fraction Incorrect (FiC). The F-score combines precision and recall into one number via a choice of weighting, most simply equal weighting, as the balanced F-score (F1 score). Some metrics come from regression coefficients: the markedness and the informedness, and their geometric mean, the Matthews correlation coefficient. Other metrics include Youden's J statistic, the uncertainty coefficient, the phi coefficient, and Cohen's kappa.
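The following Python sketch (an illustration; the variable names and example counts are ours) computes several of these metrics from the four basic tallies:

```python
from math import sqrt

def metrics(tp, tn, fp, fn):
    """Common binary-classification metrics from the 2x2 contingency table."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)   # Fraction Correct
    precision = tp / (tp + fp)                   # positive predictive value
    recall = tp / (tp + fn)                      # true positive rate, sensitivity
    specificity = tn / (tn + fp)                 # true negative rate
    f1 = 2 * precision * recall / (precision + recall)  # balanced F-score
    youden_j = recall + specificity - 1          # Youden's J statistic (informedness)
    mcc = (tp * tn - fp * fn) / sqrt(
        (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))  # Matthews correlation coefficient
    return dict(accuracy=accuracy, precision=precision, recall=recall,
                specificity=specificity, f1=f1, youden_j=youden_j, mcc=mcc)

print(metrics(tp=90, tn=800, fp=100, fn=10))
```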
Statistical classification is a problem studied in machine learning in which the classification is performed on the basis of a classification rule. It is a type of supervised learning, a method of machine learning where the categories are predefined, and is used to categorize new probabilistic observations into said categories. When there are only two categories the problem is known as statistical binary classification.
Some of the methods commonly used for binary classification are:
Each classifier is best in only a select domain, based upon the number of observations, the dimensionality of the feature vector, the noise in the data, and many other factors. For example, random forests perform better than SVM classifiers for 3D point clouds.[2][3]
Binary classification may be a form of dichotomization in which a continuous function is transformed into a binary variable. Tests whose results are of continuous values, such as most blood values, can artificially be made binary by defining a cutoff value, with test results being designated as positive or negative depending on whether the resultant value is higher or lower than the cutoff.
However, such conversion causes a loss of information, as the resultant binary classification does not tell how much above or below the cutoff a value is. As a result, when converting a continuous value that is close to the cutoff to a binary one, the resultant positive or negative predictive value is generally higher than the predictive value given directly from the continuous value. In such cases, the designation of the test of being either positive or negative gives the appearance of an inappropriately high certainty, while the value is in fact in an interval of uncertainty. For example, with the urine concentration of hCG as a continuous value, a urine pregnancy test that measured 52 mIU/ml of hCG may show as "positive" with 50 mIU/ml as cutoff, but is in fact in an interval of uncertainty, which may be apparent only by knowing the original continuous value. On the other hand, a test result very far from the cutoff generally has a resultant positive or negative predictive value that is lower than the predictive value given from the continuous value. For example, a urine hCG value of 200,000 mIU/ml confers a very high probability of pregnancy, but conversion to binary values results in it showing just as "positive" as the 52 mIU/ml result. | https://en.wikipedia.org/wiki/Binary_classification
The positive and negative predictive values (PPV and NPV respectively) are the proportions of positive and negative results in statistics and diagnostic tests that are true positive and true negative results, respectively.[1] The PPV and NPV describe the performance of a diagnostic test or other statistical measure. A high result can be interpreted as indicating the accuracy of such a statistic. The PPV and NPV are not intrinsic to the test (as true positive rate and true negative rate are); they depend also on the prevalence.[2] Both PPV and NPV can be derived using Bayes' theorem.
Although sometimes used synonymously, a positive predictive value generally refers to what is established by control groups, while a post-test probability refers to a probability for an individual. Still, if the individual's pre-test probability of the target condition is the same as the prevalence in the control group used to establish the positive predictive value, the two are numerically equal.
In information retrieval, the PPV statistic is often called the precision.
The positive predictive value (PPV), or precision, is defined as
PPV = TP / (TP + FP),
where a "true positive" is the event that the test makes a positive prediction, and the subject has a positive result under thegold standard, and a "false positive" is the event that the test makes a positive prediction, and the subject has a negative result under the gold standard. The ideal value of the PPV, with a perfect test, is 1 (100%), and the worst possible value would be zero.
The PPV can also be computed from sensitivity, specificity, and the prevalence of the condition:
PPV = (sensitivity × prevalence) / (sensitivity × prevalence + (1 − specificity) × (1 − prevalence))
cf. Bayes' theorem
The complement of the PPV is the false discovery rate (FDR):
FDR = FP / (TP + FP) = 1 − PPV
The negative predictive value is defined as:
NPV = TN / (TN + FN),
where a "true negative" is the event that the test makes a negative prediction, and the subject has a negative result under the gold standard, and a "false negative" is the event that the test makes a negative prediction, and the subject has a positive result under the gold standard. With a perfect test, one which returns no false negatives, the value of the NPV is 1 (100%), and with a test which returns no true negatives the NPV value is zero.
The NPV can also be computed from sensitivity, specificity, and prevalence:
NPV = (specificity × (1 − prevalence)) / ((1 − sensitivity) × prevalence + specificity × (1 − prevalence))
The complement of the NPV is the false omission rate (FOR):
FOR = FN / (TN + FN) = 1 − NPV
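A short Python sketch applying the prevalence-based formulas above (the test characteristics and prevalences used are hypothetical, chosen only for illustration):

```python
def predictive_values(sensitivity, specificity, prevalence):
    """PPV and NPV from test characteristics and prevalence, via Bayes' theorem."""
    ppv = (sensitivity * prevalence) / (
        sensitivity * prevalence + (1 - specificity) * (1 - prevalence))
    npv = (specificity * (1 - prevalence)) / (
        (1 - sensitivity) * prevalence + specificity * (1 - prevalence))
    return ppv, npv

# The same test looks very different at different prevalences:
for prev in (0.01, 0.10, 0.50):
    ppv, npv = predictive_values(sensitivity=0.90, specificity=0.91, prevalence=prev)
    print(f"prevalence={prev:.2f}  PPV={ppv:.3f}  NPV={npv:.3f}")
```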
Although sometimes used synonymously, a negative predictive value generally refers to what is established by control groups, while a negative post-test probability rather refers to a probability for an individual. Still, if the individual's pre-test probability of the target condition is the same as the prevalence in the control group used to establish the negative predictive value, then the two are numerically equal.
The following diagram illustrates how the positive predictive value, negative predictive value, sensitivity, and specificity are related.
Note that the positive and negative predictive values can only be estimated using data from a cross-sectional study or other population-based study in which valid prevalence estimates may be obtained. In contrast, the sensitivity and specificity can be estimated from case-control studies.
Suppose the fecal occult blood (FOB) screen test is used in 2030 people to look for bowel cancer:
The small positive predictive value (PPV = 10%) indicates that many of the positive results from this testing procedure are false positives. Thus it will be necessary to follow up any positive result with a more reliable test to obtain a more accurate assessment as to whether cancer is present. Nevertheless, such a test may be useful if it is inexpensive and convenient. The strength of the FOB screen test is instead in its negative predictive value — which, if negative for an individual, gives us high confidence that its negative result is true.
Note that the PPV is not intrinsic to the test — it depends also on the prevalence.[2] Due to the large effect of prevalence upon predictive values, a standardized approach has been proposed, where the PPV is normalized to a prevalence of 50%.[11] PPV is directly proportional to the prevalence of the disease or condition. In the above example, if the group of people tested had included a higher proportion of people with bowel cancer, then the PPV would probably come out higher and the NPV lower. If everybody in the group had bowel cancer, the PPV would be 100% and the NPV 0%.
To overcome this problem, NPV and PPV should only be used if the ratio of the number of patients in the disease group and the number of patients in the healthy control group used to establish the NPV and PPV is equivalent to the prevalence of the diseases in the studied population, or, in case two disease groups are compared, if the ratio of the number of patients in disease group 1 and the number of patients in disease group 2 is equivalent to the ratio of the prevalences of the two diseases studied. Otherwise, positive and negative likelihood ratios are more accurate than NPV and PPV, because likelihood ratios do not depend on prevalence.
When an individual being tested has a different pre-test probability of having a condition than the control groups used to establish the PPV and NPV, the PPV and NPV are generally distinguished from the positive and negative post-test probabilities, with the PPV and NPV referring to the ones established by the control groups, and the post-test probabilities referring to the ones for the tested individual (as estimated, for example, by likelihood ratios). Preferably, in such cases, a large group of equivalent individuals should be studied, in order to establish separate positive and negative predictive values for use of the test in such individuals.
Bayes' theorem confers inherent limitations on the accuracy of screening tests as a function of disease prevalence or pre-test probability. It has been shown that a testing system can tolerate significant drops in prevalence, up to a certain well-defined point known as the prevalence threshold, below which the reliability of a positive screening test drops precipitously. That said, Balayla et al.[12] showed that sequential testing overcomes the aforementioned Bayesian limitations and thus improves the reliability of screening tests. For a desired positive predictive value $\rho$, where $\rho < 1$, that approaches some constant $k$, the number of positive test iterations $n_i$ needed is:

$n_i = \left\lceil \dfrac{\ln\left[\dfrac{\rho(1-\phi)}{\phi(1-\rho)}\right]}{\ln\left[\dfrac{a}{1-b}\right]} \right\rceil$
where $\phi$ is the pre-test probability (prevalence), $a$ is the sensitivity, and $b$ is the specificity.
Of note, the denominator of the above equation is the natural logarithm of the positive likelihood ratio (LR+). Also, note that a critical assumption is that the tests must be independent. As described by Balayla et al.,[12] repeating the same test may violate this independence assumption; in fact, "A more natural and reliable method to enhance the positive predictive value would be, when available, to use a different test with different parameters altogether after an initial positive result is obtained."[12]
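A sketch of the iteration count implied by the expression above, assuming genuinely independent repetitions (the parameter values are illustrative):

```python
import math

def n_iterations(target_ppv, prevalence, sensitivity, specificity):
    """Independent positive results needed to reach target_ppv.

    Each positive multiplies the odds by LR+, so the denominator is ln(LR+)."""
    lr_pos = sensitivity / (1 - specificity)   # positive likelihood ratio
    numerator = math.log(target_ppv * (1 - prevalence)
                         / (prevalence * (1 - target_ppv)))
    return math.ceil(numerator / math.log(lr_pos))

# e.g. sensitivity 0.90, specificity 0.95, prevalence 1%, desired PPV 99%
print(n_iterations(0.99, 0.01, 0.90, 0.95))  # 4 independent positives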
PPV is used to indicate the probability that, in case of a positive test, the patient really has the specified disease. However, there may be more than one cause for a disease, and any single potential cause may not always result in the overt disease seen in a patient. There is potential to mix up related target conditions of PPV and NPV, such as interpreting the PPV or NPV of a test as referring to having a disease, when that PPV or NPV value actually refers only to a predisposition of having that disease.[13]
An example is the microbiological throat swab used in patients with a sore throat. Usually publications stating the PPV of a throat swab are reporting on the probability that this bacterium is present in the throat, rather than that the patient is ill from the bacteria found. If presence of this bacterium always resulted in a sore throat, then the PPV would be very useful. However, the bacteria may colonise individuals in a harmless way and never result in infection or disease. Sore throats occurring in these individuals are caused by other agents such as a virus. In this situation the gold standard used in the evaluation study represents only the presence of bacteria (that might be harmless) but not a causal bacterial sore throat illness. It can be proven that this problem will affect positive predictive value far more than negative predictive value.[14] To evaluate diagnostic tests where the gold standard looks only at potential causes of disease, one may use an extension of the predictive value termed the Etiologic Predictive Value.[13][15] | https://en.wikipedia.org/wiki/Positive_and_negative_predictive_values
In decision theory, economics, and probability theory, the Dutch book arguments are a set of results showing that agents must satisfy the axioms of rational choice to avoid a kind of self-contradiction called a Dutch book. A Dutch book, sometimes also called a money pump, is a set of bets that ensures a guaranteed loss, i.e. the gambler will lose money no matter what happens.[1] A set of bets is called coherent if it cannot result in a Dutch book.
The Dutch book arguments are used to explore degrees of certainty in beliefs, and demonstrate that rational bet-setters must be Bayesian;[2] in other words, a rational bet-setter must assign event probabilities that behave according to the axioms of probability, and must have preferences that can be modeled using the von Neumann–Morgenstern axioms.
In economics, they are used to model behavior by ruling out situations where agents "burn money" for no real reward. Models based on the assumption that actors are rational are called rational choice models. That assumption is weakened in behavioral models of decision-making.
The thought experiment was first proposed by the Italian probabilist Bruno de Finetti in order to justify Bayesian probability,[citation needed] and was more thoroughly explored by Leonard Savage, who developed it into a full model of rational choice.
Assume we have two players, A and B. Player A must set the price of a promise to pay $1 if John Smith wins tomorrow's election. Player B will be able to choose either to buy the promise from A at the price A has set, or to require A to buy the promise, still at the same price. In other words: Player A sets the odds, but Player B decides which side of the bet to take. The price A sets is called the "operational subjective probability".
If player A decides that John Smith is 12.5% likely to win, they might then set odds of 7:1 against. This arbitrary valuation — the "operational subjective probability" — determines the payoff of a successful wager. $1 wagered at these odds will produce either a loss of $1 (if Smith loses) or a win of $7 (if Smith wins). In this example, the $1 stake will also be returned to the bettor in the event of success.
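The correspondence between quoted odds and the operational subjective probability can be made explicit; a small sketch (the function names are illustrative):

```python
def odds_against_to_probability(num, den):
    """Convert odds of num:den against an event to its implied probability."""
    return den / (num + den)

def probability_to_odds_against(p):
    """Convert a probability to odds against, as a (for, against) ratio pair."""
    return (1 - p, p)  # e.g. p = 0.125 -> (0.875, 0.125), i.e. 7:1 against

print(odds_against_to_probability(7, 1))  # 0.125, matching the 12.5% above
```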
The standard Dutch book argument concludes that rational agents must have subjective probabilities for random events, and that these probabilities must satisfy the standard axioms of probability. In other words, any rational person must be willing to assign a (quantitative) subjective probability to different events.
Note that the argument does not imply agents are willing to engage in gambling in the traditional sense. The word "bet" as used here refers to any kind of decision under uncertainty. For example, buying an unfamiliar good at a supermarket is a kind of "bet" (the buyer "bets" that the product is good), as is getting into a car ("betting" that the driver will not be involved in an accident).
The Dutch book argument can be reversed by considering the perspective of the bookmaker. In this case, the Dutch book arguments show that any rational agent must be willing to accept some kinds of risks, i.e. to make uncertain bets; otherwise they will sometimes refuse "free gifts" or "Czech books", a series of bets leaving them better off with 100% certainty.[citation needed]
In one example, a bookmaker has offered the following odds and attracted one bet on each horse, the relative sizes of which make the result irrelevant to the bookmaker. The implied probabilities, i.e. the probability of each horse winning, add up to a number greater than 1, violating the axiom of unitarity:
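A set of odds and stakes consistent with the payout figures described below (the exact values are assumed for illustration) is:

| Horse | Offered odds | Implied probability | Bet price | Payout if it wins |
|---|---|---|---|---|
| 1 | Even | 0.5 | $100 | $100 stake + $100 winnings = $200 |
| 2 | 3 to 1 against | 0.25 | $50 | $50 stake + $150 winnings = $200 |
| 3 | 4 to 1 against | 0.2 | $40 | $40 stake + $160 winnings = $200 |
| 4 | 9 to 1 against | 0.1 | $20 | $20 stake + $180 winnings = $200 |

The implied probabilities sum to 1.05.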
Whichever horse wins in this example, the bookmaker will pay out $200 (including returning the winning stake)—but the punter has bet $210, hence making a loss of $10 on the race.
However, if horse 4 were withdrawn and the bookmaker did not adjust the other odds, the implied probabilities would add up to 0.95. In such a case, a gambler could always reap a profit of $10 by betting $100, $50 and $40 on the remaining three horses, respectively, without having to stake $20 on the withdrawn horse, which now cannot win.
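A minimal sketch of the arithmetic in this example (stakes and odds as in the table above):

```python
# odds quoted as x:1 against; each stake is chosen so a winning ticket pays $200
book = {"horse1": (1, 100), "horse2": (3, 50), "horse3": (4, 40), "horse4": (9, 20)}

implied = sum(1 / (odds + 1) for odds, _ in book.values())
total_staked = sum(stake for _, stake in book.values())
payout = 200  # identical for every outcome, by construction

print(f"implied probabilities sum to {implied:.2f}")  # 1.05 > 1: a Dutch book
print(f"punters stake ${total_staked}; bookmaker pays ${payout} whichever horse wins")
```

Dropping `horse4` from the dictionary reproduces the withdrawn-horse case: the implied probabilities sum to 0.95 and the book flips against the bookmaker.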
Other forms of Dutch books can be used to establish the other axioms of probability, sometimes involving more complex bets like forecasting the order in which horses will finish. In Bayesian probability, Frank P. Ramsey and Bruno de Finetti required personal degrees of belief to be coherent so that a Dutch book could not be made against them, whichever way bets were made. Necessary and sufficient conditions for this are that their degrees of belief satisfy all the axioms of probability.
A person who has set prices on an array of wagers in such a way that he or she will make a net gain regardless of the outcome is said to have made a Dutch book. When one has a Dutch book, one's opponent always loses. A person who sets prices in a way that gives his or her opponent a Dutch book is not behaving rationally.
The rules do not forbid a set price higher than $1, but a prudent opponent may sell one a high-priced ticket, such that the opponent comes out ahead regardless of the outcome of the event on which the bet is made. The rules also do not forbid a negative price, but an opponent may extract a paid promise from the bettor to pay him or her later should a certain contingency arise. In either case, the price-setter loses. These lose-lose situations parallel the fact that a probability can neither exceed 1 (certainty) nor be less than 0 (no chance of winning).
Now suppose one sets the price of a promise to pay $1 if the Boston Red Sox win next year's World Series, and also the price of a promise to pay $1 if the New York Yankees win, and finally the price of a promise to pay $1 if either the Red Sox or the Yankees win. One may set the prices in such a way that the price of the third ticket differs from the sum of the prices of the first two.
But if one sets the price of the third ticket lower than the sum of the first two tickets, a prudent opponent will buy that ticket and sell the other two tickets to the price-setter. By considering the three possible outcomes (Red Sox, Yankees, some other team), one will note that regardless of which of the three outcomes eventuates, one will lose. An analogous fate awaits if one sets the price of the third ticket higher than the sum of the other two prices. This parallels the fact that probabilities of mutually exclusive events are additive (see probability axioms).
Now imagine a more complicated scenario. One must set the prices of three promises: to pay $1 if the Red Sox win tomorrow's game, the purchaser losing the price if the game is cancelled; to pay $1 if the Red Sox win, with the price refunded if the game is cancelled; and to pay $1 if the game is played at all.
Three outcomes are possible: the game is cancelled; the game is played and the Red Sox lose; the game is played and the Red Sox win. One may set the prices in such a way that the price of the first promise differs from the product of the prices of the second and third.
(where the second price above is that of the bet that includes the refund in case of cancellation). (Note: The prices here are the dimensionless numbers obtained by dividing by $1, which is the payout in all three cases.) A prudent opponent writes three linear inequalities in three variables. The variables are the amounts they will invest in each of the three promises; the value of one of these is negative if they will make the price-setter buy that promise and positive if they will buy it. Each inequality corresponds to one of the three possible outcomes, and states that the opponent's net gain is more than zero. A solution exists if the determinant of the matrix is not zero, and that determinant vanishes precisely when the first price equals the product of the other two.
Thus a prudent opponent can make the price-setter a sure loser unless one sets one's prices in a way that parallels the simplest conventional characterization of conditional probability.
In the 2015 running of the Kentucky Derby, the favorite ("American Pharoah") was set ante-post at 5:2, the second favorite at 3:1, and the third favorite at 8:1. All other horses had odds against of 12:1 or higher. With these odds, a wager of $10 on each of all 18 starters would result in a net loss if either the favorite or the second favorite were to win.
However, if one assumes that no horse quoted 12:1 or higher will win, and one bets $10 on each of the top three, one is guaranteed at least a small win. The favorite (who did win) would result in a payout of $25, plus the returned $10 wager, giving an ending balance of $35 (a $5 net increase). A win by the second favorite would produce a payoff of $30 plus the original $10 wager, for a net $10 increase. A win by the third favorite gives $80 plus the original $10, for a net increase of $60.
This sort of strategy, so far as it concerns just the top three, forms a Dutch book. However, if one considers all eighteen contenders, then no Dutch book exists for this race.
In economics, the classic example of a situation in which a consumer X can be Dutch-booked is if they have intransitive preferences. Classical economic theory assumes that preferences are transitive: if someone thinks A is better than B and B is better than C, then they must think A is better than C. Moreover, there cannot be any "cycles" of preferences.
The money pump argument notes that if someone held a set of intransitive preferences, they could be exploited (pumped) for money until being forced to exit the market. Imagine Jane has twenty dollars to buy fruit. She can fill her basket with either oranges or apples. Jane would prefer to have a dollar rather than an apple, an apple rather than an orange, and an orange rather than a dollar. Because Jane would rather have an orange than a dollar, she is willing to buy an orange for just over a dollar (perhaps $1.10). Then, she trades her orange for an apple, because she would rather have an apple than an orange. Finally, she sells her apple for a dollar, because she would rather have a dollar than an apple. At this point, Jane is left with $19.90, and has lost 10¢ and gained nothing in return. This process can be repeated until Jane is left with no money. (Note that, if Jane truly holds these preferences, she would see nothing wrong with this process; at every step, Jane agrees she has been left better off.) After running out of money, Jane must exit the market, and her preferences and actions cease to be economically relevant.
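A toy simulation of the pump described above (the $1.10 orange price comes from the story; the loop structure is illustrative, with amounts kept in integer cents to avoid rounding drift):

```python
cash = 2000          # Jane starts with $20.00, in cents
cycles = 0
while cash >= 110:   # she can still afford the next orange at $1.10
    cash -= 110      # buys an orange (prefers an orange to a dollar)
    # trades the orange for an apple (prefers an apple to an orange)
    cash += 100      # sells the apple for $1.00 (prefers a dollar to an apple)
    cycles += 1

print(f"after {cycles} cycles Jane is left with ${cash / 100:.2f}")
# after 190 cycles Jane is left with $1.00 -- drained 10 cents per cycle
```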
Experiments in behavioral economics show that subjects can violate the requirement for transitive preferences when comparing bets.[3] However, most subjects do not make these choices in within-subject comparisons where the contradiction is made obviously visible (in other words, the subjects do not hold genuinely intransitive preferences, but instead make mistakes when making choices using heuristics).
Economists usually argue that people with preferences like X's will have all their wealth taken from them in the market. If this is the case, we won't observe preferences with intransitivities or other features that allow people to be Dutch-booked. However, if people are somewhat sophisticated about their intransitivities and/or if competition by arbitrageurs drives the exploitable margin per cycle ("epsilon") to zero, non-"standard" preferences may still be observable.
It can be shown that the set of prices is coherent when they satisfy the probability axioms and related results such as the inclusion–exclusion principle. | https://en.wikipedia.org/wiki/Coherence_(philosophical_gambling_strategy)
In decision theory, a decision rule is a function which maps an observation to an appropriate action. Decision rules play an important role in the theory of statistics and economics, and are closely related to the concept of a strategy in game theory.
In order to evaluate the usefulness of a decision rule, it is necessary to have a loss function detailing the outcome of each action under different states.
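As a toy illustration (the actions, threshold, and loss values are invented), a deterministic decision rule paired with a loss function might look like:

```python
from typing import Literal

Action = Literal["treat", "wait"]

def decision_rule(test_result: float, threshold: float = 0.7) -> Action:
    """A deterministic decision rule: maps an observation to an action."""
    return "treat" if test_result >= threshold else "wait"

# loss of each (action, state-of-nature) pair; values are illustrative
loss = {("treat", "sick"): 1.0, ("treat", "healthy"): 5.0,
        ("wait", "sick"): 20.0, ("wait", "healthy"): 0.0}

action = decision_rule(0.82)
print(action, loss[(action, "sick")], loss[(action, "healthy")])
```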
Given an observable random variable $X$ over the probability space $({\mathcal{X}}, \Sigma, P_{\theta})$, determined by a parameter $\theta \in \Theta$, and a set $A$ of possible actions, a (deterministic) decision rule is a function $\delta : {\mathcal{X}} \to A$. | https://en.wikipedia.org/wiki/Decision_rule
The use of evidence under Bayes' theorem relates to the probability of finding evidence in relation to the accused, where Bayes' theorem concerns the probability of an event and its inverse. Specifically, it compares the probability of finding particular evidence if the accused were guilty, versus if they were not guilty. An example would be the probability of finding a person's hair at the scene, if guilty, versus if just passing through the scene. Another issue would be finding a person's DNA where they lived, regardless of whether they committed a crime there.
Among evidence scholars, the study of evidence in recent decades has become broadly interdisciplinary, incorporating insights from psychology, economics, and probability theory. One area of particular interest and controversy has been Bayes' theorem.[1] Bayes' theorem is an elementary proposition of probability theory. It provides a way of updating, in light of new information, one's probability that a proposition is true. Evidence scholars have been interested in its application to their field, either to study the value of rules of evidence, or to help determine facts at trial.
Suppose that the proposition to be proven is that the defendant was the source of a hair found at the crime scene. Before learning that the hair was a genetic match for the defendant's hair, the factfinder believes that the odds are 2 to 1 that the defendant was the source of the hair. If they used Bayes' theorem, they could multiply those prior odds by a "likelihood ratio" in order to update the odds after learning that the hair matched the defendant's hair. The likelihood ratio is a statistic derived by comparing the odds that the evidence (expert testimony of a match) would be found if the defendant was the source with the odds that it would be found if the defendant was not the source. If it is ten times more likely that the testimony of a match would occur if the defendant was the source than if not, then the factfinder should multiply their prior odds by ten, giving posterior odds of 20 to 1.
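In odds form, the update just described is simply:

$\text{posterior odds} = \text{prior odds} \times \text{likelihood ratio} = 2 \times 10 = 20,$

i.e. posterior odds of 20 to 1 that the defendant was the source.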
Bayesian skeptics have objected to this use of Bayes’ theorem in litigation on a variety of grounds. These run from jury confusion and computational complexity to the assertion that standard probability theory is not a normatively satisfactory basis for adjudication of rights.
Bayesian enthusiasts have replied on two fronts. First, they have said that whatever its value in litigation, Bayes' theorem is valuable in studying evidence rules. For example, it can be used to model relevance. It teaches that the relevance of evidence that a proposition is true depends on how much the evidence changes the prior odds, and that how much it changes the prior odds depends on how likely the evidence would be found (or not) if the proposition were true. These basic insights are also useful in studying individual evidence rules, such as the rule allowing witnesses to be impeached with prior convictions.
Second, they have said that it is practical to use Bayes' theorem in a limited set of circumstances in litigation (such as integrating genetic match evidence with other evidence), and that assertions that probability theory is inappropriate for judicial determinations are nonsensical or inconsistent.
Some observers believe that in recent years (i) the debate about probabilities has become stagnant, (ii) the protagonists in the probabilities debate have been talking past each other, (iii) not much is happening at the high-theory level, and (iv) the most interesting work is in the empirical study of the efficacy of instructions on Bayes' theorem in improving jury accuracy. However, it is possible that this skepticism about the probabilities debate in law rests on observations of the arguments made by familiar protagonists in the legal academy. In fields outside of law, work on formal theories relating to uncertainty continues unabated. One important development has been the work on "soft computing" such as has been carried on, for example, at Berkeley under Lotfi Zadeh's BISC (Berkeley Initiative in Soft Computing). Another example is the increasing amount of work, by people both in and outside law, on "argumentation" theory. Also, work on Bayes nets continues. Some of this work is beginning to filter into legal circles. See, for example, the many papers on formal approaches to uncertainty (including Bayesian approaches) in the Oxford journal Law, Probability and Risk.
There are some famous cases where Bayes' theorem can be applied. | https://en.wikipedia.org/wiki/Evidence_under_Bayes%27_theorem
Inductive reasoning refers to a variety of methods of reasoning in which the conclusion of an argument is supported not with deductive certainty, but with some degree of probability.[1] Unlike deductive reasoning (such as mathematical induction), where the conclusion is certain, given that the premises are correct, inductive reasoning produces conclusions that are at best probable, given the evidence provided.[2][3]
The types of inductive reasoning include generalization, prediction, statistical syllogism, argument from analogy, and causal inference. There are also differences in how their results are regarded.
A generalization (more accurately, an inductive generalization) proceeds from premises about a sample to a conclusion about the population.[4] The observation obtained from this sample is projected onto the broader population.[4]
For example, if there are 20 balls—either black or white—in an urn: to estimate their respective numbers, a sample of four balls is drawn, of which three are black and one is white. An inductive generalization may be that there are 15 black and five white balls in the urn. However, this is only one of 17 possibilities as to the actual number of each color of balls in the urn (the population): there may, of course, have been 19 black balls and just one white, or only three black balls and 17 white, or any mix in between. The probability of each possible distribution being the actual numbers of black and white balls can be estimated using techniques such as Bayesian inference, where prior assumptions about the distribution are updated with the observed sample, or maximum likelihood estimation (MLE), which identifies the distribution most likely given the observed sample.
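A minimal sketch of such an estimate, assuming a uniform prior over the 17 possible compositions (function names are illustrative):

```python
from math import comb

N, K = 20, 4                 # urn size; sample size (3 black, 1 white observed)
candidates = range(3, 20)    # at least 3 black and at least 1 white ball: 17 cases

def likelihood(b):
    """Hypergeometric probability of drawing 3 black and 1 white from b black balls."""
    return comb(b, 3) * comb(N - b, 1) / comb(N, K)

posterior = {b: likelihood(b) for b in candidates}   # a uniform prior cancels out
total = sum(posterior.values())
posterior = {b: p / total for b, p in posterior.items()}

print(max(posterior, key=posterior.get))  # 15: the MLE composition (15 black, 5 white)
```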
How much the premises support the conclusion depends upon the number in the sample group, the number in the population, and the degree to which the sample represents the population (which, for a static population, may be achieved by taking a random sample). The greater the sample size relative to the population and the more closely the sample represents the population, the stronger the generalization is. The hasty generalization and the biased sample are generalization fallacies.
A statistical generalization is a type of inductive argument in which a conclusion about a population is inferred using a statistically representative sample. For example: of a sizeable random sample of voters polled, 66% support Measure Z; therefore, approximately 66% of voters support Measure Z.
The measure is highly reliable within a well-defined margin of error provided that the selection process was genuinely random and that the numbers of items in the sample having the properties considered are large. It is readily quantifiable. Compare the preceding argument with the following. "Six of the ten people in my book club are Libertarians. Therefore, about 60% of people are Libertarians." The argument is weak because the sample is non-random and the sample size is very small.
Statistical generalizations are also called statistical projections[5] and sample projections.[6]
An anecdotal generalization is a type of inductive argument in which a conclusion about a population is inferred using a non-statistical sample.[7] In other words, the generalization is based on anecdotal evidence. For example: so far, this year his son's Little League team has won 6 of 10 games; therefore, by season's end, they will have won about 60% of their games.
This inference is less reliable (and thus more likely to commit the fallacy of hasty generalization) than a statistical generalization, first, because the sample events are non-random, and second because it is not reducible to a mathematical expression. Statistically speaking, there is simply no way to know, measure and calculate the circumstances affecting performance that will occur in the future. On a philosophical level, the argument relies on the presupposition that the operation of future events will mirror the past. In other words, it takes for granted a uniformity of nature, an unproven principle that cannot be derived from the empirical data itself. Arguments that tacitly presuppose this uniformity are sometimes called Humean after the philosopher who was first to subject them to philosophical scrutiny.[8]
An inductive prediction draws a conclusion about a future, current, or past instance from a sample of other instances. Like an inductive generalization, an inductive prediction relies on a data set consisting of specific instances of a phenomenon. But rather than conclude with a general statement, the inductive prediction concludes with a specific statement about the probability that a single instance will (or will not) have an attribute shared (or not shared) by the other instances.[9]
A statistical syllogism proceeds from a generalization about a group to a conclusion about an individual.
For example: 90% of the graduates of Excelsior Preparatory School go on to university; Bob is a graduate of Excelsior Preparatory School; therefore, Bob will probably go on to university.
This is a statistical syllogism.[10] Even though one cannot be sure Bob will attend university, one can be fully assured of the exact probability of this outcome (given no further information). Two dicto simpliciter fallacies can occur in statistical syllogisms: "accident" and "converse accident".
The process of analogical inference involves noting the shared properties of two or more things and from this basis inferring that they also share some further property:[11] stone P and stone Q are similar in respect of properties a, b, and c; stone P has been found to have further property x; therefore, stone Q probably has property x as well.
Analogical reasoning is very frequent in common sense, science, philosophy, law, and the humanities, but sometimes it is accepted only as an auxiliary method. A refined approach is case-based reasoning.[12]
This is analogical induction, according to which things alike in certain ways are more prone to be alike in other ways. This form of induction was explored in detail by philosopher John Stuart Mill in his System of Logic, where he states, "[t]here can be no doubt that every resemblance [not known to be irrelevant] affords some degree of probability, beyond what would otherwise exist, in favor of the conclusion."[13] See Mill's Methods.
Some thinkers contend that analogical induction is a subcategory of inductive generalization because it assumes a pre-established uniformity governing events.[citation needed] Analogical induction requires an auxiliary examination of the relevancy of the characteristics cited as common to the pair. In the preceding example, if a premise were added stating that both stones were mentioned in the records of early Spanish explorers, this common attribute is extraneous to the stones and does not contribute to their probable affinity.
A pitfall of analogy is that features can be cherry-picked: while objects may show striking similarities, two things juxtaposed may also possess other characteristics, not identified in the analogy, that are sharply dissimilar. Thus, analogy can mislead if not all relevant comparisons are made.
A causal inference draws a conclusion about a possible or probable causal connection based on the conditions of the occurrence of an effect. Premises about the correlation of two things can indicate a causal relationship between them, but additional factors must be confirmed to establish the exact form of the causal relationship.[citation needed]
The two principal methods used to reach inductive generalizations are enumerative induction and eliminative induction.[14][15]
Enumerative induction is an inductive method in which a generalization is constructed based on the number of instances that support it. The more supporting instances, the stronger the conclusion.[14][15]
The most basic form of enumerative induction reasons from particular instances to all instances and is thus an unrestricted generalization.[16] If one observes 100 swans, and all 100 were white, one might infer a probable universal categorical proposition of the form All swans are white. As this reasoning form's premises, even if true, do not entail the conclusion's truth, this is a form of inductive inference. The conclusion might be true, and might be thought probably true, yet it can be false. Questions regarding the justification and form of enumerative inductions have been central in philosophy of science, as enumerative induction has a pivotal role in the traditional model of the scientific method.
Consider, for instance: all life forms so far discovered are composed of cells; therefore, all life forms are composed of cells. This is enumerative induction, also known as simple induction or simple predictive induction. It is a subcategory of inductive generalization. In everyday practice, this is perhaps the most common form of induction. For the preceding argument, the conclusion is tempting but makes a prediction well in excess of the evidence. First, it assumes that life forms observed until now can tell us how future cases will be: an appeal to uniformity. Second, the conclusion All is a bold assertion. A single contrary instance foils the argument. And last, quantifying the level of probability in any mathematical form is problematic.[17] By what standard do we measure our Earthly sample of known life against all (possible) life? Suppose we do discover some new organism—such as some microorganism floating in the mesosphere or on an asteroid—and it is cellular. Does the addition of this corroborating evidence oblige us to raise our probability assessment for the subject proposition? It is generally deemed reasonable to answer this question "yes", and for a good many this "yes" is not only reasonable but incontrovertible. So then just how much should this new data change our probability assessment? Here, consensus melts away, and in its place arises a question about whether we can talk of probability coherently at all, with or without numerical quantification.
Its weaker counterpart runs: all life forms so far discovered are composed of cells; therefore, the next life form discovered will be composed of cells. This is enumerative induction in its weak form. It truncates "all" to a mere single instance and, by making a far weaker claim, considerably strengthens the probability of its conclusion. Otherwise, it has the same shortcomings as the strong form: its sample population is non-random, and quantification methods are elusive.
Eliminative induction, also called variative induction, is an inductive method first put forth by Francis Bacon;[18] in it a generalization is constructed based on the variety of instances that support it. Unlike enumerative induction, eliminative induction reasons based on the various kinds of instances that support a conclusion, rather than the number of instances that support it. As the variety of instances increases, more of the possible conclusions based on those instances can be identified as incompatible and eliminated. This, in turn, increases the strength of any conclusion that remains consistent with the various instances. In this context, confidence is a function of how many instances have been identified as incompatible and eliminated. This confidence is expressed as the Baconian probability i|n (read as "i out of n"), where n reasons for finding a claim incompatible have been identified and i of these have been eliminated by evidence or argument.[18]
There are three ways of attacking an argument; these ways, known as defeaters in the defeasible reasoning literature, are rebutting, undermining, and undercutting. Rebutting defeats by offering a counter-example, undermining defeats by questioning the validity of the evidence, and undercutting defeats by pointing out conditions where a conclusion is not true even when the inference holds. This approach builds confidence by identifying defeaters and proving them wrong.[18]
This type of induction may use different methodologies such as quasi-experimentation, which tests and, where possible, eliminates rival hypotheses.[19] Different evidential tests may also be employed to eliminate possibilities that are entertained.[20]
Eliminative induction is crucial to the scientific method and is used to eliminate hypotheses that are inconsistent with observations and experiments.[14][15] It focuses on possible causes instead of observed actual instances of causal connections.[21]
For a move from particular to universal, Aristotle in the 300s BCE used the Greek word epagogé, which Cicero translated into the Latin word inductio.[22]
Aristotle's Posterior Analytics covers the methods of inductive proof in natural philosophy and in the social sciences. The first book of Posterior Analytics describes the nature and science of demonstration and its elements, including definition, division, intuitive reason of first principles, particular and universal demonstration, affirmative and negative demonstration, and the difference between science and opinion.
The ancient Pyrrhonists were the first Western philosophers to point out the problem of induction: that induction cannot, according to them, justify the acceptance of universal statements as true.[22]
The Empiric school of ancient Greek medicine employed epilogism as a method of inference. 'Epilogism' is a theory-free method that looks at history through the accumulation of facts without major generalization and with consideration of the consequences of making causal claims.[23] Epilogism is an inference which moves entirely within the domain of visible and evident things; it tries not to invoke unobservables.
The Dogmatic school of ancient Greek medicine employed analogismos as a method of inference.[24] This method used analogy to reason from what was observed to unobservable forces.
In 1620, the early modern philosopher Francis Bacon repudiated the value of mere experience and enumerative induction alone. His method of inductivism required that minute and many-varied observations uncovering the natural world's structure and causal relations be coupled with enumerative induction in order to have knowledge beyond the present scope of experience. Inductivism therefore required enumerative induction as a component.
The empiricist David Hume's 1740 stance found enumerative induction to have no rational, let alone logical, basis; instead, induction was the product of instinct rather than reason, a custom of the mind and an everyday requirement to live. While observations, such as the motion of the sun, could be coupled with the principle of the uniformity of nature to produce conclusions that seemed to be certain, the problem of induction arose from the fact that the uniformity of nature is not a logically valid principle. It cannot be defended as deductively rational; nor can it be defended as inductively rational by appealing to the fact that the uniformity of nature has accurately described the past and will therefore likely describe the future, for that appeal is itself an inductive argument and therefore circular, since induction is what needs to be justified.
Since Hume first wrote about the dilemma between the invalidity of deductive arguments and the circularity of inductive arguments in support of the uniformity of nature, this supposed dichotomy between merely two modes of inference, deduction and induction, has been contested with the discovery of a third mode of inference known as abduction, or abductive reasoning, which was first formulated and advanced by Charles Sanders Peirce in 1886, where he referred to it as "reasoning by hypothesis."[25] Inference to the best explanation is often, yet arguably, treated as synonymous with abduction, as it was first identified by Gilbert Harman in 1965, where he referred to it as "abductive reasoning," yet his definition of abduction slightly differs from Peirce's definition.[26] Regardless, if abduction is in fact a third mode of inference rationally independent from the other two, then either the uniformity of nature can be rationally justified through abduction, or Hume's dilemma is more of a trilemma. Hume was also skeptical of the application of enumerative induction and reason to reach certainty about unobservables, and especially the inference of causality from the fact that modifying an aspect of a relationship prevents or produces a particular outcome.
Awakened from "dogmatic slumber" by a German translation of Hume's work, Kant sought to explain the possibility of metaphysics. In 1781, Kant's Critique of Pure Reason introduced rationalism as a path toward knowledge distinct from empiricism. Kant sorted statements into two types. Analytic statements are true by virtue of the arrangement of their terms and meanings; thus analytic statements are tautologies, merely logical truths, true by necessity. Synthetic statements, by contrast, hold meanings that refer to states of facts, contingencies. Against both rationalist philosophers like Descartes and Leibniz as well as against empiricist philosophers like Locke and Hume, Kant's Critique of Pure Reason is a sustained argument that in order to have knowledge we need both a contribution of our mind (concepts) as well as a contribution of our senses (intuitions). Knowledge proper is for Kant thus restricted to what we can possibly perceive (phenomena), whereas objects of mere thought ("things in themselves") are in principle unknowable due to the impossibility of ever perceiving them.
Reasoning that the mind must contain its own categories for organizing sense data, making experience of objects in space and time (phenomena) possible, Kant concluded that the uniformity of nature was an a priori truth.[27] A class of synthetic statements that was not contingent but true by necessity was then synthetic a priori. Kant thus saved both metaphysics and Newton's law of universal gravitation. On the basis of the argument that what goes beyond our knowledge is "nothing to us,"[28] he discarded scientific realism. Kant's position that knowledge comes about by a cooperation of perception and our capacity to think (transcendental idealism) gave birth to the movement of German idealism. Hegel's absolute idealism subsequently flourished across continental Europe and England.
Positivism, developed by Henri de Saint-Simon and promulgated in the 1830s by his former student Auguste Comte, was the first late modern philosophy of science. In the aftermath of the French Revolution, fearing society's ruin, Comte opposed metaphysics. Human knowledge had evolved from religion to metaphysics to science, said Comte, which had flowed from mathematics to astronomy to physics to chemistry to biology to sociology—in that order—describing increasingly intricate domains. All of society's knowledge had become scientific, with questions of theology and of metaphysics being unanswerable. Comte found enumerative induction reliable as a consequence of its grounding in available experience. He asserted the use of science, rather than metaphysical truth, as the correct method for the improvement of human society.
According to Comte, scientific method frames predictions, confirms them, and states laws—positive statements—irrefutable by theology or by metaphysics. Regarding experience as justifying enumerative induction by demonstrating the uniformity of nature,[27] the British philosopher John Stuart Mill welcomed Comte's positivism, but thought scientific laws susceptible to recall or revision, and Mill also withheld from Comte's Religion of Humanity. Comte was confident in treating scientific law as an irrefutable foundation for all knowledge, and believed that churches, honouring eminent scientists, ought to focus public mindset on altruism—a term Comte coined—to apply science for humankind's social welfare via sociology, Comte's leading science.
During the 1830s and 1840s, while Comte and Mill were the leading philosophers of science, William Whewell found enumerative induction not nearly as convincing, and, despite the dominance of inductivism, formulated "superinduction".[29] Whewell argued that "the peculiar import of the term Induction" should be recognised: "there is some Conception superinduced upon the facts", that is, "the Invention of a new Conception in every inductive inference". The creation of Conceptions is easily overlooked and prior to Whewell was rarely recognised.[29] Whewell explained:
"Although we bind together facts by superinducing upon them a new Conception, this Conception, once introduced and applied, is looked upon as inseparably connected with the facts, and necessarily implied in them. Having once had the phenomena bound together in their minds in virtue of the Conception, men can no longer easily restore them back to detached and incoherent condition in which they were before they were thus combined."[29]
These "superinduced" explanations may well be flawed, but their accuracy is suggested when they exhibit what Whewell termedconsilience—that is, simultaneously predicting the inductive generalizations in multiple areas—a feat that, according to Whewell, can establish their truth. Perhaps to accommodate the prevailing view of science as inductivist method, Whewell devoted several chapters to "methods of induction" and sometimes used the phrase "logic of induction", despite the fact that induction lacks rules and cannot be trained.[29]
In the 1870s, the originator of pragmatism, C. S. Peirce, performed vast investigations that clarified the basis of deductive inference as a mathematical proof (as, independently, did Gottlob Frege). Peirce recognized induction but always insisted on a third type of inference that Peirce variously termed abduction or retroduction or hypothesis or presumption.[30] Later philosophers termed Peirce's abduction, etc., Inference to the Best Explanation (IBE).[31]
Having highlighted Hume's problem of induction, John Maynard Keynes posed logical probability as its answer, or as near a solution as he could arrive at.[32] Bertrand Russell found Keynes's Treatise on Probability the best examination of induction, and believed that if read with Jean Nicod's Le Problème logique de l'induction as well as R. B. Braithwaite's review of Keynes's work in the October 1925 issue of Mind, that would cover "most of what is known about induction", although the "subject is technical and difficult, involving a good deal of mathematics".[33] Two decades later, Russell followed Keynes in regarding enumerative induction as an "independent logical principle".[34][35][36] Russell found:
"Hume's skepticism rests entirely upon his rejection of the principle of induction. The principle of induction, as applied to causation, says that, ifAhas been found very often accompanied or followed byB, then it is probable that on the next occasion on whichAis observed, it will be accompanied or followed byB. If the principle is to be adequate, a sufficient number of instances must make the probability not far short of certainty. If this principle, or any other from which it can be deduced, is true, then the casual inferences which Hume rejects are valid, not indeed as giving certainty, but as giving a sufficient probability for practical purposes. If this principle is not true, every attempt to arrive at general scientific laws from particular observations is fallacious, and Hume's skepticism is inescapable for an empiricist. The principle itself cannot, of course, without circularity, be inferred from observed uniformities, since it is required to justify any such inference. It must, therefore, be, or be deduced from, an independent principle not based on experience. To this extent, Hume has proved that pure empiricism is not a sufficient basis for science. But if this one principle is admitted, everything else can proceed in accordance with the theory that all our knowledge is based on experience. It must be granted that this is a serious departure from pure empiricism, and that those who are not empiricists may ask why, if one departure is allowed, others are forbidden. These, however, are not questions directly raised by Hume's arguments. What these arguments prove—and I do not think the proof can be controverted—is that induction is an independent logical principle, incapable of being inferred either from experience or from other logical principles, and that without this principle, science is impossible."[36]
In a 1965 paper, Gilbert Harman explained that enumerative induction is not an autonomous phenomenon, but is simply a disguised consequence of Inference to the Best Explanation (IBE).[31] IBE is otherwise synonymous with C. S. Peirce's abduction.[31] Many philosophers of science espousing scientific realism have maintained that IBE is the way that scientists develop approximately true scientific theories about nature.[37]
Inductive reasoning is a form of argument that—in contrast to deductive reasoning—allows for the possibility that a conclusion can be false, even if all of the premises are true.[38] This difference between deductive and inductive reasoning is reflected in the terminology used to describe deductive and inductive arguments. In deductive reasoning, an argument is "valid" when, assuming the argument's premises are true, the conclusion must be true. If the argument is valid and the premises are true, then the argument is "sound". In contrast, in inductive reasoning, an argument's premises can never guarantee that the conclusion must be true. Instead, an argument is "strong" when, assuming the argument's premises are true, the conclusion is probably true. If the argument is strong and the premises are thought to be true, then the argument is said to be "cogent".[39] Less formally, the conclusion of an inductive argument may be called "probable", "plausible", "likely", "reasonable", or "justified", but never "certain" or "necessary". Logic affords no bridge from the probable to the certain.
The futility of attaining certainty through some critical mass of probability can be illustrated with a coin-toss exercise. Suppose someone tests whether a coin is either a fair one or two-headed. They flip the coin ten times, and ten times it comes up heads. At this point, there is a strong reason to believe it is two-headed. After all, the chance of ten heads in a row is .000976: less than one in one thousand. Then, after 100 flips, every toss has come up heads. Now there is "virtual" certainty that the coin is two-headed, and one can regard it as "true" that the coin is probably two-headed. Still, one can neither logically nor empirically rule out that the next toss will produce tails. No matter how many times in a row it comes up heads, this remains the case. If one programmed a machine to flip a coin over and over continuously, at some point the result would be a string of 100 heads. In the fullness of time, all combinations will appear.
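The quoted figure follows from the product rule for independent tosses:

$\left(\tfrac{1}{2}\right)^{10} = \tfrac{1}{1024} = 0.0009765625 \approx 0.000976.$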
As for the slim prospect of getting ten out of ten heads from a fair coin—the outcome that made the coin appear biased—many may be surprised to learn that any particular sequence of heads and tails is equally unlikely (e.g., H-H-T-T-H-T-H-H-H-T), and yet some such sequence occurs in every trial of ten tosses. That means all results for ten tosses have the same probability as getting ten out of ten heads, which is 0.000976. If one records the heads-tails sequence, whatever the result, that exact sequence had a chance of 0.000976.
An argument is deductive when the conclusion is necessary given the premises. That is, the conclusion must be true if the premises are true. For example, after getting ten heads in a row one might deduce that the coin had met some statistical criterion to be regarded as probably two-headed, a conclusion that would not be falsified even if the next toss yielded tails.
If a deductive conclusion follows duly from its premises, then it is valid; otherwise, it is invalid (that an argument is invalid is not to say its conclusions are false; it may have a true conclusion, just not on account of the premises). An examination of the following examples will show that the relationship between premises and conclusion is such that the truth of the conclusion is already implicit in the premises. Bachelors are unmarried because we say they are; we have defined them so. Socrates is mortal because we have included him in a set of beings that are mortal. The conclusion for a valid deductive argument is already contained in the premises since its truth is strictly a matter of logical relations. It cannot say more than its premises. Inductive premises, on the other hand, draw their substance from fact and evidence, and the conclusion accordingly makes a factual claim or prediction. Its reliability varies proportionally with the evidence. Induction wants to reveal something new about the world. One could say that induction wants to say more than is contained in the premises.
To better see the difference between inductive and deductive arguments, consider that it would not make sense to say: "all rectangles so far examined have four right angles, so the next one I see will have four right angles." This would treat logical relations as something factual and discoverable, and thus variable and uncertain. Likewise, speaking deductively, we may permissibly say: "All unicorns can fly; I have a unicorn named Charlie; thus Charlie can fly." This deductive argument is valid because the logical relations hold; we are not interested in their factual soundness.
The conclusions of inductive reasoning are inherently uncertain. It only deals with the extent to which, given the premises, the conclusion is "credible" according to some theory of evidence. Examples include a many-valued logic, Dempster–Shafer theory, or probability theory with rules for inference such as Bayes' rule. Unlike deductive reasoning, it does not rely on universals holding over a closed domain of discourse to draw conclusions, so it can be applicable even in cases of epistemic uncertainty (technical issues with this may arise, however; for example, the second axiom of probability is a closed-world assumption).[40]
Another crucial difference between these two types of argument is that deductive certainty is impossible in non-axiomatic or empirical systems such as reality, leaving inductive reasoning as the primary route to (probabilistic) knowledge of such systems.[41]
Given that "ifAis true then that would causeB,C, andDto be true", an example of deduction would be "Ais true therefore we can deduce thatB,C, andDare true". An example of induction would be "B,C, andDare observed to be true thereforeAmight be true".Ais areasonableexplanation forB,C, andDbeing true.
For example: a large asteroid impact would explain both the crater in the Gulf of Mexico dating to 66 million years ago and the mass extinction of the non-avian dinosaurs at about the same time; therefore, an asteroid impact might be the cause of that extinction.
Note, however, that the asteroid explanation for the mass extinction is not necessarily correct. Other events with the potential to affect global climate also coincide with the extinction of the non-avian dinosaurs. For example, the release of volcanic gases (particularly sulfur dioxide) during the formation of the Deccan Traps in India.
Another example of an inductive argument: all biological life forms that we know of depend on liquid water to exist; therefore, if we discover a new biological life form, it will probably depend on liquid water to exist.
This argument could have been made every time a new biological life form was found, and would have had a correct conclusion every time; however, it is still possible that in the future a biological life form not requiring liquid water could be discovered.
As a result, the argument may be stated as: all biological life forms that we know of depend on liquid water to exist; therefore, all biological life probably depends on liquid water to exist.
A classical example of an "incorrect" statistical syllogism was presented by John Vickers: all of the swans we have seen are white; therefore, we know that all swans are white.
The conclusion fails because the population of swans then known was not actually representative of all swans. A more reasonable conclusion would be: in line with applicable conventions, we might reasonably expect all swans in England to be white, at least in the short term.
Succinctly put: deduction is about certainty/necessity; induction is about probability.[10] Any single assertion will answer to one of these two criteria. Another approach to the analysis of reasoning is that of modal logic, which deals with the distinction between the necessary and the possible in a way not concerned with probabilities among things deemed possible.
The philosophical definition of inductive reasoning is more nuanced than a simple progression from particular/individual instances to broader generalizations. Rather, the premises of an inductive logical argument indicate some degree of support (inductive probability) for the conclusion but do not entail it; that is, they suggest truth but do not ensure it. In this manner, there is the possibility of moving from general statements to individual instances (for example, statistical syllogisms).
Note that the definition of inductive reasoning described here differs from mathematical induction, which, in fact, is a form of deductive reasoning. Mathematical induction is used to provide strict proofs of the properties of recursively defined sets.[42] The deductive nature of mathematical induction derives from its basis in a non-finite number of cases, in contrast with the finite number of cases involved in an enumerative induction procedure like proof by exhaustion. Both mathematical induction and proof by exhaustion are examples of complete induction. Complete induction is a masked type of deductive reasoning.
Although philosophers at least as far back as the Pyrrhonist philosopher Sextus Empiricus have pointed out the unsoundness of inductive reasoning,[43] the classic philosophical critique of the problem of induction was given by the Scottish philosopher David Hume.[44] Although the use of inductive reasoning demonstrates considerable success, the justification for its application has been questionable. Recognizing this, Hume highlighted the fact that our mind often draws conclusions from relatively limited experiences that appear correct but which are actually far from certain. In deduction, the truth value of the conclusion is based on the truth of the premise. In induction, however, the dependence of the conclusion on the premise is always uncertain. For example, let us assume that all ravens are black. The fact that there are numerous black ravens supports the assumption. Our assumption, however, becomes invalid once it is discovered that there are white ravens. Therefore, the general rule "all ravens are black" is not the kind of statement that can ever be certain. Hume further argued that it is impossible to justify inductive reasoning: it cannot be justified deductively, so our only option is to justify it inductively; since this argument is circular, with the help of Hume's fork he concluded that our use of induction is not logically justifiable.[45]
Hume nevertheless stated that even if induction were proved unreliable, we would still have to rely on it. So instead of a position of severe skepticism, Hume advocated a practical skepticism based on common sense, where the inevitability of induction is accepted.[46] Bertrand Russell illustrated Hume's skepticism in a story about a chicken who, fed every morning without fail and following the laws of induction, concluded that this feeding would always continue, until his throat was eventually cut by the farmer.[47]
In 1963, Karl Popper wrote, "Induction, i.e. inference based on many observations, is a myth. It is neither a psychological fact, nor a fact of ordinary life, nor one of scientific procedure."[48][49] Popper's 1972 book Objective Knowledge—whose first chapter is devoted to the problem of induction—opens, "I think I have solved a major philosophical problem: the problem of induction".[49] In Popper's schema, enumerative induction is "a kind of optical illusion" cast by the steps of conjecture and refutation during a problem shift.[49] An imaginative leap, the tentative solution is improvised, lacking inductive rules to guide it.[49] The resulting, unrestricted generalization is deductive, an entailed consequence of all explanatory considerations.[49] Controversy continued, however, with Popper's putative solution not generally accepted.[50]
Donald A. Gillies argues that rules of inference related to inductive reasoning are overwhelmingly absent from science, and describes most scientific inferences as "involv[ing] conjectures thought up by human ingenuity and creativity, and by no means inferred in any mechanical fashion, or according to precisely specified rules."[51] Gillies also provides a rare counterexample "in the machine learning programs of AI."[51]
Inductive reasoning is also known as hypothesis construction because any conclusions made are based on current knowledge and predictions.[citation needed] As with deductive arguments, biases can distort the proper application of inductive argument, thereby preventing the reasoner from forming the most logical conclusion based on the clues. Examples of these biases include the availability heuristic, confirmation bias, and the predictable-world bias.
The availability heuristic is regarded as causing the reasoner to depend primarily upon information that is readily available. People have a tendency to rely on information that is easily accessible in the world around them. For example, in surveys, when people are asked to estimate the percentage of people who died from various causes, most respondents choose the causes that have been most prevalent in the media such as terrorism, murders, and airplane accidents, rather than causes such as disease and traffic accidents, which have been technically "less accessible" to the individual since they are not emphasized as heavily in the world around them.
Confirmation bias is based on the natural tendency to confirm rather than deny a hypothesis. Research has demonstrated that people are inclined to seek solutions to problems that are more consistent with known hypotheses rather than attempt to refute those hypotheses. Often, in experiments, subjects will ask questions that seek answers that fit established hypotheses, thus confirming these hypotheses. For example, if it is hypothesized that Sally is a sociable individual, subjects will naturally seek to confirm the premise by asking questions that would produce answers confirming that Sally is, in fact, a sociable individual.
The predictable-world bias revolves around the inclination to perceive order where it has not been proved to exist, either at all or at a particular level of abstraction. Gambling, for example, is one of the most popular examples of predictable-world bias. Gamblers often begin to think that they see simple and obvious patterns in the outcomes and therefore believe that they are able to predict outcomes based on what they have witnessed. In reality, however, the outcomes of these games are difficult to predict and highly complex in nature. In general, people tend to seek some type of simplistic order to explain or justify their beliefs and experiences, and it is often difficult for them to realise that their perceptions of order may be entirely different from the truth.[52]
As a logic of induction rather than a theory of belief, Bayesian inference does not determine which beliefs are a priori rational, but rather determines how we should rationally change the beliefs we have when presented with evidence. We begin by considering an exhaustive list of possibilities, a definite probabilistic characterisation of each of them (in terms of likelihoods), and precise prior probabilities for them (e.g. based on logic or induction from previous experience). When faced with evidence, we adjust the strength of our belief in the given hypotheses in a precise manner using Bayesian logic to yield candidate 'a posteriori probabilities', taking no account of the extent to which the new evidence may happen to give us specific reasons to doubt our assumptions. Otherwise it is advisable to review and repeat as necessary the consideration of possibilities and their characterisation until, perhaps, a stable situation is reached.[53]
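A minimal sketch of this updating step, assuming a hypothetical set of three exhaustive hypotheses with invented priors and likelihoods:

```python
# Bayesian belief revision: posterior ∝ prior × likelihood.
# The hypotheses, priors, and likelihoods below are illustrative only.
priors = {"H1": 0.5, "H2": 0.3, "H3": 0.2}       # exhaustive possibilities
likelihoods = {"H1": 0.1, "H2": 0.4, "H3": 0.8}  # P(evidence | H)

unnormalised = {h: priors[h] * likelihoods[h] for h in priors}
total = sum(unnormalised.values())
posteriors = {h: p / total for h, p in unnormalised.items()}

print(posteriors)  # H3's belief strength rises from 0.2 to ~0.48
```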
Around 1960, Ray Solomonoff founded the theory of universal inductive inference, a theory of prediction based on observations, for example, predicting the next symbol based upon a given series of symbols. This is a formal inductive framework that combines algorithmic information theory with the Bayesian framework. Universal inductive inference rests on solid philosophical foundations and can be considered a mathematically formalized Occam's razor, although its incomputability means it cannot be applied directly to complex real-world environments.[54] Fundamental ingredients of the theory are the concepts of algorithmic probability and Kolmogorov complexity.
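A toy sketch of the central idea, not Solomonoff's actual construction (which sums over all programs of a universal machine and is incomputable): each hypothesis consistent with the data is weighted by 2^-length of its description, so shorter explanations dominate. The hypothesis set and lengths below are invented for illustration:

```python
# Toy analogue of algorithmic probability: hypotheses consistent with the
# data are weighted 2 ** -description_length, so the shortest surviving
# description dominates. Real Solomonoff induction is incomputable;
# everything here is invented for illustration.
hypotheses = {
    "all heads":       ("HHHHHHHH", 9),   # (predicted sequence, description length)
    "alternating":     ("HTHTHTHT", 11),
    "memorised noise": ("HHTHTTHH", 30),
}
observed = "HH"  # data seen so far

weights = {name: 2.0 ** -length
           for name, (seq, length) in hypotheses.items()
           if seq.startswith(observed)}      # keep only consistent hypotheses
total = sum(weights.values())
print({name: w / total for name, w in weights.items()})
# "all heads" (the short description) gets ~0.9999995 of the posterior mass.
```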
Inductive inference typically considers hypothesis classes with a countable size. A recent advance[55] established a necessary and sufficient condition for inductive inference: a finite error bound is guaranteed if and only if the hypothesis class is a countable union of online learnable classes. Notably, this condition allows the hypothesis class to have an uncountable size while remaining learnable within this framework. | https://en.wikipedia.org/wiki/Inductive_argument
Cognitive biases are systematic patterns of deviation from norm or rationality in judgment.[1][2] They are often studied in psychology, sociology and behavioral economics.[1]

Although the reality of most of these biases is confirmed by reproducible research,[3][4] there are often controversies about how to classify these biases or how to explain them.[5] Several theoretical causes are known for some cognitive biases, which allows biases to be classified by their common generative mechanism (such as noisy information-processing[6]). Gerd Gigerenzer has criticized the framing of cognitive biases as errors in judgment, and favors interpreting them as arising from rational deviations from logical thought.[7]

Explanations include information-processing rules (i.e., mental shortcuts), called heuristics, that the brain uses to produce decisions or judgments. Biases have a variety of forms and appear as cognitive ("cold") bias, such as mental noise,[6] or motivational ("hot") bias, such as when beliefs are distorted by wishful thinking. Both effects can be present at the same time.[8][9]

There are also controversies over some of these biases as to whether they count as useless or irrational, or whether they result in useful attitudes or behavior. For example, when getting to know others, people tend to ask leading questions which seem biased towards confirming their assumptions about the person. However, this kind of confirmation bias has also been argued to be an example of social skill; a way to establish a connection with the other person.[10]

Although this research overwhelmingly involves human subjects, some studies have found bias in non-human animals as well. For example, loss aversion has been shown in monkeys and hyperbolic discounting has been observed in rats, pigeons, and monkeys.[11]
These biases affect belief formation, reasoning processes, business and economic decisions, and human behavior in general.
The anchoring bias, or focalism, is the tendency to rely too heavily—to "anchor"—on one trait or piece of information when making decisions (usually the first piece of information acquired on that subject).[12][13] Anchoring bias includes or involves the following:

Apophenia is the tendency to perceive meaningful connections between unrelated things.[18] The following are types of apophenia:
The availability heuristic (also known as the availability bias) is the tendency to overestimate the likelihood of events with greater "availability" in memory, which can be influenced by how recent the memories are or how unusual or emotionally charged they may be.[22] The availability heuristic includes or involves the following:

Cognitive dissonance is the mental discomfort that results from holding contradictory beliefs or from perceiving information that conflicts with one's existing beliefs.
Confirmation bias is the tendency to search for, interpret, focus on and remember information in a way that confirms one's preconceptions.[35] There are multiple other cognitive biases which involve or are types of confirmation bias:

Egocentric bias is the tendency to rely too heavily on one's own perspective and/or have a different perception of oneself relative to others.[38] The following are forms of egocentric bias:

Extension neglect occurs when the sample size is not sufficiently taken into consideration when assessing the outcome, relevance or judgement. The following are forms of extension neglect:
False priors are initial beliefs and knowledge which interfere with the unbiased evaluation of factual evidence and lead to incorrect conclusions. Biases based on false priors include:
The framing effect is the tendency to draw different conclusions from the same information, depending on how that information is presented. Forms of the framing effect include:
The following relate to prospect theory:
Association fallacies include:
Attribution bias includes:
Conformity is involved in the following:
Ingroup bias is the tendency for people to give preferential treatment to others they perceive to be members of their own groups. It is related to the following:
In psychology and cognitive science, a memory bias is a cognitive bias that either enhances or impairs the recall of a memory (either the chances that the memory will be recalled at all, or the amount of time it takes for it to be recalled, or both), or that alters the content of a reported memory. There are many types of memory bias, including:
The misattributions include: | https://en.wikipedia.org/wiki/List_of_cognitive_biases |
Anecdotal evidence (or anecdata[1]) is evidence based on descriptions and reports of individual, personal experiences, or observations,[2][3] collected in a non-systematic manner.[4]

The term anecdotal encompasses a variety of forms of evidence. It refers to personal experiences, self-reported claims,[3] or eyewitness accounts of others,[5] including those from fictional sources, making it a broad category that can lead to confusion due to its varied interpretations.

Anecdotal evidence can be true or false but is not usually subjected to the methodology of the scholarly method, the scientific method, or the rules of legal, historical, academic, or intellectual rigor, meaning that there are little or no safeguards against fabrication or inaccuracy.[2] However, the use of anecdotal reports in advertising or promotion of a product, service, or idea may be considered a testimonial, which is highly regulated in certain jurisdictions.[6]

The persuasiveness of anecdotal evidence compared to that of statistical evidence has been a subject of debate; some studies have argued for the presence of a generalized tendency to overvalue anecdotal evidence, whereas others have emphasized the type of argument as a prerequisite or rejected the conclusion altogether.[7][8][9][10][11]
In science, definitions of anecdotal evidence include:
Anecdotal evidence may be considered within the scope of scientific method as some anecdotal evidence can be both empirical and verifiable, e.g. in the use of case studies in medicine. Other anecdotal evidence, however, does not qualify as scientific evidence, because its nature prevents it from being investigated by the scientific method, for instance, in that of folklore or in the case of intentionally fictional anecdotes. Where only one or a few anecdotes are presented, there is a chance that they may be unreliable due to cherry-picked or otherwise non-representative samples of typical cases.[16][17] Similarly, psychologists have found that due to cognitive bias people are more likely to remember notable or unusual examples rather than typical examples.[18] Thus, even when accurate, anecdotal evidence is not necessarily representative of a typical experience. Accurate determination of whether an anecdote is typical requires statistical evidence.[19] Misuse of anecdotal evidence in the form of argument from anecdote is an informal fallacy[20] and is sometimes referred to as the "person who" fallacy ("I know a person who..."; "I know of a case where..." etc.), which places undue weight on experiences of close peers which may not be typical.

Anecdotal evidence can have varying degrees of formality. For instance, in medicine, published anecdotal evidence by a trained observer (a doctor) is called a case report, and is subjected to formal peer review.[21] Although such evidence is not seen as conclusive, researchers may sometimes regard it as an invitation to more rigorous scientific study of the phenomenon in question.[22] For instance, one study found that 35 of 47 anecdotal reports of drug side-effects were later sustained as "clearly correct."[23]

Anecdotal evidence is considered the least certain type of scientific information.[24] Researchers may use anecdotal evidence for suggesting new hypotheses, but never as validating evidence.[25][26]

If an anecdote illustrates a desired conclusion rather than a logical conclusion, it is considered a faulty or hasty generalization.[27]
In any case where some factor affects the probability of an outcome, rather than uniquely determining it, selected individual cases prove nothing; e.g. "my grandfather smoked two packs a day until he died at 90" and "my sister never smoked but died of lung cancer". Anecdotes often refer to the exception, rather than the rule: "Anecdotes are useless precisely because they may point to idiosyncratic responses."[28]
In medicine, anecdotal evidence may also be subject to placebo effects.[29]
In the legal sphere, anecdotal evidence, if it passes certain legal requirements and is admitted as testimony, is a common form of evidence used in a court of law. In many cases, anecdotal evidence is the only evidence presented at trial.[30] Scientific evidence in a court of law is called physical evidence, but it is much rarer. Anecdotal evidence, with a few safeguards, represents the bulk of evidence in court.

The legal rigors applied to testimony for it to be considered evidence are that it must be given under oath, that the person is only testifying to their own words and actions, and that someone intentionally lying under oath is subject to perjury. However, these rigors do not make testimony in a court of law equal to scientific evidence, as there are far fewer legal rigors. Testimony about another person's experiences or words is called hearsay and is usually not admissible, though there are certain exceptions. However, any hearsay that is not objected to or thrown out by a judge is considered evidence for a jury. This means that trials contain quite a bit of anecdotal evidence, which is treated as relevant evidence by a jury. Eyewitness testimony (which is a form of anecdotal evidence) is considered the most compelling form of evidence by a jury.[31] | https://en.wikipedia.org/wiki/Misleading_vividness
The prevention paradox describes the seemingly contradictory situation where the majority of cases of a disease come from a population at low or moderate risk of that disease, and only a minority of cases come from the high-risk population (of the same disease). This is because the number of people at high risk is small. The prevention paradox was first formally described in 1981[1] by the epidemiologist Geoffrey Rose.

Especially during the COVID-19 pandemic of 2020, the term "prevention paradox" was also used to describe the apparent paradox of people questioning steps to prevent the spread of the pandemic because the prophesied spread did not occur.[2] This, however, is instead an example of a self-defeating prophecy[3] or a preparedness paradox.

For example, Rose describes the case of Down syndrome, where maternal age is a risk factor. Yet most cases of Down syndrome will be born to younger, low-risk mothers (this is true at least in populations where most women have children at a younger age). This situation is paradoxical because it is common and logical to equate high-risk populations with making up the majority of the burden of disease.
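The arithmetic behind the paradox can be made concrete with hypothetical round numbers (all invented for illustration):

```python
# Hypothetical population split into a small high-risk group and a large
# low-risk group (all numbers invented for illustration).
high_risk_n, high_risk_rate = 10_000, 0.01      # 1-in-100 risk
low_risk_n, low_risk_rate = 990_000, 0.0005     # 1-in-2000 risk

high_risk_cases = high_risk_n * high_risk_rate  # 100 expected cases
low_risk_cases = low_risk_n * low_risk_rate     # 495 expected cases

share_low = low_risk_cases / (high_risk_cases + low_risk_cases)
print(f"{share_low:.0%} of cases arise in the low-risk group")  # ~83%
```

Even though each low-risk individual is twenty times safer, the low-risk group is ninety-nine times larger, so it still produces most of the cases.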
Another example can be seen in terms of reducing overall alcohol problems in a population. Most alcohol problems are found not among dependent drinkers but among the far larger group of non-dependent "risky" drinkers, even though each individual problem is less serious. Greater societal gain will therefore be obtained by achieving a small reduction in alcohol misuse within that large group of risky drinkers than by trying to reduce problems among the smaller number of dependent drinkers. | https://en.wikipedia.org/wiki/Prevention_paradox
Simpson's paradox is a phenomenon in probability and statistics in which a trend appears in several groups of data but disappears or reverses when the groups are combined. This result is often encountered in social-science and medical-science statistics,[1][2][3] and is particularly problematic when frequency data are unduly given causal interpretations.[4] The paradox can be resolved when confounding variables and causal relations are appropriately addressed in the statistical modeling[4][5] (e.g., through cluster analysis).[6]

Simpson's paradox has been used to illustrate the kind of misleading results that the misuse of statistics can generate.[7][8]

Edward H. Simpson first described this phenomenon in a technical paper in 1951;[9] the statisticians Karl Pearson (in 1899)[10] and Udny Yule (in 1903)[11] had mentioned similar effects earlier. The name Simpson's paradox was introduced by Colin R. Blyth in 1972.[12] It is also referred to as Simpson's reversal, the Yule–Simpson effect, the amalgamation paradox, or the reversal paradox.[13]

Mathematician Jordan Ellenberg argues that Simpson's paradox is misnamed as "there's no contradiction involved, just two different ways to think about the same data" and suggests that its lesson "isn't really to tell us which viewpoint to take but to insist that we keep both the parts and the whole in mind at once."[14]
One of the best-known examples of Simpson's paradox comes from a study of gender bias among graduate school admissions to the University of California, Berkeley. The admission figures for the fall of 1973 showed that men applying were more likely than women to be admitted, and the difference was so large that it was unlikely to be due to chance.[15][16]

However, when the department applied to was taken into account, the differing rejection percentages revealed the differing difficulty of getting into each department, and showed that women tended to apply to more competitive departments with lower rates of admission, even among qualified applicants (such as the English department), whereas men tended to apply to less competitive departments with higher rates of admission (such as the engineering department). The pooled and corrected data showed a "small but statistically significant bias in favor of women".[16]
The data from the six largest departments are tabulated in the source, with the two most-applied-for departments for each gender shown in bold.
The full data showed a total of 4 out of 85 departments to be significantly biased against women, and 6 to be significantly biased against men (not all of them among the six largest departments). Notably, the numbers of biased departments were not the basis for the conclusion; rather, it was based on the gender admissions pooled across all departments, weighted by each department's rejection rate across all of its applicants.[16]
Another example comes from a real-life medical study[17] comparing the success rates of two treatments for kidney stones.[18] The study reported the success rates (the term success rate here actually means the success proportion) and numbers of treatments for treatments involving both small and large kidney stones, where Treatment A includes open surgical procedures and Treatment B includes closed surgical procedures.

The paradoxical conclusion is that treatment A is more effective when used on small stones, and also when used on large stones, yet treatment B appears to be more effective when considering both sizes at the same time. In this example, the "lurking" variable (or confounding variable) causing the paradox is the size of the stones, which was not previously known to researchers to be important until its effects were included.[citation needed]
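The commonly cited figures from this study make the reversal easy to verify; a sketch (the exact counts are as quoted in secondary sources, so treat them as illustrative):

```python
# Success counts commonly cited for the kidney-stone study: (successes, total).
data = {
    "A": {"small": (81, 87), "large": (192, 263)},
    "B": {"small": (234, 270), "large": (55, 80)},
}

for size in ("small", "large"):
    for t in ("A", "B"):
        s, n = data[t][size]
        print(f"{size} stones, treatment {t}: {s / n:.0%}")

for t in ("A", "B"):
    s = sum(succ for succ, _ in data[t].values())
    n = sum(total for _, total in data[t].values())
    print(f"all stones, treatment {t}: {s / n:.0%}")

# A wins within each stratum (93% vs 87%, 73% vs 69%), yet B wins overall
# (83% vs 78%), because B was applied mostly to the easier small-stone cases.
```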
Which treatment is considered better is determined by which success ratio (successes/total) is larger. The reversal of the inequality between the two ratios when considering the combined data, which creates Simpson's paradox, happens because two effects occur together:[citation needed]
Based on these effects, the paradoxical result is seen to arise because the effect of the size of the stones overwhelms the benefits of the better treatment (A). In short, the less effective treatment B appeared to be more effective because it was applied more frequently to the small stones cases, which were easier to treat.[18]
Jaynes argues that the correct conclusion is that, though treatment A remains noticeably better than treatment B, the kidney stone size is more important.[19]
A common example of Simpson's paradox involves the batting averages of players in professional baseball. It is possible for one player to have a higher batting average than another player each year for a number of years, but to have a lower batting average across all of those years. This phenomenon can occur when there are large differences in the number of at bats between the years. Mathematician Ken Ross demonstrated this using the batting averages of two baseball players, Derek Jeter and David Justice, during the years 1995 and 1996:[20][21]
In both 1995 and 1996, Justice had a higher batting average (in bold type) than Jeter did. However, when the two baseball seasons are combined, Jeter shows a higher batting average than Justice. According to Ross, this phenomenon would be observed about once per year among the possible pairs of players.[20]
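A sketch with the hits and at-bats as commonly reported for these two seasons (quoted figures, so treat them as illustrative):

```python
# Hits / at-bats as commonly reported for the 1995-96 seasons.
jeter   = {"1995": (12, 48),   "1996": (183, 582)}
justice = {"1995": (104, 411), "1996": (45, 140)}

for year in ("1995", "1996"):
    jh, jab = jeter[year]
    uh, uab = justice[year]
    print(year, f"Jeter {jh/jab:.3f}", f"Justice {uh/uab:.3f}")  # Justice higher both years

jh, jab = map(sum, zip(*jeter.values()))      # (195, 630)
uh, uab = map(sum, zip(*justice.values()))    # (149, 551)
print("combined", f"Jeter {jh/jab:.3f}", f"Justice {uh/uab:.3f}")  # Jeter higher overall
```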
Simpson's paradox can also be illustrated using a 2-dimensional vector space.[22] A success rate of $\frac{p}{q}$ (i.e., successes/attempts) can be represented by a vector $\vec{A}=(q,p)$, with a slope of $\frac{p}{q}$. A steeper vector then represents a greater success rate. If two rates $\frac{p_1}{q_1}$ and $\frac{p_2}{q_2}$ are combined, as in the examples given above, the result can be represented by the sum of the vectors $(q_1,p_1)$ and $(q_2,p_2)$, which according to the parallelogram rule is the vector $(q_1+q_2,\,p_1+p_2)$, with slope $\frac{p_1+p_2}{q_1+q_2}$.

Simpson's paradox says that even if a vector $\vec{L}_1$ (orange in the original figure) has a smaller slope than another vector $\vec{B}_1$ (blue), and $\vec{L}_2$ has a smaller slope than $\vec{B}_2$, the sum of the two vectors $\vec{L}_1+\vec{L}_2$ can potentially still have a larger slope than the sum of the two vectors $\vec{B}_1+\vec{B}_2$, as shown in the example. For this to occur, one of the orange vectors must have a greater slope than one of the blue vectors (here $\vec{L}_2$ and $\vec{B}_1$), and these will generally be longer than the alternatively subscripted vectors, thereby dominating the overall comparison.
Simpson's reversal can also arise in correlations, in which two variables appear to have (say) a positive correlation towards one another, when in fact they have a negative correlation, the reversal having been brought about by a "lurking" confounder. Berman et al.[23] give an example from economics, where a dataset suggests overall demand is positively correlated with price (that is, higher prices lead to more demand), in contradiction of expectation. Analysis reveals time to be the confounding variable: plotting both price and demand against time reveals the expected negative correlation over various periods, which then reverses to become positive if the influence of time is ignored by simply plotting demand against price.
Psychological interest in Simpson's paradox seeks to explain why people deem sign reversal to be impossible at first. The question is where people get this strong intuition from, and how it is encoded in the mind.

Simpson's paradox demonstrates that this intuition cannot be derived from either classical logic or probability calculus alone, and thus led philosophers to speculate that it is supported by an innate causal logic that guides people in reasoning about actions and their consequences.[4] Savage's sure-thing principle[12] is an example of what such logic may entail. A qualified version of Savage's sure-thing principle can indeed be derived from Pearl's do-calculus[4] and reads: "An action A that increases the probability of an event B in each subpopulation Ci of C must also increase the probability of B in the population as a whole, provided that the action does not change the distribution of the subpopulations." This suggests that knowledge about actions and consequences is stored in a form resembling causal Bayesian networks.
A paper by Pavlides and Perlman presents a proof, due to Hadjicostas, that in a random 2 × 2 × 2 table with uniform distribution, Simpson's paradox will occur with a probability of exactly 1/60.[24] A study by Kock suggests that the probability that Simpson's paradox would occur at random in path models (i.e., models generated by path analysis) with two predictors and one criterion variable is approximately 12.8 percent; slightly higher than 1 occurrence per 8 path models.[25]
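A Monte Carlo sketch of the 1/60 result, under the assumption that "uniform distribution" means the eight cell probabilities of the 2 × 2 × 2 table are uniform on the probability simplex (a flat Dirichlet); if Hadjicostas's setup differs, the estimate will too:

```python
import random

# Monte Carlo check of the ~1/60 figure, assuming "uniform" means the eight
# cell probabilities of a 2x2x2 table are uniform on the simplex (sampled
# here as normalised exponentials, i.e. a flat Dirichlet).
def random_table(rng):
    cells = [rng.expovariate(1.0) for _ in range(8)]
    total = sum(cells)
    return [c / total for c in cells]

def reversal(p):
    # p indexed row-major as [group][treatment][outcome]: index = 4g + 2t + o.
    def rate(g, t):
        success, failure = p[4 * g + 2 * t + 1], p[4 * g + 2 * t]
        return success / (success + failure)

    def pooled(t):
        success = p[2 * t + 1] + p[4 + 2 * t + 1]
        failure = p[2 * t] + p[4 + 2 * t]
        return success / (success + failure)

    a_wins_both = rate(0, 0) > rate(0, 1) and rate(1, 0) > rate(1, 1)
    b_wins_both = rate(0, 0) < rate(0, 1) and rate(1, 0) < rate(1, 1)
    return (a_wins_both and pooled(0) < pooled(1)) or \
           (b_wins_both and pooled(0) > pooled(1))

rng = random.Random(0)
trials = 200_000
hits = sum(reversal(random_table(rng)) for _ in range(trials))
print(hits / trials, "vs 1/60 =", 1 / 60)  # should land near 0.0167
```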
A second, less well-known paradox was also discussed in Simpson's 1951 paper. It can occur when the "sensible interpretation" is not necessarily found in the separated data, as in the kidney stone example, but can instead reside in the combined data. Whether the partitioned or combined form of the data should be used hinges on the process giving rise to the data, meaning the correct interpretation of the data cannot always be determined by simply observing the tables.[26]
Judea Pearl has shown that, in order for the partitioned data to represent the correct causal relationships between any two variables, $X$ and $Y$, the partitioning variables must satisfy a graphical condition called the "back-door criterion":[27][28]
This criterion provides an algorithmic solution to Simpson's second paradox, and explains why the correct interpretation cannot be determined by data alone; two different graphs, both compatible with the data, may dictate two different back-door criteria.
When the back-door criterion is satisfied by a set Z of covariates, the adjustment formula (see Confounding) gives the correct causal effect of X on Y. If no such set exists, Pearl's do-calculus can be invoked to discover other ways of estimating the causal effect.[4][29] The completeness of do-calculus[30][29] can be viewed as offering a complete resolution of the Simpson's paradox.
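A sketch of the adjustment formula, P(Y | do(X)) = Σz P(Y | X, z) P(z), applied to the kidney-stone counts quoted earlier, under the assumption that stone size Z satisfies the back-door criterion for treatment X:

```python
# Adjustment formula: P(Y=1 | do(X=x)) = sum_z P(Y=1 | X=x, Z=z) * P(Z=z),
# assuming stone size Z satisfies the back-door criterion for treatment X.
counts = {  # (successes, total) per (treatment, stone size), figures as quoted
    ("A", "small"): (81, 87), ("A", "large"): (192, 263),
    ("B", "small"): (234, 270), ("B", "large"): (55, 80),
}

n_total = sum(n for _, n in counts.values())
p_z = {z: sum(n for (_, zz), (_, n) in counts.items() if zz == z) / n_total
       for z in ("small", "large")}

for x in ("A", "B"):
    effect = sum(counts[(x, z)][0] / counts[(x, z)][1] * p_z[z]
                 for z in ("small", "large"))
    print(f"P(success | do(treatment={x})) = {effect:.3f}")
# Adjusting for Z restores A's advantage (~0.833 vs ~0.779),
# reversing the misleading pooled comparison.
```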
One criticism is that the paradox is not really a paradox at all, but rather a failure to properly account for confounding variables or to consider causal relationships between variables.[31]Focus on the paradox may distract from these more important statistical issues.[32]
Another criticism of the apparent Simpson's paradox is that it may be a result of the specific way that data are stratified or grouped. The phenomenon may disappear or even reverse if the data are stratified differently or if different confounding variables are considered. Simpson's example actually highlighted a phenomenon called noncollapsibility,[33] which occurs when the rates in subgroups do not combine into a simple average when the subgroups are pooled. This suggests that the paradox may not be a universal phenomenon, but rather a specific instance of a more general statistical issue.
Despite these criticisms, the apparent Simpson's paradox remains a popular and intriguing topic in statistics and data analysis. It continues to be studied and debated by researchers and practitioners in a wide range of fields, and it serves as a valuable reminder of the importance of careful statistical analysis and the potential pitfalls of simplistic interpretations of data. | https://en.wikipedia.org/wiki/Simpson%27s_paradox |
Intuitive statistics, or folk statistics, is the cognitive phenomenon where organisms use data to make generalizations and predictions about the world. This can be a small amount of sample data or training instances, which in turn contribute to inductive inferences about either population-level properties, future data, or both. Inferences can involve revising hypotheses, or beliefs, in light of probabilistic data that inform and motivate future predictions. The informal tendency for cognitive animals to intuitively generate statistical inferences, when formalized with certain axioms of probability theory, constitutes statistics as an academic discipline.

Because this capacity can accommodate a broad range of informational domains, the subject matter is similarly broad and overlaps substantially with other cognitive phenomena. Indeed, some have argued that "cognition as an intuitive statistician" is an apt companion metaphor to the computer metaphor of cognition.[1] Others appeal to a variety of statistical and probabilistic mechanisms behind theory construction[2][3] and category structuring.[4][5] Research in this domain commonly focuses on generalizations relating to number, relative frequency, risk, and any systematic signatures in inferential capacity that an organism (e.g., humans, or non-human primates) might have.[1][6]

Intuitive inferences can involve generating hypotheses from incoming sense data, such as categorization and concept structuring. Data are typically probabilistic and uncertainty is the rule, rather than the exception, in learning, perception, language, and thought.[7][8] Recently, researchers have drawn from ideas in probability theory, philosophy of mind, computer science, and psychology to model cognition as a predictive and generative system of probabilistic representations, allowing information structures to support multiple inferences in a variety of contexts and combinations.[8] This approach has been called a probabilistic language of thought because it constructs representations probabilistically, from pre-existing concepts, to predict a possible and likely state of the world.[5]

Statisticians and probability theorists have long debated the use of various tools, assumptions, and problems relating to inductive inference in particular.[1] David Hume famously considered the problem of induction, questioning the logical foundations of how and why people can arrive at conclusions that extend beyond past experiences - both spatiotemporally and epistemologically.[9] More recently, theorists have considered the problem by emphasizing techniques for arriving from data to hypothesis using formal content-independent procedures, or in contrast, by considering informal, content-dependent tools for inductive inference.[10][11] Searches for formal procedures have led to different developments in statistical inference and probability theory with different assumptions, including Fisherian frequentist statistics, Bayesian inference, and Neyman-Pearson statistics.[1]
Gerd Gigerenzer and David Murray argue that twentieth-century psychology as a discipline adopted probabilistic inference as a unified set of ideas and ignored the controversies among probability theorists. They claim that a normative but incorrect view of how humans "ought to think rationally" follows from this acceptance. They also maintain, however, that the intuitive statistician metaphor of cognition is promising, and should consider different formal tools or heuristics as specialized for different problem domains, rather than a content- or context-free toolkit. Signal detection theorists and object detection models, for example, often use a Neyman-Pearson approach, whereas Fisherian frequentist statistics might aid cause-effect inferences.[1]

Frequentist inference focuses on the relative proportions or frequencies of occurrences to draw probabilistic conclusions. It is defined by its closely related concept, frequentist probability. This entails a view that "probability" is nonsensical in the absence of pre-existing data, because it is understood as a relative frequency that long-run samples would approach given large amounts of data.[12] Leda Cosmides and John Tooby have argued that it is not possible to derive a probability without reference to some frequency of previous outcomes, and this likely has evolutionary origins: single-event probabilities, they claim, are not observable because organisms evolved to intuitively understand and make statistical inferences from frequencies of prior events, rather than to "see" probability as an intrinsic property of an event.[13]

Bayesian inference generally emphasizes the subjective probability of a hypothesis, which is computed as a posterior probability using Bayes' theorem. It requires a "starting point" called a prior probability, which has been contentious for some frequentists, who claim that frequency data are required to develop a prior probability, in contrast to taking a probability as an a priori assumption.[1][12]

Bayesian models have been quite popular among psychologists, particularly learning theorists, because they appear to emulate the iterative, predictive process by which people learn and develop expectations from new observations, while giving appropriate weight to previous observations.[14] Andy Clark, a cognitive scientist and philosopher, recently wrote a detailed argument in support of understanding the brain as a constructive Bayesian engine that is fundamentally action-oriented and predictive, rather than passive or reactive.[15] More classic lines of evidence cited among supporters of Bayesian inference include conservatism, or the phenomenon where people modify previous beliefs toward, but not all the way to, a conclusion implied by previous observations.[6] This pattern of behavior is similar to the pattern of posterior probability distributions when a Bayesian model is conditioned on data, though critics argued that this evidence had been overstated and lacked mathematical rigor.[16]
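A minimal sketch of this kind of iterative updating, using a Beta-Binomial model with invented observations:

```python
from fractions import Fraction

# Beta-Binomial updating with invented data: a Beta(a, b) prior over an
# unknown success proportion is conditioned on one observation at a time.
a, b = 1, 1                      # uniform prior
observations = [1, 1, 0, 1, 1]   # 1 = success, 0 = failure

for obs in observations:
    a, b = a + obs, b + (1 - obs)
    print(f"after observing {obs}: posterior mean = {Fraction(a, a + b)}")

# Final posterior mean is 5/7 ~ 0.71, between the prior mean (1/2) and the
# sample rate (4/5). "Conservatism" is the finding that human estimates move
# from the prior toward such posterior values but typically fall short.
```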
Alison Gopnik more recently tackled the problem by advocating the use of Bayesian networks, or directed graph representations of conditional dependencies. In a Bayesian network, edge weights are conditional dependency strengths that are updated in light of new data, and nodes are observed variables. The graphical representation itself constitutes a model, or hypothesis, about the world and is subject to change, given new data.[2]
Error management theory (EMT) is an application of Neyman-Pearson statistics to cognitive and evolutionary psychology. It maintains that the possible fitness costs and benefits of type I (false positive) and type II (false negative) errors are relevant to adaptively rational inferences, toward which an organism is expected to be biased due to natural selection. EMT was originally developed by Martie Haselton and David Buss, with initial research focusing on its possible role in sexual overperception bias in men and sexual underperception bias in women.[17]

This is closely related to a concept called the "smoke detector principle" in evolutionary theory. It is defined by the tendency for immune, affective, and behavioral defenses to be hypersensitive and overreactive, rather than insensitive or weakly expressed. Randolph Nesse maintains that this is a consequence of a typical payoff structure in signal detection: in a system that is invariantly structured with a relatively low cost of false positives and high cost of false negatives, naturally selected defenses are expected to err on the side of hyperactivity in response to potential threat cues.[18] This general idea has been applied to hypotheses about the apparent tendency for humans to apply agency to non-agents based on uncertain or agent-like cues.[19] In particular, some claim that it is adaptive for potential prey to assume agency by default if it is even slightly suspected, because potential predator threats typically involve cheap false positives and lethal false negatives.[20]
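The payoff asymmetry can be made explicit with an expected-cost calculation (all costs and probabilities below are invented for illustration):

```python
# Hypothetical payoff structure behind the smoke detector principle:
# a false negative (missed predator) is vastly costlier than a false
# positive (needless flight), so the optimal threshold is hair-trigger.
COST_FALSE_POSITIVE = 1.0    # invented cost of fleeing from nothing
COST_FALSE_NEGATIVE = 500.0  # invented cost of ignoring a real predator
p_threat = 0.01              # prior probability the cue is a real threat

expected_cost_flee = (1 - p_threat) * COST_FALSE_POSITIVE   # 0.99
expected_cost_ignore = p_threat * COST_FALSE_NEGATIVE       # 5.00

print(f"flee: {expected_cost_flee:.2f}, ignore: {expected_cost_ignore:.2f}")
# Fleeing minimises expected cost even though the cue is 99% likely
# to be a false alarm.
```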
Heuristics are efficient rules, or computational shortcuts, for producing a judgment or decision. The intuitive statistician metaphor of cognition[1] led to a shift in focus for many psychologists, away from emotional or motivational principles and toward computational or inferential principles.[21] Empirical studies investigating these principles have led some to conclude that human cognition, for example, has built-in and systematic errors in inference, or cognitive biases. As a result, cognitive psychologists have largely adopted the view that intuitive judgments, generalizations, and numerical or probabilistic calculations are systematically biased. The result is commonly an error in judgment, including (but not limited to) recurrent logical fallacies (e.g., the conjunction fallacy), innumeracy, and emotionally motivated shortcuts in reasoning.[22][23][24][25] Social and cognitive psychologists have thus considered it "paradoxical" that humans can outperform powerful computers at complex tasks, yet be deeply flawed and error-prone in simple, everyday judgments.[26]

Much of this research was carried out by Amos Tversky and Daniel Kahneman as an expansion of work by Herbert Simon on bounded rationality and satisficing.[27] Tversky and Kahneman argue that people are regularly biased in their judgments under uncertainty, because in a speed-accuracy tradeoff they often rely on fast and intuitive heuristics with wide margins of error rather than slow calculations from statistical principles.[28] These errors are called "cognitive illusions" because they involve systematic divergences between judgments and accepted, normative rules in statistical prediction.[29]
Gigerenzer has been critical of this view, arguing that it builds from a flawed assumption that a unified "normative theory" of statistical prediction and probability exists. His contention is that cognitive psychologists neglect the diversity of ideas and assumptions in probability theory, and in some cases, their mutual incompatibility.[30][13] Consequently, Gigerenzer argues that many cognitive illusions are not violations of probability theory per se, but involve some kind of experimenter confusion between subjective probabilities with degrees of confidence and long-run outcome frequencies.[21] Cosmides and Tooby similarly claim that different probabilistic assumptions can be more or less normative and rational in different types of situations, and that there is no general-purpose statistical toolkit for making inferences across all informational domains. In a review of several experiments they conclude, in support of Gigerenzer,[21] that previous heuristics and biases experiments did not represent problems in an ecologically valid way, and that re-representing problems in terms of frequencies rather than single-event probabilities can make cognitive illusions largely vanish.[13]

Tversky and Kahneman refuted this claim, arguing that making illusions disappear by manipulating them, whether they are cognitive or visual, does not undermine the initially discovered illusion. They also note that Gigerenzer ignores cognitive illusions resulting from frequency data, e.g., illusory correlations such as the hot hand in basketball.[25] This, they note, is an example of an illusory positive autocorrelation that cannot be corrected by converting the data to natural frequencies.[31]
For adaptationists, EMT can be applied to inference under any informational domain, where risk or uncertainty are present, such as predator avoidance, agency detection, or foraging. Researchers advocating this adaptive rationality view argue that evolutionary theory casts heuristics and biases in a new light, namely, as computationally efficient and ecologically rational shortcuts, or instances of adaptive error management.[32]

People often neglect base rates, or true actuarial facts about the probability or rate of a phenomenon, and instead give inappropriate amounts of weight to specific observations.[33][34] In a Bayesian model of inference, this would amount to an underweighting of the prior probability,[6] which has been cited as evidence against the appropriateness of a normative Bayesian framework for modeling cognition.[1][21] Frequency representations can resolve base rate neglect, and some consider the phenomenon to be an experimental artifact, i.e., a result of probabilities or rates being represented as mathematical abstractions, which are difficult to intuitively think about.[13] Gigerenzer speculates an ecological reason for this, noting that individuals learn frequencies through successive trials in nature.[35] Tversky and Kahneman refute Gigerenzer's claim, pointing to experiments where subjects predicted a disease based on the presence vs. absence of pre-specified symptoms across 250 trials, with feedback after each trial.[36] They note that base rate neglect was still found, despite the frequency formulation of subject trials in the experiment.[31]
Another popular example of a supposed cognitive illusion is the conjunction fallacy, described in an experiment by Tversky and Kahneman known as the "Linda problem." In this experiment, participants are presented with a short description of a person called Linda, who is 31 years old, single, intelligent, outspoken, majored in philosophy at university, was concerned about discrimination and social justice, and participated in anti-nuclear protests. When participants were asked whether it is more probable that Linda is (1) a bank teller, or (2) a bank teller and a feminist, 85% responded with option 2, even though option 1 cannot be less probable than option 2. They concluded that this was a product of a representativeness heuristic, or a tendency to draw probabilistic inferences based on property similarities between instances of a concept, rather than a statistically structured inference.[24]
Gigerenzer argued that the conjunction fallacy is based on a single-event probability, and would dissolve under a frequentist approach. He and other researchers demonstrate that conclusions from the conjunction fallacy result from ambiguous language, rather than robust statistical errors or cognitive illusions.[37]In an alternative version of the Linda problem, participants are told that 100 people fit Linda's description and are asked how many are (1) bank tellers and (2) bank tellers and feminists. Experimentally, this version of the task appears to eliminate or mitigate the conjunction fallacy.[21][37]
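The frequency reformulation makes the constraint transparent: in any sample, the count for a conjunction can never exceed the count for either conjunct. A toy simulation (with invented trait probabilities):

```python
import random

# Frequency framing of the Linda problem: in any sample whatsoever, the
# conjunction count can never exceed either conjunct's count. Trait
# probabilities below are invented.
rng = random.Random(1)
people = [{"teller": rng.random() < 0.05, "feminist": rng.random() < 0.60}
          for _ in range(100)]

tellers = sum(p["teller"] for p in people)
teller_and_feminist = sum(p["teller"] and p["feminist"] for p in people)

assert teller_and_feminist <= tellers  # holds by construction, every time
print(tellers, teller_and_feminist)
```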
There has been some question about how concept structuring and generalization can be understood in terms of brain architecture and processes. This question is impacted by a neighboring debate among theorists about the nature of thought, specifically between connectionist and language-of-thought models. Concept generalization and classification have been modeled in a variety of connectionist models, or neural networks, specifically in domains like language learning and categorization.[38][39] Some emphasize the limitations of pure connectionist models when they are expected to generalize future instances after training on previous instances. Gary Marcus, for example, asserts that training data would have to be completely exhaustive for generalizations to occur in existing connectionist models, and that as a result, they do not handle novel observations well. He further advocates an integrationist perspective between a language of thought, consisting of symbol representations and operations, and connectionist models that retain the distributed processing that is likely used by neural networks in the brain.[40]
In practice, humans routinely make conceptual, linguistic, and probabilistic generalizations from small amounts of data.[4][41][8][42] There is some debate about the utility of various tools of statistical inference in understanding the mind, but it is commonly accepted that the human mind is somehow an exceptionally apt prediction machine, and that action-oriented processes underlying this phenomenon, whatever they might entail, are at the core of cognition.[15][43] Probabilistic inferences and generalization play central roles in concepts and categories and language learning,[44] and infant studies are commonly used to understand the developmental trajectory of humans' intuitive statistical toolkit(s).

Developmental psychologists such as Jean Piaget have traditionally argued that children do not develop the general cognitive capacities for probabilistic inference and hypothesis testing until the concrete operational (age 7–11 years) and formal operational (age 12 years–adulthood) stages of development, respectively.[45][46]
This is sometimes contrasted to a growing preponderance of empirical evidence suggesting that humans are capable generalizers in infancy. For example, looking-time experiments using expected outcomes of red and white ping pong ball proportions found that 8-month-old infants appear to make inferences about population characteristics from which the sample came, and vice versa when given population-level data.[47]Other experiments have similarly supported a capacity for probabilistic inference with 6- and 11-month-old infants,[48]but not in 4.5-month-olds.[49]
The colored-ball paradigm in these experiments did not distinguish between infants' inferences based on quantity vs. proportion, which was addressed in follow-up research in which 12-month-old infants seemed to understand proportions, basing probabilistic judgments - motivated by preferences for the more probable outcomes - on initial evidence of the proportions in their available options.[50] Critics of the effectiveness of looking-time tasks ran studies in which infants searched for preferred objects in single-sample probability tasks, supporting the notion that infants can infer probabilities of single events when given a small or large initial sample size.[51] The researchers involved in these findings have argued that humans possess some statistically structured, inferential system during preverbal stages of development and prior to formal education.[47][50]
It is less clear, however, how and why generalization is observed in infants: It might extend directly from detection and storage of similarities and differences in incoming data, or frequency representations. Conversely, it might be produced by something like general-purpose Bayesian inference, starting with a knowledge base that is iteratively conditioned on data to update subjective probabilities, or beliefs.[52][53]This ties together questions about the statistical toolkit(s) that might be involved in learning, and how they apply to infant and childhood learning specifically.
Gopnik advocates the hypothesis that infant and childhood learning are examples of inductive inference, a general-purpose mechanism for generalization, acting upon specialized information structures ("theories") in the brain.[54] On this view, infants and children are essentially proto-scientists because they regularly use a kind of scientific method, developing hypotheses, performing experiments via play, and updating models about the world based on their results.[55] For Gopnik, this use of scientific thinking and categorization in development and everyday life can be formalized as models of Bayesian inference.[56] An application of this view is the "sampling hypothesis," or the view that individual variation in children's causal and probabilistic inferences is an artifact of random sampling from a diverse set of hypotheses, and flexible generalizations based on sampling behavior and context.[57][58] These views, particularly those advocating general Bayesian updating from specialized theories, are considered successors to Piaget's theory rather than wholesale refutations because they maintain its domain-generality, viewing children as randomly and unsystematically considering a range of models before selecting a probable conclusion.[59]
In contrast to the general-purpose mechanistic view, some researchers advocate both domain-specific information structures and similarly specialized inferential mechanisms.[60][61] For example, while humans do not usually excel at conditional probability calculations, such calculations are central to parsing speech sounds into comprehensible syllables, a relatively straightforward and intuitive skill emerging as early as 8 months.[62] Infants also appear to be good at tracking not only the spatiotemporal states of objects, but at tracking the properties of objects, and these cognitive systems appear to be developmentally distinct. This has been interpreted as evidence for domain-specific toolkits of inference, each of which corresponds to separate types of information and has applications to concept learning.[60][63]
Infants use form similarities and differences to develop concepts relating to objects, and this relies on multiple trials with multiple patterns, exhibiting some kind of common property between trials.[64] Infants appear to become proficient at this ability in particular by 12 months,[65] but different concepts and properties employ different relevant principles of Gestalt psychology, many of which might emerge at different stages of development.[66] Specifically, infant categorization at as early as 4.5 months involves iterative and interdependent processes by which exemplars (data) and their similarities and differences are crucial for drawing boundaries around categories.[67] These abstract rules are statistical by nature, because they can entail common co-occurrences of certain perceived properties in past instances and facilitate inferences about their structure in future instances.[68][69] This idea has been extrapolated by Douglas Hofstadter and Emmanuel Sander, who argue that because analogy is a process of inference relying on similarities and differences between concept properties, analogy and categorization are fundamentally the same process used for organizing concepts from incoming data.[4]

Infants and small children are not only capable generalizers of trait quantity and proportion, but of abstract rule-based systems such as language and music.[70][71] These rules can be referred to as "algebraic rules" of abstract informational structure, and are representations of rule systems, or grammars.[72] For language, creating generalizations with Bayesian inference and similarity detection has been advocated by researchers as a special case of concept formation.[73][74] Infants appear to be proficient in inferring abstract and structural rules from streams of linguistic sounds produced in their developmental environments,[75] and to generate wider predictions based on those rules.[76]

For example, 9-month-old infants are capable of more quickly and dramatically updating their expectations when repeated syllable strings contain surprising features, such as rare phonemes.[77] In general, preverbal infants appear to be capable of discriminating between grammars with which they have been trained with experience, and novel grammars.[78][79] In 7-month-old infant looking-time tasks, infants seemed to pay more attention to unfamiliar grammatical structures than to familiar ones,[72] and in a separate study using 3-syllable strings, infants appeared to similarly have generalized expectations based on abstract syllabic structure previously presented, suggesting that they used surface occurrences, or data, in order to infer deeper abstract structure. This was taken to support the "multiple hypotheses [or models]" view by the researchers involved.[80][81]
Multiple studies by Irene Pepperberg and her colleagues suggested that Grey parrots (Psittacus erithacus) have some capacity for recognizing numbers or number-like concepts, appearing to understand the ordinality and cardinality of numerals.[82][83][84] Recent experiments also indicated that, given some language training and capacity for referencing recognized objects, they also have some ability to make inferences about probabilities and hidden object type ratios.[85]

Experiments found that when reasoning about preferred vs. non-preferred food proportions, capuchin monkeys were able to make inferences about proportions from sequentially sampled data.[86] Rhesus monkeys were similarly capable of using probabilistic and sequentially sampled data to make inferences about rewarding outcomes, and neural activity in the parietal cortex appeared to be involved in the decision-making process when they made inferences.[87] In a series of 7 experiments using a variety of relative frequency differences between banana pellets and carrots, orangutans, chimpanzees and gorillas also appeared to guide their decisions based on the ratios favoring the banana pellets after this was established as their preferred food item.[88]
Research on reasoning in medicine, or clinical reasoning, usually focuses on cognitive processes and/or decision-making outcomes among physicians and patients. Considerations include assessments of risk, patient preferences, and evidence-based medical knowledge.[89] On a cognitive level, clinical inference relies heavily on interplay between abstraction, abduction, deduction, and induction.[90] Intuitive "theories," or knowledge in medicine, can be understood as prototypes in concept spaces, or alternatively, as semantic networks.[91][92] Such models serve as a starting point for intuitive generalizations to be made from a small number of cues, resulting in the physician's tradeoff between the "art and science" of medical judgement.[93] This tradeoff was captured in an artificially intelligent (AI) program called MYCIN, which outperformed medical students, but not experienced physicians with extensive practice in symptom recognition.[93][94][95][89] Some researchers argue that despite this, physicians are prone to systematic biases, or cognitive illusions, in their judgment (e.g., satisficing to make premature diagnoses, confirmation bias when diagnoses are suspected a priori).[89]

Statistical literacy and risk judgments have been described as problematic for physician-patient communication.[96] For example, physicians frequently inflate the perceived risk of non-treatment,[97] alter patients' risk perceptions by positively or negatively framing single statistics (e.g., 97% survival rate vs. 3% death rate), and/or fail to sufficiently communicate "reference classes" of probability statements to patients.[98] The reference class is the object of a probability statement: if a psychiatrist says, for example, "this medication can lead to a 30-50% chance of a sexual problem," it is ambiguous whether this means that 30-50% of patients will develop a sexual problem at some point, or if all patients will have problems in 30-50% of their sexual encounters.[99]

In studies of base rate neglect, the problems given to participants often use base rates of disease prevalence. In these experiments, physicians and non-physicians are similarly susceptible to base rate neglect, or errors in calculating conditional probability. Here is an example from an empirical survey problem given to experienced physicians: suppose that a hypothetical cancer had a prevalence of 0.3% in the population, and the true positive rate of a screening test was 50% with a false positive rate of 3%. Given a patient with a positive test result, what is the probability that the patient has cancer? When asked this question, physicians with an average of 14 years of experience in medical practice ranged in their answers from 1-99%, with most answers being 47% or 50%. (The correct answer is 5%.)[98] This observation of clinical base rate neglect and conditional probability error has been replicated in multiple empirical studies.[96][100] Physicians' judgments in similar problems, however, improved substantially when the rates were re-formulated as natural frequencies.[101] | https://en.wikipedia.org/wiki/Intuitive_statistics
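The correct answer to the survey problem above follows directly from Bayes' theorem; a worked version in both probability and natural-frequency form:

```python
# Positive predictive value for the survey's numbers.
prevalence, sensitivity, false_positive_rate = 0.003, 0.5, 0.03

p_positive = (prevalence * sensitivity
              + (1 - prevalence) * false_positive_rate)
ppv = prevalence * sensitivity / p_positive
print(f"P(cancer | positive test) = {ppv:.3f}")   # ~0.048, i.e. about 5%

# Natural-frequency restatement per 10,000 people: 30 have cancer, of whom
# 15 test positive; ~299 of the 9,970 without cancer also test positive,
# so only 15 of ~314 positives actually have the cancer.
print(15 / (15 + 0.03 * 9970))                    # same ~0.048
```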
R v Adams [1996] EWCA Crim 10 and 222 are rulings in the United Kingdom that banned the presentation to a jury of headline (soundbite), standalone Bayesian statistics in DNA evidence cases, in favour of stating the calculated average (and maximal) number of matching incidences among the nation's population. The facts involved strong but inconclusive evidence conflicting with the DNA evidence, leading to a retrial.
A rape victim described her attacker as in his twenties. A suspect, Denis Adams, was arrested and an identity parade was arranged. The woman failed to pick him out, and on being asked if he fitted her description replied in the negative. She had described a man in his twenties; when asked how old Adams looked, she replied about forty. Adams was 37; he had an alibi for the night in question, his girlfriend saying he had spent the night with her. The DNA was the only incriminating evidence heard by the jury, as all the other evidence pointed towards innocence.

The DNA profile of the suspect fitted that of evidence left at the scene. The defence argued that the match probability figure put forward by the prosecution (1 in 200 million) was incorrect, and that a figure of 1 in 20 million, or perhaps even 1 in 2 million, was more appropriate. The issue of how the jury should resolve the conflicting evidence was addressed by the defence by a formal statistical method. The jury was instructed in the use of Bayes's theorem by Professor Peter Donnelly of Oxford University. The judge told the jury they could use Bayes's theorem if they wished. Adams was convicted and the case went to appeal. The Appeal Court judges noted that the original trial judge did not direct the jury as to what to do if they did not wish to use Bayes's theorem and ordered a retrial.

At the retrial the defence team again wanted to instruct the new jury in the use of Bayes's theorem (though Prof. Donnelly had doubts about the practicality of the approach).[1] The judge asked that the statistical experts from both sides work together to produce a workable method of implementing Bayes's theorem for use in a courtroom, should the jury wish to use it. A questionnaire was produced which asked a series of questions such as:

These questions were intended to allow the Bayes factors of the various pieces of evidence to be assessed. The questionnaires had boxes where jurors could put their assessments and a formula to enable them to produce the overall odds of guilt or innocence. Adams was convicted once again and again an appeal was made to the Court of Appeal. The appeal was unsuccessful but the Appeal Court ruling was highly critical of the appropriateness of Bayes's theorem in the courtroom.
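A sketch of how such a questionnaire's answers were meant to combine in odds form; every number below is a hypothetical stand-in for assessments the jurors were to supply themselves, not a figure from the case record:

```python
# Odds-form Bayes: posterior odds = prior odds x product of likelihood
# ratios. Every figure below is a hypothetical stand-in for values the
# jurors were meant to supply themselves; none comes from the case record.
prior_odds = 1 / 200_000  # e.g. if ~200,000 local men could be the attacker

likelihood_ratios = {
    "DNA match": 2_000_000,        # P(evidence | guilty) / P(evidence | innocent)
    "failed identification": 0.1,  # LR < 1: evidence favouring innocence
    "age discrepancy": 0.2,
    "uncontradicted alibi": 0.25,
}

posterior_odds = prior_odds
for lr in likelihood_ratios.values():
    posterior_odds *= lr

print(f"posterior odds of guilt: {posterior_odds:.2f}")                       # 0.05
print(f"posterior probability: {posterior_odds / (1 + posterior_odds):.3f}")  # ~0.048
```

With these invented values, the strong DNA likelihood ratio is substantially eroded by the several pieces of evidence favouring innocence, which is exactly the kind of combination the defence wanted the jury to perform explicitly.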
The only evidence against Adams was the DNA evidence. His age was substantially different from that reported by the victim, the victim did not identify him and he had an alibi which was never disproved. The 1 in 200 million match probability calculation did not allow for the fact that the perpetrator might be a close relative of the defendant – an important point, since the defendant had a half-brother in his 20s whose DNA was never tested.
After the appeal, the Court of Appeal wrote guidelines for the way that match probabilities should be explained to jurors. Judges should say something along the lines of the following. | https://en.wikipedia.org/wiki/R_v_Adams
In statistical analysis of binary classification and information retrieval systems, the F-score or F-measure is a measure of predictive performance. It is calculated from the precision and recall of the test, where the precision is the number of true positive results divided by the number of all samples predicted to be positive, including those not identified correctly, and the recall is the number of true positive results divided by the number of all samples that should have been identified as positive. Precision is also known as positive predictive value, and recall is also known as sensitivity in diagnostic binary classification.
TheF1score is theharmonic meanof the precision and recall. It thus symmetrically represents both precision and recall in one metric. The more genericFβ{\displaystyle F_{\beta }}score applies additional weights, valuing one of precision or recall more than the other.
The highest possible value of an F-score is 1.0, indicating perfect precision and recall, and the lowest possible value is 0, if the precision or the recall is zero.
The F-measure is believed to be named after a different F function in Van Rijsbergen's book, when it was introduced at the Fourth Message Understanding Conference (MUC-4, 1992).[1]
The traditional F-measure or balanced F-score (F1 score) is the harmonic mean of precision and recall:[2]

$$F_1 = \frac{2}{\mathrm{recall}^{-1} + \mathrm{precision}^{-1}} = 2\,\frac{\mathrm{precision}\cdot\mathrm{recall}}{\mathrm{precision}+\mathrm{recall}} = \frac{2\,\mathrm{TP}}{2\,\mathrm{TP} + \mathrm{FP} + \mathrm{FN}}$$

With precision = TP / (TP + FP) and recall = TP / (TP + FN), it follows that the numerator of F1 is the sum of their numerators and the denominator of F1 is the sum of their denominators.
A more general F score, $F_\beta$, uses a positive real factor $\beta$, where $\beta$ is chosen such that recall is considered $\beta$ times as important as precision:

$$F_\beta = (1 + \beta^2)\,\frac{\mathrm{precision}\cdot\mathrm{recall}}{\beta^2\,\mathrm{precision} + \mathrm{recall}}$$

In terms of Type I and type II errors this becomes:

$$F_\beta = \frac{(1 + \beta^2)\,\mathrm{TP}}{(1 + \beta^2)\,\mathrm{TP} + \beta^2\,\mathrm{FN} + \mathrm{FP}}$$

Two commonly used values for $\beta$ are 2, which weighs recall higher than precision, and 0.5, which weighs recall lower than precision.
The F-measure was derived so that $F_\beta$ "measures the effectiveness of retrieval with respect to a user who attaches $\beta$ times as much importance to recall as precision".[3] It is based on Van Rijsbergen's effectiveness measure

$$E = 1 - \left(\frac{\alpha}{P} + \frac{1 - \alpha}{R}\right)^{-1}$$

Their relationship is $F_\beta = 1 - E$, where $\alpha = \frac{1}{1 + \beta^2}$.
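As an illustration, the F1 and $F_\beta$ scores can be computed directly from confusion-matrix counts. The following is a minimal Python sketch; the counts are made up for illustration:

```python
def f_beta(tp: int, fp: int, fn: int, beta: float = 1.0) -> float:
    """F-beta from counts: (1 + b^2) TP / ((1 + b^2) TP + b^2 FN + FP)."""
    denom = (1 + beta ** 2) * tp + beta ** 2 * fn + fp
    return (1 + beta ** 2) * tp / denom if denom else 0.0

# Illustrative counts: 8 true positives, 2 false positives, 4 false negatives.
print(f_beta(8, 2, 4))        # F1   = 16/22 ≈ 0.727 (harmonic mean of P = 0.8, R = 2/3)
print(f_beta(8, 2, 4, 2.0))   # F2   weighs recall higher than precision
print(f_beta(8, 2, 4, 0.5))   # F0.5 weighs recall lower than precision
```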
This is related to the field of binary classification, where recall is often termed "sensitivity".
The precision-recall curve, and thus the $F_\beta$ score, explicitly depends on the ratio $r$ of positive to negative test cases.[12] This means that comparison of the F-score across different problems with differing class ratios is problematic. One way to address this issue (see e.g., Siblini et al., 2020[13]) is to use a standard class ratio $r_0$ when making such comparisons.
The F-score is often used in the field of information retrieval for measuring search, document classification, and query classification performance.[14] It is particularly relevant in applications which are primarily concerned with the positive class and where the positive class is rare relative to the negative class.
Earlier works focused primarily on the F1 score, but with the proliferation of large-scale search engines, performance goals changed to place more emphasis on either precision or recall,[15] and so $F_\beta$ is seen in wide application.
The F-score is also used in machine learning.[16] However, the F-measures do not take true negatives into account, hence measures such as the Matthews correlation coefficient, Informedness or Cohen's kappa may be preferred to assess the performance of a binary classifier.[17]
The F-score has been widely used in the natural language processing literature,[18] such as in the evaluation of named entity recognition and word segmentation.
The F1 score is the Dice coefficient of the set of retrieved items and the set of relevant items.[19]
David Hand and others criticize the widespread use of the F1 score since it gives equal importance to precision and recall. In practice, different types of misclassification incur different costs; in other words, the relative importance of precision and recall is an aspect of the problem.[22]
According to Davide Chicco and Giuseppe Jurman, the F1 score is less truthful and informative than the Matthews correlation coefficient (MCC) in binary classification evaluation.[23]
David M. W. Powers has pointed out that F1 ignores the true negatives and thus is misleading for unbalanced classes, while kappa and correlation measures are symmetric and assess both directions of predictability: the classifier predicting the true class, and the true class predicting the classifier prediction. He proposes separate multiclass measures, Informedness and Markedness, for the two directions, noting that their geometric mean is correlation.[24]
Another source of critique of F1 is its lack of symmetry: its value may change when the dataset labelling is changed, i.e. when the "positive" samples are renamed "negative" and vice versa.
This criticism is addressed by the P4 metric, which is sometimes described as a symmetrical extension of F1.[25]
Finally, Ferrer[26] and Dyrland et al.[27] argue that the expected cost (or its counterpart, the expected utility) is the only principled metric for evaluation of classification decisions, having various advantages over the F-score and the MCC. Both works show that the F-score can result in wrong conclusions about the absolute and relative quality of systems.
While the F-measure is the harmonic mean of recall and precision, the Fowlkes–Mallows index is their geometric mean.[28]
The F-score is also used for evaluating classification problems with more than two classes (multiclass classification). A common method is to average the F-score over the classes, aiming at a balanced measurement of performance.[29]
Macro F1 is a macro-averaged F1 score aiming at a balanced performance measurement. To calculate macro F1, two different averaging formulas have been used: the F1 score of the (arithmetic) class-wise precision and recall means, or the arithmetic mean of class-wise F1 scores, where the latter exhibits more desirable properties.[30] Both variants, together with micro F1, are illustrated in the sketch below.
Micro F1 is the harmonic mean of micro precision and micro recall. In single-label multi-class classification, micro precision equals micro recall, thus micro F1 is equal to both. However, contrary to a common misconception, micro F1 does not generally equal accuracy, because accuracy takes true negatives into account while micro F1 does not.[31] | https://en.wikipedia.org/wiki/F1_Score
Population impact measures (PIMs) are biostatistical measures of risk and benefit used in epidemiological and public health research. They are used to describe the impact of health risks and benefits in a population, to inform health policy.[1][2][3]
Frequently used measures of risk and benefit identified by Jekel, Katz and Elmore[4] include the risk difference (attributable risk), the rate difference (often expressed as the odds ratio or relative risk), the population attributable risk (PAR), and the relative risk reduction, which can be recalculated into a measure of absolute benefit called the number needed to treat. Population impact measures are an extension of these statistics, as they are measures of absolute risk at the population level: calculations of the number of people in the population who are at risk of harm, or who will benefit from public health interventions.
They are measures of absolute risk and benefit, producing numbers of people who will benefit from an intervention or be at risk from a risk factor within a particular local or national population.[5][6][7][8][9][10] They provide local context to previous measures, allowing policy-makers to identify and prioritise the potential benefits of interventions on their own population.[11][12] They are simple to compute and contain the elements to which policy-makers must pay attention when commissioning or improving services. They may have special relevance for local policy-making. They depend on the ability to obtain and use local data, and by being explicit about the data required they may have the added benefit of encouraging the collection of such data.
To describe the impact of preventive and treatment interventions, the number of events prevented in a population (NEPP) is defined as "the number of events prevented by the intervention in a population over a defined time period". NEPP extends the well-known measure number needed to treat (NNT) beyond the individual patient to the population. To describe the impact of a risk factor on causing ill health and disease, the population impact number of eliminating a risk factor (PIN-ER-t) is defined as "the potential number of disease events prevented in a population over the next t years by eliminating a risk factor". The PIN-ER-t extends the well-known population attributable risk (PAR) to a particular population and relates it to disease incidence, converting the PAR from a measure of relative to absolute risk.[citation needed]
The components of the calculations are as follows: the population denominator (size of the population); the proportion of the population with the disease; the proportion of the population exposed to the risk factor, or the incremental proportion of the diseased population eligible for the proposed intervention (the latter requires the actual or estimated proportion currently receiving the intervention subtracted from the best-practice goal from guidelines or targets, adjusted for likely compliance with the intervention); the baseline risk, i.e. the probability of the outcome of interest in this or similar populations; and the relative risk of the outcome given exposure to the risk factor, or the relative risk reduction associated with the intervention.[citation needed]
The formula for calculating the NEPP is

$$\mathrm{NEPP} = n \times P_d \times P_e \times r \times RRR$$

where $n$ is the population size, $P_d$ is the proportion of the population with the disease, $P_e$ is the proportion of the diseased population eligible for the intervention, $r$ is the baseline risk, and $RRR$ is the relative risk reduction associated with the intervention.
In order to reflect the incremental effect of changing from current to 'best' practice, and to adjust for levels of compliance, the proportion eligible for treatment, $P_e$, is $(P_b - P_t)P_c$, where $P_t$ is the proportion currently treated, $P_b$ is the proportion that would be treated if best practice were adopted, and $P_c$ is the proportion of the population who are compliant with the intervention.
[Note: number needed to treat (NNT) = 1 / (baseline risk × relative risk reduction).]
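A minimal Python sketch of the NEPP calculation as reconstructed above; all numbers are illustrative, not from the cited studies:

```python
def nepp(n, p_d, p_t, p_b, p_c, baseline_risk, rrr):
    """Number of events prevented in the population:
    NEPP = n * Pd * Pe * r * RRR, with Pe = (Pb - Pt) * Pc."""
    p_e = (p_b - p_t) * p_c  # incremental eligible proportion, compliance-adjusted
    return n * p_d * p_e * baseline_risk * rrr

# Population of 100,000; 10% have the disease; coverage rises from 40% to 80%
# with 70% compliance; 5% baseline risk; 25% relative risk reduction.
print(nepp(100_000, 0.10, 0.40, 0.80, 0.70, 0.05, 0.25))  # 35.0 events prevented
```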
The formula for calculating the PIN-ER-t is

$$\mathrm{PIN\text{-}ER\text{-}t} = n \times I_p \times \mathrm{PAR}$$

where $n$ is the population size, $I_p$ is the baseline risk of the disease event over the next $t$ years, and $\mathrm{PAR}$ is the population attributable risk.
The PAR or PAF, the population attributable risk (or fraction), is calculated for two or multiple strata. The basic formula to compute the PAR for dichotomous variables is

$$\mathrm{PAR} = \frac{P_e(RR - 1)}{1 + P_e(RR - 1)}$$

where $P_e$ is the proportion of the population exposed to the risk factor and $RR$ is the relative risk associated with the exposure.
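A minimal Python sketch of the PAR computation, treating the dichotomous case as a single stratum; the stratified generalisation of the formula is given below (numbers are illustrative):

```python
def par(proportions, relative_risks):
    """PAR = sum_i p_i (RR_i - 1) / (1 + sum_i p_i (RR_i - 1));
    with a single stratum this reduces to Pe (RR - 1) / (1 + Pe (RR - 1))."""
    s = sum(p * (rr - 1) for p, rr in zip(proportions, relative_risks))
    return s / (1 + s)

print(par([0.20], [3.0]))             # single stratum: 0.4 / 1.4 ≈ 0.286
print(par([0.20, 0.10], [2.0, 4.0]))  # two strata: 0.5 / 1.5 ≈ 0.333
```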
This is modified where there are multiple strata to:

$$\mathrm{PAR} = \frac{\sum_{i} P_i(RR_i - 1)}{1 + \sum_{i} P_i(RR_i - 1)}$$

where $P_i$ is the proportion of the population in exposure stratum $i$ and $RR_i$ is the relative risk in that stratum. | https://en.wikipedia.org/wiki/Population_impact_measures
The risk difference (RD), excess risk, or attributable risk[1] is the difference between the risk of an outcome in the exposed group and the unexposed group. It is computed as $I_e - I_u$, where $I_e$ is the incidence in the exposed group, and $I_u$ is the incidence in the unexposed group. If the risk of an outcome is increased by the exposure, the term absolute risk increase (ARI) is used, and computed as $I_e - I_u$. Equivalently, if the risk of an outcome is decreased by the exposure, the term absolute risk reduction (ARR) is used, and computed as $I_u - I_e$.[2][3]
The inverse of the absolute risk reduction is the number needed to treat, and the inverse of the absolute risk increase is the number needed to harm.[2]
It is recommended to use absolute measurements, such as the risk difference, alongside relative measurements when presenting the results of randomized controlled trials.[4] Their utility can be illustrated by the following example of a hypothetical drug which reduces the risk of colon cancer from 1 case in 5,000 to 1 case in 10,000 over one year. The relative risk reduction is 0.5 (50%), while the absolute risk reduction is 0.0001 (0.01%). The absolute risk reduction reflects the low probability of getting colon cancer in the first place; reporting only the relative risk reduction would risk readers exaggerating the effectiveness of the drug.[5]
Authors such as Ben Goldacre believe that the risk difference is best presented as a natural number: the drug reduces colon cancer from 2 cases to 1 case if you treat 10,000 people. Natural numbers, which are used in the number-needed-to-treat approach, are easily understood by non-experts.[6]
Risk difference can be estimated from a 2×2 contingency table, with the exposed group containing $a$ subjects with the event and $b$ without, and the unexposed group containing $c$ subjects with the event and $d$ without.
The point estimate of the risk difference is

$$\widehat{RD} = \hat{I}_e - \hat{I}_u = \frac{a}{a+b} - \frac{c}{c+d}$$

The sampling distribution of RD is approximately normal, with standard error

$$SE(RD) = \sqrt{\frac{\hat{I}_e(1-\hat{I}_e)}{a+b} + \frac{\hat{I}_u(1-\hat{I}_u)}{c+d}}$$

The $1-\alpha$ confidence interval for the RD is then

$$\widehat{RD} \pm z_{\alpha}\,SE(RD)$$

where $z_{\alpha}$ is the standard score for the chosen level of significance.[3]
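A minimal Python sketch of these estimates from the 2×2 counts (the counts are made up; z = 1.96 gives a 95% interval):

```python
import math

def risk_difference_ci(a, b, c, d, z=1.96):
    """Point estimate and normal-approximation CI for the risk difference.
    a/b: exposed with/without the event; c/d: unexposed with/without."""
    ie, iu = a / (a + b), c / (c + d)
    rd = ie - iu
    se = math.sqrt(ie * (1 - ie) / (a + b) + iu * (1 - iu) / (c + d))
    return rd, (rd - z * se, rd + z * se)

# Illustrative counts: 15/85 events in the exposed group, 5/95 in the unexposed.
print(risk_difference_ci(15, 85, 5, 95))  # RD = 0.10, 95% CI ≈ (0.018, 0.182)
```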
Denoting disease by $D$, no disease by $\neg D$, exposure by $E$, and no exposure by $\neg E$, the risk difference can be written as

$$RD = P(D \mid E) - P(D \mid \neg E)$$ | https://en.wikipedia.org/wiki/Attributable_risk
In epidemiology, the attributable fraction among the exposed (AFe) is the proportion of incidents in the exposed group that are attributable to the risk factor. The term attributable risk percent among the exposed is used if the fraction is expressed as a percentage.[1] It is calculated as $AF_e = (I_e - I_u)/I_e = (RR - 1)/RR$, where $I_e$ is the incidence in the exposed group, $I_u$ is the incidence in the unexposed group, and $RR$ is the relative risk.[2] It is used when an exposure increases the risk, as opposed to reducing it, in which case its symmetrical notion is the preventable fraction among the unexposed.
Multiple synonyms of AFe are in use: attributable fraction,[1][3] relative attributable risk,[1] attributable proportion among the exposed,[1] and attributable risk among the exposed.[4]
Similarly, attributable risk percent (ARP) is used as a synonym for the attributable risk percent among the exposed.[3]
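A minimal sketch of the calculation in Python; the relative risk value is illustrative:

```python
def attributable_fraction_exposed(rr: float) -> float:
    """AF_e = (RR - 1) / RR; meaningful when the exposure increases risk (RR > 1)."""
    return (rr - 1) / rr

print(attributable_fraction_exposed(4.0))  # 0.75: 75% of cases among the exposed
```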
In climatology, the fraction of attributable risk (FAR) denotes the proportion of adverse-event risk attributable to human influence on climate or another forcing factor.[5] | https://en.wikipedia.org/wiki/Attributable_risk_percent
In statistics, pseudo-R-squared values are used when the outcome variable is nominal or ordinal, such that the coefficient of determination $R^2$ cannot be applied as a measure of goodness of fit, and when a likelihood function is used to fit a model.
In linear regression, the squared multiple correlation $R^2$ is used to assess goodness of fit as it represents the proportion of variance in the criterion that is explained by the predictors.[1] In logistic regression analysis, there is no agreed-upon analogous measure, but there are several competing measures, each with limitations.[1][2]
Four of the most commonly used indices and one less commonly used one are examined in this article:
$R^2_L$ is given by Cohen:[1]

$$R^2_L = \frac{D_{\text{null}} - D_{\text{model}}}{D_{\text{null}}}$$

where $D_{\text{null}}$ and $D_{\text{model}}$ are the deviances of the null and fitted models, respectively.
This is the most analogous index to the squared multiple correlation in linear regression.[3] It represents the proportional reduction in the deviance, wherein the deviance is treated as a measure of variation analogous but not identical to the variance in linear regression analysis.[3] One limitation of the likelihood-ratio $R^2$ is that it is not monotonically related to the odds ratio,[1] meaning that it does not necessarily increase as the odds ratio increases and does not necessarily decrease as the odds ratio decreases.
$R^2_{\text{CS}}$ is an alternative index of goodness of fit related to the $R^2$ value from linear regression.[2] It is given by:

$$R^2_{\text{CS}} = 1 - \left(\frac{L_0}{L_M}\right)^{2/n}$$

where $L_M$ and $L_0$ are the likelihoods for the model being fitted and the null model, respectively, and $n$ is the sample size. The Cox and Snell index corresponds to the standard $R^2$ in the case of a linear model with normal error. In certain situations, $R^2_{\text{CS}}$ may be problematic, as its maximum value is $1 - L_0^{2/n}$. For example, for logistic regression the upper bound is $R^2_{\text{CS}} \leq 0.75$ for a symmetric marginal distribution of events, and it decreases further for an asymmetric distribution of events.[2]
$R^2_N$, proposed by Nico Nagelkerke in a highly cited Biometrika paper,[4] provides a correction to the Cox and Snell $R^2$ so that the maximum value is equal to 1. Nevertheless, the Cox and Snell and likelihood-ratio $R^2$s show greater agreement with each other than either does with the Nagelkerke $R^2$.[1] Of course, this might not be the case for values exceeding 0.75, as the Cox and Snell index is capped at this value. The likelihood-ratio $R^2$ is often preferred to the alternatives as it is most analogous to $R^2$ in linear regression, is independent of the base rate (both the Cox and Snell and Nagelkerke $R^2$s increase as the proportion of cases increases from 0 to 0.5) and varies between 0 and 1.
The pseudo-$R^2$ by McFadden (sometimes called the likelihood-ratio index[5]) is defined as

$$R^2_{\text{McF}} = 1 - \frac{\ln L_M}{\ln L_0}$$

and is preferred over $R^2_{\text{CS}}$ by Allison.[2] The two expressions $R^2_{\text{McF}}$ and $R^2_{\text{CS}}$ are then related by

$$R^2_{\text{CS}} = 1 - \exp\!\left(\frac{2\ln L_0}{n}\,R^2_{\text{McF}}\right)$$

which follows from substituting $\ln L_M = (1 - R^2_{\text{McF}})\ln L_0$ into the definition of $R^2_{\text{CS}}$.
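A minimal Python sketch computing the three likelihood-based indices from fitted and null log-likelihoods; the log-likelihood values are made up:

```python
import math

def pseudo_r2(ll_model, ll_null, n):
    """McFadden, Cox-Snell and Nagelkerke pseudo-R-squared values."""
    mcfadden = 1 - ll_model / ll_null
    cox_snell = 1 - math.exp(2 * (ll_null - ll_model) / n)
    nagelkerke = cox_snell / (1 - math.exp(2 * ll_null / n))  # rescaled so the maximum is 1
    return mcfadden, cox_snell, nagelkerke

print(pseudo_r2(ll_model=-120.0, ll_null=-170.0, n=300))
```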
Allison[2] prefers $R^2_T$, a relatively new measure developed by Tjur.[6] It can be calculated in two steps: first, for each of the two categories of the dependent variable, take the mean of the predicted probabilities of an event; then, take the absolute value of the difference between those two means.
A word of caution is in order when interpreting pseudo-$R^2$ statistics. The reason these indices of fit are referred to as pseudo-$R^2$ is that they do not represent the proportionate reduction in error that $R^2$ in linear regression does.[1] Linear regression assumes homoscedasticity, i.e. that the error variance is the same for all values of the criterion. Logistic regression will always be heteroscedastic: the error variances differ for each value of the predicted score. For each value of the predicted score there would be a different value of the proportionate reduction in error. Therefore, it is inappropriate to think of $R^2$ as a proportionate reduction in error in a universal sense in logistic regression.[1] | https://en.wikipedia.org/wiki/Pseudo-R-squared
In evidence-based medicine, likelihood ratios are used for assessing the value of performing a diagnostic test. They combine sensitivity and specificity into a single metric that indicates how much a test result shifts the probability that a condition (such as a disease) is present. The first description of the use of likelihood ratios for decision rules was made at a symposium on information theory in 1954.[1] In medicine, likelihood ratios were introduced between 1975 and 1980.[2][3][4]
Two versions of the likelihood ratio exist, one for positive and one for negative test results. Respectively, they are known as the positive likelihood ratio (LR+, likelihood ratio positive, likelihood ratio for positive results) and the negative likelihood ratio (LR−, likelihood ratio negative, likelihood ratio for negative results).
The positive likelihood ratio is calculated as

$$LR+ = \frac{\text{sensitivity}}{1 - \text{specificity}}$$

which is equivalent to

$$LR+ = \frac{P(T+ \mid D+)}{P(T+ \mid D-)}$$

or "the probability of a person who has the disease testing positive divided by the probability of a person who does not have the disease testing positive."
Here "T+" or "T−" denote that the result of the test is positive or negative, respectively. Likewise, "D+" or "D−" denote that the disease is present or absent, respectively. So "true positives" are those that test positive (T+) and have the disease (D+), and "false positives" are those that test positive (T+) but do not have the disease (D−).
The negative likelihood ratio is calculated as[5]

$$LR- = \frac{1 - \text{sensitivity}}{\text{specificity}}$$

which is equivalent to[5]

$$LR- = \frac{P(T- \mid D+)}{P(T- \mid D-)}$$

or "the probability of a person who has the disease testing negative divided by the probability of a person who does not have the disease testing negative."
The calculation of likelihood ratios for tests with continuous values or more than two outcomes is similar to the calculation for dichotomous outcomes; a separate likelihood ratio is simply calculated for every level of test result, and these are called interval- or stratum-specific likelihood ratios.[6]
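A minimal Python sketch of the two likelihood ratios from sensitivity and specificity; the values are illustrative:

```python
def likelihood_ratios(sensitivity, specificity):
    """LR+ = sens / (1 - spec); LR- = (1 - sens) / spec."""
    return sensitivity / (1 - specificity), (1 - sensitivity) / specificity

lr_pos, lr_neg = likelihood_ratios(0.90, 0.80)
print(lr_pos, lr_neg)  # 4.5 and 0.125
```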
The pretest odds of a particular diagnosis, multiplied by the likelihood ratio, determine the post-test odds. This calculation is based on Bayes' theorem. (Note that odds can be calculated from, and then converted to, probability.)
Pretest probability refers to the chance that an individual in a given population has a disorder or condition; this is the baseline probability prior to the use of a diagnostic test. Post-test probability refers to the probability that a condition is truly present given a positive test result. For a good test in a population, the post-test probability will be meaningfully higher or lower than the pretest probability. A high likelihood ratio indicates a good test for a population, and a likelihood ratio close to one indicates that a test may not be appropriate for a population.
For a screening test, the population of interest might be the general population of an area. For diagnostic testing, the ordering clinician will have observed some symptom or other factor that raises the pretest probability relative to the general population. A likelihood ratio of greater than 1 for a test in a population indicates that a positive test result is evidence that a condition is present. If the likelihood ratio for a test in a population is not clearly better than one, the test will not provide good evidence: the post-test probability will not be meaningfully different from the pretest probability. Knowing or estimating the likelihood ratio for a test in a population allows a clinician to better interpret the result.[7]
Research suggests that physicians rarely make these calculations in practice, however,[8] and when they do, they often make errors.[9] A randomized controlled trial that compared how well physicians interpreted diagnostic tests presented as either sensitivity and specificity, a likelihood ratio, or an inexact graphic of the likelihood ratio found no difference between the three modes in interpretation of test results.[10]
This table provides examples of how changes in the likelihood ratio affect the post-test probability of disease.

Likelihood ratio | Approximate* change in probability[11] | Effect on post-test probability of disease[12]
0.1 | −45% | Large decrease
0.2 | −30% | Moderate decrease
0.5 | −15% | Slight decrease
1 | 0% | None
2 | +15% | Slight increase
5 | +30% | Moderate increase
10 | +45% | Large increase

*These estimates are accurate to within 10% of the calculated answer for all pre-test probabilities between 10% and 90%. The average error is only 4%. For polar extremes of pre-test probability >90% and <10%, see the Estimation of pre- and post-test probability section below.
A medical example is the likelihood that a given test result would be expected in a patient with a certain disorder compared to the likelihood that the same result would occur in a patient without the target disorder.
Some sources distinguish between LR+ and LR−.[13] A worked example is shown below.
Related calculations
This hypothetical screening test (fecal occult blood test) correctly identified two-thirds (66.7%) of patients with colorectal cancer.[a] Unfortunately, factoring in prevalence rates reveals that this hypothetical test has a high false positive rate, and it does not reliably identify colorectal cancer in the overall population of asymptomatic people (PPV = 10%).
On the other hand, this hypothetical test demonstrates very accurate detection of cancer-free individuals (NPV ≈ 99.5%). Therefore, when used for routine colorectal cancer screening with asymptomatic adults, a negative result supplies important data for the patient and doctor, such as reassuring patients worried about developing colorectal cancer.
Confidence intervals for all the predictive parameters involved can be calculated, giving the range of values within which the true value lies at a given confidence level (e.g. 95%).[16]
The likelihood ratio of a test provides a way to estimate the pre- and post-test probabilities of having a condition.
With the pre-test probability and likelihood ratio given, the post-test probabilities can be calculated by the following three steps:[17] first, convert the pre-test probability to pre-test odds; second, multiply the pre-test odds by the likelihood ratio to obtain the post-test odds; third, convert the post-test odds back to a post-test probability.
In the equations above, the positive post-test probability is calculated using the likelihood ratio positive, and the negative post-test probability is calculated using the likelihood ratio negative.
Odds are converted to probabilities as follows:[18]

$$\text{odds} = \frac{\text{probability}}{1 - \text{probability}} \quad (1)$$

multiply equation (1) by (1 − probability):

$$\text{odds} - \text{probability} \times \text{odds} = \text{probability} \quad (2)$$

add (probability × odds) to equation (2):

$$\text{odds} = \text{probability} \times (1 + \text{odds}) \quad (3)$$

divide equation (3) by (1 + odds):

hence

$$\text{probability} = \frac{\text{odds}}{1 + \text{odds}}$$
Alternatively, the post-test probability can be calculated directly from the pre-test probability and the likelihood ratio using the equation:

$$P' = \frac{P \times LR}{1 - P + P \times LR}$$

where $P$ is the pre-test probability, $LR$ is the likelihood ratio, and $P'$ is the post-test probability.
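A minimal Python sketch of the three-step odds calculation described above; the pre-test probability and likelihood ratios are illustrative:

```python
def post_test_probability(pre_test_prob, lr):
    """Convert to odds, multiply by the likelihood ratio, convert back."""
    pre_odds = pre_test_prob / (1 - pre_test_prob)
    post_odds = pre_odds * lr
    return post_odds / (1 + post_odds)

print(post_test_probability(0.10, 4.5))    # ≈ 0.333 after a positive result
print(post_test_probability(0.10, 0.125))  # ≈ 0.014 after a negative result
```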
In fact, the post-test probability, as estimated from the likelihood ratio and pre-test probability, is generally more accurate than if estimated from the positive predictive value of the test, if the tested individual has a different pre-test probability than the prevalence of that condition in the population.
Taking the medical example from above (20 true positives, 10 false negatives, and 2030 total patients), the positive pre-test probability is calculated as:

pre-test probability = (20 + 10) / 2030 ≈ 0.0148 (about 1.5%)

As demonstrated, the positive post-test probability is numerically equal to the positive predictive value; the negative post-test probability is numerically equal to (1 − negative predictive value). | https://en.wikipedia.org/wiki/Likelihood_ratios_in_diagnostic_testing
Phrase chunking is a phase of natural language processing that separates and segments a sentence into its subconstituents, such as noun, verb, and prepositional phrases, abbreviated as NP, VP, and PP, respectively. Typically, each subconstituent or chunk is denoted by brackets.[1]
| https://en.wikipedia.org/wiki/Phrase_chunking
Various methods for the evaluation of machine translation have been employed. This article focuses on the evaluation of the output of machine translation, rather than on performance or usability evaluation.
A typical way for lay people to assess machine translation quality is to translate from a source language to a target language and back to the source language with the same engine. Though intuitively this may seem like a good method of evaluation, it has been shown that round-trip translation is a "poor predictor of quality".[1] The reason it is such a poor predictor of quality is reasonably intuitive. A round-trip translation is not testing one system, but two systems: the language pair of the engine translating into the target language, and the language pair translating back from the target language.
Consider the following examples of round-trip translation performed from English to Italian and Portuguese from Somers (2005):
In the first example, where the text is translated into Italian then back into English, the English text is significantly garbled, but the Italian is a serviceable translation. In the second example, the text translated back into English is perfect, but the Portuguese translation is meaningless; the program interpreted "tit" as a reference to a tit (bird), rather than as the counterpart of "tat", a word it did not understand.
While round-trip translation may be useful to generate a "surplus of fun",[2] the methodology is deficient for serious study of machine translation quality.
This section covers two of the large-scale evaluation studies that have had significant impact on the field: the ALPAC 1966 study and the ARPA study.[3]
One of the constituent parts of the ALPAC report was a study comparing different levels of human translation with machine translation output, using human subjects as judges. The human judges were specially trained for the purpose. The evaluation study compared an MT system translating from Russian into English with human translators, on two variables.
The variables studied were "intelligibility" and "fidelity". Intelligibility was a measure of how "understandable" the sentence was, and was measured on a scale of 1–9. Fidelity was a measure of how much information the translated sentence retained compared to the original, and was measured on a scale of 0–9. Each point on the scale was associated with a textual description. For example, 3 on the intelligibility scale was described as "Generally unintelligible; it tends to read like nonsense but, with a considerable amount of reflection and study, one can at least hypothesize the idea intended by the sentence".[4]
Intelligibility was measured without reference to the original, while fidelity was measured indirectly. The translated sentence was presented, and after reading it and absorbing the content, the original sentence was presented. The judges were asked to rate the original sentence on informativeness. So, the more informative the original sentence, the lower the quality of the translation.
The study showed that the variables were highly correlated when the human judgment was averaged per sentence. The variation among raters was small, but the researchers recommended that at the very least, three or four raters should be used. The evaluation methodology managed to separate translations by humans from translations by machines with ease.
The study concluded that, "highly reliable assessments can be made of the quality of human and machine translations".[4]
As part of the Human Language Technologies Program, the Advanced Research Projects Agency (ARPA) created a methodology to evaluate machine translation systems, and continues to perform evaluations based on this methodology. The evaluation programme was instigated in 1991 and continues to this day. Details of the programme can be found in White et al. (1994) and White (1995).
The evaluation programme involved testing several systems based on different theoretical approaches: statistical, rule-based and human-assisted. A number of methods for the evaluation of the output from these systems were tested in 1992, and the most recent suitable methods were selected for inclusion in the programmes for subsequent years. The methods were: comprehension evaluation, quality panel evaluation, and evaluation based on adequacy and fluency.
Comprehension evaluation aimed to directly compare systems based on the results from multiple-choice comprehension tests, as in Church et al. (1993). The texts chosen were a set of articles in English on the subject of financial news. These articles were translated by professional translators into a series of language pairs, and then translated back into English using the machine translation systems. It was decided that this was not adequate as a standalone method of comparing systems, and it was abandoned due to issues with the modification of meaning in the process of translating from English.
The idea of quality panel evaluation was to submit translations to a panel of expert native English speakers who were professional translators and have them evaluate the translations. The evaluations were done on the basis of a metric, modelled on a standard US government metric used to rate human translations. This was good from the point of view that the metric was "externally motivated",[3] since it was not specifically developed for machine translation. However, the quality panel evaluation was very difficult to set up logistically, as it necessitated having a number of experts together in one place for a week or more, and furthermore required them to reach consensus. This method was also abandoned.
Along with a modified form of the comprehension evaluation (re-styled as informativeness evaluation), the most popular method was to obtain ratings from monolingual judges for segments of a document. The judges were presented with a segment and asked to rate it on two variables, adequacy and fluency. Adequacy is a rating of how much information is transferred between the original and the translation, and fluency is a rating of how good the English is. This technique was found to cover the relevant parts of the quality panel evaluation while being easier to deploy, as it didn't require expert judgment.
Measuring systems based on adequacy and fluency, along with informativeness, is now the standard methodology for the ARPA evaluation program.[5]
In the context of this article, a metric is a measurement. A metric that evaluates machine translation output represents the quality of the output. The quality of a translation is inherently subjective; there is no objective or quantifiable "good". Therefore, any metric must assign quality scores that correlate with the human judgment of quality. That is, a metric should assign high scores to translations that humans score highly, and low scores to those humans score poorly. Human judgment is the benchmark for assessing automatic metrics, as humans are the end-users of any translation output.
The measure of evaluation for metrics is correlation with human judgment. This is generally done at two levels: at the sentence level, where scores are calculated by the metric for a set of translated sentences and then correlated against human judgment for the same sentences, and at the corpus level, where scores over the sentences are aggregated for both human judgments and metric judgments and these aggregate scores are then correlated. Figures for correlation at the sentence level are rarely reported, although Banerjee et al. (2005) do give correlation figures showing that, at least for their metric, sentence-level correlation is substantially worse than corpus-level correlation.
While not widely reported, it has been noted that the genre, or domain, of a text has an effect on the correlation obtained when using metrics. Coughlin (2003) reports that comparing the candidate text against a single reference translation does not adversely affect the correlation of metrics when working with restricted-domain text.
Even if a metric correlates well with human judgment in one study on one corpus, this successful correlation may not carry over to another corpus. Good metric performance across text types or domains is important for the reusability of the metric. A metric that only works for text in a specific domain is useful, but less useful than one that works across many domains, because creating a new metric for every new evaluation or domain is undesirable.
Another important factor in the usefulness of an evaluation metric is to have a good correlation, even when working with small amounts of data, that is candidate sentences and reference translations. Turian et al. (2003) point out that, "Any MT evaluation measure is less reliable on shorter translations", and show that increasing the amount of data improves the reliability of a metric. However, they add that "... reliability on shorter texts, as short as one sentence or even one phrase, is highly desirable because a reliable MT evaluation measure can greatly accelerate exploratory data analysis".[6]
Banerjee et al. (2005) highlight five attributes that a good automatic metric must possess: correlation, sensitivity, consistency, reliability and generality. Any good metric must correlate highly with human judgment; it must be consistent, giving similar results for the same MT system on similar text; it must be sensitive to differences between MT systems; and it must be reliable, in that MT systems that score similarly should be expected to perform similarly. Finally, the metric must be general: it should work with different text domains, in a wide range of scenarios and MT tasks.
The aim of this subsection is to give an overview of the state of the art in automatic metrics for evaluating machine translation.[7]
BLEU was one of the first metrics to report a high correlation with human judgments of quality. The metric is currently one of the most popular in the field. The central idea behind the metric is that "the closer a machine translation is to a professional human translation, the better it is".[8] The metric calculates scores for individual segments, generally sentences, then averages these scores over the whole corpus for a final score. It has been shown to correlate highly with human judgments of quality at the corpus level.[9]
BLEU uses a modified form of precision to compare a candidate translation against multiple reference translations. The metric modifies simple precision because machine translation systems have been known to generate more words than appear in a reference text. No other machine translation metric has yet significantly outperformed BLEU with respect to correlation with human judgment across language pairs.[10]
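The core of this modified precision is clipping: each candidate n-gram may count at most as many times as it occurs in a single reference. A minimal Python sketch of this component (not the full BLEU score, which also combines several n-gram orders and a brevity penalty):

```python
from collections import Counter

def modified_ngram_precision(candidate, references, n=1):
    """Clipped n-gram precision of a tokenized candidate against tokenized references."""
    ngrams = lambda toks: Counter(tuple(toks[i:i + n]) for i in range(len(toks) - n + 1))
    cand = ngrams(candidate)
    max_ref = Counter()
    for ref in references:
        for gram, count in ngrams(ref).items():
            max_ref[gram] = max(max_ref[gram], count)
    clipped = sum(min(count, max_ref[gram]) for gram, count in cand.items())
    return clipped / max(sum(cand.values()), 1)

candidate = "the the the cat".split()
references = ["the cat is on the mat".split()]
print(modified_ngram_precision(candidate, references))  # 3/4: "the" is clipped at 2
```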
The NIST metric is based on the BLEU metric, but with some alterations. Where BLEU simply calculates n-gram precision, adding equal weight to each one, NIST also calculates how informative a particular n-gram is. That is to say, when a correct n-gram is found, the rarer that n-gram is, the more weight it is given.[11] For example, if the bigram "on the" correctly matches, it receives lower weight than the correct matching of the bigram "interesting calculations", as this is less likely to occur. NIST also differs from BLEU in its calculation of the brevity penalty, insofar as small variations in translation length do not impact the overall score as much.
The word error rate (WER) is a metric based on the Levenshtein distance: where the Levenshtein distance works at the character level, WER works at the word level. It was originally used for measuring the performance of speech recognition systems, but is also used in the evaluation of machine translation. The metric is based on the number of words that differ between a piece of machine-translated text and a reference translation.
A related metric is the position-independent word error rate (PER), which allows re-ordering of words and sequences of words between a translated text and a reference translation.
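A minimal Python sketch of WER as word-level Levenshtein distance divided by the reference length; the sentences are illustrative:

```python
def word_error_rate(hypothesis: str, reference: str) -> float:
    """Word-level Levenshtein distance divided by the reference length."""
    hyp, ref = hypothesis.split(), reference.split()
    # dp[i][j]: edit distance between the first i reference words and first j hypothesis words
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            substitution = dp[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            dp[i][j] = min(substitution, dp[i - 1][j] + 1, dp[i][j - 1] + 1)
    return dp[-1][-1] / len(ref)

print(word_error_rate("the cat sat on mat", "the cat sat on the mat"))  # 1/6 ≈ 0.167
```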
The METEOR metric is designed to address some of the deficiencies inherent in the BLEU metric. The metric is based on the weighted harmonic mean of unigram precision and unigram recall. The metric was designed after research by Lavie (2004) into the significance of recall in evaluation metrics. Their research showed that metrics based on recall consistently achieved higher correlation than those based on precision alone; cf. BLEU and NIST.[12]
METEOR also includes some other features not found in other metrics, such as synonymy matching, where instead of matching only on the exact word form, the metric also matches on synonyms. For example, the word "good" in the reference rendered as "well" in the translation counts as a match. The metric also includes a stemmer, which lemmatises words and matches on the lemmatised forms. The implementation of the metric is modular insofar as the algorithms that match words are implemented as modules, and new modules that implement different matching strategies may easily be added.
A newer MT evaluation metric, LEPOR, was proposed as a combination of many evaluation factors, including existing ones (precision, recall) and modified ones (sentence-length penalty and n-gram based word order penalty). The experiments were run on eight language pairs from ACL-WMT2011, including English-to-other (Spanish, French, German and Czech) and the inverse, and showed that LEPOR yielded higher system-level correlation with human judgments than several existing metrics such as BLEU, Meteor-1.3, TER, AMBER and MP4IBM1.[13] An enhanced version of the LEPOR metric, hLEPOR, is introduced in the paper.[14] hLEPOR utilizes the harmonic mean to combine the sub-factors of the designed metric. Furthermore, the authors designed a set of parameters to tune the weights of the sub-factors according to different language pairs. The ACL-WMT13 metrics shared task[15] results show that hLEPOR yields the highest Pearson correlation score with human judgment on the English-to-Russian language pair, in addition to the highest average score on five language pairs (English to German, French, Spanish, Czech and Russian). The detailed results of the WMT13 metrics task are presented in the paper.[16]
There are several survey works on machine translation evaluation,[17][18][19] which describe in more detail the human evaluation methods used and how they work, such as intelligibility, fidelity, fluency, adequacy, comprehension and informativeness. For automatic evaluation, they also give clear classifications, such as the lexical similarity methods, the application of linguistic features, and the subfields of these two aspects. For instance, lexical similarity covers edit distance, precision, recall and word order; linguistic features are divided into syntactic features and semantic features respectively. Some state-of-the-art overviews of both manual and automatic translation evaluation[20] cover recently developed translation quality assessment (TQA) methodologies, such as the utilization of crowd-sourced intelligence via Amazon Mechanical Turk, statistical significance testing, re-visiting traditional criteria with newly designed strategies, as well as MT quality estimation (QE) shared tasks from the annual workshop on MT (WMT)[21] and corresponding models that do not rely on human-provided reference translations. | https://en.wikipedia.org/wiki/Evaluation_of_machine_translation
Machine translation is the use of computational techniques to translate text or speech from one language to another, including the contextual, idiomatic and pragmatic nuances of both languages.
Early approaches were mostly rule-based or statistical. These methods have since been superseded by neural machine translation[1] and large language models.[2]
The origins of machine translation can be traced back to the work of Al-Kindi, a ninth-century Arabic cryptographer who developed techniques for systemic language translation, including cryptanalysis, frequency analysis, and probability and statistics, which are used in modern machine translation.[3] The idea of machine translation later appeared in the 17th century. In 1629, René Descartes proposed a universal language, with equivalent ideas in different tongues sharing one symbol.[4]
The idea of using digital computers for the translation of natural languages was proposed as early as 1947 by England's A. D. Booth[5] and by Warren Weaver at the Rockefeller Foundation in the same year. "The memorandum written by Warren Weaver in 1949 is perhaps the single most influential publication in the earliest days of machine translation."[6][7] Others followed. A demonstration was made in 1954 on the APEXC machine at Birkbeck College (University of London) of a rudimentary translation of English into French. Several papers on the topic were published at the time, and even articles in popular journals (for example an article by Cleave and Zacharov in the September 1955 issue of Wireless World). A similar application, also pioneered at Birkbeck College at the time, was reading and composing Braille texts by computer.
The first researcher in the field, Yehoshua Bar-Hillel, began his research at MIT (1951). A Georgetown University MT research team, led by Professor Michael Zarechnak, followed (1951) with a public demonstration of its Georgetown-IBM experiment system in 1954. MT research programs appeared in Japan[8][9] and Russia (1955), and the first MT conference was held in London (1956).[10][11]
David G. Hays "wrote about computer-assisted language processing as early as 1957" and "was project leader on computational linguistics at Rand from 1955 to 1968."[12]
Researchers continued to join the field as the Association for Machine Translation and Computational Linguistics was formed in the U.S. (1962) and the National Academy of Sciences formed the Automatic Language Processing Advisory Committee (ALPAC) to study MT (1964). Real progress was much slower, however, and after the ALPAC report (1966), which found that the ten-year-long research effort had failed to fulfill expectations, funding was greatly reduced.[13] According to a 1972 report by the Director of Defense Research and Engineering (DDR&E), the feasibility of large-scale MT was re-established by the success of the Logos MT system in translating military manuals into Vietnamese during that conflict.
The French Textile Institute also used MT to translate abstracts from and into French, English, German and Spanish (1970); Brigham Young University started a project to translate Mormon texts by automated translation (1971).
SYSTRAN, which "pioneered the field under contracts from the U.S. government"[14] in the 1960s, was used by Xerox to translate technical manuals (1978). Beginning in the late 1980s, as computational power increased and became less expensive, more interest was shown in statistical models for machine translation. MT became more popular after the advent of computers.[15] SYSTRAN's first implementation system was deployed in 1988 by the online service of the French Postal Service called Minitel.[16] Various computer-based translation companies were also launched, including Trados (1984), which was the first to develop and market translation memory technology (1989), though this is not the same as MT. The first commercial MT system for Russian / English / German-Ukrainian was developed at Kharkov State University (1991).
By 1998, "for as little as $29.95" one could "buy a program for translating in one direction between English and a major European language of your choice" to run on a PC.[14]
MT on the web started with SYSTRAN offering free translation of small texts (1996) and then providing this via AltaVista Babelfish,[14] which racked up 500,000 requests a day (1997).[17] The second free translation service on the web was Lernout & Hauspie's GlobaLink.[14] Atlantic Magazine wrote in 1998 that "Systran's Babelfish and GlobaLink's Comprende" handled "Don't bank on it" with a "competent performance".[18]
Franz Josef Och (the future head of translation development at Google) won DARPA's speed MT competition (2003).[19] More innovations during this time included MOSES, the open-source statistical MT engine (2007), a text/SMS translation service for mobiles in Japan (2008), and a mobile phone with built-in speech-to-speech translation functionality for English, Japanese and Chinese (2009). In 2012, Google announced that Google Translate translated roughly enough text to fill 1 million books in one day.
Before the advent of deep learning methods, statistical methods required many rules accompanied by morphological, syntactic, and semantic annotations.
The rule-based machine translation approach was used mostly in the creation of dictionaries and grammar programs. Its biggest downfall was that everything had to be made explicit: orthographical variation and erroneous input had to be made part of the source-language analyser in order to cope with them, and lexical selection rules had to be written for all instances of ambiguity.
Transfer-based machine translation was similar to interlingual machine translation in that it created a translation from an intermediate representation that simulated the meaning of the original sentence. Unlike interlingual MT, it depended partially on the language pair involved in the translation.
Interlingual machine translation was one instance of rule-based machine translation approaches. In this approach, the source language, i.e. the text to be translated, was transformed into an interlingua, i.e. a "language-neutral" representation that is independent of any language. The target language was then generated out of the interlingua. The only interlingual machine translation system that was made operational at the commercial level was the KANT system (Nyberg and Mitamura, 1992), which was designed to translate Caterpillar Technical English (CTE) into other languages.
Dictionary-based machine translation used a method based on dictionary entries, which means that words were translated as they are by a dictionary.
Statistical machine translation tried to generate translations using statistical methods based on bilingual text corpora, such as the Canadian Hansard corpus, the English-French record of the Canadian parliament, and EUROPARL, the record of the European Parliament. Where such corpora were available, good results were achieved translating similar texts, but such corpora were rare for many language pairs. The first statistical machine translation software was CANDIDE from IBM. In 2005, Google improved its internal translation capabilities by using approximately 200 billion words from United Nations materials to train its system; translation accuracy improved.[20]
SMT's biggest downfalls included its dependence on huge amounts of parallel text, its problems with morphology-rich languages (especially with translating into such languages), and its inability to correct singleton errors.
Some work has been done in the utilization of multiparallel corpora, that is, a body of text that has been translated into three or more languages. Using these methods, a text that has been translated into two or more languages may be utilized in combination to provide a more accurate translation into a third language than if just one of those source languages were used alone.[21][22][23]
A deep learning-based approach to MT, neural machine translation has made rapid progress in recent years. However, the current consensus is that the so-called human parity achieved is not real, being based wholly on limited domains, language pairs, and certain test benchmarks,[24] i.e., it lacks statistical significance power.[25]
Translations by neural MT tools like DeepL Translator, which is thought to usually deliver the best machine translation results as of 2022, typically still need post-editing by a human.[26][27][28]
Instead of training specialized translation models on parallel datasets, one can also directly prompt generative large language models like GPT to translate a text.[29][30][31] This approach is considered promising,[32] but is still more resource-intensive than specialized translation models.
Studies using human evaluation (e.g. by professional literary translators or human readers) have systematically identified various issues with the latest advanced MT outputs.[31] Common issues include the translation of ambiguous parts whose correct translation requires common-sense-like semantic language processing or context.[31] There can also be errors in the source texts and missing high-quality training data, and the severity and frequency of several types of problems may not be reduced with the techniques used to date, requiring some level of active human participation.
Word-sense disambiguation concerns finding a suitable translation when a word can have more than one meaning. The problem was first raised in the 1950s by Yehoshua Bar-Hillel.[33] He pointed out that without a "universal encyclopedia", a machine would never be able to distinguish between the two meanings of a word.[34] Today there are numerous approaches designed to overcome this problem. They can be approximately divided into "shallow" approaches and "deep" approaches.
Shallow approaches assume no knowledge of the text. They simply apply statistical methods to the words surrounding the ambiguous word. Deep approaches presume a comprehensive knowledge of the word. So far, shallow approaches have been more successful.[35]
Claude Piron, a long-time translator for the United Nations and the World Health Organization, wrote that machine translation, at its best, automates the easier part of a translator's job; the harder and more time-consuming part usually involves doing extensive research to resolve ambiguities in the source text, which the grammatical and lexical exigencies of the target language require to be resolved:
Why does a translator need a whole workday to translate five pages, and not an hour or two? ..... About 90% of an average text corresponds to these simple conditions. But unfortunately, there's the other 10%. It's that part that requires six [more] hours of work. There are ambiguities one has to resolve. For instance, the author of the source text, an Australian physician, cited the example of an epidemic which was declared during World War II in a "Japanese prisoners of war camp". Was he talking about an American camp with Japanese prisoners or a Japanese camp with American prisoners? The English has two senses. It's necessary therefore to do research, maybe to the extent of a phone call to Australia.[36]
The ideal deep approach would require the translation software to do all the research necessary for this kind of disambiguation on its own; but this would require a higher degree of AI than has yet been attained. A shallow approach which simply guessed at the sense of the ambiguous English phrase that Piron mentions (based, perhaps, on which kind of prisoner-of-war camp is more often mentioned in a given corpus) would have a reasonable chance of guessing wrong fairly often. A shallow approach that involves "ask the user about each ambiguity" would, by Piron's estimate, only automate about 25% of a professional translator's job, leaving the harder 75% still to be done by a human.
One of the major pitfalls of MT is its inability to translate non-standard language with the same accuracy as standard language. Heuristic or statistically based MT takes input from various sources in the standard form of a language. Rule-based translation, by nature, does not include common non-standard usages. This causes errors in translation from a vernacular source or into colloquial language. Limitations on translation from casual speech present issues in the use of machine translation in mobile devices.
In information extraction, named entities, in a narrow sense, refer to concrete or abstract entities in the real world such as people, organizations, companies, and places that have a proper name: George Washington, Chicago, Microsoft. It also refers to expressions of time, space and quantity such as 1 July 2011, $500.
In the sentence "Smith is the president of Fabrionix" both Smith and Fabrionix are named entities, and can be further qualified via first name or other information; "president" is not, since Smith could have earlier held another position at Fabrionix, e.g. vice president.
The term rigid designator is what defines these usages for analysis in statistical machine translation.
Named entities must first be identified in the text; if not, they may be erroneously translated as common nouns, which would most likely not affect the BLEU rating of the translation but would change the text's human readability.[37] They may also be omitted from the output translation, which would have implications for the text's readability and message.
Transliteration involves finding the letters in the target language that most closely correspond to the name in the source language. This, however, has been cited as sometimes worsening the quality of translation.[38] For "Southern California" the first word should be translated directly, while the second word should be transliterated. Machines often transliterate both because they treat them as one entity. Words like these are hard for machine translators, even those with a transliteration component, to process.
The use of a "do-not-translate" list, which has the same end goal (transliteration as opposed to translation),[39] still relies on correct identification of named entities.
A third approach is a class-based model. Named entities are replaced with a token to represent their "class"; "Ted" and "Erica" would both be replaced with a "person" class token. Then the statistical distribution and use of person names in general can be analyzed, instead of looking at the distributions of "Ted" and "Erica" individually, so that the probability of a given name in a specific language will not affect the assigned probability of a translation. A study by Stanford on improving this area of translation gives the example that different probabilities will be assigned to "David is going for a walk" and "Ankit is going for a walk" for English as a target language, due to the different number of occurrences of each name in the training data. A frustrating outcome of the same study by Stanford (and of other attempts to improve named entity translation) is that many times, a decrease in the BLEU scores for translation will result from the inclusion of methods for named entity translation.[39]
While no system provides the ideal of fully automatic high-quality machine translation of unrestricted text, many fully automated systems produce reasonable output.[40][41][42]The quality of machine translation is substantially improved if the domain is restricted and controlled.[43]This enables using machine translation as a tool to speed up and simplify translations, as well as producing flawed but useful low-cost or ad-hoc translations.
Machine translation applications have also been released for most mobile devices, including mobile telephones, pocket PCs, PDAs, etc. Due to their portability, such instruments have come to be designated asmobile translationtools enabling mobile business networking between partners speaking different languages, or facilitating both foreign language learning and unaccompanied traveling to foreign countries without the need of the intermediation of a human translator.
For example, the Google Translate app allows foreigners to quickly translate text in their surrounding viaaugmented realityusing the smartphone camera that overlays the translated text onto the text.[44]It can alsorecognize speechand then translate it.[45]
Despite their inherent limitations, MT programs are used around the world. Probably the largest institutional user is theEuropean Commission. In 2012, with an aim to replace a rule-based MT by newer, statistical-based MT@EC, The European Commission contributed 3.072 million euros (via its ISA programme).[46]
Machine translation has also been used for translatingWikipediaarticles and could play a larger role in creating, updating, expanding, and generally improving articles in the future, especially as the MT capabilities may improve. There is a "content translation tool" which allows editors to more easily translate articles across several select languages.[47][48][49]English-language articles are thought to usually be more comprehensive and less biased than their non-translated equivalents in other languages.[50]As of 2022,English Wikipediahas over 6.5 million articles while, for example, theGermanandSwedish Wikipediaseach only have over 2.5 million articles,[51]each often far less comprehensive.
Following terrorist attacks in Western countries, including the September 11 attacks, the U.S. and its allies have been most interested in developing Arabic machine translation programs, and also in translating the Pashto and Dari languages.[citation needed]Within these languages, the focus is on key phrases and quick communication between military members and civilians through the use of mobile phone apps.[52]The Information Processing Technology Office at DARPA hosted programs like TIDES and the Babylon translator. The US Air Force awarded a $1 million contract to develop a language translation technology.[53]
The notable rise ofsocial networkingon the web in recent years has created yet another niche for the application of machine translation software – in utilities such asFacebook, orinstant messagingclients such asSkype,Google Talk,MSN Messenger, etc. – allowing users speaking different languages to communicate with each other.
Lineage Wgained popularity in Japan because of its machine translation features allowing players from different countries to communicate.[54]
Despite being labelled an unworthy competitor to human translation in 1966 by the Automated Language Processing Advisory Committee put together by the United States government,[55]the quality of machine translation has now improved to such levels that its application in online collaboration and in the medical field is being investigated. The application of this technology in medical settings where human translators are absent is another topic of research, but difficulties arise due to the importance of accurate translations in medical diagnoses.[56]
Researchers caution that the use of machine translation in medicine could risk mistranslations that can be dangerous in critical situations.[57][58]Machine translation can make it easier for doctors to communicate with their patients in day to day activities, but it is recommended to only use machine translation when there is no other alternative, and that translated medical texts should be reviewed by human translators for accuracy.[59][60]
Legal language poses a significant challenge to machine translation tools due to its precise nature and atypical use of normal words. For this reason, specialized algorithms have been developed for use in legal contexts.[61]Due to the risk of mistranslations arising from machine translators, researchers recommend that machine translations should be reviewed by human translators for accuracy, and some courts prohibit its use in formal proceedings.[62]
The use of machine translation in law has raised concerns about translation errors andclient confidentiality. Lawyers who use free translation tools such as Google Translate may accidentally violate client confidentiality by exposing private information to the providers of the translation tools.[61]In addition, there have been arguments that consent for a police search that is obtained with machine translation is invalid, with different courts issuing different verdicts over whether or not these arguments are valid.[57]
The advancements in convolutional neural networks in recent years and in low-resource machine translation (when only a very limited amount of data and examples are available for training) have enabled machine translation for ancient languages, such as Akkadian and its dialects Babylonian and Assyrian.[63]
There are many factors that affect how machine translation systems are evaluated. These factors include the intended use of the translation, the nature of the machine translation software, and the nature of the translation process.
Different programs may work well for different purposes. For example, statistical machine translation (SMT) typically outperforms example-based machine translation (EBMT), but researchers found that when evaluating English to French translation, EBMT performs better.[64]The same concept applies to technical documents, which can be more easily translated by SMT because of their formal language.
In certain applications, however, e.g., product descriptions written in acontrolled language, adictionary-based machine-translationsystem has produced satisfactory translations that require no human intervention save for quality inspection.[65]
There are various means for evaluating the output quality of machine translation systems. The oldest is the use of human judges[66]to assess a translation's quality. Even though human evaluation is time-consuming, it is still the most reliable method to compare different systems such as rule-based and statistical systems.[67]Automated means of evaluation include BLEU, NIST, METEOR, and LEPOR.[68]
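As an illustration of the automated metrics, the snippet below scores a candidate translation against a reference using sentence-level BLEU from NLTK. This is only a minimal sketch: serious evaluations score whole corpora, and the smoothing choice matters for short segments.

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

reference = [["the", "cat", "is", "on", "the", "mat"]]  # list of reference token lists
candidate = ["the", "cat", "sat", "on", "the", "mat"]

# Smoothing avoids zero scores when some n-gram orders have no matches.
score = sentence_bleu(reference, candidate,
                      smoothing_function=SmoothingFunction().method1)
print(f"BLEU: {score:.3f}")
```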
Relying exclusively on unedited machine translation ignores the fact that communication inhuman languageis context-embedded and that it takes a person to comprehend thecontextof the original text with a reasonable degree of probability. It is certainly true that even purely human-generated translations are prone to error. Therefore, to ensure that a machine-generated translation will be useful to a human being and that publishable-quality translation is achieved, such translations must be reviewed and edited by a human.[69]The lateClaude Pironwrote that machine translation, at its best, automates the easier part of a translator's job; the harder and more time-consuming part usually involves doing extensive research to resolveambiguitiesin thesource text, which thegrammaticalandlexicalexigencies of the target language require to be resolved. Such research is a necessary prelude to the pre-editing necessary in order to provide input for machine-translation software such that the output will not bemeaningless.[70]
In addition to disambiguation problems, decreased accuracy can occur due to varying levels of training data for machine translating programs. Both example-based and statistical machine translation rely on a vast array of real example sentences as a base for translation, and when too many or too few sentences are analyzed accuracy is jeopardized. Researchers found that when a program is trained on 203,529 sentence pairings, accuracy actually decreases.[64]The optimal level of training data seems to be just over 100,000 sentences, possibly because as training data increases, the number of possible sentences increases, making it harder to find an exact translation match.
Flaws in machine translation have been noted for their entertainment value. Two videos uploaded to YouTube in April 2017 involve two Japanese hiragana characters えぐ (e and gu) being repeatedly pasted into Google Translate, with the resulting translations quickly degrading into nonsensical phrases such as "DECEARING EGG" and "Deep-sea squeeze trees", which are then read in increasingly absurd voices;[71][72]the full-length version of the video had 6.9 million views as of March 2022.[73]
In the early 2000s, options for machine translation between spoken and signed languages were severely limited. It was a common belief that deaf individuals could use traditional translators. However, stress, intonation, pitch, and timing are conveyed much differently in spoken languages compared to signed languages. Therefore, a deaf individual may misinterpret or become confused about the meaning of written text that is based on a spoken language.[74]
Researchers Zhao et al. (2000) developed a prototype called TEAM (translation from English to ASL by machine) that completed English to American Sign Language (ASL) translations. The program would first analyze the syntactic, grammatical, and morphological aspects of the English text. Following this step, the program accessed a sign synthesizer, which acted as a dictionary for ASL. This synthesizer housed the process one must follow to complete ASL signs, as well as the meanings of these signs. Once the entire text was analyzed and the signs necessary to complete the translation were located in the synthesizer, a computer-generated human appeared and used ASL to sign the English text to the user.[74]
Only works that are original are subject to copyright protection, so some scholars claim that machine translation results are not entitled to copyright protection because MT does not involve creativity.[75]The copyright at issue is for a derivative work; the author of the original work in the original language does not lose their rights when a work is translated: a translator must have permission to publish a translation.[citation needed] | https://en.wikipedia.org/wiki/Machine_translation |
Translation studiesis an academicinterdisciplinedealing with the systematic study of the theory, description and application oftranslation,interpreting, andlocalization. As an interdiscipline, translation studies borrows much from the various fields of study that support translation. These includecomparative literature,computer science,history,linguistics,philology,philosophy,semiotics, andterminology.
The term “translation studies” was coined by the Amsterdam-based American scholarJames S. Holmesin his 1972 paper “The name and nature of translation studies”, which is considered a foundational statement for the discipline. Writers in English occasionally use the term "translatology" (and less commonly "traductology") to refer to translation studies, and the corresponding French term for the discipline is usuallytraductologie(as in theSociété Française de Traductologie). In the United States, there is a preference for the term "translation and interpreting studies" (as in the American Translation and Interpreting Studies Association), although European tradition includes interpreting within translation studies (as in theEuropean Society for Translation Studies).
Historically, translation studies has long been "prescriptive" (telling translators how to translate), to the point that discussions of translation that were not prescriptive were generally not considered to be about translation at all. When historians of translation studies trace early Western thought about translation, for example, they most often set the beginning at the renowned oratorCicero's remarks on how he used translation from Greek to Latin to improve his oratorical abilities—an early description of whatJeromeended up callingsense-for-sense translation. The descriptive history of interpreters in Egypt provided byHerodotusseveral centuries earlier is typically not thought of as translation studies—presumably because it does not tell translators how to translate. InChina, the discussion onhow to translateoriginated with the translation ofBuddhist sutrasduring theHan dynasty.
In 1958, at the Fourth Congress of Slavists in Moscow, the debate between linguistic and literary approaches to translation reached a point where it was proposed that the best thing might be to have a separate science that was able to study all forms of translation, without being wholly within linguistics or wholly within literary studies.[1]Within comparative literature, translation workshops were promoted in the 1960s in some American universities like theUniversity of IowaandPrinceton.[2]
During the 1950s and 1960s, systematic linguistic-oriented studies of translation began to appear. In 1958, the French linguistsJean-Paul Vinayand Jean Darbelnet carried out a contrastive comparison of French and English.[3]In 1964,Eugene NidapublishedToward a Science of Translating, a manual forBible translationinfluenced to some extent byHarris'stransformational grammar.[4]In 1965,J. C. Catfordtheorized translation from a linguistic perspective.[5]In the 1960s and early 1970s, the Czech scholarJiří Levýand the Slovak scholarsAnton Popovičand František Miko worked on the stylistics of literary translation.[6]
These initial steps toward research on literary translation were collected in James S. Holmes' paper at the Third International Congress of Applied Linguistics held inCopenhagenin 1972. In that paper, "The name and nature of translation studies", Holmes asked for the consolidation of a separate discipline and proposed a classification of the field. A visual "map" of Holmes' proposal was later presented byGideon Touryin his 1995Descriptive Translation Studies and beyond.[7]
Before the 1990s, translation scholars tended to form particular schools of thought, particularly within the prescriptive, descriptive and Skopos paradigms. Since the "cultural turn" in the 1990s, the discipline has tended to divide into separate fields of inquiry, where research projects run parallel to each other, borrowing methodologies from each other and from other academic disciplines.
The main schools of thought on the level of research have tended to cluster around key theoretical concepts, most of which have become objects of debate.
Through to the 1950s and 1960s, discussions in translation studies tended to concern how best to attain "equivalence". The term "equivalence" had two distinct meanings, corresponding to different schools of thought. In the Russian tradition, "equivalence" was usually a one-to-one correspondence between linguistic forms, or a pair of authorized technical terms or phrases, such that "equivalence" was opposed to a range of "substitutions". However, in the French tradition of Vinay and Darbelnet, drawing onBally, "equivalence" was the attainment of equal functional value, generally requiringchangesin form.Catford's notion of equivalence in 1965 was as in the French tradition. In the course of the 1970s, Russian theorists adopted the wider sense of "equivalence" as somethingresultingfromlinguistic transformations.
At about the same time, theInterpretive Theory of Translation[8]introduced the notion of deverbalized sense into translation studies, drawing a distinction between word correspondences and sense equivalences, and showing the difference between dictionary definitions of words and phrases (word correspondences) and the sense of texts or fragments thereof in a given context (sense equivalences).
The discussions of equivalence accompanied typologies of translation solutions (also called "procedures", "techniques" or "strategies"), as in Fedorov (1953) and Vinay and Darbelnet (1958). In 1958, Loh Dianyang'sTranslation: Its Principles and Techniques(英汉翻译理论与技巧) drew on Fedorov and English linguistics to present a typology of translation solutions between Chinese and English.
In these traditions, discussions of the ways to attain equivalence have mostly been prescriptive and have been related to translator training.
Descriptive translation studies aims at building an empirical descriptive discipline, to fill one section of the Holmes map. The idea that scientific methodology could be applicable to cultural products had been developed by the Russian Formalists in the early years of the 20th century, and had been recovered by various researchers incomparative literature. It was now applied to literary translation. Part of this application was thetheory of polysystems(Even-Zohar 1990[9]) in which translated literature is seen as a sub-system of the receiving or target literary system. Gideon Toury bases his theory on the need to consider translations as "facts of the target culture" for the purposes of research. The concepts of "manipulation"[10]and "patronage"[11]have also been developed in relation to literary translations.
Another discovery in translation theory can be dated from 1984 in Europe and the publication of two books in German:Foundation for a General Theory of TranslationbyKatharina Reiss(also written Reiß) andHans Vermeer,[12]andTranslatorial Action(Translatorisches Handeln) by Justa Holz-Mänttäri.[13]From these two came what is known asSkopos theory, which gives priority to the purpose to be fulfilled by the translation instead of prioritizing equivalence.
The cultural turn meant still another step forward in the development of the discipline. It was sketched bySusan BassnettandAndré LefevereinTranslation - History - Culture, and quickly represented by the exchanges between translation studies and other area studies and concepts:gender studies, cannibalism, post-colonialism[14]or cultural studies, among others.
The concept of "cultural translation" largely ensues fromHomi Bhabha's reading ofSalman RushdieinThe Location of Culture.[15]Cultural translation is a concept used incultural studiesto denote the process of transformation, linguistic or otherwise, in a givenculture.[16]The concept uses linguistic translation as a tool or metaphor in analyzing the nature of transformation and interchange in cultures.
Translation history concerns the history of translators as a professional and social group, as well as the history of translations as indicators of the way cultures develop, interact and may die. Some principles for translation history have been proposed by Lieven D'hulst[17]andPym.[18]Major projects in translation history have included theOxford History of Literary Translation in English[19]andHistoire des traductions en langue française.[20]
Historical anthologies of translation theories have been compiled byRobinson(2002)[21]for Western theories up to Nietzsche; by D'hulst (1990)[22]for French theories, 1748–1847; by Santoyo (1987)[23]for the Spanish tradition; byEdward Balcerzan(1977)[24]for the Polish experience, 1440–1974; and byCheung(2006)[25]for Chinese.
The sociology of translation includes the study of who translators are, what their forms of work are (workplace studies) and what data on translations can say about the movements of ideas between languages.
Post-colonial studies look at translations between a metropolis and former colonies, or within complex former colonies.[26]They radically question the assumption that translation occurs between cultures and languages that are radically separated.
Gender studies look at the sexuality of translators,[27]at the gendered nature of the texts they translate,[28]at the possibly gendered translation processes employed, and at the gendered metaphors used to describe translation. Pioneering studies are by Luise von Flotow,Sherry Simonand Keith Harvey.[29]The effacement or inability to efface threatening forms of same-sex sexuality is a topic taken up, when for instance ancient writers are translated by Renaissance thinkers in a Christian context.[30]
In the field of ethics, much-discussed publications have been the essays of Antoine Berman and Lawrence Venuti, which differ in some aspects but agree on the idea of emphasizing the differences between source and target language and culture when translating. Both are interested in how the "cultural other [...] can best preserve [...] that otherness".[31]In more recent studies, scholars have applied Emmanuel Levinas' philosophical work on ethics and subjectivity to this issue.[32]As his publications have been interpreted in different ways, various conclusions about his concept of ethical responsibility have been drawn from them. Some have concluded that the idea of translation itself could be ethically doubtful, while others read it as a call for considering the relationship between author or text and translator as more interpersonal, thus making translation an equal and reciprocal process.
Parallel to these studies, the general recognition of the translator's responsibility has increased. More and more translators and interpreters are being seen as active participants in geopolitical conflicts, which raises the question of how to act ethically, independently of one's own identity or judgement. This leads to the conclusion that translating and interpreting cannot be considered solely a process of language transfer, but also a socially and politically directed activity.[33]
There is general agreement on the need for an ethical code of practice providing some guiding principles to reduce uncertainties and improve professionalism, as has been established in other disciplines (for example military medical ethics or legal ethics). However, as there is still no clear understanding of the concept of ethics in this field, opinions about the particular form of such a code vary considerably.
Audiovisual translationstudies (AVT) is concerned with translation that takes place in audio and/or visual settings, such as the cinema, television, video games and also some live events such as opera performances.[34]The common denominator for studies in this field is that translation is carried out on multiplesemioticsystems, as the translated texts (so-called polysemiotic texts)[35]have messages that are conveyed through more than one semiotic channel, i.e. not just through the written or spoken word, but also via sound and/or images.[36]The main translation modes under study aresubtitling,film dubbingandvoice-over, but alsosurtitlingfor the opera and theatre.[37]
Media accessibility studies is often considered a part of this field as well,[38]withaudio description for the blind and partially sightedandsubtitles for the deaf or hard-of-hearingbeing the main objects of study. The various conditions and constraints imposed by the different media forms and translation modes, which influence how translation is carried out, are often at the heart of most studies of the product or process of AVT. Many researchers in the field of AVT Studies are organized in the European Association for Studies in Screen Translation, as are many practitioners in the field.
Non-professional translation refers to the translation activities performed by translators who are not working professionally, usually in ways made possible by the Internet.[39]These practices have mushroomed with the recentdemocratization of technologyand the popularization of the Internet. Volunteer translation initiatives have emerged all around the world, and deal with the translations of various types of written and multimedia products.
Normally, it is not required for volunteers to have been trained in translation, but trained translators could also participate, such as the case of Translators without Borders.[40]
Depending on the feature that each scholar considers the most important, different terms have been used to label "non-professional translation". O'Hagan has used "user-generated translation",[41]"fan translation"[42]and "community translation".[39]Fernández-Costales and Jiménez-Crespo prefer "collaborative translation",[43]while Pérez-González labels it "amateur subtitling".[44]Pym proposes that the fundamental difference between this type of translation and professional translation relies on monetary reward, and he suggests it should be called "volunteer translation".[45]
Some of the most popular fan-controlled non-professional translation practices arefansubbing,fandubbing,ROM hackingorfan translation of video games, andscanlation. These practices are mostly supported by a strong and consolidated fan base, although larger non-professional translation projects normally applycrowdsourcingmodels and are controlled by companies or organizations. Since 2008,Facebookhas used crowdsourcing to have its website translated by its users andTED conferencehas set up the open translation project TED Translators[46]in which volunteers use the Amara[47]platform to create subtitles online for TED talks.
Studies oflocalizationconcern the way the contemporary language industries translate and adapt ("localize") technical texts across languages, tailoring them for a specific "locale" (a target location defined by language variety and various cultural parameters). Localization usually concerns software, product documentation, websites andvideo games, where the technological component is key.[citation needed]
A key concept in localization isinternationalization, in which the start product is stripped of its culture-specific features in such a way that it can be simultaneously localized into several languages.
The field refers to the set of pedagogical approaches used by academic educators to teach translation, train translators, and develop the translation discipline. Moreover, translation learners face many difficulties in trying to find the right equivalents for a particular source text. For these reasons, translation education is an important field of study that encompasses a number of questions to be answered in research.
The discipline of interpreting studies is often referred to as the sister of translation studies, due to the similarities between the two disciplines, both of which concern the transfer of ideas from one language into another. Indeed, interpreting as an activity was long seen as a specialized form of translation before scientifically founded interpreting studies gradually emancipated from translation studies in the second half of the 20th century. While strongly oriented towards the theoretical framework of translation studies,[48]interpreting studies have always concentrated on the practical and pedagogical aspects of the activity.[49]This led to the steady emancipation of the discipline and the consecutive development of a separate theoretical framework based—as is translation studies—on interdisciplinary premises. Interpreting studies have developed several approaches and undergone various paradigm shifts,[50]leading to the most recent surge of sociological studies of interpreters and their work(ing conditions).
Metaphoricalusage can challenge translators striving to balance the idiomatic with a natural style; and translation can unmask hidden metaphors.[a]The study of translation "can reveal new insights into the relationship between images and culture".[51]
The study of translating for younger audiences constitutes a relatively young research field that has developed profoundly over the past four decades, ever since Göte Klingberg, a Swedish researcher and pedagogue, organized an International Research Society for Children's Literature (IRSCL) conference on the translation of children's literature in Södertälje, Sweden, in 1976. Since then, the field has attempted to build its own research area and to gain independence and recognition from other fields. Indeed, children's literature had itself suffered from low prestige globally, and its combination with translation studies made it a minor research interest in disciplines of greater standing at the time, such as comparative literature, linguistics and even translation studies.[citation needed]
However, due to the recent economic success of children’s and young adult literature, the establishment of international literary prizes like theAstrid Lindgren Memorial Award (ALMA), and the existence of a large number of institutions such as IRSCL (International Research Society for Children’s Literature), in addition to IBBY (International Board on Books for Young People), established scientific research/journals (The Lion and the Unicorn: A Critical Journal of Children’s Literature, Hopkins Press orBarnboken, The Swedish Institute for Children’s Books), as well as courses in children’s literature at the university level, children’s literature has gained enough prestige since the beginning of the century to be considered its own discipline.[citation needed]
Translation studies is itself a relatively recently established scientific discipline, having been grouped together with linguistics or the study of literature after World War II. Despite the seminal work of Zohar Shavit (1986), who studied children's literature through the lens of polysystem theory, children's literature only began to gain traction in translation studies around the turn of the century. According to Borodo, "it was not before 2000 that the term 'children's literature translation studies' (CLTS) seems to have first appeared in [an] article by Fernández López" (cited in Borodo 2017:36).[citation needed]At the beginning of the 2000s the field grew fast, but few researchers yet identified with it, as the discipline was not distinct (see Borodo's Children's Literature Translation Studies survey from 2007 in Borodo 2017:40).[citation needed]Things then picked up with the publication of some fundamental books for the discipline, such as Riita Oittinen's Translating for Children (2000) and Gillian Lathey's The Translation of Children's Literature: A Reader (2006). The discipline finally got its own entries in, e.g., The Routledge Encyclopedia of Translation Studies (2009) by Lathey, The Routledge Handbook of Translation Studies (2010) by Alvstad, then (2013) by O'Sullivan, and much later in The Routledge Handbook of Literary Translation (2018) by Alvstad – showing recognition of the intersection between the two disciplines.[citation needed]
Some international conferences on translation and children’s literature were organized: in 2004 in Brussels there was “Children’s Literature in Translation: Challenges and Strategies”; in 2005 in London, “No Child is an Island: The Case of Children’s Books in Translation” (IBBY- International Board on Books for Young People); in 2012 in London “Crossing Boundaries: Translations and Migrations’ (IBBY) and in Brussels and Antwerp in 2017 by the Center of Reception Studies (CERES): “Translation Studies and Children’s Literature” (KU Leuven/Antwerp University), which resulted in a notable publicationChildren’s Literature in Translation, Texts and Contexts(2020) by Jan van Coillie and Jack McMartin. This publication won the IRSCL Edited Book Award 2021, providing official recognition of CLTS.[citation needed]
The pandemic put a stop to international events meeting face-to-face, but to compensate for the need of scholars to meet and interact, Pilar Alderete Diez from the University of Galway (IR) with the support of Owen Harrington from Heriot-Watt University (UK) created the Children in Translation Network (CITN) in 2021 and a webinar series on translation studies and children’s literature. The success was immediate, providing evidence of the interest in the discipline, and gathering more than 150 participants from 21 different countries.[citation needed]
The most recent international conference in CLTS was organized in 2024 by the Institute of Interpreting and Translation Studies (TÖI) of Stockholm University in Sweden under the banner of "New Voices in Children's Literature in Translation: Culture, Power and Transnationalism".[citation needed]The conference was held on 22–23 August 2024 in Stockholm, with around 120 attendees from around 40 countries and more than 80 presentations over two days.[citation needed]
The growing interest in this discipline is attested by the number of scientific articles and books in this specific area (e.g., 17,400[citation needed]results on Google Scholar for the period 2017–2023;[citation needed]3,338 results on EBSCOhost for the same period[citation needed]), the creation of university-level courses devoted solely to translation and children's literature, the number of theses and dissertations being defended in this area, and recent international conferences and networks like CITN.[citation needed]
Translation studies has developed alongside the growth in translation schools and courses at the university level. In 1995, a study of 60 countries revealed there were 250 bodies at university level offering courses in translation or interpreting.[52]In 2013, the same database listed 501 translator-training institutions.[53]Accordingly, there has been a growth in conferences on translation, translation journals and translation-related publications. The visibility acquired by translation has also led to the development of national and international associations of translation studies. Ten of these associations formed the International Network of Translation and Interpreting Studies Associations in September 2016.
The growing variety of paradigms is mentioned as one of the possible sources of conflict in the discipline. As early as 1999, the conceptual gap between non-essentialist and empirical approaches came up for debate at the Vic Forum on Training Translators and Interpreters: New Directions for the Millennium. The discussants, Rosemary Arrojo andAndrew Chesterman, explicitly sought common shared ground for both approaches.[54]
Interdisciplinarity has made the creation of new paradigms possible, as most of the developed theories grew from contact with other disciplines like linguistics, comparative literature, cultural studies, philosophy, sociology or historiography. At the same time, it might have provoked the fragmentation of translation studies as a discipline in its own right.[55]
A second source of conflict arises from the breach between theory and practice. As the prescriptivism of the earlier studies gives way to descriptivism and theorization, professionals see less applicability of the studies. At the same time, university research assessment places little if any importance on translation practice.[56]
Translation studies has shown a tendency to broaden its fields of inquiry, and this trend may be expected to continue. This particularly concerns extensions into adaptation studies, intralingual translation, translation between semiotic systems (image to text to music, for example), and translation as the form of all interpretation and thus of all understanding, as suggested in Roman Jakobson's work,On Linguistic Aspects of Translation.[citation needed]
Homepages: | https://en.wikipedia.org/wiki/Translation_studies |
Natural language processing(NLP) is a subfield ofcomputer scienceand especiallyartificial intelligence. It is primarily concerned with providing computers with the ability to process data encoded innatural languageand is thus closely related toinformation retrieval,knowledge representationandcomputational linguistics, a subfield oflinguistics.
Major tasks in natural language processing arespeech recognition,text classification,natural-language understanding, andnatural-language generation.
Natural language processing has its roots in the 1950s.[1]Already in 1950,Alan Turingpublished an article titled "Computing Machinery and Intelligence" which proposed what is now called theTuring testas a criterion of intelligence, though at the time that was not articulated as a problem separate from artificial intelligence. The proposed test includes a task that involves the automated interpretation and generation of natural language.
The premise of symbolic NLP is well-summarized byJohn Searle'sChinese roomexperiment: Given a collection of rules (e.g., a Chinese phrasebook, with questions and matching answers), the computer emulates natural language understanding (or other NLP tasks) by applying those rules to the data it confronts.
Up until the 1980s, most natural language processing systems were based on complex sets of hand-written rules. Starting in the late 1980s, however, there was a revolution in natural language processing with the introduction ofmachine learningalgorithms for language processing. This was due to both the steady increase in computational power (seeMoore's law) and the gradual lessening of the dominance ofChomskyantheories of linguistics (e.g.transformational grammar), whose theoretical underpinnings discouraged the sort ofcorpus linguisticsthat underlies the machine-learning approach to language processing.[8]
The symbolic approach, i.e., the hand-coding of a set of rules for manipulating symbols, coupled with a dictionary lookup, was historically the first approach used both by AI in general and by NLP in particular[18][19]– for example, writing grammars or devising heuristic rules for stemming.
Machine learningapproaches, which include both statistical and neural networks, on the other hand, have many advantages over the symbolic approach:
Rule-based systems are still commonly used in certain settings.
In the late 1980s and mid-1990s, the statistical approach ended a period ofAI winter, which was caused by the inefficiencies of the rule-based approaches.[20][21]
The earliestdecision trees, producing systems of hardif–then rules, were still very similar to the old rule-based approaches.
Only the introduction of hiddenMarkov models, applied to part-of-speech tagging, announced the end of the old rule-based approach.
A major drawback of statistical methods is that they require elaboratefeature engineering. Since 2015,[22]the statistical approach has been replaced by theneural networksapproach, usingsemantic networks[23]andword embeddingsto capture semantic properties of words.
Intermediate tasks (e.g., part-of-speech tagging and dependency parsing) are not needed anymore.
Neural machine translation, based on then-newly inventedsequence-to-sequencetransformations, made obsolete the intermediate steps, such as word alignment, previously necessary forstatistical machine translation.
The following is a list of some of the most commonly researched tasks in natural language processing. Some of these tasks have direct real-world applications, while others more commonly serve as subtasks that are used to aid in solving larger tasks.
Though natural language processing tasks are closely intertwined, they can be subdivided into categories for convenience. A coarse division is given below.
Based on long-standing trends in the field, it is possible to extrapolate future directions of NLP. As of 2020, three trends among the topics of the long-standing series of CoNLL Shared Tasks could be observed.[46]
Most higher-level NLP applications involve aspects that emulate intelligent behaviour and apparent comprehension of natural language. More broadly speaking, the technical operationalization of increasingly advanced aspects of cognitive behaviour represents one of the developmental trajectories of NLP (see trends among CoNLL shared tasks above).
Cognitionrefers to "the mental action or process of acquiring knowledge and understanding through thought, experience, and the senses."[47]Cognitive scienceis the interdisciplinary, scientific study of the mind and its processes.[48]Cognitive linguisticsis an interdisciplinary branch of linguistics, combining knowledge and research from both psychology and linguistics.[49]Especially during the age ofsymbolic NLP, the area of computational linguistics maintained strong ties with cognitive studies.
As an example, George Lakoff offers a methodology to build natural language processing (NLP) algorithms through the perspective of cognitive science, along with the findings of cognitive linguistics,[50]with two defining aspects.
Ties with cognitive linguistics are part of the historical heritage of NLP, but they have been less frequently addressed since the statistical turn during the 1990s. Nevertheless, approaches to develop cognitive models towards technically operationalizable frameworks have been pursued in the context of various frameworks, e.g., of cognitive grammar,[53]functional grammar,[54]construction grammar,[55]computational psycholinguistics and cognitive neuroscience (e.g.,ACT-R), however, with limited uptake in mainstream NLP (as measured by presence on major conferences[56]of theACL). More recently, ideas of cognitive NLP have been revived as an approach to achieveexplainability, e.g., under the notion of "cognitive AI".[57]Likewise, ideas of cognitive NLP are inherent to neural modelsmultimodalNLP (although rarely made explicit)[58]and developments inartificial intelligence, specifically tools and technologies usinglarge language modelapproaches[59]and new directions inartificial general intelligencebased on thefree energy principle[60]by British neuroscientist and theoretician at University College LondonKarl J. Friston. | https://en.wikipedia.org/wiki/Natural_Language_Processing |
Natural language generation(NLG) is a software process that producesnatural languageoutput. A widely cited survey of NLG methods describes NLG as "the subfield of artificial intelligence and computational linguistics that is concerned with the construction of computer systems that can produce understandable texts in English or other human languages from some underlying non-linguistic representation of information".[1]
While it is widely agreed that the output of any NLG process is text, there is some disagreement about whether the inputs of an NLG system need to be non-linguistic.[2]Common applications of NLG methods include the production of various reports, for example weather[3]and patient reports;[4]image captions;[5]andchatbotslikeChatGPT.
Automated NLG can be compared to the process humans use when they turn ideas into writing or speech.Psycholinguistsprefer the termlanguage productionfor this process, which can also be described in mathematical terms, or modeled in a computer for psychological research. NLG systems can also be compared totranslatorsof artificial computer languages, such asdecompilersortranspilers, which also produce human-readable code generated from anintermediate representation. Human languages tend to be considerably more complex and allow for much more ambiguity and variety of expression than programming languages, which makes NLG more challenging.
NLG may be viewed as complementary tonatural-language understanding(NLU): whereas in natural-language understanding, the system needs to disambiguate the input sentence to produce the machine representation language, in NLG the system needs to make decisions about how to put a representation into words. The practical considerations in building NLU vs. NLG systems are not symmetrical. NLU needs to deal with ambiguous or erroneous user input, whereas the ideas the system wants to express through NLG are generally known precisely. NLG needs to choose a specific, self-consistent textual representation from many potential representations, whereas NLU generally tries to produce a single, normalized representation of the idea expressed.[6]
NLG has existed sinceELIZAwas developed in the mid 1960s, but the methods were first used commercially in the 1990s.[7]NLG techniques range from simple template-based systems like amail mergethat generatesform letters, to systems that have a complex understanding of human grammar. NLG can also be accomplished by training a statistical model usingmachine learning, typically on a largecorpusof human-written texts.[8]
The Pollen Forecast for Scotland system[9]is a simple example of an NLG system that could essentially be based on a template. This system takes as input six numbers, which give predicted pollen levels in different parts of Scotland. From these numbers, the system generates a short textual summary of pollen levels as its output.
For example, using the historical data for July 1, 2005, the software produces:
Grass pollen levels for Friday have increased from the moderate to high levels of yesterday with values of around 6 to 7 across most parts of the country. However, in Northern areas, pollen levels will be moderate with values of 4.
In contrast, the actual forecast (written by a human meteorologist) from this data was:
Pollen counts are expected to remain high at level 6 over most of Scotland, and even level 7 in the south east. The only relief is in the Northern Isles and far northeast of mainland Scotland with medium levels of pollen count.
Comparing these two illustrates some of the choices that NLG systems must make; these are further discussed below.
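A system of this kind can be approximated with a few lines of template code, as in the sketch below. The thresholds and wording are invented for illustration; the real system's rules are not public in this form.

```python
def describe_level(value):
    # Invented thresholds for illustration only.
    if value >= 6:
        return "high"
    if value >= 4:
        return "moderate"
    return "low"

def pollen_summary(regions):
    """regions: dict mapping region name -> predicted pollen level (0-10)."""
    parts = [f"in {name}, pollen levels will be {describe_level(v)} with values of around {v}"
             for name, v in regions.items()]
    return "Grass pollen levels for Friday: " + "; ".join(parts) + "."

print(pollen_summary({"most parts of the country": 6, "Northern areas": 4}))
```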
The process to generate text can be as simple as keeping a list of canned text that is copied and pasted, possibly linked with some glue text. The results may be satisfactory in simple domains such as horoscope machines or generators of personalized business letters. However, a sophisticated NLG system needs to include stages of planning and merging of information to enable the generation of text that looks natural and does not become repetitive. The typical stages of natural-language generation, as proposed by Dale and Reiter,[6]are listed below (a toy sketch of the whole pipeline follows the list):
Content determination: Deciding what information to mention in the text. For instance, in the pollen example above, deciding whether to explicitly mention that the pollen level is 7 in the south east.
Document structuring: Overall organisation of the information to convey. For example, deciding to describe the areas with high pollen levels first, instead of the areas with low pollen levels.
Aggregation: Merging of similar sentences to improve readability and naturalness. For instance, merging the sentences "Grass pollen levels for Friday have increased from the moderate to high levels of yesterday" and "Grass pollen levels will be around 6 to 7 across most parts of the country" into the single sentence "Grass pollen levels for Friday have increased from the moderate to high levels of yesterday with values of around 6 to 7 across most parts of the country."
Lexical choice: Putting words to the concepts. For example, deciding whether "medium" or "moderate" should be used when describing a pollen level of 4.
Referring expression generation: Creating referring expressions that identify objects and regions. For example, deciding to use "in the Northern Isles and far northeast of mainland Scotland" to refer to a certain region in Scotland. This task also includes making decisions about pronouns and other types of anaphora.
Realization: Creating the actual text, which should be correct according to the rules of syntax, morphology, and orthography. For example, using "will be" for the future tense of "to be".
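The stages above can be sketched as a toy pipeline. Everything here — the data format, the lexical-choice table, the one-sentence realizer — is a simplified assumption meant only to show how the stages hand data to one another.

```python
LEXICON = {4: "moderate", 6: "high", 7: "high"}  # lexical choice: level -> word (assumed)

def content_determination(data):
    # Keep only regions whose level is worth reporting (assumed threshold).
    return {region: v for region, v in data.items() if v >= 4}

def document_structuring(selected):
    # Order regions from highest to lowest pollen level.
    return sorted(selected.items(), key=lambda kv: -kv[1])

def realize(ordered):
    # Aggregation + realization: join all clauses into one sentence.
    clauses = [f"pollen levels are {LEXICON[v]} (level {v}) in {region}"
               for region, v in ordered]
    return "Today " + ", and ".join(clauses) + "."

data = {"the south east": 7, "most of Scotland": 6, "the Northern Isles": 4}
print(realize(document_structuring(content_determination(data))))
```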
An alternative approach to NLG is to use "end-to-end" machine learning to build a system, without having separate stages as above.[10]In other words, we build an NLG system by training a machine learning algorithm (often an LSTM) on a large data set of input data and corresponding (human-written) output texts. The end-to-end approach has perhaps been most successful in image captioning,[11]that is, automatically generating a textual caption for an image.
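A minimal PyTorch skeleton of such an end-to-end setup is sketched below: an LSTM encoder over input tokens and an LSTM decoder over output tokens. All dimensions and vocabulary sizes are dummy values; a real system would add attention, batching, and a trained vocabulary.

```python
import torch
import torch.nn as nn

class Seq2Seq(nn.Module):
    def __init__(self, in_vocab, out_vocab, hidden=128):
        super().__init__()
        self.src_emb = nn.Embedding(in_vocab, hidden)
        self.tgt_emb = nn.Embedding(out_vocab, hidden)
        self.encoder = nn.LSTM(hidden, hidden, batch_first=True)
        self.decoder = nn.LSTM(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, out_vocab)

    def forward(self, src, tgt):
        _, state = self.encoder(self.src_emb(src))           # encode the input sequence
        dec_out, _ = self.decoder(self.tgt_emb(tgt), state)  # condition the decoder on it
        return self.out(dec_out)                             # per-token output logits

model = Seq2Seq(in_vocab=100, out_vocab=100)
src = torch.randint(0, 100, (1, 6))   # dummy input tokens (e.g., linearized data records)
tgt = torch.randint(0, 100, (1, 8))   # dummy target text tokens (teacher forcing)
logits = model(src, tgt)              # shape: (1, 8, 100)
loss = nn.CrossEntropyLoss()(logits.transpose(1, 2), tgt)
```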
From a commercial perspective, the most successful NLG applications
have beendata-to-textsystems whichgenerate textual summariesof databases and data sets; these
systems usually performdata analysisas well as text generation. Research has shown that textual summaries can be more effective than graphs and other visuals for decision support,[12][13][14]and that computer-generated texts can be superior (from the reader's perspective) to human-written texts.[15]
The first commercial data-to-text systems produced weather forecasts from weather data. The earliest such system to be deployed was FoG,[3]which was used by Environment Canada to generate weather forecasts in French and English in the early 1990s. The success of FoG triggered other work, both research and commercial. Recent applications include theUK Met Office'stext-enhanced forecast.[16]
Data-to-text systems have since been applied in a range of settings. Following the minor earthquake near Beverly Hills, California on March 17, 2014, The Los Angeles Times reported details about the time, location and strength of the quake within 3 minutes of the event. This report was automatically generated by a 'robo-journalist', which converted the incoming data into text via a preset template.[17][18]Currently there is considerable commercial interest in using NLG to summarise financial and business data. Indeed,Gartnerhas said that NLG will become a standard feature of 90% of modern BI and analytics platforms.[19]NLG is also being used commercially inautomated journalism,chatbots, generating product descriptions for e-commerce sites, summarising medical records,[20][4]and enhancingaccessibility(for example by describing graphs and data sets to blind people[21]).
An example of an interactive use of NLG is theWYSIWYMframework. It stands forWhat you see is what you meantand allows users to see and manipulate the continuously rendered view (NLG output) of an underlying formal language document (NLG input), thereby editing the formal language without learning it.
Looking ahead, the current progress in data-to-text generation paves the way for tailoring texts to specific audiences. For example, data from babies in neonatal care can be converted into text differently in a clinical setting, with different levels of technical detail and explanatory language, depending on intended recipient of the text (doctor, nurse, patient). The same idea can be applied in a sports setting, with different reports generated for fans of specific teams.[22]
Over the past few years, there has been an increased interest inautomatically generating captionsfor images, as part of a broader endeavor to investigate the interface between vision and language. A case of data-to-text generation, the algorithm of image captioning (or automatic image description) involves taking an image, analyzing its visual content, and generating a textual description (typically a sentence) that verbalizes the most prominent aspects of the image.
An image captioning system involves two sub-tasks. In Image Analysis, features and attributes of an image are detected and labelled, before mapping these outputs to linguistic structures. Recent research utilizes deep learning approaches through features from a pre-trained convolutional neural network such as AlexNet, VGG or Caffe, where caption generators use an activation layer from the pre-trained network as their input features. Text Generation, the second task, is performed using a wide range of techniques. For example, in the Midge system, input images are represented as triples consisting of object/stuff detections, action/pose detections and spatial relations. These are subsequently mapped to <noun, verb, preposition> triples and realized using a tree substitution grammar.[22]
A common method in image captioning is to use a vision model (such as a ResNet) to encode an image into a vector, then use a language model (such as an RNN) to decode the vector into a caption.[23][24]
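The encode-then-decode pattern can be sketched with torchvision (version 0.13 or newer assumed): a ResNet minus its classification head maps the image to a feature vector, which seeds an LSTM that scores caption tokens. This is an untrained structural sketch; vocabulary size and dimensions are placeholders.

```python
import torch
import torch.nn as nn
import torchvision.models as models

resnet = models.resnet18(weights=None)                   # vision encoder (untrained here)
encoder = nn.Sequential(*list(resnet.children())[:-1])   # drop the final FC layer

vocab, hidden = 1000, 512                                # 512 matches resnet18's feature size
embed = nn.Embedding(vocab, hidden)
decoder = nn.LSTM(hidden, hidden, batch_first=True)
to_vocab = nn.Linear(hidden, vocab)

image = torch.randn(1, 3, 224, 224)                      # dummy image
feat = encoder(image).flatten(1)                         # (1, 512) image vector
h0 = feat.unsqueeze(0)                                   # image vector as initial hidden state
c0 = torch.zeros_like(h0)
tokens = torch.randint(0, vocab, (1, 5))                 # dummy partial caption
out, _ = decoder(embed(tokens), (h0, c0))
logits = to_vocab(out)                                   # next-token scores at each position
```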
Despite advancements, challenges and opportunities remain in image captioning research. Although the recent introduction of Flickr30K, MS COCO and other large datasets has enabled the training of more complex models such as neural networks, it has been argued that research in image captioning could benefit from larger and more diversified datasets. Designing automatic measures that can mimic human judgments in evaluating the suitability of image descriptions is another need in the area. Other open challenges include visual question-answering (VQA),[25]as well as the construction and evaluation of multilingual repositories for image description.[22]
Another area where NLG has been widely applied is automateddialoguesystems, frequently in the form of chatbots. Achatbotor chatterbot is asoftwareapplication used to conduct an on-line chatconversationvia text ortext-to-speech, in lieu of providing direct contact with a live human agent. Whilenatural language processing(NLP) techniques are applied in deciphering human input, NLG informs the output part of the chatbot algorithms in facilitating real-time dialogues.
Early chatbot systems, including Cleverbot created by Rollo Carpenter in 1988 and published in 1997,[citation needed]reply to questions by identifying how a human has responded to the same question in a conversation database using information retrieval (IR) techniques.[citation needed]Modern chatbot systems predominantly rely on machine learning (ML) models, such as sequence-to-sequence learning and reinforcement learning, to generate natural language output. Hybrid models have also been explored. For example, the Alibaba shopping assistant first uses an IR approach to retrieve the best candidates from the knowledge base, then uses an ML-driven seq2seq model to re-rank the candidate responses and generate the answer.[26]
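A bare-bones version of the IR step can be built with TF-IDF similarity over a question-answer base, as below; hybrid systems like the Alibaba assistant then re-rank such candidates with a learned model. All data here is made up for illustration.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

qa_base = {
    "what time do you open": "We open at 9am.",
    "do you ship internationally": "Yes, we ship to most countries.",
    "how can I return an item": "Returns are accepted within 30 days.",
}

questions = list(qa_base)
vec = TfidfVectorizer().fit(questions)
q_matrix = vec.transform(questions)

def reply(user_input):
    # Retrieve the stored question most similar to the user's input.
    sims = cosine_similarity(vec.transform([user_input]), q_matrix)[0]
    return qa_base[questions[sims.argmax()]]

print(reply("when do you open"))   # -> "We open at 9am."
```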
Creative language generation by NLG has been hypothesized since the field's origins. A recent pioneer in the area is Philip Parker, who has developed an arsenal of algorithms capable of automatically generating textbooks, crossword puzzles, poems and books on topics ranging from bookbinding to cataracts.[27]The advent of large pretrained transformer-based language models such as GPT-3 has also enabled breakthroughs, with such models demonstrating recognizable ability for creative-writing tasks.[28]
A related area of NLG application is computational humor production. JAPE (Joke Analysis and Production Engine) is one of the earliest large, automated humor production systems that uses a hand-coded template-based approach to create punning riddles for children. HAHAcronym creates humorous reinterpretations of any given acronym, as well as proposing new fitting acronyms given some keywords.[29]
Despite this progress, many challenges remain in producing automated creative and humorous content that rivals human output. In an experiment for generating satirical headlines, outputs of the best BERT-based model were perceived as funny 9.4% of the time (while real headlines from The Onion scored 38.4%), and a GPT-2 model fine-tuned on satirical headlines achieved 6.9%.[30]It has been pointed out that two main issues with humor-generation systems are the lack of annotated data sets and the lack of formal evaluation methods,[29]which could be applicable to other creative content generation. Some have argued that, relative to other applications, there has been a lack of attention to creative aspects of language production within NLG. NLG researchers stand to benefit from insights into what constitutes creative language production, as well as structural features of narrative that have the potential to improve NLG output even in data-to-text systems.[22]
As in other scientific fields, NLG researchers need to test how well their systems, modules, and algorithms work. This is called evaluation. There are three basic techniques for evaluating NLG systems: task-based (extrinsic) evaluation, where the effect of the generated text on a person carrying out a task is measured; human ratings, where people judge the quality and usefulness of generated texts; and automatic metrics, where generated texts are compared against human-written reference texts.
The ultimate criterion is how useful NLG systems are at helping people, which is assessed by the first of the above techniques. However, task-based evaluations are time-consuming and expensive, and can be difficult to carry out (especially if they require subjects with specialised expertise, such as doctors). Hence (as in other areas of NLP) task-based evaluations are the exception, not the norm.
Recently, researchers have been assessing how well human ratings and metrics correlate with (predict) task-based evaluations. Work is being conducted in the context of Generation Challenges[31]shared-task events. Initial results suggest that human ratings are much better than metrics in this regard. In other words, human ratings usually do predict task-effectiveness at least to some degree (although there are exceptions), while ratings produced by metrics often do not predict task-effectiveness well. These results are preliminary. In any case, human ratings are the most popular evaluation technique in NLG; this is in contrast to machine translation, where metrics are widely used.
An AI can be graded onfaithfulnessto its training data or, alternatively, onfactuality. A response that reflects the training data but not reality is faithful but not factual. A confident but unfaithful response is ahallucination. In Natural Language Processing, a hallucination is often defined as "generated content that is nonsensical or unfaithful to the provided source content".[32] | https://en.wikipedia.org/wiki/Natural-language_generation |
Fleiss' kappa (named after Joseph L. Fleiss) is a statistical measure for assessing the reliability of agreement between a fixed number of raters when assigning categorical ratings to a number of items or classifying items. This contrasts with other kappas such as Cohen's kappa, which only work when assessing the agreement between not more than two raters or the intra-rater reliability (for one appraiser versus themself). The measure calculates the degree of agreement in classification over that which would be expected by chance.
Fleiss' kappa can be used with binary or nominal-scale data. It can also be applied to ordinal data (ranked data): the Minitab online documentation[1]gives an example. However, this documentation notes: "When you have ordinal ratings, such as defect severity ratings on a scale of 1–5, Kendall's coefficients, which account for ordering, are usually more appropriate statistics to determine association than kappa alone." Keep in mind, however, that Kendall's rank coefficients are only appropriate for rank data.
Fleiss' kappa is a generalisation of Scott's pi statistic,[2]a statistical measure of inter-rater reliability.[3]It is also related to Cohen's kappa statistic and Youden's J statistic, which may be more appropriate in certain instances.[4]Whereas Scott's pi and Cohen's kappa work for only two raters, Fleiss' kappa works for any number of raters giving categorical ratings to a fixed number of items, on the condition that for each item raters are randomly sampled. It can be interpreted as expressing the extent to which the observed amount of agreement among raters exceeds what would be expected if all raters made their ratings completely randomly. It is important to note that whereas Cohen's kappa assumes the same two raters have rated a set of items, Fleiss' kappa specifically allows that although there are a fixed number of raters (e.g., three), different items may be rated by different individuals.[3]That is, Item 1 is rated by Raters A, B, and C; but Item 2 could be rated by Raters D, E, and F. The condition of random sampling among raters makes Fleiss' kappa not suited for cases where all raters rate all patients.[5]
Agreement can be thought of as follows: if a fixed number of people assign numerical ratings to a number of items, then the kappa gives a measure of how consistent the ratings are. The kappa, $\kappa$, can be defined as
(1) $\kappa = \dfrac{\bar{P} - \bar{P}_e}{1 - \bar{P}_e}$
The factor $1 - \bar{P}_e$ gives the degree of agreement that is attainable above chance, and $\bar{P} - \bar{P}_e$ gives the degree of agreement actually achieved above chance. If the raters are in complete agreement then $\kappa = 1$. If there is no agreement among the raters (other than what would be expected by chance) then $\kappa \leq 0$.
An example of using Fleiss' kappa may be the following: consider several psychiatrists who are asked to look at ten patients. For each patient, 14 psychiatrists give one of possibly five diagnoses. These are compiled into a matrix, and Fleiss' kappa can be computed from this matrix (see example below) to show the degree of agreement between the psychiatrists above the level of agreement expected by chance.
Let $N$ be the total number of elements, let $n$ be the number of ratings per element, and let $k$ be the number of categories into which assignments are made. The elements are indexed by $i = 1, \ldots, N$ and the categories are indexed by $j = 1, \ldots, k$. Let $n_{ij}$ represent the number of raters who assigned the $i$-th element to the $j$-th category.
First calculate $p_j$, the proportion of all assignments which were to the $j$-th category:
(2) $p_j = \dfrac{1}{Nn} \sum_{i=1}^{N} n_{ij}, \quad\quad 1 = \sum_{j=1}^{k} p_j$
Now calculate $P_i$, the extent to which raters agree for the $i$-th element (i.e., compute how many rater–rater pairs are in agreement, relative to the number of all possible rater–rater pairs):
(3) $P_i = \dfrac{1}{n(n-1)} \sum_{j=1}^{k} n_{ij}(n_{ij} - 1) = \dfrac{1}{n(n-1)} \left[ \sum_{j=1}^{k} n_{ij}^2 - n \right]$
Note that $P_i$ is bound between 0, when ratings are assigned equally over all categories, and 1, when all ratings are assigned to a single category.
Now compute $\bar{P}$, the mean of the $P_i$'s, and $\bar{P}_e$, which go into the formula for $\kappa$:
(4) $\bar{P} = \dfrac{1}{N} \sum_{i=1}^{N} P_i = \dfrac{1}{Nn(n-1)} \left[ \sum_{i=1}^{N} \sum_{j=1}^{k} n_{ij}^2 - Nn \right]$
(5) $\bar{P}_e = \sum_{j=1}^{k} p_j^2$
In the following example, for each of ten "subjects" ($N$), fourteen raters ($n$), sampled from a larger group, assign one of five categories ($k$). The categories are presented in the columns, while the subjects are presented in the rows. Each cell lists the number of raters who assigned the indicated (row) subject to the indicated (column) category.
In the following table, $N = 10$, $n = 14$, and $k = 5$. The value $p_j$ is the proportion of all assignments that were made to the $j$-th category. For example, taking the first column, $p_1 = \frac{0+0+0+0+2+7+3+2+6+0}{140} = 0.143$, and taking the second row, $P_2 = \frac{1}{14(14-1)} \left( 0^2 + 2^2 + 6^2 + 4^2 + 2^2 - 14 \right) = 0.253$.
In order to calculate $\bar{P}$, we need to know the sum of $P_i$: $\sum_{i=1}^{N} P_i = 1.000 + 0.253 + \cdots + 0.286 + 0.286 = 3.780$.
Over the whole sheet, $\bar{P} = \frac{3.780}{10} = 0.378$ and $\bar{P}_e = 0.213$, giving $\kappa = \frac{0.378 - 0.213}{1 - 0.213} = 0.210$.
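The computation above can be reproduced with a short script. The following is a minimal sketch in Python; the ratings matrix is the standard worked-example matrix whose first column and second row appear in the text above (the remaining cells are reconstructed here, so treat them as illustrative), and the function name is our own rather than part of any particular library.

```python
import numpy as np

def fleiss_kappa(ratings: np.ndarray) -> float:
    """Fleiss' kappa for an N x k matrix; ratings[i, j] = number of
    raters who assigned subject i to category j."""
    N, k = ratings.shape
    n = ratings.sum(axis=1)[0]                                # raters per subject (assumed constant)
    p_j = ratings.sum(axis=0) / (N * n)                       # eq. (2)
    P_i = (np.sum(ratings ** 2, axis=1) - n) / (n * (n - 1))  # eq. (3)
    P_bar = P_i.mean()                                        # eq. (4)
    P_e = np.sum(p_j ** 2)                                    # eq. (5)
    return (P_bar - P_e) / (1 - P_e)                          # eq. (1)

# Worked-example matrix: 10 subjects x 5 categories, 14 raters per subject.
ratings = np.array([
    [0, 0, 0, 0, 14],
    [0, 2, 6, 4, 2],
    [0, 0, 3, 5, 6],
    [0, 3, 9, 2, 0],
    [2, 2, 8, 1, 1],
    [7, 7, 0, 0, 0],
    [3, 2, 6, 3, 0],
    [2, 5, 3, 2, 2],
    [6, 5, 2, 1, 0],
    [0, 2, 2, 3, 7],
])
print(round(fleiss_kappa(ratings), 3))  # 0.21, matching the worked example
```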
Landis & Koch (1977) gave the following guideline for interpreting $\kappa$ values for a 2-annotator, 2-class example: values below 0 indicate poor agreement, 0.01–0.20 slight, 0.21–0.40 fair, 0.41–0.60 moderate, 0.61–0.80 substantial, and 0.81–1.00 almost perfect agreement.[6] This scale is however by no means universally accepted. Landis and Koch supplied no evidence to support it, basing it instead on personal opinion. It has been noted that these guidelines may be more harmful than helpful,[7] as the number of categories and subjects will affect the magnitude of the value. For example, the kappa is higher when there are fewer categories.[8]
Statistical packages can calculate a standard score (Z-score) for Cohen's kappa or Fleiss' kappa, which can be converted into a p-value. However, even when the p-value reaches the threshold of statistical significance (typically less than 0.05), it only indicates that the agreement between raters is significantly better than would be expected by chance. The p-value does not tell, by itself, whether the agreement is good enough to have high predictive value. | https://en.wikipedia.org/wiki/Fleiss%27_kappa
Scott's pi (named after William A. Scott) is a statistic for measuring inter-rater reliability for nominal data in communication studies. Textual entities are annotated with categories by different annotators, and various measures are used to assess the extent of agreement between the annotators, one of which is Scott's pi. Since automatically annotating text is a popular problem in natural language processing, and the goal is to get the computer program that is being developed to agree with the humans in the annotations it creates, assessing the extent to which humans agree with each other is important for establishing a reasonable upper limit on computer performance.
Scott's pi is similar to Cohen's kappa in that both improve on simple observed agreement by factoring in the extent of agreement that might be expected by chance. However, in each statistic the expected agreement is calculated slightly differently. Scott's pi compares to a baseline in which the annotators are not only independent but also have the same distribution of responses; Cohen's kappa compares to a baseline in which the annotators are assumed to be independent but to have their own, different distributions of responses. Thus, Scott's pi measures disagreements relative to the level expected by pure chance if the annotators were independent and identically distributed, whereas Cohen's kappa explicitly ignores any systematic, average disagreement between the annotators, and so assesses only the level of randomly varying disagreements between them. Scott's pi is extended to more than two annotators by Fleiss' kappa.
The equation for Scott's pi, as in Cohen's kappa, is:

$\pi = \dfrac{\Pr(a) - \Pr(e)}{1 - \Pr(e)}$

where Pr(a) is the observed agreement and Pr(e) is the agreement expected by chance.
However, Pr(e) is calculated using squared "joint proportions", which are squared arithmetic means of the marginal proportions (whereas Cohen's kappa uses squared geometric means of them).
Confusion matrix for two annotators (rows: Annotator A, columns: Annotator B), three categories {Yes, No, Maybe} and 45 items rated (90 ratings for 2 annotators):

| | Yes | No | Maybe | Total |
|---|---|---|---|---|
| Yes | 1 | 2 | 3 | 6 |
| No | 4 | 5 | 6 | 15 |
| Maybe | 7 | 8 | 9 | 24 |
| Total | 12 | 15 | 18 | 45 |
To calculate the expected agreement, sum marginals across annotators and divide by the total number of ratings to obtain joint proportions, then square and total these: the joint proportions are (6 + 12)/90 = 0.200 for Yes, (15 + 15)/90 = 0.333 for No, and (24 + 18)/90 = 0.467 for Maybe, so Pr(e) = 0.200² + 0.333² + 0.467² = 0.369.
To calculate observed agreement, divide the number of items on which the annotators agreed by the total number of items. In this case, Pr(a) = (1 + 5 + 9)/45 = 0.333.
Given that Pr(e) = 0.369, Scott's pi is then $\pi = \frac{0.333 - 0.369}{1 - 0.369} = -0.057$. | https://en.wikipedia.org/wiki/Scott%27s_Pi
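A minimal Python sketch of the same calculation, starting from two annotators' parallel label sequences rather than a confusion matrix (the function name is our own; the small usage example at the end is invented for illustration):

```python
from collections import Counter

def scotts_pi(labels_a, labels_b):
    """Scott's pi for two annotators' parallel label sequences."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    # Observed agreement: proportion of items with identical labels.
    pr_a = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Joint proportions: arithmetic mean of the two annotators' marginals.
    counts = Counter(labels_a) + Counter(labels_b)
    pr_e = sum((c / (2 * n)) ** 2 for c in counts.values())
    return (pr_a - pr_e) / (1 - pr_e)

a = ["yes", "no", "maybe", "yes"]
b = ["yes", "no", "no", "maybe"]
print(round(scotts_pi(a, b), 3))  # ~0.238 on this toy input
```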
In statistics, the correlation ratio is a measure of the curvilinear relationship between the statistical dispersion within individual categories and the dispersion across the whole population or sample. The measure is defined as the ratio of two standard deviations representing these types of variation. The context here is the same as that of the intraclass correlation coefficient, whose value is the square of the correlation ratio.
Suppose each observation is $y_{xi}$ where $x$ indicates the category that observation is in and $i$ is the label of the particular observation. Let $n_x$ be the number of observations in category $x$ and

$\bar{y}_x = \dfrac{\sum_i y_{xi}}{n_x} \quad \text{and} \quad \bar{y} = \dfrac{\sum_x n_x \bar{y}_x}{\sum_x n_x},$
where $\bar{y}_x$ is the mean of category $x$ and $\bar{y}$ is the mean of the whole population. The correlation ratio $\eta$ (eta) is defined so as to satisfy

$\eta^2 = \dfrac{\sum_x n_x (\bar{y}_x - \bar{y})^2}{\sum_{x,i} (y_{xi} - \bar{y})^2},$
which can be written as

$\eta^2 = \dfrac{\sigma_{\bar{y}}^2}{\sigma_y^2}, \quad \text{where} \quad \sigma_{\bar{y}}^2 = \dfrac{\sum_x n_x (\bar{y}_x - \bar{y})^2}{\sum_x n_x} \quad \text{and} \quad \sigma_y^2 = \dfrac{\sum_{x,i} (y_{xi} - \bar{y})^2}{\sum_x n_x},$
i.e. the weighted variance of the category means divided by the variance of all samples.
If the relationship between values of $x$ and values of $\bar{y}_x$ is linear (which is certainly true when there are only two possibilities for $x$) this will give the same result as the square of Pearson's correlation coefficient; otherwise the correlation ratio will be larger in magnitude. It can therefore be used for judging non-linear relationships.
The correlation ratio $\eta$ takes values between 0 and 1. The limit $\eta = 0$ represents the special case of no dispersion among the means of the different categories, while $\eta = 1$ refers to no dispersion within the respective categories. $\eta$ is undefined when all data points of the complete population take the same value.
Suppose there is a distribution of test scores in three topics (categories):

- Algebra: 45, 70, 29, 15 and 21 (5 scores)
- Geometry: 40, 20, 30 and 42 (4 scores)
- Statistics: 65, 95, 80, 70, 85 and 73 (6 scores)
Then the subject averages are 36, 33 and 78, with an overall average of 52.
The sums of squares of the differences from the subject averages are 1952 for Algebra, 308 for Geometry and 600 for Statistics, adding to 2860. The overall sum of squares of the differences from the overall average is 9640. The difference of 6780 between these is also the weighted sum of the squares of the differences between the subject averages and the overall average:

$5(36 - 52)^2 + 4(33 - 52)^2 + 6(78 - 52)^2 = 1280 + 1444 + 4056 = 6780.$
This gives

$\eta^2 = \dfrac{6780}{9640} = 0.7033\ldots$
suggesting that most of the overall dispersion is a result of differences between topics, rather than within topics. Taking the square root gives

$\eta = \sqrt{0.7033\ldots} = 0.8386\ldots$
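The example can be checked with a short script. This is a minimal sketch (function name our own); the score lists are those of the example above, whose group means and sums of squares match the figures quoted in the text.

```python
import numpy as np

def correlation_ratio(groups):
    """eta for a list of 1-D arrays, one array of observations per category."""
    all_y = np.concatenate(groups)
    grand_mean = all_y.mean()
    # Weighted sum of squared deviations of category means (between-group).
    between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
    # Total sum of squared deviations of all observations.
    total = ((all_y - grand_mean) ** 2).sum()
    return np.sqrt(between / total)

scores = [
    np.array([45, 70, 29, 15, 21]),      # Algebra (mean 36)
    np.array([40, 20, 30, 42]),          # Geometry (mean 33)
    np.array([65, 95, 80, 70, 85, 73]),  # Statistics (mean 78)
]
print(round(correlation_ratio(scores), 4))  # 0.8386
```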
For $\eta = 1$ the overall sample dispersion is purely due to dispersion among the categories and not at all due to dispersion within the individual categories. For quick comprehension, simply imagine all Algebra, Geometry, and Statistics scores being the same, respectively, e.g. 5 times 36, 4 times 33, 6 times 78.
The limit $\eta = 0$ refers to the case without dispersion among the categories contributing to the overall dispersion. The trivial requirement for this extreme is that all category means are the same.
The correlation ratio was introduced by Karl Pearson as part of analysis of variance. Ronald Fisher commented:
"As a descriptive statistic the utility of the correlation ratio is extremely limited. It will be noticed that the number ofdegrees of freedomin the numerator ofη2{\displaystyle \eta ^{2}}depends on the number of the arrays"[1]
to which Egon Pearson (Karl's son) responded by saying
"Again, a long-established method such as the use of the correlation ratio [§45 The "Correlation Ratio" η] is passed over in a few words without adequate description, which is perhaps hardly fair to the student who is given no opportunity of judging its scope for himself."[2] | https://en.wikipedia.org/wiki/Correlation_ratio |
In survey research, the design effect is a number that shows how well a sample of people may represent a larger group of people for a specific measure of interest (such as the mean). This is important when the sample comes from a sampling method that is different than just picking people using a simple random sample.
The design effect is a positive real number, represented by the symbol $\text{Deff}$. If $\text{Deff} = 1$, then the sample was selected in a way that is just as good as if people were picked randomly. When $\text{Deff} > 1$, then inference from the data collected is not as accurate as it could have been if people were picked randomly.
When researchers use complicated methods to pick their sample, they use the design effect to check and adjust their results. It may also be used when planning a study in order to determine the sample size.
In survey methodology, the design effect (generally denoted as $\text{Deff}$, $D_{\text{eff}}$, or $D_{\text{eft}}^2$) is a measure of the expected impact of a sampling design on the variance of an estimator for some parameter of a population. It is calculated as the ratio of the variance of an estimator based on a sample from an (often) complex sampling design, to the variance of an alternative estimator based on a simple random sample (SRS) of the same number of elements.[1]: 258 The $\text{Deff}$ (be it estimated, or known a priori) can be used to evaluate the variance of an estimator in cases where the sample is not drawn using simple random sampling. It may also be useful in sample size calculations[2] and for quantifying the representativeness of samples collected with various sampling designs.
The design effect is a positive real number that indicates an inflation ($\text{Deff} > 1$), or deflation ($\text{Deff} < 1$), in the variance of an estimator for some parameter, that is due to the study not using SRS (with $\text{Deff} = 1$, when the variances are identical).[3]: 53, 54 Intuitively we can get $\text{Deff} < 1$ when we have some a priori knowledge we can exploit during the sampling process (which is somewhat rare). And, in contrast, we often get $\text{Deff} > 1$ when we need to compensate for some limitation in our ability to collect data (which is more common). Some sampling designs that could introduce $\text{Deff}$ generally greater than 1 include: cluster sampling (such as when there is correlation between observations), stratified sampling (with disproportionate allocation to the strata sizes), cluster randomized controlled trials, disproportional (unequal probability) samples (e.g. Poisson sampling), statistical adjustments of the data for non-coverage or non-response, and many others. Stratified sampling can yield $\text{Deff}$ that is smaller than 1 when using proportionate allocation to strata sizes (when these are known a priori, and correlated to the outcome of interest) or optimum allocation (when the variance differs between strata and is known a priori).[citation needed]
Many calculations (and estimators) have been proposed in the literature for how a known sampling design influences the variance of estimators of interest, either increasing or decreasing it. Generally, the design effect varies among different statistics of interest, such as the total or ratio mean. It also matters if the sampling design is correlated with the outcome of interest. For example, a possible sampling design might be such that each element in the sample may have a different probability to be selected. In such cases, the level of correlation between the probability of selection for an element and its measured outcome can have a direct influence on the subsequent design effect. Lastly, the design effect can be influenced by the distribution of the outcome itself. All of these factors should be considered when estimating and using the design effect in practice.[4]: 13
The term "design effect" was coined byLeslie Kishin his 1965 book "Survey Sampling."[1]: 88, 258In it, Kish proposed the general definition for the design effect,[a]as well as formulas for thedesign effect of cluster sampling(with intraclass correlation);[1]: 162and the famousdesign effect formula for unequal probability sampling.[1]: 427These are often known as "Kish's design effect", and were later combined into a single formula.
In a 1995 paper,[5]: 73 Kish mentions that a similar concept, termed "Lexis ratio", was described at the end of the 19th century. The closely related intraclass correlation was described by Fisher in 1950, while computations of ratios of variances were already published by Kish and others from the late 1940s to the 1950s. One of the precursors to Kish's definition was work done by Cornfield in 1951.[6][4]
In his 1995 paper, Kish proposed that considering the design effect is necessary when averaging the same measured quantity from multiple surveys conducted over a period of time.[5]: 57–62 He also suggested that the design effect should be considered when extrapolating from the error of simple statistics (e.g. the mean) to more complex ones (e.g. regression coefficients). However, when analyzing data (e.g., using survey data to fit models), $\text{Deff}$ values are less useful nowadays due to the availability of specialized software for analyzing survey data. Prior to the development of software that computes standard errors for many types of designs and estimates, analysts would adjust standard errors produced by software that assumed all records in a dataset were i.i.d. by multiplying them by a $\text{Deft}$ (see the Deft definition below).[citation needed]
The design effect, commonly denoted by $\text{Deff}$ (or $D_{\text{eff}}$, sometimes with additional subscripts), is the ratio of two theoretical variances for estimators of some parameter ($\theta$):[1][7]

$\text{Deff} = \dfrac{\text{var}_p({\hat {\theta }})}{\text{var}_{\text{SRS}}({\hat {\theta }})}$
So that:

$\text{var}_p({\hat {\theta }}) = \text{Deff} \times \text{var}_{\text{SRS}}({\hat {\theta }})$
In other words, $\text{Deff}$ measures the extent to which the variance has increased (or, in some cases, decreased) because the sample was drawn and adjusted to a specific sampling design (e.g., using weights or other measures) compared to if the sample was from a simple random sample (without replacement). Notice how the definition of $\text{Deff}$ is based on parameters of the population that are often unknown, and that are hard to estimate directly. Specifically, the definition involves the variances of estimators under two different sampling designs, even though only a single sampling design is used in practice.[citation needed]
For example, when estimating the population mean, the $\text{Deff}$ (for some sampling design p) is:[4]: 4[3]: 54[b]

$\text{Deff} = \dfrac{\text{var}_p({\bar {y}}_p)}{(1-f)\,\dfrac{S_y^2}{n}}$
Where $n$ is the sample size, $f = n/N$ is the fraction of the sample from the population, $(1-f)$ is the (squared) finite population correction (FPC), $S_y^2$ is the unbiased sample variance, and $\text{var}_p({\bar {y}}_p)$ is some estimator of the variance of the mean under the sampling design. The issue with the above formula is that it is extremely rare to be able to directly estimate the variance of the estimated mean under two different sampling designs, since most studies rely on only a single sampling design.
There are many ways of calculating $\text{Deff}$, depending on the parameter of interest (e.g. population total, population mean, quantiles, ratio of quantities etc.), the estimator used, and the sampling design (e.g. clustered sampling, stratified sampling, post-stratification, multi-stage sampling, etc.).[8]: 98 The process of estimating $\text{Deff}$ for specific designs will be described in the following section.
A related quantity to $\text{Deff}$, proposed by Kish in 1995, is the Design Effect Factor, abbreviated as $\text{Deft}$ (or also $D_{\text{eft}}$).[5]: 56[4] It is defined as the square root of the variance ratios while also having the denominator use a simple random sample with replacement (SRSWR), instead of without replacement (SRSWOR):
$\text{Deft} = \sqrt{\dfrac{\text{var}({\hat {\theta }}_w)}{\text{var}({\hat {\theta }}_{\text{SRSWR}})}}$
In this later definition (proposed in 1995, vs 1965) Kish argued in favor of using $\text{Deft}^2$ over $\text{Deff}$ for several reasons. It was argued that SRS "without replacement" (with its positive effect on the variance) should be captured in the denominator part of the definition of the design effect, since it is part of the sampling design. Also, since the factor is often used in confidence intervals, it was claimed that using $\text{Deft}$ will be simpler than writing $\sqrt{\text{Deff}}$. It is also said that for many cases when the population is very large, $\text{Deft}$ is (almost) the square root of $\text{Deff}$ ($\text{Deft} \approx \sqrt{\text{Deff}}$), hence it is easier to use than exactly calculating the finite population correction (FPC).[citation needed][c]
Even so, in various cases a researcher might approximate the $\text{Deft}$ by calculating the variance in the numerator while assuming SRS with replacement (SRSWR) instead of SRS without replacement (SRSWOR), even if it is not precise. For example, consider a multistage design with primary sampling units (PSUs) selected systematically with probability proportional to some measure of size from a list sorted in a particular way (say, by number of households in each PSU). Also, let it be combined with an estimator that uses raking to match the totals for several demographic variables. In such a design, the joint selection probabilities for the PSUs, which are needed for a without-replacement variance estimator, are 0 for some pairs of PSUs - implying that an exact design-based (i.e., repeated sampling) variance estimator does not exist. Another example is when a public use file issued by some government agency is used for analysis. In such a case the information on joint selection probabilities of first-stage units is almost never released. As a result, an analyst cannot estimate a without-replacement variance for the numerator even if desired. The standard workaround is to compute a variance estimator as if the PSUs were selected with replacement. This is the default choice in software packages such as Stata, the R survey package, and the SAS survey procedures.[citation needed]
The effective sample size, defined by Kish in 1965, is calculated by dividing the original sample size by the design effect.[1]: 162, 259[9]: 190, 192 Namely:

$n_{\text{eff}} = \dfrac{n}{\text{Deff}}$
This quantity reflects what would be the sample size that is needed to achieve the current variance of the estimator (for some parameter) with the existing design, if the sample design (and its relevant parameter estimator) were based on a simple random sample.[10]
A related quantity is the effective sample size ratio, which can be calculated by simply taking the inverse of $\text{Deff}$ (i.e., $\frac{n_{\text{eff}}}{n} = \frac{1}{\text{Deff}}$).
For example, let the design effect for estimating the population mean based on some sampling design be 2. If the sample size is 1,000, then the effective sample size will be 500. It means that the variance of the weighted mean based on 1,000 samples will be the same as that of a simple mean based on 500 samples obtained using a simple random sample.
Different sampling designs and statistical adjustments may have substantially different impact on the bias and variance of estimators (such as the mean).[citation needed]
An example of a design which can lead to estimation efficiency, compared to simple random sampling, is stratified sampling. This efficiency is gained by leveraging information about the composition of the population. For example, if it is known that gender is correlated with the outcome of interest, and also that the male–female ratio for some population is (say) 50%–50%, then sampling exactly half of the sample from each gender will reduce the variance of the outcome's estimator. Similarly, if a particular sub-population is of special interest, deliberately over-sampling from that sub-population will decrease the variance for estimations made about it.[citation needed]
Improvement in variance efficiency might sometimes be sacrificed for convenience or cost. For example, in the cluster sampling case the units may have equal or unequal selection probabilities, irrespective of their intra-class correlation (and its negative effect of increasing the variance of the estimators). We might decide (for practical reasons) to collect responses from only 2 people of each household (i.e., a sampled cluster), which could lead to more complex post-sampling adjustment to deal with unequal selection probabilities. Also, such decisions could lead to less efficient estimators than just taking a fixed proportion of responses from a cluster.[citation needed]
When the sampling design isn't set in advance and needs to be figured out from the data we have, this can lead to an increase of both the variance and bias of the weighted estimator. This might happen when making adjustments for issues like non-coverage, non-response, or an unexpected strata split of the population that wasn't available during the initial sampling stage. In these cases, we might use statistical procedures such as post-stratification, raking, or inverse propensity score weighting (where the propensity scores are estimated), among other methods. Using these methods requires assumptions about the initial design model. For example, when we use post-stratification based on age and gender, it is assumed that these variables can explain a significant portion of the bias in the sample. The quality of these estimators is closely tied to the quality of the additional information and the missing at random assumptions used when making them. Either way, even when estimators (like propensity score models) do a good job capturing most of the sampling design, using the weights can make a small or a large difference, depending on the specific data-set.[citation needed]
Due to the large variety in sampling designs (with or without an effect on unequal selection probabilities), different formulas have been developed to capture the potential design effect, as well as to estimate the variance of estimators when accounting for the sampling designs.[11] Sometimes, these different design effects can be compounded together (as in the case of unequal selection probability and cluster sampling, more details in the following sections). Whether or not to use these formulas, or just assume SRS, depends on the expected amount of bias reduction vs. the increase in estimator variance (and in the overhead of methodological and technical complexity).[1]: 426
There are various ways to sample units so that each unit has exactly the same probability of selection. Such methods are called equal probability sampling (EPSEM) methods. Some of the more basic methods include simple random sampling (SRS, with or without replacement) and systematic sampling for getting a fixed sample size. There is also Bernoulli sampling with a random sample size. More advanced techniques such as stratified sampling and cluster sampling can also be designed to be EPSEM. For example, in cluster sampling we can use a two-stage sampling in which we sample each cluster (which may be of different sizes) with equal probability, and then sample from each cluster at the second stage using SRS with a fixed proportion (e.g. sample half of the cluster, the whole cluster, etc.). This method will yield EPSEM, but the specific number of elements we end up with is stochastic (i.e., non-deterministic).[d][12]: 3–8 Another strategy for cluster sampling that leads to EPSEM is to sample clusters in a way that is proportional to their sizes, and then sample a fixed number of elements inside each cluster.[e]
In their works, Kish and others highlight several known reasons that lead to unequal selection probabilities:[1]: 425[9]: 185[5]: 69[13]: 50, 395[14]: 306
Adjusting for unequal probability selection through "individual case weights" (e.g. inverse probability weighting) yields various types of estimators for quantities of interest. Estimators such as the Horvitz–Thompson estimator yield unbiased estimators (if the selection probabilities are indeed known, or approximately known) for the total and the mean of the population. Deville and Särndal (1992) coined the term "calibration estimator" for estimators using weights such that they satisfy some condition, such as having the sum of weights equal the population size. More generally, the condition is that the weighted sum of an auxiliary variable equals some known quantity: $\sum w_i x_i = X$ (e.g., that the sum of weighted ages of the respondents is equal to the population size in each age group).[20][17]: 132[21]: 1
The two primary ways to argue about the properties of calibration estimators are:[17]: 133–134[22]
As we will see later, some proofs in the literature rely on the randomization-based framework, while others focus on the model-based perspective. When moving from the mean to the weighted mean, more complexity is added. For example, in the context of survey methodology, often the population size itself is considered an unknown quantity that is estimated. So the calculation of the weighted mean is in fact based on a ratio estimator, with an estimator of the total in the numerator and an estimator of the population size in the denominator (making the variance calculation more complex).[23][3]: 182
There are many types (and subtypes) of weights, with different ways to use and interpret them. With some weights their absolute value has some important meaning, while with other weights the important part is the relative values of the weights to each other. This section introduces some of the more common types of weights so that they can be referenced in follow-up sections.
There are also indirect ways of applying "weighted" adjustments. For example, the existing cases may be duplicated to impute missing observations (e.g. from non-response), with variance estimated using methods such as multiple imputation. An alternative approach is to remove (assign a weight of 0 to) some cases, for example, when wanting to reduce the influence of over-sampled groups that are less essential for some analysis. Both cases are similar in nature to inverse probability weighting, but the application in practice gives more/fewer rows of data (making the input potentially simpler to use in some software implementations), instead of applying an extra column of weights. Nevertheless, the consequences of such implementations are similar to just using weights. So while in the case of removing observations the data can easily be handled by common software implementations, the case of adding rows requires special adjustments for the uncertainty estimations. Not doing so may lead to erroneous conclusions (i.e., there is no free lunch when using alternative representations of the underlying issues).[9]: 189, 190
The term "Haphazard weights", coined by Kish, is used to refer to weights that correspond tounequal selection probabilities, but ones that are not related to the expectancy or variance of the selected elements.[9]: 190, 191
When taking an unrestricted sample of $n$ elements, we can then randomly split these elements into $H$ disjoint strata, each of them containing some size of $n_h$ elements so that $\sum_{h=1}^{H} n_h = n$. All elements in each stratum $h$ have some (known) non-negative weight assigned to them ($w_h$). The weight $w_h$ can be produced by the inverse of some unequal selection probability for elements in each stratum $h$ (i.e., inverse probability weighting following a procedure such as post-stratification). In this setting, Kish's design effect, for the increase in variance of the sample weighted mean due to this design (reflected in the weights), versus SRS of some outcome variable y (when there is no correlation between the weights and the outcome, i.e. haphazard weights), is:[1]: 427[9]: 191(4.2)

$\text{Deff}_{\text{Kish}} = \dfrac{\left(\sum_{h=1}^{H} n_h\right) \left(\sum_{h=1}^{H} n_h w_h^2\right)}{\left(\sum_{h=1}^{H} n_h w_h\right)^2}$
By treating each item as coming from its own stratum ($\forall h: n_h = 1$), Kish (in 1992) simplified the above formula to the (well-known) following version:[9]: 191(4.3)[26]: 318[4]: 8

$\text{Deff}_{\text{Kish}} = \dfrac{n \sum_{i=1}^{n} w_i^2}{\left(\sum_{i=1}^{n} w_i\right)^2}$
This version of the formula is valid when one stratum had several observations taken from it (i.e., each having the same weight), or when there are simply many strata where each one had one observation taken from it, but several of them had the same probability of selection. While the interpretation is slightly different, the calculation of the two scenarios comes out to be the same.
When using Kish's design effect for unequal weights, you may use the following simplified formula for "Kish's effective sample size":[27][1]: 162, 259
$n_{\text{eff}} = \dfrac{n}{\text{Deff}} = \dfrac{n}{\dfrac{n \sum_{i=1}^{n} w_i^2}{\left(\sum_{i=1}^{n} w_i\right)^2}} = \dfrac{\left(\sum_{i=1}^{n} w_i\right)^2}{\sum_{i=1}^{n} w_i^2}$
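As a concrete illustration, the following short Python sketch computes Kish's design effect and effective sample size for an arbitrary weight vector (the function names and the example weights are our own; equivalent implementations exist in several survey packages). It also checks the $1 + C_V^2$ identity discussed further below.

```python
import numpy as np

def kish_deff(w: np.ndarray) -> float:
    """Kish's design effect: n * sum(w^2) / (sum(w))^2."""
    n = len(w)
    return n * np.sum(w ** 2) / np.sum(w) ** 2

def kish_n_eff(w: np.ndarray) -> float:
    """Kish's effective sample size: (sum(w))^2 / sum(w^2)."""
    return np.sum(w) ** 2 / np.sum(w ** 2)

w = np.array([1.0, 1.0, 2.0, 4.0])  # illustrative case weights
print(kish_deff(w))                 # 1.375
print(kish_n_eff(w))                # 2.909... (= 4 / 1.375)

# The relvariance identity Deff = 1 + CV^2, where CV is the
# population-level coefficient of variation of the weights.
cv2 = np.var(w) / np.mean(w) ** 2   # np.var defaults to the population variance
assert np.isclose(kish_deff(w), 1 + cv2)
```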
The above formula, by Kish, gives the increase in the variance of the weighted mean based on "haphazard" weights. This can also be written as the following formula, where y are observations selected using unequal selection probabilities (with no within-cluster correlation, and no relationship to the expectancy or variance of the outcome measurement),[9]: 190, 191 and y' are the observations we would have had if we got them from a simple random sample:
$\text{Deff}_{\text{Kish}} = \dfrac{\text{var}({\bar {y}}_w)}{\text{var}({\bar {y}}')} = \dfrac{\text{var}\left(\dfrac{\sum_{i=1}^{n} w_i y_i}{\sum_{i=1}^{n} w_i}\right)}{\text{var}\left(\dfrac{\sum_{i=1}^{n} y_i'}{n}\right)}$
It can be shown that the ratio of variances formula can be reduced to Kish's formula by using a model-based perspective.[28] In it, Kish's formula will hold when all n observations ($y_1, \ldots, y_n$) are (at least approximately) uncorrelated ($\forall (i \neq j): \text{cor}(y_i, y_j) = 0$), with the same variance ($\sigma^2$) in the response variable of interest (y). It will also be required to assume the weights themselves are not a random variable but rather some known constants (e.g. the inverse of probability of selection, for some pre-determined and known sampling design).[citation needed]
The following is a simplified proof for when there are no clusters (i.e., no intraclass correlation between elements of the sample) and each stratum includes only one observation:[28]
$\begin{aligned} \text{var}({\bar {y}}_w) &\overset{1}{=} \text{var}\left(\frac{\sum_{i=1}^{n} w_i y_i}{\sum_{i=1}^{n} w_i}\right) \overset{2}{=} \text{var}\left(\sum_{i=1}^{n} w_i' y_i\right) \overset{3}{=} \sum_{i=1}^{n} \text{var}\left(w_i' y_i\right) \\ &\overset{4}{=} \sum_{i=1}^{n} w_i'^2 \, \text{var}(y_i) \overset{5}{=} \sum_{i=1}^{n} w_i'^2 \sigma^2 \overset{6}{=} \frac{\sigma^2}{n} \cdot \frac{n \sum_{i=1}^{n} w_i^2}{\left(\sum_{i=1}^{n} w_i\right)^2} \overset{7}{=} \text{var}({\bar {y}}') \, \text{Deff}_{\text{Kish}} \\ &\implies \text{Deff}_{\text{Kish}} = \frac{\text{var}({\bar {y}}_w)}{\text{var}({\bar {y}}')} \end{aligned}$
Transitions: (1) the definition of the weighted mean; (2) substituting the normalized weights $w_i' = w_i / \sum_{i=1}^{n} w_i$; (3) the $y_i$'s are uncorrelated, so the variance of the sum is the sum of the variances; (4) the weights are known constants and factor out squared; (5) all observations share the same variance $\sigma^2$; (6) algebraic rearrangement using $\sum w_i'^2 = \sum w_i^2 / (\sum w_i)^2$; (7) recognizing $\text{var}({\bar {y}}') = \sigma^2 / n$ and Kish's formula.
The conditions on y trivially hold if the y observations are IID with the same expectation and variance. In such cases, $y = y'$, and we can estimate $\text{var}({\bar {y}}_w)$ by using $\overline{\text{var}({\bar {y}}_w)} = \overline{\text{var}({\bar {y}})} \times \text{Deff}$.[9][29] If the y's are not all with the same expectations then we cannot use the estimated variance for calculation, since that estimation assumes that all $y_i$'s have the same expectation. Specifically, if there is a correlation between the weights and the outcome variable y, then it means that the expectation of y is not the same for all observations (but rather, dependent on the specific weight value for each observation). In such a case, while the design effect formula might still be correct (if the other conditions are met), it would require a different estimator for the variance of the weighted mean. For example, it might be better to use a weighted variance estimator.[citation needed]
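A quick simulation can make the model-based claim tangible. The sketch below is an informal check (not part of any cited derivation): it draws IID data, applies a fixed "haphazard" weight vector, and compares the empirical variance of the weighted mean against Deff times the variance of the unweighted mean.

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps = 100, 100_000
w = rng.uniform(0.5, 3.0, size=n)   # fixed, known "haphazard" weights
deff = n * np.sum(w**2) / np.sum(w)**2

y = rng.normal(size=(reps, n))      # IID outcomes with sigma^2 = 1
weighted_means = y @ w / w.sum()
print(weighted_means.var())         # empirically close to deff / n
print(deff / n)                     # var(y-bar) * Deff = (1/n) * Deff
```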
If different $y_i$'s have different variances, then while the weighted variance could capture the correct population-level variance, Kish's formula for the design effect may no longer be true.[citation needed]
A similar issue happens if there is some correlation structure in the samples (such as when using cluster sampling).[citation needed]
Notice that Kish's definition of the design effect is closely tied to the coefficient of variation (Kish also calls it relvariance, or relvar for short[h]) of the weights (when using the uncorrected (population level) sample standard deviation for estimation). This has several notations in the literature:[9]: 191[13]: 396

$\text{Deff}_{\text{Kish}} = 1 + {C_V}^2 = 1 + \dfrac{V(w)}{{\bar {w}}^2}$
Where $V(w) = \frac{\sum (w_i - {\bar {w}})^2}{n}$ is the population variance of $w$, and ${\bar {w}} = \frac{\sum w_i}{n}$ is the mean. When the weights are normalized to sample size (so that their sum is equal to n and their mean is equal to 1), then ${C_V}^2 = V(w)$ and the formula reduces to $\text{Deff} = 1 + V(w)$. While it is true we assume the weights are fixed, we can think of their variance as the variance of an empirical distribution defined by sampling (with equal probability) one weight from our set of weights (similar to how we would think about the correlation of x and y in a simple linear regression).[citation needed]
${C_V}^2 = \left(\dfrac{s_w}{\bar {w}}\right)^2 = \dfrac{\frac{\sum_{i=1}^{n} (w_i - {\bar {w}})^2}{n}}{{\bar {w}}^2} = \dfrac{\frac{\sum_{i=1}^{n} w_i^2 - n{\bar {w}}^2}{n}}{{\bar {w}}^2} = \dfrac{{\overline {w^2}} - {\bar {w}}^2}{{\bar {w}}^2} = \dfrac{\overline {w^2}}{{\bar {w}}^2} - 1 = \text{Deff} - 1 \implies \text{Deff} = 1 + {C_V}^2$
Kish's original definition compared the variance under some sampling design to the variance achieved through a simple random sample. Some literature provides the following alternative definition for Kish's design effect: "the ratio of the variance of the weighted survey mean under disproportionate stratified sampling to the variance under proportionate stratified sampling when all stratum unit variances are equal".[26]: 318[13]: 396 Reflecting on this, Park and Lee (2006) stated that "The rationale behind [...] [Kish's] derivation is that the loss in precision of [the weighted mean] due to haphazard unequal weighting can be approximated by the ratio of the variance under disproportionate stratified sampling to that under the proportionate stratified sampling".[4]: 8
Note that this alternative definition is only approximate, since if the denominator is based on "proportionate stratified sampling" (achieved via stratified sampling) then such a selection will yield a reduced variance as compared with a simple random sample. This is since stratified sampling removes some of the variability in the specific number of elements per stratum, as occurs under SRS.[citation needed]
Relatedly, Cochran (1977) provides a formula for the proportional increase in variance due to deviation from optimum allocation (what, in Kish's formulas, would be called L).[3]: 116
Early papers used the term $\text{Deff}$.[9]: 192 As more definitions of the design effect appeared, Kish's design effect for unequal selection probabilities was denoted $\text{Deff}_{\text{Kish}}$ (or $\text{Deft}_{\text{Kish}}^2$), or simply $\text{Deff}_K$ for short.[4]: 8[13]: 396[26]: 318 Kish's design effect is also known as the "unequal weighting effect" (or just UWE), a term coined by Liu et al. in 2002.[30]: 2124
The estimator for the total is the "p-expanded with replacement" estimator (a.k.a. the pwr-estimator, or Hansen and Hurwitz). It is based on a simple random sample (with replacement, denoted SIR) of n items ($y_k$) from a population of size N.[i] Each item has a probability of $p_k$ (k from 1 to N) to be drawn in a single draw ($\sum_U p_k = 1$, i.e. it is a multinomial distribution). The probability that a specific $y_k$ will appear in the sample is $p_k$. The "p-expanded with replacement" value is $Z_i = \frac{y_k}{p_k}$, with the following expectation: $E[Z_i] = E\left[I_i \frac{y_k}{p_k}\right] = \frac{y_k}{p_k} E[I_i] = \frac{y_k}{p_k} p_k = y_k$. Hence ${\hat {Y}}_{pwr} = \frac{1}{n} \sum_{i}^{n} Z_i$, the pwr-estimator, is an unbiased estimator for the sum total of y.[3]: 51
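A minimal sketch of the pwr-estimator in Python (illustrative only; the tiny population, the draw probabilities, and the function name are invented for the example):

```python
import numpy as np

rng = np.random.default_rng(1)

y = np.array([10.0, 20.0, 30.0, 40.0])  # population values; true total = 100
p = np.array([0.1, 0.2, 0.3, 0.4])      # single-draw selection probabilities

def pwr_estimate(n: int) -> float:
    """Hansen-Hurwitz / pwr estimator of the total from n with-replacement draws."""
    idx = rng.choice(len(y), size=n, p=p)  # multinomial sampling of indices
    return np.mean(y[idx] / p[idx])        # average of the p-expanded values

print(pwr_estimate(1000))  # close to 100 (the estimator is unbiased for the total)
```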
In 2000, Bruce D. Spencer proposed a formula for estimating the design effect for the variance of estimating the total (not the mean) of some quantity (${\hat {Y}}$), when there is correlation between the selection probabilities of the elements and the outcome variable of interest.[31]
In this setup, a sample of size n is drawn (with replacement) from a population of size N. Each item is drawn with probability $P_i$ (where $\sum_{i=1}^{N} P_i = 1$, i.e. a multinomial distribution). The selection probabilities are used to define the normalized (convex) weights: $w_i = \frac{1}{n P_i}$. Notice that for some random set of n items, the sum of weights will be equal to 1 only by expectation ($E[w_i] = 1$), with some variability of the sum around it (i.e., the sum of elements from a Poisson binomial distribution). The relationship between $y_i$ and $P_i$ is defined by the following (population) simple linear regression:

$y_i = \alpha + \beta P_i + \epsilon_i$
Where $y_i$ is the outcome of element i, which linearly depends on $P_i$ with the intercept $\alpha$ and slope $\beta$. The residual from the fitted line is $\epsilon_i = y_i - (\alpha + \beta P_i)$. We can also define the population variances of the outcome and the residuals as $\sigma_y^2$ and $\sigma_\epsilon^2$. The correlation between $P_i$ and $y_i$ is $\rho_{y,P}$.[citation needed]
Spencer's (approximate) design effect for estimating the total of y is:[31]: 138[32]: 4[13]: 401
Where:
This assumes that the regression model fits well so that the probability of selection and the residuals are independent, since this leads to the residuals, and the squared residuals, being uncorrelated with the weights, i.e., that $\rho_{\epsilon, W} = 0$ and also $\rho_{\epsilon^2, W} = 0$.[31]: 138
When the population size (N) is very large, the formula can be written as:[26]: 319
(since $\alpha = {\bar {Y}} - \beta \times {\bar {P}} = {\bar {Y}} - \beta \times \frac{1}{N} \approx {\bar {Y}}$, where $cv_Y^2 = \frac{\sigma_Y^2}{{\bar {Y}}^2}$)
This approximation assumes that the linear relationship between P and y holds, and also that the correlations of the weights with the errors, and with the errors squared, are both zero. I.e., $\rho_{w,e} = 0$ and $\rho_{w,e^2} = 0$.[32]: 4
We notice that if ${\hat {\rho }}_{y,P} \approx 0$, then ${\hat {\alpha }} \approx {\bar {y}}$ (i.e., the average of y). In such a case, the formula reduces to
Only if the variance of y is much larger than its mean is the right-most term close to 0 (i.e., $\frac{1}{\text{relvar}(y)} = \frac{\bar {Y}}{\sigma_y} \approx 0$), which reduces Spencer's design effect (for the estimated total) to be equal to Kish's design effect (for the ratio mean):[32]: 5 $\text{Deff}_{\text{Spencer}} \approx (1 + L) = \text{Deff}_{\text{Kish}}$. Otherwise, the two formulas will yield different results, which demonstrates the difference between the design effect of the total vs. the design effect of the mean.
In 2001, Park and Lee extended Spencer's formula to the case of the ratio-mean (i.e., estimating the mean by dividing the estimator of the total by the estimator of the population size). It is:[32]: 4
Where:
Park and Lee's formula is exactly equal to Kish's formula when ${\hat {\rho }}_{y,P}^2 = 0$. Both formulas relate to the design effect of the mean of y, while Spencer's $\text{Deff}$ relates to the estimation of the population total.
In general, the $\text{Deff}$ for the total (${\hat {Y}}$) tends to be less efficient than the $\text{Deff}$ for the ratio mean (${\hat {\bar {Y}}}$) when $\rho_{y,P}$ is small. And in general, $\rho_{y,P}$ impacts the efficiency of both design effects.[4]: 8
For data collected using cluster sampling we assume the following structure:
When clusters are all of the same size $n^*$, the design effect Deff, proposed by Kish in 1965 (and later revisited by others), is given by:[1]: 162[13]: 399[4]: 9[34][35][14]: 241

$\text{Deff} = 1 + \rho \left(n^* - 1\right)$

where $\rho$ is the intraclass correlation coefficient.
It is sometimes also denoted as $\text{Deff}_C$.[30]: 2124
In various papers, when cluster sizes are not equal, the above formula is also used with $n^*$ as the average cluster size (which is also sometimes denoted as ${\bar {b}}$).[36][28]: 105 In such cases, Kish's formula (using the average cluster weight) serves as a conservative (upper bound) version of the exact design effect.[28]: 106
Alternative formulas exist for unequal cluster sizes.[1]: 193 Follow-up work has discussed the sensitivity of using the average cluster size under various assumptions.[37]
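For intuition, a tiny sketch of the equal-cluster-size formula (the numeric inputs are invented for illustration):

```python
def cluster_deff(icc: float, cluster_size: float) -> float:
    """Kish's cluster-sampling design effect: 1 + icc * (cluster_size - 1)."""
    return 1 + icc * (cluster_size - 1)

# Even a modest intraclass correlation matters with sizeable clusters:
print(cluster_deff(0.05, 20))  # 1.95 -- variance nearly doubles,
                               # so the effective sample size is roughly halved
```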
In a 1987 paper, Kish proposed a combined design effect that incorporates both the effects due to weighting that accounts for unequal selection probabilities and due to cluster sampling:[36]: 16[28]: 105[38]: 4[32]: 2

$\text{Deff} = \dfrac{n \sum_{i=1}^{n} w_i^2}{\left(\sum_{i=1}^{n} w_i\right)^2} \times \left(1 + \rho \left({\bar {n}} - 1\right)\right)$

where ${\bar {n}}$ is the average cluster size.
The above uses notation similar to what is used in this article (the original 1987 publication used different notation).[j] A model-based justification for this formula was provided by Gabler et al.[28]
In 2000, Liu and Aragon proposed a decomposition of the unequal selection probabilities design effect for different strata in stratified sampling.[39] In 2002, Liu et al. extended that work to account for stratified samples where, within each stratum, there is a set of unequal selection probability weights. The cluster sampling is either global or per stratum.[30] Similar work was also done by Park et al. in 2003.[40]
The Chen–Rust $\text{Deff}$ extends the model-based justification of Kish's 1987 formula for design effects proposed by Gabler et al.,[28] applying it to two-stage designs with stratification at the first stage and to three-stage designs without stratification.[41] The modified formulae define the overall design effect using survey weights and population intracluster correlations. These formulae allow for insightful interpretations of design effects from various sources and can estimate intracluster correlations in completed surveys or predict design effects in future surveys.[citation needed]
Henry's $\text{Deff}$[26] proposes an extended model-assisted weighting design-effect measure for single-stage sampling and calibration weight adjustments for a case where $y_i = \alpha + \beta x_i + \epsilon_i$, where $x_i$ is a vector of covariates, the model errors are independent, and the estimator of the population total is the general regression estimator (GREG) of Särndal, Swensson, and Wretman (1992).[3] The new measure considers the combined effects of the non-EPSEM sampling design, unequal weights from calibration adjustments, and the correlation between an analysis variable and the auxiliaries used in calibration.
Lohr's $\text{Deff}$[42] is for ordinary least squares (OLS) and generalized least squares (GLS) estimators in the context of cluster sampling, using a random coefficient regression model. Lohr presents conditions under which the GLS estimator of the regression slope has a design effect less than 1, indicating higher efficiency. However, the design effect of the GLS estimator is highly sensitive to model specification. If an underlying random coefficient model is incorrectly specified as a random intercept model, the design effect can be seriously understated. In contrast, the OLS estimator of the regression slope and the design effect calculated from a design-based perspective are robust to misspecification of the variance structure, making them more reliable in situations where the model specification may not be accurate.[citation needed]
$\text{Deff}$ may be used when planning a future data collection, as well as serving as a diagnostic tool:[14]: 85
Considering the design effect is unnecessary when[5]: 57–62 the source population is closely IID, or when the sample design of the data was drawn as a simple random sample. It is also less useful when the sample size is relatively small (at least partially, for practical reasons).[original research?]
While Kish originally hoped to have the design effect be as agnostic as possible to the underlying distribution of the data, the sampling probabilities, their correlations, and the statistics of interest, follow-up research has shown that these do influence the design effect. Hence, these properties should be carefully considered when deciding which $\text{Deff}$ calculation to use, and how to use it.[4]: 13[32]: 6
The design effect is rarely applied when constructing confidence intervals. Ideally, one would be able to determine, for an estimator of a particular parameter, both the variance under simple random sampling (SRS) with replacement and the design effect (which accounts for all elements of the sampling design that change the variance). In such scenarios, the basic variance and the design effect could be multiplied to compute the variance of the estimator for the specific design.[1]: 259 This computed value can then be employed to form confidence intervals. However, in real-world applications, it is uncommon to estimate both values simultaneously. As a result, other methods are favored. For instance, Taylor linearization is utilized to construct confidence intervals based on the variance of the weighted mean. More broadly, the bootstrap method, also known as replication weights, is applied for a range of weighted statistics.[citation needed]
Kish's design effect is implemented in various statistical software packages:
This article was submitted toWikiJournal of Sciencefor externalacademic peer reviewin 2023 (reviewer reports). The updated content was reintegrated into the Wikipedia page under aCC-BY-SA-3.0license (2024). The version of record as reviewed is:Tal Galili; et al. (5 May 2024)."Design effect"(PDF).WikiJournal of Science.7(1): 4.doi:10.15347/WJS/2024.004.ISSN2470-6345.WikidataQ116768211. | https://en.wikipedia.org/wiki/Design_effect |
In statistics, an effect size is a value measuring the strength of the relationship between two variables in a population, or a sample-based estimate of that quantity. It can refer to the value of a statistic calculated from a sample of data, the value of one parameter for a hypothetical population, or to the equation that operationalizes how statistics or parameters lead to the effect size value.[1] Examples of effect sizes include the correlation between two variables,[2] the regression coefficient in a regression, the mean difference, or the risk of a particular event (such as a heart attack) happening. Effect sizes are a complementary tool for statistical hypothesis testing, and play an important role in power analyses to assess the sample size required for new experiments.[3] Effect sizes are fundamental in meta-analyses, which aim to provide the combined effect size based on data from multiple studies. The cluster of data-analysis methods concerning effect sizes is referred to as estimation statistics.
Effect size is an essential component when evaluating the strength of a statistical claim, and it is the first item (magnitude) in the MAGIC criteria. The standard deviation of the effect size is of critical importance, since it indicates how much uncertainty is included in the measurement. A standard deviation that is too large will make the measurement nearly meaningless. In meta-analysis, where the purpose is to combine multiple effect sizes, the uncertainty in the effect size is used to weigh effect sizes, so that large studies are considered more important than small studies. The uncertainty in the effect size is calculated differently for each type of effect size, but generally only requires knowing the study's sample size (N), or the number of observations (n) in each group.
Reporting effect sizes or estimates thereof (effect estimate [EE], estimate of effect) is considered good practice when presenting empirical research findings in many fields.[4][5] The reporting of effect sizes facilitates the interpretation of the importance of a research result, in contrast to its statistical significance.[6] Effect sizes are particularly prominent in social science and in medical research (where the size of the treatment effect is important).
Effect sizes may be measured in relative or absolute terms. In relative effect sizes, two groups are directly compared with each other, as in odds ratios and relative risks. For absolute effect sizes, a larger absolute value always indicates a stronger effect. Many types of measurements can be expressed as either absolute or relative, and these can be used together because they convey different information. A prominent task force in the psychology research community made the following recommendation:
Always present effect sizes for primary outcomes... If the units of measurement are meaningful on a practical level (e.g., number of cigarettes smoked per day), then we usually prefer an unstandardized measure (regression coefficient or mean difference) to a standardized measure (r or d).[4]
As in statistical estimation, the true effect size is distinguished from the observed effect size. For example, to measure the risk of disease in a population (the population effect size) one can measure the risk within a sample of that population (the sample effect size). Conventions for describing true and observed effect sizes follow standard statistical practices: one common approach is to use Greek letters like ρ [rho] to denote population parameters and Latin letters like r to denote the corresponding statistic. Alternatively, a "hat" can be placed over the population parameter to denote the statistic, e.g. with ${\hat {\rho }}$ being the estimate of the parameter $\rho$.
As in any statistical setting, effect sizes are estimated with sampling error, and may be biased unless the effect-size estimator that is used is appropriate for the manner in which the data were sampled and the manner in which the measurements were made. An example of this is publication bias, which occurs when scientists report results only when the estimated effect sizes are large or statistically significant. As a result, if many researchers carry out studies with low statistical power, the reported effect sizes will tend to be larger than the true (population) effects, if any.[7] Another example where effect sizes may be distorted is in a multiple-trial experiment, where the effect size calculation is based on the averaged or aggregated response across the trials.[8]
Smaller studies sometimes show different, often larger, effect sizes than larger studies. This phenomenon is known as the small-study effect, which may signal publication bias.[9]
Sample-based effect sizes are distinguished from test statistics used in hypothesis testing, in that they estimate the strength (magnitude) of, for example, an apparent relationship, rather than assigning a significance level reflecting whether the magnitude of the relationship observed could be due to chance. The effect size does not directly determine the significance level, or vice versa. Given a sufficiently large sample size, a non-null statistical comparison will always show a statistically significant result unless the population effect size is exactly zero (and even there it will show statistical significance at the rate of the Type I error used). For example, a sample Pearson correlation coefficient as small as 0.01 becomes statistically significant at the 5% level once the sample size is large enough (on the order of tens of thousands of observations). Reporting only the significant p-value from such an analysis could be misleading if a correlation of 0.01 is too small to be of interest in a particular application.
The term effect size can refer to a standardized measure of effect (such as r, Cohen's d, or the odds ratio), or to an unstandardized measure (e.g., the difference between group means or the unstandardized regression coefficients). Standardized effect size measures are typically used when the metrics of the variables being studied do not have intrinsic meaning (e.g., a score on a personality test on an arbitrary scale), when results from multiple studies are being combined, when some or all of the studies use different scales, or when it is desired to convey the size of an effect relative to the variability in the population.
In meta-analyses, standardized effect sizes are used as a common measure that can be calculated for different studies and then combined into an overall summary.
The interpretation of an effect size as being small, medium, or large depends on its substantive context and its operational definition. Jacob Cohen[10] suggested interpretation guidelines that are near ubiquitous across many fields. However, Cohen also cautioned:
"The terms 'small,' 'medium,' and 'large' are relative, not only to each other, but to the area of behavioral science or even more particularly to the specific content and research method being employed in any given investigation... In the face of this relativity, there is a certain risk inherent in offering conventional operational definitions for these terms for use in power analysis in as diverse a field of inquiry as behavioral science. This risk is nevertheless accepted in the belief that more is to be gained than lost by supplying a common conventional frame of reference which is recommended for use only when no better basis for estimating the ES index is available." (p. 25)
Sawilowsky[11] recommended that the rules of thumb for effect sizes should be revised, and expanded the descriptions to include very small, very large, and huge. Funder and Ozer[12] suggested that effect sizes should be interpreted based on benchmarks and consequences of findings, resulting in adjustment of guideline recommendations.
Lenth[13] noted that for a medium effect size, "you'll choose the same n regardless of the accuracy or reliability of your instrument, or the narrowness or diversity of your subjects. Clearly, important considerations are being ignored here." Researchers should interpret the substantive significance of their results by grounding them in a meaningful context or by quantifying their contribution to knowledge, and Cohen's effect size descriptions can be helpful as a starting point.[6] Similarly, a U.S. Dept of Education sponsored report argued that the widespread indiscriminate use of Cohen's interpretation guidelines can be inappropriate and misleading.[14] They instead suggested that norms should be based on distributions of effect sizes from comparable studies. Thus a small effect (in absolute numbers) could be considered large if the effect is larger than in similar studies in the field. See Abelson's paradox and Sawilowsky's paradox for related points.[15][16][17]
Descriptors for various magnitudes of d, r, f and ω were initially suggested by Jacob Cohen,[10] and later expanded by Sawilowsky[11] and by Funder & Ozer.[12] For Cohen's d, the commonly cited thresholds are: very small (0.01), small (0.20), medium (0.50), large (0.80), very large (1.20), and huge (2.00), with the very small, very large, and huge categories being Sawilowsky's additions.
About 50 to 100 different measures of effect size are known. Many effect sizes of different types can be converted to other types, as many estimate the separation of two distributions, so are mathematically related. For example, a correlation coefficient can be converted to a Cohen's d and vice versa.
These effect sizes estimate the amount of the variance within an experiment that is "explained" or "accounted for" by the experiment's model (Explained variation).
Pearson's correlation, often denoted r and introduced by Karl Pearson, is widely used as an effect size when paired quantitative data are available; for instance if one were studying the relationship between birth weight and longevity. The correlation coefficient can also be used when the data are binary. Pearson's r can vary in magnitude from −1 to 1, with −1 indicating a perfect negative linear relation, 1 indicating a perfect positive linear relation, and 0 indicating no linear relation between two variables.
A related effect size is r², the coefficient of determination (also referred to as R² or "r-squared"), calculated as the square of the Pearson correlation r. In the case of paired data, this is a measure of the proportion of variance shared by the two variables, and varies from 0 to 1. For example, with an r of 0.21 the coefficient of determination is 0.0441, meaning that 4.4% of the variance of either variable is shared with the other variable. The r² is always positive, so it does not convey the direction of the correlation between the two variables.
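As a concrete illustration, a minimal Python sketch (with hypothetical paired data) computing r and r² directly from the definition:

```python
# A minimal sketch: Pearson's r and the coefficient of determination r^2
# for paired quantitative data, computed with NumPy.
import numpy as np

def pearson_r(x, y):
    """Sample Pearson correlation between paired arrays x and y."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xm, ym = x - x.mean(), y - y.mean()
    return (xm @ ym) / np.sqrt((xm @ xm) * (ym @ ym))

birth_weight = [2.9, 3.1, 3.4, 3.6, 4.0]   # hypothetical paired data
longevity    = [70, 72, 75, 74, 79]
r = pearson_r(birth_weight, longevity)
print(f"r = {r:.3f}, r^2 = {r**2:.3f}")    # r^2 = proportion of shared variance
```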
Eta-squared describes the ratio of variance explained in the dependent variable by a predictor while controlling for other predictors, making it analogous to r². It is computed as
$$\eta^2 = \frac{SS_\text{Treatment}}{SS_\text{Total}}.$$
Eta-squared is a biased estimator of the variance explained by the model in the population: it measures the variance explained in the sample, not the population, so it always overestimates the population effect size, although the bias grows smaller as the sample grows larger. It also shares with r² the weakness that each additional variable automatically increases the value of η².
A less biased estimator of the variance explained in the population is ω²:[18]
$$\omega^2 = \frac{SS_\text{treatment} - df_\text{treatment}\cdot MS_\text{error}}{SS_\text{total} + MS_\text{error}}.$$
This form of the formula is limited to between-subjects analysis with equal sample sizes in all cells.[18] Since it is less biased (although not unbiased), ω² is preferable to η²; however, it can be more inconvenient to calculate for complex analyses. A generalized form of the estimator has been published for between-subjects and within-subjects analysis, repeated measure, mixed design, and randomized block design experiments.[19] In addition, methods to calculate partial ω² for individual factors and combined factors in designs with up to three independent variables have been published.[19]
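A minimal sketch of these two estimators for a one-way between-subjects design, applying the formulas above to hypothetical group data:

```python
# Sketch: eta^2 and omega^2 from a balanced one-way (between-subjects) ANOVA,
# computed from sums of squares. Group data are hypothetical.
import numpy as np

groups = [np.array([4.1, 5.0, 5.5, 4.6]),
          np.array([6.2, 6.8, 5.9, 6.5]),
          np.array([5.1, 4.8, 5.6, 5.3])]

grand = np.concatenate(groups)
ss_total = ((grand - grand.mean()) ** 2).sum()
ss_treat = sum(len(g) * (g.mean() - grand.mean()) ** 2 for g in groups)
df_treat = len(groups) - 1
df_error = len(grand) - len(groups)
ms_error = (ss_total - ss_treat) / df_error   # within-groups mean square

eta_sq   = ss_treat / ss_total
omega_sq = (ss_treat - df_treat * ms_error) / (ss_total + ms_error)
print(f"eta^2 = {eta_sq:.3f}, omega^2 = {omega_sq:.3f}")
```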
Cohen's f² is one of several effect size measures to use in the context of an F-test for ANOVA or multiple regression. Its amount of bias (overestimation of the effect size for the ANOVA) depends on the bias of its underlying measurement of variance explained (e.g., R², η², ω²).
The f² effect size measure for multiple regression is defined as
$$f^2 = \frac{R^2}{1 - R^2}.$$
Likewise, f² can be defined as
$$f^2 = \frac{\eta^2}{1 - \eta^2} \quad\text{or}\quad f^2 = \frac{\omega^2}{1 - \omega^2}$$
for models described by those effect size measures.[20]
The f² effect size measure for sequential multiple regression, which is also common in PLS modeling,[21] is defined as
$$f^2 = \frac{R_{AB}^2 - R_A^2}{1 - R_{AB}^2},$$
where R²_A is the variance accounted for by a set of one or more independent variables A, and R²_AB is the combined variance accounted for by A and another set of one or more independent variables of interest B. By convention, f² effect sizes of 0.1², 0.25², and 0.4² are termed small, medium, and large, respectively.[10]
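The f² variants above reduce to one-line computations; a small sketch with hypothetical R² values:

```python
# Sketch: Cohen's f^2 for multiple regression, and for a sequential
# (hierarchical) regression where predictor set B is added to set A.
def cohens_f2(r2):
    """f^2 from a model's variance explained (R^2, eta^2, or omega^2)."""
    return r2 / (1.0 - r2)

def cohens_f2_sequential(r2_a, r2_ab):
    """f^2 for the added contribution of predictor set B over set A."""
    return (r2_ab - r2_a) / (1.0 - r2_ab)

print(cohens_f2(0.20))                   # model with R^2 = 0.20
print(cohens_f2_sequential(0.20, 0.30))  # added contribution of set B
```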
Cohen's $\hat{f}$ can also be found for factorial analysis of variance (ANOVA) working backwards, using
$$\hat{f}_\text{effect} = \sqrt{F_\text{effect}\, df_\text{effect} / N}.$$
In a balanced ANOVA design (equal sample sizes across groups), the corresponding population parameter of f² is
$$\frac{SS(\mu_1, \mu_2, \dots, \mu_K)}{K \times \sigma^2},$$
where μ_j denotes the population mean within the jth of the K groups, and σ the common population standard deviation within each group. SS is the sum of squares in ANOVA.
Another measure that is used with correlation differences is Cohen's q. This is the difference between two Fisher-transformed Pearson correlation coefficients. In symbols,
$$q = \frac{1}{2}\log\frac{1 + r_1}{1 - r_1} - \frac{1}{2}\log\frac{1 + r_2}{1 - r_2},$$
where r₁ and r₂ are the correlations being compared. The expected value of q is zero and its variance is
$$\operatorname{var}(q) = \frac{1}{N_1 - 3} + \frac{1}{N_2 - 3},$$
where N₁ and N₂ are the numbers of data points underlying the first and second correlation, respectively.
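A short sketch of Cohen's q and its variance, with hypothetical correlations and sample sizes:

```python
# Sketch: Cohen's q as the difference of Fisher-transformed correlations,
# together with its large-sample variance from the formula above.
import math

def cohens_q(r1, r2, n1, n2):
    z = lambda r: 0.5 * math.log((1 + r) / (1 - r))  # Fisher transform
    q = z(r1) - z(r2)
    var_q = 1.0 / (n1 - 3) + 1.0 / (n2 - 3)
    return q, var_q

q, var_q = cohens_q(0.5, 0.3, n1=103, n2=103)
print(f"q = {q:.3f}, SE = {math.sqrt(var_q):.3f}")
```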
The raw effect size for a comparison of two groups is simply the difference between the two means. To facilitate interpretation, however, it is common to standardize the effect size; various conventions for statistical standardization are presented below.
A (population) effect size θ based on means usually considers the standardized mean difference (SMD) between two populations:[22]: 78
$$\theta = \frac{\mu_1 - \mu_2}{\sigma},$$
where μ₁ is the mean for one population, μ₂ is the mean for the other population, and σ is a standard deviation based on either or both populations.
In the practical setting the population values are typically not known and must be estimated from sample statistics. The several versions of effect sizes based on means differ with respect to which statistics are used.
This form for the effect size resembles the computation for a t-test statistic, with the critical difference that the t-test statistic includes a factor of $\sqrt{n}$. This means that, for a given effect size, statistical significance increases with the sample size. Unlike the t-test statistic, the effect size aims to estimate a population parameter and is not affected by the sample size.
SMD values of 0.2 to 0.5 are considered small, 0.5 to 0.8 are considered medium, and greater than 0.8 are considered large.[23]
Cohen's d is defined as the difference between two means divided by a standard deviation for the data, i.e.
$$d = \frac{\bar{x}_1 - \bar{x}_2}{s}.$$
Jacob Cohen defined s, the pooled standard deviation, as (for two independent samples):[10]: 67
$$s = \sqrt{\frac{(n_1 - 1)s_1^2 + (n_2 - 1)s_2^2}{n_1 + n_2 - 2}},$$
where the variance for one of the groups is defined as
$$s_1^2 = \frac{1}{n_1 - 1}\sum_{i=1}^{n_1}(x_{1,i} - \bar{x}_1)^2,$$
and similarly for the other group.
Other authors choose a slightly different computation of the standard deviation when referring to "Cohen's d", where the denominator is without the "−2":[24][25]: 14
$$s = \sqrt{\frac{(n_1 - 1)s_1^2 + (n_2 - 1)s_2^2}{n_1 + n_2}}.$$
This definition of "Cohen's d" is termed the maximum likelihood estimator by Hedges and Olkin,[22] and it is related to Hedges' g by a scaling factor (see below).
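A sketch implementing both pooled-standard-deviation conventions on hypothetical data:

```python
# Sketch: Cohen's d for two independent samples, with both pooled-SD
# conventions discussed above (denominator n1 + n2 - 2 vs. n1 + n2).
import numpy as np

def cohens_d(x1, x2, ml=False):
    x1, x2 = np.asarray(x1, float), np.asarray(x2, float)
    n1, n2 = len(x1), len(x2)
    denom = (n1 + n2) if ml else (n1 + n2 - 2)  # ml=True: maximum-likelihood form
    s = np.sqrt(((n1 - 1) * x1.var(ddof=1) + (n2 - 1) * x2.var(ddof=1)) / denom)
    return (x1.mean() - x2.mean()) / s

a = [5.2, 4.9, 6.1, 5.5, 5.8]   # hypothetical group data
b = [4.1, 4.6, 4.3, 4.9, 4.0]
print(cohens_d(a, b), cohens_d(a, b, ml=True))
```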
With two paired samples, one approach is to look at the distribution of the difference scores. In that case, s is the standard deviation of this distribution of difference scores (note that the standard deviation of difference scores depends on the correlation between the paired samples). This creates the following relationship between the t-statistic for testing a difference in the means of the two paired groups and Cohen's d′ (computed with difference scores):
$$t = \frac{\bar{X}_1 - \bar{X}_2}{SE_\text{diff}} = \frac{\bar{X}_1 - \bar{X}_2}{SD_\text{diff}/\sqrt{N}} = \frac{\sqrt{N}(\bar{X}_1 - \bar{X}_2)}{SD_\text{diff}}$$
and
$$d' = \frac{\bar{X}_1 - \bar{X}_2}{SD_\text{diff}} = \frac{t}{\sqrt{N}}.$$
However, for paired samples, Cohen states that d′ does not provide the correct estimate to obtain the power of the test for d, and that before looking the values up in the tables provided for d, it should be corrected for r as in the following formula:[26]
$$\frac{d'}{\sqrt{1 - r}},$$
where r is the correlation between paired measurements. Given the same sample size, the higher r is, the higher the power for a test of the paired difference.
Since d′ depends on r, it is difficult to interpret as a measure of effect size; therefore, in the context of paired analyses, where one may compute either d′ or d (estimated with a pooled standard deviation or that of a single group or time-point), it is necessary to indicate explicitly which one is being reported. As a measure of effect size, d (estimated with a pooled standard deviation or that of a group or time-point) is more appropriate, for instance in meta-analysis.[27]
Cohen's d is frequently used in estimating sample sizes for statistical testing. A lower Cohen's d indicates that larger sample sizes are necessary, and vice versa; the required sample size can subsequently be determined together with the additional parameters of desired significance level and statistical power.[28]
In 1976, Gene V. Glass proposed an estimator of the effect size that uses only the standard deviation of the second group:[22]: 78
$$\Delta = \frac{\bar{x}_1 - \bar{x}_2}{s_2}.$$
The second group may be regarded as a control group, and Glass argued that if several treatments were compared to the control group it would be better to use just the standard deviation computed from the control group, so that effect sizes would not differ under equal means and different variances.
Under a correct assumption of equal population variances a pooled estimate forσis more precise.
Hedges' g, suggested by Larry Hedges in 1981,[29] is like the other measures based on a standardized difference:[22]: 79
$$g = \frac{\bar{x}_1 - \bar{x}_2}{s^*},$$
where the pooled standard deviation s* is computed as
$$s^* = \sqrt{\frac{(n_1 - 1)s_1^2 + (n_2 - 1)s_2^2}{n_1 + n_2 - 2}}.$$
However, as an estimator for the population effect size θ it is biased.
Nevertheless, this bias can be approximately corrected through multiplication by a factor:
$$g^* = J(n_1 + n_2 - 2)\, g \approx \left(1 - \frac{3}{4(n_1 + n_2) - 9}\right) g.$$
Hedges and Olkin refer to this less-biased estimator g* as d,[22] but it is not the same as Cohen's d.
The exact form of the correction factor J() involves the gamma function:[22]: 104
$$J(a) = \frac{\Gamma(a/2)}{\sqrt{a/2}\,\Gamma((a - 1)/2)}.$$
There are also multilevel variants of Hedges' g, e.g., for use in cluster randomised controlled trials (CRTs).[30] CRTs involve randomising clusters, such as schools or classrooms, to different conditions and are frequently used in education research.
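A sketch of Hedges' g with the approximate small-sample correction above (the exact gamma-function form of J could be substituted):

```python
# Sketch: Hedges' g and its approximately bias-corrected form g*, using the
# approximation to J(n1 + n2 - 2) given above.
import numpy as np

def hedges_g(x1, x2, corrected=True):
    x1, x2 = np.asarray(x1, float), np.asarray(x2, float)
    n1, n2 = len(x1), len(x2)
    s = np.sqrt(((n1 - 1) * x1.var(ddof=1) + (n2 - 1) * x2.var(ddof=1))
                / (n1 + n2 - 2))
    g = (x1.mean() - x2.mean()) / s
    if corrected:
        g *= 1 - 3 / (4 * (n1 + n2) - 9)  # approximate correction factor J
    return g

print(hedges_g([5.2, 4.9, 6.1, 5.5], [4.1, 4.6, 4.3, 4.9]))  # hypothetical data
```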
A similar effect size estimator for multiple comparisons (e.g., ANOVA) is the Ψ root-mean-square standardized effect:[20]
$$\Psi = \sqrt{\frac{1}{k - 1}\cdot \sum_{j=1}^{k}\left(\frac{\mu_j - \mu}{\sigma}\right)^2},$$
where k is the number of groups in the comparisons.
This essentially presents the omnibus difference of the entire model adjusted by the root mean square, analogous to d or g.
In addition, a generalization for multi-factorial designs has been provided.[20]
Provided that the data is Gaussian distributed, a scaled Hedges' g, $\sqrt{n_1 n_2/(n_1 + n_2)}\, g$, follows a noncentral t-distribution with noncentrality parameter $\sqrt{n_1 n_2/(n_1 + n_2)}\,\theta$ and (n₁ + n₂ − 2) degrees of freedom. Likewise, the scaled Glass' Δ is distributed with n₂ − 1 degrees of freedom.
From the distribution it is possible to compute theexpectationand variance of the effect sizes.
In some cases large sample approximations for the variance are used. One suggestion for the variance of Hedges' unbiased estimator is[22]: 86
$$\hat{\sigma}^2(g^*) = \frac{n_1 + n_2}{n_1 n_2} + \frac{(g^*)^2}{2(n_1 + n_2)}.$$
As a statistical parameter, SSMD (denoted β) is defined as the ratio of the mean to the standard deviation of the difference of two random values drawn respectively from two groups. Assume that one group with random values has mean μ₁ and variance σ₁² and another group has mean μ₂ and variance σ₂². The covariance between the two groups is σ₁₂. Then, the SSMD for the comparison of these two groups is defined as[31]
$$\beta = \frac{\mu_1 - \mu_2}{\sqrt{\sigma_1^2 + \sigma_2^2 - 2\sigma_{12}}}.$$
If the two groups are independent,
$$\beta = \frac{\mu_1 - \mu_2}{\sqrt{\sigma_1^2 + \sigma_2^2}}.$$
If the two independent groups have equal variances σ²,
$$\beta = \frac{\mu_1 - \mu_2}{\sqrt{2}\,\sigma}.$$
Mahalanobis distance (D) is a multivariate generalization of Cohen's d, which takes into account the relationships between the variables.[32]
$$\varphi = \sqrt{\frac{\chi^2}{N}}$$
$$\varphi_c = \sqrt{\frac{\chi^2}{N(k - 1)}}$$
Commonly used measures of association for the chi-squared test are the Phi coefficient and Cramér's V (sometimes referred to as Cramér's phi and denoted as φc). Phi is related to the point-biserial correlation coefficient and Cohen's d and estimates the extent of the relationship between two variables (2 × 2).[33] Cramér's V may be used with variables having more than two levels.
Phi can be computed by finding the square root of the chi-squared statistic divided by the sample size.
Similarly, Cramér's V is computed by taking the square root of the chi-squared statistic divided by the sample size times one less than the minimum dimension (k is the smaller of the number of rows r or columns c).
φc is the intercorrelation of the two discrete variables[34] and may be computed for any value of r or c. However, as chi-squared values tend to increase with the number of cells, the greater the difference between r and c, the more likely V will tend to 1 without strong evidence of a meaningful correlation.
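A sketch computing phi and Cramér's V from a hypothetical contingency table, using SciPy's chi-squared statistic:

```python
# Sketch: phi and Cramér's V from a contingency table, via SciPy's
# chi-squared statistic. Table values are hypothetical.
import numpy as np
from scipy.stats import chi2_contingency

table = np.array([[30, 10],
                  [15, 25]])
chi2, p, dof, expected = chi2_contingency(table, correction=False)
n = table.sum()
k = min(table.shape)              # smaller of rows/columns
phi = np.sqrt(chi2 / n)           # meaningful for 2x2 tables
cramers_v = np.sqrt(chi2 / (n * (k - 1)))
print(f"phi = {phi:.3f}, V = {cramers_v:.3f}")
```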
Another measure of effect size used for chi-squared tests is Cohen's omega (ω). This is defined as
$$\omega = \sqrt{\sum_{i=1}^{m}\frac{(p_{1i} - p_{0i})^2}{p_{0i}}},$$
where p₀ᵢ is the proportion of the ith cell under H₀, p₁ᵢ is the proportion of the ith cell under H₁, and m is the number of cells.
The odds ratio (OR) is another useful effect size. It is appropriate when the research question focuses on the degree of association between two binary variables. For example, consider a study of spelling ability. In a control group, two students pass the class for every one who fails, so the odds of passing are two to one (or 2/1 = 2). In the treatment group, six students pass for every one who fails, so the odds of passing are six to one (or 6/1 = 6). The effect size can be computed by noting that the odds of passing in the treatment group are three times higher than in the control group (because 6 divided by 2 is 3). Therefore, the odds ratio is 3. Odds ratio statistics are on a different scale than Cohen's d, so this '3' is not comparable to a Cohen's d of 3.
The relative risk (RR), also called risk ratio, is simply the risk (probability) of an event relative to some independent variable. This measure of effect size differs from the odds ratio in that it compares probabilities instead of odds, but asymptotically approaches the latter for small probabilities. Using the example above, the probabilities of passing for those in the control group and treatment group are 2/3 (or 0.67) and 6/7 (or 0.86), respectively. The effect size can be computed the same as above, but using the probabilities instead. Therefore, the relative risk is 1.28. Since rather large probabilities of passing were used, there is a large difference between relative risk and odds ratio. Had failure (a smaller probability) been used as the event (rather than passing), the difference between the two measures of effect size would not be so great.
While both measures are useful, they have different statistical uses. In medical research, the odds ratio is commonly used for case-control studies, as odds, but not probabilities, are usually estimated.[35] Relative risk is commonly used in randomized controlled trials and cohort studies, but relative risk contributes to overestimations of the effectiveness of interventions.[36]
The risk difference (RD), sometimes called absolute risk reduction, is simply the difference in risk (probability) of an event between two groups. It is a useful measure in experimental research, since RD tells you the extent to which an experimental intervention changes the probability of an event or outcome. Using the example above, the probabilities of passing for those in the control group and treatment group are 2/3 (or 0.67) and 6/7 (or 0.86), respectively, and so the RD effect size is 0.86 − 0.67 = 0.19 (or 19%). RD is the superior measure for assessing the effectiveness of interventions.[36]
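The spelling-class example above can be reproduced with a few lines of Python; counts are pass/fail per group:

```python
# Sketch reproducing the spelling-class example: odds ratio, relative risk,
# and risk difference from pass/fail counts in two groups.
def binary_effect_sizes(pass_t, fail_t, pass_c, fail_c):
    odds_t, odds_c = pass_t / fail_t, pass_c / fail_c
    p_t = pass_t / (pass_t + fail_t)   # probability of passing, treatment
    p_c = pass_c / (pass_c + fail_c)   # probability of passing, control
    return {"OR": odds_t / odds_c, "RR": p_t / p_c, "RD": p_t - p_c}

# treatment: 6 pass per 1 fail; control: 2 pass per 1 fail
print(binary_effect_sizes(6, 1, 2, 1))  # OR = 3.0, RR ≈ 1.29, RD ≈ 0.19
```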
One measure used in power analysis when comparing two independent proportions is Cohen's h. This is defined as
$$h = 2(\arcsin\sqrt{p_1} - \arcsin\sqrt{p_2}),$$
where p₁ and p₂ are the proportions of the two samples being compared and arcsin is the arcsine transformation.
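A one-function sketch of Cohen's h, applied to the pass rates from the example above:

```python
# Sketch: Cohen's h via the arcsine transformation of two proportions.
import math

def cohens_h(p1, p2):
    phi = lambda p: 2 * math.asin(math.sqrt(p))  # arcsine transform
    return phi(p1) - phi(p2)

print(cohens_h(6/7, 2/3))  # difference between the two pass rates above
```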
To more easily describe the meaning of an effect size to people outside statistics, the common language effect size, as the name implies, was designed to communicate it in plain English. It is used to describe a difference between two groups and was proposed, as well as named, by Kenneth McGraw and S. P. Wong in 1992.[37]They used the following example (about heights of men and women): "in any random pairing of young adult males and females, the probability of the male being taller than the female is .92, or in simpler terms yet, in 92 out of 100 blind dates among young adults, the male will be taller than the female",[37]when describing the population value of the common language effect size.
Cliff's delta, or d, originally developed by Norman Cliff for use with ordinal data,[38][dubious – discuss] is a measure of how often the values in one distribution are larger than the values in a second distribution. Crucially, it does not require any assumptions about the shape or spread of the two distributions.
The sample estimate d is given by
$$d = \frac{\sum_{i,j}\left([x_i > x_j] - [x_i < x_j]\right)}{mn},$$
where the two distributions are of size n and m with items xᵢ and xⱼ, respectively, and [·] is the Iverson bracket, which is 1 when the contents are true and 0 when false.
d is linearly related to the Mann–Whitney U statistic; however, it captures the direction of the difference in its sign. Given the Mann–Whitney U, d is:
$$d = \frac{2U}{mn} - 1.$$
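A sketch computing Cliff's delta both directly from the sign comparison and via the Mann–Whitney U relation (hypothetical data):

```python
# Sketch: Cliff's delta computed directly from the pairwise sign comparison,
# then checked against the Mann–Whitney U relation d = 2U/(mn) - 1.
import numpy as np
from scipy.stats import mannwhitneyu

def cliffs_delta(x, y):
    x, y = np.asarray(x), np.asarray(y)
    greater = (x[:, None] > y[None, :]).sum()
    less = (x[:, None] < y[None, :]).sum()
    return (greater - less) / (len(x) * len(y))

x = [6, 7, 8, 7, 9]   # hypothetical ordinal data
y = [5, 6, 5, 7, 6]
u = mannwhitneyu(x, y, alternative="two-sided").statistic
print(cliffs_delta(x, y), 2 * u / (len(x) * len(y)) - 1)  # both 0.76
```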
One of the simplest effect sizes for measuring how much a proportion differs from 50% is Cohen's g.[10]: 147 For example, if 85.2% of arrests for car theft are of males, then the effect size of sex on arrest, as measured by Cohen's g, is g = 0.852 − 0.5 = 0.352. In general:
$$g = P - 0.50 \ \text{or}\ 0.50 - P \quad (\text{directional}),$$
$$g = |P - 0.50| \quad (\text{nondirectional}).$$
The units of Cohen's g (a proportion) are more intuitive than those of some other effect sizes. It is sometimes used in combination with the binomial test.
Confidence intervals for standardized effect sizes, especially Cohen's d and f², rely on the calculation of confidence intervals for noncentrality parameters (ncp). A common approach to constructing the confidence interval of ncp is to find the critical ncp values that place the observed statistic at the tail quantiles α/2 and (1 − α/2). SAS and the R package MBESS provide functions to find critical values of ncp.
For a single group, M denotes the sample mean, μ the population mean, SD the sample's standard deviation, σ the population's standard deviation, and n the sample size of the group. The t value is used to test the hypothesis on the difference between the mean and a baseline μ_baseline. Usually, μ_baseline is zero. In the case of two related groups, the single group is constructed from the differences in the pairs of samples, while SD and σ denote the sample's and population's standard deviations of the differences rather than within the original two groups:
$$t := \frac{M - \mu_\text{baseline}}{SE} = \frac{M - \mu_\text{baseline}}{SD/\sqrt{n}} = \frac{\sqrt{n}\left(\frac{M - \mu}{\sigma}\right) + \sqrt{n}\left(\frac{\mu - \mu_\text{baseline}}{\sigma}\right)}{SD/\sigma}$$
and
$$ncp = \sqrt{n}\left(\frac{\mu - \mu_\text{baseline}}{\sigma}\right),$$
and Cohen's
$$d := \frac{M - \mu_\text{baseline}}{SD}$$
is the point estimate of
$$\frac{\mu - \mu_\text{baseline}}{\sigma}.$$
So,
$$\tilde{d} = \frac{ncp}{\sqrt{n}}.$$
For two independent groups with sample sizes n₁ and n₂,
$$t := \frac{M_1 - M_2}{SD_\text{within}/\sqrt{\frac{n_1 n_2}{n_1 + n_2}}},$$
wherein
$$SD_\text{within} := \sqrt{\frac{SS_\text{within}}{df_\text{within}}} = \sqrt{\frac{(n_1 - 1)SD_1^2 + (n_2 - 1)SD_2^2}{n_1 + n_2 - 2}}$$
and
$$ncp = \sqrt{\frac{n_1 n_2}{n_1 + n_2}}\,\frac{\mu_1 - \mu_2}{\sigma},$$
and Cohen's
$$d := \frac{M_1 - M_2}{SD_\text{within}}$$
is the point estimate of
$$\frac{\mu_1 - \mu_2}{\sigma}.$$
So,
$$\tilde{d} = \frac{ncp}{\sqrt{\frac{n_1 n_2}{n_1 + n_2}}}.$$
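A sketch of this inversion for two independent groups, assuming SciPy: the confidence limits for d are found by solving for the ncp values that place the observed t at the required tail quantiles (the values of t, n₁, n₂ are hypothetical):

```python
# Sketch of the noncentrality-parameter approach described above: a 95% CI
# for Cohen's d (two independent groups) found by inverting the noncentral
# t-distribution.
from scipy.stats import nct
from scipy.optimize import brentq

def d_confidence_interval(t, n1, n2, alpha=0.05):
    df = n1 + n2 - 2
    scale = (n1 * n2 / (n1 + n2)) ** 0.5     # d = ncp / scale
    # Critical ncp values placing the observed t at the alpha/2 and
    # 1 - alpha/2 tail quantiles of the noncentral t-distribution.
    lo = brentq(lambda ncp: nct.cdf(t, df, ncp) - (1 - alpha / 2), -50, 50)
    hi = brentq(lambda ncp: nct.cdf(t, df, ncp) - alpha / 2, -50, 50)
    return lo / scale, hi / scale

print(d_confidence_interval(t=2.5, n1=30, n2=30))
```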
The one-way ANOVA test applies the noncentral F distribution, while with a given population standard deviation σ, the same test question applies the noncentral chi-squared distribution:
$$F := \frac{\left(SS_\text{between}/\sigma^2\right)/df_\text{between}}{\left(SS_\text{within}/\sigma^2\right)/df_\text{within}}.$$
For each jth sample within the ith group Xᵢ,ⱼ, denote
$$M_i(X_{i,j}) := \frac{\sum_{w=1}^{n_i} X_{i,w}}{n_i};\quad \mu_i(X_{i,j}) := \mu_i.$$
Meanwhile,
$$\begin{aligned} SS_\text{between}/\sigma^2 &= \frac{SS\left(M_i(X_{i,j});\ i=1,2,\dots,K,\ j=1,2,\dots,n_i\right)}{\sigma^2} \\ &= SS\left(\frac{M_i(X_{i,j} - \mu_i)}{\sigma} + \frac{\mu_i}{\sigma};\ i=1,2,\dots,K,\ j=1,2,\dots,n_i\right) \\ &\sim \chi^2\left(df = K - 1,\ ncp = SS\left(\frac{\mu_i(X_{i,j})}{\sigma};\ i=1,2,\dots,K,\ j=1,2,\dots,n_i\right)\right). \end{aligned}$$
So, the ncp(s) of both F and χ² equal
$$SS\left(\mu_i(X_{i,j})/\sigma;\ i=1,2,\dots,K,\ j=1,2,\dots,n_i\right).$$
In the case of n := n₁ = n₂ = ⋯ = n_K for K independent groups of the same size, the total sample size is N := n·K, and
$$\text{Cohen's }\tilde{f}^2 := \frac{SS(\mu_1, \mu_2, \dots, \mu_K)}{K\cdot\sigma^2} = \frac{SS\left(\mu_i(X_{i,j})/\sigma;\ i=1,2,\dots,K,\ j=1,2,\dots,n_i\right)}{n\cdot K} = \frac{ncp}{n\cdot K} = \frac{ncp}{N}.$$
The t-test for a pair of independent groups is a special case of one-way ANOVA. Note that the noncentrality parameter ncp_F of F is not comparable to the noncentrality parameter ncp_t of the corresponding t. In fact, ncp_F = ncp_t², and $\tilde{f} = \left|\frac{\tilde{d}}{2}\right|$.
| https://en.wikipedia.org/wiki/Effect_size#Eta-squared_(η2)
An adaptive grammar is a formal grammar that explicitly provides mechanisms within the formalism to allow its own production rules to be manipulated.
John N. Shutt defines adaptive grammar as a grammatical formalism that allows rule sets (aka sets of production rules) to be explicitly manipulated within a grammar. Types of manipulation include rule addition, deletion, and modification.[1]
The first description of grammar adaptivity (though not under that name) in the literature is generally[2][3][4] taken to be in a paper by Alfonso Caracciolo di Forino published in 1963.[5] The next generally accepted reference to an adaptive formalism (extensible context-free grammars) came from Wegbreit in 1970[6] in the study of extensible programming languages, followed by the dynamic syntax of Hanford and Jones in 1973.[7]
Until fairly recently, much of the research into the formal properties of adaptive grammars was uncoordinated between researchers, only first being summarized by Henning Christiansen in 1990[2] in response to a paper in ACM SIGPLAN Notices by Boris Burshteyn.[8] The Department of Engineering at the University of São Paulo has its Adaptive Languages and Techniques Laboratory, specifically focusing on research and practice in adaptive technologies and theory. The LTA also maintains a page naming researchers in the field.[9]
While early efforts made reference to dynamic syntax[7] and extensible,[6] modifiable,[10] dynamic,[11] and adaptable[2][12] grammars, more recent usage has tended towards the term adaptive (or some variant such as adaptativa,[13][14] depending on the publication language of the literature).[3] Iwai refers to her formalism as adaptive grammars,[13] but this specific use of simply adaptive grammars is not typically current in the literature without name qualification. Moreover, no standardization or categorization efforts have been undertaken between various researchers, although several have made efforts in this direction.[3][4]
Shutt categorizes adaptive grammar models into two main categories:[3][15] imperative formalisms, in which the rule set is modified over time as the steps of a derivation are performed, and declarative formalisms, in which rule modifications are scoped over the space of the derivation (the syntax tree) rather than ordered in time.
Jackson refines Shutt's taxonomy, referring to changes over time as global and changes over space as local, and adding a hybrid time-space category.[4]
Adaptive formalisms may be divided into two main categories: full grammar formalisms (adaptive grammars), and adaptive machines, upon which some grammar formalisms have been based.
The following is a list (by no means complete) of grammar formalisms that, by Shutt's definition above, are considered to be (or have been classified by their own inventors as being) adaptive grammars. They are listed in their historical order of first mention in the literature.
Described in Wegbreit's doctoral dissertation in 1970,[6] an extensible context-free grammar consists of a context-free grammar whose rule set is modified according to instructions output by a finite state transducer when reading the terminal prefix during a leftmost derivation. Thus, the rule set varies over position in the generated string, but this variation ignores the hierarchical structure of the syntax tree. Extensible context-free grammars were classified by Shutt as imperative.[3]
First introduced in 1985 as Generative Grammars[16] and later more elaborated upon,[17] Christiansen grammars (apparently dubbed so by Shutt, possibly due to conflict with Chomsky generative grammars) are an adaptive extension of attribute grammars. Christiansen grammars were classified by Shutt as declarative.[3]
The redoubling language L = {ww | w is a letter} is demonstrated as follows:[17]
First introduced in May 1990[8] and later expanded upon in December 1990,[10] modifiable grammars explicitly provide a mechanism for the addition and deletion of rules during a parse. In response to the ACM SIGPLAN Notices responses, Burshteyn later modified his formalism and introduced his adaptive Universal Syntax and Semantics Analyzer (USSA) in 1992.[18] These formalisms were classified by Shutt as imperative.[3]
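The core rule-manipulation idea shared by these formalisms can be illustrated with a small sketch (illustrative only, not any one published formalism; the grammar and the declare action are hypothetical):

```python
# Illustrative sketch: a grammar held as a mutable rule set, where an action
# fired during the parse adds a new production -- the rule-addition operation
# that modifiable grammars and their successors make explicit.
grammar = {
    "S":    [["decl", "use"]],
    "decl": [["id"]],   # a declaration introduces an identifier...
    "use":  [],         # ...and initially no identifier may be used
}

def declare(identifier):
    """Adaptive action: declaring an identifier adds a production
    that makes its later use syntactically valid."""
    grammar["use"].append([identifier])

declare("x")
print(grammar["use"])   # [['x']] -- 'x' may now be derived from 'use'
```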
Introduced in 1993, Recursive Adaptive Grammars (RAGs) were an attempt to introduce a Turing powerful formalism that maintained much of the elegance of context-free grammars.[3] Shutt self-classifies RAGs as being a declarative formalism.
Boullier's dynamic grammars, introduced in 1994,[11] appear to be the first adaptive grammar family of grammars to rigorously introduce the notion of a time continuum of a parse as part of the notation of the grammar formalism itself.[4] Dynamic grammars are a sequence of grammars, with each grammar Gᵢ differing in some way from the other grammars in the sequence, over time. Boullier's main paper on dynamic grammars also defines a dynamic parser, the machine that effects a parse against these grammars, and shows examples of how his formalism can handle such things as type checking, extensible languages, polymorphism, and other constructs typically considered to be in the semantic domain of programming language translation.
The work of Iwai in 2000[13] takes the adaptive automata of Neto[19] further by applying adaptive automata to context-sensitive grammars. Iwai's adaptive grammars (note the qualifier by name) allow for three operations during a parse: ? query (similar in some respects to a syntactic predicate, but tied to inspection of rules from which modifications are chosen), + addition, and − deletion (which it shares with its predecessor adaptive automata).
Introduced in 2000[20] and most fully discussed in 2006,[4] the §-Calculus (§ here pronounced meta-ess) allows for the explicit addition, deletion, and modification of productions within a grammar, as well as providing for syntactic predicates. This formalism is self-classified by its creator as both imperative and adaptive, or, more specifically, as a time-space adaptive grammar formalism, and was further classified by others as being an analytic formalism.[14][21]
The redoubling language L = {ww | w ∈ {a,b}⁺} is demonstrated as follows:
(Note on notation: In the above example, the #phi(...) statements identify the points in the production R that modify the grammar explicitly. #phi(A.X<-A.X C) represents a global modification (over time) and the #phi(N<=A.X) statement identifies a local modification (over space). The #phi(A.X<-"") statement in the S production effectively declares a global production called A.X by placing the empty string into that production before its reference by R.)
First described by Neto in 2001,[22] adaptive devices were later enhanced and expanded upon by Pistori in 2003.[23]
In 2002,[24] Adam Carmi introduced an LALR(1)-based adaptive grammar formalism known as Adapser. Specifics of the formalism do not appear to have been released.
In 2004,[14] César Bravo introduced the notion of merging the concept of appearance checking[25] with adaptive context-free grammars, a restricted form of Iwai's adaptive grammars,[13] showing these new grammars, called Adaptive CFGs with Appearance Checking, to be Turing powerful.
The formalisms listed below, while not grammar formalisms, either serve as the basis of full grammar formalisms, or are included here because they are adaptive in nature. They are listed in their historical order of first mention in the literature. | https://en.wikipedia.org/wiki/Adaptive_grammar |
In linguistics, metapragmatics is the study of how the effects and conditions of language use themselves become objects of discourse. The term is commonly associated with the semiotically-informed linguistic anthropology of Michael Silverstein.
Metapragmatic signalling allows participants to construe what is going on in an interaction.
Discussions of linguistic pragmatics—that is, discussions of what speech does in a particular context—are metapragmatic, because they describe the meaning of speech as action. Although it is useful to distinguish semantic (i.e. denotative or referential) meaning (dictionary meaning) from pragmatic meaning, and thus metasemantic discourse (for example, "Mesa means 'table' in Spanish") from metapragmatic utterances (e.g. "Say 'thank you' to your grandmother," or "It is impolite to swear in mixed company"), metasemantic characterizations of speech are a type of metapragmatic speech. This follows from the assertion that metapragmatic speech characterizes speech function, and denotation or reference are among the many functions of speech. Because metapragmatics describes relations between different discourses, it relates crucially to the concepts of intertextuality or interdiscursivity.
In anthropology, describing the rules of use for metapragmatic speech (in the same way that a grammar would describe the rules of use for 'ordinary' or semantic speech) is important because it can aid the understanding and analysis of a culture's language ideology. Silverstein has also described universal limits on metapragmatic awareness that help explain why some linguistic forms seem to be available to their users for conscious comment, while other forms seem to escape awareness despite efforts by a researcher to ask native speakers to repeat them or characterize their use.
Self-referential, or reflexive, metapragmatic statements are indexical. That is, their meaning comes from their temporal contiguity with their referent: themselves. Example: "This is an example sentence."
The anthropologist Aomar Boum uses a related concept of "ethnometapragmatics" to explain the Moroccan concept of showing the "plastic eye" ('ayn mika), which refers to the practice of ignoring something while pretending it is not there.[1]
| https://en.wikipedia.org/wiki/Metapragmatics
In the philosophy of language and metaphysics, metasemantics is the study of the foundations of natural language semantics (the philosophical study of meaning).[1][2][3] Metasemantics searches for "the proper understanding of compositionality, the object of truth-conditional analysis, metaphysics of reference, as well as, and most importantly, the scope of semantic theory itself"[4] and asks "how it is that expressions become endowed with their semantic significance".[5]
| https://en.wikipedia.org/wiki/Metasemantics
In logic, a metavariable (also metalinguistic variable[1] or syntactical variable)[2] is a symbol or symbol string which belongs to a metalanguage and stands for elements of some object language. For instance, in a sentence such as "Let A and B be two sentences of a language ℒ",
the symbols A and B are part of the metalanguage in which the statement about the object language ℒ is formulated.
John Corcoran considers this terminology unfortunate because it obscures the use of schemata and because such "variables" do not actually range over a domain.[3]: 220
The convention is that a metavariable is to be uniformly substituted with the same instance in all its appearances in a given schema. This is in contrast with nonterminal symbols in formal grammars, where the nonterminals on the right of a production can be substituted by different instances.[4]
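The contrast can be made concrete with a small sketch (illustrative only; the schema and the regex-based substitution are hypothetical):

```python
# Illustrative sketch: uniform substitution of metavariables into an axiom
# schema. Every occurrence of the same metavariable receives the same
# instance -- unlike a grammar nonterminal, whose occurrences on the right
# of a production may be expanded differently.
import re

def instantiate(schema, assignment):
    """Replace each metavariable (here: A, B) by its assigned formula."""
    return re.sub(r"[AB]", lambda m: assignment[m.group()], schema)

schema = "A -> (B -> A)"
print(instantiate(schema, {"A": "(p & q)", "B": "r"}))
# prints: (p & q) -> (r -> (p & q))
```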
Attempts to formalize the notion of metavariable result in some kind of type theory.[5] | https://en.wikipedia.org/wiki/Metavariable_(logic)
Ecosemiotics is a branch of semiotics in its intersection with human ecology, ecological anthropology and ecocriticism. It studies sign processes in culture which relate to other living beings, communities, and landscapes.[1] Ecosemiotics also deals with sign-mediated aspects of ecosystems.[2]
As stressed in ecosemiotic studies, the environment has semiotic quality in different ways and at different levels. The material environment has affordances and potentials to participate in sign relations. Animal species attribute meanings to the environment based on their needs and umwelts. In human culture, the environment can become meaningful in literary and artistic representations or through the symbolization of animals or landscapes. Cultural representations of the environment in turn influence the natural environment through human actions.
Ecosemiotics analyzes processes, transmissions and problems that occur in and between the different semiotic layers of the environment. The central focus of ecosemiotics concerns the role of concepts (sign-based models people have) in designing and changing the environment. Concepts of ecosemiotic analysis are, for instance, semiocide, affordance, ecofield, consortium, and dissent.
The field was initiated by Winfried Nöth and Kalevi Kull, and later elaborated by Almo Farina and Timo Maran.[3] Ecosemiotics includes (or largely overlaps with) the semiotics of landscape.[4]
| https://en.wikipedia.org/wiki/Ecosemiotics
Ethnosemiotics is a disciplinary perspective which links semiotics concepts to ethnographic methods.
Algirdas Julien Greimas and Joseph Courtés first defined ethnosemiotics in Semiotics and Language: An Analytical Dictionary.
"Ethno-semiotics is not truly an autonomous semiotics. If it were, it would be in competition with a field of knowledge already established under the name of ethnology or anthropology, whose contribution to the advent of semiotics itself is considerable. Rather, it is a privileged area of curiosities and of methodological exercises ... .
Given that general semiotics authorizes the treatment of non-linguistic (gestual, somatic, etc.) syntagmatic concatenations as discourses or texts, the field of ethno-linguistics can be enlarged to become an ethno-semiotics; analyses, still rare, of rituals and ceremonies lead us to suppose that ethnology can become, once again, the privileged locus for the construction of general models of signifying behavior."[1]
During the 2000s, in Italy, interest in the discipline was renewed thanks to the studies and research of Maurizio del Ninno, Tarcisio Lancioni, and Francesco Marsciani. Under the direction of Francesco Marsciani, the Ethnosemiotic Centre of the Bologna University (CUBE) was founded in Bologna in 2007 and is active in different fields of inquiry. In 2015, CUBE's experience led to the series Quaderni di Etnosemiotica and to the Ethnosemiotics Lab.
The ethnosemiologist analyses the systems of signifiers identified in specific cultural contexts through observation and the application of ethnographic methods. To date, the principal studies carried out have focused on urban spaces, therapy, rituals, folklore, and everyday practices.
In Italy, two main approaches to ethnosemiotics can be distinguished.
Since 2015 the Ethnosemiotics Lab has carried out research in both the theoretical and the methodological fields. Developed inside CUBE, active at the Bologna University, it conducts multidisciplinary research. The synergy between ethnographic observation and the analysis of manifested values has proved effective in describing social phenomena. | https://en.wikipedia.org/wiki/Ethnosemiotics
A gender symbol is a pictogram or glyph used to represent sex and gender, for example in biology and medicine, in genealogy, or in the sociological fields of gender politics, LGBT subculture and identity politics.
In his books Mantissa Plantarum (1767) and Mantissa Plantarum Altera (1771), Carl Linnaeus regularly used the planetary symbols of Mars, Venus and Mercury – ♂, ♀, ☿ – for male, female and hermaphroditic (perfect) flowers, respectively.[1] Botanists now use ⚥ for the last.[2]
In genealogy, including kinship in anthropology and pedigrees in animal husbandry, the geometric shapes △ or □ are used for male and ○ for female. These are also used on public toilets in some countries.
The modern international pictograms used to indicate male and female public toilets, 🚹︎ and 🚺︎, became widely used in the 1960s and 1970s. They are sometimes abstracted to ▽ for male and △ for female.[3]
The three standard sex symbols in biology are male ♂, female ♀ and hermaphroditic ⚥; originally the symbol for Mercury, ☿, was used for the last. These symbols were first used by Carl Linnaeus in 1751 to denote whether flowers were male (stamens only), female (pistil only) or perfect flowers with both pistils and stamens.[1] (Most flowering and conifer plant species are hermaphroditic and either bear flowers/cones that themselves are hermaphroditic, or bear both male and female flowers/cones on the same plant.) These symbols are now ubiquitous in biology and medicine to indicate the sex of an individual, for example of a patient.[4][a]
Kinship charts use a triangle △ for male and circle ○ for female.[6] Pedigree charts published in scientific papers use an earlier anthropological convention of a square □ for male and a circle ○ for female.[7]
Before a shape distinction was adopted, all individuals had been represented by a circle in Morgan's 1871 Systems of Consanguinity and Affinity of the Human Family, where gender is encoded in the abbreviations for the kin relation (e.g. M for 'mother' and F for 'father').[8] W. H. R. Rivers distinguished gender in the words of the language being recorded by writing male kinship terms in all capitals and female kinship terms with normal capitalization. That convention was quite influential for a time, and his convention of prioritizing male kin by placing them to the left and females to the right continues to this day, though there have been exceptions, such as Margaret Mead, who placed females to the left.[9]
The modern gender symbols used for public toilets, 🚹︎ for male and 🚺︎ for female, are pictograms created for the British Rail system in the mid-1960s.[10] Before that, local usage had been more variable. For example, schoolhouse outhouses in the 19th-century United States had ventilation holes in their doors that were shaped like a starburst Sun ✴ or like a crescent Moon ☾, respectively, to indicate whether the toilet was for use by boys or girls.[11] The British Rail pictograms – often color-coded blue and red[citation needed] – are now the norm for marking public toilets in much of the world, with the female symbol distinguished by a triangular skirt or dress, and in early years (and sometimes still) the male symbol stylized like a tuxedo.[3]
These symbols are abstracted to varying degrees in different countries – for example, the circle-and-triangle variants (male) and (female) commonly found on portable toilets, sometimes abstracted further to a triangle △ (representing a skirt or dress) for female and an inverted triangle ▽ (representing a broad-shouldered tuxedo) for male in Lithuania.[3]
In elementary schools, the pictograms may be of children rather than of adults, with the girl distinguished by her hair. In themed locations, such as bars and tourist attractions, a thematic image or figurine of a man and woman or boy and girl may be used.[citation needed]
In Poland, an inverted triangle ▽ is used for male while a circle ○ is used for female.[3]
In mainland China, silhouettes of heads in profile may be used as gender pictograms,[citation needed] generally alongside the Chinese characters for male (男) and female (女).[12]
Some contemporary designs for restroom signage in public spaces are shifting away from symbols that demonstrate gender as binary as a way to be more inclusive.[13][14]
Since the 1970s, variations of gender symbols have been used to express sexual orientation and gender politics. Two interlocking male symbols ⚣ are used to represent gay men, while two interlocking female symbols ⚢ are often used to represent lesbians.[15] Two female and two male symbols interlocked represent bisexuality, while an interlocked female and male symbol ⚤ represents heterosexuality.[16]
The combined male-female symbol ⚥ is used to represent androgyne people;[17] when additionally combined with the female ♀ and male ♂ symbols to create the symbol ⚧, it indicates gender inclusivity,[citation needed] though it is also used as a transgender symbol.[18][19][17] The male-with-stroke symbol ⚦ is used for transgender people.[17]
The Mercury symbol ☿ and the combined female/male symbol ⚥ have both been used to represent intersex people.[20][16] The alchemical symbol for sublimate of antimony 🜬 is used to represent non-binary people. The neuter symbol ⚲ is also used to represent non-binary people, especially those who are neutrois or of a neutral gender.[16] A featureless circle ⚪︎ is also used to represent non-binary people, especially those who are agender or genderless, as well as asexuality.[21][16]
Since the 2000s, numerous variants of gender symbols have been introduced in the context of LGBT culture and politics.[16] Some of these symbols have been adopted into Unicode (in the Miscellaneous Symbols block) beginning with version 4.1 in 2005. | https://en.wikipedia.org/wiki/Gender_symbol
The following is a list of semiotics terms; that is, those words used in semiotics, the discussion, classification, criticism, and analysis of the study of sign processes (semiosis), analogy, metaphor, signification and communication, signs and symbols. This list also includes terms which are not part of semiotic theory per se but which are commonly found alongside their semiotic brethren; these terms come from linguistics, literary theory and narratology.
| https://en.wikipedia.org/wiki/Index_of_semiotics_articles
A language-game (German: Sprachspiel) is a philosophical concept developed by Ludwig Wittgenstein, referring to simple examples of language use and the actions into which the language is woven. Wittgenstein argued that a word or even a sentence has meaning only as a result of the "rule" of the "game" being played. Depending on the context, for example, the utterance "Water!" could be an order, the answer to a question, or some other form of communication.
In his work Philosophical Investigations (1953), Ludwig Wittgenstein regularly referred to the concept of language-games.[1] Wittgenstein rejected the idea that language is somehow separate and corresponding to reality, and he argued that concepts do not need clarity for meaning.[2] Wittgenstein used the term "language-game" to designate forms of language simpler than the entirety of a language itself, "consisting of language and the actions into which it is woven"[3] and connected by family resemblance (Familienähnlichkeit). The concept was intended "to bring into prominence the fact that the speaking of language is part of an activity, or a form of life,"[4] which gives language its meaning.
Wittgenstein develops this discussion of games into the key notion of a language-game. He introduces the term using simple examples,[3] but intends it to be used for the many ways in which we use language.[4] The central component of language-games is that they are uses of language, and language is used in multifarious ways. For example, in one language-game a word might be used to stand for (or refer to) an object, but in another the same word might be used for giving orders, or for asking questions, and so on. The famous example is the meaning of the word "game". We speak of various kinds of games: board games, betting games, sports, "war games". These are all different uses of the word "games". Wittgenstein also gives the example of "Water!", which can be used as an exclamation, an order, a request, or an answer to a question.[5] The meaning of the word depends on the language-game within which it is being used. Another way Wittgenstein puts the point is that the word "water" has no meaning apart from its use within a language-game. One might use the word as an order to have someone else bring you a glass of water. But it can also be used to warn someone that the water has been poisoned.
Wittgenstein does not limit the application of his concept of language-games to word-meaning. He also applies it to sentence-meaning. For example, the sentence "Moses did not exist"[6] can mean various things. Wittgenstein argues that independently of use the sentence does not yet 'say' anything. It is 'meaningless' in the sense of not being significant for a particular purpose. It only acquires significance if we fix it within some context of use. Thus, it fails to say anything because the sentence as such does not yet determine some particular use. The sentence is only meaningful when it is used to say something. For instance, it can be used so as to say that no person or historical figure fits the set of descriptions attributed to the person that goes by the name of "Moses". But it can also mean that the leader of the Israelites was not called Moses. Or that there cannot have been anyone who accomplished all that the Bible relates of Moses, etc. What the sentence means thus depends on its context of use.
The term 'language-game' is used to refer to several related notions.
These meanings are not separated from each other by sharp boundaries, but blend into one another (as suggested by the idea of family resemblance). The concept is based on the following analogy: the rules of language are analogous to the rules of games; thus saying something in a language is analogous to making a move in a game. The analogy between a language and a game demonstrates that words have meaning depending on the uses made of them in the various and multiform activities of human life. (The concept is not meant to suggest that there is anything trivial about language, or that language is "just a game".)
The classic example of a language-game is the so-called "builder's language" introduced in §2 of the Philosophical Investigations:
The language is meant to serve for communication between a builder A and an assistant B. A is building with building-stones: there are blocks, pillars, slabs and beams. B has to pass the stones, in the order in which A needs them. For this purpose they use a language consisting of the words "block", "pillar" "slab", "beam". A calls them out; — B brings the stone which he has learnt to bring at such-and-such a call. Conceive this as a complete primitive language.[7][8]
Later "this" and "there" are added (with functions analogous to the function these words have in natural language), and "a, b, c, d" as numerals. An example of its use: builder A says "d — slab — there" and points, and builder B counts four slabs, "a, b, c, d..." and moves them to the place pointed to by A. The builder's language is an activity into which is woven something we would recognize as language, but in a simpler form. This language-game resembles the simple forms of language taught to children, and Wittgenstein asks that we conceive of it as "a complete primitive language" for a tribe of builders. | https://en.wikipedia.org/wiki/Language_game_(philosophy) |
Neurosemiotics is an area of science which studies the neural aspects of meaning making. It interconnects neurobiology, biosemiotics and cognitive semiotics.[1] Neurolinguistics, neuropsychology and neurosemantics can be seen as parts of neurosemiotics.
The pioneers of neurosemiotics include Jakob von Uexküll, Kurt Goldstein, Friedrich Rothschild, and others.
The first graduate courses on neurosemiotics have been taught in some American and Canadian universities since the 1970s. The term 'neurosemiotics' itself is not much older.
Neurosemiotics identifies the conditions and processes necessary for semiosis in neural tissue. It also describes differences in the complexity of meaning making among animals whose nervous systems and brains differ in complexity.[3][4][5][6]
| https://en.wikipedia.org/wiki/Neurosemiotics
The following outline is provided as an overview of and topical guide to semiotics:
Semiotics – the study of meaning-making, signs and sign processes (semiosis), indication, designation, likeness, analogy, metaphor, symbolism, signification, and communication. Semiotics is closely related to the field of linguistics, which, for its part, studies the structure and meaning of language more specifically. It is also called semiotic studies, or semiology (in the Saussurean tradition).
Semiotics can be described as an academic discipline and as a field of study. | https://en.wikipedia.org/wiki/Outline_of_semiotics