Marketing buzz or simply buzz—a term used in viral marketing—is the interaction of consumers and users with a product or service which amplifies or alters the original marketing message.[1] This emotion, energy, excitement, or anticipation about a product or service can be positive or negative. Buzz can be generated by intentional marketing activities by the brand owner or it can be the result of an independent event that enters public awareness through social or traditional media such as newspapers. Marketing buzz originally referred to oral communication, but in the age of Web 2.0, social media such as Facebook, Twitter, Instagram and YouTube are now the dominant communication channels for marketing buzz. Some of the common tactics used to create buzz include building suspense around a launch or event, creating a controversy, or reaching out to bloggers and social media influencers.

Social media participants in any particular virtual community can be divided into three segments: influencers, individuals, and consumers. Influencers amplify both positive and negative messages to the target audience, often because of their reputation within the community. Therefore, a successful social media campaign must find and engage with influencers that are positively inclined to the brand, providing them with product information and incentives to forward it on to the community. Individuals are members of the community who find value in absorbing the content and interacting with other members. The purpose of the marketing strategy is ultimately to turn individuals into the third group, consumers, who actually purchase the product in the real world and then develop brand loyalty that forms the basis for ongoing positive marketing buzz. The challenge for the marketer is to understand the potentially complex dynamics of the virtual community and be able to use them effectively.[2]

Development of a social media marketing strategy must also take into account interaction with traditional media, including the potential both for synergies, where the two combine to greater effect, and cannibalism, where one takes market from the other, leading to no real market expansion.[2] This can be seen in the growing connection between marketing buzz and traditional television broadcasts.[3] Shows monitor buzz, encouraging audience participation on social media during broadcasts, and in 2013 the Nielsen ratings were expanded to include social media rankings based on Twitter buzz.[4] But the best known example is the Super Bowl advertising phenomenon. Companies build anticipation before the game using different tactics that include releasing the ads or teasers for them online, soliciting user input such as Doritos' Crash the Super Bowl competition, where online voting between consumer-created ads determines which will air during the game, and purposefully generating controversy, such as the 2013 and 2014 SodaStream ads that were rejected by the network airing the game for directly naming competitors.[citation needed]

For advertising to generate effective positive buzz, research has shown that it must engage the viewer's emotions in a positive way.[5] Budweiser's Super Bowl advertising has been the most successful at generating buzz as measured by the USA Today Super Bowl Ad Meter survey over its 26-year history, a testament to its masterful use of heartwarming stories, cute baby animals, majestic horses, and core American values to stir the positive emotions of audiences across a wide range of demographics.
Using controversy to generate marketing buzz can be risky because research shows that while mild controversy stimulates more buzz than completely neutral topics, as the topic becomes more uncomfortable the amount of buzz drops significantly. The most buzz will be generated in a "sweet spot" where the topic is interesting enough to invite comment, but not controversial enough to keep people away.[6] There is also substantial risk of generating negative buzz when using controversy, for example Coca-Cola's 2014 It's Beautiful ad that aired during the Super Bowl and generated substantial backlash.[7]

Two common terms used to describe buzz are volume, which quantifies the number of interchanges related to a product or topic in a given time period, and rating or level, a more qualitative measure of the positive or negative sentiment or amount of engagement associated with the product.[8] Basic social media measures of buzz volume include visits, views, mentions, followers and subscribers; next-level measures such as shares, replies, clicks, re-tweets, comments and wall posts provide a better indication of the participants' engagement levels because they require action in response to an initial communication.[citation needed]

It is possible for firms to track the marketing buzz of their products online using buzz monitoring. Many tools are available to gather buzz data; some search the web looking for particular mentions in blogs or posts, others monitor conversations on social media channels and score them on popularity, influence, and sentiment using algorithms that assess emotion and personal engagement. Buzz monitoring can be used to assess the performance of marketing strategies as well as quickly identify negative buzz or product issues that require a response.[2] It can also be used to identify and capitalize on current trends that will shift consumer behaviors. For example, the low-carb diet was buzzing months before sales at grocery stores reflected the trend.[9] Monitoring buzz around certain topics can be used as an anonymous equivalent of a traditional focus group in new product development. For some companies it is important to understand the buzz surrounding a product before committing to the market.[10]

Positive "buzz" is often a goal of viral marketing, public relations, and advertising on Web 2.0 media.[11] It occurs when high levels of individual engagement on social media drive the buzz volume up around positive associations with the product or brand, to the point that the product easily captures the attention of consumers and media because the information is perceived as entertaining, fascinating, or even newsworthy.[12] Examples of products with strong positive marketing buzz upon introduction are Harry Potter, Volkswagen's New Beetle, Pokémon, Beanie Babies, and The Blair Witch Project.[13] Negative buzz can result from events that generate bad associations with the product in the mind of the public, such as a product safety recall, or from unintended consequences of ill-advised marketing strategies. If not swiftly counteracted, negative buzz can be harmful to a product's success, and the most social-network-savvy organizations prepare for these eventualities. Examples of negative buzz include the United Colors of Benetton's shock advertising campaign that generated numerous boycotts and lawsuits, and the 2014 General Motors recall of cars many years after a known issue with a faulty ignition switch which the company admitted had caused 13 deaths.
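The two measures described above, volume and rating, can be made concrete with a small sketch. The example below is purely illustrative: the field names, word lists, and scoring rule are invented for the example, and commercial buzz-monitoring tools use far more sophisticated sentiment models.

from datetime import datetime, timedelta

# Hypothetical mentions collected from social media (fields invented for illustration).
mentions = [
    {"text": "Love the new launch, amazing product!", "time": datetime(2024, 5, 1, 10)},
    {"text": "Terrible support, very disappointed.", "time": datetime(2024, 5, 1, 15)},
    {"text": "Amazing teaser, can't wait!", "time": datetime(2024, 5, 2, 9)},
]

POSITIVE = {"love", "amazing", "great"}
NEGATIVE = {"terrible", "disappointed", "awful"}

def buzz_volume(mentions, start, period=timedelta(days=1)):
    # Volume: number of interchanges about the product within a given time period.
    return sum(start <= m["time"] < start + period for m in mentions)

def buzz_rating(mentions):
    # Rating: crude net sentiment, positive minus negative word hits per mention.
    score = 0
    for m in mentions:
        words = set(m["text"].lower().replace(",", " ").replace("!", " ").replace(".", " ").split())
        score += len(words & POSITIVE) - len(words & NEGATIVE)
    return score / len(mentions)

if __name__ == "__main__":
    print(buzz_volume(mentions, datetime(2024, 5, 1)))  # 2 mentions on 1 May
    print(buzz_rating(mentions))                        # above zero means net positive buzz

The same structure scales to the "next level" engagement measures (shares, replies, re-tweets) simply by counting those actions instead of raw mentions.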
In the latter case, traditional media also contributed to the amplification of the story through reporting on the ongoing recalls and GM CEO Mary Barra's testimony before the US House of Representatives.[citation needed]

Buzz works as a marketing tool because individuals in social settings are easier to trust than organizations that may be perceived to have vested interests in promoting their products and/or services. Interpersonal communication has been shown to be more effective in influencing consumers' purchasing decisions than advertising alone, and the two combined have the greatest power.[5]

A 2013 paper by Xueming Luo and Jie Zhang[8] lists numerous previous studies that have shown a positive correlation between buzz rating and/or volume and product sales or company revenue. To expand further on that research, Luo and Zhang investigated the relationship of buzz and web traffic and their effect on stock market performance for nine top publicly traded firms in the computer hardware and software industries. Comparing data on consumer buzz rating and volume from a popular electronic product review website with the firms' stock returns over the same period, they found a strong positive correlation between online buzz and stock performance. They also found that due to increasing online content and limitations in consumer attention, competing buzz for rival products could have a negative effect on a firm's performance. For these nine companies, buzz had a greater effect than traffic and accounted for approximately 11% of the total variation of stock returns, with 6% due to the firms' own marketing driving the stock price up and 5% due to rival firms' buzz driving it down.[citation needed]

As consumers increasingly expect to have access to buzz about products as part of their purchasing decisions and to interact with the brand in social media, successful companies are being driven to adopt social media marketing strategies to stay competitive. To successfully plan and implement these campaigns requires the ability to predict their effectiveness and therefore the return on investment that can be expected for the dollars expended.[14]

With the addition of new interactive and digital media technologies into the marketing industry, a significant emphasis has been put on the use of online content to generate buzz about a product, service, or company.[15] Companies well known for this practice are Amazon and Netflix, both of which use individual customer patterns and usage trends on their sites to tailor each customer's future experience around the individual.[15] As a result, this works towards one of the main goals of buzz marketing: to provide each customer with a unique experience that motivates them to purchase a product.[16]

Many companies are also using their online presence to generate buzz by allowing users to post reviews on their sites, as well as making use of reviews posted on third-party sites. This concept of online reviewing also works to generate negative buzz, and has been a topic of criticism.
Online review site Yelp has been subject to criticism after allegations that business owners were paying the site to publish only the positive reviews, in an attempt to boost sales and hide negative buzz about these businesses.[17] After 10 small businesses filed a lawsuit against Yelp in 2010, the site decided to remove the "Favourite Review" option, which previously allowed a business owner to choose the review they liked the most and have it showcased, and made content previously hidden from potential customers visible.[18]

Additionally, the social media site Twitter has been a game changer in terms of marketing buzz in the digital age.[19] The microblogging site, with about 350,000 tweets sent per minute,[20] has quickly become an important tool in business and in marketing. Companies are now creating Twitter pages as a means of personal communication with their target audience. Twitter allows businesses of any size to speak directly to their intended demographic, and allows the customer to communicate back, a feature unique to marketing technologies and methods utilized in the digital age.[19] In addition, companies can pay to have their tweets show up on the Twitter "timeline" of users they want to reach. Many celebrities and public figures carrying a large number of Twitter "followers" also accept payment to tweet about products.[21]

Some notable examples of buzz marketing in the digital age include the highly successful marketing campaign for the third season of the AMC series Mad Men. The TV channel created an online avatar maker that allowed fans of the show to create an online version of themselves in the 1960s style portrayed on the show. The site saw over half a million users in the first week and has since been updated to promote subsequent seasons.[22] The campaign gave the show some of its highest ratings seen up to that point.[22] Another successful viral buzz marketing campaign surrounded the 2007 "found footage" motion picture Paranormal Activity. The small-budget film was originally released to only select cities. A trailer was then released to the public with an ending calling on individuals to go online and "demand" the movie be brought to a city near them. Once a city was demanded enough times, the film would be screened in theatres in that city. The success of this movie can be credited to this marketing campaign, which worked on the principle of "we always want what we don't have".[22]
https://en.wikipedia.org/wiki/Marketing_buzz
Memetics is a theory of the evolution of culture based on Darwinian principles with the meme as the unit of culture. The term "meme" was coined by biologist Richard Dawkins in his 1976 book The Selfish Gene,[1] to illustrate the principle that he later called "Universal Darwinism". All evolutionary processes depend on information being copied, varied, and selected, a process also known as variation with selective retention. The conveyor of the information being copied is known as the replicator, with the gene functioning as the replicator in biological evolution. Dawkins proposed that the same process drives cultural evolution, and he called this second replicator the "meme", citing examples such as musical tunes, catchphrases, fashions, and technologies. Like genes, memes are selfish replicators and have causal efficacy; in other words, their properties influence their chances of being copied and passed on. Some succeed because they are valuable or useful to their human hosts while others are more like viruses.

Just as genes can work together to form co-adapted gene complexes, so groups of memes acting together form co-adapted meme complexes or memeplexes. Memeplexes include (among many other things) languages, traditions, scientific theories, financial institutions, and religions. Dawkins famously referred to religions as "viruses of the mind".[2]

Among proponents of memetics are psychologist Susan Blackmore, author of The Meme Machine, who argues that when our ancestors began imitating behaviours, they let loose a second replicator and co-evolved to become the "meme machines" that copy, vary, and select memes in culture.[3] Philosopher Daniel Dennett develops memetics extensively, notably in his books Darwin's Dangerous Idea[4] and From Bacteria to Bach and Back.[5] He describes the units of memes as "the smallest elements that replicate themselves with reliability and fecundity",[6] and claims that "Human consciousness is itself a huge complex of memes."[7] In The Beginning of Infinity,[8] physicist David Deutsch contrasts static societies that depend on anti-rational memes suppressing innovation and creativity with dynamic societies based on rational memes that encourage enlightenment values, scientific curiosity, and progress.

Criticisms of memetics include claims that memes do not exist, that the analogy with genes is false, that the units cannot be specified, that culture does not evolve through imitation, and that the sources of variation are intelligently designed rather than random. Critics of memetics include biologist Stephen Jay Gould, who calls memetics a "meaningless metaphor". Philosopher Dan Sperber argues against memetics as a viable approach to cultural evolution because cultural items are not directly copied or imitated but are reproduced.[9] Anthropologist Robert Boyd and biologist Peter Richerson work within the alternative, and more mainstream, field of cultural evolution theory and gene-culture coevolution.[10] Dual inheritance theory has much in common with memetics but rejects the idea that memes are replicators. From this perspective, memetics is seen as just one of several approaches to cultural evolution and one that is generally considered less useful than the alternatives of gene-culture coevolution or dual inheritance theory. The main difference is that dual inheritance theory ultimately depends on biological advantage to genes, whereas memetics treats the meme as a second replicator in its own right.
Memetics also extends to the analysis of Internet culture and Internet memes.[11]

In his book The Selfish Gene (1976), the evolutionary biologist Richard Dawkins used the term meme to describe a unit of human cultural transmission analogous to the gene, arguing that replication also happens in culture, albeit in a different sense. While cultural evolution itself is a much older topic, with a history that dates back at least as far as Darwin's era, Dawkins (1976) proposed that the meme is a unit of culture residing in the brain and is the mutating replicator in human cultural evolution. After Dawkins, many discussed this unit of culture as evolutionary "information" which replicates with rules analogous to Darwinian selection.[12] A replicator is a pattern that can influence its surroundings – that is, it has causal agency – and can propagate. This proposal resulted in debate among anthropologists, sociologists, biologists, and scientists of other disciplines. Dawkins did not provide a comprehensive explanation of how replication of units of information in the brain controls human behaviour and culture, as the main focus of the book was on gene expression. Dawkins apparently did not intend to present a comprehensive theory of memetics in The Selfish Gene, but rather coined the term meme in a speculative spirit.[citation needed] Accordingly, different researchers came to define the term "unit of information" in different ways.

The evolutionary model of cultural information transfer is based on the concept that memes—units of information—have an independent existence, are self-replicating, and are subject to selective evolution through environmental forces.[13] Starting from a proposition put forward in the writings of Dawkins, this model has formed the basis of a new area of study, one that looks at the self-replicating units of culture. It has been proposed that just as memes are analogous to genes, memetics is analogous to genetics.

The modern memetics movement dates from the mid-1980s. A January 1983 "Metamagical Themas" column[14] by Douglas Hofstadter, in Scientific American, was influential – as was his 1985 book of the same name. "Memeticist" was coined as analogous to "geneticist" – originally in The Selfish Gene. Later, Arel Lucas suggested that the discipline that studies memes and their connections to human and other carriers of them be known as "memetics" by analogy with "genetics".[14] Dawkins' The Selfish Gene has been a factor in attracting the attention of people of disparate intellectual backgrounds. Another stimulus was the publication in 1991 of Consciousness Explained by Tufts University philosopher Daniel Dennett, which incorporated the meme concept into a theory of the mind. In his 1991 essay "Viruses of the Mind", Richard Dawkins used memetics to explain the phenomenon of religious belief and the various characteristics of organised religions. By then, memetics had also become a theme appearing in fiction (e.g. Neal Stephenson's Snow Crash). The idea of language as a virus had already been introduced by William S. Burroughs as early as 1962 in his novel The Ticket That Exploded, and continued in The Electronic Revolution, published in 1970 in The Job.
The foundation of memetics in its full modern incarnation was launched by Douglas Rushkoff's Media Virus: Hidden Agendas in Popular Culture in 1995,[15] and was accelerated with the publication in 1996 of two more books by authors outside the academic mainstream: Virus of the Mind: The New Science of the Meme by former Microsoft executive turned motivational speaker and professional poker player Richard Brodie, and Thought Contagion: How Belief Spreads Through Society by Aaron Lynch, a mathematician and philosopher who worked for many years as an engineer at Fermilab. Lynch claimed to have conceived his theory totally independently of any contact with academics in the cultural evolutionary sphere, and apparently was not aware of The Selfish Gene until his book was very close to publication.[citation needed]

Around the same time as the publication of the books by Lynch and Brodie, the e-journal Journal of Memetics – Evolutionary Models of Information Transmission[16] (published electronically from 1997 to 2005[17]) first appeared. It was first hosted by the Centre for Policy Modelling at Manchester Metropolitan University. The e-journal soon became the central point for publication and debate within the nascent memeticist community. (There had been a short-lived paper-based memetics publication starting in 1990, the Journal of Ideas edited by Elan Moritz.[18]) In 1999, Susan Blackmore, a psychologist at the University of the West of England, published The Meme Machine, which more fully worked out the ideas of Dennett, Lynch, and Brodie and attempted to compare and contrast them with various approaches from the cultural evolutionary mainstream, as well as providing novel (and controversial) memetics-based theories for the evolution of language and the human sense of individual selfhood.

The term meme derives from the Ancient Greek μιμητής (mimētḗs), meaning "imitator, pretender". The similar term mneme was used in 1904 by the German evolutionary biologist Richard Semon, best known for his development of the engram theory of memory, in his work Die mnemischen Empfindungen in ihren Beziehungen zu den Originalempfindungen, translated into English in 1921 as The Mneme.[19] Until Daniel Schacter published Forgotten Ideas, Neglected Pioneers: Richard Semon and the Story of Memory in 2000, Semon's work had little influence, though it was quoted extensively in Erwin Schrödinger's 1956 Tarner Lecture "Mind and Matter". Richard Dawkins (1976) apparently coined the word meme independently of Semon, writing this: "'Mimeme' comes from a suitable Greek root, but I want a monosyllable that sounds a bit like 'gene'. I hope my classicist friends will forgive me if I abbreviate mimeme to meme. If it is any consolation, it could alternatively be thought of as being related to 'memory', or to the French word même."[20]

David Hull (2001) pointed out Dawkins's oversight of Semon's work. Hull suggests this early work as an alternative origin to memetics by which Dawkins's memetic theory and its classicist connection to the concept can be negotiated. "Why not date the beginnings of memetics (or mnemetics) as 1904 or at the very least 1914? If [Semon's] two publications are taken as the beginnings of memetics, then the development of memetics [...] has been around for almost a hundred years without much in the way of conceptual or empirical advance!"[21] Despite this, Semon's work remains mostly understood as distinct from memetic origins, even with the overt similarities accounted for by Hull.

The memetics movement split almost immediately into two.
The first group were those who wanted to stick to Dawkins' definition of a meme as "a unit of cultural transmission". Gibron Burchett, a memeticist responsible for helping to research and co-coin the term memetic engineering, along with Leveious Rolando and Larry Lottman, has stated that a meme can be defined, more precisely, as "a unit of cultural information that can be copied, located in the brain". This thinking is more in line with Dawkins' second definition of the meme in his book The Extended Phenotype. The second group wants to redefine memes as observable cultural artifacts and behaviors. However, in contrast to those two positions, the article "Consciousness in meme machines" by Susan Blackmore rejects neither movement.[22] Andrej Drapal[23] tried to bridge the gap by differentiating memes, understood as quantum entities that exist per se in superposition and collapse when detected by brains, from cultural artifacts: memes are to artifacts as genotypes are to phenotypes.

These two schools became known as the "internalists" and the "externalists". Prominent internalists included both Lynch and Brodie; the most vocal externalists included Derek Gatherer, a geneticist from Liverpool John Moores University, and William Benzon, a writer on cultural evolution and music. The main rationale for externalism was that internal brain entities are not observable, and memetics cannot advance as a science, especially a quantitative science, unless it moves its emphasis onto the directly quantifiable aspects of culture. Internalists countered with various arguments: that brain states will eventually be directly observable with advanced technology, that most cultural anthropologists agree that culture is about beliefs and not artifacts, or that artifacts cannot be replicators in the same sense as mental entities (or DNA) are replicators. The debate became so heated that a 1998 Symposium on Memetics, organised as part of the 15th International Conference on Cybernetics, passed a motion calling for an end to definitional debates. McNamara demonstrated in 2011 that functional connectivity profiling using neuroimaging tools enables the observation of the processing of internal memes, "i-memes", in response to external "e-memes".[24] This was developed further in the paper "Memetics and Neural Models of Conspiracy Theories" by Duch, where a model of memes as a quasi-stable neural associative memory attractor network is proposed, and the formation of a memeplex leading to conspiracy theories is illustrated with the simulation of a self-organizing network.[25]

An advanced statement of the internalist school came in 2002 with the publication of The Electric Meme, by Robert Aunger, an anthropologist from the University of Cambridge. Aunger also organised a conference in Cambridge in 1999, at which prominent sociologists and anthropologists were able to give their assessment of the progress made in memetics to that date. This resulted in the publication of Darwinizing Culture: The Status of Memetics as a Science, edited by Aunger and with a foreword by Dennett, in 2001.[26]

In 2005, the Journal of Memetics ceased publication and published a set of articles on the future of memetics. The website states that although "there was to be a relaunch... after several years nothing has happened".[27] Susan Blackmore left the University of the West of England to become a freelance science writer and now concentrates more on the field of consciousness and cognitive science.
Derek Gatherer moved to work as a computer programmer in the pharmaceutical industry, although he still occasionally publishes on memetics-related matters. Richard Brodie is now climbing the world professional poker rankings. Aaron Lynch disowned the memetics community and the words "meme" and "memetics" (without disowning the ideas in his book), adopting the self-description "thought contagionist". He died in 2005.

Susan Blackmore (2002) re-stated the definition of meme as: whatever is copied from one person to another person, whether habits, skills, songs, stories, or any other kind of information. Further, she said that memes, like genes, are replicators in the sense defined by Dawkins.[28] That is, they are information that is copied. Memes are copied by imitation, teaching and other methods. The copies are not perfect: memes are copied with variation; moreover, they compete for space in our memories and for the chance to be copied again. Only some of the variants can survive. The combination of these three elements (copies; variation; competition for survival) forms precisely the condition for Darwinian evolution, and so memes (and hence human cultures) evolve. Large groups of memes that are copied and passed on together are called co-adapted meme complexes, or memeplexes. In Blackmore's definition, the way that a meme replicates is through imitation. This requires brain capacity to imitate a model generally or selectively. Since the process of social learning varies from one person to another, imitation cannot be said to produce exact copies. The sameness of an idea may be expressed with different memes supporting it. This is to say that the mutation rate in memetic evolution is extremely high, and mutations are possible within each and every iteration of the imitation process. A social system is thus composed of a complex network of microinteractions, yet at the macro level an order emerges that creates culture.[citation needed]

Many researchers of cultural evolution regard the memetic theory of this period as a failed paradigm superseded by dual inheritance theory.[29] Others instead suggest it is not superseded but rather holds a small but distinct intellectual space in cultural evolutionary theory.[30]

A new framework of Internet Memetics initially borrowed Blackmore's conceptual developments but is effectively a data-driven approach focusing on digital artifacts. It was led primarily by the conceptual developments of Colin Lankshear and Michele Knobel (2006)[31] and of Limor Shifman and Mike Thelwall (2009).[32] Shifman, in particular, followed Susan Blackmore in rejecting the internalist and externalist debate, but did not offer a clear connection to prior evolutionary frameworks. Later, in 2014, she rejected the historical relevance of "information" to memetics. Instead of memes being units of cultural information, she argued that information is exclusively delegated to be "the ways in which addressers position themselves in relation to [a meme instance's] text, its linguistic codes, the addressees, and other potential speakers."[33] This is what she called stance, which is analytically distinguished from the content and form of her meme. As such, Shifman's developments can be seen as critical of Dawkins's meme, but also as a somewhat distinct conceptualization of the meme as a communicative system dependent on the internet and social media platforms. By introducing memetics as an internet study there has been a rise in empirical research.
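Blackmore's three conditions (copying, variation, and competition for survival) can be illustrated with a minimal simulation. The sketch below is a hypothetical toy model, not drawn from the memetics literature: memes are strings, copying introduces occasional mutation, and a fixed "memory" capacity selects which variants survive to be copied again.

import random

ALPHABET = "abcdefghijklmnopqrstuvwxyz"

def copy_with_variation(meme, mutation_rate=0.1):
    # Imperfect imitation: each character may be miscopied.
    return "".join(random.choice(ALPHABET) if random.random() < mutation_rate else c
                   for c in meme)

def fitness(meme):
    # Toy selection criterion: memes closer to a "catchy" target string spread better.
    target = "allyourbase"
    return sum(a == b for a, b in zip(meme, target))

def evolve(generations=50, memory_capacity=20):
    population = ["xxxxxxxxxxx"] * memory_capacity      # initial memes
    for _ in range(generations):
        # Copying: every meme produces variant copies.
        copies = [copy_with_variation(m) for m in population for _ in range(2)]
        # Competition: only the fittest variants fit into the limited memory.
        population = sorted(copies, key=fitness, reverse=True)[:memory_capacity]
    return population[0]

if __name__ == "__main__":
    print(evolve())   # typically converges toward the "catchy" target string

Because copying is imperfect and memory is limited, the population drifts toward whatever variants the selection criterion favours, which is the bare-bones sense in which memes are said to evolve.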
That is, memetics in this conceptualization has been notably testable by the application of social science methodologies. It has been popular enough that, in their review of empirical trends, Lankshear and Knobel (2019) warn those interested in memetics that theoretical development should not be ignored, concluding that "[R]ight now would be a good time for anyone seriously interested in memes to revisit Dawkins' work in light of how internet memes have evolved over the past three decades and reflect on what most merits careful and conscientious research attention."[34]

As Lankshear and Knobel show, the Internet Memetic reconceptualization is limited in addressing long-standing memetic theory concerns. It is not clear that existing Internet Memetic theory's departure from the conceptual dichotomies of the internalist and externalist debate is compatible with most earlier concerns of memetics. Internet Memetics might be understood as a study without an agreed-upon theory, as present research tends to focus on empirical developments answering theories of other areas of cultural research. It exists more as a set of distributed studies than a methodology, theory, field, or discipline, with a few exceptions such as Shifman and those closely following her motivating framework.

Critics contend that some of the proponents' assertions are "untested, unsupported or incorrect."[13] Most of the history of memetic criticism has been directed at Dawkins' earlier theory of memetics framed in The Selfish Gene. There have been some serious criticisms of memetics. Namely, there are a few key points on which most criticisms focus: mentalism, cultural determinism, Darwinian reduction, a lack of academic novelty, and a lack of empirical evidence of memetic mechanisms. Luis Benitez-Bribiesca points to the lack of memetic mechanisms. He refers to the lack of a code script for memes which would suggest a genuine analogy to DNA in genes. He also suggests the meme mutation mechanism is too unstable, which would render the evolutionary process chaotic. That is to say that the "unit of information" which traverses across minds is perhaps too flexible in meaning to be a realistic unit.[35] As such, he calls memetics "a pseudoscientific dogma" and "a dangerous idea that poses a threat to the serious study of consciousness and cultural evolution", among other things.

Another criticism points to memetic triviality. That is, some have argued memetics is derivative of richer areas of study. One of these cases comes from Peircean semiotics (e.g., Deacon,[36] Kull[37]), stating that the concept of meme is a less developed sign. The meme is thus described in memetics as a sign without its triadic nature. Charles Sanders Peirce's semiotic theory involves a triadic structure: a sign (a reference to an object), an object (the thing being referred to), and an interpretant (the interpreting actor of a sign). For Deacon and Kull, the meme is a degenerate sign, which includes only its ability to be copied. Accordingly, in the broadest sense, the objects of copying are memes, whereas the objects of translation and interpretation are signs. Others have pointed to the fact that memetics reduces genuine social and communicative activity to genetic arguments, and this cannot adequately describe cultural interactions between people.
For example, Henry Jenkins, Joshua Green, and Sam Ford, in their book Spreadable Media (2013), criticize Dawkins' idea of the meme, writing that "while the idea of the meme is a compelling one, it may not adequately account for how content circulates through participatory culture." The three authors also criticize other interpretations of memetics, especially those which describe memes as "self-replicating", because they ignore the fact that "culture is a human product and replicates through human agency."[38] In doing so, they align more closely with Shifman's notion of Internet Memetics and her addition of the human agency of stance to describe participatory structure. Mary Midgley criticizes memetics for at least two reasons.[39] Like other critics, Maria Kronfeldner has criticized memetics for being based on an allegedly inaccurate analogy with the gene; alternately, she claims it is "heuristically trivial", being a mere redescription of what is already known without offering any useful novelty.[41]

Research methodologies that apply memetics go by many names: viral marketing, cultural evolution, the history of ideas, social analytics, and more. Many of these applications do not make reference to the literature on memes directly but are built upon the evolutionary lens of idea propagation that treats semantic units of culture as self-replicating and mutating patterns of information that are assumed to be relevant for scientific study. For example, the field of public relations is filled with attempts to introduce new ideas and alter social discourse. One means of doing this is to design a meme and deploy it through various media channels. One historic example of applied memetics is the PR campaign conducted in 1991 as part of the build-up to the first Gulf War in the United States.[54]

The application of memetics to a difficult complex social system problem, environmental sustainability, has recently been attempted at thwink.org.[55] Using meme types and memetic infection in several stock and flow simulation models, Jack Harich has demonstrated several interesting phenomena that are best, and perhaps only, explained by memes. One model, The Dueling Loops of the Political Powerplace,[56] argues that the fundamental reason corruption is the norm in politics is an inherent structural advantage of one feedback loop pitted against another. Another model, The Memetic Evolution of Solutions to Difficult Problems,[57] uses memes, the evolutionary algorithm, and the scientific method to show how complex solutions evolve over time and how that process can be improved. The insights gained from these models are being used to engineer memetic solution elements to the sustainability problem.

Another application of memetics in the sustainability space is the crowdfunded Climate Meme Project[58] conducted by Joe Brewer and Balazs Laszlo Karafiath in the spring of 2013. This study was based on a collection of 1000 unique text-based expressions gathered from Twitter, Facebook, and structured interviews with climate activists. The major finding was that the global warming meme is not effective at spreading because it causes emotional duress in the minds of people who learn about it. Five central tensions were revealed in the discourse about climate change, each of which represents a resonance point through which dialogue can be engaged.
The tensions were Harmony/Disharmony (whether or not humans are part of the natural world), Survival/Extinction (envisioning the future as either apocalyptic collapse of civilization or total extinction of the human race), Cooperation/Conflict (regarding whether or not humanity can come together to solve global problems), Momentum/Hesitation (about whether or not we are making progress at the collective scale to address climate change), and Elitism/Heretic (a general sentiment that each side of the debate considers the experts of its opposition to be untrustworthy).[59]

Ben Cullen, in his book Contagious Ideas,[60] brought the idea of the meme into the discipline of archaeology. He coined the term "Cultural Virus Theory", and used it to try to anchor archaeological theory in a neo-Darwinian paradigm. Archaeological memetics could assist the application of the meme concept to material culture in particular.

Francis Heylighen of the Center Leo Apostel for Interdisciplinary Studies has postulated what he calls "memetic selection criteria". These criteria opened the way to a specialized field of applied memetics to find out whether these selection criteria could stand the test of quantitative analyses. In 2003 Klaas Chielens carried out these tests in a Masters thesis project on the testability of the selection criteria.

In Selfish Sounds and Linguistic Evolution,[61] Austrian linguist Nikolaus Ritt has attempted to operationalise memetic concepts and use them for the explanation of long-term sound changes and change conspiracies in early English. It is argued that a generalised Darwinian framework for handling cultural change can provide explanations where established, speaker-centred approaches fail to do so. The book makes comparatively concrete suggestions about the possible material structure of memes, and provides two empirically rich case studies.

Australian academic S.J. Whitty has argued that project management is a memeplex with the language and stories of its practitioners at its core.[62] This radical approach sees a project and its management as an illusion: a human construct about a collection of feelings, expectations, and sensations, which are created, fashioned, and labeled by the human brain. Whitty's approach asks project managers to consider that the reasons for using project management are not consciously driven to maximize profit, and encourages them to regard project management as a naturally occurring, self-serving, evolving process which shapes organizations for its own purposes.

Swedish political scientist Mikael Sandberg argues against "Lamarckian" interpretations of institutional and technological evolution and studies creative innovation of information technologies in governmental and private organizations in Sweden in the 1990s from a memetic perspective.[63] Comparing the effects of active ("Lamarckian") IT strategy versus user–producer interactivity (Darwinian co-evolution), evidence from Swedish organizations shows that co-evolutionary interactivity is almost four times as strong a factor behind IT creativity as the "Lamarckian" IT strategy.
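The stock-and-flow style of memetic-infection modelling mentioned above in connection with the thwink.org work can be pictured with a much simpler toy simulation. The sketch below is a generic, hypothetical epidemic-style (SIS) model of a meme spreading through a population of "uninfected" and "infected" minds; the stocks, flows, and parameter values are illustrative assumptions and are not taken from Harich's actual models.

def simulate_meme_infection(population=10_000, steps=100,
                            contact_rate=0.3, forget_rate=0.05):
    # Two stocks: minds not yet carrying the meme, and minds carrying it.
    uninfected = float(population - 10)
    infected = 10.0
    history = []
    for _ in range(steps):
        # Flow 1: infection is proportional to contacts between the two stocks.
        new_infections = contact_rate * infected * uninfected / population
        # Flow 2: some carriers forget or abandon the meme and return to the uninfected stock.
        recoveries = forget_rate * infected
        uninfected += recoveries - new_infections
        infected += new_infections - recoveries
        history.append(infected)
    return history

if __name__ == "__main__":
    curve = simulate_meme_infection()
    print(f"carriers after 100 steps: {curve[-1]:.0f}")

Coupling two such loops with different growth rates is the kind of structural comparison the "dueling loops" argument rests on, though the real models are considerably more detailed.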
https://en.wikipedia.org/wiki/Memetics
Psychobabble (a portmanteau of "psychology" or "psychoanalysis" and "babble") is a derogatory name for therapy speech or writing that uses psychological jargon, buzzwords, and esoteric language to create an impression of truth or plausibility. The term implies that the speaker or writer lacks the experience and understanding necessary for the proper use of psychological terms. Additionally, it may imply that the content of speech deviates markedly from common sense and good judgement. Some buzzwords that are commonly heard in psychobabble have come into widespread use in business management, motivational seminars, self-help, folk psychology, and popular psychology. Frequent use of psychobabble can associate a clinical, psychological word with meaningless, or less meaningful, buzzword definitions. Laypersons often use such words when they describe life problems as clinical maladies even though the clinical terms are not meaningful or appropriate.

Most professions develop a unique vocabulary or jargon which, with frequent use, may become commonplace buzzwords. Professional psychologists may reject the "psychobabble" label when it is applied to their own special terminology. The allusions to psychobabble imply that some psychological concepts lack precision and have become meaningless or pseudoscientific.

Psychobabble was defined by the writer who coined the word, R. D. Rosen,[1][2] as a set of repetitive verbal formalities that kills off the very spontaneity, candour, and understanding it pretends to promote; an idiom that reduces psychological insight to a collection of standardized observations and provides a frozen lexicon to deal with an infinite variety of problems.[3] The word itself came into popular use after his 1977 publication of Psychobabble: Fast Talk and Quick Cure in the Era of Feeling.[4] Rosen coined the word in 1975 in a book review for The Boston Phoenix, then featured it in a cover story for the magazine New Times titled "Psychobabble: The New Language of Candor."[5] His book Psychobabble explores the dramatic expansion of psychological treatments and terminology in both professional and non-professional settings.

Certain terms considered to be psychological jargon may be dismissed as psychobabble when they are used by laypersons or in discussions of popular psychology themes. New Age philosophies, self-help groups, personal development coaching, and large-group awareness training are often said to employ psychobabble. The word "psychobabble" may refer contemptuously to pretentious psychological gibberish. Automated talk therapy offered by various ELIZA computer programs produces notable examples of conversational patterns that are psychobabble, even though they may not be loaded with jargon. ELIZA programs parody clinical conversations in which a therapist replies to a statement with a question that requires little or no specific knowledge.

"Neurobabble" is a related term. Beyerstein (1990)[6] wrote that neurobabble can appear in "ads [that] suggest that brain 'repatterning' will foster effortless learning, creativity, and prosperity." He associated neuromythologies of left/right brain pseudoscience with specific New Age products and techniques. He stated that "the purveyors of neurobabble urge us to equate truth with what feels right and to abandon the commonsense insistence that those who would enlighten us provide at least as much evidence as we demand of politicians or used-car salesmen."

Psychobabble terms are typically words or phrases which have their roots in psychotherapeutic practice.
Psychobabblers commonly overuse such terms as if they possessed some special value or meaning. Rosen has suggested that the following terms often appear in psychobabble: co-dependent, delusion, denial, dysfunctional, empowerment, holistic, meaningful relationship, multiple personality disorder, narcissism, psychosis, self-actualization, synergy, and mindfulness.

Extensive examples of psychobabble appear in Cyra McFadden's satirical novel The Serial: A Year in the Life of Marin County (1977).[7] In his collection of critical essays, Working with Structuralism (1981), the British scholar and novelist David Lodge gives a structural analysis of the language used in the novel and notes that McFadden endorsed the use of the term.[8]

In 2010, Theodore Dalrymple defined psychobabble as "the means by which people talk about themselves without revealing anything."[9]
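The ELIZA-style pattern noted above, replying to any statement with a content-free reflective question, is easy to reproduce. The following is a minimal illustrative sketch, not the original ELIZA program, whose keyword scripts were far more elaborate; the templates and pronoun table here are invented for the example.

import random
import re

# A few reflective templates in the spirit of an ELIZA "therapist" (illustrative only).
TEMPLATES = [
    "Why do you say that {statement}?",
    "How does it make you feel that {statement}?",
    "What do you think it means that {statement}?",
    "Can you tell me more about why {statement}?",
]

# Swap first- and second-person words so the echo reads naturally.
PRONOUNS = {"i": "you", "am": "are", "my": "your", "me": "you", "you": "I", "your": "my"}

def reflect(statement):
    words = re.findall(r"[a-zA-Z']+", statement.lower())
    return " ".join(PRONOUNS.get(w, w) for w in words)

def respond(statement):
    # No understanding is involved: the reply only rearranges the user's own words.
    return random.choice(TEMPLATES).format(statement=reflect(statement))

if __name__ == "__main__":
    print(respond("I am anxious about my work"))
    # e.g. "Why do you say that you are anxious about your work?"

The point of the sketch is that the conversational surface of therapy talk can be generated with no clinical knowledge at all, which is exactly the complaint the label "psychobabble" expresses.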
https://en.wikipedia.org/wiki/Psychobabble
In rhetoric, a weasel word, or anonymous authority, is a word or phrase aimed at creating an impression that something specific and meaningful has been said, when in fact only a vague, ambiguous, or irrelevant claim has been communicated. The terms may be considered informal. Examples include the phrases "some people say", "it is thought", and "researchers believe". Using weasel words may allow one to later deny (that is, to "weasel out" of) any specific meaning if the statement is challenged, because the statement was never specific in the first place. Weasel words can be a form of tergiversation and may be used in conspiracy theories, advertising, popular science, opinion pieces and political statements to mislead or disguise a biased view or unsubstantiated claim. Weasel words can weaken or understate a controversial claim. An example of this is using terms like "somewhat" or "in most respects," which make a sentence more ambiguous than it would be without them.[1]

The expression weasel word may have derived from the egg-eating habits of weasels.[2] An article published by The Buffalo News attributes the origin of the term to William Shakespeare's plays Henry V and As You Like It, which include similes of weasels sucking eggs.[3] The article claims these similes are flawed because weasels have insufficient jaw musculature to be able to suck eggs.[4] Ovid's Metamorphoses provides an earlier source for the same etymology. Ovid describes how Juno orders the goddess of childbirth, Lucina, to prevent Alcmene from giving birth to Hercules. Alcmene's servant Galanthis, realizing that Lucina is outside the room using magic to prevent the birth, emerges to announce that the birth has been a success. Lucina, in her amazement, drops the spells of binding, and Hercules is born. Galanthis then mocks Lucina, who responds by transforming her into a weasel. Ovid writes (in A. S. Kline's translation) "And because her lying mouth helped in childbirth, she gives birth through her mouth..."[5] Ancient Greeks believed that weasels conceived through their ears and gave birth through their mouths.[6]

Definitions of the word "weasel" that imply deception and irresponsibility include: the noun form, referring to a sneaky, untrustworthy, or insincere person; the verb form, meaning to manipulate shiftily;[7] and the phrase "to weasel out," meaning "to squeeze one's way out of something" or "to evade responsibility."[8]

Theodore Roosevelt attributed the term to his friend William Sewall's older brother, Dave, claiming that he had used the term in a private conversation in 1879.[9] The expression first appeared in print in Stewart Chaplin's short story "Stained Glass Political Platform" (published in 1900 in The Century Magazine),[10] in which weasel words were described as "words that suck the life out of the words next to them, just as a weasel sucks the egg and leaves the shell." Roosevelt apparently later put the term into public use after using it in a speech in St. Louis on May 31, 1916. According to Mario Pei, Roosevelt said, "When a weasel sucks an egg, the meat is sucked out of the egg; and if you use a weasel word after another, there is nothing left of the other."[11]

A 2009 study of Wikipedia found that most weasel words in it could be divided into three main categories.[12] Other forms of weasel words have also been described.[13][14] Generalizing by means of quantifiers, such as many, when quantifiable measures could be provided, obfuscates the point being made, and if done deliberately is an example of "weaseling."
Illogical or irrelevant statements are often used in advertising, where the statement describes a beneficial feature of a product or service being advertised. An example is the endorsement of products by celebrities, regardless of whether they have any expertise relating to the product. In non-sequitur fashion, it does not follow that the endorsement provides any guarantee of quality or suitability.

False authority is defined as the use of the passive voice without specifying an actor or agent. For example, saying "it has been decided" without stating by whom, and citation of unidentified "authorities" or "experts," provide further scope for weaseling. It can be used in combination with the reverse approach of discrediting a contrary viewpoint by glossing it as "claimed" or "alleged." This embraces what is termed a "semantic cop-out," represented by the term allegedly.[16] This implies an absence of ownership of opinion, which casts a limited doubt on the opinion being articulated. The construction "mistakes were made" enables the speaker to acknowledge error without identifying those responsible. However, the passive voice is legitimately used when the identity of the actor or agent is irrelevant. For example, in the sentence "one hundred votes are required to pass the bill," there is no ambiguity, and the actors, including the members of the voting community, cannot practicably be named even if it were useful to do so.[17][18] The scientific journal article is another example of the legitimate use of the passive voice. For an experimental result to be useful, anyone who runs the experiment should get the same result; that is, the identity of the experimenter should be of low importance. Use of the passive voice focuses attention upon the actions, and not upon the actor—the author of the article. To achieve conciseness and clarity, however, most scientific journals encourage authors to use the active voice where appropriate, identifying themselves as "we" or even "I."[19]

The middle voice can also be used to create a misleading impression. A claim framed this way carries a suggestion of false authority, in that anyone who disagrees incurs the suspicion of being unreasonable merely by dissenting. Another example from international politics is use of the phrase "the international community" to imply a false unanimity.

Euphemism may be used to soften and potentially mislead the audience. For example, the dismissal of employees may be referred to as "rightsizing," "headcount reduction," and "downsizing."[20] Jargon of this kind is used to describe things euphemistically.[21]

Restricting information available to the audience is a technique sometimes used in advertisements. For example, stating that a product "... is now 20% cheaper!" raises the question, "Cheaper than what?" It might be said that "Four out of five people prefer ..." something, but this raises the questions of the size and selection of the sample, and the size of the majority. "Four out of five" could actually mean that there had been 8% for, 2% against, and 90% indifferent.
https://en.wikipedia.org/wiki/Weasel_word
In computability and complexity theory, ALL is the class of all decision problems. ALL contains all complexity classes of decision problems, including RE and co-RE, and uncountably many languages that are neither RE nor co-RE. It is the largest complexity class, containing all other complexity classes.
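In set notation, ALL is simply the collection of all languages over the input alphabet; a minimal formalization, assuming the usual binary alphabet {0, 1}:

\mathsf{ALL} \;=\; \{\, L \mid L \subseteq \{0,1\}^{*} \,\} \;=\; \mathcal{P}\bigl(\{0,1\}^{*}\bigr),
\qquad \mathsf{RE} \cup \mathsf{coRE} \subsetneq \mathsf{ALL}.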
https://en.wikipedia.org/wiki/ALL_(complexity)
In theoretical computer science, a computational problem is one that asks for a solution in terms of an algorithm. For example, the problem of factoring is a computational problem that has a solution, as there are many known integer factorization algorithms. A computational problem can be viewed as a set of instances or cases together with a, possibly empty, set of solutions for every instance/case. The question then is whether there exists an algorithm that maps instances to solutions. For example, in the factoring problem, the instances are the integers n, and solutions are prime numbers p that are the nontrivial prime factors of n. An example of a computational problem without a solution is the halting problem. Computational problems are one of the main objects of study in theoretical computer science.

One is often interested not only in the mere existence of an algorithm, but also in how efficient the algorithm can be. The field of computational complexity theory addresses such questions by determining the amount of resources (computational complexity) solving a given problem will require, and explaining why some problems are intractable or undecidable. Solvable computational problems belong to complexity classes, such as P and NP, that define broadly the resources (e.g. time, space/memory, energy, circuit depth) it takes to compute (solve) them with various abstract machines.

Both instances and solutions are represented by binary strings, namely elements of {0, 1}*.[a] For example, natural numbers are usually represented as binary strings using binary encoding. This is important since the complexity is expressed as a function of the length of the input representation.

A decision problem is a computational problem where the answer for every instance is either yes or no. An example of a decision problem is primality testing: given a positive integer n, decide whether n is prime. A decision problem is typically represented as the set of all instances for which the answer is yes. For example, primality testing can be represented as the infinite set of primes, {2, 3, 5, 7, 11, ...}.

In a search problem, the answers can be arbitrary strings. For example, factoring is a search problem where the instances are (string representations of) positive integers and the solutions are (string representations of) collections of primes. A search problem is represented as a relation consisting of all the instance-solution pairs, called a search relation. For example, factoring can be represented as the relation consisting of all pairs of numbers (n, p), where p is a prime factor of n.

A counting problem asks for the number of solutions to a given search problem. For example, the counting problem associated with factoring asks how many prime factors a given integer n has. A counting problem can be represented by a function f from {0, 1}* to the nonnegative integers. For a search relation R, the counting problem associated to R is the function f_R(x) = |{y : (x, y) ∈ R}|.

An optimization problem asks for finding a "best possible" solution among the set of all possible solutions to a search problem. One example is the maximum independent set problem: given a graph G, find an independent set in G of maximum size. Optimization problems are represented by their objective function and their constraints.

In a function problem a single output (of a total function) is expected for every input, but the output is more complex than that of a decision problem, that is, it isn't just "yes" or "no". One of the most famous examples is the traveling salesman problem: given a list of cities and the distances between each pair of cities, find a shortest possible route that visits each city exactly once and returns to the origin. It is an NP-hard problem in combinatorial optimization, important in operations research and theoretical computer science.
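The distinction between the decision, search, and counting versions of the same underlying problem can be made concrete with factoring. The sketch below is an illustrative toy using trial division only; it is not an efficient factoring algorithm.

def is_prime(n):
    # Decision problem: is n prime? The answer is yes/no.
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

def prime_factors(n):
    # Search problem: return a solution, here the multiset of prime factors of n.
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

def count_distinct_prime_factors(n):
    # Counting problem: how many distinct prime factors does n have?
    return len(set(prime_factors(n)))

if __name__ == "__main__":
    print(is_prime(91))                      # False (91 = 7 * 13)
    print(prime_factors(84))                 # [2, 2, 3, 7]
    print(count_distinct_prime_factors(84))  # 3

The optimization version would instead ask for a "best" solution under some objective, for example the largest prime factor, which is why optimization problems are specified by an objective function and constraints rather than by a single relation.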
In computational complexity theory, it is usually implicitly assumed that any string in {0, 1}* represents an instance of the computational problem in question. However, sometimes not all strings {0, 1}* represent valid instances, and one specifies a proper subset of {0, 1}* as the set of "valid instances". Computational problems of this type are called promise problems. The following is an example of a (decision) promise problem: given a graph G, decide whether every independent set in G has size at most 5, or G has an independent set of size at least 10. Here, the valid instances are those graphs whose maximum independent set size is either at most 5 or at least 10. Decision promise problems are usually represented as pairs of disjoint subsets (L_yes, L_no) of {0, 1}*. The valid instances are those in L_yes ∪ L_no. L_yes and L_no represent the instances whose answer is yes and no, respectively. Promise problems play an important role in several areas of computational complexity, including hardness of approximation, property testing, and interactive proof systems.
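A promise problem can be written compactly as a pair of disjoint languages, with behaviour outside the promise left unconstrained; a minimal formalization of the convention described above:

\Pi = (L_{\text{yes}}, L_{\text{no}}), \qquad L_{\text{yes}} \cap L_{\text{no}} = \varnothing, \qquad \text{promise} = L_{\text{yes}} \cup L_{\text{no}};

an algorithm A solves \Pi when

x \in L_{\text{yes}} \Rightarrow A(x) = \text{yes} \quad\text{and}\quad x \in L_{\text{no}} \Rightarrow A(x) = \text{no},

while A may answer arbitrarily on inputs x \notin L_{\text{yes}} \cup L_{\text{no}}.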
https://en.wikipedia.org/wiki/Computational_problem
In logic, a true/false decision problem is decidable if there exists an effective method for deriving the correct answer. Zeroth-order logic (propositional logic) is decidable, whereas first-order and higher-order logic are not. Logical systems are decidable if membership in their set of logically valid formulas (or theorems) can be effectively determined. A theory (set of sentences closed under logical consequence) in a fixed logical system is decidable if there is an effective method for determining whether arbitrary formulas are included in the theory. Many important problems are undecidable, that is, it has been proven that no effective method for determining membership (returning a correct answer after finite, though possibly very long, time in all cases) can exist for them.

Each logical system comes with both a syntactic component, which among other things determines the notion of provability, and a semantic component, which determines the notion of logical validity. The logically valid formulas of a system are sometimes called the theorems of the system, especially in the context of first-order logic where Gödel's completeness theorem establishes the equivalence of semantic and syntactic consequence. In other settings, such as linear logic, the syntactic consequence (provability) relation may be used to define the theorems of a system. A logical system is decidable if there is an effective method for determining whether arbitrary formulas are theorems of the logical system. For example, propositional logic is decidable, because the truth-table method can be used to determine whether an arbitrary propositional formula is logically valid.

First-order logic is not decidable in general; in particular, the set of logical validities in any signature that includes equality and at least one other predicate with two or more arguments is not decidable.[1] Logical systems extending first-order logic, such as second-order logic and type theory, are also undecidable. The validities of monadic predicate calculus with identity are decidable, however. This system is first-order logic restricted to those signatures that have no function symbols and whose relation symbols other than equality never take more than one argument. Some logical systems are not adequately represented by the set of theorems alone. (For example, Kleene's logic has no theorems at all.) In such cases, alternative definitions of decidability of a logical system are often used, which ask for an effective method for determining something more general than just validity of formulas; for instance, validity of sequents, or the consequence relation {(Г, A) | Г ⊧ A} of the logic.

A theory is a set of formulas, often assumed to be closed under logical consequence. Decidability for a theory concerns whether there is an effective procedure that decides, given an arbitrary formula in the signature of the theory, whether the formula is a member of the theory or not. The problem of decidability arises naturally when a theory is defined as the set of logical consequences of a fixed set of axioms. There are several basic results about decidability of theories. Every (non-paraconsistent) inconsistent theory is decidable, as every formula in the signature of the theory will be a logical consequence of, and thus a member of, the theory. Every complete recursively enumerable first-order theory is decidable. An extension of a decidable theory may not be decidable. For example, there are undecidable theories in propositional logic, although the set of validities (the smallest theory) is decidable.
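The truth-table decision procedure for propositional logic mentioned above can be written in a few lines: enumerate every assignment to the finitely many propositional variables and check that the formula holds under each. The sketch below is an illustrative implementation in which formulas are represented as Python functions over an assignment, an assumption made here for brevity; a real checker would parse formula syntax.

from itertools import product

def is_valid(formula, variables):
    # Truth-table method: a propositional formula is logically valid (a tautology)
    # iff it evaluates to True under every assignment of its variables.
    for values in product([False, True], repeat=len(variables)):
        assignment = dict(zip(variables, values))
        if not formula(assignment):
            return False            # found a falsifying row of the truth table
    return True                     # all 2^n rows evaluate to True

if __name__ == "__main__":
    implies = lambda a, b: (not a) or b
    # (p -> q) v (q -> p) is valid; p -> q alone is not.
    print(is_valid(lambda v: implies(v["p"], v["q"]) or implies(v["q"], v["p"]), ["p", "q"]))  # True
    print(is_valid(lambda v: implies(v["p"], v["q"]), ["p", "q"]))                             # False

The procedure always terminates because there are only 2^n rows to check, which is exactly why propositional logic is decidable; no analogous exhaustive check exists for first-order formulas, whose models can be infinite.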
A consistent theory that has the property that every consistent extension is undecidable is said to beessentially undecidable. In fact, every consistent extension of an essentially undecidable theory is itself essentially undecidable. The theory of fields is undecidable but not essentially undecidable.Robinson arithmeticis known to be essentially undecidable, and thus every consistent theory that includes or interprets Robinson arithmetic is also (essentially) undecidable. Examples of decidable first-order theories include the theory ofreal closed fields, andPresburger arithmetic, while the theory ofgroupsandRobinson arithmeticare examples of undecidable theories. Some decidable theories include (Monk 1976, p. 234):[2] Methods used to establish decidability includequantifier elimination,model completeness, and theŁoś-Vaught test. Some undecidable theories include:[2] Theinterpretabilitymethod is often used to establish undecidability of theories. If an essentially undecidable theoryTis interpretable in a consistent theoryS, thenSis also essentially undecidable. This is closely related to the concept of amany-one reductionincomputability theory. A property of a theory or logical system weaker than decidability issemidecidability. A theory is semidecidable if there is a well-defined method whose result, given an arbitrary formula, arrives as positive if the formula is in the theory; otherwise, the method either arrives at a negative answer or never arrives at all. A logical system is semidecidable if there is a well-defined method for generating a sequence of theorems such that each theorem will eventually be generated. This is different from decidability because in a semidecidable system there may be no effective procedure for checking that a formula isnota theorem. Every decidable theory or logical system is semidecidable, but in general the converse is not true; a theory is decidable if and only if both it and its complement are semi-decidable. For example, the set of logical validitiesVof first-order logic is semi-decidable, but not decidable. In this case, it is because there is no effective method for determining for an arbitrary formulaAwhetherAis not inV. Similarly, the set of logical consequences of anyrecursively enumerable setof first-order axioms is semidecidable. Many of the examples of undecidable first-order theories given above are of this form. Decidability should not be confused withcompleteness. For example, the theory ofalgebraically closed fieldsis decidable but incomplete, whereas the set of all true first-order statements about nonnegative integers in the language with + and × is complete but undecidable. Unfortunately, as a terminological ambiguity, the term "undecidable statement" is sometimes used as a synonym forindependent statement. As with the concept of adecidable set, the definition of a decidable theory or logical system can be given either in terms ofeffective methodsor in terms ofcomputable functions. These are generally considered equivalent perChurch's thesis. Indeed, the proof that a logical system or theory is undecidable will use the formal definition of computability to show that an appropriate set is not a decidable set, and then invoke Church's thesis to show that the theory or logical system is not decidable by any effective method (Enderton 2001, pp. 206ff.). Somegameshave been classified as to their decidability:
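The claim above that a theory is decidable exactly when both it and its complement are semidecidable can be illustrated by dovetailing two enumeration procedures. The sketch below is a toy model under stated assumptions: the two generators stand in for "enumerate the theorems" and "enumerate the non-theorems", and the sets used (even and odd numbers) are purely illustrative stand-ins, not actual logical theories.

```python
# Toy illustration of "decidable iff both the set and its complement are
# semidecidable": a semidecision procedure is modelled as an enumerator that
# eventually lists every member of its set. Dovetailing the two enumerators
# yields a decision procedure, because every input appears in exactly one list.
from itertools import count

def enumerate_theory():
    """Stand-in semidecision procedure: enumerates the 'theory' (here, the even numbers)."""
    for n in count(0):
        yield 2 * n

def enumerate_complement():
    """Stand-in semidecision procedure for the complement (here, the odd numbers)."""
    for n in count(0):
        yield 2 * n + 1

def decide(x):
    """Run both enumerators in alternation; whichever lists x first answers."""
    pos, neg = enumerate_theory(), enumerate_complement()
    while True:
        if next(pos) == x:
            return True    # x belongs to the theory
        if next(neg) == x:
            return False   # x belongs to the complement

print(decide(10))  # True
print(decide(7))   # False
```

With only one of the two enumerators available, the same loop would answer "yes" for members but could run forever on non-members, which is exactly the asymmetry described above for semidecidable sets such as first-order validity.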
https://en.wikipedia.org/wiki/Decidability_(logic)
Inmathematical logic, atheory(also called aformal theory) is a set ofsentencesin aformal language. In most scenarios adeductive systemis first understood from context, giving rise to aformal systemthat combines the language with deduction rules. An element ϕ ∈ T of adeductively closedtheory T is then called atheoremof the theory. In many deductive systems there is usually a subset Σ ⊆ T that is called "the set ofaxioms" of the theory T, in which case the deductive system is also called an "axiomatic system". By definition, every axiom is automatically a theorem. Afirst-order theoryis a set offirst-ordersentences (theorems)recursivelyobtained by theinference rulesof the system applied to the set of axioms. When defining theories for foundational purposes, additional care must be taken, as normal set-theoretic language may not be appropriate. The construction of a theory begins by specifying a definite non-emptyconceptual classE, the elements of which are calledstatements. These initial statements are often called theprimitive elementsorelementarystatements of the theory—to distinguish them from other statements that may be derived from them. A theory T is a conceptual class consisting of certain of these elementary statements. The elementary statements that belong to T are called theelementary theoremsof T and are said to betrue. In this way, a theory can be seen as a way of designating a subset of E that only contains statements that are true. This general way of designating a theory stipulates that the truth of any of its elementary statements is not known without reference to T. Thus the same elementary statement may be true with respect to one theory but false with respect to another. This is reminiscent of the case in ordinary language where statements such as "He is an honest person" cannot be judged true or false without interpreting who "he" is, and, for that matter, what an "honest person" is under this theory.[1] A theory S is asubtheoryof a theory T if S is a subset of T. If T is a subset of S then S is called anextensionor asupertheoryof T. A theory is said to be adeductive theoryif T is aninductive class, which is to say that its content is based on someformal deductive systemand that some of its elementary statements are taken asaxioms.
In a deductive theory, any sentence that is alogical consequenceof one or more of the axioms is also a sentence of that theory.[1]More formally, if ⊢ is a Tarski-styleconsequence relation, then T is closed under ⊢ (and so each of its theorems is a logical consequence of its axioms) if and only if, for all sentences ϕ in the language of the theory T, if T ⊢ ϕ, then ϕ ∈ T; or, equivalently, if T′ is a finite subset of T (possibly the set of axioms of T in the case of finitely axiomatizable theories) and T′ ⊢ ϕ, then ϕ ∈ T. Asyntactically consistent theoryis a theory from which not every sentence in the underlying language can be proven (with respect to somedeductive system, which is usually clear from context). In a deductive system (such as first-order logic) that satisfies theprinciple of explosion, this is equivalent to requiring that there is no sentence φ such that both φ and its negation can be proven from the theory. Asatisfiable theoryis a theory that has amodel. This means there is a structureMthatsatisfiesevery sentence in the theory. Any satisfiable theory is syntactically consistent, because the structure satisfying the theory will satisfy exactly one of φ and the negation of φ, for each sentence φ. Aconsistent theoryis sometimes defined to be a syntactically consistent theory, and sometimes defined to be a satisfiable theory. Forfirst-order logic, the most important case, it follows from thecompleteness theoremthat the two meanings coincide.[2]In other logics, such assecond-order logic, there are syntactically consistent theories that are not satisfiable, such asω-inconsistent theories. Acomplete consistent theory(or just acomplete theory) is aconsistenttheory T such that for every sentence φ in its language, either φ is provable from T or T ∪ {φ} is inconsistent. For theories closed under logical consequence, this means that for every sentence φ, either φ or its negation is contained in the theory.[3]Anincomplete theoryis a consistent theory that is not complete. (see alsoω-consistent theoryfor a stronger notion of consistency.) Aninterpretation of a theoryis the relationship between a theory and some subject matter when there is amany-to-onecorrespondence between certain elementary statements of the theory, and certain statements related to the subject matter. If every elementary statement in the theory has a correspondent it is called afull interpretation, otherwise it is called apartial interpretation.[4] Eachstructurehas several associated theories. Thecomplete theoryof a structureAis the set of allfirst-ordersentencesover thesignatureofAthat are satisfied byA. It is denoted by Th(A). More generally, thetheoryofK, a class of σ-structures, is the set of all first-orderσ-sentencesthat are satisfied by all structures inK, and is denoted by Th(K). Clearly Th(A) = Th({A}). These notions can also be defined with respect to other logics.
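The connection drawn above between having a model and being consistent can be seen concretely for a finite propositional theory, where satisfiability is decidable by brute force over truth assignments. The sketch below is illustrative: sentences are encoded as Python predicates over an assignment, and the example theories and variable names are invented for the demonstration.

```python
# A minimal sketch relating satisfiability and consistency for a *finite*
# propositional theory: search all assignments for a model. Sentences are
# modelled as Python predicates over a truth assignment.
from itertools import product

def find_model(sentences, variables):
    """Return a satisfying assignment (a model) if one exists, else None."""
    for row in product([False, True], repeat=len(variables)):
        v = dict(zip(variables, row))
        if all(s(v) for s in sentences):
            return v
    return None

# T1 = {p or q, not p} is satisfiable (model: p=False, q=True), hence consistent.
T1 = [lambda v: v['p'] or v['q'], lambda v: not v['p']]
# T2 = {p, not p} has no model: in a sound and complete proof system it is
# syntactically inconsistent, and by explosion it proves every sentence.
T2 = [lambda v: v['p'], lambda v: not v['p']]

print(find_model(T1, ['p', 'q']))  # {'p': False, 'q': True}
print(find_model(T2, ['p']))       # None
```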
For each σ-structureA, there are several associated theories in a larger signature σ' that extends σ by adding one new constant symbol for each element of the domain ofA. (If the new constant symbols are identified with the elements ofAthat they represent, σ' can be taken to be σ ∪ A.) The cardinality of σ' is thus the larger of the cardinality of σ and the cardinality ofA.[further explanation needed] ThediagramofAconsists of all atomic or negated atomic σ'-sentences that are satisfied byAand is denoted by diagA. Thepositive diagramofAis the set of all atomic σ'-sentences thatAsatisfies. It is denoted by diag+A. Theelementary diagramofAis the set eldiagAof allfirst-order σ'-sentences that are satisfied byAor, equivalently, the complete (first-order) theory of the naturalexpansionofAto the signature σ'. A first-order theory QS is a set of sentences in a first-orderformal languageQ. There are many formal derivation ("proof") systems for first-order logic. These includeHilbert-style deductive systems,natural deduction, thesequent calculus, thetableaux methodandresolution. AformulaAis asyntactic consequenceof a first-order theory QS if there is aderivationofAusing only formulas in QS as non-logical axioms. Such a formulaAis also called a theorem of QS. The notation "QS ⊢ A" indicatesAis a theorem of QS. Aninterpretationof a first-order theory provides a semantics for the formulas of the theory. An interpretation is said to satisfy a formula if the formula is true according to the interpretation. Amodelof a first-order theory QS is an interpretation in which every formula of QS is satisfied. A first-order theory QS is a first-order theory with identity if QS includes the identity relation symbol "=" and the reflexivity and substitution axiom schemes for this symbol. One way to specify a theory is to define a set ofaxiomsin a particular language. The theory can be taken to include just those axioms, or their logical or provable consequences, as desired. Theories obtained this way includeZFCandPeano arithmetic. A second way to specify a theory is to begin with astructure, and let the theory be the set of sentences that are satisfied by the structure. This is a method for producing complete theories through the semantic route, with examples including the set of true sentences under the structure (N, +, ×, 0, 1, =), whereNis the set of natural numbers, and the set of true sentences under the structure (R, +, ×, 0, 1, =), whereRis the set of real numbers. The first of these, called the theory oftrue arithmetic, cannot be written as the set of logical consequences of anyenumerableset of axioms. The theory of (R, +, ×, 0, 1, =) was shown by Tarski to bedecidable; it is the theory ofreal closed fields(seeDecidability of first-order theories of the real numbersfor more).
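The diagram and positive diagram defined above are straightforward to compute for a small finite structure. The sketch below uses an illustrative structure (domain {0, 1} with a single binary relation "<") and an ad hoc naming scheme c_a for the new constants of σ'; neither comes from the article, they are assumptions for the example.

```python
# A small sketch of the (atomic) diagram of a finite structure: list every
# atomic sigma'-sentence over the new constants, keeping it if the structure
# satisfies it and keeping its negation otherwise.
from itertools import product

domain = [0, 1]
relations = {'<': lambda a, b: a < b}   # interpretation of the one relation symbol

def atomic_sentences():
    """Yield each atomic sigma'-sentence over the new constants with its truth value in A."""
    for a, b in product(domain, repeat=2):
        yield f"c_{a} = c_{b}", a == b
        for name, interp in relations.items():
            yield f"c_{a} {name} c_{b}", interp(a, b)

diagram = [s if holds else f"not ({s})" for s, holds in atomic_sentences()]
positive_diagram = [s for s, holds in atomic_sentences() if holds]

print(diagram)
# 8 sentences, e.g. 'c_0 = c_0', 'not (c_0 < c_0)', 'c_0 < c_1', ...
print(positive_diagram)
# ['c_0 = c_0', 'c_0 < c_1', 'c_1 = c_1']
```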
https://en.wikipedia.org/wiki/Logical_theory
Incomputational complexity theoryandcomputability theory, asearch problemis acomputational problemof finding anadmissibleanswer for a given input value, provided that such an answer exists. In fact, a search problem is specified by abinary relationRwherexRyif and only if "yis an admissible answer givenx".[note 1]Search problems frequently occur ingraph theoryandcombinatorial optimization, e.g. searching formatchings, optimal cliques, andstable setsin a given undirected graph. Analgorithmis said to solve a search problem if, for every input valuex, it returns an admissible answeryforxwhen such an answer exists; otherwise, it returns any appropriate output, e.g. "not found" forxwith no such answer. PlanetMathdefines the problem as follows:[1] If R is a binary relation such that field(R) ⊆ Γ+ and T is aTuring machine, then T calculates f if:[note 2] This article incorporates material from search problem onPlanetMath, which is licensed under theCreative Commons Attribution/Share-Alike License.
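The algorithmic reading above (return an admissible answer if one exists, otherwise report "not found") can be phrased as a small generic solver. In the sketch below the relation R, the candidate generator, and the divisor-finding instance are all illustrative choices, not part of the PlanetMath definition.

```python
# A sketch of the search-problem interface: the problem is given by a binary
# relation R, and a solver must return some admissible y for input x, or
# report that none exists.
def solve_search_problem(R, candidates, x):
    """Return some y with R(x, y), or the string 'not found' if no candidate qualifies."""
    for y in candidates(x):
        if R(x, y):
            return y
    return "not found"

# Illustrative instance: y is an admissible answer for x iff y is a
# nontrivial divisor of x.
R = lambda x, y: x % y == 0
candidates = lambda x: range(2, x)

print(solve_search_problem(R, candidates, 91))  # 7   (91 = 7 * 13)
print(solve_search_problem(R, candidates, 13))  # 'not found' (13 is prime)
```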
https://en.wikipedia.org/wiki/Search_problem
Analbur(plural:albures) is aword playinMexican Spanishthat involves adouble entendre. The first meaning in the Spanish language of albur refers to contingency or chance to which the result is trusted. Like in: "Leave nothing to the albur" or "it was worth the risk of an albur". The term originally referred to the hidden cards in theSpanish Montebetting card game.[1]The word albur is also synonym to uncertainty or random luck "Es un albur". It is very common among groups of male friends in Mexico, especially urban youth, construction workers, factory workers, mechanics and other blue collar-derivative male groups; and is considered rude otherwise, especially when in the presence of women, given the sexualinnuendoin the jokes. Its usage is similar to the English expressions: "If you know what I mean" and "that's what she said". Albur is also a form ofcomedyand manystand-up artistsand comedians, includingAlberto Rojas "El Caballo",Polo Polo,Franco Escamillaand others are renowned for their skills at performing albures on drunkbulliesandhecklersattending to their shows (alburear).Brozohas been known for performingalbureson several prominent political figures in Mexican television such as Mexico's former president,Felipe Calderón. The game of albures is usually a subtle, verbal competition in which the players try to show superiority by using albures attempting to leave the opponent without acomeback. Mostalbureshave to do with sex,[2]but they also can be just generally degrading, as with comparing the target's stupidity to that of a donkey, ox, or mule. Specific purposes of the albur can include: Albures are commonplace in many sectors of Mexican society and are usually regarded as a sign of awittyand agile (though somewhat dirty) mind. It is possible to find people who brag about their skills (at albures) and claim to be "good at it", this phrase being itself an albur (also a self-reference to their virility and sexual power). The albures can be subtle or explicit depending on the author's intention, they are commonly found in almost every Mexican social sector being more common in low-class sectors, but present in higher ones also. However people may sometimes categorize those who use albures asnacos, which is a way of saying "you are despicable for being an urban but uneducated person, whose family comes from the rural poor areas of the country". Though at first sight outsiders may see albures as a very rude, distasteful, blunt and aggressive activity, it is usually nothing more than a pastime which promotes laughter and is an excuse to joke around with friends. An important aspect of the refinement of an albur exchange is the level at which it can be seen as adouble entendre. The most refined albureros can maintain a double entendre conversation in such a way that an unprepared listener would not even realize there was an underlying sexual connotation. In such a case that person would find the giggling, blushing and pausing awkward. It sometimes happens that a newcomer is "welcomed" into a group or conversation with a quick and unexpected albur. This behaviour tends to immediately set who is the (sexual) aggressor, thus being a form to establish a male hierarchy. 
A quick comeback gains recognition of the group, though this is seldom expected, due to shyness, surprise, lack of ability to recognize the albur, or lack of ability to produce an answer.[2]For example, at a diner with friends someone saying, "cómetela entera" ('eat it all'), might be jokingly insinuating "you should perform fellatio", an agile mind would give a retaliatory double entendre answer like, "I don't like it, but you can eat mine if you like" ('A mí no me gusta, pero tú cómete la mía si se te antoja'). Women play a crucial role in the language and focus of albures. While the alburero would attempt to show the albureado as thebottom(passive sexual partner,pasivo), he could as well name the female relatives of the albureado, namely sisters or, in case of great confidence, mother. However, it's a known fact that many Mexicans are over-reactive to this type of references and might be more easily moved to a physically violent response than by any other type of albur, or even overt insults. However, in the case of femalealbureras(female albur performers), the situation becomes quite complicated for the male albur performer, as the general counterattack strategy would be exhibiting the adversary as a sexual bottom, but women in albur are not trying to prove virility, only mental sharpness, and that type of attack does no harm. In this case, if the albureado has no comeback line, he suffers a double humiliation since he would not be able to respond to the albur, and he would also have been defeated by a woman, which makes it more humiliating for the macho Mexican mind among of a group of males. Albures can make use of several aspects of (empirical, innate) linguistic knowledge. For example, thephallicshape of thechili pepperadds a double meaning to the question, asked of a tourist, "Do you like Mexican chili?"[2] Many albures originate from similarities in the pronunciation of different words: which sounds like:Dámela ahora('Give it to me now'), a phrase which can connote "it" as being either thevaginaor theanus, and therefore constituting an albur. Others make use of the fact that a word may have several meanings, one of which will be exploited. For example, aMexicanbus driver may ask a passenger aboutla parada, which can mean both "the stop" and "the erected one". Therefore, a wary passenger should be careful not to reply, e.g., "al tope", which can mean "at thespeedbump" but also "the whole thing" or "until it stops". When the antecedent for a pronoun is substituted for a sexual reference the simplest (cheapest) form of albures can be achieved. Using the same example as above, Several examples can be easily constructed where the previous conversation would include antecedents for the clitic pronoun "-la" [fem, sing], and the albur comeback would re-reference it to mean the penis (among the several synonyms for penis, there are some with the correct gender and number agreement).[citation needed] In this exchange, the interesting aspect is that b's response mechanism is simply to substitute the out-of-context "sad" for a's accusation. B's response would effectively be interpreted as "I see that you are dumb". Some of the most renowned masters of the albur are "Chaf y Queli", who made several records in the 1970s under the "Discos Diablo" label.[citation needed] Armando Jiménez, who wrotePicardía mexicana, made a compilation of albures and othermexicanisms.[citation needed]
https://en.wikipedia.org/wiki/Albur
Acoincidenceis a remarkable concurrence of events or circumstances that have no apparent causal connection with one another.[2]The perception of remarkable coincidences may lead tosupernatural,occult, orparanormalclaims, or it may lead to belief infatalism, which is a doctrine that events will happen in the exact manner of a predetermined plan. In general, the perception of coincidence, for lack of more sophisticated explanations, can serve as a link tofolk psychologyand philosophy.[3] From astatisticalperspective, coincidences are inevitable and often less remarkable than they may appear intuitively. Usually, coincidences arechance eventswith underestimatedprobability.[3]An example is thebirthday problem, which shows that theprobabilityof two persons having the same birthday already exceeds 50% in a group of only 23 persons.[4]Generalizations of the birthday problem are a key tool used for mathematically modelling coincidences.[5] The first known usage of the word coincidence is from c. 1605 with the meaning "exact correspondence in substance or nature" from the Frenchcoincidence, fromcoincider, from Medieval Latincoincidere. The definition evolved in the 1640s as "occurrence or existence during the same time". The word was introduced to English readers in the 1650s by SirThomas Browne, inA Letter to a Friend(circa 1656 pub. 1690)[6]and in his discourseThe Garden of Cyrus(1658).[7] Swiss psychiatristCarl Jungdeveloped a theory that states that remarkable coincidences occur because of what he called "synchronicity," which he defined as an "acausal connecting principle."[8] TheJung-Paulitheory of "synchronicity", conceived by a physicist and a psychologist, both eminent in their fields, represents perhaps the most radical departure from the world-view of mechanistic science in our time. Yet they had a precursor, whose ideas had a considerable influence on Jung: the Austrian biologistPaul Kammerer, a wildgeniuswho committed suicide in 1926, at the age of forty-five. One of Kammerer's passions was collecting coincidences. He published a book titledDas Gesetz der Serie(The Law of Series), which has not been translated into English. In this book, he recounted 100 or so anecdotes of coincidences that led him to formulate his theory of seriality. He postulated that all events are connected by waves of seriality. Kammerer was known to make notes in public parks of how many people were passing by, how many of them carried umbrellas, etc.Albert Einsteincalled the idea of seriality "interesting and by no means absurd."[10]Carl Jung drew upon Kammerer's work in his bookSynchronicity.[11] A coincidence lacks an apparent causal connection. A coincidence may be synchronicity — the experience of events that are causally unrelated — and yet their occurrence together has meaning for the person who observes them. To be counted as synchronicity, the events should be unlikely to occur together by chance, but this is questioned because there is usually a chance, no matter how small and in vast numbers of opportunities such coincidences do happen by chance if it is only non-zero (seelaw of truly large numbers). Some skeptics (e.g.,Georges CharpakandHenri Broch) argue synchronicity is merely an instance ofapophenia.[12]They argue that probability and statistical theory (exemplified, e.g., inLittlewood's law) suffice to explain remarkable coincidences.[13][14] Charles Fortalso compiled hundreds of accounts of interesting coincidences and strange phenomena. 
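The birthday-problem figure cited above is easy to verify directly. The short computation below uses the usual simplifying assumptions (365 equally likely birthdays, leap years ignored).

```python
# The probability that at least two of n people share a birthday is
# 1 minus the probability that all n birthdays are distinct.
def birthday_collision_probability(n, days=365):
    p_all_distinct = 1.0
    for i in range(n):
        p_all_distinct *= (days - i) / days
    return 1.0 - p_all_distinct

print(round(birthday_collision_probability(22), 3))  # ~0.476
print(round(birthday_collision_probability(23), 3))  # ~0.507 -- already past 50% at 23 people
```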
Measuring theprobabilityof a series of coincidences is the most common method of distinguishing a coincidence from causally connected events. The mathematically naive person seems to have a more acute awareness than the specialist of the basic paradox of probability theory, over which philosophers have puzzled ever since Pascal initiated that branch of science [in 1654] .... The paradox consists, loosely speaking, of the fact that probability theory is able to predict with uncanny precision the overall outcome of processes made up of numerous individual happenings, each of which in itself is unpredictable. In other words, we observe many uncertainties producing certainty, and many chance events creating a lawful total outcome. To establish cause and effect (i.e.,causality) is notoriously difficult, as is expressed by the commonly heard statement that "correlation does not imply causation." Instatistics, it is generally accepted that observational studies can give hints but can never establish cause and effect. But, considering the probability paradox (see Koestler's quote above), it appears that the larger the set of coincidences, the more certainty increases, and the more it seems that there is some cause behind a remarkable coincidence. ... it is only the manipulation of uncertainty that interests us. We are not concerned with the matter that is uncertain. Thus we do not study the mechanism of rain; only whether it will rain. It is no great wonder if in the long process of time, whilefortunetakes her course hither and thither, numerous coincidences should spontaneously occur.
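One simple way to see why a large number of opportunities makes "remarkable" coincidences unremarkable is to compute the probability that at least one of many independent rare events occurs. The per-event probability and the number of opportunities in the sketch below are illustrative numbers, not drawn from any study.

```python
# Back-of-the-envelope check: a single coincidence may be very unlikely, yet
# over many independent opportunities the chance that at least one occurs
# approaches certainty.
def prob_at_least_one(p_single, opportunities):
    """P(at least one occurrence) for independent events, each of probability p_single."""
    return 1.0 - (1.0 - p_single) ** opportunities

# A one-in-a-million event, given a million independent chances:
print(round(prob_at_least_one(1e-6, 1_000_000), 3))   # ~0.632
# ...and given ten million chances:
print(round(prob_at_least_one(1e-6, 10_000_000), 5))  # ~0.99995
```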
https://en.wikipedia.org/wiki/Coincidence
Dirty Mindsis aboard gamemade byTDC GamesinItasca, Illinois. Created in 1988 by Larry Balsamo andSandra Schaeffer, it was originally sold only in novelty and adult stores such asSpencer Gifts. Over its history, however, it has permeated the mainstream marketplace. The primary reason for its popularity is its use of sexualdouble entendresas clues to otherwise innocuous riddles. All of the clues are puns that may sound dirty on a first hearing, but actually refer to clean solutions. For example, the correct answer for the clue "The more you play with me the harder I get" is "Rubik's Cube". Since all of the answers are clean, the game itself is only "dirty" if the players have a dirty mind. The player who gives the correct clean answer to a clue is rewarded with a card. Each card displays either a letter, "lose a card", "take two cards", or "wild". The lettered cards each bear a D, I, R, T, or Y, and the first person to collect all five lettered cards wins the game. The game is easy to play and meant to be funny, and it is intended to be played only by adults 18 and up. The contents that come with the game include: There are four versions to date:Dirty Minds(the original),More Dirty Minds,Deluxe Dirty Minds, which introduced an entirely new category and the travel card game edition, and Dirty Minds Supreme. As of 2011, a television game show version was in the works.Dirty Mindsis also played regularly on radio stations across the country.[citation needed]
https://en.wikipedia.org/wiki/Dirty_Minds
Doublespeakis language that deliberatelyobscures, disguises, distorts, or reverses themeaningof words. Doublespeak may take the form ofeuphemisms(e.g., "downsizing" forlayoffsand "servicing the target" forbombing),[1]in which case it is primarily meant to make the truth sound more palatable. It may also refer to intentionalambiguityin language or to actual inversions of meaning. In such cases, doublespeak disguises the nature of the truth. Doublespeak is most closely associated with political language used by large entities such as corporations and governments.[2][3] The termdoublespeakderives from two concepts inGeorge Orwell's novel,Nineteen Eighty-Four, "doublethink" and "Newspeak", despite the term itself not being used in the novel.[4]Another version of the term,doubletalk, also referring to intentionally ambiguous speech, did exist at the time Orwell wrote his book, but the usage ofdoublespeak, as well as of "doubletalk", in the sense of emphasizing ambiguity, clearly predates the publication of the novel.[5][6]Parallels have also been drawn between doublespeak and Orwell's classic essay,Politics and the English Language, which discusses linguistic distortion for purposes related to politics.[7]In the essay, he observes that political language often serves to distort and obscure reality. Orwell's description of political speech is extremely similar to the popular definition of the term, doublespeak:[8] In our time, political speech and writing are largely the defence of the indefensible… Thus political language has to consist largely of euphemism, question-begging and sheer cloudy vagueness… the great enemy of clear language is insincerity. Where there is a gap between one's real and one's declared aims, one turns as it were instinctively to long words and exhausted idioms… The writerEdward S. Hermancited what he saw as examples of doublespeak and doublethink in modern society.[9]Herman describes in his book,Beyond Hypocrisy,the principal characteristics of doublespeak: What is really important in the world of doublespeak is the ability to lie, whether knowingly or unconsciously, and to get away with it; and the ability to use lies and choose and shape facts selectively, blocking out those that don’t fit an agenda or program.[10] Edward S. HermanandNoam Chomskycomment in their bookManufacturing Consent: the Political Economy of the Mass Mediathat Orwellian doublespeak is an important component of the manipulation of the English language in American media, through a process calleddichotomization,a component of media propaganda involving "deeply embedded double standards in the reporting of news." For example, the use of state funds by the poor and financially needy is commonly referred to as "social welfare" or "handouts," which the "coddled" poor "take advantage of". 
These terms, however, are not as often applied to other beneficiaries of government spending such as military spending.[11]The bellicose language used interchangeably with calls for peace towardsArmeniabyAzerbaijanipresidentAliyevafter theSecond Nagorno-Karabakh Warwere described as doublespeak in media.[12] Advertisers can use doublespeak to mask their commercial intent from users, as users' defenses against advertising become more entrenched.[13]Some are attempting to counter this technique with a number of systems offering diverse views and information to highlight the manipulative and dishonest methods that advertisers employ.[14] According toJacques Ellul, "the aim is not to even modify people’s ideas on a given subject, rather, it is to achieve conformity in the way that people act." He demonstrates this view by offering an example from drug advertising. Use of doublespeak in advertisements resulted in aspirin production rates rising by almost 50 percent from over 23 million pounds in 1960 to over 35 million pounds in 1970.[15] Doublespeak, particularly when exaggerated, can be used as a device in satirical comedy and social commentary toironicallyparody political or bureaucratic establishments' intent on obfuscation or prevarication. The television seriesYes Ministeris notable for its use of this device.[16]Oscar Wildewas an early proponent of this device[17][18][19]and a significant influence on Orwell.[18] This pattern was formulated by Hugh Rank and is a simple tool designed to teach some basic patterns of persuasion used in political propaganda and commercial advertising. The function of the intensify/downplay pattern is not to dictate what should be discussed but to encourage coherent thought and systematic organization. The pattern works in two ways: intensifying and downplaying. All people intensify, and this is done via repetition, association and composition. Downplaying is commonly done via omission, diversion and confusion as they communicate in words, gestures, numbers, et cetera. Individuals can better cope with organized persuasion by recognizing the common ways whereby communication is intensified or downplayed, so as to counter doublespeak.[20] In 2022 and 2023, it was widely reported thatsocial mediausers were using a form of doublespeak – sometimes called "algospeak" – to subvertcontent moderationon platforms such asTikTok.[21][22][23]Examples include using the word "unalive" instead of "dead" or "kill", or using "leg booty" instead ofLGBT, which users believed would prevent moderation algorithms from banning orshadow banningtheir accounts.[21][24] Doublespeak is often used by politicians to advance their agenda. TheDoublespeak Awardis an "ironic tribute to public speakers who have perpetuated language that is grossly deceptive, evasive, euphemistic, confusing, or self-centered." It has been issued by theUSNational Council of Teachers of English(NCTE) since 1974.[25]The recipients of the Doublespeak Award are usually politicians, national administration or departments. An example of this is the United States Department of Defense, which won the award three times, in 1991, 1993, and 2001. For the 1991 award, the United States Department of Defense "swept the first six places in the Doublespeak top ten"[26]for using euphemisms like "servicing the target" (bombing) and "force packages" (warplanes). 
Among the other phrases in contention were "difficult exercise in labor relations", meaning a strike, and "meaningful downturn in aggregate output", an attempt to avoid saying the word "recession".[1] The USNational Council of Teachers of English(NCTE) Committee on Public Doublespeak was formed in 1971, in the midst of the Watergate scandal. It was at a point when there was widespread skepticism about the degree of truth which characterized relationships between the public and the worlds of politics, the military, and business. NCTE passed two resolutions. One called for the council to find means to study dishonest and inhumane uses of language and literature by advertisers, to bring offenses to public attention, and to propose classroom techniques for preparing children to cope with commercial propaganda. The other called for the council to find means to study the relationships between language and public policy and to track, publicize, and combat semantic distortion by public officials, candidates for office, political commentators, and all others whose language is transmitted through the mass media. The two resolutions were accomplished by forming NCTE's Committee on Public Doublespeak, a body which has made significant contributions in describing the need for reform where clarity in communication has been deliberately distorted.[27] Hugh Rank helped form the Doublespeak committee in 1971 and was its first chairman. Under his editorship, the committee produced a book calledLanguage and Public Policy(1974), with the aim of informing readers of the extensive scope of doublespeak being used to deliberately mislead and deceive the audience. He highlighted the deliberate public misuses of language and provided strategies for countering doublespeak by focusing on educating people in the English language so as to help them identify when doublespeak is being put into play. He was also the founder of the Intensify/Downplay pattern that has been widely used to identify instances of doublespeak being used.[27] Daniel Dieterich, former chair of theNational Council of Teachers of English, served as the second chairman of the Doublespeak committee after Hugh Rank in 1975. He served as editor of its second publication,Teaching about Doublespeak(1976), which carried forward the committee's charge to inform teachers of ways of teaching students how to recognize and combat language designed to mislead and misinform.[27] William D. Lutz, professor emeritus atRutgers University-Camdenhas served as the third chairman of the Doublespeak Committee since 1975. In 1989, both his own bookDoublespeakand, under his editorship, the committee's third book,Beyond Nineteen Eighty-Four, were published.Beyond Nineteen Eighty-Fourconsists of 220 pages and eighteen articles contributed by long-time Committee members and others whose bodies of work have contributed to public understanding about language, as well as a bibliography of 103 sources on doublespeak.[20]Lutz was also the former editor of the now defunctQuarterly Review of Doublespeak, which examined the use of vocabulary by public officials to obscure the underlying meaning of what they tell the public. Lutz is one of the main contributors to the committee as well as promoting the term "doublespeak" to a mass audience to inform them of its deceptive qualities. He mentions:[28] There is more to being an effective consumer of language than just expressing dismay at dangling modifiers, faulty subject and verb agreement, or questionable usage. 
All who use language should be concerned whether statements and facts agree, whether language is, in Orwell's words, "largely the defense of the indefensible" and whether language "is designed to make lies sound truthful and murder respectable, and to give an appearance of solidity to pure wind". Charles Weingartner, one of the founding members of the NCTE committee on Public Doublespeak mentioned: "people do not know enough about the subject (the reality) to recognize that the language being used conceals, distorts, misleads. Teachers of English should teach our students that words are not things, but verbal tokens or signs of things that should finally be carried back to the things that they stand for to be verified."[29]
https://en.wikipedia.org/wiki/Doublespeak
Aeuphemism(/ˈjuːfəmɪzəm/YOO-fə-miz-əm) is an innocuous word or expression used in place of one that is deemedoffensiveor suggests something unpleasant.[1]Some euphemisms are intended to amuse, while others use bland, inoffensive terms for concepts that the user wishes to downplay. Euphemisms may be used to mask profanity or refer totopics some considertaboosuch as mental or physical disability, sexual intercourse, bodily excretions, pain, violence, illness, or death in a polite way.[2] Euphemismcomes from theGreekwordeuphemia(εὐφημία) which refers to the use of 'words of good omen'; it is a compound ofeû(εὖ), meaning 'good, well', andphḗmē(φήμη), meaning 'prophetic speech; rumour, talk'.[3]Euphemeis a reference to the female Greek spirit of words of praise and positivity, etc. The termeuphemismitself was used as a euphemism by theancient Greeks; with the meaning "to keep a holy silence" (speaking well by not speaking at all).[4] Reasons for using euphemisms vary by context and intent. Commonly, euphemisms are used to avoid directly addressing subjects that might be deemed negative or embarrassing, such asdeath,sex, and excretory bodily functions. They may be created for innocent, well-intentioned purposes or nefariously and cynically, intentionally to deceive, confuse ordeny. Euphemisms which emerge as dominant social euphemisms are often created to serve progressive causes.[5][6]TheOxford University Press'sDictionary of Euphemismsidentifies "late" as an occasionally ambiguous term, whose nature as a euphemism for dead and an adjective meaning overdue, can cause confusion in listeners.[7] Euphemisms are also used to mitigate, soften or downplay the gravity of large-scale injustices,war crimes, or other events that warrant a pattern of avoidance in official statements or documents. For instance, one reason for the comparative scarcity of written evidence documenting the exterminations atAuschwitz, relative to their sheer number, is "directives for the extermination process obscured in bureaucratic euphemisms".[8]Another example of this is during the 2022Russian invasion of Ukraine, where Russian PresidentVladimir Putin, in his speech starting the invasion, called the invasion a "special military operation".[9] Euphemisms are sometimes used to lessen the opposition to a political move. For example, according to linguistGhil'ad Zuckermann, Israeli Prime MinisterBenjamin Netanyahuused the neutral Hebrew lexical itemפעימותpeimót(literally 'beatings (of the heart)'), rather thanנסיגהnesigá('withdrawal'), to refer to the stages in the Israeli withdrawal from theWest Bank(seeWye River Memorandum), in order to lessen the opposition of right-wing Israelis to such a move.[10]Peimótwas thus used as a euphemism for 'withdrawal'.[10]: 181 Euphemism may be used as arhetorical strategy, in which case its goal is to change thevalenceof a description.[clarification needed] Using a euphemism can in itself be controversial, as in the following examples: The use of euphemism online is known as "algospeak" when used to evade automated online moderation techniques used on Meta and TikTok's platforms.[13][14][15][16][17]Algospeak has been used in debate about theIsraeli–Palestinian conflict.[18][19] Phonetic euphemism is used to replace profanities and blasphemies, diminishing their intensity. To alter the pronunciation or spelling of a taboo word (such asprofanity) to form a euphemism is known astaboo deformation, or aminced oath. 
Such modifications include: Euphemisms formed fromunderstatementsincludeasleepfor dead anddrinkingfor consuming alcohol. "Tired and emotional" is a notorious British euphemism for "drunk", one of manyrecurring jokespopularized by the satirical magazinePrivate Eye; it has been used by MPs to avoidunparliamentary language. Pleasant, positive, worthy, neutral, or nondescript terms are often substituted for explicit or unpleasant ones, with many substituted terms deliberately coined by sociopolitical movements,marketing,public relations, oradvertisinginitiatives, including: Some examples ofCockneyrhyming slangmay serve the same purpose: to call a person aberksounds less offensive than to call a person acunt, thoughberkis short forBerkeley Hunt,[20]which rhymes withcunt.[21] The use of a term with a softer connotation, though it shares the same meaning. For instance,screwed upis a euphemism for 'fucked up';hook-upandlaidare euphemisms for 'sexual intercourse'. Expressions or words from a foreign language may be imported for use as euphemism. For example, the French wordenceintewas sometimes used instead of the English wordpregnant;[22]abattoirforslaughterhouse, although in French the word retains its explicit violent meaning 'a place for beating down', conveniently lost on non-French speakers.Entrepreneurforbusinessman, adds glamour;douche(French for 'shower') for vaginal irrigation device;bidet('little pony') for vessel for anal washing. Ironically, although in English physical "handicaps" are almost always described with euphemism, in French the English wordhandicapis used as a euphemism for their problematic wordsinfirmitéorinvalidité.[23] Periphrasis, orcircumlocution, is one of the most common: to "speak around" a given word,implyingit without saying it. Over time, circumlocutions become recognized as established euphemisms for particular words or ideas. Bureaucraciesfrequently spawn euphemisms intentionally, asdoublespeakexpressions. For example, in the past, the US military used the term "sunshine units" for contamination byradioactive isotopes.[24]The United StatesCentral Intelligence Agencyrefers to systematictortureas "enhanced interrogation techniques".[25]An effective death sentence in the Soviet Union during theGreat Purgeoften used the clause "imprisonmentwithout right to correspondence": the person sentenced would be shot soon after conviction.[26]As early as 1939, Nazi officialReinhard Heydrichused the termSonderbehandlung("special treatment") to meansummary executionof persons viewed as "disciplinary problems" by the Nazis even before commencing thesystematic extermination of the Jews.Heinrich Himmler, aware that the word had come to be known to mean murder, replaced that euphemism with one in which Jews would be "guided" (to their deaths) through the slave-labor and extermination camps[27]after having been "evacuated" to their doom. 
Such was part of the formulation ofEndlösung der Judenfrage(the "Final Solution to the Jewish Question"), which became known to the outside world during theNuremberg Trials.[28] Frequently, over time, euphemisms themselves become taboo words, through the linguistic process ofsemantic changeknown aspejoration, which University of Oregon linguist Sharon Henderson Taylor dubbed the "euphemism cycle" in 1974,[29]also frequently referred to as the "euphemism treadmill", as worded bySteven Pinker.[30]For instance, the place of human defecation is a needy candidate for a euphemism in all eras.Toiletis an 18th-century euphemism, replacing the older euphemismhouse-of-office, which in turn replaced the even older euphemismsprivy-houseandbog-house.[31]In the 20th century, where the old euphemismslavatory(a place where one washes) andtoilet(a place where one dresses[32]) had grown from widespread usage (e.g., in the United States) to being synonymous with the crude act they sought to deflect, they were sometimes replaced withbathroom(a place where one bathes),washroom(a place where one washes), orrestroom(a place where one rests) or even by the extreme formpowder room(a place where one applies facial cosmetics).[citation needed]The formwater closet, often shortened toW.C., is a less deflective form.[citation needed]The wordshitappears to have originally been a euphemism for defecation in Pre-Germanic, as theProto-Indo-European root*sḱeyd-, from which it was derived, meant 'to cut off'.[33] Another example in American English is the replacement of "colored people" with "Negro" (euphemism by foreign language), which itself came to be replaced by either "African American" or "Black".[34]Also in the United States the term "ethnic minorities" in the 2010s has been replaced by "people of color".[34] Venereal disease, which associated shameful bacterial infection with a seemingly worthy ailment emanating fromVenus, the goddess of love, soon lost its deflective force in the post-classical education era, as "VD", which was replaced by thethree-letter initialism"STD" (sexually transmitted disease); later, "STD" was replaced by "STI" (sexually transmitted infection).[35] Intellectually-disabled people were originally defined with words such as "morons" or "imbeciles", which then became commonly used insults. The medical diagnosis was changed to "mentally retarded", which morphed into the pejorative, "retard", against those with intellectual disabilities. To avoid the negative connotations of their diagnoses, students who need accommodations because of such conditions are often labeled as "special needs" instead, although the words "special" or "SPED" (short for "special education") have long been schoolyard insults.[36][better source needed]As of August 2013, theSocial Security Administrationreplaced the term "mental retardation" with "intellectual disability".[37]Since 2012, that change in terminology has been adopted by theNational Institutes of Healthand the medical industry at large.[38]There are numerousdisability-related euphemisms that have negative connotations.
https://en.wikipedia.org/wiki/Euphemism
Īhām(ایهام) inPersian,Urdu,KurdishandArabic poetryis a literary device in which an author uses a word, or an arrangement of words, that can be read in several ways. Each of the meanings may be logically sound, equally true and intended.[1] In the 12th century,Rashid al-Din Vatvatdefinedīhāmas follows: "Īhāmin Persian means to create doubt. This is a literary device, also calledtakhyīl[to make one suppose and fancy], whereby a writer (dabīr), in prose, or a poet, in verse, employs a word with two different meanings, one direct and immediate (qarīb) and the other remote and strange (gharīb), in such a manner that the listener, as soon as he hears that word, thinks of its direct meaning while in actuality the remote meaning is intended."[1] Amir Khusrow(1253–1325 CE) introduced the notion that any of the several meanings of a word, or phrase, might be equally true and intended, creating a multilayered text.[2]Discerning the various layers of meanings would be a challenge to the reader, who has to focus on and keep turning over the passage in his mind, applying his erudition and imagination to perceive alternative meanings.[1] Another idea associated withīhāmis that a verse may function as a mirror of the reader's condition, as expressed by the 14th-century authorShaykh Maneri: "A verse by itself has no fixed meaning. It is the reader/listener who picks up an idea consistent with the subjective condition of his mind."[1]The 15th-century poetFawhr-e Din Nizamiconsideredīhāman essential element of any good work of poetry: "A poem that doesn't have dual-meaning words, such a poem does not attract anyone at all—a poem without words of two senses."[3] Īhāmis an important stylistic device inSufiliterature, perfected by writers such asHafez(1325/1326–1389/1390 CE).[1][4]Nalîis an example of another poet who has usedīhāmwidely in his poetry. Applications of this "art of ambiguity" or "amphibology" include texts that can be read as descriptions of earthly or divine love.[4][5][6] Haleh Pourafzal and Roger Montgomery, writing inHaféz: Teachings of the Philosopher of Love(1998), discussīhāmin terms of "biluminosity", simultaneous illumination from two directions, describing it as "a technique of comparison involving wordplay, sound association, and double entendre, keeping the reader in doubt as to the 'right' meaning of the word. Biluminosity removes the burden of choice and invites the reader to enter a more empowering dimension ofīhāmthat embraces the quality of amphibians [...]—beings capable of living equally well in two radically different environments. As a result, the reader is freed from the obsession to find the 'right answer' through speculation and instead can concentrate on enjoying nuances and being awed by how the slightest shift in perception creates a new meaning. [...] From the perspective of Haféz as the composer of poetry, biluminosity allows two different points of view to shed light upon each other."[7]
https://en.wikipedia.org/wiki/Iham
Ribaldryorblue comedyis humorous entertainment that ranges from bordering onindelicacytoindecency.[1]Blue comedy is also referred to as "bawdiness" or being "bawdy". Like any humour, ribaldry may be read as conventional orsubversive. Ribaldry typically depends on a shared background of sexual conventions and values, and itscomedygenerally depends on seeing those conventions broken. The ritualtaboo-breaking that is a usual counterpart of ribaldry underlies its controversial nature and explains why ribaldry is sometimes a subject ofcensorship. Ribaldry, whose usual aim isnot"merely" to be sexually stimulating, often does address larger concerns than mere sexual appetite. However, being presented in the form of comedy, these larger concerns may be overlooked by censors. Sex is presented in ribald material more for the purpose of poking fun at the foibles and weaknesses that manifest themselves inhuman sexuality, rather than to present sexual stimulation either overtly or artistically. Also, ribaldry may use sex as ametaphorto illustrate some non-sexual concern, in which case ribaldry borderssatire. Ribaldry differs fromblack comedyin that the latter deals with topics that would normally be consideredpainfulorfrightening, whereas ribaldry deals with topics that would only be considered offensive. Ribaldry is present to some degree in every culture and has likely been around for all of human history. Works likeLysistratabyAristophanes,MenaechmibyPlautus,Cena TrimalchionisbyPetronius, andThe Golden AssofApuleiusare ribald classics fromancient Greece and Rome.Geoffrey Chaucer's "The Miller's Tale" from hisCanterbury TalesandThe Crabfish, one of the oldest English traditional ballads, are classic examples. The FrenchmanFrançois Rabelaisshowed himself to be a master of ribaldry (technically calledgrotesque body) in hisGargantuaand other works.The Life and Opinions of Tristram Shandy, GentlemanbyLaurence SterneandThe Lady's Dressing RoombyJonathan Swiftare also in this genre; as isMark Twain's long-suppressed1601. Another example of ribaldry is "De Brevitate Vitae", a song which in manyEuropean-influenced universities is both a student beer-drinking song and an anthem sung by official universitychoirsat public graduation ceremonies. The private and public versions of the song contain vastly different words. More recent works likeCandy,Barbarella,L'Infermiera, the comedic works ofRuss Meyer,Little Annie FannyandJohn Barth'sThe Sot-Weed Factorare probably better classified as ribaldry than as either pornography or erotica.[citation needed] A bawdy song is a humorous song that emphasises sexual themes and is often rich withinnuendo. Historically these songs tend to be confined to groups of young males, either as students or in an environment where alcohol is flowing freely. An early collection wasWit and Mirth, or Pills to Purge Melancholy, edited by Thomas D'Urfey and published between 1698 and 1720. Selected songs fromWit and Mirthhave been recorded by theCity Waitesand other singers. Sailor's songs tend to be quite frank about the exploitative nature of the relationship between men and women. There are many examples of folk songs in which a man encounters a woman in the countryside. This is followed by a short conversation, and then sexual intercourse, e.g. "The Game of All Fours". Neither side demonstrates any shame or regret. If the woman becomes pregnant, the man will not be there anyway.Rugbysongs are often bawdy. 
Examples of bawdy folk songs are: "Seventeen Come Sunday" and "The Ballad of Eskimo Nell".Robert BurnscompiledThe Merry Muses of Caledonia(the title is not Burns's), a collection of bawdy lyrics that were popular in the music halls of Scotland as late as the 20th century. In modern timesHash House Harriershave taken on the role of tradition-bearers for this kind of song.The Unexpurgated Folk Songs of Men(Arhoolie 4006) is a gramophone record containing a collection of American bawdy songs recorded in 1959.[2] Blue comedy is comedy that isoff-colour,risqué,indecent, orprofane, largely about sex. It often containsprofanityor sexual imagery that may shock and offend some audience members.[citation needed] "Working blue" refers to the act of using swear words and discussing things that people would not discuss in "polite society". A "blue comedian" or "blue comic" is acomedianwho usually performs risqué routines layered with curse words. There is a common belief that comedianMax Miller(1894–1963) coined the phrase, after his stage act which involved telling jokes from either a white book or a blue book, chosen by audience preference (the blue book contained ribald jokes). This is not so, as theOxford English Dictionarycontains earlier references to the use of blue to mean ribald: 1890Sporting Times25 Jan. 1/1 "Shifter wondered whether the damsel knew any novel blue stories." and 1900Bulletin(Sydney) 20 Oct. 12/4 "Let someone propose to celebrateChaucerby publicly reading some of his bluest productions unexpurgated. The reader would probably be locked up." Private events at show business clubs such as theMasquersoften showed this blue side of otherwise clean-cut comedians; a recording survives of one Masquers roast from the 1950s withJack Benny,George Jessel,George Burns, andArt Linkletterall using highly risqué material and obscenities. Many comedians who are normally family-friendly might choose to work blue when off-camera or in an adult-oriented environment;Bob Sagetexemplified thisdichotomy.Bill Cosby's 1969 record album8:15 12:15records both his family-friendly evening standup comedy show, and his blue midnight show, which included a joke about impregnating his wife "right through the old midnight trampoline" (herdiaphragm) and other sexual references.[3] Some comedians build their careers on blue comedy. Among the best known of these areRedd Foxx,Lawanda Page, and the team of Leroy and Skillet, all of whom later performed on the family-friendly television showSanford and Son. Page, Leroy, and Skillet specialised in a particularAfrican Americanform of blue spoken word recitation calledsignifying or toasting.Dave Attellhas also been described by his peers as one of the greatest modern-day blue comics.[4] Ontalk radioin the United States and elsewhere, blue comedy is a staple of theshock jock's repertoire. The use of blue comedy over American radio airwaves is severely restricted due to decency regulations; theFederal Communications Commissioncan levy fines against radio stations that air obscene content. As a part of English literature, blue literature dates back to at leastMiddle English, while bawdy humor is a central element in works of such writers asShakespeareandChaucer. 
Examples of blue literature are also present in various cultures, among different social classes, and genders.[5]Until the 1940s, writers of English-language blue literature were almost exclusively men; since then, it has become possible for women to build a commercial career on blue literature.[5]: 170While no extensive cross-cultural study has been made in an attempt to prove the universality of blue literature, oral tradition around the world suggests that this may be the case.[5]: 169
https://en.wikipedia.org/wiki/Ribaldry
Concept creep is the process by which harm-related topics experience semantic expansion to include topics which would not originally have been envisaged as falling under that label.[1] It was first described in a Psychological Inquiry article by Nick Haslam in 2016, who identified its effects on the concepts of abuse, bullying, trauma, mental disorder, addiction, and prejudice.[2] Others have identified its effects on terms like "gaslight"[3] and "emotional labour".[4] The phenomenon can be related to the concept of hyperbole.[5] It has been criticised for making people more sensitive to harms[6] and for blurring people's thinking about and understanding of such terms, by categorising together too many things that should not be, and by losing the clarity and specificity of a term.[4] Although the initial research on concept creep has focused on concepts central to the political left's ideology, psychologists have also found evidence that people identifying with the political right have more expansive interpretations of concepts central to their own ideology (e.g. sexual deviance, personal responsibility and terrorism).[7]
https://en.wikipedia.org/wiki/Concept_creep
Demonstratives (abbreviated DEM) are words, such as this and that, used to indicate which entities are being referred to and to distinguish those entities from others. They are typically deictic, their meaning depending on a particular frame of reference, and cannot be understood without context. Demonstratives are often used in spatial deixis (where the speaker or sometimes the listener is to provide context), but also in intra-discourse reference (including abstract concepts) or anaphora, where the meaning is dependent on something other than the relative physical location of the speaker. An example is whether something is currently being said or was said earlier. Demonstrative constructions include demonstrative adjectives or demonstrative determiners, which specify nouns (as in Put that coat on), and demonstrative pronouns, which stand independently (as in Put that on). The demonstratives in English are this, that, these, those, and the archaic yon and yonder, along with this one, these ones, that one and those ones as substitutes for the pronouns. Many languages, such as English and Standard Chinese, make a two-way distinction between demonstratives. Typically, one set of demonstratives is proximal, indicating objects close to the speaker (English this), and the other series is distal, indicating objects further removed from the speaker (English that). Other languages, like Finnish, Nandi, Hawaiian, Latin, Spanish, Portuguese, Italian (in some formal writing), Armenian, Serbo-Croatian, Macedonian, Georgian, Basque, Korean, Japanese, Ukrainian, Bengali, and Sri Lankan Tamil make a three-way distinction.[1] Typically there is a distinction between proximal or first person (objects near to the speaker), medial or second person (objects near to the addressee), and distal or third person[2] (objects far from both). Portuguese, Italian and Hawaiian, for example, distinguish three such series, and further oppositions are created with place adverbs (in Italian, the medial pronouns survive in most of Italy only in historical and bureaucratic texts, although they remain in wide and very common use in some regions, such as Tuscany). In Armenian, the three-way contrast is based on the proximal "s", medial "d/t", and distal "n": այս խնձորը ays khndzorë "this apple", այդ խնձորը ayd khndzorë "that apple (near you)", այն խնձորը ayn khndzorë "yon apple (over there, away from both of us)". In Georgian: ამისი მამა amisi mama "this one's father", იმისი ცოლი imisi coli "that one's wife", მაგისი სახლი magisi saxli "that (by you) one's house". Ukrainian likewise makes a three-way distinction (and has not only number but also three grammatical genders in the singular). In Japanese: この リンゴ kono ringo "this apple", その リンゴ sono ringo "that apple", あの リンゴ ano ringo "that apple (over there)". In Nandi (Kalenjin of Kenya, Uganda and Eastern Congo): Chego chu, Chego choo, Chego chuun — "this milk", "that milk" (near the second person) and "that milk" (away from the first and second person, near a third person or even further away). Ancient Greek has a three-way distinction between ὅδε (hóde "this here"), οὗτος (hoûtos "this"), and ἐκεῖνος (ekeînos "that"). Spanish, Tamil and Seri also make this distinction. French has a two-way distinction, with the use of postpositions "-ci" (proximal) and "-là" (distal), as in cet homme-ci and cet homme-là, as well as the pronouns ce and cela/ça. English has an archaic but occasionally used three-way distinction of this, that, and yonder.
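The two- and three-way systems just described lend themselves to a compact tabular representation. The sketch below is purely illustrative and not part of the source article: the dictionary layout, feature labels and function name are assumptions, and only forms already cited above (English, Japanese, Armenian) are used.

```python
# Minimal sketch (illustrative only) of the deictic-distance systems described
# above, using the article's own terms: proximal, medial, distal.

DEMONSTRATIVES = {
    # English: two-way distance contrast, crossed with number.
    "English": {
        ("proximal", "singular"): "this",
        ("proximal", "plural"): "these",
        ("distal", "singular"): "that",
        ("distal", "plural"): "those",
    },
    # Japanese: three-way contrast (adnominal forms cited above).
    "Japanese": {("proximal",): "kono", ("medial",): "sono", ("distal",): "ano"},
    # Armenian: three-way contrast built on s / d-t / n.
    "Armenian": {("proximal",): "ays", ("medial",): "ayd", ("distal",): "ayn"},
}

def degrees_of_distance(language: str) -> int:
    """Count how many distance categories a language's paradigm distinguishes."""
    return len({features[0] for features in DEMONSTRATIVES[language]})

if __name__ == "__main__":
    for lang in DEMONSTRATIVES:
        print(f"{lang}: {degrees_of_distance(lang)}-way distance contrast")
```

Representing the paradigms this way makes the typological point concrete: in English, distance and number are independent dimensions, while Japanese and Armenian add a medial term to the distance dimension.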
Arabic also has a three-way distinction in its formal Classical and Modern Standard varieties. Very rich, with more than 70 variants, the demonstrative pronouns in Arabic principally change depending on gender and number, and they mark a distinction in number for singular, dual, and plural. In Modern German (and the Scandinavian languages), the non-selective deictic das Kind, der Kleine, die Kleine and the selective one das Kind, der Kleine, die Kleine are homographs, but they are spoken differently. The non-selective deictics are unstressed whereas the selective ones (demonstratives) are stressed. There is a second selective deictic, namely dieses Kind, dieser Kleine, diese Kleine. Distance either from the speaker or from the addressee is marked either by the opposition between these two deictics or by the addition of a place deictic. A distal demonstrative exists in German, cognate to the English yonder, but it is used only in formal registers.[3] Cognates of "yonder" still exist in some Northern English and Scots dialects. There are languages which make a four-way distinction, such as Northern Sami. These four-way distinctions are often termed proximal, mesioproximal, mesiodistal, and distal. Many non-European languages make further distinctions; for example, whether the object referred to is uphill or downhill from the speaker, whether the object is visible or not (as in Malagasy), and whether the object can be pointed to as a whole or only in part. The Eskimo–Aleut languages,[4] and the Kiranti branch[5] of the Sino-Tibetan language family are particularly well known for their many contrasts. The demonstratives in Seri are compound forms based on the definite articles (themselves derived from verbs) and therefore incorporate the positional information of the articles (standing, sitting, lying, coming, going) in addition to the three-way spatial distinction. This results in a quite elaborated set of demonstratives. Latin had several sets of demonstratives, including hic, haec, hoc ("this near me"); iste, ista, istud ("that near you"); and ille, illa, illud ("that over there") – note that Latin has not only number, but also three grammatical genders. The third set of Latin demonstratives (ille, etc.) developed into the definite articles in most Romance languages, such as el, la, los, las in Spanish, and le, la, les in French. With the exception of Romanian, and some varieties of Spanish and Portuguese, the neuter gender has been lost in the Romance languages. Spanish and Portuguese have kept neuter demonstratives (Spanish esto, eso, aquello; Portuguese isto, isso, aquilo). Some forms of Spanish (Caribbean Spanish, Andalusian Spanish, etc.) also occasionally employ ello, which is an archaic survival of the neuter pronoun from Latin illud.[citation needed] Neuter demonstratives refer to ideas of indeterminate gender, such as abstractions and groups of heterogeneous objects, and have only limited agreement in Portuguese; for example, "all of that" can be translated as "todo aquele" (m), "toda aquela" (f) or "tudo aquilo" (n), although the neuter forms require masculine adjective agreement: "Tudo (n) aquilo (n) está quebrado (m)" (All of that is broken).
Classical Chinese had three main demonstrative pronouns: proximal 此 (this), distal 彼 (that), and distance-neutral 是 (this or that).[6] The frequent use of 是 as a resumptive demonstrative pronoun that reasserted the subject before a noun predicate caused it to develop into its colloquial use as a copula by the Han period and subsequently its standard use as a copula in Modern Standard Chinese.[6] Modern Mandarin has two main demonstratives, proximal 這/这 and distal 那; its use of the three Classical demonstratives has become mostly idiomatic,[7] although 此 continues to be used with some frequency in modern written Chinese. Cantonese uses proximal 呢 and distal 嗰 instead of 這 and 那, respectively. Similarly, Northern Wu languages tend to also have a distance-neutral demonstrative 搿, which is etymologically a checked-tone derivation of 個. In lects such as Shanghainese, distance-based demonstratives exist, but are only used contrastively. Suzhounese, on the other hand, has several demonstratives that form a two-way contrast, but also has 搿, which is neutral.[8][9] Hungarian has two spatial demonstratives: ez (this) and az (that). These inflect for number and case even in attributive position (attributes usually remain uninflected in Hungarian), with possible orthographic changes; e.g., ezzel (with this), abban (in that). A third degree of deixis is also possible in Hungarian, with the help of the am- prefix: amaz (that there). The use of this, however, is emphatic (when the speaker wishes to emphasize the distance) and not mandatory. The Cree language has a special demonstrative for "things just gone out of sight," and Ilocano, a language of the Philippines, has three words for this referring to a visible object, a fourth for things not in view and a fifth for things that no longer exist.[10] The Tiriyó language has a demonstrative for "things audible but non-visible".[11] While most languages and language families have demonstrative systems, some have systems highly divergent from or more complex than the relatively simple systems employed in Indo-European languages. In Yupik languages, notably in the Chevak Cup’ik language, there exists a 29-way distinction in demonstratives, with demonstrative indicators distinguished according to placement in a three-dimensional field around the interlocutor(s), as well as by visibility and whether or not the object is in motion.[12][failed verification] It is relatively common for a language to distinguish between demonstrative determiners or demonstrative adjectives (sometimes also called determinative demonstratives, adjectival demonstratives or adjectival demonstrative pronouns) and demonstrative pronouns (sometimes called independent demonstratives, substantival demonstratives, independent demonstrative pronouns or substantival demonstrative pronouns). A demonstrative determiner specifies a noun as definite, singular or plural, and proximal or distal (as in Put that coat on). A demonstrative pronoun stands on its own, replacing rather than modifying a noun (as in Put that on). There are four common demonstrative pronouns in English: this, that, these, those.[13] Some dialects, such as Southern American English, also use yon and yonder, where the latter is usually employed as a demonstrative determiner.[14] Author Bill Bryson laments the "losses along the way" of yon and yonder:[14] Today we have two demonstrative pronouns, this and that, but in Shakespeare's day there was a third, yon (as in the Milton line "Him that yon soars on golden wing"), which suggested a further distance than that. You could talk about this hat, that hat, and yon hat.
Today the word survives as a colloquial adjective, yonder, but our speech is fractionally impoverished for its loss. Many languages have sets of demonstrative adverbs that are closely related to the demonstrative pronouns in a language. For example, corresponding to the demonstrative pronoun that are adverbs such as then (= "at that time"), there (= "at that place"), thither (= "to that place"), thence (= "from that place"); the equivalent adverbs corresponding to the demonstrative pronoun this are now, here, hither, hence. A similar relationship exists between the interrogative pronoun what and the interrogative adverbs when, where, whither, whence. See pro-form for a full table. As mentioned above, while the primary function of demonstratives is to provide spatial references of concrete objects (that (building), this (table)), there is a secondary function: referring to items of discourse.[15] In such uses, a phrase like this sentence refers to the sentence being spoken, and the pronoun this refers to what is about to be spoken; that way refers to "the previously mentioned way", and the pronoun that refers to the content of a previous statement. These are abstract entities of discourse, not concrete objects. Each language may have subtly different rules on how to use demonstratives to refer to things previously spoken, currently being spoken, or about to be spoken. In English, that (or occasionally those) refers to something previously spoken, while this (or occasionally these) refers to something about to be spoken (or, occasionally, something being simultaneously spoken).[citation needed]
https://en.wikipedia.org/wiki/Demonstrative
In semiotics and discourse analysis, floating signifiers (also referred to as empty signifiers,[1] although these terms have been made distinct[2]) are signifiers without a referent. The term open signifier is sometimes used as a synonym due to the empty signifier's capacity to "resist the constitution of any unitary meaning", enabling it to remain open to different meanings in different contexts.[3] Daniel Chandler defines the term as "a signifier with a vague, highly variable, unspecifiable or non-existent signified".[4] The concept of floating signifiers originates with Claude Lévi-Strauss, who identified cultural ideas like mana as "represent[ing] an undetermined quantity of signification, in itself void of meaning and thus apt to receive any meaning".[5] As such, a "floating signifier" may "mean different things to different people: they may stand for many or even any signifieds; they may mean whatever their interpreters want them to mean".[6] Such a floating signifier—which is said to possess "symbolic value zero"—necessarily serves to "allow symbolic thought to operate despite the contradiction inherent in it".[7] Roland Barthes, while not using the term "floating signifier" explicitly, referred specifically to non-linguistic signs as being so open to interpretation that they constituted a "floating chain of signifieds." Jacques Derrida also described the "freeplay" of signifiers, arguing that they are not fixed to their signifieds but point beyond themselves to other signifiers in an "indefinite referral of signifier to signified."[4] In Emancipation(s), Ernesto Laclau frames the empty signifier in the context of social interactions. For Laclau, the empty signifier is the hegemonic representative of a collection of various demands, constituting a chain of equivalence whose members are distinguished through a differential logic (elements exist only in their differences to one another) but combine through an equivalential one. This chain of unsatisfied demands creates an unfulfilled totality, inside of which one signifier subordinates the rest and assumes representation of the rest via a hegemonic process. The signifiers "empty" and "floating" are distinct conceptually yet in practice meld, as explained by Laclau: "As we can see, the categories of ‘empty’ and ‘floating’ signifiers are structurally different. The first concerns the construction of a popular identity once the presence of a stable frontier is taken for granted; the second tries conceptually to apprehend the logic of the displacements of that frontier."[2] In an interview in December 2013, Laclau clarified the distinction with an example. He illustrated the empty signifier with the case of the Solidarność movement led by Lech Walesa at the Lenin shipyards in Gdansk, Poland, in 1980. At the beginning, the demands of this movement were linked to a set of precise demands of the shipyard workers. However, they started to be employed in a context in which many other demands in different areas were also articulated. In the end, Solidarność became the signifier of something much broader. When this universality comes about, it cuts off the connection between the signifier and the signified. In the case of Solidarność, in the beginning it had a particular signified, but then, because its appeal increased so much, the reference to a particular signified was diluted. A floating signifier is different. It can be connected to different contexts, so the function of meaning therein is fully realized.
Even when it is ambiguous, it is not empty. It fluctuates between different forms of articulation in different projects.[8] The notion of floating signifiers can be applied to concepts such asrace[9]andgender, as a way of asserting that the word is more concrete than the concept it describes, where the concept may not be stable, but the word is. It is often applied tonon-linguistic signs, such as the example of theRorschach inkblot test. The concept is used in some more textual forms ofpostmodernism, which rejects the strict anchoring of particular signifiers to particular signifieds and argues against the concept that there are any ultimate determinable meanings to words or signs. In his 2003 bookCity of Gold,David A. Westbrookrefers to money as a "perpetually floating signifier" of pure potential, noting that "its promise to represent anything in particular is never fulfilled."[10] TheOxford Dictionary of Critical Theorygives the example that "Fredric Jamesonsuggests that the shark in theJawsseries of films is an empty signifier because it is susceptible to multiple and even contradictory interpretations, suggesting that it does not have a specific meaning itself, but functions primarily as a vehicle for absorbing meanings that viewers want to impose upon it."[11]
https://en.wikipedia.org/wiki/Floating_signifier
Insemiotics,linguistics,anthropology, andphilosophy of language,indexicalityis the phenomenon of asignpointing to (orindexing) some element in thecontextin which it occurs. A sign that signifies indexically is called anindexor, in philosophy, anindexical. The modern concept originates in thesemiotic theory of Charles Sanders Peirce, in which indexicality is one of the three fundamental sign modalities by which a sign relates to its referent (the others beingiconicityandsymbolism).[1]Peirce's concept has been adopted and extended by several twentieth-century academic traditions, including those of linguisticpragmatics,[2]: 55–57linguistic anthropology,[3]and Anglo-American philosophy of language.[4] Words and expressions inlanguageoften derive some part of their referential meaning from indexicality. For example,Iindexically refers to the entity that is speaking;nowindexically refers to a time frame including the moment at which the word is spoken; andhereindexically refers to a locational frame including the place where the word is spoken. Linguistic expressions that refer indexically are known asdeictics, which thus form a particular subclass of indexical signs, though there is some terminological variation among scholarly traditions. Linguistic signs may also derive nonreferential meaning from indexicality, for example when features of a speaker'sregisterindexically signal theirsocial class. Nonlinguistic signs may also display indexicality: for example, apointingindex fingermay index (without referring to) some object in the direction of the line implied by the orientation of the finger, and smoke may index the presence of a fire. In linguistics and philosophy of language, the study of indexicality tends to focus specifically on deixis, while in semiotics and anthropology equal attention is generally given to nonreferential indexicality, including altogether nonlinguistic indexicality. In disciplinary linguistics, indexicality is studied in the subdiscipline ofpragmatics. Specifically, pragmatics tends to focus ondeictics—words and expressions of language that derive some part of their referential meaning from indexicality—since these are regarded as "[t]he single most obvious way in which the relationship between language and context is reflected in the structures of languages themselves"[2]: 54Indeed, in linguistics the termsdeixisandindexicalityare often treated as synonymous, the only distinction being that the former is more common in linguistics and the latter in philosophy of language.[2]: 55This usage stands in contrast with that of linguistic anthropology, which distinguishes deixis as a particular subclass of indexicality. The concept of indexicality was introduced into the literature oflinguistic anthropologybyMichael Silversteinin a foundational 1976 paper, "Shifters, Linguistic Categories and Cultural Description".[5]Silverstein draws on "the tradition extending from Peirce toJakobson" of thought about sign phenomena to propose a comprehensive theoretical framework in which to understand the relationship between language andculture, the object of study of modernsociocultural anthropology. 
This framework, while also drawing heavily on the tradition ofstructural linguisticsfounded byFerdinand de Saussure, rejects the other theoretical approaches known asstructuralism, which attempted to project the Saussurean method of linguistic analysis onto other realms of culture, such as kinship and marriage (seestructural anthropology), literature (seesemiotic literary criticism), music, film and others. Silverstein claims that "[t]hat aspect of language which has traditionally been analyzed by linguistics, and has served as a model" for these other structuralisms, "is just the part that is functionally unique among the phenomena of culture." It is indexicality, not Saussurean grammar, which should be seen as the semiotic phenomenon which language has in common with the rest of culture.[5]: 12, 20–21 Silverstein argues that the Saussurean tradition of linguistic analysis, which includes the tradition of structural linguistics in the United States founded byLeonard Bloomfieldand including the work ofNoam Chomskyand contemporarygenerative grammar, has been limited to identifying "the contribution of elements of utterances to thereferentialor denotative value of the whole", that is, the contribution made by some word, expression, or other linguistic element to the function of forming "propositions—predicationsdescriptive of states of affairs". This study of reference and predication yields an understanding of one aspect of the meaning of utterances, theirsemantic meaning, and the subdiscipline of linguistics dedicated to studying this kind of linguistic meaning issemantics.[5]: 14–15 Yet linguistic signs in contexts of use accomplish other functions than pure reference and predication—though they often do so simultaneously, as though the signs were functioning in multiple analytically distinct semiotic modalities at once. In the philosophical literature, the most widely discussed examples are those identified byJ.L. Austinas theperformativefunctions of speech, for instance when a speaker says to an addressee "I bet you sixpence it will rain tomorrow", and in so saying, in addition to simply making a proposition about a state of affairs, actually enters into a socially constituted type of agreement with the addressee, awager.[6]Thus, concludes Silverstein, "[t]he problem set for us when we consider the actual broader uses of language is to describe the total meaning of constituent linguistic signs, only part of which is semantic." This broader study of linguistic signs relative to their general communicative functions ispragmatics, and these broader aspects of the meaning of utterances ispragmatic meaning. (From this point of view, semantic meaning is a special subcategory of pragmatic meaning, that aspect of meaning which contributes to the communicative function of pure reference and predication.).[5]: 193 Silverstein introduces some components of the semiotic theory ofCharles Sanders Peirceas the basis for a pragmatics which, rather than assuming that reference and predication are the essential communicative functions of language with other nonreferential functions being mere addenda, instead attempts to capture the total meaning of linguistic signs in terms of all of their communicative functions. From this perspective, the Peircean category of indexicality turns out to "give the key to the pragmatic description of language."[5]: 21 This theoretical framework became an essential presupposition of work throughout the discipline in the 1980s and remains so in the present. 
The concept of indexicality has been greatly elaborated in the literature of linguistic anthropology since its introduction by Silverstein, but Silverstein himself adopted the term from the theory of sign phenomena, or semiotics, of Charles Sanders Peirce. As an implication of his general metaphysical theory of the three universal categories, Peirce proposed a model of the sign as a triadic relationship: a sign is "something which stands to somebody for something in some respect or capacity."[7] Thus, more technically, a sign consists of a sign-vehicle (the perceptible form which does the representing), an object (that which is represented), and an interpretant (the effect or understanding produced in the "somebody" for whom the sign stands). Peirce further proposed to classify sign phenomena along three different dimensions by means of three trichotomies, the second of which classifies signs into three categories according to the nature of the relationship between the sign-vehicle and the object it represents. As captioned by Silverstein, these are the icon, which represents by virtue of a resemblance between sign-vehicle and object; the index, which represents by virtue of a real connection of co-occurrence or contiguity with its object; and the symbol, which represents by virtue of convention alone. Silverstein observes that multiple signs may share the same sign-vehicle. For instance, as mentioned, linguistic signs as traditionally understood are symbols, and analyzed in terms of their contribution to reference and predication, since they arbitrarily denote a whole class of possible objects of reference by virtue of their semantic meanings. But in a trivial sense each linguistic sign token (word or expression spoken in an actual context of use) also functions iconically, since it is an icon of its type in the code (grammar) of the language. It also functions indexically, by indexing its symbol type, since its use in context presupposes that such a type exists in the semantico-referential grammar in use in the communicative situation (grammar is thus understood as an element of the context of communication).[5]: 27–28 So icon, index and symbol are not mutually exclusive categories—indeed, Silverstein argues, they are to be understood as distinct modes of semiotic function,[5]: 29 which may be overlaid on a single sign-vehicle. This entails that one sign-vehicle may function in multiple semiotic modes simultaneously. This observation is the key to understanding deixis, traditionally a difficult problem for semantic theory. In linguistic anthropology, deixis is defined as referential indexicality—that is, morphemes or strings of morphemes, generally organized into closed paradigmatic sets, which function to "individuate or single out objects of reference or address in terms of their relation to the current interactive context in which the utterance occurs".[9]: 46–47 Deictic expressions are thus distinguished, on the one hand, from standard denotational categories such as common nouns, which potentially refer to any member of a whole class or category of entities: these display purely semantico-referential meaning, and in the Peircean terminology are known as symbols. On the other hand, deixis is distinguished as a particular subclass of indexicality in general, which may be nonreferential or altogether nonlinguistic. In the older terminology of Otto Jespersen and Roman Jakobson, these forms were called shifters.[10][11] Silverstein, by introducing the terminology of Peirce, was able to define them more specifically as referential indexicals.[5] Non-referential indices or "pure" indices do not contribute to the semantico-referential value of a speech event yet "signal some particular value of one or more contextual variables."[5] Non-referential indices encode certain metapragmatic elements of a speech event's context through linguistic variations.
The degree of variation in non-referential indices is considerable and serves to infuse the speech event with, at times, multiple levels of pragmatic "meaning".[12] Of particular note are: sex/gender indices, deference indices (including the affinal taboo index), affect indices, as well as the phenomena of phonological hypercorrection and social identity indexicality. In much of the research currently conducted upon various phenomena of non-referential indexicality, there is an increased interest in not only what is called first-order indexicality, but subsequent second-order as well as "higher-order" levels of indexical meaning. First-order indexicality can be defined as the first level of pragmatic meaning that is drawn from an utterance. For example, instances of deference indexicality, such as the variation between informal tu and formal vous in French, indicate a speaker/addressee communicative relationship built upon the values of power and solidarity possessed by the interlocutors.[13] When a speaker addresses somebody using the V form instead of the T form, they index (via first-order indexicality) their understanding of the need for deference to the addressee. In other words, they perceive or recognize an incongruence between their levels of power and/or solidarity and employ a more formal way of addressing that person to suit the contextual constraints of the speech event. Second-order indexicality is concerned with the connection between linguistic variables and the metapragmatic meanings that they encode. For example, a woman is walking down the street in Manhattan and she stops to ask somebody where a McDonald's is. He responds to her in a heavy "Brooklyn" accent. She notices this accent and considers a set of possible personal characteristics that might be indexed by it (such as the man's intelligence, economic situation, and other non-linguistic aspects of his life). The power of language to encode these preconceived "stereotypes" based solely on accent is an example of second-order indexicality (representative of a more complex and subtle system of indexical form than that of first-order indexicality). One common system of non-referential indexicality is sex/gender indices. These indices index the gender or "female/male" social status of the interlocutor. There are a multitude of linguistic variants that act to index sex and gender. Many instances of sex/gender indices incorporate multiple levels of indexicality (also referred to as indexical order).[12] In fact, some, such as the prefix-affixation of o- in Japanese, demonstrate complex higher-order indexical forms. In this example, the first order indexes politeness and the second order indexes affiliation with a certain gender class. It is argued that there is an even higher level of indexical order, evidenced by the fact that many jobs use the o- prefix to attract female applicants.[15] This notion of higher-order indexicality is similar to Silverstein's discussion of "wine talk" in that it indexes "an identity-by-visible-consumption[12] [here, employment]" that is inherent to a certain social register (i.e. social gender indexicality).
Some examples of affective forms are: diminutives (for example, diminutive affixes in Indo-European and Amerindian languages indicate sympathy, endearment, emotional closeness, or antipathy, condescension, and emotional distance); ideophones and onomatopoeias; expletives, exclamations, interjections, curses, insults, and imprecations (said to be "dramatizations of actions or states"); intonation change (common in tone languages such as Japanese); address terms, kinship terms, and pronouns which often display clear affective dimensions (ranging from the complex address-form systems found in languages such as Javanese to inversions of vocative kin terms found in rural Italy);[16] lexical processes such as synecdoche and metonymy involved in affect meaning manipulation; certain categories of meaning like evidentiality; reduplication, quantifiers, and comparative structures; as well as inflectional morphology. Affective forms are a means by which a speaker indexes emotional states through different linguistic mechanisms. These indices become important when applied to other forms of non-referential indexicality, such as sex indices and social identity indices, because of the innate relationship between first-order indexicality and subsequent second-order (or higher) indexical forms. (See the multiple indices section for a Japanese example.) Deference indices encode deference from one interlocutor to another (usually representing inequalities of status, rank, age, sex, etc.).[5] Examples of deference indices include the T/V systems of European languages, honorifics, and the affinal taboo index, each discussed below. The T/V deference entitlement system of European languages was famously detailed by linguists Brown and Gilman.[13] T/V deference entitlement is a system by which a speaker/addressee speech event is determined by perceived disparities of 'power' and 'solidarity' between interlocutors. Brown and Gilman organized the possible relationships between the speaker and the addressee into six categories, according to whether the addressee is superior, equal, or inferior in power and whether the relationship is solidary or not. The 'power semantic' indicates that the speaker in a superior position uses T and the speaker in an inferior position uses V. The 'solidarity semantic' indicates that speakers use T for close relationships and V for more formal relationships. These two principles conflict in the two categories where they suggest different forms (for instance, an addressee who is both superior and close to the speaker), allowing either T or V in those cases. Brown and Gilman observed that as the solidarity semantic becomes more important than the power semantic in various cultures, the proportion of T to V use in the two ambiguous categories changes accordingly. Silverstein comments that while exhibiting a basic level of first-order indexicality, the T/V system also employs second-order indexicality vis-à-vis 'enregistered honorification'.[12] He cites that the V form can also function as an index of valued "public" register and the standards of good behavior that are entailed by use of V forms over T forms in public contexts. Therefore, people will use T/V deference entailment in 1) a first-order indexical sense that distinguishes between speaker/addressee interpersonal values of 'power' and 'solidarity' and 2) a second-order indexical sense that indexes an interlocutor's inherent "honor" or social merit in employing V forms over T forms in public contexts. Japanese provides an excellent case study of honorifics. Honorifics in Japanese can be divided into two categories: addressee honorifics, which index deference to the addressee of the utterance; and referent honorifics, which index deference to the referent of the utterance.
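Before the Japanese system is taken up in detail, the Brown and Gilman power/solidarity logic described above can be made concrete with a small sketch. This is purely illustrative and not drawn from the source text; the function name, the parameter encoding, and the explicit "T or V" outcome for the conflicting cases are assumptions.

```python
# Illustrative sketch of the Brown & Gilman 'power' and 'solidarity' semantics
# for T/V address, as described above. The conflicting cases are returned as
# "T or V", mirroring the two ambiguous categories in which either form occurs.

def tv_choice(addressee_power: str, solidary: bool) -> str:
    """addressee_power: 'superior', 'equal' or 'inferior' relative to the speaker;
    solidary: True for a close relationship, False for a formal one."""
    # Power semantic: V upward (to a superior), T downward (to an inferior).
    power_pick = {"superior": "V", "equal": None, "inferior": "T"}[addressee_power]
    # Solidarity semantic: T for close relationships, V for formal ones.
    solidarity_pick = "T" if solidary else "V"

    if power_pick is None or power_pick == solidarity_pick:
        return solidarity_pick
    return "T or V"  # the two semantics disagree: the ambiguous cases

if __name__ == "__main__":
    print(tv_choice("superior", solidary=True))    # close superior   -> T or V
    print(tv_choice("inferior", solidary=False))   # distant inferior -> T or V
    print(tv_choice("equal", solidary=True))       # close equal      -> T
```

Brown and Gilman's observation that the solidarity semantic has gained ground over time can then be read as the ambiguous cases increasingly resolving toward the solidarity pick.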
Cynthia Dunn claims that "almost every utterance in Japanese requires a choice between direct and distal forms of the predicate."[17] The direct form indexes intimacy and "spontaneous self-expression" in contexts involving family and close friends. Contrarily, distal forms index social contexts of a more formal, public nature such as distant acquaintances, business settings, or other formal settings. Japanese also contains a set of humble forms (Japanese kenjōgo 謙譲語) which are employed by the speaker to index their deference to someone else. There are also suppletive forms that can be used in lieu of regular honorific endings (for example, the subject honorific form of taberu (食べる, to eat) is meshiagaru (召し上がる)). Verbs that involve human subjects must choose between distal or direct forms (towards the addressee) as well as distinguish between either no use of referent honorifics, use of the subject honorific (for others), or use of the humble form (for self). The Japanese model for non-referential indexicality demonstrates a very subtle and complicated system that encodes social context into almost every utterance. Dyirbal, a language of the Cairns rain forest in Northern Queensland, employs a system known as the affinal taboo index. Speakers of the language maintain two sets of lexical items: 1) an "everyday" or common interaction set of lexical items and 2) a "mother-in-law" set that is employed when the speaker is in the very distinct context of interaction with their mother-in-law. In this particular system of deference indices, speakers have developed an entirely separate lexicon (there are roughly four "everyday" lexical entries for every one "mother-in-law" lexical entry; 4:1) to index deference in contexts inclusive of the mother-in-law. Hypercorrection is defined by Wolfram as "the use of speech form on the basis of false analogy."[18] DeCamp defines hypercorrection in a more precise fashion, claiming that "hypercorrection is an incorrect analogy with a form in a prestige dialect which the speaker has imperfectly mastered."[19] Many scholars argue that hypercorrection provides both an index of "social class" and an "index of linguistic insecurity". The latter index can be defined as a speaker's attempts at self-correction in areas of perceived linguistic insufficiencies which denote their lower social standing and minimal social mobility.[20] Donald Winford conducted a study that measured the phonological hypercorrection in creolization of English speakers in Trinidad. He claims that the ability to use prestigious norms goes "hand-in-hand" with knowledge of stigmatization afforded to use of "lesser" phonological variants.[20] He concluded that sociologically "lesser" individuals would try to increase the frequency of certain vowels that were frequent in the high-prestige dialect, but they ended up using those vowels even more than their target dialect. This hypercorrection of vowels is an example of non-referential indexicality that indexes, by virtue of the pressures that lead lower-class speakers to hypercorrect phonological variants, the actual social class of the speaker.
As Silverstein claims, this also conveys an "index of linguistic insecurity" in which a speaker not only indexes their actual social class (via first-order indexicality) but also the insecurities about class constraints and subsequent linguistic effects that encourage hypercorrection in the first place (an instance of second-order indexicality).[12] William Labov and many others have also studied how hypercorrection in African American Vernacular English demonstrates similar social-class non-referential indexicality. Multiple non-referential indices can be employed to index the social identity of a speaker. How multiple indices can constitute social identity is exemplified by Ochs' discussion of copula deletion: "That bad" in American English can index a speaker to be a child, foreigner, medical patient, or elderly person. Use of multiple non-referential indices at once (for example, copula deletion and rising intonation) helps further index the social identity of the speaker as that of a child.[21] Linguistic and non-linguistic indices are also important ways of indexing social identity. For example, the Japanese utterance-final particle -wa in conjunction with rising intonation (indexical of increasing affect), used by one person who "looks like a woman" and another who looks "like a man", may index different affective dispositions which, in turn, can index gender difference.[14] Ochs and Schieffelin also claim that facial features, gestures, and other non-linguistic indices may actually help specify the general information provided by the linguistic features and augment the pragmatic meaning of the utterance.[22] For demonstrations of higher (or rarefied) indexical orders, Michael Silverstein discusses the particularities of "life-style emblematization" or "convention-dependent-indexical iconicity" which, as he claims, is prototypical of a phenomenon he dubs "wine talk". Professional wine critics use a certain "technical vocabulary" that is "metaphorical of prestige realms of traditional English gentlemanly horticulture."[12] Thus, a certain "lingo" is created for wine that indexically entails certain notions of prestigious social classes or genres. When "yuppies" use the lingo for wine flavors created by these critics in the actual context of drinking wine, Silverstein argues that they become the "well-bred, interesting (subtle, balanced, intriguing, winning, etc.) person" that is iconic of the metaphorical "fashion of speaking" employed by people of higher social registers, demanding notoriety as a result of this high level of connoisseurship.[12] In other words, the wine drinker becomes a refined, gentlemanly critic and, in doing so, adopts a similar level of connoisseurship and social refinement. Silverstein defines this as an example of higher-order indexical "authorization" in which the indexical order of this "wine talk" exists in a "complex, interlocking set of institutionally formed macro-sociological interests."[12] A speaker of English metaphorically transfers him- or herself into the social structure of the "wine world" that is encoded by the oinoglossia of elite critics using a very particular "technical" terminology. The use of "wine talk" or similar "fine-cheeses talk", "perfume talk", "Hegelian-dialectics talk", "particle-physics talk", "DNA-sequencing talk", "semiotics talk" etc. confers upon an individual an identity-by-visible-consumption indexical of a certain macro-sociological elite identity[12] and is, as such, an instance of higher-order indexicality.
Philosophical work on language from the mid-20th century, such as that ofJ.L. Austinand theordinary language philosophers, has provided much of the originary inspiration for the study of indexicality and related issues in linguistic pragmatics (generally under the rubric of the termdeixis), though linguists have appropriated concepts originating in philosophical work for purposes of empirical study, rather than for more strictly philosophical purposes. However, indexicality has remained an issue of interest to philosophers who work on language. In contemporaryanalytic philosophy, the preferred nominal form of the term isindexical(rather thanindex), defined as "any expression whose content varies from one context of use to another ... [for instance] pronouns such as 'I', 'you', 'he', 'she', 'it', 'this', 'that', plus adverbs such as 'now', 'then', 'today', 'yesterday', 'here', and 'actually'.[23]This exclusive focus on linguistic expressions represents a narrower construal than is preferred in linguistic anthropology, which regards linguistic indexicality (deixis) as a special subcategory of indexicality in general, which is often nonlinguistic. Indexicals appear to represent an exception to, and thus a challenge for, the understanding of natural language as the grammatical coding oflogicalpropositions; they thus "raise interesting technical challenges for logicians seeking to provide formal models of correct reasoning in natural language."[23]They are also studied in relation to fundamental issues inepistemology,self-consciousness, andmetaphysics,[23]for example asking whether indexical facts arefacts that do not follow from the physical facts, and thus also form a link between philosophy of language andphilosophy of mind. The American logicianDavid Kaplanis regarded as having developed "[b]y far the most influential theory of the meaning and logic of indexicals".[23]
https://en.wikipedia.org/wiki/Indexicality
In the philosophy of science and some other branches of philosophy, a "natural kind" is an intellectual grouping, or categorizing of things, that is reflective of the actual world and not just human interests.[1] Some treat it as a classification identifying some structure of truth and reality that exists whether or not humans recognize it. Others treat it as intrinsically useful to the human mind, but not necessarily reflective of something more objective. Candidate examples of natural kinds are found in all the sciences, but the field of chemistry provides the paradigm example of elements. Alexander Bird and Emma Tobin see natural kinds as relevant to metaphysics, epistemology, and the philosophy of language, as well as the philosophy of science.[1] John Dewey held a view that belief in unconditional natural kinds is a mistake, a relic of obsolete scientific practices.[2]: 419–24 Hilary Putnam rejects descriptivist approaches to natural kinds with semantic reasoning. Hasok Chang and Rasmus Winther hold the emerging view that natural kinds are useful and evolving scientific facts. In 1938, John Dewey published Logic: The Theory of Inquiry, where he explained how modern scientists create kinds through induction and deduction, and why they have no use for natural kinds. Dewey argued that modern scientists do not follow Aristotle in treating inductive and deductive propositions as facts already known about nature's stable structure. Today, scientific propositions are intermediate steps in inquiry, hypotheses about processes displaying stable patterns. Aristotle's generic and universal propositions have become conceptual tools of inquiry warranted by inductive inclusion and exclusion of traits. They are provisional means rather than results of inquiry revealing the structure of reality. Modern induction starts with a question to be answered or a problem to be solved. It identifies problematic subject-matter and seeks potentially relevant traits and conditions. Generic existential data thus identified are reformulated—stated abstractly as if-then universal relations capable of serving as answers or solutions: If H2O, then water. For Dewey, induction creates warranted kinds by observing constant conjunction of relevant traits. Dewey used the example of "morning dew" to describe these abstract steps creating scientific kinds. From antiquity, the common-sense belief had been that all dew is a kind of rain, meaning dew drops fall. By the early 1800s the curious absence of rain before dew and the growth of understanding led scientists to examine new traits. Functional processes changing bodies [kinds] from solid to liquid to gas at different temperatures, and operational constants of conduction and radiation, led to new inductive hypotheses "directly suggested by this subject-matter, not by any data [kinds] previously observable. ... There were certain [existential] conditions postulated in the content of the new [non-existential] conception about dew, and it had to be determined whether these conditions were satisfied in the observable facts of the case."[2]: 430 After demonstrating that dew could be formed by these generic existential phenomena, and not by other phenomena, the universal hypothesis arose that dew forms following established laws of temperature and pressure.
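As a purely illustrative rendering (not Dewey's own notation), the "if-then universal relation" mentioned above—"If H2O, then water"—can be written as a universal conditional in standard logical form, for example in LaTeX:

\forall x\,\bigl(\mathrm{H_2O}(x) \rightarrow \mathrm{Water}(x)\bigr)

On Dewey's account, such a formula functions as a provisional tool of inquiry—a warranted means of inference from observed traits—rather than as a report of a pre-existing natural kind.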
"The outstanding conclusion is that inductive procedures are those whichprepareexistential material so that it has convincing evidential weight with respect to an inferred generalization.[2]: 432Existential data are not pre-known natural kinds, but become conceptual statements of "natural" processes. Dewey concluded that nature is not a collection of natural kinds, but rather of reliable processes discoverable by competent induction and deduction. He replaced the ambiguous label "natural kind" with "warranted assertion" to emphasize the conditional nature of all human knowings. Assuming kinds to be given unconditional knowings leads to the error of assuming that conceptual universal propositions can serve as evidence for generic propositions; observed consequences affirm unobservable imagined causes. "For an 'inference' that is notgroundedin the evidential nature of the material from which it is drawn isnotan inference. It is a more or less wild guess."[2]: 428Modern induction is not a guess about natural kinds, but a means to create instrumental understanding. In 1969,Willard Van Orman Quinebrought the term "natural kind" into contemporary analytic philosophy with an essay bearing that title.[3]: 1His opening paragraph laid out his approach in three parts. First, it questioned the logical and scientific legitimacy of reasoning inductively by counting a few examples posting traits imputed to all members of a kind: "What tends to confirm an induction?" For Quine, induction reveals warranted kinds by repeated observation of visible similarities. Second, it assumed that color can be a characteristic trait of natural kinds, despite some logical puzzles: hypothetical colored kinds such as non-black non-ravens and green-blue emeralds. Finally, it suggested that human psychological structure can explain the illogical success of induction: "an innate flair that we have for natural kinds".[4]: 41 He started with the logical hypothesis that, if all ravens are black—an observable natural kind—then non-black non-ravens are equally a natural kind: "... each [observed] black raven tends to confirm the law [universal proposition] that all ravens are black ..." Observing shared generic traits warrants the inductive universal prediction that future experience will confirm the sharing: "And every reasonable [universal] expectation depends on resemblance of [generic] circumstances, together with our tendency to expect similar causes to have similar effects." "The notion of a kind and the notion of similarity or resemblance seem to be variants or adaptations of a single [universal] notion. Similarity is immediately definable in terms of kind; for things are similar when they are two of a kind."[4]: 42 Quine posited an intuitive human capacity to recognize criteria for judging degrees of similarity among objects, an "innate flair for natural kinds”. These criteria work instrumentally when applied inductively: "... why does our innate subjective spacing [classification] of [existential] qualities accord so well with the functionally relevant [universal] groupings in nature as to make our inductions tend to come out right?" He admitted that generalizing after observing a few similarities is scientifically and logically unjustified. The numbers and degrees of similarities and differences humans experience are infinite. But the method is justified by its instrumental success in revealing natural kinds. 
The "problem of induction" is how humans "should stand better than random or coin-tossing chances of coming out right when we predict by inductions which are based on our innate, scientifically unjustified similarity standards."[4]: 48–9 Quine credited human ability to recognize colors as natural kinds to the evolutionary function of color in human survival—distinguishing safe from poisonous kinds of food. He recognized that modern science often judges color similarities to be superficial, but denied that equating existential similarities with abstract universal similarities makes natural kinds any less permanent and important. The human brain's capacity to recognize abstract kinds joins the brain's capacity to recognize existential similarities. Quine argued that the success of innate and learned criteria for classifying kinds on the basis of similarities observed in small samples of kinds, constitutes evidence of the existence of natural kinds; observed consequences affirm imagined causes. His reasoning continues to provoke philosophical debates. In 1975,Hilary Putnamrejected descriptivist ideas about natural kind by elaborating on semantic concepts in language.[5][6]Putnam explains his rejection of descriptivist and traditionalist approaches to natural kinds with semantic reasoning, and insists that natural kinds can not be thought of via descriptive processes or creating endless lists of properties. In Putnam'sTwin Earth thought experiment, one is asked to consider the extension of "water" when confronted with an alternate version of "water" on an imagined "Twin Earth". This "water" is composed of chemical XYZ, as opposed to H2O. However, in all other describable aspects, it is the same as Earth’s "water." Putnam argues that the mere descriptions of an object, such as "water", is insufficient in defining natural kind. There are underlying aspects, such as chemical composition, that may go unaccounted for unless experts are consulted. This information provided by experts is what Putnam argues will ultimately define natural kinds.[6] Putnam calls the essential information used to define natural kind "core facts." This discussion arises in part in response to what he refers to as "Quine’s pessimism" of theory of meaning. Putnam claims that a natural kind can be referred to via its associated stereotype. This stereotype must be a normal member of the category, and is itself defined by core facts as determined by experts. By conveying these core facts, the essential and appropriate use of natural kind terms can be conveyed.[7] The process of conveying core facts to communicate the essence and appropriate term of a natural kind term is shown in Putnam's example of describing a lemon and a tiger. With a lemon, it is possible to communicate the stimulus-meaning of what a lemon is by simply showing someone a lemon. In the case of a tiger, on the other hand, it is considerably more complicated to show someone a tiger, but a speaker can just as readily explain what a tiger is by communicating its core facts. By conveying the core facts of a tiger (e.g. big cat, four legs, orange, black stripes, etc.), the listener can, in theory, go on to use the word "tiger" correctly and refer to its extension accurately.[7] In 1993,Hilary Kornblithpublished a review of debates about natural kinds since Quine had launched that epistemological project a quarter-century earlier. 
He evaluated Quine's "picture of natural knowledge" as natural kinds, along with subsequent refinements.[3]: 1 He found still acceptable Quine's original assumption that discovering knowledge of mind-independent reality depends on inductive generalisations based on limited observations, despite its being illogical. Equally acceptable was Quine's further assumption that instrumental success of inductive reasoning confirms both the existence of natural kinds and the legitimacy of the method. Quine's assumption of an innate human psychological process—"standard of similarity," "subjective spacing of qualities"—also remained unquestioned. Kornblith strengthened this assumption with new labels for the necessary cognitive qualities: "native processes of belief acquisition," "the structure of human conceptual representation," "native inferential processes," "reasonably accurate detectors of covariation."[4]: 3, 9, 95 "To my mind, the primary case to be made for the view that our [universal] psychological processes dovetail with the [generic] causal structure of the world comes ... from the success of science."[4]: 3 Kornblith denied that this logic makes human classifications the same as mind-independent classifications: "The categories of modern science, of course, are not innate."[4]: 81 But he offered no explanation of how kinds that work conditionally can be distinguished from mind-independent unchanging kinds. Kornblith didn't explain how tedious modern induction accurately generalizes from a few generic traits to all of some universal kind. He attributed such success to individual sensitivity that a single case is representative of all of a kind. Accepting intuition as a legitimate ground for inductive inferences from small samples, Kornblith criticized popular arguments by Amos Tversky and Daniel Kahneman that intuition is irrational. He continued to argue that traditional induction explains the success of modern science. Hasok Chang and Rasmus Winther contributed essays to a collection entitled Natural Kinds and Classification in Scientific Practice, published in 2016. The editor of the collection, Catherine Kendig, argued for a modern meaning of natural kinds, rejecting Aristotelian classifications of objects according to their "essences, laws, sameness relations, fundamental properties ... and how these map out the ontological space of the world." She thus dropped the traditional supposition that natural kinds exist permanently and independently of human reasoning. She collected original works examining results of discipline-specific classifications of kinds: "the empirical use of natural kinds and what I dub 'activities of natural kinding' and 'natural kinding practices'."[8]: 1–3 Her natural kinds include scientific disciplines themselves, each with its own methods of inquiry and classifications or taxonomies. Chang's contribution displayed Kendig's "natural kinding activities" or "practice turn" by reporting classifications in the mature discipline of chemistry—a field renowned for examples of timeless natural kinds: "All water is H2O;" "All gold has atomic number 79." He explicitly rejected Quine's basic assumption that natural kinds are real generic objects. "When I speak of a (natural) kind in this chapter, I am referring to a [universal] classificatory concept, rather than a collection of objects." His kinds result from humanity's continuous knowledge-seeking activities called science and philosophy.
"Putting these notions more unambiguously in terms of concepts rather than objects, I maintain: if we hit upon some stable and effective classificatory concepts in our inquiry, we should cherish them (calling them 'natural kinds' would be one clear way of doing so), but without presuming that we have thereby found some eternal essences.[8]: 33–4 He also rejected the position taken by Bird and Tobin in our third quote above. "Alexander Bird and Emma Tobin’s succinct characterization of natural kinds is helpful here, as a foil: ‘to say that a kind isnaturalis to say that it corresponds to a grouping or ordering that does not depend on humans’. My view is precisely the opposite, to the extent that scientific inquiry does depend on humans."[8]: 42–3 For Chang, induction creates conditionally warranted kinds by "epistemic iteration"—refining classifications developmentally to reveal how constant conjunctions of relevant traits work: "fundamental classificatory concepts become refined and corrected through our practical scientific engagement with nature. Any considerable and lasting [instrumental] success of such engagement generates confidence in the classificatory concepts used in it, and invites us to consider them as 'natural'."[8]: 34 Among other examples, Chang reported the inductive iterative process by which chemists gradually redefined the kind "element". The original hypothesis was that anything that cannot be decomposed by fire or acids is an element. Learning that some chemical reactions are reversible led to the discovery of weight as a constant through reactions. And then it was discovered that some reactions involve definite and invariable weight ratios, refining understanding of constant traits. "Attempts to establish and explain the combining-weight regularities led to the development of the chemical atomic theory by John Dalton and others. ... Chemical elements were later redefined in terms of atomic number (the number of protons in the nucleus)."[8]: 38–9 Chang claimed his examples of classification practices in chemistry confirmed the fallacy of the traditional assumption that natural kinds exist as mind-independent reality. He attributed this belief more to imagining supernatural intervention in the world, than to illogical induction. He did not consider the popular belief that innate psychological capacities enable traditional induction to work. "Much natural-kind talk has been driven by an intuitive metaphysical essentialism that concerns itself with an objective [generic] order of nature whose [universal] knowledge could, ironically, only be obtained by a supernatural being. Let us renounce such an unnatural notion of natural kinds. Instead, natural kinds should be conceived as something we humans may succeed in inventing and improving through scientific practice."[8]: 44 Rasmus Winther's contribution toNatural Kinds and Classification in Scientific Practicegave new meaning to natural objects and qualities in the nascent discipline of Geographic Information Science (GIS). This "inter-discipline" engages in discovering patterns in—and displaying spatial kinds of—data, using methods that make its results unique natural kinds. But it still creates kinds using induction to identify instrumental traits. "Collecting and collating geographical data, building geographical data-bases, and engaging in spatial analysis, visualization, and map-making all require organizing, typologizing, and classifying geographic space, objects, relations, and processes. 
I focus on the use of natural kinds ..., showing how practices of making and using kinds are contextual, fallible, plural, and purposive. The rich family of kinds involved in these activities are here baptized mapping kinds."[8]: 197 He later identified sub-kinds of mapping kinds as "calibrating kinds," "feature kinds," and "object kinds" of "data model types."[8]: 202–3

Winther identified "inferential processes of abstraction and generalization" as methods used by GIS, and explained how they generate digital maps. He illustrated two kinds of inquiry procedures, with sub-procedures to organize data. They are reminiscent of Dewey's multiple steps in modern inductive and deductive inference.[8]: 205 Methods for transforming generic phenomena into kinds involve reducing complexity, amplifying, joining, and separating. Methods for selecting among generic kinds involve elimination, classification, and collapse of data. He argued that these methods for mapping kinds can be practiced in other disciplines, and briefly considered how they might harmonize three conflicting philosophical perspectives on natural kinds.

Some philosophers believe there can be a "pluralism" of kinds and classifications. They prefer to speak of "relevant" and "interesting" kinds rather than eternal "natural" kinds. They may be called social constructivists, whose kinds are human products. Chang's conclusions that natural kinds are human-created and instrumentally useful would appear to put him in this group.

Other philosophers, including Quine, examine the role of kinds in scientific inference. Winther does not examine Quine's commitment to traditional induction generalizing from small samples of similar objects. But he does accept Quine's willingness to call human-identified kinds that work natural. "Quine holds that kinds are "functionally relevant groupings in nature" whose recognition permits our inductions to "tend to come out right." That is, kinds ground fallible inductive inferences and predictions, so essential to scientific projects including those of GIS and cartography."[8]: 207

Finally, Winther identified a philosophical perspective seeking to reconstruct rather than reject belief in natural kinds. He placed Dewey in this group, ignoring Dewey's rejection of the traditional label in favor of "warranted assertions". "Dewey resisted the standard view of natural kinds, inherited from the Greeks ... Instead, Dewey presents an analysis of kinds (and classes and universals) as fallible and context-specific hypotheses permitting us to address problematic situations effectively."[8]: 208 Winther concludes that classification practices used in Geographic Information Science are able to harmonize these conflicting philosophical perspectives on natural kinds. "GIS and cartography suggest that kinds are simultaneously discovered [as pre-existing structures] and constructed [as human classifications]. Geographic features, processes, and objects are of course real. Yet we must structure them in our data models and, subsequently, select and transform them in our maps. Realism and (social) constructivism are hence not exclusive in this field."[8]: 209
https://en.wikipedia.org/wiki/Natural_kind
In linguistics and philosophy, a vague predicate is one which gives rise to borderline cases. For example, the English adjective "tall" is vague since it is not clearly true or false for someone of middling height. By contrast, the word "prime" is not vague since every number is definitively either prime or not. Vagueness is commonly diagnosed by a predicate's ability to give rise to the Sorites paradox. Vagueness is separate from ambiguity, in which an expression has multiple denotations. For instance, the word "bank" is ambiguous since it can refer either to a river bank or to a financial institution, but there are no borderline cases between both interpretations.

Vagueness is a major topic of research in philosophical logic, where it serves as a potential challenge to classical logic. Work in formal semantics has sought to provide a compositional semantics for vague expressions in natural language. Work in philosophy of language has addressed implications of vagueness for the theory of meaning, while metaphysicians have considered whether reality itself is vague.

The concept of vagueness has philosophical importance. Suppose one wants to come up with a definition of "right" in the moral sense. One wants a definition to cover actions that are clearly right and exclude actions that are clearly wrong, but what does one do with the borderline cases? Surely, there are such cases. Some philosophers say that one should try to come up with a definition that is itself unclear on just those cases. Others say that one has an interest in making his or her definitions more precise than ordinary language, or his or her ordinary concepts, themselves allow; they recommend that one advance precising definitions.[1]

Vagueness is also a problem which arises in law, and in some cases judges have to arbitrate regarding whether a borderline case does, or does not, satisfy a given vague concept. Examples include disability (how much loss of vision is required before one is legally blind?), human life (at what point from conception to birth is one a legal human being, protected for instance by laws against murder?), adulthood (most familiarly reflected in legal ages for driving, drinking, voting, consensual sex, etc.), race (how to classify someone of mixed racial heritage), etc. Even apparently unambiguous concepts such as biological sex can be subject to vagueness problems, not just from transsexuals' gender transitions but also from certain genetic conditions which can give an individual mixed male and female biological traits (see intersex).

In the common law system, vagueness is a possible legal defence against by-laws and other regulations. The legal principle is that delegated power cannot be used more broadly than the delegator intended. Therefore, a regulation may not be so vague as to regulate areas beyond what the law allows. Any such regulation would be "void for vagueness" and unenforceable. This principle is sometimes used to strike down municipal by-laws that forbid "explicit" or "objectionable" contents from being sold in a certain city; courts often find such expressions to be too vague, giving municipal inspectors discretion beyond what the law allows. In the US this is known as the vagueness doctrine and in Europe as the principle of legal certainty.

Many scientific concepts are of necessity vague; for instance, species in biology cannot be precisely defined, owing to unclear cases such as ring species. Nonetheless, the concept of species can be clearly applied in the vast majority of cases.
As this example illustrates, to say that a definition is "vague" is not necessarily a criticism. Consider those animals in Alaska that are the result of breeding huskies and wolves: are they dogs? It is not clear: they are borderline cases of dogs. This means one's ordinary concept of doghood is not clear enough to let us rule conclusively in this case.

The philosophical question of what the best theoretical treatment of vagueness is—which is closely related to the problem of the paradox of the heap, a.k.a. the sorites paradox—has been the subject of much philosophical debate.

One theoretical approach is that of fuzzy logic, developed by American mathematician Lotfi Zadeh. Fuzzy logic proposes a gradual transition from "perfect falsity" (for example, the statement "Bill Clinton is bald") to "perfect truth" (for, say, "Patrick Stewart is bald"). In ordinary logics, there are only two truth-values: "true" and "false". The fuzzy perspective differs by introducing an infinite number of truth-values along a spectrum between perfect truth and perfect falsity. Perfect truth may be represented by "1", and perfect falsity by "0". Borderline cases are thought of as having a "truth-value" anywhere between 0 and 1 (for example, 0.6). Advocates of the fuzzy logic approach have included K. F. Machina (1976)[2] and Dorothy Edgington (1993).[3]

Another theoretical approach is known as "supervaluationism". This approach has been defended by Kit Fine and Rosanna Keefe. Fine argues that borderline applications of vague predicates are neither true nor false, but rather are instances of "truth value gaps". He defends an interesting and sophisticated system of vague semantics, based on the notion that a vague predicate might be "made precise" in many alternative ways. This system has the consequence that borderline cases of vague terms yield statements that are neither true nor false.[4]

Given a supervaluationist semantics, one can define the predicate "supertrue" as meaning "true on all precisifications". This predicate will not change the semantics of atomic statements (e.g. "Frank is bald", where Frank is a borderline case of baldness), but does have consequences for logically complex statements. In particular, the tautologies of sentential logic, such as "Frank is bald or Frank is not bald", will turn out to be supertrue, since on any precisification of baldness, either "Frank is bald" or "Frank is not bald" will be true. Since the presence of borderline cases seems to threaten principles like this one (excluded middle), the fact that supervaluationism can "rescue" them is seen as a virtue.

Subvaluationism is the logical dual of supervaluationism, and has been defended by Dominic Hyde (2008) and Pablo Cobreros (2011). Whereas the supervaluationist characterises truth as 'supertruth', the subvaluationist characterises truth as 'subtruth', or "true on at least some precisifications".[5] Subvaluationism proposes that borderline applications of vague terms are both true and false. It thus has "truth-value gluts". According to this theory, a vague statement is true if it is true on at least one precisification and false if it is false under at least one precisification. If a vague statement comes out true under one precisification and false under another, it is both true and false. Subvaluationism ultimately amounts to the claim that vagueness is a truly contradictory phenomenon.[6] Of a borderline case of "bald man" it would be both true and false to say that he is bald, and both true and false to say that he is not bald.
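The contrast between the degree-theoretic and precisification-based treatments described above can be made concrete in a few lines of code. The sketch below is purely illustrative and is not drawn from the works cited here: the predicate "tall", the 165–190 cm ramp, and the particular set of sharp cut-offs are arbitrary assumptions, chosen only to show how a fuzzy degree of truth and a supervaluationist supertrue/superfalse/gap verdict might be computed.

# Illustrative sketch (assumptions noted above): degrees of truth and
# precisifications for the vague predicate "tall".

def fuzzy_tall(height_cm: float) -> float:
    """Fuzzy-logic style: return a degree of truth between 0 and 1."""
    if height_cm <= 165:
        return 0.0  # clearly not tall: perfect falsity
    if height_cm >= 190:
        return 1.0  # clearly tall: perfect truth
    return (height_cm - 165) / (190 - 165)  # borderline: graded truth

def supervaluation_tall(height_cm: float, thresholds=(170, 175, 180, 185)) -> str:
    """Supervaluationist style: evaluate "tall" on several admissible
    precisifications (sharp cut-offs) and report the resulting verdict."""
    verdicts = [height_cm >= t for t in thresholds]
    if all(verdicts):
        return "supertrue"    # true on every precisification
    if not any(verdicts):
        return "superfalse"   # false on every precisification
    return "truth-value gap"  # borderline: neither supertrue nor superfalse
                              # (a subvaluationist would instead call it both
                              # subtrue and subfalse, a truth-value glut)

if __name__ == "__main__":
    for h in (160, 178, 195):
        print(h, round(fuzzy_tall(h), 2), supervaluation_tall(h))

On this toy model a borderline height such as 178 cm receives an intermediate fuzzy truth value (0.52) and falls into a supervaluationist truth-value gap, while the classical tautology "tall or not tall" remains supertrue, since it holds on every individual precisification.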
A fourth approach, known as "the epistemicist view", has been defended by Timothy Williamson (1994),[7] R. A. Sorensen (1988)[8] and (2001),[9] and Nicholas Rescher (2009).[10] They maintain that vague predicates do, in fact, draw sharp boundaries, but that one cannot know where these boundaries lie. One's confusion about whether some vague word does or does not apply in a borderline case is due to one's ignorance. For example, in the epistemicist view, there is a fact of the matter, for every person, about whether that person is old or not old; some people are ignorant of this fact.

One possibility is that one's words and concepts are perfectly precise, but that objects themselves are vague. Consider Peter Unger's example of a cloud (from his famous 1980 paper, "The Problem of the Many"): it is not clear where the boundary of a cloud lies; for any given bit of water vapor, one can ask whether it is part of the cloud or not, and for many such bits, one will not know how to answer. So perhaps one's term 'cloud' denotes a vague object precisely. This strategy has been poorly received, in part due to Gareth Evans's short paper "Can There Be Vague Objects?" (1978).[11] Evans's argument appears to show that there can be no vague identities (e.g. "Princeton = Princeton Borough"), but as Lewis (1988) makes clear, Evans takes for granted that there are in fact vague identities, and that any proof to the contrary cannot be right. Since the proof Evans produces relies on the assumption that terms precisely denote vague objects, the implication is that the assumption is false, and so the vague-objects view is wrong. Still, by proposing, for instance, alternative deduction rules involving Leibniz's law or other rules for validity, some philosophers are willing to defend ontological vagueness as a kind of metaphysical phenomenon. Examples include Peter van Inwagen (1990),[12] Trenton Merricks, and Terence Parsons (2000).[13]

Vagueness is primarily a filter[14][failed verification] of natural human cognition; the other tasks of vagueness are derived from that and are secondary.[15] The capacity for cognition is the basic natural equipment of humans (and other creatures), allowing them to orient themselves and survive in the real (material) world. The task of cognition is to obtain, from the epistemologically incalculable (immensely vast and deep) reality, a cognitive (knowledge) model containing only a finite amount of information. For this purpose, there must be a filter performing selection and thus reduction of information. It is the vagueness[14] with which humans perceive and then remember information about the real (material) world. Some information is gained with less vagueness, some with more, according to its distance from the center (focus) of attention during the act of cognition. Humans are unable to acquire anything but vague information through their natural vague cognition. It is necessary to distinguish the internal (intrapsychic) cognitive model, stored and processed in human consciousness (and probably also in the unconscious) in hypothetical intrapsychic languages (imaginary, emotional, natural, and their mixtures), from the external model, represented in a suitable external language of communication.

Cognition and language (the law of maintaining accuracy of information): a communication language should carry the same amount of vagueness as the information gained by cognition (the source of the information). That is, the language must be tuned to the corresponding cognition with respect to vagueness.
This is one of the secondary tasks of vagueness. A person is able to speak about this inherently vague knowledge (contained in the intrapsychic cognitive model represented in hypothetical intrapsychic languages) in a natural (or, more generally, informal) language, e.g. Esperanto, though of course only vaguely.[14] The vagueness of knowledge caused by the filter of knowledge is primary; we call it internal vagueness (i.e. intrapsychic). The vagueness of a person's subsequent utterance is secondary vagueness. This utterance (the transformation from intrapsychic languages to external communicative languages, called a formulation; see the semantic triangle) cannot reveal all the content of the personal intrapsychic cognitive model with all its inherent vagueness. The vagueness contained in the linguistic utterance (in the external communication language) is called external vagueness. Linguistically, only external vagueness can be grasped (modeled). We cannot model internal vagueness; it is part of the intrapsychic model, and this vagueness is contained in the (vague, emotional, subjective, and time-varying) interpretation of the constructs (words, sentences) of informal language.[16] This vagueness is hidden from other people, who can only guess at its amount. Informal languages, such as natural language, do not make it possible to distinguish between internal and external vagueness strictly, but only with a vague boundary.[17][18]

Fortunately, however, informal languages offer language constructs that make meaning deliberately uncertain (e.g. the indeterminate quantifiers POSSIBLY, SEVERAL, MAYBE, etc.). Such quantifiers allow natural language to use external vagueness more strongly and explicitly, thus allowing internal vagueness to be partially shifted up to external vagueness. It is a way to draw the addressee's attention to the vagueness of the message more explicitly and to quantify the vagueness, thus improving understanding in communication using natural language. But the main vagueness of informal languages is the internal vagueness, and the external vagueness serves only as an auxiliary tool.

Formal languages (mathematics, formal logic, programming languages), which in principle must have zero internal vagueness in the interpretation of all their language constructs, i.e. an exact interpretation, can model external vagueness with tools for representing vagueness and uncertainty: fuzzy sets and fuzzy logic, or stochastic quantities and stochastic functions, as the exact sciences do. The principle is: if we admit more vagueness (uncertainty), we can gain more information during cognition. See, e.g., the respective possibilities of deterministic and stochastic physics. In other cases, the cognitive model of a certain part of the real world may be simplified in such a way that a certain amount of deterministic information is replaced by fuzzy or stochastic information.

The internal vagueness of one person's message is hidden from another person, who can only guess at it. We either have to accept internal vagueness, which is human, or we can try to reduce it, or eliminate it completely, which is scientific. Demands on the accuracy of the formulation of scientific knowledge and its communication require minimizing the internal vagueness with which one connotes (vaguely, emotionally, and subjectively interprets)[16] the linguistic constructs of the communication language, and thus improving the accuracy of the message. Various scientific procedures aim to improve the credibility and accuracy of the scientific knowledge obtained.
To formulate such knowledge, however, it is necessary to build a more precise language, with less (internal) vagueness of message than is common in daily life. This is done through purposefully constructed (branch-specific) terminology, allowing the researched reality and the acquired knowledge about it to be described more accurately. People properly educated in a field's terminology know it with little internal vagueness, so they know accurately what the individual terms mean. Basic concepts are always formed by consensus, and the others are derived from them by definition, so as to avoid circular definition. To improve the accuracy of research and communication (reducing the internal vagueness of connotation), tools such as classification schemes are used, for example the taxonomy of organisms by Carl von Linné. This is how the descriptive (non-exact) sciences do it. Thus, they use natural human cognition (with Russell's filter of vagueness[19]) and refined natural language.

The reduction of internal vagueness can be carried further. The method of reducing internal vagueness to the extreme, that is, to zero, was realized by I. Newton.[20] It is an epochal idea, and it needs to be explained how it can be realized. It follows from the above-mentioned law of maintaining accuracy of information (optimization of the truthfulness of the message) that if we require the internal vagueness in knowledge to be eliminated completely (to zero), then of course it must first be completely eliminated in cognition (the source of the information). This means that one (Newton) must avoid the intrusion of internal vagueness, that is, choose some filter of cognition other than vagueness. Thus we pass from the natural human world to an artificial one. We call it the exact world, and we will explain why.

In the case of natural language, it is not possible to completely remove (nullify) the internal vagueness, but it is possible to build artificial formal languages (mathematics, formal logics, programming languages) that have zero internal vagueness of connotation (so they have an exact interpretation) and cannot, in principle, have any other. (For this purpose Newton created a formal language: the theory of fluxions, the "theory of flowing", that is, the infinitesimal calculus.) Languages with zero internal vagueness of their interpretation, i.e. of the meaning of their linguistic constructions, have the property that all these constructions are understood by every appropriately educated person with an absolutely precise, i.e. exact, meaning. That is why they are part of the exact world. Thus, we have some language that is able to represent knowledge with zero internal vagueness. But such knowledge must first be acquired by adequate cognition, providing cognition also with zero internal vagueness, i.e. also belonging to the exact world. And it is already evident that we are on the way to the creation of the scientific method that creates science belonging to the exact world; that is, an exact science is born. It is still necessary to explain how to realize Newton's exact cognition, that is, cognition in which the knowledge obtained from the real world is part of the exact world. The miraculous bridge between the real and exact worlds that makes this possible is called a quantity (e.g. electric field intensity, velocity, nitric acid concentration, etc.).
It is common to both worlds because in the exact world it is precisely delineated (every knowledgeable person knows it without doubt, i.e. exactly), and in the real world it is an elemental measurable probe into that world, and thus its elemental measurable representative. The quantity is the elementary building block of exact science. In the exact sciences, it is always precisely defined, either consensually (the basic set) or by derivation from the basic ones, as in the International System of Units. And what about the artificial filter that allows Newton to avoid internal vagueness? For every problem of the real world that is to be grasped by Newton's method of exact science, it is necessary to choose a group of suitable quantities, find the natural laws that apply in the real world between them, and describe them in mathematical language. We get a mathematical knowledge (cognitive) model of a given part of the real world. A group of selected quantities forms a discrete Newtonian filter (sieve) through which one "looks" at a given part of the real world. Thus, in exact science, a given part of the real world is represented by a group of suitably chosen quantities and mathematically (or programmatically) described relations between them (more precisely, between their names, the symbols denoting them). Exact science is a method that allows knowledge about the real world to be acquired and recorded so that it is part of the exact world. It is the method of modeling the real world by means of the exact world, in other words, a method of mathematizing science.

Even exact science needs a tool with which it can describe the uncertainty of the results (the knowledge obtained), whether out of necessity or the need to abandon excessive precision. Since that cannot (must not) be internal vagueness, it can only use linguistically graspable uncertainty (external vagueness). For this purpose it has at its disposal descriptions of fuzzy or stochastic values of quantities, and fuzzy or stochastic relations (represented by mathematical functions) between quantities.

The difference between the non-exact sciences (called descriptive) and the exact sciences is that the former use natural human cognition (with Russell's filter of vagueness) and refined natural language, while the exact sciences use cognition based on the Newtonian discrete filter, and thus on quantities, together with artificial formal language. Artificial formal language also brings a powerful tool to exact science: formal inference (formal information processing), known from mathematics. The above-mentioned tools of exact and non-exact science are general principles, and different branches of science use them in combination; they have both exact and inexact parts. Purely exact sciences, such as theoretical physics or mathematics, use natural language as a meta-language. Exact science provides the most trustworthy knowledge. The question can certainly be raised as to whether all science can be transformed into exact science. The answer is no. The condition for the establishment of an exact science is to find suitable quantities, and this is possible only for a small part of the real world and for specific views of it. In other words, the filter of vagueness makes it possible to know much, though vaguely; Newton's discrete filter makes it possible to know only a little, but exactly.
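The contrast drawn above between an exact relation among quantities and a linguistically graspable (external) expression of uncertainty can also be sketched in code. The example below is only an illustration under assumed values: Newton's second law stands in for an "exact" relation between quantities, and a Gaussian measurement error stands in for the stochastic representation of uncertainty mentioned in the text.

# Illustrative sketch (assumed values): an exact relation between quantities
# versus the same relation with explicitly stated stochastic uncertainty.
import random

def force_exact(mass_kg: float, accel_ms2: float) -> float:
    """Newton's second law, F = m * a, as an exact relation between quantities."""
    return mass_kg * accel_ms2

def force_measured(mass_kg: float, accel_ms2: float, sigma: float = 0.05) -> float:
    """The same relation with a stochastic measurement error: the uncertainty
    is stated explicitly (external vagueness), not left to interpretation."""
    return force_exact(mass_kg, accel_ms2) * random.gauss(1.0, sigma)

if __name__ == "__main__":
    print("exact:", force_exact(2.0, 9.81))
    print("measured:", [round(force_measured(2.0, 9.81), 2) for _ in range(3)])

The point of the sketch is only that the uncertainty appears as an explicit parameter of the model (sigma), rather than remaining as internal vagueness in the reader's interpretation of the terms.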
https://en.wikipedia.org/wiki/Vagueness
What Is Art?(Russian:Что такое искусство?Chto takoye iskusstvo?) is a book byLeo Tolstoy. It was completed in Russian in 1897 but first published in English in 1898 due to difficulties with the Russian censors.[1] Tolstoy cites the time, effort, public funds, and public respect spent on art and artists[2]as well as the imprecision of general opinions on art[3]as reason for writing the book. In his words, "it is difficult to say what is meant by art, and especially what is good, useful art, art for the sake of which we might condone such sacrifices as are being offered at its shrine".[4] Throughout the book Tolstoy demonstrates an "unremitting moralism",[5]evaluating artworks in light of his radical Christian ethics,[6]and displaying a willingness to dismiss accepted masters, includingWagner,[7]Shakespeare,[8]andDante,[9]as well as the bulk of his own writings.[10] Having rejected the use of beauty in definitions of art (seeaesthetics), Tolstoy conceptualises art as anything that communicates emotion: "Art begins when a man, with the purpose of communicating to other people a feeling he once experienced, calls it up again within himself and expresses it by certain external signs".[11] This view of art is inclusive: "jokes", "home decoration", and "church services" may all be considered art as long as they convey feeling.[12]It is also amoral: "[f]eelings... very bad and very good, if only they infect the reader... constitute the subject of art".[13] Tolstoy also notes that the "sincerity" of the artist – that is, the extent to which the artist "experiences the feeling he conveys" – influences the infection.[14] While Tolstoy's basic conception of art is broad[15]and amoral,[13]his idea of "good" art is strict and moralistic, based on what he sees as the function of art in the development of humanity: just as in the evolution of knowledge – that is, the forcing out and supplanting of mistaken and unnecessary knowledge by truer and more necessary knowledge – so the evolution of feelings takes place by means of art, replacing lower feelings, less kind and less needed for the good of humanity, by kinder feelings, more needed for that good. 
This is the purpose of art.[16] Tolstoy's analysis is influenced by his radical Christian views (seeThe Kingdom of God is Within You), views which led him to be excommunicated from the Russian Orthodox Church in 1901.[17]He states that Christian art, rooted in "the consciousness of sonship to God and the brotherhood of men":[18] can evoke reverence for each man's dignity, for every animal’s life, it can evoke the shame of luxury, of violence, of revenge, of using for one’s pleasure objects that are a necessity for other people, it can make people sacrifice themselves to serve others freely and joyfully, without noticing it.[19] Ultimately, "by calling up the feelings of brotherhood and love in people under imaginary conditions, religious art will accustom people to experiencing the same feelings in reality under the same conditions".[19] Tolstoy's examples:Schiller'sThe Robbers,Victor Hugo'sLes Misérables,Charles Dickens'sA Tale of Two CitiesandThe Chimes,Harriet Beecher Stowe'sUncle Tom's Cabin,Dostoevsky'sThe House of the Dead,George Eliot'sAdam Bede,[20]Ge'sJudgement,Liezen-Mayer'sSigning the Death Sentence, and paintings "portraying the labouring man with respect and love" such as those byMillet,Breton,Lhermitte, andDefregger.[21] "Universal" art[20]illustrates that people are "already united in the oneness of life's joys and sorrows"[22]by communicating "feelings of the simplest, most everyday sort, accessible to all people without exception, such as the feelings of merriment, tenderness, cheerfulness, peacefulness, and so on".[18]Tolstoy contrasts this ideal with art that is partisan in nature, whether it be by class, religion, nation, or style.[23] Tolstoy's examples: he mentions, with many qualifiers, the works ofCervantes,Dickens,Moliere,Gogol, andPushkin, comparing all of these unfavourably to the story ofJoseph.[21]In music he commends a violin aria ofBach, theE-flat major nocturneof Chopin, and "selected passages" fromSchubert,Haydn, Chopin, andMozart. He also speaks briefly ofgenre paintingsandlandscapes.[24] Tolstoy notes the susceptibility of his contemporaries to the "charm of obscurity".[25]Works have become laden with "euphemisms, mythological and historical allusions", and general "vagueness, mysteriousness, obscurity and inaccessibility to the masses".[25]Tolstoy lambastes such works, insisting that art can and should be comprehensible to everyone. 
Having emphasised that art has a function in the improvement of humanity – capable of expressing man's best sentiment – he finds it offensive that artists should be so wilfully and arrogantly abstruse.[26]

One criticism Tolstoy levels against art is that at some point it "ceased to be sincere and became artificial and cerebral",[27] leading to the creation of millions of works of technical brilliance but few of honourable sentiment.[28] Tolstoy outlines four common markers of bad art (these are not, however, considered the canon or ultimate indicators). The first involves recycling and concentrating elements from other works,[29] typical examples of which are: "maidens, warriors, shepherds, hermits, angels, devils in all forms, moonlight, thunderstorms, mountains, the sea, precipices, flowers, long hair, lions, the lamb, the dove, the nightingale".[30] The second, imitation, is highly descriptive realism, where painting becomes photography, or a scene in a book becomes a listing of facial expressions, tone of voice, the setting, and so on.[31] Any potential communication of feeling is "disrupted by the superfluity of details".[32] The third is reliance on "strikingness", often involving contrasts of "horrible and tender, beautiful and ugly, loud and soft, dark and light", descriptions of lust,[31] "crescendo and complication", unexpected changes in rhythm, tempo, etc.[33] Tolstoy contends that works marked by such techniques "do not convey any feeling, but only affect the nerves".[34] The fourth, diversion, is "an intellectual interest added to the work of art", such as the melding of documentary and fiction, as well as the writing of novels, poetry, and music "in such a way that they must be puzzled out".[33] None of these correspond with Tolstoy's view of art as the infection of others with feelings previously experienced,[35] or with his exhortation that art be "universal" in appeal.[24]

Tolstoy approves of early Christian art for being inspired by love of Christ and man, as well as for its antagonism to pleasure-seeking. He prefers this to the art born of "Church Christianity", which ostensibly evades the "essential theses of true Christianity" (that is, that all men are born of the Father, are equals, and should strive towards mutual love).[36] Art became pagan – worshipping religious figures – and subservient to the dictates of the Church.[36] The corruption of art was deepened after the Crusades, as the abuse of papal power became more obvious. The rich began to doubt, seeing contradictions between the actions of the Church and the message of Christianity.[37] But instead of turning back to the early Christian teachings, the upper classes began to appreciate and commission art that was merely pleasing.[38] This tendency was facilitated by the Renaissance, with the aggrandisement of ancient Greek art, philosophy, and culture which, Tolstoy alleges, is inclined to pleasure and beauty worship.[39]

Tolstoy perceives the roots of aesthetics in the Renaissance. Art for pleasure was validated in reference to the philosophy of the Greeks[40][41] and the elevation of "beauty" as a legitimate criterion with which to separate good from bad art.[42] Tolstoy moves to discredit aesthetics by reviewing and reducing previous theories – including those of Baumgarten,[43] Kant[44] (Critique of Judgement), Hegel,[45] Hume, and Schopenhauer[46] – to two main "aesthetic definitions of beauty".[47] Tolstoy then argues that, despite their apparent divergence, there is little substantive difference between the two strands.
This is because both schools recognise beauty only by the pleasure it gives: "both notions of beauty come down to a certain sort of pleasure that we receive, meaning that we recognize as beauty that which pleases us without awakening our lust".[48]Therefore, there is no objective definition of art in aesthetics.[49] Tolstoy condemns the focus on beauty/pleasure at length, calling aesthetics a discipline: according to which the difference between good art, conveying good feelings, and bad art, conveying wicked feelings, was totally obliterated, and one of the lowest manifestations of art, art for mere pleasure – against which all teachers of mankind have warned people – came to be regarded as the highest art. And art became, not the important thing it was intended to be, but the empty amusement of idle people.[42] Tolstoy sees the developing professionalism of art as hampering the creation of good works. The professional artist can and must create to prosper, making for art that is insincere and most likely partisan – made to suit the whims of fashion orpatrons.[50] Art criticism is a symptom of the obscurity of art, for "[a]n artist, if he is a true artist, has in his work conveyed to others the feelings he has experienced: what is there to explain?".[51]Criticism, moreover, tends to contribute to the veneration of "authorities"[52]such asShakespeareandDante.[53]By constant unfavourable comparison, the young artist is corralled into imitating the works of the greats, as all of them are said to be true art. In short, new artists imitate the classics, setting their own feelings aside, which, according to Tolstoy, is contrary to the point of art.[54] Art schools teach people how to imitate the method of the masters, but they cannot teach the sincerity of emotion that is the propellant of great works.[55]In Tolstoy's words, "[n]o school can call up feelings in a man, and still less can it teach a man what is the essence of art: the manifestation of feeling in his own particular fashion".[55] Throughout the book Tolstoy demonstrates a willingness to dismiss generally accepted masters, among themLiszt,Richard Strauss,[56]Nietzsche,[59]andOscar Wilde.[28]He also labels his own works as "bad art", excepting only the short stories "God Sees the Truth" and "Prisoner of the Caucasus".[61] He attempts to justify these conclusions by pointing to the ostensible chaos of previous aesthetic analysis. Theories usually involve selecting popular works and constructing principles from these examples.Volkelt, for instance, remarks that art cannot be judged on its moral content because thenRomeo and Julietwould not be good art. 
Such retrospective justification cannot, he stresses, be the basis for theory, as people will tend to create subjective frameworks to justify their own tastes.[62] Jahn notes the "often confusing use of categorisation"[63]and the lack of definition of the key concept of emotion.[64]Bayley writes that "the effectiveness ofWhat is Art?lies not so much in its positive assertions as in its rejection of much that was taken for granted in the aesthetic theories of the time".[65]Noyes criticises Tolstoy's dismissal of beauty,[66]but states that, "despite its shortcomings",What is Art?"may be pronounced the most stimulating critical work of our time".[67]Simmons mentions the "occasional brilliant passages" along with the "repetition, awkward language, and loose terminology".[68]Aylmer Maude, translator of many of Tolstoy's writings, calls it "probably the most masterly of all Tolstoy's works", citing the difficulty of the subject matter and its clarity.[69]For a comprehensive review of the reception at the time of publication, see Maude 1901b.[70]
https://en.wikipedia.org/wiki/What_Is_Art%3F
A fallacy is the use of invalid or otherwise faulty reasoning in the construction of an argument. All forms of human communication can contain fallacies. Because of their variety, fallacies are challenging to classify. They can be classified by their structure (formal fallacies) or content (informal fallacies). Informal fallacies, the larger group, may then be subdivided into categories such as improper presumption, faulty generalization, error in assigning causation, and relevance, among others. The use of fallacies is common when the speaker's goal of achieving common agreement is more important to them than utilizing sound reasoning. When fallacies are used, the premise should be recognized as not well-grounded, the conclusion as unproven (but not necessarily false), and the argument as unsound.[1]

A formal fallacy is an error in the argument's form.[2] All formal fallacies are types of non sequitur. A propositional fallacy is an error that concerns compound propositions. For a compound proposition to be true, the truth values of its constituent parts must satisfy the relevant logical connectives that occur in it (most commonly: [and], [or], [not], [only if], [if and only if]). Propositional fallacies involve relations whose truth values are not guaranteed and are therefore not guaranteed to yield true conclusions; several types are distinguished. A quantification fallacy is an error in logic where the quantifiers of the premises are in contradiction to the quantifier of the conclusion; again, several types are distinguished. Syllogistic fallacies are logical fallacies that occur in syllogisms.

Informal fallacies are arguments that are logically unsound for lack of well-grounded premises.[14] Faulty generalization is the reaching of a conclusion from weak premises. Questionable cause is a general type of error with many variants. Its primary basis is the confusion of association with causation, either by inappropriately deducing (or rejecting) causation or by a broader failure to properly investigate the cause of an observed effect.

A red herring fallacy, one of the main subtypes of fallacies of relevance, is an error in logic where a proposition is, or is intended to be, misleading in order to make irrelevant or false inferences. This includes any logical inference based on fake arguments, intended to replace the lack of real arguments or to replace implicitly the subject of the discussion.[70][71] The red herring proper consists in introducing a second argument in response to the first argument that is irrelevant and draws attention away from the original topic (e.g. saying "If you want to complain about the dishes I leave in the sink, what about the dirty clothes you leave in the bathroom?").[72] In a jury trial, it is known as a Chewbacca defense. In political strategy, it is called a dead cat strategy. See also irrelevant conclusion.
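The observation above, that formal (propositional) fallacies involve truth values the connectives do not actually guarantee, can be checked mechanically by enumerating truth assignments. The sketch below is purely illustrative: it uses affirming the consequent, a stock propositional fallacy that is not named in the excerpt above, and contrasts it with the valid form modus ponens.

# Illustrative sketch: brute-force truth-table check of argument forms.
from itertools import product

def valid(premises, conclusion):
    """An argument form is valid iff no assignment of truth values makes
    all premises true while the conclusion is false."""
    for p, q in product([True, False], repeat=2):
        if all(prem(p, q) for prem in premises) and not conclusion(p, q):
            return False  # found a counterexample row
    return True

implies = lambda a, b: (not a) or b

# Modus ponens: P -> Q, P, therefore Q (valid)
print(valid([lambda p, q: implies(p, q), lambda p, q: p],
            lambda p, q: q))   # True

# Affirming the consequent: P -> Q, Q, therefore P (a non sequitur)
print(valid([lambda p, q: implies(p, q), lambda p, q: q],
            lambda p, q: p))   # False

The check reports modus ponens as valid and affirming the consequent as invalid, because the assignment P = false, Q = true makes both of the latter's premises true and its conclusion false.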
https://en.wikipedia.org/wiki/List_of_fallacies
Cognitive biases are systematic patterns of deviation from norm and/or rationality in judgment.[1][2] They are often studied in psychology, sociology and behavioral economics.[1]

Although the reality of most of these biases is confirmed by reproducible research,[3][4] there are often controversies about how to classify these biases or how to explain them.[5] Several theoretical causes are known for some cognitive biases, which provides a classification of biases by their common generative mechanism (such as noisy information-processing[6]). Gerd Gigerenzer has criticized the framing of cognitive biases as errors in judgment, and favors interpreting them as arising from rational deviations from logical thought.[7]

Explanations include information-processing rules (i.e., mental shortcuts), called heuristics, that the brain uses to produce decisions or judgments. Biases have a variety of forms and appear as cognitive ("cold") bias, such as mental noise,[6] or motivational ("hot") bias, such as when beliefs are distorted by wishful thinking. Both effects can be present at the same time.[8][9]

There are also controversies over some of these biases as to whether they count as useless or irrational, or whether they result in useful attitudes or behavior. For example, when getting to know others, people tend to ask leading questions which seem biased towards confirming their assumptions about the person. However, this kind of confirmation bias has also been argued to be an example of social skill; a way to establish a connection with the other person.[10]

Although this research overwhelmingly involves human subjects, some studies have found bias in non-human animals as well. For example, loss aversion has been shown in monkeys and hyperbolic discounting has been observed in rats, pigeons, and monkeys.[11]

These biases affect belief formation, reasoning processes, business and economic decisions, and human behavior in general.

The anchoring bias, or focalism, is the tendency to rely too heavily—to "anchor"—on one trait or piece of information when making decisions (usually the first piece of information acquired on that subject);[12][13] it includes or involves several more specific effects. Apophenia is the tendency to perceive meaningful connections between unrelated things,[18] and has several subtypes. The availability heuristic (also known as the availability bias) is the tendency to overestimate the likelihood of events with greater "availability" in memory, which can be influenced by how recent the memories are or how unusual or emotionally charged they may be;[22] it likewise includes or involves more specific effects. Cognitive dissonance is the perception of contradictory information and the mental toll of it. Confirmation bias is the tendency to search for, interpret, focus on and remember information in a way that confirms one's preconceptions;[35] multiple other cognitive biases involve or are types of confirmation bias. Egocentric bias is the tendency to rely too heavily on one's own perspective and/or have a different perception of oneself relative to others,[38] and takes several forms. Extension neglect occurs where the quantity of the sample size is not sufficiently taken into consideration when assessing the outcome, relevance or judgement; it too takes several forms. False priors are initial beliefs and knowledge which interfere with the unbiased evaluation of factual evidence and lead to incorrect conclusions.
Several biases are based on false priors. The framing effect is the tendency to draw different conclusions from the same information, depending on how that information is presented; it takes several forms, and a number of related effects derive from prospect theory. Other groupings include association fallacies, attribution biases, and biases involved in conformity. Ingroup bias is the tendency for people to give preferential treatment to others they perceive to be members of their own groups, and is related to several further effects. In psychology and cognitive science, a memory bias is a cognitive bias that either enhances or impairs the recall of a memory (either the chances that the memory will be recalled at all, or the amount of time it takes for it to be recalled, or both), or that alters the content of a reported memory. There are many types of memory bias, among them the misattribution biases.
https://en.wikipedia.org/wiki/List_of_memory_biases
Confirmation bias (also confirmatory bias, myside bias[a] or congeniality bias[2]) is the tendency to search for, interpret, favor and recall information in a way that confirms or supports one's prior beliefs or values.[3] People display this bias when they select information that supports their views, ignoring contrary information, or when they interpret ambiguous evidence as supporting their existing attitudes. The effect is strongest for desired outcomes, for emotionally charged issues and for deeply entrenched beliefs. Biased search for information, biased interpretation of this information, and biased memory recall have been invoked to explain four specific effects.

A series of psychological experiments in the 1960s suggested that people are biased toward confirming their existing beliefs. Later work re-interpreted these results as a tendency to test ideas in a one-sided way, focusing on one possibility and ignoring alternatives. Explanations for the observed biases include wishful thinking and the limited human capacity to process information. Another proposal is that people show confirmation bias because they are pragmatically assessing the costs of being wrong rather than investigating in a neutral, scientific way.

Flawed decisions due to confirmation bias have been found in a wide range of political, organizational, financial and scientific contexts. These biases contribute to overconfidence in personal beliefs and can maintain or strengthen beliefs in the face of contrary evidence. For example, confirmation bias produces systematic errors in scientific research based on inductive reasoning (the gradual accumulation of supportive evidence). Similarly, a police detective may identify a suspect early in an investigation but then may only seek confirming rather than disconfirming evidence. A medical practitioner may prematurely focus on a particular disorder early in a diagnostic session and then seek only confirming evidence. In social media, confirmation bias is amplified by the use of filter bubbles, or "algorithmic editing", which display to individuals only information they are likely to agree with, while excluding opposing views.

Confirmation bias, previously used as a "catch-all phrase", was refined by English psychologist Peter Wason as "a preference for information that is consistent with a hypothesis rather than information which opposes it."[4]

Confirmation biases are effects in information processing. They differ from what is sometimes called the behavioral confirmation effect, commonly known as self-fulfilling prophecy, in which a person's expectations influence their own behavior, bringing about the expected result.[5] Some psychologists restrict the term "confirmation bias" to selective collection of evidence that supports what one already believes while ignoring or rejecting evidence that supports a different conclusion.
Others apply the term more broadly to the tendency to preserve one's existing beliefs when searching for evidence, interpreting it, or recalling it from memory.[6][b]Confirmation bias is a result of automatic, unintentional strategies rather than deliberate deception.[8][9] Experiments have found repeatedly that people tend to test hypotheses in a one-sided way, by searching for evidence consistent with their currenthypothesis.[3]: 177–178[11]Rather than searching through all the relevant evidence, they phrase questions to receive an affirmative answer that supports their theory.[12]They look for the consequences that they would expect if their hypothesis was true, rather than what would happen if it was false.[12]For example, someone using yes/no questions to find a number they suspect to be the number 3 might ask, "Is it anodd number?" People prefer this type of question, called a "positive test", even when a negative test such as "Is it an even number?" would yield exactly the same information.[13]However, this does not mean that people seek tests that guarantee a positive answer. In studies where subjects could select either such pseudo-tests or genuinely diagnostic ones, they favored the genuinely diagnostic.[14][15] The preference for positive tests in itself is not a bias, since positive tests can be highly informative.[16]However, in combination with other effects, this strategy can confirm existing beliefs or assumptions, independently of whether they are true.[8]In real-world situations, evidence is often complex and mixed. For example, various contradictory ideas about someone could each be supported by concentrating on one aspect of his or her behavior.[11]Thus any search for evidence in favor of a hypothesis is likely to succeed.[8]One illustration of this is the way the phrasing of a question can significantly change the answer.[11]For example, people who are asked, "Are you happy with your social life?" report greater satisfaction than those asked, "Are youunhappy with your social life?"[17] Even a small change in a question's wording can affect how people search through available information, and hence the conclusions they reach. This was shown using a fictional child custody case.[18]Participants read that Parent A was moderately suitable to be the guardian in multiple ways. Parent B had a mix of salient positive and negative qualities: a close relationship with the child but a job that would take them away for long periods of time. When asked, "Which parent should have custody of the child?" the majority of participants chose Parent B, looking mainly for positive attributes. However, when asked, "Which parent should be denied custody of the child?" they looked for negative attributes and the majority answered that Parent B should be denied custody, implying that Parent A should have custody.[18] Similar studies have demonstrated how people engage in a biased search for information, but also that this phenomenon may be limited by a preference for genuine diagnostic tests. In an initial experiment, participants rated another person on theintroversion–extroversionpersonality dimension on the basis of an interview. They chose the interview questions from a given list. When the interviewee was introduced as an introvert, the participants chose questions that presumed introversion, such as, "What do you find unpleasant about noisy parties?" 
When the interviewee was described as extroverted, almost all the questions presumed extroversion, such as, "What would you do to liven up a dull party?" These loaded questions gave the interviewees little or no opportunity to falsify the hypothesis about them.[19] A later version of the experiment gave the participants less presumptive questions to choose from, such as, "Do you shy away from social interactions?"[20] Participants preferred to ask these more diagnostic questions, showing only a weak bias towards positive tests. This pattern, of a main preference for diagnostic tests and a weaker preference for positive tests, has been replicated in other studies.[20]

Goedert, Ellefson, and Rehder (2014) examined the influence of prior distributions of the strength of causal relations on how people collect and evaluate evidence. The findings suggest that people's sense of plausibility will influence their search for evidence in a way that bolsters their prior views. In this experiment, participants read stories relating a range of causes to various kinds of effects, from skin diseases to car accidents, and collected evidence of the probativeness of particular causes. They found that, on average, participants were more likely to search for confirming evidence for causes they concluded were plausible and disconfirming evidence for causes they considered implausible — a strategy the researchers dubbed the positive test strategy. This result implies that plausibility does not just change how people interpret evidence, but also what evidence they seek. Furthermore, the research indicated that when participants perceived a cause as unlikely, one of their major concerns was to give preference to disconfirming evidence; and because the explanation they modified is itself a source of evidence that contradicts their newly acquired explanation, it may be difficult for people to update their beliefs when faced with disconfirming evidence.[21]

Personality traits influence and interact with biased search processes.[22] Individuals vary in their abilities to defend their attitudes from external attacks in relation to selective exposure. Selective exposure occurs when individuals search for information that is consistent, rather than inconsistent, with their personal beliefs.[23] An experiment examined the extent to which individuals could refute arguments that contradicted their personal beliefs.[22] People with high confidence levels more readily seek out contradictory information to their personal position to form an argument. This can take the form of oppositional news consumption, where individuals seek opposing partisan news in order to counterargue.[24] Individuals with low confidence levels do not seek out contradictory information and prefer information that supports their personal position. People generate and evaluate evidence in arguments that are biased towards their own beliefs and opinions.[25] Heightened confidence levels decrease preference for information that supports individuals' personal beliefs.

Another experiment gave participants a complex rule-discovery task that involved moving objects simulated by a computer.[26] Objects on the computer screen followed specific laws, which the participants had to figure out. Participants could "fire" objects across the screen to test their hypotheses. Despite making many attempts over a ten-hour session, none of the participants figured out the rules of the system.
They typically attempted to confirm rather than falsify their hypotheses, and were reluctant to consider alternatives. Even after seeing objective evidence that refuted their working hypotheses, they frequently continued doing the same tests. Some of the participants were taught proper hypothesis-testing, but these instructions had almost no effect.[26]

"Smart people believe weird things because they are skilled at defending beliefs they arrived at for non-smart reasons."

Confirmation biases are not limited to the collection of evidence. Even if two individuals have the same information, the way they interpret it can be biased. A team at Stanford University conducted an experiment involving participants who felt strongly about capital punishment, with half in favor and half against it.[28][29] Each participant read descriptions of two studies: a comparison of U.S. states with and without the death penalty, and a comparison of murder rates in a state before and after the introduction of the death penalty. After reading a quick description of each study, the participants were asked whether their opinions had changed. Then, they read a more detailed account of each study's procedure and had to rate whether the research was well-conducted and convincing.[28] In fact, the studies were fictional. Half the participants were told that one kind of study supported the deterrent effect and the other undermined it, while for other participants the conclusions were swapped.[28][29]

The participants, whether supporters or opponents, reported shifting their attitudes slightly in the direction of the first study they read. Once they read the more detailed descriptions of the two studies, they almost all returned to their original belief regardless of the evidence provided, pointing to details that supported their viewpoint and disregarding anything contrary. Participants described studies supporting their pre-existing view as superior to those that contradicted it, in detailed and specific ways.[28][30] Writing about a study that seemed to undermine the deterrence effect, a death penalty proponent wrote, "The research didn't cover a long enough period of time," while an opponent's comment on the same study said, "No strong evidence to contradict the researchers has been presented."[28] The results illustrated that people set higher standards of evidence for hypotheses that go against their current expectations. This effect, known as "disconfirmation bias", has been supported by other experiments.[31]

Another study of biased interpretation occurred during the 2004 U.S. presidential election and involved participants who reported having strong feelings about the candidates. They were shown apparently contradictory pairs of statements, either from Republican candidate George W. Bush, Democratic candidate John Kerry or a politically neutral public figure. They were also given further statements that made the apparent contradiction seem reasonable. From these three pieces of information, they had to decide whether each individual's statements were inconsistent.[32]: 1948 There were strong differences in these evaluations, with participants much more likely to interpret statements from the candidate they opposed as contradictory.[32]: 1951

In this experiment, the participants made their judgments while in a magnetic resonance imaging (MRI) scanner which monitored their brain activity. As participants evaluated contradictory statements by their favored candidate, emotional centers of their brains were aroused.
This did not happen with the statements by the other figures. The experimenters inferred that the different responses to the statements were not due to passive reasoning errors. Instead, the participants were actively reducing the cognitive dissonance induced by reading about their favored candidate's irrational or hypocritical behavior.[32]: 1956

Biases in belief interpretation are persistent, regardless of intelligence level. Participants in an experiment took the SAT test (a college admissions test used in the United States) to assess their intelligence levels. They then read information about safety concerns for vehicles, with the experimenters manipulating the national origin of the car. American participants gave their opinion on whether the car should be banned on a six-point scale, where one indicated "definitely yes" and six indicated "definitely no". Participants first evaluated whether they would allow a dangerous German car on American streets and a dangerous American car on German streets. Participants believed that the dangerous German car on American streets should be banned more quickly than the dangerous American car on German streets. There was no difference across intelligence levels in the rate at which participants would ban a car.[25]

Biased interpretation is not restricted to emotionally significant topics. In another experiment, participants were told a story about a theft. They had to rate the evidential importance of statements arguing either for or against a particular character being responsible. When they hypothesized that character's guilt, they rated statements supporting that hypothesis as more important than conflicting statements.[33]

People may remember evidence selectively to reinforce their expectations, even if they gather and interpret evidence in a neutral manner. This effect is called "selective recall", "confirmatory memory", or "access-biased memory".[34] Psychological theories differ in their predictions about selective recall. Schema theory predicts that information matching prior expectations will be more easily stored and recalled than information that does not match.[35] Some alternative approaches say that surprising information stands out and so is memorable.[35] Predictions from both these theories have been confirmed in different experimental contexts, with no theory winning outright.[36]

In one study, participants read a profile of a woman which described a mix of introverted and extroverted behaviors.[37] They later had to recall examples of her introversion and extroversion. One group was told this was to assess the woman for a job as a librarian, while a second group was told it was for a job in real estate sales. There was a significant difference between what these two groups recalled, with the "librarian" group recalling more examples of introversion and the "sales" group recalling more extroverted behavior.[37] A selective memory effect has also been shown in experiments that manipulate the desirability of personality types.[35][38] In one of these, a group of participants were shown evidence that extroverted people are more successful than introverts. Another group were told the opposite. In a subsequent, apparently unrelated study, participants were asked to recall events from their lives in which they had been either introverted or extroverted.
Each group of participants provided more memories connecting themselves with the more desirable personality type, and recalled those memories more quickly.[39]

Changes in emotional states can also influence memory recall.[40][41] Participants rated how they felt when they had first learned that O. J. Simpson had been acquitted of murder charges.[40] They described their emotional reactions and confidence regarding the verdict one week, two months, and one year after the trial. Results indicated that participants' assessments of Simpson's guilt changed over time. The more that participants' opinion of the verdict had changed, the less stable were their memories of their initial emotional reactions. When participants recalled their initial emotional reactions two months and a year later, past appraisals closely resembled current appraisals of emotion. People demonstrate sizable myside bias when discussing their opinions on controversial topics.[25] Memory recall and the construction of experiences are revised in relation to corresponding emotional states.

Myside bias has been shown to influence the accuracy of memory recall.[41] In an experiment, widows and widowers rated the intensity of their experienced grief six months and five years after the deaths of their spouses. Participants reported more intense grief at six months than at five years. Yet, when the participants were asked after five years how they had felt six months after the death of their significant other, the intensity of grief they recalled was highly correlated with their current level of grief. Individuals appear to use their current emotional states to infer how they must have felt when experiencing past events.[40] Emotional memories are reconstructed by current emotional states.

One study showed how selective memory can maintain belief in extrasensory perception (ESP).[42] Believers and disbelievers were each shown descriptions of ESP experiments. Half of each group were told that the experimental results supported the existence of ESP, while the others were told they did not. In a subsequent test, participants recalled the material accurately, apart from believers who had read the non-supportive evidence. This group remembered significantly less information and some of them incorrectly remembered the results as supporting ESP.[42]

Myside bias was once believed to be correlated with intelligence; however, studies have shown that myside bias is more strongly influenced by the ability to think rationally than by level of intelligence.[25] Myside bias can cause an inability to effectively and logically evaluate the opposite side of an argument. Studies have stated that myside bias is an absence of "active open-mindedness", meaning the active search for why an initial idea may be wrong.[43] Typically, myside bias is operationalized in empirical studies as the quantity of evidence used in support of one's own side in comparison to the opposite side.[44]

One study has found individual differences in myside bias; these differences are acquired through learning in a cultural context and are mutable. The researcher found important individual differences in argumentation.
Studies have suggested that individual differences such as deductive reasoning ability, the ability to overcome belief bias, epistemological understanding, and thinking disposition are significant predictors of reasoning and of generating arguments, counterarguments, and rebuttals.[45][46][47]

A study by Christopher Wolfe and Anne Britt also investigated how participants' views of "what makes a good argument?" can be a source of myside bias that influences the way a person formulates their own arguments.[44] The study investigated individual differences in argumentation schema and asked participants to write essays. The participants were randomly assigned to write essays either for or against their preferred side of an argument and were given research instructions that took either a balanced or an unrestricted approach. The balanced-research instructions directed participants to create a "balanced" argument, i.e., one that included both pros and cons; the unrestricted-research instructions included nothing on how to create the argument.[44]

Overall, the results revealed that the balanced-research instructions significantly increased the incidence of opposing information in arguments. These data also reveal that personal belief is not a source of myside bias; however, participants who believe that a good argument is one based on facts are more likely to exhibit myside bias than other participants. This evidence is consistent with the claims proposed in Baron's article—that people's opinions about what makes for good thinking can influence how arguments are generated.[44]

Before psychological research on confirmation bias, the phenomenon had been observed throughout history. The Greek historian Thucydides (c. 460 BC–c. 395 BC) wrote of misguided reason in The Peloponnesian War: "... for it is a habit of mankind to entrust to careless hope what they long for, and to use sovereign reason to thrust aside what they do not fancy".[48] The Italian poet Dante Alighieri (1265–1321) noted it in the Divine Comedy, in which St. Thomas Aquinas cautions Dante upon meeting in Paradise, "opinion—hasty—often can incline to the wrong side, and then affection for one's own opinion binds, confines the mind".[49] Ibn Khaldun noticed the same effect in his Muqaddimah:[50]

Untruth naturally afflicts historical information. There are various reasons that make this unavoidable. One of them is partisanship for opinions and schools. ... if the soul is infected with partisanship for a particular opinion or sect, it accepts without a moment's hesitation the information that is agreeable to it. Prejudice and partisanship obscure the critical faculty and preclude critical investigation. The result is that falsehoods are accepted and transmitted.

In the Novum Organum, English philosopher and scientist Francis Bacon (1561–1626)[51] noted that biased assessment of evidence drove "all superstitions, whether in astrology, dreams, omens, divine judgments or the like".[52] He wrote:[52]

The human understanding when it has once adopted an opinion ... draws all things else to support and agree with it. And though there be a greater number and weight of instances to be found on the other side, yet these it either neglects or despises, or else by some distinction sets aside or rejects[.]
In the second volume of his The World as Will and Representation (1844), German philosopher Arthur Schopenhauer observed that "An adopted hypothesis gives us lynx-eyes for everything that confirms it and makes us blind to everything that contradicts it."[53]

In his 1897 essay What Is Art?, Russian novelist Leo Tolstoy wrote:[54]

I know that most men—not only those considered clever, but even those who are very clever, and capable of understanding most difficult scientific, mathematical, or philosophic problems—can very seldom discern even the simplest and most obvious truth if it be such as to oblige them to admit the falsity of conclusions they have formed, perhaps with much difficulty—conclusions of which they are proud, which they have taught to others, and on which they have built their lives.

In his 1894 essay The Kingdom of God Is Within You, Tolstoy had earlier written:[55]

The most difficult subjects can be explained to the most slow-witted man if he has not formed any idea of them already; but the simplest thing cannot be made clear to the most intelligent man if he is firmly persuaded that he knows already, without a shadow of doubt, what is laid before him.

In Peter Wason's initial experiment published in 1960 (which does not mention the term "confirmation bias"), he repeatedly challenged participants to identify a rule applying to triples of numbers. They were told that (2,4,6) fits the rule. They generated triples, and the experimenter told them whether each triple conformed to the rule.[3]: 179

The actual rule was simply "any ascending sequence", but participants had great difficulty in finding it, often announcing rules that were far more specific, such as "the middle number is the average of the first and last".[56] The participants seemed to test only positive examples—triples that obeyed their hypothesized rule. For example, if they thought the rule was, "Each number is two greater than its predecessor," they would offer a triple that fitted (confirmed) this rule, such as (11,13,15), rather than a triple that violated (falsified) it, such as (11,12,19).[57]

Wason interpreted his results as showing a preference for confirmation over falsification, and hence coined the term "confirmation bias".[c][59] Wason also used confirmation bias to explain the results of his selection task experiment.[60] Participants repeatedly performed badly on various forms of this test, in most cases ignoring information that could potentially refute (falsify) the specified rule.[61][62]

Klayman and Ha's 1987 paper argues that the Wason experiments do not actually demonstrate a bias towards confirmation, but instead a tendency to make tests consistent with the working hypothesis.[16][63] They called this the "positive test strategy".[11] This strategy is an example of a heuristic: a reasoning shortcut that is imperfect but easy to compute.[64] Klayman and Ha used Bayesian probability and information theory as their standard of hypothesis-testing, rather than the falsificationism used by Wason. According to these ideas, each answer to a question yields a different amount of information, which depends on the person's prior beliefs. Thus a scientific test of a hypothesis is one that is expected to produce the most information. Since the information content depends on initial probabilities, a positive test can be either highly informative or uninformative. Klayman and Ha argued that when people think about realistic problems, they are looking for a specific answer with a small initial probability.
In this case, positive tests are usually more informative than negative tests.[16] However, in Wason's rule discovery task the answer—three numbers in ascending order—is very broad, so positive tests are unlikely to yield informative answers. Klayman and Ha supported their analysis by citing an experiment that used the labels "DAX" and "MED" in place of "fits the rule" and "doesn't fit the rule". This avoided implying that the aim was to find a low-probability rule. Participants had much more success with this version of the experiment.[65][66]

In light of this and other critiques, the focus of research moved away from confirmation versus falsification of a hypothesis, to examining whether people test hypotheses in an informative way, or in an uninformative but positive way. The search for "true" confirmation bias led psychologists to look at a wider range of effects in how people process information.[67]

There are currently three main information processing explanations of confirmation bias, plus a recent addition. According to Robert MacCoun, most biased evidence processing occurs through a combination of "cold" (cognitive) and "hot" (motivated) mechanisms.[68]

Cognitive explanations for confirmation bias are based on limitations in people's ability to handle complex tasks, and the shortcuts, called heuristics, that they use.[69] For example, people may judge the reliability of evidence by using the availability heuristic, that is, how readily a particular idea comes to mind.[70] It is also possible that people can only focus on one thought at a time, and so find it difficult to test alternative hypotheses in parallel.[3]: 198–199 Another heuristic is the positive test strategy identified by Klayman and Ha, in which people test a hypothesis by examining cases where they expect a property or event to occur. This heuristic avoids the difficult or impossible task of working out how diagnostic each possible question will be. However, it is not universally reliable, so people can overlook challenges to their existing beliefs.[16][3]: 200

Motivational explanations involve an effect of desire on belief.[3]: 197[71] It is known that people prefer positive thoughts over negative ones in a number of ways: this is called the "Pollyanna principle".[72] Applied to arguments or sources of evidence, this could explain why desired conclusions are more likely to be believed true. According to experiments that manipulate the desirability of the conclusion, people demand a high standard of evidence for unpalatable ideas and a low standard for preferred ideas. In other words, they ask, "Can I believe this?" for some suggestions and, "Must I believe this?" for others.[73][74] Although consistency is a desirable feature of attitudes, an excessive drive for consistency is another potential source of bias because it may prevent people from neutrally evaluating new, surprising information. Social psychologist Ziva Kunda combines the cognitive and motivational theories, arguing that motivation creates the bias, but cognitive factors determine the size of the effect.[3]: 198

Explanations in terms of cost-benefit analysis assume that people do not just test hypotheses in a disinterested way, but assess the costs of different errors.[75] Using ideas from evolutionary psychology, James Friedrich suggests that people do not primarily aim at truth in testing hypotheses, but try to avoid the most costly errors.
For example, employers might ask one-sided questions in job interviews because they are focused on weeding out unsuitable candidates.[76] Yaacov Trope and Akiva Liberman's refinement of this theory assumes that people compare the two different kinds of error: accepting a false hypothesis or rejecting a true hypothesis. For instance, someone who underestimates a friend's honesty might treat him or her suspiciously and so undermine the friendship. Overestimating the friend's honesty may also be costly, but less so. In this case, it would be rational to seek, evaluate or remember evidence of their honesty in a biased way.[77] When someone gives an initial impression of being introverted or extroverted, questions that match that impression come across as more empathic.[78] This suggests that when talking to someone who seems to be an introvert, it is a sign of better social skills to ask, "Do you feel awkward in social situations?" rather than, "Do you like noisy parties?" The connection between confirmation bias and social skills was corroborated by a study of how college students get to know other people. Highly self-monitoring students, who are more sensitive to their environment and to social norms, asked more matching questions when interviewing a high-status staff member than when getting to know fellow students.[78]

Psychologists Jennifer Lerner and Philip Tetlock distinguish two different kinds of thinking process. Exploratory thought neutrally considers multiple points of view and tries to anticipate all possible objections to a particular position, while confirmatory thought seeks to justify a specific point of view. Lerner and Tetlock say that when people expect to justify their position to others whose views they already know, they will tend to adopt a similar position to those people, and then use confirmatory thought to bolster their own credibility. However, if the external parties are overly aggressive or critical, people will disengage from thought altogether, and simply assert their personal opinions without justification. Lerner and Tetlock say that people only push themselves to think critically and logically when they know in advance they will need to explain themselves to others who are well-informed, genuinely interested in the truth, and whose views they do not already know. Because those conditions rarely exist, they argue, most people use confirmatory thought most of the time.[79][80][81]

Developmental psychologist Eve Whitmore has argued that the beliefs and biases involved in confirmation bias have their roots in childhood coping through make-believe, which becomes "the basis for more complex forms of self-deception and illusion into adulthood." The friction brought on by questioning as an adolescent with developing critical thinking can lead to the rationalization of false beliefs, and the habit of such rationalization can become unconscious over the years.[82]

Recent research in economics has challenged the traditional view of confirmation bias as purely a cognitive flaw.[83] Under conditions where acquiring and processing information is costly, seeking confirmatory evidence can actually be an optimal strategy. Instead of pursuing contrarian or disconfirming evidence, it may be more efficient to focus on sources likely to align with one's existing beliefs, given the constraints on time and resources.
Economist Weijie Zhong has developed a model demonstrating that individuals who must make decisions under time pressure, and who face costs for obtaining more information, will often prefer confirmatory signals. According to this model, when individuals believe strongly in a certain hypothesis, they optimally seek information that confirms it, allowing them to build confidence more efficiently. If the expected confirmatory signals are not received, their confidence in the initial hypothesis will gradually decline, leading to belief updating. This approach shows that seeking confirmation is not necessarily biased but may be a rational allocation of limited attention and resources.[84]

In social media, confirmation bias is amplified by the use of filter bubbles, or "algorithmic editing", which displays to individuals only information they are likely to agree with, while excluding opposing views.[85] Some have argued that confirmation bias is the reason society can never escape from filter bubbles, because individuals are psychologically hardwired to seek information that agrees with their preexisting values and beliefs.[86] Others have further argued that the mixture of the two is degrading democracy—claiming that this "algorithmic editing" removes diverse viewpoints and information—and that unless filter bubble algorithms are removed, voters will be unable to make fully informed political decisions.[87][85]

The rise of social media has contributed greatly to the rapid spread of fake news, that is, false and misleading information that is presented as credible news from a seemingly reliable source. Confirmation bias (selecting or reinterpreting evidence to support one's beliefs) is one of three main hurdles cited to explain why critical thinking goes astray in these circumstances. The other two are shortcut heuristics (when overwhelmed or short of time, people rely on simple rules such as group consensus or trusting an expert or role model) and social goals (social motivation or peer pressure can interfere with objective analysis of the facts at hand).[88]

In combating the spread of fake news, social media sites have considered turning toward "digital nudging".[89] This can currently take two forms: nudging of information and nudging of presentation.
Nudging of information entails social media sites providing a disclaimer or label that questions or warns users about the validity of a source, while nudging of presentation entails exposing users to new information which they may not have sought out but which could introduce them to viewpoints that counter their own confirmation biases.[90]

A distinguishing feature of scientific thinking is the search for confirming or supportive evidence (inductive reasoning) as well as for falsifying evidence (deductive reasoning).[91][92]

Many times in the history of science, scientists have resisted new discoveries by selectively interpreting or ignoring unfavorable data.[3]: 192–194 Several studies have shown that scientists rate studies that report findings consistent with their prior beliefs more favorably than studies reporting findings inconsistent with their previous beliefs.[9][93][94]

However, assuming that the research question is relevant, the experimental design adequate, and the data clearly and comprehensively described, the empirical data obtained should be important to the scientific community and should not be viewed prejudicially, regardless of whether they conform to current theoretical predictions.[94] In practice, researchers may misunderstand, misinterpret, or not read at all studies that contradict their preconceptions, or wrongly cite them anyway as if they actually supported their claims.[95]

Further, confirmation biases can sustain scientific theories or research programs in the face of inadequate or even contradictory evidence.[61][96] The discipline of parapsychology is often cited as an example.[97]

An experimenter's confirmation bias can potentially affect which data are reported. Data that conflict with the experimenter's expectations may be more readily discarded as unreliable, producing the so-called file drawer effect. To combat this tendency, scientific training teaches ways to prevent bias.[98] For example, the experimental design of randomized controlled trials (coupled with their systematic review) aims to minimize sources of bias.[98][99]

The social process of peer review aims to mitigate the effect of individual scientists' biases, even though the peer review process itself may be susceptible to such biases.[100][101][94][102][103] Confirmation bias may thus be especially harmful to objective evaluations regarding nonconforming results, since biased individuals may regard opposing evidence as weak in principle and give little serious thought to revising their beliefs.[93] Scientific innovators often meet with resistance from the scientific community, and research presenting controversial results frequently receives harsh peer review.[104]

Confirmation bias can lead investors to be overconfident, ignoring evidence that their strategies will lose money.[10][105] In studies of political stock markets, investors made more profit when they resisted bias. For example, participants who interpreted a candidate's debate performance in a neutral rather than partisan way were more likely to profit.[106] To combat the effect of confirmation bias, investors can try to adopt a contrary viewpoint "for the sake of argument".[107] In one technique, they imagine that their investments have collapsed and ask themselves why this might happen.[10]

Cognitive biases are important variables in clinical decision-making by medical general practitioners (GPs) and medical specialists. Two important ones are confirmation bias and the overlapping availability bias.
A GP may make a diagnosis early on during an examination, and then seek confirming rather than falsifying evidence. This cognitive error is partly caused by the availability of evidence about the supposed disorder being diagnosed. For example, the client may have mentioned the disorder, or the GP may have recently read a much-discussed paper about it. The basis of this cognitive shortcut or heuristic (termed anchoring) is that the doctor does not consider multiple possibilities based on evidence, but prematurely latches on (or anchors) to a single cause.[108] In emergency medicine, because of time pressure, there is a high density of decision-making, and shortcuts are frequently applied. The potential failure rate of these cognitive decisions needs to be managed by education about the 30 or more cognitive biases that can occur, so as to set in place proper debiasing strategies.[109] Confirmation bias may also cause doctors to perform unnecessary medical procedures due to pressure from adamant patients.[110]

Mental disorders may be prone to misdiagnosis because diagnoses rest on observations and self-reporting rather than objective testing. Confirmation bias may play a role when practitioners stick with an early diagnosis.[111]

The psychologist Raymond Nickerson blames confirmation bias for the ineffective medical procedures that were used for centuries before the arrival of scientific medicine.[3]: 192 If a patient recovered, medical authorities counted the treatment as successful, rather than looking for alternative explanations, such as that the disease had run its natural course. Biased assimilation is a factor in the modern appeal of alternative medicine, whose proponents are swayed by positive anecdotal evidence but treat scientific evidence hyper-critically.[112][113][114]

Cognitive therapy was developed by Aaron T. Beck in the early 1960s and has become a popular approach.[115] According to Beck, biased information processing is a factor in depression.[116] His approach teaches people to treat evidence impartially, rather than selectively reinforcing negative outlooks.[51] Phobias and hypochondria have also been shown to involve confirmation bias for threatening information.[117]

Nickerson argues that reasoning in judicial and political contexts is sometimes subconsciously biased, favoring conclusions that judges, juries or governments have already committed to.[3]: 191–193 Since the evidence in a jury trial can be complex, and jurors often reach decisions about the verdict early on, it is reasonable to expect an attitude polarization effect. The prediction that jurors will become more extreme in their views as they see more evidence has been borne out in experiments with mock trials.[118][119] Both inquisitorial and adversarial criminal justice systems are affected by confirmation bias.[120]

Confirmation bias can be a factor in creating or extending conflicts, from emotionally charged debates to wars: by interpreting the evidence in their favor, each opposing party can become overconfident that it is in the stronger position.[121] On the other hand, confirmation bias can result in people ignoring or misinterpreting the signs of an imminent or incipient conflict. For example, psychologists Stuart Sutherland and Thomas Kida have each argued that U.S. Navy Admiral Husband E. Kimmel showed confirmation bias when playing down the first signs of the Japanese attack on Pearl Harbor.[61][122]

A two-decade study of political pundits by Philip E.
Tetlock found that, on the whole, their predictions were not much better than chance. Tetlock divided experts into "foxes" who maintained multiple hypotheses, and "hedgehogs" who were more dogmatic. In general, the hedgehogs were much less accurate. Tetlock blamed their failure on confirmation bias, and specifically on their inability to make use of new information that contradicted their existing theories.[123]

In police investigations, a detective may identify a suspect early in an investigation but then sometimes seek mainly supporting or confirming evidence, ignoring or downplaying falsifying evidence.[124]

Social psychologists have identified two tendencies in the way people seek or interpret information about themselves. Self-verification is the drive to reinforce the existing self-image, and self-enhancement is the drive to seek positive feedback. Both are served by confirmation biases.[125] In experiments where people are given feedback that conflicts with their self-image, they are less likely to attend to it or remember it than when given self-verifying feedback.[126][127][128] They reduce the impact of such information by interpreting it as unreliable.[126][129][130] Similar experiments have found a preference for positive feedback, and for the people who give it, over negative feedback.[125]

Confirmation bias can play a key role in the propagation of mass delusions. Witch trials are frequently cited as an example.[131][132]

Another example is the Seattle windshield pitting epidemic, in which windshields seemed to be damaged by an unknown cause. As news of the apparent wave of damage spread, more and more people checked their windshields, discovered that their windshields too had been damaged, and thus had their belief in the supposed epidemic confirmed. In fact, the windshields had been damaged all along, but the damage went unnoticed until people checked their windshields as the delusion spread.[133]

One factor in the appeal of alleged psychic readings is that listeners apply a confirmation bias which fits the psychic's statements to their own lives.[134] By making a large number of ambiguous statements in each sitting, the psychic gives the client more opportunities to find a match. This is one of the techniques of cold reading, with which a psychic can deliver a subjectively impressive reading without any prior information about the client.[134] Investigator James Randi compared the transcript of a reading to the client's report of what the psychic had said, and found that the client showed a strong selective recall of the "hits".[135]

As a striking illustration of confirmation bias in the real world, Nickerson mentions numerological pyramidology: the practice of finding meaning in the proportions of the Egyptian pyramids.[3]: 190 There are many different length measurements that can be made of, for example, the Great Pyramid of Giza, and many ways to combine or manipulate them. Hence it is almost inevitable that people who look at these numbers selectively will find superficially impressive correspondences, for example with the dimensions of the Earth.[3]: 190

Unconscious cognitive bias (including confirmation bias) in job recruitment affects hiring decisions and can potentially prohibit a diverse and inclusive workplace.
There are a variety of unconscious biases that affect recruitment decisions, but confirmation bias is one of the major ones, especially during the interview stage.[136] The interviewer will often select a candidate who confirms their own beliefs, even though other candidates are equally or better qualified.

When people with opposing views interpret new information in a biased way, their views can move even further apart. This is called "attitude polarization".[137] The effect was demonstrated by an experiment that involved drawing a series of red and black balls from one of two concealed "bingo baskets". Participants knew that one basket contained 60 percent black and 40 percent red balls; the other, 40 percent black and 60 percent red. The experimenters looked at what happened when balls of alternating color were drawn in turn, a sequence that does not favor either basket. After each ball was drawn, participants in one group were asked to state out loud their judgments of the probability that the balls were being drawn from one or the other basket. These participants tended to grow more confident with each successive draw—whether they initially thought the basket with 60 percent black balls or the one with 60 percent red balls was the more likely source, their estimate of the probability increased. Another group of participants were asked to state probability estimates only at the end of a sequence of drawn balls, rather than after each ball. They did not show the polarization effect, suggesting that it does not necessarily occur when people simply hold opposing positions, but rather when they openly commit to them.[138]

A less abstract study was the Stanford biased interpretation experiment, in which participants with strong opinions about the death penalty read about mixed experimental evidence. Twenty-three percent of the participants reported that their views had become more extreme, and this self-reported shift correlated strongly with their initial attitudes.[28] In later experiments, participants also reported their opinions becoming more extreme in response to ambiguous information. However, comparisons of their attitudes before and after the new evidence showed no significant change, suggesting that the self-reported changes might not be real.[31][137][139] Based on these experiments, Deanna Kuhn and Joseph Lao concluded that polarization is a real phenomenon but far from inevitable, only happening in a small minority of cases, and that it was prompted not only by considering mixed evidence, but by merely thinking about the topic.[137]

Charles Taber and Milton Lodge argued that the Stanford team's result had been hard to replicate because the arguments used in later experiments were too abstract or confusing to evoke an emotional response. The Taber and Lodge study used the emotionally charged topics of gun control and affirmative action.[31] They measured the attitudes of their participants towards these issues before and after reading arguments on each side of the debate. Two groups of participants showed attitude polarization: those with strong prior opinions and those who were politically knowledgeable. In part of this study, participants chose which information sources to read from a list prepared by the experimenters. For example, they could read arguments on gun control from the National Rifle Association of America and the Brady Anti-Handgun Coalition.
Even when instructed to be even-handed, participants were more likely to read arguments that supported their existing attitudes than arguments that did not. This biased search for information correlated well with the polarization effect.[31]

The backfire effect is a name for the finding that, given evidence against their beliefs, people can reject the evidence and believe even more strongly.[140][141] The phrase was coined by Brendan Nyhan and Jason Reifler in 2010.[142] However, subsequent research has since failed to replicate findings supporting the backfire effect.[143] One study conducted out of Ohio State University and George Washington University examined 10,100 participants across 52 different issues expected to trigger a backfire effect. While the findings did show that individuals are reluctant to embrace facts that contradict their already-held ideology, no cases of backfire were detected.[144] The backfire effect has since been noted to be a rare phenomenon rather than a common occurrence[145] (compare the boomerang effect).

Beliefs can survive potent logical or empirical challenges. They can survive and even be bolstered by evidence that most uncommitted observers would agree logically demands some weakening of such beliefs. They can even survive the total destruction of their original evidential bases. Confirmation biases provide one plausible explanation for the persistence of beliefs when the initial evidence for them is removed or when they have been sharply contradicted.[3]: 187 This belief perseverance effect was first demonstrated experimentally by Festinger, Riecken, and Schachter. These psychologists spent time with a cult whose members were convinced that the world would end on 21 December 1954. After the prediction failed, most believers still clung to their faith. Their book describing this research is aptly named When Prophecy Fails.[147]

The term belief perseverance, however, was coined in a series of experiments using what is called the "debriefing paradigm": participants read fake evidence for a hypothesis, their attitude change is measured, and then the fakery is exposed in detail. Their attitudes are then measured once more to see whether their belief returns to its previous level.[146]

A common finding is that at least some of the initial belief remains even after a full debriefing.[148] In one experiment, participants had to distinguish between real and fake suicide notes. The feedback was random: some were told they had done well while others were told they had performed badly. Even after being fully debriefed, participants were still influenced by the feedback. They still thought they were better or worse than average at that kind of task, depending on what they had initially been told.[149]

In another study, participants read job performance ratings of two firefighters, along with their responses to a risk aversion test.[146] The fictional data were arranged to show either a negative or a positive association: some participants were told that a risk-taking firefighter did better, while others were told he did less well than a risk-averse colleague.[150] Even if these two case studies had been true, they would have been scientifically poor evidence for a conclusion about firefighters in general.
However, the participants found them subjectively persuasive.[150] When the case studies were shown to be fictional, participants' belief in a link diminished, but around half of the original effect remained.[146] Follow-up interviews established that the participants had understood the debriefing and taken it seriously. Participants seemed to trust the debriefing, but regarded the discredited information as irrelevant to their personal belief.[150]

The continued influence effect is the tendency for misinformation to continue to influence memory and reasoning about an event, despite the misinformation having been retracted or corrected. This occurs even when the individual believes the correction.[151]

Experiments have shown that information is weighted more strongly when it appears early in a series, even when the order is unimportant. For example, people form a more positive impression of someone described as "intelligent, industrious, impulsive, critical, stubborn, envious" than when they are given the same words in reverse order.[152] This irrational primacy effect is independent of the primacy effect in memory, in which the earlier items in a series leave a stronger memory trace.[152] Biased interpretation offers an explanation for this effect: seeing the initial evidence, people form a working hypothesis that affects how they interpret the rest of the information.[3]: 187

One demonstration of irrational primacy used colored chips supposedly drawn from two urns. Participants were told the color distributions of the urns, and had to estimate the probability of a chip being drawn from one of them.[152] In fact, the colors appeared in a prearranged order. The first thirty draws favored one urn and the next thirty favored the other.[3]: 187 The series as a whole was neutral, so rationally, the two urns were equally likely. However, after sixty draws, participants favored the urn suggested by the initial thirty.[152]

Another experiment involved a slide show of a single object, seen as just a blur at first and in slightly better focus with each succeeding slide.[152] After each slide, participants had to state their best guess of what the object was. Participants whose early guesses were wrong persisted with those guesses, even when the picture was sufficiently in focus that the object was readily recognizable to other people.[3]: 187

Illusory correlation is the tendency to see non-existent correlations in a set of data.[153] This tendency was first demonstrated in a series of experiments in the late 1960s.[154] In one experiment, participants read a set of psychiatric case studies, including responses to the Rorschach inkblot test. The participants reported that the homosexual men in the set were more likely to report seeing buttocks, anuses or sexually ambiguous figures in the inkblots. In fact the fictional case studies had been constructed so that the homosexual men were no more likely to report this imagery or, in one version of the experiment, were less likely to report it than heterosexual men.[153] In a survey, a group of experienced psychoanalysts reported the same set of illusory associations with homosexuality.[153][154]

Another study recorded the symptoms experienced by arthritic patients, along with weather conditions, over a 15-month period.
Nearly all the patients reported that their pains were correlated with weather conditions, although the real correlation was zero.[155]

This effect is a kind of biased interpretation, in that objectively neutral or unfavorable evidence is interpreted to support existing beliefs. It is also related to biases in hypothesis-testing behavior.[156] In judging whether two events, such as illness and bad weather, are correlated, people rely heavily on the number of positive-positive cases: in this example, instances of both pain and bad weather. They pay relatively little attention to the other kinds of observation (of no pain or good weather).[157] This parallels the reliance on positive tests in hypothesis testing.[156] It may also reflect selective recall, in that people may have a sense that two events are correlated because it is easier to recall times when they happened together.[156]
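To make the reliance on positive-positive cases concrete, here is a minimal Python sketch. The counts are invented for illustration and are not data from the arthritis study cited above: even when the pain-and-bad-weather cell is the largest single count, the correlation computed from the full 2×2 table can be exactly zero.

import math

# Hypothetical 2x2 counts (illustration only, not data from the study above):
# rows: pain / no pain, columns: bad weather / good weather.
a, b = 60, 30   # pain & bad weather, pain & good weather
c, d = 40, 20   # no pain & bad weather, no pain & good weather

def phi(a, b, c, d):
    """Phi coefficient: the Pearson correlation for a 2x2 contingency table."""
    denom = math.sqrt((a + b) * (c + d) * (a + c) * (b + d))
    return (a * d - b * c) / denom

print("pain & bad weather cases:", a)                        # 60, the largest single cell
print("correlation over all four cells:", phi(a, b, c, d))   # 0.0

Judging only by the 60 salient "positive-positive" cases suggests a link; using all four cells shows the association in this toy table is actually zero, which is the error the illusory-correlation studies describe.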
https://en.wikipedia.org/wiki/Confirmation_bias
Einstellung (German pronunciation: [ˈaɪ̯nˌʃtɛlʊŋ]) is the development of a mechanized state of mind. Often called a problem-solving set, Einstellung refers to a person's predisposition to solve a given problem in a specific manner even though better or more appropriate methods of solving the problem exist. The Einstellung effect is the negative effect of previous experience when solving new problems. The Einstellung effect has been tested experimentally in many different contexts.

The example which led to the coining of the term by Abraham S. Luchins and Edith Hirsch Luchins[citation needed] is the Luchins water jar experiment, in which subjects were asked to solve a series of water jar problems. After solving many problems which had the same solution, subjects applied the same solution to later problems even though a simpler solution existed (Luchins, 1942).[1] Other experiments on the Einstellung effect can be found in The Effect of Einstellung on Compositional Processes[2] and Rigidity of Behavior, A Variational Approach to the Effect of Einstellung.[3]

Einstellung literally means "setting" or "installation", as well as a person's "attitude", in German. Related to Einstellung is what is referred to as an Aufgabe ("task" in German). The Aufgabe is the situation which could potentially invoke the Einstellung effect: a task which creates a tendency to execute a previously applicable behavior. In the Luchins and Luchins experiment, a water jar problem served as the Aufgabe, or task.

The Einstellung effect occurs when a person is presented with a problem or situation that is similar to problems they have worked through in the past. If the solution (or appropriate behavior) to the problem or situation has been the same in each past experience, the person will likely provide that same response, without giving the problem much thought, even though a more appropriate response might be available. Essentially, the Einstellung effect is one of the human brain's ways of finding an appropriate solution or behavior as efficiently as possible. The catch is that although finding a solution in this way is efficient, the solution found is not, or might not be, efficient itself. (This is consistent with the famous remark of Blaise Pascal: "I would have written a shorter letter, but I didn't have the time.")

Another phenomenon similar to Einstellung is functional fixedness (Duncker 1945).[4] Functional fixedness is an impaired ability to discover a new use for an object, owing to the subject's previous use of the object in a functionally dissimilar context. It can also be deemed a cognitive bias that limits a person to using an object only in the way it is traditionally used. Duncker also pointed out that the phenomenon occurs not only with physical objects, but also with mental objects or concepts (a point which lends itself nicely to the Einstellung effect).

The water jar test, first described in Abraham S. Luchins' 1942 classic experiment,[1] is a commonly cited example of an Einstellung situation. The experiment's participants were given the following problem: there are three water jars, each with the capacity to hold a different, fixed amount of water; the subject must figure out how to measure a certain amount of water using these jars. It was found that subjects used methods that they had used previously even when quicker and more efficient methods were available. The experiment sheds light on how mental sets can hinder the solving of novel problems. In the Luchins' experiment, subjects were divided into two groups.
The experimental group was given five practice problems, followed by four critical test problems. The control group did not have the five practice problems. All of the practice problems had only one solution: "B minus A minus 2C". For example, given jar A with a capacity of 21 units of water, jar B with a capacity of 127, and jar C with a capacity of 3, if an amount of 100 units must be measured out, the solution is to fill up jar B and pour out enough water to fill A once and C twice.

One of the critical problems was called the extinction problem: a problem that could not be solved using the previous solution B − A − 2C. In order to answer the extinction problem correctly, one had to solve the problem directly and generate a novel solution. An incorrect solution to the extinction problem indicated the presence of the Einstellung effect. The problems after the extinction problem again had two possible solutions; these post-extinction problems helped determine the subjects' recovery from the Einstellung effect.

The other critical problems could be solved either with the practiced solution (B − A − 2C) or with a shorter solution (A − C or A + C). For example, subjects were instructed to get 18 units of water from jars with capacities 15, 39, and 3. Despite the presence of a simpler solution (A + C), subjects in the experimental group tended to give the lengthier solution instead of the shorter one. Instead of simply filling up jars A and C, most subjects from the experimental group preferred the previous method of B − A − 2C, whereas virtually all of the control group used the simpler solution. When Luchins and Luchins gave experimental group subjects the warning, "Don't be blind", over half of them used the simplest solution to the remaining problems.[5]

The Einstellung effect can be understood in terms of theories of inductive reasoning. In a nutshell, inductive reasoning is the act of inferring a rule based on a finite number of instances. Most experiments on human inductive reasoning involve showing subjects a card with an object (or multiple objects, or letters, etc.) on it. The objects can vary in number, shape, size, color, and so on, and the subject's job is to answer "yes" or "no" (initially by guessing) as to whether the card is a positive instance of a rule which must be inferred by the subject. Over time, the subjects do tend to learn the rule, but the question is how?

Kendler and Kendler (1962)[6] proposed that older children and adults tend to behave in accordance with noncontinuity theory; that is, the subjects tend to pick a reasonable rule and assume it to be true until it proves false. Regarding the Einstellung effect, one can view noncontinuity theory as a way of explaining the tendency to maintain a specific behavior until it fails to work. In the water-jar problem, subjects generated a specific rule because it seemed to work in all situations; when they were given problems for which the same solution worked but a better solution was possible, they still gave their "tried and true" response.

Where theories of inductive reasoning tend to diverge from the idea of the Einstellung effect is in explaining the fact that, even after an instance where the Einstellung rule failed to work, many subjects reverted to the old solution when later presented with a problem for which it did work (again, a problem that also had a better solution).
One way to explain this observation is that subjects actually know (consciously) that the same solution might not always work, yet because they were presented with so many instances where it did work, they still tend to test that solution before any other (so if it works, it will be the first solution found).

Neurologically, the idea of synaptic plasticity, an important neurochemical explanation of memory, can help in understanding the Einstellung effect. Specifically, Hebbian theory (which in many regards is the neuroscience equivalent of the original associationist theories) is one explanation of synaptic plasticity (Hebb, 1949).[7] It states that when two associated neurons frequently fire together, while infrequently firing apart from one another, the strength of their association tends to become stronger (making future stimulation of one neuron even more likely to stimulate the other). Since the frontal lobe is most often attributed the roles of planning and problem solving, if there is a neurological pathway which is fundamental to the understanding of the Einstellung effect, the majority of it most likely falls within the frontal lobe.

Essentially, a Hebbian explanation of Einstellung could be as follows: stimuli are presented in such a way that the subject recognizes themself as being in a situation which they have been in before. That is, the subject sees, hears, smells, etc., an environment which is akin to an environment which they have been in before. The subject then must process the stimuli which are presented in such a way that they exhibit a behavior which is appropriate for the situation (be it run, throw, eat, etc.). Because neural growth is, at least in part, due to the associations between two events or ideas, it follows that the more a given stimulus is followed by a specific response, the more likely it is that in the future that stimulus will invoke the same response. Regarding the Luchins' experiment,[1] the stimulus presented was a water-jar problem (or, to be more technical, the stimulus was a piece of paper with words and numbers on it which, when interpreted correctly, portray a water-jar problem) and the invoked response was B − A − 2C. While it is a bit of a stretch to assume that there is a direct connection between a water-jar problem and B − A − 2C within the brain, it is not unreasonable to assume that the specific neural connections which are active during a water-jar problem-state and those that are active when one thinks "take the second term, subtract the first term, then subtract two of the third term" tend to increase in the amount of overlap as more and more instances where B − A − 2C works are presented.

The following experiments were designed to gauge the effect of different stressful situations on the Einstellung effect. Overall, these experiments show that stressful situations increase the prevalence of the Einstellung effect.

Luchins gave an elementary-school class a set of water jar problems. In order to create a stressful situation, experimenters told the students that the test would be timed, that the speed and accuracy of the test would be reviewed by their principal and teachers, and that the test would affect their grades. To further agitate the students during the test, experimenters were instructed to comment on how much slower the children were compared to children in lower grades. The experimenters observed anxious, stressed, and sometimes tearful faces during the experiment.
(Note that while such methods were common in the 1950s, today they would violate ethical practices in research.) The results of the experiment indicated that the stressful speed-test situation increased rigidity: Luchins found that only three of the ninety-eight students tested were able to solve the extinction problem, and only two students used the direct method for the critical problems. The same experiment conducted under non-stress conditions showed 70% rigidity on the test problems and 58% failure on the extinction problem, while the anxiety-inducing situation showed 98% and 97% respectively. The speed test was performed with college students as well, which yielded similar results. Even when college students were told ahead of time to use the direct method in order to avoid the mistakes made by children, they continued to exhibit rigidity under time pressure. The results of these studies showed that the emphasis on speed increased the Einstellung effect on the water jar problems.[8]

Luchins also instructed subjects to draw a path through a maze without crossing any of the maze's lines. The maze was either traced normally or traced using the mirror reflection of the maze. If the subject drew over the lines of the figure, they had to start at the beginning, which was disadvantageous since the subject was told that their score depended on the time and smoothness of the solution. The mirror-tracing situation was the stressful situation, and the normal tracing was the non-stressful, control situation. Experimenters observed that the mirror-tracing task caused more drawing outside the boundaries, increased overt signs of stress and anxiety, and required more time to complete accurately. The mirror-tracing situation produced 89% Einstellung solutions on the first two critical problems, compared with the 71% observed for normal tracing. In addition, 55% of the subjects failed with the mirror while only 18% failed without the mirror.[9]

In 1951, Solomon[10] gave both stutterers and fluent speakers a hidden word test, an arithmetical test, and a mirror maze test. Experimenters called the hidden word test a "speech test" to increase the stutterers' anxiety. There were no marked differences between the stutterers and the fluent speakers on the arithmetical and mirror maze tests. However, the results revealed a significant difference between the performance of the stutterers and the fluent speakers on the "speech test": on the first two critical problems, 58 percent of the stutterers gave Einstellung solutions whereas only 4 percent of the fluent speakers showed Einstellung effects.[11]

The original Luchins and Luchins experiment tested nine-, ten-, eleven-, and twelve-year-olds for the Einstellung effect.[1] The older groups showed more Einstellung effects than the younger groups in general. However, this initial study did not control for differences in educational level and intelligence. To remedy this problem, Ross (1952)[12] conducted a study on middle-aged (mean 37.3 years) and older adults (mean 60.8 years). The adults were grouped according to IQ, years of schooling, and occupation. Ross administered five Einstellung tests, including the arithmetical (water jar) test, the maze test, the hidden word test, and two others. For every test, the middle-aged group performed better than the older group. For example, 65% of the older adults failed the extinction task of the arithmetical test, whereas only 29% of the middle-aged adults failed it.
Luchins devised another experiment to determine the difference between Einstellung effects in children and in adults. In this study, 140 fifth-graders (mean 10.5 years) were compared to 79 college students (mean 21 years) and 21 adults (mean 43 years). Einstellung effects prior to the extinction task increased with age: the observed Einstellung effects for the extinction task were 56, 68, and 69 percent for young adults, children, and older adults respectively. This implies a curvilinear relationship between age and recovery from the Einstellung effect. A similar experiment conducted by Heglin in 1955 found the same relationship when the three age groups were equated for IQ. Therefore, the initial manifestation of the Einstellung effect on the arithmetic test increases with age. However, recovery from the Einstellung effect is greatest for young adults (average age 21 years) and decreases as the subject's age moves away from this point.[13]

In Luchins and Luchins' original experiment with 483 children, they found that boys demonstrated less of an Einstellung effect than girls.[1] The difference was only significant for the group that was instructed to write "Don't be blind" on their papers after the sixth problem (the DBB group). "Don't be blind" was meant as a reminder to pay attention and guard against rigidity on the remaining problems. However, this message was interpreted in many different ways, including treating it as just some more words to remember. The alternative interpretations occurred more frequently in girls and increased with IQ score within the female group. This difference in interpretation of "Don't be blind" may account for the fact that the male DBB group showed more direct solutions than their female counterparts. To determine sex differences in adults, Luchins gave college students the maze Einstellung test. The female group showed slightly more (although not statistically significantly more) Einstellung effects than the male group. Other studies have provided conflicting data about sex differences in the Einstellung effect.[14]

Luchins and Luchins also looked at the relationship between intelligence quotient (IQ) and Einstellung effects for the children in their original experiment. They found a statistically insignificant negative relationship between the Einstellung effect and intelligence.[15] In general, large Einstellung effects were observed for all subject groups regardless of IQ score. When Luchins and Luchins looked at the IQ ranges of children who did and did not demonstrate Einstellung effects, these spanned from 51 to 160 and from 75 to 155 respectively. These ranges show a slight negative correlation between intelligence and Einstellung effects.
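As a small, concrete illustration of the B − A − 2C set solution discussed above, the following Python sketch checks the practiced formula against the shorter direct route on jar capacities of the kind Luchins used; the exact values here are illustrative, not a reproduction of his problem list.

```python
# Minimal sketch of the Luchins water-jar set effect (illustrative jar values).

def einstellung_solution(a, b, c):
    """The practiced formula: fill B, pour off A once and C twice."""
    return b - a - 2 * c

def direct_solution(a, c):
    """The shorter route available on the critical problems."""
    return a - c

problems = [
    # (A, B, C, target)
    (21, 127, 3, 100),  # set-inducing problem: only B - A - 2C works
    (23, 49, 3, 20),    # critical problem: both methods reach the target
    (28, 76, 3, 25),    # extinction-style problem: only the direct route works
]

for a, b, c, target in problems:
    formula_ok = einstellung_solution(a, b, c) == target
    direct_ok = direct_solution(a, c) == target
    print(f"A={a:3} B={b:3} C={c:2} target={target:3} | "
          f"B-A-2C={'ok' if formula_ok else 'fails'} | "
          f"A-C={'ok' if direct_ok else 'fails'}")
```

Under the Einstellung set, subjects keep applying the first formula even on the later problems, where the direct route is available or, in the extinction problem, required.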
https://en.wikipedia.org/wiki/Einstellung_effect
Functional fixednessis acognitive biasthat limits a person to use an object only in the way it is traditionally used. The concept of functional fixedness originated inGestalt psychology, a movement in psychology that emphasizesholisticprocessing.Karl Dunckerdefined functional fixedness as being a mental block against using an object in a new way that is required tosolve a problem.[1]This "block" limits the ability of an individual to use components given to them to complete a task, as they cannot move past the original purpose of those components. For example, if someone needs a paperweight, but they only have a hammer, they may not see how the hammer can be used as a paperweight. Functional fixedness is this inability to see a hammer's use as anything other than for pounding nails; the person fails to think to use the hammer in a way other than in its conventional function. When tested, five-year-old children show no signs of functional fixedness. It has been argued that this is because at age five, any goal to be achieved with an object is equivalent to any other goal. However, by age seven, children have acquired the tendency to treat the originally intended purpose of an object as special.[2] Experimental paradigms typically involvesolving problemsin novel situations in which the subject has the use of a familiar object in an unfamiliar context. The object may be familiar from the subject's past experience or from previous tasks within an experiment. In a classic experiment demonstrating functional fixedness,Duncker(1945)[1]gave participants a candle, a box of thumbtacks, and a book of matches, and asked them to attach the candle to the wall so that it did not drip onto the table below. Duncker found that participants tried to attach the candle directly to the wall with the tacks, or to glue it to the wall by melting it. Very few of them thought of using the inside of the box as a candle-holder and tacking this to the wall. In Duncker's terms, the participants were "fixated" on the box's normal function of holding thumbtacks and could not re-conceptualize it in a manner that allowed them to solve the problem. For instance, participants presented with an empty tack box were two times more likely to solve the problem than those presented with the tack box used as a container.[3] More recently, Frank and Ramscar (2003)[4]gave a written version of the candle problem to undergraduates atStanford University. When the problem was given with identical instructions to those in the original experiment, only 23% of the students were able to solve the problem. For another group of students, the noun phrases such as "box of matches" were underlined, and for a third group, the nouns (e.g., "box") were underlined. For these two groups, 55% and 47% were able to solve the problem effectively. In a follow-up experiment, all the nouns except "box" were underlined and similar results were produced. The authors concluded that students' performance was contingent on their representation of the lexical concept "box" rather than instructional manipulations. The ability to overcome functional fixedness was contingent on having a flexible representation of the word box which allows students to see that the box can be used when attaching a candle to a wall. When Adamson (1952)[3]replicated Duncker's box experiment, Adamson split participants into two experimental groups: preutilization and no preutilization. 
In this experiment, when there is preutilization, meaning when objects are presented to participants in a traditional manner (materials are in the box, thus using the box as a container), participants are less likely to consider the box for any other use, whereas with no preutilization (when boxes are presented empty), participants are more likely to think of other uses for the box. Birch and Rabinowitz (1951)[5]adapted the two-cord problem from experiments byNorman Maier(1930, 1931), where a participant would be shown two cords hanging from the ceiling and instructed to connect them, but the cords are far enough apart so that the participant cannot reach one while holding the other. The only solution was to tie a heavy object to one cord as a weight, making it possible to swing the cord as a pendulum, then catch the swinging cord while holding the stationary cord, and tie them together. The only heavy objects provided were an electrical switch and an electrical relay. Participants were questioned on their choice between the two objects after successfully solving the problem. The participants were split into three groups: Group R was given a pretest task to complete an electrical circuit using a relay, Group S completed an identical circuit using a switch, and Group C was the control group made up of engineering students and was given no pretraining. Participants from Group C used both objects equally as the pendulum weight, while Group R exclusively used the switch as the pendulum weight, and most from Group S used the relay. When questioned on their choice, participants argued that whichever object they had used was obviously better suited for solving the problem. Their previous experience emphasised the other object as an electrical object, and functional fixedness prevented them from seeing it as being used for another purpose. The barometer question is an example of an incorrectly designed examination question demonstrating functional fixedness that causes a moral dilemma for the examiner. In its classic form, popularized by American test designer professor Alexander Calandra (1911–2006), the question asked the student to "show how it is possible to determine the height of a tall building with the aid of a barometer?"[6]The examiner was confident that there was one, and only one, correct answer. Contrary to the examiner's expectations, the student responded with a series of completely different answers. These answers were also correct, yet none of them proved the student's competence in the specific academic field being tested. Calandra presented the incident as a real-life,first-personexperience that occurred during theSputnik crisis.[7]Calandra's essay, "Angels on a Pin", was published in 1959 inPride, a magazine of theAmerican College Public Relations Association.[8]It was reprinted inCurrent Sciencein 1964,[9]reprinted again inSaturday Reviewin 1968,[10]and included in the 1969 edition of Calandra'sThe Teaching of Elementary Science and Mathematics.[11]In the same year (1969), Calandra's essay became a subject of an academic discussion.[12]The essay has been referenced frequently since,[13]making its way into books on subjects ranging from teaching,[14]writing skills,[15]workplace counseling,[16]and investment inreal estate[17]tochemical industry,[18]computer programming,[19]andintegrated circuitdesign.[20] Researchers have investigated whether functional fixedness is affected byculture. 
In a recent study, preliminary evidence supporting the universality of functional fixedness was found.[21]The study's purpose was to test if individuals from non-industrialized societies, specifically with low exposure to "high-tech" artifacts, demonstrated functional fixedness. The study tested theShuar, hunter-horticulturalists of the Amazon region of Ecuador, and compared them to a control group from an industrial culture. The Shuar community had only been exposed to a limited amount of industrialized artifacts, such as machete, axes, cooking pots, nails, shotguns, and fishhooks, all considered "low-tech". Two tasks were assessed to participants for the study: the box task, where participants had to build a tower to help a character from a fictional storyline to reach another character with a limited set of varied materials; the spoon task, where participants were also given a problem to solve based on a fictional story of a rabbit that had to cross a river (materials were used to represent settings) and they were given varied materials including a spoon. In the box-task, participants were slower to select the materials than participants in control conditions, but no difference in time to solve the problem was seen. In the spoon task, participants were slower in selection and completion of task. Results showed that Individuals from non-industrial ("technologically sparse cultures") were susceptible to functional fixedness. They were faster to use artifacts without priming than when design function was explained to them. This occurred even though participants were less exposed to industrialized manufactured artifacts, and that the few artifacts they currently use were used in multiple ways regardless of their design.[21] Investigators examined in two experiments "whether the inclusion of examples with inappropriate elements, in addition to the instructions for a design problem, would produce fixation effects in students naive to design tasks".[22]They examined the inclusion of examples of inappropriate elements, by explicitly depicting problematic aspects of the problem presented to the students through example designs. They tested non-expert participants on three problem conditions: with standard instruction, fixated (with inclusion of problematic design), and defixated (inclusion of problematic design accompanied with helpful methods). They were able to support their hypothesis by finding that a) problematic design examples produce significant fixation effects, and b) fixation effects can be diminished with the use of defixating instructions. In "The Disposable Spill-Proof Coffee Cup Problem", adapted from Janson & Smith, 1991, participants were asked to construct as many designs as possible for an inexpensive, disposable, spill-proof coffee cup. Standard condition participants were presented only with instructions. In the fixated condition, participants were presented with instructions, a design, and problems they should be aware of. Finally, in the defixated condition, participants were presented the same as other conditions in addition to suggestions of design elements they should avoid using. The other two problems included building a bike rack, and designing a container for cream cheese. Based on the assumption that students are functionally fixed, a study onanalogical transferin the science classroom shed light on significant data that could provide an overcoming technique for functional fixedness. 
The findings support the claim that students show positive transfer (improved performance) on problem solving after being presented with analogies of a certain structure and format.[23] The study expanded on Duncker's experiments from 1945 by trying to demonstrate that when students were "presented with a single analogy formatted as a problem, rather than as a story narrative, they would orient the task of problem-solving and facilitate positive transfer".[23]

A total of 266 freshman students from a high-school science class participated in the study. The experiment used a 2x2 design crossing "task context" conditions (analog type and format) with "prior knowledge" conditions (specific vs. general). Students were classified into five different groups: four according to their prior science knowledge (ranging from specific to general), and one serving as a control group (no analog presentation). The four groups were then assigned to "analog type and analog format" conditions, structural or surface types and problem or surface formats. Inconclusive evidence was found for positive analogical transfer based on prior knowledge; however, the groups did show variability. The problem format combined with the structural type of analog presentation showed the highest positive transfer to problem solving. The researcher suggested that a well-planned analogy, relevant in format and type to the problem-solving task to be completed, can help students overcome functional fixedness. The study not only brought new knowledge about the human mind at work but also provides tools for educational purposes and possible changes that teachers can apply to lesson plans.[23]

One study suggests that functional fixedness can be combated by abstracting design decisions away from functionally fixed designs so that the essence of the design is kept (Latour, 1994).[24] This helps the subjects who have created functionally fixed designs understand how to solve general problems of this type, rather than reusing the fixed solution for one specific problem. Latour investigated this by having software engineers analyze a fairly standard bit of code, the quicksort algorithm, and use it to create a partitioning function. Part of the quicksort algorithm involves partitioning a list into subsets so that it can be sorted; the experimenters wanted to reuse the code from within the algorithm to do just the partitioning (a minimal sketch of this kind of reuse appears below). To do this, they abstracted each block of code in the function, discerning its purpose and deciding whether it was needed for the partitioning algorithm. This abstraction allowed them to reuse the code from the quicksort algorithm to create a working partition algorithm without having to design it from scratch.[24]

A comprehensive study exploring several classical functional fixedness experiments found an overarching theme of overcoming prototypes. Those who were successful at completing the tasks had the ability to look beyond the prototype, the original intention for the item in use. Conversely, those who could not create a successful finished product could not move beyond the original use of the item. This appeared to hold for functional fixedness categorization studies as well: reorganizing seemingly unrelated items into categories was easier for those who could look beyond intended function. Therefore, there is a need to overcome the prototype in order to avoid functional fixedness.
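The Latour study is described above only in outline. As a rough sketch of the kind of reuse it involved, the partition step of a quicksort can be separated from the sorting routine and called on its own; the Lomuto-style scheme and the names below are assumptions for illustration, not Latour's original code.

```python
# Rough sketch: quicksort's partition step, lifted out and reused on its own.

def partition(items, lo, hi):
    """Rearrange items[lo:hi+1] so elements <= pivot precede it; return pivot index."""
    pivot = items[hi]
    i = lo
    for j in range(lo, hi):
        if items[j] <= pivot:
            items[i], items[j] = items[j], items[i]
            i += 1
    items[i], items[hi] = items[hi], items[i]
    return i

def quicksort(items, lo=0, hi=None):
    """Standard quicksort built on top of partition()."""
    if hi is None:
        hi = len(items) - 1
    if lo < hi:
        p = partition(items, lo, hi)
        quicksort(items, lo, p - 1)
        quicksort(items, p + 1, hi)

# Reused outside of sorting: split a list around a chosen pivot value.
data = [7, 2, 9, 4, 5]
split = partition(data, 0, len(data) - 1)   # pivot is the last element, 5
smaller, pivot_and_larger = data[:split], data[split:]
```

Abstracting the partition block away from its sorting context is exactly the move that functionally fixed designers tend not to make: they see the code only as "part of quicksort" rather than as a reusable way of splitting a list around a pivot.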
Carnevale (1998)[25] suggests analyzing the object and mentally breaking it down into its components. After that is done, it is essential to explore the possible functions of those parts. In doing so, an individual may become familiar with new ways to use the items available to them. Individuals are thereby thinking creatively and overcoming the prototypes that limit their ability to complete the functional fixedness problem successfully.[25]

For each object, one needs to decouple its function from its form. McCaffrey (2012)[26] describes a highly effective technique for doing so: as you break an object into its parts, ask yourself two questions. "Can I subdivide the current part further?" If yes, do so. "Does my current description imply a use?" If yes, create a more generic description involving its shape and material. For example, a candle is first divided into its parts: wick and wax. The word "wick" implies a use (burning to emit light), so it is described more generically as a string. Since "string" still implies a use, it is described more generically again: interwoven fibrous strands. This brings to mind that the wick could be used, say, to make a wig for a hamster. Since "interwoven fibrous strands" does not imply a use, one can stop working on the wick and start working on the wax. People trained in this technique solved 67% more problems that suffered from functional fixedness than a control group. The technique systematically strips away all the layers of associated uses from an object and its parts.[27]
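A minimal sketch of the generic-parts decomposition just described, assuming a hand-built parts tree and hand-labelled "implies a use" judgments; both are illustrative stand-ins for the human judgments McCaffrey's technique actually relies on.

```python
# Illustrative sketch of McCaffrey's generic-parts technique.
# The parts tree and the "implies a use" labels are hand-made assumptions,
# standing in for the human judgments the real technique relies on.

parts = {
    "candle": ["wick", "wax"],
    "wick": ["string"],
    "string": ["interwoven fibrous strands"],
}
implies_a_use = {"candle", "wick", "string"}  # descriptions that still suggest a function

def generic_parts(item, depth=0):
    """Recursively subdivide and re-describe until no description implies a use."""
    print("  " * depth + item)
    for part in parts.get(item, []):
        if part in implies_a_use:
            # Question 2: this description implies a use, so re-describe more generically.
            generic_parts(part, depth + 1)
        else:
            # A use-free description (shape/material only) has been reached; stop here.
            print("  " * (depth + 1) + part + "  <- use-free description")

generic_parts("candle")
```

Walking the tree this way reproduces the wick, string, "interwoven fibrous strands" chain from the example above, stopping as soon as a description no longer suggests a function.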
https://en.wikipedia.org/wiki/Functional_fixedness
Insociology, theiron cageis a concept introduced byMax Weberto describe the increasedrationalizationinherent in social life, particularly in Westerncapitalistsocieties. The "iron cage" thus traps individuals in systems based purely onteleologicalefficiency, rational calculation and control. Weber also described thebureaucratizationofsocial orderas "the polar night of icy darkness".[1] The originalGermanterm isstahlhartes Gehäuse(steel-hard casing); this was translated into "iron cage", an expression made familiar toEnglish-speakers byTalcott Parsonsin his 1930 translation of Weber'sThe Protestant Ethic and the Spirit of Capitalism.[2]This choice has been questioned recently by scholars who prefer the more direct translation: "shell as hard as steel".[2][3] Weber (in Parsons' translation) wrote: InBaxter's view the care for external goods should only lie on the shoulders of the 'saint like a light cloak, which can be thrown aside at any moment.' But fate decreed that the cloak should become an iron cage.[4] In his 1904 bookThe Protestant Ethic and the Spirit of Capitalism, Weber introduces the metaphor of an "iron cage": The Puritan wanted to work in a calling; we are forced to do so. For when asceticism was carried out of monastic cells into everyday life, and began to dominate worldly morality, it did its part in building the tremendous cosmos of the modern economic order. This order is now bound to the technical and economic conditions of machine production which to-day determine the lives of all the individuals who are born into this mechanism, not only those directly concerned with economic acquisition, with irresistible force. Perhaps it will so determine them until the last ton of fossilized coal is burnt. In Baxter's view the care for external goods should only lie on the shoulders of the "saint like a light cloak, which can be thrown aside at any moment". But fate decreed that the cloak should become an iron cage. According to Weber, the market-dominated economic order was created by innovative, religiously motivated economic forces. But the individual today can no longer engage in such creative action. Instead, the worker must operate in a narrowly-defined specialization, and economic enterprises must continually strive to maximize profits and rationalize their production for the sake of efficiency. This is the present-day iron cage of institutionalized capitalism. Weber presents his argument in anironicform. Religion of a particular sort was necessary to revolutionize the economy and the world. A Protestant ethic drove the reorganization of traditional economic life to become a calculating efficient system. But now such religious views are no longer needed to sustain capitalism. Moreover, the systematic efficient calculations of capitalism help propel thesecularizationof the world and the decline in religious belief. "The course of development," Weber argues, "involves... the bringing in of calculation into the traditional brotherhood, displacing the old religious relationship."[5] Bureaucracies were distinct from thefeudal systemandpatrimonialismwhere people were promoted on the basis of personal relationships.[6]In bureaucracies, there was a set of rules that are clearly defined and promotion through technical qualifications,seniority[7]and disciplinary control. 
Weber believes that this influenced modern society[8]and how we operate today, especially politically.[9] Bureaucratic formalism is often connected to Weber's metaphor of the iron cage because the bureaucracy is the greatest expression of rationality. Weber wrote that bureaucracies are goal-oriented organizations that are based on rational principles that are used to efficiently reach their goals.[10]However, Weber also recognizes that there are constraints within the "iron cage" of such a bureaucratic system.[11] Bureaucracies concentrate large amounts of power in a small number of people and are generally unregulated.[12]Weber believed that those who control these organizations control the quality of our lives as well. Bureaucracies tend to generateoligarchy; which is where a few officials are the political and economic power. According to Weber, because bureaucracy is a form of organization superior to all others,[13]further bureaucratization and rationalization may be aninescapable fate.[14] Because of these aforementioned reasons, there will be an evolution of an iron cage, which will be a technically ordered, rigid, dehumanized society.[15]The iron cage is the one set of rules and laws that we are all subjected and must adhere to.[16]Bureaucracy puts us in an iron cage, which limits individual human freedom and potential instead of a "technological eutopia" that should set us free.[15][17]It is the way of the institution, where we do not have a choice anymore.[18]Oncecapitalismcame about, it was like a machine that you were being pulled into without an alternative option.[19]Laws of bureaucracies include the following:[20] "Rational calculation ... reduces every worker to a cog in thisbureaucraticmachine and, seeing himself in this light, he will merely ask how to transform himself... to a bigger cog... The passion for bureaucratization at this meeting drives us to despair."[21] Bureaucratic hierarchies can control resources in pursuit of their own personal interests,[31]which impacts society's lives greatly and society has no control over this. It also affects society's political order and governments because bureaucracies were built to regulate these organizations, butcorruptionremains an issue.[32]The goal of the bureaucracy has a single-minded pursuit[33]that can ruinsocial order; what might be good for the organization might not be good for thesocietyas a whole, which can later harm the bureaucracy's future.[34]Formal rationalization[35]in bureaucracy has its problems as well. There are issues of control,depersonalizationand increasing domination. Once the bureaucracy is created, the control is indestructible.[36]There is only one set of rules and procedures, which reduces everyone to the same level.Depersonalizationoccurs because individual situations are not accounted for.[37]Most importantly, the bureaucracies will become more dominating over time unless they are stopped. In an advancedindustrial-bureaucratic society, everything becomes part of the expanding machine, even people.[38] While bureaucracies are supposed to be based onrationalization, they act in the exact opposite manner. Political bureaucracies are established so that they protect ourcivil liberties, but they violate them with their imposing rules.Developmentand agricultural bureaucracies are set so that they help farmers, but put them out of business due tomarket competitionthat the bureaucracies contribute to. 
Service bureaucracies like health care are set up to help the sick and elderly, but then deny care based on specific criteria.[15]

Weber argues that bureaucracies have come to dominate modern society's social structure,[39] yet we need these bureaucracies to help regulate our complex society.[citation needed] Bureaucracies may have intentions that some find desirable, but they tend to undermine human freedom and democracy in the long run.[40]

Rationalization destroyed the authority of magical powers, but it also brought into being the machine-like regulation of bureaucracy, which ultimately challenges all systems of belief.[41]

According to Weber, society sets up these bureaucratic systems, and it is up to society to change them. Weber argues that it is very difficult to change or break these bureaucracies, but if they are indeed socially constructed, then society should be able to intervene and shift the system.
https://en.wikipedia.org/wiki/Iron_cage
A panacea (/pænəˈsiːə/) is any supposed remedy that is claimed (for example) to cure all diseases and prolong life indefinitely. Named after the Greek goddess of universal remedy Panacea, it was in the past sought by alchemists in connection with the elixir of life and the philosopher's stone, a mythical substance that would enable the transmutation of common metals into gold. Through the 18th and 19th centuries, many "patent medicines" were claimed to be panaceas, and they became very big business. The term "panacea" is used in a negative way to describe the overuse of any one solution to solve many different problems, especially in medicine.[1] The word has acquired connotations of snake oil and quackery.[2] A panacea (or panaceum) is also a literary term for any solution that solves all problems related to a particular issue.[3]

In Greek mythology, Panacea was one of the daughters of the Greek god of medicine Asclepius, along with her four sisters, each of whom performed one aspect of health care.[4] According to the mythology, Panacea had an elixir or potion with which she was able to heal any human malady, and her name has become interchangeable with the name of the cure itself.[5][2]

Ancient Greek and Roman scholars described various kinds of plants that were called panacea or panaces, such as Opopanax sp., Centaurea sp., Levisticum officinale, Achillea millefolium and Echinophora tenuifolia.[6] The Cahuilla people of the Colorado Desert region of California used the red sap of the elephant tree (Bursera microphylla) as a panacea.[7] The Latin genus name of ginseng is Panax ("panacea"), reflecting the Linnean understanding that traditional Chinese medicine used ginseng widely as a cure-all.[8]

In 1581 the Dutch doctor Giles Everard (also known as Gilles Everaerts) published On the Panacea Herb [De herba panacea], a book that implied that tobacco, then growing in popularity after its recent introduction to Europe, was the long-lost ancient panacea. A work attributed to him appeared in English in 1659, entitled Panacea; Or The Universal Medicine: Being a Discovery of the Wonderfull Vertues of Tobacco Taken in a Pipe, with Its Operation and Use Both in Physick and Chyrurgery.[9][10]

"The increase of ... patent medicines within the 19th century, is an evil over which the friends of science and humanity can never cease to mourn."

The cure-alls became known as "patent medicines", which grew in number from the late 17th century with increasing marketing. Some found favour with royalty and were issued letters patent authorising the use of the royal endorsement in advertising. Eighteenth-century England has been popularly referred to as the golden age of physic, owing to the widespread availability and consumption of enormous quantities of proprietary medicines, many of which were principally laxatives but came with the added claim that they somehow purified the blood and so cured all manner of illness.[11][12][13]: Ch.6

The first such preparation known to use the term "panacea" for promotion was the Panacea of William Swaim, starting in 1820. Defending the use of that term later, he stated that it was often used "in the restricted sense of a remedy for a large class of diseases, and not in its literal and more comprehensive meaning."
He started publishing pamphlets to promote it in 1822, and in 1824 he published a book with the title A Treatise on Swaim's Panacea; Being a Recent Discovery for the Cure of Scrofula or King's Evil, Mercurial Disease, Deep-Seated Syphilis, Rheumatism, and All Disorders Arising from a Contaminated or Impure State of the Blood. The Philadelphia Medical Society took particular exception, forming a committee to tackle quack medicines, which reported that the Panacea was neither effective nor safe.[13]: Ch.5

Touting these nostrums with implausible claims was one of the first major projects of the advertising industry. An early pioneer in the use of advertising to promote patent medicine was New York businessman Benjamin Brandreth, whose "Vegetable Universal Pill" eventually became one of the best-selling patent medicines in the United States.[14] For fifty years Brandreth's name was a household word in the United States;[15] the Brandreth pills were a purgative that allegedly cured many ills by purging toxins out of the blood, which he claimed was the cause of all maladies. In the absence of proof, Brandreth justified this claim by quoting scripture, Leviticus 17:11: "The life of the flesh is in the blood."[13]: Ch.6 An advertisement from 1865 claimed that "By their use acute disease of every kind is cured. Perseverance will cure most chronic cases."[16] They became so well known that they received mention in Herman Melville's classic novel Moby-Dick.[17]

Similarly, James Morison was a British quack-physician who sold "Hygeian Vegetable Universal Medicine", advertised as "A cure for all curable ills". Morison established his own medical school, The British College of Health, which trained agents known as Hygeists to sell the pills. However, satirists were brutal in their attacks on the business and its gullible clients, with caricatures even showing people re-growing severed limbs.[18][19]

Many other patent medicines made claims to cure implausibly wide-ranging conditions. An early nineteenth-century advertisement for Daffy's Elixir said it was used for the following ailments: the Stone in Babies and Children; Convulsion fits; Consumption and Bad Digestives; Agues; Piles; Surfeits; Fits of the Mother and Vapours from the Spleen; Green Sickness; Children's Distempers, whether the Worms, Rickets, Stones, Convulsions, Gripes, King's Evil, Joint Evil or any other disorder proceeding from Wind or Crudities; Gout and Rheumatism; Stone or Gravel in the Kidnies; Cholic and Griping of the Bowels; the Phthisic; Dropsy and Scurvy.[20] In 1891, Dr. John Collis Browne's Chlorodyne was advertised as a treatment for coughs, consumption, bronchitis, asthma, diphtheria, fever, croup, ague, diarrhoea, cholera, dysentery, epilepsy, hysteria, palpitation, spasms, neuralgia, rheumatism, gout, cancer, toothache, meningitis, etc.[21] Even Coca-Cola was marketed as a patent medicine in its early days: it was claimed to cure many diseases, including morphine addiction, indigestion, nerve disorders, headaches, and impotence.[22]

The conventional medical profession pushed back against the claims of panaceas. In 1828, the New York state medical society adopted one of America's first medical ethics codes, which stated that patent medicines were not to be tolerated. From about the 1830s, one of the principal targets was Morison's "Hygeian Vegetable Universal Medicine".
Morison retaliated by appealing to what is now known as the logical fallacy of the "appeal to nature", criticizing the medical reliance on chemicals in contrast to his remedies made from natural vegetables. In contrast to their advertised safety, Morison's pills could be dangerous, and fatal if taken in large enough quantities. In 1836, John MacKenzie, aged 32, who had been diagnosed with rheumatism in the knee, died after one of Morison's agents gave him 1,000 pills over 20 days; that agent was indicted and found guilty of manslaughter. The following year, investigations in York found that excessive consumption of Morison's pills had caused 12 deaths. Morison himself evaded punishment, as the charges were against his agents. After Morison died in 1840 and his sons took over, they expanded the product range. The pills were finally withdrawn from sale in the 1920s.[13]: Ch.5[23][24]

The founding of the Pharmaceutical Society of Great Britain in 1841 marked another step away from patent medicines and panaceas.[23]

Legislation covering claims of cure-all preparations varies by jurisdiction. In Australia, the criteria for the registration of drugs and other therapeutic goods in the Australian Register of Therapeutic Goods, which include guidelines on advertising, labelling, and product design, are outlined in the Therapeutic Goods Act, Regulations, and Orders. Other aspects, like the scheduling of substances and the secure storage of therapeutic goods, are subject to State or Territory laws.[25]

In 1906, the United States passed the first Pure Food and Drug Act. This statute did not ban the alcohol, narcotics, and stimulants in the medicines; it required them to be labelled as such, and curbed some of the more misleading, overstated, or fraudulent claims that appeared on the labels. In 1936 the statute was revised to ban them, and the United States entered a long period of ever more drastic reductions in the medications available without the mediation of physicians and prescriptions. Morris Fishbein, editor of the Journal of the American Medical Association, who was active in the first half of the 20th century, based much of his career on exposing quacks and driving them out of business.[26][27]

In the United States, there have been cases of products that claimed to cure a great many diseases. Seasilver, a commercial dietary supplement sold via a multi-level marketing plan, was promoted with the false claim that it could "cure 650 diseases", resulting in the prosecution and fining of the owners.[28]
https://en.wikipedia.org/wiki/Panacea_(medicine)
Elegant variation is the use of synonyms to avoid repetition or add variety. The term was introduced in 1906 by H. W. Fowler and F. G. Fowler in The King's English. In their meaning of the term, they focus particularly on instances when the word being avoided is a noun or its pronoun. Pronouns are themselves variations intended to avoid awkward repetition, and variations are so often unnecessary that they should be used only when needed. The Fowlers recommend that "variations should take place only when there is some awkwardness, such as ambiguity or noticeable monotony, in the word avoided".[1]

Henry Fowler's later Dictionary of Modern English Usage, published in 1926, keeps the same definition, but more explicitly cautions against overuse of variations or synonyms by writers who are "intent on expressing themselves prettily" rather than "conveying their meaning clearly", adding that "there are few literary faults so prevalent." Fowler then quotes examples of when variations should have been used, and when they should not have been used.[2]

Since the term was established in 1906, it has been referred to in style and usage guides, but the original meaning has seen a number of variations. For example, Bryan A. Garner suggests that when Fowler uses the word "elegant", he actually means the opposite, "inelegant", because, according to Garner, at the time Fowler wrote, the word "elegant" was an "almost pejorative" word. Garner also claims that Fowler used the term elegant variation to refer to the "practice of never using the same word twice in the same sentence or passage". That is not Fowler's definition, and, as Richard W. Bailey points out, in misrepresenting Fowler, "Garner has created a linguist made of straw".[3][4] Nevertheless, following Garner, inelegant variation has been used by others, including Gerald Lebovits[5] and Wayne Schiess.[6]

An example of such variation, shifting from "The Emperor" to "His Majesty" to "the Monarch": "The Emperor received yesterday and to-day General Baron von Beck ... It may therefore be assumed with some confidence that the terms of a feasible solution are maturing themselves in His Majesty's mind and may form the basis of further negotiations with Hungarian party leaders when the Monarch goes again to Budapest."[7]

In French, the use of elegant variations is considered essential for good style.[12][13] A humorist imagined writing a news article about Gaston Defferre: "It's OK to say Defferre once, but not twice. So next you say the Mayor of Marseille. Then, the Minister of Planning. Then, the husband of Edmonde. Then, Gaston. Then, Gastounet and then ... Well, then you stop talking about him because you don't know what to call him next."[14]
https://en.wikipedia.org/wiki/Elegant_variation
A figure of speech or rhetorical figure is a word or phrase that intentionally deviates from straightforward language use or literal meaning to produce a rhetorical or intensified effect (emotionally, aesthetically, intellectually, etc.).[1][2] In the distinction between literal and figurative language, figures of speech constitute the latter. Figures of speech are traditionally classified into schemes, which vary the ordinary sequence of words, and tropes, where words carry a meaning other than what they ordinarily signify. An example of a scheme is polysyndeton: the repetition of a conjunction before every element in a list, whereas the conjunction typically would appear only before the last element, as in "Lions and tigers and bears, oh my!", emphasizing the danger and number of animals more than the prosaic wording with only the second "and". An example of a trope is the metaphor, describing one thing as something it clearly is not, as a way to illustrate by comparison, as in "All the world's a stage."

Classical rhetoricians classified figures of speech into four categories, the quadripartita ratio.[3] These categories are often still used. The earliest known text listing them, though not explicitly as a system, is the Rhetorica ad Herennium, of unknown authorship, where they are called πλεονασμός (pleonasmos, addition), ἔνδεια (endeia, omission), μετάθεσις (metathesis, transposition) and ἐναλλαγή (enallage, permutation).[4] Quintilian then mentioned them in Institutio Oratoria.[5] Philo of Alexandria also listed them as addition (πρόσθεσις, prosthesis), subtraction (ἀφαίρεσις, afairesis), transposition (μετάθεσις, metathesis), and transmutation (ἀλλοίωσις, alloiosis).[6]

Figures of speech come in many varieties.[7] The aim is to use language imaginatively to accentuate the effect of what is being said. A few examples follow:

Scholars of classical Western rhetoric have divided figures of speech into two main categories: schemes and tropes. Schemes (from the Greek schēma, 'form or shape') are figures of speech that change the ordinary or expected pattern of words. For example, the phrase "John, my best friend" uses the scheme known as apposition. Tropes (from Greek trepein, 'to turn') change the general meaning of words. An example of a trope is irony, which is the use of words to convey the opposite of their usual meaning ("For Brutus is an honorable man; / So are they all, all honorable men").

During the Renaissance, scholars meticulously enumerated and classified figures of speech. Henry Peacham, for example, in his The Garden of Eloquence (1577), enumerated 184 different figures of speech. Professor Robert DiYanni, in his book Literature: Reading Fiction, Poetry, Drama and the Essay,[8] wrote: "Rhetoricians have catalogued more than 250 different figures of speech, expressions or ways of using words in a nonliteral sense."

For simplicity, this article divides the figures between schemes and tropes, but does not further sub-classify them (e.g., "Figures of Disorder"). Within each category, words are listed alphabetically. Most entries link to a page that provides greater detail and relevant examples, but a short definition is placed here for convenience. Some of those listed may be considered rhetorical devices, which are similar in many ways. Schemes are words or phrases whose syntax, sequence, or pattern occurs in a manner that varies from ordinary usage. Tropes are words or phrases whose contextual meaning differs from the manner or sense in which they are ordinarily used.
Using these formulas, a pupil could render the same subject or theme in a myriad of ways. For the mature author, this principle offered a set of tools to rework source texts into a new creation. In short, the quadripartita ratio offered the student or author a ready-made framework, whether for changing words or the transformation of entire texts. Since it concerned relatively mechanical procedures of adaptation that for the most part could be learned, the techniques concerned could be taught at school at a relatively early age, for example in the improvement of pupils' own writing.
https://en.wikipedia.org/wiki/Figure_of_speech
Owing to its origin in ancient Greece and Rome, English rhetorical theory frequently employs Greek and Latin words as terms of art. This page explains commonly used rhetorical terms in alphabetical order. The brief definitions here are intended to serve as a quick reference rather than an in-depth discussion.
https://en.wikipedia.org/wiki/Glossary_of_rhetorical_terms
Graphomania (from Ancient Greek: γρᾰ́φειν, gráphein, lit. 'to write';[1] and μᾰνῐ́ᾱ, maníā, lit. 'madness, frenzy'),[2] also known as scribomania, is an obsessive impulse to write.[3][4] When used in a specific psychiatric context, it labels a morbid mental condition which results in writing rambling and confused statements, often degenerating into a meaningless succession of words or even nonsense, then called graphorrhea[5] (see hypergraphia). The term "graphomania" was used in the early 19th century by Esquirol and later by Eugen Bleuler, becoming more or less common.[6] Graphomania is related to typomania, an obsession with seeing one's name in publication or with writing in order to be published, excessive symbolism, or typology.[7]

Outside the psychiatric definitions of graphomania and related conditions, the word is used more broadly to label the urge and need to write excessively, whether professionally or not. Max Nordau, in his attack on what he saw as degenerate art, frequently used the term "graphomania" to label the production of the artists he condemned (most notably Richard Wagner[8] or the French symbolist poets[8]).

In The Book of Laughter and Forgetting (1979), Milan Kundera explains the proliferation of non-professional writing as follows: graphomania inevitably takes on epidemic proportions when a society develops to the point of creating three basic conditions.

Czesław Miłosz, winner of the Nobel Prize for Literature in 1980, used the term "graphomania" in a context much different from Kundera's. In The Captive Mind (1951), Miłosz wrote that the typical writer in the Eastern Bloc who accepted socialist realism "believes that the by-ways of 'philosophizing' lead to a greater or lesser degree of graphomania. Anyone gripped in the claws of dialectics [the philosophy of dialectical materialism] is forced to admit that the thinking of private philosophers, unsupported by citations [failing to regurgitate Stalinist propaganda], is sheer nonsense."[9]

Entopic graphomania is a surrealist drawing exercise designed to highlight patterns and meaning in pieces of paper, including newspapers, blank pieces of copy paper, and pages of a book.[10] The process consists of closely examining a page for distinguishing features (folds, creases, blank spaces) and marking them with a writing utensil. These marks are then connected by any type of line (squiggly, straight, dotted, etc.).
https://en.wikipedia.org/wiki/Graphomania
Hypergraphiais a behavioral condition characterized by the intense desire to write or draw. Forms of hypergraphia can vary in writing style and content. It is a symptom associated withtemporal lobechanges inepilepsyand inGeschwind syndrome.[1]Structures that may have an effect on hypergraphia when damaged due to temporal lobe epilepsy are thehippocampusandWernicke's area. Aside from temporal lobe epilepsy, chemical causes may be responsible for inducing hypergraphia. American neurologistsStephen WaxmanandNorman Geschwindwere the first to describe hypergraphia, in the 1970s.[2]The patients they observed displayed highly compulsive detailed writing, sometimes with literary creativity. The patients kept diaries, which some used to meticulously document minute details of their everyday activities, write poetry, or create lists. Case 1 of their study wrote lists of her relatives, her likes and dislikes, and the furniture in her apartment. Beside lists, the patient wrote poetry, often with a moral or philosophical undertone. She described an incident in which she wrote the lyrics of a song she learned when she was 17 several hundred times and another incident in which she felt the urge to write a word over and over again. Another patient wroteaphorismsand certain sentences in repetition.[2] A patient from a separate study experienced continuous "rhyming in his head" for five years after a seizure and said that he "felt the need to write them down."[3]The patient did not talk in rhyme, nor did he read poetry. Language capacity and mental status were normal for this patient, except for recorded right-temporal spikes onelectroencephalograms. This patient had right-hemisphere epilepsy. FunctionalMRIscans of other studies suggest that rhyming behavior is produced in the left hemisphere, but Mendez proposed that postictal hypoactivity of the right hemisphere may induce a release of writing and rhyming abilities in the left hemisphere.[3] In addition to writing in different forms (poetry, books, repetition of one word), hypergraphia patients differ in the complexity of their writings. While some writers (e.g.Alice Flaherty[4]and Dyane Harwood[5]) use their hypergraphia to help them write extensive papers and books, most patients do not write things of substance. Flaherty describes hypergraphia as a result of decreased temporal lobe function which disinhibits frontal lobe idea and language generation, "sometimes at the expense of quality."[6]Patients hospitalized with temporal lobe epilepsy and other disorders causing hypergraphia have written memos and lists (like their favorite songs) and recorded their dreams in extreme length and detail.[6] There are many accounts of patients writing in nonsensical patterns including writing in a center-seeking spiral starting around the edges of a piece of paper.[7]In one case study, a patient even wrote backward, so that the writing could only be interpreted with the aid of a mirror.[2]Sometimes the writing can consist of scribbles and frantic, random thoughts that are quickly jotted down on paper very frequently. 
Grammar can be present, but the meaning of these thoughts is generally hard to grasp and the sentences are loose.[7]In some cases, patients write extremely detailed accounts of events that are occurring or descriptions of where they are.[7] In some cases, hypergraphia can manifest with compulsive drawing.[8]The composerRobert Schumann, during periods of high musical output, also wrote many long letters to his wife Clara; similarly, Vincent van Gogh had much more written correspondence during bouts of intense painting.[4]Many drawings by patients with hypergraphia exhibit repetition and a high level of detail, sometimes mixing both compulsive writing and drawing together.[9] Some studies have suggested that hypergraphia is related tobipolar disorder,hypomania, andschizophrenia.[10]Although creative ability was observed in the patients of these studies, signs of creativity were observed, not hypergraphia specifically. Therefore, it is difficult to say with absolute certainty that hypergraphia is a symptom of these psychiatric illnesses because creativity in patients with bipolar disorder, hypomania, or schizophrenia may manifest into something aside from writing. However, other studies have shown significant accounts between hypergraphia and temporal lobe epilepsy[11]and chemical causes.[12] Hypergraphia was first studied as a symptom oftemporal lobe epilepsy, a condition of reoccurring seizures caused by excessive neuronal activity, but it is not a common symptom among patients. Less than 10 percent of patients with temporal lobe epilepsy exhibit characteristics of hypergraphia.[medical citation needed]Temporal lobe epilepsy patients may exhibit irritability, discomfort, or an increasing feeling of dread if their writing activity is disrupted.[13]To elicit such responses when interrupting their writing suggests that hypergraphia is a compulsive condition, resulting in an obsessive motivation to write.[10]A temporal lobe epilepsy may influence frontotemporal connections in such a way that the drive to write is increased in thefrontal lobe, beginning with theprefrontalandpremotor cortexplanning out what to write, and then leading to themotor cortex(located next to thecentral fissure) executing the physical movement of writing.[10] Most temporal lobe epilepsy patients who suffer from hypergraphia can write words, but not all may have the capacity to write complete sentences that have meaning.[7] The disorder most often associated with high-output writers is bipolar disorder, especially during hypomania.[14]In fact, temporal lobe epilepsy is more likely to produce hypergraphia if it also produces manic symptoms. While depression has been linked to increased writing, it appears that most writers with depression write little while depressed, and high output periods correspond to rebound mood elevation after the end of a depression, or in mixed mood states.[14] Drugs that boost mood and energy have been known to induce hypergraphia, possibly by increasing activity in brain networks utilizing one of the body's neurotransmitters,dopamine. Dopamine has been known to decreaselatent inhibition, which causes a decrease in the ability to habituate to screen out unexpected stimuli. Low latent inhibition leads to an excessive level of stimulation and could contribute to the onset of hypergraphia and general creativity.[15]This research implies that there is a direct correlation between the levels of dopamine between neuronal synapses and the level of creativity exhibited by the patient. 
Dopamine agonists increase the levels of dopamine between synapses which results in higher levels of creativity, and the opposite is true for dopamine antagonists. In one case study, a patient taking donepezil reported an elevation in mood and energy levels which led to hypergraphia and other excessive forms of speech (such as singing).[16]Six other cases of patients taking donepezil and experiencing mania have been previously reported. These patients also had cases ofdementia, cognitive impairment from acerebral aneurysm, bipolar I disorder, and/or depression. Researchers are unsure why donepezil can induce mania and hypergraphia. It could potentially result from an increase inacetylcholinelevels, which would have an effect on the other neurotransmitters in the brain.[16] Several regions of the brain are involved in the act of written composition. Handwriting depends on the superiorparietal cortex, and motor control areas in thefrontal lobeandcerebellum.[17]An area of the frontal lobe that is especially active is Exner's area, located in thepremotor cortex.[17]Writing creatively and generating ideas, on the other hand, activates multiple sites in the limbic system and cerebral cortex, including the left inferior frontal gyrus (BA 45) and the left temporal pole (BA 38).[18]Lesions toWernicke's area(in the left temporal lobe) can increase speech output, which can sometimes manifest itself in writing.[6]In one study, patients with hippocampal atrophy showed signs of having Geschwind syndrome, including hypergraphia.[19]While epilepsy-induced hypergraphia is usually lateralized to the left cerebral hemisphere in the language areas, hypergraphia associated with lesions and other brain damage usually occurs in the right cerebral hemisphere.[20]Lesions to the right side of the brain usually cause hypergraphia because they can disinhibit language function on the left side of the brain.[6]Hypergraphia has also been known to be caused by right hemisphere strokes and tumors.[7][21] Hypergraphia was one of the central issues in the 1999 trial of Alvin Ridley for the imprisonment and murder of his wifeVirginia Ridley.[22]The mysterious woman, who had died in bed of apparent suffocation, had remained secluded in her home for 27 years in the small town ofRinggold, Georgia, United States. Her 10,000-page journal, which provided abundant evidence that she suffered fromepilepsyand had remained housebound of her own will, was instrumental in the acquittal of her husband.[22] In 1969,Isaac Asimovsaid "I am a compulsive writer".[23]Other artistic figures reported to have been affected by hypergraphia includeVincent van Gogh,[citation needed]Fyodor Dostoevsky,[24]andRobert Burns.[25]Alice in WonderlandauthorLewis Carrollis also said to have had the condition,[26]having written more than 98,000 letters in various formats throughout his life. Some were written backward, inrebus, and in patterns, as with "The Mouse's Tale" inAlice. Eleanor Alice Burford, whose pen-names includedJean Plaidy,Victoria Holt,Philippa Carr,Eleanor Burford,Elbur Ford,Kathleen Kellow,Anna Percival, andEllalice Tate, described herself as a compulsive writer. Naomi Mitchison, often called a doyenne of Scottish literature, writing over 90 books of historical and science fiction, travel writing and autobiography, has been described as a compulsive writer.
https://en.wikipedia.org/wiki/Hypergraphia
An Irish bull is a ludicrous, incongruous or logically absurd statement, generally unrecognized as such by its author. The inclusion of the epithet Irish is a late addition.[1]

John Pentland Mahaffy, Provost of Trinity College, Dublin, observed that "an Irish bull is always pregnant", i.e. with truthful meaning.[2] The "father" of the Irish bull is often said to be Sir Boyle Roche,[3] who once asked "Why should we put ourselves out of our way to do anything for posterity, for what has posterity ever done for us?"[4] Roche may have been Sheridan's model for Mrs Malaprop.[5]

The derivation of "bull" in this sense is unclear. It may be related to Old French boul "fraud, deceit, trickery", Icelandic bull "nonsense", Middle English bull "falsehood", or the verb bull "to befool, mock, cheat".[6]

As the Oxford English Dictionary points out, the epithet "Irish" is a more recent addition, the original word bull for such nonsense having been traced back at least to the early 17th century.[1] By the late 19th century the expression Irish bull was well known, but writers were expressing reservations such as: "But it is a cruel injustice to poor Paddy to speak of the genuine 'bull' as something distinctly Irish, when countless examples of the same kind of blunder, not a whit less startling, are to be found elsewhere." The passage continues, presenting Scottish, English and French specimens in support.[7]
https://en.wikipedia.org/wiki/Irish_bull
A place name is tautological if two differently sounding parts of it are synonymous. This often occurs when a name from one language is imported into another and a standard descriptor is added from the second language. Thus, for example, New Zealand's Mount Maunganui is tautological, since "maunganui" is Māori for "great mountain". The following is a list of place names often used tautologically, plus the languages from which the non-English name elements have come.

Tautological place names are systematically generated in languages such as English and Russian, where the type of the feature is systematically added to a name regardless of whether it contains it already. For example, in Russian, the format "Ozero X-ozero" (i.e. "Lake X-lake") is used. In English, it is usual to do the same for foreign names, even if they already describe the feature, for example Lake Kemijärvi (Lake Kemi-lake), Faroe Islands (literally Sheep-Island Islands, as øy is Modern Faroese for island), or Saaremaa island (island land island).

On rare occasions, such formations may occur by coincidence when a place is named after a person who shares their name with the feature. Examples include the Outerbridge Crossing, named after Eugenius Harvey Outerbridge; the Hall Building of Concordia University, named after Henry Foss Hall; and Alice Keck Park Memorial Gardens in Santa Barbara, named after Alice Keck Park.

Asterisks (*) indicate examples that are also commonly referred to without the inclusion of one of the tautological elements.
https://en.wikipedia.org/wiki/List_of_tautological_place_names
Inpsychology,logorrheaorlogorrhoea(fromAncient Greekλόγοςlogos"word" and ῥέωrheo"to flow") is acommunication disorderthat causes excessivewordinessand repetitiveness, which can cause incoherency. Logorrhea is sometimes classified as amental illness, though it is more commonly classified as a symptom of mental illness orbrain injury. This ailment is often reported as a symptom ofWernicke's aphasia, where damage to thelanguage processing centerof the brain creates difficulty in self-centered speech. Logorrhea is characterized by "rapid, uncontrollable, and incoherent speech".[1]Occasionally, patients with logorrhea may produce speech with normalprosodyand a slightly fast speech rate.[2]Other related symptoms include the use ofneologisms(new words without clear derivation, e.g. hipidomateous for hippopotamus), words that bear no apparent meaning, and, in some extreme cases, the creation of new words andmorphosyntacticconstructions. From the "stream of unchecked nonsense often under pressure and the lack of self-correction" that the patient may exhibit, and their circumlocution (the ability to talk around missing words) we may conclude that they are unaware of the grammatical errors they are making.[3] When a clinician said, "Tell me what you do with a comb", to a patient with mildWernicke's aphasia, which produces the symptom of logorrhea, the patient responded: What do I do with a comb ... what I do with a comb. Well a comb is a utensil or some such thing that can be used for arranging and rearranging the hair on the head both by men and by women. One could also make music with it by putting a piece of paper behind and blowing through it. Sometimes it could be used in art — in sculpture, for example, to make a series of lines in soft clay. It's usually made of plastic and usually black, although it comes in other colors. It is carried in the pocket or until it's needed, when it is taken out and used, then put back in the pocket. Is that what you had in mind?[4] In this case, the patient maintained proper grammar and did not exhibit any signs of neologisms. However, the patient did use an overabundance of speech in responding to the clinician, as most people would simply respond, "I use a comb to comb my hair." In a more extreme version of logorrheaaphasia, a clinician asked a male patient, also with Wernicke's aphasia, what brought him to the hospital. The patient responded: Is this some of the work that we work as we did before? ... All right ... From when wine [why] I'm here. What's wrong with me because I ... was myself until the taenz took something about the time between me and my regular time in that time and they took the time in that time here and that's when the time took around here and saw me around in it's started with me no time and I bekan [began] work of nothing else that's the way the doctor find me that way  ...[5] In this example, the patient's aphasia was much more severe. Not only was this a case of logorrhea, but this includedneologisms(such as "taenz" for "stroke" and "regular time" for "regular bath")[6]and a loss of proper sentence structure. Logorrhea has been shown to be associated with traumatic brain injuries in thefrontal lobe[7]as well as withlesionsin thethalamus[8][9]and theascending reticular inhibitory system[10]and has been associated withaphasia.[11]Logorrhea can also result from a variety ofpsychiatricandneurologicaldisorders[10]includingtachypsychia,[12]mania,[13]hyperactivity,[14]catatonia,[15]ADHDandschizophrenia. 
Logorrhea is often associated withWernicke'sand other aphasias.Aphasiarefers to the neurological disruption of language that occurs as a consequence of brain dysfunction. A patient who truly has an aphasia cannot have been diagnosed with any other medical condition that may affect cognition.[citation needed]Logorrhea is a common symptom ofWernicke'saphasia, along withcircumlocution,paraphasias, andneologisms. A patient with aphasia may present all of these symptoms at one time.[citation needed] Excessive talking may be asymptomof an underlying illness and should be addressed by a medical provider if combined with hyperactivity or symptoms of mental illness, such as hallucinations.[16]Treatment of logorrhea depends on its underlying disorder, if any.Antipsychoticsare often used, andlithiumis a common supplement given to manic patients.[12]For patients with lesions of the brain, attempting to correct their errors may upset and anger the patients, since the language center of their brain may not be able to process that what they are saying is incorrect and wordy.[citation needed]
https://en.wikipedia.org/wiki/Logorrhea_(psychology)
Inliterary criticism,purple proseis overly ornateprosetext that may disrupt anarrativeflow by drawing undesirable attention to its own extravagant style of writing, thereby diminishing the appreciation of the prose overall.[1]Purple prose is characterized by the excessive use of adjectives, adverbs, andmetaphors. When it is limited to certain passages, they may be termedpurple patchesorpurple passages, standing out from the rest of the work. Purple prose is criticized for desaturating the meaning in an author's text by overusing melodramatic and fanciful descriptions. As there is no precise rule or absolute definition of what constitutes purple prose, deciding if a text, passage, or complete work has fallen victim is subjective. According toPaul West, "It takes a certain amount of sass to speak up for prose that's rich, succulent and full of novelty. Purple is immoral, undemocratic and insincere; at best artsy, at worst the exterminating angel of depravity."[2] The termpurple proseis derived from a reference by the Roman poetHorace[3][4](Quintus Horatius Flaccus, 65–8 BC) who wrote in hisArs Poetica(lines 14–21):[5]

Inceptis grauibus plerumque et magna professis
purpureus, late qui splendeat, unus et alter
adsuitur pannus, cum lucus et ara Dianae
et properantis aquae per amoenos ambitus agros
aut flumen Rhenum aut pluuius describitur arcus;
sed nunc non erat his locus. Et fortasse cupressum
scis simulare; quid hoc, si fractis enatat exspes
nauibus, aere dato qui pingitur?

Weighty openings and grand declarations often
Have one or twopurplepatches tacked on, that gleam
Far and wide, whenDiana's grove and her altar,
The winding stream hastening through lovely fields,
Or the riverRhine, or the rainbow's being described.
There's no place for them here. Perhaps you know how
To draw a cypress tree: so what, if you've been given
Money to paint a sailor plunging from a shipwreck
In despair?[6][7]

Your opening shows great promise, and yet flashypurplepatches; as when describing
a sacred grove, or the altar ofDiana,
or a stream meandering through fields,
or the riverRhine, or a rainbow;
but this was not the place for them. If you can realistically render
a cypress tree, would you include one when commissioned to paint
a sailor in the midst of a shipwreck?[original research?]
https://en.wikipedia.org/wiki/Purple_prose
Verbosity, orverboseness, is speech or writing that uses more words than necessary.[1]The opposite of verbosity issuccinctness.[dubious–discuss] Some teachers, including the author ofThe Elements of Style, warn against verbosity. SimilarlyMark TwainandErnest Hemingway, among others, famously avoided it. Synonyms of "verbosity" includewordiness,verbiage,loquacity,garrulousness,logorrhea,prolixity,grandiloquence,expatiation,sesquipedalianism, andoverwriting. The wordverbositycomes fromLatinverbosus, "wordy". There are many other English words that also refer to the use of excessive words. Prolixitycomes from Latinprolixus, "extended".Prolixitycan also be used to refer to the length of amonologueor speech, especially a formal address such as a lawyer'soral argument.[2] Grandiloquenceis complex speech or writing judged to be pompous or bombasticdiction. It is a combination of the Latin wordsgrandis("great") andloqui("to speak").[3] Logorrheaorlogorrhoea(fromGreekλογόρροια,logorrhoia, "word-flux") is an excessive flow of words. It is often usedpejorativelyto describe prose that is hard to understand because it is needlessly complicated or uses excessive jargon. Sesquipedalianismis a linguistic style that involves the use of long words. Roman poetHoracecoined the phrasesesquipedalia verbain hisArs Poetica.[4]It is acompoundofsesqui, "one and a half", andpes, "foot", a reference tometer(notwords a foot long). The earliest recorded usage in English ofsesquipedalianis in 1656, and ofsesquipedalianism, 1863.[5] Garrulouscomes from Latingarrulus, "talkative", a form of the verbgarrīre, "to chatter". The adjective may describe a person who is excessively talkative, especially about trivial matters, or a speech that is excessively wordy or diffuse[6] The nounexpatiationand the verbexpatiatecome from Latinexpatiātus, past participle fromspatiārī, "to wander". They refer to enlarging a discourse, text, or description.[7] Overwritingis a simple compound of the English prefix "over-" ("excessive") and "writing", and as the name suggests, means using extra words that add little value. One rhetoric professor described it as "a wordy writing style characterized by excessive detail, needless repetition, overwrought figures of speech, and/or convoluted sentence structures."[8]Another writer cited "meaningless intensifiers", "adjectival & adverbial verbosity", "long conjunctions and subordinators", and "repetition and needless information" as common traps that the non-native writers of English the author studied fell into.[9] An essay intentionally filled with "logorrhea" that mixed physics concepts with sociological concepts in a nonsensical way was published by physics professorAlan Sokalin a journal (Social Text) as ascholarly publishing sting. The episode became known as theSokal Affair.[10] The term is sometimes also applied to unnecessarily wordy speech in general; this is more usually referred to asprolixity. Some people defend the use of additional words asidiomatic, a matter of artistic preference, or helpful in explaining complex ideas or messages.[11] Warren G. Harding, the 29thpresident of the United States, was notably verbose even for his era.[12]A Democratic leader,William Gibbs McAdoo, described Harding's speeches as "an army of pompous phrases moving across the landscape in search of an idea."[13] TheMichigan Law Reviewpublished a 229-page parody of postmodern writing titled "Pomobabble: Postmodern Newspeak and Constitutional 'Meaning' for the Uninitiated". 
The article consists of complicated and context-sensitive self-referencing narratives. The text is peppered with a number of parenthetical citations and asides, which is supposed to mock the cluttered style of postmodern writing.[14] InThe King's English, Fowler gives a passage fromThe Timesas an example of verbosity: The Emperorreceived yesterday and to-day General Baron von Beck.... It may therefore be assumed with some confidence that the terms of a feasible solution are maturing themselves inHis Majesty'smind and may form the basis of further negotiations with Hungarian party leaders whenthe Monarchgoes again to Budapest.[15] Fowler objected to this passage becauseThe Emperor,His Majesty, andthe Monarchall refer to the same person: "the effect", he pointed out inModern English Usage, "is to set readers wondering what the significance of the change is, only to conclude that there is none." Fowler called this tendency "elegant variation" in his later style guides. The ancient Greek philosopherCallimachusis quoted as saying "Big book, big evil" (μέγα βιβλίον μέγα κακόν,mega biblion, mega kakon),[16]rejecting theepicstyle ofpoetryin favor of his own.[clarification needed] Many style guides advise against excessive verbosity. While it may be rhetorically useful[1]verbose parts in communications are sometimes referred to as "fluff" or "fuzz".[17]For instance,William Strunk, an American professor of English advised in 1918 to "Use the active voice: Put statements in positive form; Omit needless words."[18] InA Dictionary of Modern English Usage(1926)Henry Watson Fowlersays, "It is the second-rate writers, those intent rather on expressing themselves prettily than on conveying their meaning clearly, & still more those whose notions of style are based on a few misleading rules of thumb, that are chiefly open to the allurements of elegant variation," Fowler's term for the over-use ofsynonyms.[19]Contrary to Fowler's criticism of several words being used to name the same thing in Englishprose, in many other languages, includingFrench, it might be thought to be a good writing style.[20][21] An inquiry into the2005 London bombingsfound that verbosity can be dangerous if used by emergency services. It can lead to delay that could cost lives.[22] A 2005 study from thepsychologydepartment ofPrinceton Universityfound that using long and obscure words does not make people seem more intelligent. Dr. Daniel M. Oppenheimer did research which showed that students rated short, concise texts as being written by the most intelligent authors. But those who used long words or complexfonttypes were seen as less intelligent.[23] In contrast to advice against verbosity, some editors and style experts suggest that maxims such as "omit needless words"[18]are unhelpful. It may be unclear which words are unnecessary, or where advice against prolixity may harm writing. In some cases a degree of repetition and redundancy, or use of figurative language and long or complex sentences can have positive effects on style or communicative effect.[11] In nonfiction writing, experts[who?]suggest that both concision and clarity are important: Elements that do not improve communication should be removed without rendering a style that is "too terse" to be clear, as similarly advised by law professorNeil Andrewson the writing and reasoning of legal decisions.[24]In such cases, attention should be paid to a conclusion's underlying argument so that the language used is both simple and precise. 
A number of writers advise against excessive verbosity in fiction. For example,Mark Twain(1835–1910) wrote "generally, the fewer the words that fully communicate or evoke the intended ideas and feelings, the more effective the communication."[25]SimilarlyErnest Hemingway(1899–1961), the 1954Nobel laureatefor literature, defended his concise style against a charge byWilliam Faulknerthat he "had never been known to use a word that might send the reader to the dictionary."[26]Hemingway responded by saying, "Poor Faulkner. Does he really think big emotions come from big words? He thinks I don't know the ten-dollar words. I know them all right. But there are older and simpler and better words, and those are the ones I use."[27] George Orwellmocked logorrhea in "Politics and the English Language" (1946) by taking verse (9:11) from the book ofEcclesiastesin theKing James Versionof theBible: I returned and saw under the sun, that the race is not to the swift, nor the battle to the strong, neither yet bread to the wise, nor yet riches to men of understanding, nor yet favour to men of skill; but time and chance happeneth to them all. and rewriting it as Objective consideration of contemporary phenomena compels the conclusion that success or failure in competitive activities exhibits no tendency to be commensurate with innate capacity, but that a considerable element of the unpredictable must invariably be taken into account. In contrast, though, some authors warn against pursuing concise writing for its own sake. Literary criticSven Birkerts, for instance, notes that authors striving to reduce verbosity might produce prose that is unclear in its message or dry in style. "There's no vivid world where every character speaks in one-line, three-word sentences," he notes.[28]There is a danger that the avoidance of prolixity can produce writing that feels unnatural or sterile. PhysicistRichard Feynmanhas spoken out against verbosity in scientific writing.[29] Wordiness is common in informal or playful conversation, lyrics, and comedy. People withAsperger syndromeandautismoften present with verbose speech.[30]
https://en.wikipedia.org/wiki/Verbosity
The termautopoiesis(fromGreekαὐτo-(auto)'self'andποίησις(poiesis)'creation, production'), one of several current theories of life, refers to asystemcapable of producing and maintaining itself by creating its own parts.[1]The term was introduced in the 1972 publicationAutopoiesis and Cognition: The Realization of the Livingby Chilean biologistsHumberto MaturanaandFrancisco Varelato define the self-maintainingchemistryof livingcells.[2] The concept has since been applied to the fields ofcognition,neurobiology,systems theory,architectureandsociology.Niklas Luhmannbriefly introduced the concept of autopoiesis toorganizational theory.[3] In their 1972 bookAutopoiesis and Cognition, Chilean biologists Maturana and Varela described how they invented the word autopoiesis.[4]: 89: 16 "It was in these circumstances ... in which he analyzed Don Quixote's dilemma of whether to follow the path of arms (praxis, action) or the path of letters (poiesis, creation, production), I understood for the first time the power of the word "poiesis" and invented the word that we needed:autopoiesis. This was a word without a history, a word that could directly mean what takes place in the dynamics of the autonomy proper to living systems." They explained that,[4]: 78 "An autopoieticmachineis a machine organized (defined as a unity) as a network of processes of production (transformation and destruction) of components which: (i) through their interactions and transformations continuously regenerate and realize the network of processes (relations) that produced them; and (ii) constitute it (the machine) as a concrete unity in space in which they (the components) exist by specifying the topological domain of its realization as such a network." They described the "space defined by an autopoietic system" as "self-contained", a space that "cannot be described by using dimensions that define another space. When we refer to our interactions with a concrete autopoietic system, however, we project this system on the space of our manipulations and make a description of this projection."[4]: 89 Autopoiesis was originally presented as a system description that was said to define and explain the nature ofliving systems. A canonical example of an autopoietic system is thebiological cell. Theeukaryoticcell, for example, is made of variousbiochemicalcomponents such asnucleic acidsandproteins, and is organized into bounded structures such as thecell nucleus, variousorganelles, acell membraneandcytoskeleton. These structures, based on an internal flow of molecules and energy,producethe components which, in turn, continue to maintain the organized bounded structure that gives rise to these components. An autopoietic system is to be contrasted with anallopoieticsystem, such as a car factory, which uses raw materials (components) to generate a car (an organized structure) which is somethingotherthan itself (the factory). However, if the system is extended from the factory to include components in the factory's "environment", such as supply chains, plant / equipment, workers, dealerships, customers, contracts, competitors, cars, spare parts, and so on, then as a total viable system it could be considered to be autopoietic.[5] Of course, cells also require raw materials (nutrients), and produce numerous products -waste products, the extracellular matrix, intracellular messaging molecules, etc. Autopoiesis in biological systems can be viewed as a network of constraints that work to maintain themselves. 
This concept has been called organizational closure[6]or constraint closure[7]and is closely related to the study ofautocatalytic chemical networkswhere constraints are reactions required to sustain life. Though others have often used the term as a synonym forself-organization, Maturana himself stated he would "[n]ever use the notion of self-organization ... Operationally it is impossible. That is, if the organization of a thing changes, the thing changes".[8]Moreover, an autopoietic system is autonomous and operationally closed, in the sense that there are sufficient processes within it to maintain the whole. Autopoietic systems are "structurally coupled" with their medium, embedded in a dynamic of changes that can be recalled assensory-motor coupling.[9]This continuous dynamic is considered as a rudimentary form ofknowledgeorcognitionand can be observed throughout life-forms. An application of the concept of autopoiesis tosociologycan be found in Niklas Luhmann'sSystems Theory, which was subsequently adapted byBob Jessopin his studies of the capitalist state system.Marjatta Maulaadapted the concept of autopoiesis in a business context.[10]The theory of autopoiesis has also been applied in the context of legal systems by not only Niklas Luhmann, but also Gunther Teubner.[11][12]Patrik Schumacherhas applied the term to refer to the 'discursive self-referential making of architecture.'[13][14]Varela eventually further applied autopoesis to develop models of mind, brain, and behavior called non-representationalist,enactive,embodied cognitive neuroscience, culminating inneurophenomenology. In the context of textual studies,Jerome McGannargues that texts are "autopoietic mechanisms operating as self-generating feedback systems that cannot be separated from those who manipulate and use them".[15]Citing Maturana and Varela, he defines an autopoietic system as "a closed topological space that 'continuously generates and specifies its own organization through its operation as a system of production of its own components, and does this in an endless turnover of components'", concluding that "Autopoietic systems are thus distinguished from allopoietic systems, which are Cartesian and which 'have as the product of their functioning something different from themselves'". Coding and markup appearallopoietic", McGann argues, but are generative parts of the system they serve to maintain, and thus language and print or electronic technology are autopoietic systems.[16]: 200–1 The philosopherSlavoj Žižek, in his discussion ofHegel, argues: "Hegel is – to use today's terms – the ultimate thinker of autopoiesis, of the process of the emergence of necessary features out of chaotic contingency, the thinker of contingency's gradual self-organisation, of the gradual rise of order out of chaos."[17] Autopoiesis can be defined as the ratio between the complexity of a system and the complexity of its environment.[18] This generalized view of autopoiesis considers systems as self-producing not in terms of their physical components, but in terms of its organization, which can be measured in terms of information and complexity. In other words, we can describe autopoietic systems as those producing more of their own complexity than the one produced by their environment. 
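One way to write down this generalized definition is as a simple ratio (a sketch only; C here stands for whatever complexity measure is adopted, e.g. an information-theoretic one, and the notation is not taken from the source):

\[
A(S)=\frac{C(S)}{C(E)},
\]

where S is the system and E its environment; in this generalized sense S counts as autopoietic when A(S) > 1, i.e. when the system produces more of its own complexity than its environment produces for it.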
Autopoiesis has been proposed as a potential mechanism ofabiogenesis, by which molecules evolved into more complex cells that could support the development of life.[20] Autopoiesis is just one of several current theories of life, including thechemoton[21]ofTibor Gánti, thehypercycleofManfred EigenandPeter Schuster,[22][23][24]the(M,R) systems[25][26]ofRobert Rosen, and theautocatalytic sets[27]ofStuart Kauffman, similar to an earlier proposal byFreeman Dyson.[28]All of these (including autopoiesis) found their original inspiration in Erwin Schrödinger's bookWhat is Life?[29]but at first they appear to have little in common with one another, largely because the authors did not communicate with one another, and none of them made any reference in their principal publications to any of the other theories. Nonetheless, there are more similarities than may be obvious at first sight, for example between Gánti and Rosen.[30]Until recently[31][32][33]there have been almost no attempts to compare the different theories and discuss them together. An extensive discussion of the connection of autopoiesis tocognitionis provided by Evan Thompson in his 2007 publication,Mind in Life.[34]The basic notion of autopoiesis as involving constructive interaction with the environment is extended to include cognition. Initially, Maturana defined cognition as behavior of an organism "with relevance to the maintenance of itself".[35]: 13However, computer models that are self-maintaining but non-cognitive have been devised, so some additional restrictions are needed, and the suggestion is that the maintenance process, to be cognitive, involves readjustment of the internal workings of the system in somemetabolic process. On this basis it is claimed that autopoiesis is a necessary but not a sufficient condition for cognition.[36]Thompson wrote that this distinction may or may not be fruitful, but what matters is that living systems involve autopoiesis and (if it is necessary to add this point) cognition as well.[37]: 127It can be noted that this definition of 'cognition' is restricted, and does not necessarily entail any awareness orconsciousnessby the living system. With the publication of The Embodied Mind in 1991, Varela, Thompson and Rosch applied autopoesis to make non-representationalist, andenactive models of mind, brain and behavior, which further developedembodied cognitive neuroscience, later culminating inneurophenomenology. The connection of autopoiesis to cognition, or if necessary, of living systems to cognition, is an objective assessment ascertainable by observation of a living system. One question that arises is about the connection between cognition seen in this manner and consciousness. The separation of cognition and consciousness recognizes that the organism may be unaware of the substratum where decisions are made. What is the connection between these realms? Thompson refers to this issue as the "explanatory gap", and one aspect of it is thehard problem of consciousness, how and why we havequalia.[38] A second question is whether autopoiesis can provide a bridge between these concepts. Thompson discusses this issue from the standpoint ofenactivism. An autopoietic cell actively relates to its environment. Its sensory responses trigger motor behavior governed by autopoiesis, and this behavior (it is claimed) is a simplified version of a nervous system behavior. 
The further claim is that real-time interactions like this require attention, and an implication of attention is awareness.[39] There are multiple criticisms of the use of the term in both its original context, as an attempt to define and explain the living, and its various expanded usages, such as applying it to self-organizing systems in general or social systems in particular.[40]Critics have argued that the concept and its theory fail to define or explain living systems and that, because of the extreme language ofself-referentialityit uses without any external reference, it is really an attempt to give substantiation to Maturana's radicalconstructivistorsolipsisticepistemology,[41]or whatDanilo Zolo[42][43]has called instead a "desolate theology". An example is the assertion by Maturana and Varela that "We do not see what we do not see and what we do not see does not exist".[44] According to Razeto-Barry, the influence ofAutopoiesis and Cognition: The Realization of the Livingin mainstream biology has proven to be limited. Razeto-Barry believes that autopoiesis is not commonly used as the criterion for life.[45] Zoologist and philosopherDonna Harawayalso criticizes the usage of the term, arguing that "nothing makes itself; nothing is really autopoietic or self-organizing",[46]and suggests the use ofsympoiesis, meaning "making-with", instead.
https://en.wikipedia.org/wiki/Autopoesis
Acircular reference(orreference cycle[1]) is a series ofreferenceswhere the last object references the first, resulting in a closed loop. A newcomer asks a local where the town library is. "Just in front of the post office," says the local. The newcomer nods, and follows up: "But where is the post office?" "Why, that's simple," replies the local. "It's just behind the library!" A circular reference is not to be confused with thelogical fallacyof acircular argument. Although a circular reference will often be unhelpful and reveal no information, such as two entries in a book index referring to each other, it is not necessarily so that a circular reference is of no use. Dictionaries, for instance, must always ultimately be a circular reference since all words in a dictionary are defined in terms of other words, but a dictionary nevertheless remains a useful reference. Sentences containing circular references can still be meaningful: is circular, but not without meaning. Indeed, it can be argued that self-reference is a necessary consequence of Aristotle'slaw of non-contradiction, a fundamental philosophicalaxiom. In this view, without self-reference,logicandmathematicsbecome impossible, or at least, lack usefulness.[2][3] Circular references can appear incomputer programmingwhen one piece of code requires the result from another, but that code needs the result from the first. For example, the two functions posn and plus1 in the Python program sketched at the end of this section comprise a circular reference.[further explanation needed] Circular references like that example may return valid results if they have a terminating condition. If there is no terminating condition, a circular reference leads to a condition known aslivelockorinfinite loop, meaning it theoretically could run forever. In ISO standard SQL, circular integrity constraints are implicitly supported within a single table. Between multiple tables circular constraints (e.g. foreign keys) are permitted by defining the constraints as deferrable (SeeCREATE TABLEfor PostgreSQL andDEFERRABLE Constraint Examplesfor Oracle). In that case the constraint is checked at the end of the transaction, not at the time the DML statement is executed. To update a circular reference, two statements can be issued in a single transaction that will satisfy both references once the transaction is committed. Circular references can also happen between instances of data of a mutable type, such as in the Python script sketched at the end of this section. Theprint(mydict)function will output{'this':'that','these':'those','myself':{...}}, where{...}indicates a circular reference, in this case, to themydictdictionary. Circular references also occur inspreadsheetswhen two cells require each other's result. For example, if the value in Cell A1 is to be obtained by adding 5 to the value in Cell B1, and the value in Cell B1 is to be obtained by adding 3 to the value in Cell A1, no values can be computed. (Even if the specifications are A1:=B1+5 and B1:=A1-5, there is still a circular reference. It does not help that, for instance, A1=3 and B1=-2 would satisfy both formulae, as there are infinitely many other possible values of A1 and B1 that can satisfy both instances.)
Circular reference in worksheets can be a very useful technique for solving implicit equations such as theColebrook equationand many others, which might otherwise require tediousNewton-Raphsonalgorithms in VBA or use of macros.[4] A distinction should be made with processes containing a circular reference between those that are incomputable and those that are an iterative calculation with a final output. The latter may fail in spreadsheets not equipped to handle them but are nevertheless still logically valid.[3]
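The Python fragments referred to in this section are sketched below. The bodies of posn and plus1 are assumptions (the text gives only the function names, their mutual dependence, and the fact that a terminating condition exists), the dictionary example is reconstructed from the print(mydict) output quoted above, and the Colebrook solver at the end uses the equation's standard Darcy form with illustrative input values that are not taken from the text.

import math

# Two mutually dependent functions: each needs the other's result.
# The k >= 0 branch is the terminating condition that keeps the cycle finite.
def posn(k: int) -> int:
    if k < 0:
        return plus1(k)
    return k

def plus1(n: int) -> int:
    return posn(n + 1)

print(posn(-3))  # 0: posn and plus1 call each other until k reaches 0

# A dictionary that refers to itself: a circular reference between data objects.
mydict = {'this': 'that', 'these': 'those'}
mydict['myself'] = mydict
print(mydict)    # {'this': 'that', 'these': 'those', 'myself': {...}}

# Solving the Colebrook equation, 1/sqrt(f) = -2*log10(rr/3.7 + 2.51/(Re*sqrt(f))),
# by fixed-point iteration on x = 1/sqrt(f) -- the same self-referential
# recalculation an iteratively updated spreadsheet cell performs.
def colebrook_friction_factor(reynolds, relative_roughness, tol=1e-12, max_iter=100):
    x = 2.0  # initial guess for 1/sqrt(f)
    for _ in range(max_iter):
        x_new = -2.0 * math.log10(relative_roughness / 3.7 + 2.51 * x / reynolds)
        if abs(x_new - x) < tol:
            break
        x = x_new
    return 1.0 / (x * x)

print(colebrook_friction_factor(1e5, 2e-4))  # about 0.019 for this illustrative case

Without the terminating condition in posn, the first pair of functions would recurse without end, the programmatic analogue of the infinite loop described above.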
https://en.wikipedia.org/wiki/Circular_reference
Certainty(also known asepistemic certaintyorobjective certainty) is theepistemicproperty ofbeliefswhich a person has no rational grounds for doubting.[1]One standard way of defining epistemic certainty is that a belief is certain if and only if the person holding that belief could not be mistaken in holding that belief. Other common definitions of certainty involve the indubitable nature of such beliefs or define certainty as a property of those beliefs with the greatest possiblejustification. Certainty is closely related toknowledge, although contemporary philosophers tend to treat knowledge as having lower requirements than certainty.[1] Importantly, epistemic certainty is not the same thing aspsychological certainty(also known assubjective certaintyorcertitude), which describes the highest degree to which a person could be convinced that something is true. While a person may be completely convinced that a particular belief is true, and might even be psychologically incapable of entertaining its falsity, this does not entail that the belief is itself beyond rational doubt or incapable of being false.[2]While the word "certainty" is sometimes used to refer to a person'ssubjectivecertainty about the truth of a belief, philosophers are primarily interested in the question of whether any beliefs ever attainobjectivecertainty. Thephilosophicalquestion of whether one can ever be truly certain about anything has been widely debated for centuries. Many proponents ofphilosophical skepticismdeny that certainty is possible, or claim that it is only possible ina prioridomains such as logic or mathematics. Historically, many philosophers have held that knowledge requires epistemic certainty, and therefore that one must haveinfalliblejustification in order to count as knowing the truth of a proposition. However, many philosophers such asRené Descarteswere troubled by the resulting skeptical implications, since all of our experiences at least seem to be compatible with variousskeptical scenarios. It is generally accepted today that most of our beliefs are compatible with their falsity and are thereforefallible, although the status of being certain is still often ascribed to a limited range of beliefs (such as "I exist"). The apparent fallibility of our beliefs has led many contemporary philosophers to deny that knowledge requires certainty.[1] If you tried to doubt everything you would not get as far as doubting anything. The game of doubting itself presupposes certainty. On Certaintyis a series of notes made byLudwig Wittgensteinjust prior to his death. The main theme of the work is thatcontextplays a role in epistemology. Wittgenstein asserts ananti-foundationalistmessage throughout the work: that every claim can be doubted but certainty is possible in a framework. "The function [propositions] serve in language is to serve as a kind of framework within which empirical propositions can make sense".[3] PhysicistLawrence M. Krausssuggests that the need for identifying degrees of certainty is under-appreciated in various domains, including policy-making and the understanding of science. 
This is because different goals require different degrees of certainty – and politicians are not always aware of (or do not make it clear) how much certainty we are working with.[4] Rudolf Carnapviewed certainty as a matter of degree ("degrees of certainty") which could beobjectivelymeasured, with degree one being certainty.Bayesian analysisderives degrees of certainty which are interpreted as a measure ofsubjectivepsychologicalbelief. Alternatively, one might use thelegal degrees of certainty. These standards ofevidenceascend as follows: no credible evidence, some credible evidence, a preponderance of evidence, clear and convincing evidence, beyond reasonable doubt, and beyond any shadow of a doubt (i.e.undoubtable– recognized as an impossible standard to meet – which serves only to terminate the list). If knowledge requires absolute certainty, thenknowledge is most likely impossible, as evidenced by the apparent fallibility of our beliefs. Thefoundational crisis of mathematicswas the early 20th century's term for the search for proper foundations of mathematics. After several schools of thephilosophy of mathematicsran into difficulties one after the other in the 20th century, the assumption that mathematics had any foundation that could be stated withinmathematicsitself began to be heavily challenged. One attempt after another to provide unassailable foundations for mathematics was found to suffer from variousparadoxes(such asRussell's paradox) and to beinconsistent. Various schools of thought were opposing each other. The leading school was that of theformalistapproach, of whichDavid Hilbertwas the foremost proponent, culminating in what is known asHilbert's program, which sought to ground mathematics on a small basis of aformal systemproved sound bymetamathematicalfinitisticmeans. The main opponent was theintuitionistschool, led byL.E.J. Brouwer, which resolutely discarded formalism as a meaningless game with symbols.[5]The fight was acrimonious. In 1920 Hilbert succeeded in having Brouwer, whom he considered a threat to mathematics, removed from the editorial board ofMathematische Annalen, the leading mathematical journal of the time. Gödel's incompleteness theorems, proved in 1931, showed that essential aspects of Hilbert's program could not be attained. InGödel's first result he showed how to construct, for any sufficiently powerful and consistent finitely axiomatizable system – such as necessary to axiomatize the elementary theory ofarithmetic– a statement that can be shown to be true, but that does not follow from the rules of the system. It thus became clear that the notion of mathematical truth cannot be reduced to a purely formal system as envisaged in Hilbert's program. In a next result Gödel showed that such a system was not powerful enough for proving its own consistency, let alone that a simpler system could do the job. This proves that there is no hope toprovethe consistency of any system that contains an axiomatization of elementary arithmetic, and, in particular, to prove the consistency ofZermelo–Fraenkel set theory(ZFC), the system which is generally used for building all mathematics. However, if ZFC is not consistent, there exists a proof of both a theorem and its negation, and this would imply a proof of all theorems and all their negations. As, despite the large number of mathematical areas that have been deeply studied, no such contradiction has ever been found, this provides an almost certainty of mathematical results. 
Moreover, if such a contradiction were eventually found, most mathematicians are convinced that it would be possible to resolve it by a slight modification of the axioms of ZFC. Furthermore, the method offorcingallows the consistency of one theory to be proved relative to the consistency of another. For example, if ZFC is consistent, adding to it thecontinuum hypothesisor its negation defines two theories that are both consistent (in other words, the continuum hypothesis is independent of the axioms of ZFC). The existence of such proofs of relative consistency implies that the consistency of modern mathematics depends only weakly on the particular choice of axioms on which mathematics is built. In this sense the crisis has been resolved: although the consistency of ZFC is not provable, ZFC solves (or avoids) all the logical paradoxes that gave rise to the crisis, and there are many facts that provide a quasi-certainty of the consistency of modern mathematics.
https://en.wikipedia.org/wiki/Certainty
Thetheory of belief functions, also referred to asevidence theoryorDempster–Shafer theory(DST), is a general framework for reasoning with uncertainty, with understood connections to other frameworks such asprobability,possibilityandimprecise probability theories. First introduced byArthur P. Dempster[1]in the context ofstatistical inference, the theory was later developed byGlenn Shaferinto a general framework for modeling epistemic uncertainty—a mathematical theory ofevidence.[2][3]The theory allows one to combine evidence from different sources and arrive at a degree of belief (represented by a mathematical object calledbelief function) that takes into account all the available evidence. In a narrow sense, the term Dempster–Shafer theory refers to the original conception of the theory by Dempster and Shafer. However, it is more common to use the term in the wider sense of the same general approach, as adapted to specific kinds of situations. In particular, many authors have proposed different rules for combining evidence, often with a view to handling conflicts in evidence better.[4]The early contributions have also been the starting points of many important developments, including thetransferable belief modeland the theory of hints.[5] Dempster–Shafer theory is a generalization of theBayesian theory of subjective probability. Belief functions base degrees of belief (or confidence, or trust) for one question on the subjective probabilities for a related question. The degrees of belief themselves may or may not have the mathematical properties of probabilities; how much they differ depends on how closely the two questions are related.[6]Put another way, it is a way of representingepistemicplausibilities, but it can yield answers that contradict those arrived at usingprobability theory. Often used as a method ofsensor fusion, Dempster–Shafer theory is based on two ideas: obtaining degrees of belief for one question from subjective probabilities for a related question, and Dempster's rule[7]for combining such degrees of belief when they are based on independent items of evidence. In essence, the degree of belief in a proposition depends primarily upon the number of answers (to the related questions) containing the proposition, and the subjective probability of each answer. Also contributing are the rules of combination that reflect general assumptions about the data. In this formalism adegree of belief(also referred to as amass) is represented as abelief functionrather than aBayesianprobability distribution. Probability values are assigned tosetsof possibilities rather than single events: their appeal rests on the fact they naturally encode evidence in favor of propositions. Dempster–Shafer theory assigns its masses to all of the subsets of the set of states of a system—inset-theoreticterms, thepower setof the states. For instance, assume a situation where there are two possible states of a system. For this system, any belief function assigns mass to the first state, the second, to both, and to neither. Shafer's formalism starts from a set ofpossibilitiesunder consideration, for instance numerical values of a variable, or pairs of linguistic variables like "date and place of origin of a relic" (asking whether it is antique or a recent fake). A hypothesis is represented by a subset of thisframe of discernment, like "(Ming dynasty, China)", or "(19th century, Germany)".[2]: p.35f. 
Shafer's framework allows for belief about such propositions to be represented as intervals, bounded by two values,belief(orsupport) andplausibility: In a first step, subjective probabilities (masses) are assigned to all subsets of the frame; usually, only a restricted number of sets will have non-zero mass (focal elements).[2]: 39f.Beliefin a hypothesis is constituted by the sum of the masses of all subsets of the hypothesis-set. It is the amount of belief that directly supports either the given hypothesis or a more specific one, thus forming a lower bound on its probability. Belief (usually denotedBel) measures the strength of the evidence in favor of a propositionp. It ranges from 0 (indicating no evidence) to 1 (denoting certainty).Plausibilityis 1 minus the sum of the masses of all sets whose intersection with the hypothesis is empty. Or, it can be obtained as the sum of the masses of all sets whose intersection with the hypothesis is not empty. It is an upper bound on the possibility that the hypothesis could be true, because there is only so much evidence that contradicts that hypothesis. Plausibility (denoted by Pl) is thus related to Bel by Pl(p) = 1 − Bel(~p). It also ranges from 0 to 1 and measures the extent to which evidence in favor of ~pleaves room for belief inp. For example, suppose we have a belief of 0.5 for a proposition, say "the cat in the box is dead." This means that we have evidence that allows us to state strongly that the proposition is true with a confidence of 0.5. However, the evidence contrary to that hypothesis (i.e. "the cat is alive") only has a confidence of 0.2. The remaining mass of 0.3 (the gap between the 0.5 supporting evidence on the one hand, and the 0.2 contrary evidence on the other) is "indeterminate," meaning that the cat could either be dead or alive. This interval represents the level of uncertainty based on the evidence in the system. The "neither" hypothesis is set to zero by definition (it corresponds to "no solution"). The orthogonal hypotheses "Alive" and "Dead" have probabilities of 0.2 and 0.5, respectively. This could correspond to "Live/Dead Cat Detector" signals, which have respective reliabilities of 0.2 and 0.5. Finally, the all-encompassing "Either" hypothesis (which simply acknowledges there is a cat in the box) picks up the slack so that the sum of the masses is 1. The belief for the "Alive" and "Dead" hypotheses matches their corresponding masses because they have no subsets; belief for "Either" consists of the sum of all three masses (Either, Alive, and Dead) because "Alive" and "Dead" are each subsets of "Either". The "Alive" plausibility is 1 −m(Dead): 0.5 and the "Dead" plausibility is 1 −m(Alive): 0.8. In other way, the "Alive" plausibility ism(Alive) +m(Either) and the "Dead" plausibility ism(Dead) +m(Either). Finally, the "Either" plausibility sumsm(Alive) +m(Dead) +m(Either). The universal hypothesis ("Either") will always have 100% belief and plausibility—it acts as achecksumof sorts. Here is a somewhat more elaborate example where the behavior of belief and plausibility begins to emerge. We're looking through a variety of detector systems at a single faraway signal light, which can only be coloured in one of three colours (red, yellow, or green): Events of this kind would not be modeled as distinct entities in probability space as they are here in mass assignment space. 
Rather the event "Red or Yellow" would be considered as the union of the events "Red" and "Yellow", and (seeprobability axioms)P(Red or Yellow) ≥P(Yellow), andP(Any) = 1, whereAnyrefers toRedorYelloworGreen. In DST the mass assigned toAnyrefers to the proportion of evidence that can not be assigned to any of the other states, which here means evidence that says there is a light but does not say anything about what color it is. In this example, the proportion of evidence that shows the light is eitherRedorGreenis given a mass of 0.05. Such evidence might, for example, be obtained from a R/G color blind person. DST lets us extract the value of this sensor's evidence. Also, in DST the empty set is considered to have zero mass, meaning here that the signal light system exists and we are examining its possible states, not speculating as to whether it exists at all. Beliefs from different sources can be combined with various fusion operators to model specific situations of belief fusion, e.g. withDempster's rule of combination, which combines belief constraints[8]that are dictated by independent belief sources, such as in the case of combining hints[5]or combining preferences.[9]Note that the probability masses from propositions that contradict each other can be used to obtain a measure of conflict between the independent belief sources. Other situations can be modeled with different fusion operators, such as cumulative fusion of beliefs from independent sources, which can be modeled with the cumulative fusion operator.[10] Dempster's rule of combination is sometimes interpreted as an approximate generalisation ofBayes' rule. In this interpretation the priors and conditionals need not be specified, unlike traditional Bayesian methods, which often use a symmetry (minimax error) argument to assign prior probabilities to random variables (e.g.assigning 0.5 to binary values for which no information is available about which is more likely). However, any information contained in the missing priors and conditionals is not used in Dempster's rule of combination unless it can be obtained indirectly—and arguably is then available for calculation using Bayes equations. Dempster–Shafer theory allows one to specify a degree of ignorance in this situation instead of being forced to supply prior probabilities that add to unity. This sort of situation, and whether there is a real distinction betweenriskandignorance, has been extensively discussed by statisticians and economists. See, for example, the contrasting views ofDaniel Ellsberg,Howard Raiffa,Kenneth ArrowandFrank Knight.[citation needed] LetXbe theuniverse: the set representing all possible states of a system under consideration. Thepower set is the set of all subsets ofX, including theempty set∅{\displaystyle \emptyset }. For example, if: then The elements of the power set can be taken to represent propositions concerning the actual state of the system, by containing all and only the states in which the proposition is true. The theory of evidence assigns a belief mass to each element of the power set. Formally, a function is called abasic belief assignment(BBA), when it has two properties. First, the mass of the empty set is zero: Second, the masses of all the members of the power set add up to a total of 1: The massm(A) ofA, a given member of the power set, expresses the proportion of all relevant and available evidence that supports the claim that the actual state belongs toAbut to no particular subset ofA. 
The value ofm(A) pertainsonlyto the setAand makes no additional claims about any subsets ofA, each of which have, by definition, their own mass. From the mass assignments, the upper and lower bounds of a probability interval can be defined. This interval contains the precise probability of a set of interest (in the classical sense), and is bounded by two non-additive continuous measures calledbelief(orsupport) andplausibility: The belief bel(A) for a setAis defined as the sum of all the masses of subsets of the set of interest: The plausibility pl(A) is the sum of all the masses of the setsBthat intersect the set of interestA: The two measures are related to each other as follows: And conversely, for finiteA, given the belief measure bel(B) for all subsetsBofA, we can find the massesm(A) with the following inverse function: where |A−B| is the difference of the cardinalities of the two sets.[4] Itfollows fromthe last two equations that, for a finite setX, one needs to know only one of the three (mass, belief, or plausibility) to deduce the other two; though one may need to know the values for many sets in order to calculate one of the other values for a particular set. In the case of an infiniteX, there can be well-defined belief and plausibility functions but no well-defined mass function.[11] The problem we now face is how to combine two independent sets of probability mass assignments in specific situations. In case different sources express their beliefs over the frame in terms of belief constraints such as in the case of giving hints or in the case of expressing preferences, then Dempster's rule of combination is the appropriate fusion operator. This rule derives common shared belief between multiple sources and ignoresallthe conflicting (non-shared) belief through a normalization factor. Use of that rule in other situations than that of combining belief constraints has come under serious criticism, such as in case of fusing separate belief estimates from multiple sources that are to be integrated in a cumulative manner, and not as constraints. Cumulative fusion means that all probability masses from the different sources are reflected in the derived belief, so no probability mass is ignored. Specifically, the combination (called thejoint mass) is calculated from the two sets of massesm1andm2in the following manner: where Kis a measure of the amount of conflict between the two mass sets. The normalization factor above, 1 −K, has the effect of completely ignoring conflict and attributinganymass associated with conflict to the empty set. This combination rule for evidence can therefore produce counterintuitive results, as we show next. The following example shows how Dempster's rule produces intuitive results when applied in a preference fusion situation, even when there is high conflict. An example with exactly the same numerical values was introduced byLotfi Zadehin 1979,[12][13][14]to point out counter-intuitive results generated by Dempster's rule when there is a high degree of conflict. The example goes as follows: Such result goes against common sense since both doctors agree that there is a little chance that the patient has a meningitis. 
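In standard notation, the belief, plausibility and combination formulas described above are as follows (written here in their usual form, since the source's own rendering is not reproduced):

\[
m(\emptyset)=0,\qquad \sum_{A\in 2^{X}} m(A)=1,\qquad
\operatorname{bel}(A)=\sum_{B\subseteq A} m(B),\qquad
\operatorname{pl}(A)=\sum_{B\cap A\neq\emptyset} m(B)=1-\operatorname{bel}(\bar{A}),
\]
\[
K=\sum_{B\cap C=\emptyset} m_{1}(B)\,m_{2}(C),\qquad
m_{1,2}(A)=\frac{1}{1-K}\sum_{B\cap C=A} m_{1}(B)\,m_{2}(C)\ \ (A\neq\emptyset),\qquad
m_{1,2}(\emptyset)=0.
\]

A short Python sketch of these definitions follows, applied first to the cat example given earlier (masses 0.5, 0.2 and 0.3 as in the text) and then to the form in which Zadeh's two-doctor example is usually quoted: each doctor assigns 0.99 to a different primary diagnosis and only 0.01 to a brain tumour (the exact figures used in the stripped example above are assumed here). The helper names bel, pl and combine are merely illustrative.

# Masses are dicts keyed by frozensets of states; the empty set is omitted (mass 0).
def bel(m, a):
    # Belief: total mass of all subsets of hypothesis a.
    return sum(v for s, v in m.items() if s <= a)

def pl(m, a):
    # Plausibility: total mass of all sets that intersect hypothesis a.
    return sum(v for s, v in m.items() if s & a)

def combine(m1, m2):
    # Dempster's rule of combination; k is the mass falling on the empty set.
    joint, k = {}, 0.0
    for b, vb in m1.items():
        for c, vc in m2.items():
            inter = b & c
            if inter:
                joint[inter] = joint.get(inter, 0.0) + vb * vc
            else:
                k += vb * vc
    if k >= 1.0:
        raise ValueError("total conflict: Dempster's rule is undefined")
    return {a: v / (1.0 - k) for a, v in joint.items()}

# The cat example from the text: m(Dead)=0.5, m(Alive)=0.2, m(Either)=0.3.
cat = {frozenset({"dead"}): 0.5, frozenset({"alive"}): 0.2,
       frozenset({"alive", "dead"}): 0.3}
print(pl(cat, frozenset({"alive"})), pl(cat, frozenset({"dead"})))  # 0.5 0.8, as in the text
print(bel(cat, frozenset({"alive", "dead"})))                       # approximately 1.0

# A Zadeh-style two-doctor example (figures assumed, see above).
d1 = {frozenset({"meningitis"}): 0.99, frozenset({"brain tumour"}): 0.01}
d2 = {frozenset({"concussion"}): 0.99, frozenset({"brain tumour"}): 0.01}
print(combine(d1, d2))  # m({'brain tumour'}) comes out as approximately 1.0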
This example has been the starting point of many research works for trying to find a solid justification for Dempster's rule and for foundations of Dempster–Shafer theory[15][16]or to show the inconsistencies of this theory.[17][18][19] The following example shows where Dempster's rule produces a counter-intuitive result, even when there is low conflict. This result impliescomplete supportfor the diagnosis of a brain tumor, which both doctors believedvery likely. The agreement arises from the low degree of conflict between the two sets of evidence comprised by the two doctors' opinions. In either case, it would be reasonable to expect that: since the existence of non-zero belief probabilities for other diagnoses impliesless than complete supportfor the brain tumour diagnosis. As in Dempster–Shafer theory, a Bayesian belief function bel: 2^X → [0, 1] has the properties bel(∅) = 0 and bel(X) = 1. The third condition, however, is subsumed by, but relaxed in DS theory:[2]: p. 19 Either of the following conditions implies the Bayesian special case of the DS theory:[2]: p. 37, 45 As an example of how the two approaches differ, a Bayesian could model the color of a car as a probability distribution over (red, green, blue), assigning one number to each color. Dempster–Shafer would assign numbers to each of (red, green, blue, (red or green), (red or blue), (green or blue), (red or green or blue)). These numbers do not have to be coherent; for example, Bel(red)+Bel(green) does not have to equal Bel(red or green). Thus, Bayes' conditional probability can be considered as a special case of Dempster's rule of combination.[2]: p. 19f. However, it lacks many (if not most) of the properties that make Bayes' rule intuitively desirable, leading some to argue that it cannot be considered a generalization in any meaningful sense.[20]For example, DS theory violates the requirements forCox's theorem, which implies that it cannot be considered a coherent (contradiction-free) generalization ofclassical logic—specifically, DS theory violates the requirement that a statement be either true or false (but not both). As a result, DS theory is subject to theDutch Bookargument, implying that any agent using DS theory would agree to a series of bets that result in a guaranteed loss. The Bayesian approximation[21][22]reduces a given bpa m to a (discrete) probability distribution, i.e. only singleton subsets of the frame of discernment are allowed to be focal elements of the approximated version m̲ of m: It's useful for those who are only interested in the single state hypothesis. We can perform it in the 'light' example. Judea Pearl(1988a, chapter 9;[23]1988b[24]and 1990)[25]has argued that it is misleading to interpret belief functions as representing either "probabilities of an event," or "the confidence one has in the probabilities assigned to various outcomes," or "degrees of belief (or confidence, or trust) in a proposition," or "degree of ignorance in a situation." Instead, belief functions represent the probability that a given proposition isprovablefrom a set of other propositions, to which probabilities are assigned.
Confusing probabilities oftruthwith probabilities ofprovabilitymay lead to counterintuitive results in reasoning tasks such as (1) representing incomplete knowledge, (2) belief-updating and (3) evidence pooling. He further demonstrated that, if partial knowledge is encoded and updated by belief function methods, the resulting beliefs cannot serve as a basis for rational decisions. Kłopotek and Wierzchoń[26]proposed to interpret the Dempster–Shafer theory in terms of statistics of decision tables (of therough set theory), whereby the operator of combining evidence should be seen as relational joining of decision tables. In another interpretation M. A. Kłopotek and S. T. Wierzchoń[27]propose to view this theory as describing destructive material processing (under loss of properties), e.g. as in some semiconductor production processes. Under both interpretations reasoning in DST gives correct results, contrary to the earlier probabilistic interpretations, criticized by Pearl in the cited papers and by other researchers. Jøsang proved that Dempster's rule of combination actually is a method for fusing belief constraints.[8]In other situations, such as the cumulative fusion of beliefs, it represents only an approximate fusion operator and generally produces incorrect results. The confusion around the validity of Dempster's rule therefore originates in a failure to interpret correctly the nature of the situations to be modeled. Dempster's rule of combination always produces correct and intuitive results in situations of fusing belief constraints from different sources. In considering preferences one might use thepartial orderof alatticeinstead of thetotal orderof the real line as found in Dempster–Shafer theory. Indeed,Gunther Schmidthas proposed this modification and outlined the method.[28] Given a set of criteriaCand abounded latticeLwith ordering ≤, Schmidt defines arelational measureto be a functionμfrom thepower setofCintoLthat respects the order ⊆ on P(C): and such thatμtakes the empty subset of P(C) to the least element ofL, and takesCto the greatest element ofL. Schmidt comparesμwith the belief function of Shafer, and he also considers a method of combining measures generalizing the approach of Dempster (when new evidence is combined with previously held evidence). He also introduces arelational integraland compares it to theChoquet integralandSugeno integral. Any relationmbetweenCandLmay be introduced as a "direct valuation", then processed with thecalculus of relationsto obtain apossibility measureμ.
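In symbols, the requirements on Schmidt's relational measure described above can be written as (standard notation; 0_L and 1_L denote the least and greatest elements of the bounded lattice L):

\[
A\subseteq B\ \Rightarrow\ \mu(A)\leq\mu(B)\quad\text{for all }A,B\subseteq C,
\qquad \mu(\emptyset)=0_{L},\qquad \mu(C)=1_{L}.
\]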
https://en.wikipedia.org/wiki/Dempster%E2%80%93Shafer_theory
"Further research is needed" (FRIN), "more research is needed" and other variants of similar phrases are commonly used inresearch papers. Theclichéis so common that it has attracted research, regulation and cultural commentary. Someresearch journalshave banned the phrase "more research is needed" on the grounds that it is redundant;[1]it is almost always true and fits almost any article, and so can be taken as understood. A 2004 metareview by theCochrane collaborationof their ownsystematic medical reviewsfound that 93% of the reviews studied made indiscriminate FRIN-like statements, reducing their ability to guide future research. The presence of FRIN had no correlation with thestrength of the evidenceagainst the medical intervention. Authors who thought a treatment was useless were just as likely to recommend researching it further.[2] Indeed, authors may recommend "further research" when, given the existing evidence, further research would be extremely unlikely to be approved by anethics committee.[3] Studies finding that a treatment hasno noticeable effectsare sometimes greeted with statements that "more research is needed" by those convinced that the treatment is effective, but the effect has not yet been found.[4]Since even the largest study can never rule out an infinitesimal effect, an effect can only ever be shown to be insignificant, not non-existent.[5]Similarly,Trish Greenhalgh, Professor of Primary Care Health Sciences at the University of Oxford, argues that FRIN is often used as a way in which a "[l]ack of hard evidence to support the original hypothesis gets reframed as evidence that investment efforts need to be redoubled", and a way to avoid upsetting hopes and vested interests. She has also described FRIN as "an indicator that serious scholarly thinking on the topic has ceased", saying that "it is almost never the only logical conclusion that can be drawn from a set of negative, ambiguous, incomplete or contradictory data."[6] Greenhalgh suggests that, because vague FRIN statements are an argument that "tomorrow's research investments should be pitched into preciselythe same patch of long grassas yesterday's", funding should be refused to those making them. She and others argue that more thought and research is needed into methods for determining where more research is needed.[6][7] Academic journaleditors were banning unqualified FRIN statements as early as 1990, requiring more specific information such as whattypesof research were needed, and what questions they ought to address.[1]Researchers themselves have strongly recommended thatresearch articlesdetail what research is needed.[8][2]This is conventional in some fields.[9][10]Other commentators suggest that articles would benefit by assessing thelikely valueof possible further research.[11] Both the needfulness and needlessness of further research may be overlooked. Theblobbogramleading this article is from asystematic review; it showsclinical trialsof theuse of corticosteroids to hasten lung developmentin pregnancies where a baby is likely to beborn prematurely. Longafterthere was enough evidence to show that this treatment saved babies' lives, the evidence was not widely known, the treatment was not widely used, and further research was done into the same question. 
After the review made the evidence better known, the treatment was used more, preventing thousands of pre-term babies from dying ofinfant respiratory distress syndrome.[12] However, when the treatment was rolled out in lower- and middle-income countries, early data suggested that more pre-term babies died. It was thought that this could be because of a higher risk of infection, which is more likely to kill a baby in places with poor medical care and more malnourished mothers.[12]The 2017 version of the review therefore said that there was "little need" for further research into the usefulness of the treatment in higher-income countries, but further research was needed on optimal dosage and on how to best treat lower-income and higher-risk mothers.[13] Further research was done, and found the treatment did actually benefit babies in lower-income countries, too. The December 2020 version of the review stated that the "evidence [that the treatment saves babies] is robust, regardless of resource setting (high, middle or low)" and that further research should focus on "specific understudied subgroups such as multiple pregnancies and other high-risk obstetric groups, and the risks and benefits in the very early or very late preterm periods".[14] The idea that research papers always end with some variation of FRIN was described as an "old joke" in a 1999epidemiologyeditorial.[8] FRIN has been advocated as a position politicians should take on under-evidenced claims.[15]Requests for further research on questions relevant to political policy can lead to better-informed decisions, but FRIN statements have also been used in bad faith: for instance, to delay political decisions, or as a justification for ignoring existing research knowledge (as was done by nicotine companies). Policymakers may also not know of existing research; they seldom systematically search databases of research literature, preferring to useGoogleand ask colleagues for research papers.[16] FRIN has been advocated as a motto for life, applicable everywhere except research papers;[4]it has been printed on T-shirts,[17]andsatirizedby the "Collectively Unconscious" blog, which reported that an article in the journalSciencehad concluded that "no further research is needed, at all, anywhere, ever".[18] The webcomicxkcdhas also used the phrase as a topic, for self-satire, and as abatheticpunchline.[19]
https://en.wikipedia.org/wiki/Further_research_is_needed
In mathematics, fuzzy sets (also known as uncertain sets) are sets whose elements have degrees of membership. Fuzzy sets were introduced independently by Lotfi A. Zadeh in 1965 as an extension of the classical notion of set.[1][2] At the same time, Salii (1965) defined a more general kind of structure called an "L-relation", which he studied in an abstract algebraic context; fuzzy relations are special cases of L-relations when L is the unit interval [0, 1]. They are now used throughout fuzzy mathematics, having applications in areas such as linguistics (De Cock, Bodenhofer & Kerre 2000), decision-making (Kuzmin 1982), and clustering (Bezdek 1978).

In classical set theory, the membership of elements in a set is assessed in binary terms according to a bivalent condition: an element either belongs or does not belong to the set. By contrast, fuzzy set theory permits the gradual assessment of the membership of elements in a set; this is described with the aid of a membership function valued in the real unit interval [0, 1]. Fuzzy sets generalize classical sets, since the indicator functions (also called characteristic functions) of classical sets are special cases of the membership functions of fuzzy sets, namely those that take only the values 0 and 1.[3] In fuzzy set theory, classical bivalent sets are usually called crisp sets. Fuzzy set theory can be used in a wide range of domains in which information is incomplete or imprecise, such as bioinformatics.[4]

A fuzzy set is a pair (U, m) where U is a set (often required to be non-empty) and m : U → [0, 1] is a membership function. The reference set U (sometimes denoted by Ω or X) is called the universe of discourse, and for each x ∈ U, the value m(x) is called the grade of membership of x in (U, m). The function m = μ_A is called the membership function of the fuzzy set A = (U, m). For a finite set U = {x_1, …, x_n}, the fuzzy set (U, m) is often denoted by {m(x_1)/x_1, …, m(x_n)/x_n}.

Let x ∈ U. Then x is called not included in the fuzzy set (U, m) if m(x) = 0, fully included if m(x) = 1, and partially included if 0 < m(x) < 1. The (crisp) set of all fuzzy sets on a universe U is denoted by SF(U) (or sometimes just F(U)).[citation needed]

For any fuzzy set A = (U, m) and α ∈ [0, 1], the following crisp sets are defined: the α-cut A^{≥α} = {x ∈ U : m(x) ≥ α}, the strong α-cut A^{>α} = {x ∈ U : m(x) > α}, the support Supp(A) = {x ∈ U : m(x) > 0}, and the core (or kernel) Kern(A) = {x ∈ U : m(x) = 1}. Note that some authors understand "kernel" in a different way; see below.

Although the complement of a fuzzy set has a single most common definition, the other main operations, union and intersection, do have some ambiguity. By the definition of the t-norm, the union and intersection are commutative, monotonic, associative, and have both a null and an identity element. For the intersection, these are ∅ and U, respectively, while for the union, these are reversed. However, the union of a fuzzy set and its complement may not result in the full universe U, and their intersection may not give the empty set ∅. Since intersection and union are associative, it is natural to define the intersection and union of a finite family of fuzzy sets recursively.
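These definitions can be made concrete with a small illustrative Python sketch, representing a fuzzy set over a finite universe as a dict of membership grades; the min/max operators used here are the standard pair discussed immediately below, and all names and grades are invented for illustration.

```python
# A fuzzy set on a finite universe, represented as {element: membership grade}.
cold = {"0C": 1.0, "10C": 0.7, "20C": 0.2, "30C": 0.0}
warm = {"0C": 0.0, "10C": 0.3, "20C": 0.9, "30C": 1.0}

def complement(a):
    return {x: 1.0 - mu for x, mu in a.items()}

def intersection(a, b):          # standard t-norm: min
    return {x: min(a[x], b[x]) for x in a}

def union(a, b):                 # standard s-norm (t-conorm): max
    return {x: max(a[x], b[x]) for x in a}

def alpha_cut(a, alpha):         # crisp set of elements with grade >= alpha
    return {x for x, mu in a.items() if mu >= alpha}

def support(a):                  # elements with strictly positive grade
    return {x for x, mu in a.items() if mu > 0.0}

def core(a):                     # elements with grade exactly 1
    return {x for x, mu in a.items() if mu == 1.0}

print(union(cold, warm))                      # element-wise max of the grades
print(intersection(cold, complement(cold)))   # generally not the empty set
print(alpha_cut(cold, 0.5))                   # {'0C', '10C'}
```

Note how the intersection of a fuzzy set with its complement is not empty, illustrating the remark above that the classical laws of excluded middle and non-contradiction need not hold.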
The generally accepted standard operators for the union and intersection of fuzzy sets are the max and min operators: μ_{A∪B}(x) = max(μ_A(x), μ_B(x)) and μ_{A∩B}(x) = min(μ_A(x), μ_B(x)).

Powers A^ν of a fuzzy set are obtained by raising the membership function to the exponent ν; the case of exponent two is special enough to be given a name (the concentration of A). Taking 0^0 = 1, we have A^0 = U and A^1 = A.

In contrast to the general ambiguity of the intersection and union operations, there is clarity for disjoint fuzzy sets: two fuzzy sets A, B are disjoint iff their membership grades are nowhere simultaneously positive, i.e. min(μ_A(x), μ_B(x)) = 0 for every x ∈ U. Bearing in mind that min/max is a t/s-norm pair, any other pair will work here as well. Fuzzy sets are disjoint if and only if their supports are disjoint according to the standard definition for crisp sets.

For disjoint fuzzy sets A, B, any intersection will give ∅, and any union will give the same result, which is denoted by A ⊎ B (the disjoint union), with membership function μ_{A⊎B}(x) = μ_A(x) + μ_B(x); note that only one of the two summands is greater than zero. This can be generalized to finite families of fuzzy sets as follows: given a family A = (A_i)_{i∈I} of fuzzy sets with index set I (e.g. I = {1, 2, 3, ..., n}), the family is (pairwise) disjoint iff the membership grades of any two members are nowhere simultaneously positive. A family of fuzzy sets A = (A_i)_{i∈I} is disjoint iff the family of underlying supports Supp∘A = (Supp(A_i))_{i∈I} is disjoint in the standard sense for families of crisp sets. Independently of the t/s-norm pair, the intersection of a disjoint family of fuzzy sets will again give ∅, while the union has no ambiguity: its membership function is μ(x) = Σ_{i∈I} μ_{A_i}(x), of which again only one summand is greater than zero.

For a fuzzy set A with finite support Supp(A) (i.e. a "finite fuzzy set"), its cardinality (also called scalar cardinality or sigma-count) is given by |A| = Σ_{x∈U} μ_A(x). In the case that U itself is a finite set, the relative cardinality is given by ‖A‖ = |A| / |U|. This can be generalized so that the divisor is a non-empty fuzzy set: for fuzzy sets A, G with G ≠ ∅, we can define the relative cardinality by ‖A/G‖ = |A ∩ G| / |G|, which looks very similar to the expression for conditional probability.

Note: for any fuzzy set A, the membership function μ_A : U → [0, 1] can be regarded as a family μ_A = (μ_A(x))_{x∈U} ∈ [0, 1]^U. The latter is a metric space with several known metrics d. A metric can be derived from a norm (vector norm) ‖·‖ via d(α, β) = ‖α − β‖. For instance, if U is finite, i.e. U = {x_1, x_2, ..., x_n}, such a metric may be defined by d(α, β) = max{|α(x_i) − β(x_i)| : i = 1, ..., n}; for infinite U, the maximum can be replaced by a supremum. Because fuzzy sets are unambiguously defined by their membership functions, this metric can be used to measure distances between fuzzy sets on the same universe: d(A, B) := d(μ_A, μ_B), which in the above example becomes d(A, B) = max{|μ_A(x_i) − μ_B(x_i)| : i = 1, ..., n}. Again, for infinite U the maximum must be replaced by a supremum. Other distances (like the canonical 2-norm) may diverge if infinite fuzzy sets are too different, e.g. ∅ and U. Similarity measures (here denoted by S) may then be derived from the distance.
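Continuing the sketch, the sigma-count, relative cardinality and the maximum-difference distance described above look as follows in the same toy representation (again purely illustrative).

```python
# Scalar cardinality (sigma-count), relative cardinality, and a simple
# maximum-difference distance between fuzzy sets on the same finite universe.
def cardinality(a):
    return sum(a.values())

def relative_cardinality(a, universe_size):
    return cardinality(a) / universe_size

def relative_to(a, g):
    """Relative cardinality of A with respect to a non-empty fuzzy set G,
    using min as the intersection t-norm (cf. conditional probability)."""
    inter = sum(min(a[x], g[x]) for x in a)
    return inter / cardinality(g)

def distance(a, b):
    """Maximum absolute difference of membership grades."""
    return max(abs(a[x] - b[x]) for x in a)

cold = {"0C": 1.0, "10C": 0.7, "20C": 0.2, "30C": 0.0}
warm = {"0C": 0.0, "10C": 0.3, "20C": 0.9, "30C": 1.0}

print(cardinality(cold))              # 1.9
print(relative_cardinality(cold, 4))  # 0.475
print(relative_to(cold, warm))        # sigma-count of the min-intersection over |warm|
print(distance(cold, warm))           # 1.0
```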
One such proposal is due to Koczy; another, after Williams and Steele, is S(A, B) = exp(−α d(A, B)), where α > 0 is a steepness parameter and exp(x) = e^x.[citation needed]

Sometimes, more general variants of the notion of fuzzy set are used, with membership functions taking values in a (fixed or variable) algebra or structure L of a given kind; usually it is required that L be at least a poset or lattice. These are usually called L-fuzzy sets, to distinguish them from those valued over the unit interval. The usual membership functions with values in [0, 1] are then called [0, 1]-valued membership functions. These kinds of generalizations were first considered in 1967 by Joseph Goguen, who was a student of Zadeh.[8] A classical corollary may be indicating truth and membership values by {f, t} instead of {0, 1}.

An extension of fuzzy sets has been provided by Atanassov. An intuitionistic fuzzy set (IFS) A is characterized by two functions, a "degree of membership" μ_A and a "degree of non-membership" ν_A, with μ_A, ν_A : U → [0, 1] and ∀x ∈ U: μ_A(x) + ν_A(x) ≤ 1. This resembles a situation in which a person denoted by x votes for a proposal (μ_A(x)), against it (ν_A(x)), or abstains from voting (the remaining 1 − μ_A(x) − ν_A(x)): after all, we have a percentage of approvals, a percentage of denials, and a percentage of abstentions. For this situation, special "intuitionistic fuzzy" negators, t-norms and s-norms can be defined. With D* = {(α, β) ∈ [0, 1]² : α + β ≤ 1} and by combining both functions into (μ_A, ν_A) : U → D*, this situation resembles a special kind of L-fuzzy set.

Once more, this has been expanded by defining picture fuzzy sets (PFS) as follows: a PFS A is characterized by three functions mapping U to [0, 1], namely μ_A, η_A, ν_A, the "degree of positive membership", "degree of neutral membership" and "degree of negative membership" respectively, with the additional condition ∀x ∈ U: μ_A(x) + η_A(x) + ν_A(x) ≤ 1. This expands the voting example above by the additional possibility of a "refusal of voting". With D* = {(α, β, γ) ∈ [0, 1]³ : α + β + γ ≤ 1} and special "picture fuzzy" negators, t-norms and s-norms, this resembles just another type of L-fuzzy set.[9]

One extension of IFS is what is known as Pythagorean fuzzy sets. Such sets satisfy the constraint μ_A(x)² + ν_A(x)² ≤ 1, which is reminiscent of the Pythagorean theorem.[10][11][12] Pythagorean fuzzy sets can be applied in real-life situations in which the previous condition μ_A(x) + ν_A(x) ≤ 1 is not valid, while the less restrictive condition μ_A(x)² + ν_A(x)² ≤ 1 may still be suitable.[13][14]

As an extension of the case of multi-valued logic, valuations μ : V_o → W of propositional variables V_o into a set of membership degrees W can be thought of as membership functions mapping predicates into fuzzy sets (or, more formally, into an ordered set of fuzzy pairs, called a fuzzy relation).
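The Atanassov and Pythagorean constraints just described are easy to check numerically; the following illustrative snippet classifies a (membership, non-membership) pair according to which condition it satisfies (the function name and the sample numbers are invented).

```python
# Checking which extension of fuzzy sets a (membership, non-membership) pair
# fits into: intuitionistic (Atanassov) vs. Pythagorean.
def classify(mu, nu):
    if not (0.0 <= mu <= 1.0 and 0.0 <= nu <= 1.0):
        return "not a valid pair"
    if mu + nu <= 1.0:
        return "intuitionistic (and Pythagorean)"
    if mu**2 + nu**2 <= 1.0:
        return "Pythagorean only"
    return "neither"

print(classify(0.6, 0.3))  # intuitionistic (and Pythagorean)
print(classify(0.8, 0.5))  # Pythagorean only: 0.8 + 0.5 > 1 but 0.64 + 0.25 <= 1
print(classify(0.9, 0.7))  # neither: 0.81 + 0.49 > 1
```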
With the valuations just described, many-valued logic can be extended to allow for fuzzy premises from which graded conclusions may be drawn.[15] This extension is sometimes called "fuzzy logic in the narrow sense", as opposed to "fuzzy logic in the wider sense", which originated in the engineering fields of automated control and knowledge engineering, and which encompasses many topics involving fuzzy sets and "approximated reasoning".[16] Industrial applications of fuzzy sets in the context of "fuzzy logic in the wider sense" can be found at fuzzy logic.

A fuzzy number[17] is a fuzzy set of the real line that satisfies certain normalisation and convexity conditions; if these conditions are not satisfied, then A is not a fuzzy number. The core of a fuzzy number is a singleton; its location is the single value at which the membership grade equals 1. Fuzzy numbers can be likened to the funfair game "guess your weight", where someone guesses the contestant's weight, with closer guesses being more correct, and where the guesser "wins" if he or she guesses near enough to the contestant's weight, with the actual weight being completely correct (mapping to 1 by the membership function).

The kernel K(A) = Kern(A) of a fuzzy interval A is defined as the "inner" part, without the "outbound" parts where the membership value is constant ad infinitum: the smallest subset of ℝ outside of which μ_A(x) is constant is defined as the kernel. However, there are other concepts of fuzzy numbers and intervals, as some authors do not insist on convexity.

The use of set membership as a key component of category theory can be generalized to fuzzy sets. This approach, which began in 1968 shortly after the introduction of fuzzy set theory,[18] led to the development of Goguen categories in the 21st century.[19][20] In these categories, rather than using two-valued set membership, more general intervals are used, which may be lattices as in L-fuzzy sets.[20][21]

There are numerous mathematical extensions similar to or more general than fuzzy sets. Since fuzzy sets were introduced in 1965 by Zadeh, many new mathematical constructions and theories treating imprecision, inaccuracy, vagueness, uncertainty and vulnerability have been developed. Some of these constructions and theories are extensions of fuzzy set theory, while others attempt to mathematically model inaccuracy, vagueness and uncertainty in a different way.

The fuzzy relation equation is an equation of the form A · R = B, where A and B are fuzzy sets, R is a fuzzy relation, and A · R stands for the composition of A with R.[citation needed]

A measure d of fuzziness for fuzzy sets on a universe U should fulfill conditions of the following kind: d(A) = 0 if A is a crisp set; d(A) is maximal if every membership grade equals 0.5; d(A) ≥ d(B) whenever B is a "sharpened" version of A; and d(A) equals the fuzziness of the complement of A. In this case d(A) is called the entropy of the fuzzy set A. For finite U = {x_1, x_2, ..., x_n} the entropy of a fuzzy set A is given by d(A) = k Σ_{i=1}^{n} S(μ_A(x_i)), where S(x) = H_e(x) = −x ln x − (1 − x) ln(1 − x) is Shannon's function (natural entropy function) and k is a constant depending on the measure unit and the logarithm base used (here the natural base e). The physical interpretation of k is the Boltzmann constant k_B. For a fuzzy set A with a continuous membership function (a fuzzy variable), an analogous entropy can be defined with the sum replaced by an integral over U.

There are many mathematical constructions similar to or more general than fuzzy sets.
Since fuzzy sets were introduced in 1965, many new mathematical constructions and theories treating imprecision, inexactness, ambiguity, and uncertainty have been developed. Some of these constructions and theories are extensions of fuzzy set theory, while others try to mathematically model imprecision and uncertainty in a different way.[24]
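Before leaving fuzzy sets, the entropy formula given above can be made concrete with a short, illustrative Python sketch (taking k = 1; the helper names are invented).

```python
import math

def shannon(x):
    """Shannon's function S(x) = -x ln x - (1-x) ln(1-x), with S(0) = S(1) = 0."""
    if x in (0.0, 1.0):
        return 0.0
    return -x * math.log(x) - (1.0 - x) * math.log(1.0 - x)

def fuzzy_entropy(a, k=1.0):
    """Entropy of a finite fuzzy set following the formula above: k * sum S(mu(x))."""
    return k * sum(shannon(mu) for mu in a.values())

crisp = {"a": 1.0, "b": 0.0, "c": 1.0}            # a crisp set: entropy 0
maximally_fuzzy = {"a": 0.5, "b": 0.5, "c": 0.5}  # every grade 0.5: maximal entropy
print(fuzzy_entropy(crisp))            # 0.0
print(fuzzy_entropy(maximally_fuzzy))  # 3 * ln 2, roughly 2.079
```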
https://en.wikipedia.org/wiki/Fuzzy_set_theory
A Treatise on Probability,[1]published byJohn Maynard Keynesin 1921, provides a much more general logic ofuncertaintythan the more familiar and straightforward 'classical' theories ofprobability.[notes 1][3][notes 2]This has since become known as a "logical-relationist" approach,[5][notes 3]and become regarded as the seminal and still classic account of the logicalinterpretation of probability(orprobabilistic logic), a view of probability that has been continued by such later works asCarnap'sLogical Foundations of ProbabilityandE.T. JaynesProbability Theory: The Logic of Science.[8] Keynes's conception of this generalised notion of probability is that it is a strictly logical relation between evidence and hypothesis, a degree of partial implication. It was in part pre-empted by Bertrand Russell's use of an unpublished version.[9][notes 4] In a 1922 review,Bertrand Russell, the co-author ofPrincipia Mathematica, called it "undoubtedly the most important work on probability that has appeared for a very long time," and said that the "book as a whole is one which it is impossible to praise too highly."[17][notes 5] With recent developments inmachine learningto enable 'artificial intelligence' andbehavioural economicsthe need for a logical approach that neither assumes some unattainable 'objectivity' nor relies on the subjective views of its designers or policy-makers has become more appreciated, and there has been a renewed interest in Keynes's work.[20][21] Here Keynes generalises the conventional concept of numerical probabilities to expressions of uncertainty that are not necessarily quantifiable or even comparable.[notes 6][26] In Chapter 1 'The Meaning of Probability'Keynes notes that one needs to consider the probability of propositions, not events.[notes 7] In Chapter 2 'Probability in Relation to the Theory of knowledge'Keynes considers 'knowledge', 'rational belief' and 'argument' in relation to probability.[29] In Chapter 3 'The Measurement of Probabilities'he considers probability as a not necessarily precise normalised measure[notes 8]and used the example of taking an umbrella in case of rain to illustrate this idea, that generalised probabilities can't always be compared. Is our expectation of rain, when we start out for a walk, always more likely than not, or less likely than not, or as likely as not? I am prepared to argue that on some occasions none of these alternatives hold, and that it will be an arbitrary matter to decide for or against the umbrella. If the barometer is high, but the clouds are black, it is not always rational that one should prevail over the other in our minds, or even that we should balance them, though it will be rational to allow caprice to determine us and to waste no time on the debate.[30] Chapter 4 'The Principle of Indifference'summarises and develops some objections to the over-use of 'the principle of indifference' (otherwise known as 'the principle of insufficient reason') to justify treating some probabilities as necessarily equal.[notes 9] In Chapter 5 'Other Methods of Determining Probabilities'Keynes gives some examples of common fallacies, including: It might plausibly be supposed that evidence would be favourable to our conclusion which is favourable to favourable evidence ... Whilst, however, this argument is frequently employed under conditions, which, if explicitly stated, would justify it, there are also conditions in which this is not so, so that it is not necessarily valid. 
For the very deceptive fallacy involved in the above supposition, Mr. Johnson has suggested to me the name of the Fallacy of the Middle Term.[33]

He also presents some arguments to justify the use of 'direct judgement' to determine that one probability is greater than another in particular cases.[notes 10]

Chapter 6 'Weight of Argument' develops the idea of 'weight of argument' from chapter 3 and discusses the relevance of the 'amount' of evidence in support of a given probability judgement.[notes 11] Chapter 3 further noted the importance of the 'weight' of evidence in addition to any probability:

This comparison turns upon a balance, not between the favourable and the unfavourable evidence, but between the absolute amounts of relevant knowledge and of relevant ignorance respectively. As the relevant evidence at our disposal increases, the magnitude of the probability of the argument may either decrease or increase, according as the new knowledge strengthens the unfavourable or the favourable evidence; but something seems to have increased in either case, we have a more substantial basis upon which to rest our conclusion. I express this by saying that an accession of new evidence increases the weight of an argument. New evidence will sometimes decrease the probability of an argument, but it will always increase its 'weight.'[37]

Chapter 7 provides a 'Historical Retrospect', while Chapter 8 describes 'The Frequency Theory of Probability', noting some limitations and caveats. In particular, he notes difficulties in establishing 'relevance'[38] and, further, the lack of support that the theory gives for common uses of induction and statistics.[39][notes 12] Part 1 concludes with Chapter 9, 'The Constructive Theory of Part I. Summarised', in which Keynes notes the ground to be covered by the subsequent parts.

The second part of the book has been likened to an appendix to Russell and Whitehead's Principia Mathematica.[41] According to Whitehead, Chapter 12 'The Definition and Axioms of Inference and Probability' 'has the great merit that accompanies good symbolism, that essential points which without it are subtle and easily lost sight of, with it become simple and obvious. Also the axioms are good ... The very certainty and ease by which he is enabled to solve difficult questions and to detect ambiguities and errors in the work of his predecessors exemplifies and at the same time almost conceals that advance which he has made.'[42]

Chapter 14 'The Fundamental Theorems of Probable Inference' gives the main results on the addition, multiplication, independence and relevance of conditional probabilities, leading up to an exposition of the 'Inverse principle' (now known as Bayes' rule), incorporating some previously unpublished work from W. E. Johnson correcting some common text-book errors in formulation and fallacies in interpretation, including 'the fallacy of the middle term'.[43]

In Chapter 15 'Numerical Measurement and Approximation of Probabilities' Keynes develops the formalism of interval estimates as examples of generalised probabilities: intervals that overlap are neither greater than, less than, nor equal to each other.

Part 2 concludes with Chapter 17 'Some Problems in Inverse Probability, including Averages'.
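Keynes's treatment of interval-valued probabilities in Chapter 15 amounts to a partial order: overlapping intervals are neither greater than, less than, nor equal to one another. A tiny, illustrative sketch of such a comparison (not Keynes's own formalism, and the helper is invented):

```python
# Interval-valued probabilities are only partially ordered: when two
# intervals overlap, neither is greater, smaller, nor equal to the other.
def compare(a, b):
    """a and b are (low, high) probability intervals."""
    if a == b:
        return "equal"
    if a[1] <= b[0]:
        return "less than"
    if b[1] <= a[0]:
        return "greater than"
    return "incomparable (overlapping)"

print(compare((0.1, 0.3), (0.4, 0.6)))  # less than
print(compare((0.2, 0.5), (0.4, 0.7)))  # incomparable (overlapping)
```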
Keynes' concept of probability is significantly more subject to variation with evidence than the more conventional quantified classical probability.[notes 14]

In the third part Keynes considers under what circumstances conventional inductive reasoning might be applicable to both conventional and generalised probabilities, and how the results might be interpreted. He concludes that inductive arguments only affirm that 'relative to certain evidence there is a probability in its favour'.[45][notes 15]

Chapter 21 'The Nature of Inductive Argument Continued' discusses the practical application of induction, particularly within the sciences.

The kind of fundamental assumption about the character of material laws, on which scientists appear commonly to act, seems to me to be much less simple than the bare principle of Uniformity. They appear to assume something much more like what mathematicians call the principle of the superposition of small effects, or, as I prefer to call it, in this connection, the atomic character of natural law. ... ... Yet there might well be quite different laws for wholes of different degrees of complexity, and laws of connection between complexes which could not be stated in terms of laws connecting individual parts. In this case natural law would be organic and not, as it is generally supposed, atomic.[46][notes 16]

Part 3 concludes with Chapter 23 'Some Historical Notes on Induction'. This notes that Francis Bacon and John Stuart Mill had implicitly made assumptions similar to those Keynes criticised above, but that nevertheless their arguments provide useful insights.[48]

In the fourth part Keynes considers some broader issues of application and interpretation. He concludes this part with Chapter 26 'The Application of Probability to Conduct'. Here Keynes notes that the conventional notion of utility as 'mathematical expectation' (summing value times probability) is derived from gambling. He doubts that value is 'subject to the laws of arithmetic' and in any case cites Part 1 as denying that probabilities are. He further notes that often 'weights' are relevant and that in any case it 'assumes that an even chance of heaven or hell is precisely as much to be desired as the certain attainment of a state of mediocrity'.[49] He goes on to expand on these objections to what is known by economists as the expected utility hypothesis, particularly with regard to extreme cases.[notes 17]

Keynes ends by noting:

The chance that a man of 56 taken at random will die within a day ... is practically disregarded by a man of 56 who knows his health to be good.[notes 18]

and

To a stranger the probability that I shall send a letter to the post unstamped may be derived from the statistics of the Post Office; for me those figures would have not the slightest bearing on the situation.[51][notes 19]

Keynes goes beyond induction to consider statistical inference, particularly as then used by the sciences. In Chapter 28 'The Law of Great Numbers' Keynes attributes to Poisson the view that 'in the long [run] ... each class of events does eventually occur in a definite proportion of cases.'[53] He goes on:

The existence of numerous instances of the Law of Great Numbers, or of something of the kind, is absolutely essential for the importance of Statistical Induction. Apart from this the more precise parts of statistics, the collection of facts for the prediction of future frequencies and associations, would be nearly useless. But the 'Law of Great Numbers' is not at all a good name for the principle which underlies Statistical Induction.
The 'Stability of Statistical Frequencies ' would be a much better name for it. The former suggests, as perhaps Poisson intended to suggest, but what is certainly false, that every class of event shows statistical regularity of occurrence if only one takes a sufficient number of instances of it. It also encourages the method of procedure, by which it is thought legitimate to take any observed degree of frequency or association, which is shown in a fairly numerous set of statistics and to assume with insufficient investigation that, because the statistics are numerous, the observed degree of frequency is therefore stable. Observation shows that some statistical frequencies are, within narrower or wider limits, stable. But stable frequencies are not very common, and cannot be assumed lightly.[54] The key chapter is Chapter 32 'The Inductive Use of Statistical Frequencies for the Determination of Probabilitya posteriori- The Method of Lexis'. After citing Lexis' observations on both 'subnormal' and 'supernormal' dispersion, he notes that 'a supernormal dispersion [can] also arise out ofconnexiteor organic connection between the successive terms.[55] He concludes with Chapter 33, ‘An Outline of a Constructive Theory’. He notes a significant limitation of conventional statistical methods, as then used: Where there is no stability at all and the frequencies are chaotic, the resulting series can be described as 'non-statistical.' Amongst 'statistical series ' we may term 'independent series' those of which the instances are independent and the stability normal, and 'organic series', those of which the instances are mutually dependent and the stability abnormal, whether in excess or in defect.[56] Keynes also deals with the special case where the conventional notion of probability seems reasonable: There is a great difference between the proposition "It is probable that every instance of this generalisation is true" and the proposition "It is probable of any instance of this generalisation taken at random that it is true." The latter proposition may remain valid, even if it is certain that some instances of the generalisation are false. It is more likely than not, for example, that any number will be divisible either by two or by three, but it is not more likely than not that all numbers are divisible either by two or by three. The first type of proposition has been discussed in Part III. under the name of Universal Induction. The latter belongs to Inductive Correlation or Statistical Induction, an attempt at the logical analysis of which must be my final task. His final paragraph reveals Keynes views on the significance of his findings, based on the then conventional view of classical science as traditionally understood at Cambridge: In laying the foundations of the subject of Probability, I have departed a good deal from the conception of it which governed the minds of Laplace and Quetelet and has dominated through their influence the thought of the past century, though I believe that Leibniz and Hume might have read what I have written with sympathy. 
But in taking leave of Probability, I should like to say that, in my judgment; the practical usefulness of those modes of inference, here termed Universal and Statistical Induction, on the validity of which the boasted knowledge of modern science depends, can only exist and I do not now pause to inquire again whether such an argument must be circular if the universe of phenomena does in fact present those peculiar characteristics of atomism and limited variety which appear more and more clearly as the ultimate result to which material science is tending …. Here, though I have complained sometimes at their want of logic, I am in fundamental sympathy with the deep underlying conceptions of the statistical theory of the day. If the contemporary doctrines of Biology and Physics remain tenable, we may have a remarkable, if undeserved, justification of some of the methods of the traditional Calculus of Probabilities.[notes 20] The above assumptions of non-organic ‘characteristics of atomism and limited variety’ and hence the applicability of the then conventional statistical methods was not long to remain credible, even for the natural sciences,[58][59][60]and some economists, notably in the US, applied some of his ideas in the interwar years,[61][62]although some philosophers continued to find it 'very puzzling indeed'.[63][notes 21][notes 22] Keynes had also noted in Chapter 21 the limitations of 'mathematical expectation' for 'rational' decision making.[67][68]Keynes developed this point in his more well-knownGeneral Theory of Employment, Interest and Moneyand subsequently, specifically in his thinking on the nature and role of long-term expectation in economics,[69]notably onAnimal spirits.[70][notes 23] Keynes' ideas found practical application by Turing and Good atBletchley Parkduring WWII, which practice formed the basis for the subsequent development of 'modern Bayesian probability',[73]and the notion of imprecise probabilities is now well established in statistics, with a wide range of important applications.[74][notes 24] The significance of 'true' uncertainty beyond mere precise probabilities had already been highlighted byFrank Knight[76]and the additional insights of Keynes tended to be overlooked.[notes 25]From the late 60s onwards even this limited aspect began to be less appreciated by economists, and was even disregarded or discounted by many 'Keynesian' economists.[78]After the financial crashes of 2007-9 'mainstream economics' was regarded as having been 'further away' from Keynes' ideas than ever before.[79]But subsequently there was a partial 'return of the master'[3]leading to calls for a 'paradigm shift' building further on Keynes' insights into 'the nature of behaviour under conditions of uncertainty'.[80] The centenary event organised by the University of Oxford and supported by TheAlan Turing Institutefor the Treatise andFrank Knight's Risk, Uncertainty, and Profit noted:[81] In Risk, Uncertainty, and Profit, Knight put forward the vital difference between risk, where empirical evaluation of unknown outcomes can still be applicable, and uncertainty, where no quantified measurement is valid but subjective estimate. In A Treatise on Probability, Keynes argued that the concept of probability should be about the logical implication from premises to hypotheses, in contrast to the classical quantified perspective of probability. 
The fundamental uncertainty proposed in both works has then deeply influenced the development of economic and probability theory in the past century and it still resonates with our lives today, considering the ups and downs that the world economy is experiencing. However it has often been regarded as more philosophical in nature despite extensive mathematical formulations and its implications for practice.[82][83][8]
https://en.wikipedia.org/wiki/A_Treatise_on_Probability
Morphological analysisorgeneral morphological analysisis a method for exploring possible solutions to a multi-dimensional, non-quantified complex problem. It was developed by Swiss astronomerFritz Zwicky.[1]General morphology has found use in fields includingengineering design,technological forecasting,organizational developmentand policy analysis.[2] General morphology was developed by Fritz Zwicky, the Bulgarian-born, Swiss-nationalastrophysicistbased at theCalifornia Institute of Technology. Among others, Zwicky applied morphological analysis to astronomical studies and jet androcket propulsionsystems. As a problem-structuring andproblem-solvingtechnique, morphological analysis was designed for multi-dimensional, non-quantifiable problems where causal modelling and simulation do not function well, or at all. Zwicky developed this approach to address seemingly non-reducible complexity: using the technique ofcross-consistency assessment(CCA),[1]the system allows for reduction by identifying the possible solutions that actually exist, eliminating the illogical solution combinations in a grid box (sometimes called amorphological box) rather than reducing the number of variables involved.[3] Problems that involve many governing factors, where most of them cannot be expressed numerically can be well suited for morphological analysis. The conventional approach is to break a complex system into parts, isolate the parts (dropping the 'trivial' elements) whose contributions are critical to the output and solve the simplified system for desired scenarios. The disadvantage of this method is that many real-world phenomena do not have obviously trivial elements and cannot be simplified. Morphological analysis works backwards from the output towards the system internals without a simplification step.[4]The system's interactions are fully accounted for in the analysis. Robert A. Heinleinhas his characters use a "Zwicky box" inTime Enough for Love, to figure out what's available to break the ennui of his 2000-year-old character. David Brinused "Zwicky Choice Boxes" inSundiveras a means to help solve a murder mystery.
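As a rough illustration of how a morphological box and cross-consistency assessment work in practice, the following Python sketch enumerates all combinations of a small invented parameter grid and filters out the pairs judged inconsistent; the parameters, values and exclusions are made up for illustration and are not from Zwicky's work.

```python
from itertools import product

# A toy morphological box: each dimension of the problem and its possible values.
box = {
    "power source": ["battery", "mains", "solar"],
    "enclosure":    ["indoor", "outdoor"],
    "connectivity": ["wired", "wireless"],
}

# Cross-consistency assessment: pairs of values judged mutually inconsistent.
inconsistent_pairs = {
    ("solar", "indoor"),    # a solar-powered unit makes little sense indoors
    ("mains", "wireless"),  # an example of a pair ruled out by some design policy
}

def consistent(config):
    """A configuration survives if none of its value pairs is excluded."""
    values = set(config)
    return not any(set(pair) <= values for pair in inconsistent_pairs)

all_configs = list(product(*box.values()))
solutions = [c for c in all_configs if consistent(c)]
print(len(all_configs), "raw combinations ->", len(solutions), "consistent ones")
for s in solutions:
    print(dict(zip(box.keys(), s)))
```

This mirrors the approach described above: the number of variables is never reduced, but illogical combinations are removed from the grid, leaving only the configurations that can actually exist.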
https://en.wikipedia.org/wiki/Morphological_analysis_(problem-solving)
Inquantum mechanics,Schrödinger's catis athought experimentconcerningquantum superposition. In the thought experiment, a hypotheticalcatin a closed box may be considered to be simultaneously both alive and dead while it is unobserved, as a result of its fate being linked to a randomsubatomicevent that may or may not occur. This experiment, viewed this way, is described as aparadox. This thought experiment was devised by physicistErwin Schrödingerin 1935[1]in a discussion withAlbert Einstein[2]to illustrate what Schrödinger saw as the problems of theCopenhagen interpretationof quantum mechanics. In Schrödinger's original formulation, a cat, a flask of poison, and aradioactivesource are placed in a sealed box. If an internal radiation monitor such as aGeiger counterdetects radioactivity (a single atom decaying), the flask is shattered, releasing the poison, which kills the cat. If no decaying atom triggers the monitor, the cat remains alive. The Copenhagen interpretation implies that the cat is thereforesimultaneouslyaliveanddead. Yet, when one looks in the box, one sees the cateitheraliveordead, not both aliveanddead. This poses the question of when exactlyquantum superpositionends and reality resolves into one possibility or the other. Although originally a critique on the Copenhagen interpretation, Schrödinger's seemingly paradoxical thought experiment became part of the foundation of quantum mechanics. It is often featured in theoretical discussions of theinterpretations of quantum mechanics, particularly in situations involving themeasurement problem. As a result, Schrödinger's cat has had enduringappeal in popular culture. The experiment is not intended to be actually performed on a cat, but rather as an easily understandable illustration of the behavior of atoms. Experiments at the atomic scale have been carried out, showing that very small objects may exist as superpositions, but superposing an object as large as a cat would pose considerable technical difficulties.[3] Fundamentally, the Schrödinger's cat experiment asks how long quantum superpositions last and when (orwhether) they collapse. Differentinterpretations of the mathematics of quantum mechanicshave been proposed that give different explanations for this process. Schrödinger intended his thought experiment as a discussion of theEPR article—named after its authorsEinstein,Podolsky, andRosen—in 1935.[4][5]The EPR article highlighted the counterintuitive nature ofquantum superpositions, in which a quantum system for two particles does not separate[6]: 150even when the particles are detected far from their last point of contact. The EPR paper concludes with a claim that this lack of separability meant that quantum mechanics as a theory of reality was incomplete. Schrödinger andEinsteinexchanged letters aboutEinstein's EPR article, in the course of which Einstein pointed out that the state of anunstablekeg ofgunpowderwill, after a while, contain a superposition of both exploded and unexploded states.[5] To further illustrate, Schrödinger described how one could, in principle, create a superposition in a large-scale system by making it dependent on a quantum particle that was in a superposition. He proposed a scenario with a cat in a closed steel chamber, wherein the cat's life or death depended on the state of aradioactiveatom, whether it had decayed and emitted radiation or not. According to Schrödinger, the Copenhagen interpretation implies thatthe cat remains both alive and deaduntil the state has been observed. 
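In the standard bra-ket notation of quantum mechanics (a modern formalisation, not Schrödinger's own wording), the situation described above is usually written as an entangled, equal-weight superposition of the two correlated outcomes:

```latex
|\psi\rangle \;=\; \tfrac{1}{\sqrt{2}}\left(\,|\text{undecayed}\rangle \otimes |\text{alive}\rangle \;+\; |\text{decayed}\rangle \otimes |\text{dead}\rangle\,\right)
```

Opening the box corresponds to a measurement that finds one of the two branches, each with probability 1/2.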
Schrödinger did not wish to promote the idea of dead-and-live cats as a serious possibility; on the contrary, he intended the example to illustrate the absurdity of the existing view of quantum mechanics,[1]thus employingreductio ad absurdum. Since Schrödinger's time, variousinterpretations of the mathematics of quantum mechanicshave been advanced by physicists, some of which regard the "alive and dead" cat superposition as quite real, others do not.[7][8]Intended as a critique of the Copenhagen interpretation (the prevailing orthodoxy in 1935), the Schrödinger's cat thought experiment remains atouchstonefor modern interpretations of quantum mechanics and can be used to illustrate and compare their strengths and weaknesses.[9] Schrödinger wrote:[1][10] One can contrive even completely burlesque [farcical] cases. A cat is put in a steel chamber along with the following infernal device (which must be secured against direct interference by the cat): in aGeiger counter, there is a tiny amount of radioactive substance, so tiny that in the course of an hour one of the atoms will perhaps decay, but also, with equal probability, that none of them will; if it does happen, the counter tube will discharge and through a relay release a hammer that will shatter a small flask ofhydrocyanic acid. If one has left this entire system to itself for an hour, one would tell oneself that the cat is still alive if no atom hasdecayedin the meantime. Even a single atomic decay would have poisoned it. Thepsi-functionof the entire system would express this by having in it the living and dead cat (pardon the expression) mixed or spread out in equal parts. It is typical of these cases that an indeterminacy originally restricted to the atomic domain turns into a sensually observable [macroscopic] indeterminacy, which can then be resolved by direct observation. This prevents us from so naïvely accepting a "blurred model" as representative of reality. Per se, it would not embody anything unclear or contradictory. There is a difference between a shaky or out-of-focus photograph and a snapshot of clouds and fog banks. Schrödinger developed his famousthought experimentin correspondence with Einstein. He suggested this 'quite ridiculous case' to illustrate his conclusion that the wave function cannot represent reality.[6]:153The wave function description of the complete cat system implies that the reality of the cat mixes the living and dead cat.[6]: 154Einstein was impressed by the ability of the thought experiment to highlight these issues. In a letter to Schrödinger dated 1950, he wrote:[6]: 157 You are the only contemporary physicist, besidesLaue, who sees that one cannot get around the assumption of reality, if only one is honest. Most of them simply do not see what sort of risky game they are playing with reality—reality as something independent of what is experimentally established. Their interpretation is, however, refuted most elegantly by your system of radioactive atom + amplifier + charge of gun powder + cat in a box, in which the psi-function of the system contains both the cat alive and blown to bits. Nobody really doubts that the presence or absence of the cat is something independent of the act of observation.[11] Note that the charge of gunpowder is not mentioned in Schrödinger's setup, which uses a Geiger counter as an amplifier and hydrocyanic poison instead of gunpowder. 
The gunpowder had been mentioned in Einstein's original suggestion to Schrödinger 15 years before, and Einstein carried it forward to the present discussion.[5] In modern terms Schrodinger's hypothetical cat experiment describes themeasurement problem: quantum theory describes the cat system as a combination of two possible outcomes but only one outcome is ever observed.[12]:57[13]:1269The experiment poses the question, "whendoes a quantum system stop existing as a superposition of states and become one or the other?" (More technically, when does the actual quantum state stop being a non-triviallinear combinationof states, each of which resembles different classical states, and instead begin to have a unique classical description?) Standard microscopic quantum mechanics describes multiple possible outcomes of experiments but only one outcome is observed. The thought experiment illustrates this apparent paradox. Our intuition says that the cat cannot be in more than one state simultaneously—yet the quantum mechanical description of the thought experiment requires such a condition. Since Schrödinger's time, other interpretations of quantum mechanics have been proposed that give different answers to the questions posed by Schrödinger's cat of how long superpositions last and when (orwhether) they collapse. A commonly held interpretation of quantum mechanics is the Copenhagen interpretation.[14]In the Copenhagen interpretation, a measurement results in only one state of a superposition. This thought experiment makes apparent the fact that this interpretation simply provides no explanation for the state of the cat while the box is closed. The wavefunction description of the system consists of a superposition of the states "decayed nucleus/dead cat" and "undecayed nucleus/living cat". Only when the box is opened and observed can we make a statement about the cat.[6]:157 In 1932,John von Neumanndescribed in his bookMathematical Foundations of Quantum Mechanicsa pattern where the radioactive source is observed by a device, which itself is observed by another device and so on. It makes no difference in the predictions of quantum theory where along this chain of causal effects the superposition collapses.[15]This potentially infinite chain could be broken if the last device is replaced by a conscious observer. This solved the problem because it was claimed that an individual's consciousness cannot be multiple.[16]Eugene Wigner asserted that an observer is necessary for a collapse to one or the other (e.g., either a live cat or a dead cat) of the terms on the right-hand side of awave function. Wigner discussed the interpretation in a thought experiment known asWigner's friend.[17] Wigner supposed that a friend opened the box and observed the cat without telling anyone. From Wigner's conscious perspective, the friend is now part of the wave function and has seen a live cat and seen a dead cat. To a third person's conscious perspective, Wigner himself becomes part of the wave function once Wigner learns the outcome from the friend. This could be extended indefinitely.[17] A resolution of the paradox is that the triggering of the Geiger counter counts as a measurement of the state of the radioactive substance. Because a measurement has already occurred deciding the state of the cat, the subsequent observation by a human records only what has already occurred.[18]Analysis of an actual experiment byRoger Carpenterand A. J. 
Anderson found that measurement alone (for example by a Geiger counter) is sufficient to collapse a quantum wave function before any human knows of the result.[19]The apparatus indicates one of two colors depending on the outcome. The human observer sees which color is indicated, but they don't consciously know which outcome the color represents. A second human, the one who set up the apparatus, is told of the color and becomes conscious of the outcome, and the box is opened to check if the outcome matches.[15]However, it is disputed whether merely observing the color counts as a conscious observation of the outcome.[20] Analysis of the work ofNiels Bohr, one of the main scientists associated with the Copenhagen interpretation, suggests he viewed the state of the cat before the box is opened as indeterminate. The superposition itself had no physical meaning to Bohr: Schrödinger's cat would be either dead or alive long before the box is opened but the cat and box form a inseparable combination.[21]Bohr saw no role for a human observer.[22]: 35Bohr emphasized the classical nature of measurement results. An "irreversible" or effectively irreversible process imparts the classical behavior of "observation" or "measurement".[23][24][25] In 1957,Hugh Everettformulated the many-worlds interpretation of quantum mechanics, which does not single out observation as a special process. In the many-worlds interpretation, both alive and dead states of the cat persist after the box is opened, but aredecoherentfrom each other. In other words, when the box is opened, the observer and the possibly-dead cat split into an observer looking at a box with a dead cat and an observer looking at a box with a live cat. But since the dead and alive states are decoherent, there is no communication or interaction between them. When opening the box, the observer becomes entangled with the cat, so "observer states" corresponding to the cat's being alive and dead are formed; each observer state isentangled, or linked, with the cat so that the observation of the cat's state and the cat's state correspond with each other. Quantum decoherence ensures that the different outcomes have no interaction with each other. Decoherence is generally considered to prevent simultaneous observation of multiple states.[26][27] A variant of the Schrödinger's cat experiment, known as thequantum suicidemachine, has been proposed by cosmologistMax Tegmark. It examines the Schrödinger's cat experiment from the point of view of the cat, and argues that by using this approach, one may be able to distinguish between the Copenhagen interpretation and many-worlds. Theensemble interpretationstates that superpositions are nothing but subensembles of a larger statistical ensemble. The state vector would not apply to individual cat experiments, but only to the statistics of many similarly prepared cat experiments. Proponents of this interpretation state that this makes the Schrödinger's cat paradox a trivial matter, or a non-issue. This interpretation serves todiscardthe idea that a single physical system in quantum mechanics has a mathematical description that corresponds to it in any way.[28] Therelational interpretationmakes no fundamental distinction between the human experimenter, the cat, and the apparatus or between animate and inanimate systems; all are quantum systems governed by the same rules of wavefunctionevolution, and all may be considered "observers". 
But the relational interpretation allows that different observers can give different accounts of the same series of events, depending on the information they have about the system.[29]The cat can be considered an observer of the apparatus; meanwhile, the experimenter can be considered another observer of the system in the box (the cat plus the apparatus). Before the box is opened, the cat, by nature of its being alive or dead, has information about the state of the apparatus (the atom has either decayed or not decayed); but the experimenter does not have information about the state of the box contents. In this way, the two observers simultaneously have different accounts of the situation: To the cat, the wavefunction of the apparatus has appeared to "collapse"; to the experimenter, the contents of the box appear to be in superposition. Not until the box is opened, and both observers have the same information about what happened, do both system states appear to "collapse" into the same definite result, a cat that is either alive or dead. In thetransactional interpretationthe apparatus emits an advanced wave backward in time, which combined with the wave that the source emits forward in time, forms a standing wave. The waves are seen as physically real, and the apparatus is considered an "observer". In the transactional interpretation, the collapse of the wavefunction is "atemporal" and occurs along the whole transaction between the source and the apparatus. The cat is never in superposition. Rather the cat is only in one state at any particular time, regardless of when the human experimenter looks in the box. The transactional interpretation resolves this quantum paradox.[30] According toobjective collapse theories, superpositions are destroyed spontaneously (irrespective of external observation) when some objective physical threshold (of time, mass, temperature,irreversibility, etc.) is reached. Thus, the cat would be expected to have settled into a definite state long before the box is opened. This could loosely be phrased as "the cat observes itself" or "the environment observes the cat". Objective collapse theories require a modification of standard quantum mechanics to allow superpositions to be destroyed by the process of time evolution.[31]These theories could ideally be tested by creating mesoscopic superposition states in the experiment. For instance, energy cat states has been proposed as a precise detector of the quantum gravity related energy decoherence models.[32] The experiment as described is a purely theoretical one, and the machine proposed is not known to have been constructed. However, successful experiments involving similar principles, e.g. superpositions ofrelatively large(by the standards of quantum physics) objects have been performed.[33][better source needed]These experiments do not show that a cat-sized object can be superposed, but the known upper limit on "cat states" has been pushed upwards by them. In many cases the state is short-lived, even when cooled to nearabsolute zero. 
In quantum computing the phrase "cat state" sometimes refers to the GHZ state, wherein several qubits are in an equal superposition of all being 0 and all being 1; e.g., |ψ⟩ = (|00…0⟩ + |11…1⟩)/√2. According to at least one proposal, it may be possible to determine the state of the cat before observing it.[40][41] In August 2020, physicists presented studies involving interpretations of quantum mechanics that are related to the Schrödinger's cat and Wigner's friend paradoxes, resulting in conclusions that challenge seemingly established assumptions about reality.[42][43][44]
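As a purely illustrative numerical sketch (plain NumPy rather than any particular quantum computing library), the three-qubit GHZ "cat state" just mentioned can be written out as a state vector:

```python
import numpy as np

n = 3                        # number of qubits
dim = 2 ** n
ghz = np.zeros(dim)
ghz[0] = 1 / np.sqrt(2)      # amplitude of |000>
ghz[-1] = 1 / np.sqrt(2)     # amplitude of |111>

# Measurement probabilities: only the all-0 and all-1 outcomes can occur,
# each with probability 1/2, the discrete analogue of "alive" vs "dead".
print(np.abs(ghz) ** 2)
```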
https://en.wikipedia.org/wiki/Schr%C3%B6dinger%27s_cat
Scientific consensusis the generally held judgment, position, and opinion of themajorityor thesupermajorityofscientistsin aparticular fieldof study at any particular time.[1][2] Consensus is achieved throughscholarly communicationatconferences, thepublicationprocess, replication ofreproducibleresults by others, scholarlydebate,[3][4][5][6]andpeer review. A conference meant to create a consensus is termed as a consensus conference.[7][8][9]Such measures lead to a situation in which those within the discipline can often recognize such a consensus where it exists; however, communicating to outsiders that consensus has been reached can be difficult, because the "normal" debates through which science progresses may appear to outsiders as contestation.[10]On occasion, scientific institutes issue position statements intended to communicate a summary of the science from the "inside" to the "outside" of the scientific community, or consensus review articles[11]orsurveys[12]may be published. In cases where there is little controversy regarding the subject under study, establishing the consensus can be quite straightforward. Popular or political debate on subjects that are controversial within the public sphere but not necessarily controversial within the scientific community may invoke scientific consensus: note such topics asevolution,[13][14]climate change,[15]the safety ofgenetically modified organisms,[16]or the lack of a link betweenMMR vaccinations and autism.[10] Scientific consensus is related to (and sometimes used to mean)convergent evidence, that is, the concept that independent sources of evidence converge on a conclusion.[17][18] There are many philosophical and historical theories as to how scientific consensus changes over time. Because the history of scientific change is extremely complicated, and because there is a tendency to project "winners" and "losers" onto the past in relation to thecurrentscientific consensus, it is very difficult to come up with accurate and rigorous models for scientific change.[19]This is made exceedingly difficult also in part because each of the various branches of science functions in somewhat different ways with different forms of evidence and experimental approaches.[20][21] Most models of scientific change rely on new data produced by scientificexperiment.Karl Popperproposed that since no amount of experiments could everprovea scientific theory, but a single experiment coulddisproveone, science should be based onfalsification.[22]Whilst this forms a logical theory for science, it is in a sense "timeless" and does not necessarily reflect a view on how science should progress over time. Among the most influential challengers of this approach wasThomas Kuhn, who argued instead that experimentaldataalways provide some data which cannot fit completely into a theory, and that falsification alone did not result in scientific change or an undermining of scientific consensus. He proposed that scientific consensus worked in the form of "paradigms", which were interconnected theories and underlying assumptions about the nature of the theory itself which connected various researchers in a given field. Kuhn argued that only after the accumulation of many "significant" anomalies would scientific consensus enter a period of "crisis". At this point, new theories would be sought out, and eventually one paradigm would triumph over the old one – a series ofparadigm shiftsrather than a linear progression towards truth. 
Kuhn's model also emphasized more clearly the social and personal aspects of theory change, demonstrating through historical examples that scientific consensus was never truly a matter of pure logic or pure facts.[23] However, these periods of 'normal' and 'crisis' science are not mutually exclusive. Research shows that these are different modes of practice, more than different historical periods.[10] Perception of whether a scientific consensus exists on a given issue, and how strong that perception is, has been described as a "gateway belief" upon which other beliefs and then action are based.[28] In public policy debates, the assertion that there exists a consensus of scientists in a particular field is often used as an argument for the validity of a theory. Similarly, arguments for a lack of scientific consensus are often used to support doubt about the theory.[citation needed] For example, the scientific consensus on the causes of global warming is that global surface temperatures have increased in recent decades and that the trend is caused primarily by human-induced emissions of greenhouse gases.[29][30][31] The historian of science Naomi Oreskes published an article in Science reporting that a survey of the abstracts of 928 science articles published between 1993 and 2003 showed none which disagreed explicitly with the notion of anthropogenic global warming.[29] In an editorial published in The Washington Post, Oreskes stated that those who opposed these scientific findings are amplifying the normal range of scientific uncertainty about any facts into an appearance that there is a great scientific disagreement, or a lack of scientific consensus.[32] Oreskes's findings were replicated by other methods that require no interpretation.[10] The theory of evolution through natural selection is also supported by an overwhelming scientific consensus; it is one of the most reliable and empirically tested theories in science.[33][34] Opponents of evolution claim that there is significant dissent on evolution within the scientific community.[35] The wedge strategy, a plan to promote intelligent design, depended greatly on seeding and building on public perceptions of absence of consensus on evolution.[36] The inherent uncertainty in science, where theories are never proven but can only be disproven (see falsifiability), poses a problem for politicians, policymakers, lawyers, and business professionals. Where scientific or philosophical questions can often languish in uncertainty for decades within their disciplinary settings, policymakers are faced with the problem of making sound decisions based on the currently available data, even if it is likely not a final form of the "truth". The tricky part is discerning what is close enough to "final truth". For example, social action against smoking probably came too long after science was 'pretty consensual'.[10] Certain domains, such as the approval of certain technologies for public consumption, can have vast and far-reaching political, economic, and human effects should things run awry with the predictions of scientists. However, insofar as there is an expectation that policy in a given field reflect knowable and pertinent data and well-accepted models of the relationships between observable phenomena, there is little good alternative for policy makers than to rely on so much of what may fairly be called 'the scientific consensus' in guiding policy design and implementation, at least in circumstances where the need for policy intervention is compelling.
While science cannot supply 'absolute truth' (or even its complement 'absolute error') its utility is bound up with the capacity to guide policy in the direction of increased public good and away from public harm. Seen in this way, the demand that policy rely only on what is proven to be "scientific truth" would be a prescription for policy paralysis and amount in practice to advocacy of acceptance of all of the quantified and unquantified costs and risks associated with policy inaction.[10] No part of policy formation on the basis of the ostensible scientific consensus precludes persistent review either of the relevant scientific consensus or the tangible results of policy. Indeed, the same reasons that drove reliance upon the consensus drives the continued evaluation of this reliance over time – and adjusting policy as needed.[citation needed]
https://en.wikipedia.org/wiki/Scientific_consensus
Inphysics,statistical mechanicsis a mathematical framework that appliesstatistical methodsandprobability theoryto large assemblies of microscopic entities. Sometimes calledstatistical physicsorstatistical thermodynamics, its applications include many problems in a wide variety of fields such asbiology,[1]neuroscience,[2]computer science,[3][4]information theory[5]andsociology.[6]Its main purpose is to clarify the properties of matter in aggregate, in terms of physical laws governing atomic motion.[7][8] Statistical mechanics arose out of the development ofclassical thermodynamics, a field for which it was successful in explaining macroscopic physical properties—such astemperature,pressure, andheat capacity—in terms of microscopic parameters that fluctuate about average values and are characterized byprobability distributions.[9]: 1–4 While classical thermodynamics is primarily concerned withthermodynamic equilibrium, statistical mechanics has been applied innon-equilibrium statistical mechanicsto the issues of microscopically modeling the speed ofirreversible processesthat are driven by imbalances.[9]: 3Examples of such processes includechemical reactionsand flows of particles and heat. Thefluctuation–dissipation theoremis the basic knowledge obtained from applying non-equilibrium statistical mechanics to study the simplest non-equilibrium situation of a steady state current flow in a system of many particles.[9]: 572–573 In 1738, Swiss physicist and mathematicianDaniel BernoullipublishedHydrodynamicawhich laid the basis for thekinetic theory of gases. In this work, Bernoulli posited the argument, still used to this day, that gases consist of great numbers of molecules moving in all directions, that their impact on a surface causes the gas pressure that we feel, and that what we experience asheatis simply the kinetic energy of their motion.[10] The founding of the field of statistical mechanics is generally credited to three physicists: In 1859, after reading a paper on the diffusion of molecules byRudolf Clausius, Scottish physicistJames Clerk Maxwellformulated theMaxwell distributionof molecular velocities, which gave the proportion of molecules having a certain velocity in a specific range.[11]This was the first-ever statistical law in physics.[12]Maxwell also gave the first mechanical argument that molecular collisions entail an equalization of temperatures and hence a tendency towards equilibrium.[13]Five years later, in 1864,Ludwig Boltzmann, a young student in Vienna, came across Maxwell's paper and spent much of his life developing the subject further. Statistical mechanics was initiated in the 1870s with the work of Boltzmann, much of which was collectively published in his 1896Lectures on Gas Theory.[14]Boltzmann's original papers on the statistical interpretation of thermodynamics, theH-theorem,transport theory,thermal equilibrium, theequation of stateof gases, and similar subjects, occupy about 2,000 pages in the proceedings of the Vienna Academy and other societies. Boltzmann introduced the concept of an equilibrium statistical ensemble and also investigated for the first time non-equilibrium statistical mechanics, with hisH-theorem. The term "statistical mechanics" was coined by the American mathematical physicistJ. Willard Gibbsin 1884.[15]According to Gibbs, the term "statistical", in the context of mechanics, i.e. 
statistical mechanics, was first used by the Scottish physicistJames Clerk Maxwellin 1871: "In dealing with masses of matter, while we do not perceive the individual molecules, we are compelled to adopt what I have described as the statistical method of calculation, and to abandon the strict dynamical method, in which we follow every motion by the calculus." "Probabilistic mechanics" might today seem a more appropriate term, but "statistical mechanics" is firmly entrenched.[17]Shortly before his death, Gibbs published in 1902Elementary Principles in Statistical Mechanics, a book which formalized statistical mechanics as a fully general approach to address all mechanical systems—macroscopic or microscopic, gaseous or non-gaseous.[18]Gibbs' methods were initially derived in the frameworkclassical mechanics, however they were of such generality that they were found to adapt easily to the laterquantum mechanics, and still form the foundation of statistical mechanics to this day.[19] In physics, two types of mechanics are usually examined:classical mechanicsandquantum mechanics. For both types of mechanics, the standard mathematical approach is to consider two concepts: Using these two concepts, the state at any other time, past or future, can in principle be calculated. There is however a disconnect between these laws and everyday life experiences, as we do not find it necessary (nor even theoretically possible) to know exactly at a microscopic level the simultaneous positions and velocities of each molecule while carrying out processes at the human scale (for example, when performing a chemical reaction). Statistical mechanics fills this disconnection between the laws of mechanics and the practical experience of incomplete knowledge, by adding some uncertainty about which state the system is in. Whereas ordinary mechanics only considers the behaviour of a single state, statistical mechanics introduces thestatistical ensemble, which is a large collection of virtual, independent copies of the system in various states. The statistical ensemble is aprobability distributionover all possible states of the system. In classical statistical mechanics, the ensemble is a probability distribution over phase points (as opposed to a single phase point in ordinary mechanics), usually represented as a distribution in aphase spacewithcanonical coordinateaxes. In quantum statistical mechanics, the ensemble is a probability distribution over pure states and can be compactly summarized as adensity matrix. As is usual for probabilities, the ensemble can be interpreted in different ways:[18] These two meanings are equivalent for many purposes, and will be used interchangeably in this article. However the probability is interpreted, each state in the ensemble evolves over time according to the equation of motion. Thus, the ensemble itself (the probability distribution over states) also evolves, as the virtual systems in the ensemble continually leave one state and enter another. The ensemble evolution is given by theLiouville equation(classical mechanics) or thevon Neumann equation(quantum mechanics). These equations are simply derived by the application of the mechanical equation of motion separately to each virtual system contained in the ensemble, with the probability of the virtual system being conserved over time as it evolves from state to state. One special class of ensemble is those ensembles that do not evolve over time. 
These ensembles are known as equilibrium ensembles and their condition is known as statistical equilibrium. Statistical equilibrium occurs if, for each state in the ensemble, the ensemble also contains all of its future and past states with probabilities equal to the probability of being in that state. (By contrast, mechanical equilibrium is a state with a balance of forces that has ceased to evolve.) The study of equilibrium ensembles of isolated systems is the focus of statistical thermodynamics. Non-equilibrium statistical mechanics addresses the more general case of ensembles that change over time, and/or ensembles of non-isolated systems. The primary goal of statistical thermodynamics (also known as equilibrium statistical mechanics) is to derive the classical thermodynamics of materials in terms of the properties of their constituent particles and the interactions between them. In other words, statistical thermodynamics provides a connection between the macroscopic properties of materials in thermodynamic equilibrium and the microscopic behaviours and motions occurring inside the material. Whereas statistical mechanics proper involves dynamics, here the attention is focused on statistical equilibrium (steady state). Statistical equilibrium does not mean that the particles have stopped moving (mechanical equilibrium); rather, only that the ensemble is not evolving. A sufficient (but not necessary) condition for statistical equilibrium of an isolated system is that the probability distribution is a function only of conserved properties (total energy, total particle numbers, etc.).[18] There are many different equilibrium ensembles that can be considered, and only some of them correspond to thermodynamics.[18] Additional postulates are necessary to motivate why the ensemble for a given system should have one form or another. A common approach found in many textbooks is to take the equal a priori probability postulate.[19] This postulate states that, for an isolated system with an exactly known energy and exactly known composition, the system can be found with equal probability in any microstate consistent with that knowledge. The equal a priori probability postulate therefore provides a motivation for the microcanonical ensemble described below. There are various arguments in favour of the equal a priori probability postulate. Other fundamental postulates for statistical mechanics have also been proposed.[10][21][22] For example, recent studies show that the theory of statistical mechanics can be built without the equal a priori probability postulate.[21][22] One such formalism is based on the fundamental thermodynamic relation together with a set of postulates,[21] where the third postulate can be replaced by an alternative formulation.[22] There are three equilibrium ensembles with a simple form that can be defined for any isolated system bounded inside a finite volume: the microcanonical, canonical, and grand canonical ensembles.[18] These are the most often discussed ensembles in statistical thermodynamics. In the macroscopic limit (defined below) they all correspond to classical thermodynamics. For systems containing many particles (the thermodynamic limit), all three of the ensembles listed above tend to give identical behaviour.
It is then simply a matter of mathematical convenience which ensemble is used.[9]: 227The Gibbs theorem about equivalence of ensembles[23]was developed into the theory ofconcentration of measurephenomenon,[24]which has applications in many areas of science, from functional analysis to methods ofartificial intelligenceandbig datatechnology.[25] Important cases where the thermodynamic ensemblesdo notgive identical results include: In these cases the correct thermodynamic ensemble must be chosen as there are observable differences between these ensembles not just in the size of fluctuations, but also in average quantities such as the distribution of particles. The correct ensemble is that which corresponds to the way the system has been prepared and characterized—in other words, the ensemble that reflects the knowledge about that system.[19] Once the characteristic state function for an ensemble has been calculated for a given system, that system is 'solved' (macroscopic observables can be extracted from the characteristic state function). Calculating the characteristic state function of a thermodynamic ensemble is not necessarily a simple task, however, since it involves considering every possible state of the system. While some hypothetical systems have been exactly solved, the most general (and realistic) case is too complex for an exact solution. Various approaches exist to approximate the true ensemble and allow calculation of average quantities. There are some cases which allow exact solutions. Although some problems in statistical physics can be solved analytically using approximations and expansions, most current research utilizes the large processing power of modern computers to simulate or approximate solutions. A common approach to statistical problems is to use aMonte Carlo simulationto yield insight into the properties of acomplex system. Monte Carlo methods are important incomputational physics,physical chemistry, and related fields, and have diverse applications includingmedical physics, where they are used to model radiation transport for radiation dosimetry calculations.[27][28][29] TheMonte Carlo methodexamines just a few of the possible states of the system, with the states chosen randomly (with a fair weight). As long as these states form a representative sample of the whole set of states of the system, the approximate characteristic function is obtained. As more and more random samples are included, the errors are reduced to an arbitrarily low level. Many physical phenomena involve quasi-thermodynamic processes out of equilibrium, for example: All of these processes occur over time with characteristic rates. These rates are important in engineering. The field of non-equilibrium statistical mechanics is concerned with understanding these non-equilibrium processes at the microscopic level. (Statistical thermodynamics can only be used to calculate the final result, after the external imbalances have been removed and the ensemble has settled back down to equilibrium.) In principle, non-equilibrium statistical mechanics could be mathematically exact: ensembles for an isolated system evolve over time according to deterministic equations such asLiouville's equationor its quantum equivalent, thevon Neumann equation. These equations are the result of applying the mechanical equations of motion independently to each state in the ensemble. 
These ensemble evolution equations inherit much of the complexity of the underlying mechanical motion, and so exact solutions are very difficult to obtain. Moreover, the ensemble evolution equations are fully reversible and do not destroy information (the ensemble'sGibbs entropyis preserved). In order to make headway in modelling irreversible processes, it is necessary to consider additional factors besides probability and reversible mechanics. Non-equilibrium mechanics is therefore an active area of theoretical research as the range of validity of these additional assumptions continues to be explored. A few approaches are described in the following subsections. One approach to non-equilibrium statistical mechanics is to incorporatestochastic(random) behaviour into the system. Stochastic behaviour destroys information contained in the ensemble. While this is technically inaccurate (aside fromhypothetical situations involving black holes, a system cannot in itself cause loss of information), the randomness is added to reflect that information of interest becomes converted over time into subtle correlations within the system, or to correlations between the system and environment. These correlations appear aschaoticorpseudorandominfluences on the variables of interest. By replacing these correlations with randomness proper, the calculations can be made much easier. The Boltzmann transport equation and related approaches are important tools in non-equilibrium statistical mechanics due to their extreme simplicity. These approximations work well in systems where the "interesting" information is immediately (after just one collision) scrambled up into subtle correlations, which essentially restricts them to rarefied gases. The Boltzmann transport equation has been found to be very useful in simulations of electron transport in lightly dopedsemiconductors(intransistors), where the electrons are indeed analogous to a rarefied gas. Another important class of non-equilibrium statistical mechanical models deals with systems that are only very slightly perturbed from equilibrium. With very small perturbations, the response can be analysed inlinear response theory. A remarkable result, as formalized by thefluctuation–dissipation theorem, is that the response of a system when near equilibrium is precisely related to thefluctuationsthat occur when the system is in total equilibrium. Essentially, a system that is slightly away from equilibrium—whether put there by external forces or by fluctuations—relaxes towards equilibrium in the same way, since the system cannot tell the difference or "know" how it came to be away from equilibrium.[30]: 664 This provides an indirect avenue for obtaining numbers such asohmic conductivityandthermal conductivityby extracting results from equilibrium statistical mechanics. Since equilibrium statistical mechanics is mathematically well defined and (in some cases) more amenable for calculations, the fluctuation–dissipation connection can be a convenient shortcut for calculations in near-equilibrium statistical mechanics. A few of the theoretical tools used to make this connection include: An advanced approach uses a combination of stochastic methods andlinear response theory. 
As an example, one approach to compute quantum coherence effects (weak localization,conductance fluctuations) in the conductance of an electronic system is the use of the Green–Kubo relations, with the inclusion of stochasticdephasingby interactions between various electrons by use of the Keldysh method.[31][32] The ensemble formalism can be used to analyze general mechanical systems with uncertainty in knowledge about the state of a system. Ensembles are also used in: Statistical physics explains and quantitatively describessuperconductivity,superfluidity,turbulence, collective phenomena insolidsandplasma, and the structural features ofliquid. It underlies the modernastrophysicsandvirial theorem. In solid state physics, statistical physics aids the study ofliquid crystals,phase transitions, andcritical phenomena. Many experimental studies of matter are entirely based on the statistical description of a system. These include the scattering of coldneutrons,X-ray,visible light, and more. Statistical physics also plays a role in materials science, nuclear physics, astrophysics, chemistry, biology and medicine (e.g. study of the spread of infectious diseases).[citation needed] Analytical and computational techniques derived from statistical physics of disordered systems, can be extended to large-scale problems, including machine learning, e.g., to analyze the weight space of deepneural networks.[33]Statistical physics is thus finding applications in the area ofmedical diagnostics.[34] Quantum statistical mechanicsisstatistical mechanicsapplied toquantum mechanical systems. In quantum mechanics, astatistical ensemble(probability distribution over possiblequantum states) is described by adensity operatorS, which is a non-negative,self-adjoint,trace-classoperator of trace 1 on theHilbert spaceHdescribing the quantum system. This can be shown under variousmathematical formalisms for quantum mechanics. One such formalism is provided byquantum logic.[citation needed]
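As a concrete illustration of the Monte Carlo sampling described earlier, the sketch below uses the Metropolis algorithm (one common way to realize Boltzmann-weighted sampling, named here explicitly since the article does not specify it) to estimate the canonical-ensemble average energy of a small one-dimensional Ising chain, and compares it with the exact value obtained by enumerating every state. It is a minimal example; the parameter values and function names are assumptions made for the illustration, not code from any cited source.

```python
import itertools
import math
import random

J, BETA, N = 1.0, 0.5, 10  # coupling, inverse temperature, number of spins

def energy(spins):
    """Ising energy E = -J * sum_i s_i * s_{i+1} with periodic boundaries."""
    return -J * sum(spins[i] * spins[(i + 1) % N] for i in range(N))

def exact_mean_energy():
    """Canonical-ensemble average <E> by summing over all 2**N states."""
    weights, energies = [], []
    for spins in itertools.product([-1, 1], repeat=N):
        e = energy(spins)
        energies.append(e)
        weights.append(math.exp(-BETA * e))
    z = sum(weights)  # partition function
    return sum(w * e for w, e in zip(weights, energies)) / z

def metropolis_mean_energy(steps=200_000):
    """Estimate <E> by Metropolis sampling of the Boltzmann distribution.
    (For simplicity there is no separate equilibration phase.)"""
    spins = [random.choice([-1, 1]) for _ in range(N)]
    e = energy(spins)
    total = 0.0
    for _ in range(steps):
        i = random.randrange(N)
        # Energy change from flipping spin i (only its two neighbours matter).
        de = 2 * J * spins[i] * (spins[i - 1] + spins[(i + 1) % N])
        if de <= 0 or random.random() < math.exp(-BETA * de):
            spins[i] = -spins[i]
            e += de
        total += e
    return total / steps

if __name__ == "__main__":
    print("exact      <E> =", exact_mean_energy())
    print("Metropolis <E> =", metropolis_mean_energy())
```

As the number of random samples grows, the Metropolis estimate approaches the exact ensemble average, which is the point made above about errors being reduced to an arbitrarily low level.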
https://en.wikipedia.org/wiki/Statistical_mechanics
Subjective logic is a type of probabilistic logic that explicitly takes epistemic uncertainty and source trust into account. In general, subjective logic is suitable for modeling and analysing situations involving uncertainty and relatively unreliable sources.[1][2][3] For example, it can be used for modeling and analysing trust networks and Bayesian networks. Arguments in subjective logic are subjective opinions about state variables which can take values from a domain (aka state space), where a state value can be thought of as a proposition which can be true or false. A binomial opinion applies to a binary state variable, and can be represented as a Beta PDF (probability density function). A multinomial opinion applies to a state variable with multiple possible values, and can be represented as a Dirichlet PDF (probability density function). Through the correspondence between opinions and Beta/Dirichlet distributions, subjective logic provides an algebra for these functions. Opinions are also related to the belief representation in Dempster–Shafer belief theory. A fundamental aspect of the human condition is that nobody can ever determine with absolute certainty whether a proposition about the world is true or false. In addition, whenever the truth of a proposition is expressed, it is always done by an individual, and it can never be considered to represent a general and objective belief. These philosophical ideas are directly reflected in the mathematical formalism of subjective logic. Subjective opinions express subjective beliefs about the truth of state values/propositions with degrees of epistemic uncertainty, and can explicitly indicate the source of belief whenever required. An opinion is usually denoted as $\omega_{X}^{A}$, where $A$ is the source of the opinion and $X$ is the state variable to which the opinion applies. The variable $X$ can take values from a domain (also called state space), e.g. denoted as $\mathbb{X}$. The values of a domain are assumed to be exhaustive and mutually disjoint, and sources are assumed to have a common semantic interpretation of a domain. The source and variable are attributes of an opinion. Indication of the source can be omitted whenever irrelevant. Let $x$ be a state value in a binary domain. A binomial opinion about the truth of state value $x$ is the ordered quadruple $\omega_{x}=(b_{x},d_{x},u_{x},a_{x})$, where $b_{x}$ is the belief mass in support of $x$ being true, $d_{x}$ is the disbelief mass (belief in support of $x$ being false), $u_{x}$ is the uncertainty mass representing the lack of evidence either way, and $a_{x}$ is the base rate, i.e. the prior probability of $x$ in the absence of specific evidence. These components satisfy $b_{x}+d_{x}+u_{x}=1$ and $b_{x},d_{x},u_{x},a_{x}\in[0,1]$. The characteristics of various opinion classes are listed below. The projected probability of a binomial opinion is defined as $\mathrm{P}_{x}=b_{x}+a_{x}u_{x}$. Binomial opinions can be represented on an equilateral triangle as shown below. A point inside the triangle represents a $(b_{x},d_{x},u_{x})$ triple. The b, d, u axes run from one edge to the opposite vertex indicated by the Belief, Disbelief or Uncertainty label. For example, a strong positive opinion is represented by a point towards the bottom right Belief vertex. The base rate, also called the prior probability, is shown as a red pointer along the base line, and the projected probability, $\mathrm{P}_{x}$, is formed by projecting the opinion onto the base, parallel to the base rate projector line.
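As an illustrative sketch (not code from the article; the class and field names are assumptions), a binomial opinion and its projected probability $\mathrm{P}_{x}=b_{x}+a_{x}u_{x}$ can be represented as follows:

```python
from dataclasses import dataclass

@dataclass
class BinomialOpinion:
    belief: float       # b_x
    disbelief: float    # d_x
    uncertainty: float  # u_x
    base_rate: float    # a_x (prior probability of x)

    def __post_init__(self):
        # The components must satisfy b_x + d_x + u_x = 1, all in [0, 1].
        assert abs(self.belief + self.disbelief + self.uncertainty - 1.0) < 1e-9
        assert all(0.0 <= v <= 1.0 for v in
                   (self.belief, self.disbelief, self.uncertainty, self.base_rate))

    def projected_probability(self) -> float:
        """P_x = b_x + a_x * u_x."""
        return self.belief + self.base_rate * self.uncertainty

# Example: a moderately confident positive opinion with base rate 0.5.
opinion = BinomialOpinion(belief=0.6, disbelief=0.1, uncertainty=0.3, base_rate=0.5)
print(opinion.projected_probability())  # 0.75
```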
Opinions about three values/propositions X, Y and Z are visualized on the triangle to the left, and their equivalent Beta PDFs (probability density functions) are visualized on the plots to the right. The numerical values and verbal qualitative descriptions of each opinion are also shown. The Beta PDF is normally denoted as $\mathrm{Beta}(p(x);\alpha,\beta)$, where $\alpha$ and $\beta$ are its two strength parameters. The Beta PDF of a binomial opinion $\omega_{x}=(b_{x},d_{x},u_{x},a_{x})$ is the function $\mathrm{Beta}(p(x);\alpha,\beta)$ with $\alpha=\frac{W b_{x}}{u_{x}}+W a_{x}$ and $\beta=\frac{W d_{x}}{u_{x}}+W(1-a_{x})$, where $W$ is the noninformative prior weight, also called a unit of evidence,[4] normally set to $W=2$. Let $X$ be a state variable which can take state values $x\in\mathbb{X}$. A multinomial opinion over $X$ is the tuple $\omega_{X}=(b_{X},u_{X},a_{X})$, where $b_{X}$ is a belief mass distribution over the possible state values of $X$, $u_{X}$ is the uncertainty mass, and $a_{X}$ is the prior (base rate) probability distribution over the possible state values of $X$. These parameters satisfy $u_{X}+\sum b_{X}(x)=1$ and $\sum a_{X}(x)=1$, as well as $b_{X}(x),u_{X},a_{X}(x)\in[0,1]$. Trinomial opinions can be simply visualised as points inside a tetrahedron, but opinions with dimensions larger than trinomial do not lend themselves to simple visualisation. Dirichlet PDFs are normally denoted as $\mathrm{Dir}(p_{X};\alpha_{X})$, where $p_{X}$ is a probability distribution over the state values of $X$ and $\alpha_{X}$ are the strength parameters. The Dirichlet PDF of a multinomial opinion $\omega_{X}=(b_{X},u_{X},a_{X})$ is the function $\mathrm{Dir}(p_{X};\alpha_{X})$ where the strength parameters are given by $\alpha_{X}(x)=\frac{W b_{X}(x)}{u_{X}}+W a_{X}(x)$, with $W$ the noninformative prior weight, also called a unit of evidence,[4] normally set to $W=2$. Alternatively, the noninformative prior weight $W$ can be dynamic as a function of the evidence strength $\alpha_{X}$ and the cardinality $k$ of the state space $\mathbb{X}$,[5] where $W=k$ when $\alpha_{X}=a_{X}$ (vacuous opinion), and rapidly converges to $W=2$ with increasing evidence strength (belief mass). The advantage of a dynamic $W$ is to have a uniform Dirichlet for vacuous opinions with a uniform base rate distribution $a_{X}$, while at the same time ensuring that any Dirichlet with arbitrarily large cardinality is equally sensitive to new evidence, as would be expected. Most operators in the table below are generalisations of binary logic and probability operators. For example, addition is simply a generalisation of addition of probabilities.
Some operators are only meaningful for combining binomial opinions, and some also apply to multinomial opinions.[6] Most operators are binary, but complement is unary, and abduction is ternary. See the referenced publications for mathematical details of each operator. Transitive source combination can be denoted in a compact or expanded form. For example, the transitive trust path from analyst/source $A$ via source $B$ to the variable $X$ can be denoted as $[A;B,X]$ in compact form, or as $[A;B]:[B,X]$ in expanded form. Here, $[A;B]$ expresses that $A$ has some trust/distrust in source $B$, whereas $[B,X]$ expresses that $B$ has an opinion about the state of variable $X$ which is given as advice to $A$. The expanded form is the most general, and corresponds directly to the way subjective logic expressions are formed with operators. In case the argument opinions are equivalent to Boolean TRUE or FALSE, the result of any subjective logic operator is always equal to that of the corresponding propositional/binary logic operator. Similarly, when the argument opinions are equivalent to traditional probabilities, the result of any subjective logic operator is always equal to that of the corresponding probability operator (when it exists). In case the argument opinions contain degrees of uncertainty, the operators involving multiplication and division (including deduction, abduction and Bayes' theorem) will produce derived opinions that always have a correct projected probability but possibly an approximate variance when seen as Beta/Dirichlet PDFs.[1] All other operators produce opinions where the projected probabilities and the variance are always analytically correct. Different logic formulas that traditionally are equivalent in propositional logic do not necessarily have equal opinions. For example, $\omega_{x\land(y\lor z)}\neq\omega_{(x\land y)\lor(x\land z)}$ in general, although the distributivity of conjunction over disjunction, expressed as $x\land(y\lor z)\Leftrightarrow(x\land y)\lor(x\land z)$, holds in binary propositional logic. This is no surprise, as the corresponding probability operators are also non-distributive. However, multiplication is distributive over addition, as expressed by $\omega_{x\land(y\cup z)}=\omega_{(x\land y)\cup(x\land z)}$. De Morgan's laws are also satisfied, as expressed for example by $\omega_{\overline{x\land y}}=\omega_{\overline{x}\lor\overline{y}}$. Subjective logic allows very efficient computation of mathematically complex models. This is possible by approximation of the analytically correct functions. While it is relatively simple to analytically multiply two Beta PDFs in the form of a joint Beta PDF, anything more complex than that quickly becomes intractable. When combining two Beta PDFs with some operator/connective, the analytical result is not always a Beta PDF and can involve hypergeometric series. In such cases, subjective logic always approximates the result as an opinion that is equivalent to a Beta PDF. Subjective logic is applicable when the situation to be analysed is characterised by considerable epistemic uncertainty due to incomplete knowledge.
In this way, subjective logic becomes a probabilistic logic for epistemic-uncertain probabilities. The advantage is that uncertainty is preserved throughout the analysis and is made explicit in the results, so that it is possible to distinguish between certain and uncertain conclusions. The modelling of trust networks and Bayesian networks is a typical application of subjective logic. Subjective trust networks can be modelled with a combination of the transitivity and fusion operators. Let $[A;B]$ express the referral trust edge from $A$ to $B$, and let $[B,X]$ express the belief edge from $B$ to $X$. A subjective trust network can for example be expressed as $([A;B]:[B,X])\diamond([A;C]:[C,X])$, as illustrated in the figure below. The indices 1, 2 and 3 indicate the chronological order in which the trust edges and advice are formed. Thus, given the set of trust edges with index 1, the origin trustor $A$ receives advice from $B$ and $C$, and is thereby able to derive belief in variable $X$. By expressing each trust edge and belief edge as an opinion, it is possible for $A$ to derive belief in $X$, expressed as $\omega_{X}^{A}=\omega_{X}^{[A;B]\diamond[A;C]}=(\omega_{B}^{A}\otimes\omega_{X}^{B})\oplus(\omega_{C}^{A}\otimes\omega_{X}^{C})$. Trust networks can express the reliability of information sources, and can be used to determine subjective opinions about variables that the sources provide information about. Evidence-based subjective logic (EBSL)[4] describes an alternative trust-network computation, where the transitivity of opinions (discounting) is handled by applying weights to the evidence underlying the opinions. In the Bayesian network below, $X$ and $Y$ are parent variables and $Z$ is the child variable. The analyst must learn the set of joint conditional opinions $\omega_{Z|XY}$ in order to apply the deduction operator and derive the marginal opinion $\omega_{Z\|XY}$ on the variable $Z$. The conditional opinions express a conditional relationship between the parent variables and the child variable. The deduced opinion is computed as $\omega_{Z\|XY}=\omega_{Z|XY}\circledcirc\omega_{XY}$. The joint evidence opinion $\omega_{XY}$ can be computed as the product of independent evidence opinions on $X$ and $Y$, or as the joint product of partially dependent evidence opinions. The combination of a subjective trust network and a subjective Bayesian network is a subjective network. The subjective trust network can be used to obtain from various sources the opinions to be used as input opinions to the subjective Bayesian network, as illustrated in the figure below. Traditional Bayesian networks typically do not take into account the reliability of the sources. In subjective networks, the trust in sources is explicitly taken into account.
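The opinion-to-Beta correspondence described above can be sketched in a few lines (illustrative only; the function name is an assumption, and $W=2$ is the noninformative prior weight mentioned earlier):

```python
def opinion_to_beta(b: float, d: float, u: float, a: float, W: float = 2.0):
    """Map a binomial opinion (b, d, u, a) to Beta strength parameters
    alpha = W*b/u + W*a  and  beta = W*d/u + W*(1 - a), per the mapping above.
    Requires u > 0 (a dogmatic opinion with u = 0 corresponds to a point mass)."""
    if u <= 0:
        raise ValueError("u must be positive for a proper Beta density")
    alpha = W * b / u + W * a
    beta = W * d / u + W * (1.0 - a)
    return alpha, beta

# A vacuous opinion (b = d = 0, u = 1) with uniform base rate gives Beta(1, 1),
# i.e. the uniform distribution, as expected.
print(opinion_to_beta(0.0, 0.0, 1.0, 0.5))  # (1.0, 1.0)

# The opinion (0.6, 0.1, 0.3, 0.5) gives a Beta whose mean, alpha / (alpha + beta),
# equals the projected probability b + a*u = 0.75.
alpha, beta = opinion_to_beta(0.6, 0.1, 0.3, 0.5)
print(alpha / (alpha + beta))  # 0.75
```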
https://en.wikipedia.org/wiki/Subjective_logic
Ambiguity tolerance–intolerancerefers to a proposed aspect of personality that influences how individuals respond toambiguousstimuli, though whether it constitutes a distinct psychological trait is disputed.[1]Ambiguity may arise from being presented information that is unfamiliar or conflicting or when there is too much information available to process.[2]When presented with such situations, ambiguity intolerant individuals are likely to experience anxiety, interpret the situation as threatening, and may attempt to avoid or ignore the ambiguity by rigidly adhering to inaccurate, simplistic interpretations. In contrast, an individual who is tolerant of ambiguity is more likely to remain neutral, adopt a flexible and open disposition, and adapt to the situation.[2]Much of the initial research into the concept focused on intolerance of ambiguity, which has been correlated with prejudicial beliefs and theauthoritarian personality. Ambiguity tolerance–intolerance was formally introduced in 1949 through an article published byElse Frenkel-Brunswik, who developed the concept in earlier work onethnocentrismin children[3]In the article which defines the term, she considers, among other evidence, a study of schoolchildren who exhibit prejudice as the basis for the existence of intolerance of ambiguity. In the study, she tested the notion that children who are ethnically prejudiced also tend to reject ambiguity more so than their peers. To do this she used a story recall test to measure the children's prejudice, then presented the children with an ambiguous image, interviewed them about it, and recorded their responses. She found that the children who scored high in prejudice took longer to give a response to the shape, were less likely to make changes on their response, and less likely to change their perspectives. Frenkel-Brunswik continued to examine the concept inThe Authoritarian Personality, which she co-authored withTheodor Adorno,Daniel Levinson, andNevitt Sanford. In the book, intolerance of ambiguity is one aspect of the cognitive style of the authoritarian personality.[4]Interest in and research on ambiguity tolerance-intolerance was highest in the two decades following Frenkel-Brunswik's initial publication, but the concept is still in use in contemporary work. In the years following Frenkel-Brunswik's publications, her work and the concept of ambiguity tolerance-intolerance have been the subject of criticism. In 1958 a study by Kenny and Ginsberg was unable to replicate Frenkel-Brunswik's results, casting some doubt on her findings.[5]In 1965, Stephen Bochner published an article criticizing Frenkel-Brunswik for failing to give a consistent definition of the term and arguing that Kenny and Ginsberg's replication study may have failed due to the inconsistency of Frenkel-Brunswik's use of ambiguity tolerance-intolerance.[1] Ambiguity tolerance–intolerance has been subject to criticism on the grounds that it has been poorly defined, and as such there have been many attempts to create a standardized definition that can be used more easily, while retaining a relationship to Frenkel-Brunswik's definition. Budner (1962) defines intolerance of ambiguity to be the tendency to interpret ambiguity as a threat, while tolerance of ambiguity is the tendency to interpret ambiguity as desirable. 
In addition, he developed a scale with 16 items designed to measure how subjects would respond to an ambiguous situation in order to allow for more controlled research into the phenomenon.[2] Bochner (1965), though critical of Frenkel-Brunswik's definition, also organized a set of defining characteristics which are set out in her work.[1]Bochner's attempt to organize her work resulted in nine primary characteristics of intolerance of ambiguity: In addition Bochner lists nine secondary characteristics which describe what individuals who are intolerant of ambiguity will be: Bochner however, is skeptical of whether clinging to Frenkel-Brunswik's definition and attempting to find measures of the characteristics is useful, as he argues that ambiguity tolerance-intolerance may not describe a unified, distinct phenomenon. Further methods for measuring ambiguity intolerance have been proposed by Block and Block (1951) and Levitt (1953). Block and Block (1951) operationalized the construct by measuring the amount of time required to structure an ambiguous situation. In this method, the amount of time required to structure is associated with ambiguity tolerance; someone intolerant of ambiguity will desire to find a structure quickly, while a person tolerant of ambiguity would take more time to consider the situation.[6]Levitt (1953) studied intolerance of ambiguity in children and asserted that the decision location test and misconception scale both served as accurate measures of ambiguity intolerance.[7] Ambiguity tolerance–intolerance is relevant to and used in many branches ofpsychologyincludingpersonality psychology,developmental psychology, andsocial psychology. Some examples of the construct's use in different disciplines are listed below. The construct of ambiguity intolerance was first used in the study of personality and research on the topic is still undertaken despite criticism of the link between intolerance of ambiguity and authoritarianism. A study testing college students' tolerance for ambiguity[8]found that students who were involved in the arts had higher scores on ambiguity tolerance than business students and concluded that tolerance of ambiguity correlates with creativity. Harington, Block, and Block (1978) assessed intolerance of ambiguity in children at an early age, ranging from 3.5 to 4.5 years. The children were assessed using two tests performed by caretakers in a daycare center. The researchers then re-evaluated the children when they turned seven, and their data showed that male students who were ranked high in ambiguity intolerance at an early age had more anxiety, required more structure, and had less effective cognitive structure than their female peers who had also tested high in ambiguity intolerance.[9] Ambiguity intolerance can affect how an individual perceives others. Social psychology uses ambiguity tolerance–intolerance to investigate and explain interpersonal relationship dynamics. Research has been conducted on how ambiguity tolerance–intolerance interacts with racial identity,[10]homophobia,[11]marital satisfaction,[12]and pregnancy adjustment.[13] Research shows that ranking in the extremes of ambiguity tolerance or intolerance can be detrimental to mental health. Ambiguity intolerance is thought to serve as a cognitive vulnerability that can contribute to the development of depression. 
Anderson and Schwartz hypothesize that ambiguity intolerance may lead to depression as those who are intolerant tend to see the world as concrete and unchanging and are unable to effectively interpret and cope with external change. The discontinuity between their interpretations and their external situation results in negative thoughts, and due to the need for certainty ambiguity intolerant individuals have, these negative thoughts are quickly interpreted as certainties. This certainty can serve as a predictive measure of depression.[14]
https://en.wikipedia.org/wiki/Uncertainty_tolerance
Antifragilityis a property of systems in which they benefit from shocks. AntifragileorAnti-fragilemay refer to:
https://en.wikipedia.org/wiki/Antifragile_(disambiguation)
TheCynefin framework(/kəˈnɛvɪn/kuh-NEV-in)[1]is aconceptual frameworkused to aiddecision-making.[2]Created in 1999 byDave Snowdenwhen he worked forIBM Global Services, it has been described as a "sense-makingdevice".[3][4]Cynefinis aWelshword for 'habitat'.[5] Cynefin offers five decision-making contexts or "domains"—clear(also known assimpleorobvious),complicated,complex,chaotic, andconfusion(ordisorder)—that help managers to identify how they perceive situations and make sense of their own and other people's behaviour.[a]The framework draws on research intosystems theory,complexity theory,network theoryandlearning theories.[6] The idea of the Cynefin framework is that it offers decision-makers a "sense of place" from which to view their perceptions.[7]Cynefinis a Welsh word meaning 'habitat', 'haunt', 'acquainted', 'familiar'. Snowden uses the term to refer to the idea that we all have connections, such as tribal, religious and geographical, of which we may not be aware.[8][5]It has been compared to theMāoriwordtūrangawaewae, meaning a place to stand, or the "ground and place which is your heritage and that you come from".[9] In 2021, the Welsh government introduced the original Welsh concept of Cynefin as a core principle in the school curriculum.[10]In this context Cynefin extends beyond a physical or geographical place and includes historic, cultural and social dimensions that have shaped and continue to shape the community which inhabits a place.[11]The concept is intended to "help pupils explore, make connections and develop understanding of themselves within a modern, diverse and inclusive society. This cynefin is not simply local but provides a foundation for a national and international citizenship’’.[10] Snowden, then of IBM Global Services, began work on a Cynefin model in 1999 to help manageintellectual capitalwithin the company.[3][b][c]He continued developing it as European director of IBM's Institute of Knowledge Management,[15]and later as founder and director of the IBM Cynefin Centre for Organizational Complexity, established in 2002.[16]Cynthia Kurtz, an IBM researcher, and Snowden described the framework in detail the following year in a paper, "The new dynamics of strategy: Sense-making in a complex and complicated world", published inIBM Systems Journal.[4][17][18] The domain names have changed over the years.Kurtz & Snowden (2003)called themknown, knowable, complex, and chaotic.Snowden & Boone (2007)changedknownandknowabletosimpleandcomplicated. From 2014 Snowden usedobviousin place ofsimple, and as of 2015 is using the termclear.[19] The Cynefin Centre—a network of members and partners from industry, government and academia—began operating independently of IBM in 2004.[20]In 2007 Snowden and Mary E. Boone described the Cynefin framework in theHarvard Business Review.[2]Their paper, "A Leader's Framework for Decision Making", won them an "Outstanding Practitioner-Oriented Publication in OB" award from theAcademy of Management's Organizational Behavior division.[21] Cynefin offers five decision-making contexts or "domains":clear, complicated, complex, chaotic, and a centre ofconfusion.[d]The domains on the right,clearandcomplicated, are "ordered": cause and effect are known or can be discovered. The domains on the left,complexandchaotic, are "unordered": cause and effect can be deduced only with hindsight or not at all.[22] Thecleardomain represents the "known knowns". 
This means that there are rules in place (orbest practice), the situation is stable, and the relationship between cause and effect is clear: if you do X, expect Y. The advice in such a situation is to "sense–categorize–respond": establish the facts ("sense"), categorize, then respond by following the rule or applying best practice. Snowden and Boone (2007) offer the example of loan-payment processing. An employee identifies the problem (for example, a borrower has paid less than required), categorizes it (reviews the loan documents), and responds (follows the terms of the loan).[2]According toThomas A. Stewart, This is the domain of legal structures, standard operating procedures, practices that are proven to work. Never draw to aninside straight. Never lend to a client whose monthly payments exceed 35 percent of gross income. Never end the meeting without asking for the sale. Here, decision-making lies squarely in the realm of reason: Find the proper rule and apply it.[23] Snowden and Boone write that managers should beware of forcing situations into this domain by oversimplifying, by "entrained thinking" (being blind to new ways of thinking), or by becoming complacent (seehuman error). When success breeds complacency ("best practice is, by definition, past practice"), there can be a catastrophic clockwise shift into the chaotic domain. They recommend that leaders provide a communication channel, if necessary an anonymous one, so that dissenters (for example, within a workforce) can warn about complacency.[2] Thecomplicateddomain consists of the "known unknowns". The relationship between cause and effect requires analysis or expertise; there are a range of right answers. The framework recommends "sense–analyze–respond": assess the facts, analyze, and apply the appropriate good operating practice.[2]According to Stewart: "Here it is possible to work rationally toward a decision, but doing so requires refined judgment and expertise. ... This is the province of engineers, surgeons, intelligence analysts, lawyers, and other experts. Artificial intelligence copes well here:Deep Blueplays chess as if it were a complicated problem, looking at every possible sequence of moves."[23] Thecomplexdomain represents the "unknown unknowns". Cause and effect can only be deduced in retrospect, and there are no right answers. "Instructive patterns ... can emerge," write Snowden and Boone, "if the leader conducts experiments that are safe to fail." Cynefin calls this process "probe–sense–respond".[2]Hard insurance cases are one example. "Hard cases ... need human underwriters," Stewart writes, "and the best all do the same thing: Dump the file and spread out the contents." Stewart identifies battlefields, markets, ecosystems and corporate cultures as complex systems that are "impervious to areductionist, take-it-apart-and-see-how-it-works approach, because your very actions change the situation in unpredictable ways."[23] In 2024, Snowden, the creator of the Cynefin framework, acknowledged that he may have been overconfident believing that the use of 'chaos' in Cynefin was clear. In a blog post, he explored several distinct but coherent meanings of the wordchaos. One interpretation ismathematical chaos, which refers to unpredictability or randomness—an unconstrained and formless state, comparable to how gas relates to liquid and solid in physical states. 
In contrast to its everyday English usage, mathematical chaos cannot be conventionally modeled; instead, it must be either simulated or stimulated to understand its properties. Snowden also differentiates betweensimple chaosandcomplicated chaos, referencingJ. Doyne Farmer's book Making Sense of Chaos. Snowden originally intended the mathematical definition to be the primary meaning of chaos within the Cynefin framework,[24]although others have contradicted him and stated that Cynefin useschaoticin the ordinary sense.[25] Another interpretation isdeterministic chaos, a state in which no agent can engage in dialogue with another. This perspective frames chaos as a deliberate decision-support technique. While Snowden acknowledges the value of collective wisdom in generating new ideas and forming well-rounded perspectives, he remains skeptical of this application of chaos.[24]Tom Graves has criticized Cynefin for providing no tactics to manage deterministic chaos.[26] A third meaning,accidental chaos, represents confusion, disorder, or even evil—the primordial darkness before order (light) is imposed. Events in this domain are "too confusing to wait for a knowledge-based response", writes Patrick Lambe.[27]According to Snowden, resolving accidental chaos requires creating enough structure to categorize issues into either complex or ordered domains, a process he terms theaporetic turn. In this sense, chaos is temporary and necessitates constraints, though Snowden also recognizes the need to balance chaos and order. He asserts that a leader's role is not to solve problems directly but to establish constraints that enable solutions to emerge.[24]"Action—anyaction—is the first and only way to respond appropriately."[27]In this context, managers "act–sense–respond":actto establish order;sensewhere stability lies;respondto turn the chaotic into the complex.[2]Snowden and Boone write: In the chaotic domain, a leader’s immediate job is not to discover patterns but to staunch the bleeding. A leader must first act to establish order, then sense where stability is present and from where it is absent, and then respond by working to transform the situation from chaos to complexity, where the identification of emerging patterns can both help prevent future crises and discern new opportunities. Communication of the most direct top-down or broadcast kind is imperative; there’s simply no time to ask for input.[2] Snowden and Boone give the example of the 1993Brown's Chicken massacreinPalatine, Illinois—when robbers murdered seven employees in Brown's Chicken and Pasta restaurant—as a situation in which local police faced all the domains. Deputy Police Chief Walt Gasior had to act immediately to stem the early panic (chaotic), while keeping the department running (clear), calling in experts (complicated), and maintaining community confidence in the following weeks (complex).[2] TheSeptember 11 attackswere another example.[2]Stewart offers others: "the firefighter whose gut makes him turn left or the trader who instinctively sells when the news about the stock seems too good to be true." Onecrisis executivesaid of thecollapse of Enron: "People were afraid. ... Decision-making was paralyzed. ... 
You've got to be quick and decisive—make little steps you know will succeed, so you can begin to tell a story that makes sense."[23] The darkconfusiondomain in the centre represents situations where there is no clarity about which of the other domains apply (this domain has also been known asdisorderedin earlier versions of the framework). By definition it is hard to see when this domain applies. "Here, multiple perspectives jostle for prominence, factional leaders argue with one another, and cacophony rules", write Snowden and Boone. "The way out of this realm is to break down the situation into constituent parts and assign each to one of the other four realms. Leaders can then make decisions and intervene in contextually appropriate ways."[2] As knowledge increases, there is a "clockwise drift" fromchaoticthroughcomplexandcomplicatedtoclear. Similarly, a "buildup of biases", complacency or lack of maintenance can cause a "catastrophic failure": a clockwise movement fromcleartochaotic, represented by the "fold" between those domains. There can be counter-clockwise movement as people die and knowledge is forgotten, or as new generations question the rules; and a counter-clockwise push fromchaotictoclearcan occur when a lack of order causes rules to be imposed suddenly.[4][2] Cynefin was used by its IBM developers inpolicy-making,product development,market creation,supply chain management,brandingandcustomer relations.[4]Later uses include analysing the impact of religion on policymaking within theGeorge W. Bushadministration,[29]emergency management,[30]network scienceand the military,[31]the management of food-chain risks,[32]homeland securityin the United States,[33]agile software development,[34]and policing theOccupy Movementin the United States.[28] It has also been used in health-care research, including to examine the complexity of care in the BritishNational Health Service,[35]the nature of knowledge in health care,[36]and the fight againstHIV/AIDSinSouth Africa.[37]In 2017 theRAND Corporationused the Cynefin framework in a discussion of theories and models of decision making.[38]The European Commission has published a field guide to use Cynefin as a "guide to navigate crisis".[39] Criticism of Cynefin includes that the framework is difficult and confusing, needs a more rigorous foundation, and covers too limited a selection of possible contexts.[40]Another criticism is that terms such asknown, knowable, sense,andcategorizeare ambiguous.[41] Prof Simon French recognizes "the value of the Cynefin framework in categorising decision contexts and identifying how to address many uncertainties in an analysis" and as such believes it builds on seminal works such asRussell L. Ackoff'sScientific Method: optimizing applied research decisions(1962),C. West Churchman'sInquiring Systems(1967), Rittel and Webber'sDilemmas in a General Theory of Planning(1973), Douglas John White'sDecision Methodology(1975),John Tukey'sExploratory data analysis(1977), Mike Pidd'sTools for Thinking: Modelling in Management Science(1996), and Ritchey'sGeneral Morphological Analysis(1998).[42] Firestone and McElroy argue that Cynefin is a model of sensemaking rather than a full model ofknowledge managementand processing.[43]: 118 Steve Holt compares Cynefin to thetheory of constraints. 
The theory of constraints argues that most system outcomes are limited by a few bottlenecks (constraints), and that improvements made away from these constraints tend to be counterproductive because they simply place more strain on a constraint. Holt places the theory of constraints within the Cynefin framing by arguing that it moves from complex situations to complicated ones: abductive reasoning and intuition, followed by logic, are used to create an understanding, and a probe is then created to test that understanding.[44]: 367 Cynefin defines several types of constraints. Fixed constraints stipulate that actions must be done in a certain way and in a certain order, and apply in the clear domain; governing constraints are looser, acting more like rules or policies, and apply in the complicated domain; enabling constraints, which operate in the complex domain, allow a system to function but do not control the entire process.[44]: 371 Holt argues that constraints in the theory of constraints correspond to Cynefin's fixed and governing constraints, and that injections in the theory of constraints correspond to enabling constraints.[44]: 373
https://en.wikipedia.org/wiki/Cynefin_framework
Fear, uncertainty, and doubt(FUD) is a manipulativepropagandatactic used in technology sales,marketing,public relations, politics,polling, andcults. FUD is generally a strategy to influence perception by disseminating negative and dubious orfalse informationand is a manifestation of theappeal to fear. In public policy, a similar concept has been referred to asmanufactured uncertainty, which involves casting doubt on academic findings, exaggerating their claimed imperfections.[1]Amanufactured controversy(sometimes shortened tomanufactroversy) is a contrived disagreement, typically motivated byprofitor ideology, designed to create public confusion concerning an issue about which there is no substantial academic dispute.[2][3] The similar formulation "doubts, fears, and uncertainties" first appeared in 1693.[4][5]The phrase "fear, uncertainty, and doubt" first appeared in the 1920s.[6][7]It is also sometimes rendered as "fear, uncertainty, and disinformation".[8] By 1975, "FUD" was appearing in contexts of marketing, sales,[9]and inpublic relations:[10] One of the messages dealt with is FUD—the fear, uncertainty and doubt on the part of customer and sales person alike that stifles the approach and greeting.[9] FUD was first used with its common current technology-related meaning byGene Amdahlin 1975, after he leftIBMto foundAmdahl Corp.[11] FUD is the fear, uncertainty and doubt that IBM sales people instill in the minds of potential customers who might be considering Amdahl products.[11] This usage of FUD to describe disinformation in thecomputer hardwareindustry is said to have led to subsequent popularization of the term.[12] AsEric S. Raymondwrote:[11] The idea, of course, was to persuade buyers to go with safe IBM gear rather than with competitors' equipment. This implicit coercion was traditionally accomplished by promising thatGood Thingswould happen to people who stuck with IBM, butDark Shadowsloomed over the future of competitors' equipment or software. After 1991, the term has become generalized to refer to any kind ofdisinformationused as a competitive weapon.[11] By spreading questionable information about the drawbacks of less well-known products, an established company can discourage decision-makers from choosing those products over its own, regardless of the relativetechnicalmerits. This is a recognized phenomenon, epitomized by the traditional axiom of purchasing agents that "nobody ever got fired for buying IBM equipment". The aim is to haveITdepartments buy software they know to be technically inferior because upper management is more likely torecognize the brand.[citation needed] Manufacturing controversy has been a tactic used by ideological and corporate groups to "neutralize the influence of academic scientists" in public policy debates.Cherry pickingof favorable data and sympathetic experts, aggrandizement of uncertainties withintheoretical models, andfalse balance in media reportingcontribute to the generation of FUD. Alan D. 
Attie describes its process as "to amplify uncertainties, cherry-pick experts, attack individual scientists, marginalize the traditional role of distinguished scientific bodies and get the media to report "both sides" of a manufactured controversy."[13] Those manufacturing uncertainty may label academic research as "junk science" and use a variety of tactics designed to stall and increase the expense of the distribution of sound scientific information.[1][14] Delay tactics are also used to slow the implementation of regulations and public warnings in response to previously undiscovered health risks (e.g., the increased risk of Reye's syndrome in children who take aspirin).[14] Chief among these stalling tactics is generating scientific uncertainty, "no matter how powerful or conclusive the evidence",[14] to prevent regulation. Another tactic used to manufacture controversy is to cast the scientific community as intolerant of dissent and conspiratorially aligned with industries or sociopolitical movements that quash challenges to conventional wisdom.[15] This form of manufactured controversy has been used by environmentalist advocacy groups, religious challengers of the theory of evolution, and opponents of global warming legislation.[16] Ideas that have been labeled as manufactured uncertainty include the links between tobacco smoke and serious illness and the climate effects of fossil fuels, both discussed below. The tobacco industry playbook, tobacco strategy or simply disinformation playbook[22][23] describes a strategy used by the tobacco industry in the 1950s to protect revenues in the face of mounting evidence of links between tobacco smoke and serious illnesses, primarily cancer.[24] Such tactics were used even earlier, beginning in the 1920s, by the oil industry to support the use of tetraethyllead in gasoline.[25] They continue to be used by other industries, notably the fossil fuel industry, even using the same PR firms and researchers.[26] Much of the playbook is known from industry documents made public by whistleblowers or as a result of the Tobacco Master Settlement Agreement. These documents are now curated by the UCSF Truth Tobacco Industry Documents project and are a primary source for much commentary on both the tobacco playbook and its similarities to the tactics used by other industries such as the fossil fuel industry.[26][27] A 1969 R. J. Reynolds internal memorandum noted, "Doubt is our product since it is the best means of competing with the 'body of fact' that exists in the mind of the general public."[28][29] In the United States, the generation of manufactured uncertainty about scientific data has affected political and legal proceedings in many different areas. The Data Quality Act and the Supreme Court's Daubert standard have been cited as tools used by those manufacturing controversy to obfuscate scientific consensus.[1][13] Concerns have been raised regarding the conflicts of interest inherent in many types of industry regulation. For example, many industries, such as the pharmaceutical industry, are a major source of funding for the research necessary to achieve government regulatory approval for their products.[35] In developing regulations, agencies such as the Food and Drug Administration and the Environmental Protection Agency rely heavily on unpublished studies from industry sources that have not been peer reviewed.[21] This can allow a given industry control over the extent of available research, and the pace at which it is reviewable, when challenging scientific research that may threaten its business interests.[citation needed] In the 1990s, the term became most often associated with Microsoft.
Roger Irwin said:[36]

Microsoft soon picked up the art of FUD from IBM, and throughout the '80s used FUD as a primary marketing tool, much as IBM had in the previous decade. They ended up out FUD-ing IBM themselves during the OS/2 vs Win3.1 years.

In 1996, Caldera, Inc. accused Microsoft of several anti-competitive practices, including issuing vaporware announcements, creating FUD, and excluding competitors from participating in beta-test programs to destroy competition in the DOS market.[37][38] In 1991, Microsoft released a beta version of Windows 3.1 whose AARD code would display a vaguely unnerving error message when the user ran it on the DR DOS 6.0 operating system instead of Microsoft-written OSs:[37][39][40][41][42]

Non-Fatal error detected: error #2726
Please contact Windows 3.1 beta support
Press ENTER to exit or C to continue[40][41][42]

If the user chose to press C, Windows would continue to run on DR DOS without problems. Speculation that this code was meant to create doubts about DR DOS's compatibility and thereby destroy the product's reputation[40][41] was confirmed years later by internal Microsoft memos published as part of the United States v. Microsoft antitrust case.[43] At one point, Microsoft CEO Bill Gates sent a memo to a number of employees, reading:

You never sent me a response on the question of what things an app would do that would make it run with MS-DOS and not run with DR-DOS. Is there [a] feature they have that might get in our way?[37][44]

Microsoft Senior Vice President Brad Silverberg later sent another memo, stating:

What the [user] is supposed to do is feel uncomfortable, and when he has bugs, suspect that the problem is DR-DOS and then go out to buy MS-DOS.[37][44]

In 2000, Microsoft settled the lawsuit out-of-court for an undisclosed sum, which in 2009 was revealed to be $280 million.[45][46][47][48] At around the same time, the leaked internal Microsoft "Halloween documents" stated "OSS [Open Source Software] is long-term credible… [therefore] FUD tactics cannot be used to combat it."[49] Open source software, and the Linux community in particular, are widely perceived as frequent targets of Microsoft's FUD. The SCO Group's 2003 lawsuit against IBM, funded by Microsoft, claiming $5 billion in intellectual property infringements by the free software community, is an example of FUD, according to IBM, which argued in its counterclaim that SCO was spreading "fear, uncertainty, and doubt".[57] Magistrate Judge Brooke C. Wells wrote (and Judge Dale Albert Kimball concurred) in her order limiting SCO's claims: "The court finds SCO's arguments unpersuasive. SCO's arguments are akin to SCO telling IBM, 'sorry, we are not going to tell you what you did wrong because you already know...' SCO was required to disclose in detail what it feels IBM misappropriated... the court finds it inexcusable that SCO is... not placing all the details on the table. Certainly if an individual were stopped and accused of shoplifting after walking out of Neiman Marcus they would expect to be eventually told what they allegedly stole. It would be absurd for an officer to tell the accused that 'you know what you stole, I'm not telling.' Or, to simply hand the accused individual a catalog of Neiman Marcus' entire inventory and say 'it's in there somewhere, you figure it out.'"[58] Darl Charles McBride, President and CEO of SCO, made a number of public statements regarding the matter. SCO stock skyrocketed from under US$3 a share to over US$20 in a matter of weeks in 2003.
It later dropped to around[60]US$1.2—then crashed to under 50 cents on 13 August 2007, in the aftermath of a ruling thatNovellowns the UNIXcopyrights.[61] Apple's claim thatiPhone jailbreakingcould potentially allow hackers to crashcell phone towerswas described byFred von Lohmann, a representative of theElectronic Frontier Foundation(EFF), as a "kind of theoretical threat...more FUD than truth".[62] FUD is widely recognized as a tactic to promote the sale or implementation of security products and measures. It is possible to find pages describing purely artificial problems. Such pages frequently contain links to the demonstrating source code that does not point to any valid location and sometimes even links that "will execute malicious code on your machine regardless of current security software", leading to pages without any executable code.[citation needed] The drawback to the FUD tactic in this context is that, when the stated or implied threats fail to materialize over time, the customer or decision-maker frequently reacts by withdrawing budgeting or support from future security initiatives.[63] FUD has also been utilized intechnical support scams, which may use fake error messages to scare unwitting computer users, especially the elderly or computer-illiterate, into paying for a supposed fix for a non-existent problem,[64]to avoid being framed for criminal charges such as unpaid taxes, or in extreme cases, false accusations of illegal acts such aschild pornography.[65] The FUD tactic was used byCaltexAustralia in 2003. According to an internal memo, which was subsequently leaked, they wished to use FUD to destabilize franchisee confidence, and thus get a better deal for Caltex. This memo was used as an example of unconscionable behaviour in aSenateinquiry. Senior management claimed that it was contrary to and did not reflect company principles.[66][67][68] In 2008,Cloroxwas the subject of both consumer and industry criticism for advertising itsGreen Worksline of allegedlyenvironmentally friendlycleaning products using theslogan, "Finally, Green Works."[69]The slogan implied both that "green" products manufactured by other companies which had been available to consumers prior to the introduction of Clorox's GreenWorks line had all been ineffective, and also that the new GreenWorks line was at least as effective as Clorox's existing product lines. The intention of this slogan and the associated advertising campaign has been interpreted as appealing to consumers' fears that products from companies with lessbrand recognitionare less trustworthy or effective. Critics also pointed out that, despite its representation of GreenWorks products as "green" in the sense of being less harmful to the environment and/or consumers using them, the products contain a number of ingredients advocates of natural products have long campaigned against the use of in household products due totoxicityto humans or their environment.[70]All three implicit claims have been disputed, and some of their elements disproven, by environmental groups, consumer-protection groups, and the industry self-regulatoryBetter Business Bureau.[71] This article is based in part on theJargon File, which is in the public domain.
https://en.wikipedia.org/wiki/Fear,_uncertainty,_and_doubt
Simplicityis the state or quality of beingsimple. Something easy to understand or explain seems simple, in contrast to something complicated. Alternatively, asHerbert A. Simonsuggests, something is simple orcomplexdepending on the way we choose to describe it.[1]In some uses, the label "simplicity" can implybeauty, purity, or clarity. In other cases, the term may suggest a lack of nuance or complexity relative to what is required. The concept of simplicity is related to the field ofepistemologyandphilosophy of science(e.g., inOccam's razor). Religions also reflect on simplicity with concepts such asdivine simplicity. In humanlifestyles, simplicity can denote freedom from excessive possessions or distractions, such as having asimple livingstyle. In some cases, the term may have negative connotations, as when referring to someone as asimpleton. There is a widespread philosophical presumption that simplicity is a theoretical virtue. This presumption that simpler theories are preferable appears in many guises. Often it remains implicit; sometimes it is invoked as a primitive, self-evident proposition; other times it is elevated to the status of a ‘Principle’ and labeled as such (for example, the 'Principle of Parsimony'.[2] According toOccam's razor, all other things being equal, thesimplesttheory is most likely true. In other words, simplicity is a meta-scientific criterion by which scientists evaluate competing theories. A distinction is often made by many persons[by whom?]between two senses of simplicity:syntactic simplicity(the number and complexity of hypotheses), andontological simplicity(the number and complexity of things postulated). These two aspects of simplicity are often referred to aseleganceandparsimonyrespectively.[3] John von Neumanndefines simplicity as an important esthetic criterion of scientific models: [...] (scientific model) must satisfy certain esthetic criteria - that is, in relation to how much it describes, it must be rather simple. I think it is worth while insisting on these vague terms - for instance, on the use of word rather. One cannot tell exactly how "simple" simple is. [...] Simplicity is largely a matter of historical background, of previous conditioning, of antecedents, of customary procedures, and it is very much a function of what is explained by it.[4] The recognition that too much complexity can have a negative effect on business performance was highlighted in research undertaken in 2011 by Simon Collinson of theWarwick Business Schooland the Simplicity Partnership, which found that managers who are orientated towards finding ways of making business "simpler and more straightforward" can have a beneficial impact on their organisation. Most organizations contain some amount of complexity that is not performance enhancing, but drains value out of the company. 
Collinson concluded that this type of 'bad complexity' reduced profitability (EBITDA) by more than 10%.[5] Collinson identified a role for "simplicity-minded managers" who were "predisposed towards simplicity", and identified a set of characteristics related to the role: "ruthless prioritisation", the ability to say "no", a willingness to iterate, the ability to reduce communication to the essential points of a message, and the ability to engage a team.[5] His report, the Global Simplicity Index 2011, was the first study to calculate the cost of complexity in the world's largest organisations.[6] The Global Simplicity Index identified that complexity occurs in five key areas of an organisation: people, processes, organisational design, strategy, and products and services.[7] As the "global brands report", the research is repeated and published annually.[8]: 3 The 2022 report incorporates a "brand simplicity score" and an "industry simplicity score".[9] Research by Ioannis Evmoiridis at Tilburg University found that earnings reported by "high simplicity firms" are higher than those of other businesses, and that such firms "exhibit[ed] a superior performance during the period 2010 - 2015", whilst requiring lower average capital expenditure and lower leverage.[8]: 18 Simplicity is a theme in the Christian religion. According to St. Thomas Aquinas, God is infinitely simple. The Roman Catholic and Anglican religious orders of Franciscans also strive for personal simplicity. Members of the Religious Society of Friends (Quakers) practice the Testimony of Simplicity, which involves simplifying one's life to focus on what is important and disregarding or avoiding what is least important. Simplicity is a tenet of Anabaptism, and some Anabaptist groups, such as the Bruderhof, make an effort to live simply.[10][11] In the context of human lifestyle, simplicity can denote freedom from excessive material consumption and psychological distractions. "Receive with simplicity everything that happens to you." —Rashi (French rabbi, 11th century), quoted at the beginning of the film A Serious Man (2009) by the Coen Brothers
https://en.wikipedia.org/wiki/Global_Simplicity_Index
The Goldilocks principle is named by analogy to the children's story "Goldilocks and the Three Bears", in which a young girl named Goldilocks tastes three different bowls of porridge and finds she prefers porridge that is neither too hot nor too cold but has just the right temperature.[1] The concept of "just the right amount" is easily understood and applied to a wide range of disciplines, including developmental psychology, biology,[2] astronomy, economics[3] and engineering. In cognitive science and developmental psychology, the Goldilocks effect or principle refers to an infant's preference to attend to events that are neither too simple nor too complex according to their current representation of the world.[4] This effect was observed in infants, who are less likely to look away from a visual sequence when the current event is moderately probable, as measured by an idealized learning model. In astrobiology, the Goldilocks zone refers to the habitable zone around a star. As Stephen Hawking put it, "Like Goldilocks, the development of intelligent life requires that planetary temperatures be 'just right'".[5] The Rare Earth hypothesis uses the Goldilocks principle in the argument that a planet must be neither too far away from nor too close to a star and galactic centre to support life, while either extreme would result in a planet incapable of supporting life.[6] Such a planet is colloquially called a "Goldilocks Planet".[7][8] Paul Davies has argued for the extension of the principle to cover the selection of our universe from a (postulated) multiverse: "Observers arise only in those universes where, like Goldilocks' porridge, things are by accident 'just right'".[9] In medicine, it can refer to a drug that can hold both antagonist (inhibitory) and agonist (excitatory) properties. For example, the antipsychotic Aripiprazole causes not only antagonism of dopamine D2 receptors in areas such as the mesolimbic area of the brain (which shows increased dopamine activity in psychosis) but also agonism of dopamine receptors in areas of dopamine hypoactivity, such as the mesocortical area.[citation needed] In economics, a Goldilocks economy sustains moderate economic growth and low inflation, which allows a market-friendly monetary policy. A Goldilocks market occurs when the price of commodities sits between a bear market and a bull market. Goldilocks pricing, also known as good–better–best pricing, is a marketing strategy that uses product differentiation to offer three versions of a product to corner different parts of the market: a high-end version, a middle version, and a low-end version. In communication, the Goldilocks principle describes the amount, type, and detail of communication necessary in a system to maximise effectiveness while minimising redundancy and excessive scope on the "too much" side and avoiding incomplete or inaccurate communication on the "too little" side.[10] In statistics, the "Goldilocks Fit" refers to a linear regression model with just the right flexibility: enough to keep the error from bias low, but not so much that the error from variance dominates. In the design sprint, the "Goldilocks Quality" means to create a prototype with just enough quality to evoke honest reactions from customers.[11] In machine learning, the Goldilocks learning rate is the learning rate that results in an algorithm taking the fewest steps to achieve minimal loss. Algorithms with a learning rate that is too large often fail to converge at all, while those with too small a learning rate take too long to converge.[12]
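The learning-rate behaviour described above can be made concrete with a toy gradient-descent loop. The sketch below is illustrative only: the quadratic objective, the three example rates, the starting point, and the step budget are invented for the demonstration rather than taken from any cited source.

def steps_to_converge(lr, x0=5.0, tol=1e-6, max_steps=10_000):
    """Gradient descent on f(x) = x**2; return the step count on convergence, else None."""
    x = x0
    for step in range(1, max_steps + 1):
        x -= lr * 2 * x              # the gradient of x**2 is 2x
        if abs(x) < tol:
            return step              # converged
        if abs(x) > 1e12:
            return None              # diverged: the rate was too large
    return None                      # ran out of steps: the rate was too small for the budget

for lr in (1.1, 1e-5, 0.3):          # too large, too small, and roughly "just right"
    print(lr, steps_to_converge(lr))

With these illustrative values the largest rate diverges, the smallest exhausts the step budget, and the middle rate converges quickly, mirroring the Goldilocks behaviour described above.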
https://en.wikipedia.org/wiki/Goldilocks_process
Theinnovation butterflyis ametaphorthat describes how seemingly minor perturbations (disturbances or changes) to project plans in a system connecting markets, demand, product features, and a firm's capabilities can steer the project, or an entire portfolio of projects, down an irreversible path in terms of technology and market evolution. The metaphor was developed by researchers Anderson and Joglekar.[1]It was conceived as a specific instance of the more general 'butterfly effect' encountered inchaos theory. The innovation butterfly arises because many innovation systems are made up of a large number of elements that interact with each other via several non-linearfeedback loopscontaining embedded delays, thus constituting acomplex system.[2] Perturbations can come from decisions made within the firm or from those made by its competitors, or they can result from external forces such as government legislation or environmental regulations, or unexpected spikes in theprice of oil. How the innovation system evolves as a result of the innovation butterfly can lead ultimately to an innovative firm's success or failure. Complex systems, in domains such asphysics,biology, orsociology, are known to be prone to bothpath dependenceandemergent behavior. What makes the behavior of the innovation butterfly different is market selection, along with biases in individual and group decision making within distributed innovation settings, which may influence the emergent behavior. Furthermore, managers in most fields of business endeavor to reduce uncertainty in order to better manage risk. In innovation settings, however, because success is based uponcreativity, managers must actively embraceuncertainty. This leads to a management conundrum because innovation managers and management systems must encourage the potential for a butterfly effect but then must also learn how to cope with its aftermath.[3][4] How innovation butterflies are 'chased' is highly managerially relevant.[5]Most butterflies end up 'merely' consuming a considerable amount of time and resources within a project, or for an innovation portfolio, within a firm. However, some butterflies can also unleashregime-alteringemergent outcomes within an entire industry segment.[6]Moreover, once these emergent outcomes begin to mature, and in some instances lead todisruptive innovations, they become extremely difficult to manage,[7]Hence, shaping the innovation system before potential innovation butterfly's effects completely emerge is critical.[3]
https://en.wikipedia.org/wiki/Innovation_butterfly
Aregular expression(shortened asregexorregexp),[1]sometimes referred to asrational expression,[2][3]is a sequence ofcharactersthat specifies amatch patternintext. Usually such patterns are used bystring-searching algorithmsfor "find" or "find and replace" operations onstrings, or forinput validation. Regular expression techniques are developed intheoretical computer scienceandformal languagetheory. The concept of regular expressions began in the 1950s, when the American mathematicianStephen Cole Kleeneformalized the concept of aregular language. They came into common use withUnixtext-processing utilities. Differentsyntaxesfor writing regular expressions have existed since the 1980s, one being thePOSIXstandard and another, widely used, being thePerlsyntax. Regular expressions are used insearch engines, in search and replace dialogs ofword processorsandtext editors, intext processingutilities such assedandAWK, and inlexical analysis. Regular expressions are supported in many programming languages. Library implementations are often called an "engine",[4][5]andmany of theseare available for reuse. Regular expressions originated in 1951, when mathematicianStephen Cole Kleenedescribedregular languagesusing his mathematical notation calledregular events.[6][7]These arose intheoretical computer science, in the subfields ofautomata theory(models of computation) and the description and classification offormal languages, motivated by Kleene's attempt to describe earlyartificial neural networks. (Kleene introduced it as an alternative toMcCulloch & Pitts's"prehensible", but admitted "We would welcome any suggestions as to a more descriptive term."[8]) Other early implementations ofpattern matchinginclude theSNOBOLlanguage, which did not use regular expressions, but instead its own pattern matching constructs. Regular expressions entered popular use from 1968 in two uses: pattern matching in a text editor[9]and lexical analysis in a compiler.[10]Among the first appearances of regular expressions in program form was whenKen Thompsonbuilt Kleene's notation into the editorQEDas a means to match patterns intext files.[9][11][12][13]For speed, Thompson implemented regular expression matching byjust-in-time compilation(JIT) toIBM 7094code on theCompatible Time-Sharing System, an important early example of JIT compilation.[14]He later added this capability to the Unix editored, which eventually led to the popular search toolgrep's use of regular expressions ("grep" is a word derived from the command for regular expression searching in the ed editor:g/re/pmeaning "Global search for Regular Expression and Print matching lines").[15]Around the same time when Thompson developed QED, a group of researchers includingDouglas T. Rossimplemented a tool based on regular expressions that is used for lexical analysis incompilerdesign.[10] Many variations of these original forms of regular expressions were used inUnix[13]programs atBell Labsin the 1970s, includinglex,sed,AWK, andexpr, and in other programs such asvi, andEmacs(which has its own, incompatible syntax and behavior). Regexes were subsequently adopted by a wide range of programs, with these early forms standardized in thePOSIX.2standard in 1992. In the 1980s, the more complicated regexes arose inPerl, which originally derived from a regex library written byHenry Spencer(1986), who later wrote an implementation forTclcalledAdvanced Regular Expressions.[16]The Tcl library is a hybridNFA/DFAimplementation with improved performance characteristics. 
Software projects that have adopted Spencer's Tcl regular expression implementation includePostgreSQL.[17]Perl later expanded on Spencer's original library to add many new features.[18]Part of the effort in the design ofRaku(formerly named Perl 6) is to improve Perl's regex integration, and to increase their scope and capabilities to allow the definition ofparsing expression grammars.[19]The result is amini-languagecalledRaku rules, which are used to define Raku grammar as well as provide a tool to programmers in the language. These rules maintain existing features of Perl 5.x regexes, but also allowBNF-style definition of arecursive descent parservia sub-rules. The use of regexes in structured information standards for document and database modeling started in the 1960s and expanded in the 1980s when industry standards likeISO SGML(precursored by ANSI "GCA 101-1983") consolidated. The kernel of thestructure specification languagestandards consists of regexes. Its use is evident in theDTDelement group syntax. Prior to the use of regular expressions, many search languages allowed simple wildcards, for example "*" to match any sequence of characters, and "?" to match a single character. Relics of this can be found today in theglobsyntax for filenames, and in theSQLLIKEoperator. Starting in 1997,Philip HazeldevelopedPCRE(Perl Compatible Regular Expressions), which attempts to closely mimic Perl's regex functionality and is used by many modern tools includingPHPandApache HTTP Server.[20] Today, regexes are widely supported in programming languages, text processing programs (particularlylexers), advanced text editors, and some other programs. Regex support is part of thestandard libraryof many programming languages, includingJavaandPython, and is built into the syntax of others, including Perl andECMAScript. In the late 2010s, several companies started to offer hardware,FPGA,[21]GPU[22]implementations ofPCREcompatible regex engines that are faster compared toCPUimplementations. The phraseregular expressions, orregexes, is often used to mean the specific, standard textual syntax for representing patterns for matching text, as distinct from the mathematical notation described below. Each character in a regular expression (that is, each character in the string describing its pattern) is either ametacharacter, having a special meaning, or a regular character that has a literal meaning. For example, in the regexb., 'b' is a literal character that matches just 'b', while '.' is a metacharacter that matches every character except a newline. Therefore, this regex matches, for example, 'b%', or 'bx', or 'b5'. Together, metacharacters and literal characters can be used to identify text of a given pattern or process a number of instances of it. Pattern matches may vary from a precise equality to a very general similarity, as controlled by the metacharacters. For example,.is a very general pattern,[a-z](match all lower case letters from 'a' to 'z') is less general andbis a precise pattern (matches just 'b'). The metacharacter syntax is designed specifically to represent prescribed targets in a concise and flexible way to direct the automation of text processing of a variety of input data, in a form easy to type using a standardASCIIkeyboard. 
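As a quick, hedged illustration of the literal/metacharacter distinction just described, the following sketch uses Python's re module purely as a readily available engine; the sample strings are invented for the example.

import re

pattern = re.compile(r"b.")              # 'b' is a literal; '.' is a metacharacter
for text in ["b%", "bx", "b5", "b\n", "abc"]:
    print(repr(text), bool(pattern.match(text)))
# "b%", "bx" and "b5" match; "b\n" does not, because '.' excludes the newline,
# and "abc" fails because match() anchors the pattern at the start of the string.

print(bool(re.match(r"[a-z]", "q")))     # the range [a-z] matches any lower-case letter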
A very simple case of a regular expression in this syntax is to locate a word spelled two different ways in atext editor, the regular expressionseriali[sz]ematches both "serialise" and "serialize".Wildcard charactersalso achieve this, but are more limited in what they can pattern, as they have fewer metacharacters and a simple language-base. The usual context of wildcard characters is inglobbingsimilar names in a list of files, whereas regexes are usually employed in applications that pattern-match text strings in general. For example, the regex^[ \t]+|[ \t]+$matches excess whitespace at the beginning or end of a line. An advanced regular expression that matches any numeral is[+-]?(\d+(\.\d*)?|\.\d+)([eE][+-]?\d+)?. Aregex processortranslates a regular expression in the above syntax into an internal representation that can be executed and matched against astringrepresenting the text being searched in. One possible approach is theThompson's construction algorithmto construct anondeterministic finite automaton(NFA), which is thenmade deterministicand the resultingdeterministic finite automaton(DFA) is run on the target text string to recognize substrings that match the regular expression. The picture shows the NFA schemeN(s*)obtained from the regular expressions*, wheresdenotes a simpler regular expression in turn, which has already beenrecursivelytranslated to the NFAN(s). A regular expression, often called apattern, specifies asetof strings required for a particular purpose. A simple way to specify a finite set of strings is to list itselementsor members. However, there are often more concise ways: for example, the set containing the three strings "Handel", "Händel", and "Haendel" can be specified by the patternH(ä|ae?)ndel; we say that this patternmatcheseach of the three strings. However, there can be many ways to write a regular expression for the same set of strings: for example,(Hän|Han|Haen)delalso specifies the same set of three strings in this example. Most formalisms provide the following operations to construct regular expressions. These constructions can be combined to form arbitrarily complex expressions, much like one can construct arithmetical expressions from numbers and the operations +, −, ×, and ÷. The precisesyntaxfor regular expressions varies among tools and with context; more detail is given in§ Syntax. Regular expressions describeregular languagesinformal language theory. They have the same expressive power asregular grammars. Regular expressions consist of constants, which denote sets of strings, and operator symbols, which denote operations over these sets. The following definition is standard, and found as such in most textbooks on formal language theory.[24][25]Given a finitealphabetΣ, the following constants are defined as regular expressions: Given regular expressions R and S, the following operations over them are defined to produce regular expressions: To avoid parentheses, it is assumed that the Kleene star has the highest priority followed by concatenation, then alternation. If there is no ambiguity, then parentheses may be omitted. For example,(ab)ccan be written asabc, anda|(b(c*))can be written asa|bc*. Many textbooks use the symbols ∪, +, or ∨ for alternation instead of the vertical bar. Examples: The formal definition of regular expressions is minimal on purpose, and avoids defining?and+—these can be expressed as follows:a+=aa*, anda?=(a|ε). 
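The derived operators and the Händel pattern above can be checked mechanically. The snippet below is an illustrative sketch using Python's re module; the test strings are invented, and the empty alternation branch used to stand in for ε may not be accepted by every engine.

import re

# a+ is definable as aa*, and a? as (a|), with the empty branch playing the role of ε.
for s in ["", "a", "aa", "aaa", "b"]:
    assert bool(re.fullmatch(r"a+", s)) == bool(re.fullmatch(r"aa*", s))
    assert bool(re.fullmatch(r"a?", s)) == bool(re.fullmatch(r"a|", s))

# H(ä|ae?)ndel matches all three spellings; a non-capturing group is used so that
# findall returns whole matches rather than the contents of the group.
print(re.findall(r"H(?:ä|ae?)ndel", "Handel, Händel, and Haendel"))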
Sometimes the complement operator is added, to give a generalized regular expression; here R^c (the complement of R) matches all strings over Σ* that do not match R. In principle, the complement operator is redundant, because it does not grant any more expressive power. However, it can make a regular expression much more concise—eliminating a single complement operator can cause a double exponential blow-up of its length.[26][27][28] Regular expressions in this sense can express the regular languages, exactly the class of languages accepted by deterministic finite automata. There is, however, a significant difference in compactness. Some classes of regular languages can only be described by deterministic finite automata whose size grows exponentially in the size of the shortest equivalent regular expressions. The standard example here is the languages Lk consisting of all strings over the alphabet {a,b} whose kth-from-last letter equals a. On the one hand, a regular expression describing L4 is given by (a∣b)∗a(a∣b)(a∣b)(a∣b). Generalizing this pattern to Lk gives the expression (a∣b)∗a(a∣b)(a∣b)⋯(a∣b), with k − 1 trailing copies of (a∣b). On the other hand, it is known that every deterministic finite automaton accepting the language Lk must have at least 2^k states. Luckily, there is a simple mapping from regular expressions to the more general nondeterministic finite automata (NFAs) that does not lead to such a blowup in size; for this reason NFAs are often used as alternative representations of regular languages. NFAs are a simple variation of the type-3 grammars of the Chomsky hierarchy.[24] In the opposite direction, there are many languages easily described by a DFA that are not easily described by a regular expression. For instance, determining the validity of a given ISBN requires computing the modulus of the integer base 11, and can be easily implemented with an 11-state DFA. However, converting it to a regular expression results in a 2.14 megabyte file.[29] Given a regular expression, Thompson's construction algorithm computes an equivalent nondeterministic finite automaton. A conversion in the opposite direction is achieved by Kleene's algorithm. Finally, many real-world "regular expression" engines implement features that cannot be described by the regular expressions in the sense of formal language theory; rather, they implement regexes. See below for more on this. As seen in many of the examples above, there is more than one way to construct a regular expression to achieve the same results. It is possible to write an algorithm that, for two given regular expressions, decides whether the described languages are equal; the algorithm reduces each expression to a minimal deterministic finite state machine, and determines whether they are isomorphic (equivalent). Algebraic laws for regular expressions can be obtained using a method by Gischer which is best explained with an example: in order to check whether (X+Y)∗ and (X∗Y∗)∗ denote the same regular language, for all regular expressions X, Y, it is necessary and sufficient to check whether the particular regular expressions (a+b)∗ and (a∗b∗)∗ denote the same language over the alphabet Σ={a,b}. More generally, an equation E=F between regular-expression terms with variables holds if, and only if, its instantiation with different variables replaced by different symbol constants holds.[30][31] Every regular expression can be written solely in terms of the Kleene star and set unions over finite words. This is a surprisingly difficult problem.
As simple as the regular expressions are, there is no method to systematically rewrite them to some normal form. The lack of axiom in the past led to thestar height problem. In 1991,Dexter Kozenaxiomatized regular expressions as aKleene algebra, using equational andHorn clauseaxioms.[32]Already in 1964, Redko had proved that no finite set of purely equational axioms can characterize the algebra of regular languages.[33] A regexpatternmatches a targetstring. The pattern is composed of a sequence ofatoms. An atom is a single point within the regex pattern which it tries to match to the target string. The simplest atom is a literal, but grouping parts of the pattern to match an atom will require using( )as metacharacters. Metacharacters help form:atoms;quantifierstelling how many atoms (and whether it is agreedyquantifieror not); a logical OR character, which offers a set of alternatives, and a logical NOT character, which negates an atom's existence; and backreferences to refer to previous atoms of a completing pattern of atoms. A match is made, not when all the atoms of the string are matched, but rather when all the pattern atoms in the regex have matched. The idea is to make a small pattern of characters stand for a large number of possible strings, rather than compiling a large list of all the literal possibilities. Depending on the regex processor there are about fourteen metacharacters, characters that may or may not have theirliteralcharacter meaning, depending on context, or whether they are "escaped", i.e. preceded by anescape sequence, in this case, the backslash\. Modern and POSIX extended regexes use metacharacters more often than their literal meaning, so to avoid "backslash-osis" orleaning toothpick syndrome, they have a metacharacter escape to a literal mode; starting out, however, they instead have the four bracketing metacharacters( )and{ }be primarily literal, and "escape" this usual meaning to become metacharacters. Common standards implement both. The usual metacharacters are{}[]()^$.|*+?and\. The usual characters that become metacharacters when escaped aredswDSWandN. When entering a regex in a programming language, they may be represented as a usual string literal, hence usually quoted; this is common in C, Java, and Python for instance, where the regexreis entered as"re". However, they are often written with slashes asdelimiters, as in/re/for the regexre. This originates ined, where/is the editor command for searching, and an expression/re/can be used to specify a range of lines (matching the pattern), which can be combined with other commands on either side, most famouslyg/re/pas ingrep("global regex print"), which is included in mostUnix-based operating systems, such asLinuxdistributions. A similar convention is used insed, where search and replace is given bys/re/replacement/and patterns can be joined with a comma to specify a range of lines as in/re1/,/re2/. This notation is particularly well known due to its use inPerl, where it forms part of the syntax distinct from normal string literals. In some cases, such as sed and Perl, alternative delimiters can be used to avoid collision with contents, and to avoid having to escape occurrences of the delimiter character in the contents. For example, in sed the commands,/,X,will replace a/with anX, using commas as delimiters. TheIEEEPOSIXstandard has three sets of compliance:BRE(Basic Regular Expressions),[34]ERE(Extended Regular Expressions), andSRE(Simple Regular Expressions). 
SRE is deprecated,[35] in favor of BRE, as both provide backward compatibility. The subsection below covering the character classes applies to both BRE and ERE. BRE and ERE work together. ERE adds ?, +, and |, and it removes the need to escape the metacharacters ( ) and { }, which are required in BRE. Furthermore, as long as the POSIX standard syntax for regexes is adhered to, there can be, and often is, additional syntax to serve specific (yet POSIX compliant) applications. Although POSIX.2 leaves some implementation specifics undefined, BRE and ERE provide a "standard" which has since been adopted as the default syntax of many tools, where the choice of BRE or ERE modes is usually a supported option. For example, GNU grep has the following options: "grep -E" for ERE, "grep -G" for BRE (the default), and "grep -P" for Perl regexes. Perl regexes have become a de facto standard, having a rich and powerful set of atomic expressions. Perl has no "basic" or "extended" levels. As in POSIX EREs, ( ) and { } are treated as metacharacters unless escaped; other metacharacters are known to be literal or symbolic based on context alone. Additional functionality includes lazy matching, backreferences, named capture groups, and recursive patterns. In the POSIX standard, Basic Regular Syntax (BRE) requires that the metacharacters ( ) and { } be designated \(\) and \{\}, whereas Extended Regular Syntax (ERE) does not. The - character is treated as a literal character if it is the last or the first (after the ^, if present) character within the brackets: [abc-], [-abc], [^-abc]. Backslash escapes are not allowed. The ] character can be included in a bracket expression if it is the first (after the ^, if present) character: []abc], [^]abc]. According to Russ Cox, the POSIX specification requires ambiguous subexpressions to be handled in a way different from Perl's. The committee replaced Perl's rules with a rule that is simple to explain, but the new "simple" rule is actually more complex to implement: it was incompatible with pre-existing tooling and made it essentially impossible to define a "lazy match" (see below) extension. As a result, very few programs actually implement the POSIX subexpression rules (even when they implement other parts of the POSIX syntax).[37] The meaning of metacharacters escaped with a backslash is reversed for some characters in the POSIX Extended Regular Expression (ERE) syntax. With this syntax, a backslash causes the metacharacter to be treated as a literal character. So, for example, \( \) is now ( ) and \{ \} is now { }. Additionally, support is removed for \n backreferences, and the metacharacters ?, +, and | are added. POSIX Extended Regular Expressions can often be used with modern Unix utilities by including the command line flag -E. The character class is the most basic regex concept after a literal match. It makes one small sequence of characters match a larger set of characters. For example, [A-Z] could stand for any uppercase letter in the English alphabet, and \d could mean any digit. Character classes apply to both POSIX levels. When specifying a range of characters, such as [a-Z] (i.e. lowercase a to uppercase Z), the computer's locale settings determine the contents by the numeric ordering of the character encoding. They could store digits in that sequence, or the ordering could be abc...zABC...Z, or aAbBcC...zZ. So the POSIX standard defines a character class, which will be known by the regex processor installed.
Those definitions are in the following table: POSIX character classes can only be used within bracket expressions. For example,[[:upper:]ab]matches the uppercase letters and lowercase "a" and "b". An additional non-POSIX class understood by some tools is[:word:], which is usually defined as[:alnum:]plus underscore. This reflects the fact that in many programming languages these are the characters that may be used in identifiers. The editorVimfurther distinguisheswordandword-headclasses (using the notation\wand\h) since in many programming languages the characters that can begin an identifier are not the same as those that can occur in other positions: numbers are generally excluded, so an identifier would look like\h\w*or[[:alpha:]_][[:alnum:]_]*in POSIX notation. Note that what the POSIX regex standards callcharacter classesare commonly referred to asPOSIX character classesin other regex flavors which support them. With most other regex flavors, the termcharacter classis used to describe what POSIX callsbracket expressions. Because of its expressive power and (relative) ease of reading, many other utilities and programming languages have adopted syntax similar toPerl's—for example,Java,JavaScript,Julia,Python,Ruby,Qt, Microsoft's.NET Framework, andXML Schema. Some languages and tools such asBoostandPHPsupport multiple regex flavors. Perl-derivative regex implementations are not identical and usually implement a subset of features found in Perl 5.0, released in 1994. Perl sometimes does incorporate features initially found in other languages. For example, Perl 5.10 implements syntactic extensions originally developed in PCRE and Python.[38] In Python and some other implementations (e.g. Java), the three common quantifiers (*,+and?) aregreedyby default because they match as many characters as possible.[39]The regex".+"(including the double-quotes) applied to the string "Ganymede," he continued, "is the largest moon in the Solar System." matches the entire line (because the entire line begins and ends with a double-quote) instead of matching only the first part,"Ganymede,". The aforementioned quantifiers may, however, be madelazyorminimalorreluctant, matching as few characters as possible, by appending a question mark:".+?"matches only"Ganymede,".[39] In Java and Python 3.11+,[40]quantifiers may be madepossessiveby appending a plus sign, which disables backing off (in a backtracking engine), even if doing so would allow the overall match to succeed:[41]While the regex".*"applied to the string "Ganymede," he continued, "is the largest moon in the Solar System." matches the entire line, the regex".*+"doesnot match at all, because.*+consumes the entire input, including the final". Thus, possessive quantifiers are most useful with negated character classes, e.g."[^"]*+", which matches"Ganymede,"when applied to the same string. Another common extension serving the same function is atomic grouping, which disables backtracking for a parenthesized group. The typical syntax is(?>group). For example, while^(wi|w)i$matches bothwiandwii,^(?>wi|w)i$only matcheswiibecause the engine is forbidden from backtracking and so cannot try setting the group to "w" after matching "wi".[42] Possessive quantifiers are easier to implement than greedy and lazy quantifiers, and are typically more efficient at runtime.[41] IETF RFC 9485 describes "I-Regexp: An Interoperable Regular Expression Format". It specifies a limited subset of regular-expression idioms designed to be interoperable, i.e. 
produce the same effect, in a large number of regular-expression libraries. I-Regexp is also limited to matching, i.e. providing a true or false match between a regular expression and a given piece of text. Thus, it lacks advanced features such as capture groups, lookahead, and backreferences.[43] Many features found in virtually all modern regular expression libraries provide an expressive power that exceeds the regular languages. For example, many implementations allow grouping subexpressions with parentheses and recalling the value they match in the same expression (backreferences). This means that, among other things, a pattern can match strings of repeated words like "papa" or "WikiWiki", called squares in formal language theory. The pattern for these strings is (.+)\1. The language of squares is not regular, nor is it context-free, due to the pumping lemma. However, pattern matching with an unbounded number of backreferences, as supported by numerous modern tools, is still context sensitive.[44] The general problem of matching any number of backreferences is NP-complete, and the execution time for known algorithms grows exponentially with the number of backreference groups used.[45] However, many tools, libraries, and engines that provide such constructions still use the term regular expression for their patterns. This has led to a nomenclature where the term regular expression has different meanings in formal language theory and pattern matching. For this reason, some people have taken to using the term regex, regexp, or simply pattern to describe the latter. Larry Wall, author of the Perl programming language, writes in an essay about the design of Raku:

"Regular expressions" […] are only marginally related to real regular expressions. Nevertheless, the term has grown with the capabilities of our pattern matching engines, so I'm not going to try to fight linguistic necessity here. I will, however, generally call them "regexes" (or "regexen", when I'm in an Anglo-Saxon mood).[19]

Other features not found in descriptions of regular languages include assertions. These include the ubiquitous ^ and $, used since at least 1970,[46] as well as some more sophisticated extensions like lookaround that appeared in 1994.[47] Lookarounds define the surrounding of a match and do not spill into the match itself, a feature only relevant for the use case of string searching.[citation needed] Some of them can be simulated in a regular language by treating the surroundings as a part of the language as well.[48] The look-ahead assertions (?=...) and (?!...) have been attested since at least 1994, starting with Perl 5.[47] The lookbehind assertions (?<=...) and (?<!...) are attested since 1997 in a commit by Ilya Zakharevich to Perl 5.005.[49] There are at least three different algorithms that decide whether and how a given regex matches a string. The oldest and fastest relies on a result in formal language theory that allows every nondeterministic finite automaton (NFA) to be transformed into a deterministic finite automaton (DFA). The DFA can be constructed explicitly and then run on the input string one symbol at a time. Constructing the DFA for a regular expression of size m has a time and memory cost of O(2^m), but it can be run on a string of size n in time O(n). Note that the size of the expression is the size after abbreviations, such as numeric quantifiers, have been expanded. An alternative approach is to simulate the NFA directly, essentially building each DFA state on demand and then discarding it at the next step.
This keeps the DFA implicit and avoids the exponential construction cost, but the running cost rises to O(mn). The explicit approach is called the DFA algorithm and the implicit approach the NFA algorithm. Adding caching to the NFA algorithm is often called the "lazy DFA" algorithm, or just the DFA algorithm without making a distinction. These algorithms are fast, but using them for recalling grouped subexpressions, lazy quantification, and similar features is tricky.[50][51] Modern implementations include the re1-re2-sregex family based on Cox's code. The third algorithm is to match the pattern against the input string by backtracking. This algorithm is commonly called NFA, but this terminology can be confusing. Its running time can be exponential, which simple implementations exhibit when matching against expressions like (a|aa)*b that contain both alternation and unbounded quantification and force the algorithm to consider an exponentially increasing number of sub-cases. This behavior can cause a security problem called Regular expression Denial of Service (ReDoS). Although backtracking implementations only give an exponential guarantee in the worst case, they provide much greater flexibility and expressive power. For example, any implementation which allows the use of backreferences, or implements the various extensions introduced by Perl, must include some kind of backtracking. Some implementations try to provide the best of both algorithms by first running a fast DFA algorithm, and reverting to a potentially slower backtracking algorithm only when a backreference is encountered during the match. GNU grep (and the underlying gnulib DFA) uses such a strategy.[52] Sublinear runtime algorithms have been achieved using Boyer–Moore (BM) based algorithms and related DFA optimization techniques such as the reverse scan.[53] GNU grep, which supports a wide variety of POSIX syntaxes and extensions, uses BM for a first-pass prefiltering, and then uses an implicit DFA. Wu's agrep, which implements approximate matching, combines the prefiltering into the DFA in BDM (backward DAWG matching). NR-grep's BNDM extends the BDM technique with Shift-Or bit-level parallelism.[54] A few theoretical alternatives to backtracking for backreferences exist, and their "exponents" are tamer in that they are only related to the number of backreferences, a fixed property of some regexp languages such as POSIX. One naive method that duplicates a non-backtracking NFA for each backreference has a complexity of O(n^(2k+2)) time and O(n^(2k+1)) space for a haystack of length n and k backreferences in the regexp.[55] A very recent theoretical work based on memory automata gives a tighter bound based on the "active" variable nodes used, and a polynomial possibility for some backreferenced regexps.[56] In theoretical terms, any token set can be matched by regular expressions as long as it is pre-defined. In terms of historical implementations, regexes were originally written to use ASCII characters as their token set, though regex libraries have supported numerous other character sets. Many modern regex engines offer at least some support for Unicode. In most respects it makes no difference what the character set is, but some issues do arise when extending regexes to support Unicode. Most general-purpose programming languages support regex capabilities, either natively or via libraries.
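Several behaviours discussed in this and the preceding paragraphs — the backreference pattern (.+)\1 for squares, greedy versus lazy quantifiers, and the exponential backtracking of (a|aa)*b — can be observed directly in a backtracking engine such as CPython's re module. The sketch below is illustrative: the sample strings, input sizes, and timing loop are invented for the demonstration, and exact timings will vary by machine and interpreter version.

import re
import time

# Backreferences: (.+)\1 matches "squares", a nonempty string followed by itself.
square = re.compile(r"(.+)\1")
for s in ["papa", "WikiWiki", "paper"]:
    print(s, bool(square.fullmatch(s)))       # papa and WikiWiki match; paper does not

# Greedy versus lazy quantifiers on a quoted-string example.
line = '"Ganymede," he continued, "is the largest moon in the Solar System."'
print(re.search(r'".+"', line).group())       # greedy: runs to the last double quote
print(re.search(r'".+?"', line).group())      # lazy: stops at the first closing quote

# Catastrophic backtracking: with no 'b' in the input, (a|aa)*b must try
# exponentially many ways of splitting the run of 'a's before failing (ReDoS).
redos = re.compile(r"(a|aa)*b")
for n in (16, 24, 32):
    start = time.perf_counter()
    redos.fullmatch("a" * n)
    print(n, f"{time.perf_counter() - start:.4f}s")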
Regexes are useful in a wide variety of text processing tasks, and more generally string processing, where the data need not be textual. Common applications include data validation, data scraping (especially web scraping), data wrangling, simple parsing, the production of syntax highlighting systems, and many other tasks. Some high-end desktop publishing software has the ability to use regexes to automatically apply text styling, saving the person doing the layout from laboriously doing this by hand for anything that can be matched by a regex. For example, by defining a character style that makes text into small caps and then using the regex [A-Z]{4,} to apply that style, any word of four or more consecutive capital letters will be automatically rendered as small caps instead.

While regexes would be useful on Internet search engines, processing them across the entire database could consume excessive computer resources depending on the complexity and design of the regex. Although in many cases system administrators can run regex-based queries internally, most search engines do not offer regex support to the public. Notable exceptions include Google Code Search and Exalead. However, Google Code Search was shut down in January 2012.[59]

The specific syntax rules vary depending on the specific implementation, programming language, or library in use. Additionally, the functionality of regex implementations can vary between versions. Because regexes can be difficult to both explain and understand without examples, interactive websites for testing regexes are a useful resource for learning regexes by experimentation. This section provides a basic description of some of the properties of regexes by way of illustration. The following conventions are used in the examples.[60] These regexes are all Perl-like syntax. Standard POSIX regular expressions are different. Unless otherwise indicated, the following examples conform to the Perl programming language, release 5.8.8, January 31, 2006. This means that other implementations may lack support for some parts of the syntax shown here (e.g. basic vs. extended regex, \( \) vs. (), or lack of \d instead of POSIX [:digit:]). The syntax and conventions used in these examples coincide with that of other programming environments as well.[61]

[The table of example patterns and their outputs is not reproduced here. Among the points it illustrates: word boundaries can be approximated with (^\w|\w$|\W\w|\w\W), and in Unicode[58] the Alphabetic property contains more than Latin letters and the Decimal_Number property contains more than Arabic digits.]

Regular expressions can often be created ("induced" or "learned") based on a set of example strings. This is known as the induction of regular languages and is part of the general problem of grammar induction in computational learning theory. Formally, given examples of strings in a regular language, and perhaps also given examples of strings not in that regular language, it is possible to induce a grammar for the language, i.e., a regular expression that generates that language. Not all regular languages can be induced in this way (see language identification in the limit), but many can. For example, the set of examples {1, 10, 100}, and negative set (of counterexamples) {11, 1001, 101, 0} can be used to induce the regular expression 1⋅0* (1 followed by zero or more 0s).
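Both the small-caps styling rule and the induction example can be checked mechanically. The sketch below uses Python rather than the Perl syntax of the examples above; the sample text is an illustrative assumption.

    import re

    # The small-caps styling rule: find words of four or more consecutive capitals.
    print(re.findall(r"[A-Z]{4,}", "NASA and the BBC met at UNESCO"))  # ['NASA', 'UNESCO']

    # The induction example: 1.0* (written 10* as a regex) should accept every
    # positive example and reject every negative one.
    induced = re.compile(r"10*")
    positives = ["1", "10", "100"]
    negatives = ["11", "1001", "101", "0"]
    print(all(induced.fullmatch(s) for s in positives))   # True
    print(any(induced.fullmatch(s) for s in negatives))   # False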
https://en.wikipedia.org/wiki/Regular_expression#Implementations_and_running_times
Phrase structure rulesare a type ofrewrite ruleused to describe a given language'ssyntaxand are closely associated with the early stages oftransformational grammar, proposed byNoam Chomskyin 1957.[1]They are used to break down a naturallanguagesentence into its constituent parts, also known assyntactic categories, including both lexical categories (parts of speech) andphrasalcategories. A grammar that uses phrase structure rules is a type ofphrase structure grammar. Phrase structure rules as they are commonly employed operate according to theconstituencyrelation, and a grammar that employs phrase structure rules is therefore aconstituency grammar; as such, it stands in contrast todependency grammars, which are based on thedependencyrelation.[2] Phrase structure rules are usually of the following form: meaning that theconstituentA{\displaystyle A}is separated into the two subconstituentsB{\displaystyle B}andC{\displaystyle C}. Some examples for English are as follows: The first rule reads: A S (sentence) consists of a NP (noun phrase) followed by a VP (verb phrase). The second rule reads: A noun phrase consists of an optional Det (determiner) followed by a N (noun). The third rule means that a N (noun) can be preceded by an optional AP (adjective phrase) and followed by an optional PP (prepositional phrase). The round brackets indicate optional constituents. Beginning with the sentence symbol S, and applying the phrase structure rules successively, finally applying replacement rules to substitute actual words for the abstract symbols, it is possible to generate many proper sentences of English (or whichever language the rules are specified for). If the rules are correct, then any sentence produced in this way ought to be grammatically (syntactically)correct. It is also to be expected that the rules will generate syntactically correct butsemanticallynonsensical sentences, such as the following well-known example: This sentence was constructed byNoam Chomskyas an illustration that phrase structure rules are capable of generating syntactically correct but semantically incorrect sentences. Phrase structure rules break sentences down into their constituent parts. These constituents are often represented astree structures(dendrograms). The tree for Chomsky's sentence can be rendered as follows: A constituent is any word or combination of words that is dominated by a single node. Thus each individual word is a constituent. Further, the subject NPColorless green ideas, the minor NPgreen ideas, and the VPsleep furiouslyare constituents. Phrase structure rules and the tree structures that are associated with them are a form ofimmediate constituent analysis. Intransformational grammar, systems of phrase structure rules are supplemented by transformation rules, which act on an existing syntactic structure to produce a new one (performing such operations asnegation,passivization, etc.). These transformations are not strictly required for generation, as the sentences they produce could be generated by a suitably expanded system of phrase structure rules alone, but transformations provide greater economy and enable significant relations between sentences to be reflected in the grammar. An important aspect of phrase structure rules is that they view sentence structure from the top down. The category on the left of the arrow is a greater constituent and the immediate constituents to the right of the arrow are lesser constituents. 
Constituents are successively broken down into their parts as one moves down a list of phrase structure rules for a given sentence. This top-down view of sentence structure stands in contrast to much work done in modern theoretical syntax. InMinimalism[3]for instance, sentence structure is generated from the bottom up. The operationMergemerges smaller constituents to create greater constituents until the greatest constituent (i.e. the sentence) is reached. In this regard, theoretical syntax abandoned phrase structure rules long ago, although their importance forcomputational linguisticsseems to remain intact. Phrase structure rules as they are commonly employed result in a view of sentence structure that isconstituency-based. Thus, grammars that employ phrase structure rules areconstituency grammars(=phrase structure grammars), as opposed todependency grammars,[4]which view sentence structure asdependency-based. What this means is that for phrase structure rules to be applicable at all, one has to pursue a constituency-based understanding of sentence structure. The constituency relation is a one-to-one-or-more correspondence. For every word in a sentence, there is at least one node in the syntactic structure that corresponds to that word. The dependency relation, in contrast, is a one-to-one relation; for every word in the sentence, there is exactly one node in the syntactic structure that corresponds to that word. The distinction is illustrated with the following trees: The constituency tree on the left could be generated by phrase structure rules. The sentence S is broken down into smaller and smaller constituent parts. The dependency tree on the right could not, in contrast, be generated by phrase structure rules (at least not as they are commonly interpreted). A number of representational phrase structure theories of grammar never acknowledged phrase structure rules, but have pursued instead an understanding of sentence structure in terms the notion ofschema. Here phrase structures are not derived from rules that combine words, but from the specification or instantiation of syntactic schemata or configurations, often expressing some kind of semantic content independently of the specific words that appear in them. This approach is essentially equivalent to a system of phrase structure rules combined with a noncompositionalsemantictheory, since grammatical formalisms based on rewriting rules are generally equivalent in power to those based on substitution into schemata. So in this type of approach, instead of being derived from the application of a number of phrase structure rules, the sentenceColorless green ideas sleep furiouslywould be generated by filling the words into the slots of a schema having the following structure: And which would express the following conceptual content: Though they are non-compositional, such models are monotonic. This approach is highly developed withinConstruction grammar[5]and has had some influence inHead-Driven Phrase Structure Grammar[6]andlexical functional grammar,[7]the latter two clearly qualifying as phrase structure grammars.
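A phrase structure grammar of the kind described above is easy to run mechanically. The following sketch encodes a few toy rules and a toy lexicon (both illustrative assumptions, not taken from the article) and expands the start symbol S top-down, reproducing Chomsky's example sentence.

    # Toy phrase structure rules: each phrasal category rewrites to one fixed
    # sequence of constituents; lexical categories draw words from a lexicon.
    rules = {
        "S":  ["NP", "VP"],
        "NP": ["AP", "N"],
        "AP": ["Adj", "Adj"],
        "VP": ["V", "Adv"],
    }
    lexicon = {
        "Adj": iter(["colorless", "green"]),
        "N":   iter(["ideas"]),
        "V":   iter(["sleep"]),
        "Adv": iter(["furiously"]),
    }

    def expand(symbol):
        """Rewrite a symbol top-down until only words remain."""
        if symbol in rules:                                  # phrasal category
            return [word for child in rules[symbol] for word in expand(child)]
        return [next(lexicon[symbol])]                       # lexical category

    print(" ".join(expand("S")))   # colorless green ideas sleep furiously

Because the rules constrain only syntactic categories, the same machinery happily produces sentences that are semantically nonsensical, which is exactly the point of the example.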
https://en.wikipedia.org/wiki/Phrase_structure_rules
The Spirit Parser Framework is an object-oriented recursive descent parser generator framework implemented using template metaprogramming techniques. Expression templates allow users to approximate the syntax of extended Backus–Naur form (EBNF) completely in C++.[1] Parser objects are composed through operator overloading, and the result is a backtracking LL(∞) parser that is capable of parsing rather ambiguous grammars. Spirit can be used for both lexing and parsing, together or separately. This framework is part of the Boost libraries. Because of limitations of the C++ language, the syntax of Spirit has been designed around the operator precedences of C++, while bearing resemblance to both EBNF and regular expressions. (The original article includes a short example showing how to use an inline parser expression with a semantic action; the code is not reproduced here.)
https://en.wikipedia.org/wiki/Spirit_Parser_Framework
Pattern calculusbases all computation onpattern matchingof a very general kind. Likelambda calculus, it supports a uniform treatment offunction evaluation. Also, it allows functions to be passed as arguments and returned as results. In addition, pattern calculus supports uniform access to the internal structure of arguments, be they pairs orlistsortrees. Also, it allows patterns to be passed as arguments and returned as results. Uniform access is illustrated by a pattern-matching functionsizethat computes the size of an arbitrarydata structure. In the notation of theprogramming languagebondi, it is given by therecursive function The second, ordefault casex -> 1matches the patternxagainst the argument and returns1. This case is used only if the matching failed in the first case. The first, orspecial casematches against anycompound, such as a non-empty list, or pair. Matching bindsxto the left component andyto the right component. Then the body of the case adds the sizes of these components together. Similar techniques yield generic queries for searching and updating. Combining recursion and decomposition in this way yieldspath polymorphism. The ability to pass patterns as parameters (pattern polymorphism) is illustrated by defining a generic eliminator. Suppose given constructorsLeaffor creating the leaves of a tree, andCountfor converting numbers into counters. The corresponding eliminators are then For example,elimLeaf (Leaf 3)evaluates to3as doeselimCount (Count 3). These examples can be produced by applying the generic eliminatorelimto the constructors in question. It is defined by Nowelim Leafevaluates to| {y} Leaf y -> ywhich is equivalent toelimLeaf. Alsoelim Countis equivalent toelimCount. In general, the curly braces{}contain the bound variables of the pattern, so thatxis free andyis bound in| {y} x y -> y.
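bondi is not widely available, but the behaviour of size can be mimicked in an ordinary language by treating every compound as a pair of components. The Python sketch below is only an analogue of the pattern-calculus function, under the assumption that compounds are modelled as 2-tuples; it is not bondi code.

    def size(x):
        """Analogue of the two cases described above."""
        if isinstance(x, tuple) and len(x) == 2:      # special case: a compound "x y"
            left, right = x
            return size(left) + size(right)
        return 1                                      # default case: x -> 1

    # A pair, and a small constructor-style "list" cons(1, cons(2, nil)):
    print(size((1, 2)))                                        # 2
    print(size(("cons", (1, ("cons", (2, "nil"))))))           # 5: every atom is counted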
https://en.wikipedia.org/wiki/Pattern_calculus
glob()(/ɡlɒb/) is alibcfunction forglobbing, which is the archetypal use of pattern matching against thenames in a filesystem directorysuch that a name pattern is expanded into a list of names matching that pattern. Althoughglobbingmay now refer to glob()-style pattern matching of any string, not just expansion into a list of filesystem names, the original meaning of the term is still widespread. Theglob()function and the underlyinggmatch()function originated atBell Labsin the early 1970s alongside the originalAT&T UNIXitself and had a formative influence on the syntax of UNIX command line utilities and therefore also on the present-day reimplementations thereof. In their original form,glob()andgmatch()derived from code used inBell Labsin-house utilities that developed alongside the original Unix in the early 1970s. Among those utilities were also two command line tools calledglobandfind; each could be used to pass a list of matching filenames to other command line tools, and they shared the backend code subsequently formalized asglob()andgmatch(). Shell-statement-level globbing by default became commonplace following the"builtin"-integrationof globbing-functionality into the7th editionof theUnix shellin 1978. The Unix shell's -f option to disable globbing — i.e. revert to literal "file" mode — appeared in the same version. The globpattern quantifiersnow standardized byPOSIX.2(IEEE Std 1003.2) fall into two groups, and can be applied to any character sequence ("string"), not just to directory entries. As reimplementations ofBell Labs' UNIX proliferated, so did reimplementations of its Bell Labs' libc and shell, and with themglob()andglobbing. Today,glob()andglobbingare standardized by thePOSIX.2specification and are integral part of every Unix-like libc ecosystem and shell, including AT&T Bourne shell-compatibleKorn shell (ksh),Z shell (zsh),Almquist shell (ash)and its derivatives and reimplementations such asbusybox,toybox,GNU bash,Debian dash. The glob command, short forglobal, originates in the earliest versions of Bell Labs'Unix.[1]The command interpreters of the early versions of Unix (1st through 6th Editions, 1969–1975) relied on a separate program to expandwildcard charactersin unquoted arguments to a command:/etc/glob. That program performed the expansion and supplied the expanded list of file paths to the command for execution. Glob was originally written in theB programming language. It was the first piece of mainline Unix software to be developed in ahigh-level programming language.[2]Later, this functionality was provided as a Clibrary function,glob(), used by programs such as theshell. It is usually defined based on a function namedfnmatch(), which tests for whether a string matches a given pattern - the program using this function can then iterate through a series of strings (usually filenames) to determine which ones match. Both functions are a part ofPOSIX: the functions defined in POSIX.1 since 2001, and the syntax defined in POSIX.2.[3][4]The idea of defining a separate match function started withwildmat(wildcard match), a simple library to match strings against Bourne Shell globs. Traditionally, globs do not match hidden files in the form of Unixdotfiles; to match them the pattern must explicitly start with.. For example,*matches all visible files while.*matches all hidden files. The most common wildcards are*,?, and[…]. Normally, the path separator character (/on Linux/Unix, MacOS, etc. or\on Windows) will never be matched. 
Some shells, such asUnix shellhave functionality allowing users to circumvent this.[5] OnUnix-likesystems*,?is defined as above while[…]has two additional meanings:[6][7] The ranges are also allowed to include pre-defined character classes, equivalence classes for accented characters, and collation symbols for hard-to-type characters. They are defined to match up with the brackets in POSIX regular expressions.[6][7] Unix globbing is handled by theshellper POSIX tradition. Globbing is provided on filenames at thecommand lineand inshell scripts.[8]The POSIX-mandatedcasestatement in shells provides pattern-matching using glob patterns. Some shells (such as theC shellandBash) support additional syntax known asalternationorbrace expansion. Because it is not part of the glob syntax, it is not provided incase. It is only expanded on the command line before globbing. The Bash shell also supports the following extensions:[9] The originalDOSwas a clone ofCP/Mdesigned to work on Intel's8088and8086processors. Windows shells, following DOS, do not traditionally perform any glob expansion in arguments passed to external programs. Shells may use an expansion for their own builtin commands: Windows and DOS programs receive a long command-line string instead of argv-style parameters, and it is their responsibility to perform any splitting, quoting, or glob expansion. There is technically no fixed way of describing wildcards in programs since they are free to do what they wish. Two common glob expanders include:[12] Most other parts of Windows, including the Indexing Service, use the MS-DOS style of wildcards found in CMD. A relic of the 8.3 filename age, this syntax pays special attention to dots in the pattern and the text (filename). Internally this is done using three extra wildcard characters,<>". On the Windows API end, theglob()equivalent isFindFirstFile, andfnmatch()corresponds to its underlyingRtlIsNameInExpression.[14](Another fnmatch analogue isPathMatchSpec.) Both open-source msvcrt expanders useFindFirstFile, so 8.3 filename quirks will also apply in them. TheSQLLIKEoperator has an equivalent to?and*but not[…]. Standard SQL uses a glob-like syntax for simple string matching in itsLIKEoperator, although the term "glob" is not generally used in the SQL community. The percent sign (%) matches zero or more characters and the underscore (_) matches exactly one. Many implementations of SQL have extended theLIKEoperator to allow a richer pattern-matching language, incorporating character ranges ([…]), their negation, and elements of regular expressions.[15] Globs do not include syntax for theKleene starwhich allows multiple repetitions of the preceding part of the expression; thus they are not consideredregular expressions, which can describe the full set ofregular languagesover any given finite alphabet.[16] Globs attempt to match the entire string (for example,S*.DOCmatches S.DOC and SA.DOC, but not POST.DOC or SURREY.DOCKS), whereas, depending on implementation details, regular expressions may match a substring. The original Mozillaproxy auto-configimplementation, which provides a glob-matching function on strings, uses a replace-as-RegExp implementation as above. The bracket syntax happens to be covered by regex in such an example. Python's fnmatch uses a more elaborate procedure to transform the pattern into a regular expression.[17] Beyond their uses in shells, globs patterns also find use in a variety of programming languages, mainly to process human input. 
A glob-style interface for returning files or an fnmatch-style interface for matching strings is found in many programming languages (the original article tabulates them by language).
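Python's standard library, for instance, exposes both interfaces: glob for filesystem expansion and fnmatch for plain string matching. The filenames in the sketch below are illustrative.

    import fnmatch
    import glob

    # glob-style: expand a pattern against names in the filesystem.
    print(glob.glob("*.py"))                           # e.g. ['setup.py', 'demo.py']

    # fnmatch-style: test individual strings against a pattern.  As noted above,
    # globs must match the whole name, so SURREY.DOCKS does not match S*.DOC.
    for name in ["S.DOC", "SA.DOC", "POST.DOC", "SURREY.DOCKS"]:
        print(name, fnmatch.fnmatchcase(name, "S*.DOC"))

    # fnmatch also exposes its glob-to-regex translation.
    print(fnmatch.translate("?at[a-z]*"))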
https://en.wikipedia.org/wiki/Glob_(programming)
Insoftware, awildcard characteris a kind ofplaceholderrepresented by a singlecharacter, such as anasterisk(*), which can be interpreted as a number of literal characters or anempty string. It is often used in file searches so the full name need not be typed.[1] Intelecommunications, a wildcard is a character that may be substituted for any of a defined subset of all possible characters. Incomputer(software) technology, a wildcard is a symbol used to replace or represent zero or more characters.[2]Algorithms for matching wildcardshave been developed in a number ofrecursiveand non-recursive varieties.[3] When specifying file names (or paths) inCP/M,DOS,Microsoft Windows, andUnix-likeoperating systems, theasteriskcharacter (*, also called "star") matches zero or more characters. For example,doc*matchesdocanddocumentbut notdodo. If files are named with a date stamp, wildcards can be used to match date ranges, such as202505*.mp4to select video recordings from May 2025, to facilitate file operations such as copying and moving. In Unix-like and DOS operating systems, thequestion mark?matches exactly one character. In DOS, if the question mark is placed at the end of the word, it will also match missing (zero) trailing characters; for example, the pattern123?will match123and1234, but not12345. InUnix shellsandWindows PowerShell, ranges of characters enclosed insquare brackets([and]) match a single character within the set; for example,[A-Za-z]matches any single uppercase or lowercase letter. In Unix shells, a leading exclamation mark!negates the set and matches only a character not within the list. In shells that interpret!as a history substitution, a leading caret^can be used instead. The operation of matching of wildcard patterns to multiple file or path names is referred to asglobbing. InSQL, wildcard characters can be used in LIKE expressions; thepercentsign%matches zero or more characters, andunderscore_a single character.Transact-SQLalso supportssquare brackets([and]) to list sets and ranges of characters to match, a leading caret^negates the set and matches only a character not within the list. InMicrosoft Access, theasterisksign*matches zero or more characters, thequestion mark?matches a single character, thenumber sign#matches a single digit (0–9), and square brackets can be used for sets or ranges of characters to match. Inregular expressions, theperiod(., also called "dot") is the wildcard pattern which matches any single character. Followed by theKleene staroperator, which is denoted as anasterisk(*), we obtain.*, which will match zero or more arbitrary characters.
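The SQL LIKE behaviour described above can be tried directly with SQLite from Python; the table and its contents below are illustrative assumptions.

    import sqlite3

    # In SQL LIKE patterns, % matches zero or more characters and _ exactly one.
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE files(name TEXT)")
    con.executemany("INSERT INTO files VALUES (?)",
                    [("doc",), ("document",), ("dodo",), ("123",), ("1234",)])

    print(con.execute("SELECT name FROM files WHERE name LIKE 'doc%'").fetchall())
    # [('doc',), ('document',)]
    print(con.execute("SELECT name FROM files WHERE name LIKE '123_'").fetchall())
    # [('1234',)]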
https://en.wikipedia.org/wiki/Wildcard_character
An algorithm is fundamentally a set of rules or defined procedures that is typically designed and used to solve a specific problem or a broad set of problems. Broadly, algorithms define processes, sets of rules, or methodologies that are to be followed in calculations, data processing, data mining, pattern recognition, automated reasoning, or other problem-solving operations. With the increasing automation of services, more and more decisions are being made by algorithms. Some general examples are: risk assessments, anticipatory policing, and pattern recognition technology.[1] The following is a list of well-known algorithms along with one-line descriptions for each.
https://en.wikipedia.org/wiki/List_of_algorithms
In mathematics and theoretical computer science, a set constraint is an equation or an inequation between sets of terms. Similar to systems of (in)equations between numbers, methods are studied for solving systems of set constraints. Different approaches admit different operators (like "∪", "∩", "\", and function application)[note 1] on sets and different (in)equation relations (like "=", "⊆", and "⊈") between set expressions. Systems of set constraints are useful to describe (in particular infinite) sets of ground terms.[note 2] They arise in program analysis, abstract interpretation, and type inference. Each regular tree grammar can be systematically transformed into a system of set inclusions such that its minimal solution corresponds to the tree language of the grammar. For example, a grammar (with terminal and nonterminal symbols indicated by lower- and upper-case initials, respectively) is transformed rule by rule into a set inclusion system (with constants and variables indicated by lower- and upper-case initials, respectively); the original article gives a concrete grammar, its translation, and its minimal solution, with "L(N)" denoting the tree language corresponding to the nonterminal N in the tree grammar. The maximal solution of the system is trivial; it assigns the set of all terms to every variable.
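The least-solution idea can be made concrete by iterating the inclusions as a fixpoint computation. The single inclusion in the sketch below, corresponding to a grammar X → a | f(X), is an assumed example, not the one from the article.

    # Least solution of the inclusion  X ⊇ {a} ∪ f(X),
    # approximated by iterating from the empty set.
    def step(X):
        return {"a"} | {f"f({t})" for t in X}

    X = set()
    for _ in range(4):                 # each round adds terms one level deeper
        X = step(X)
    print(sorted(X, key=len))          # ['a', 'f(a)', 'f(f(a))', 'f(f(f(a)))']

Each iteration is a lower approximation of the minimal solution, which here is the infinite tree language {a, f(a), f(f(a)), ...}; the maximal solution, by contrast, would assign the set of all terms to X.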
https://en.wikipedia.org/wiki/Set_constraint
In thetheory of computation, a branch oftheoretical computer science, adeterministic finite automaton(DFA)—also known asdeterministic finite acceptor(DFA),deterministic finite-state machine(DFSM), ordeterministic finite-state automaton(DFSA)—is afinite-state machinethat accepts or rejects a givenstringof symbols, by running through a state sequence uniquely determined by the string.[1]Deterministicrefers to the uniqueness of the computation run. In search of the simplest models to capture finite-state machines,Warren McCullochandWalter Pittswere among the first researchers to introduce a concept similar to finite automata in 1943.[2][3] The figure illustrates a deterministic finite automaton using astate diagram. In this example automaton, there are three states: S0, S1, and S2(denoted graphically by circles). The automaton takes a finitesequenceof 0s and 1s as input. For each state, there is a transition arrow leading out to a next state for both 0 and 1. Upon reading a symbol, a DFA jumpsdeterministicallyfrom one state to another by following the transition arrow. For example, if the automaton is currently in state S0and the current input symbol is 1, then it deterministically jumps to state S1. A DFA has astart state(denoted graphically by an arrow coming in from nowhere) where computations begin, and asetofaccept states(denoted graphically by a double circle) which help define when a computation is successful. A DFA is defined as an abstract mathematical concept, but is often implemented in hardware and software for solving various specific problems such aslexical analysisandpattern matching. For example, a DFA can model software that decides whether or not online user input such as email addresses are syntactically valid.[4] DFAs have been generalized tonondeterministic finite automata(NFA)which may have several arrows of the same label starting from a state. Using thepowerset constructionmethod, every NFA can be translated to a DFA that recognizes the same language. DFAs, and NFAs as well, recognize exactly the set ofregular languages.[1] A deterministic finite automatonMis a 5-tuple,(Q, Σ,δ,q0,F), consisting of Letw=a1a2...anbe a string over the alphabetΣ. The automatonMaccepts the stringwif a sequence of states,r0,r1, ...,rn, exists inQwith the following conditions: In words, the first condition says that the machine starts in the start stateq0. The second condition says that given each character of stringw, the machine will transition from state to state according to the transition functionδ. The last condition says that the machine acceptswif the last input ofwcauses the machine to halt in one of the accepting states. Otherwise, it is said that the automatonrejectsthe string. The set of strings thatMaccepts is thelanguagerecognizedbyMand this language is denoted byL(M). A deterministic finite automaton without accept states and without a starting state is known as atransition systemorsemiautomaton. For more comprehensive introduction of the formal definition seeautomata theory. The following example is of a DFAM, with a binary alphabet, which requires that the input contains an even number of 0s. M= (Q, Σ,δ,q0,F)where The stateS1represents that there has been an even number of 0s in the input so far, whileS2signifies an odd number. A 1 in the input does not change the state of the automaton. When the input ends, the state will show whether the input contained an even number of 0s or not. 
If the input did contain an even number of 0s,Mwill finish in stateS1, an accepting state, so the input string will be accepted. The language recognized byMis theregular languagegiven by theregular expression(1*) (0 (1*) 0 (1*))*, where*is theKleene star, e.g.,1*denotes any number (possibly zero) of consecutive ones. According to the above definition, deterministic finite automata are alwayscomplete: they define from each state a transition for each input symbol. While this is the most common definition, some authors use the term deterministic finite automaton for a slightly different notion: an automaton that definesat mostone transition for each state and each input symbol; the transition function is allowed to bepartial.[5]When no transition is defined, such an automaton halts. Alocal automatonis a DFA, not necessarily complete, for which all edges with the same label lead to a single vertex. Local automata accept the class oflocal languages, those for which membership of a word in the language is determined by a "sliding window" of length two on the word.[6][7] AMyhill graphover an alphabetAis adirected graphwithvertex setAand subsets of vertices labelled "start" and "finish". The language accepted by a Myhill graph is the set of directed paths from a start vertex to a finish vertex: the graph thus acts as an automaton.[6]The class of languages accepted by Myhill graphs is the class of local languages.[8] When the start state and accept states are ignored, a DFA ofnstates and an alphabet of sizekcan be seen as adigraphofnvertices in which all vertices havekout-arcs labeled1, ...,k(ak-out digraph). It is known that whenk≥ 2is a fixed integer, with high probability, the largeststrongly connected component(SCC) in such ak-out digraph chosen uniformly at random is of linear size and it can be reached by all vertices.[9]It has also been proven that ifkis allowed to increase asnincreases, then the whole digraph has a phase transition for strong connectivity similar toErdős–Rényi modelfor connectivity.[10] In a random DFA, the maximum number of vertices reachable from one vertex is very close to the number of vertices in the largestSCCwith high probability.[9][11]This is also true for the largestinduced sub-digraphof minimum in-degree one, which can be seen as a directed version of1-core.[10] If DFAs recognize the languages that are obtained by applying an operation on the DFA recognizable languages then DFAs are said to beclosed underthe operation. The DFAs are closed under the following operations. For each operation, an optimal construction with respect to the number of states has been determined instate complexityresearch. Since DFAs areequivalenttonondeterministic finite automata(NFA), these closures may also be proved using closure properties of NFA. A run of a given DFA can be seen as a sequence of compositions of a very general formulation of the transition function with itself. Here we construct that function. For a given input symbola∈Σ{\displaystyle a\in \Sigma }, one may construct a transition functionδa:Q→Q{\displaystyle \delta _{a}:Q\rightarrow Q}by definingδa(q)=δ(q,a){\displaystyle \delta _{a}(q)=\delta (q,a)}for allq∈Q{\displaystyle q\in Q}. (This trick is calledcurrying.) From this perspective,δa{\displaystyle \delta _{a}}"acts" on a state in Q to yield another state. One may then consider the result offunction compositionrepeatedly applied to the various functionsδa{\displaystyle \delta _{a}},δb{\displaystyle \delta _{b}}, and so on. 
Given a pair of lettersa,b∈Σ{\displaystyle a,b\in \Sigma }, one may define a new functionδ^ab=δa∘δb{\displaystyle {\widehat {\delta }}_{ab}=\delta _{a}\circ \delta _{b}}, where∘{\displaystyle \circ }denotes function composition. Clearly, this process may be recursively continued, giving the following recursive definition ofδ^:Q×Σ⋆→Q{\displaystyle {\widehat {\delta }}:Q\times \Sigma ^{\star }\rightarrow Q}: δ^{\displaystyle {\widehat {\delta }}}is defined for all wordsw∈Σ∗{\displaystyle w\in \Sigma ^{*}}. A run of the DFA is a sequence of compositions ofδ^{\displaystyle {\widehat {\delta }}}with itself. Repeated function composition forms amonoid. For the transition functions, this monoid is known as thetransition monoid, or sometimes thetransformation semigroup. The construction can also be reversed: given aδ^{\displaystyle {\widehat {\delta }}}, one can reconstruct aδ{\displaystyle \delta }, and so the two descriptions are equivalent. DFAs are one of the most practical models of computation, since there is a trivial linear time, constant-space,online algorithmto simulate a DFA on a stream of input. Also, there are efficient algorithms to find a DFA recognizing: Because DFAs can be reduced to acanonical form(minimal DFAs), there are also efficient algorithms to determine: DFAs are equivalent in computing power tonondeterministic finite automata(NFAs). This is because, firstly any DFA is also an NFA, so an NFA can do what a DFA can do. Also, given an NFA, using thepowerset constructionone can build a DFA that recognizes the same language as the NFA, although the DFA could have exponentially larger number of states than the NFA.[15][16]However, even though NFAs are computationally equivalent to DFAs, the above-mentioned problems are not necessarily solved efficiently also for NFAs. The non-universality problem for NFAs isPSPACE completesince there are small NFAs with shortest rejecting word in exponential size. A DFA is universal if and only if all states are final states, but this does not hold for NFAs. The Equality, Inclusion and Minimization Problems are also PSPACE complete since they require forming the complement of an NFA which results in an exponential blow up of size.[17] On the other hand, finite-state automata are of strictly limited power in the languages they can recognize; many simple languages, including any problem that requires more than constant space to solve, cannot be recognized by a DFA. The classic example of a simply described language that no DFA can recognize is bracket orDyck language, i.e., the language that consists of properly paired brackets such as word "(()())". Intuitively, no DFA can recognize the Dyck language because DFAs are not capable of counting: a DFA-like automaton needs to have a state to represent any possible number of "currently open" parentheses, meaning it would need an unbounded number of states. Another simpler example is the language consisting of strings of the formanbnfor some finite but arbitrary number ofa's, followed by an equal number ofb's.[18] Given a set ofpositivewordsS+⊂Σ∗{\displaystyle S^{+}\subset \Sigma ^{*}}and a set ofnegativewordsS−⊂Σ∗{\displaystyle S^{-}\subset \Sigma ^{*}}one can construct a DFA that accepts all words fromS+{\displaystyle S^{+}}and rejects all words fromS−{\displaystyle S^{-}}: this problem is calledDFA identification(synthesis, learning). 
WhilesomeDFA can be constructed in linear time, the problem of identifying a DFA with the minimal number of states is NP-complete.[19]The first algorithm for minimal DFA identification has been proposed by Trakhtenbrot and Barzdin[20]and is called theTB-algorithm. However, the TB-algorithm assumes that all words fromΣ{\displaystyle \Sigma }up to a given length are contained in eitherS+∪S−{\displaystyle S^{+}\cup S^{-}}. Later, K. Lang proposed an extension of the TB-algorithm that does not use any assumptions aboutS+{\displaystyle S^{+}}andS−{\displaystyle S^{-}}, theTraxbaralgorithm.[21]However, Traxbar does not guarantee the minimality of the constructed DFA. In his work[19]E.M. Gold also proposed a heuristic algorithm for minimal DFA identification. Gold's algorithm assumes thatS+{\displaystyle S^{+}}andS−{\displaystyle S^{-}}contain acharacteristic setof the regular language; otherwise, the constructed DFA will be inconsistent either withS+{\displaystyle S^{+}}orS−{\displaystyle S^{-}}. Other notable DFA identification algorithms include the RPNI algorithm,[22]the Blue-Fringe evidence-driven state-merging algorithm,[23]and Windowed-EDSM.[24]Another research direction is the application ofevolutionary algorithms: the smart state labeling evolutionary algorithm[25]allowed to solve a modified DFA identification problem in which the training data (setsS+{\displaystyle S^{+}}andS−{\displaystyle S^{-}}) isnoisyin the sense that some words are attributed to wrong classes. Yet another step forward is due to application ofSATsolvers byMarjin J. H. Heuleand S. Verwer: the minimal DFA identification problem is reduced to deciding the satisfiability of a Boolean formula.[26]The main idea is to build an augmented prefix-tree acceptor (atriecontaining all input words with corresponding labels) based on the input sets and reduce the problem of finding a DFA withC{\displaystyle C}states tocoloringthe tree vertices withC{\displaystyle C}states in such a way that when vertices with one color are merged to one state, the generated automaton is deterministic and complies withS+{\displaystyle S^{+}}andS−{\displaystyle S^{-}}. Though this approach allows finding the minimal DFA, it suffers from exponential blow-up of execution time when the size of input data increases. Therefore, Heule and Verwer's initial algorithm has later been augmented with making several steps of the EDSM algorithm prior to SAT solver execution: the DFASAT algorithm.[27]This allows reducing the search space of the problem, but leads to loss of the minimality guarantee. Another way of reducing the search space has been proposed by Ulyantsev et al.[28]by means of new symmetry breaking predicates based on thebreadth-first searchalgorithm: the sought DFA's states are constrained to be numbered according to the BFS algorithm launched from the initial state. This approach reduces the search space byC!{\displaystyle C!}by eliminating isomorphic automata. Read-only right-moving Turing machinesare a particular type ofTuring machinethat only moves right; these are almost exactly equivalent to DFAs.[29]The definition based on a singly infinite tape is a 7-tuple where The machine always accepts a regular language. There must exist at least one element of the setF(aHALTstate) for the language to be nonempty.
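The even-number-of-0s automaton M described earlier is small enough to simulate directly. The sketch below simply folds the transition function over the input, which is also the linear-time, constant-space online algorithm mentioned above; the dictionary encoding of δ is an illustrative choice.

    # The example DFA M: S1 = even number of 0s seen so far (start and accept state),
    # S2 = odd number of 0s; reading a 1 never changes the state.
    DELTA = {
        ("S1", "0"): "S2", ("S1", "1"): "S1",
        ("S2", "0"): "S1", ("S2", "1"): "S2",
    }
    START, ACCEPT = "S1", {"S1"}

    def accepts(word):
        state = START
        for symbol in word:
            state = DELTA[(state, symbol)]   # exactly one successor: deterministic
        return state in ACCEPT

    for w in ["", "1001", "101", "0011"]:
        print(repr(w), accepts(w))
    # '' True, '1001' True, '101' False, '0011' True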
https://en.wikipedia.org/wiki/Deterministic_finite_automaton
Incomputer science, in particular inautomata theory, atwo-way finite automatonis afinite automatonthat is allowed to re-read its input. Atwo-way deterministic finite automaton(2DFA) is anabstract machine, a generalized version of thedeterministic finite automaton(DFA) which can revisit characters already processed. As in a DFA, there are a finite number of states with transitions between them based on the current character, but each transition is also labelled with a value indicating whether the machine will move its position in the input to the left, right, or stay at the same position. Equivalently, 2DFAs can be seen asread-only Turing machineswith no work tape, only a read-only input tape. 2DFAs were introduced in a seminal 1959 paper byRabinandScott,[1]who proved them to have equivalent power to one-wayDFAs. That is, anyformal languagewhich can be recognized by a 2DFA can be recognized by a DFA which only examines and consumes each character in order. Since DFAs are obviously a special case of 2DFAs, this implies that both kinds of machines recognize precisely the class ofregular languages. However, the equivalent DFA for a 2DFA may require exponentially many states, making 2DFAs a much more practical representation for algorithms for some common problems. 2DFAs are also equivalent toread-only Turing machinesthat use only a constant amount of space on their work tape, since any constant amount of information can be incorporated into the finite control state via a product construction (a state for each combination of work tape state and control state). Formally, a two-way deterministic finite automaton can be described by the following 8-tuple:M=(Q,Σ,L,R,δ,s,t,r){\displaystyle M=(Q,\Sigma ,L,R,\delta ,s,t,r)}where In addition, the following two conditions must also be satisfied: It says that there must be some transition possible when the pointer reaches either end of the input word. It says that once the automaton reaches the accept or reject state, it stays in there forever and the pointer goes to the right most symbol and cycles there infinitely.[2] Atwo-way nondeterministic finite automaton(2NFA) may have multiple transitions defined in the same configuration. Its transition function is Like a standard one-wayNFA, a 2NFA accepts a string if at least one of the possible computations is accepting. Like the 2DFAs, the 2NFAs also accept only regular languages. Atwo-way alternating finite automaton(2AFA) is a two-way extension of analternating finite automaton(AFA). Its state set is States inQ∃{\displaystyle Q_{\exists }}andQ∀{\displaystyle Q_{\forall }}are calledexistentialresp.universal. In an existential state a 2AFA nondeterministically chooses the next state like an NFA, and accepts if at least one of the resulting computations accepts. In a universal state 2AFA moves to all next states, and accepts if all the resulting computations accept. Two-way and one-way finite automata, deterministic and nondeterministic and alternating, accept the same class of regular languages. However, transforming an automaton of one type to an equivalent automaton of another type incurs a blow-up in the number of states.Christos Kapoutsis[3]determined that transforming ann{\displaystyle n}-state 2DFA to an equivalent DFA requiresn(nn−(n−1)n){\displaystyle n(n^{n}-(n-1)^{n})}states in the worst case. 
If ann{\displaystyle n}-state 2DFA or a 2NFA is transformed to an NFA, the worst-case number of states required is(2nn+1)=O(4nn){\displaystyle {\binom {2n}{n+1}}=O\left({\frac {4^{n}}{\sqrt {n}}}\right)}.Ladner,LiptonandStockmeyer.[4]proved that ann{\displaystyle n}-state 2AFA can be converted to a DFA with2n2n{\displaystyle 2^{n2^{n}}}states. The 2AFA to NFA conversion requires2Θ(nlog⁡n){\displaystyle 2^{\Theta (n\log n)}}states in the worst case, seeGeffertand Okhotin.[5] It is an open problem whether every 2NFA can be converted to a 2DFA with only a polynomial increase in the number of states. The problem was raised by Sakoda andSipser,[6]who compared it to theP vs. NPproblem in thecomputational complexity theory. Berman and Lingas[7]discovered a formal relation between this problem and theLvs.NLopen problem, seeKapoutsis[8]for a precise relation. Sweeping automata are 2DFAs of a special kind that process the input string by making alternating left-to-right and right-to-left sweeps, turning only at the endmarkers.Sipser[9]constructed a sequence of languages, each accepted by an n-state NFA, yet which is not accepted by any sweeping automata with fewer than2n{\displaystyle 2^{n}}states. The concept of 2DFAs was in 1997 generalized toquantum computingbyJohn Watrous's "On the Power of 2-Way Quantum Finite State Automata", in which he demonstrates that these machines can recognize nonregular languages and so are more powerful than DFAs.[10] Apushdown automatonthat is allowed to move either way on its input tape is calledtwo-way pushdown automaton(2PDA);[11]it has been studied by Hartmanis, Lewis, and Stearns (1965).[12]Aho, Hopcroft, Ullman (1968)[13]and Cook (1971)[14]characterized the class of languages recognizable by deterministic (2DPDA) and non-deterministic (2NPDA) two-way pushdown automata; Gray, Harrison, and Ibarra (1967) investigated the closure properties of these languages.[15]
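A two-way automaton can be simulated just as directly once conventions are fixed for the endmarkers and head moves; the conventions and the example machine below are illustrative assumptions, and for simplicity the simulation halts as soon as the accept or reject state is entered rather than cycling forever as in the formal definition. The machine shown accepts strings over {a, b} whose first and last symbols agree: it records the first symbol, walks to the right endmarker, and steps back one cell to compare.

    # Transitions: (state, symbol) -> (next state, head move), move in {-1, 0, +1}.
    L_END, R_END = "<", ">"
    DELTA = {
        ("start", L_END): ("scan", +1),
        ("scan", "a"): ("seen_a", +1), ("scan", "b"): ("seen_b", +1),
        ("scan", R_END): ("reject", 0),                    # empty input
        ("seen_a", "a"): ("seen_a", +1), ("seen_a", "b"): ("seen_a", +1),
        ("seen_a", R_END): ("check_a", -1),
        ("seen_b", "a"): ("seen_b", +1), ("seen_b", "b"): ("seen_b", +1),
        ("seen_b", R_END): ("check_b", -1),
        ("check_a", "a"): ("accept", 0), ("check_a", "b"): ("reject", 0),
        ("check_b", "b"): ("accept", 0), ("check_b", "a"): ("reject", 0),
    }

    def accepts(word):
        tape = L_END + word + R_END
        state, head = "start", 0
        while state not in ("accept", "reject"):
            state, move = DELTA[(state, tape[head])]
            head += move
        return state == "accept"

    for w in ["a", "abba", "abab", ""]:
        print(repr(w), accepts(w))      # True, True, False, False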
https://en.wikipedia.org/wiki/Two-way_nondeterministic_finite_automaton
Intheoretical computer science, anondeterministic Turing machine(NTM) is a theoretical model of computation whose governing rules specify more than one possible action when in some given situations. That is, an NTM's next state isnotcompletely determined by its action and the current symbol it sees, unlike adeterministic Turing machine. NTMs are sometimes used inthought experimentsto examine the abilities and limits of computers. One of the most important open problems in theoreticalcomputer scienceis theP versus NP problem, which (among other equivalent formulations) concerns the question of how difficult it is to simulate nondeterministic computation with a deterministic computer. In essence, a Turing machine is imagined to be a simple computer that reads and writes symbols one at a time on an endless tape by strictly following a set of rules. It determines what action it should perform next according to its internalstateandwhat symbol it currently sees. An example of one of a Turing Machine's rules might thus be: "If you are in state 2 and you see an 'A', then change it to 'B', move left, and switch to state 3." In adeterministic Turing machine(DTM), the set of rules prescribes at most one action to be performed for any given situation. A deterministic Turing machine has atransition functionthat, for a given state and symbol under the tape head, specifies three things: For example, an X on the tape in state 3 might make the DTM write a Y on the tape, move the head one position to the right, and switch to state 5. In contrast to a deterministic Turing machine, in anondeterministic Turing machine(NTM) the set of rules may prescribe more than one action to be performed for any given situation. For example, an X on the tape in state 3 might allow the NTM to: or Because there can be multiple actions that can follow from a given situation, there can be multiple possible sequences of steps that the NTM can take starting from a given input. If at least one of these possible sequences leads to an "accept" state, the NTM is said to accept the input. While a DTM has a single "computation path" that it follows, an NTM has a "computationtree". A nondeterministic Turing machine can be formally defined as a six-tupleM=(Q,Σ,ι,⊔,A,δ){\displaystyle M=(Q,\Sigma ,\iota ,\sqcup ,A,\delta )}, where The difference with a standard (deterministic)Turing machineis that, for deterministic Turing machines, the transition relation is a function rather than just a relation. Configurations and theyieldsrelation on configurations, which describes the possible actions of the Turing machine given any possible contents of the tape, are as for standard Turing machines, except that theyieldsrelation is no longer single-valued. (If the machine is deterministic, the possible computations are all prefixes of a single, possibly infinite, path.) The input for an NTM is provided in the same manner as for a deterministic Turing machine: the machine is started in the configuration in which the tape head is on the first character of the string (if any), and the tape is all blank otherwise. An NTM accepts an input string if and only ifat least oneof the possible computational paths starting from that string puts the machine into an accepting state. When simulating the many branching paths of an NTM on a deterministic machine, we can stop the entire simulation as soon asanybranch reaches an accepting state. 
As a mathematical construction used primarily in proofs, there are a variety of minor variations on the definition of an NTM, but these variations all accept equivalent languages. The head movement in the output of the transition relation is often encoded numerically instead of using letters to represent moving the head Left (-1), Stationary (0), and Right (+1); giving a transition function output of(Q×Σ×{−1,0,+1}){\displaystyle \left(Q\times \Sigma \times \{-1,0,+1\}\right)}. It is common to omit the stationary (0) output,[1]and instead insert the transitive closure of any desired stationary transitions. Some authors add an explicitrejectstate,[2]which causes the NTM to halt without accepting. This definition still retains the asymmetry thatanynondeterministic branch can accept, buteverybranch must reject for the string to be rejected. Any computational problem that can be solved by a DTM can also be solved by a NTM, and vice versa. However, it is believed that in general thetime complexitymay not be the same. NTMs include DTMs as special cases, so every computation that can be carried out by a DTM can also be carried out by the equivalent NTM. It might seem that NTMs are more powerful than DTMs, since they can allow trees of possible computations arising from the same initial configuration, accepting a string if any one branch in the tree accepts it. However, it is possible to simulate NTMs with DTMs, and in fact this can be done in more than one way. One approach is to use a DTM of which the configurations represent multiple configurations of the NTM, and the DTM's operation consists of visiting each of them in turn, executing a single step at each visit, and spawning new configurations whenever the transition relation defines multiple continuations. Another construction simulates NTMs with 3-tape DTMs, of which the first tape always holds the original input string, the second is used to simulate a particular computation of the NTM, and the third encodes a path in the NTM's computation tree.[3]The 3-tape DTMs are easily simulated with a normal single-tape DTM. In the second construction, the constructed DTM effectively performs abreadth-first searchof the NTM's computation tree, visiting all possible computations of the NTM in order of increasing length until it finds an accepting one. Therefore, the length of an accepting computation of the DTM is, in general, exponential in the length of the shortest accepting computation of the NTM. This is believed to be a general property of simulations of NTMs by DTMs. TheP = NP problem, the most famous unresolved question in computer science, concerns one case of this issue: whether or not every problem solvable by a NTM in polynomial time is necessarily also solvable by a DTM in polynomial time. An NTM has the property of bounded nondeterminism. That is, if an NTM always halts on a given input tapeTthen it halts in a bounded number of steps, and therefore can only have a bounded number of possible configurations. 
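The breadth-first simulation just described can be written down in a few lines once a concrete machine is fixed. The machine below is an illustrative assumption: it accepts strings over {a, b} containing the substring "ab", and it is genuinely nondeterministic because in state q0 it may either keep scanning or guess that the current 'a' begins the occurrence.

    from collections import deque

    BLANK = "_"
    # (state, symbol) -> set of (next state, symbol to write, head move)
    DELTA = {
        ("q0", "a"): {("q0", "a", +1), ("q1", "a", +1)},   # scan on, or guess here
        ("q0", "b"): {("q0", "b", +1)},
        ("q1", "b"): {("accept", "b", 0)},                 # the guess paid off
    }
    ACCEPT = {"accept"}

    def ntm_accepts(word, max_steps=1000):
        """Deterministic breadth-first search over the NTM's computation tree."""
        start = ("q0", tuple(word) or (BLANK,), 0)
        queue, seen = deque([(start, 0)]), {start}
        while queue:
            (state, tape, head), steps = queue.popleft()
            if state in ACCEPT:
                return True                                # some branch accepts
            if steps == max_steps:
                continue
            symbol = tape[head] if head < len(tape) else BLANK
            for nstate, write, move in DELTA.get((state, symbol), ()):
                ntape = list(tape) + [BLANK] * (head + 1 - len(tape))
                ntape[head] = write
                nconf = (nstate, tuple(ntape), max(head + move, 0))
                if nconf not in seen:
                    seen.add(nconf)
                    queue.append((nconf, steps + 1))
        return False                                       # every branch rejects or dies

    for w in ["bba", "bab", "aab", "bbb"]:
        print(w, ntm_accepts(w))    # False, True, True, False

Because the search is breadth-first, an accepting branch of length k is found only after exploring essentially all configurations reachable within k steps, mirroring the exponential overhead of the deterministic simulation discussed above.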
Becausequantum computersusequantum bits, which can be insuperpositionsof states, rather than conventional bits, there is sometimes a misconception thatquantum computersare NTMs.[4]However, it is believed by experts (but has not been proven) that the power of quantum computers is, in fact, incomparable to that of NTMs; that is, problems likely exist that an NTM could efficiently solve that a quantum computer cannot and vice versa.[5][better source needed]In particular, it is likely thatNP-completeproblems are solvable by NTMs but not by quantum computers in polynomial time. Intuitively speaking, while a quantum computer can indeed be in a superposition state corresponding to all possible computational branches having been executed at the same time (similar to an NTM), the final measurement will collapse the quantum computer into a randomly selected branch. This branch then does not, in general, represent the sought-for solution, unlike the NTM, which is allowed to pick the right solution among the exponentially many branches.
https://en.wikipedia.org/wiki/Nondeterministic_Turing_machine
Automationdescribes a wide range of technologies that reduce human intervention in processes, mainly by predetermining decision criteria, subprocess relationships, and related actions, as well as embodying those predeterminations in machines.[1][2]Automation has been achieved by various means includingmechanical,hydraulic,pneumatic,electrical,electronic devices, andcomputers, usually in combination. Complicated systems, such as modernfactories,airplanes, and ships typically use combinations of all of these techniques. The benefit of automation includes labor savings, reducing waste, savings inelectricitycosts, savings in material costs, and improvements to quality, accuracy, and precision. Automation includes the use of various equipment andcontrol systemssuch asmachinery, processes infactories,boilers,[3]and heat-treatingovens, switching ontelephone networks,steering,stabilization of ships,aircraftand other applications andvehicleswith reduced human intervention.[4]Examples range from a householdthermostatcontrolling a boiler to a large industrial control system with tens of thousands of input measurements and output control signals. Automation has also found a home in the banking industry. It can range from simple on-off control to multi-variable high-level algorithms in terms of control complexity. In the simplest type of an automaticcontrol loop, a controller compares a measured value of a process with a desired set value and processes the resulting error signal to change some input to the process, in such a way that the process stays at its set point despite disturbances. This closed-loop control is an application ofnegative feedbackto a system. The mathematical basis ofcontrol theorywas begun in the 18th century and advanced rapidly in the 20th. The termautomation, inspired by the earlier wordautomatic(coming fromautomaton), was not widely used before 1947, whenFordestablished an automation department.[5]It was during this time that the industry was rapidly adoptingfeedback controllers, Technological advancements introduced in the 1930s revolutionized various industries significantly.[6] TheWorld Bank'sWorld Development Reportof 2019 shows evidence that the new industries and jobs in the technology sector outweigh the economic effects of workers being displaced by automation.[7]Job lossesanddownward mobilityblamed on automation have been cited as one of many factors in the resurgence ofnationalist,protectionistandpopulistpolitics in the US, UK and France, among other countries since the 2010s.[8][9][10][11][12] It was a preoccupation of the Greeks and Arabs (in the period between about 300 BC and about 1200 AD) to keep accurate track of time. InPtolemaic Egypt, about 270 BC,Ctesibiusdescribed a float regulator for awater clock, a device not unlike the ball and cock in a modern flush toilet. This was the earliest feedback-controlled mechanism.[13]The appearance of the mechanical clock in the 14th century made the water clock and its feedback control system obsolete. ThePersianBanū Mūsābrothers, in theirBook of Ingenious Devices(850 AD), described a number of automatic controls.[14]Two-step level controls for fluids, a form of discontinuousvariable structure controls, were developed by the Banu Musa brothers.[15]They also described afeedback controller.[16][17]The design of feedback control systems up through the Industrial Revolution was by trial-and-error, together with a great deal of engineering intuition. 
It was not until the mid-19th century that the stability of feedback control systems was analyzed using mathematics, the formal language of automatic control theory.[citation needed] Thecentrifugal governorwas invented byChristiaan Huygensin the seventeenth century, and used to adjust the gap betweenmillstones.[18][19][20] The introduction ofprime movers, or self-driven machines advanced grain mills, furnaces, boilers, and thesteam enginecreated a new requirement for automatic control systems includingtemperature regulators(invented in 1624; seeCornelius Drebbel),pressure regulators(1681),float regulators(1700) andspeed controldevices. Another control mechanism was used to tent the sails of windmills. It was patented by Edmund Lee in 1745.[21]Also in 1745,Jacques de Vaucansoninvented the first automated loom. Around 1800,Joseph Marie Jacquardcreateda punch-card systemto program looms.[22] In 1771Richard Arkwrightinvented the first fully automated spinning mill driven by water power, known at the time as thewater frame.[23]An automatic flour mill was developed byOliver Evansin 1785, making it the first completely automated industrial process.[24][25] A centrifugal governor was used by Mr. Bunce of England in 1784 as part of a modelsteam crane.[26][27]The centrifugal governor was adopted byJames Wattfor use on a steam engine in 1788 after Watt's partner Boulton saw one at a flour millBoulton & Wattwere building.[21]The governor could not actually hold a set speed; the engine would assume a new constant speed in response to load changes. The governor was able to handle smaller variations such as those caused by fluctuating heat load to the boiler. Also, there was a tendency for oscillation whenever there was a speed change. As a consequence, engines equipped with this governor were not suitable for operations requiring constant speed, such as cotton spinning.[21] Several improvements to the governor, plus improvements to valve cut-off timing on the steam engine, made the engine suitable for most industrial uses before the end of the 19th century. Advances in the steam engine stayed well ahead of science, both thermodynamics and control theory.[21]The governor received relatively little scientific attention untilJames Clerk Maxwellpublished a paper that established the beginning of a theoretical basis for understanding control theory. Relay logic was introduced with factoryelectrification, which underwent rapid adaption from 1900 through the 1920s. Central electric power stations were also undergoing rapid growth and the operation of new high-pressure boilers, steam turbines and electrical substations created a large demand for instruments and controls. Central control rooms became common in the 1920s, but as late as the early 1930s, most process controls were on-off. Operators typically monitored charts drawn by recorders that plotted data from instruments. To make corrections, operators manually opened or closed valves or turned switches on or off. Control rooms also used color-coded lights to send signals to workers in the plant to manually make certain changes.[28] The development of the electronic amplifier during the 1920s, which was important for long-distance telephony, required a higher signal-to-noise ratio, which was solved by negative feedback noise cancellation. This and other telephony applications contributed to the control theory. 
In the 1940s and 1950s, German mathematician Irmgard Flügge-Lotz developed the theory of discontinuous automatic controls, which found military applications during the Second World War in fire control systems and aircraft navigation systems.[6] Controllers, which were able to make calculated changes in response to deviations from a set point rather than simple on-off control, began being introduced in the 1930s. Controllers allowed manufacturing to continue showing productivity gains to offset the declining influence of factory electrification.[29] Factory productivity was greatly increased by electrification in the 1920s. U.S. manufacturing productivity growth fell from 5.2%/yr in 1919–29 to 2.76%/yr in 1929–41. Alexander Field notes that spending on non-medical instruments increased significantly from 1929 to 1933 and remained strong thereafter.[29] The First and Second World Wars saw major advancements in the field of mass communication and signal processing. Other key advances in automatic controls include differential equations, stability theory and system theory (1938), frequency domain analysis (1940), ship control (1950), and stochastic analysis (1941). Starting in 1958, various systems based on solid-state[30][31] digital logic modules for hard-wired programmed logic controllers (the predecessors of programmable logic controllers [PLC]) emerged to replace electro-mechanical relay logic in industrial control systems for process control and automation, including early Telefunken/AEG Logistat, Siemens Simatic, Philips/Mullard/Valvo Norbit, BBC Sigmatronic, ACEC Logacec, Akkord Estacord, Krone Mibakron, Bistat, Datapac, Norlog, SSR, or Procontic systems.[30][32][33][34][35][36] In 1959 Texaco's Port Arthur Refinery became the first chemical plant to use digital control.[37] Conversion of factories to digital control began to spread rapidly in the 1970s as the price of computer hardware fell. The automatic telephone switchboard was introduced in 1892 along with dial telephones. By 1929, 31.9% of the Bell system was automatic.[38]: 158 Automatic telephone switching originally used vacuum tube amplifiers and electro-mechanical switches, which consumed a large amount of electricity. Call volume eventually grew so fast that it was feared the telephone system would consume all electricity production, prompting Bell Labs to begin research on the transistor.[39] The logic performed by telephone switching relays was the inspiration for the digital computer. The first commercially successful glass bottle-blowing machine was an automatic model introduced in 1905.[40] The machine, operated by a two-man crew working 12-hour shifts, could produce 17,280 bottles in 24 hours, compared to 2,880 bottles made by a crew of six men and boys working in a shop for a day. The cost of making bottles by machine was 10 to 12 cents per gross compared to $1.80 per gross by the manual glassblowers and helpers. Sectional electric drives were developed using control theory. Sectional electric drives are used on different sections of a machine where a precise differential must be maintained between the sections. In steel rolling, the metal elongates as it passes through pairs of rollers, which must run at successively faster speeds. In paper making, the sheet shrinks as it passes around steam-heated drying cylinders arranged in groups, which must run at successively slower speeds.
The first application of a sectional electric drive was on a paper machine in 1919.[41]One of the most important developments in the steel industry during the 20th century was continuous wide strip rolling, developed by Armco in 1928.[42] Before automation, many chemicals were made in batches. In 1930, with the widespread use of instruments and the emerging use of controllers, the founder of Dow Chemical Co. was advocatingcontinuous production.[43] Self-acting machine tools that displaced hand dexterity so they could be operated by boys and unskilled laborers were developed byJames Nasmythin the 1840s.[44]Machine toolswere automated withNumerical control(NC) using punched paper tape in the 1950s. This soon evolved into computerized numerical control (CNC). Today extensive automation is practiced in practically every type of manufacturing and assembly process. Some of the larger processes include electrical power generation, oil refining, chemicals, steel mills, plastics, cement plants, fertilizer plants, pulp and paper mills, automobile and truck assembly, aircraft production, glass manufacturing, natural gas separation plants, food and beverage processing, canning and bottling and manufacture of various kinds of parts. Robots are especially useful in hazardous applications like automobile spray painting. Robots are also used to assemble electronic circuit boards. Automotive welding is done with robots and automatic welders are used in applications like pipelines. With the advent of the space age in 1957, controls design, particularly in the United States, turned away from the frequency-domain techniques of classical control theory and backed into the differential equation techniques of the late 19th century, which were couched in the time domain. During the 1940s and 1950s, German mathematicianIrmgard Flugge-Lotzdeveloped the theory of discontinuous automatic control, which became widely used inhysteresis control systemssuch asnavigation systems,fire-control systems, andelectronics. Through Flugge-Lotz and others, the modern era saw time-domain design fornonlinear systems(1961),navigation(1960),optimal controlandestimation theory(1962),nonlinear control theory(1969),digital controlandfiltering theory(1974), and thepersonal computer(1983). Perhaps the most cited advantage of automation in industry is that it is associated with faster production and cheaper labor costs. Another benefit could be that it replaces hard, physical, or monotonous work.[45]Additionally, tasks that take place inhazardous environmentsor that are otherwise beyond human capabilities can be done by machines, as machines can operate even under extreme temperatures or in atmospheres that are radioactive or toxic. They can also be maintained with simple quality checks. However, at the time being, not all tasks can be automated, and some tasks are more expensive to automate than others. Initial costs of installing the machinery in factory settings are high, and failure to maintain a system could result in the loss of the product itself. 
Moreover, some studies seem to indicate that industrial automation could impose ill effects beyond operational concerns, including worker displacement due to systemic loss of employment and compounded environmental damage; however, these findings are both convoluted and controversial in nature, and could potentially be circumvented.[46] The mainadvantagesof automation are: Automation primarily describes machines replacing human action, but it is also loosely associated with mechanization, machines replacing human labor. Coupled with mechanization, extending human capabilities in terms of size, strength, speed, endurance, visual range & acuity, hearing frequency & precision, electromagnetic sensing & effecting, etc., advantages include:[48] The maindisadvantagesof automation are: Theparadoxof automation says that the more efficient the automated system, the more crucial the human contribution of the operators. Humans are less involved, but their involvement becomes more critical.Lisanne Bainbridge, a cognitive psychologist, identified these issues notably in her widely cited paper "Ironies of Automation."[49]If an automated system has an error, it will multiply that error until it is fixed or shut down. This is where human operators come in.[50]A fatal example of this wasAir France Flight 447, where a failure of automation put the pilots into a manual situation they were not prepared for.[51] Many roles for humans in industrial processes presently lie beyond the scope of automation. Human-levelpattern recognition,language comprehension, and language production ability are well beyond the capabilities of modern mechanical and computer systems (but seeWatson computer). Tasks requiring subjective assessment or synthesis of complex sensory data, such as scents and sounds, as well as high-level tasks such as strategic planning, currently require human expertise. In many cases, the use of humans is morecost-effectivethan mechanical approaches even where the automation of industrial tasks is possible. Therefore,algorithmic managementas the digital rationalization of human labor instead of its substitution has emerged as an alternative technological strategy.[53]Overcoming these obstacles is a theorized path topost-scarcityeconomics.[54] Increased automation often causes workers to feel anxious about losing their jobs as technology renders their skills or experience unnecessary. Early in theIndustrial Revolution, when inventions like thesteam enginewere making some job categories expendable, workers forcefully resisted these changes.Luddites, for instance, were Englishtextile workerswho protested the introduction ofweaving machinesby destroying them.[55]More recently, some residents ofChandler, Arizona, have slashed tires and pelted rocks atself-driving car, in protest over the cars' perceived threat to human safety and job prospects.[56] The relative anxiety about automation reflected in opinion polls seems to correlate closely with the strength oforganized laborin that region or nation. 
For example, while a study by the Pew Research Center indicated that 72% of Americans are worried about increasing automation in the workplace, 80% of Swedes see automation and artificial intelligence (AI) as a good thing, due to the country's still-powerful unions and a more robust national safety net.[57] According to one estimate, 47% of all current jobs in the US have the potential to be fully automated by 2033.[58] Furthermore, wages and educational attainment appear to be strongly negatively correlated with an occupation's risk of being automated.[58] Erik Brynjolfsson and Andrew McAfee argue that "there's never been a better time to be a worker with special skills or the right education, because these people can use technology to create and capture value. However, there's never been a worse time to be a worker with only 'ordinary' skills and abilities to offer, because computers, robots, and other digital technologies are acquiring these skills and abilities at an extraordinary rate."[59] Others, however, argue that highly skilled professionals such as lawyers, doctors, engineers, and journalists are also at risk of automation.[60] According to a 2020 study in the Journal of Political Economy, automation has robust negative effects on employment and wages: "One more robot per thousand workers reduces the employment-to-population ratio by 0.2 percentage points and wages by 0.42%."[61] A 2025 study in the American Economic Journal found that the introduction of industrial robots between 1993 and 2014 reduced the employment of men and women by 3.7 and 1.6 percentage points, respectively.[62] Research by Carl Benedikt Frey and Michael Osborne of the Oxford Martin School argued that employees engaged in "tasks following well-defined procedures that can easily be performed by sophisticated algorithms" are at risk of displacement, and that 47% of jobs in the US were at risk. The study, released as a working paper in 2013 and published in 2017, and based on a survey of colleagues' opinions, predicted that automation would put low-paid physical occupations most at risk.[63] However, according to a study published in McKinsey Quarterly[64] in 2015, the impact of computerization in most cases is not the replacement of employees but the automation of portions of the tasks they perform.[65] The methodology of the McKinsey study has been heavily criticized for being opaque and relying on subjective assessments.[66] The methodology of Frey and Osborne has in turn been criticized as lacking evidence, historical awareness, and credibility.[67][68] Additionally, the Organisation for Economic Co-operation and Development (OECD) found that across the 21 OECD countries studied, 9% of jobs are automatable.[69] Based on a formula by Gilles Saint-Paul, an economist at Toulouse 1 University, the demand for unskilled human capital declines at a slower rate than the demand for skilled human capital increases.[70] In the long run and for society as a whole, automation has led to cheaper products, lower average work hours, and new industries forming (e.g., robotics, computer, and design industries). These new industries provide many high-salary, skill-based jobs to the economy. It has been estimated that by 2030 between 3 and 14 percent of the global workforce will be forced to switch job categories due to automation eliminating jobs in an entire sector.
While the number of jobs lost to automation is often offset by jobs gained from technological advances, the jobs gained are not of the same type as those lost, and this mismatch has contributed to increasing unemployment in the lower-middle class. This occurs largely in the US and other developed countries, where technological advances contribute to higher demand for highly skilled labor while demand for middle-wage labor continues to fall. Economists call this trend "income polarization", in which wages for unskilled labor are driven down and wages for skilled labor are driven up, and it is predicted to continue in developed economies.[71] Lights-out manufacturing is a production system with no human workers, intended to eliminate labor costs. It grew in popularity in the U.S. after General Motors in 1982 implemented "hands-off" manufacturing in order to "replace risk-averse bureaucracy with automation and robots"; however, the factory never reached full "lights out" status.[72] The expansion of lights-out manufacturing requires:[73] The costs of automation to the environment differ depending on the technology, product, or engine automated. Some automated engines consume more energy resources from the Earth than the engines they replace, and vice versa.[citation needed] Hazardous operations, such as oil refining, the manufacturing of industrial chemicals, and all forms of metal working, were always early contenders for automation.[dubious–discuss][citation needed] The automation of vehicles could prove to have a substantial impact on the environment, although the nature of this impact could be beneficial or harmful depending on several factors. Because automated vehicles are much less likely to get into accidents compared to human-driven vehicles, some precautions built into current models (such as anti-lock brakes or laminated glass) would not be required for self-driving versions. Removal of these safety features reduces the weight of the vehicle, and coupled with more precise acceleration and braking, as well as fuel-efficient route mapping, can increase fuel economy and reduce emissions. Despite this, some researchers theorize that an increase in the production of self-driving cars could lead to a boom in vehicle ownership and usage, which could potentially negate any environmental benefits of self-driving cars if they are used more frequently.[74] Automation of homes and home appliances is also thought to impact the environment. A study of energy consumption of automated homes in Finland showed that smart homes could reduce energy consumption by monitoring levels of consumption in different areas of the home and adjusting consumption to reduce energy leaks (e.g. automatically reducing consumption during the nighttime when activity is low). This study, along with others, indicated that the smart home's ability to monitor and adjust consumption levels would reduce unnecessary energy usage. However, some research suggests that smart homes might not be as efficient as non-automated homes: a more recent study indicated that, while monitoring and adjusting consumption levels does decrease unnecessary energy use, the monitoring systems this requires also consume energy, and the energy needed to run them sometimes negates their benefits, resulting in little to no ecological benefit.[75] Another major shift in automation is the increased demand for flexibility and convertibility in manufacturing processes.
Manufacturers are increasingly demanding the ability to easily switch from manufacturing Product A to manufacturing Product B without having to completely rebuild theproduction lines. Flexibility and distributed processes have led to the introduction ofAutomated Guided Vehicleswith Natural Features Navigation. Digital electronics helped too. Former analog-basedinstrumentationwas replaced by digital equivalents which can be more accurate and flexible, and offer greater scope for more sophisticatedconfiguration,parametrization, and operation. This was accompanied by thefieldbusrevolution which provided a networked (i.e. a single cable) means of communicating between control systems and field-level instrumentation, eliminating hard-wiring. Discrete manufacturingplants adopted these technologies fast. The more conservative process industries with their longer plant life cycles have been slower to adopt and analog-based measurement and control still dominate. The growing use ofIndustrial Etherneton the factory floor is pushing these trends still further, enabling manufacturing plants to be integrated more tightly within the enterprise, via the internet if necessary. Global competition has also increased demand forReconfigurable Manufacturing Systems.[76] Engineers can now havenumerical controlover automated devices. The result has been a rapidly expanding range of applications and human activities.Computer-aided technologies(or CAx) now serve as the basis for mathematical and organizational tools used to create complex systems. Notable examples of CAx includecomputer-aided design(CAD software) andcomputer-aided manufacturing(CAM software). The improved design, analysis, and manufacture of products enabled by CAx has been beneficial for industry.[77] Information technology, together withindustrial machineryandprocesses, can assist in the design, implementation, and monitoring of control systems. One example of anindustrial control systemis aprogrammable logic controller(PLC). PLCs are specialized hardened computers which are frequently used to synchronize the flow of inputs from (physical)sensorsand events with the flow of outputs to actuators and events.[78] Human-machine interfaces(HMI) orcomputer human interfaces(CHI), formerly known asman-machine interfaces, are usually employed to communicate with PLCs and other computers. Service personnel who monitor and control through HMIs can be called by different names. In the industrial process and manufacturing environments, they are called operators or something similar. In boiler houses and central utility departments, they are calledstationary engineers.[79] Different types of automation tools exist: Hostsimulation software(HSS) is a commonly used testing tool that is used to test the equipment software. HSS is used to test equipment performance concerning factory automation standards (timeouts, response time, processing time).[80] Cognitive automation, as a subset of AI, is an emerging genus of automation enabled bycognitive computing. 
Its primary concern is the automation of clerical tasks and workflows that consist of structuring unstructured data.[citation needed] Cognitive automation relies on multiple disciplines: natural language processing, real-time computing, machine learning algorithms, big data analytics, and evidence-based learning.[81] According to Deloitte, cognitive automation enables the replication of human tasks and judgment "at rapid speeds and considerable scale."[82] Such tasks include: Artificially intelligent computer-aided design (CAD) can use text-to-3D, image-to-3D, and video-to-3D to automate 3D modeling.[83] AI CAD libraries could also be developed using linked open data of schematics and diagrams.[84] AI CAD assistants are used as tools to help streamline workflow.[85] Technologies like solar panels, wind turbines, and other renewable energy sources—together with smart grids, micro-grids, and battery storage—can automate power production. Many agricultural operations are automated with machinery and equipment to improve their diagnosis, decision-making and/or performance. Agricultural automation can relieve the drudgery of agricultural work, improve the timeliness and precision of agricultural operations, raise productivity and resource-use efficiency, build resilience, and improve food quality and safety.[86] Increased productivity can free up labour, allowing agricultural households to spend more time elsewhere.[87] The technological evolution in agriculture has resulted in progressive shifts to digital equipment and robotics.[86] Motorized mechanization using engine power automates the performance of agricultural operations such as ploughing and milking.[88] With digital automation technologies, it also becomes possible to automate the diagnosis and decision-making of agricultural operations.[86] For example, autonomous crop robots can harvest and seed crops, while drones can gather information to help automate input application.[87] Precision agriculture often employs such automation technologies.[87] Motorized mechanization has generally increased in recent years.[89] Sub-Saharan Africa is the only region where the adoption of motorized mechanization has stalled over the past decades.[90][87] Automation technologies are increasingly used for managing livestock, though evidence on adoption is lacking. Global automatic milking system sales have increased over recent years,[91] but adoption is likely concentrated in Northern Europe[92] and almost absent in low- and middle-income countries.[93][87] Automated feeding machines for both cows and poultry also exist, but data and evidence regarding their adoption trends and drivers are likewise scarce.[87][89] Many supermarkets and even smaller stores are rapidly introducing self-checkout systems, reducing the need to employ checkout workers. In the U.S., the retail industry employed 15.9 million people as of 2017 (around 1 in 9 Americans in the workforce). Globally, an estimated 192 million workers could be affected by automation, according to research by Eurasia Group.[94] Online shopping could be considered a form of automated retail, as payment and checkout are handled through an automated online transaction processing system, with online retail's share of sales jumping from 5.1% in 2011 to 8.3% in 2016.[citation needed] However, two-thirds of books, music, and films are now purchased online.
In addition, automation and online shopping could reduce demands for shopping malls, and retail property, which in the United States is currently estimated to account for 31% of all commercial property or around 7 billion square feet (650 million square metres).Amazonhas gained much of the growth in recent years for online shopping, accounting for half of the growth in online retail in 2016.[94]Other forms of automation can also be an integral part of online shopping, for example, the deployment of automated warehouse robotics such as that applied byAmazonusingKiva Systems. The food retail industry has started to apply automation to the ordering process;McDonald'shas introduced touch screen ordering and payment systems in many of its restaurants, reducing the need for as many cashier employees.[95]The University of Texas at Austinhas introduced fully automated cafe retail locations.[96]Some cafes and restaurants have utilized mobile and tablet "apps" to make the ordering process more efficient by customers ordering and paying on their device.[97]Some restaurants have automated food delivery to tables of customers using aconveyor belt system. The use ofrobotsis sometimes employed to replacewaiting staff.[98] Automation in construction is the combination of methods, processes, and systems that allow for greater machine autonomy in construction activities. Construction automation may have multiple goals, including but not limited to, reducingjobsiteinjuries, decreasing activity completion times, and assisting withquality controlandquality assurance.[99] Automated mining involves the removal of human labor from theminingprocess.[100]Themining industryis currently in the transition towards automation. Currently, it can still require a large amount ofhuman capital, particularly in thethird worldwhere labor costs are low so there is less incentive for increasing efficiency through automation. The Defense Advanced Research Projects Agency (DARPA) started the research and development of automated visualsurveillanceand monitoring (VSAM) program, between 1997 and 1999, and airborne video surveillance (AVS) programs, from 1998 to 2002. Currently, there is a major effort underway in the vision community to develop a fully-automatedtracking surveillancesystem. Automated video surveillance monitors people and vehicles in real-time within a busy environment. Existing automated surveillance systems are based on the environment they are primarily designed to observe, i.e., indoor, outdoor or airborne, the number of sensors that the automated system can handle and the mobility of sensors, i.e., stationary camera vs. mobile camera. The purpose of a surveillance system is to record properties and trajectories of objects in a given area, generate warnings or notify the designated authorities in case of occurrence of particular events.[101] As demands for safety and mobility have grown and technological possibilities have multiplied, interest in automation has grown. Seeking to accelerate the development and introduction of fully automated vehicles and highways, theU.S. Congressauthorized more than $650 million over six years forintelligent transport systems(ITS) and demonstration projects in the 1991Intermodal Surface Transportation Efficiency Act(ISTEA). Congress legislated in ISTEA that:[102] [T]heSecretary of Transportationshall develop an automated highway and vehicle prototype from which future fully automated intelligent vehicle-highway systems can be developed. 
Such development shall include research in human factors to ensure the success of the man-machine relationship. The goal of this program is to have the first fully automated highway roadway or an automated test track in operation by 1997. This system shall accommodate the installation of equipment in new and existing motor vehicles. Full automation is commonly defined as requiring no control, or very limited control, by the driver; such automation would be accomplished through a combination of sensor, computer, and communications systems in vehicles and along the roadway. Fully automated driving would, in theory, allow closer vehicle spacing and higher speeds, which could enhance traffic capacity in places where additional road building is physically impossible, politically unacceptable, or prohibitively expensive. Automated controls also might enhance road safety by reducing the opportunity for driver error, which causes a large share of motor vehicle crashes. Other potential benefits include improved air quality (as a result of more-efficient traffic flows), increased fuel economy, and spin-off technologies generated during research and development related to automated highway systems.[103] Automated waste collection trucks reduce the number of workers needed and ease the labor required to provide the service.[104] Business process automation (BPA) is the technology-enabled automation of complex business processes.[105] It can help to streamline a business for simplicity, achieve digital transformation, increase service quality, improve service delivery, or contain costs. BPA consists of integrating applications, restructuring labor resources, and using software applications throughout the organization. Robotic process automation (RPA; or RPAAI for self-guided RPA 2.0) is an emerging field within BPA that uses AI. BPA can be implemented in a number of business areas including marketing, sales, and workflow. Home automation (also called domotics) designates an emerging practice of increased automation of household appliances and features in residential dwellings, particularly through electronic means that allow for things that would have been impracticable, overly expensive, or simply not possible in past decades. The rising use of home automation solutions reflects people's increasing reliance on them, while the added comfort these solutions provide is considerable.[106] Automation is essential for many scientific and clinical applications,[107] and has therefore been extensively employed in laboratories. Fully automated laboratories have been in operation since as early as 1980.[108] However, automation has not become widespread in laboratories due to its high cost. This may change with the ability to integrate low-cost devices with standard laboratory equipment.[109][110] Autosamplers are common devices used in laboratory automation. Logistics automation is the application of computer software or automated machinery to improve the efficiency of logistics operations. Typically this refers to operations within a warehouse or distribution center, with broader tasks undertaken by supply chain engineering systems and enterprise resource planning systems. Industrial automation deals primarily with the automation of manufacturing, quality control, and material handling processes. General-purpose controllers for industrial processes include programmable logic controllers, stand-alone I/O modules, and computers.
Industrial automation replaces human action and manual command-response activities with mechanized equipment and logical programming commands. One trend is the increased use of machine vision[111] to provide automatic inspection and robot guidance functions; another is a continuing increase in the use of robots. Industrial automation has become a necessity in many industries. The rise of industrial automation is directly tied to the "Fourth Industrial Revolution", now better known as Industry 4.0. Originating in Germany, Industry 4.0 encompasses numerous devices, concepts, and machines,[112] as well as the advancement of the industrial internet of things (IIoT). The Internet of Things has been described as "a seamless integration of diverse physical objects in the Internet through a virtual representation."[113] These advancements have drawn attention to automation in an entirely new light and shown ways for it to grow, increasing productivity and efficiency in machinery and manufacturing facilities. Industry 4.0 works with the IIoT and software/hardware that connect (through communication technologies) to enhance and improve manufacturing processes. These new technologies make smarter, safer, and more advanced manufacturing possible, opening up a manufacturing platform that is more reliable, consistent, and efficient than before. SCADA (supervisory control and data acquisition) software is one example of the many systems implemented in industrial automation today.[114] Industry 4.0 covers many areas in manufacturing and will continue to do so as time goes on.[112] Industrial robotics is a sub-branch of industrial automation that aids in various manufacturing processes, including machining, welding, painting, assembling, and material handling.[115] Industrial robots use mechanical, electrical, and software systems that allow for precision, accuracy, and speed far exceeding human performance. The birth of industrial robots came shortly after World War II, as the U.S. saw the need for a quicker way to produce industrial and consumer goods.[116] Servos, digital logic, and solid-state electronics allowed engineers to build better and faster systems, and over time these systems were improved and revised to the point where a single robot is capable of running 24 hours a day with little or no maintenance. In 1997 there were 700,000 industrial robots in use; by 2017 the number had risen to 1.8 million.[117] In recent years, AI combined with robotics has also been used to create automatic labeling solutions, with robotic arms serving as automatic label applicators and AI learning to detect the products to be labelled.[118] Industrial automation incorporates programmable logic controllers into the manufacturing process. Programmable logic controllers (PLCs) use a processing system which allows for variation of the control of inputs and outputs using simple programming. PLCs make use of programmable memory, storing instructions and functions such as logic, sequencing, timing, and counting. Using a logic-based language, a PLC can receive a variety of inputs and return a variety of logical outputs, the input devices being sensors and the output devices being motors, valves, etc.
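As a rough illustration of the kind of logic a PLC evaluates on each scan cycle, the following sketch (in Python, not any vendor's actual PLC programming language; all names are hypothetical) mimics the classic start/stop seal-in rung of ladder logic:

# Hypothetical sketch of a single PLC-style scan: read inputs, evaluate logic, write outputs.
def scan(inputs, outputs):
    # Seal-in (latching) logic: the motor runs if the start button is pressed
    # or the motor was already running, provided the stop button is not pressed.
    outputs["motor"] = (inputs["start"] or outputs["motor"]) and not inputs["stop"]
    return outputs

state = {"motor": False}
state = scan({"start": True, "stop": False}, state)   # start pressed: motor energized
state = scan({"start": False, "stop": False}, state)  # start released: motor stays latched on
state = scan({"start": False, "stop": True}, state)   # stop pressed: motor de-energized
print(state["motor"])  # False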
PLCs are similar to computers, however, while computers are optimized for calculations, PLCs are optimized for control tasks and use in industrial environments. They are built so that only basic logic-based programming knowledge is needed and to handle vibrations, high temperatures, humidity, and noise. The greatest advantage PLCs offer is their flexibility. With the same basic controllers, a PLC can operate a range of different control systems. PLCs make it unnecessary to rewire a system to change the control system. This flexibility leads to a cost-effective system for complex and varied control systems.[119] PLCs can range from small "building brick" devices with tens of I/O in a housing integral with the processor, to large rack-mounted modular devices with a count of thousands of I/O, and which are often networked to other PLC andSCADAsystems. They can be designed for multiple arrangements of digital and analoginputs and outputs(I/O), extended temperature ranges, immunity toelectrical noise, and resistance to vibration and impact. Programs to control machine operation are typically stored in battery-backed-up ornon-volatile memory. It was from the automotive industry in the United States that the PLC was born. Before the PLC, control, sequencing, and safety interlock logic for manufacturing automobiles was mainly composed ofrelays,cam timers,drum sequencers, and dedicated closed-loop controllers. Since these could number in the hundreds or even thousands, the process for updating such facilities for the yearly modelchange-overwas very time-consuming and expensive, aselectriciansneeded to individually rewire the relays to change their operational characteristics. When digital computers became available, being general-purpose programmable devices, they were soon applied to control sequential and combinatorial logic in industrial processes. However, these early computers required specialist programmers and stringent operating environmental control for temperature, cleanliness, and power quality. To meet these challenges, the PLC was developed with several key attributes. It would tolerate the shop-floor environment, it would support discrete (bit-form) input and output in an easily extensible manner, it would not require years of training to use, and it would permit its operation to be monitored. Since many industrial processes have timescales easily addressed by millisecond response times, modern (fast, small, reliable) electronics greatly facilitate building reliable controllers, and performance could be traded off for reliability.[120] Agent-assisted automation refers to automation used by call center agents to handle customer inquiries. The key benefit of agent-assisted automation is compliance and error-proofing. Agents are sometimes not fully trained or they forget or ignore key steps in the process. The use of automation ensures that what is supposed to happen on the call actually does, every time. There are two basic types: desktop automation and automated voice solutions. Fundamentally, there are two types of control loop:open-loop control(feedforward), andclosed-loop control(feedback). The definition of a closed loop control system according to theBritish Standards Institutionis "a control system possessing monitoring feedback, the deviation signal formed as a result of this feedback being used to control the action of a final control element in such a way as to tend to reduce the deviation to zero."[122] One of the simplest types of control ison-offcontrol. 
An example is a thermostat used on household appliances, which either opens or closes an electrical contact. (Thermostats were originally developed as true feedback-control mechanisms rather than the common on-off household appliance thermostat.) In sequence control, a programmed sequence of discrete operations is performed, often based on system logic that involves system states. An elevator control system is an example of sequence control. A proportional–integral–derivative controller (PID controller) is a control loop feedback mechanism (controller) widely used in industrial control systems. In a PID loop, the controller continuously calculates an error value e(t) as the difference between a desired setpoint and a measured process variable, and applies a correction based on proportional, integral, and derivative terms (sometimes denoted P, I, and D, respectively), which give their name to the controller type. The theoretical understanding and application date from the 1920s, and they are implemented in nearly all analog control systems; originally in mechanical controllers, then using discrete electronics, and latterly in industrial process computers. Sequential control may follow either a fixed sequence or a logical one that will perform different actions depending on various system states. An example of an adjustable but otherwise fixed sequence is a timer on a lawn sprinkler. States refer to the various conditions that can occur in a use or sequence scenario of the system. An example is an elevator, which uses logic based on the system state to perform certain actions in response to its state and operator input. For example, if the operator presses the floor n button, the system will respond depending on whether the elevator is stopped or moving, going up or down, or if the door is open or closed, and other conditions.[124] An early development of sequential control was relay logic, by which electrical relays engage electrical contacts which either start or interrupt power to a device. Relays were first used in telegraph networks before being developed for controlling other devices, such as when starting and stopping industrial-sized electric motors or opening and closing solenoid valves. Using relays for control purposes allowed event-driven control, where actions could be triggered out of sequence, in response to external events. These were more flexible in their response than the rigid single-sequence cam timers. More complicated examples involved maintaining safe sequences for devices such as swing bridge controls, where a lock bolt needed to be disengaged before the bridge could be moved, and the lock bolt could not be released until the safety gates had already been closed. The total number of relays and cam timers can number into the hundreds or even thousands in some factories. Early programming techniques and languages were needed to make such systems manageable, one of the first being ladder logic, where diagrams of the interconnected relays resembled the rungs of a ladder. Special computers called programmable logic controllers were later designed to replace these collections of hardware with a single, more easily re-programmed unit. In a typical hard-wired motor start and stop circuit (called a control circuit) a motor is started by pushing a "Start" or "Run" button that activates a pair of electrical relays. The "lock-in" relay locks in contacts that keep the control circuit energized when the push-button is released.
(The start button is a normally open contact and the stop button is a normally closed contact.) Another relay energizes a switch that powers the device that throws the motor starter switch (three sets of contacts for three-phase industrial power) in the main power circuit. Large motors use high voltage and experience high in-rush current, making speed important in making and breaking contact. This can be dangerous for personnel and property with manual switches. The "lock-in" contacts in the start circuit and the main power contacts for the motor are held engaged by their respective electromagnets until a "stop" or "off" button is pressed, which de-energizes the lock in relay.[125] Commonlyinterlocksare added to a control circuit. Suppose that the motor in the example is powering machinery that has a critical need for lubrication. In this case, an interlock could be added to ensure that the oil pump is running before the motor starts. Timers, limit switches, and electric eyes are other common elements in control circuits. Solenoid valves are widely used oncompressed airorhydraulic fluidfor poweringactuatorsonmechanicalcomponents. Whilemotorsare used to supply continuousrotary motion, actuators are typically a better choice for intermittently creating a limited range of movement for a mechanical component, such as moving various mechanical arms, opening or closingvalves, raising heavy press-rolls, applying pressure to presses. Computers can perform both sequential control and feedback control, and typically a single computer will do both in an industrial application.Programmable logic controllers(PLCs) are a type of special-purposemicroprocessorthat replaced many hardware components such as timers and drum sequencers used in relay logic–type systems. General-purpose process control computers have increasingly replaced stand-alone controllers, with a single computer able to perform the operations of hundreds of controllers. Process control computers can process data from a network of PLCs, instruments, and controllers to implement typical (such as PID) control of many individual variables or, in some cases, to implement complex controlalgorithmsusing multiple inputs and mathematical manipulations. They can also analyze data and create real-time graphical displays for operators and run reports for operators, engineers, and management. Control of anautomated teller machine(ATM) is an example of an interactive process in which a computer will perform a logic-derived response to a user selection based on information retrieved from a networked database. The ATM process has similarities with other online transaction processes. The different logical responses are calledscenarios. Such processes are typically designed with the aid ofuse casesandflowcharts, which guide the writing of the software code. The earliest feedback control mechanism was the water clock invented by Greek engineer Ctesibius (285–222 BC).
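A minimal discrete-time sketch of the PID control law described above (in Python; the gains, sampling interval, and example values are illustrative assumptions rather than anything specified in the source):

# Illustrative discrete-time PID controller; gains and time step are arbitrary.
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.previous_error = 0.0

    def update(self, setpoint, measured):
        error = setpoint - measured                           # e(t): deviation from the setpoint
        self.integral += error * self.dt                      # integral term: accumulated past error
        derivative = (error - self.previous_error) / self.dt  # derivative term: rate of change of the error
        self.previous_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

controller = PID(kp=2.0, ki=0.5, kd=0.1, dt=0.1)
correction = controller.update(setpoint=100.0, measured=95.0)  # correction applied to the process input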
https://en.wikipedia.org/wiki/Automation
Autonomous agency theory(AAT) is aviable system theory(VST) which models autonomous socialcomplex adaptive systems. It can be used to model the relationship between an agency and its environment(s), and these may include other interactive agencies. The nature of that interaction is determined by both the agency's external and internal attributes and constraints. Internal attributes may include immanent dynamic "self" processes that drive agency change. Stafford Beercoined the termviable systemsin the 1950s, and developed it within hismanagement cyberneticstheories. He designed hisviable system modelas a diagnostic tool for organisationalpathologies(conditions of social ill-health). This model involves a system concerned with operations and their direct management, and ameta-systemthat "observes" the system and controls it. Beer's work refers toMaturana's concept ofautopoiesis,[1]which explains why living systems actually live. However, Beer did not make general use of the concept in his modelling process. In the 1980s Eric Schwarz developed an alternative model from the principles of complexity science. This not only embraces the ideas of autopoiesis (self-production), but also autogenesis (self-creation) which responds to a proposition that living systems also need to learn to maintain their viability. Self-production and self-creation are both networks of processes that connect an operational system of agency structure from which behaviour arises, an observing relational meta-system, this itself observed by an "existential" meta-meta-system. As such Schwarz' VST constitutes a different paradigm from that of Beer. AAT is a development of Schwarz' paradigm through the addition of propositions setting it in a knowledge context.[2] AAT is a generic modelling approach that has the capacity to anticipate future potentials for behaviour. Suchanticipationoccurs because behaviour in the agency as a living system is "structure determined",[3]where the structure itself of the agency is responsible for that anticipation. This is like anticipating the behaviour of both a tiger or a giraffe when faced with food options. The tiger has a structure that allows it to have speed, strength and sharp inbuilt weapons to kill moving prey, but the giraffe has a structure that allows it to acquire its food in high places in a way the tiger could not duplicate. Even if a giraffe has the speed to chase prey, it does not have the resources to kill and eat it. Agency genericstructureis a substructure defined by three systems that are, in general terms, referred to as: These generic systems are ontologically distinct; their natures being determined by the context in which the autonomous agency exists. The substructure also maintains a superstructure that is constructed through context related propositional theory. Superstructural theory may include attributes of collective identity, cognition, emotion, personality; purpose and intention; self-reference, self-awareness, self-reflection, self-regulation and self-organisation. The substructural systems are connected by autopoietic and autogenetic networks of processes as shown in Figure 1 below. The terminology becomes simplified when the existential system is taken to be culture, and it is recognised that Piaget's[4]concept ofoperative intelligenceis equivalent to autopoiesis, and his figurative intelligence to autogenesis. 
The noumenal system now becomes a personality system, and autonomous agency theory now becomes cultural agency theory (CAT).[5] This is normally used to model plural situations like organisations or nation states, where the personality system is taken to have normative characteristics (see also Normative personality),[6][7] that is, driven by cultural norms as represented in Figure 2 below. This has been developed further through mindset agency theory,[8] enabling agency behaviour to be anticipated.[9] A feature of this modelling approach is that the properties of the cultural system act as an attractor for the agency as a whole, providing constraint for the properties of its personality and operative systems. This attraction ceases with cultural instability, when CAT reduces to instrumentality with no capacity to learn. Another feature is driven by the possibilities of recursion permitted by Beer's viability law: every viable system contains and is contained in a viable system.[10] Cultural agency theory (CAT) is a development of AAT.[11] It is principally used to model organisational contexts that have at least potentially stable cultures. The existential system of AAT becomes the cultural system, the figurative system becomes a normative personality,[12] and the operative system now represents the organisational structure that facilitates and constrains behaviour. The cultural system may be regarded as a (second-order) "observer" of the instrumental couple that occurs between the normative personality and the operative system. The function of this couple is to manifest figurative attributes of the personality, like goals or ideology, operatively, consequently influencing behaviour. This instrumental nature occurs through feedforward processes such that personality attributes can be processed for operative action. Where there are issues in doing this, feedback processes create imperatives for adjustment. This is like having a goal, finding that it cannot be implemented, and thereby having to reconsider the goal. This instrumental couple can also be seen in terms of the operative system and its first-order "observing" system, the normative personality. So, while personality is a first-order "observer" of CAT's operative system, it is ultimately directed by its second-order cultural "observer" system. A development of this has occurred using trait theory from psychology. Unlike other trait theories of personality, this adopts epistemic traits[13] that centre on values, an approach that tends to be more stable in personality testing and retesting (since basic values tend to be stable) than approaches that use, for instance, agency preferences (like the Myers-Briggs Type Indicator), which may change between test and retest.
This trait theory for the normative personality is called mindset agency theory,[14] and is a development of Maruyama's Mindscape Theory.[15] The cognitive process by which personality is represented through epistemic trait functions (called types) can be explained through both instrumental and epistemic rationality,[citation needed] where instrumental rationality (also referred to as utilitarian,[16] and relating to expectations about the behaviour of other human beings or objects in the environment, given some cognitive basis for those expectations) is independent of, if constrained by, epistemic rationality (relating to the formation of beliefs in an unbiased manner, normally set in terms of believable propositions, that is, propositions strongly supported by evidence, as opposed to agnosticism towards propositions unsupported by sufficient evidence). Applications of CAT can be found in the social, political and economic sciences; for instance, recent studies have analysed the personalities of Donald Trump and Theresa May. Stafford Beer's (1979) viable system model is a well-known diagnostic model that comes out of his management cybernetics paradigm. Related to this is the idea of first-order and second-order cybernetics. Cybernetics is concerned with feedforward and feedback processes; first-order cybernetics is concerned with the relationship between the system and its environment. Second-order cybernetics is concerned with the relationship between the system and its internal meta-system (which some refer to as "the observer" of the system). Von Foerster[17] has referred to second-order cybernetics as the "cybernetics of cybernetics". While attempts to explore higher orders of cybernetics have been made,[18] no general theory of higher cybernetic orders has emerged from this paradigm. In contrast, by extending the principles of autonomous agency theory, a generic model has been formulated for the generation of higher cybernetic orders,[19] developed using the concepts of recursion and incursion as proposed by Dubois.[20][21] The model is reflective, for instance, of processes of knowledge creation for community learning[22] and symbolic convergence theory.[23] This nth-order theory of cybernetics links with "the cybernetics of cybernetics" by assigning to its second-order cybernetic concept inferences that may arise from any higher-order cybernetics that may exist, even if unperceived. The network of processes in this general representation of higher cybernetic orders is expressed in terms of orders of autopoiesis, so that, for instance, autogenesis may be seen as a second order of autopoiesis.
https://en.wikipedia.org/wiki/Autonomous_agency_theory
TheGaia hypothesis(/ˈɡaɪ.ə/), also known as theGaia theory,Gaia paradigm, or theGaia principle, proposes that livingorganismsinteract with theirinorganicsurroundings onEarthto form asynergisticandself-regulatingcomplex systemthat helps to maintain and perpetuate the conditions forlifeon the planet. The Gaia hypothesis was formulated by the chemistJames Lovelock[1]and co-developed by the microbiologistLynn Margulisin the 1970s.[2]Following the suggestion by his neighbour, novelistWilliam Golding, Lovelock named the hypothesis afterGaia, the primordial deity who personified the Earth inGreek mythology. In 2006, theGeological Society of Londonawarded Lovelock theWollaston Medalin part for his work on the Gaia hypothesis.[3] Topics related to the hypothesis include how thebiosphereand theevolutionof organisms affect the stability ofglobal temperature,salinityofseawater,atmospheric oxygenlevels, the maintenance of ahydrosphereof liquid water and other environmental variables that affect thehabitability of Earth. The Gaia hypothesis was initially criticized for beingteleologicaland against the principles ofnatural selection, but later refinements aligned the Gaia hypothesis with ideas from fields such asEarth system science,biogeochemistryandsystems ecology.[4][5][6]Even so, the Gaia hypothesis continues to attract criticism, and today many scientists consider it to be only weakly supported by, or at odds with, the available evidence.[7][8][9][10] Gaian hypotheses suggest that organismsco-evolvewith their environment: that is, they "influence theirabioticenvironment, and that environment in turn influences thebiotabyDarwinian process". Lovelock (1995) gave evidence of this in his second book,Ages of Gaia, showing the evolution from the world of the earlythermo-acido-philicandmethanogenic bacteriatowards the oxygen-enrichedatmospheretoday that supports morecomplex life. A reduced version of the hypothesis has been called "influential Gaia"[11]in the 2002 paper "Directed Evolution of the Biosphere: Biogeochemical Selection or Gaia?" by Andrei G. Lapenis, which states thebiotainfluence certain aspects of the abiotic world, e.g.temperatureand atmosphere. This is not the work of an individual but a collective of Russian scientific research that was combined into this peer-reviewed publication. It states the coevolution of life and the environment through "micro-forces"[11]and biogeochemical processes. An example is how the activity ofphotosyntheticbacteria during Precambrian times completely modified theEarth atmosphereto turn it aerobic, and thus supports the evolution of life (in particulareukaryoticlife). Since barriers existed throughout the twentieth century between Russia and the rest of the world, it is only relatively recently that the early Russian scientists who introduced concepts overlapping the Gaia paradigm have become better known to the Western scientific community.[11]These scientists includePiotr Alekseevich Kropotkin(1842–1921) (although he spent much of his professional life outside Russia),Rafail Vasil’evich Rizpolozhensky(1862 –c.1922),Vladimir Ivanovich Vernadsky(1863–1945), andVladimir Alexandrovich Kostitzin(1886–1963). Biologists and Earth scientists usually view the factors that stabilize the characteristics of a period as an undirectedemergent propertyorentelechyof the system; as each individual species pursues its own self-interest, for example, their combined actions may have counterbalancing effects on environmental change. 
Opponents of this view sometimes reference examples of events that resulted in dramatic change rather than stable equilibrium, such as the conversion of the Earth's atmosphere from areducing environmentto anoxygen-rich one at the end of theArchaeanand the beginning of theProterozoicperiods. Less accepted versions of the hypothesis claim that changes in the biosphere are brought about through thecoordination of living organismsand maintain those conditions throughhomeostasis. In some versions ofGaia philosophy, all lifeforms are considered part of one single living planetary being calledGaia. In this view, the atmosphere, the seas and the terrestrial crust would be results of interventions carried out by Gaia through thecoevolvingdiversity of living organisms. The Gaia paradigm was an influence on thedeep ecologymovement.[12] The Gaia hypothesis posits that the Earth is a self-regulatingcomplex systeminvolving thebiosphere, theatmosphere, thehydrospheresand thepedosphere, tightly coupled as an evolving system. The hypothesis contends that this system as a whole, called Gaia, seeks a physical and chemical environment optimal for contemporary life.[13] Gaia evolves through acyberneticfeedbacksystem operated by thebiota, leading to broad stabilization of the conditions of habitability in a full homeostasis. Many processes in the Earth's surface, essential for the conditions of life, depend on the interaction of living forms, especiallymicroorganisms, with inorganic elements. These processes establish a global control system that regulates Earth'ssurface temperature,atmosphere compositionandoceansalinity, powered by the global thermodynamic disequilibrium state of the Earth system.[14] The existence of a planetary homeostasis influenced by living forms had been observed previously in the field ofbiogeochemistry, and it is being investigated also in other fields likeEarth system science. The originality of the Gaia hypothesis relies on the assessment that such homeostatic balance is actively pursued with the goal of keeping the optimal conditions for life, even when terrestrial or external events menace them.[15] Since life started on Earth, the energy provided by theSunhas increased by 25–30%;[16]however, the surface temperature of the planet has remained within the levels of habitability, reaching quite regular low and high margins. Lovelock has also hypothesised that methanogens produced elevated levels of methane in the early atmosphere, giving a situation similar to that found in petrochemical smog, similar in some respects to the atmosphere onTitan.[17]This, he suggests, helped to screen out ultraviolet light until the formation of the ozone layer, maintaining a degree of homeostasis. However, theSnowball Earth[18]research has suggested that "oxygen shocks" and reduced methane levels led, during theHuronian,SturtianandMarinoan/VarangerIce Ages, to a world that very nearly became a solid "snowball". These epochs are evidence against the ability of the prePhanerozoicbiosphere to fully self-regulate. Processing of the greenhouse gas CO2, explained below, plays a critical role in the maintenance of the Earth temperature within the limits of habitability. 
TheCLAW hypothesis, inspired by the Gaia hypothesis, proposes afeedback loopthat operates betweenoceanecosystemsand theEarth'sclimate.[19]Thehypothesisspecifically proposes that particularphytoplanktonthat producedimethyl sulfideare responsive to variations inclimate forcing, and that these responses lead to anegative feedbackloop that acts to stabilise thetemperatureof theEarth's atmosphere. Currently the increase in human population and the environmental impact of its activities, such as the multiplication ofgreenhouse gasesmay cause negative feedbacks in the environment to becomepositive feedback. Lovelock has stated that this could bring anextremely accelerated global warming,[20]but he has since stated the effects will likely occur more slowly.[21] In response to the criticism that the Gaia hypothesis seemingly required unrealisticgroup selectionandcooperationbetween organisms, James Lovelock andAndrew Watsondeveloped a mathematical model,Daisyworld, in whichecological competitionunderpinned planetary temperature regulation.[22] Daisyworld examines theenergy budgetof aplanetpopulated by two different types of plants, blackdaisiesand white daisies, which are assumed to occupy a significant portion of the surface. The colour of the daisies influences thealbedoof the planet such that black daisies absorb more light and warm the planet, while white daisies reflect more light and cool the planet. The black daisies are assumed to grow and reproduce best at a lower temperature, while the white daisies are assumed to thrive best at a higher temperature. As the temperature rises closer to the value the white daisies like, the white daisies outreproduce the black daisies, leading to a larger percentage of white surface, and more sunlight is reflected, reducing the heat input and eventually cooling the planet. Conversely, as the temperature falls, the black daisies outreproduce the white daisies, absorbing more sunlight and warming the planet. The temperature will thus converge to the value at which the reproductive rates of the plants are equal. Lovelock and Watson showed that, over a limited range of conditions, thisnegative feedbackdue to competition can stabilize the planet's temperature at a value which supports life, if the energy output of the Sun changes, while a planet without life would show wide temperature changes. The percentage of white and black daisies will continually change to keep the temperature at the value at which the plants' reproductive rates are equal, allowing both life forms to thrive. It has been suggested that the results were predictable because Lovelock and Watson selected examples that produced the responses they desired.[23] Oceansalinityhas been constant at about 3.5% for a very long time.[24]Salinity stability in oceanic environments is important as most cells require a rather constant salinity and do not generally tolerate values above 5%. The constant ocean salinity was a long-standing mystery, because no process counterbalancing the salt influx from rivers was known. Recently it was suggested[25]that salinity may also be strongly influenced byseawatercirculation through hotbasalticrocks, and emerging as hot water vents onmid-ocean ridges. However, the composition of seawater is far from equilibrium, and it is difficult to explain this fact without the influence of organic processes. One suggested explanation lies in the formation of salt plains throughout Earth's history. 
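The Daisyworld mechanism is easiest to see by running it. The following is a minimal numerical sketch in the spirit of Watson and Lovelock's model, not their published parameterisation: the constants, the linear coupling between local and planetary temperature, and the daisyworld() helper are all illustrative assumptions.

# Simplified Daisyworld sketch (after Watson & Lovelock); illustrative parameters only.
SIGMA = 5.67e-8          # Stefan-Boltzmann constant (W m^-2 K^-4)
S0 = 917.0               # baseline insolation on Daisyworld (W m^-2)
ALBEDO_GROUND, ALBEDO_WHITE, ALBEDO_BLACK = 0.5, 0.75, 0.25
T_OPT, T_RANGE = 295.5, 17.5   # optimal growth temperature (K) and tolerated range
DEATH_RATE = 0.3
Q = 20.0                 # crude coupling between local and planetary temperature

def growth(local_temp):
    """Parabolic growth rate: zero outside T_OPT +/- T_RANGE, maximal at T_OPT."""
    return max(0.0, 1.0 - ((local_temp - T_OPT) / T_RANGE) ** 2)

def daisyworld(luminosity, a_white=0.2, a_black=0.2, steps=2000, dt=0.05):
    """Integrate daisy cover toward an approximate steady state and return it."""
    for _ in range(steps):
        bare = max(0.0, 1.0 - a_white - a_black)
        albedo = (bare * ALBEDO_GROUND + a_white * ALBEDO_WHITE
                  + a_black * ALBEDO_BLACK)
        # Planetary temperature from radiative balance.
        t_planet = (luminosity * S0 * (1.0 - albedo) / SIGMA) ** 0.25
        # Local temperatures: darker patches run warmer than the planetary mean.
        t_white = t_planet + Q * (albedo - ALBEDO_WHITE)
        t_black = t_planet + Q * (albedo - ALBEDO_BLACK)
        # Logistic-style competition for bare ground.
        a_white += dt * a_white * (bare * growth(t_white) - DEATH_RATE)
        a_black += dt * a_black * (bare * growth(t_black) - DEATH_RATE)
        a_white, a_black = max(a_white, 0.001), max(a_black, 0.001)
    return a_white, a_black, t_planet

for lum in (0.8, 1.0, 1.2, 1.4):
    w, b, t = daisyworld(lum)
    print(f"luminosity={lum:.1f}  white cover={w:.2f}  black cover={b:.2f}  T={t - 273.15:.1f} C")

The intent is that, as luminosity rises, cover shifts from black toward white daisies and the planetary temperature varies much less than it would on a lifeless planet of fixed albedo, which is the regulating behaviour described above.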
It is hypothesized that these are created by bacterial colonies that fix ions and heavy metals during their life processes.[24] In Earth's biogeochemical processes, sources and sinks describe the movement of elements. The composition of salt ions within our oceans and seas is: sodium (Na⁺), chloride (Cl⁻), sulfate (SO₄²⁻), magnesium (Mg²⁺), calcium (Ca²⁺) and potassium (K⁺). The elements that comprise salinity do not readily change and are a conservative property of seawater.[24]There are many mechanisms that change salinity from a particulate form to a dissolved form and back. The known sources of sodium, i.e. salts, are the weathering, erosion, and dissolution of rocks, whose products are transported by rivers and deposited in the oceans. TheMediterranean Seahas been suggested to function as one of Gaia's "kidneys" byKenneth J. Hsu, a corresponding author, in 2001. Hsu suggests the "desiccation" of the Mediterranean is evidence of a functioning Gaia "kidney". In this and earlier suggested cases, it is plate movements and physics, not biology, which performs the regulation. Earlier "kidney functions" were performed during the "depositionof theCretaceous(South Atlantic),Jurassic(Gulf of Mexico),Permo-Triassic(Europe),Devonian(Canada), andCambrian/Precambrian(Gondwana) saline giants."[26] The Gaia hypothesis states that the Earth'satmospheric compositionis kept at a dynamically steady state by the presence of life.[27]The atmospheric composition provides the conditions that contemporary life has adapted to. All the atmospheric gases other thannoble gasespresent in the atmosphere are either made by organisms or processed by them. The stability of the atmosphere on Earth is not a consequence ofchemical equilibrium.Oxygenis a reactive compound, and should eventually combine with gases and minerals of the Earth's atmosphere and crust. Oxygen only began to persist in the atmosphere in small quantities about 50 million years before the start of theGreat Oxygenation Event.[28]Since the start of theCambrianperiod, atmospheric oxygen concentrations have fluctuated between 15% and 40% of atmospheric volume.[29]Traces ofmethane(at an amount of 100,000 tonnes produced per year)[30]should not exist, as methane is combustible in an oxygen atmosphere. Dry air in theatmosphere of Earthcontains roughly (by volume) 78.09%nitrogen, 20.95% oxygen, 0.93%argon, 0.039%carbon dioxide, and small amounts of other gases includingmethane. Lovelock originally speculated that concentrations of oxygen above about 25% would increase the frequency of wildfires and conflagration of forests. This mechanism, however, would not raise oxygen levels if they became too low. If plants can be shown to robustly over-produce O2, then perhaps only the high-oxygen forest-fire regulator is necessary. Recent work on the findings of fire-caused charcoal in Carboniferous and Cretaceous coal measures, in geologic periods when O2did exceed 25%, has supported Lovelock's contention.[citation needed] Gaia scientists see the participation of living organisms in thecarbon cycleas one of the complex processes that maintain conditions suitable for life.
The only significant natural source ofatmospheric carbon dioxide(CO2) isvolcanic activity, while the only significant removal is through the precipitation ofcarbonate rocks.[31]Carbon precipitation, solution andfixationare influenced by thebacteriaand plant roots in soils, where they improve gaseous circulation, or in coral reefs, where calcium carbonate is deposited as a solid on the sea floor. Calcium carbonate is used by living organisms to manufacture carbonaceous tests and shells. When these organisms die, their shells sink. Some arrive at the bottom of shallow seas where the heat and pressure of burial, and/or the forces of plate tectonics, eventually convert them to deposits of chalk and limestone. Much of this falling shell material, however, redissolves in the ocean below the carbonate compensation depth. One of these organisms isEmiliania huxleyi, an abundantcoccolithophorealgaewhich may have a role in the formation ofclouds.[32]CO2excess is compensated by an increase of coccolithophorid life, increasing the amount of CO2locked in the ocean floor. Coccolithophorids, if the CLAW Hypothesis turns out to be supported (see "Regulation of Global Surface Temperature" above), could help increase the cloud cover, hence control the surface temperature, help cool the whole planet and favor precipitation necessary for terrestrial plants.[citation needed]Lately the atmospheric CO2concentration has increased and there is some evidence that concentrations of oceanalgal bloomsare also increasing.[33] Lichenand other organisms accelerate theweatheringof rocks at the surface, while the decomposition of rocks also happens faster in the soil, thanks to the activity of roots, fungi, bacteria and subterranean animals. The flow of carbon dioxide from the atmosphere to the soil is therefore regulated with the help of living organisms. When CO2levels rise in the atmosphere, the temperature increases and plants grow. This growth brings higher consumption of CO2by the plants, which process it into the soil, removing it from the atmosphere. The idea of the Earth as an integrated whole, a living being, has a long tradition. Themythical Gaiawas the primal Greek goddess personifying theEarth, the Greek version of "Mother Nature" (from Ge = Earth, and Aia =PIEgrandmother), or theEarth Mother. James Lovelock gave this name to his hypothesis after a suggestion from the novelistWilliam Golding, who was living in the same village as Lovelock at the time (Bowerchalke,Wiltshire, UK). Golding's advice was based on Gea, an alternative spelling for the name of the Greek goddess, which is used as a prefix in geology, geophysics and geochemistry.[34]Golding later made reference to Gaia in hisNobel prizeacceptance speech. In the eighteenth century, asgeologyconsolidated as a modern science,James Huttonmaintained that geological and biological processes are interlinked.[35]Later, thenaturalistand explorerAlexander von Humboldtrecognized the coevolution of living organisms, climate, and Earth's crust.[35]In the twentieth century,Vladimir Vernadskyformulated a theory of Earth's development that is now one of the foundations of ecology. Vernadsky was a Ukrainiangeochemistand was one of the first scientists to recognize that the oxygen, nitrogen, and carbon dioxide in the Earth's atmosphere result from biological processes. During the 1920s he published works arguing that living organisms could reshape the planet as surely as any physical force.
Vernadsky was a pioneer of the scientific bases for the environmental sciences.[36]His visionary pronouncements were not widely accepted in the West, and some decades later the Gaia hypothesis received the same type of initial resistance from the scientific community. Also around the turn of the 20th century,Aldo Leopold, a pioneer in the development of modernenvironmental ethicsand in the movement forwildernessconservation, suggested a living Earth in his biocentric or holistic ethics regarding land. It is at least not impossible to regard the earth's parts—soil, mountains, rivers, atmosphere etc.—as organs or parts of organs of a coordinated whole, each part with its definite function. And if we could see this whole, as a whole, through a great period of time, we might perceive not only organs with coordinated functions, but possibly also that process of consumption as replacement which in biology we call metabolism, or growth. In such case we would have all the visible attributes of a living thing, which we do not realize to be such because it is too big, and its life processes too slow. Another influence for the Gaia hypothesis and theenvironmental movementin general came as a side effect of theSpace Racebetween the Soviet Union and the United States of America. During the 1960s, the first humans in space could see how the Earth looked as a whole. The photographEarthrisetaken by astronautWilliam Andersin 1968 during theApollo 8mission became, through theOverview Effect, an early symbol for the global ecology movement.[38] Lovelock started defining the idea of a self-regulating Earth controlled by the community of living organisms in September 1965, while working at theJet Propulsion Laboratoryin California on methods of detectinglife on Mars.[39][40]The first paper to mention it wasPlanetary Atmospheres: Compositional and other Changes Associated with the Presence of Life, co-authored with C.E. Giffin.[41]A main concept was that life could be detected on a planetary scale by the chemical composition of the atmosphere. According to the data gathered by thePic du Midi observatory, planets like Mars or Venus had atmospheres inchemical equilibrium. This difference from the Earth's atmosphere was considered to be proof that there was no life on these planets. Lovelock formulated theGaia Hypothesisin journal articles in 1972[1]and 1974,[2]followed by a popularizing 1979 bookGaia: A new look at life on Earth. An article in theNew Scientistof February 6, 1975,[42]and a popular book-length version of the hypothesis, published in 1979 asThe Quest for Gaia, began to attract scientific and critical attention. Lovelock first called it the Earth feedback hypothesis,[43]and it was a way to explain the fact that combinations of chemicals includingoxygenandmethanepersist in stable concentrations in the atmosphere of the Earth. Lovelock suggested detecting such combinations in other planets' atmospheres as a relatively reliable and cheap way to detect life.
Later, other relationships such as sea creatures producing sulfur and iodine in approximately the same quantities as required by land creatures emerged and helped bolster the hypothesis.[44] In 1971microbiologistDr.Lynn Margulisjoined Lovelock in the effort of fleshing out the initial hypothesis into scientifically proven concepts, contributing her knowledge about how microbes affect the atmosphere and the different layers in the surface of the planet.[4]The American biologist had also awakened criticism from the scientific community with her advocacy of the theory on the origin ofeukaryoticorganellesand her contributions to theendosymbiotic theory, nowadays accepted. Margulis dedicated the last of eight chapters in her book,The Symbiotic Planet, to Gaia. However, she objected to the widespread personification of Gaia and stressed that Gaia is "not an organism", but "an emergent property of interaction among organisms". She defined Gaia as "the series of interacting ecosystems that compose a single huge ecosystem at the Earth's surface. Period". The book's most memorable "slogan" was actually quipped by a student of Margulis'. James Lovelock called his first proposal theGaia hypothesisbut has also used the termGaia theory. Lovelock states that the initial formulation was based on observation, but still lacked a scientific explanation. The Gaia hypothesis has since been supported by a number of scientific experiments[45]and provided a number of useful predictions.[46] In 1985, the first public symposium on the Gaia hypothesis,Is The Earth a Living Organism?was held atUniversity of Massachusetts Amherst, August 1–6.[47]The principal sponsor was theNational Audubon Society. Speakers included James Lovelock,Lynn Margulis,George Wald,Mary Catherine Bateson,Lewis Thomas,Thomas Berry,David Abram,John Todd, Donald Michael,Christopher Bird,Michael Cohen, and William Fields. Some 500 people attended.[48] In 1988,climatologistStephen Schneiderorganised a conference of theAmerican Geophysical Union. The first Chapman Conference on Gaia,[4]was held in San Diego, California, on March 7, 1988. During the "philosophical foundations" session of the conference,David Abramspoke on the influence of metaphor in science, and of the Gaia hypothesis as offering a new and potentially game-changing metaphorics, whileJames Kirchnercriticised the Gaia hypothesis for its imprecision. Kirchner claimed that Lovelock and Margulis had not presented one Gaia hypothesis, but four: Of Homeostatic Gaia, Kirchner recognised two alternatives. "Weak Gaia" asserted that life tends to make the environment stable for the flourishing of all life. "Strong Gaia" according to Kirchner, asserted that life tends to make the environment stable,to enablethe flourishing of all life. Strong Gaia, Kirchner claimed, was untestable and therefore not scientific.[49] Lovelock and other Gaia-supporting scientists, however, did attempt to disprove the claim that the hypothesis is not scientific because it is impossible to test it by controlled experiment. 
For example, against the charge that Gaia was teleological, Lovelock and Andrew Watson offered theDaisyworldModel (and its modifications, above) as evidence against most of these criticisms.[22]Lovelock said that the Daisyworld model "demonstrates that self-regulation of the global environment can emerge from competition amongst types of life altering their local environment in different ways".[50] Lovelock was careful to present a version of the Gaia hypothesis that had no claim that Gaia intentionally or consciously maintained the complex balance in her environment that life needed to survive. It would appear that the claim that Gaia acts "intentionally" was a statement in his popular initial book and was not meant to be taken literally. This new statement of the Gaia hypothesis was more acceptable to the scientific community. Most accusations ofteleologismceased, following this conference.[citation needed] By the time of the 2nd Chapman Conference on the Gaia Hypothesis, held at Valencia, Spain, on 23 June 2000,[51]the situation had changed significantly. Rather than a discussion of the Gaian teleological views, or "types" of Gaia hypotheses, the focus was upon the specific mechanisms by which basic short term homeostasis was maintained within a framework of significant evolutionary long term structural change. The major questions were:[52] In 1997,Tyler Volkargued that a Gaian system is almost inevitably produced as a result of an evolution towards far-from-equilibrium homeostatic states that maximiseentropy production, and Axel Kleidon (2004) agreed stating: "...homeostatic behavior can emerge from a state of MEP associated with the planetary albedo"; "...the resulting behavior of a symbiotic Earth at a state of MEP may well lead to near-homeostatic behavior of the Earth system on long time scales, as stated by the Gaia hypothesis". M. Staley (2002) has similarly proposed "...an alternative form of Gaia theory based on more traditional Darwinian principles... In [this] new approach, environmental regulation is a consequence of population dynamics. The role of selection is to favor organisms that are best adapted to prevailing environmental conditions. However, the environment is not a static backdrop for evolution, but is heavily influenced by the presence of living organisms. The resulting co-evolving dynamical process eventually leads to the convergence of equilibrium and optimal conditions". A fourth international conference on the Gaia hypothesis, sponsored by the Northern Virginia Regional Park Authority and others, was held in October 2006 at the Arlington, Virginia campus of George Mason University.[53] Martin Ogle, Chief Naturalist, for NVRPA, and long-time Gaia hypothesis proponent, organized the event. Lynn Margulis, Distinguished University Professor in the Department of Geosciences, University of Massachusetts-Amherst, and long-time advocate of the Gaia hypothesis, was a keynote speaker. Among many other speakers: Tyler Volk, co-director of the Program in Earth and Environmental Science at New York University; Dr. Donald Aitken, Principal of Donald Aitken Associates;Dr. Thomas Lovejoy, President of the Heinz Center for Science, Economics and the Environment;Robert Corell, Senior Fellow, Atmospheric Policy Program, American Meteorological Society and noted environmental ethicist,J. Baird Callicott. 
After initially receiving little attention from scientists (from 1969 until 1977), thereafter for a period the initial Gaia hypothesis was criticized by a number of scientists, includingFord Doolittle,[54]Richard Dawkins[55]andStephen Jay Gould.[4]Lovelock has said that because his hypothesis is named after a Greek goddess, and championed by many non-scientists,[43]the Gaia hypothesis was interpreted as aneo-Paganreligion. Many scientists in particular also criticized the approach taken in his popular bookGaia, a New Look at Life on Earthfor beingteleological—a belief that things are purposeful and aimed towards a goal. Responding to this critique in 1990, Lovelock stated, "Nowhere in our writings do we express the idea that planetary self-regulation is purposeful, or involves foresight or planning by thebiota". Stephen Jay Gouldcriticized Gaia as being "a metaphor, not a mechanism."[56]He wanted to know the actual mechanisms by which self-regulating homeostasis was achieved. In his defense of Gaia, David Abram argues that Gould overlooked the fact that "mechanism", itself, is a metaphor—albeit an exceedingly common and often unrecognized metaphor—one which leads us to consider natural and living systems as though they were machines organized and built from outside (rather than asautopoieticor self-organizing phenomena). Mechanical metaphors, according to Abram, lead us to overlook the active or agentic quality of living entities, while the organismic metaphors of the Gaia hypothesis accentuate the active agency of both the biota and the biosphere as a whole.[57]With regard to causality in Gaia, Lovelock argues that no single mechanism is responsible, that the connections between the various known mechanisms may never be known, that this is accepted in other fields of biology and ecology as a matter of course, and that specific hostility is reserved for his own hypothesis for other reasons.[43] Aside from clarifying his language and understanding of what is meant by a life form, Lovelock himself ascribes most of the criticism to a lack of understanding of non-linear mathematics by his critics, and a linearizing form ofgreedy reductionismin which all events have to be immediately ascribed to specific causes before the fact. He also states that most of his critics are biologists but that his hypothesis includes experiments in fields outside biology, and that some self-regulating phenomena may not be mathematically explainable.[43] Lovelock has suggested that global biological feedback mechanisms could evolve bynatural selection, stating that organisms that improve their environment for their survival do better than those that damage their environment. However, in the early 1980s,W. Ford DoolittleandRichard Dawkinsseparately argued against this aspect of Gaia. Doolittle argued that nothing in thegenomeof individual organisms could provide the feedback mechanisms proposed by Lovelock, and therefore the Gaia hypothesis proposed no plausible mechanism and was unscientific.[54]Dawkins meanwhile stated that for organisms to act in concert would require foresight and planning, which is contrary to the current scientific understanding of evolution.[55]Like Doolittle, he also rejected the possibility that feedback loops could stabilize the system. 
Margulis argued in 1999 that "Darwin's grand vision was not wrong, only incomplete. In accentuating the direct competition between individuals for resources as the primary selection mechanism, Darwin (and especially his followers) created the impression that the environment was simply a static arena". She wrote that the composition of the Earth's atmosphere, hydrosphere, and lithosphere is regulated around "set points" as inhomeostasis, but those set points change with time.[58] Evolutionary biologistW. D. Hamiltoncalled the concept of GaiaCopernican, adding that it would take anotherNewtonto explain how Gaian self-regulation takes place through Darwiniannatural selection.[34][better source needed]More recently, Ford Doolittle, building on his and Inkpen's ITSNTS (It's The Song Not The Singer) proposal,[59]proposed that differential persistence can play a role similar to that of differential reproduction in evolution by natural selection, thereby providing a possible reconciliation between the theory of natural selection and the Gaia hypothesis.[60] The Gaia hypothesis continues to be broadly skeptically received by the scientific community. For instance, arguments both for and against it were laid out in the journalClimatic Changein 2002 and 2003. Among the significant arguments raised against it are the many examples where life has had a detrimental or destabilising effect on the environment rather than acting to regulate it.[7][8]Several recent books have criticised the Gaia hypothesis, expressing views ranging from "... the Gaia hypothesis lacks unambiguous observational support and has significant theoretical difficulties"[61]to "Suspended uncomfortably between tainted metaphor, fact, and false science, I prefer to leave Gaia firmly in the background"[9]to "The Gaia hypothesis is supported neither by evolutionary theory nor by the empirical evidence of the geological record".[62]TheCLAW hypothesis,[19]initially suggested as a potential example of direct Gaian feedback, has subsequently been found to be less credible as understanding ofcloud condensation nucleihas improved.[63]In 2009 theMedea hypothesiswas proposed: that life has highly detrimental (biocidal) impacts on planetary conditions, in direct opposition to the Gaia hypothesis.[64] In a2013 book-length evaluation of the Gaia hypothesisconsidering modern evidence from across the various relevant disciplines, Toby Tyrrell concluded that: "I believe Gaia is a dead end. Its study has, however, generated many new and thought provoking questions. While rejecting Gaia, we can at the same time appreciate Lovelock's originality and breadth of vision, and recognize that his audacious concept has helped to stimulate many new ideas about the Earth, and to champion a holistic approach to studying it".[65]Elsewhere he presents his conclusion "The Gaia hypothesis is not an accurate picture of how our world works".[66]This statement needs to be understood as referring to the "strong" and "moderate" forms of Gaia—that the biota obeys a principle that works to make Earth optimal (strength 5) or favourable for life (strength 4), or that it works as a homeostatic mechanism (strength 3). The latter is the "weakest" form of Gaia that Lovelock has advocated. Tyrrell rejects it.
However, he finds that the two weaker forms of Gaia—Coevolutionary Gaia and Influential Gaia, which assert that there are close links between the evolution of life and the environment and that biology affects the physical and chemical environment—are both credible, but that it is not useful to use the term "Gaia" in this sense and that those two forms were already accepted and explained by the processes of natural selection and adaptation.[67] As emphasized by multiple critics, no plausible mechanism exists that would drive the evolution of negative feedback loops leading to planetary self-regulation of the climate.[8][9]Indeed, multiple incidents in Earth's history (see theMedea hypothesis) have shown that the Earth and the biosphere can enter self-destructive positive feedback loops that lead to mass extinction events.[68] For example, theSnowball Earthglaciations appear to have resulted from the development ofphotosynthesisduring a period when theSun was coolerthan it is now. Such biological mechanisms will have some effect, but any understanding of glacial-interglacial cycles also requires study of the variations in the Earth's orbit around the Sun, the tilt of its axis of rotation, and the 'wobble' in that rotational movement, which together cause the periodicity in Northern Hemisphere insolation and thereby set the Earth's thermal regime. Studies in geology, geography, mathematics and the other Earth sciences all provide insight into the causes of ice ages. Meanwhile, the removal of carbon dioxide from the atmosphere, along with the oxidation ofatmospheric methaneby the released oxygen, resulted in a dramatic diminishment of thegreenhouse effect.[note 1]The resulting expansion of the polar ice sheets decreased the overall fraction of sunlight absorbed by the Earth, producing a runawayice–albedo positive feedback loopthat ultimately resulted in glaciation over nearly the entire surface of the Earth.[70]Volcanic processes at this scale are, however, also related to the pressure exerted on the Earth's crust by the ice sheets, which is released during periods of ice sheet retreat. The Earth's escape from the frozen condition appears to have been directly due to the release of carbon dioxide and methane by volcanoes,[71]although release of methane by microbes trapped underneath the ice could also have played a part.[72]Lesser contributions to warming would come from the fact that coverage of the Earth by ice sheets largely inhibited photosynthesis and lessened the removal of carbon dioxide from the atmosphere by the weathering of siliceous rocks. However, in the absence of tectonic activity, the snowball condition could have persisted indefinitely.[73]: 43–68 Geologic events with amplifying positive feedbacks (along with some possible biologic participation) led to the greatest mass extinction event on record, thePermian–Triassic extinction eventabout 250 million years ago. The precipitating event appears to have been volcanic eruptions in theSiberian Traps, a hilly region offlood basaltsin Siberia.
These eruptions released high levels ofcarbon dioxideandsulfur dioxide, which elevated world temperatures and acidified the oceans.[74]Estimates of the rise in carbon dioxide levels range widely, from as little as a two-fold increase, to as much as a twenty-fold increase.[73]: 69–91Amplifying feedbacks increased the warming to a level considerably greater than would be expected merely from the greenhouse effect of carbon dioxide: these include the ice albedo feedback, the increased evaporation of water vapor (another greenhouse gas) into the atmosphere, the release of methane from the warming ofmethane hydratedeposits buried under the permafrost and beneath continental shelf sediments, and increased wildfires.[73]: 69–91The rising carbon dioxide acidified the oceans, leading to widespread die-off of creatures with calcium carbonate shells, killing mollusks and crustaceans such as crabs and lobsters, and destroying coral reefs.[75]Their demise led to disruption of the entire oceanic food chain.[76]It has been argued that rising temperatures may have led to disruption of thechemoclineseparating sulfidic deep waters from oxygenated surface waters, which led to massive release of toxichydrogen sulfide(produced byanaerobicbacteria) into the surface ocean and even into the atmosphere, contributing to the (primarily methane-driven) collapse of the ozone layer,[77]and helping to explain the die-off of terrestrial animal and plant life.[78] According to the weakanthropic principle, our observation of such stabilizing feedback loops is an observer selection effect.[79][80][81]In all the universe, it is only planets with Gaian properties that could have evolved intelligent, self-aware organisms capable of asking such questions.[73]: 50One can imagine innumerable worlds where life evolved with different biochemistries or where the worlds had different geophysical properties, such that those worlds are presently dead due to a runaway greenhouse effect, or else are in a perpetual Snowball state, or else, due to one factor or another, life has been inhibited from evolving beyond the microbial level.[note 2] If no means exists for natural selection to operate at the biosphere level, then it would appear that the anthropic principle provides the only explanation for the survival of Earth's biosphere over geologic time. But in recent years, this strictly reductionistic view has been modified by recognition that natural selection can operate at multiple levels of the biological hierarchy — not just at the level of individual organisms.[82]Traditional Darwinian natural selection requires reproducing entities that display inheritable properties or abilities that result in their having more offspring than their competitors. Successful biospheres clearly cannot reproduce to spawn copies of themselves, and so traditional Darwinian natural selection cannot operate. A mechanism for biosphere-level selection was proposed by Ford Doolittle. Although he had been a strong and early critic of the Gaia hypothesis,[54]he had by 2015 started to think of ways whereby Gaia might be "Darwinised", seeking means whereby the planet could have evolved biosphere-level adaptations. Doolittle has suggested thatdifferential persistence— mere survival — could be considered a legitimate mechanism for natural selection. As the Earth passes through various challenges, the phenomenon of differential persistence enables selected entities to achieve fixation by surviving the death of their competitors.
Although Earth's biosphere is not competing against other biospheres on other planets, there are many competitors for survival onthisplanet. Collectively, Gaia constitutes the singlecladeof all living survivors descended from life’slast universal common ancestor(LUCA).[81]Various other proposals for biosphere-level selection include sequential selection, entropic hierarchy,[83]and considering Gaia as aholobiont-like system.[84]Ultimately speaking, differential persistence and sequential selection are variants of the anthropic principle,[83]while entropic hierarchy and holobiont arguments may possibly allow understanding the emergence of Gaia without anthropic arguments.[83][84]
https://en.wikipedia.org/wiki/Gaia_hypothesis
The Human Use of Human Beingsis a book byNorbert Wiener, the founding thinker ofcyberneticstheory and an influential advocate ofautomation; it was first published in 1950 and revised in 1954. The text argues for the benefits of automation to society; it analyzes the meaning of productive communication and discusses ways for humans and machines to cooperate, with the potential to amplify human power and release people from the repetitive drudgery of manual labor, in favor of more creative pursuits inknowledge workand the arts. The risk that such changes might harm society (through dehumanization or subordination of our species) is explored, and suggestions are offered on how to avoid such risk. The wordcyberneticsrefers to the theory of message transmission among people and machines. The thesis of the book is that: society can only be understood through a study of the messages and the communication facilities which belong to it; and that in the future development of these messages and communication facilities, messages between man and machines, between machines and man, and between machine and machine, are destined to play an ever-increasing part. (p. 16) Communicationmethods have entered a new realm, involving new technologies. Whether a transmission is between people, or between people and machines, the process is similar in that information is sent by one party and received by another, which can send a response. This is a type offeedback. People, animals, and plants all have the ability to take certain actions in response to their environments; in the same way, machines have feedback systems in order for their performances to be altered or evaluated in accordance with results. In the context of human/machine society, Wiener offers a definition of themessageas: "a sequence of events in time which, though in itself has a certain contingency, strives to hold back nature's tendency toward disorder by adjusting its parts to various purposive ends" (p. 27). The physical world has a "tendency toward disorder."Entropy(although a broad concept used in somewhat different ways across disciplines) roughly describes the way that isolated systems naturally become less and less organized with the passage of time; popularly understood as meaning a gradual decline into a state of chaos, the concept more accurately refers to the diffusion of energy toward a state of equilibrium, following thesecond law of thermodynamics. Wiener believed that communication of information is essentiallynegentropic– it resists entropy – because it relies on organizational structures. There are two kinds of possible disorganizational forces, passive and active: "Nature offers resistance to decoding, but it does not show ingenuity in finding new and undecipherable methods for jamming our communication with the outer world" (pp. 35–36). Nature's passive resistance is in contrast to active resistance, like that of a chess opponent. This is similar toEinstein's view, expressed in his famous comment: "The Lord is subtle but he is not vicious". An increase of information, whether communicated by a living being or a machine, will increase organization. 
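Wiener's contrast between entropy and information parallels Shannon's quantitative measure, under which a highly organized (predictable) message has low entropy and a disordered one has high entropy. The short sketch below illustrates that standard formula; it is an added example, not code or notation from Wiener's book.

from collections import Counter
from math import log2

def shannon_entropy(message: str) -> float:
    """Average information per symbol in bits: H = -sum(p_i * log2(p_i))."""
    counts = Counter(message)
    total = len(message)
    return -sum((n / total) * log2(n / total) for n in counts.values())

# A repetitive (highly organized) message versus a maximally varied one.
print(shannon_entropy("aaaaaaaaaa"))   # 0.0 bits per symbol
print(shannon_entropy("abcdefghij"))   # about 3.32 bits per symbol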
The feedback systems of an organism and those of a machine (informational organization in machines does not necessarily constitute "vitality" or a "soul") function in a similar way, allowing either to make assessments and act on the actual effectiveness of previous actions; when such feedback modifies not just a discrete action but an entire set of behaviors, Wiener calls thislearning. The individuality of a being is a certain intricate form, not an enduring substance. In order to understand an organism, it must be thought of as a pattern which maintains itself throughhomeostasis– life continues by maintaining an internal balance of various factors such as temperature and molecular structure. While the material substances that compose a living being may be constantly replaced by nearly identical ones, an organism continues functioning with the same identity as long as the pattern is kept sufficiently intact. Since patterns can be transmitted, modified, or duplicated, they are therefore a kind of information. Based on this, Wiener suggests it should be theoretically possible to transmit the entirety of a living person as a message (which is practically indistinguishable from the concept of physicalteleportation) – although he admits that the obstacles to such a process would be great, because of the enormous amount of information embodied in a person, and the difficulty of reading or writing it. According to Wiener, the "progress" of human society as we conceive it today did not exist until four hundred years ago, but now we have entered "a special period in the history of the world" (p. 46). The progress of recent centuries has changed our world so dramatically that humans are being forced to adapt to the new environmental order or disorder that we are still creating. Wiener believes the quickness and range of our adaptability has always been the strong point of the human species, which distinguishes us from even the most intelligent of other living creatures. Our advancements in technology have created new opportunities along with new restrictions. Increasingly better sensory mechanics will allow machines to react to changes in stimuli, and adapt more efficiently to their surroundings. This type of machine will be most useful in factory assembly lines, giving humans the freedom to supervise and use their creative abilities constructively. Medicine can benefit from robotic advances in the design of prostheses for the handicapped. Wiener mentions theVocorder, a device fromBell Telephone Companythat creates visual speech. He discusses the possibility of creating an automated prosthesis that inputs speech directly into the brain for processing, effectively giving deaf individuals the ability to "hear" speech again. Progress in these areas is ongoing and rapid, exemplified by such devices as the palatometer, a new device created to replace a damaged larynx; it uses a speech synthesizer to recreate words based on its ability to monitor tongue movements. This device effectively rids people with damaged larynxes of the robotic tones associated with artificial speech synthesizers (like the one famously used by disabled physicistStephen Hawking), enabling people to have more natural social interactions. Machines, in Wiener's opinion, are meant to interact harmoniously with humanity and provide respite from the industrial trap we have made for ourselves. Wiener describes the automaton as inherently necessary to humanity's societal evolution. 
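The feedback and homeostasis ideas described above can be made concrete with a toy control loop. The sketch below is a hypothetical illustration, not a model from the book: a "machine" repeatedly compares an internal temperature with a set point and applies a correction proportional to the error, the same circular measure-compare-correct structure Wiener calls feedback.

# Toy negative-feedback loop: hold an internal temperature near a set point
# despite continual heat loss to a colder environment (homeostasis-like).
SET_POINT = 37.0   # desired internal temperature
GAIN = 0.4         # how strongly the controller responds to the error
LOSS_RATE = 0.1    # fraction of the temperature gap lost to the environment per step
ENV_TEMP = 20.0

temp = 30.0        # start well below the set point
for step in range(26):
    error = SET_POINT - temp                          # feedback: compare outcome with goal
    temp += GAIN * error - LOSS_RATE * (temp - ENV_TEMP)
    if step % 5 == 0:
        print(f"step {step:2d}: temperature = {temp:.2f}")

With purely proportional feedback the temperature settles slightly below the set point; the residual offset shrinks as the gain increases, which is one reason practical controllers add further corrective terms. The point here is only the loop structure: the output is measured, compared with a goal, and fed back into the next action.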
People could be free to expand their minds, pursue artistic careers, while automatons take over assembly line production to create necessary commodities. These machines must be "used for the benefit of man, for increasing his leisure and enriching his spiritual life, rather than merely for profits and the worship of the machine as a new brazen calf" (p. 162). Though hopeful that humanity will ultimately prosper by the use of automatons, he mentions a few ways this relationship with technology could be detrimental. The immediate danger is that just as less sophisticated machines deprived the laborer of the only commodity he had to trade, so in the near future almost all workers would be out of a job. Automatons must not be taken for granted, because with advances in technology that allow them to learn, the machines may be able to escape human control if humans do not continue proper supervision of them. We might become entirely dependent on them, or even controlled by them. There is danger in trusting decisions to something which cannot think abstractly, and may therefore be unlikely to identify with intellectual human values which are not purely utilitarian. Norbert Wiener's book was the forerunner of studies in cybernetics, and has influenced many theorists. It has impacted the fields of computers and technology, engineering, biology, sociology, and a broad range of other sciences. Numerous books have been published in relation to cybernetics theory which explore alternative concepts and models of feedback, human/machine relationships, systems science, and industrial advancement.William Ross Ashby, another founder of cybernetics, wrote the bookIntroduction to Cybernetics, which presents many new interpretations and definitions. Other theorists have produced writings on systems, communication, and the human experience in cybernetics.N. Katherine Hayles, author ofHow We Became Posthuman: Virtual Bodies in Cybernetics, Literature, and Informatics, describes the effects of technology in the age of virtual information and what it means for humans to live in an ever-advancing society. TheAmerican Society for Cybernetics(ASC) is a research association founded in 1964, the same year Wiener died, and is dedicated to the cooperative understanding and further improvement of cybernetics theory. The Human Use of Human Beings was translated to French in 1950 asCybernétique et société(Paris : 10/18). In 2025, a 75th anniversary edition ofThe Human Use of Human Beingswas published byMariner BooksClassics, featuring a new introduction byBrian Christian, author ofThe Most Human HumanandThe Alignment Problem. In his introduction, Christian describes Wiener as the "progenitor of contemporaryAI-safetydiscourse" and examines how Wiener's pioneering views on human-machine relationships have become increasingly relevant in the age of artificial intelligence and ubiquitous computing. The anniversary edition emphasizes how cybernetics itself as well as Wiener's prescient warnings about technology continue to illuminate contemporary challenges related to AI, automation, and digital communication, demonstrating the enduring significance of his work in an era where many of his predictions have materialized.[1]
https://en.wikipedia.org/wiki/The_Human_Use_of_Human_Beings
Industrial ecology(IE) is the study ofmaterialandenergy flowsthrough industrial systems. Theglobalindustrial economycan be modelled as a network of industrial processes that extract resources from theEarthand transform those resources intoby-products,productsandserviceswhich can be bought and sold to meet the needs of humanity. Industrial ecology seeks to quantify the material flows and document the industrial processes that make modern society function. Industrial ecologists are often concerned with the impacts that industrial activities have on theenvironment, with use of the planet's supply ofnatural resources, and with problems ofwaste disposal. Industrial ecology is a young but growing multidisciplinary field of research which combines aspects ofengineering,economics,sociology,toxicologyand thenatural sciences. Industrial ecology has been defined as a "systems-based, multidisciplinary discourse that seeks to understand emergent behavior of complex integrated human/natural systems".[1]The field approaches issues ofsustainabilityby examining problems from multiple perspectives, usually involving aspects of sociology, theenvironment,economyandtechnology.[2][3]The name comes from the idea that the analogy of natural systems should be used as an aid in understanding how to design sustainable industrial systems.[4] Industrial ecology is concerned with the shifting of industrial process from linear (open loop) systems, in which resource and capital investments move through the system to become waste, to a closed loop system where wastes can become inputs for new processes. Much of the research focuses on the following areas:[5] Industrial ecology seeks to understand the way in which industrial systems (for example a factory, anecoregion, or national or global economy) interact with thebiosphere. Natural ecosystems provide a metaphor for understanding how different parts of industrial systems interact with one another, in an "ecosystem" based on resources andinfrastructural capitalrather than onnatural capital. It seeks to exploit the idea that natural systems do not have waste in them to inspiresustainable design. Along with more generalenergy conservationand material conservation goals, and redefining related internationaltrademarkets andproduct stewardshiprelations strictly as aservice economy, industrial ecology is one of the four objectives ofNatural Capitalism. This strategy discourages forms of amoral purchasing arising from ignorance of what goes on at a distance and implies apolitical economythat valuesnatural capitalhighly and relies on more instructional capital to design and maintain each unique industrial ecology. Industrial ecology was popularized in 1989 in aScientific Americanarticle byRobert Froschand Nicholas E. Gallopoulos.[6]Frosch and Gallopoulos' vision was "why would not our industrial system behave like anecosystem, where the wastes of a species may beresourceto another species? Why would not the outputs of an industry be the inputs of another, thus reducing use ofraw materials,pollution, and saving onwaste treatment?"[4]A notable example resides in a Danish industrial park in the city ofKalundborg. Here several linkages ofbyproductsandwaste heatcan be found between numerous entities such as a large power plant, an oil refinery, a pharmaceutical plant, a plasterboard factory, an enzyme manufacturer, a waste company and the city itself.[7]Another example is the Rantasalmi EIP in Rantasalmi, Finland. 
While this country has had previous organically formed EIP's, the park at Rantasalmi is Finland's first planned EIP. The scientific field of industrial ecology has grown quickly. The Journal of Industrial Ecology (since 1997), theInternational Society for Industrial Ecology(since 2001), and the journal Progress in Industrial Ecology (since 2004) give Industrial Ecology a strong and dynamic position in the internationalscientific community. Industrial ecology principles are also emerging in various policy realms such as the idea of thecircular economy. Although the definition of the circular economy has yet to be formalized, generally the focus is on strategies such as creating a circular flow of materials, and cascading energy flows. An example of this would be using waste heat from one process to run another process that requires a lower temperature. The hope is that strategies such as this will create a more efficient economy with fewer pollutants and other unwanted by-products.[8] TheKalundborgindustrial park is located in Denmark. This industrial park is special because companies reuse each other's waste (which then becomes by-products). For example, the Energy E2Asnæs Power Stationproducesgypsumas a by-product of the electricity generation process; this gypsum becomes a resource for the BPB Gyproc A/S which producesplasterboards.[7]This is one example of a system inspired by the biosphere-technosphere metaphor: in ecosystems, the waste from one organism is used as inputs to other organisms; in industrial systems, waste from a company is used as a resource by others. Apart from the direct benefit of incorporating waste into the loop, the use of an eco-industrial park can be a means of making renewable energy generating plants, likeSolar PV, more economical and environmentally friendly. In essence, this assists the growth of therenewable energy industryand the environmental benefits that come with replacing fossil-fuels.[9] Additional examples of industrial ecology include: Theecosystemmetaphor popularized byFroschand Gallopoulos[4]has been a valuable creative tool for helping researchers look for novel solutions to difficult problems. Recently, it has been pointed out that this metaphor is based largely on a model of classical ecology, and that advancements in understanding ecology based oncomplexity sciencehave been made by researchers such asC. S. Holling,James J. Kay,[24]and further advanced in terms of contemporary ecology by others.[25][26][27][28]For industrial ecology, this may mean a shift from a more mechanistic view of systems, to one wheresustainabilityis viewed as anemergentproperty of a complex system.[29][30]To explore this further, several researchers are working withagent based modeling techniques.[31][32] Exergyanalysis is performed in the field of industrial ecology to use energy more efficiently.[33]The termexergywas coined byZoran Rantin 1956, but the concept was developed byJ. Willard Gibbs. In recent decades, utilization of exergy has spread outside physics and engineering to the fields of industrial ecology,ecological economics,systems ecology, andenergetics.
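The by-product exchanges described above can be sketched as a small material-flow network. The facility names, quantities, and the balance() helper below are hypothetical illustrations rather than data from the actual Kalundborg park; the aim is only to show how routing one plant's by-product into another plant's input reduces both virgin-material demand and residual waste.

# Hypothetical by-product exchanges, loosely modelled on the Kalundborg idea:
# (producer, by-product, quantity in arbitrary tonnes, consumer).
exchanges = [
    ("power_station", "gypsum",     80, "plasterboard_plant"),
    ("power_station", "waste_heat", 50, "district_heating"),
    ("refinery",      "sulfur",     20, "fertilizer_plant"),
]

# Each consumer's total input requirement (also hypothetical).
demand = {"plasterboard_plant": 100, "district_heating": 50, "fertilizer_plant": 30}

def balance(exchanges, demand):
    """Return remaining virgin-material demand and leftover waste given the exchanges."""
    reused = {}
    for _, _, qty, consumer in exchanges:
        reused[consumer] = reused.get(consumer, 0) + qty
    virgin = {c: max(d - reused.get(c, 0), 0) for c, d in demand.items()}
    wasted = sum(max(reused.get(c, 0) - d, 0) for c, d in demand.items())
    return virgin, wasted

virgin, wasted = balance(exchanges, demand)
print("virgin inputs still required:", virgin)     # e.g. plasterboard_plant: 20
print("by-product left over as waste:", wasted)    # 0 in this toy network

In this toy network the plasterboard plant still needs 20 tonnes of virgin input and no by-product is left over as waste; severing the exchanges would raise virgin demand to the full 180 tonnes and turn all 150 tonnes of by-product into waste.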
https://en.wikipedia.org/wiki/Industrial_ecology
Principia Cyberneticais an international cooperation of scientists in the field ofcyberneticsandsystems science, especially known for their website, Principia Cybernetica. They have dedicated their organization to what they call "a computer-supported evolutionary-systemic philosophy, in the context of the transdisciplinary academic fields of Systems Science and Cybernetics".[1] Principia Cybernetica was initiated in 1989 in the USA byCliff JoslynandValentin Turchin, and a year later broadened to Europe withFrancis Heylighenfrom Belgium joining their cooperation. Major activities of the Principia Cybernetica Project include the development of the Principia Cybernetica Web and the organization of workshops and symposia. The Principia Cybernetica Web,[4]which went online in 1993, is one of the first complex websites in the world. It contains content oncybernetics,systems theory,complexity, and related approaches. Especially in the 1990s, Principia Cybernetica organized a series of workshops and international symposia oncyberneticthemes.[5]The 1st Principia Cybernetica Workshop, held in June 1991 inBrussels, was attended by manycyberneticists, including Harry Bronitz,Gordon Pask, J.L. Elohim, Robert Glueck,Ranulph Glanville, Annemie Van Kerkhoven, Don McNeil, Elan Moritz, Cliff Joslyn, A. Comhaire andValentin Turchin.[5]
https://en.wikipedia.org/wiki/Principia_Cybernetica
Asuperorganism, orsupraorganism,[1]is a group ofsynergeticallyinteracting organisms of the samespecies. Acommunityof synergetically interacting organisms of different species is called aholobiont. The term superorganism is used most often to describe a social unit ofeusocialanimals in whichdivision of labouris highly specialised and individuals cannot survive by themselves for extended periods.Antsare the best-known example of such a superorganism. A superorganism can be defined as "a collection of agents which can act in concert to produce phenomena governed by the collective",[2]phenomena being any activity "the hive wants" such as ants collecting food andavoiding predators,[3][4]or bees choosing a new nest site.[5]In challenging environments, microorganisms collaborate and evolve together to process unlikely sources of nutrients such as methane. This process, calledsyntrophy("eating together"), might be linked to the evolution of eukaryote cells and involved in the emergence or maintenance of life forms in challenging environments on Earth and possibly other planets.[6]Superorganisms tend to exhibithomeostasis,power lawscaling, persistent disequilibrium and emergent behaviours.[7] The term was coined in 1789 byJames Hutton, the "father of geology", to refer toEarthin the context ofgeophysiology. TheGaia hypothesisofJames Lovelock,[8]andLynn Margulis, as well as the work of Hutton,Vladimir VernadskyandGuy Murchie, has suggested that thebiosphereitself can be considered a superorganism, but that has been disputed.[9]This view relates tosystems theoryand the dynamics of acomplex system. The concept of a superorganism raises the question of what is to be considered anindividual. Toby Tyrrell's critique of the Gaia hypothesis argues that Earth's climate system does not resemble an animal's physiological system. Planetary biospheres are not tightly regulated in the same way that animal bodies are: "planets, unlike animals, are not products of evolution. Therefore we are entitled to be highly skeptical (or even outright dismissive) about whether to expect something akin to a 'superorganism'". He concludes that "the superorganism analogy is unwarranted".[9] Some scientists have suggested that individual human beings can be thought of as "superorganisms",[10]as a typical human digestive system contains 10¹³ to 10¹⁴ microorganisms whose collectivegenome, themicrobiomestudied by theHuman Microbiome Project, contains at least 100 times as many genes as the human genome itself.[11][12]Salvucci wrote that the superorganism is another level of integration that is observed in nature. These levels include the genomic, the organismal and the ecological levels. The genomic structure of organisms reveals the fundamental role of integration and gene shuffling over the course of evolution.[13] The 19th-century thinkerHerbert Spencercoined the termsuper-organicto focus on social organization (the first chapter of hisPrinciples ofSociologyis entitled "Super-organic Evolution"[14]), though this was apparently a distinction between the organic and the social,notan identity: Spencer explored theholisticnature of society as asocial organismwhile distinguishing the ways in which society did not behave like an organism.[15]For Spencer, the super-organic was anemergentproperty of interacting organisms, that is, human beings. And, as has been argued by D. C.
Phillips, there is a "difference between emergence and reductionism".[16] The economistCarl Mengerexpanded upon the evolutionary nature of much social growth but never abandonedmethodological individualism. Many social institutions arose, Menger argued, not as "the result of socially teleological causes, but the unintended result of innumerable efforts of economic subjects pursuing 'individual' interests".[17] Both Spencer and Menger argued that because individuals choose and act, any social whole should be considered less than an organism, but Menger emphasized that more strongly. Spencer used the organistic idea to engage in extended analysis ofsocial structureand conceded that it was primarily an analogy. For Spencer, the idea of the super-organic best designated a distinct level ofsocial realityabove that of biology and psychology, not a one-to-one identity with an organism. Nevertheless, Spencer maintained that "every organism of appreciable size is a society", which has suggested to some that the issue may be terminological.[18] The termsuperorganicwas adopted by the anthropologistAlfred L. Kroeberin 1917.[19]Social aspects of the superorganism concept are analysed byAlan Marshallin his 2002 book "The Unity of Nature".[20]Finally, recent work in social psychology has offered the superorganism metaphor as a unifying framework to understand diverse aspects of human sociality, such as religion, conformity, and social identity processes.[21] Superorganisms are important incybernetics, particularlybiocybernetics, since they are capable of the so-called "distributed intelligence", a system composed of individual agents that have limited intelligence and information.[22]They can pool resources and so can complete goals that are beyond reach of the individuals on their own.[22]Existence of suchbehaviorin organisms has many implications for military and management applications and is being actively researched.[22] Superorganisms are also considered dependent upon cybernetic governance and processes.[23]This is based on the idea that a biological system – in order to be effective – needs a sub-system of cybernetic communications and control.[24]This is demonstrated in the way a mole rat colony uses functional synergy and cybernetic processes together.[25] Joël de Rosnayalso introduced a concept called "cybionte" to describe cybernetic superorganism.[26]The notion associates superorganism withchaos theory, multimedia technology, and other new developments. If Col. Thorpe [of the USDARPA] has his way, the four divisions of the US military and hundreds of industrial subcontractors will become a single interconnected superorganism. The immediate step to this world of distributed intelligence is an engineering protocol developed by a consortium of defense simulation centers in Orlando Florida ...
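As a rough illustration of such distributed intelligence, the sketch below (hypothetical code, not drawn from the cited research) pools many noisy individual estimates of a single quantity. No agent senses the target well on its own, yet the averaged collective estimate is far more accurate, which is one simple sense in which agents with limited information can jointly achieve what none could alone.

import random

random.seed(42)
TARGET = 7.3            # the quantity the colony needs to estimate (e.g. distance to food)
NOISE = 2.0             # each agent senses the target only very roughly

def agent_estimate() -> float:
    """One agent's noisy, low-quality reading of the target."""
    return TARGET + random.gauss(0.0, NOISE)

agents = [agent_estimate() for _ in range(500)]
collective = sum(agents) / len(agents)

worst_individual_error = max(abs(a - TARGET) for a in agents)
print(f"individual errors range up to {worst_individual_error:.2f}")
print(f"collective estimate: {collective:.2f} (error {abs(collective - TARGET):.2f})")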
https://en.wikipedia.org/wiki/Superorganism
Synergetics is an interdisciplinary science explaining the formation and self-organization of patterns and structures in open systems far from thermodynamic equilibrium. It was founded by Hermann Haken, inspired by laser theory. Haken's interpretation of the laser principles as self-organization of non-equilibrium systems paved the way at the end of the 1960s to the development of synergetics. One of his successful popular books is Erfolgsgeheimnisse der Natur, translated into English as The Science of Structure: Synergetics.[1]

Self-organization requires a 'macroscopic' system consisting of many nonlinearly interacting subsystems. Depending on the external control parameters (environment, energy fluxes), self-organization takes place. Essential in synergetics is the order-parameter concept, which was originally introduced in the Ginzburg–Landau theory in order to describe phase transitions in thermodynamics. The order-parameter concept is generalized by Haken to the "enslaving principle", which states that the dynamics of fast-relaxing (stable) modes is completely determined by the 'slow' dynamics of, as a rule, only a few 'order parameters' (unstable modes). The order parameters can be interpreted as the amplitudes of the unstable modes determining the macroscopic pattern. As a consequence, self-organization means an enormous reduction of degrees of freedom (entropy) of the system, which macroscopically reveals an increase of 'order' (pattern formation). This far-reaching macroscopic order is independent of the details of the microscopic interactions of the subsystems. This supposedly explains the self-organization of patterns in so many different systems in physics, chemistry and biology.

[...] the statistical properties of laser light change qualitatively at the laser threshold. Below laser threshold, noise increases more and more, while above threshold it decreases again. [...] Below laser threshold, light consists of individual wave tracks which are emitted from the individual atoms independently of each other. Above laser threshold, a practically infinitely long wave track is produced. In order to make contact with other processes of self-organization, let us interpret the processes in a lamp or in a laser by means of Bohr's model of the atom. A lamp produces its light in such a way that the excited electrons of the atoms make their transitions from the outer orbit to the inner orbit entirely independently of each other. On the other hand, the properties of laser light can be understood only if we assume that the transitions of the individual electrons occur in a correlated fashion. [...] Above laser threshold the coherent field grows more and more and it can slave the degrees of freedom of the dipole moments and of the inversion. Within synergetics it has turned out that [this relation] is a quite typical equation describing effects of self-organization. [...] This equation tells us that the amplitude of the dipoles, which is proportional to A, is instantaneously given by the field amplitude B(t) (and by the fluctuating force). This is probably the simplest example of a principle which has turned out to be of fundamental importance in synergetics and which is called the slaving principle.
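The slaving principle can be illustrated with a small numerical sketch. The two-variable model below is purely illustrative (a hypothetical fast mode A driven by a prescribed slow order parameter B, with made-up rate constants), not Haken's actual laser equations: when the damping of the fast mode is large, its value at every instant is essentially dictated by the current value of the slow variable, A(t) ≈ B(t)/γ.

    import math

    # Illustrative "slaving" sketch: a strongly damped fast mode A follows a slow
    # order parameter B almost instantaneously (adiabatic elimination).
    gamma_fast = 50.0        # large damping rate of the fast (stable) mode
    omega_slow = 1.0         # slow time scale of the order parameter
    dt, t_end = 1e-4, 5.0

    A = 0.0
    for n in range(int(t_end / dt)):
        t = n * dt
        B = math.sin(omega_slow * t)     # prescribed slow order parameter
        A += (-gamma_fast * A + B) * dt  # fast mode: dA/dt = -gamma*A + B

    # After transients the fast mode is "enslaved": A(t) is approximately B(t)/gamma_fast.
    print(A, math.sin(omega_slow * t_end) / gamma_fast)

In this toy setting the two printed numbers agree to within about one per cent: the fast degree of freedom carries essentially no independent dynamics of its own, which is the content of the slaving principle.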
https://en.wikipedia.org/wiki/Synergetics_(Haken)
Tektology(sometimes transliterated astectology) is a term used byAlexander Bogdanovto describe a new universal science that consisted of unifying all social, biological and physical sciences by considering them as systems of relationships and by seeking the organizational principles that underlie all systems. Tektology is now regarded as a precursor ofsystems theoryand related aspects ofsynergetics.[1]The word "tectology" was introduced byErnst Haeckel,[2]but Bogdanov used it for a different purpose.[3][4] His workTektology: Universal Organization Science, published in Russia between 1912 and 1917, anticipated many of the ideas that were popularized later byNorbert WienerinCyberneticsandLudwig von Bertalanffyin theGeneral Systems Theory. There are suggestions that both Wiener and von Bertalanffy might have read the German edition ofTektologywhich was published in 1928.[5][6] InSources and Precursors of Bogdanov's Tectology, James White (1998) acknowledged the intellectual debt of Bogdanov's work on tectology to the ideas ofLudwig Noiré. His work drew on the ideas of Noiré who in the 1870s also attempted to construct a monistic system using the principle of conservation of energy as one of its structural elements. More recently, in her 2016 bookMolecular Red: Theory for the Anthropocene,McKenzie Warkattempts to establish Bogdanov as a precursor to contemporaryAnthropocenetheorists, likeDonna Haraway, by considering Bogdanov's works of fiction as an extension of his general work in Tectology. In this, Wark also considers Tectology as an alternative to the Soviet state philosophy ofdialectical materialism, which may help in explainingLenin's vehement opposition to Tectology in his ownMaterialism and Empirio-Criticism. According to Bogdanov[7]"the aim of Tectology is the systematization of organizedexperience", through the identification of universal organizationalprinciples: "all things are organizational, allcomplexescould only be understood through their organizational character."[8]Bogdanov considered that any complex should correspond to its environment and adapt to it. A stable and organized complex is greater than the sum of its parts. In Tectology, the term 'stability' refers not to adynamic stability, but to the possibility of preserving the complex in the given environment. A 'complex' is not identical to a 'complicated, a hard-to-comprehend, largeunit. In Tectology, Bogdanov made the first 'modern' attempt to formulate the most generallawsoforganization. Tectology addressed issues such asholistic,emergentphenomena and systemic development. Tectology as a constructive science built elements into a functional entity using general laws of organization. According to his "empirio-monistic" principle (1899), he does not recognize differences betweenobservationandperception[further explanation needed]and thus creates the beginning of a general empirical, trans-disciplinary science of physical organization, as an expedientunityand precursor ofSystems TheoryandHolism. The "whole" in Tectology, and the laws of its integrity, were derived from biological rather than the physicalistic view of the world. 
Regarding the three scientific cycles which comprise the basis of Tectology (mathematical, physico-biological, and natural-philosophical), it is from the physico-biological cycle that the central concepts have been taken and universalized.[citation needed] The starting point in Bogdanov'sUniversal Science of Organization - Tectology(1913-1922) was that nature has a general, organized character,with one set of laws of organization for all objects. This set of laws also organizes the internal development of the complex units, as implied bySimona Poustilnik's "macro-paradigm", which induces synergistic consequences into an adaptive assembling phenomenon (1995). Bogdanov's visionary view of nature was one of an 'organization' with interconnected systems.[example needed] Alexander Bogdanov wrote several works about Tectology:
https://en.wikipedia.org/wiki/Tektology
Viable system theory (VST) concerns cybernetic processes in relation to the development/evolution of dynamical systems: it can be used to explain living systems, which are considered to be complex and adaptive, can learn, and are capable of maintaining an autonomous existence, at least within the confines of their constraints. These attributes involve the maintenance of internal stability through adaptation to changing environments. One can distinguish between two strands of such theory: formal systems and principally non-formal systems. Formal viable system theory is normally referred to as viability theory, and provides a mathematical approach to explore the dynamics of complex systems set within the context of control theory. In contrast, principally non-formal viable system theory is concerned with descriptive approaches to the study of viability through the processes of control and communication, though these theories may have mathematical descriptions associated with them.

The concept of viability arose with Stafford Beer in the 1950s through his paradigm of management systems.[1][2][3] Its formal relative, viability theory, began its life in 1976 with the mathematical interpretation of a book by Jacques Monod published in 1971 and entitled Chance and Necessity, which concerned processes of evolution.[4] Viability theory is concerned with the dynamic adaptation of uncertain evolutionary systems to environments defined by constraints, the values of which determine the viability of the system. Both formal and non-formal approaches ultimately concern the structure and evolutionary dynamics of viability in complex systems. An alternative non-formal paradigm arose in the late 1980s through the work of Eric Schwarz,[5] which increases the dimensionality of Beer's paradigm.[6][7]

The viable system theory of Beer is best known through his viable system model[8] and is concerned with viable organisations capable of evolving.[9] Through both internal and external analysis it is possible to identify the relationships and modes of behaviour that constitute viability. The model is underpinned by the realisation that organisations are complex, and recognising the existence of complexity is an inherent part of the analysis. Beer's management systems paradigm is underpinned by a set of propositions, sometimes referred to as cybernetic laws. Sitting within this is his viable system model (VSM), and one of its laws is a principle of recursion, so that just as the model can be applied to divisions in a department, it can also be applied to the departments themselves. This is permitted through Beer's viability law, which states that every viable system contains and is contained in a viable system.[10] The cybernetic laws are applied to all types of human activity systems,[11] such as organisations and institutions. Paradigms are concerned not only with theory but also with modes of behaviour within inquiry. One significant part of Beer's paradigm is his Viable System Model (VSM), which addresses problem situations in terms of control and communication processes, seeking to ensure system viability within the object of attention. Another is Beer's Syntegrity protocol, which centres on the means by which effective communications in complex situations can occur. VSM has been used successfully to diagnose organisational pathologies (conditions of social ill-health).
The model involves not only an operative system that has both structure (e.g., divisions in an organisation or departments in a division) from which behaviour emanates that is directed towards an environment, but also a meta-system, which some have called the observer of the system.[12] The system and meta-system are ontologically different, so that for instance where in a production company the system is concerned with production processes and their immediate management, the meta-system is more concerned with the management of the production system as a whole. The connection between the system and meta-system is explained through Beer's cybernetic map.[13] Beer considered that viable social systems should be seen as living systems.[14] Humberto Maturana used the term autopoiesis (self-production) to explain biological living systems, but was reluctant to accept that social systems were living.

The viable system theory of Schwarz is more directed towards the explicit examination of issues of complexity than is that of Beer. The theory begins with the idea of dissipative systems. While all isolated systems conserve energy, in non-isolated systems one can distinguish between conservative systems (in which the total mechanical, i.e. kinetic plus potential, energy is conserved) and dissipative systems (in which part of that energy is changed in form and lost, for instance as heat). If dissipative systems are far from equilibrium, they "try" to recover equilibrium so quickly that they form dissipative structures to accelerate the process. Dissipative systems can create structured spots where entropy locally decreases and so negentropy locally increases to generate order and organisation. Dissipative systems involve far-from-equilibrium processes that are inherently dynamically unstable, though they survive through the creation of order that lies beyond the thresholds of instability.

Schwarz explicitly defined the living system in terms of its metastructure,[15] involving a system, a metasystem and a meta-meta-system, this latter being an essential attribute. As with Beer, the system is concerned with operative attributes. Schwarz's meta-system is essentially concerned with relationships, and the meta-meta-system is concerned with all forms of knowledge and its acquisition. Thus, where in Beer's theory learning processes can only be discussed in terms of implicit processes, in Schwarz's theory they can be discussed in explicit terms. Schwarz's living system model is a summary of much of the knowledge of complex adaptive systems, but succinctly compressed as a graphical generic metamodel. It is this capacity of compression that establishes it as a new theoretical structure, one that goes beyond the concept of autopoiesis/self-production proposed by Humberto Maturana, through the concept of autogenesis. While the concept of autogenesis has not had the collective coherence that autopoiesis has,[16][17] Schwarz clearly defined it as a network of self-creation processes and firmly integrated it with relevant theory in complexity in a way not previously done. The outcome illustrates how a complex and adaptive viable system is able to survive, maintaining an autonomous durable existence within the confines of its own constraints. The nature of viable systems is that they should have at least potential independence in their processes of regulation, organisation, production, and cognition. The generic model provides a holistic relationship between the attributes that explains the nature of viable systems and how they survive.
It addresses the emergence and possible evolution of organisations towards complexity and autonomy intended to refer to any domain of system (e.g., biological, social, or cognitive). Systems in general, but also human activity systems, are able to survive (in other words they become viable) when they develop: (a) patterns of self-organisation that lead to self-organisation through morphogenesis and complexity; (b) patterns for long term evolution towards autonomy; (c) patterns that lead to the functioning of viable systems. This theory was intended to embrace the dynamics of dissipative systems using three planes. Each of the three planes (illustrated in Figure 1 below) is an independent ontological domain, interactively connected through networks of processes, and it shows the basic ontological structure of the viable system. Connected with this is an evolutionary spiral of self-organisation (adapted from Schwarz's 1997 paper), shown in Figure 2 below. Here, there are 4 phases or modes that a viable system can pass through. Mode 3 occurs with one of three possible outcomes (trifurcation): system death when viability is lost; more of the same; and metamorphosis when the viable system survives because it changes form. The dynamic process that viable living systems have, as they move from stability to instability and back again, is explained in Table 1, referring to aspects of both Figures 1 and 2. Schwarz's VST has been further developed, set within a social knowledge context, and formulated asautonomous agency theory.[18][19]
https://en.wikipedia.org/wiki/Viable_system_theory
Tactile technology is the integration of multi-sensory triggers within physical objects, allowing "real world" interactions with technology. It is similar to haptic technology, as both focus on touch interactions with technology, but whereas haptic is simulated touch, tactile is physical touch. Rather than using a digital interface to interact with the physical world, as augmented reality does, tactile technology involves a physical interaction that triggers a digital response. The word "tactile" means "related to the sense of touch"[1] or "that can be perceived by the touch; tangible".[2]

Touch is incredibly important to human communication and learning, but increasingly, most of the content people interact with is purely visual. Tactile technology presents a way to combine advances in technology with the sense of touch. Studies show that humans work and learn better in a multi-sensory environment. Something as simple as having toys (like the fidget spinner) in the workplace, or using physical props to teach children in schools, can have significant impacts on productivity and information retention according to multisensory learning theory. As stated in one article, "Many teachers are turning to tactile learning and evolving technologies as a way to engage students across different learning styles and needs. As part of a multi-sensory learning approach, tactile technology can help students across a range of skill development areas and a broad range of subjects".[3][better source needed]

At the simplest level, a physical trigger that can be used to create a technological reaction is nothing new: it can be as basic as a button or switch. More modern versions of buttons include conductive paint[4] and projectors; both are tools that can make a non-digital surface act like a touchscreen, turning anything from tables to sculptures into interactive displays. Games are an example of a field that transformed from entirely tactile to largely digital, and where the trend is now turning back to a more multi-sensory experience. With video games, players want the element of touch that controllers provide, while researchers suggest that incorporating an element of physical interaction into digital games for children may mitigate concerns about excessive screen time.

Technology is increasingly being incorporated into physical objects that we already use, and one of the most significant examples of this is in the textile industry. Companies are creating curtains that control light or detect smoke, clothing that monitors temperature, or fabric that integrates lighting. This is a swiftly growing field of "smart" apparel and home goods.[11][12][13][14] This is also an example of wearable technology. In order to experience art or communicate information, art galleries and museums are increasingly incorporating technology, especially as it makes art and education more immersive and personalized.
https://en.wikipedia.org/wiki/Tactile_technology
Pressure measurementis the measurement of an appliedforceby afluid(liquidorgas) on a surface.Pressureis typically measured inunitsof force per unit ofsurface area. Many techniques have been developed for the measurement of pressure andvacuum. Instruments used to measure and display pressure mechanically are calledpressure gauges,vacuum gaugesorcompound gauges(vacuum & pressure). The widely used Bourdon gauge is a mechanical device, which both measures and indicates and is probably the best known type of gauge. A vacuum gauge is used to measure pressures lower than the ambientatmospheric pressure, which is set as the zero point, in negative values (for instance, −1 bar or −760mmHgequals total vacuum). Most gauges measure pressure relative to atmospheric pressure as the zero point, so this form of reading is simply referred to as "gauge pressure". However, anything greater than total vacuum is technically a form of pressure. For very low pressures, a gauge that uses total vacuum as the zero point reference must be used, giving pressure reading as an absolute pressure. Other methods of pressure measurement involve sensors that can transmit the pressure reading to a remote indicator or control system (telemetry). Everyday pressure measurements, such as for vehicle tire pressure, are usually made relative to ambient air pressure. In other cases measurements are made relative to a vacuum or to some other specific reference. When distinguishing between these zero references, the following terms are used: The zero reference in use is usually implied by context, and these words are added only when clarification is needed.Tire pressureandblood pressureare gauge pressures by convention, whileatmospheric pressures, deep vacuum pressures, andaltimeter pressuresmust be absolute. For mostworking fluidswhere a fluid exists in aclosed system, gauge pressure measurement prevails. Pressure instruments connected to the system will indicate pressures relative to the current atmospheric pressure. The situation changes when extreme vacuum pressures are measured, then absolute pressures are typically used instead and measuring instruments used will be different. Differential pressures are commonly used in industrial process systems. Differential pressure gauges have two inlet ports, each connected to one of the volumes whose pressure is to be monitored. In effect, such a gauge performs the mathematical operation of subtraction through mechanical means, obviating the need for an operator or control system to watch two separate gauges and determine the difference in readings. Moderatevacuum pressurereadings can be ambiguous without the proper context, as they may represent absolute pressure or gauge pressure without a negative sign. Thus a vacuum of 26 inHg gauge is equivalent to an absolute pressure of 4 inHg, calculated as 30 inHg (typical atmospheric pressure) − 26 inHg (gauge pressure). Atmospheric pressure is typically about 100kPaat sea level, but is variable with altitude and weather. If the absolute pressure of a fluid stays constant, the gauge pressure of the same fluid will vary as atmospheric pressure changes. For example, when a car drives up a mountain, the (gauge) tire pressure goes up because atmospheric pressure goes down. The absolute pressure in the tire is essentially unchanged. Using atmospheric pressure as reference is usually signified by a "g" for gauge after the pressure unit, e.g. 70 psig, which means that the pressure measured is the total pressure minusatmospheric pressure. 
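Since the zero references above differ only by the local atmospheric pressure, converting between gauge and absolute readings is a simple offset. The helper below is an illustrative sketch: the function names and the fixed 101.325 kPa standard-atmosphere default are my own choices, and a real instrument would use the actual local barometric pressure rather than a constant.

    STANDARD_ATMOSPHERE_KPA = 101.325  # standard atmosphere; the local value varies with altitude and weather

    def gauge_to_absolute(p_gauge_kpa, p_atm_kpa=STANDARD_ATMOSPHERE_KPA):
        """Absolute pressure = gauge pressure + local atmospheric pressure."""
        return p_gauge_kpa + p_atm_kpa

    def absolute_to_gauge(p_abs_kpa, p_atm_kpa=STANDARD_ATMOSPHERE_KPA):
        """Gauge pressure = absolute pressure - local atmospheric pressure (negative below ambient)."""
        return p_abs_kpa - p_atm_kpa

    print(gauge_to_absolute(220.0))  # a tire at 220 kPa gauge holds ~321.3 kPa absolute
    print(absolute_to_gauge(0.0))    # a perfect vacuum reads ~-101.3 kPa gauge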
There are two types of gauge reference pressure: vented gauge (vg) and sealed gauge (sg). A vented-gaugepressure transmitter, for example, allows the outside air pressure to be exposed to the negative side of the pressure-sensing diaphragm, through a vented cable or a hole on the side of the device, so that it always measures the pressure referred to ambientbarometric pressure. Thus a vented-gauge referencepressure sensorshould always read zero pressure when the process pressure connection is held open to the air. A sealed gauge reference is very similar, except that atmospheric pressure is sealed on the negative side of the diaphragm. This is usually adopted on high pressure ranges, such ashydraulics, where atmospheric pressure changes will have a negligible effect on the accuracy of the reading, so venting is not necessary. This also allows some manufacturers to provide secondary pressure containment as an extra precaution for pressure equipment safety if the burst pressure of the primary pressure sensingdiaphragmis exceeded. There is another way of creating a sealed gauge reference, and this is to seal a highvacuumon the reverse side of the sensing diaphragm. Then the output signal is offset, so the pressure sensor reads close to zero when measuring atmospheric pressure. A sealed gauge referencepressure transducerwill never read exactly zero because atmospheric pressure is always changing and the reference in this case is fixed at 1 bar. To produce anabsolute pressure sensor, the manufacturer seals a high vacuum behind the sensing diaphragm. If the process-pressure connection of an absolute-pressure transmitter is open to the air, it will read the actualbarometric pressure. Asealed pressure sensoris similar to a gauge pressure sensor except that it measures pressure relative to some fixed pressure rather than the ambient atmospheric pressure (which varies according to the location and the weather). For much of human history, the pressure of gases like air was ignored, denied, or taken for granted, but as early as the 6th century BC, Greek philosopherAnaximenesofMiletusclaimed that all things are made of air that is simply changed by varying levels of pressure. He could observe water evaporating, changing to a gas, and felt that this applied even to solid matter. More condensed air made colder, heavier objects, and expanded air made lighter, hotter objects. This was akin to how gases really do become less dense when warmer, more dense when cooler. In the 17th century,Evangelista Torricelliconducted experiments with mercury that allowed him to measure the presence of air. He would dip a glass tube, closed at one end, into a bowl of mercury and raise the closed end up out of it, keeping the open end submerged. The weight of the mercury would pull it down, leaving a partial vacuum at the far end. This validated his belief that air/gas has mass, creating pressure on things around it. Previously, the more popular conclusion, even forGalileo, was that air was weightless and it is vacuum that provided force, as in a siphon. The discovery helped bring Torricelli to the conclusion: We live submerged at the bottom of an ocean of the element air, which by unquestioned experiments is known to have weight. This test, known asTorricelli's experiment, was essentially the first documented pressure gauge. 
Blaise Pascalwent further, having his brother-in-law try the experiment at different altitudes on a mountain, and finding indeed that the farther down in the ocean of atmosphere, the higher the pressure. TheSIunit for pressure is thepascal(Pa), equal to onenewtonpersquare metre(N·m−2or kg·m−1·s−2). This special name for the unit was added in 1971; before that, pressure in SI was expressed in units such as N·m−2. When indicated, the zero reference is stated in parentheses following the unit, for example 101 kPa (abs). Thepound per square inch(psi) is still in widespread use in the US and Canada, for measuring, for instance, tire pressure. A letter is often appended to the psi unit to indicate the measurement's zero reference; psia for absolute, psig for gauge, psid for differential, although this practice is discouraged by theNIST.[3] Because pressure was once commonly measured by its ability to displace a column of liquid in a manometer, pressures are often expressed as a depth of a particular fluid (e.g.,inches of water). Manometric measurement is the subject ofpressure headcalculations. The most common choices for a manometer's fluid aremercury(Hg) and water; water is nontoxic and readily available, while mercury's density allows for a shorter column (and so a smaller manometer) to measure a given pressure. The abbreviation "W.C." or the words "water column" are often printed on gauges and measurements that use water for the manometer. Fluid density and local gravity can vary from one reading to another depending on local factors, so the height of a fluid column does not define pressure precisely. So measurements in "millimetres of mercury" or "inches of mercury" can be converted to SI units as long as attention is paid to the local factors of fluid density andgravity. Temperature fluctuations change the value of fluid density, while location can affect gravity. Although no longer preferred, thesemanometric unitsare still encountered in many fields.Blood pressureis measured in millimetres of mercury (seetorr) in most of the world,central venous pressureand lung pressures incentimeters of waterare still common, as in settings for CPAP machines. Natural gas pipeline pressures are measured ininches of water, expressed as "inches W.C." Underwater diversuse manometric units: the ambient pressure is measured in units ofmetres sea water(msw) which is defined as equal to one tenth of a bar.[4][5]The unit used in the US is thefoot sea water(fsw), based onstandard gravityand a sea-water density of 64 lb/ft3. According to the US Navy Diving Manual, one fsw equals 0.30643 msw,0.030643bar, or0.44444psi,[4][5]though elsewhere it states that 33 fsw is14.7 psi(one atmosphere), which gives one fsw equal to about 0.445 psi.[6]The msw and fsw are the conventional units for measurement ofdiverpressure exposure used indecompression tablesand the unit of calibration forpneumofathometersandhyperbaric chamberpressure gauges.[7]Both msw and fsw are measured relative to normal atmospheric pressure. In vacuum systems, the unitstorr(millimeter of mercury),micron(micrometer of mercury),[8]and inch of mercury (inHg) are most commonly used. Torr and micron usually indicates an absolute pressure, while inHg usually indicates a gauge pressure. Atmospheric pressures are usually stated using hectopascal (hPa), kilopascal (kPa), millibar (mbar) or atmospheres (atm). In American and Canadian engineering,stressis often measured inkip. Stress is not a true pressure since it is notscalar. 
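Because so many of these units coexist, conversions are usually routed through a single base unit. The sketch below uses the pascal as that base; the conversion factors are standard published values (for example 1 psi = 6,894.757 Pa, and 1 msw = 0.1 bar by definition), while the dictionary and function names are my own illustrative choices.

    # Pascals per unit, used as a common base for conversion.
    PA_PER_UNIT = {
        "Pa": 1.0,
        "kPa": 1000.0,
        "bar": 100000.0,
        "atm": 101325.0,
        "psi": 6894.757,
        "mmHg": 133.322,   # conventional millimetre of mercury (torr)
        "inHg": 3386.39,
        "msw": 10000.0,    # metre of sea water, defined as one tenth of a bar
    }

    def convert(value, from_unit, to_unit):
        """Convert a pressure value between any two units in the table."""
        return value * PA_PER_UNIT[from_unit] / PA_PER_UNIT[to_unit]

    print(convert(1.0, "atm", "psi"))     # ~14.7 psi in one standard atmosphere
    print(convert(760.0, "mmHg", "atm"))  # ~1.0 atm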
In the cgs system the unit of pressure was the barye (ba), equal to 1 dyn·cm⁻². In the mts system, the unit of pressure was the pieze, equal to 1 sthene per square metre. Many other hybrid units are used, such as mmHg/cm² or grams-force/cm² (sometimes as kg/cm² without properly identifying the force units). Using the names kilogram, gram, kilogram-force, or gram-force (or their symbols) as a unit of force is prohibited in SI; the unit of force in SI is the newton (N).

Static pressure is uniform in all directions, so pressure measurements are independent of direction in an immovable (static) fluid. Flow, however, applies additional pressure on surfaces perpendicular to the flow direction, while having little impact on surfaces parallel to the flow direction. This directional component of pressure in a moving (dynamic) fluid is called dynamic pressure. An instrument facing the flow direction measures the sum of the static and dynamic pressures; this measurement is called the total pressure or stagnation pressure. Since dynamic pressure is referenced to static pressure, it is neither gauge nor absolute; it is a differential pressure. While static gauge pressure is of primary importance to determining net loads on pipe walls, dynamic pressure is used to measure flow rates and airspeed. Dynamic pressure can be measured by taking the differential pressure between instruments parallel and perpendicular to the flow. Pitot-static tubes, for example, perform this measurement on airplanes to determine airspeed. The presence of the measuring instrument inevitably acts to divert flow and create turbulence, so its shape is critical to accuracy and the calibration curves are often non-linear.

Example: A water tank has an absolute pressure of 10 atm and the atmospheric pressure is 1 atm. What is the gauge pressure? P_gauge = P_absolute − P_atmospheric = 10 atm − 1 atm = 9 atm. Therefore, the gauge pressure is 9 atm.

A pressure sensor is a device for pressure measurement of gases or liquids. Pressure sensors can alternatively be called pressure transducers, pressure transmitters, pressure senders, pressure indicators, piezometers and manometers, among other names. Pressure is an expression of the force required to stop a fluid from expanding, and is usually stated in terms of force per unit area. A pressure sensor usually acts as a transducer; it generates a signal as a function of the pressure imposed. Pressure sensors can vary drastically in technology, design, performance, application suitability and cost. A conservative estimate would be that there may be over 50 technologies and at least 300 companies making pressure sensors worldwide. There is also a category of pressure sensors that are designed to measure in a dynamic mode, capturing very high speed changes in pressure. Example applications for this type of sensor would be the measurement of combustion pressure in an engine cylinder or in a gas turbine. These sensors are commonly manufactured out of piezoelectric materials such as quartz. Some pressure sensors are pressure switches, which turn on or off at a particular pressure. For example, a water pump can be controlled by a pressure switch so that it starts when water is released from the system, reducing the pressure in a reservoir. Pressure range, sensitivity, dynamic response and cost all vary by several orders of magnitude from one instrument design to the next. The oldest type is the liquid-column manometer (a vertical tube filled with mercury) invented by Evangelista Torricelli in 1643. The U-tube was invented by Christiaan Huygens in 1661.
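Returning to the pitot-static measurement described above: for low-speed (incompressible) flow the differential reading is the dynamic pressure q = ½ρv², so airspeed follows from the square root of the measured difference. The snippet is a hedged sketch; the sea-level air density and the sample reading are assumptions of mine, and real airspeed indicators apply further corrections for compressibility and instrument calibration.

    import math

    def airspeed_from_dynamic_pressure(q_pa, rho_kg_m3=1.225):
        """Airspeed (m/s) from the pitot-static differential pressure, assuming q = 0.5 * rho * v**2."""
        return math.sqrt(2.0 * q_pa / rho_kg_m3)

    print(airspeed_from_dynamic_pressure(613.0))  # ~31.6 m/s for a 613 Pa differential at sea-level air density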
There are two basic categories of analog pressure sensors: force collector and other types. A pressure sensor, a resonantquartz crystalstrain gaugewith aBourdon tubeforce collector, is the critical sensor ofDART.[16]DART detectstsunamiwaves from the bottom of the open ocean. It has a pressure resolution of approximately 1mm of water when measuring pressure at a depth of several kilometers.[17] Hydrostaticgauges (such as the mercury column manometer) compare pressure to the hydrostatic force per unit area at the base of a column of fluid. Hydrostatic gauge measurements are independent of the type of gas being measured, and can be designed to have a very linear calibration. They have poor dynamic response. Piston-type gauges counterbalance the pressure of a fluid with a spring (for exampletire-pressure gaugesof comparatively low accuracy) or a solid weight, in which case it is known as adeadweight testerand may be used for calibration of other gauges. Liquid-column gauges consist of a column of liquid in a tube whose ends are exposed to different pressures. The column will rise or fall until its weight (a force applied due to gravity) is in equilibrium with the pressure differential between the two ends of the tube (a force applied due to fluid pressure). A very simple version is a U-shaped tube half-full of liquid, one side of which is connected to the region of interest while thereferencepressure (which might be theatmospheric pressureor a vacuum) is applied to the other. The difference in liquid levels represents the applied pressure. The pressure exerted by a column of fluid of heighthand densityρis given by the hydrostatic pressure equation,P=hgρ. Therefore, the pressure difference between the applied pressurePaand the reference pressureP0in a U-tube manometer can be found by solvingPa−P0=hgρ. In other words, the pressure on either end of the liquid (shown in blue in the figure) must be balanced (since the liquid is static), and soPa=P0+hgρ. In most liquid-column measurements, the result of the measurement is the heighth, expressed typically in mm, cm, or inches. Thehis also known as thepressure head. When expressed as a pressure head, pressure is specified in units of length and the measurement fluid must be specified. When accuracy is critical, the temperature of the measurement fluid must likewise be specified, because liquid density is a function oftemperature. So, for example, pressure head might be written "742.2 mmHg" or "4.2 inH2Oat 59 °F" for measurements taken with mercury or water as the manometric fluid respectively. The word "gauge" or "vacuum" may be added to such a measurement to distinguish between a pressure above or below the atmospheric pressure. Both mm of mercury and inches of water are common pressure heads, which can be converted to S.I. units of pressure usingunit conversionand the above formulas. If the fluid being measured is significantly dense, hydrostatic corrections may have to be made for the height between the moving surface of the manometer working fluid and the location where the pressure measurement is desired, except when measuring differential pressure of a fluid (for example, across anorifice plateor venturi), in which case the density ρ should be corrected by subtracting the density of the fluid being measured.[18] Although any fluid can be used,mercuryis preferred for its high density (13.534 g/cm3) and lowvapour pressure. 
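A short worked sketch of the U-tube relation Pa = P0 + hgρ given above; the reference pressure, the water density and the function name are illustrative choices of mine.

    G = 9.80665             # standard gravity, m/s^2
    RHO_WATER = 998.0       # kg/m^3 near room temperature
    RHO_MERCURY = 13534.0   # kg/m^3 (13.534 g/cm^3)

    def applied_pressure_pa(h_m, rho, p_ref_pa=101325.0):
        """Applied pressure P_a = P_0 + h*g*rho for a column of height h read against a reference pressure."""
        return p_ref_pa + h_m * G * rho

    print(applied_pressure_pa(0.1067, RHO_WATER))    # a 4.2 inH2O head against the atmosphere: ~102.4 kPa absolute
    print(applied_pressure_pa(0.7422, RHO_MERCURY))  # a 742.2 mmHg mercury column: ~199.8 kPa absolute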
Its convexmeniscusis advantageous since this means there will be no pressure errors fromwettingthe glass, though under exceptionally clean circumstances, the mercury will stick to glass and the barometer may become stuck (the mercury can sustain anegative absolute pressure) even under a strong vacuum.[19]For low pressure differences, light oil or water are commonly used (the latter giving rise to units of measurement such asinches water gaugeandmillimetres H2O). Liquid-column pressure gauges have a highly linear calibration. They have poor dynamic response because the fluid in the column may react slowly to a pressure change. When measuring vacuum, the working liquid may evaporate and contaminate the vacuum if itsvapor pressureis too high. When measuring liquid pressure, a loop filled with gas or a light fluid can isolate the liquids to prevent them from mixing, but this can be unnecessary, for example, when mercury is used as the manometer fluid to measure differential pressure of a fluid such as water. Simple hydrostatic gauges can measure pressures ranging from a fewtorrs(a few 100 Pa) to a few atmospheres (approximately1000000Pa). A single-limb liquid-column manometer has a larger reservoir instead of one side of the U-tube and has a scale beside the narrower column. The column may be inclined to further amplify the liquid movement. Based on the use and structure, following types of manometers are used[20] AMcLeod gaugeisolates a sample of gas and compresses it in a modified mercury manometer until the pressure is a fewmillimetres of mercury. The technique is very slow and unsuited to continual monitoring, but is capable of good accuracy. Unlike other manometer gauges, the McLeod gauge reading is dependent on the composition of the gas, since the interpretation relies on the sample compressing as anideal gas. Due to the compression process, the McLeod gauge completely ignores partial pressures from non-ideal vapors that condense, such as pump oils, mercury, and even water if compressed enough. 0.1 mPa is the lowest direct measurement of pressure that is possible with current technology. Other vacuum gauges can measure lower pressures, but only indirectly by measurement of other pressure-dependent properties. These indirect measurements must be calibrated to SI units by a direct measurement, most commonly a McLeod gauge.[22] Aneroidgauges are based on a metallic pressure-sensing element that flexes elastically under the effect of a pressure difference across the element. "Aneroid" means "without fluid", and the term originally distinguished these gauges from the hydrostatic gauges described above. However, aneroid gauges can be used to measure the pressure of a liquid as well as a gas, and they are not the only type of gauge that can operate without fluid. For this reason, they are often calledmechanicalgauges in modern language. Aneroid gauges are not dependent on the type of gas being measured, unlike thermal and ionization gauges, and are less likely to contaminate the system than hydrostatic gauges. The pressure sensing element may be aBourdon tube, a diaphragm, a capsule, or a set of bellows, which will change shape in response to the pressure of the region in question. The deflection of the pressure sensing element may be read by a linkage connected to a needle, or it may be read by a secondary transducer. The most common secondary transducers in modern vacuum gauges measure a change in capacitance due to the mechanical deflection. 
Gauges that rely on a change in capacitance are often referred to as capacitance manometers.

The Bourdon pressure gauge uses the principle that a flattened tube[23] tends to straighten or regain its circular form in cross-section when pressurized. (A party horn illustrates this principle.) This change in cross-section may be hardly noticeable, involving moderate stresses within the elastic range of easily workable materials. The strain of the material of the tube is magnified by forming the tube into a C shape or even a helix, such that the entire tube tends to straighten out or uncoil elastically as it is pressurized. Eugène Bourdon patented his gauge in France in 1849, and it was widely adopted because of its superior simplicity, linearity, and accuracy; Bourdon is now part of the Baumer group and still manufactures Bourdon tube gauges in France. Edward Ashcroft purchased Bourdon's American patent rights in 1852 and became a major manufacturer of gauges. Also in 1849, Bernard Schaeffer in Magdeburg, Germany, patented a successful diaphragm (see below) pressure gauge, which, together with the Bourdon gauge, revolutionized pressure measurement in industry.[24] But in 1875, after Bourdon's patents expired, his company Schaeffer and Budenberg also manufactured Bourdon tube gauges.

In practice, a flattened thin-wall, closed-end tube is connected at the hollow end to a fixed pipe containing the fluid pressure to be measured. As the pressure increases, the closed end moves in an arc, and this motion is converted into the rotation of a (segment of a) gear by a connecting link that is usually adjustable. A small-diameter pinion gear is on the pointer shaft, so the motion is magnified further by the gear ratio. The positioning of the indicator card behind the pointer, the initial pointer shaft position, and the linkage length and initial position all provide means to calibrate the pointer to indicate the desired range of pressure for variations in the behavior of the Bourdon tube itself. Differential pressure can be measured by gauges containing two different Bourdon tubes with connecting linkages (but is more usually measured via diaphragms or bellows and a balance system). Bourdon tubes measure gauge pressure, relative to ambient atmospheric pressure, as opposed to absolute pressure; vacuum is sensed as a reverse motion. Some aneroid barometers use Bourdon tubes closed at both ends (but most use diaphragms or capsules, see below). When the measured pressure is rapidly pulsing, such as when the gauge is near a reciprocating pump, an orifice restriction in the connecting pipe is frequently used to avoid unnecessary wear on the gears and provide an average reading; when the whole gauge is subject to mechanical vibration, the case (including the pointer and dial) can be filled with an oil or glycerin. Typical high-quality modern gauges provide an accuracy of ±1% of span (nominal diameter 100 mm, Class 1 EN 837-1), and a special high-accuracy gauge can be as accurate as 0.1% of full scale.[25]

Force-balanced fused quartz Bourdon tube sensors work on the same principle but use the reflection of a beam of light from a mirror to sense the angular displacement; current is applied to electromagnets to balance the force of the tube and bring the angular displacement back to zero, and the current applied to the coils is used as the measurement.
Due to the extremely stable and repeatable mechanical and thermal properties of quartz, and the force balancing which eliminates nearly all physical movement, these sensors can be accurate to around 1 ppm of full scale.[26] Due to the extremely fine fused quartz structures, which must be made by hand, these sensors are generally limited to scientific and calibration purposes.

In the following illustrations of a compound gauge (vacuum and gauge pressure), the case and window have been removed to show only the dial, pointer and process connection. This particular gauge is a combination vacuum and pressure gauge used for automotive diagnosis; its mechanical details include stationary and moving parts.

A second type of aneroid gauge uses deflection of a flexible membrane that separates regions of different pressure. The amount of deflection is repeatable for known pressures, so the pressure can be determined by using calibration. The deformation of a thin diaphragm is dependent on the difference in pressure between its two faces. The reference face can be open to atmosphere to measure gauge pressure, open to a second port to measure differential pressure, or can be sealed against a vacuum or other fixed reference pressure to measure absolute pressure. The deformation can be measured using mechanical, optical or capacitive techniques. Ceramic and metallic diaphragms are used, in a variety of membrane shapes. The useful range is above 10⁻² Torr (roughly 1 Pa).[27] For absolute measurements, welded pressure capsules with diaphragms on either side are often used.

In gauges intended to sense small pressures or pressure differences, or where an absolute pressure must be measured, the gear train and needle may be driven by an enclosed and sealed bellows chamber, called an aneroid. (Early barometers used a column of liquid such as water or the liquid metal mercury suspended by a vacuum.) This bellows configuration is used in aneroid barometers (barometers with an indicating needle and dial card), altimeters, altitude recording barographs, and the altitude telemetry instruments used in weather balloon radiosondes. These devices use the sealed chamber as a reference pressure and are driven by the external pressure. Other sensitive aircraft instruments such as air speed indicators and rate of climb indicators (variometers) have connections both to the internal part of the aneroid chamber and to an external enclosing chamber.

Another type of gauge uses the attraction of two magnets to translate differential pressure into motion of a dial pointer. As differential pressure increases, a magnet attached to either a piston or rubber diaphragm moves. A rotary magnet that is attached to a pointer then moves in unison. To create different pressure ranges, the spring rate can be increased or decreased.

The spinning-rotor gauge works by measuring how a rotating ball is slowed by the viscosity of the gas being measured. The ball is made of steel and is magnetically levitated inside a steel tube closed at one end and exposed to the gas to be measured at the other. The ball is brought up to speed (about 2500 or 3800 rad/s), and the deceleration rate is measured after switching off the drive, by electromagnetic transducers.[28] The range of the instrument is 5×10⁻⁵ to 10² Pa (10³ Pa with less accuracy). It is accurate and stable enough to be used as a secondary standard. In recent years this type of gauge has become much more user-friendly and easier to operate.
In the past the instrument was famous for requiring some skill and knowledge to use correctly. For high accuracy measurements various corrections must be applied, and the ball must be spun at a pressure well below the intended measurement pressure for five hours before use. It is most useful in calibration and research laboratories where high accuracy is required and qualified technicians are available.[29] Insulation vacuum monitoring of cryogenic liquids is also a well-suited application for this system. With an inexpensive, long-term-stable, weldable sensor that can be separated from the more costly electronics, it is a good fit for all static vacuums.

In the simplified diagram, the fundamental design of the internal ports in the sensor can be seen. The important item to note is the diaphragm, as this is the sensing element itself. It is slightly convex in shape (highly exaggerated in the drawing); this matters because it affects the accuracy of the sensor in use. The shape of the sensor is important because it is calibrated to work with air flow in one direction. This is normal operation for the pressure sensor, providing a positive reading on the display of the digital pressure meter. Applying pressure in the reverse direction can induce errors in the results, as the air pressure is trying to force the diaphragm to move in the opposite direction. The errors induced by this are small but can be significant, and therefore it is always preferable to ensure that the more positive pressure is applied to the positive (+ve) port and the lower pressure is applied to the negative (-ve) port, for normal 'gauge pressure' applications. The same applies to measuring the difference between two vacuums: the larger vacuum should always be applied to the negative (-ve) port.

Pressure is read out electrically via a Wheatstone bridge. The effective electrical model of the transducer, together with a basic signal conditioning circuit, is shown in the application schematic. The pressure sensor is a fully active Wheatstone bridge which has been temperature compensated and offset adjusted by means of thick-film, laser-trimmed resistors. The excitation to the bridge is applied via a constant current. The low-level bridge output is at +O and -O, and the amplified span is set by the gain programming resistor (r). The electrical design is microprocessor controlled, which allows for calibration and additional user functions such as scale selection, data hold, zero and filter functions, and a record function that stores and displays maximum and minimum readings.

Generally, as a real gas increases in density (which may indicate an increase in pressure), its ability to conduct heat increases. In this type of gauge, a wire filament is heated by running current through it. A thermocouple or resistance thermometer (RTD) can then be used to measure the temperature of the filament. This temperature is dependent on the rate at which the filament loses heat to the surrounding gas, and therefore on the thermal conductivity. A common variant is the Pirani gauge, which uses a single platinum filament as both the heated element and RTD. These gauges are accurate from 10⁻³ Torr to 10 Torr, but their calibration is sensitive to the chemical composition of the gases being measured. A Pirani gauge consists of a metal wire open to the pressure being measured. The wire is heated by a current flowing through it and cooled by the gas surrounding it.
If the gas pressure is reduced, the cooling effect will decrease, hence the equilibrium temperature of the wire will increase. Theresistanceof the wire is afunction of its temperature: by measuring thevoltageacross the wire and thecurrentflowing through it, the resistance (and so the gas pressure) can be determined. This type of gauge was invented byMarcello Pirani. In two-wire gauges, one wire coil is used as a heater, and the other is used to measure temperature due toconvection.Thermocouple gaugesandthermistor gaugeswork in this manner using athermocoupleorthermistor, respectively, to measure the temperature of the heated wire. Ionization gaugesare the most sensitive gauges for very low pressures (also referred to as hard or high vacuum). They sense pressure indirectly by measuring the electrical ions produced when the gas is bombarded with electrons. Fewer ions will be produced by lower density gases. The calibration of an ion gauge is unstable and dependent on the nature of the gases being measured, which is not always known. They can be calibrated against aMcLeod gaugewhich is much more stable and independent of gas chemistry. Thermionic emissiongenerates electrons, which collide with gas atoms and generate positiveions. The ions are attracted to a suitablybiasedelectrode known as the collector. The current in the collector is proportional to the rate of ionization, which is a function of the pressure in the system. Hence, measuring the collector current gives the gas pressure. There are several sub-types of ionization gauge. Most ion gauges come in two types: hotcathodeand cold cathode. In thehot cathodeversion, an electrically heated filament produces an electron beam. The electrons travel through the gauge and ionize gas molecules around them. The resulting ions are collected at a negative electrode. The current depends on the number of ions, which depends on the pressure in the gauge. Hot cathode gauges are accurate from 10−3Torr to 10−10Torr. The principle behindcold cathodeversion is the same, except that electrons are produced in the discharge of a high voltage. Cold cathode gauges are accurate from 10−2Torrto 10−9Torr. Ionization gauge calibration is very sensitive to construction geometry, chemical composition of gases being measured, corrosion and surface deposits. Their calibration can be invalidated by activation at atmospheric pressure or low vacuum. The composition of gases at high vacuums will usually be unpredictable, so amass spectrometermust be used in conjunction with the ionization gauge for accurate measurement.[30] Ahot-cathode ionization gaugeis composed mainly of three electrodes acting together as atriode, wherein thecathodeis the filament. The three electrodes are a collector or plate, afilament, and agrid. The collector current is measured inpicoamperesby anelectrometer. The filament voltage to ground is usually at a potential of 30 volts, while the grid voltage at 180–210 volts DC, unless there is an optionalelectron bombardmentfeature, by heating the grid, which may have a high potential of approximately 565 volts. The most common ion gauge is the hot-cathodeBayard–Alpert gauge, with a small ion collector inside the grid. A glass envelope with an opening to the vacuum can surround the electrodes, but usually thenude gaugeis inserted in the vacuum chamber directly, the pins being fed through a ceramic plate in the wall of the chamber. 
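The Pirani readout described above (measure the voltage across and the current through the heated wire, infer its resistance and hence its temperature) can be sketched as follows. All values are illustrative assumptions: a platinum-like temperature coefficient and a made-up reference resistance, with the final temperature-to-pressure step left as a comment because it requires a gas-specific calibration curve.

    R0, T0 = 10.0, 20.0   # wire resistance (ohm) at a reference temperature (deg C); illustrative values
    ALPHA = 3.85e-3       # temperature coefficient of resistance for platinum, 1/K

    def filament_temperature_c(voltage_v, current_a):
        """Infer the heated-wire temperature from measured voltage and current (R = V/I)."""
        resistance = voltage_v / current_a
        return T0 + (resistance / R0 - 1.0) / ALPHA

    print(filament_temperature_c(0.130, 0.010))  # R = 13 ohm -> roughly 98 deg C
    # A real Pirani controller maps this temperature (or the heating power needed to
    # hold it constant) to pressure through a calibration table for the gas in question.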
Hot-cathode gauges can be damaged or lose their calibration if they are exposed to atmospheric pressure or even low vacuum while hot. The measurements of a hot-cathode ionization gauge are always logarithmic. Electrons emitted from the filament move several times back and forth around the grid before finally entering it. During these movements, some electrons collide with a gaseous molecule to form an ion-electron pair (electron ionization). The number of these ions is proportional to the gaseous molecule density multiplied by the electron current emitted from the filament, and these ions pour into the collector to form an ion current. Since the gaseous molecule density is proportional to the pressure, the pressure is estimated by measuring the ion current. The low-pressure sensitivity of hot-cathode gauges is limited by the photoelectric effect: electrons hitting the grid produce x-rays that produce photoelectric noise in the ion collector. This limits the range of older hot-cathode gauges to 10⁻⁸ Torr and the Bayard–Alpert type to about 10⁻¹⁰ Torr. Additional wires at cathode potential in the line of sight between the ion collector and the grid prevent this effect. In the extraction type the ions are not attracted by a wire but by an open cone. As the ions cannot decide which part of the cone to hit, they pass through the hole and form an ion beam, which can be passed on to other instruments.

There are two subtypes of cold-cathode ionization gauges: the Penning gauge (invented by Frans Michel Penning) and the inverted magnetron, also called a Redhead gauge. The major difference between the two is the position of the anode with respect to the cathode. Neither has a filament, and each may require a DC potential of about 4 kV for operation. Inverted magnetrons can measure down to 1×10⁻¹² Torr. Cold-cathode gauges may be reluctant to start at very low pressures, in that the near-absence of a gas makes it difficult to establish an electrode current, in particular in Penning gauges, which use an axially symmetric magnetic field to create path lengths for electrons that are of the order of metres. In ambient air, suitable ion pairs are ubiquitously formed by cosmic radiation; in a Penning gauge, design features are used to ease the set-up of a discharge path. For example, the electrode of a Penning gauge is usually finely tapered to facilitate the field emission of electrons. Maintenance cycles of cold-cathode gauges are, in general, measured in years, depending on the gas type and pressure they are operated in. Using a cold-cathode gauge in gases with substantial organic components, such as pump oil fractions, can result in the growth of delicate carbon films and shards within the gauge that eventually either short-circuit the electrodes or impede the generation of a discharge path.

When fluid flows are not in equilibrium, local pressures may be higher or lower than the average pressure in a medium. These disturbances propagate from their source as longitudinal pressure variations along the path of propagation. This is also called sound. Sound pressure is the instantaneous local pressure deviation from the average pressure caused by a sound wave. Sound pressure can be measured using a microphone in air and a hydrophone in water. The effective sound pressure is the root mean square of the instantaneous sound pressure over a given interval of time.
Sound pressures are normally small and are often expressed in units of microbar. The American Society of Mechanical Engineers (ASME) has developed two separate and distinct standards on pressure measurement, B40.100 and PTC 19.2. B40.100 provides guidelines on Pressure Indicated Dial Type and Pressure Digital Indicating Gauges, Diaphragm Seals, Snubbers, and Pressure Limiter Valves. PTC 19.2 provides instructions and guidance for the accurate determination of pressure values in support of the ASME Performance Test Codes. The choice of method, instruments, required calculations, and corrections to be applied depends on the purpose of the measurement, the allowable uncertainty, and the characteristics of the equipment being tested. The methods for pressure measurement and the protocols used for data transmission are also provided. Guidance is given for setting up the instrumentation and determining the uncertainty of the measurement. Information regarding the instrument type, design, applicable pressure range, accuracy, output, and relative cost is provided. Information is also provided on pressure-measuring devices that are used in field environments, i.e., piston gauges, manometers, and low-absolute-pressure (vacuum) instruments. These methods are designed to assist in the evaluation of measurement uncertainty based on current technology and engineering knowledge, taking into account published instrumentation specifications and measurement and application techniques. This Supplement provides guidance in the use of methods to establish the pressure-measurement uncertainty. There are many applications for pressure sensors. Pressure sensing: this is where the measurement of interest is pressure itself, expressed as a force per unit area. This is useful in weather instrumentation, aircraft, automobiles, and any other machinery that has pressure functionality implemented. Altitude sensing: this is useful in aircraft, rockets, satellites, weather balloons, and many other applications. All these applications make use of the relationship between pressure and altitude, which is governed by the following equation:[32] {\displaystyle h=(1-(P/P_{\mathrm {ref} })^{0.190284})\times 145366.45\mathrm {ft} } This equation is calibrated for an altimeter, up to 36,090 feet (11,000 m). Outside that range, an error will be introduced which can be calculated differently for each different pressure sensor. These error calculations also factor in the error introduced by the change in temperature with altitude. Barometric pressure sensors can have an altitude resolution of less than 1 metre, which is significantly better than GPS systems (about 20 metres altitude resolution). In navigation applications, altimeters are used to distinguish between stacked road levels for car navigation and floor levels in buildings for pedestrian navigation. Flow sensing: this is the use of pressure sensors in conjunction with the venturi effect to measure flow. Differential pressure is measured between two segments of a venturi tube that have different apertures. The pressure difference between the two segments is directly proportional to the flow rate through the venturi tube. A low-pressure sensor is almost always required, as the pressure difference is relatively small. Level or depth sensing: a pressure sensor may also be used to calculate the level of a fluid. This technique is commonly employed to measure the depth of a submerged body (such as a diver or submarine) or the level of contents in a tank (such as in a water tower).
For most practical purposes, fluid level is directly proportional to pressure. In the case of fresh water, where the contents are under atmospheric pressure, 1 psi ≈ 27.7 inH2O and 1 kPa ≈ 102 mmH2O. The basic equation for such a measurement is {\displaystyle P=\rho gh}, where P = pressure, ρ = density of the fluid, g = standard gravity, and h = height of the fluid column above the pressure sensor. Leak testing: a pressure sensor may be used to sense the decay of pressure due to a system leak. This is commonly done either by comparison to a known leak using differential pressure, or by using the pressure sensor to measure pressure change over time. A piezometer is either a device used to measure liquid pressure in a system by measuring the height to which a column of the liquid rises against gravity, or a device which measures the pressure (more precisely, the piezometric head) of groundwater[33] at a specific point. A piezometer is designed to measure static pressures, and thus differs from a pitot tube by not being pointed into the fluid flow. Observation wells give some information on the water level in a formation, but must be read manually. Electrical pressure transducers of several types can be read automatically, making data acquisition more convenient. The first piezometers in geotechnical engineering were open wells or standpipes (sometimes called Casagrande piezometers)[34] installed into an aquifer. A Casagrande piezometer will typically have a solid casing down to the depth of interest, and a slotted or screened casing within the zone where water pressure is being measured. The casing is sealed into the drillhole with clay, bentonite or concrete to prevent surface water from contaminating the groundwater supply. In an unconfined aquifer, the water level in the piezometer will not be exactly coincident with the water table, especially when the vertical component of flow velocity is significant. In a confined aquifer under artesian conditions, the water level in the piezometer indicates the pressure in the aquifer, but not necessarily the water table.[35] Piezometer wells can be much smaller in diameter than production wells, and a 5 cm diameter standpipe is common. Piezometers in durable casings can be buried or pushed into the ground to measure the groundwater pressure at the point of installation. The pressure gauges (transducers) can be vibrating-wire, pneumatic, or strain-gauge in operation, converting pressure into an electrical signal. These piezometers are cabled to the surface, where they can be read by data loggers or portable readout units, allowing faster or more frequent reading than is possible with open standpipe piezometers.
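The two relationships quoted above (the barometric altitude formula and P = ρgh for a liquid column) translate directly into code. A minimal sketch, with the 1013.25 hPa sea-level reference and the fresh-water density chosen purely as example values:

def pressure_altitude_ft(p, p_ref):
    """Altitude (feet) from the barometric formula quoted above.
    Valid up to about 36,090 ft (11,000 m); p and p_ref must share units."""
    return (1.0 - (p / p_ref) ** 0.190284) * 145366.45

def hydrostatic_depth_m(p_gauge_pa, density_kg_m3=1000.0, g=9.80665):
    """Height of fluid above the sensor from P = rho * g * h (gauge pressure)."""
    return p_gauge_pa / (density_kg_m3 * g)

# 850 hPa against a 1013.25 hPa sea-level reference is roughly 4,780 ft,
# and 9.81 kPa of gauge pressure corresponds to about 1 m of fresh water.
print(pressure_altitude_ft(850.0, 1013.25))
print(hydrostatic_depth_m(9806.65))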
https://en.wikipedia.org/wiki/Pressure_measurement
The sensitivity of an electronic device, such as a communications system receiver, or detection device, such as a PIN diode, is the minimum magnitude of input signal required to produce a specified output signal having a specified signal-to-noise ratio, or other specified criteria. In general, it is the signal level required for a particular quality of received information.[1] In signal processing, sensitivity also relates to bandwidth and noise floor, as is explained in more detail below. In the field of electronics, different definitions are used for sensitivity. The IEEE dictionary[2][3] states: "Definitions of sensitivity fall into two contrasting categories." It also provides multiple definitions relevant to sensors, among which 1: "(measuring devices) The ratio of the magnitude of its response to the magnitude of the quantity measured." and 2: "(radio receiver or similar device) Taken as the minimum input signal required to produce a specified output signal having a specified signal-to-noise ratio.". The first of these definitions is similar to the definition of responsivity, and as a consequence sensitivity is sometimes considered to be improperly used as a synonym for responsivity,[4][5] and it is argued that the second definition, which is closely related to the detection limit, is a better indicator of the performance of a measuring system.[6] To summarize, two contrasting definitions of sensitivity are used in the field of electronics: sensitivity as the ratio of a device's response to the quantity measured, and sensitivity as the minimum input signal required to produce a specified output signal having a specified signal-to-noise ratio. The sensitivity of a microphone is usually expressed as the sound field strength in decibels (dB) relative to 1 V/Pa (Pa = N/m2) or as the transfer factor in millivolts per pascal (mV/Pa) into an open circuit or into a 1 kiloohm load.[citation needed] The sensitivity of a hydrophone is usually expressed as dB relative to 1 V/μPa.[7] The sensitivity of a loudspeaker is usually expressed as dB / 2.83 VRMS at 1 metre.[citation needed] This is not the same as the electrical efficiency; see Efficiency vs sensitivity. This is an example where sensitivity is defined as the ratio of the sensor's response to the quantity measured. One should realize that, when using this definition to compare sensors, the sensitivity might depend on components such as output voltage amplifiers that increase the sensor response, so that the sensitivity is not a pure figure of merit of the sensor alone, but of the combination of all components in the signal path from input to response. Sensitivity in a receiver, such as a radio receiver, indicates its capability to extract information from a weak signal, quantified as the lowest signal level that can be useful.[8] It is mathematically defined as the minimum input signal {\displaystyle S_{i}} required to produce a specified signal-to-noise (S/N) ratio at the output port of the receiver, and equals the mean noise power at the input port of the receiver times the minimum required signal-to-noise ratio at the output of the receiver: {\displaystyle S_{i}=N_{i}\cdot \mathrm {SNR} _{o}}, where {\displaystyle N_{i}} is the mean noise power at the input port and {\displaystyle \mathrm {SNR} _{o}} is the minimum required signal-to-noise ratio at the output. The same formula can also be expressed in terms of the noise factor {\displaystyle F} of the receiver as {\displaystyle S_{i}=kT_{0}B\cdot F\cdot \mathrm {SNR} _{o}}, where {\displaystyle k} is the Boltzmann constant, {\displaystyle T_{0}} is the reference noise temperature (290 K) and {\displaystyle B} is the receiver bandwidth. Because receiver sensitivity indicates how faint an input signal can be to be successfully received by the receiver, the lower the power level, the better. Lower input signal power for a given S/N ratio means better sensitivity, since the receiver's contribution to the noise is smaller. When the power is expressed in dBm, the larger the absolute value of the negative number, the better the receive sensitivity. For example, a receiver sensitivity of −98 dBm is better than a receive sensitivity of −95 dBm by 3 dB, or a factor of two.
In other words, at a specified data rate, a receiver with a −98 dBm sensitivity can hear (or extract useable audio, video or data from) signals that are half the power of those heard by a receiver with a −95 dBm receiver sensitivity.[citation needed] For electronic sensors the input signal {\textstyle S_{i}} can be of many types, such as position, force, acceleration, pressure, or magnetic field. The output signal for an electronic analog sensor is usually a voltage or a current signal {\textstyle S_{o}}. The responsivity of an ideal linear sensor in the absence of noise is defined as {\textstyle R=S_{o}/S_{i}}, whereas for nonlinear sensors it is defined as the local slope {\displaystyle \mathrm {d} S_{o}/\mathrm {d} S_{i}}. In the absence of noise and signals at the input, the sensor is assumed to generate a constant intrinsic output noise {\textstyle N_{oi}}. To reach a specified signal-to-noise ratio at the output, {\displaystyle SNR_{o}=S_{o}/N_{oi}}, one combines these equations and obtains the following idealized equation for its sensitivity[5] {\displaystyle S}, which is equal to the value of the input signal {\textstyle S_{i,SNR_{o}}} that results in the specified signal-to-noise ratio {\displaystyle SNR_{o}} at the output: {\displaystyle S=S_{i,SNR_{o}}={\frac {N_{oi}}{R}}SNR_{o}} This equation shows that sensor sensitivity can be decreased (= improved) either by reducing the intrinsic noise of the sensor {\textstyle N_{oi}} or by increasing its responsivity {\textstyle R}. This is an example of a case where sensitivity is defined as the minimum input signal required to produce a specified output signal having a specified signal-to-noise ratio.[2] This definition has the advantage that the sensitivity is closely related to the detection limit of a sensor if the minimum detectable {\displaystyle SNR_{o}} is specified. The choice of the {\displaystyle SNR_{o}} used in the definition of sensitivity depends on the confidence level required for a signal to be reliably detected (confidence (statistics)), and typically lies between 1 and 10. The sensitivity depends on parameters such as bandwidth BW or integration time τ = 1/(2BW) (as explained under NEP), because the noise level can be reduced by signal averaging, usually resulting in a reduction of the noise amplitude as {\displaystyle N_{oi}\propto 1/{\sqrt {\tau }}}, where {\displaystyle \tau } is the integration time over which the signal is averaged. A measure of sensitivity independent of bandwidth can be provided by using the amplitude or power spectral density of the noise and/or signals ({\displaystyle S_{i},S_{o},N_{oi}}) in the definition, with units such as m/Hz1/2, N/Hz1/2, W/Hz or V/Hz1/2. For a white noise signal over the sensor bandwidth, its power spectral density can be determined from the total noise power {\displaystyle N_{oi,\mathrm {tot} }} (over the full bandwidth) using the equation {\displaystyle N_{oi,\mathrm {PSD} }=N_{oi,\mathrm {tot} }/BW}. Its amplitude spectral density is the square root of this value, {\displaystyle N_{oi,\mathrm {ASD} }={\sqrt {N_{oi,\mathrm {PSD} }}}}. Note that in signal processing the words energy and power are also used for quantities that do not have the unit watt (Energy (signal processing)).
In some instruments, likespectrum analyzers, aSNRoof 1 at a specified bandwidth of 1 Hz is assumed by default when defining their sensitivity.[2]For instruments that measure power, which also includes photodetectors, this results in the sensitivity becoming equal to thenoise-equivalent powerand for other instruments it becomes equal to the noise-equivalent-input[9]NEI=Noi,ASD/R{\displaystyle NEI=N_{oi,ASD}/R}. A lower value of the sensitivity corresponds to better performance (smaller signals can be detected), which seems contrary to the common use of the word sensitivity where higher sensitivity corresponds to better performance.[6][10]It has therefore been argued that it is preferable to usedetectivity, which is the reciprocal of the noise-equivalent input, as a metric for the performance of detectors[9][11]D=R/Noi{\displaystyle D=R/N_{oi}}. As an example, consider apiezoresistiveforce sensor through which a constant current runs, such that it has a responsivityR=1.0V/N{\displaystyle R=1.0~\mathrm {V} /\mathrm {N} }. TheJohnson noiseof the resistor generates a noise amplitude spectral density ofNoi,ASD=10nV/Hz{\displaystyle N_{oi,{\textrm {ASD}}}=10~\mathrm {nV} /{\sqrt {\mathrm {Hz} }}}. For a specifiedSNRoof 1, this results in a sensitivity and noise-equivalent input ofSi,ASD=NEI=10nN/Hz{\displaystyle S_{i,ASD}=NEI=10~\mathrm {nN} /{\sqrt {\mathrm {Hz} }}}and a detectivity of(10nN/Hz)−1{\displaystyle (10~\mathrm {nN} /{\sqrt {\mathrm {Hz} }})^{-1}}, such that an input signal of 10 nN generates the same output voltage as the noise does over a bandwidth of 1 Hz. This article incorporatespublic domain materialfromFederal Standard 1037C.General Services Administration. Archived fromthe originalon 2022-01-22.(in support ofMIL-STD-188).
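The arithmetic of the force-sensor example above can be written out directly; the following sketch simply restates S = (N_oi/R)·SNR_o with those numbers, plus the white-noise bandwidth scaling mentioned earlier:

import math

def noise_equivalent_input(noise_asd, responsivity, snr_required=1.0):
    """Sensitivity as the input giving the required SNR: S = (N_oi / R) * SNR_o.
    With the noise amplitude spectral density (e.g. V per sqrt(Hz)) and the
    responsivity (e.g. V/N), the result is the noise-equivalent input per sqrt(Hz)."""
    return noise_asd / responsivity * snr_required

# Numbers from the piezoresistive force-sensor example above:
# R = 1.0 V/N and a Johnson-noise ASD of 10 nV/sqrt(Hz) give NEI = 10 nN/sqrt(Hz).
nei = noise_equivalent_input(10e-9, 1.0)
detectivity = 1.0 / nei            # reciprocal of the noise-equivalent input
print(nei, detectivity)
# Minimum detectable force over a 100 Hz bandwidth, assuming white noise:
print(nei * math.sqrt(100.0))      # 1e-07 N, i.e. 100 nN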
https://en.wikipedia.org/wiki/Sensitivity_(electronics)
Atouch switchis a type ofswitchthat only has to be touched by an object to operate. It is used in manylampsand wall switches that have a metal exterior as well as on public computer terminals. Atouchscreenincludes an array of touch switches on a display. A touch switch is the simplest kind oftactile sensor. There are three types of switches called touch switches: A self-capacitance switch needs only one electrode to function. The electrode can be placed behind a non-conductive panel such as wood, glass, or plastic. The switch works usingbody capacitance, a property of the human body that gives it great electrical characteristics. The switch keeps charging and discharging its metal exterior to detect changes incapacitance. When a person touches it, their body increases the capacitance and triggers the switch. Unlike self-capacitance, mutual capacitive touch is based on capacitance changes between two electrodes. This system employs two sets of electrodes—transmitting electrodes (Tx) and receiving electrodes (Rx). When a user’s finger or another object approaches these electrodes, it disrupts the electric field between them, resulting in a change incapacitancevalue. Mutual capacitance is also known as projected capacitance. The advantages of mutual capacitance technology include tight electric field coupling, allowing for more flexible design. For example, keyboards can have closely grouped keys without worrying about cross-coupling. However, mutual capacitance also has its limitations, such as its measurement noise being generally greater than self-capacitance. Capacitance switches are available commercially asintegrated circuitsfrom a number of manufacturers. These devices can also be used as a short-rangeproximity sensor. A resistance switch needs two electrodes to be physically in contact with something electrically conductive (for example a finger) to operate. They work by lowering the resistance between two pieces of metal. It is thus much simpler in construction compared to the capacitance switch. Placing one or two fingers across the plates achieves a turn on or closed state. Removing the finger(s) from the metal pieces turns the device off. One implementation of a resistance touch switch would be twoDarlington-pairedtransistors where the base of the first transistor is connected to one of the electrodes. Also, an N-channel, enhancement-mode, metal oxide field effect transistor can be used. Its gate can be connected to one of the electrodes and the other electrode through a resistance to a positive voltage. Piezotouch switches are based on mechanical bending of piezo ceramic, typically constructed directly behind a surface. This solution enables touch interfaces with any kind of material. Another characteristic of piezo is that it can function asactuatoras well. Current commercial solutions construct the piezo in such a way that touching it with approximately 1.5Nis enough, even for stiff materials like stainless steel. Piezo touch switches are available commercially. Piezo switches respond to a mechanicalforceapplied to the switch. The switch will operate regardless of whether force is applied through insulating or conducting materials. Capacitive switches respond to anelectric fieldapplied to the switch. The field will pass through thin gloves, but not through thick gloves.[1] Piezo switches usually cost more than capacitive switches.[1]
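The charge/discharge detection principle described for self-capacitance switches can be summarised as threshold logic around a drifting baseline. A minimal, purely illustrative sketch, in which read_capacitance() is a hypothetical stand-in for whatever raw measurement (charge time, oscillator count) the hardware actually provides:

def make_touch_detector(read_capacitance, threshold_ratio=1.15, baseline_alpha=0.01):
    """Return an is_touched() function for a self-capacitance electrode.
    read_capacitance is a hypothetical callable returning a raw, capacitance-
    related reading; a touch is reported when the reading rises a set margin
    above a slowly tracked baseline."""
    baseline = read_capacitance()

    def is_touched():
        nonlocal baseline
        reading = read_capacitance()
        touched = reading > baseline * threshold_ratio
        if not touched:
            # Track slow drift (temperature, humidity) only while untouched.
            baseline = (1.0 - baseline_alpha) * baseline + baseline_alpha * reading
        return touched

    return is_touched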
https://en.wikipedia.org/wiki/Touch_sensor
Elastographyis any of a class ofmedical imagingdiagnostic methods that map theelastic propertiesandstiffnessofsoft tissue.[1][2]The main idea is that whether the tissue is hard or soft will give diagnostic information about the presence or status ofdisease. For example,canceroustumours will often be harder than the surrounding tissue, and diseasedliversare stiffer than healthy ones.[1][2][3][4] The most prominent techniques useultrasoundormagnetic resonance imaging(MRI) to make both the stiffness map and an anatomical image for comparison.[citation needed] Palpationis the practice of feeling the stiffness of a person's or animal's tissues with the health practitioner's hands. Manual palpation dates back at least to 1500 BC, with the EgyptianEbers PapyrusandEdwin Smith Papyrusboth giving instructions on diagnosis with palpation. Inancient Greece,Hippocratesgave instructions on many forms of diagnosis using palpation, including palpation of the breasts, wounds, bowels, ulcers, uterus, skin, and tumours. In the modern Western world, palpation became considered a respectable method of diagnosis in the 1930s.[1]Since then, the practice of palpation has become widespread, and it is considered an effective method of detecting tumours and other pathologies. Manual palpation has several important limitations: it is limited to tissues accessible to the physician's hand, it is distorted by any intervening tissue, and it isqualitativebut notquantitative. Elastography, the measurement of tissue stiffness, seeks to address these challenges. There are numerous elastographic techniques, in development stages from early research to extensive clinical application. Each of these techniques works in a different way. What all methods have in common is that they create a distortion in the tissue, observe and process the tissue response to infer the mechanical properties of the tissue, and then display the results to the operator, usually as an image. Each elastographic method is characterized by the way it does each of these things. To image the mechanical properties of tissue, we need to see how it behaves when deformed. There are three main ways of inducing a distortion to observe. These are: The primary way elastographic techniques are categorized is by what imaging modality (type) they use to observe the response. Elastographic techniques useultrasound,magnetic resonance imaging(MRI) and pressure/stress sensors intactile imaging(TI) usingtactile sensor(s). There are a handful of other methods that exist as well. The observation of the tissue response can take many forms. In terms of the image obtained, it can be1-D(i.e. a line), 2-D (a plane), 3-D (a volume), or 0-D (a single value), and it can be a video or a single image. In most cases, the result is displayed to the operator along with a conventional image of the tissue, which shows where in the tissue the different stiffness values occur. Once the response has been observed, the stiffness can be calculated from it. Most elastography techniques find the stiffness of tissue based on one of two main principles: Some techniques will simply display the distortion and/or response, or the wave speed to the operator, while others will compute the stiffness (specifically theYoung's modulusor similarshear modulus) and display that instead. Some techniques present results quantitatively, while others only present qualitative (relative) results. There are a great many ultrasound elastographic techniques. The most prominent are highlighted below. 
Quasistatic elastography (sometimes called simply 'elastography' for historical reasons) is one of the earliest elastography techniques. In this technique, an external compression is applied to tissue, and the ultrasound images before and after the compression are compared. The areas of the image that are least deformed are the ones that are the stiffest, while the most deformed areas are the least stiff.[3]Generally, what is displayed to the operator is an image of the relative distortions (strains), which is often of clinical utility.[1] From the relative distortion image, however, making aquantitativestiffness map is often desired. To do this requires that assumptions be made about the nature of the soft tissue being imaged and about tissue outside of the image. Additionally, under compression, objects can move into or out of the image or around in the image, causing problems with interpretation. Another limit of this technique is that like manual palpation, it has difficulty with organs or tissues that are not close to the surface or easily compressed.[4] Acoustic radiation force impulse imaging (ARFI)[5]uses ultrasound to create a qualitative 2-D map of tissue stiffness. It does so by creating a 'push' inside the tissue using theacoustic radiation forcefrom a focused ultrasound beam. The amount the tissue along the axis of the beam is pushed down is reflective of tissue stiffness; softer tissue is more easily pushed than stiffer tissue. ARFI shows a qualitative stiffness value along the axis of the pushing beam. By pushing in many different places, a map of the tissue stiffness is built up. Virtual Touch imaging quantification (VTIQ) has been successfully used to identify malignant cervical lymph nodes.[6] In shear-wave elasticity imaging (SWEI),[7]similar to ARFI, a 'push' is induced deep in the tissue byacoustic radiation force. The disturbance created by this push travels sideways through the tissue as ashear wave. By using an image modality likeultrasoundorMRIto see how fast the wave gets to different lateral positions, the stiffness of the intervening tissue is inferred. Since the terms "elasticity imaging" and "elastography" are synonyms, the original term SWEI denoting the technology for elasticity mapping using shear waves is often replaced by SWE. The principal difference between SWEI and ARFI is that SWEI is based on the use of shear waves propagating laterally from the beam axis and creating elasticity map by measuring shear wave propagation parameters whereas ARFI gets elasticity information from the axis of the pushing beam and uses multiple pushes to create a 2-D stiffness map. No shear waves are involved in ARFI and no axial elasticity assessment is involved in SWEI. SWEI is implemented in supersonic shear imaging (SSI). Supersonic shear imaging (SSI)[8][9]gives a quantitative, real-time two-dimensional map of tissue stiffness. SSI is based on SWEI: it uses acoustic radiation force to induce a 'push' inside the tissue of interest generating shear waves and the tissue's stiffness is computed from how fast the resulting shear wave travels through the tissue. Local tissue velocity maps are obtained with a conventional speckle tracking technique and provide a full movie of the shear wave propagation through the tissue. There are two principal innovations implemented in SSI. First, by using many near-simultaneous pushes, SSI creates a source of shear waves which is moved through the medium at a supersonic speed. 
Second, the generated shear wave is visualized by using ultrafast imaging technique. Using inversion algorithms, the shear elasticity of medium is mapped quantitatively from the wave propagation movie. SSI is the first ultrasonic imaging technology able to reach more than 10,000 frames per second of deep-seated organs. SSI provides a set of quantitative and in vivo parameters describing the tissue mechanical properties: Young's modulus, viscosity, anisotropy. This approach demonstrated clinical benefit in breast, thyroid, liver, prostate, andmusculoskeletalimaging. SSI is used for breast examination with a number of high-resolution linear transducers.[10]A large multi-center breast imaging study has demonstrated both reproducibility[11]and significant improvement in the classification[12]of breast lesions when shear wave elastography images are added to the interpretation of standard B-mode and Color mode ultrasound images. In the food industry, low-intensity ultrasonics has already been used since the 1980s to provide information about the concentration, structure, and physical state of components in foods such as vegetables, meats, and dairy products and also for quality control,[13]for example to evaluate the rheological qualities of cheese.[14] Transient elastography was initially called time-resolved pulse elastography[15]when it was introduced in the late 1990s. The technique relies on a transient mechanical vibration which is used to induce a shear wave into the tissue. The propagation of the shear wave is tracked using ultrasound in order to assess the shear wave speed from which the Young's modulus is deduced under hypothesis of homogeneity, isotropy and pure elasticity (E=3ρV²). An important advantage of transient elastography compared to harmonic elastography techniques is the separation of shear waves and compression waves.[16]The technique can be implemented in 1D[17]and 2D which required the development of an ultrafast ultrasound scanner.[18] Transient elastography gives a quantitativeone-dimensional(i.e. a line) image of "tissue" stiffness. It functions by vibrating the skin with a motor to create a passing distortion in the tissue (ashear wave), and imaging the motion of that distortion as it passes deeper into the body using a 1D ultrasound beam. It then displays a quantitative line of tissue stiffness data (theYoung's modulus).[19][20]This technique is used mainly by the FibroScan system, which is used for liver assessment,[21]for example, to diagnosecirrhosis.[22]A specific implementation of 1D transient elastography called VCTE has been developed to assess average liver stiffness which correlates to liver fibrosis assessed by liver biopsy.[23][24]This technique is implemented in a device which can also assess the controlled attenuation parameter (CAP) which is good surrogate marker ofliver steatosis.[25] Magnetic resonance elastography (MRE)[26]was introduced in the mid-1990s, and multiple clinical applications have been investigated. In MRE, a mechanical vibrator is used on the surface of the patient's body; this creates shear waves that travel into the patient's deeper tissues. An imaging acquisition sequence that measures the velocity of the waves is used, and this is used to infer the tissue's stiffness (theshear modulus).[27][28]The result of an MRE scan is a quantitative 3-D map of the tissue stiffness, as well as a conventional 3-D MRI image. 
One strength of MRE is the resulting 3-D elasticity map, which can cover an entire organ.[2]Because MRI is not limited by air or bone, it can access some tissues ultrasound cannot, notably the brain. It also has the advantage of being more uniform across operators and less dependent on operator skill than most methods of ultrasound elastography. MR elastography has made significant advances over the past few years with acquisition times down to a minute or less and has been used in a variety of medical applications including cardiology research on living human hearts. MR elastography's short acquisition time also makes it competitive with other elastography techniques. Optical elastography is an emerging technique that utilizes optical microscopy to obtain tissue images. The most common form of optical elastography, optical coherence elastography (OCE), is based on optical coherence tomography (OCT), which combines interferometry with lateral beam scanning for rapid 3D image acquisition and achieves spatial resolutions of 5-15 μm.[29]For OCE, a mechanical load is applied to the tissue and the resultant deformation is measured using speckle tracking or phase sensitive detection.[30]Early implementations of OCE involved applying a quasi-static compression to the tissue,[31]though more recently dynamic loading has been achieved through the application of a sinusoidal modulation via a contact transducer or acoustic wave.[29]Other imaging modalities with greater optical resolution have also been introduced for optical elastography to probe the microscale between cells and whole tissues.[29]OCT relies on longer wavelengths, of 850 - 1050 nm, and therefore provides a lower optical resolution compared to common light microscopy, which uses visible wavelengths of 400-700 nm, and provides lateral spatial resolutions of <1 μm. Examples of higher resolution analysis include the use of confocal and light-sheet microscopy respectively for mechanical characterization of multicellular spheroids[32]and for structural analysis of 3D organoids at a single-cell resolution.[33]When using these imaging modalities, quasi-static compression may be induced in the tissue sample by a micro-indentation device, such as a microtweezer.[32]The resultant deformation can be measured from the microscopy images using image-based nodal tracking algorithms,[32][33]and mechanical properties can be discerned using finite element method (FEM) analyses. Elastography is used for the investigation of many disease conditions in many organs. It can be used for additional diagnostic information compared to a mere anatomical image, and it can be used to guidebiopsiesor, increasingly, replace them entirely. Biopsies are invasive and painful, presenting a risk of hemorrhage or infection, whereas elastography is completely noninvasive. Elastography is used to investigate disease in the liver. Liver stiffness is usually indicative offibrosisorsteatosis(fatty liver disease), which are in turn indicative of numerous disease conditions, includingcirrhosisandhepatitis. Elastography is particularly advantageous in this case because when fibrosis is diffuse (spread around in clumps rather than continuous scarring), a biopsy can easily miss sampling the diseased tissue, which results in afalse negativemisdiagnosis. Naturally, elastography sees use for organs and diseases where manual palpation was already widespread. Elastography is used for detection and diagnosis ofbreast,thyroid, andprostatecancers. 
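The shear-wave techniques above (SWEI, SSI, transient and MR elastography) all recover stiffness from the measured wave speed; under the homogeneity, isotropy and pure-elasticity assumptions stated for transient elastography, Young's modulus follows from E = 3ρV². A small sketch, with a tissue density of about 1000 kg/m³ assumed purely for illustration:

def youngs_modulus_from_shear_speed(shear_speed_m_s, density_kg_m3=1000.0):
    """Young's modulus (Pa) from shear-wave speed via E = 3 * rho * V**2,
    under the homogeneity, isotropy and pure-elasticity assumptions above."""
    return 3.0 * density_kg_m3 * shear_speed_m_s ** 2

# A shear wave at 1.5 m/s corresponds to roughly 6.75 kPa,
# while 3 m/s corresponds to about 27 kPa (stiffer tissue).
print(youngs_modulus_from_shear_speed(1.5), youngs_modulus_from_shear_speed(3.0))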
Certain types of elastography are also suitable formusculoskeletalimaging, and they can determine the mechanical properties and state ofmusclesandtendons. Because elastography does not have the same limitations as manual palpation, it is being investigated in some areas for which there is no history of diagnosis with manual palpation. For example, magnetic resonance elastography is capable of assessing the stiffness of thebrain,[34]and there is a growing body ofscientific literatureon elastography in healthy and diseased brains. In 2015, preliminary reports on elastography used ontransplanted kidneysto evaluate cortical fibrosis have been published showing promising results.[35]InBristol University's studyChildren of the 90s, 2.5% of 4,000 people born in 1991 and 1992 were found by ultrasound scanning at the age of 18 to have non-alcoholic fatty liver disease; five years later transient elastography found over 20% to have the fatty deposits on the liver of steatosis, indicating non-alcoholic fatty liver disease; half of those were classified as severe. The scans also found that 2.4% had the liver scarring offibrosis, which can lead tocirrhosis.[36] Other techniques include elastography withoptical coherence tomography[37](i.e. light). Tactile imaging involves translating the results of a digital "touch" into an image. Many physical principles have been explored for the realization oftactile sensors: resistive, inductive, capacitive, optoelectric, magnetic, piezoelectric, and electroacoustic principles, in a variety of configurations.[38]
https://en.wikipedia.org/wiki/Tactile_imaging
Ariadne's thread, named for the legend ofAriadne, is solving a problem which has multiple apparent ways to proceed—such as a physicalmaze, alogic puzzle, or anethical dilemma—through an exhaustive application of logic to all available routes. It is the particular method used that is able to follow completely through to trace steps or take point by point a series of found truths in a contingent, ordered search that reaches an end position. This process can take the form of a mental record, a physical marking, or even a philosophical debate; it is the process itself that assumes the name. The key element to applying Ariadne's thread to a problem is the creation and maintenance of a record—physical or otherwise—of the problem's available and exhausted options at all times. This record is referred to as the "thread", regardless of its actual medium. The purpose the record serves is to permitbacktracking—that is, reversing earlier decisions and trying alternatives. Given the record, applying thealgorithmis straightforward: This algorithm will terminate upon either finding a solution or marking all initial choices as failures; in the latter case, there is no solution. If a thorough examination is desired even though a solution has been found, one can revert to the previous decision, mark the success, and continue on as if a solution were never found; the algorithm will exhaust all decisions and find all solutions. The terms "Ariadne's thread" and "trial and error" are often used interchangeably, which is not necessarily correct. They have two distinctive differences: In short, trial and errorapproachesa desired solution; Ariadne's thread blindly exhausts the search space completely, finding any and all solutions. Each has its appropriate distinct uses. They can be employed in tandem—for example, although the editing of a Wikipedia article is arguably a trial-and-error process (given how in theory it approaches an ideal state), article histories provide the record for which Ariadne's thread may be applied, reverting detrimental edits and restoring the article back to the most recent error-free version, from which other options may be attempted. Obviously, Ariadne's thread may be applied to the solving of mazes in the same manner as the legend; an actual thread can be used as the record, or chalk or a similar marker can be applied to label passages. If the maze is on paper, the thread may well be a pencil. Logic problems of all natures may be resolved via Ariadne's thread, the maze being but an example. At present, it is most prominently applied toSudokupuzzles, used to attempt values for as-yet-unsolved cells. The medium of the thread for puzzle-solving can vary widely, from a pencil to numbered chits to a computer program, but all accomplish the same task. Note that as the compilation of Ariadne's thread is aninductiveprocess, and due to its exhaustiveness leaves no room for actual study, it is largely frowned upon as a solving method, to be employed only as a last resort whendeductivemethods fail. Artificial intelligence is heavily dependent upon Ariadne's thread when it comes to game-playing, most notably in programs which playchess; the possible moves are the decisions, game-winning states the solutions, and game-losing states failures. 
Due to the massive depth of many games, most algorithms cannot afford to apply Ariadne's threadentirelyon every move due to time constraints, and therefore work in tandem with aheuristicthat evaluates game states and limits abreadth-first searchonly to those that are most likely to be beneficial, a trial-and-error process. Even circumstances where the concept of "solution" is not so well defined have had Ariadne's thread applied to them, such as navigating theWorld Wide Web, making sense of patent law, and in philosophy; "Ariadne's Thread" is a popular name for websites of many purposes, but primarily for those that feature philosophical or ethical debate.
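As a concrete illustration of the record-keeping described above, the following sketch enumerates every path through a small grid maze: the current path plays the role of the thread, and the search blindly exhausts all branches, returning every solution rather than stopping at the first. The maze layout is invented purely for the example.

def solve_maze(grid, start, goal):
    """Yield every wall-free path from start to goal; cells equal to 1 are walls."""
    rows, cols = len(grid), len(grid[0])
    thread = [start]          # the "thread": the sequence of choices currently in force
    on_thread = {start}

    def explore(cell):
        if cell == goal:
            yield list(thread)
            return
        r, c = cell
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0 and nxt not in on_thread:
                thread.append(nxt)
                on_thread.add(nxt)        # extend the thread with this decision
                yield from explore(nxt)
                thread.pop()
                on_thread.discard(nxt)    # retract the decision once it is exhausted

    yield from explore(start)

maze = [[0, 0, 1],
        [1, 0, 0],
        [1, 1, 0]]
for path in solve_maze(maze, (0, 0), (2, 2)):
    print(path)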
https://en.wikipedia.org/wiki/Ariadne%27s_thread_(logic)
Inconstraint programmingandSAT solving,backjumping(also known asnon-chronological backtracking[1]orintelligent backtracking[2]) is an enhancement forbacktrackingalgorithmswhich reduces thesearch space. While backtracking always goes up one level in thesearch treewhen all values for a variable have been tested, backjumping may go up more levels. In this article, a fixed order of evaluation of variablesx1,…,xn{\displaystyle x_{1},\ldots ,x_{n}}is used, but the same considerations apply to a dynamic order of evaluation. Whenever backtracking has tried all values for a variable without finding any solution, it reconsiders the last of the previously assigned variables, changing its value or further backtracking if no other values are to be tried. Ifx1=a1,…,xk=ak{\displaystyle x_{1}=a_{1},\ldots ,x_{k}=a_{k}}is the current partial assignment and all values forxk+1{\displaystyle x_{k+1}}have been tried without finding a solution, backtracking concludes that no solution extendingx1=a1,…,xk=ak{\displaystyle x_{1}=a_{1},\ldots ,x_{k}=a_{k}}exists. The algorithm then "goes up" toxk{\displaystyle x_{k}}, changingxk{\displaystyle x_{k}}'s value if possible, backtracking again otherwise. The partial assignment is not always necessary in full to prove that no value ofxk+1{\displaystyle x_{k+1}}leads to a solution. In particular, a prefix of the partial assignment may have the same property, that is, there exists an indexj<k{\displaystyle j<k}such thatx1,…,xj=a1,…,aj{\displaystyle x_{1},\ldots ,x_{j}=a_{1},\ldots ,a_{j}}cannot be extended to form a solution with whatever value forxk+1{\displaystyle x_{k+1}}. If the algorithm can prove this fact, it can directly consider a different value forxj{\displaystyle x_{j}}instead of reconsideringxk{\displaystyle x_{k}}as it would normally do. The efficiency of a backjumping algorithm depends on how high it is able to backjump. Ideally, the algorithm could jump fromxk+1{\displaystyle x_{k+1}}to whichever variablexj{\displaystyle x_{j}}is such that the current assignment tox1,…,xj{\displaystyle x_{1},\ldots ,x_{j}}cannot be extended to form a solution with any value ofxk+1{\displaystyle x_{k+1}}. If this is the case,j{\displaystyle j}is called asafe jump. Establishing whether a jump is safe is not always feasible, as safe jumps are defined in terms of the set of solutions, which is what the algorithm is trying to find. In practice, backjumping algorithms use the lowest index they can efficiently prove to be a safe jump. Different algorithms use different methods for determining whether a jump is safe. These methods have different costs, but a higher cost of finding a higher safe jump may be traded off a reduced amount of search due to skipping parts of the search tree. The simplest condition in which backjumping is possible is when all values of a variable have been proved inconsistent without further branching. Inconstraint satisfaction, a partial evaluation isconsistentif and only if it satisfies all constraints involving the assigned variables, andinconsistentotherwise. It might be the case that a consistent partial solution cannot be extended to a consistent complete solution because some of the unassigned variables may not be assigned without violating other constraints. The condition in which all values of a given variablexk+1{\displaystyle x_{k+1}}are inconsistent with the current partial solutionx1,…,xk=a1,…,ak{\displaystyle x_{1},\ldots ,x_{k}=a_{1},\ldots ,a_{k}}is called aleaf dead end. 
This happens exactly when the variablexk+1{\displaystyle x_{k+1}}is a leaf of the search tree (which correspond to nodes having only leaves as children in the figures of this article.) The backjumping algorithm by John Gaschnig does a backjump only in leaf dead ends.[3]In other words, it works differently from backtracking only when every possible value ofxk+1{\displaystyle x_{k+1}}has been tested and resulted inconsistent without the need of branching over another variable. A safe jump can be found by simply evaluating, for every valueak+1{\displaystyle a_{k+1}}, the shortest prefix ofx1,…,xk=a1,…,ak{\displaystyle x_{1},\ldots ,x_{k}=a_{1},\ldots ,a_{k}}inconsistent withxk+1=ak+1{\displaystyle x_{k+1}=a_{k+1}}. In other words, ifak+1{\displaystyle a_{k+1}}is a possible value forxk+1{\displaystyle x_{k+1}}, the algorithm checks the consistency of the following evaluations: The smallest index (lowest the listing) for which evaluations are inconsistent would be a safe jump ifxk+1=ak+1{\displaystyle x_{k+1}=a_{k+1}}were the only possible value forxk+1{\displaystyle x_{k+1}}. Since every variable can usually take more than one value, the maximal index that comes out from the check for each value is a safe jump, and is the point where John Gaschnig's algorithm jumps. In practice, the algorithm can check the evaluations above at the same time it is checking the consistency ofxk+1=ak+1{\displaystyle x_{k+1}=a_{k+1}}. The previous algorithm only backjumps when the values of a variable can be shown inconsistent with the current partial solution without further branching. In other words, it allows for a backjump only at leaf nodes in the search tree. An internal node of the search tree represents an assignment of a variable that is consistent with the previous ones. If no solution extends this assignment, the previous algorithm always backtracks: no backjump is done in this case. Backjumping at internal nodes cannot be done as for leaf nodes. Indeed, if some evaluations ofxk+1{\displaystyle x_{k+1}}required branching, it is because they are consistent with the current assignment. As a result, searching for a prefix that is inconsistent with these values of the last variable does not succeed. In such cases, what proved an evaluationxk+1=ak+1{\displaystyle x_{k+1}=a_{k+1}}not to be part of a solution with the current partial evaluationx1,…,xk{\displaystyle x_{1},\ldots ,x_{k}}is therecursivesearch. In particular, the algorithm "knows" that no solution exists from this point on because it comes back to this node instead of stopping after having found a solution. This return is due to a number ofdead ends, points where the algorithm has proved a partial solution inconsistent. In order to further backjump, the algorithm has to take into account that the impossibility of finding solutions is due to these dead ends. In particular, the safe jumps are indexes of prefixes that still make these dead ends to be inconsistent partial solutions. In other words, when all values ofxk+1{\displaystyle x_{k+1}}have been tried, the algorithm can backjump to a previous variablexi{\displaystyle x_{i}}provided that the current truth evaluation ofx1,…,xi{\displaystyle x_{1},\ldots ,x_{i}}is inconsistent with all the truth evaluations ofxk+1,xk+2,...{\displaystyle x_{k+1},x_{k+2},...}in the leaf nodes that are descendants of the nodexk+1{\displaystyle x_{k+1}}. 
Due to the potentially high number of nodes that are in the subtree of {\displaystyle x_{k+1}}, the information that is necessary to safely backjump from {\displaystyle x_{k+1}} is collected during the visit of its subtree. Finding a safe jump can be simplified by two considerations. The first is that the algorithm needs a safe jump, but still works with a jump that is not the highest possible safe jump. The second simplification is that nodes in the subtree of {\displaystyle x_{l}} that have been skipped by a backjump can be ignored while looking for a backjump for {\displaystyle x_{l}}. More precisely, all nodes skipped by a backjump from node {\displaystyle x_{m}} up to node {\displaystyle x_{l}} are irrelevant to the subtree rooted at {\displaystyle x_{m}}, and so are their other subtrees. Indeed, if an algorithm went down from node {\displaystyle x_{l}} to {\displaystyle x_{m}} via a path but backjumps on its way back, then it could have gone directly from {\displaystyle x_{l}} to {\displaystyle x_{m}} instead: the backjump indicates that the nodes between {\displaystyle x_{l}} and {\displaystyle x_{m}} are irrelevant to the subtree rooted at {\displaystyle x_{m}}. In other words, a backjump indicates that the visit of a region of the search tree was a mistake. This part of the search tree can therefore be ignored when considering a possible backjump from {\displaystyle x_{l}} or from one of its ancestors. This fact can be exploited by collecting, in each node, a set of previously assigned variables whose evaluation suffices to prove that no solution exists in the subtree rooted at the node. This set is built during the execution of the algorithm. When retracting from a node, the variable of the node is removed from this set, and the set is merged into the set of the destination of backtracking or backjumping. Since nodes that are skipped by backjumping are never retracted from, their sets are automatically ignored. The rationale of graph-based backjumping is that a safe jump can be found by checking which of the variables {\displaystyle x_{1},\ldots ,x_{k}} are in a constraint with the variables {\displaystyle x_{k+1},x_{k+2},...} that are instantiated in leaf nodes. For every leaf node and every variable {\displaystyle x_{i}} of index {\displaystyle i>k} that is instantiated there, the indexes less than or equal to {\displaystyle k} whose variable is in a constraint with {\displaystyle x_{i}} can be used to find safe jumps. In particular, when all values for {\displaystyle x_{k+1}} have been tried, this set contains the indexes of the variables whose evaluations allow proving that no solution can be found by visiting the subtree rooted at {\displaystyle x_{k+1}}. As a result, the algorithm can backjump to the highest index in this set. The fact that nodes skipped by backjumping can be ignored when considering a further backjump can be exploited by the following algorithm. When retracting from a leaf node, the set of variables that are in a constraint with it is created and "sent back" to its parent, or to an ancestor in case of backjumping. At every internal node, a set of variables is maintained. Every time a set of variables is received from one of its children or descendants, its variables are added to the maintained set. When further backtracking or backjumping from the node, the variable of the node is removed from this set, and the set is sent to the node that is the destination of backtracking or backjumping.
This algorithm works because the set maintained in a node collects all variables that are relevant to proving unsatisfiability in the leaves that are descendants of this node. Since sets of variables are only sent when retracting from nodes, the sets collected at nodes skipped by backjumping are automatically ignored. Conflict-based backjumping (also known as conflict-directed backjumping) is a more refined algorithm, sometimes able to achieve larger backjumps. It is based on checking not only the common presence of two variables in the same constraint, but also whether the constraint actually caused any inconsistency. In particular, this algorithm collects one of the violated constraints in every leaf. At every node, the highest index of a variable that occurs in one of the constraints collected at the leaves is a safe jump. While the choice of violated constraint in each leaf does not affect the safety of the resulting jump, choosing constraints over variables of the lowest possible indices produces larger backjumps. For this reason, conflict-based backjumping orders constraints in such a way that constraints over lower-index variables are preferred over constraints over higher-index variables. Formally, a constraint {\displaystyle C} is preferred over another one {\displaystyle D} if the highest index of a variable in {\displaystyle C} but not in {\displaystyle D} is lower than the highest index of a variable in {\displaystyle D} but not in {\displaystyle C}. In other words, excluding common variables, the constraint whose variables have the lower indices is preferred. In a leaf node, the algorithm chooses the lowest index {\displaystyle i} such that {\displaystyle x_{1},\ldots ,x_{i}} is inconsistent with the last variable evaluated in the leaf. Among the constraints that are violated in this evaluation, it chooses the most preferred one, and collects all its indices less than {\displaystyle k+1}. This way, when the algorithm comes back to the variable {\displaystyle x_{k+1}}, the highest collected index identifies a safe jump. In practice, this algorithm is simplified by collecting all indices in a single set, instead of creating a set for every value of {\displaystyle k}. In particular, the algorithm collects, in each node, all sets coming from its descendants that have not been skipped by backjumping. When retracting from this node, the variable of the node is removed from this set, and the set is merged into that of the destination of backtracking or backjumping. Conflict-directed backjumping was proposed for constraint satisfaction problems by Patrick Prosser in his seminal 1993 paper.[4]
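A minimal sketch of conflict-directed backjumping for a binary CSP with a fixed variable order (a simplification for illustration, not a faithful reproduction of Prosser's formulation): each depth keeps a conflict set of the earlier levels whose assignments caused consistency checks to fail, and an exhausted domain triggers a jump to the deepest level in that set, with the remaining blame merged upward.

def cbj_solve(variables, domains, consistent):
    """Return one solution as a dict, or None if the CSP is unsatisfiable.
    consistent(v1, a1, v2, a2) must say whether two assignments are compatible."""
    n = len(variables)
    assignment = {}
    pending = [list(domains[v]) for v in variables]    # values still to try at each depth
    conflicts = [set() for _ in range(n)]              # depths blamed for failed checks
    i = 0
    while i < n:
        var, placed = variables[i], False
        while pending[i] and not placed:
            val = pending[i].pop()
            culprit = next((j for j in range(i)
                            if not consistent(var, val, variables[j], assignment[variables[j]])),
                           None)
            if culprit is None:
                assignment[var] = val
                placed = True
            else:
                conflicts[i].add(culprit)              # remember which earlier level clashed
        if placed:
            i += 1
            continue
        if not conflicts[i]:
            return None                                # nothing earlier to blame: unsatisfiable
        jump = max(conflicts[i])                       # deepest culprit = the jump target
        conflicts[jump] |= conflicts[i] - {jump}       # pass the remaining blame upward
        for k in range(jump + 1, i + 1):               # discard the skipped levels entirely
            assignment.pop(variables[k], None)
            pending[k] = list(domains[variables[k]])
            conflicts[k] = set()
        assignment.pop(variables[jump], None)
        i = jump
    return assignment

# Example: 4-queens, where variable c is the row of the queen in column c.
columns = list(range(4))
row_domains = {c: range(4) for c in columns}
no_attack = lambda c1, r1, c2, r2: r1 != r2 and abs(r1 - r2) != abs(c1 - c2)
print(cbj_solve(columns, row_domains, no_attack))    # e.g. {0: 2, 1: 0, 2: 3, 3: 1}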
https://en.wikipedia.org/wiki/Backjumping
Backward chaining(orbackward reasoning) is aninferencemethod described colloquially as working backward from the goal. It is used inautomated theorem provers,inference engines,proof assistants, and otherartificial intelligenceapplications.[1] Ingame theory, researchers apply it to (simpler)subgamesto find a solution to the game, in a process calledbackward induction. In chess, it is calledretrograde analysis, and it is used to generate table bases forchess endgamesforcomputer chess. Backward chaining is implemented inlogic programmingbySLD resolution. Both rules are based on themodus ponensinference rule. It is one of the two most commonly used methods ofreasoningwithinference rulesandlogical implications– the other isforward chaining. Backward chaining systems usually employ adepth-first searchstrategy, e.g.Prolog.[2] Backward chaining starts with a list ofgoals(or ahypothesis) and works backwards from theconsequentto theantecedentto see if anydatasupports any of these consequents.[3]Aninference engineusing backward chaining would search theinferencerules until it finds one with a consequent (Thenclause) that matches a desired goal. If the antecedent (Ifclause) of that rule is not known to be true, then it is added to the list of goals (for one's goal to be confirmed one must also provide data that confirms this new rule). For example, suppose a new pet, Fritz, is delivered in an opaque box along with two facts about Fritz: The goal is to decide whether Fritz is green, based on arule basecontaining the following four rules: With backward reasoning, an inference engine can determine whether Fritz is green in four steps. To start, the query is phrased as a goal assertion that is to be proven: "Fritz is green". 1. Fritz is substituted for X in rule #3 to see if its consequent matches the goal, so rule #3 becomes: Since the consequent matches the goal ("Fritz is green"), the rules engine now needs to see if the antecedent ("Fritz is a frog") can be proven. The antecedent, therefore, becomes the new goal: 2. Again substituting Fritz for X, rule #1 becomes: Since the consequent matches the current goal ("Fritz is a frog"), the inference engine now needs to see if the antecedent ("Fritz croaks and eats flies") can be proven. The antecedent, therefore, becomes the new goal: 3. Since this goal is a conjunction of two statements, the inference engine breaks it into two sub-goals, both of which must be proven: 4. To prove both of these sub-goals, the inference engine sees that both of these sub-goals were given as initial facts. Therefore, the conjunction is true: therefore the antecedent of rule #1 is true and the consequent must be true: therefore the antecedent of rule #3 is true and the consequent must be true: This derivation, therefore, allows the inference engine to prove that Fritz is green. Rules #2 and #4 were not used. Note that the goals always match the affirmed versions of the consequents of implications (and not the negated versions as inmodus tollens) and even then, their antecedents are then considered as the new goals (and not the conclusions as inaffirming the consequent), which ultimately must match known facts (usually defined as consequents whose antecedents are always true); thus, the inference rule used ismodus ponens. Because the list of goals determines which rules are selected and used, this method is calledgoal-driven, in contrast todata-drivenforward-chaininginference. The backward chaining approach is often employed byexpert systems. 
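A minimal sketch of the goal-driven procedure just described, using the Fritz rule base with the variable X already instantiated to Fritz for brevity: prove() matches a goal against the facts or against the consequent of a rule, and then recursively proves that rule's antecedents as sub-goals.

facts = {"Fritz croaks", "Fritz eats flies"}
rules = [
    ({"Fritz croaks", "Fritz eats flies"}, "Fritz is a frog"),    # rule 1
    ({"Fritz chirps", "Fritz sings"}, "Fritz is a canary"),       # rule 2
    ({"Fritz is a frog"}, "Fritz is green"),                      # rule 3
    ({"Fritz is a canary"}, "Fritz is yellow"),                   # rule 4
]

def prove(goal, depth=0):
    """Backward chaining: a goal holds if it is a known fact, or if some rule
    concludes it and all of that rule's antecedents can themselves be proven."""
    print("  " * depth + "goal: " + goal)
    if goal in facts:
        return True
    return any(consequent == goal and all(prove(a, depth + 1) for a in antecedents)
               for antecedents, consequent in rules)

print(prove("Fritz is green"))   # True, via rules 3 and 1; rules 2 and 4 are never used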
Programming languages such asProlog,Knowledge MachineandECLiPSesupport backward chaining within their inference engines.[4]
https://en.wikipedia.org/wiki/Backward_chaining
Incomputer science, anenumeration algorithmis analgorithmthatenumeratesthe answers to acomputational problem. Formally, such an algorithm applies to problems that take an input and produce a list of solutions, similarly tofunction problems. For each input, the enumeration algorithm must produce the list of all solutions, without duplicates, and then halt. The performance of an enumeration algorithm is measured in terms of the time required to produce the solutions, either in terms of thetotal timerequired to produce all solutions, or in terms of the maximaldelaybetween two consecutive solutions and in terms of apreprocessingtime, counted as the time before outputting the first solution. This complexity can be expressed in terms of the size of the input, the size of each individual output, or the total size of the set of all outputs, similarly to what is done withoutput-sensitive algorithms. An enumeration problemP{\displaystyle P}is defined as a relationR{\displaystyle R}overstringsof an arbitraryalphabetΣ{\displaystyle \Sigma }: R⊆Σ∗×Σ∗{\displaystyle R\subseteq \Sigma ^{*}\times \Sigma ^{*}} An algorithm solvesP{\displaystyle P}if for every inputx{\displaystyle x}the algorithm produces the (possibly infinite) sequencey{\displaystyle y}such thaty{\displaystyle y}has no duplicate andz∈y{\displaystyle z\in y}if and only if(x,z)∈R{\displaystyle (x,z)\in R}. The algorithm should halt if the sequencey{\displaystyle y}is finite. Enumeration problems have been studied in the context ofcomputational complexity theory, and severalcomplexity classeshave been introduced for such problems. A very general such class isEnumP,[1]the class of problems for which the correctness of a possible output can be checked inpolynomial timein the input and output. Formally, for such a problem, there must exist an algorithm A which takes as input the problem inputx, the candidate outputy, and solves thedecision problemof whetheryis a correct output for the inputx, in polynomial time inxandy. For instance, this class contains all problems that amount to enumerating thewitnessesof a problem in theclassNP. Other classes that have been defined include the following. In the case of problems that are also inEnumP, these problems are ordered from least to most specific: The notion of enumeration algorithms is also used in the field ofcomputability theoryto define some high complexity classes such asRE, the class of allrecursively enumerableproblems. This is the class of sets for which there exist an enumeration algorithm that will produce all elements of the set: the algorithm may run forever if the set is infinite, but each solution must be produced by the algorithm after a finite time.
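In practice an enumeration algorithm is naturally expressed as a generator that emits each solution as soon as it is found, so that both the preprocessing time and the delay between consecutive outputs can be observed. The following sketch (an illustration of the interface, not an efficient algorithm) enumerates, without duplicates, the index subsets of a list that sum to a target; checking a candidate is polynomial in the input and output, as required for membership in EnumP.

from itertools import combinations

def subsets_summing_to(items, target):
    """Enumerate, without duplicates, every index subset of items whose values sum to target.
    Each solution is yielded as soon as it is found, so total time and the delay
    between consecutive outputs can both be measured."""
    for size in range(len(items) + 1):
        for indices in combinations(range(len(items)), size):
            if sum(items[i] for i in indices) == target:
                yield indices

for solution in subsets_summing_to((2, 3, 5, 7), 10):
    print(solution)    # (1, 3) then (0, 1, 2): the subsets {3, 7} and {2, 3, 5}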
https://en.wikipedia.org/wiki/Enumeration_algorithm
A standard Sudoku contains 81 cells, in a 9×9 grid, and has 9 boxes, each box being the intersection of the first, middle, or last 3 rows, and the first, middle, or last 3 columns. Each cell may contain a number from one to nine, and each number can only occur once in each row, column, and box. A Sudoku starts with some cells containing numbers (clues), and the goal is to solve the remaining cells. Proper Sudokus have one solution.[1] Players and investigators use a wide range of computer algorithms to solve Sudokus, study their properties, and make new puzzles, including Sudokus with interesting symmetries and other properties. There are several computer algorithms that will solve 9×9 puzzles (n = 9) in fractions of a second, but combinatorial explosion occurs as n increases, creating limits to the properties of Sudokus that can be constructed, analyzed, and solved. Some hobbyists have developed computer programs that will solve Sudoku puzzles using a backtracking algorithm, which is a type of brute force search.[3] Backtracking is a depth-first search (in contrast to a breadth-first search), because it will completely explore one branch to a possible solution before moving to another branch. Although it has been established that approximately 5.96 × 10^26 final grids exist, a brute force algorithm can be a practical method to solve Sudoku puzzles. A brute force algorithm visits the empty cells in some order, filling in digits sequentially, or backtracking when the number is found to be not valid.[4][5][6][7] Briefly, a program would solve a puzzle by placing the digit "1" in the first cell and checking if it is allowed to be there. If there are no violations (checking row, column, and box constraints) then the algorithm advances to the next cell and places a "1" in that cell. When checking for violations, if it is discovered that the "1" is not allowed, the value is advanced to "2". If a cell is discovered where none of the 9 digits is allowed, then the algorithm leaves that cell blank and moves back to the previous cell. The value in that cell is then incremented by one. This is repeated until the allowed value in the last (81st) cell is discovered. The animation shows how a Sudoku is solved with this method. The puzzle's clues (red numbers) remain fixed while the algorithm tests each unsolved cell with a possible solution. Notice that the algorithm may discard all the previously tested values if it finds the existing set does not fulfill the constraints of the Sudoku. This brute force approach has its advantages; the disadvantage is that the solving time may be slow compared to algorithms modeled after deductive methods. One programmer reported that such an algorithm may typically require as few as 15,000 cycles, or as many as 900,000 cycles, to solve a Sudoku, each cycle being the change in position of a "pointer" as it moves through the cells of a Sudoku.[8][9] A different approach, which also uses backtracking, draws on the fact that in the solution to a standard Sudoku the distribution of every individual symbol (value) must be one of only 46656 patterns. In manual Sudoku solving this technique is referred to as pattern overlay or using templates, and it is confined to filling in the last values only. A library with all the possible patterns may be loaded or created at program start. Every given symbol is then assigned a filtered set of those patterns that are consistent with the given clues.
In the last step, the actual backtracking part, the solver tries to combine or overlay patterns from these sets in a non-conflicting way until the one permissible combination is hit upon. The implementation is exceptionally easy when using bit vectors, because all the tests need only bit-wise logical operations instead of nested iterations across rows and columns. Significant optimization can be achieved by reducing the sets of patterns even further during filtering: by testing every questionable pattern against all the reduced sets that were already accepted for the other symbols, the total number of patterns left for backtracking is greatly diminished. And as with all Sudoku brute-force techniques, run time can be vastly reduced by first applying some of the simplest solving practices, which may fill in some "easy" values. A Sudoku can be constructed to work against backtracking. Assuming the solver works from top to bottom (as in the animation), a puzzle with few clues (17), no clues in the top row, and a solution whose first row is "987654321" would work in opposition to the algorithm. The program would then spend significant time "counting" upward before it arrives at the grid which satisfies the puzzle. In one case, a programmer found that a brute force program required six hours to arrive at the solution for such a Sudoku (albeit using a 2008-era computer); such a Sudoku can now be solved in less than 1 second using an exhaustive search routine and faster processors.[10]: 25  Sudoku can also be solved using stochastic (random-based) algorithms.[11][12] An example of this method is to randomly assign numbers to the blank cells, count the number of constraint violations, and then "shuffle" the inserted numbers until the number of violations drops to zero, at which point a solution to the puzzle has been found. Approaches for shuffling the numbers include simulated annealing, genetic algorithms and tabu search. Stochastic-based algorithms are known to be fast, though perhaps not as fast as deductive techniques. Unlike the latter, however, optimisation algorithms do not necessarily require problems to be logic-solvable, giving them the potential to solve a wider range of problems. Algorithms designed for graph colouring are also known to perform well with Sudokus.[13] It is also possible to express a Sudoku as an integer linear programming problem. Such approaches get close to a solution quickly, and can then use branching towards the end. The simplex algorithm is able to solve proper Sudokus, indicating if the Sudoku is not valid (no solution). If there is more than one solution (non-proper Sudokus) the simplex algorithm will generally yield a solution with fractional amounts of more than one digit in some squares. However, for proper Sudokus, linear programming presolve techniques alone will deduce the solution without any need for simplex iterations. The logical rules used by presolve techniques for the reduction of LP problems include the set of logical rules used by humans to solve Sudokus. A Sudoku may also be modelled as a constraint satisfaction problem. In his paper Sudoku as a Constraint Problem,[14] Helmut Simonis describes many reasoning algorithms based on constraints which can be applied to model and solve problems. Some constraint solvers include a method to model and solve Sudokus, and a program may require fewer than 100 lines of code to solve a simple Sudoku.[15][16] If the code employs a strong reasoning algorithm, incorporating backtracking is only needed for the most difficult Sudokus.
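The cell-by-cell brute force search described at the start of this section fits in a few lines of code. The following Python fragment is a minimal sketch rather than a tuned solver; the grid representation (a 9×9 list of lists with 0 marking empty cells), the fixed top-to-bottom visiting order and the function names are assumptions made for the example.

    def allowed(grid, r, c, d):
        """Check the row, column and 3x3 box constraints for digit d at cell (r, c)."""
        if any(grid[r][j] == d for j in range(9)):
            return False
        if any(grid[i][c] == d for i in range(9)):
            return False
        br, bc = 3 * (r // 3), 3 * (c // 3)
        return all(grid[br + i][bc + j] != d for i in range(3) for j in range(3))

    def solve(grid):
        """Visit empty cells in reading order, try digits 1..9, back up when none fits."""
        for r in range(9):
            for c in range(9):
                if grid[r][c] == 0:
                    for d in range(1, 10):
                        if allowed(grid, r, c, d):
                            grid[r][c] = d
                            if solve(grid):
                                return True
                            grid[r][c] = 0      # undo and try the next digit
                    return False                # no digit fits here: backtrack
        return True                             # no empty cell left: solved

Calling solve(grid) either completes the grid in place and returns True, or returns False if the clues admit no solution; the "easy value" preprocessing and smarter cell ordering mentioned above would be layered on top of this skeleton.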
An algorithm combining a constraint-model-based algorithm with backtracking would have the advantage of fast solving time – of the order of a few milliseconds[17] – and the ability to solve all Sudokus.[5] Sudoku puzzles may be described as an exact cover problem, or more precisely, an exact hitting set problem. This allows for an elegant description of the problem and an efficient solution. Modelling Sudoku as an exact cover problem and using an algorithm such as Knuth's Algorithm X and his Dancing Links technique "is the method of choice for rapid finding [measured in microseconds] of all possible solutions to Sudoku puzzles."[18] An alternative approach is the use of Gauss elimination in combination with column and row striking. Let Q be the 9×9 Sudoku matrix, N = {1, 2, 3, 4, 5, 6, 7, 8, 9}, and X represent a generic row, column, or block. N supplies symbols for filling Q as well as the index set for the 9 elements of any X. The given elements q in Q represent a univalent relation from Q to N. The solution R is a total relation and hence a function. Sudoku rules require that the restriction of R to X is a bijection, so any partial solution C, restricted to an X, is a partial permutation of N. Let T = {X : X is a row, column, or block of Q}, so T has 27 elements. An arrangement is either a partial permutation or a permutation on N. Let Z be the set of all arrangements on N. A partial solution C can be reformulated to include the rules as a composition of relations A (one-to-three) and B requiring compatible arrangements. Solutions of the puzzle, i.e. suggestions for new q to enter Q, come from the prohibited arrangements (the complement of C in Q × Z); useful tools in this calculus of relations are residuals.
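The exact cover formulation can be prototyped without the dancing-links pointer machinery. The following Python sketch is a set-based search in the spirit of Algorithm X, not Knuth's implementation; the data layout and names are assumptions made for the example. The instance is given as a dictionary Y mapping each candidate row to the constraints it satisfies; for Sudoku, a row would be a (row, column, digit) choice and the constraints the cell, row, column and box conditions it fulfils.

    def exact_covers(Y):
        """Enumerate exact covers: subsets of Y's rows covering every constraint once."""
        X = {}                                    # constraint -> set of rows covering it
        for row, cols in Y.items():
            for c in cols:
                X.setdefault(c, set()).add(row)

        def select(r):                            # remove rows that clash with choosing r
            removed = []
            for j in Y[r]:
                for i in X[j]:
                    for k in Y[i]:
                        if k != j:
                            X[k].remove(i)
                removed.append((j, X.pop(j)))
            return removed

        def restore(removed):                     # undo select(), in reverse order
            for j, rows in reversed(removed):
                X[j] = rows
                for i in rows:
                    for k in Y[i]:
                        if k != j:
                            X[k].add(i)

        def search(solution):
            if not X:                             # every constraint covered exactly once
                yield list(solution)
                return
            c = min(X, key=lambda col: len(X[col]))   # most constrained column first
            for r in list(X[c]):
                solution.append(r)
                removed = select(r)
                yield from search(solution)
                restore(removed)
                solution.pop()

        return search([])

    # Toy instance with constraints 1..4; the only exact cover is rows A and C.
    print(list(exact_covers({"A": [1, 4], "B": [1, 2], "C": [2, 3], "D": [3]})))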
https://en.wikipedia.org/wiki/Sudoku_solving_algorithms
Incomputer science, anLL parser(left-to-right,leftmost derivation) is atop-down parserfor a restrictedcontext-free language. It parses the input fromLeft to right, performingLeftmost derivationof the sentence. An LL parser is called an LL(k) parser if it usesktokensoflookaheadwhen parsing a sentence. A grammar is called anLL(k) grammarif an LL(k) parser can be constructed from it. A formal language is called an LL(k) language if it has an LL(k) grammar. The set of LL(k) languages is properly contained in that of LL(k+1) languages, for eachk≥ 0.[1]A corollary of this is that not all context-free languages can be recognized by an LL(k) parser. An LL parser is called LL-regular (LLR) if it parses anLL-regular language.[clarification needed][2][3][4]The class ofLLR grammarscontains every LL(k) grammar for everyk. For every LLR grammar there exists an LLR parser that parses the grammar in linear time.[citation needed] Two nomenclative outlier parser types are LL(*) and LL(finite). A parser is called LL(*)/LL(finite) if it uses the LL(*)/LL(finite) parsing strategy.[5][6]LL(*) and LL(finite) parsers are functionally closer toPEGparsers. An LL(finite) parser can parse an arbitrary LL(k) grammar optimally in the amount of lookahead and lookahead comparisons. The class of grammars parsable by the LL(*) strategy encompasses some context-sensitive languages due to the use of syntactic and semantic predicates and has not been identified. It has been suggested that LL(*) parsers are better thought of asTDPLparsers.[7]Against the popular misconception, LL(*) parsers are not LLR in general, and are guaranteed by construction to perform worse on average (super-linear against linear time) and far worse in the worst-case (exponential against linear time). LL grammars, particularly LL(1) grammars, are of great practical interest, as parsers for these grammars are easy to construct, and manycomputer languagesare designed to be LL(1) for this reason.[8]LL parsers may be table-based,[citation needed]i.e. similar toLR parsers, but LL grammars can also be parsed byrecursive descent parsers. According to Waite and Goos (1984),[9]LL(k) grammars were introduced by Stearns and Lewis (1969).[10] For a givencontext-free grammar, the parser attempts to find theleftmost derivation. Given an example grammarG: the leftmost derivation forw=((i+i)+i){\displaystyle w=((i+i)+i)}is: Generally, there are multiple possibilities when selecting a rule to expand the leftmost non-terminal. In step 2 of the previous example, the parser must choose whether to apply rule 2 or rule 3: To be efficient, the parser must be able to make this choice deterministically when possible, without backtracking. For some grammars, it can do this by peeking on the unread input (without reading). In our example, if the parser knows that the next unread symbol is(, the only correct rule that can be used is 2. Generally, an LL(k) parser can look ahead atksymbols. However, given a grammar, the problem of determining if there exists a LL(k) parser for somekthat recognizes it is undecidable. For eachk, there is a language that cannot be recognized by an LL(k) parser, but can be by anLL(k+ 1). We can use the above analysis to give the following formal definition: LetGbe a context-free grammar andk≥ 1. 
We say thatGis LL(k), if and only if for any two leftmost derivations: the following condition holds: the prefix of the stringu{\displaystyle u}of lengthk{\displaystyle k}equals the prefix of the stringv{\displaystyle v}of lengthkimpliesβ=γ{\displaystyle \beta =\gamma }. In this definition,S{\displaystyle S}is the start symbol andA{\displaystyle A}any non-terminal. The already derived inputw{\displaystyle w}, and yet unreadu{\displaystyle u}andv{\displaystyle v}are strings of terminals. The Greek lettersα{\displaystyle \alpha },β{\displaystyle \beta }andγ{\displaystyle \gamma }represent any string of both terminals and non-terminals (possibly empty). The prefix length corresponds to the lookahead buffer size, and the definition says that this buffer is enough to distinguish between any two derivations of different words. The LL(k) parser is adeterministic pushdown automatonwith the ability to peek on the nextkinput symbols without reading. This peek capability can be emulated by storing the lookahead buffer contents in the finite state space, since both buffer and input alphabet are finite in size. As a result, this does not make the automaton more powerful, but is a convenient abstraction. The stack alphabet isΓ=N∪Σ{\displaystyle \Gamma =N\cup \Sigma }, where: The parser stack initially contains the starting symbol above the EOI:[ S$]. During operation, the parser repeatedly replaces the symbolX{\displaystyle X}on top of the stack: If the last symbol to be removed from the stack is the EOI, the parsing is successful; the automaton accepts via an empty stack. The states and the transition function are not explicitly given; they are specified (generated) using a more convenientparse tableinstead. The table provides the following mapping: If the parser cannot perform a valid transition, the input is rejected (empty cells). To make the table more compact, only the non-terminal rows are commonly displayed, since the action is the same for terminals. To explain an LL(1) parser's workings we will consider the following small LL(1) grammar: and parse the following input: An LL(1) parsing table for a grammar has a row for each of the non-terminals and a column for each terminal (including the special terminal, represented here as$, that is used to indicate the end of the input stream). Each cell of the table may point to at most one rule of the grammar (identified by its number). For example, in the parsing table for the above grammar, the cell for the non-terminal 'S' and terminal '(' points to the rule number 2: The algorithm to construct a parsing table is described in a later section, but first let's see how the parser uses the parsing table to process its input. In each step, the parser reads the next-available symbol from the input stream, and the top-most symbol from the stack. If the input symbol and the stack-top symbol match, the parser discards them both, leaving only the unmatched symbols in the input stream and on the stack. Thus, in its first step, the parser reads the input symbol '(' and the stack-top symbol 'S'. The parsing table instruction comes from the column headed by the input symbol '(' and the row headed by the stack-top symbol 'S'; this cell contains '2', which instructs the parser to apply rule (2). The parser has to rewrite 'S' to '(S+F)' on the stack by removing 'S' from stack and pushing ')', 'F', '+', 'S', '(' onto the stack, and this writes the rule number 2 to the output. 
The stack then becomes: In the second step, the parser removes the '(' from its input stream and from its stack, since they now match. The stack now becomes: Now the parser has an 'a'on its input stream and an 'S' as its stack top. The parsing table instructs it to apply rule (1) from the grammar and write the rule number 1 to the output stream. The stack becomes: The parser now has an 'a'on its input stream and an 'F' as its stack top. The parsing table instructs it to apply rule (3) from the grammar and write the rule number 3 to the output stream. The stack becomes: The parser now has an 'a'on the input stream and an 'a' at its stack top. Because they are the same, it removes it from the input stream and pops it from the top of the stack. The parser then has an '+' on the input stream and '+' is at the top of the stack meaning, like with 'a', it is popped from the stack and removed from the input stream. This results in: In the next three steps the parser will replace 'F' on the stack by 'a', write the rule number 3 to the output stream and remove the 'a' and ')' from both the stack and the input stream. The parser thus ends with '$' on both its stack and its input stream. In this case the parser will report that it has accepted the input string and write the following list of rule numbers to the output stream: This is indeed a list of rules for aleftmost derivationof the input string, which is: Below follows a C++ implementation of a table-based LL parser for the example language: Outputs: As can be seen from the example, the parser performs three types of steps depending on whether the top of the stack is a nonterminal, a terminal or the special symbol$: These steps are repeated until the parser stops, and then it will have either completely parsed the input and written aleftmost derivationto the output stream or it will have reported an error. In order to fill the parsing table, we have to establish what grammar rule the parser should choose if it sees a nonterminalAon the top of its stack and a symbolaon its input stream. It is easy to see that such a rule should be of the formA→wand that the language corresponding towshould have at least one string starting witha. For this purpose we define theFirst-setofw, written here asFi(w), as the set of terminals that can be found at the start of some string inw, plus ε if the empty string also belongs tow. Given a grammar with the rulesA1→w1, ...,An→wn, we can compute theFi(wi) andFi(Ai) for every rule as follows: The result is the least fixed point solution to the following system: where, for sets of wordsUandV, the truncated product is defined byU⋅V={(uv):1∣u∈U,v∈V}{\displaystyle U\cdot V=\{(uv):1\mid u\in U,v\in V\}}, and w:1 denotes the initial length-1 prefix of words w of length 2 or more, orw, itself, if w has length 0 or 1. Unfortunately, the First-sets are not sufficient to compute the parsing table. This is because a right-hand sidewof a rule might ultimately be rewritten to the empty string. So the parser should also use the ruleA→wifεis inFi(w) and it sees on the input stream a symbol that could followA. Therefore, we also need theFollow-setofA, written asFo(A) here, which is defined as the set of terminalsasuch that there is a string of symbolsαAaβthat can be derived from the start symbol. We use$as a special terminal indicating end of input stream, andSas start symbol. 
Computing the Follow-sets for the nonterminals in a grammar can be done as follows: This provides the least fixed point solution to the following system: Now we can define exactly which rules will appear where in the parsing table. IfT[A,a] denotes the entry in the table for nonterminalAand terminala, then Equivalently:T[A,a] contains the ruleA→wfor eacha∈Fi(w)·Fo(A). If the table contains at most one rule in every one of its cells, then the parser will always know which rule it has to use and can therefore parse strings without backtracking. It is in precisely this case that the grammar is called anLL(1) grammar. The construction for LL(1) parsers can be adapted to LL(k) fork> 1 with the following modifications: where an input is suffixed bykend-markers$, to fully account for theklookahead context. This approach eliminates special cases for ε, and can be applied equally well in the LL(1) case. Until the mid-1990s, it was widely believed thatLL(k) parsing[clarify](fork> 1) was impractical,[11]: 263–265since the parser table would haveexponentialsize inkin the worst case. This perception changed gradually after the release of thePurdue Compiler Construction Tool Setaround 1992, when it was demonstrated that manyprogramming languagescan be parsed efficiently by an LL(k) parser without triggering the worst-case behavior of the parser. Moreover, in certain cases LL parsing is feasible even with unlimited lookahead. By contrast, traditional parser generators likeyaccuseLALR(1)parser tables to construct a restrictedLR parserwith a fixed one-token lookahead. As described in the introduction, LL(1) parsers recognize languages that have LL(1) grammars, which are a special case of context-free grammars; LL(1) parsers cannot recognize all context-free languages. The LL(1) languages are a proper subset of the LR(1) languages, which in turn are a proper subset of all context-free languages. In order for a context-free grammar to be an LL(1) grammar, certain conflicts must not arise, which we describe in this section. LetAbe a non-terminal. FIRST(A) is (defined to be) the set of terminals that can appear in the first position of any string derived fromA. FOLLOW(A) is the union over:[12] There are two main types of LL(1) conflicts: The FIRST sets of two different grammar rules for the same non-terminal intersect. An example of an LL(1) FIRST/FIRST conflict: FIRST(E) = {b, ε} and FIRST(Ea) = {b,a}, so when the table is drawn, there is conflict under terminalbof production ruleS. Left recursionwill cause a FIRST/FIRST conflict with all alternatives. The FIRST and FOLLOW set of a grammar rule overlap. With anempty string(ε) in the FIRST set, it is unknown which alternative to select. An example of an LL(1) conflict: The FIRST set ofAis {a, ε}, and the FOLLOW set is {a}. A common left-factor is "factored out". becomes Can be applied when two alternatives start with the same symbol like a FIRST/FIRST conflict. Another example (more complex) using above FIRST/FIRST conflict example: becomes (merging into a single non-terminal) then through left-factoring, becomes Substituting a rule into another rule to remove indirect or FIRST/FOLLOW conflicts. Note that this may cause a FIRST/FIRST conflict. For a general method, seeremoving left recursion. A simple example for left recursion removal: The following production rule has left recursion on E This rule is nothing but list of Ts separated by '+'. In a regular expression form T ('+' T)*. 
So the rule could be rewritten without left recursion, as a right-recursive rule producing the same list of Ts separated by '+'. Now there is no left recursion and no conflicts on either of the rules. However, not all context-free grammars have an equivalent LL(k) grammar: it can be shown that there are context-free languages for which no LL(k) grammar exists, for any k.
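To tie the pieces together, the following Python sketch drives the parse table of the example grammar (1: S → F, 2: S → ( S + F ), 3: F → a) through exactly the loop described in the worked example above. The hand-written table, the token-list interface and the names are assumptions made for the example.

    # Rules and LL(1) table for the example grammar.
    RULES = {1: ("S", ["F"]), 2: ("S", ["(", "S", "+", "F", ")"]), 3: ("F", ["a"])}
    TABLE = {("S", "a"): 1, ("S", "("): 2, ("F", "a"): 3}   # (stack top, lookahead) -> rule
    NONTERMINALS = {"S", "F"}

    def parse(tokens):
        """Table-driven LL(1) loop: expand nonterminals, match terminals, emit rule numbers."""
        stack, output, pos = ["$", "S"], [], 0    # start symbol above the end marker
        tokens = tokens + ["$"]
        while stack:
            top, look = stack.pop(), tokens[pos]
            if top in NONTERMINALS:
                rule = TABLE.get((top, look))
                if rule is None:
                    raise SyntaxError(f"no rule for ({top}, {look})")
                output.append(rule)
                stack.extend(reversed(RULES[rule][1]))   # push the right-hand side
            elif top == look:
                pos += 1                                 # match terminal (or the $ marker)
            else:
                raise SyntaxError(f"expected {top}, saw {look}")
        return output

    print(parse(list("(a+a)")))   # [2, 1, 3, 3]: the rule list of the worked example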
https://en.wikipedia.org/wiki/LL_parser
Incomputer science,LR parsersare a type ofbottom-up parserthat analysedeterministic context-free languagesin linear time.[1]There are several variants of LR parsers:SLR parsers,LALR parsers,canonical LR(1) parsers,minimal LR(1) parsers, andgeneralized LR parsers(GLR parsers). LR parsers can be generated by aparser generatorfrom aformal grammardefining the syntax of the language to be parsed. They are widely used for the processing ofcomputer languages. An LR parser (left-to-right, rightmost derivation in reverse) reads input text from left to right without backing up (this is true for most parsers), and produces arightmost derivationin reverse: it does abottom-up parse– not atop-down LL parseor ad-hoc parse. The name "LR" is often followed by a numeric qualifier, as in "LR(1)" or sometimes "LR(k)". To avoidbacktrackingor guessing, the LR parser is allowed to peek ahead atklookaheadinputsymbolsbefore deciding how to parse earlier symbols. Typicallykis 1 and is not mentioned. The name "LR" is often preceded by other qualifiers, as in "SLR" and "LALR". The "LR(k)" notation for a grammar was suggested by Knuth to stand for "translatable from left to right with boundk."[1] LR parsers are deterministic; they produce a single correct parse without guesswork or backtracking, in linear time. This is ideal for computer languages, but LR parsers are not suited for human languages which need more flexible but inevitably slower methods. Some methods which can parse arbitrary context-free languages (e.g.,Cocke–Younger–Kasami,Earley,GLR) have worst-case performance of O(n3) time. Other methods which backtrack or yield multiple parses may even take exponential time when they guess badly.[2] The above properties ofL,R, andkare actually shared by allshift-reduce parsers, includingprecedence parsers. But by convention, the LR name stands for the form of parsing invented byDonald Knuth, and excludes the earlier, less powerful precedence methods (for exampleOperator-precedence parser).[1]LR parsers can handle a larger range of languages and grammars than precedence parsers or top-downLL parsing.[3]This is because the LR parser waits until it has seen an entire instance of some grammar pattern before committing to what it has found. An LL parser has to decide or guess what it is seeing much sooner, when it has only seen the leftmost input symbol of that pattern. An LR parser scans and parses the input text in one forward pass over the text. The parser builds up theparse treeincrementally, bottom up, and left to right, without guessing or backtracking. At every point in this pass, the parser has accumulated a list of subtrees or phrases of the input text that have been already parsed. Those subtrees are not yet joined together because the parser has not yet reached the right end of the syntax pattern that will combine them. At step 6 in an example parse, only "A * 2" has been parsed, incompletely. Only the shaded lower-left corner of the parse tree exists. None of the parse tree nodes numbered 7 and above exist yet. Nodes 3, 4, and 6 are the roots of isolated subtrees for variable A, operator *, and number 2, respectively. These three root nodes are temporarily held in a parse stack. The remaining unparsed portion of the input stream is "+ 1". As with other shift-reduce parsers, an LR parser works by doing some combination of Shift steps and Reduce steps. 
If the input has no syntax errors, the parser continues with these steps until all of the input has been consumed and all of the parse trees have been reduced to a single tree representing an entire legal input. LR parsers differ from other shift-reduce parsers in how they decide when to reduce, and how to pick between rules with similar endings. But the final decisions and the sequence of shift or reduce steps are the same. Much of the LR parser's efficiency is from being deterministic. To avoid guessing, the LR parser often looks ahead (rightwards) at the next scanned symbol, before deciding what to do with previously scanned symbols. The lexical scanner works one or more symbols ahead of the parser. Thelookaheadsymbols are the 'right-hand context' for the parsing decision.[4] Like other shift-reduce parsers, an LR parser lazily waits until it has scanned and parsed all parts of some construct before committing to what the combined construct is. The parser then acts immediately on the combination instead of waiting any further. In the parse tree example, the phrase A gets reduced to Value and then to Products in steps 1-3 as soon as lookahead * is seen, rather than waiting any later to organize those parts of the parse tree. The decisions for how to handle A are based only on what the parser and scanner have already seen, without considering things that appear much later to the right. Reductions reorganize the most recently parsed things, immediately to the left of the lookahead symbol. So the list of already-parsed things acts like astack. Thisparse stackgrows rightwards. The base or bottom of the stack is on the left and holds the leftmost, oldest parse fragment. Every reduction step acts only on the rightmost, newest parse fragments. (This accumulative parse stack is very unlike the predictive, leftward-growing parse stack used bytop-down parsers.) Step 6 applies a grammar rule with multiple parts: This matches the stack top holding the parsed phrases "... Products * Value". The reduce step replaces this instance of the rule's right hand side, "Products * Value" by the rule's left hand side symbol, here a larger Products. If the parser builds complete parse trees, the three trees for inner Products, *, and Value are combined by a new tree root for Products. Otherwise,semantic[broken anchor]details from the inner Products and Value are output to some later compiler pass, or are combined and saved in the new Products symbol.[5] In LR parsers, the shift and reduce decisions are potentially based on the entire stack of everything that has been previously parsed, not just on a single, topmost stack symbol. If done in an unclever way, that could lead to very slow parsers that get slower and slower for longer inputs. LR parsers do this with constant speed, by summarizing all the relevant left context information into a single number called the LR(0)parser state. For each grammar and LR analysis method, there is a fixed (finite) number of such states. Besides holding the already-parsed symbols, the parse stack also remembers the state numbers reached by everything up to those points. At every parse step, the entire input text is divided into a stack of previously parsed phrases, a current look-ahead symbol, and the remaining unscanned text. The parser's next action is determined by its current LR(0)state number(rightmost on the stack) and the lookahead symbol. In the steps below, all the black details are exactly the same as in other non-LR shift-reduce parsers. 
LR parser stacks add the state information in purple, summarizing the black phrases to their left on the stack and what syntax possibilities to expect next. Users of an LR parser can usually ignore state information. These states are explained in a later section. At initial step 0, the input stream "A * 2 + 1" is divided into The parse stack begins by holding only initial state 0. When state 0 sees the lookaheadid, it knows to shift thatidonto the stack, and scan the next input symbol*, and advance to state 9. At step 4, the total input stream "A * 2 + 1" is currently divided into The states corresponding to the stacked phrases are 0, 4, and 5. The current, rightmost state on the stack is state 5. When state 5 sees the lookaheadint, it knows to shift thatintonto the stack as its own phrase, and scan the next input symbol+, and advance to state 8. At step 12, all of the input stream has been consumed but only partially organized. The current state is 3. When state 3 sees the lookaheadeof, it knows to apply the completed grammar rule by combining the stack's rightmost three phrases for Sums,+, and Products into one thing. State 3 itself doesn't know what the next state should be. This is found by going back to state 0, just to the left of the phrase being reduced. When state 0 sees this new completed instance of a Sums, it advances to state 1 (again). This consulting of older states is why they are kept on the stack, instead of keeping only the current state. LR parsers are constructed from a grammar that formally defines the syntax of the input language as a set of patterns. The grammar doesn't cover all language rules, such as the size of numbers, or the consistent use of names and their definitions in the context of the whole program. LR parsers use acontext-free grammarthat deals just with local patterns of symbols. The example grammar used here is a tiny subset of the Java or C language: The grammar'sterminal symbolsare the multi-character symbols or 'tokens' found in the input stream by alexical scanner. Here these include+*andintfor any integer constant, andidfor any identifier name, andeoffor end of input file. The grammar doesn't care what theintvalues oridspellings are, nor does it care about blanks or line breaks. The grammar uses these terminal symbols but does not define them. They are always leaf nodes (at the bottom bushy end) of the parse tree. The capitalized terms like Sums arenonterminal symbols. These are names for concepts or patterns in the language. They are defined in the grammar and never occur themselves in the input stream. They are always internal nodes (above the bottom) of the parse tree. They only happen as a result of the parser applying some grammar rule. Some nonterminals are defined with two or more rules; these are alternative patterns. Rules can refer back to themselves, which are calledrecursive. This grammar uses recursive rules to handle repeated math operators. Grammars for complete languages use recursive rules to handle lists, parenthesized expressions, and nested statements. Any given computer language can be described by several different grammars. An LR(1) parser can handle many but not all common grammars. It is usually possible to manually modify a grammar so that it fits the limitations of LR(1) parsing and the generator tool. The grammar for an LR parser must beunambiguousitself, or must be augmented by tie-breaking precedence rules. 
This means there is only one correct way to apply the grammar to a given legal example of the language, resulting in a unique parse tree with just one meaning, and a unique sequence of shift/reduce actions for that example. LR parsing is not a useful technique for human languages with ambiguous grammars that depend on the interplay of words. Human languages are better handled by parsers likeGeneralized LR parser, theEarley parser, or theCYK algorithmthat can simultaneously compute all possible parse trees in one pass. Most LR parsers are table driven. The parser's program code is a simple generic loop that is the same for all grammars and languages. The knowledge of the grammar and its syntactic implications are encoded into unchanging data tables calledparse tables(orparsing tables). Entries in a table show whether to shift or reduce (and by which grammar rule), for every legal combination of parser state and lookahead symbol. The parse tables also tell how to compute the next state, given just a current state and a next symbol. The parse tables are much larger than the grammar. LR tables are hard to accurately compute by hand for big grammars. So they are mechanically derived from the grammar by someparser generatortool likeBison.[6] Depending on how the states and parsing table are generated, the resulting parser is called either aSLR(simple LR) parser,LALR(look-ahead LR) parser, orcanonical LR parser. LALR parsers handle more grammars than SLR parsers. Canonical LR parsers handle even more grammars, but use many more states and much larger tables. The example grammar is SLR. LR parse tables are two-dimensional. Each current LR(0) parser state has its own row. Each possible next symbol has its own column. Some combinations of state and next symbol are not possible for valid input streams. These blank cells trigger syntax error messages. TheActionleft half of the table has columns for lookahead terminal symbols. These cells determine whether the next parser action is shift (to staten), or reduce (by grammar rulern). TheGotoright half of the table has columns for nonterminal symbols. These cells show which state to advance to, after some reduction's Left Hand Side has created an expected new instance of that symbol. This is like a shift action but for nonterminals; the lookahead terminal symbol is unchanged. The table column "Current Rules" documents the meaning and syntax possibilities for each state, as worked out by the parser generator. It is not included in the actual tables used at parsing time. The•(pink dot) marker shows where the parser is now, within some partially recognized grammar rules. The things to the left of•have been parsed, and the things to the right are expected soon. A state has several such current rules if the parser has not yet narrowed possibilities down to a single rule. In state 2 above, the parser has just found and shifted-in the+of grammar rule The next expected phrase is Products. Products begins with terminal symbolsintorid. If the lookahead is either of those, the parser shifts them in and advances to state 8 or 9, respectively. When a Products has been found, the parser advances to state 3 to accumulate the complete list of summands and find the end of rule r0. A Products can also begin with nonterminal Value. For any other lookahead or nonterminal, the parser announces a syntax error. 
In state 3, the parser has just found a Products phrase, that could be from two possible grammar rules: The choice between r1 and r3 can't be decided just from looking backwards at prior phrases. The parser has to check the lookahead symbol to tell what to do. If the lookahead is*, it is in rule 3, so the parser shifts in the*and advances to state 5. If the lookahead iseof, it is at the end of rule 1 and rule 0, so the parser is done. In state 9 above, all the non-blank, non-error cells are for the same reduction r6. Some parsers save time and table space by not checking the lookahead symbol in these simple cases. Syntax errors are then detected somewhat later, after some harmless reductions, but still before the next shift action or parser decision. Individual table cells must not hold multiple, alternative actions, otherwise the parser would be nondeterministic with guesswork and backtracking. If the grammar is not LR(1), some cells will have shift/reduce conflicts between a possible shift action and reduce action, or reduce/reduce conflicts between multiple grammar rules. LR(k) parsers resolve these conflicts (where possible) by checking additional lookahead symbols beyond the first. The LR parser begins with a nearly empty parse stack containing just the start state 0, and with the lookahead holding the input stream's first scanned symbol. The parser then repeats the following loop step until done, or stuck on a syntax error: The topmost state on the parse stack is some states, and the current lookahead is some terminal symbolt. Look up the next parser action from rowsand columntof the Lookahead Action table. That action is either Shift, Reduce, Accept, or Error: LR parser stack usually stores just the LR(0) automaton states, as the grammar symbols may be derived from them (in the automaton, all input transitions to some state are marked with the same symbol, which is the symbol associated with this state). Moreover, these symbols are almost never needed as the state is all that matters when making the parsing decision.[7] This section of the article can be skipped by most users of LR parser generators. State 2 in the example parse table is for the partially parsed rule This shows how the parser got here, by seeing Sums then+while looking for a larger Sums. The•marker has advanced beyond the beginning of the rule. It also shows how the parser expects to eventually complete the rule, by next finding a complete Products. But more details are needed on how to parse all the parts of that Products. The partially parsed rules for a state are called its "core LR(0) items". The parser generator adds additional rules or items for all the possible next steps in building up the expected Products: The•marker is at the beginning of each of these added rules; the parser has not yet confirmed and parsed any part of them. These additional items are called the "closure" of the core items. For each nonterminal symbol immediately following a•, the generator adds the rules defining that symbol. This adds more•markers, and possibly different follower symbols. This closure process continues until all follower symbols have been expanded. The follower nonterminals for state 2 begins with Products. Value is then added by closure. The follower terminals areintandid. The kernel and closure items together show all possible legal ways to proceed from the current state to future states and complete phrases. 
If a follower symbol appears in only one item, it leads to a next state containing only one core item with the•marker advanced. Sointleads to next state 8 with core If the same follower symbol appears in several items, the parser cannot yet tell which rule applies here. So that symbol leads to a next state that shows all remaining possibilities, again with the•marker advanced. Products appears in both r1 and r3. So Products leads to next state 3 with core In words, that means if the parser has seen a single Products, it might be done, or it might still have even more things to multiply together. All the core items have the same symbol preceding the•marker; all transitions into this state are always with that same symbol. Some transitions will be to cores and states that have been enumerated already. Other transitions lead to new states. The generator starts with the grammar's goal rule. From there it keeps exploring known states and transitions until all needed states have been found. These states are called "LR(0)" states because they use a lookahead ofk=0, i.e. no lookahead. The only checking of input symbols occurs when the symbol is shifted in. Checking of lookaheads for reductions is done separately by the parse table, not by the enumerated states themselves. The parse table describes all possible LR(0) states and their transitions. They form afinite-state machine(FSM). An FSM is a simple engine for parsing simple unnested languages, without using a stack. In this LR application, the FSM's modified "input language" has both terminal and nonterminal symbols, and covers any partially parsed stack snapshot of the full LR parse. Recall step 5 of the Parse Steps Example: 0Products4*5int8 The parse stack shows a series of state transitions, from the start state 0, to state 4 and then on to 5 and current state 8. The symbols on the parse stack are the shift or goto symbols for those transitions. Another way to view this, is that the finite state machine can scan the stream "Products *int+ 1" (without using yet another stack) and find the leftmost complete phrase that should be reduced next. And that is indeed its job! How can a mere FSM do this when the original unparsed language has nesting and recursion and definitely requires an analyzer with a stack? The trick is that everything to the left of the stack top has already been fully reduced. This eliminates all the loops and nesting from those phrases. The FSM can ignore all the older beginnings of phrases, and track just the newest phrases that might be completed next. The obscure name for this in LR theory is "viable prefix". The states and transitions give all the needed information for the parse table's shift actions and goto actions. The generator also needs to calculate the expected lookahead sets for each reduce action. InSLRparsers, these lookahead sets are determined directly from the grammar, without considering the individual states and transitions. For each nonterminal S, the SLR generator works out Follows(S), the set of all the terminal symbols which can immediately follow some occurrence of S. In the parse table, each reduction to S uses Follow(S) as its LR(1) lookahead set. Such follow sets are also used by generators for LL top-down parsers. A grammar that has no shift/reduce or reduce/reduce conflicts when using Follow sets is called an SLR grammar. 
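The kernel, closure and follower-symbol machinery just described is small enough to sketch directly. The following Python fragment builds LR(0) item sets for the small grammar of the worked example later in this article; the tuple encoding of items, the function names, and the assignment of rule number 1 to E → E * B are assumptions made for the example.

    # 0: S -> E eof   1: E -> E * B   2: E -> E + B   3: E -> B   4: B -> 0   5: B -> 1
    RULES = [("S", ["E", "eof"]), ("E", ["E", "*", "B"]), ("E", ["E", "+", "B"]),
             ("E", ["B"]), ("B", ["0"]), ("B", ["1"])]
    NONTERMINALS = {"S", "E", "B"}

    def closure(items):
        """An item (r, d) is rule r with the dot before position d of its right-hand side.
        For every nonterminal that appears right after a dot, add all of its rules with
        the dot at the start, and repeat until nothing new is added."""
        items = set(items)
        changed = True
        while changed:
            changed = False
            for rule, dot in list(items):
                rhs = RULES[rule][1]
                if dot < len(rhs) and rhs[dot] in NONTERMINALS:
                    for i, (lhs, _) in enumerate(RULES):
                        if lhs == rhs[dot] and (i, 0) not in items:
                            items.add((i, 0))
                            changed = True
        return frozenset(items)

    def goto(items, symbol):
        """Advance the dot over `symbol` wherever it matches, then take the closure."""
        moved = {(r, d + 1) for (r, d) in items
                 if d < len(RULES[r][1]) and RULES[r][1][d] == symbol}
        return closure(moved) if moved else None

    item_set_0 = closure({(0, 0)})        # kernel S -> . E eof plus the five added items
    print(sorted(item_set_0))
    print(sorted(goto(item_set_0, "E")))  # the three items with the dot moved past E

Repeating goto over every symbol that appears after a dot, and closing the results, enumerates exactly the states from which the shift, goto and reduce entries of the parse table are read off.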
LALRparsers have the same states as SLR parsers, but use a more complicated, more precise way of working out the minimum necessary reduction lookaheads for each individual state. Depending on the details of the grammar, this may turn out to be the same as the Follow set computed by SLR parser generators, or it may turn out to be a subset of the SLR lookaheads. Some grammars are okay for LALR parser generators but not for SLR parser generators. This happens when the grammar has spurious shift/reduce or reduce/reduce conflicts using Follow sets, but no conflicts when using the exact sets computed by the LALR generator. The grammar is then called LALR(1) but not SLR. An SLR or LALR parser avoids having duplicate states. But this minimization is not necessary, and can sometimes create unnecessary lookahead conflicts.Canonical LRparsers use duplicated (or "split") states to better remember the left and right context of a nonterminal's use. Each occurrence of a symbol S in the grammar can be treated independently with its own lookahead set, to help resolve reduction conflicts. This handles a few more grammars. Unfortunately, this greatly magnifies the size of the parse tables if done for all parts of the grammar. This splitting of states can also be done manually and selectively with any SLR or LALR parser, by making two or more named copies of some nonterminals. A grammar that is conflict-free for a canonical LR generator but has conflicts in an LALR generator is called LR(1) but not LALR(1), and not SLR. SLR, LALR, and canonical LR parsers make exactly the same shift and reduce decisions when the input stream is the correct language. When the input has a syntax error, the LALR parser may do some additional (harmless) reductions before detecting the error than would the canonical LR parser. And the SLR parser may do even more. This happens because the SLR and LALR parsers are using a generous superset approximation to the true, minimal lookahead symbols for that particular state. LR parsers can generate somewhat helpful error messages for the first syntax error in a program, by simply enumerating all the terminal symbols that could have appeared next instead of the unexpected bad lookahead symbol. But this does not help the parser work out how to parse the remainder of the input program to look for further, independent errors. If the parser recovers badly from the first error, it is very likely to mis-parse everything else and produce a cascade of unhelpful spurious error messages. In theyaccand bison parser generators, the parser has an ad hoc mechanism to abandon the current statement, discard some parsed phrases and lookahead tokens surrounding the error, and resynchronize the parse at some reliable statement-level delimiter like semicolons or braces. This often works well for allowing the parser and compiler to look over the rest of the program. Many syntactic coding errors are simple typos or omissions of a trivial symbol. Some LR parsers attempt to detect and automatically repair these common cases. The parser enumerates every possible single-symbol insertion, deletion, or substitution at the error point. The compiler does a trial parse with each change to see if it worked okay. (This requires backtracking to snapshots of the parse stack and input stream, normally unneeded by the parser.) Some best repair is picked. This gives a very helpful error message and resynchronizes the parse well. However, the repair is not trustworthy enough to permanently modify the input file. 
Repair of syntax errors is easiest to do consistently in parsers (like LR) that have parse tables and an explicit data stack. The LR parser generator decides what should happen for each combination of parser state and lookahead symbol. These decisions are usually turned into read-only data tables that drive a generic parser loop that is grammar- and state-independent. But there are also other ways to turn those decisions into an active parser. Some LR parser generators create separate tailored program code for each state, rather than a parse table. These parsers can run several times faster than the generic parser loop in table-driven parsers. The fastest parsers use generated assembler code. In therecursive ascent parservariation, the explicit parse stack structure is also replaced by the implicit stack used by subroutine calls. Reductions terminate several levels of subroutine calls, which is clumsy in most languages. So recursive ascent parsers are generally slower, less obvious, and harder to hand-modify thanrecursive descent parsers. Another variation replaces the parse table by pattern-matching rules in non-procedural languages such asProlog. GLRGeneralized LR parsersuse LR bottom-up techniques to find all possible parses of input text, not just one correct parse. This is essential for ambiguous grammar such as used for human languages. The multiple valid parse trees are computed simultaneously, without backtracking. GLR is sometimes helpful for computer languages that are not easily described by a conflict-free LALR(1) grammar. LCLeft corner parsersuse LR bottom-up techniques for recognizing the left end of alternative grammar rules. When the alternatives have been narrowed down to a single possible rule, the parser then switches to top-down LL(1) techniques for parsing the rest of that rule. LC parsers have smaller parse tables than LALR parsers and better error diagnostics. There are no widely used generators for deterministic LC parsers. Multiple-parse LC parsers are helpful with human languages with very large grammars. LR parsers were invented byDonald Knuthin 1965 as an efficient generalization ofprecedence parsers. Knuth proved that LR parsers were the most general-purpose parsers possible that would still be efficient in the worst cases.[citation needed] In other words, if a language was reasonable enough to allow an efficient one-pass parser, it could be described by an LR(k) grammar. And that grammar could always be mechanically transformed into an equivalent (but larger) LR(1) grammar. So an LR(1) parsing method was, in theory, powerful enough to handle any reasonable language. In practice, the natural grammars for many programming languages are close to being LR(1).[citation needed] The canonical LR parsers described by Knuth had too many states and very big parse tables that were impractically large for the limited memory of computers of that era. LR parsing became practical whenFrank DeRemerinventedSLRandLALRparsers with much fewer states.[10][11] For full details on LR theory and how LR parsers are derived from grammars, seeThe Theory of Parsing, Translation, and Compiling, Volume 1(Aho and Ullman).[7][2] Earley parsersapply the techniques and•notation of LR parsers to the task of generating all possible parses for ambiguous grammars such as for human languages. While LR(k) grammars have equal generative power for allk≥1, the case of LR(0) grammars is slightly different. 
A languageLis said to have theprefix propertyif no word inLis aproper prefixof another word inL.[12]A languageLhas an LR(0) grammar if and only ifLis adeterministic context-free languagewith the prefix property.[13]As a consequence, a languageLis deterministic context-free if and only ifL$has an LR(0) grammar, where "$" is not a symbol ofL'salphabet.[14] This example of LR parsing uses the following small grammar with goal symbol E: to parse the following input: The two LR(0) parsing tables for this grammar look as follows: Theaction tableis indexed by a state of the parser and a terminal (including a special terminal $ that indicates the end of the input stream) and contains three types of actions: Thegoto tableis indexed by a state of the parser and a nonterminal and simply indicates what the next state of the parser will be if it has recognized a certain nonterminal. This table is important to find out the next state after every reduction. After a reduction, the next state is found by looking up thegoto tableentry for top of the stack (i.e. current state) and the reduced rule's LHS (i.e. non-terminal). The table below illustrates each step in the process. Here the state refers to the element at the top of the stack (the right-most element), and the next action is determined by referring to the action table above. A $ is appended to the input string to denote the end of the stream. The parser starts out with the stack containing just the initial state ('0'): The first symbol from the input string that the parser sees is '1'. To find the next action (shift, reduce, accept or error), the action table is indexed with the current state (the "current state" is just whatever is on the top of the stack), which in this case is 0, and the current input symbol, which is '1'. The action table specifies a shift to state 2, and so state 2 is pushed onto the stack (again, all the state information is in the stack, so "shifting to state 2" is the same as pushing 2 onto the stack). The resulting stack is where the top of the stack is 2. For the sake of explaining the symbol (e.g., '1', B) is shown that caused the transition to the next state, although strictly speaking it is not part of the stack. In state 2, the action table says to reduce with grammar rule 5 (regardless of what terminal the parser sees on the input stream), which means that the parser has just recognized the right-hand side of rule 5. In this case, the parser writes 5 to the output stream, pops one state from the stack (since the right-hand side of the rule has one symbol), and pushes on the stack the state from the cell in the goto table for state 0 and B, i.e., state 4. The resulting stack is: However, in state 4, the action table says the parser should now reduce with rule 3. So it writes 3 to the output stream, pops one state from the stack, and finds the new state in the goto table for state 0 and E, which is state 3. The resulting stack: The next terminal that the parser sees is a '+' and according to the action table it should then shift to state 6: The resulting stack can be interpreted as the history of afinite-state machinethat has just read a nonterminal E followed by a terminal '+'. The transition table of this automaton is defined by the shift actions in the action table and the goto actions in the goto table. 
The next terminal is now '1' and this means that the parser performs a shift and go to state 2: Just as the previous '1' this one is reduced to B giving the following stack: The stack corresponds with a list of states of a finite automaton that has read a nonterminal E, followed by a '+' and then a nonterminal B. In state 8 the parser always performs a reduce with rule 2. The top 3 states on the stack correspond with the 3 symbols in the right-hand side of rule 2. This time we pop 3 elements off of the stack (since the right-hand side of the rule has 3 symbols) and look up the goto state for E and 0, thus pushing state 3 back onto the stack Finally, the parser reads a '$' (end of input symbol) from the input stream, which means that according to the action table (the current state is 3) the parser accepts the input string. The rule numbers that will then have been written to the output stream will be [5, 3, 5, 2] which is indeed arightmost derivationof the string "1 + 1" in reverse. The construction of these parsing tables is based on the notion ofLR(0) items(simply calleditemshere) which are grammar rules with a special dot added somewhere in the right-hand side. For example, the rule E → E + B has the following four corresponding items: Rules of the formA→ ε have only a single itemA→•. The item E → E•+ B, for example, indicates that the parser has recognized a string corresponding with E on the input stream and now expects to read a '+' followed by another string corresponding with B. It is usually not possible to characterize the state of the parser with a single item because it may not know in advance which rule it is going to use for reduction. For example, if there is also a rule E → E * B then the items E → E•+ B and E → E•* B will both apply after a string corresponding with E has been read. Therefore, it is convenient to characterize the state of the parser by a set of items, in this case the set { E → E•+ B, E → E•* B }. An item with a dot before a nonterminal, such as E → E +•B, indicates that the parser expects to parse the nonterminal B next. To ensure the item set contains all possible rules the parser may be in the midst of parsing, it must include all items describing how B itself will be parsed. This means that if there are rules such as B → 1 and B → 0 then the item set must also include the items B →•1 and B →•0. In general this can be formulated as follows: Thus, any set of items can be extended by recursively adding all the appropriate items until all nonterminals preceded by dots are accounted for. The minimal extension is called theclosureof an item set and written asclos(I) whereIis an item set. It is these closed item sets that are taken as the states of the parser, although only the ones that are actually reachable from the begin state will be included in the tables. Before the transitions between the different states are determined, the grammar is augmented with an extra rule where S is a new start symbol and E the old start symbol. The parser will use this rule for reduction exactly when it has accepted the whole input string. For this example, the same grammar as above is augmented thus: It is for this augmented grammar that the item sets and the transitions between them will be determined. The first step of constructing the tables consists of determining the transitions between the closed item sets. These transitions will be determined as if we are considering a finite automaton that can read terminals as well as nonterminals. 
The begin state of this automaton is always the closure of the first item of the added rule: S →•E eof: Theboldfaced"+" in front of an item indicates the items that were added for the closure (not to be confused with the mathematical '+' operator which is a terminal). The original items without a "+" are called thekernelof the item set. Starting at the begin state (S0), all of the states that can be reached from this state are now determined. The possible transitions for an item set can be found by looking at the symbols (terminals and nonterminals) found following the dots; in the case of item set 0 those symbols are the terminals '0' and '1' and the nonterminals E and B. To find the item set that each symbolx∈{0,1,E,B}{\textstyle x\in \{0,1,E,B\}}leads to, the following procedure is followed for each of the symbols: For the terminal '0' (i.e. where x = '0') this results in: and for the terminal '1' (i.e. where x = '1') this results in: and for the nonterminal E (i.e. where x = E) this results in: and for the nonterminal B (i.e. where x = B) this results in: The closure does not add new items in all cases - in the new sets above, for example, there are no nonterminals following the dot. Above procedure is continued until no more new item sets are found. For the item sets 1, 2, and 4 there will be no transitions since the dot is not in front of any symbol. For item set 3 though, we have dots in front of terminals '*' and '+'. For symbolx=*{\textstyle x={\texttt {*}}}the transition goes to: and forx=+{\textstyle x={\texttt {+}}}the transition goes to: Now, the third iteration begins. For item set 5, the terminals '0' and '1' and the nonterminal B must be considered, but the resulting closed item sets for the terminals are equal to already found item sets 1 and 2, respectively. For the nonterminal B, the transition goes to: For item set 6, the terminal '0' and '1' and the nonterminal B must be considered, but as before, the resulting item sets for the terminals are equal to the already found item sets 1 and 2. For the nonterminal B the transition goes to: These final item sets 7 and 8 have no symbols beyond their dots so no more new item sets are added, so the item generating procedure is complete. The finite automaton, with item sets as its states is shown below. The transition table for the automaton now looks as follows: From this table and the found item sets, the action and goto table are constructed as follows: The reader may verify that these steps produce the action and goto table presented earlier. Only step 4 of the above procedure produces reduce actions, and so all reduce actions must occupy an entire table row, causing the reduction to occur regardless of the next symbol in the input stream. This is why these are LR(0) parse tables: they don't do any lookahead (that is, they look ahead zero symbols) before deciding which reduction to perform. A grammar that needs lookahead to disambiguate reductions would require a parse table row containing different reduce actions in different columns, and the above procedure is not capable of creating such rows. Refinements to theLR(0) table construction procedure (such asSLRandLALR) are capable of constructing reduce actions that do not occupy entire rows. Therefore, they are capable of parsing more grammars than LR(0) parsers. The automaton is constructed in such a way that it is guaranteed to be deterministic. 
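The iteration described above — follow every symbol that appears after a dot, close the moved items, and repeat until no new item sets turn up — can be sketched as a worklist algorithm. The sketch below is self-contained (it repeats the grammar and a compact closure) and illustrative; in particular, the order in which it numbers the item sets need not match the numbering used in this article.

```python
GRAMMAR = [("S", ["E"]), ("E", ["E", "*", "B"]), ("E", ["E", "+", "B"]),
           ("E", ["B"]), ("B", ["0"]), ("B", ["1"])]
NONTERMINALS = {lhs for lhs, _ in GRAMMAR}

def closure(items):
    items, worklist = set(items), list(items)
    while worklist:
        rule, dot = worklist.pop()
        rhs = GRAMMAR[rule][1]
        if dot < len(rhs) and rhs[dot] in NONTERMINALS:
            for r, (lhs, _) in enumerate(GRAMMAR):
                if lhs == rhs[dot] and (r, 0) not in items:
                    items.add((r, 0))
                    worklist.append((r, 0))
    return frozenset(items)

def goto(items, symbol):
    """Advance the dot over `symbol` in every item where it appears just after
    the dot, then close the resulting kernel."""
    kernel = {(r, d + 1) for r, d in items
              if d < len(GRAMMAR[r][1]) and GRAMMAR[r][1][d] == symbol}
    return closure(kernel)

def build_item_sets():
    """Worklist construction of all reachable closed item sets and their transitions."""
    start = closure({(0, 0)})                       # closure of S -> . E
    item_sets, transitions, worklist = [start], {}, [start]
    while worklist:
        current = worklist.pop()
        after_dot = {GRAMMAR[r][1][d] for r, d in current if d < len(GRAMMAR[r][1])}
        for symbol in after_dot:
            target = goto(current, symbol)
            if target not in item_sets:
                item_sets.append(target)
                worklist.append(target)
            transitions[(item_sets.index(current), symbol)] = item_sets.index(target)
    return item_sets, transitions

item_sets, transitions = build_item_sets()
print(len(item_sets))    # 9 - the automaton has item sets 0 through 8
```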
However, when reduce actions are added to the action table it can happen that the same cell is filled with a reduce action and a shift action (ashift-reduce conflict) or with two different reduce actions (areduce-reduce conflict). However, it can be shown that when this happens the grammar is not an LR(0) grammar. A classic real-world example of a shift-reduce conflict is thedangling elseproblem. A small example of a non-LR(0) grammar with a shift-reduce conflict is: One of the item sets found is: There is a shift-reduce conflict in this item set: when constructing the action table according to the rules above, the cell for [item set 1, terminal '1'] containss1(shift to state 1)and r2(reduce with grammar rule 2). A small example of a non-LR(0) grammar with a reduce-reduce conflict is: In this case the following item set is obtained: There is a reduce-reduce conflict in this item set because in the cells in the action table for this item set there will be both a reduce action for rule 3 and one for rule 4. Both examples above can be solved by letting the parser use the follow set (seeLL parser) of a nonterminalAto decide if it is going to use one ofAs rules for a reduction; it will only use the ruleA→wfor a reduction if the next symbol on the input stream is in the follow set ofA. This solution results in so-calledSimple LR parsers.
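Mechanically, these conflicts surface while the action table is being filled in: a cell that already holds one action receives a different, incompatible one. A minimal illustrative check (names are not taken from any particular generator):

```python
def add_action(table, state, terminal, action):
    """Record an action in the LR action table; complain if the cell already
    holds a different action (a shift-reduce or reduce-reduce conflict)."""
    cell = (state, terminal)
    existing = table.get(cell)
    if existing is not None and existing != action:
        kind = ("shift-reduce" if {existing[0], action[0]} == {"shift", "reduce"}
                else "reduce-reduce")
        raise ValueError(f"{kind} conflict in state {state} on {terminal!r}: "
                         f"{existing} vs {action}")
    table[cell] = action

# For the first grammar above, item set 1 ends up with both s1 and r2 on '1':
table = {}
add_action(table, 1, "1", ("shift", 1))
add_action(table, 1, "1", ("reduce", 2))  # raises: shift-reduce conflict in state 1 on '1'
```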
https://en.wikipedia.org/wiki/LR_parser
Incomputer science, aSimple LRorSLR parseris a type ofLR parserwith smallparse tablesand a relatively simple parser generator algorithm. As with other types of LR(1) parser, an SLR parser is quite efficient at finding the single correctbottom-up parsein a single left-to-right scan over the input stream, without guesswork or backtracking. The parser is mechanically generated from a formal grammar for the language. SLR and the more general methodsLALR parserandCanonical LR parserhave identical methods and similar tables at parse time; they differ only in the mathematical grammar analysis algorithms used by the parser generator tool. SLR and LALR generators create tables of identical size and identical parser states. SLR generators accept fewer grammars than LALR generators likeyaccandBison.[citation needed]Many computer languages don't readily fit the restrictions of SLR, as is. Bending the language's natural grammar intoSLR grammarform requires more compromises and grammar hackery. So LALR generators have become much more widely used than SLR generators, despite being somewhat more complicated tools. SLR methods remain a useful learning step in college classes on compiler theory.[citation needed] SLR and LALR were both developed byFrank DeRemeras the first practical uses ofDonald Knuth's LR parser theory.[1][2]The tables created for real grammars by full LR methods were impractically large, larger than most computer memories of that decade, with 100 times or more parser states than the SLR and LALR methods.[3] To understand the differences between SLR and LALR, it is important to understand their many similarities and how they both make shift-reduce decisions. (See the articleLR parsernow for that background, up through the section on reductions'lookahead sets.) The one difference between SLR and LALR is how their generators calculate the lookahead sets of input symbols that should appear next, whenever some completedproduction ruleis found and reduced. SLR generators calculate that lookahead by an easy approximation method based directly on the grammar, ignoring the details of individual parser states and transitions. This ignores the particular context of the current parser state. If some nonterminal symbolSis used in several places in the grammar, SLR treats those places in the same single way rather than handling them individually. The SLR generator works outFollow(S), the set of all terminal symbols which can immediately follow some occurrence ofS. In the parse table, each reduction toSuses Follow(S) as its LR(1) lookahead set. Such follow sets are also used by generators for LL top-down parsers. A grammar that has no shift/reduce or reduce/reduce conflicts when using follow sets is called anSLR grammar.[citation needed] LALR generators calculate lookahead sets by a more precise method based on exploring the graph of parser states and their transitions. This method considers the particular context of the current parser state. It customizes the handling of each grammar occurrence of some nonterminal S. See articleLALR parserfor further details of this calculation. The lookahead sets calculated by LALR generators are a subset of (and hence better than) the approximate sets calculated by SLR generators. 
If a grammar has table conflicts when using SLR follow sets, but is conflict-free when using LALR follow sets, it is called a LALR grammar.[citation needed] A grammar that can be parsed by an SLR parser but not by an LR(0) parser is the following: Constructing the action and goto table as is done for LR(0) parsers would give the following item sets and tables: The action and goto tables: As can be observed there is a shift-reduce conflict for state 1 and terminal '1'. This occurs because, when the action table for an LR(0) parser is created, reduce actions are inserted on a per-row basis. However, by using a follow set, reduce actions can be added with finer granularity. The follow set for this grammar: A reduce only needs to be added to a particular action column if that action is in the follow set associated with that reduce. This algorithm describes whether a reduce action must be added to an action column: for example,mustBeAdded(r2, "1")is false, because the left hand side of rule 2 is "E", and 1 is not in E's follow set. Contrariwise,mustBeAdded(r2, "$")is true, because "$" is in E's follow set. By using mustBeAdded on each reduce action in the action table, the result is a conflict-free action table:
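A direct transcription of the mustBeAdded test, assuming the example grammar is rule 1: E → 1 E and rule 2: E → 1 (the grammar itself is not reproduced in the text above, but this is consistent with the follow-set facts stated here); the encoding is illustrative.

```python
# Assumed grammar of this example: rule 1: E -> 1 E, rule 2: E -> 1
RULE_LHS = {1: "E", 2: "E"}
FOLLOW = {"E": {"$"}}                  # follow set of E for this grammar

def must_be_added(reduce_rule, terminal):
    """An SLR reduce action for `reduce_rule` belongs in the column of `terminal`
    only if that terminal is in the follow set of the rule's left-hand side."""
    return terminal in FOLLOW[RULE_LHS[reduce_rule]]

print(must_be_added(2, "1"))   # False - '1' is not in FOLLOW(E)
print(must_be_added(2, "$"))   # True  - '$' is in FOLLOW(E)
```

Filtering every reduce action through this predicate while the table is built is what removes the shift-reduce conflict and yields the conflict-free SLR action table.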
https://en.wikipedia.org/wiki/Simple_LR_parser
Insoftware engineering,domain analysis, orproduct line analysis, is the process of analyzing relatedsoftwaresystems in adomainto find their common and variable parts. It is a model of wider business context for the system. The term was coined in the early 1980s by James Neighbors.[1][2]Domain analysis is the first phase ofdomain engineering. It is a key method for realizing systematicsoftware reuse.[3] Domain analysis producesdomain modelsusing methodologies such asdomain specific languages,feature tables,facet tables,facet templates, andgeneric architectures, which describe all of thesystemsin a domain. Several methodologies for domain analysis have been proposed.[4] The products, or "artifacts", of a domain analysis are sometimesobject-oriented models(e.g. represented with theUnified Modeling Language(UML)) ordata modelsrepresented withentity-relationship diagrams(ERD).Software developerscan use these models as a basis for the implementation ofsoftware architecturesandapplications. This approach to domain analysis is sometimes calledmodel-driven engineering. Ininformation science, the term "domain analysis" was suggested in 1995 byBirger Hjørlandand H. Albrechtsen.[5][6] Several domain analysis techniques have been identified, proposed and developed due to the diversity of goals, domains, and involved processes.
https://en.wikipedia.org/wiki/Domain_analysis
Incomputing, acompileris acomputer programthat transformssource codewritten in aprogramming languageor computer language (thesource language), into another computer language (thetarget language, often having a binary form known asobject codeormachine code). The most common reason for transforming source code is to create anexecutableprogram. Any program written in ahigh-level programming languagemust be translated to object code before it can be executed, so all programmers using such a language use a compiler or aninterpreter, sometimes even both. Improvements to a compiler may lead to a large number of improved features in executable programs. TheProduction Quality Compiler-Compiler, in the late 1970s, introduced the principles of compiler organization that are still widely used today (e.g., a front-end handling syntax and semantics and a back-end generating machine code). Software for early computers was primarily written inassembly language, and before that directly inmachine code. It is usually more productive for a programmer to use a high-level language, and programs written in a high-level language can bereusedon different kinds of computers. Even so, it took a while for compilers to become established, because they generated code that did not perform as well as hand-written assembler, they were daunting development projects in their own right, and the very limitedmemorycapacity of early computers created many technical problems for practical compiler implementations. Between 1942 and 1945,Konrad ZusedevelopedPlankalkül("plan calculus"), the first high-level language for a computer, for which he envisioned aPlanfertigungsgerät("plan assembly device"), which would automatically translate the mathematical formulation of a program into machine-readablepunched film stock.[1]However, the first actual compiler for the language was implemented only decades later. Between 1949 and 1951,Heinz RutishauserproposedSuperplan, a high-level language and automatic translator.[2]His ideas were later refined byFriedrich L. BauerandKlaus Samelson.[3] The first practical compiler was written byCorrado Böhmin 1951 for his PhD thesis,[4][5]one of the first computer science doctorates awarded anywhere in the world. The first implemented compiler was written byGrace Hopper, who also coined the term "compiler",[6][7]referring to herA-0 systemwhich functioned as aloaderorlinker, not the modern notion of a compiler. The firstAutocodeand compiler in the modern sense were developed byAlick Glenniein 1952 at theUniversity of Manchesterfor theMark 1computer.[8][9]TheFORTRANteam led byJohn W. BackusatIBMintroduced the first commercially available compiler, in 1957, which took 18 person-years to create.[10] The firstALGOL 58compiler was completed by the end of 1958 byFriedrich L. Bauer, Hermann Bottenbruch,Heinz Rutishauser, andKlaus Samelsonfor theZ22computer. Bauer et al. had been working on compiler technology for theSequentielle Formelübersetzung(i.e.sequential formula translation) in the previous years. By 1960, an extended Fortran compiler, ALTAC, was available on thePhilco2000, so it is probable that a Fortran program was compiled for both IBM and Philcocomputer architecturesin mid-1960.[11]The first known demonstratedcross-platformhigh-level language wasCOBOL. In a demonstration in December 1960, a COBOL program was compiled and executed on both theUNIVAC IIand theRCA501.[7][12] Like any other software, there are benefits from implementing a compiler in a high-level language. 
In particular, a compiler can beself-hosted– that is, written in the programming language it compiles. Building a self-hosting compiler is abootstrappingproblem, i.e. the first such compiler for a language must be either hand written machine code, compiled by a compiler written in another language, or compiled by running the compiler's source on itself in aninterpreter. Corrado Böhm developed a language, a machine, and a translation method for compiling that language on the machine in his PhD dissertation submitted in 1951.[4][5]He not only described a complete compiler, but also defined for the first time that compiler in its own language. The language was interesting in itself, because every statement (including input statements, output statements and control statements) was a special case of anassignment statement. TheNavy Electronics Laboratory InternationalALGOLCompilerorNELIACwas adialectand compiler implementation of theALGOL 58programming languagedeveloped by theNaval Electronics Laboratoryin 1958.[13] NELIAC was the brainchild ofHarry Huskey– then Chairman of theACMand a well knowncomputer scientist(and later academic supervisor ofNiklaus Wirth), and supported by Maury Halstead, the head of the computational center at NEL. The earliest version was implemented on the prototypeUSQ-17computer (called the Countess) at the laboratory. It was the world's first self-compiling compiler – the compiler was first coded in simplified form in assembly language (thebootstrap), then re-written in its own language and compiled by the bootstrap, and finally re-compiled by itself, making the bootstrap obsolete. Another earlyself-hostingcompiler was written forLispby Tim Hart and Mike Levin atMITin 1962.[14]They wrote a Lisp compiler in Lisp, testing it inside an existing Lisp interpreter. Once they had improved the compiler to the point where it could compile its own source code, it was self-hosting.[15] The compiler as it exists on the standard compiler tape is a machine language program that was obtained by having theS-expressiondefinition of the compiler work on itself through the interpreter. This technique is only possible when an interpreter already exists for the very same language that is to be compiled. It borrows directly from the notion of running a program on itself as input, which is also used in various proofs intheoretical computer science, such as the proof that thehalting problemisundecidable. Forthis an example of a self-hosting compiler. Theself compilation and cross compilationfeatures of Forth are synonymous withmetacompilationandmetacompilers.[16][17]LikeLisp, Forth is anextensible programminglanguage. It is theextensible programminglanguage features of Forth and Lisp that enable them to generate new versions of themselves or port themselves to new environments. Aparseris an important component of a compiler. It parses the source code of a computer programming language to create some form of internal representation. Programming languages tend to be specified in terms of acontext-free grammarbecause fast and efficient parsers can be written for them. Parsers can be written by hand or generated by aparser generator. A context-free grammar provides a simple and precise mechanism for describing how programming language constructs are built from smallerblocks. 
The formalism of context-free grammars was developed in the mid-1950s byNoam Chomsky.[18] Block structure was introduced into computer programming languages by the ALGOL project (1957–1960), which, as a consequence, also featured a context-free grammar to describe the resulting ALGOL syntax. Context-free grammars are simple enough to allow the construction of efficient parsing algorithms which, for a given string, determine whether and how it can be generated from the grammar. If a programming language designer is willing to work within some limited subsets of context-free grammars, more efficient parsers are possible. TheLR parser(left to right) was invented byDonald Knuthin 1965 in a paper, "On the Translation of Languages from Left to Right". AnLR parseris a parser that reads input fromLeft to right (as it would appear if visually displayed) and produces aRightmost derivation. The termLR(k) parseris also used, wherekrefers to the number of unconsumedlookaheadinput symbols that are used in making parsing decisions. Knuth proved that LR(k) grammars can be parsed with an execution time essentially proportional to the length of the program, and that every LR(k) grammar fork> 1 can be mechanically transformed into an LR(1) grammar for the same language. In other words, it is only necessary to have one symbol lookahead to parse anydeterministic context-free grammar(DCFG).[19] Korenjak (1969) was the first to show parsers for programming languages could be produced using these techniques.[20]Frank DeRemer devised the more practicalSimple LR(SLR) andLook-ahead LR(LALR) techniques, published in his PhD dissertation at MIT in 1969.[21][22]This was an important breakthrough, because LR(k) translators, as defined by Donald Knuth, were much too large for implementation on computer systems in the 1960s and 1970s. In practice, LALR offers a good solution; the added power of LALR(1) parsers over SLR(1) parsers (that is, LALR(1) can parse more complex grammars than SLR(1)) is useful, and, though LALR(1) is not comparable with LL(1)(See below) (LALR(1) cannot parse all LL(1) grammars), most LL(1) grammars encountered in practice can be parsed by LALR(1). LR(1) grammars are more powerful again than LALR(1); however, an LR(1) grammar requires acanonical LR parserwhich would be extremely large in size and is not considered practical. The syntax of manyprogramming languagesare defined by grammars that can be parsed with an LALR(1) parser, and for this reason LALR parsers are often used by compilers to perform syntax analysis of source code. Arecursive ascent parserimplements an LALR parser using mutually-recursive functions rather than tables. Thus, the parser isdirectly encodedin the host language similar torecursive descent. Direct encoding usually yields a parser which is faster than its table-driven equivalent[23]for the same reason that compilation is faster than interpretation. It is also (in principle) possible to hand edit a recursive ascent parser, whereas a tabular implementation is nigh unreadable to the average human. Recursive ascent was first described by Thomas Pennello in his article "Very fast LR parsing" in 1986.[23]The technique was later expounded upon by G.H. Roberts[24]in 1988 as well as in an article by Leermakers, Augusteijn, Kruseman Aretz[25]in 1992 in the journalTheoretical Computer Science. AnLL parserparses the input fromLeft to right, and constructs aLeftmost derivationof the sentence (hence LL, as opposed to LR). 
The class of grammars which are parsable in this way is known as theLL grammars. LL grammars are an even more restricted class of context-free grammars than LR grammars. Nevertheless, they are of great interest to compiler writers, because such a parser is simple and efficient to implement. LL(k) grammars can be parsed by arecursive descent parserwhich is usually coded by hand, although a notation such asMETA IImight alternatively be used. The design of ALGOL sparked investigation of recursive descent, since the ALGOL language itself is recursive. The concept of recursive descent parsing was discussed in the January 1961 issue ofCommunications of the ACMin separate papers by A.A. Grau andEdgar T. "Ned" Irons.[26][27]Richard Waychoff and colleagues also implemented recursive descent in theBurroughsALGOL compiler in March 1961;[28]the two groups used different approaches but were in at least informal contact.[29] The idea of LL(1) grammars was introduced by Lewis and Stearns (1968).[30][31] Recursive descent was popularised byNiklaus WirthwithPL/0, aneducational programming languageused to teach compiler construction in the 1970s.[32] LR parsing can handle a larger range of languages thanLL parsing, and is also generally considered better at error reporting: it detects syntactic errors as soon as the input stops conforming to the grammar. In 1970,Jay Earleyinvented what came to be known as theEarley parser. Earley parsers are appealing because they can parse allcontext-free languagesreasonably efficiently.[33] John Backus proposed "metalinguistic formulas"[34][35]to describe the syntax of the new programming language IAL, known today asALGOL 58(1959). Backus's work was based on thePost canonical systemdevised byEmil Post. Further development of ALGOL led toALGOL 60; in its report (1963),Peter Naurnamed Backus's notationBackus normal form(BNF), and simplified it to minimize the character set used. However,Donald Knuthargued that BNF should rather be read asBackus–Naur form,[36]and that has become the commonly accepted usage. Niklaus Wirthdefinedextended Backus–Naur form(EBNF), a refined version of BNF, in the early 1970s for PL/0.Augmented Backus–Naur form(ABNF) is another variant. Both EBNF and ABNF are widely used to specify the grammar of programming languages, as the inputs to parser generators, and in other fields such as defining communication protocols. Aparser generatoris a program that takes a description of aformal grammarof a specific programming language and produces a parser for that language. That parser can be used in a compiler for that specific language. A lexical analyser identifies the reserved words and symbols of the specific language in a stream of text and returns these as tokens to the parser, which implements the syntactic validation and the translation into object code. This part of the compiler can also be created by acompiler-compilerusing a formal rules-of-precedence syntax-description as input. The firstcompiler-compilerto use that name was written byTony Brookerin 1960 and was used to create compilers for theAtlascomputer at theUniversity of Manchester, including theAtlas Autocodecompiler. However, it was rather different from modern compiler-compilers, and today would probably be described as being somewhere between a highly customisable generic compiler and anextensible-syntax language.
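To illustrate the recursive descent technique discussed above, here is a minimal hand-coded parser for an LL(1) rewrite of the expression grammar used earlier in this document; the left-recursive rules E → E + B and E → E * B must first be rewritten, since LL parsers cannot handle left recursion directly. The class and method names are illustrative, not taken from any particular tool.

```python
# LL(1) rewrite of the earlier expression grammar (left recursion removed):
#   E  -> B E'        E' -> '+' B E'  |  '*' B E'  |  ε        B -> '0' | '1'
class Parser:
    def __init__(self, tokens):
        self.tokens, self.pos = list(tokens), 0

    def peek(self):
        return self.tokens[self.pos] if self.pos < len(self.tokens) else "$"

    def expect(self, token):
        if self.peek() != token:
            raise SyntaxError(f"expected {token!r}, got {self.peek()!r}")
        self.pos += 1

    def parse_E(self):            # E -> B E'
        self.parse_B()
        self.parse_E_tail()

    def parse_E_tail(self):       # E' -> '+' B E' | '*' B E' | ε
        if self.peek() in ("+", "*"):
            self.expect(self.peek())
            self.parse_B()
            self.parse_E_tail()

    def parse_B(self):            # B -> '0' | '1'
        if self.peek() not in ("0", "1"):
            raise SyntaxError(f"expected '0' or '1', got {self.peek()!r}")
        self.pos += 1

parser = Parser(["1", "+", "1"])
parser.parse_E()
print("accepted" if parser.peek() == "$" else "trailing input")   # -> accepted
```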
The name "compiler-compiler" was far more appropriate for Brooker's system than it is for most modern compiler-compilers, which are more accurately described as parser generators. In the early 1960s, Robert McClure atTexas Instrumentsinvented a compiler-compiler calledTMG, the name taken from "transmogrification".[37][38][39][40]In the following years TMG wasportedto severalUNIVACand IBM mainframe computers. TheMulticsproject, a joint venture betweenMITandBell Labs, was one of the first to develop anoperating systemin a high-level language.PL/Iwas chosen as the language, but an external supplier could not supply a working compiler.[41]The Multics team developed their own subset dialect ofPL/Iknown as Early PL/I (EPL) as their implementation language in 1964. TMG was ported toGE-600 seriesand used to develop EPL byDouglas McIlroy,Robert Morris, and others. Not long afterKen Thompsonwrote the first version ofUnixfor thePDP-7in 1969,Douglas McIlroycreated the new system's first higher-level language: an implementation of McClure's TMG.[42]TMG was also the compiler definition tool used by Ken Thompson to write the compiler for theB languageon his PDP-7 in 1970. B was the immediate ancestor ofC. An earlyLALR parser generatorwas called "TWS", created by Frank DeRemer and Tom Pennello. XPLis a dialect of thePL/Iprogramming language, used for the development of compilers for computer languages. It was designed and implemented in 1967 by a team withWilliam M. McKeeman,James J. Horning, andDavid B. WortmanatStanford Universityand theUniversity of California, Santa Cruz. It was first announced at the 1968Fall Joint Computer Conferencein San Francisco.[43][44] XPL featured a relatively simpletranslator writing systemdubbedANALYZER, based upon abottom-up compilerprecedence parsing technique calledMSP(mixed strategy precedence). XPL was bootstrapped through Burroughs Algol onto theIBM System/360computer. (Some subsequent versions of XPL used onUniversity of Torontointernal projects utilized an SLR(1) parser, but those implementations have never been distributed). Yaccis aparser generator(loosely,compiler-compiler), not to be confused withlex, which is alexical analyzerfrequently used as a first stage by Yacc. Yacc was developed byStephen C. JohnsonatAT&Tfor theUnixoperating system.[45]The name is an acronym for "Yet AnotherCompiler Compiler." It generates an LALR(1) compiler based on a grammar written in a notation similar to Backus–Naur form. Johnson worked on Yacc in the early 1970s atBell Labs.[46]He was familiar with TMG and its influence can be seen in Yacc and the design of the C programming language. Because Yacc was the default compiler generator on most Unix systems, it was widely distributed and used. Derivatives such asGNU Bisonare still in use. The compiler generated by Yacc requires alexical analyzer. Lexical analyzer generators, such aslexorflexare widely available. TheIEEEPOSIXP1003.2 standard defines the functionality and requirements for both Lex and Yacc. Coco/Ris aparser generatorthat generates LL(1) parsers in Modula-2 (with plug-ins for other languages) from input grammars written in a variant of EBNF. It was developed by Hanspeter Mössenböck at the Swiss Federal Institute of Technology in Zurich (ETHZ) in 1985. ANTLRis aparser generatorthat generates LL(*) parsers in Java from input grammars written in a variant of EBNF. It was developed by Terence Parr at the University of San Francisco in the early 1990s as a successor of an earlier generator called PCCTS. 
Metacompilers differ from parser generators, taking as input aprogramwritten in ametalanguage. Their input consists grammar analyzing formula combined with embedded transform operations that construct abstract syntax trees, or simply output reformatted text strings that may be stack machine code. Many can be programmed in their own metalanguage enabling them to compile themselves, making them self-hosting extensible language compilers. Many metacompilers build on the work ofDewey Val Schorre. HisMETA IIcompiler, first released in 1964, was the first documented metacompiler. Able to define its own language and others, META II acceptedsyntax formulahaving imbeddedoutput (code production). It also translated to one of the earliest instances of avirtual machine. Lexical analysis was performed by built token recognizing functions: .ID, .STRING, and .NUMBER. Quoted strings in syntax formula recognize lexemes that are not kept.[47] TREE-META, a second generation Schorre metacompiler, appeared around 1968. It extended the capabilities of META II, adding unparse rules separating code production from the grammar analysis. Tree transform operations in the syntax formula produceabstract syntax treesthat the unparse rules operate on. The unparse tree pattern matching providedpeephole optimizationability. CWIC, described in a 1970 ACM publication is a third generation Schorre metacompiler that added lexing rules and backtracking operators to the grammar analysis.LISP 2was married with the unparse rules of TREEMETA in the CWIC generator language. With LISP 2 processing, CWIC can generate fully optimized code. CWIC also provided binary code generation into named code sections. Single and multipass compiles could be implemented using CWIC. CWIC compiled to 8-bit byte-addressable machine code instructions primarily designed to produce IBM System/360 code. Later generations are not publicly documented. One important feature would be the abstraction of the target processor instruction set, generating to a pseudo machine instruction set, macros, that could be separately defined or mapped to a real machine's instructions. Optimizations applying to sequential instructions could then be applied to the pseudo instruction before their expansion to target machine code. Across compilerruns in one environment but producesobject codefor another. Cross compilers are used for embedded development, where the target computer has limited capabilities. An early example of cross compilation was AIMICO, where a FLOW-MATIC program on a UNIVAC II was used to generate assembly language for theIBM 705, which was then assembled on the IBM computer.[7] TheALGOL 68Ccompiler generatedZCODEoutput, that could then be either compiled into the local machine code by aZCODEtranslator or run interpreted.ZCODEis a register-based intermediate language. This ability to interpret or compileZCODEencouraged the porting of ALGOL 68C to numerous different computer platforms. Compiler optimizationis the process of improving the quality ofobject codewithout changing the results it produces. The developers of the firstFORTRANcompiler aimed to generate code that wasbetterthan the average hand-coded assembler, so that customers would actually use their product. In one of the first real compilers, they often succeeded.[48] Later compilers, like IBM'sFortran IVcompiler, placed more priority on good diagnostics and executing more quickly, at the expense ofobject codeoptimization. 
It wasn't until theIBM System/360 seriesthat IBM provided two separate compilers—a fast-executing code checker, and a slower, optimizing one. Frances E. Allen, working alone and jointly withJohn Cocke, introduced many of the concepts for optimization. Allen's 1966 paper,Program Optimization,[49]introduced the use ofgraph data structuresto encode program content for optimization.[50]Her 1970 papers,Control Flow Analysis[51]andA Basis for Program Optimization[52]establishedintervalsas the context for efficient and effective data flow analysis and optimization. Her 1971 paper with Cocke,A Catalogue of Optimizing Transformations,[53]provided the first description and systematization of optimizing transformations. Her 1973 and 1974 papers on interproceduraldata flow analysisextended the analysis to whole programs.[54][55]Her 1976 paper with Cocke describes one of the two main analysis strategies used in optimizing compilers today.[56] Allen developed and implemented her methods as part of compilers for theIBM 7030 Stretch-Harvestand the experimentalAdvanced Computing System. This work established the feasibility and structure of modern machine- and language-independent optimizers. She went on to establish and lead thePTRANproject on the automatic parallel execution of FORTRAN programs.[57]Her PTRAN team developed new parallelism detection schemes and created the concept of the program dependence graph, the primary structuring method used by most parallelizing compilers. Programming Languages and their Compilersby John Cocke andJacob T. Schwartz, published early in 1970, devoted more than 200 pages to optimization algorithms. It included many of the now familiar techniques such asredundant code eliminationandstrength reduction.[58] In 1972,Gary A. Kildall[59]introduced the theory ofdata-flow analysisused today in optimizing compilers[60](sometimes known asKildall's method). Peephole optimizationis a simple but effective optimization technique. It was invented byWilliam M. McKeemanand published in 1965 in CACM.[61]It was used in the XPL compiler that McKeeman helped develop. Capex Corporationdeveloped the "COBOL Optimizer" in the mid-1970s forCOBOL. This type of optimizer depended, in this case, upon knowledge of "weaknesses" in the standard IBM COBOL compiler, and actually replaced (orpatched) sections of theobject codewith more efficient code. The replacement code might replace a lineartable lookupwith abinary searchfor example or sometimes simply replace a relatively "slow" instruction with a known faster one that was otherwise functionally equivalent within its context. This technique is now known as "Strength reduction". For example, on theIBM System/360hardware theCLIinstruction was, depending on the particular model, between twice and 5 times as fast as aCLCinstruction for single byte comparisons.[62][63] Modern compilers typically provide optimization options to allow programmers to choose whether or not to execute an optimization pass. When a compiler is given a syntactically incorrect program, a good, clear error message is helpful. From the perspective of the compiler writer, it is often difficult to achieve. TheWATFIVFortran compiler was developed at theUniversity of Waterloo, Canada in the late 1960s. It was designed to give better error messages than IBM's Fortran compilers of the time. In addition, WATFIV was far more usable, because it combined compiling,linkingand execution into one step, whereas IBM's compilers had three separate components to run. 
PL/Cwas a computer programming language developed at Cornell University in the early 1970s. While PL/C was a subset of IBM's PL/I language, it was designed with the specific goal of being used for teaching programming. The two researchers and academic teachers who designed PL/C wereRichard W. Conwayand Thomas R. Wilcox. They submitted the famous article "Design and implementation of a diagnostic compiler for PL/I" published in the Communications of ACM in March 1973.[64] PL/C eliminated some of the more complex features of PL/I, and added extensive debugging and error recovery facilities. The PL/C compiler had the unusual capability of never failing to compile any program, through the use of extensive automatic correction of many syntax errors and by converting any remaining syntax errors to output statements. Just-in-time (JIT) compilation is the generation of executable codeon-the-flyor as close as possible to its actual execution, to take advantage of runtimemetricsor other performance-enhancing options. Most modern compilers have a lexer and parser that produce an intermediate representation of the program. The intermediate representation is a simple sequence of operations which can be used by an optimizer and acode generatorwhich produces instructions in themachine languageof the target processor. Because the code generator uses an intermediate representation, the same code generator can be used for many different high-level languages. There are many possibilities for the intermediate representation.Three-address code, also known as aquadrupleorquadis a common form, where there is an operator, two operands, and a result. Two-address code ortripleshave a stack to which results are written, in contrast to the explicit variables of three-address code. Static Single Assignment(SSA) was developed byRon Cytron,Jeanne Ferrante,Barry K. Rosen,Mark N. Wegman, andF. Kenneth Zadeck, researchers atIBMin the 1980s.[65]In SSA, a variable is given a value only once. A new variable is created rather than modifying an existing one. SSA simplifies optimization and code generation. A code generator generatesmachine languageinstructions for the target processor. Sethi–Ullman algorithmor Sethi–Ullman numbering is a method to minimise the number ofregistersneeded to hold variables.
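As a small illustration of the intermediate representations mentioned above (not any particular compiler's IR), the sketch below lowers a nested expression to three-address quadruples; the trailing comment notes how SSA's single-assignment rule would apply. All names are illustrative.

```python
from itertools import count

def lower(expr, quads, temps=None):
    """Lower a nested expression to three-address quads (op, arg1, arg2, result).
    `expr` is either a variable name or a tuple (op, left, right)."""
    temps = temps if temps is not None else count(1)
    if isinstance(expr, str):
        return expr
    op, left, right = expr
    a = lower(left, quads, temps)
    b = lower(right, quads, temps)
    result = f"t{next(temps)}"
    quads.append((op, a, b, result))
    return result

quads = []
lower(("+", ("*", "b", "c"), "d"), quads)   # corresponds to  a = b * c + d
for quad in quads:
    print(quad)
# ('*', 'b', 'c', 't1')
# ('+', 't1', 'd', 't2')
#
# SSA applies the same discipline to ordinary variables: a second assignment to x
# defines a fresh version, e.g.  x1 = b * c;  x2 = x1 + d  instead of reusing x.
```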
https://en.wikipedia.org/wiki/History_of_compiler_construction
Incomputing, acompileris acomputer programthat transformssource codewritten in aprogramming languageor computer language (thesource language), into another computer language (thetarget language, often having a binary form known asobject codeormachine code). The most common reason for transforming source code is to create anexecutableprogram. Any program written in ahigh-level programming languagemust be translated to object code before it can be executed, so all programmers using such a language use a compiler or aninterpreter, sometimes even both. Improvements to a compiler may lead to a large number of improved features in executable programs. TheProduction Quality Compiler-Compiler, in the late 1970s, introduced the principles of compiler organization that are still widely used today (e.g., a front-end handling syntax and semantics and a back-end generating machine code). Software for early computers was primarily written inassembly language, and before that directly inmachine code. It is usually more productive for a programmer to use a high-level language, and programs written in a high-level language can bereusedon different kinds of computers. Even so, it took a while for compilers to become established, because they generated code that did not perform as well as hand-written assembler, they were daunting development projects in their own right, and the very limitedmemorycapacity of early computers created many technical problems for practical compiler implementations. Between 1942 and 1945,Konrad ZusedevelopedPlankalkül("plan calculus"), the first high-level language for a computer, for which he envisioned aPlanfertigungsgerät("plan assembly device"), which would automatically translate the mathematical formulation of a program into machine-readablepunched film stock.[1]However, the first actual compiler for the language was implemented only decades later. Between 1949 and 1951,Heinz RutishauserproposedSuperplan, a high-level language and automatic translator.[2]His ideas were later refined byFriedrich L. BauerandKlaus Samelson.[3] The first practical compiler was written byCorrado Böhmin 1951 for his PhD thesis,[4][5]one of the first computer science doctorates awarded anywhere in the world. The first implemented compiler was written byGrace Hopper, who also coined the term "compiler",[6][7]referring to herA-0 systemwhich functioned as aloaderorlinker, not the modern notion of a compiler. The firstAutocodeand compiler in the modern sense were developed byAlick Glenniein 1952 at theUniversity of Manchesterfor theMark 1computer.[8][9]TheFORTRANteam led byJohn W. BackusatIBMintroduced the first commercially available compiler, in 1957, which took 18 person-years to create.[10] The firstALGOL 58compiler was completed by the end of 1958 byFriedrich L. Bauer, Hermann Bottenbruch,Heinz Rutishauser, andKlaus Samelsonfor theZ22computer. Bauer et al. had been working on compiler technology for theSequentielle Formelübersetzung(i.e.sequential formula translation) in the previous years. By 1960, an extended Fortran compiler, ALTAC, was available on thePhilco2000, so it is probable that a Fortran program was compiled for both IBM and Philcocomputer architecturesin mid-1960.[11]The first known demonstratedcross-platformhigh-level language wasCOBOL. In a demonstration in December 1960, a COBOL program was compiled and executed on both theUNIVAC IIand theRCA501.[7][12] Like any other software, there are benefits from implementing a compiler in a high-level language. 
In particular, a compiler can beself-hosted– that is, written in the programming language it compiles. Building a self-hosting compiler is abootstrappingproblem, i.e. the first such compiler for a language must be either hand written machine code, compiled by a compiler written in another language, or compiled by running the compiler's source on itself in aninterpreter. Corrado Böhm developed a language, a machine, and a translation method for compiling that language on the machine in his PhD dissertation submitted in 1951.[4][5]He not only described a complete compiler, but also defined for the first time that compiler in its own language. The language was interesting in itself, because every statement (including input statements, output statements and control statements) was a special case of anassignment statement. TheNavy Electronics Laboratory InternationalALGOLCompilerorNELIACwas adialectand compiler implementation of theALGOL 58programming languagedeveloped by theNaval Electronics Laboratoryin 1958.[13] NELIAC was the brainchild ofHarry Huskey– then Chairman of theACMand a well knowncomputer scientist(and later academic supervisor ofNiklaus Wirth), and supported by Maury Halstead, the head of the computational center at NEL. The earliest version was implemented on the prototypeUSQ-17computer (called the Countess) at the laboratory. It was the world's first self-compiling compiler – the compiler was first coded in simplified form in assembly language (thebootstrap), then re-written in its own language and compiled by the bootstrap, and finally re-compiled by itself, making the bootstrap obsolete. Another earlyself-hostingcompiler was written forLispby Tim Hart and Mike Levin atMITin 1962.[14]They wrote a Lisp compiler in Lisp, testing it inside an existing Lisp interpreter. Once they had improved the compiler to the point where it could compile its own source code, it was self-hosting.[15] The compiler as it exists on the standard compiler tape is a machine language program that was obtained by having theS-expressiondefinition of the compiler work on itself through the interpreter. This technique is only possible when an interpreter already exists for the very same language that is to be compiled. It borrows directly from the notion of running a program on itself as input, which is also used in various proofs intheoretical computer science, such as the proof that thehalting problemisundecidable. Forthis an example of a self-hosting compiler. Theself compilation and cross compilationfeatures of Forth are synonymous withmetacompilationandmetacompilers.[16][17]LikeLisp, Forth is anextensible programminglanguage. It is theextensible programminglanguage features of Forth and Lisp that enable them to generate new versions of themselves or port themselves to new environments. Aparseris an important component of a compiler. It parses the source code of a computer programming language to create some form of internal representation. Programming languages tend to be specified in terms of acontext-free grammarbecause fast and efficient parsers can be written for them. Parsers can be written by hand or generated by aparser generator. A context-free grammar provides a simple and precise mechanism for describing how programming language constructs are built from smallerblocks. 
The formalism of context-free grammars was developed in the mid-1950s byNoam Chomsky.[18] Block structure was introduced into computer programming languages by the ALGOL project (1957–1960), which, as a consequence, also featured a context-free grammar to describe the resulting ALGOL syntax. Context-free grammars are simple enough to allow the construction of efficient parsing algorithms which, for a given string, determine whether and how it can be generated from the grammar. If a programming language designer is willing to work within some limited subsets of context-free grammars, more efficient parsers are possible. TheLR parser(left to right) was invented byDonald Knuthin 1965 in a paper, "On the Translation of Languages from Left to Right". AnLR parseris a parser that reads input fromLeft to right (as it would appear if visually displayed) and produces aRightmost derivation. The termLR(k) parseris also used, wherekrefers to the number of unconsumedlookaheadinput symbols that are used in making parsing decisions. Knuth proved that LR(k) grammars can be parsed with an execution time essentially proportional to the length of the program, and that every LR(k) grammar fork> 1 can be mechanically transformed into an LR(1) grammar for the same language. In other words, it is only necessary to have one symbol lookahead to parse anydeterministic context-free grammar(DCFG).[19] Korenjak (1969) was the first to show parsers for programming languages could be produced using these techniques.[20]Frank DeRemer devised the more practicalSimple LR(SLR) andLook-ahead LR(LALR) techniques, published in his PhD dissertation at MIT in 1969.[21][22]This was an important breakthrough, because LR(k) translators, as defined by Donald Knuth, were much too large for implementation on computer systems in the 1960s and 1970s. In practice, LALR offers a good solution; the added power of LALR(1) parsers over SLR(1) parsers (that is, LALR(1) can parse more complex grammars than SLR(1)) is useful, and, though LALR(1) is not comparable with LL(1)(See below) (LALR(1) cannot parse all LL(1) grammars), most LL(1) grammars encountered in practice can be parsed by LALR(1). LR(1) grammars are more powerful again than LALR(1); however, an LR(1) grammar requires acanonical LR parserwhich would be extremely large in size and is not considered practical. The syntax of manyprogramming languagesare defined by grammars that can be parsed with an LALR(1) parser, and for this reason LALR parsers are often used by compilers to perform syntax analysis of source code. Arecursive ascent parserimplements an LALR parser using mutually-recursive functions rather than tables. Thus, the parser isdirectly encodedin the host language similar torecursive descent. Direct encoding usually yields a parser which is faster than its table-driven equivalent[23]for the same reason that compilation is faster than interpretation. It is also (in principle) possible to hand edit a recursive ascent parser, whereas a tabular implementation is nigh unreadable to the average human. Recursive ascent was first described by Thomas Pennello in his article "Very fast LR parsing" in 1986.[23]The technique was later expounded upon by G.H. Roberts[24]in 1988 as well as in an article by Leermakers, Augusteijn, Kruseman Aretz[25]in 1992 in the journalTheoretical Computer Science. AnLL parserparses the input fromLeft to right, and constructs aLeftmost derivationof the sentence (hence LL, as opposed to LR). 
The class of grammars which are parsable in this way is known as theLL grammars. LL grammars are an even more restricted class of context-free grammars than LR grammars. Nevertheless, they are of great interest to compiler writers, because such a parser is simple and efficient to implement. LL(k) grammars can be parsed by arecursive descent parserwhich is usually coded by hand, although a notation such asMETA IImight alternatively be used. The design of ALGOL sparked investigation of recursive descent, since the ALGOL language itself is recursive. The concept of recursive descent parsing was discussed in the January 1961 issue ofCommunications of the ACMin separate papers by A.A. Grau andEdgar T. "Ned" Irons.[26][27]Richard Waychoff and colleagues also implemented recursive descent in theBurroughsALGOL compiler in March 1961,[28]the two groups used different approaches but were in at least informal contact.[29] The idea of LL(1) grammars was introduced by Lewis and Stearns (1968).[30][31] Recursive descent was popularised byNiklaus WirthwithPL/0, aneducational programming languageused to teach compiler construction in the 1970s.[32] LR parsing can handle a larger range of languages thanLL parsing, and is also better at error reporting (This is disputable, REFERENCE is required), i.e. it detects syntactic errors when the input does not conform to the grammar as soon as possible. In 1970,Jay Earleyinvented what came to be known as theEarley parser. Earley parsers are appealing because they can parse allcontext-free languagesreasonably efficiently.[33] John Backus proposed "metalinguistic formulas"[34][35]to describe the syntax of the new programming language IAL, known today asALGOL 58(1959). Backus's work was based on thePost canonical systemdevised byEmil Post. Further development of ALGOL led toALGOL 60; in its report (1963),Peter Naurnamed Backus's notationBackus normal form(BNF), and simplified it to minimize the character set used. However,Donald Knuthargued that BNF should rather be read asBackus–Naur form,[36]and that has become the commonly accepted usage. Niklaus Wirthdefinedextended Backus–Naur form(EBNF), a refined version of BNF, in the early 1970s for PL/0.Augmented Backus–Naur form(ABNF) is another variant. Both EBNF and ABNF are widely used to specify the grammar of programming languages, as the inputs to parser generators, and in other fields such as defining communication protocols. Aparser generatorgenerates the lexical-analyser portion of a compiler. It is a program that takes a description of aformal grammarof a specific programming language and produces a parser for that language. That parser can be used in a compiler for that specific language. The parser detects and identifies the reserved words and symbols of the specific language from a stream of text and returns these as tokens to the code which implements the syntactic validation and translation into object code. This second part of the compiler can also be created by acompiler-compilerusing a formal rules-of-precedence syntax-description as input. The firstcompiler-compilerto use that name was written byTony Brookerin 1960 and was used to create compilers for theAtlascomputer at theUniversity of Manchester, including theAtlas Autocodecompiler. However it was rather different from modern compiler-compilers, and today would probably be described as being somewhere between a highly customisable generic compiler and anextensible-syntax language. 
The name "compiler-compiler" was far more appropriate for Brooker's system than it is for most modern compiler-compilers, which are more accurately described as parser generators. In the early 1960s, Robert McClure atTexas Instrumentsinvented a compiler-compiler calledTMG, the name taken from "transmogrification".[37][38][39][40]In the following years TMG wasportedto severalUNIVACand IBM mainframe computers. TheMulticsproject, a joint venture betweenMITandBell Labs, was one of the first to develop anoperating systemin a high-level language.PL/Iwas chosen as the language, but an external supplier could not supply a working compiler.[41]The Multics team developed their own subset dialect ofPL/Iknown as Early PL/I (EPL) as their implementation language in 1964. TMG was ported toGE-600 seriesand used to develop EPL byDouglas McIlroy,Robert Morris, and others. Not long afterKen Thompsonwrote the first version ofUnixfor thePDP-7in 1969,Douglas McIlroycreated the new system's first higher-level language: an implementation of McClure's TMG.[42]TMG was also the compiler definition tool used by Ken Thompson to write the compiler for theB languageon his PDP-7 in 1970. B was the immediate ancestor ofC. An earlyLALR parser generatorwas called "TWS", created by Frank DeRemer and Tom Pennello. XPLis a dialect of thePL/Iprogramming language, used for the development of compilers for computer languages. It was designed and implemented in 1967 by a team withWilliam M. McKeeman,James J. Horning, andDavid B. WortmanatStanford Universityand theUniversity of California, Santa Cruz. It was first announced at the 1968Fall Joint Computer Conferencein San Francisco.[43][44] XPL featured a relatively simpletranslator writing systemdubbedANALYZER, based upon abottom-up compilerprecedence parsing technique calledMSP(mixed strategy precedence). XPL was bootstrapped through Burroughs Algol onto theIBM System/360computer. (Some subsequent versions of XPL used onUniversity of Torontointernal projects utilized an SLR(1) parser, but those implementations have never been distributed). Yaccis aparser generator(loosely,compiler-compiler), not to be confused withlex, which is alexical analyzerfrequently used as a first stage by Yacc. Yacc was developed byStephen C. JohnsonatAT&Tfor theUnixoperating system.[45]The name is an acronym for "Yet AnotherCompiler Compiler." It generates an LALR(1) compiler based on a grammar written in a notation similar to Backus–Naur form. Johnson worked on Yacc in the early 1970s atBell Labs.[46]He was familiar with TMG and its influence can be seen in Yacc and the design of the C programming language. Because Yacc was the default compiler generator on most Unix systems, it was widely distributed and used. Derivatives such asGNU Bisonare still in use. The compiler generated by Yacc requires alexical analyzer. Lexical analyzer generators, such aslexorflexare widely available. TheIEEEPOSIXP1003.2 standard defines the functionality and requirements for both Lex and Yacc. Coco/Ris aparser generatorthat generates LL(1) parsers in Modula-2 (with plug-ins for other languages) from input grammars written in a variant of EBNF. It was developed by Hanspeter Mössenböck at the Swiss Federal Institute of Technology in Zurich (ETHZ) in 1985. ANTLRis aparser generatorthat generates LL(*) parsers in Java from input grammars written in a variant of EBNF. It was developed by Terence Parr at the University of San Francisco in the early 1990s as a successor of an earlier generator called PCCTS. 
Metacompilers differ from parser generators, taking as input aprogramwritten in ametalanguage. Their input consists grammar analyzing formula combined with embedded transform operations that construct abstract syntax trees, or simply output reformatted text strings that may be stack machine code. Many can be programmed in their own metalanguage enabling them to compile themselves, making them self-hosting extensible language compilers. Many metacompilers build on the work ofDewey Val Schorre. HisMETA IIcompiler, first released in 1964, was the first documented metacompiler. Able to define its own language and others, META II acceptedsyntax formulahaving imbeddedoutput (code production). It also translated to one of the earliest instances of avirtual machine. Lexical analysis was performed by built token recognizing functions: .ID, .STRING, and .NUMBER. Quoted strings in syntax formula recognize lexemes that are not kept.[47] TREE-META, a second generation Schorre metacompiler, appeared around 1968. It extended the capabilities of META II, adding unparse rules separating code production from the grammar analysis. Tree transform operations in the syntax formula produceabstract syntax treesthat the unparse rules operate on. The unparse tree pattern matching providedpeephole optimizationability. CWIC, described in a 1970 ACM publication is a third generation Schorre metacompiler that added lexing rules and backtracking operators to the grammar analysis.LISP 2was married with the unparse rules of TREEMETA in the CWIC generator language. With LISP 2 processing, CWIC can generate fully optimized code. CWIC also provided binary code generation into named code sections. Single and multipass compiles could be implemented using CWIC. CWIC compiled to 8-bit byte-addressable machine code instructions primarily designed to produce IBM System/360 code. Later generations are not publicly documented. One important feature would be the abstraction of the target processor instruction set, generating to a pseudo machine instruction set, macros, that could be separately defined or mapped to a real machine's instructions. Optimizations applying to sequential instructions could then be applied to the pseudo instruction before their expansion to target machine code. Across compilerruns in one environment but producesobject codefor another. Cross compilers are used for embedded development, where the target computer has limited capabilities. An early example of cross compilation was AIMICO, where a FLOW-MATIC program on a UNIVAC II was used to generate assembly language for theIBM 705, which was then assembled on the IBM computer.[7] TheALGOL 68Ccompiler generatedZCODEoutput, that could then be either compiled into the local machine code by aZCODEtranslator or run interpreted.ZCODEis a register-based intermediate language. This ability to interpret or compileZCODEencouraged the porting of ALGOL 68C to numerous different computer platforms. Compiler optimizationis the process of improving the quality ofobject codewithout changing the results it produces. The developers of the firstFORTRANcompiler aimed to generate code that wasbetterthan the average hand-coded assembler, so that customers would actually use their product. In one of the first real compilers, they often succeeded.[48] Later compilers, like IBM'sFortran IVcompiler, placed more priority on good diagnostics and executing more quickly, at the expense ofobject codeoptimization. 
It wasn't until theIBM System/360 seriesthat IBM provided two separate compilers—a fast-executing code checker, and a slower, optimizing one. Frances E. Allen, working alone and jointly withJohn Cocke, introduced many of the concepts for optimization. Allen's 1966 paper,Program Optimization,[49]introduced the use ofgraph data structuresto encode program content for optimization.[50]Her 1970 papers,Control Flow Analysis[51]andA Basis for Program Optimization[52]establishedintervalsas the context for efficient and effective data flow analysis and optimization. Her 1971 paper with Cocke,A Catalogue of Optimizing Transformations,[53]provided the first description and systematization of optimizing transformations. Her 1973 and 1974 papers on interproceduraldata flow analysisextended the analysis to whole programs.[54][55]Her 1976 paper with Cocke describes one of the two main analysis strategies used in optimizing compilers today.[56] Allen developed and implemented her methods as part of compilers for theIBM 7030 Stretch-Harvestand the experimentalAdvanced Computing System. This work established the feasibility and structure of modern machine- and language-independent optimizers. She went on to establish and lead thePTRANproject on the automatic parallel execution of FORTRAN programs.[57]Her PTRAN team developed new parallelism detection schemes and created the concept of the program dependence graph, the primary structuring method used by most parallelizing compilers. Programming Languages and their Compilersby John Cocke andJacob T. Schwartz, published early in 1970, devoted more than 200 pages to optimization algorithms. It included many of the now familiar techniques such asredundant code eliminationandstrength reduction.[58] In 1972,Gary A. Kildall[59]introduced the theory ofdata-flow analysisused today in optimizing compilers[60](sometimes known asKildall's method). Peephole optimizationis a simple but effective optimization technique. It was invented byWilliam M. McKeemanand published in 1965 in CACM.[61]It was used in the XPL compiler that McKeeman helped develop. Capex Corporationdeveloped the "COBOL Optimizer" in the mid-1970s forCOBOL. This type of optimizer depended, in this case, upon knowledge of "weaknesses" in the standard IBM COBOL compiler, and actually replaced (orpatched) sections of theobject codewith more efficient code. The replacement code might replace a lineartable lookupwith abinary searchfor example or sometimes simply replace a relatively "slow" instruction with a known faster one that was otherwise functionally equivalent within its context. This technique is now known as "Strength reduction". For example, on theIBM System/360hardware theCLIinstruction was, depending on the particular model, between twice and 5 times as fast as aCLCinstruction for single byte comparisons.[62][63] Modern compilers typically provide optimization options to allow programmers to choose whether or not to execute an optimization pass. When a compiler is given a syntactically incorrect program, a good, clear error message is helpful. From the perspective of the compiler writer, it is often difficult to achieve. TheWATFIVFortran compiler was developed at theUniversity of Waterloo, Canada in the late 1960s. It was designed to give better error messages than IBM's Fortran compilers of the time. In addition, WATFIV was far more usable, because it combined compiling,linkingand execution into one step, whereas IBM's compilers had three separate components to run. 
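To make the peephole technique described above concrete, the following sketch scans a small window of adjacent instructions in a made-up pseudo-assembly and replaces known-expensive patterns with cheaper equivalents, including a simple strength reduction. The instruction names and patterns are invented for this illustration and do not correspond to the IBM or Capex optimizers discussed above.

# A minimal peephole-optimizer sketch over a made-up pseudo-assembly: scan a
# small window of adjacent instructions and replace known-slow patterns with
# cheaper equivalents.

def peephole(code):
    out = []
    i = 0
    while i < len(code):
        # Pattern: a store immediately followed by a load of the same location
        # makes the load redundant.
        if (i + 1 < len(code)
                and code[i][0] == "STORE" and code[i + 1][0] == "LOAD"
                and code[i][1] == code[i + 1][1]):
            out.append(code[i])
            i += 2
            continue
        # Pattern: multiplication by 2 replaced by an addition (strength reduction).
        if code[i] == ("MUL_CONST", 2):
            out.append(("ADD_SELF",))
            i += 1
            continue
        out.append(code[i])
        i += 1
    return out

program = [("LOAD", "x"), ("MUL_CONST", 2), ("STORE", "y"), ("LOAD", "y"), ("RET",)]
print(peephole(program))
# [('LOAD', 'x'), ('ADD_SELF',), ('STORE', 'y'), ('RET',)]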
PL/Cwas a computer programming language developed at Cornell University in the early 1970s. While PL/C was a subset of IBM's PL/I language, it was designed with the specific goal of being used for teaching programming. It was designed byRichard W. Conwayand Thomas R. Wilcox, who described it in the article "Design and implementation of a diagnostic compiler for PL/I", published in Communications of the ACM in March 1973.[64] PL/C eliminated some of the more complex features of PL/I, and added extensive debugging and error recovery facilities. The PL/C compiler had the unusual capability of never failing to compile any program, through the use of extensive automatic correction of many syntax errors and by converting any remaining syntax errors to output statements.

Just-in-time (JIT) compilation is the generation of executable codeon-the-flyor as close as possible to its actual execution, to take advantage of runtimemetricsor other performance-enhancing options.

Most modern compilers have a lexer and parser that produce an intermediate representation of the program. The intermediate representation is a simple sequence of operations which can be used by an optimizer and acode generatorwhich produces instructions in themachine languageof the target processor. Because the code generator uses an intermediate representation, the same code generator can be used for many different high-level languages. There are many possibilities for the intermediate representation.Three-address code, also known as aquadrupleorquad, is a common form, where there is an operator, two operands, and a result. Two-address code ortripleshave a stack to which results are written, in contrast to the explicit variables of three-address code.

Static Single Assignment(SSA) was developed byRon Cytron,Jeanne Ferrante,Barry K. Rosen,Mark N. Wegman, andF. Kenneth Zadeck, researchers atIBMin the 1980s.[65]In SSA, a variable is given a value only once. A new variable is created rather than modifying an existing one. SSA simplifies optimization and code generation.

A code generator generatesmachine languageinstructions for the target processor. TheSethi–Ullman algorithm, or Sethi–Ullman numbering, is a method to minimise the number ofregistersneeded to hold variables.
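A brief sketch of Sethi–Ullman numbering may help: each node of a binary expression tree is labelled with the minimum number of registers needed to evaluate it without spilling to memory. The tuple-based tree representation is an assumption of this sketch, and the variant used here counts every leaf as needing one register.

# A sketch of Sethi–Ullman numbering on a binary expression tree: each node is
# labelled with the minimum number of registers needed to evaluate it without
# spilling.  Trees are tuples (op, left, right); leaves are variable names.

def su_label(node):
    if isinstance(node, str):          # leaf: a variable loaded into a register
        return 1
    _, left, right = node
    l, r = su_label(left), su_label(right)
    # If both subtrees need the same number of registers, one extra register
    # is required to hold the first result while the second is evaluated.
    return max(l, r) if l != r else l + 1

# (a + b) * (c - d) needs 3 registers; a + (b + (c + d)) needs only 2.
print(su_label(("*", ("+", "a", "b"), ("-", "c", "d"))))   # 3
print(su_label(("+", "a", ("+", "b", ("+", "c", "d")))))   # 2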
https://en.wikipedia.org/wiki/History_of_compiler_construction#Self-hosting_compilers
Metacompilationis acomputationwhich involvesmetasystemtransitions (MST) from a computing machineMto a metamachineM'which controls, analyzes and imitates the work ofM.Semantics-based program transformation, such aspartial evaluationand supercompilation (SCP), is metacomputation. Metasystem transitions may be repeated, as when a program transformer gets transformed itself. In this manner MST hierarchies of any height can be formed. The Fox[clarification needed]paper reviews one strain of research which was started inRussiabyValentin Turchin'sREFALsystem in the late 1960s-early 1970s and became known for the development of supercompilation as a distinct method ofprogram transformation. After a brief description of the history of this research line, the paper concentrates on those results and problems where supercompilation is combined with repeated metasystem transitions.
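As a toy illustration of the kind of semantics-based program transformation mentioned above, the following sketch performs a simple partial evaluation: given a power function and a known exponent, it unfolds the recursion on the known argument and produces a residual program specialised to that exponent. The function names and the generated source text are illustrative only; supercompilation proper is considerably more general.

# A small sketch of partial evaluation: given power(x, n) and a known n,
# produce a residual program specialised to that n.  The generated source
# text below is purely illustrative.

def specialise_power(n):
    # Unfold the loop over the known argument n at "compile time",
    # leaving only the computation on the unknown argument x.
    expr = "1"
    for _ in range(n):
        expr = f"({expr} * x)"
    src = f"def power_{n}(x):\n    return {expr}\n"
    namespace = {}
    exec(src, namespace)          # build the residual function
    return src, namespace[f"power_{n}"]

src, power_3 = specialise_power(3)
print(src)         # def power_3(x): return (((1 * x) * x) * x)
print(power_3(5))  # 125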
https://en.wikipedia.org/wiki/Metacompilation
In formal grammartheory, thedeterministic context-free grammars(DCFGs) are aproper subsetof thecontext-free grammars. They are the subset of context-free grammars that can be derived fromdeterministic pushdown automata, and they generate thedeterministic context-free languages. DCFGs are alwaysunambiguous, and are an important subclass of unambiguous CFGs; there are non-deterministic unambiguous CFGs, however. DCFGs are of great practical interest, as they can be parsed inlinear timeand in fact a parser can be automatically generated from the grammar by aparser generator. They are thus widely used throughout computer science. Various restricted forms of DCFGs can be parsed by simpler, less resource-intensive parsers, and thus are often used. These grammar classes are referred to by the type of parser that parses them, and important examples areLALR,SLR, andLL.

In the 1960s, theoretical research in computer science onregular expressionsandfinite automataled to the discovery thatcontext-free grammarsare equivalent to nondeterministicpushdown automata.[1][2][3]These grammars were thought to capture the syntax of computer programming languages. The first high-level computer programming languages were under development at the time (seeHistory of programming languages) and writingcompilerswas difficult. But usingcontext-free grammarsto help automate the parsing part of the compiler simplified the task. Deterministic context-free grammars were particularly useful because they could be parsed sequentially by adeterministic pushdown automaton, which was a requirement due to computer memory constraints.[4]In 1965,Donald Knuthinvented theLR(k) parserand proved that there exists an LR(k) grammar for every deterministic context-free language.[5]This parser still required a lot of memory. In 1969,Frank DeRemerinvented theLALRandSimple LRparsers, both based on the LR parser and having greatly reduced memory requirements at the cost of less language recognition power. The LALR parser was the stronger alternative.[6]These two parsers have since been widely used in compilers of many computer languages. Recent research has identified methods by which canonical LR parsers may be implemented with dramatically reduced table requirements over Knuth's table-building algorithm.[7]
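The practical appeal of deterministic parsing can be illustrated with a minimal sketch: a deterministic pushdown automaton for the classic deterministic context-free language { a^n b^n : n >= 1 } makes exactly one move per input symbol, with no backtracking, and therefore runs in linear time. The state names and representation below are illustrative only.

# A sketch of why deterministic context-free languages can be recognised in
# linear time: a deterministic pushdown automaton for { a^n b^n : n >= 1 }
# makes exactly one move per input symbol, with no backtracking.

def accepts(s):
    stack = []
    state = "reading_a"
    for ch in s:
        if state == "reading_a" and ch == "a":
            stack.append("A")                 # push one marker per 'a'
        elif state in ("reading_a", "reading_b") and ch == "b" and stack:
            stack.pop()                       # match each 'b' against a marker
            state = "reading_b"
        else:
            return False                      # deterministic: no other move exists
    return state == "reading_b" and not stack

for s in ["ab", "aaabbb", "aabbb", "abab", ""]:
    print(repr(s), accepts(s))
# 'ab' True, 'aaabbb' True, 'aabbb' False, 'abab' False, '' False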
https://en.wikipedia.org/wiki/Deterministic_context-free_grammar
Insoftware, aspell checker(orspelling checkerorspell check) is asoftware featurethat checks for misspellings in atext. Spell-checking features are often embedded insoftwareor services, such as aword processor,email client, electronicdictionary, orsearch engine.

Eye have a spelling chequer,
It came with my Pea Sea.
It plane lee marks four my revue
Miss Steaks I can knot sea.
Eye strike the quays and type a whirred
And weight four it two say
Weather eye am write oar wrong
It tells me straight a weigh.
Eye ran this poem threw it,
Your shore real glad two no.
Its vary polished in its weigh.
My chequer tolled me sew.
A chequer is a bless thing,
It freeze yew lodes of thyme.
It helps me right all stiles of righting,
And aides me when eye rime.
Each frays come posed up on my screen
Eye trussed too bee a joule.
The chequer pours o'er every word
Two cheque sum spelling rule.

A basic spell checker carries out the following processes: It is unclear whether morphological analysis—allowing for many forms of a word depending on its grammatical role—provides a significant benefit for English, though its benefits for highlysynthetic languagessuch as German, Hungarian, or Turkish are clear. As an adjunct to these components, the program'suser interfaceallows users to approve or reject replacements and modify the program's operation.

Spell checkers can useapproximate string matchingalgorithms such asLevenshtein distanceto find correct spellings of misspelled words.[1]An alternative type of spell checker uses solely statistical information, such asn-grams, to recognize errors instead of correctly-spelled words. This approach usually requires a lot of effort to obtain sufficient statistical information. Key advantages include needing less runtime storage and the ability to correct errors in words that are not included in a dictionary.[2]

In some cases, spell checkers use a fixed list of misspellings andsuggestionsfor those misspellings; this less flexible approach is often used in paper-based correction methods, such as thesee alsoentries of encyclopedias. Clustering algorithmshave also been used for spell checking[3]combined with phonetic information.[4]

In 1961,Les Earnest, who headed the research on this budding technology, saw it necessary to include the first spell checker that accessed a list of 10,000 acceptable words.[5]Ralph Gorin, a graduate student under Earnest at the time, created the first true spelling checker program written as an applications program (rather than research) for general English text: SPELL for the DEC PDP-10 at Stanford University's Artificial Intelligence Laboratory, in February 1971.[6]Gorin wrote SPELL inassembly language, for faster action; he made the first spelling corrector by searching the word list for plausible correct spellings that differ by a single letter or adjacent letter transpositions and presenting them to the user. Gorin made SPELL publicly accessible, as was done with most SAIL (Stanford Artificial Intelligence Laboratory) programs, and it soon spread around the world via the new ARPAnet, about ten years before personal computers came into general use.[7]SPELL, with its algorithms and data structures, inspired the Unixispellprogram. The first spell checkers were widely available on mainframe computers in the late 1970s.
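The candidate-generation idea attributed to Gorin's SPELL above can be sketched as follows: for a flagged word, propose dictionary words that differ by one substituted, inserted, or deleted letter, or by one adjacent transposition. The tiny word list is purely illustrative, and the sketch does not reproduce SPELL's actual data structures.

# A sketch of single-error candidate generation: for a flagged word, propose
# dictionary words that differ by one substituted, inserted, or deleted letter,
# or by one adjacent transposition.

import string

WORDS = {"bath", "bat", "both", "boat", "bet", "that"}

def candidates(word):
    letters = string.ascii_lowercase
    variants = set()
    for i in range(len(word)):
        variants.add(word[:i] + word[i+1:])                        # deletion
        for c in letters:
            variants.add(word[:i] + c + word[i+1:])                # substitution
    for i in range(len(word) + 1):
        for c in letters:
            variants.add(word[:i] + c + word[i:])                  # insertion
    for i in range(len(word) - 1):
        variants.add(word[:i] + word[i+1] + word[i] + word[i+2:])  # transposition
    return sorted(variants & WORDS)

print(candidates("baht"))   # ['bat', 'bath']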
A group of six linguists fromGeorgetown Universitydeveloped the first spell-check system for the IBM corporation.[8] Henry Kučerainvented one for the VAX machines of Digital Equipment Corp in 1981.[9] TheInternational Ispellprogram commonly used in Unix is based on R. E. Gorin's SPELL. It was converted to C by Pace Willisson at MIT.[10] The GNU project has its spell checkerGNU Aspell. Aspell's main improvement is that it can more accurately suggest correct alternatives for misspelled English words.[11] Due to the inability of traditional spell checkers to check words in complex inflected languages, Hungarian László Németh developedHunspell, a spell checker that supportsagglutinative languagesand complex compound words. Hunspell also uses Unicode in its dictionaries.[12]Hunspell replaced the previousMySpellinOpenOffice.orgin version 2.0.2. Enchantis another general spell checker, derived fromAbiWord. Its goal is to combine programs supporting different languages such as Aspell, Hunspell, Nuspell, Hspell (Hebrew), Voikko (Finnish), Zemberek (Turkish) and AppleSpell under one interface.[13] The first spell checkers for personal computers appeared in 1980, such as "WordCheck" for Commodore systems which was released in late 1980 in time for advertisements to go to print in January 1981.[14]Developers such as Maria Mariani[8]andRandom House[15]rushedOEMpackages or end-user products into the rapidly expanding software market. On the pre-Windows PCs, these spell checkers were standalone programs, many of which could be run interminate-and-stay-residentmode from within word-processing packages on PCs with sufficient memory. However, the market for standalone packages was short-lived, as by the mid-1980s developers of popular word-processing packages likeWordStarandWordPerfecthad incorporated spell checkers in their packages, mostly licensed from the above companies, who quickly expanded support from justEnglishto manyEuropeanand eventually evenAsian languages. However, this required increasing sophistication in the morphology routines of the software, particularly with regard to heavily-agglutinativelanguages likeHungarianandFinnish. Although the size of the word-processing market in a country likeIcelandmight not have justified the investment of implementing a spell checker, companies like WordPerfect nonetheless strove to localize their software for as many national markets as possible as part of their globalmarketingstrategy. When Apple developed "a system-wide spelling checker" for Mac OS X so that "the operating system took over spelling fixes,"[16]it was a first: one "didn't have to maintain a separate spelling checker for each" program.[17]Mac OS X's spellcheck coverage includes virtually all bundled and third party applications. Visual Tools'VT Speller, introduced in 1994, was "designed for developers of applications that support Windows."[18][19]It came with a dictionary but had the ability to build and incorporate use of secondary dictionaries.[20] Web browsers such asFirefoxandGoogle Chromeoffer spell checking support, usingHunspell. Prior to using Hunspell, Firefox and Chrome usedMySpellandGNU Aspell, respectively.[21] Some spell checkers have separate support for medical dictionaries to help prevent medical errors.[22][23][24] The first spell checkers were "verifiers" instead of "correctors." They offered no suggestions for incorrectly spelled words. This was helpful fortyposbut it was not so helpful for logical or phonetic errors. 
The challenge the developers faced was the difficulty in offering useful suggestions for misspelled words. This requires reducing words to a skeletal form and applying pattern-matching algorithms. It might seem logical that where spell-checking dictionaries are concerned, "the bigger, the better," so that correct words are not marked as incorrect. In practice, however, an optimal size for English appears to be around 90,000 entries. If there are more than this, incorrectly spelled words may be skipped because they are mistaken for others. For example, a linguist might determine on the basis ofcorpus linguisticsthat the wordbahtis more frequently a misspelling ofbathorbatthan a reference to the Thai currency. Hence, it would typically be more useful if a few people who write about Thai currency were slightly inconvenienced than if the spelling errors of the many more people who discuss baths were overlooked.

The first MS-DOS spell checkers were mostly used in proofing mode from within word processing packages. After preparing a document, a user scanned the text looking for misspellings. Later, however, batch processing was offered in such packages asOracle's short-lived CoAuthor, which allowed a user to view the results after a document was processed and correct only the words that were known to be wrong. When memory and processing power became abundant, spell checking was performed in the background in an interactive way, as has been the case with Sector Software's Spellbound program, released in 1987, and withMicrosoft Wordsince Word 95.

Spell checkers became increasingly sophisticated, and some are now capable of recognizinggrammaticalerrors. However, even at their best, they rarely catch all the errors in a text (such ashomophoneerrors) and will flagneologismsand foreign words as misspellings. Nonetheless, spell checkers can be considered as a type offoreign language writing aidthat non-native language learners can rely on to detect and correct their misspellings in the target language.[25]

English is unusual in that most words used in formal writing have a single spelling that can be found in a typical dictionary, with the exception of some jargon and modified words. In many languages, words are oftenconcatenatedinto new combinations of words. In German, compound nouns are frequently coined from other existing nouns. Some scripts do not clearly separate one word from another, requiring word-splitting algorithms. Each of these presents unique challenges to non-English language spell checkers.

There has been research on developing algorithms that are capable of recognizing a misspelled word, even if the word itself is in the vocabulary, based on thecontextof the surrounding words. Not only does this allow words such as those in the poem above to be caught, but it mitigates the detrimental effect of enlarging dictionaries, allowing more words to be recognized. For example,bahtin the same paragraph asThaiorThailandwould not be recognized as a misspelling ofbath. The most common examples of errors caught by such a system arehomophoneerrors. The most successful algorithm to date is Andrew Golding and Dan Roth's "Winnow-based spelling correction algorithm",[26]published in 1999, which is able to recognize about 96% of context-sensitive spelling errors, in addition to ordinary non-word spelling errors.
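A minimal sketch of context-sensitive checking in the spirit described above: for a word with a known confusion set, prefer the variant whose typical context words appear nearby. The confusion set and the context clues below are invented for this example; real systems, such as the Winnow-based approach, learn such evidence from large corpora rather than from hand-written lists.

# A sketch of context-sensitive correction: score each member of a confusion
# set by how many of its typical context words occur in the sentence, and
# prefer the highest-scoring variant.

CONFUSION = {"baht": {"baht", "bath"}}
CONTEXT_CLUES = {
    "baht": {"thai", "thailand", "currency", "exchange"},
    "bath": {"water", "soap", "towel", "hot"},
}

def best_variant(word, sentence_words):
    nearby = {w.lower() for w in sentence_words}
    scored = []
    for variant in CONFUSION.get(word, {word}):
        score = len(CONTEXT_CLUES.get(variant, set()) & nearby)
        scored.append((score, variant))
    return max(scored)[1]

print(best_variant("baht", "I paid fifty baht at the market in Thailand".split()))  # baht
print(best_variant("baht", "I ran a hot baht with soap and water".split()))         # bath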
Context-sensitive spell checkers appeared in the now-defunct applicationsMicrosoft Office 2007[27]andGoogle Wave.[28] Grammar checkersattempt to fix problems with grammar beyond spelling errors, including incorrect choice of words.
https://en.wikipedia.org/wiki/Spell_checker
Link grammar(LG) is a theory ofsyntaxby Davy Temperley andDaniel Sleatorwhich builds relations between pairs of words, rather than constructing constituents in aphrase structurehierarchy. Link grammar is similar todependency grammar, but dependency grammar includes a head-dependent relationship, whereas link grammar makes the head-dependent relationship optional (links need not indicate direction).[1]Colored Multiplanar Link Grammar (CMLG) is an extension of LG allowing crossing relations between pairs of words.[2]The relationship between words is indicated withlink types, thus making the Link grammar closely related to certaincategorial grammars.

For example, in asubject–verb–objectlanguage like English, the verb would look left to form a subject link, and right to form an object link. Nouns would look right to complete the subject link, or left to complete the object link. In asubject–object–verblanguage likePersian, the verb would look left to form an object link, and a more distant left to form a subject link. Nouns would look to the right for both subject and object links.

Link grammar connects the words in a sentence with links, similar in form to acatena. Unlike the catena or a traditionaldependency grammar, the marking of the head-dependent relationship is optional for most languages, becoming mandatory only infree-word-order languages(such asTurkish,[3][better source needed]Finnish,Hungarian). That is, in English, the subject-verb relationship is "obvious", in that the subject is almost always to the left of the verb, and thus no specific indication of dependency needs to be made. In the case ofsubject-verb inversion, a distinct link type is employed. For free word-order languages, this can no longer hold, and a link between the subject and verb must contain an explicit directional arrow to indicate which of the two words is which.

Link grammar also differs from traditional dependency grammars by allowingcyclic relationsbetween words. Thus, for example, there can be links indicating both the head verb and the head subject of a sentence, as well as a link between the subject and the verb. These three links thus form a cycle (a triangle, in this case). Cycles are useful in constraining what might otherwise be ambiguous parses; cycles help "tighten up" the set of allowable parses of a sentence.

In a parse, the LEFT-WALL indicates the start of the sentence, or the root node. The directionalWVlink (with arrows) points at the head verb of the sentence; it is the Wall-Verb link.[4]The Wd link (drawn without arrows) indicates the head noun (the subject) of the sentence. The link typeWdindicates both that it connects to the wall (W) and that the sentence is a declarative sentence (the lower-case "d" subtype).[5]TheSslink indicates the subject-verb relationship; the lower-case "s" indicates that the subject is singular.[6]Note that the WV, Wd and Ss links form a cycle. The Pa link connects the verb to a complement; the lower-case "a" indicates that it is apredicative adjectivein this case.[7]

Parsing is performed in analogy to assembling ajigsaw puzzle(representing the parsed sentence) from puzzle pieces (representing individual words).[8][9]A language is represented by means of a dictionary orlexis, which consists of words and the set of allowed "jigsaw puzzle shapes" that each word can have. The shape is indicated by a "connector", which is a link-type, and a direction indicator+or-indicating right or left.
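A much-simplified sketch of this jigsaw-puzzle idea follows; the connector labels it uses (D, S, O) are explained in more detail just after it. Each word below carries a single disjunct, a list of connectors, and linking succeeds when every right-pointing connector is matched by a left-pointing connector of the same type on a later word. The five-word lexicon, the single disjunct per word, and the flat matching strategy are all drastic simplifications of the real link-grammar parser.

# A naive sketch of the "jigsaw" connector idea: each word carries one disjunct,
# a list of connectors such as "S-" or "O+", and parsing succeeds when every "+"
# connector is matched by a "-" connector of the same type on a later word.

LEXICON = {
    "the":     ["D+"],
    "boy":     ["D-", "S+"],
    "picture": ["D-", "O-"],
    "painted": ["S-", "O+"],
    "a":       ["D+"],
}

def link(words):
    links = []
    pending = []                      # unmatched (position, type) right-pointing connectors
    for j, word in enumerate(words):
        for conn in LEXICON[word]:
            kind, direction = conn[:-1], conn[-1]
            if direction == "-":
                # match against the nearest earlier unmatched "+" of the same type
                for k in range(len(pending) - 1, -1, -1):
                    if pending[k][1] == kind:
                        links.append((pending.pop(k)[0], j, kind))
                        break
                else:
                    return None
            else:
                pending.append((j, kind))
    return links if not pending else None

print(link("the boy painted a picture".split()))
# [(0, 1, 'D'), (1, 2, 'S'), (3, 4, 'D'), (2, 4, 'O')]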
Thus for example, atransitive verbmay have the connectorsS- & O+indicating that the verb may form a Subject ("S") connection to its left ("-") and an object connection ("O") to its right ("+"). Similarly, acommon nounmay have the connectorsD- & S+indicating that it may connect to adetermineron the left ("D-") and act as a subject, when connecting to a verb on the right ("S+"). The act of parsing is then to identify that theS+connector can attach to theS-connector, forming an "S" link between the two words. Parsing completes when all connectors have been connected. A given word may have dozens or even hundreds of allowed puzzle-shapes (termed "disjuncts"): for example, many verbs may be optionally transitive, thus making theO+connector optional; such verbs might also take adverbial modifiers (Econnectors) which are inherently optional. More complex verbs may have additional connectors for indirect objects, or forparticlesorprepositions. Thus, a part of parsing also involves picking one single unique disjunct for a word; the final parse mustsatisfy(connect)allconnectors for that disjunct.[10] Connectors may also include head-dependent indicatorshandd. In this case, a connector containing a head indicator is only allowed to connect to a connector containing the dependent indicator (or to a connector without any h-d indicators on it). When these indicators are used, the link is decorated with arrows to indicate the link direction.[9] A recent extension simplifies the specification of connectors for languages that have little or no restrictions on word-order, such asLithuanian. There are also extensions to make it easier to support languages with concatenativemorphologies. The parsing algorithm also requires that the final graph is aplanar graph, i.e. that no links cross.[9]This constraint is based on empirical psycho-linguistic evidence that, indeed, for most languages, in nearly all situations, dependency links really do not cross.[11][12]There are rare exceptions, e.g. in Finnish, and even in English; they can be parsed by link-grammar only by introducing more complex and selective connector types to capture these situations. Connectors can have an optionalfloating-pointcost markup, so that some are "cheaper" to use than others, thus giving preference to certain parses over others.[9]That is, the total cost of parse is the sum of the individual costs of the connectors that were used; the cheapest parse indicates the most likely parse. This is used for parse-ranking multiple ambiguous parses. The fact that the costs are local to the connectors, and are not a global property of the algorithm makes them essentiallyMarkovianin nature.[13][14][15][16][17][18] The assignment of a log-likelihood to linkages allows link grammar to implement thesemantic selectionof predicate-argument relationships. That is, certain constructions, although syntactically valid, are extremely unlikely. In this way, link grammar embodies some of the ideas present inoperator grammar. Because the costs are additive, they behave like the logarithm of the probability (since log-likelihoods are additive), or equivalently, somewhat like theentropy(since entropies are additive). This makes link grammar compatible with machine learning techniques such ashidden Markov modelsand theViterbi algorithm, because the link costs correspond to the link weights inMarkov networksorBayesian networks. 
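The additive-cost ranking described above can be sketched as follows: each candidate linkage is scored by summing the costs of the connectors it uses, and the lowest-cost linkage is preferred. The connector names and the cost values below are invented for the illustration; in the actual parser they are specified in the dictionary.

# A sketch of parse ranking by additive connector costs: each candidate linkage
# is scored by summing the costs of the connectors it uses, and the cheapest
# linkage is preferred.

COST = {"S": 0.0, "O": 0.0, "MV": 0.0, "A_rare": 2.0}

def linkage_cost(linkage):
    # a linkage is a list of (left_word, right_word, connector_type) triples
    return sum(COST[t] for _, _, t in linkage)

candidate_a = [("saw", "man", "O"), ("saw", "with", "MV")]        # verb attachment
candidate_b = [("saw", "man", "O"), ("man", "with", "A_rare")]    # noun attachment, dispreferred

for name, linkage in [("verb attachment", candidate_a), ("noun attachment", candidate_b)]:
    print(name, linkage_cost(linkage))
print("preferred:", min([candidate_a, candidate_b], key=linkage_cost))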
The link grammar link types can be understood to be the types in the sense oftype theory.[9][19]In effect, the link grammar can be used to model theinternal languageof certain (non-symmetric)compact closed categories, such aspregroup grammars. In this sense, link grammar appears to be isomorphic or homomorphic to somecategorial grammars. Thus, for example, in a categorial grammar the noun phrase "the bad boy" may be written as whereas the corresponding disjuncts in link grammar would be The contraction rules (inference rules) of theLambek calculuscan be mapped to the connecting of connectors in link grammar. The+and-directional indicators correspond the forward and backward-slashes of the categorical grammar. Finally, the single-letter namesAandDcan be understood as labels or "easy-to-read" mnemonic names for the rather more verbose typesNP/N, etc. The primary distinction here is then that the categorical grammars have twotype constructors, the forward and backward slashes, that can be used to create new types (such asNP/N) from base types (such asNPandN). Link-grammar omits the use of type constructors, opting instead to define a much larger set of base types having compact, easy-to-remember mnemonics. A basic rule file for an SVO language might look like: Thus the English sentence, "The boy painted a picture" would appear as: Similar parses apply for Chinese.[20] Conversely, a rule file for anull subjectSOV language might consist of the following links: And a simplePersiansentence,man nAn xordam(من نان خوردم) 'I ate bread' would look like:[21][22][23] VSO order can be likewise accommodated, such as for Arabic.[24] In many languages with a concatenative morphology, the stem plays no grammatical role; the grammar is determined by the suffixes. Thus, inRussian, the sentence 'вверху плыли редкие облачка' might have the parse:[25][26] The subscripts, such as '.vnndpp', are used to indicate the grammatical category. The primary links: Wd, EI, SIp and Api connect together the suffixes, as, in principle, other stems could appear here, without altering the structure of the sentence. The Api link indicates the adjective; SIp denotes subject-verb inversion; EI is a modifier. The Wd link is used to indicate the head noun; the head verb is not indicated in this sentence. The LLXXX links serve only to attach stems to suffixes. The link-grammar can also indicatephonological agreementbetween neighboring words. For example: Here, the connector 'PH' is used to constrain the determiners that can appear before the word 'abstract'. It effectively blocks (makes it costly) to use the determiner 'a' in this sentence, while the link to 'an' becomes cheap. The other links are roughly as in previous examples: S denoting subject, O denoting object, D denoting determiner. The 'WV' link indicates the head verb, and the 'W' link indicates the head noun. The lower-case letters following the upper-case link types serve to refine the type; so for example, Ds can only connect to a singular noun; Ss only to a singular subject, Os to a singular object. The lower-case v in PHv denotes 'vowel'; the lower-case d in Wd denotes a declarative sentence. TheVietnamese languagesentence "Bữa tiệc hôm qua là một thành công lớn" - "The party yesterday was a great success" may be parsed as follows:[27] The link grammar syntaxparseris alibraryfornatural language processingwritten inC. It is available under theLGPL license. The parser[30]is an ongoing project. 
Recent versions include improved sentence coverage, Russian, Persian and Arabic language support, prototypes for German, Hebrew, Lithuanian, Vietnamese and Turkish, and programming API's forPython,Java,Common LISP,AutoItandOCaml, with 3rd-party bindings forPerl,[31]Ruby[32]andJavaScriptnode.js.[33] A current major undertaking is a project to learn the grammar and morphology of new languages, using unsupervised learning algorithms.[34][35] Thelink-parserprogram along with rules and word lists for English may be found in standardLinux distributions, e.g., as aDebianpackage, although many of these are years out of date.[36] AbiWord,[30]afreeword processor, uses link grammar for on-the-fly grammar checking. Words that cannot be linked anywhere are underlined in green. The semantic relationship extractor RelEx,[37]layered on top of the link grammar library, generates adependency grammaroutput by making explicit the semantic relationships between words in a sentence. Its output can be classified as being at a level between that of SSyntR and DSyntR ofMeaning-Text Theory. It also provides framing/grounding,anaphora resolution, head-word identification,lexical chunking, part-of-speech identification, and tagging, including entity, date, money, gender, etc. tagging. It includes a compatibility mode to generate dependency output compatible with theStanford parser,[38]and PennTreebank[39]-compatiblePOS tagging. Link grammar has also been employed forinformation extractionof biomedical texts[40][41]and events described in news articles,[42]as well as experimentalmachine translationsystems from English to German, Turkish, Indonesian.[43]andPersian.[44][45] The link grammar link dictionary is used to generate and verify the syntactic correctness of three differentnatural language generationsystems: NLGen,[46]NLGen2[47]and microplanner/surreal.[48]It is also used as a part of the NLP pipeline in theOpenCogAI project.
https://en.wikipedia.org/wiki/Link_grammar