Speech synthesis is the artificial production of human speech. A computer system used for this purpose is called a speech synthesizer, and can be implemented in software or hardware products. A text-to-speech (TTS) system converts normal language text into speech; other systems render symbolic linguistic representations like phonetic transcriptions into speech.[1] The reverse process is speech recognition.
Synthesized speech can be created by concatenating pieces of recorded speech that are stored in a database. Systems differ in the size of the stored speech units; a system that stores phones or diphones provides the largest output range, but may lack clarity. For specific usage domains, the storage of entire words or sentences allows for high-quality output. Alternatively, a synthesizer can incorporate a model of the vocal tract and other human voice characteristics to create a completely "synthetic" voice output.[2]
The quality of a speech synthesizer is judged by its similarity to the human voice and by its ability to be understood clearly. An intelligible text-to-speech program allows people with visual impairments or reading disabilities to listen to written words on a home computer. Many computer operating systems have included speech synthesizers since the early 1990s.
A text-to-speech system (or "engine") is composed of two parts:[3] a front-end and a back-end. The front-end has two major tasks. First, it converts raw text containing symbols like numbers and abbreviations into the equivalent of written-out words. This process is often called text normalization, pre-processing, or tokenization. The front-end then assigns phonetic transcriptions to each word, and divides and marks the text into prosodic units, like phrases, clauses, and sentences. The process of assigning phonetic transcriptions to words is called text-to-phoneme or grapheme-to-phoneme conversion. Phonetic transcriptions and prosody information together make up the symbolic linguistic representation that is output by the front-end. The back-end, often referred to as the synthesizer, then converts the symbolic linguistic representation into sound. In certain systems, this part includes the computation of the target prosody (pitch contour, phoneme durations),[4] which is then imposed on the output speech.
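The division of labor described above can be summarized in a short sketch. The following Python skeleton is illustrative only: every helper (normalize, grapheme_to_phoneme, predict_prosody) is a toy stand-in rather than the interface of any real engine, and a real front-end and back-end are far more elaborate.

```python
# Skeleton of the front-end / back-end split described above. Every function
# body here is a toy stand-in (the helper names are not from any real engine);
# a real system replaces each stage with far more elaborate processing.
def normalize(text):
    # Toy text normalization: lowercase and expand one abbreviation.
    return text.lower().replace("dr.", "doctor").split()

def grapheme_to_phoneme(word):
    # Toy grapheme-to-phoneme conversion: treat each letter as a "phoneme".
    return list(word)

def predict_prosody(tokens):
    # Toy prosody: mark only whether a token ends the phrase.
    return [{"phrase_final": i == len(tokens) - 1} for i in range(len(tokens))]

def front_end(raw_text):
    tokens = normalize(raw_text)
    phonemes = [grapheme_to_phoneme(t) for t in tokens]
    prosody = predict_prosody(tokens)
    return phonemes, prosody                  # symbolic linguistic representation

def back_end(phonemes, prosody):
    # A real back-end renders a waveform; here we just return what it receives.
    return list(zip(phonemes, prosody))

print(back_end(*front_end("Dr. Smith reads the news at nine.")))
```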
Long before the invention of electronic signal processing, some people tried to build machines to emulate human speech. There were also legends of the existence of "Brazen Heads", such as those involving Pope Silvester II (d. 1003 AD), Albertus Magnus (1198–1280), and Roger Bacon (1214–1294).
In 1779, the German-Danish scientist Christian Gottlieb Kratzenstein won the first prize in a competition announced by the Russian Imperial Academy of Sciences and Arts for models he built of the human vocal tract that could produce the five long vowel sounds (in International Phonetic Alphabet notation: [aː], [eː], [iː], [oː] and [uː]).[5] There followed the bellows-operated "acoustic-mechanical speech machine" of Wolfgang von Kempelen of Pressburg, Hungary, described in a 1791 paper.[6] This machine added models of the tongue and lips, enabling it to produce consonants as well as vowels. In 1837, Charles Wheatstone produced a "speaking machine" based on von Kempelen's design, and in 1846, Joseph Faber exhibited the "Euphonia". In 1923, Paget resurrected Wheatstone's design.[7]
In the 1930s, Bell Labs developed the vocoder, which automatically analyzed speech into its fundamental tones and resonances. From his work on the vocoder, Homer Dudley developed a keyboard-operated voice synthesizer called The Voder (Voice Demonstrator), which he exhibited at the 1939 New York World's Fair.
Dr. Franklin S. Cooper and his colleagues at Haskins Laboratories built the Pattern playback in the late 1940s and completed it in 1950. There were several different versions of this hardware device; only one currently survives. The machine converts pictures of the acoustic patterns of speech, in the form of a spectrogram, back into sound. Using this device, Alvin Liberman and colleagues discovered acoustic cues for the perception of phonetic segments (consonants and vowels).
The first computer-based speech-synthesis systems originated in the late 1950s. Noriko Umeda et al. developed the first general English text-to-speech system in 1968, at the Electrotechnical Laboratory in Japan.[8] In 1961, physicist John Larry Kelly, Jr and his colleague Louis Gerstman[9] used an IBM 704 computer to synthesize speech, an event among the most prominent in the history of Bell Labs. Kelly's voice recorder synthesizer (vocoder) recreated the song "Daisy Bell", with musical accompaniment from Max Mathews. Coincidentally, Arthur C. Clarke was visiting his friend and colleague John Pierce at the Bell Labs Murray Hill facility. Clarke was so impressed by the demonstration that he used it in the climactic scene of his screenplay for his novel 2001: A Space Odyssey,[10] where the HAL 9000 computer sings the same song as astronaut Dave Bowman puts it to sleep.[11] Despite the success of purely electronic speech synthesis, research into mechanical speech synthesizers continues.[12]
Linear predictive coding (LPC), a form of speech coding, began development with the work of Fumitada Itakura of Nagoya University and Shuzo Saito of Nippon Telegraph and Telephone (NTT) in 1966. Further developments in LPC technology were made by Bishnu S. Atal and Manfred R. Schroeder at Bell Labs during the 1970s.[13] LPC was later the basis for early speech synthesizer chips, such as the Texas Instruments LPC Speech Chips used in the Speak & Spell toys from 1978.
In 1975, Fumitada Itakura developed the line spectral pairs (LSP) method for high-compression speech coding, while at NTT.[14][15][16] From 1975 to 1981, Itakura studied problems in speech analysis and synthesis based on the LSP method.[16] In 1980, his team developed an LSP-based speech synthesizer chip. LSP is an important technology for speech synthesis and coding, and in the 1990s was adopted by almost all international speech coding standards as an essential component, contributing to the enhancement of digital speech communication over mobile channels and the internet.[15]
In 1975, MUSA was released as one of the first speech synthesis systems. It consisted of stand-alone computer hardware and specialized software that enabled it to read Italian. A second version, released in 1978, was also able to sing Italian in an "a cappella" style.[17]
Dominant systems in the 1980s and 1990s were the DECtalk system, based largely on the work of Dennis Klatt at MIT, and the Bell Labs system;[18] the latter was one of the first multilingual language-independent systems, making extensive use of natural language processing methods.
Handheld electronics featuring speech synthesis began emerging in the 1970s. One of the first was the Telesensory Systems Inc. (TSI) Speech+ portable calculator for the blind in 1976.[19][20] Other devices had primarily educational purposes, such as the Speak & Spell toy produced by Texas Instruments in 1978.[21] Fidelity released a speaking version of its electronic chess computer in 1979.[22] The first video game to feature speech synthesis was the 1980 shoot 'em up arcade game Stratovox (known in Japan as Speak & Rescue), from Sun Electronics.[23][24] The first personal computer game with speech synthesis was Manbiki Shoujo (Shoplifting Girl), released in 1980 for the PET 2001, for which the game's developer, Hiroshi Suzuki, developed a "zero cross" programming technique to produce a synthesized speech waveform.[25] Another early example, the arcade version of Berzerk, also dates from 1980. The Milton Bradley Company produced the first multi-player electronic game using voice synthesis, Milton, in the same year.
In 1976, Computalker Consultants released their CT-1 Speech Synthesizer. Designed by D. Lloyd Rice and Jim Cooper, it was an analog synthesizer built to work with microcomputers using the S-100 bus standard.[26]
Early electronic speech synthesizers sounded robotic and were often barely intelligible. The quality of synthesized speech has steadily improved, but as of 2016 output from contemporary speech synthesis systems remained clearly distinguishable from actual human speech.
Synthesized voices typically sounded male until 1990, when Ann Syrdal, at AT&T Bell Laboratories, created a female voice.[27]
Kurzweil predicted in 2005 that as improving cost-performance ratios made speech synthesizers cheaper and more accessible, more people would benefit from the use of text-to-speech programs.[28]
The most important qualities of a speech synthesis system are naturalness and intelligibility.[29] Naturalness describes how closely the output sounds like human speech, while intelligibility is the ease with which the output is understood. The ideal speech synthesizer is both natural and intelligible. Speech synthesis systems usually try to maximize both characteristics.
The two primary technologies for generating synthetic speech waveforms are concatenative synthesis and formant synthesis. Each technology has strengths and weaknesses, and the intended uses of a synthesis system will typically determine which approach is used.
Concatenative synthesis is based on the concatenation (stringing together) of segments of recorded speech. Generally, concatenative synthesis produces the most natural-sounding synthesized speech. However, differences between natural variations in speech and the nature of the automated techniques for segmenting the waveforms sometimes result in audible glitches in the output. There are three main sub-types of concatenative synthesis.
Unit selection synthesis uses large databases of recorded speech. During database creation, each recorded utterance is segmented into some or all of the following: individual phones, diphones, half-phones, syllables, morphemes, words, phrases, and sentences. Typically, the division into segments is done using a specially modified speech recognizer set to a "forced alignment" mode with some manual correction afterward, using visual representations such as the waveform and spectrogram.[30] An index of the units in the speech database is then created based on the segmentation and acoustic parameters like the fundamental frequency (pitch), duration, position in the syllable, and neighboring phones. At run time, the desired target utterance is created by determining the best chain of candidate units from the database (unit selection). This process is typically achieved using a specially weighted decision tree.
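The run-time search described above can be viewed as a dynamic-programming problem: choose one candidate unit per target position so that the summed target cost (mismatch with the desired specification) plus join cost (discontinuity between neighbouring units) is minimal. The following Python sketch is a generic Viterbi-style search under that assumption; the cost functions and unit representation are left abstract and are not taken from any specific system.

```python
# Illustrative Viterbi-style unit selection under the assumptions stated above.
# "targets" are abstract target specifications, "candidates[i]" lists the
# database units that could realize target i, and the two cost functions are
# supplied by the caller; none of this mirrors a particular product.
def select_units(targets, candidates, target_cost, join_cost):
    """Return, for each target position, the index of the chosen candidate."""
    # best[i][j] = (cumulative cost, backpointer) for candidate j at position i
    best = [[(target_cost(targets[0], c), None) for c in candidates[0]]]
    for i in range(1, len(targets)):
        row = []
        for cand in candidates[i]:
            tc = target_cost(targets[i], cand)
            prev = [best[i - 1][k][0] + join_cost(candidates[i - 1][k], cand)
                    for k in range(len(candidates[i - 1]))]
            k_best = min(range(len(prev)), key=prev.__getitem__)
            row.append((tc + prev[k_best], k_best))
        best.append(row)
    # Trace back the cheapest chain of units.
    j = min(range(len(best[-1])), key=lambda k: best[-1][k][0])
    path = [j]
    for i in range(len(targets) - 1, 0, -1):
        j = best[i][j][1]
        path.append(j)
    return list(reversed(path))

# Toy usage: units are (pitch, duration) pairs; costs are simple differences.
targets = [(120, 80), (180, 60), (140, 100)]
candidates = [[(118, 82), (150, 70)], [(175, 65), (130, 90)], [(139, 98)]]
tc = lambda t, u: abs(t[0] - u[0]) + abs(t[1] - u[1])
jc = lambda a, b: abs(a[0] - b[0]) * 0.1
print(select_units(targets, candidates, tc, jc))  # [0, 0, 0]
```

A production system would weight dozens of acoustic and prosodic features inside both cost terms rather than the two toy features used here.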
Unit selection provides the greatest naturalness, because it applies only a small amount of digital signal processing (DSP) to the recorded speech. DSP often makes recorded speech sound less natural, although some systems use a small amount of signal processing at the point of concatenation to smooth the waveform. The output from the best unit-selection systems is often indistinguishable from real human voices, especially in contexts for which the TTS system has been tuned. However, maximum naturalness typically requires unit-selection speech databases to be very large, in some systems ranging into the gigabytes of recorded data, representing dozens of hours of speech.[31] Also, unit selection algorithms have been known to select segments from a place that results in less than ideal synthesis (e.g. minor words become unclear) even when a better choice exists in the database.[32] Recently, researchers have proposed various automated methods to detect unnatural segments in unit-selection speech synthesis systems.[33]
Diphone synthesis uses a minimal speech database containing all the diphones (sound-to-sound transitions) occurring in a language. The number of diphones depends on the phonotactics of the language: for example, Spanish has about 800 diphones, and German about 2500. In diphone synthesis, only one example of each diphone is contained in the speech database. At runtime, the target prosody of a sentence is superimposed on these minimal units by means of digital signal processing techniques such as linear predictive coding, PSOLA[34] or MBROLA,[35] or more recent techniques such as pitch modification in the source domain using the discrete cosine transform.[36] Diphone synthesis suffers from the sonic glitches of concatenative synthesis and the robotic-sounding nature of formant synthesis, and has few of the advantages of either approach other than small size. As such, its use in commercial applications is declining, although it continues to be used in research because there are a number of freely available software implementations. An early example of diphone synthesis is a teaching robot, Leachim, that was invented by Michael J. Freeman.[37] Leachim contained information regarding class curricula and certain biographical information about the students whom it was programmed to teach.[38] It was tested in a fourth grade classroom in the Bronx, New York.[39][40]
Domain-specific synthesis concatenates prerecorded words and phrases to create complete utterances. It is used in applications where the variety of texts the system will output is limited to a particular domain, like transit schedule announcements or weather reports.[41] The technology is very simple to implement, and has been in commercial use for a long time, in devices like talking clocks and calculators. The level of naturalness of these systems can be very high because the variety of sentence types is limited, and they closely match the prosody and intonation of the original recordings.
Because these systems are limited by the words and phrases in their databases, they are not general-purpose and can only synthesize the combinations of words and phrases with which they have been preprogrammed. The blending of words within naturally spoken language, however, can still cause problems unless the many variations are taken into account. For example, in non-rhotic dialects of English the "r" in words like "clear" /ˈklɪə/ is usually only pronounced when the following word begins with a vowel sound (e.g. "clear out" is realized as /ˌklɪəɹˈʌʊt/). Likewise in French, many final consonants that are otherwise silent are pronounced when followed by a word that begins with a vowel, an effect called liaison. This alternation cannot be reproduced by a simple word-concatenation system, which would require additional complexity to be context-sensitive.
Formant synthesis does not use human speech samples at runtime. Instead, the synthesized speech output is created using additive synthesis and an acoustic model (physical modelling synthesis).[42] Parameters such as fundamental frequency, voicing, and noise levels are varied over time to create a waveform of artificial speech. This method is sometimes called rules-based synthesis; however, many concatenative systems also have rules-based components.
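As a rough illustration of the source-filter idea behind formant synthesis, the sketch below passes an impulse-train voice source through a cascade of two-pole resonators. The formant frequencies and bandwidths are textbook approximations for an /a/-like vowel, not values from any synthesizer discussed in this article.

```python
# Rough source-filter sketch of formant synthesis (assumed /a/-like formant
# values, simple impulse-train source); not the algorithm of any particular
# synthesizer named in this article.
import numpy as np
from scipy.signal import lfilter
from scipy.io import wavfile

fs = 16000                         # sample rate in Hz
f0 = 110                           # fundamental frequency of the voice source
n = int(fs * 0.5)                  # half a second of audio

source = np.zeros(n)               # impulse train standing in for glottal pulses
source[::fs // f0] = 1.0

def resonator(freq, bw):
    """Coefficients of a two-pole digital resonator for one formant."""
    r = np.exp(-np.pi * bw / fs)
    theta = 2 * np.pi * freq / fs
    return [1.0 - r], [1.0, -2.0 * r * np.cos(theta), r * r]

signal = source
for freq, bw in [(730, 90), (1090, 110), (2440, 170)]:   # rough /a/ formants
    b, a = resonator(freq, bw)
    signal = lfilter(b, a, signal)

signal /= np.abs(signal).max()                            # normalize
wavfile.write("vowel_a.wav", fs, (signal * 32767).astype(np.int16))
```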
Many systems based on formant synthesis technology generate artificial, robotic-sounding speech that would never be mistaken for human speech. However, maximum naturalness is not always the goal of a speech synthesis system, and formant synthesis systems have advantages over concatenative systems. Formant-synthesized speech can be reliably intelligible, even at very high speeds, avoiding the acoustic glitches that commonly plague concatenative systems. High-speed synthesized speech is used by the visually impaired to quickly navigate computers using a screen reader. Formant synthesizers are usually smaller programs than concatenative systems because they do not have a database of speech samples. They can therefore be used in embedded systems, where memory and microprocessor power are especially limited. Because formant-based systems have complete control of all aspects of the output speech, a wide variety of prosodies and intonations can be output, conveying not just questions and statements, but a variety of emotions and tones of voice.
Examples of non-real-time but highly accurate intonation control in formant synthesis include the work done in the late 1970s for the Texas Instruments toy Speak & Spell, and in the early 1980s Sega arcade machines[43] and in many Atari, Inc. arcade games[44] using the TMS5220 LPC chips. Creating proper intonation for these projects was painstaking, and the results have yet to be matched by real-time text-to-speech interfaces.[45]
Articulatory synthesis consists of computational techniques for synthesizing speech based on models of the human vocal tract and the articulation processes occurring there. The first articulatory synthesizer regularly used for laboratory experiments was developed at Haskins Laboratories in the mid-1970s by Philip Rubin, Tom Baer, and Paul Mermelstein. This synthesizer, known as ASY, was based on vocal tract models developed at Bell Laboratories in the 1960s and 1970s by Paul Mermelstein, Cecil Coker, and colleagues.
Until recently, articulatory synthesis models have not been incorporated into commercial speech synthesis systems. A notable exception is the NeXT-based system originally developed and marketed by Trillium Sound Research, a spin-off company of the University of Calgary, where much of the original research was conducted. Following the demise of the various incarnations of NeXT (started by Steve Jobs in the late 1980s and merged with Apple Computer in 1997), the Trillium software was published under the GNU General Public License, with work continuing as gnuspeech. The system, first marketed in 1994, provides full articulatory-based text-to-speech conversion using a waveguide or transmission-line analog of the human oral and nasal tracts controlled by Carré's "distinctive region model".
More recent synthesizers, developed by Jorge C. Lucero and colleagues, incorporate models of vocal fold biomechanics, glottal aerodynamics and acoustic wave propagation in the bronchi, trachea, nasal and oral cavities, and thus constitute full systems of physics-based speech simulation.[46][47]
HMM-based synthesis is a synthesis method based on hidden Markov models, also called statistical parametric synthesis. In this system, the frequency spectrum (vocal tract), fundamental frequency (voice source), and duration (prosody) of speech are modeled simultaneously by HMMs. Speech waveforms are generated from the HMMs themselves based on the maximum likelihood criterion.[48]
Sinewave synthesis is a technique for synthesizing speech by replacing the formants (main bands of energy) with pure tone whistles.[49]
Deep learning speech synthesis uses deep neural networks (DNN) to produce artificial speech from text (text-to-speech) or spectrum (vocoder).
The deep neural networks are trained using a large amount of recorded speech and, in the case of a text-to-speech system, the associated labels and/or input text.
15.ai uses a multi-speaker model: hundreds of voices are trained concurrently rather than sequentially, decreasing the required training time and enabling the model to learn and generalize shared emotional context, even for voices with no exposure to such emotional context.[50] The deep learning model used by the application is nondeterministic: each time speech is generated from the same string of text, the intonation of the speech will be slightly different. The application also supports manually altering the emotion of a generated line using "emotional contextualizers" (a term coined by this project), a sentence or phrase that conveys the emotion of the take and serves as a guide for the model during inference.[51][52]
ElevenLabs is primarily known for its browser-based, AI-assisted text-to-speech software, Speech Synthesis, which can produce lifelike speech by synthesizing vocal emotion and intonation.[53] The company states its software is built to adjust the intonation and pacing of delivery based on the context of the language input used.[54] It uses advanced algorithms to analyze the contextual aspects of text, aiming to detect emotions like anger, sadness, happiness, or alarm, which enables the system to understand the user's sentiment,[55] resulting in a more realistic and human-like inflection. Other features include multilingual speech generation and long-form content creation with contextually-aware voices.[56][57]
DNN-based speech synthesizers are approaching the naturalness of the human voice. Disadvantages of the method include low robustness when training data are insufficient, a lack of controllability, and low performance in auto-regressive models.
For tonal languages, such as Chinese or Taiwanese, different levels of tone sandhi are required, and the output of a speech synthesizer may sometimes contain tone sandhi errors.[58]
In 2023, VICE reporter Joseph Cox published findings that he had recorded five minutes of himself talking and then used a tool developed by ElevenLabs to create voice deepfakes that defeated a bank's voice-authentication system.[66]
The process of normalizing text is rarely straightforward. Texts are full of heteronyms, numbers, and abbreviations that all require expansion into a phonetic representation. There are many spellings in English which are pronounced differently based on context. For example, "My latest project is to learn how to better project my voice" contains two pronunciations of "project".
Most text-to-speech (TTS) systems do not generate semantic representations of their input texts, as processes for doing so are unreliable, poorly understood, and computationally ineffective. As a result, various heuristic techniques are used to guess the proper way to disambiguate homographs, like examining neighboring words and using statistics about frequency of occurrence.
Recently TTS systems have begun to use HMMs (discussed above) to generate "parts of speech" to aid in disambiguating homographs. This technique is quite successful for many cases, such as whether "read" should be pronounced as "red" (implying past tense) or as "reed" (implying present tense). Typical error rates when using HMMs in this fashion are usually below five percent. These techniques also work well for most European languages, although access to the required training corpora is frequently difficult in these languages.
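A toy version of such neighbour-word heuristics might look like the following Python sketch; the cue-word lists are invented for illustration, and real systems use statistical part-of-speech taggers rather than hand-written lists.

```python
# Toy homograph disambiguation from the preceding word, in the spirit of the
# heuristic approaches described above. Cue lists are illustrative only.
HOMOGRAPHS = {
    "read": {
        "R EH D": {"have", "has", "had", "was", "were"},       # past-tense cues
        "R IY D": {"to", "will", "can", "should", "please"},   # present-tense cues
    },
}

def pronounce(word, prev_word):
    options = HOMOGRAPHS.get(word.lower())
    if not options:
        return None
    for phones, cues in options.items():
        if prev_word.lower() in cues:
            return phones
    return next(iter(options))        # fall back to the first listed variant

print(pronounce("read", "to"))    # R IY D
print(pronounce("read", "had"))   # R EH D
```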
Deciding how to convert numbers is another problem that TTS systems have to address. It is a simple programming challenge to convert a number into words (at least in English), like "1325" becoming "one thousand three hundred twenty-five". However, numbers occur in many different contexts; "1325" may also be read as "one three two five", "thirteen twenty-five" or "thirteen hundred and twenty-five". A TTS system can often infer how to expand a number based on surrounding words, numbers, and punctuation, and sometimes the system provides a way to specify the context if it is ambiguous.[67] Roman numerals can also be read differently depending on context. For example, "Henry VIII" reads as "Henry the Eighth", while "Chapter VIII" reads as "Chapter Eight".
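A minimal sketch of context-dependent number expansion follows. The context labels ("cardinal", "year", "digits") and the selection rules are invented for illustration; a real front-end infers the context from the surrounding tokens, as described above.

```python
# Illustrative context-dependent expansion of "1325"; not a complete
# number-to-words converter (cardinals are handled only up to 9999).
UNITS = ["zero", "one", "two", "three", "four", "five", "six", "seven",
         "eight", "nine"]
TEENS = {10: "ten", 11: "eleven", 12: "twelve", 13: "thirteen", 14: "fourteen",
         15: "fifteen", 16: "sixteen", 17: "seventeen", 18: "eighteen",
         19: "nineteen"}
TENS = {2: "twenty", 3: "thirty", 4: "forty", 5: "fifty", 6: "sixty",
        7: "seventy", 8: "eighty", 9: "ninety"}

def two_digits(n):
    if n < 10:
        return UNITS[n]
    if n < 20:
        return TEENS[n]
    tens, unit = divmod(n, 10)
    return TENS[tens] + ("" if unit == 0 else "-" + UNITS[unit])

def expand_number(token, context):
    n = int(token)
    if context == "year" and 1000 <= n <= 1999:    # "thirteen twenty-five"
        return two_digits(n // 100) + " " + two_digits(n % 100)
    if context == "digits":                        # "one three two five"
        return " ".join(UNITS[int(d)] for d in token)
    thousands, rest = divmod(n, 1000)              # default: cardinal reading
    hundreds, tail = divmod(rest, 100)
    parts = []
    if thousands:
        parts.append(UNITS[thousands] + " thousand")
    if hundreds:
        parts.append(UNITS[hundreds] + " hundred")
    if tail:
        parts.append(two_digits(tail))
    return " ".join(parts) or "zero"

print(expand_number("1325", "cardinal"))  # one thousand three hundred twenty-five
print(expand_number("1325", "year"))      # thirteen twenty-five
print(expand_number("1325", "digits"))    # one three two five
```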
Similarly, abbreviations can be ambiguous. For example, the abbreviation "in" for "inches" must be differentiated from the word "in", and the address "12 St John St." uses the same abbreviation for both "Saint" and "Street". TTS systems with intelligent front ends can make educated guesses about ambiguous abbreviations, while others provide the same result in all cases, resulting in nonsensical (and sometimes comical) outputs, such as "Ulysses S. Grant" being rendered as "Ulysses South Grant".
Speech synthesis systems use two basic approaches to determine the pronunciation of a word based on its spelling, a process which is often called text-to-phoneme or grapheme-to-phoneme conversion (phoneme is the term used by linguists to describe distinctive sounds in a language). The simplest approach to text-to-phoneme conversion is the dictionary-based approach, where a large dictionary containing all the words of a language and their correct pronunciations is stored by the program. Determining the correct pronunciation of each word is a matter of looking up each word in the dictionary and replacing the spelling with the pronunciation specified in the dictionary. The other approach is rule-based, in which pronunciation rules are applied to words to determine their pronunciations based on their spellings. This is similar to the "sounding out", or synthetic phonics, approach to learning reading.
Each approach has advantages and drawbacks. The dictionary-based approach is quick and accurate, but completely fails if it is given a word which is not in its dictionary. As dictionary size grows, so too do the memory space requirements of the synthesis system. On the other hand, the rule-based approach works on any input, but the complexity of the rules grows substantially as the system takes into account irregular spellings or pronunciations. (Consider that the word "of" is very common in English, yet is the only word in which the letter "f" is pronounced [v].) As a result, nearly all speech synthesis systems use a combination of these approaches.
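The hybrid strategy can be sketched as a dictionary lookup with a rule-based fallback. The lexicon and the single-letter rules below are tiny illustrative stand-ins, not a real pronunciation dictionary or rule set.

```python
# Sketch of the combined approach: dictionary first, naive letter-to-sound
# rules as a fallback for out-of-vocabulary words. Both tables are toys.
LEXICON = {
    "of": ["AH", "V"],                 # the irregular case noted above: "f" -> /v/
    "speech": ["S", "P", "IY", "CH"],
}

LETTER_RULES = {                        # crude single-letter fallback rules
    "a": "AE", "b": "B", "c": "K", "d": "D", "e": "EH", "f": "F", "g": "G",
    "h": "HH", "i": "IH", "j": "JH", "k": "K", "l": "L", "m": "M", "n": "N",
    "o": "AA", "p": "P", "q": "K", "r": "R", "s": "S", "t": "T", "u": "AH",
    "v": "V", "w": "W", "x": "K", "y": "Y", "z": "Z",
}

def grapheme_to_phoneme(word):
    word = word.lower()
    if word in LEXICON:                                   # dictionary-based path
        return LEXICON[word]
    return [LETTER_RULES[ch] for ch in word if ch in LETTER_RULES]  # rule fallback

print(grapheme_to_phoneme("of"))      # ['AH', 'V']  (from the dictionary)
print(grapheme_to_phoneme("blorp"))   # rule-based guess for an unknown word
```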
Languages with a phonemic orthography have a very regular writing system, and the prediction of the pronunciation of words based on their spellings is quite successful. Speech synthesis systems for such languages often use the rule-based method extensively, resorting to dictionaries only for those few words, like foreign names and loanwords, whose pronunciations are not obvious from their spellings. On the other hand, speech synthesis systems for languages like English, which have extremely irregular spelling systems, are more likely to rely on dictionaries, and to use rule-based methods only for unusual words, or words that are not in their dictionaries.
The consistent evaluation of speech synthesis systems may be difficult because of a lack of universally agreed objective evaluation criteria. Different organizations often use different speech data. The quality of speech synthesis systems also depends on the quality of the production technique (which may involve analogue or digital recording) and on the facilities used to replay the speech. Evaluating speech synthesis systems has therefore often been compromised by differences between production techniques and replay facilities.
Since 2005, however, some researchers have started to evaluate speech synthesis systems using a common speech dataset.[68]
A study in the journal Speech Communication by Amy Drahota and colleagues at the University of Portsmouth, UK, reported that listeners to voice recordings could determine, at better than chance levels, whether or not the speaker was smiling.[69][70][71] It was suggested that identification of the vocal features that signal emotional content may be used to help make synthesized speech sound more natural. One of the related issues is modification of the pitch contour of the sentence, depending upon whether it is an affirmative, interrogative or exclamatory sentence. One of the techniques for pitch modification[36] uses the discrete cosine transform in the source domain (the linear prediction residual). Such pitch-synchronous pitch modification techniques require a priori pitch marking of the speech database used for synthesis, using techniques such as epoch extraction with a dynamic plosion index applied to the integrated linear prediction residual of the voiced regions of speech.[72] In general, prosody remains a challenge for speech synthesizers and is an active research topic.
Popular systems offering speech synthesis as a built-in capability.
In the early 1980s, TI was known as a pioneer in speech synthesis, and a highly popular plug-in speech synthesizer module was available for the TI-99/4 and 4A. Speech synthesizers were offered free with the purchase of a number of cartridges and were used by many TI-written video games (games offered with speech during this promotion included Alpiner and Parsec). The synthesizer uses a variant of linear predictive coding and has a small built-in vocabulary. The original intent was to release small cartridges that plugged directly into the synthesizer unit, which would increase the device's built-in vocabulary. However, the success of software text-to-speech in the Terminal Emulator II cartridge canceled that plan.
The Mattel Intellivision game console offered the Intellivoice Voice Synthesis module in 1982. It included the SP0256 Narrator speech synthesizer chip on a removable cartridge. The Narrator had 2 kB of read-only memory (ROM), which was used to store a database of generic words that could be combined to make phrases in Intellivision games. Since the Orator chip could also accept speech data from external memory, any additional words or phrases needed could be stored inside the cartridge itself. The data consisted of strings of analog-filter coefficients to modify the behavior of the chip's synthetic vocal-tract model, rather than simple digitized samples.
Also released in 1982, Software Automatic Mouth was the first commercial all-software voice synthesis program. It was later used as the basis for Macintalk. The program was available for non-Macintosh Apple computers (including the Apple II and the Lisa), various Atari models, and the Commodore 64. The Apple version preferred additional hardware that contained DACs, although it could instead use the computer's one-bit audio output (with the addition of much distortion) if the card was not present. The Atari made use of the embedded POKEY audio chip. Speech playback on the Atari normally disabled interrupt requests and shut down the ANTIC chip during vocal output; the audible output is extremely distorted speech when the screen is on. The Commodore 64 made use of the 64's embedded SID audio chip.
Arguably, the first speech system integrated into an operating system was that of the circa-1983 unreleased Atari 1400XL/1450XL computers. These used the Votrax SC01 chip and a finite-state machine to enable World English Spelling text-to-speech synthesis.[74]
The Atari ST computers were sold with "stspeech.tos" on floppy disk.
The first speech system integrated into an operating system that shipped in quantity was Apple Computer's MacInTalk. The software was licensed from third-party developers Joseph Katz and Mark Barton (later, SoftVoice, Inc.) and was featured during the 1984 introduction of the Macintosh computer. This January demo required 512 kilobytes of RAM. As a result, it could not run in the 128 kilobytes of RAM the first Mac actually shipped with,[75] so the demo was accomplished with a prototype 512k Mac, although those in attendance were not told of this, and the synthesis demo created considerable excitement for the Macintosh. In the early 1990s Apple expanded its capabilities, offering system-wide text-to-speech support. With the introduction of faster PowerPC-based computers, they included higher-quality voice sampling. Apple also introduced speech recognition into its systems, which provided a fluid command set. More recently, Apple has added sample-based voices. Starting as a curiosity, the speech system of the Apple Macintosh has evolved into a fully supported program, PlainTalk, for people with vision problems. VoiceOver was featured for the first time in 2005 in Mac OS X Tiger (10.4). During 10.4 (Tiger) and the first releases of 10.5 (Leopard) there was only one standard voice shipping with Mac OS X. Starting with 10.6 (Snow Leopard), the user can choose from a wide range of voices. VoiceOver voices feature the taking of realistic-sounding breaths between sentences, as well as improved clarity at high read rates over PlainTalk. Mac OS X also includes say, a command-line based application that converts text to audible speech. The AppleScript Standard Additions include a say verb that allows a script to use any of the installed voices and to control the pitch, speaking rate and modulation of the spoken text.
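As a small usage example, the macOS say utility mentioned above can be driven from a script. The snippet below assumes a macOS system and uses a voice name ("Samantha") that may not exist on every macOS version.

```python
# Usage sketch for the macOS "say" command described above (macOS only).
# The voice name is an example and may differ between macOS releases.
import subprocess

subprocess.run(
    ["say", "-v", "Samantha", "The quick brown fox jumps over the lazy dog."],
    check=True,
)
```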
Amazon's speech synthesis is used in Alexa and as Software as a Service in AWS (from 2017).[76]
The second operating system to feature advanced speech synthesis capabilities was AmigaOS, introduced in 1985. The voice synthesis was licensed by Commodore International from SoftVoice, Inc., who also developed the original MacinTalk text-to-speech system. It featured a complete system of voice emulation for American English, with both male and female voices and "stress" indicator markers, made possible through the Amiga's audio chipset.[77] The synthesis system was divided into a translator library, which converted unrestricted English text into a standard set of phonetic codes, and a narrator device, which implemented a formant model of speech generation. AmigaOS also featured a high-level "Speak Handler", which allowed command-line users to redirect text output to speech. Speech synthesis was occasionally used in third-party programs, particularly word processors and educational software. The synthesis software remained largely unchanged from the first AmigaOS release, and Commodore eventually removed speech synthesis support from AmigaOS 2.1 onward.
Despite the American English phoneme limitation, an unofficial version with multilingual speech synthesis was developed. This made use of an enhanced version of the translator library which could translate a number of languages, given a set of rules for each language.[78]
Modern Windows desktop systems can use SAPI 4 and SAPI 5 components to support speech synthesis and speech recognition. SAPI 4.0 was available as an optional add-on for Windows 95 and Windows 98. Windows 2000 added Narrator, a text-to-speech utility for people who have visual impairment. Third-party programs such as JAWS for Windows, Window-Eyes, Non-visual Desktop Access, Supernova and System Access can perform various text-to-speech tasks such as reading text aloud from a specified website, email account, text document, the Windows clipboard, the user's keyboard typing, and so on. Not all programs can use speech synthesis directly.[79] Some programs can use plug-ins, extensions or add-ons to read text aloud. Third-party programs are available that can read text from the system clipboard.
Microsoft Speech Server is a server-based package for voice synthesis and recognition. It is designed for network use with web applications and call centers.
From 1971 to 1996, Votrax produced a number of commercial speech synthesizer components. A Votrax synthesizer was included in the first generation Kurzweil Reading Machine for the Blind.
Text-to-speech (TTS) refers to the ability of computers to read text aloud. A TTS engine converts written text to a phonemic representation, then converts the phonemic representation to waveforms that can be output as sound. TTS engines with different languages, dialects and specialized vocabularies are available through third-party publishers.[80]
Version 1.6 of Android added support for speech synthesis (TTS).[81]
Currently, there are a number of applications, plugins and gadgets that can read messages directly from an e-mail client and web pages from a web browser or Google Toolbar. Some specialized software can narrate RSS feeds. On one hand, online RSS narrators simplify information delivery by allowing users to listen to their favourite news sources and to convert them to podcasts. On the other hand, online RSS readers are available on almost any personal computer connected to the Internet. Users can download generated audio files to portable devices, e.g. with the help of a podcast receiver, and listen to them while walking, jogging or commuting to work.
A growing field in Internet-based TTS is web-based assistive technology, e.g. 'Browsealoud' from a UK company and Readspeaker. It can deliver TTS functionality to anyone (for reasons of accessibility, convenience, entertainment or information) with access to a web browser. The non-profit project Pediaphon was created in 2006 to provide a similar web-based TTS interface to Wikipedia.[82]
Other work is being done in the context of the W3C through the W3C Audio Incubator Group with the involvement of the BBC and Google Inc.
Some open-source speech synthesis software systems are also available.
At the 2018 Conference on Neural Information Processing Systems (NeurIPS), researchers from Google presented the work 'Transfer Learning from Speaker Verification to Multispeaker Text-To-Speech Synthesis', which transfers learning from speaker verification to achieve text-to-speech synthesis that can be made to sound almost like anybody from a speech sample of only five seconds.[85]
Researchers from Baidu Research also presented a voice cloning system with similar aims at the 2018 NeurIPS conference,[86] though the result is rather unconvincing.
By 2019, digital sound-alikes had found their way into the hands of criminals, as Symantec researchers knew of three cases where digital sound-alike technology had been used for crime.[87][88]
This adds to existing concerns about disinformation.
In March 2020, a freeware web application called 15.ai that generates high-quality voices from an assortment of fictional characters from a variety of media sources was released.[91] Initial characters included GLaDOS from Portal, Twilight Sparkle and Fluttershy from the show My Little Pony: Friendship Is Magic, and the Tenth Doctor from Doctor Who.
A number of markup languages have been established for the rendition of text as speech in an XML-compliant format. The most recent is Speech Synthesis Markup Language (SSML), which became a W3C recommendation in 2004. Older speech synthesis markup languages include Java Speech Markup Language (JSML) and SABLE. Although each of these was proposed as a standard, none of them have been widely adopted.
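As an illustration, a short SSML document can be assembled programmatically. The sketch below uses only a few core SSML 1.0 elements (speak, prosody, say-as); whether and how a given engine honours them varies by implementation.

```python
# Sketch of building an SSML 1.0 document in Python. Only the speak, prosody,
# and say-as elements are shown; engine support for each attribute varies.
from xml.sax.saxutils import escape

def ssml(text, rate="medium", pitch="medium", date=None):
    body = f'<prosody rate="{rate}" pitch="{pitch}">{escape(text)}</prosody>'
    if date:
        body += f' <say-as interpret-as="date" format="mdy">{escape(date)}</say-as>'
    return ('<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" '
            'xml:lang="en-US">' + body + '</speak>')

print(ssml("The meeting is on", date="10/28/2024"))
```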
Speech synthesis markup languages are distinguished from dialogue markup languages. VoiceXML, for example, includes tags related to speech recognition, dialogue management and touchtone dialing, in addition to text-to-speech markup.
Speech synthesis has long been a vital assistive technology tool, and its application in this area is significant and widespread. It allows environmental barriers to be removed for people with a wide range of disabilities. The longest-standing application has been in the use of screen readers for people with visual impairment, but text-to-speech systems are now commonly used by people with dyslexia and other reading disabilities as well as by pre-literate children.[92] They are also frequently employed to aid those with severe speech impairment, usually through a dedicated voice output communication aid.[93] Work to personalize a synthetic voice to better match a person's personality or historical voice is becoming available.[94] A noted application of speech synthesis was the Kurzweil Reading Machine for the Blind, which incorporated text-to-phonetics software based on work from Haskins Laboratories and a black-box synthesizer built by Votrax.[95]
Speech synthesis techniques are also used in entertainment productions such as games and animations. In 2007, Animo Limited announced the development of a software application package based on its speech synthesis software FineSpeech, explicitly geared towards customers in the entertainment industries, able to generate narration and lines of dialogue according to user specifications.[96] The application reached maturity in 2008, when NEC Biglobe announced a web service that allows users to create phrases from the voices of characters from the Japanese anime series Code Geass: Lelouch of the Rebellion R2.[97] 15.ai has been frequently used for content creation in various fandoms, including the My Little Pony: Friendship Is Magic fandom, the Team Fortress 2 fandom, the Portal fandom, and the SpongeBob SquarePants fandom.
Text-to-speech for disability and impaired communication aids has become widely available. Text-to-speech is also finding new applications; for example, speech synthesis combined with speech recognition allows for interaction with mobile devices via natural language processing interfaces. Some users have also created AI virtual assistants using 15.ai and external voice control software.[51][52]
Text-to-speech is also used in second language acquisition. Voki, for instance, is an educational tool created by Oddcast that allows users to create their own talking avatar, using different accents. They can be emailed, embedded on websites or shared on social media.
Content creators have used voice cloning tools to recreate their voices for podcasts,[98][99] narration,[54] and comedy shows.[100][101][102] Publishers and authors have also used such software to narrate audiobooks and newsletters.[103][104] Another area of application is AI video creation with talking heads. Web apps and video editors like Elai.io or Synthesia allow users to create video content involving AI avatars, who are made to speak using text-to-speech technology.[105][106]
Speech synthesis is a valuable computational aid for the analysis and assessment of speech disorders. A voice quality synthesizer, developed by Jorge C. Lucero et al. at the University of Brasília, simulates the physics of phonation and includes models of vocal frequency jitter and tremor, airflow noise and laryngeal asymmetries.[46] The synthesizer has been used to mimic the timbre of dysphonic speakers with controlled levels of roughness, breathiness and strain.[47]
Source: https://en.wikipedia.org/wiki/Speech_synthesis
Automatic pronunciation assessment is the use of speech recognition to verify the correctness of pronounced speech,[1][2] as distinguished from manual assessment by an instructor or proctor.[3] Also called speech verification, pronunciation evaluation, and pronunciation scoring, the main application of this technology is computer-aided pronunciation teaching (CAPT) when combined with computer-aided instruction for computer-assisted language learning (CALL), speech remediation, or accent reduction.
Pronunciation assessment does not determine unknown speech (as in dictation or automatic transcription) but instead, knowing the expected word(s) in advance, it attempts to verify the correctness of the learner's pronunciation and ideally their intelligibility to listeners,[4][5] sometimes along with often inconsequential prosody such as intonation, pitch, tempo, rhythm, and syllable and word stress.[6] Pronunciation assessment is also used in reading tutoring, for example in products such as Microsoft Teams[7] and from Amira Learning.[8] Automatic pronunciation assessment can also be used to help diagnose and treat speech disorders such as apraxia.[9]
The earliest work on pronunciation assessment avoided measuring genuine listener intelligibility,[10] a shortcoming corrected in 2011 at the Toyohashi University of Technology,[11] and included in the Versant high-stakes English fluency assessment from Pearson[12] and mobile apps from 17zuoye Education & Technology,[13] but still missing in 2023 products from Google Search,[14] Microsoft,[15] Educational Testing Service,[16] Speechace,[17] and ELSA.[18] Assessing authentic listener intelligibility is essential for avoiding inaccuracies from accent bias, especially in high-stakes assessments;[19][20][21] from words with multiple correct pronunciations;[22] and from phoneme coding errors in machine-readable pronunciation dictionaries.[23] In the Common European Framework of Reference for Languages (CEFR) assessment criteria for "overall phonological control", intelligibility outweighs formally correct pronunciation at all levels.[24]
In 2022, researchers found that some newer speech-to-text systems, based on end-to-end reinforcement learning to map audio signals directly into words, produce word and phrase confidence scores closely correlated with genuine listener intelligibility.[25] In 2023, others were able to assess intelligibility using a dynamic time warping based distance from a Wav2Vec2 representation of good speech.[26]
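The idea of scoring an utterance by its alignment distance from a reference can be illustrated with a generic dynamic time warping routine over frame-wise feature sequences. This is a textbook DTW sketch, not the method of the cited papers, and the random arrays merely stand in for real feature extraction (for example, Wav2Vec2 frames).

```python
# Generic dynamic time warping distance between two feature sequences, as an
# illustration of alignment-based intelligibility scoring; not the algorithm
# from the cited work.
import numpy as np

def dtw_distance(a, b):
    """a, b: arrays of shape (frames, features); returns normalized cost."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])   # local frame distance
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return cost[n, m] / (n + m)                       # length-normalized

reference = np.random.randn(120, 39)   # stand-in for features of reference speech
learner = np.random.randn(100, 39)     # stand-in for the learner's utterance
print(dtw_distance(reference, learner))
```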
Although there are as yet no industry-standard benchmarks for evaluating pronunciation assessment accuracy, researchers occasionally release evaluation speech corpora for others to use for improving assessment quality.[27][28] Such evaluation databases often emphasize formally unaccented pronunciation to the exclusion of genuine intelligibility evident from blinded listener transcriptions.[5]
Ethical issues in pronunciation assessment are present in both human and automatic methods. Authentic validity, fairness, and mitigating bias in evaluation are all crucial. Diverse speech data should be included in automatic pronunciation assessment models. Combining human judgment with automated feedback can improve accuracy and fairness.[29]
Some promising areas for improvement being developed in 2024 include articulatory feature extraction[30][31][32] and transfer learning to suppress unnecessary corrections.[33] Other interesting advances under development include "augmented reality" interfaces for mobile devices using optical character recognition to provide pronunciation training on text found in user environments.[34][35] As of mid-2024, audio multimodal large language models have been used to assess pronunciation.[36]
Source: https://en.wikipedia.org/wiki/Speech_verification
Subtitles are texts representing the contents of the audio in a film, television show, opera, or other audiovisual media. Subtitles might provide a transcription or translation of spoken dialogue. Although naming conventions can vary, captions are subtitles that include written descriptions of other elements of the audio, like music or sound effects. Captions are thus especially helpful to deaf or hard-of-hearing people. Subtitles may also add information that is not present in the audio. Localizing subtitles provide cultural context to viewers. For example, a subtitle could be used to explain to an audience unfamiliar with sake that it is a type of Japanese wine. Lastly, subtitles are sometimes used for humor, as in Annie Hall, where subtitles show the characters' inner thoughts, which contradict what they were saying in the audio.
Creating, delivering, and displaying subtitles is a complicated and multi-step endeavor. First, the text of the subtitles needs to be written. When there is plenty of time to prepare, this process can be done by hand. However, for media produced in real time, like live television, it may be done by stenographers or using automated speech recognition. Subtitles written by fans, rather than more official sources, are referred to as fansubs. Regardless of who does the writing, they must include information on when each line of text should be displayed.
Second, subtitles need to be distributed to the audience. Open subtitles are added directly to recorded video frames and thus cannot be removed once added. On the other hand, closed subtitles are stored separately, allowing subtitles in different languages to be used without changing the video itself. In either case, a wide variety of technical approaches and formats are used to encode the subtitles.
Third, subtitles need to be displayed to the audience. Open subtitles are always shown whenever the video is played because they are part of it. However, displaying closed subtitles is optional since they are overlaid onto the video by whatever is playing it. For example, media player software might be used to combine closed subtitles with the video itself. In some theaters or venues, a dedicated screen or screens are used to display subtitles. If that dedicated screen is above rather than below the main display area, the subtitles are called surtitles.
Sometimes, mainly at film festivals, subtitles may be shown on a separate display below the screen, thus saving the filmmaker from creating a subtitled copy for just one showing.
Professional subtitlers usually work with specialized computer software and hardware where the video is digitally stored on a hard disk, making each frame instantly accessible. Besides creating the subtitles, the subtitler usually tells the computer software the exact positions where each subtitle should appear and disappear. For cinema films, this task is traditionally done by separate technicians. The result is a subtitle file containing the actual subtitles and position markers indicating where each subtitle should appear and disappear. These markers are usually based on timecode if it is a work for electronic media (e.g., TV, video, DVD) or on film length (measured in feet and frames) if the subtitles are to be used for traditional cinema film.
The finished subtitle file is then used to add the subtitles to the picture.
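To make the timecoded subtitle file concrete, the sketch below writes a few cues in the widely used SubRip (.srt) layout: a running index, a start/end timecode pair, and the subtitle text. The cue contents are invented for illustration.

```python
# Sketch of writing a timecoded subtitle file in the common SubRip (.srt)
# format: index, "start --> end" timecodes, then the subtitle text.
def srt_time(seconds):
    ms = int(round(seconds * 1000))
    h, ms = divmod(ms, 3600_000)
    m, ms = divmod(ms, 60_000)
    s, ms = divmod(ms, 1000)
    return f"{h:02}:{m:02}:{s:02},{ms:03}"

cues = [
    (1.0, 3.5, "Hello, and welcome."),
    (4.0, 6.2, "Subtitles appear and disappear\nat these timecodes."),
]

with open("example.srt", "w", encoding="utf-8") as f:
    for i, (start, end, text) in enumerate(cues, start=1):
        f.write(f"{i}\n{srt_time(start)} --> {srt_time(end)}\n{text}\n\n")
```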
Subtitles can also be created by individuals using freely available subtitle-creation software like Subtitle Workshop, MovieCaptioner or Subtitle Composer, and then hardcoded onto a video file with programs such as VirtualDub in combination with VSFilter, which can also be used to show subtitles as softsubs in many software video players.
For multimedia-style webcasting, dedicated captioning formats and tools are available.
Some programs and online software allow automatic captions, mainly using speech-to-text features.
For example, on YouTube, automatic captions are available in Arabic, Bengali, Dutch, English, French, German, Hebrew, Hindi, Indonesian, Italian, Japanese, Korean, Portuguese, Russian, Spanish, Turkish, Ukrainian, and Vietnamese. If automatic captions are available for the language, they will automatically be published on the video.[1][2]
Automatic captions are generally less accurate than human-typed captions.[3] Automatic captions regularly fail to distinguish between similar-sounding words, such as to, two, and too. This can be particularly problematic with educational material, such as lecture recordings, that may include uncommon vocabulary and proper names. This problem can be compounded with poor audio quality (drops in audio, background noise, and people talking over each other, for example). Disability rights groups have emphasised the need for these captions to be reviewed by a human prior to publishing, particularly in cases where students' grades may be adversely affected by inadequate captioning.[4]
Same-language captions, i.e., without translation, were primarily intended as an aid for people who are deaf or hard-of-hearing.
Closed captioning is the American term for closed subtitles specifically intended for people who are deaf or hard-of-hearing. These are a transcription rather than a translation, and usually also contain lyrics and descriptions of important non-dialogue audio such as (SIGHS), (WIND HOWLING), ("SONG TITLE" PLAYING), (KISSES), (THUNDER RUMBLING), (LAUGHTER), (PANTING), (CAT YOWLS), (GLASS SHATTERS) and (DOOR CREAKING). From the expression "closed captions", the word "caption" has in recent years come to mean a subtitle intended for the deaf or hard-of-hearing, be it "open" or "closed". In British English, "subtitles" usually refers to subtitles for the deaf or hard-of-hearing (SDH); however, the term "SDH" is sometimes used when there is a need to make a distinction between the two.
Programs such as news bulletins, current affairs programs, sports, some talk shows, and political and special events utilize real-time or online captioning.[5] Live captioning is increasingly common, especially in the United Kingdom and the United States, as a result of regulations that stipulate that virtually all TV eventually must be accessible for people who are deaf and hard-of-hearing.[6] In practice, however, these "real-time" subtitles will typically lag the audio by several seconds due to the inherent delay in transcribing, encoding, and transmitting the subtitles. Real-time subtitles are also challenged by typographic errors or mishearing of the spoken words, with no time available to correct before transmission.
Some programs may be prepared in their entirety several hours before broadcast, but with insufficient time to prepare a timecoded caption file for automatic play-out. Pre-prepared captions look similar to offline captions, although the accuracy of cueing may be compromised slightly as the captions are not locked to program timecode.[5]
Newsroom captioning involves the automatic transfer of text from the newsroom computer system to a device which outputs it as captions. It does work, but its suitability as an exclusive system would only apply to programs which had been scripted in their entirety on the newsroom computer system, such as short interstitial updates.[5]
In the United States and Canada, some broadcasters have used it exclusively and simply left uncaptioned sections of the bulletin for which a script was unavailable.[5] Newsroom captioning limits captions to pre-scripted materials and, therefore, does not cover all of the news, weather and sports segments of a typical local news broadcast, which are typically not pre-scripted. This includes last-second breaking news or changes to the scripts, ad-lib conversations of the broadcasters, and emergency or other live remote broadcasts by reporters in the field. By failing to cover items such as these, newsroom-style captioning (or use of the teleprompter for captioning) typically results in coverage of less than 30% of a local news broadcast.[7]
Communication access real-time translation (CART) stenographers, who use a computer with either stenotype or Velotype keyboards to transcribe stenographic input for presentation as captions within two or three seconds of the corresponding audio, must caption anything which is purely live and unscripted;[5] however, more recent developments include operators using speech recognition software and re-voicing the dialogue. Speech recognition technology has advanced so quickly in the United States that about half of all live captioning was through speech recognition as of 2005. Real-time captions look different from offline captions, as they are presented as a continuous flow of text as people speak.[5]
Stenography is a system of rendering words phonetically, and English, with its multitude of homophones (e.g., there, their, they're), is particularly unsuited to easy transcriptions. Stenographers working in courts and inquiries usually have 24 hours in which to deliver their transcripts. Consequently, they may enter the same phonetic stenographic codes for a variety of homophones, and fix up the spelling later. Real-time stenographers must deliver their transcriptions accurately and immediately. They must therefore develop techniques for keying homophones differently, and be unswayed by the pressures of delivering an accurate product on immediate demand.[5]
Submissions to recent captioning-related inquiries have revealed concerns from broadcasters about captioning sports. In the absence of much sport captioning, the Australian Caption Centre submitted to the National Working Party on Captioning (NWPC), in November 1998, three examples of sport captioning, performed on tennis, rugby league and swimming programs.
The NWPC concluded that the standard they accept is the comprehensive real-time method, which gives them access to the commentary in its entirety. Also, not all sports are live. Many events are pre-recorded hours before they are broadcast, allowing a captioner to caption them using offline methods.[5]
Because different programs are produced under different conditions, captioning methodology must consequently be determined on a case-by-case basis. Some bulletins may have a high incidence of truly live material, or insufficient access to video feeds and scripts may be provided to the captioning facility, making stenography unavoidable. Other bulletins may be pre-recorded just before going to air, making pre-prepared text preferable.[5]
News captioning applications currently available are designed to accept text from a variety of inputs: stenography, Velotype, QWERTY, ASCII import, and the newsroom computer. This allows one facility to handle a variety of online captioning requirements and to ensure that captioners properly caption all programs.[5]
Current affairs programs usually require stenographic assistance. Even though the segments which comprise a current affairs program may be produced in advance, they are usually done so just before on-air time and their duration makes QWERTY input of text unfeasible.[5]
News bulletins, on the other hand, can often be captioned without stenographic input (unless there are live crosses or ad-libbing by the presenters), largely because most of the material is scripted in advance on the newsroom computer system.
For non-live, or pre-recorded programs, television program providers can choose offline captioning. Captioners gear offline captioning toward the high-end television industry, providing highly customized captioning features, such as pop-on style captions, specialized screen placement, speaker identifications, italics, special characters, and sound effects.[8]
Offline captioning involves a five-step design and editing process, and does much more than simply display the text of a program. Offline captioning helps the viewer follow a story line, become aware of mood and feeling, and allows them to fully enjoy the entire viewing experience. Offline captioning is the preferred presentation style for entertainment-type programming.[8]
Subtitles for the deaf or hard-of-hearing (SDH) is an American term introduced by the DVD industry.[9] It refers to regular subtitles in the original language where important non-dialogue information has been added, as well as speaker identification, which may be useful when the viewer cannot otherwise visually tell who is saying what.
The only significant difference for the user between SDH subtitles and closed captions is their appearance: SDH subtitles usually are displayed with the same proportional font used for the translation subtitles on the DVD; however, closed captions are displayed as white text on a black band, which blocks a large portion of the view. Closed captioning is falling out of favor as many users have no difficulty reading SDH subtitles, which are text with contrast outline. In addition, DVD subtitles can specify many colors on the same character: primary, outline, shadow, and background. This allows subtitlers to display subtitles on a usually translucent band for easier reading; however, this is rare, since most subtitles use an outline and shadow instead, in order to block a smaller portion of the picture. Closed captions may still supersede DVD subtitles, since many SDH subtitles present all of the text centered (an example of this is DVDs and Blu-ray Discs manufactured by Warner Bros.), while closed captions usually specify position on the screen: centered, left align, right align, top, etc. This is helpful for speaker identification and overlapping conversation. Some SDH subtitles (such as the subtitles of newer Universal Studios DVDs and Blu-ray Discs and most 20th Century Fox Blu-ray Discs, and some Columbia Pictures DVDs) do have positioning, but it is not as common.
DVDs for the U.S. market now sometimes have three forms of English subtitles: SDH subtitles; English subtitles, helpful for viewers who may not be hearing impaired but whose first language may not be English (although they are usually an exact transcript and not simplified); and closed caption data that is decoded by the end-user's closed caption decoder. Most anime releases in the U.S. only include translations of the original material as subtitles; therefore, SDH subtitles of English dubs ("dubtitles") are uncommon.[10][11]
High-definition disc media (HD DVD,Blu-ray Disc) uses SDH subtitles as the sole method because technical specifications do not require HD to support line 21 closed captions. Some Blu-ray Discs, however, are said to carry a closed caption stream that only displays through standard-definition connections. ManyHDTVsallow the end-user to customize the captions, including the ability to remove the black band.
Song lyrics are not always captioned, as additional copyright permissions may be required to reproduce the lyrics on-screen as part of the subtitle track. In October 2015, major studios andNetflixwere sued over this practice, citing claims offalse advertising(as the work is henceforth not completely subtitled) andcivil rightsviolations (under California'sUnruh Civil Rights Act, guaranteeing equal rights for people with disabilities). JudgeStephen Victor Wilsondismissed the suit in September 2016, ruling that allegations of civil rights violations did not present evidence of intentional discrimination against viewers with disabilities, and that allegations over misrepresenting the extent of subtitles "fall far short of demonstrating that reasonable consumers would actually be deceived as to the amount of subtitled content provided, as there are no representations whatsoever that all song lyrics would be captioned, or even that the content would be 'fully' captioned."[12][13]
Although same-language subtitles and captions are produced primarily with the deaf and hard-of-hearing in mind, many others use them for convenience. Subtitles are increasingly popular among younger viewers for improved understanding and faster comprehension. Subtitles allow viewers to understand dialogue that is poorly enunciated, delivered quietly, in unfamiliar dialects, or spoken by background characters. A 2021 UK survey found that 80% of viewers between 18 and 25 regularly used subtitles, while less than a quarter of those between 56 and 75 did.[14][15][16]
Same language subtitling(SLS) is the use of synchronized captioning of musical lyrics (or any text with an audio or video source) as a repeated reading activity. The basic reading activity involves students viewing a short subtitled presentation projected onscreen, while completing a response worksheet. To be really effective, the subtitling should have high quality synchronization of audio and text, and better yet, subtitling should change color in syllabic synchronization to audio model, and the text should be at a level to challenge students' language abilities.[17][18]Studies (including those by theUniversity of Nottinghamand the What Works Clearinghouse of theUnited States Department of Education) have found that use of subtitles can help promotereading comprehensionin school-aged children.[19]Same-language captioning can improve literacy and reading growth across a broad range of reading abilities.[20][21]It is used for this purpose by national television broadcasters inChinaand inIndiasuch asDoordarshan.[citation needed][20][22]
In some Asian television programming, captioning is considered a part of the genre, and has evolved beyond simply capturing what is being said. The captions are used artistically; it is common to see the words appear one by one as they are spoken, in a multitude of fonts, colors, and sizes that capture the spirit of what is being said. Languages like Japanese also have a rich vocabulary ofonomatopoeiawhich is used in captioning.
In some East Asian countries, especiallyChinese-speaking ones, subtitling is common in all taped television programs and films. In these countries, written text remains mostly uniform while regional dialects in the spoken form can be mutually unintelligible. Therefore, subtitling offers a distinct advantage to aid comprehension. With subtitles, programs inMandarinor any dialect can be understood by viewers unfamiliar with it.
According to HK Magazine, the practice of captioning in Standard Chinese was pioneered in Hong Kong during the 1960s by Run Run Shaw of Shaw Brothers Studio. In a bid to reach the largest audience possible, Shaw had already recorded his films in Mandarin, reasoning that it would be the most universal variety of Chinese. However, this did not guarantee that the films could be understood by non-Mandarin-speaking audiences, and dubbing into different varieties was seen as too costly. The decision was thus made to include Standard Chinese subtitles in all Shaw Brothers films. As the films were made in British-ruled Hong Kong, Shaw also decided to include English subtitles to reach English speakers in Hong Kong and allow for exports outside Asia.[23]
On-screen subtitles as seen in Japanese variety and other reality television shows are more for decorative purposes, something that is not seen in television in Europe and the Americas. Some shows even place sound effects over those subtitles. This practice of subtitling has spread to neighbouring countries including South Korea and Taiwan. ATV in Hong Kong once practiced this style of decorative subtitles on its variety shows while it was owned by Want Want Holdings in Taiwan (which also owns CTV and CTI) during 2009.
Translation is the conversion of content from one language into another, in written or spoken form. Subtitles can be used to translate dialogue from a foreign language into the native language of the audience. It is not only the quickest and cheapest method of translating content, but it is also usually preferred, since the audience can still hear the original dialogue and the voices of the actors.
Subtitle translation may differ from the translation of written text. Usually, during the process of creating subtitles for a film or television program, the picture and each sentence of the audio are analyzed by the subtitle translator; the subtitle translator may or may not have access to a written transcript of the dialogue. Especially in the field of commercial subtitles, the subtitle translator often interprets what is meant, rather than translating the manner in which the dialogue is stated; that is, the meaning is more important than the form. The audience does not always appreciate this, and it can be frustrating for viewers who are familiar with some of the spoken language, since spoken language may contain verbal padding or culturally implied meanings that cannot be conveyed in the written subtitles. The subtitle translator may also condense the dialogue to achieve an acceptable reading speed, whereby purpose is more important than form.
Especially infansubs, the subtitle translator may translate both form and meaning. The subtitle translator may also choose to display a note in the subtitles, usually inparentheses("(" and ")"), or as a separate block of on-screen text—this allows the subtitle translator to preserve form and achieve an acceptable reading speed; that is, the subtitle translator may leave a note on the screen, even after the character has finished speaking, to both preserve form and facilitate understanding. For example, Japanese has multiple first-person pronouns (seeJapanese pronouns) and each pronoun is associated with a different degree of politeness. In order to compensate during the English translation process, the subtitle translator may reformulate the sentence, add appropriate words or use notes.
Real-time translation subtitling usually involves an interpreter and a stenographer working concurrently, whereby the former quickly translates the dialogue while the latter types; this form of subtitling is rare. The unavoidable delay, typing errors, lack of editing, and high cost mean that real-time translation subtitling is in low demand. Allowing the interpreter to directly speak to the viewers is usually both cheaper and quicker; however, the translation is not accessible to people who are deaf and hard-of-hearing.
Some subtitlers purposely provide edited subtitles or captions to match the needs of their audience, for learners of the spoken dialogue as a second or foreign language, visual learners, beginning readers who are deaf or hard of hearing and for people with learning or mental disabilities. For example, for many of its films and television programs,PBSdisplays standard captions representing speech from the program audio, word-for-word, if the viewer selects "CC1" by using the television remote control or on-screen menu; however, they also provide edited captions to present simplified sentences at a slower rate, if the viewer selects "CC2". Programs with a diverse audience also often have captions in another language. This is common with popular Latin Americansoap operasin Spanish. Since CC1 and CC2 sharebandwidth, theU.S. Federal Communications Commission(FCC) recommends translation subtitles be placed in CC3. CC4, which shares bandwidth with CC3, is also available, but programs seldom use it.
The two alternative methods of 'translating' films in a foreign language aredubbing, in which other actors record over the voices of the original actors in a different language, andlectoring, a form ofvoice-overfor fictional material where a narrator tells the audience what the actors are saying while their voices can be heard in the background. Lectoring is common for television in Russia, Poland, and a few other East European countries, while cinemas in these countries commonly show films dubbed or subtitled.
The preference for dubbing or subtitling in various countries is largely based on decisions made in the late 1920s and early 1930s. With the arrival of sound film, the film importers in Germany, Italy, France and Spain decided to dub the foreign voices, while the rest of Europe elected to display the dialogue as translated subtitles. The choice was largely due to financial reasons (subtitling is more economical and quicker than dubbing), but during the 1930s it also became a political preference in Germany, Italy and Spain: an expedient form of censorship that ensured that foreign views and ideas could be stopped from reaching the local audience, as dubbing makes it possible to create a dialogue which is totally different from the original. In larger German cities a few "special cinemas" use subtitling instead of dubbing.
Dubbing is still the norm and favored form in these four countries, but the proportion of subtitling is slowly growing, mainly to save cost and turnaround-time, but also due to a growing acceptance among younger generations, who are better readers and increasingly have a basic knowledge of English (the dominant language in film and TV) and thus prefer to hear the original dialogue.
Nevertheless, in Spain, for example, only public TV channels show subtitled foreign films, usually at late night. It is extremely rare that any Spanish TV channel shows subtitled versions of TV programs, series or documentaries. With the advent of digital land broadcast TV, it has become common practice in Spain to provide optional audio and subtitle streams that allow watching dubbed programs with the original audio and subtitles. In addition, only a small proportion of cinemas show subtitled films. Films with dialogue inGalician,CatalanorBasqueare always dubbed, not subtitled, when they are shown in the rest of the country. Some non-Spanish-speaking TV stations subtitle interviews in Spanish; others do not.
In manyLatin Americancountries, local network television will show dubbed versions of English-language programs and movies, while cable stations (often international) more commonly broadcast subtitled material. Preference for subtitles or dubbing varies according to individual taste and reading ability, and theaters may order two prints of the most popular films, allowing moviegoers to choose between dubbing or subtitles. Animation and children's programming, however, is nearly universally dubbed, as in other regions.
Since the introduction of the DVD and, later, the Blu-ray Disc, some high budget films include the simultaneous option ofbothsubtitles and dubbing. Often in such cases, the translations are made separately, rather than the subtitles being a verbatim transcript of the dubbed scenes of the film. While this allows for the smoothest possible flow of the subtitles, it can be frustrating for someone attempting to learn a foreign language.
In the traditional subtitling countries, dubbing is generally regarded as something strange and unnatural and is only used for animated films and TV programs intended for pre-school children. As animated films are "dubbed" even in their original language and ambient noise and effects are usually recorded on a separate sound track, dubbing a low quality production into a second language produces little or no noticeable effect on the viewing experience. In dubbed live-action television or film, however, viewers are often distracted by the fact that the audio does not match the actors' lip movements. Furthermore, the dubbed voices may seem detached, inappropriate for the character, or overly expressive, and some ambient sounds may not be transferred to the dubbed track, creating a less enjoyable viewing experience.
In several countries or regions nearly all foreign language TV programs are subtitled, instead of dubbed, such as:
It is also common for television services in minority languages to subtitle their programs in the dominant language as well. Examples include the Welsh S4C and Irish TG4, which subtitle in English, and the Swedish Yle Fem in Finland, which subtitles in the majority language, Finnish.
InWallonia(Belgium) films are usually dubbed, but sometimes they are played on two channels at the same time: one dubbed (on La Une) and the other subtitled (on La Deux), but this is no longer done as frequently due to low ratings.
In Australia, one FTA network, SBS, airs its foreign-language shows subtitled in English.
Subtitles in the same language on the same production can be in different categories:
Subtitles exist in two forms;opensubtitles are 'open to all' and cannot be turned off by the viewer;closedsubtitles are designed for a certain group of viewers, and can usually be turned on or off or selected by the viewer – examples being teletext pages, U.S. Closed captions (608/708), DVB Bitmap subtitles, DVD or Blu-ray subtitles.
While distributing content, subtitles can appear in one of three types:
In another categorization, digital video subtitles are sometimes called internal, if they are embedded in a single video file container along with video and audio streams, and external if they are distributed as a separate file (which is less convenient, but such a file is easier to edit or change).
There are still many more uncommon formats. Most of them are text-based and have the extension .txt.
For cinema movies shown in a theatre:
For movies on DVD Video:
For TV broadcast:
Subtitles created for TV broadcast are stored in a variety of file formats. The majority of these formats are proprietary to the vendors of subtitle insertion systems.
Broadcast subtitle formats include: .ESY, .XIF, .X32, .PAC, .RAC, .CHK, .AYA, .890, .CIP, .CAP, .ULT, .USF, .CIN, .L32, .ST4, .ST7, .TIT, .STL
The EBU format defined by Technical Reference 3264-E[34] is an 'open' format intended for subtitle exchange between broadcasters. Files in this format have the extension .stl (not to be confused with the text-based "Spruce subtitle format", which also has the extension .stl).
For internet delivery:
The Timed Text format, currently a "Candidate Recommendation" of the W3C (called DFXP[35]), is also proposed as an 'open' format for subtitle exchange and distribution to media players, such as Microsoft Silverlight.
Most times a foreign language is spoken in film, subtitles are used to translate the dialogue for the viewer. However, there are occasions when foreign dialogue is left unsubtitled (and thus incomprehensible to most of the target audience). This is often done if the movie is seen predominantly from the viewpoint of a particular character who does not speak the language. Such absence of subtitles allows the audience to feel a similar sense of incomprehension andalienationthat the character feels. An example of this is seen inNot Without My Daughter. ThePersiandialogue spoken by the Iranian characters is not subtitled because the main characterBetty Mahmoodydoes not speak Persian and the audience is seeing the film from her viewpoint.
A variation of this was used in the video game Max Payne 3. Subtitles are used for the English, Spanish (Chapter 11 only) and Portuguese (Chapters 1, 2, 3, 5, 6, 7, 9, 10, 12, 13 and 14 only) dialogue, but the latter is left untranslated[36] as the main character does not understand the language.
|
https://en.wikipedia.org/wiki/Subtitle_(captioning)
|
VoiceXML(VXML) is a digital document standard for specifying interactive media and voice dialogs between humans and computers. It is used for developing audio and voice response applications, such as banking systems and automated customer service portals. VoiceXML applications are developed and deployed in a manner analogous to how aweb browserinterprets and visually renders theHypertext Markup Language(HTML) it receives from aweb server. VoiceXML documents are interpreted by avoice browserand in common deployment architectures, users interact with voice browsers via thepublic switched telephone network(PSTN).
The VoiceXML document format is based onExtensible Markup Language(XML). It is a standard developed by theWorld Wide Web Consortium(W3C).
VoiceXML applications are commonly used in many industries and segments of commerce. These applications include order inquiry, package tracking, driving directions, emergency notification, wake-up, flight tracking, voice access to email, customer relationship management, prescription refilling, audio news magazines, voice dialing, real-estate information and nationaldirectory assistanceapplications.[citation needed]
VoiceXML has tags that instruct thevoice browserto providespeech synthesis, automaticspeech recognition, dialog management, and audio playback. The following is an example of a VoiceXML document:
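A minimal VoiceXML 2.0 document of this kind might look like the following sketch (the structure follows the VoiceXML 2.0 schema; the wording of the prompt is illustrative):

    <?xml version="1.0" encoding="UTF-8"?>
    <vxml xmlns="http://www.w3.org/2001/vxml" version="2.0">
      <form>
        <block>
          <!-- The contents of the prompt are rendered with synthesized speech -->
          <prompt>Hello world</prompt>
        </block>
      </form>
    </vxml>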
When interpreted by a VoiceXML interpreter this will output "Hello world" with synthesized speech.
Typically,HTTPis used as the transport protocol for fetching VoiceXML pages. Some applications may use static VoiceXML pages, while others rely on dynamic VoiceXML page generation using anapplication serverlikeTomcat,Weblogic,IIS, orWebSphere.
Historically, VoiceXML platform vendors have implemented the standard in different ways, and added proprietary features. But the VoiceXML 2.0 standard, adopted as a W3C Recommendation on 16 March 2004, clarified most areas of difference. The VoiceXML Forum, an industry group promoting the use of the standard, provides aconformance testingprocess that certifies vendors' implementations as conformant.
AT&T Corporation,IBM,Lucent, andMotorolaformed the VoiceXML Forum in March 1999, in order to develop a standard markup language for specifying voice dialogs. By September 1999 the Forum released VoiceXML 0.9 for member comment, and in March 2000 they published VoiceXML 1.0. Soon afterwards, the Forum turned over the control of the standard to the W3C.[1]The W3C produced several intermediate versions of VoiceXML 2.0, which reached the final "Recommendation" stage in March 2004.[2]
VoiceXML 2.1 added a relatively small set of additional features to VoiceXML 2.0, based on feedback from implementations of the 2.0 standard. It is backward compatible with VoiceXML 2.0 and reached W3C Recommendation status in June 2007.[3]
VoiceXML 3.0 was slated to be the next major release of VoiceXML, with new major features. However, with the disbanding of the VoiceXML Forum in May 2022,[4]the development of the new standard was scrapped.
As of December 2022, there are few VoiceXML 2.0/2.1 platform implementations being offered.
The W3C's Speech Interface Framework also defines these other standards closely associated with VoiceXML.
TheSpeech Recognition Grammar Specification(SRGS) is used to tell the speech recognizer what sentence patterns it should expect to hear: these patterns are called grammars. Once the speech recognizer determines the most likely sentence it heard, it needs to extract the semantic meaning from that sentence and return it to the VoiceXML interpreter. This semantic interpretation is specified via theSemantic Interpretation for Speech Recognition(SISR) standard. SISR is used inside SRGS to specify the semantic results associated with the grammars, i.e., the set of ECMAScript assignments that create the semantic structure returned by the speech recognizer.
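As an illustrative sketch (the grammar content and rule names are invented for this example, not taken from the specifications), an SRGS grammar with embedded SISR tags for a simple drink choice might look like this:

    <?xml version="1.0" encoding="UTF-8"?>
    <grammar xmlns="http://www.w3.org/2001/06/grammar" version="1.0"
             mode="voice" root="drink" tag-format="semantics/1.0">
      <rule id="drink" scope="public">
        <one-of>
          <!-- Each SISR tag assigns a semantic value to the recognition result -->
          <item>coffee <tag>out = "coffee";</tag></item>
          <item>tea <tag>out = "tea";</tag></item>
        </one-of>
      </rule>
    </grammar>

Here the ECMAScript assignments to the rule variable out form the semantic structure that the recognizer returns to the VoiceXML interpreter.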
TheSpeech Synthesis Markup Language(SSML) is used to decorate textual prompts with information on how best to render them in synthetic speech, for example which speech synthesizer voice to use or when to speak louder or softer.
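For example, a short SSML fragment along the following lines (a sketch; the attribute values are illustrative and voice selection is platform-dependent) decorates a prompt with emphasis, a pause, and a volume change:

    <?xml version="1.0" encoding="UTF-8"?>
    <speak xmlns="http://www.w3.org/2001/10/synthesis"
           version="1.0" xml:lang="en-US">
      Your balance is
      <emphasis level="strong">forty dollars</emphasis>.
      <break time="500ms"/>
      <prosody volume="soft">Thank you for calling.</prosody>
    </speak>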
ThePronunciation Lexicon Specification(PLS) is used to define how words are pronounced. The generated pronunciation information is meant to be used by both speech recognizers and speech synthesizers in voice browsing applications.
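A PLS lexicon entry might look like the following sketch, assuming an IPA pronunciation alphabet (the word chosen is arbitrary):

    <?xml version="1.0" encoding="UTF-8"?>
    <lexicon xmlns="http://www.w3.org/2005/01/pronunciation-lexicon"
             version="1.0" alphabet="ipa" xml:lang="en-US">
      <lexeme>
        <grapheme>tomato</grapheme>
        <!-- The same pronunciation can be consulted by recognizers and synthesizers -->
        <phoneme>təˈmeɪtoʊ</phoneme>
      </lexeme>
    </lexicon>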
TheCall Control eXtensible Markup Language(CCXML) is a complementary W3C standard. A CCXML interpreter is used on some VoiceXML platforms to handle the initial call setup between the caller and the voice browser, and to provide telephony services like call transfer and disconnect to the voice browser. CCXML can also be used in non-VoiceXML contexts.
Inmedia serverapplications, it is often necessary for several call legs to interact with each other, for example in a multi-party conference. Some deficiencies were identified in VoiceXML for this application and so companies designed specific scripting languages to deal with this environment. TheMedia Server Markup Language(MSML) was Convedia's solution, andMedia Server Control Markup Language(MSCML) was Snowshore's solution. Snowshore is now owned by Dialogic and Convedia is now owned by Radisys. These languages also contain 'hooks' so that external scripts (like VoiceXML) can run on call legs whereIVRfunctionality is required.
There was an IETF working group calledmediactrl("media control") that was working on a successor for these scripting systems, which it is hoped will progress to an open and widely adopted standard.[5]The mediactrl working group concluded in 2013.[6]
|
https://en.wikipedia.org/wiki/VoiceXML
|
VoxForge is a free speech corpus and acoustic model repository for open source speech recognition engines.
VoxForge was set up to collect transcribed speech in order to create a free GPL speech corpus for use with open source speech recognition engines. The collected speech audio files will be 'compiled' into acoustic models for use with engines such as Julius, ISIP, Sphinx and HTK (note: HTK has distribution restrictions).
VoxForge has[1]usedLibriVoxas a source of audio data since 2007.
|
https://en.wikipedia.org/wiki/VoxForge
|
Windows Speech Recognition(WSR) isspeech recognitiondeveloped byMicrosoftforWindows Vistathat enablesvoice commandsto control thedesktopuser interface,dictatetext inelectronic documentsandemail, navigatewebsites, performkeyboard shortcuts, and operate themouse cursor. It supports custommacrosto perform additional or supplementary tasks.
WSR is a locally processed speech recognition platform; it does not rely oncloud computingfor accuracy, dictation, or recognition, but adapts based on contexts, grammars, speech samples, training sessions, and vocabularies. It provides a personal dictionary that allows users to include or exclude words or expressions from dictation and to record pronunciations to increase recognition accuracy. Custom language models are also supported.
With Windows Vista, WSR was developed to be part of Windows, as speech recognition was previously exclusive to applications such asWindows Media Player. It is present inWindows 7,Windows 8,Windows 8.1,Windows RT,Windows 10, andWindows 11.
Microsoft was involved in speech recognition andspeech synthesisresearch for many years before WSR. In 1993, Microsoft hiredXuedong HuangfromCarnegie Mellon Universityto lead its speech development efforts; the company's research led to the development of theSpeech API(SAPI) introduced in 1994.[1]Speech recognition had also been used in previous Microsoft products.Office XPandOffice 2003provided speech recognition capabilities amongInternet ExplorerandMicrosoft Officeapplications;[2]it also enabled limited speech functionality inWindows 98,Windows Me,Windows NT 4.0, andWindows 2000.[3]Windows XPTablet PC Edition2002 included speech recognition capabilities with the Tablet PC Input Panel,[4][5]andMicrosoft Plus! for Windows XPenabled voice commands for Windows Media Player.[6]However, these all required installation of speech recognition as a separate component; before Windows Vista, Windows did not include integrated or extensive speech recognition.[5]Office 2007and later versions rely on WSR for speech recognition services.[7]
AtWinHEC 2002Microsoft announced that Windows Vista (codenamed "Longhorn") would include advances in speech recognition and in features such asmicrophone arraysupport[8]as part of an effort to "provide a consistent quality audio infrastructure for natural (continuous) speech recognition and (discrete) command and control."[9]Bill Gatesstated duringPDC 2003that Microsoft would "build speech capabilities into the system — a big advance for that in 'Longhorn,' in both recognition and synthesis, real-time";[10][11]and pre-release builds during thedevelopment of Windows Vistaincluded a speech engine with training features.[12]A PDC 2003 developer presentation stated Windows Vista would also include a user interface for microphone feedback and control, and user configuration and training features.[13]Microsoft clarified the extent to which speech recognition would be integrated when it stated in a pre-releasesoftware development kitthat "the common speech scenarios, like speech-enabling menus and buttons, will be enabled system-wide."[14]
During WinHEC 2004 Microsoft included WSR as part of a strategy to improve productivity on mobile PCs.[15][16]Microsoft later emphasizedaccessibility, new mobility scenarios, support for additional languages, and improvements to the speech user experience at WinHEC 2005. Unlike the speech support included in Windows XP, which was integrated with the Tablet PC Input Panel and required switching between separate Commanding and Dictation modes, Windows Vista would introduce a dedicated interface for speech input on the desktop and would unify the separate speech modes;[17]users previously could not speak a command after dictating or vice versa without first switching between these two modes.[18]Windows Vista Beta 1 included integrated speech recognition.[19]To incentivize company employees to analyze WSR for softwareglitchesand to provide feedback, Microsoft offered an opportunity for its testers to win a Premium model of theXbox 360.[20]
During a demonstration by Microsoft on July 27, 2006—before Windows Vista'srelease to manufacturing(RTM)—a notable incident involving WSR occurred that resulted in an unintended output of "Dear aunt, let's set so double the killer delete select all" when several attempts to dictate led to consecutive output errors;[21][22]the incident was a subject of significant derision among analysts and journalists in the audience,[23][24]despite another demonstration for application management and navigation being successful.[21]Microsoft revealed these issues were due to an audiogainglitch that caused the recognizer to distort commands and dictations; the glitch was fixed before Windows Vista's release.[25]
Reports from early 2007 indicated that WSR is vulnerable to attackers using speech recognition for malicious operations by playing certain audio commands through a target's speakers;[26][27]it was the first vulnerability discovered after Windows Vista'sgeneral availability.[28]Microsoft stated that although such an attack is theoretically possible, a number of mitigating factors and prerequisites would limit its effectiveness or prevent it altogether: a target would need the recognizer to be active and configured to properly interpret such commands; microphones and speakers would both need to be enabled and at sufficient volume levels; and an attack would require the computer to perform visible operations and produce audible feedback without users noticing.User Account Controlwould also prohibit the occurrence of privileged operations.[29]
WSR was updated to use Microsoft UI Automation, and its engine now uses the WASAPI audio stack, substantially enhancing its performance and enabling support for echo cancellation, respectively. The document harvester, which can analyze and collect text in email and documents to contextualize user terms, has improved performance and now runs periodically in the background instead of only after recognizer startup. Sleep mode has also seen performance improvements, and, to address security issues, the recognizer is now turned off by default after users speak "stop listening", instead of being suspended. Windows 7 also introduces an option to submit speech training data to Microsoft to improve future recognizer versions.[30]
A new dictation scratchpad interface functions as a temporary document into which users can dictate or type text for insertion into applications that are not compatible with theText Services Framework.[30]Windows Vista previously provided an "enable dictation everywhere option" for such applications.[31]
WSR can be used to control theMetrouser interface in Windows 8, Windows 8.1, and Windows RT with commands to open theCharms bar("Press Windows C"); to dictate or display commands inMetro-style apps("Press Windows Z"); to perform tasks in apps (e.g., "Change to Celsius" inMSN Weather); and to display all installed apps listed by theStart screen("Apps").[32][33]
WSR is featured in theSettingsapplication starting with the Windows 10 April 2018 Update (Version 1803); the change first appeared inInsiderPreview Build 17083.[34]The April 2018 Update also introduces a new⊞ Win+Ctrl+Skeyboard shortcut to activate WSR.[35]
In Windows 11 version 22H2, a second Microsoft app, Voice Access, was added in addition to WSR.[36][37]In December 2023 Microsoft announced that WSR is deprecated in favor of Voice Access and may be removed in a future build or release of Windows.[38]
WSR allows a user to control applications and the Windowsdesktopuser interfacethrough voice commands.[39]Users can dictate text within documents, email, and forms; control the operating system user interface; performkeyboard shortcuts; and move themouse cursor.[40]The majority of integrated applications in Windows Vista can be controlled;[39]third-party applications must support the Text Services Framework for dictation.[1]English (U.S.),English (U.K.),French,German,Japanese,Mandarin Chinese, andSpanishare supported languages.[41]
When started for the first time, WSR presents a microphone setup wizard and an optional interactive step-by-step tutorial that users can commence to learn basic commands while adapting the recognizer to their specific voice characteristics;[39]the tutorial is estimated to require approximately 10 minutes to complete.[42]The accuracy of the recognizer increases through regular use, which adapts it to contexts, grammars, patterns, and vocabularies.[41][43]Custom language models for the specific contexts, phonetics, and terminologies of users in particular occupational fields such as legal or medical are also supported.[44]WithWindows Search,[45]the recognizer also can optionally harvest text in documents, email, as well as handwrittentablet PCinput to contextualize and disambiguate terms to improve accuracy; no information is sent to Microsoft.[43]
WSR is a locally processed speech recognition platform; it does not rely on cloud computing for accuracy, dictation, or recognition.[46]Speech profiles that store information about users are retained locally.[43]Backups and transfers of profiles can be performed viaWindows Easy Transfer.[47]
The WSR interface consists of a status area that displays instructions, information about commands (e.g., if a command is not heard by the recognizer), and the status of the recognizer; a voice meter displays visual feedback about volume levels. The status area represents the current state of WSR in a total of three modes, listed below with their respective meanings:
Colors of the recognizer listening mode button denote its various modes of operation: blue when listening; blue-gray when sleeping; gray when turned off; and yellow when the user switches context (e.g., from the desktop to the taskbar) or when a voice command is misinterpreted. The status area can also display custom user information as part ofWindows Speech Recognition Macros.[48][49]
An alternates panel disambiguation interface lists items interpreted as being relevant to a user's spoken word(s); if the word or phrase that a user desired to insert into an application is listed among the results, the user can speak the corresponding number of the word or phrase and confirm this choice by speaking "OK" to insert it within the application.[50] The alternates panel also appears when launching applications or speaking commands that refer to more than one item (e.g., speaking "Start Internet Explorer" may list both the web browser and a separate version with add-ons disabled). An ExactMatchOverPartialMatch entry in the Windows Registry can limit commands to items with exact names if there is more than one instance included in results.[51]
Listed below are common WSR commands. Words initalicsindicate a word that can be substituted for the desired item (e.g., "direction" in "scrolldirection" can be substituted with the word "down").[40]A "start typing" command enables WSR to interpret all dictation commands as keyboard shortcuts.[50]
MouseGridenables users to control the mouse cursor by overlaying numbers across nine regions on the screen; these regions gradually narrow as a user speaks the number(s) of the region on which to focus until the desired interface element is reached. Users can then issue commands including "Clicknumber of region," which moves the mouse cursor to the desired region and then clicks it; and "Marknumber of region", which allows an item (such as acomputer icon) in a region to be selected, which can then be clicked with the previousclickcommand. Users also can interact with multiple regions at once.[40]
Applications and interface elements that do not present identifiable commands can still be controlled by asking the system to overlay numbers on top of them through aShow Numberscommand. Once active, speaking the overlaid number selects that item so a user can open it or perform other operations.[40]Show Numberswas designed so that users could interact with items that are not readily identifiable.[53]
WSR enables dictation of text in applications and Windows. If a dictation mistake occurs it can be corrected by speaking "Correctword" or "Correct that" and the alternates panel will appear and provide suggestions for correction; these suggestions can be selected by speaking the number corresponding to the number of the suggestion and by speaking "OK." If the desired item is not listed among suggestions, a user can speak it so that it might appear. Alternatively, users can speak "Spell it" or "I'll spell it myself" to speak the desired word on letter-by-letter basis; users can use their personal alphabet or theNATO phonetic alphabet(e.g., "N as in November") when spelling.[44]
Multiple words in a sentence can be corrected simultaneously (for example, if a user speaks "dictating" but the recognizer interprets this word as "the thing," a user can state "correct the thing" to correct both words at once). In the English language over 100,000 words are recognized by default.[44]
A personal dictionary allows users to include or exclude certain words or expressions from dictation.[44]When a user adds a word beginning with a capital letter to the dictionary, a user can specify whether it should always be capitalized or if capitalization depends on the context in which the word is spoken. Users can also record pronunciations for words added to the dictionary to increase recognition accuracy; words written via astyluson a tablet PC for the Windowshandwriting recognitionfeature are also stored. Information stored within a dictionary is included as part of a user's speech profile.[43]Users can open the speech dictionary by speaking the "show speech dictionary" command.
WSR supports custom macros through a supplementary application by Microsoft that enables additionalnatural languagecommands.[54][55]As an example of this functionality, an email macro released by Microsoft enables a natural language command where a user can speak "send email tocontactaboutsubject," which opensMicrosoft Outlookto compose a new message with the designated contact and subject automatically inserted.[56]Microsoft has also released sample macros for the speech dictionary,[57]for Windows Media Player,[58]forMicrosoft PowerPoint,[59]forspeech synthesis,[60]to switch between multiple microphones,[61]to customize various aspects of audio device configuration such as volume levels,[62]and for general natural language queries such as "What is the weather forecast?"[63]"What time is it?"[60]and "What's the date?"[60]Responses to these user inquiries are spoken back to the user in the activeMicrosoft text-to-speech voiceinstalled on the machine.
Users and developers can create their own macros based on text transcription and substitution; application execution (with support forcommand-line arguments); keyboard shortcuts; emulation of existing voice commands; or a combination of these items.XML,JScriptandVBScriptare supported.[50]Macros can be limited to specific applications[64]and rules for macros can be defined programmatically.[56]For a macro to load, it must be stored in aSpeech Macrosfolder within the active user'sDocumentsdirectory. All macros aredigitally signedby default if auser certificateis available to ensure that stored commands are not altered or loaded by third-parties; if a certificate is not available, an administrator can create one.[65]Configurable security levels can prohibit unsigned macros from being loaded; to prompt users to sign macros after creation; and to load unsigned macros.[64]
As of 2017[update]WSR uses Microsoft Speech Recognizer 8.0, the version introduced in Windows Vista. For dictation it was found to be 93.6% accurate without training by Mark Hachman, a Senior Editor ofPC World—a rate that is not as accurate as competing software. According to Microsoft, the rate of accuracy when trained is 99%. Hachman opined that Microsoft does not publicly discuss the feature because of the 2006 incident during the development of Windows Vista, with the result being that few users knew that documents could be dictated within Windows before the introduction ofCortana.[42]
|
https://en.wikipedia.org/wiki/Windows_Speech_Recognition
|
Anerror(from the Latinerrāre, meaning 'to wander'[1]) is an inaccurate or incorrect action, thought, or judgement.[1]
Instatistics, "error" refers to the difference between the value which has been computed and the correct value.[2]An error could result infailureor in adeviationfrom the intended performance or behavior.[3]
One reference differentiates between "error" and "mistake" as follows:
An 'error' is a deviation from accuracy or correctness. A 'mistake' is an error caused by a fault: the fault being misjudgment, carelessness, or forgetfulness. Now, say that I run a stop sign because I was in a hurry, and wasn't concentrating, and the police stop me, that is a mistake. If, however, I try to park in an area with conflicting signs, and I get a ticket because I was incorrect on my interpretation of what the signs meant, that would be an error. The first time it would be an error. The second time it would be a mistake since I should have known better.[4]
Inhuman behaviorthe norms or expectations for behavior or its consequences can be derived from the intention of the actor or from the expectations of other individuals or from a social grouping or fromsocial norms. (Seedeviance.) Gaffes and faux pas can be labels for certain instances of this kind of error. More serious departures from social norms carry labels such as misbehavior and labels from the legal system, such asmisdemeanorandcrime. Departures from norms connected to religion can have other labels, such assin.
An individual language user's deviations from standard language norms ingrammar,pronunciationandorthographyare sometimes referred to aserrors. However, in light of the role of language usage in everydaysocial classdistinctions, many feel thatlinguisticsshould restrain itself from suchprescriptivist judgmentsto avoid reinforcing dominant class value claims about what linguistic forms should and should not be used. One may distinguish various kinds of linguistic errors[5]– some, such asaphasiaorspeech disorders, where the user is unable to say what they intend to, are generally considered errors, while cases where natural, intended speech isnon-standard(as in vernacular dialects), are considered legitimate speech in scholarly linguistics, but might be considered errors in prescriptivist contexts. See alsoError analysis (linguistics).
Agaffeis usually made in asocial environmentand may come from saying something that may be true but inappropriate. It may also be an erroneous attempt to reveal a truth. Gaffes can bemalapropisms, grammatical errors or other verbal and gestural weaknesses or revelations throughbody language. Actually revealing factual or social truth through words or body language, however, can commonly result in embarrassment or, when the gaffe has negative connotations, friction between people involved.
Philosophers and psychologists interested in the nature of the gaffe includeSigmund Freud(Freudian slip) andGilles Deleuze. Deleuze, in hisThe Logic of Sense, places the gaffe in a developmental process that can culminate in stuttering.
Sportswriters and journalists commonly use "gaffe" to refer to any kind of mistake, e.g. a dropped ball (baseball error) by a player in a baseball game.
Instatistics, anerror(orresidual)is not a "mistake" but rather a difference between a computed, estimated, or measured value and the accepted true, specified, or theoretically correct value.
In science and engineering in general, an error is defined as a difference between the desired and actual performance or behavior of a system or object. This definition is the basis of operation for many types of control systems, in which error is defined as the difference between a set point and the process value. An example of this would be the thermostat in a home heating system – the operation of the heating equipment is controlled by the difference (the error) between the thermostat setting and the sensed air temperature. Another approach is related to considering a scientific hypothesis as true or false, giving rise to two types of errors: Type 1 and Type 2. The first one is when a true hypothesis is considered false, while the second is the reverse (a false one is considered true).
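As a minimal sketch of this idea in Python (the set point, readings and deadband are arbitrary illustrative values, and real controllers are more elaborate), the error term directly drives the decision to run the heater:

    def thermostat_step(set_point_c, measured_c, deadband_c=0.5):
        """Return True if the heater should run, based on the control error."""
        error = set_point_c - measured_c  # error = set point - process value
        # Act only when the error exceeds a small deadband, to avoid rapid cycling.
        return error > deadband_c

    print(thermostat_step(21.0, 19.2))  # True: air is noticeably colder than the set point
    print(thermostat_step(21.0, 21.4))  # False: at or above the set point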
Engineers seek to design devices, machines and systems in such a way as to mitigate or preferably avoid the effects of error, whether unintentional or not. Such errors in a system can be latent design errors that may go unnoticed for years, until the right set of circumstances arises that causes them to become active. Other errors in engineered systems can arise due to human error, which includes cognitive bias. Human factors engineering is often applied to designs in an attempt to minimize this type of error by making systems more forgiving or error-tolerant.
(In computational mechanics, when solving a system such as Ax = b, there is a distinction between the "error" – the inaccuracy in x – and the residual – the inaccuracy in Ax.)
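To make that distinction concrete, a small NumPy sketch (the matrix and vectors are arbitrary illustrative values) compares the error in a slightly perturbed solution x with the residual in Ax:

    import numpy as np

    A = np.array([[2.0, 1.0],
                  [1.0, 3.0]])
    b = np.array([3.0, 5.0])

    x_exact = np.linalg.solve(A, b)                 # the true solution
    x_approx = x_exact + np.array([1e-3, -1e-3])    # an approximate solution

    error = x_approx - x_exact        # inaccuracy in x
    residual = A @ x_approx - b       # inaccuracy in Ax
    print(np.linalg.norm(error), np.linalg.norm(residual))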
A notable result of engineering and scientific error is the Chernobyl disaster of 1986, a nuclear meltdown near the city of Chernobyl in present-day Ukraine, which is used as a case study in much engineering and science research.[7]
Numerical analysis provides a variety of techniques to represent (store) and compute approximations to mathematical numerical values. Errors arise from a trade-off between efficiency (space and computation time) and precision, which is limited in any case, since (using common floating-point arithmetic) only a finite number of values can be represented exactly. The discrepancy between the exact mathematical value and the stored/computed value is called the approximation error.
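A brief Python illustration of this limitation: the decimal value 0.1 has no exact binary floating-point representation, so the stored value differs from the mathematical one by a small approximation error that then propagates through arithmetic:

    from decimal import Decimal

    # The value actually stored for 0.1 in IEEE 754 double precision:
    print(Decimal(0.1))              # 0.1000000000000000055511151231257827...
    print(0.1 + 0.2 == 0.3)          # False, because of representation error
    print(abs((0.1 + 0.2) - 0.3))    # the resulting approximation error, about 5.5e-17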
In applying corrections to the trajectory or course being steered,cyberneticscan be seen as the most general approach to error and its correction for the achievement of any goal. The term was suggested byNorbert Wienerto describe a new science of control and information in the animal and the machine. Wiener's early work was onnoise.
The cyberneticianGordon Paskheld that the error that drives aservomechanismcan be seen as a difference between a pair of analogous concepts in a servomechanism: the current state and the goal state. Later he suggested error can also be seen as an innovation or a contradiction depending on the context and perspective of interacting (observer) participants. The founder ofmanagement cybernetics,Stafford Beer, applied these ideas most notably in hisviable system model.
Inbiology, anerroris said to occur when perfect fidelity is lost in the copying ofinformation. For example, in an asexually reproducing species, an error (or mutation) has occurred for eachDNAnucleotidethat differs between thechildand theparent. Many of these mutations can be harmful, but unlike other types of errors, some are neutral or even beneficial. Mutations are an important force drivingevolution. Mutations that make organisms more adapted to theirenvironmentincrease in the population throughnatural selectionas organisms with favorable mutations have moreoffspring.
Inphilately, anerrorrefers to apostage stampor piece ofpostal stationerythat exhibits a printing or production mistake that differentiates it from a normal specimen or from the intended result. Examples are stamps printed in the wrong color or missing one or more colors, printed with a vignetteinvertedin relation to its frame, produced without any perforations on one or more sides when the normal stamps are perforated, or printed on the wrong type of paper. Legitimate errors must always be produced and sold unintentionally. Such errors may or may not be scarce or rare. Adesign errormay refer to a mistake in the design of the stamp, such as a mislabeled subject, even if there are no printing or production mistakes.
Inappellate review, error typically refers to mistakes made by atrial courtor some other court of first instance in applying the law in a particularlegal case. This may involve such mistakes as improper admission ofevidence, inappropriateinstructionsto thejury, or applying the wrongstandard of proof.
A stock market error is a stock market transaction that was done due to an error, due to humanfailureorcomputer errors.
Within United States government intelligence agencies, such as the Central Intelligence Agency, error refers to intelligence error: assumptions that were once held at a senior level within intelligence agencies but have since been disproven, and are sometimes eventually listed as unclassified, and therefore made more available to the public and citizenry of the United States. The Freedom of Information Act provides American citizens with a means to read intelligence reports that were mired in error. Per the United States Central Intelligence Agency's website (as of August 2008), intelligence error is described as:
"Intelligence errors are factual inaccuracies in analysis resulting from poor or missing data; intelligence failure is systemic organizational surprise resulting from incorrect, missing, discarded, or inadequate hypotheses."[8]
Innumismatics, anerrorrefers to acoinormedalthat has a minting mistake, similar to errors found in philately. Because the U.S.Bureau of the Mintkeeps a careful eye on all potential errors, errors on U.S. coins are very few and usually very scarce. Examples of numismatic errors: extra metal attached to a coin, a clipped coin caused by the coin stamp machine stamping a second coin too early, double stamping of a coin. A coin that has been overdated, e.g. 1942/41, is also considered an error.
Inapplied linguistics, an error is an unintended deviation from the immanent rules of alanguage varietymade by asecond languagelearner. Such errors result from the learner's lack of knowledge of the correct rules of the target language variety.[9]A significant distinction is generally made betweenerrors(systematic deviations) andmistakes(speech performance errors) which are not treated the same from a linguistic viewpoint. The study of learners' errors has been the main area of investigation by linguists in the history ofsecond-language acquisitionresearch.[10]
Amedical erroris a preventable adverse effect of care ("iatrogenesis"), whether or not it is evident or harmful to the patient. This might include an inaccurate or incomplete diagnosis or treatment of a disease, injury, syndrome, behavior, infection, or other ailment.
The worderrorin medicine is used as a label for nearly all of the clinical incidents that harm patients. Medical errors are often described ashuman errorsin healthcare.[11]Whether the label is a medical error or human error, one definition used in medicine says that it occurs when ahealthcareprovider chooses an inappropriate method of care, improperly executes an appropriate method of care, or reads the wrongCT scan. It has been said that the definition should be the subject of more debate. For instance, studies of hand hygiene compliance of physicians in an ICU show that compliance varied from 19% to 85%.[12][needs update]The deaths that result from infections caught as a result of treatment providers improperly executing an appropriate method of care by not complying with known safety standards for hand hygiene are difficult to regard as innocent accidents or mistakes.
There are many types of medical error, from minor to major,[13]and causality is often poorly determined.[14][needs update]
There are many taxonomies for classifying medical errors.[15]
|
https://en.wikipedia.org/wiki/Error
|
Theapproximation errorin a given data value represents the significant discrepancy that arises when an exact, true value is compared against someapproximationderived for it. This inherent error in approximation can be quantified and expressed in two principal ways: as anabsolute error, which denotes the direct numerical magnitude of this discrepancy irrespective of the true value's scale, or as arelative error, which provides a scaled measure of the error by considering the absolute error in proportion to the exact data value, thus offering a context-dependent assessment of the error's significance.
An approximation error can manifest due to a multitude of diverse reasons. Prominent among these are limitations related to computingmachine precision, where digital systems cannot represent all real numbers with perfect accuracy, leading to unavoidable truncation or rounding. Another common source is inherentmeasurement error, stemming from the practical limitations of instruments, environmental factors, or observational processes (for instance, if the actual length of a piece of paper is precisely 4.53 cm, but the measuring ruler only permits an estimation to the nearest 0.1 cm, this constraint could lead to a recorded measurement of 4.5 cm, thereby introducing an error).
In themathematicalfield ofnumerical analysis, the crucial concept ofnumerical stabilityassociated with analgorithmserves to indicate the extent to which initial errors or perturbations present in the input data of the algorithm are likely to propagate and potentially amplify into substantial errors in the final output. Algorithms that are characterized as numerically stable are robust in the sense that they do not yield a significantly magnified error in their output even when the input is slightly malformed or contains minor inaccuracies; conversely, numerically unstable algorithms may exhibit dramatic error growth from small input changes, rendering their results unreliable.[1]
Given some true or exact value v, we formally state that an approximation v_approx estimates or represents v with the magnitude of the absolute error bounded by a positive value ε (i.e., ε > 0) if the following inequality holds:[2][3]

$|v - v_{\text{approx}}| \leq \varepsilon$
where the vertical bars, | |, unambiguously denote theabsolute valueof the difference between the true valuevand its approximationvapprox. This mathematical operation signifies the magnitude of the error, irrespective of whether the approximation is an overestimate or an underestimate.
Similarly, we state that v_approx approximates the value v with the magnitude of the relative error bounded by a positive value η (i.e., η > 0), provided v is not zero (v ≠ 0), if the subsequent inequality is satisfied:

$|v - v_{\text{approx}}| \leq \eta \cdot |v|$
This definition ensures that η acts as an upper bound on the ratio of the absolute error to the magnitude of the true value. If v ≠ 0, then the actual relative error, often also denoted by η in context (representing the calculated value rather than a bound), is precisely calculated as:

$\eta = \frac{|v - v_{\text{approx}}|}{|v|} = \frac{\varepsilon}{|v|}$
Note that the second equality above implicitly identifies ε with the absolute error |v − v_approx|, so that η = ε/|v|.
The percent error, often denoted as δ, is a common and intuitive way of expressing the relative error, effectively scaling the relative error value to a percentage for easier interpretation and comparison across different contexts:[3]

$\delta = 100\% \times \eta = 100\% \times \frac{|v - v_{\text{approx}}|}{|v|}$
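These quantities can be computed directly from their definitions. The following Python sketch (the function names are illustrative) uses the true value 50 and approximation 49.9 from the numerical example below:

    def absolute_error(v, v_approx):
        return abs(v - v_approx)

    def relative_error(v, v_approx):
        if v == 0:
            raise ValueError("relative error is undefined when the true value is zero")
        return absolute_error(v, v_approx) / abs(v)

    def percent_error(v, v_approx):
        return 100.0 * relative_error(v, v_approx)

    print(absolute_error(50, 49.9))  # about 0.1
    print(relative_error(50, 49.9))  # about 0.002
    print(percent_error(50, 49.9))   # about 0.2 (percent)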
Anerror boundrigorously defines an established upper limit on either the relative or the absolute magnitude of an approximation error. Such a bound thereby provides a formal guarantee on the maximum possible deviation of the approximation from the true value, which is critical in applications requiring known levels of precision.[4]
To illustrate these concepts with a numerical example, consider an instance where the exact, accepted value is 50, and its corresponding approximation is determined to be 49.9. In this particular scenario, the absolute error is precisely 0.1 (calculated as |50 − 49.9|), and the relative error is calculated as the absolute error 0.1 divided by the true value 50, which equals 0.002. This relative error can also be expressed as 0.2%. In a more practical setting, such as when measuring the volume of liquid in a 6 mL beaker, if the instrument reading indicates 5 mL while the true volume is actually 6 mL, the percent error for this particular measurement situation is, when rounded to one decimal place, approximately 16.7% (calculated as |(6 mL − 5 mL) / 6 mL| × 100%).
The utility of relative error becomes particularly evident when comparing approximations for numbers of widely differing magnitudes. For example, approximating 1,000 with an absolute error of 3 gives a relative error of 0.003 (0.3%), which in most scientific or engineering contexts is considered a far less accurate approximation than approximating 1,000,000 with the same absolute error of 3, where the relative error is only 0.000003 (0.0003%). This comparison highlights how relative error provides a more meaningful and contextually appropriate assessment of precision when values span different orders of magnitude.
There are two important caveats in interpreting and applying relative error. Firstly, relative error is mathematically undefined whenever the true value v is zero, because v appears in the denominator of the calculation (as detailed in the formal definition above) and division by zero is undefined. Secondly, relative error is truly meaningful only when measurements are made on a ratio scale, that is, a scale with a true, non-arbitrary zero point signifying the complete absence of the quantity being measured. If this condition is not met (e.g., with interval scales such as Celsius temperature), the calculated relative error becomes sensitive to the choice of measurement units and can be misleading. For example, when an absolute error in a temperature measurement on the Celsius scale is 1 °C and the true value is 2 °C, the relative error is 0.5 (or 50%, calculated as |1 °C / 2 °C|). If the same physical situation is expressed on the Kelvin scale (a ratio scale where 0 K is absolute zero), a 1 K absolute error with the same true value of 275.15 K (equivalent to 2 °C) gives a markedly different relative error of approximately 0.00363, or about 3.63×10−3 (calculated as |1 K / 275.15 K|). This disparity underscores the importance of the underlying measurement scale.
When comparing the behavior of these two error types, it is important to recognize their differing sensitivities to common arithmetic operations. Statements about relative errors are sensitive to the addition of a non-zero constant to the underlying true and approximated values, since such an addition alters the base value against which the error is relativized, but they are unaffected by multiplication of both values by the same non-zero constant, because the constant appears in both the numerator (the absolute error) and the denominator (the true value) and cancels out. Conversely, absolute errors are sensitive to multiplication of the underlying values by a constant (this scales the magnitude of the difference itself), but largely insensitive to the addition of a constant, since adding the same constant to both the true value and its approximation does not change their difference: (v + c) − (vapprox + c) = v − vapprox.[5]: 34
In computational complexity theory, a real value v is polynomially computable with absolute error from a given input if, for any rational number ε > 0 specifying the maximum permissible absolute error, it is possible to compute a rational number vapprox such that |v − vapprox| ≤ ε, in time polynomial in the size of the input and the encoding size of ε (the latter typically of order O(log(1/ε)) bits, reflecting the number of bits needed to represent the precision). Analogously, v is polynomially computable with relative error if, for any rational number η > 0 specifying the maximum permissible relative error, it is possible to compute a rational number vapprox such that |(v − vapprox)/v| ≤ η (assuming v ≠ 0), again in time polynomial in the size of the input and the encoding size of η (typically O(log(1/η)) bits).
It can be shown that if a value v is polynomially computable with relative error (via an algorithm we designate REL), then it is also polynomially computable with absolute error. Proof sketch: Let ε > 0 be the target absolute error. First invoke REL with relative error bound η = 1/2 to obtain a rational approximation r1 with |v − r1| ≤ |v|/2. By the reverse triangle inequality (|v| − |r1| ≤ |v − r1|), it follows that |v| ≤ 2|r1| (assuming r1 ≠ 0; if r1 = 0, the relative error condition forces v = 0, in which case vapprox = 0 already achieves any absolute error ε > 0 and we are done). Since REL runs in polynomial time, the encoding length of r1 is polynomial in the input size. Next invoke REL a second time with the smaller relative error target η′ = ε / (2|r1|), yielding a rational r2 with |v − r2| ≤ η′|v| = (ε / (2|r1|)) |v|. Using |v| ≤ 2|r1| gives |v − r2| ≤ (ε / (2|r1|)) × (2|r1|) = ε. Thus r2 approximates v with the desired absolute error ε, demonstrating that polynomial computability with relative error implies polynomial computability with absolute error.[5]: 34
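The reduction in the proof sketch can also be written as a short program. The sketch below assumes a hypothetical oracle rel(eta) that returns an approximation of v with relative error at most eta (standing in for the algorithm called REL above); the toy oracle for v = π is included only so the example runs.

```python
import math

def absolute_from_relative(rel, eps):
    """Turn a relative-error oracle into an absolute-error approximation,
    following the two-call reduction described in the text."""
    r1 = rel(0.5)                   # first call: relative error bound 1/2
    if r1 == 0:                     # then v must be 0, so 0 achieves any eps
        return 0
    # From |v - r1| <= |v|/2 it follows that |v| <= 2*|r1|.
    eta_prime = eps / (2 * abs(r1))
    return rel(eta_prime)           # second call: result satisfies |v - r2| <= eps

# Toy oracle for v = math.pi (illustration only; it returns a float rather
# than a rational, and its relative error is exactly eta/2 < eta).
def toy_rel(eta):
    return math.pi * (1 + eta / 2)

approx = absolute_from_relative(toy_rel, 1e-4)
print(abs(math.pi - approx) <= 1e-4)   # True
```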
The reverse implication, that polynomial computability with absolute error implies polynomial computability with relative error, is not true in general without additional conditions or assumptions. However, a significant special case exists: if a positive lower bound b on |v| (i.e., |v| > b > 0) can itself be computed in polynomial time, and v is polynomially computable with absolute error (via an algorithm designated ABS), then v is also polynomially computable with relative error. One simply invokes ABS with the target absolute error εtarget = ηb, where η is the desired relative error. The resulting vapprox satisfies |v − vapprox| ≤ ηb; dividing by |v| (which is non-zero) gives |(v − vapprox)/v| ≤ ηb/|v| < η, since |v| > b implies b/|v| < 1, which is the desired bound on the relative error.
An algorithm that, for every rational number η > 0, computes a rational vapprox approximating v with relative error at most η, and does so in time polynomial in both the size of the input and in 1/η (rather than in log(1/η), the stronger requirement that typically allows efficient computation even when η is extremely small), is known as a fully polynomial-time approximation scheme (FPTAS). The dependence on 1/η rather than log(1/η) is a defining characteristic of an FPTAS and distinguishes it from weaker approximation schemes.
In the context of most indicating measurement instruments, such as analog or digital voltmeters, pressure gauges, and thermometers, the specified accuracy is frequently guaranteed by their manufacturers as a certain percentage of the instrument's full-scale reading capability, rather than as a percentage of the actual reading. The defined boundaries or limits of these permissible deviations from the true or specified values under operational conditions are commonly referred to as limiting errors or, alternatively, guarantee errors. This method of specifying accuracy implies that the maximum possible absolute error can be larger when measuring values towards the higher end of the instrument's scale, while the relative error with respect to the full-scale value itself remains constant across the range. Consequently, the relative error with respect to the actual measured value can become quite large for readings at the lower end of the instrument's scale.[6]
The definitions of absolute and relative error given above for scalar (one-dimensional) values extend naturally to the case where the quantity of interest v and its approximation vapprox are n-dimensional vectors, matrices, or, more generally, elements of a normed vector space. This generalization is achieved by replacing the absolute value function (which measures magnitude for scalars) with an appropriate vector or matrix norm. Common examples include the L1 norm (sum of absolute component values), the L2 norm (Euclidean norm, the square root of the sum of squared components), and the L∞ norm (maximum absolute component value). These norms quantify the "distance" between the true vector (or matrix) and its approximation, allowing analogous definitions of absolute and relative error in higher-dimensional contexts.[7]
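As an illustration of this normed-space generalisation, the sketch below computes absolute and relative errors for a pair of small vectors under the L1, L2 and L∞ norms using NumPy; the example vectors are made up.

```python
import numpy as np

v        = np.array([1.0, 2.0, 3.0])
v_approx = np.array([1.1, 1.9, 3.05])

# ord = 1, 2 and inf select the L1, L2 and L-infinity vector norms.
for order, name in [(1, "L1"), (2, "L2"), (np.inf, "Linf")]:
    abs_err = np.linalg.norm(v - v_approx, ord=order)
    rel_err = abs_err / np.linalg.norm(v, ord=order)
    print(f"{name}: absolute error {abs_err:.3f}, relative error {rel_err:.4f}")
```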
|
https://en.wikipedia.org/wiki/Approximation_error
|
Clinical data management (CDM) is a critical process in clinical research, which leads to generation of high-quality, reliable, and statistically sound data from clinical trials.[1] Clinical data management ensures collection, integration and availability of data at appropriate quality and cost. It also supports the conduct, management and analysis of studies across the spectrum of clinical research as defined by the National Institutes of Health (NIH). The ultimate goal of CDM is to ensure that conclusions drawn from research are well supported by the data. Achieving this goal protects public health and increases confidence in marketed therapeutics.[citation needed]
Typical job profiles in CDM include clinical researcher, clinical research associate, and clinical research coordinator.
The clinical data manager plays a key role in the setup and conduct of a clinical trial. The data collected during a clinical trial form the basis of subsequent safety andefficacyanalysis which in turn drive decision making on product development in the pharmaceutical industry. The clinical data manager is involved in early discussions aboutdata collectionoptions and then oversees development of data collection tools based on the clinical trial protocol. Once subject enrollment begins, the data manager ensures that data are collected, validated, complete, and consistent. The clinical data manager liaises with other data providers (e.g. a central laboratory processing blood samples collected) and ensures that such data are transmitted securely and are consistent with other data collected in the clinical trial. At the completion of the clinical trial, the clinical data manager ensures that all data expected to be captured have been accounted for and that all data management activities are complete. At this stage, the data are declared final (terminology varies, but common descriptions are "Database Lock", “Data Lock” and "Database Freeze"), and the clinical data manager transfers data forstatistical analysis.
Standard operating procedures(SOPs) describe the process to be followed in conducting data management activities and support the obligation to follow applicable laws and guidelines (e.g. ICH GCP and21CFR Part 11) in the conduct of data management activities.
The data management plan describes the activities to be conducted in the course of processing data. Key topics to cover include the SOPs to be followed, the clinical data management system (CDMS) to be used, description of data sources, data handling processes, data transfer formats and processes, and quality control procedures.
Thecase report form(CRF) is the data collection tool for the clinical trial and can be paper or electronic. Paper CRFs will be printed, often using No Carbon Required paper, and shipped to the investigative sites conducting theclinical trialfor completion after which they are couriered back to Data Management. Electronic CRFs enable data to be typed directly into fields using a computer and transmitted electronically to Data Management.
Design of CRFs needs to take into account the information required to be collected by the clinical trial protocol and intended to be included in statistical analysis. Where available, standard CRF pages may be re-used for collection of data which is common across most clinical trials, e.g. subject demographics.[2][3] Apart from CRF design, electronic trial design also includes edit check programming. Edit checks are used to fire a query message when discrepant data are entered, to map certain data points from one CRF to another, and to calculate derived fields such as the subject's age or BMI. Edit checks help investigators correct data at the moment of entry and thereby improve the quality of the clinical trial data.
For a clinical trial utilizing an electronic CRF, database design and CRF design are closely linked. The electronic CRF enables entry of data into an underlying relational database. For a clinical trial utilizing a paper CRF, the relational database is built separately. In both cases, the relational database allows entry of all data captured on the Case report form.
All computer systems used in the processing and management of clinical trial data must undergo validation testing to ensure that they perform as intended and that results are reproducible.
The Clinical Data Interchange Standards Consortium leads the development of global, system independent data standards which are now commonly used as the underlying data structures for clinical trial data. These describe parameters such as the name, length and format of each data field (variable) in the relational database.
Validation Rules are electronic checks defined in advance which ensure the completeness and consistency of the clinical trial data.
Once an electronic CRF (eCRF) is built, the clinical data manager (and other parties as appropriate) conducts User Acceptance Testing (UAT). The tester enters test data into the eCRF and records whether it functions as intended. UAT is repeated until all identified issues are resolved.
When an electronic CRF is in use, data entry is carried out at the investigative site where theclinical trialis conducted by site staff who have been granted appropriate access to do so.
When using a paper CRF the pages are entered by data entry operators. Best practice is for a first pass data entry to be completed followed by a second pass or verification step by an independent operator. Any discrepancies between the first and second pass may be resolved such that the data entered is a true reflection of that recorded on the CRF. Where the operator is unable to read the entry the clinical data manager should be notified so that the entry may be clarified with the person who completed the CRF.
Data validation is the application of validation rules to the data. For electronic CRFs the validation rules may be applied in real time at the point of entry. Offline validation may still be required (e.g. for cross checks between data types).
Where data entered does not pass validation rules then a data query may be issued to the investigative site where the clinical trial is conducted to request clarification of the entry. Data queries must not be leading (i.e. they must not suggest the correction that should be made). For electronic CRFs only the site staff with appropriate access may modify data entries. For paper CRFs, the clinical data manager applies the data query response to the database and a copy of the data query is retained at the investigative site.
When an item or variable has an error or a query raised against it, it is said to have a “discrepancy” or “query”.
All EDC systems include a discrepancy management tool, also referred to as an “edit check” or “validation check”, which is programmed using a programming language (e.g. SAS, PL/SQL, C#, SQL, Python, etc.).
A query is an error generated when a validation check detects a problem with the data. Validation checks are run automatically whenever a page is saved (“submitted”) and can identify problems with a single variable, between two or more variables on the same eCRF page, or between variables on different pages. A variable can have multiple validation checks associated with it.
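As a rough illustration of how such a check might look, the following Python sketch flags discrepant values on a single eCRF page and returns the corresponding query messages. The field names and rules are invented for illustration; real EDC systems define edit checks in their own tooling, and query texts must not be leading.

```python
from datetime import date

def run_edit_checks(record):
    """Return a list of query messages for one (hypothetical) eCRF page."""
    queries = []
    if record["visit_date"] < record["dob"]:
        queries.append("Visit date is earlier than date of birth - please clarify.")
    if not (20 <= record["weight_kg"] <= 300):
        queries.append("Weight is outside the expected range (20-300 kg) - please verify.")
    return queries

page = {"dob": date(1980, 5, 1), "visit_date": date(1979, 12, 31), "weight_kg": 72}
for query in run_edit_checks(page):
    print(query)   # each message would be raised as a query against the entered data
```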
Errors can be resolved in several ways:
Samples collected during a clinical trial may be sent to a single central laboratory for analysis. The clinical data manager liaises with the central laboratory and agrees on data formats and transfer schedules in a Data Transfer Agreement. The sample collection date and time may be reconciled against the CRF to ensure that all samples collected have been analysed.
Analysis of clinical trial data may be carried out by laboratories, image processing specialists or other third parties. The clinical data manager liaises with such data providers and agrees on data formats and transfer schedules. Data may be reconciled against the CRF to ensure consistency.
The CRF collects adverse events reported during the conduct of the clinical trial however there is a separate process which ensures thatserious adverse eventsare reported quickly. The clinical data manager must ensure that data is reconciled between these processes.
Where the subject is required to record data (e.g. daily symptoms) then a diary is provided for completion. Data management of this data requires a different approach to CRF data as, for example, it is generally not practical to raise data queries.
Patient diaries may be developed in either paper or electronic (eDiary) formats. Such eDiaries generally take the form of a handheld device which enables the subject to enter the required data and transmits this data to a centralised server.
Once all expected data is accounted for, all data queries closed, all external data received and reconciled and all other data management activities complete the database may be finalized.
Typical reports generated and used by the clinical data manager include:
Quality Controlis applied at various stages in the Clinical data management process and is normally mandated by SOP.
|
https://en.wikipedia.org/wiki/Clinical_data_management
|
Data analysisis the process of inspecting,cleansing,transforming, andmodelingdatawith the goal of discovering useful information, informing conclusions, and supportingdecision-making.[1]Data analysis has multiple facets and approaches, encompassing diverse techniques under a variety of names, and is used in different business, science, and social science domains.[2]In today's business world, data analysis plays a role in making decisions more scientific and helping businesses operate more effectively.[3]
Data miningis a particular data analysis technique that focuses on statistical modeling and knowledge discovery for predictive rather than purely descriptive purposes, whilebusiness intelligencecovers data analysis that relies heavily on aggregation, focusing mainly on business information. In statistical applications, data analysis can be divided intodescriptive statistics,exploratory data analysis(EDA), andconfirmatory data analysis(CDA).[4]EDA focuses on discovering new features in the data while CDA focuses on confirming or falsifying existinghypotheses.[5]Predictive analyticsfocuses on the application of statistical models for predictive forecasting or classification, whiletext analyticsapplies statistical, linguistic, and structural techniques to extract and classify information from textual sources, a variety ofunstructured data. All of the above are varieties of data analysis.[6]
Data analysis is a process for obtaining raw data and subsequently converting it into information useful for decision-making by users.[1] Statistician John Tukey defined data analysis in 1961 as:
"Procedures for analyzing data, techniques for interpreting the results of such procedures, ways of planning the gathering of data to make its analysis easier, more precise or more accurate, and all the machinery and results of (mathematical) statistics which apply to analyzing data."[7]
There are several phases of data analysis, and they are iterative, in that feedback from later phases may result in additional work in earlier phases.[8]
The data is necessary as inputs to the analysis, which is specified based upon the requirements of those directing the analytics (or customers, who will use the finished product of the analysis).[9]The general type of entity upon which the data will be collected is referred to as anexperimental unit(e.g., a person or population of people). Specific variables regarding a population (e.g., age and income) may be specified and obtained. Data may be numerical or categorical (i.e., a text label for numbers).[8]
Data may be collected from a variety of sources.[10] Lists of data sources are available for study and research. The requirements may be communicated by analysts to custodians of the data, such as information technology personnel within an organization.[11] Data collection or data gathering is the process of gathering and measuring information on targeted variables in an established system, which then enables one to answer relevant questions and evaluate outcomes. The data may also be collected from sensors in the environment, including traffic cameras, satellites, recording devices, etc. It may also be obtained through interviews, downloads from online sources, or reading documentation.[8]
Data integration is a precursor to data analysis: data, when initially obtained, must be processed or organized for analysis. For instance, this may involve placing data into rows and columns in a table format (known as structured data) for further analysis, often through the use of spreadsheet (e.g., Excel) or statistical software.[8]
Once processed and organized, the data may be incomplete, contain duplicates, or contain errors.[12]The need fordata cleaningwill arise from problems in the way that the data is entered and stored.[12][13]Data cleaning is the process of preventing and correcting these errors. Common tasks include record matching, identifying inaccuracy of data, overall quality of existing data, deduplication, and column segmentation.[14][15]
Such data problems can also be identified through a variety of analytical techniques. For example, with financial information, the totals for particular variables may be compared against separately published numbers that are believed to be reliable.[16] Unusual amounts, above or below predetermined thresholds, may also be reviewed. There are several types of data cleaning that depend upon the type of data in the set; this could be phone numbers, email addresses, employers, or other values.[17] Quantitative outlier-detection methods can be used to remove data that appear likely to have been entered incorrectly. Text data spell checkers can be used to lessen the amount of mistyped words. However, it is harder to tell if the words are contextually (i.e., semantically and idiomatically) correct.
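A brief, hypothetical example of two of the cleaning tasks mentioned above, deduplication and a simple threshold rule, is sketched below using pandas; the column names and the threshold are assumptions made for illustration.

```python
import pandas as pd

df = pd.DataFrame({
    "invoice_id": [1001, 1001, 1002, 1003],
    "amount":     [250.0, 250.0, 99999.0, 120.0],
})

deduplicated = df.drop_duplicates(subset="invoice_id").copy()  # record matching / deduplication
deduplicated["suspect"] = deduplicated["amount"] > 10_000      # flag values above a preset threshold
print(deduplicated)
```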
Once the datasets are cleaned, they can then begin to be analyzed usingexploratory data analysis. The process of data exploration may result in additional data cleaning or additional requests for data; thus, the initialization of theiterative phasesmentioned above.[18]Descriptive statistics, such as the average, median, and standard deviation, are often used to broadly characterize the data.[19][20]Data visualizationis also used, in which the analyst is able to examine the data in a graphical format in order to obtain additional insights about messages within the data.[8]
Mathematical formulasormodels(also known asalgorithms), may be applied to the data in order to identify relationships among the variables; for example, checking forcorrelationand by determining whether or not there is the presence ofcausality. In general terms, models may be developed to evaluate a specific variable based on other variable(s) contained within the dataset, with someresidual errordepending on the implemented model's accuracy (e.g., Data = Model + Error).[21]
Inferential statisticsutilizes techniques that measure the relationships between particular variables.[22]For example,regression analysismay be used to model whether a change in advertising (independent variable X), provides an explanation for the variation in sales (dependent variable Y), i.e. is Y a function of X? This can be described as (Y=aX+b+ error), where the model is designed such that (a) and (b) minimize the error when the model predictsYfor a given range of values ofX.[23]
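For instance, the simple model Y = aX + b + error can be fitted by ordinary least squares in a few lines; the advertising and sales figures below are synthetic and serve only to show the mechanics.

```python
import numpy as np

X = np.array([10, 20, 30, 40, 50], dtype=float)   # e.g. advertising spend
Y = np.array([25, 44, 66, 83, 105], dtype=float)  # e.g. observed sales

a, b = np.polyfit(X, Y, deg=1)     # chooses a and b to minimise the squared error
residuals = Y - (a * X + b)        # the "error" term: data = model + error
print(a, b, residuals)
```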
A data product is a computer application that takes data inputs and generates outputs, feeding them back into the environment.[24] It may be based on a model or algorithm. For instance, an application might analyze data about customer purchase history and use the results to recommend other purchases the customer might enjoy.[25][8]
Once data is analyzed, it may be reported in many formats to the users of the analysis to support their requirements.[27]The users may have feedback, which results in additional analysis.
When determining how to communicate the results, the analyst may consider implementing a variety of data visualization techniques to help communicate the message more clearly and efficiently to the audience. Data visualization usesinformation displays(graphics such as, tables and charts) to help communicate key messages contained in the data.Tablesare a valuable tool by enabling the ability of a user to query and focus on specific numbers; while charts (e.g., bar charts or line charts), may help explain the quantitative messages contained in the data.[28]
Stephen Fewdescribed eight types of quantitative messages that users may attempt to communicate from a set of data, including the associated graphs.[29][30]
AuthorJonathan Koomeyhas recommended a series of best practices for understanding quantitative data. These include:[16]
For the variables under examination, analysts typically obtaindescriptive statistics, such as the mean (average),median, andstandard deviation. They may also analyze thedistributionof the key variables to see how the individual values cluster around the mean.[16]
McKinsey and Companynamed a technique for breaking down a quantitative problem into its component parts called theMECE principle. MECE means "Mutually Exclusive and Collectively Exhaustive".[36]Each layer can be broken down into its components; each of the sub-components must bemutually exclusiveof each other andcollectivelyadd up to the layer above them. For example, profit by definition can be broken down into total revenue and total cost.[37]
Analysts may use robust statistical measurements to solve certain analytical problems.Hypothesis testingis used when a particular hypothesis about the true state of affairs is made by the analyst and data is gathered to determine whether that hypothesis is true or false.[38]For example, the hypothesis might be that "Unemployment has no effect on inflation", which relates to an economics concept called thePhillips Curve.[39]Hypothesis testing involves considering the likelihood ofType I and type II errors, which relate to whether the data supports accepting or rejecting the hypothesis.[40]
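A minimal illustration of such a test is sketched below with SciPy, using a made-up series of unemployment and inflation figures rather than real macroeconomic data; the null hypothesis is that the two series are not (linearly) associated.

```python
import numpy as np
from scipy import stats

unemployment = np.array([4.1, 4.5, 5.0, 5.6, 6.2, 6.8, 7.3])   # synthetic values
inflation    = np.array([3.9, 3.5, 3.2, 2.8, 2.1, 1.9, 1.4])   # synthetic values

# H0: no linear association between unemployment and inflation.
r, p = stats.pearsonr(unemployment, inflation)
print(r, p)   # a small p-value would argue against the null hypothesis
```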
Regression analysismay be used when the analyst is trying to determine the extent to which independent variable X affects dependent variable Y (e.g., "To what extent do changes in the unemployment rate (X) affect the inflation rate (Y)?").[41]
Necessary condition analysis(NCA) may be used when the analyst is trying to determine the extent to which independent variable X allows variable Y (e.g., "To what extent is a certain unemployment rate (X) necessary for a certain inflation rate (Y)?").[41]Whereas (multiple) regression analysis uses additive logic where each X-variable can produce the outcome and the X's can compensate for each other (they are sufficient but not necessary),[42]necessary condition analysis (NCA) uses necessity logic, where one or more X-variables allow the outcome to exist, but may not produce it (they are necessary but not sufficient). Each single necessary condition must be present and compensation is not possible.[43]
Users may have particular data points of interest within a data set, as opposed to the general messaging outlined above. Such low-level user analytic activities are presented in the following table. The taxonomy can also be organized by three poles of activities: retrieving values, finding data points, and arranging data points.[44][45][46]
- How long is the movie Gone with the Wind?
- What comedies have won awards?
- Which funds underperformed the S&P 500?
- What is the gross income of all stores combined?
- How many manufacturers of cars are there?
- What director/film has won the most awards?
- What Marvel Studios film has the most recent release date?
- Rank the cereals by calories.
- What is the range of car horsepowers?
- What actresses are in the data set?
- What is the age distribution of shoppers?
- Are there any outliers in protein?
- Is there a cluster of typical film lengths?
- Is there a correlation between country of origin and MPG?
- Do different genders have a preferred payment method?
- Is there a trend of increasing film length over the years?
Barriers to effective analysis may exist among the analysts performing the data analysis or among the audience. Distinguishing fact from opinion, cognitive biases, and innumeracy are all challenges to sound data analysis.[47]
You are entitled to your own opinion, but you are not entitled to your own facts.
Effective analysis requires obtaining relevantfactsto answer questions, support a conclusion or formalopinion, or testhypotheses.[48]Facts by definition are irrefutable, meaning that any person involved in the analysis should be able to agree upon them. The auditor of a public company must arrive at a formal opinion on whether financial statements of publicly traded corporations are "fairly stated, in all material respects".[49]This requires extensive analysis of factual data and evidence to support their opinion.
There are a variety ofcognitive biasesthat can adversely affect analysis. For example,confirmation biasis the tendency to search for or interpret information in a way that confirms one's preconceptions.[50]In addition, individuals may discredit information that does not support their views.[51]
Analysts may be trained specifically to be aware of these biases and how to overcome them.[52]In his bookPsychology of Intelligence Analysis, retired CIA analystRichards Heuerwrote that analysts should clearly delineate their assumptions and chains of inference and specify the degree and source of the uncertainty involved in the conclusions.[53]He emphasized procedures to help surface and debate alternative points of view.[54]
Effective analysts are generally adept with a variety of numerical techniques. However, audiences may not have such literacy with numbers ornumeracy; they are said to be innumerate.[55]Persons communicating the data may also be attempting to mislead or misinform, deliberately using bad numerical techniques.[56]
For example, whether a number is rising or falling may not be the key factor. More important may be the number relative to another number, such as the size of government revenue or spending relative to the size of the economy (GDP) or the amount of cost relative to revenue in corporate financial statements.[57]This numerical technique is referred to as normalization[16]or common-sizing. There are many such techniques employed by analysts, whether adjusting for inflation (i.e., comparing real vs. nominal data) or considering population increases, demographics, etc.[58]
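A toy example of this kind of normalization (common-sizing) is sketched below; the spending and GDP figures are invented solely to show the calculation.

```python
spending = {"2019": 4_400, "2020": 6_550, "2021": 6_820}     # hypothetical outlays, billions
gdp      = {"2019": 21_400, "2020": 21_000, "2021": 23_300}  # hypothetical GDP, billions

for year in spending:
    share = 100 * spending[year] / gdp[year]
    print(f"{year}: spending is {share:.1f}% of GDP")   # the normalized (common-sized) figure
```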
Analysts may also analyze data under different assumptions or scenarios. For example, when analysts performfinancial statement analysis, they will often recast the financial statements under different assumptions to help arrive at an estimate of future cash flow, which they then discount to present value based on some interest rate, to determine the valuation of the company or its stock.[59]Similarly, the CBO analyzes the effects of various policy options on the government's revenue, outlays and deficits, creating alternative future scenarios for key measures.[60]
Analytics is the "extensive use of data, statistical and quantitative analysis, explanatory and predictive models, and fact-based management to drive decisions and actions." It is a subset ofbusiness intelligence, which is a set of technologies and processes that uses data to understand and analyze business performance to drive decision-making.[61]
Ineducation, most educators have access to adata systemfor the purpose of analyzing student data.[62]These data systems present data to educators in anover-the-counter dataformat (embedding labels, supplemental documentation, and a help system and making key package/display and content decisions) to improve the accuracy of educators' data analyses.[63]
This section contains rather technical explanations that may assist practitioners but are beyond the typical scope of a Wikipedia article.[64]
The most important distinction between the initial data analysis phase and the main analysis phase is that during initial data analysis one refrains from any analysis that is aimed at answering the original research question. The initial data analysis phase is guided by the following four questions:[65]
The quality of the data should be checked as early as possible. Data quality can be assessed in several ways, using different types of analysis: frequency counts, descriptive statistics (mean, standard deviation, median), and checks of normality (skewness, kurtosis, frequency histograms); where values are missing, imputation may be needed.[66]
The quality of the measurement instruments should only be checked during the initial data analysis phase when this is not the focus or research question of the study.[70] One should check whether the structure of the measurement instruments corresponds to the structure reported in the literature.
There are two ways to assess measurement quality:
After assessing the quality of the data and of the measurements, one might decide to impute missing data, or to perform initial transformations of one or more variables, although this can also be done during the main analysis phase.[73]Possible transformations of variables are:[74]
One should check the success of therandomizationprocedure, for instance by checking whether background and substantive variables are equally distributed within and across groups. If the study did not need or use a randomization procedure, one should check the success of the non-random sampling, for instance by checking whether all subgroups of the population of interest are represented in the sample.[75]Other possible data distortions that should be checked are:
In any report or article, the structure of the sample must be accurately described. It is especially important to exactly determine the size of the subgroup when subgroup analyses will be performed during the main analysis phase.[77]The characteristics of the data sample can be assessed by looking at:
During the final stage, the findings of the initial data analysis are documented, and necessary, preferable, and possible corrective actions are taken. Also, the original plan for the main data analyses can and should be specified in more detail or rewritten. In order to do this, several decisions about the main data analyses can and should be made:
Several analyses can be used during the initial data analysis phase:[80]
It is important to take the measurement levels of the variables into account for the analyses, as special statistical techniques are available for each level:[81]
Nonlinear analysis is often necessary when the data is recorded from anonlinear system. Nonlinear systems can exhibit complex dynamic effects includingbifurcations,chaos,harmonicsandsubharmonicsthat cannot be analyzed using simple linear methods. Nonlinear data analysis is closely related tononlinear system identification.[82]
In the main analysis phase, analyses aimed at answering the research question are performed as well as any other relevant analysis needed to write the first draft of the research report.[83]
In the main analysis phase, either an exploratory or confirmatory approach can be adopted. Usually the approach is decided before data is collected.[84]In an exploratory analysis no clear hypothesis is stated before analysing the data, and the data is searched for models that describe the data well.[85]In a confirmatory analysis, clear hypotheses about the data are tested.[86]
Exploratory data analysis should be interpreted carefully. When testing multiple models at once there is a high chance of finding at least one of them to be significant, but this can be due to a type 1 error. It is important to always adjust the significance level when testing multiple models with, for example, a Bonferroni correction.[87] Also, one should not follow up an exploratory analysis with a confirmatory analysis in the same dataset.[88] An exploratory analysis is used to find ideas for a theory, but not to test that theory as well.[88] When a model is found exploratorily in a dataset, then following up that analysis with a confirmatory analysis in the same dataset could simply mean that the results of the confirmatory analysis are due to the same type 1 error that produced the exploratory model in the first place.[88] The confirmatory analysis therefore will not be more informative than the original exploratory analysis.[89]
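A Bonferroni correction of the kind mentioned above amounts to comparing each p-value against alpha divided by the number of tests; the p-values in the sketch below are made up.

```python
alpha = 0.05
p_values = [0.030, 0.004, 0.200, 0.018]   # hypothetical results from four models
m = len(p_values)

threshold = alpha / m                         # Bonferroni-corrected significance level
significant = [p <= threshold for p in p_values]
print(threshold, significant)                 # only p = 0.004 survives the correction
```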
It is important to obtain some indication about how generalizable the results are.[90]While this is often difficult to check, one can look at the stability of the results. Are the results reliable and reproducible? There are two main ways of doing that.
Free software packages for data analysis include:
The typical data analysis workflow involves collecting data, running analyses, creating visualizations, and writing reports. However, this workflow presents challenges, including a separation between analysis scripts and data, as well as a gap between analysis and documentation. Often, the correct order of running scripts is only described informally or resides in the data scientist's memory. The potential for losing this information creates issues for reproducibility.
To address these challenges, it is essential to document analysis script content and workflow. Additionally, overall documentation is crucial, as well as providing reports that are understandable by both machines and humans, and ensuring accurate representation of the analysis workflow even as scripts evolve.[97]
Different companies and organizations hold data analysis contests to encourage researchers to utilize their data or to solve a particular question using data analysis. A few examples of well-known international data analysis contests are:
|
https://en.wikipedia.org/wiki/Data_analysis
|
A data quality firewall is the use of software to protect a computer system from the entry of erroneous, duplicated or poor quality data. Gartner estimated in 2017 that poor quality data cost organizations an average of $15 million a year.[1] Older technology required the tight integration of data quality software, whereas this can now be accomplished by loosely coupling technology in a service-oriented architecture.
A data quality firewall guarantees database accuracy and consistency. This application ensures that only valid and high-quality data enter the system, which means that it indirectly protects the database from damage; this is important since database integrity and security are essential. A data quality firewall provides real-time feedback about the quality of the data submitted to the system.
The main goal of a data quality process is to capture erroneous and invalid data, process it, eliminate duplicates and, lastly, export valid data to the user while storing a back-up copy in the database. A data quality firewall acts similarly to a network security firewall, which allows packets to pass only through specified ports: it filters out data that present quality issues and allows the remaining, valid data to be stored in the database. In other words, the firewall sits between the data source and the database and works throughout the extraction, processing and loading of data.
It is necessary that data streams be subject to accurate validity checks before they can be considered correct or trustworthy. Such checks may be temporal, formal, logical, or forecasting-based in nature.
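A minimal sketch of this filtering behaviour is given below: incoming records pass through a few validity checks before being loaded, and rejected records are set aside rather than entering the database. The record fields and rules are illustrative assumptions, not the interface of any particular product.

```python
def is_valid(record):
    checks = [
        record.get("email", "").count("@") == 1,   # formal check
        record.get("amount", 0) >= 0,              # logical check
        record.get("timestamp") is not None,       # temporal check
    ]
    return all(checks)

incoming = [
    {"email": "a@example.com", "amount": 10.0, "timestamp": "2024-01-02T10:00"},
    {"email": "broken-address", "amount": -5.0, "timestamp": None},
]

accepted = [r for r in incoming if is_valid(r)]      # would be loaded into the database
rejected = [r for r in incoming if not is_valid(r)]  # quarantined, with feedback to the source
print(len(accepted), len(rejected))
```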
|
https://en.wikipedia.org/wiki/Data_quality_firewall
|
Data scienceis aninterdisciplinaryacademic field[1]that usesstatistics,scientific computing,scientific methods, processing,scientific visualization,algorithmsand systems to extract or extrapolateknowledgefrom potentially noisy,structured, orunstructured data.[2]
Data science also integrates domain knowledge from the underlying application domain (e.g., natural sciences, information technology, and medicine).[3]Data science is multifaceted and can be described as a science, a research paradigm, a research method, a discipline, a workflow, and a profession.[4]
Data science is "a concept to unifystatistics,data analysis,informatics, and their relatedmethods" to "understand and analyze actualphenomena" withdata.[5]It uses techniques and theories drawn from many fields within the context ofmathematics, statistics,computer science,information science, anddomain knowledge.[6]However, data science is different fromcomputer scienceand information science.Turing AwardwinnerJim Grayimagined data science as a "fourth paradigm" of science (empirical,theoretical,computational, and now data-driven) and asserted that "everything about science is changing because of the impact ofinformation technology" and thedata deluge.[7][8]
Adata scientistis a professional who creates programming code and combines it with statistical knowledge to summarize data.[9]
Data science is aninterdisciplinaryfield[10]focused onextracting knowledgefrom typicallylargedata setsand applying the knowledge from that data to solve problems in other application domains. The field encompasses preparing data for analysis, formulating data science problems,analyzingdata, and summarizing these findings. As such, it incorporates skills from computer science, mathematics,data visualization,graphic design,communication, andbusiness.[11]
Vasant Dharwrites that statistics emphasizes quantitative data and description. In contrast, data science deals with quantitative and qualitative data (e.g., from images, text, sensors, transactions, customer information, etc.) and emphasizes prediction and action.[12]Andrew GelmanofColumbia Universityhas described statistics as a non-essential part of data science.[13]Stanford professorDavid Donohowrites that data science is not distinguished from statistics by the size of datasets or use of computing and that many graduate programs misleadingly advertise their analytics and statistics training as the essence of a data-science program. He describes data science as an applied field growing out of traditional statistics.[14]
In 1962,John Tukeydescribed a field he called "data analysis", which resembles modern data science.[14]In 1985, in a lecture given to the Chinese Academy of Sciences in Beijing,C. F. Jeff Wuused the term "data science" for the first time as an alternative name for statistics.[15]Later, attendees at a 1992 statistics symposium at theUniversity of Montpellier IIacknowledged the emergence of a new discipline focused on data of various origins and forms, combining established concepts and principles of statistics and data analysis with computing.[16][17]
The term "data science" has been traced back to 1974, whenPeter Naurproposed it as an alternative name to computer science.[6]In 1996, the International Federation of Classification Societies became the first conference to specifically feature data science as a topic.[6]However, the definition was still in flux. After the 1985 lecture at the Chinese Academy of Sciences in Beijing, in 1997C. F. Jeff Wuagain suggested that statistics should be renamed data science. He reasoned that a new name would help statistics shed inaccurate stereotypes, such as being synonymous with accounting or limited to describing data.[18]In 1998, Hayashi Chikio argued for data science as a new, interdisciplinary concept, with three aspects: data design, collection, and analysis.[17]
In 2012, technologistsThomas H. DavenportandDJ Patildeclared "Data Scientist: The Sexiest Job of the 21st Century",[19]a catchphrase that was picked up even by major-city newspapers like theNew York Times[20]and theBoston Globe.[21]A decade later, they reaffirmed it, stating that "the job is more in demand than ever with employers".[22]
The modern conception of data science as an independent discipline is sometimes attributed toWilliam S. Cleveland.[23]In 2014, theAmerican Statistical Association's Section on Statistical Learning and Data Mining changed its name to the Section on Statistical Learning and Data Science, reflecting the ascendant popularity of data science.[24]
The professional title of "data scientist" has been attributed toDJ PatilandJeff Hammerbacherin 2008.[25]Though it was used by theNational Science Boardin their 2005 report "Long-Lived Digital Data Collections: Enabling Research and Education in the 21st Century", it referred broadly to any key role in managing a digitaldata collection.[26]
Data analysis typically involves working with structured datasets to answer specific questions or solve specific problems. This can involve tasks such asdata cleaninganddata visualizationto summarize data and develop hypotheses about relationships betweenvariables.Data analyststypically use statistical methods to test these hypotheses and draw conclusions from the data.[27]
Data science involves working with larger datasets that often require advanced computational and statistical methods to analyze. Data scientists often work withunstructured datasuch as text or images and usemachine learningalgorithms to build predictive models. Data science often usesstatistical analysis,data preprocessing, andsupervised learning.[28][29]
Cloud computingcan offer access to large amounts of computational power andstorage.[30]Inbig data, where volumes of information are continually generated and processed, these platforms can be used to handle complex and resource-intensive analytical tasks.[31]
Some distributed computing frameworks are designed to handle big data workloads. These frameworks can enable data scientists to process and analyze large datasets in parallel, which can reduce processing times.[32]
Data science involves collecting, processing, and analyzing data which often includes personal and sensitive information. Ethical concerns include potential privacy violations, bias perpetuation, and negative societal impacts.[33][34]
Machine learning models can amplify existing biases present in training data, leading to discriminatory or unfair outcomes.[35][36]
|
https://en.wikipedia.org/wiki/Data_science
|
Data and information visualization(data viz/visorinfo viz/vis)[2]is the practice ofdesigningand creatinggraphicor visualrepresentationsof a large amount[3]of complex quantitative and qualitativedataandinformationwith the help of static, dynamic or interactive visual items. Typically based on data and information collected from a certaindomain of expertise, these visualizations are intended for a broader audience to help them visually explore and discover, quickly understand, interpret and gain important insights into otherwise difficult-to-identify structures, relationships, correlations, local and global patterns, trends, variations, constancy, clusters, outliers and unusual groupings within data (exploratory visualization).[4][5][6]When intended for the general public (mass communication) to convey a concise version of known, specific information in a clear and engaging manner (presentationalorexplanatory visualization),[4]it is typically calledinformation graphics.
Data visualizationis concerned with presenting sets of primarily quantitative raw data in a schematic form, using imagery. The visual formats used in data visualization include charts and graphs (e.g.pie charts,bar charts,line charts,area charts,cone charts,pyramid charts,donut charts,histograms,spectrograms,cohort charts,waterfall charts,funnel charts,bullet graphs, etc.),diagrams,plots(e.g.scatter plots,distribution plots,box-and-whisker plots), geospatialmaps(such asproportional symbol maps,choropleth maps,isopleth mapsandheat maps), figures,correlation matrices, percentagegauges, etc., which sometimes can be combined in adashboard.
Information visualization, on the other hand, deals with multiple, large-scale and complicated datasets which contain quantitative (numerical) data as well as qualitative (non-numerical, i.e. verbal or graphical) and primarily abstract information. Its goal is to add value to raw data, improve the viewers' comprehension, reinforce their cognition and help them derive insights and make decisions as they navigate and interact with the computer-supported graphical display. Visual tools used in information visualization include maps for location-based data; hierarchical[7] organisations of data such as tree maps, radial trees, and other tree structures; displays that prioritise relationships (Heer et al. 2010) such as Sankey diagrams, network diagrams, Venn diagrams, mind maps, semantic networks, entity-relationship diagrams; flow charts, timelines, etc.
Emerging technologieslikevirtual,augmentedandmixed realityhave the potential to make information visualization more immersive, intuitive, interactive and easily manipulable and thus enhance the user'svisual perceptionandcognition.[8]In data and information visualization, the goal is to graphically present and explore abstract, non-physical and non-spatial data collected fromdatabases,information systems,file systems,documents,business data, etc. (presentational and exploratory visualization) which is different from the field ofscientific visualization, where the goal is to render realistic images based on physical andspatialscientific datato confirm or rejecthypotheses(confirmatory visualization).[9]
Effective data visualization is properly sourced, contextualized, simple and uncluttered. The underlying data is accurate and up-to-date to make sure that insights are reliable. Graphical items are well-chosen for the given datasets and aesthetically appealing, with shapes, colors and other visual elements used deliberately in a meaningful and non-distracting manner. The visuals are accompanied by supporting texts (labels and titles). These verbal and graphical components complement each other to ensure clear, quick and memorable understanding. Effective information visualization is aware of the needs and concerns and the level of expertise of the target audience, deliberately guiding them to the intended conclusion.[10][3]Such effective visualization can be used not only for conveying specialized, complex, big data-driven ideas to a wider group of non-technical audience in a visually appealing, engaging and accessible manner, but also to domain experts and executives for making decisions, monitoring performance, generating new ideas and stimulating research.[10][4]In addition, data scientists, data analysts and data mining specialists use data visualization to check the quality of data, find errors, unusual gaps and missing values in data, clean data, explore the structures and features of data and assess outputs of data-driven models.[4]Inbusiness, data and information visualization can constitute a part ofdata storytelling, where they are paired with a coherentnarrativestructure orstorylineto contextualize the analyzed data and communicate the insights gained from analyzing the data clearly and memorably with the goal of convincing the audience into making a decision or taking an action in order to createbusiness value.[3][11]This can be contrasted with the field ofstatistical graphics, where complex statistical data are communicated graphically in an accurate and precise manner among researchers and analysts with statistical expertise to help them performexploratory data analysisor to convey the results of such analyses, where visual appeal, capturing attention to a certain issue and storytelling are not as important.[12]
The field of data and information visualization is of interdisciplinary nature as it incorporates principles found in the disciplines ofdescriptive statistics(as early as the 18th century),[13]visual communication,graphic design,cognitive scienceand, more recently,interactive computer graphicsandhuman-computer interaction.[14]Since effective visualization requires design skills, statistical skills and computing skills, it is argued by authors such as Gershon and Page that it is both an art and a science.[15]The neighboring field ofvisual analyticsmarries statistical data analysis, data and information visualization and human analytical reasoning through interactive visual interfaces to help human users reach conclusions, gain actionable insights and make informed decisions which are otherwise difficult for computers to do.
Research into how people read and misread various types of visualizations is helping to determine what types and features of visualizations are most understandable and effective in conveying information.[16][17]On the other hand, unintentionally poor or intentionally misleading and deceptive visualizations (misinformative visualization) can function as powerful tools which disseminatemisinformation, manipulate public perception and divertpublic opiniontoward a certain agenda.[18]Thus data visualization literacy has become an important component ofdataandinformation literacyin theinformation ageakin to the roles played bytextual,mathematicalandvisual literacyin the past.[19]
The field of data and information visualization has emerged "from research inhuman–computer interaction,computer science,graphics,visual design,psychology,photographyandbusiness methods. It is increasingly applied as a critical component in scientific research,digital libraries,data mining, financial data analysis, market studies, manufacturingproduction control, anddrug discovery".[20]
Data and information visualization presumes that "visual representations and interaction techniques take advantage of the human eye's broad bandwidth pathway into the mind to allow users to see, explore, and understand large amounts of information at once. Information visualization focused on the creation of approaches for conveying abstract information in intuitive ways."[21]
Data analysis is an indispensable part of all applied research and problem solving in industry. The most fundamental data analysis approaches are visualization (histograms, scatter plots, surface plots, tree maps, parallel coordinate plots, etc.),statistics(hypothesis test,regression,PCA, etc.),data mining(association mining, etc.), andmachine learningmethods (clustering,classification,decision trees, etc.). Among these approaches, information visualization, or visual data analysis, is the most reliant on the cognitive skills of human analysts, and allows the discovery of unstructured actionable insights that are limited only by human imagination and creativity. The analyst does not have to learn any sophisticated methods to be able to interpret the visualizations of the data. Information visualization is also a hypothesis generation scheme, which can be, and is typically followed by more analytical or formal analysis, such as statistical hypothesis testing.
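As a minimal sketch of visualization as a hypothesis-generation step (the data and variable names below are invented for the example, using Python's matplotlib), a quick histogram and scatter plot can surface a pattern that is then tested with a formal method such as regression:

import numpy as np
import matplotlib.pyplot as plt

# Invented example data: response time (ms) measured against request size (kB)
rng = np.random.default_rng(0)
request_size = rng.uniform(1, 100, 500)
response_time = 5 + 0.8 * request_size + rng.normal(0, 10, 500)

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))

# Histogram: inspect the distribution of the measured variable
ax1.hist(response_time, bins=30)
ax1.set_xlabel("Response time (ms)")
ax1.set_ylabel("Count")

# Scatter plot: look for a relationship worth testing formally afterwards
ax2.scatter(request_size, response_time, s=10, alpha=0.5)
ax2.set_xlabel("Request size (kB)")
ax2.set_ylabel("Response time (ms)")

plt.tight_layout()
plt.show()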
To communicate information clearly and efficiently, data visualization usesstatistical graphics,plots,information graphicsand other tools. Numerical data may be encoded using dots, lines, or bars, to visually communicate a quantitative message.[22]Effective visualization helps users analyze and reason about data and evidence.[23]It makes complex data more accessible, understandable, and usable, but can also be reductive.[24]Users may have particular analytical tasks, such as making comparisons or understandingcausality, and the design principle of the graphic (i.e., showing comparisons or showing causality) follows the task. Tables are generally used where users will look up a specific measurement, while charts of various types are used to show patterns or relationships in the data for one or more variables.
Data visualization refers to the techniques used to communicate data or information by encoding it as visual objects (e.g., points, lines, or bars) contained in graphics. The goal is to communicate information clearly and efficiently to users. It is one of the steps indata analysisordata science. According to Vitaly Friedman (2008) the "main goal of data visualization is to communicate information clearly and effectively through graphical means. It doesn't mean that data visualization needs to look boring to be functional or extremely sophisticated to look beautiful. To convey ideas effectively, both aesthetic form and functionality need to go hand in hand, providing insights into a rather sparse and complex data set by communicating its key aspects in a more intuitive way. Yet designers often fail to achieve a balance between form and function, creating gorgeous data visualizations which fail to serve their main purpose — to communicate information".[25]
Indeed,Fernanda ViegasandMartin M. Wattenbergsuggested that an ideal visualization should not only communicate clearly, but stimulate viewer engagement and attention.[26]
Data visualization is closely related toinformation graphics,information visualization,scientific visualization,exploratory data analysisandstatistical graphics. In the new millennium, data visualization has become an active area of research, teaching and development. According to Post et al. (2002), it has united scientific and information visualization.[27]
In the commercial environment data visualization is often referred to asdashboards.Infographicsare another very common form of data visualization.
The greatest value of a picture is when it forces us to notice what we never expected to see.
Edward Tuftehas explained that users of information displays are executing particularanalytical taskssuch as making comparisons. Thedesign principleof the information graphic should support the analytical task.[29]As William Cleveland and Robert McGill show, different graphical elements accomplish this more or less effectively. For example, dot plots and bar charts outperform pie charts.[30]
In his 1983 bookThe Visual Display of Quantitative Information,[31]Edward Tuftedefines 'graphical displays' and principles for effective graphical display in the following passage:
"Excellence in statistical graphics consists of complex ideas communicated with clarity, precision, and efficiency. Graphical displays should:
Graphics reveal data. Indeed, graphics can be more precise and revealing than conventional statistical computations."[32]
For example, the Minard diagram shows the losses suffered by Napoleon's army in the 1812–1813 period. Six variables are plotted: the size of the army, its location on a two-dimensional surface (x and y), time, the direction of movement, and temperature. The line width illustrates a comparison (size of the army at points in time), while the temperature axis suggests a cause of the change in army size. This multivariate display on a two-dimensional surface tells a story that can be grasped immediately while identifying the source data to build credibility. Tufte wrote in 1983 that: "It may well be the best statistical graphic ever drawn."[32]
Not applying these principles may result inmisleading graphs, distorting the message, or supporting an erroneous conclusion. According to Tufte,chartjunkrefers to the extraneous interior decoration of the graphic that does not enhance the message or gratuitous three-dimensional or perspective effects. Needlessly separating the explanatory key from the image itself, requiring the eye to travel back and forth from the image to the key, is a form of "administrative debris." The ratio of "data to ink" should be maximized, erasing non-data ink where feasible.[32]
TheCongressional Budget Officesummarized several best practices for graphical displays in a June 2014 presentation. These included: a) Knowing your audience; b) Designing graphics that can stand alone outside the report's context; and c) Designing graphics that communicate the key messages in the report.[33]
Useful criteria for a data or information visualization include:[34]
Readability means that it is possible for a viewer to understand the underlying data, for example by comparing proportionally sized visual elements to judge their respective data values, or by using a legend to decode a map, such as identifying coloured regions on a climate map to read the temperature at that location. For greatest efficiency and simplicity of design and user experience, readability is enhanced through a bijective mapping in the design of the image elements, in which each representational element corresponds to a unique data variable.[35]
Kosara (2007)[34]also identifies the need for a visualisation to be "recognisable as a visualisation and not appear to be something else". He also states that recognisability and readability may not always be required in all types of visualisation e.g. "informative art" (which would still meet all three above criteria but might not look like a visualisation) or "artistic visualisation" (which similarly is still based on non-visual data to create an image, but may not be readable or recognisable).
AuthorStephen Fewdescribed eight types of quantitative messages that users may attempt to understand or communicate from a set of data and the associated graphs used to help communicate the message:
Analysts reviewing a set of data may consider whether some or all of the messages and graphic types above are applicable to their task and audience. The process of trial and error to identify meaningful relationships and messages in the data is part ofexploratory data analysis.
A human can distinguish differences in line length, shape, orientation, distances, and color (hue) readily without significant processing effort; these are referred to as "pre-attentive attributes". For example, it may require significant time and effort ("attentive processing") to identify the number of times the digit "5" appears in a series of numbers; but if that digit is different in size, orientation, or color, instances of the digit can be noted quickly through pre-attentive processing.[38]
Compelling graphics take advantage of pre-attentive processing and attributes and the relative strength of these attributes. For example, since humans can more easily process differences in line length than surface area, it may be more effective to use a bar chart (which takes advantage of line length to show comparison) rather than pie charts (which use surface area to show comparison).[38]
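A minimal sketch of that comparison (the category names and values are invented): the same shares drawn as a bar chart, where small differences in length can be ranked pre-attentively, and as a pie chart, where the reader has to judge areas and angles:

import matplotlib.pyplot as plt

labels = ["A", "B", "C", "D"]     # hypothetical categories
shares = [23, 27, 25, 25]         # values deliberately close together

fig, (ax_bar, ax_pie) = plt.subplots(1, 2, figsize=(9, 4))

# Bar chart: differences in bar length are easy to rank at a glance
ax_bar.bar(labels, shares)
ax_bar.set_ylabel("Share (%)")

# Pie chart: the same differences are much harder to judge from wedge area
ax_pie.pie(shares, labels=labels)

plt.show()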
Almost all data visualizations are created for human consumption. Knowledge of human perception and cognition is necessary when designing intuitive visualizations.[39]Cognition refers to processes in human beings like perception, attention, learning, memory, thought, concept formation, reading, and problem solving.[40]Human visual processing is efficient in detecting changes and making comparisons between quantities, sizes, shapes and variations in lightness. When properties of symbolic data are mapped to visual properties, humans can browse through large amounts of data efficiently. It is estimated that 2/3 of the brain's neurons can be involved in visual processing. Proper visualization provides a different approach to show potential connections, relationships, etc. which are not as obvious in non-visualized quantitative data. Visualization can become a means ofdata exploration.
Studies have shown that, compared with reading text, individuals using data visualization required on average 19% less cognitive resources and were 4.5% better able to recall details.[41]
The modern study of visualization started withcomputer graphics, which "has from its beginning been used to study scientific problems. However, in its early days the lack of graphics power often limited its usefulness. The recent emphasis on visualization started in 1987 with the special issue of Computer Graphics on Visualization inScientific Computing. Since then there have been several conferences and workshops, co-sponsored by theIEEE Computer SocietyandACM SIGGRAPH".[42]They have been devoted to the general topics ofdata visualization, information visualization andscientific visualization, and more specific areas such asvolume visualization.
In 1786,William Playfairpublished the first presentation graphics.
There is no comprehensive 'history' of data visualization. There are no accounts that span the entire development of visual thinking and the visual representation of data, and which collate the contributions of disparate disciplines.[43] Michael Friendly and Daniel J Denis of York University are engaged in a project that attempts to provide a comprehensive history of visualization. Contrary to general belief, data visualization is not a modern development. Stellar data, such as the locations of stars, have been visualized on the walls of caves (such as those found in Lascaux Cave in southern France) since the Pleistocene era.[44] Physical artefacts such as Mesopotamian clay tokens (5500 BC), Inca quipus (2600 BC) and Marshall Islands stick charts (n.d.) can also be considered as visualizing quantitative information.[45][46]
The first documented data visualization can be traced back to 1160 BC with the Turin Papyrus Map, which accurately illustrates the distribution of geological resources and provides information about the quarrying of those resources.[47] Such maps can be categorized as thematic cartography, a type of data visualization that presents and communicates specific data and information through a geographical illustration designed to show a particular theme connected with a specific geographic area. The earliest documented forms of data visualization were various thematic maps from different cultures, together with ideograms and hieroglyphs that provided and allowed interpretation of the information illustrated. For example, the Linear B tablets of Mycenae provide a visualization of information regarding Late Bronze Age trade in the Mediterranean. The idea of coordinates was used by ancient Egyptian surveyors in laying out towns; earthly and heavenly positions were located by something akin to latitude and longitude at least by 200 BC; and the map projection of a spherical Earth into latitude and longitude by Claudius Ptolemy [c. 85 – c. 165] in Alexandria served as a reference standard until the 14th century.[47]
The invention of paper and parchment allowed further development of visualizations throughout history. One graph from the 10th or possibly 11th century, used in an appendix of a textbook in monastery schools, was intended as an illustration of planetary movement.[48] The graph was apparently meant to represent a plot of the inclinations of the planetary orbits as a function of time. For this purpose, the zone of the zodiac was represented on a plane with a horizontal line divided into thirty parts as the time, or longitudinal, axis. The vertical axis designates the width of the zodiac. The horizontal scale appears to have been chosen for each planet individually, for the periods cannot be reconciled. The accompanying text refers only to the amplitudes. The curves are apparently not related in time.
By the 16th century, techniques and instruments for precise observation and measurement of physical quantities and of geographic and celestial position were well developed (for example, a "wall quadrant" constructed by Tycho Brahe [1546–1601], covering an entire wall in his observatory). Particularly important was the development of triangulation and other methods to determine mapping locations accurately.[43] Very early, the measurement of time led scholars to develop innovative ways of visualizing data (e.g. Lorenz Codomann in 1596, Johannes Temporarius in 1596[49]).
The French philosopher and mathematician René Descartes, together with Pierre de Fermat, developed analytic geometry and the two-dimensional coordinate system, which heavily influenced practical methods of displaying and calculating values. The work of Fermat and Blaise Pascal on statistics and probability theory laid the groundwork for what we now conceptualize as data.[43] According to the Interaction Design Foundation, these developments allowed and helped William Playfair, who saw the potential for graphical communication of quantitative data, to generate and develop graphical methods of statistics.[39]
In the second half of the 20th century,Jacques Bertinused quantitative graphs to represent information "intuitively, clearly, accurately, and efficiently".[39]
John Tukey and Edward Tufte pushed the bounds of data visualization; Tukey with his new statistical approach of exploratory data analysis and Tufte with his book "The Visual Display of Quantitative Information" paved the way for refining data visualization techniques for more than statisticians. With the progression of technology came the progression of data visualization; starting with hand-drawn visualizations and evolving into more technical applications – including interactive designs leading to software visualization.[50]
Programs like SAS, SOFA, R, Minitab and Cornerstone allow for data visualization in the field of statistics. More specialized tools and programming languages, such as D3, Python (through matplotlib and seaborn), JavaScript and Java (through JavaFX), also make the visualization of quantitative data possible. Private schools have developed programs to meet the demand for learning data visualization and the associated programming libraries, including free programs like The Data Incubator and paid programs like General Assembly.[51]
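As a small sketch of what such library-based visualization looks like in practice (the data frame below is invented; seaborn and matplotlib are among the libraries named above):

import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

# Invented measurements; in practice this would come from a file or database
df = pd.DataFrame({
    "temperature": [12, 15, 19, 22, 25, 28, 31, 33],
    "sales":       [200, 220, 260, 310, 330, 390, 420, 410],
    "region":      ["north", "north", "north", "north",
                    "south", "south", "south", "south"],
})

# One call maps two quantitative variables and one categorical variable
# (encoded as colour) onto a single statistical graphic
sns.scatterplot(data=df, x="temperature", y="sales", hue="region")
plt.show()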
Beginning with the symposium "Data to Discovery" in 2013, ArtCenter College of Design, Caltech and JPL in Pasadena have run an annual program on interactive data visualization.[52]The program asks: How can interactive data visualization help scientists and engineers explore their data more effectively? How can computing, design, and design thinking help maximize research results? What methodologies are most effective for leveraging knowledge from these fields? By encoding relational information with appropriate visual and interactive characteristics to help interrogate, and ultimately gain new insight into data, the program develops new interdisciplinary approaches to complex science problems, combining design thinking and the latest methods from computing, user-centered design, interaction design and 3D graphics.
Data visualization involves specific terminology, some of which is derived from statistics. For example, authorStephen Fewdefines two types of data, which are used in combination to support a meaningful analysis or visualization:
The distinction between quantitative and categorical variables is important because the two types require different methods of visualization.
Two primary types ofinformation displaysare tables and graphs.
Eppler and Lengler have developed the "Periodic Table of Visualization Methods," an interactive chart displaying various data visualization methods. It includes six types of data visualization methods: data, information, concept, strategy, metaphor and compound.[55]In "Visualization Analysis and Design"Tamara Munznerwrites "Computer-based visualization systems provide visual representations of datasets designed to help people carry out tasks more effectively." Munzner argues that visualization "is suitable when there is a need to augment human capabilities rather than replace people with computational decision-making methods."[56]
Variable-width ("variwide") bar chart
Orthogonal (orthogonal composite) bar chart
Interactive data visualizationenables direct actions on a graphicalplotto change elements and link between multiple plots.[59]
Interactive data visualization has been a pursuit ofstatisticianssince the late 1960s. Examples of the developments can be found on theAmerican Statistical Associationvideo lending library.[60]
Common interactions include:
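Zooming, panning, hover tooltips, filtering and linked selection are typical examples. The following minimal sketch uses plotly (an assumed library choice, not named in the text; the data is invented), where hovering and zooming come built in:

import pandas as pd
import plotly.express as px

# Invented data for five hypothetical countries
df = pd.DataFrame({
    "gdp_per_capita":  [4500, 12000, 38000, 52000, 61000],
    "life_expectancy": [62, 71, 79, 82, 83],
    "country":         ["A", "B", "C", "D", "E"],
    "population":      [30e6, 80e6, 10e6, 5e6, 60e6],
})

fig = px.scatter(
    df,
    x="gdp_per_capita",
    y="life_expectancy",
    size="population",     # a third variable encoded as marker size
    hover_name="country",  # tooltip shown when hovering over a point
)
fig.show()  # opens an interactive figure in the browser or notebook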
There are different approaches to the scope of data visualization. One common focus is on information presentation, such as Friedman (2008). Friendly (2008) presumes two main parts of data visualization: statistical graphics and thematic cartography.[61] In this line, the article "Data Visualization: Modern Approaches" (2007) gives an overview of seven subjects of data visualization:[62]
All these subjects are closely related tographic designand information representation.
On the other hand, from acomputer scienceperspective, Frits H. Post in 2002 categorized the field into sub-fields:[27][63]
Writing in the Harvard Business Review, Scott Berinato developed a framework for approaching data visualisation.[64] To start thinking visually, users must consider two questions: 1) what you have and 2) what you're doing. The first step is identifying what data you want visualised. It may be data-driven, like profit over the past ten years, or a conceptual idea, like how a specific organisation is structured. Once this question is answered, one can then focus on whether one is trying to communicate information (declarative visualisation) or trying to figure something out (exploratory visualisation). Scott Berinato combines these questions to give four types of visual communication that each have their own goals.[64]
These four types of visual communication are as follows:
Data and information visualization insights are being applied in areas such as:[20]
Notable academic and industry laboratories in the field are:
Conferences in this field, ranked by significance in data visualization research,[66]are:
For further examples, see:Category:Computer graphics organizations
Data presentation architecture(DPA) is a skill-set that seeks to identify, locate, manipulate, format and present data in such a way as to optimally communicate meaning and proper knowledge.
Historically, the termdata presentation architectureis attributed to Kelly Lautt:[a]"Data Presentation Architecture (DPA) is a rarely applied skill set critical for the success and value ofBusiness Intelligence. Data presentation architecture weds the science of numbers, data and statistics indiscovering valuable informationfrom data and making it usable, relevant and actionable with the arts of data visualization, communications,organizational psychologyandchange managementin order to provide business intelligence solutions with the data scope, delivery timing, format and visualizations that will most effectively support and drive operational, tactical and strategic behaviour toward understood business (or organizational) goals. DPA is neither an IT nor a business skill set but exists as a separate field of expertise. Often confused with data visualization, data presentation architecture is a much broader skill set that includes determining what data on what schedule and in what exact format is to be presented, not just the best way to present data that has already been chosen. Data visualization skills are one element of DPA."
DPA has two main objectives:
With the above objectives in mind, the actual work of data presentation architecture consists of:
DPA work shares commonalities with several other fields, including:
|
https://en.wikipedia.org/wiki/Data_and_information_visualization
|
Master data management(MDM) is a discipline in which business andinformation technologycollaborate to ensure the uniformity, accuracy,stewardship, semantic consistency, and accountability of the enterprise's official sharedmaster dataassets.[1][2]
However, issues withdata quality, classification, andreconciliationmay requiredata transformation. As with otherExtract, Transform, Load-based data movements, these processes are expensive and inefficient, reducingreturn on investmentfor a project.
As a result ofbusiness unitandproduct linesegmentation, the same entity (whether a customer, supplier, or product) will be included in different product lines. This leads to data redundancy and even confusion.
For example, acustomertakes out amortgageat a bank. If the marketing and customer service departments have separate databases, advertisements might still be sent to the customer, even though they've already signed up. The two parts of the bank are unaware, and the customer is sent irrelevant communications.Record linkagecan associate different records corresponding to the same entity, mitigating this issue.
One of the most common problems for master data management is company growth throughmergersoracquisitions. Reconciling these separate master data systems can present difficulties, as existing applications have dependencies on the master databases. Ideally,database administratorsresolve this problem throughdeduplicationof the master data as part of the merger.
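A minimal sketch of that kind of record linkage follows (illustrative only, not any particular MDM product; all records and field names are invented). Here two systems' customer records are matched on a normalized name plus date of birth:

import pandas as pd

# Customer records from two acquired systems; names and fields are invented
crm_a = pd.DataFrame({
    "name":  ["Ana Silva", "JOHN  SMITH"],
    "dob":   ["1980-02-01", "1975-07-30"],
    "email": ["ana@example.com", "jsmith@example.com"],
})
crm_b = pd.DataFrame({
    "name":  ["ana silva", "John Smith"],
    "dob":   ["1980-02-01", "1975-07-30"],
    "phone": ["+351 912 000 000", "+1 555 0100"],
})

def match_key(df):
    # Normalise name spelling and whitespace, then pair it with date of birth
    return df["name"].str.lower().str.split().str.join(" ") + "|" + df["dob"]

crm_a["key"] = match_key(crm_a)
crm_b["key"] = match_key(crm_b)

# Records sharing a key are treated as the same real-world customer
linked = crm_a.merge(crm_b, on="key", suffixes=("_a", "_b"))
print(linked[["name_a", "name_b", "email", "phone"]])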
Over time, as further mergers and acquisitions occur, the problem can multiply. Data reconciliation processes can become extremely complex or even unreliable. Some organizations end up with 10, 15, or even 100 separate and poorly integrated master databases. This can cause serious problems incustomer satisfaction, operational efficiency,decision support, and regulatory compliance.
Another problem involves determining the proper degrees of detail and normalization to include in the master data schema. For example, in a federatedHuman Resourcesenvironment, the enterprise software may focus on storing people's data as current status, adding a few fields to identify the date of hire, date of last promotion, etc. However, this simplification can introduce business-impacting errors into dependent systems for planning and forecasting. Thestakeholdersof such systems may be forced to build a parallel network of new interfaces to track the onboarding of new hires, planned retirements, and divestment, which works against one of the aims of master data management.
Master data management isenabledby technology, but is more than the technologies that enable it. An organization's master data management capability will also include people and processes in its definition.
Several roles should be staffed within MDM, most prominently the Data Owner and the Data Steward. Several people would likely be allocated to each role, with each person responsible for a subset of master data (e.g. one data owner for employee master data, another for customer master data).
The Data Owner is responsible for the requirements for data definition, data quality, data security, etc., as well as for compliance with data governance and data management procedures. The Data Owner should also fund improvement projects in the event of deviations from the requirements.
The Data Steward runs master data management on behalf of the Data Owner and will probably also act as an advisor to the Data Owner.
Master data management can be viewed as a "discipline for specialized quality improvement"[4]defined by the policies and procedures put in place by adata governanceorganization. It has the objective of providing processes forcollecting,aggregating, matching, consolidating,quality-assuring,persistinganddistributingmaster data throughout an organization to ensure a common understanding,consistency, accuracy and control,[5]in the ongoing maintenance and application use of that data.
Processes commonly seen in master data management include source identification, data collection,data transformation,normalization, rule administration,error detection and correction, data consolidation,data storage, data distribution, data classification, taxonomy services, item master creation,schema mapping, product codification, data enrichment, hierarchy management,business semantics managementanddata governance.
A master data management tool can be used to support master data management byremoving duplicates, standardizing data (mass maintaining),[6]and incorporating rules to eliminate incorrect data from entering the system to create an authoritative source of master data. Master data are the products, accounts, and parties for which thebusiness transactionsare completed.
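A minimal sketch of those standardization and validation steps (illustrative only; the rules, fields and values are invented):

import pandas as pd

products = pd.DataFrame({
    "sku":   ["A-001", "a001", "B-002", None],
    "name":  [" Widget ", "widget", "Gadget", "Gizmo"],
    "price": [9.99, 9.99, -5.0, 14.0],
})

# Standardise: trim and case-fold text, normalise the SKU format
products["name"] = products["name"].str.strip().str.title()
products["sku"] = (products["sku"]
                   .str.upper()
                   .str.replace(r"[^A-Z0-9]", "", regex=True))

# Validation rules: reject records with missing keys or impossible values
valid = products[products["sku"].notna() & (products["price"] > 0)]

# Deduplicate on the standardised key to keep one authoritative record
master = valid.drop_duplicates(subset="sku", keep="first")
print(master)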
Where the technology approach produces a "golden record" or relies on a "source of record" or "system of record", it is common to talk of where the data is "mastered". This is accepted terminology in the information technology industry, but care should be taken, both with specialists and with the wider stakeholder community, to avoid confusing the concept of "master data" with that of "mastering data".
There are several models for implementing a technology solution for master data management. These depend on an organization's core business, its corporate structure, and its goals. These include:
This model identifies a single application, database, or simpler source (e.g. a spreadsheet) as being the "source of record" (or "system of record" where solely application databases are relied on). The benefit of this model is its conceptual simplicity, but it may not fit with the realities of complex master data distribution in large organizations.
The source of record can be federated, for example by groups of attributes (so that different attributes of a master data entity may have different sources of record) or geographically (so that different parts of an organization may have different master sources). Federation is only applicable in certain use cases, where there is a clear delineation of which subsets of records will be found in which sources.
The source of record model can be applied more widely than simply tomaster data, for example toreference data.
There are several ways in which master data may be collated and distributed to other systems.[7]This includes:
Challenges in adopting master data management within large organizations often arise when the concept of a "single version of the truth" is not affirmed by stakeholders, who believe that their local definition of the master data is necessary. For example, the product hierarchy used to manage inventory may be entirely different from the product hierarchies used to support marketing efforts or pay sales representatives. It is above all necessary to identify whether different master data is genuinely required. If it is required, then the solution implemented (technology and process) must allow multiple versions of the truth to exist, while providing simple, transparent ways to reconcile the necessary differences. If it is not required, processes must be adjusted.
Often, solutions can be found that retain the integrity of the master data but allow users to access it in ways that suit their needs. For example, a salesperson may want to group products by size, color, or other attributes, while a purchasing officer may want to group products by supplier or country of origin. Without this active management, users who need the alternate versions will simply "go around" the official processes, thus reducing the effectiveness of the company's overall master data management program.
|
https://en.wikipedia.org/wiki/Master_data_management
|
VACUUM[1][2][3][4] is a set of normative guidance principles for achieving training and test dataset quality for structured datasets in data science and machine learning. The garbage in, garbage out principle motivates a solution to the problem of data quality but does not offer a specific one. Unlike the majority of ad-hoc data quality assessment metrics often used by practitioners,[5] VACUUM specifies qualitative principles for data quality management and serves as a basis for defining more detailed quantitative metrics of data quality.[6]
VACUUM is anacronymthat stands for:
|
https://en.wikipedia.org/wiki/VACUUM
|
Inmechanical engineering,backlash, sometimes calledlash,play, orslop, is aclearanceor lost motion in a mechanism caused by gaps between the parts. It can be defined as "the maximum distance or angle through which any part of amechanical systemmay be moved in one direction without applying appreciable force or motion to the next part in mechanical sequence."[1]p. 1-8An example, in the context ofgearsandgear trains, is the amount of clearance between mated gear teeth. It can be seen when the direction of movement is reversed and the slack or lost motion is taken up before the reversal of motion is complete. It can be heard from therailway couplingswhen a train reverses direction. Another example is in avalve trainwith mechanicaltappets, where a certain range of lash is necessary for the valves to work properly.
Depending on the application, backlash may or may not be desirable. Some amount of backlash is unavoidable in nearly all reversing mechanical couplings, although its effects can be negated or compensated for. In many applications, the theoretical ideal would be zero backlash, but in actual practice some backlash must be allowed to prevent jamming.[citation needed]Reasons for specifying a requirement for backlash include allowing forlubrication, manufacturing errors,deflectionunder load, andthermal expansion.[citation needed]A principal cause of undesired backlash iswear.
Factors affecting the amount of backlash required in a gear train include errors in profile, pitch, tooth thickness, helix angle and center distance, andrun-out. The greater the accuracy the smaller the backlash needed. Backlash is most commonly created by cutting the teeth deeper into the gears than the ideal depth. Another way of introducing backlash is by increasing the center distances between the gears.[2]
Backlash due to tooth thickness changes is typically measured along the pitch circle and is defined by the amount by which the actual tooth thickness is reduced below the ideal, zero-backlash thickness.
Backlash, measured on the pitch circle, due to operating center modifications is defined by the change in center distance and the pressure angle of the teeth.
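For involute gearing, the relationships behind these two statements are usually written as follows; this is a sketch of the textbook formulas, and the notation may differ from that of the cited references.

\[
b_t = t_\text{ideal} - t_\text{actual}, \qquad b_c = 2\,\Delta C\,\tan\varphi
\]

where \(b_t\) is the backlash introduced by thinning the teeth, \(t_\text{ideal}\) and \(t_\text{actual}\) are the ideal and actual tooth thicknesses on the pitch circle, \(b_c\) is the backlash introduced by increasing the operating center distance by \(\Delta C\), and \(\varphi\) is the pressure angle.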
Standard practice is to make allowance for half the backlash in the tooth thickness of each gear.[citation needed]However, if thepinion(the smaller of the two gears) is significantly smaller than the gear it is meshing with then it is common practice to account for all of the backlash in the larger gear. This maintains as much strength as possible in the pinion's teeth.[2]The amount of additional material removed when making the gears depends on the pressure angle of the teeth. For a 14.5° pressure angle the extra distance the cutting tool is moved in equals the amount of backlash desired. For a 20° pressure angle the distance equals 0.73 times the amount of backlash desired.[3]
As a rule of thumb, the average backlash is defined as 0.04 divided by the diametral pitch; the minimum is 0.03 divided by the diametral pitch and the maximum is 0.05 divided by the diametral pitch.[3] In metric units, the same rule applies with the module: multiplying 0.03, 0.04 and 0.05 by the module gives the minimum, average and maximum backlash in millimetres.
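Expressed as a small code sketch (units follow the rule above: diametral pitch in 1/inch gives backlash in inches, module in millimetres gives backlash in millimetres):

def backlash_range_from_diametral_pitch(dp):
    """Rule-of-thumb (minimum, average, maximum) backlash in inches."""
    return 0.03 / dp, 0.04 / dp, 0.05 / dp

def backlash_range_from_module(module_mm):
    """Rule-of-thumb (minimum, average, maximum) backlash in millimetres."""
    return 0.03 * module_mm, 0.04 * module_mm, 0.05 * module_mm

# Example: a module-2 gear pair
print(backlash_range_from_module(2.0))   # (0.06, 0.08, 0.1)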
In agear train, backlash is cumulative. When a gear-train is reversed the driving gear is turned a short distance, equal to the total of all the backlashes, before the final driven gear begins to rotate. At low power outputs, backlash results in inaccurate calculation from the small errors introduced at each change of direction; at large power outputs backlash sends shocks through the whole system and can damage teeth and other components.[citation needed]
In certain applications, backlash is an undesirable characteristic and should be minimized.
The best example here is an analogradio tunerdial where one may make precise tuning movements both forwards and backwards. Specialized gear designs allow this. One of the more common designs splits the gear into two gears, each half the thickness of the original.
One half of the gear is fixed to its shaft while the other half of the gear is allowed to turn on the shaft, but pre-loaded in rotation by smallcoil springsthat rotate the free gear relative to the fixed gear. In this way, the spring compression rotates the free gear until all of the backlash in the system has been taken out; the teeth of the fixed gear press against one side of the teeth of the pinion while the teeth of the free gear press against the other side of the teeth on the pinion. Loads smaller than the force of the springs do not compress the springs and with no gaps between the teeth to be taken up, backlash is eliminated.
Another area where backlash matters is inleadscrews. Again, as with the gear train example, the culprit is lost motion when reversing a mechanism that is supposed to transmit motion accurately. Instead of gear teeth, the context isscrew threads. The linear sliding axes (machine slides) ofmachine toolsare an example application.
Most machine slides for many decades, and many even today, have been simple (but accurate) cast-iron linearbearing surfaces, such as a dovetail- or box-slide, with anAcmeleadscrew drive. With just a simple nut, some backlash is inevitable. On manual (non-CNC) machine tools, a machinist's means for compensating for backlash is to approach all precise positions using the same direction of travel, that is, if they have been dialing left, and next want to move to a rightward point, they will move rightwardpastit, then dial leftward back to it; the setups, tool approaches, and toolpaths must in that case be designed within this constraint.[citation needed]
The next-more complex method than the simple nut is a split nut, whose halves can be adjusted, and locked with screws, so that one half bears against the leftward-facing thread flanks and the other against the rightward-facing flanks. Notice the analogy here with the radio dial example using split gears, where the split halves are pushed in opposing directions. Unlike in the radio dial example, the spring tension idea is not useful here, because machine tools taking a cut put too much force against the screw. Any spring light enough to allow slide movement at all would allow cutter chatter at best and slide movement at worst. These screw-adjusted split-nut-on-an-Acme-leadscrew designs cannot eliminate all backlash on a machine slide unless they are adjusted so tight that the travel starts to bind. Therefore, this idea can't totally obviate the always-approach-from-the-same-direction concept; nevertheless, backlash can be held to a small amount (1 or 2 thousandths of an inch), which is more convenient, and in some non-precise work is enough to allow one to "ignore" the backlash, i.e., to design as if there were none.
CNCs can be programmed to use the always-approach-from-the-same-direction concept, but that is not the normal way they are used today[when?], because hydraulic anti-backlash split nuts, and newer forms of leadscrew than Acme/trapezoidal, such as recirculating ball screws, effectively eliminate the backlash.[citation needed] The axis can move in either direction without the go-past-and-come-back motion.
The simplest CNCs, such as microlathes or manual-to-CNC conversions, which use nut-and-Acme-screw drives can be programmed to correct for the total backlash on each axis, so that the machine's control system will automatically move the extra distance required to take up the slack when it changes directions. This programmatic "backlash compensation" is a cheap solution, but professional grade CNCs use the more expensive backlash-eliminating drives mentioned above. This allows them to do 3D contouring with a ball-nosed endmill, for example, where the endmill travels around in many directions with constant rigidity and without delays.[citation needed]
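A minimal sketch of the idea behind this kind of programmatic compensation follows (illustrative only, not any particular controller's implementation): whenever the commanded direction on an axis reverses, the drive first moves through the measured backlash before the slide is considered to move.

def compensated_motor_positions(targets, backlash, start=0.0):
    """Yield motor positions that include extra travel to take up the slack
    each time the direction of motion reverses (all units arbitrary).
    The slack is assumed to be taken up in the first direction of travel."""
    motor = start       # position commanded to the motor/leadscrew
    slide = start       # resulting position of the driven slide
    direction = 0       # +1, -1, or 0 before the first move
    for target in targets:
        step = target - slide
        if step == 0:
            continue
        new_direction = 1 if step > 0 else -1
        if direction != 0 and new_direction != direction:
            # Direction reversal: move through the backlash first
            motor += new_direction * backlash
        direction = new_direction
        motor += step
        slide = target
        yield motor

# Move right, right again, then reverse twice; 0.02 units of backlash
print(list(compensated_motor_positions([10.0, 20.0, 15.0, 25.0], 0.02)))
# approximately [10.0, 20.0, 14.98, 25.0]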
In mechanical computers a more complex solution is required, namely afrontlash gearbox.[4]This works by turning slightly faster when the direction is reversed to 'use up' the backlash slack.
Some motion controllers include backlash compensation. Compensation may be achieved by simply adding extra compensating motion (as described earlier) or by sensing the load's position in aclosed loop control scheme. The dynamic response of backlash itself, essentially a delay, makes the position loop less stable and thus more prone tooscillation.
Minimum backlash is calculated as the minimum transverse backlash at the operating pitch circle allowable when the gear teeth with the greatest allowable functional tooth thickness are in mesh with the pinion teeth with their greatest allowable functional tooth thickness, at the smallest allowable center distance, under static conditions.
Backlash variation is defined as the difference between the maximum and minimum backlash occurring in a whole revolution of the larger of a pair of mating gears.[5]
Backlash ingear couplingsallows for slight angular misalignment.
There can be significant backlash inunsynchronized transmissionsbecause of the intentional gap between the dogs indog clutches. The gap is necessary to engage dogs when input shaft (engine) speed and output shaft (driveshaft) speed are imperfectly synchronized. If there was a smaller clearance, it would be nearly impossible to engage the gears because the dogs would interfere with each other in most configurations. In synchronized transmissions,synchromeshsolves this problem.
However, backlash is undesirable in precision positioning applications such as machine tool tables. It can be minimized by choosingball screwsorleadscrewswith preloaded nuts, and mounting them in preloaded bearings. A preloaded bearing uses a spring and/or a second bearing to provide a compressive axial force that maintains bearing surfaces in contact despite reversal of the load direction.
|
https://en.wikipedia.org/wiki/Backlash_(engineering)
|
Geometric dimensioning and tolerancing(GD&T) is a system for defining and communicatingengineering tolerancesvia asymbolic languageonengineering drawingsand computer-generated3D modelsthat describes a physical object's nominalgeometryand the permissible variation thereof. GD&T is used to define the nominal (theoretically perfect) geometry of parts and assemblies, the allowable variation in size, form, orientation, and location of individual features, and how features may vary in relation to one another such that a component is considered satisfactory for its intended use. Dimensional specifications define the nominal, as-modeled or as-intended geometry, while tolerance specifications define the allowable physical variation of individual features of a part or assembly.
There are several standards available worldwide that describe the symbols and define the rules used in GD&T. One such standard isAmerican Society of Mechanical Engineers(ASME)Y14.5. This article is based on that standard. Other standards, such as those from theInternational Organization for Standardization(ISO) describe a different system which has some nuanced differences in its interpretation and rules(see GPS&V). The Y14.5 standard provides a fairly complete set of rules for GD&T in one document. The ISO standards, in comparison, typically only address a single topic at a time. There are separate standards that provide the details for each of the major symbols and topics below (e.g. position, flatness, profile, etc.).BS 8888provides a self-contained document taking into account a lot of GPS&V standards.
The origin of GD&T is credited to Stanley Parker, who developed the concept of "true position". While little is known about Parker's life, it is known that he worked at the Royal Torpedo Factory inAlexandria, West Dunbartonshire,Scotland. His work increased production of naval weapons by new contractors.
In 1940, Parker publishedNotes on Design and Inspection of Mass Production Engineering Work, the earliest work on geometric dimensioning and tolerancing.[1]In 1956, Parker publishedDrawings and Dimensions, which became the basic reference in the field.[1]
Adimensionis defined in ASME Y14.5 as "a numerical value(s) or mathematical expression in appropriate units of measure used to define the form, size, orientation, or location, of a part or feature."[2]: 3Special types of dimensions includebasic dimensions(theoretically exact dimensions) andreference dimensions(dimensions used to inform, not define a feature or part).
The units of measure in a drawing that follows GD&T can be selected by the creator of the drawing. Most often drawings are standardized to either SI linear units, millimeters (denoted "mm"), or US customary linear units, decimal inches (denoted "IN"). Dimensions can contain only a number without units if all dimensions are the same units and there is a note on the drawing that clearly specifies what the units are.[2]: 8
Angular dimensions can be expressed in decimal degrees or degrees, minutes, and seconds.
Every feature on every manufactured part is subject to variation, therefore, the limits of allowable variation must be specified. Tolerances can be expressed directly on a dimension by limits, plus/minus tolerances, or geometric tolerances, or indirectly in tolerance blocks, notes, or tables.
Geometric tolerances are described by feature control frames, which are rectangular boxes on a drawing that indicate the type of geometric control, tolerance value, modifier(s) and/or datum(s) relevant to the feature. The types of tolerances used with symbols in feature control frames can be:
Tolerances for the profile symbols are equal bilateral unless otherwise specified, and for the position symbol tolerances are always equal bilateral. For example, the position of a hole has a tolerance of .020 inches. This means the hole can move ±.010 inches, which is an equal bilateral tolerance. It does not mean the hole can move +.015/−.005 inches, which is an unequal bilateral tolerance. Unequal bilateral and unilateral tolerances for profile are specified by adding further information to clearly show this is what is required.
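As a small illustrative check of the equal-bilateral reading above (a sketch only, not a substitute for the standard): treating the .020 position tolerance as a circular, diametral tolerance zone, a measured hole centre is acceptable when twice its radial offset from true position stays within the stated value.

import math

def within_position_tolerance(dx, dy, tol_diameter):
    """True if a hole centre offset (dx, dy) from true position lies inside
    a circular (diametral) position tolerance zone of the given size."""
    return 2.0 * math.hypot(dx, dy) <= tol_diameter

# A hole measured 0.008" and 0.007" away from true position in x and y,
# checked against the .020" position tolerance from the example above:
print(within_position_tolerance(0.008, 0.007, 0.020))   # False: 2 * 0.0106 > 0.020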
A datum is a theoretically exact plane, line, point, or axis.[2]: 3 A datum feature is a physical feature of a part identified by a datum feature symbol and a corresponding datum feature triangle.
These are then referred to by one or more 'datum references' which indicate measurements that should be made with respect to the corresponding datum feature. The datum reference frame can describe how the part fits or functions.
The purpose of GD&T is to describe the engineering intent of parts and assemblies.[2]GD&T can more accurately define the dimensional requirements for a part, allowing over 50% more tolerance zone than coordinate (or linear) dimensioning in some cases. Proper application of GD&T will ensure that the part defined on the drawing has the desired form, fit (within limits) and function with the largest possible tolerances. GD&T can add quality and reduce cost at the same time through producibility.
According to ASME Y14.5, the fundamental rules of GD&T are as follows,[2]: 7–8
The following table shows only some of the more commonly used modifiers in GD&T. It is not an exhaustive list.
The American Society of Mechanical Engineers (ASME) provides two levels of certification:[4]
Exchange of geometric dimensioning and tolerancing (GD&T) information betweenCADsystems is available on different levels of fidelity for different purposes:
In ISO/TR 14638GPS – Masterplanthe distinction between fundamental, global, general and complementary GPS standards is made.
ASME is also working on a Spanish translation for the ASME Y14.5 – Dimensioning and Tolerancing Standard.
|
https://en.wikipedia.org/wiki/Geometric_dimensioning_and_tolerancing
|
Engineering fitsare generally used as part ofgeometric dimensioning and tolerancingwhen a part or assembly is designed. In engineering terms, the "fit" is the clearance between two mating parts, and the size of this clearance determines whether the parts can, at one end of the spectrum, move or rotate independently from each other or, at the other end, are temporarily or permanently joined. Engineering fits are generally described as a "shaft and hole" pairing, but are not necessarily limited to just round components.ISOis the internationally accepted standard for defining engineering fits, butANSIis often still used in North America.
ISO and ANSI both group fits into three categories: clearance, location or transition, and interference. Within each category are several codes to define the size limits of the hole or shaft – the combination of which determines the type of fit. A fit is usually selected at the design stage according to whether the mating parts need to be accurately located, free to slide or rotate, separated easily, or resist separation. Cost is also a major factor in selecting a fit, as more accurate fits will be more expensive to produce, and tighter fits will be more expensive to assemble.
Methods of producing work to the required tolerances to achieve a desired fit range fromcasting,forginganddrillingfor the widest tolerances throughbroaching,reaming,millingandturningtolappingandhoningat the tightest tolerances.[1]
TheInternational Organization for Standardizationsystem splits the three main categories into several individual fits based on the allowable limits for hole and shaft size. Each fit is allocated a code, made up of a number and a letter, which is used on engineering drawings in place of upper & lower size limits to reduce clutter in detailed areas.
A fit is either specified as shaft-basis or hole-basis, depending on which part has its size controlled to determine the fit. In a hole-basis system, the size of the hole remains constant and the diameter of the shaft is varied to determine the fit; conversely, in a shaft-basis system the size of shaft remains constant and the hole diameter is varied to determine the fit.
The ISO system uses an alpha-numeric code to illustrate the tolerance ranges for the fit, with the upper-case representing the hole tolerance and lower-case representing the shaft. For example, in H7/h6 (a commonly-used fit) H7 represents the tolerance range of the hole and h6 represents the tolerance range of the shaft. These codes can be used by machinists or engineers to quickly identify the upper and lower size limits for either the hole or shaft. The potential range of clearance or interference can be found by subtracting the smallest shaft diameter from the largest hole, and largest shaft from the smallest hole.
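As a small numeric sketch of that subtraction (the deviation values below are the commonly tabulated ones for a 50 mm H8/f7 fit, used here for illustration and worth checking against the standard):

# Deviations in mm for a 50 mm nominal H8/f7 close-running fit (illustrative)
hole_upper, hole_lower = +0.039, 0.000      # H8 hole
shaft_upper, shaft_lower = -0.025, -0.050   # f7 shaft

nominal = 50.0
largest_hole, smallest_hole = nominal + hole_upper, nominal + hole_lower
largest_shaft, smallest_shaft = nominal + shaft_upper, nominal + shaft_lower

min_clearance = smallest_hole - largest_shaft   # smallest hole vs largest shaft
max_clearance = largest_hole - smallest_shaft   # largest hole vs smallest shaft
print(round(min_clearance, 3), round(max_clearance, 3))   # 0.025 0.089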
The three types of fit are:
For example, using an H8/f7 close-running fit on a 50mm diameter:[1]
For example, using an H7/k6 similar fit on a 50mm diameter:[1]
For example, using an H7/p6 press fit on a 50mm diameter:[1]
Common tolerances for sizes ranging from 0 to 120 mm[2] (a table of upper and lower deviations, in millimetres, for standard hole and shaft tolerance classes).
Interference fits, also known aspress fitsorfriction fits, are fastenings between two parts in which the inner component is larger than the outer component. Achieving an interference fit requires applying force during assembly. After the parts are joined, the mating surfaces will feel pressure due to friction, and deformation of the completed assembly will be observed.
Force fitsare designed to maintain a controlled pressure between mating parts, and are used where forces or torques are being transmitted through the joining point. Like interference fits, force fits are achieved by applying a force during component assembly.[3]
FN 1 to FN 5
Shrink fitsserve the same purpose as force fits, but are achieved by heating one member to expand it while the other remains cool. The parts can then be easily put together with little applied force, but after cooling and contraction, the same dimensional interference exists as for a force fit. Like force fits, shrink fits range from FN 1 to FN 5.[3]
Location fits are for parts that do not normally move relative to each other.
LN 1 to LN 3 (or LT 7 to LT 21?[citation needed])
LT 1 to LT 6
A location fit provides a comparatively closer fit than a slide fit.
LC 1 to LC 11
The smaller RC numbers have smaller clearances for tighter fits, the larger numbers have larger clearances for looser fits.[4]
Fits of this kind are intended for the accurate location of parts which must assemble without noticeable play.
Fits of this kind are intended for the accurate location but with greater maximum clearance than class RC1. Parts made to this fit turn and move easily. This type is not designed for free run. Sliding fits in larger sizes may seize with small temperature changes due to little allowance for thermal expansion or contraction.
Fits of this kind are about the closest fits which can be expected to run freely. Precision fits are intended for precision work at low speed, low bearing pressures, and light journal pressures. RC3 is not suitable where noticeable temperature differences occur.
Fits of this kind are mostly for running fits on accurate machinery with moderate surface speed, bearing pressures, and journal pressures where accurate location and minimum play are desired. Fits of this kind can also be described as having smaller clearances, with higher requirements for fit precision.
Fits of this kind are designed for machines running at higher speeds, considerable bearing pressures, and heavy journal pressure. Fits of this kind can also be described as having greater clearances, with ordinary requirements for fit precision.
Fits of this kind are intended for use where accuracy is not essential. It is suitable for great temperature variations. This fit is suitable to use without any special requirements for precise guiding of shafts into certain holes.
Fits of this kind are intended for use where wide commercial tolerances may be required on the shaft. With these fits, the parts have large clearances and large tolerances. Loose running fits may be exposed to the effects of corrosion, contamination by dust, and thermal or mechanical deformation.
|
https://en.wikipedia.org/wiki/Engineering_fit
|
Aloading gaugeis a diagram or physical structure that defines the maximum height and width dimensions inrailwayvehiclesand their loads. Their purpose is to ensure that rail vehicles can pass safely through tunnels and under bridges, and keep clear of platforms, trackside buildings and structures.[1]Classification systems vary between different countries, and loading gauges may vary across a network, even if thetrack gaugeis uniform.
The term loading gauge can also be applied to the maximum size of roadvehiclesin relation totunnels,overpassesandbridges, anddoorsintoautomobile repair shops,bus garages,filling stations,residential garages,multi-storey car parksandwarehouses.
A related but separate gauge is thestructure gauge, which sets limits to the extent that bridges, tunnels and other infrastructure can encroach on rail vehicles. The difference between these two gauges is called theclearance. The specified amount of clearance makes allowance forwobblingof rail vehicles at speed.
The loading gauge restricts the size of passenger coaches, goods wagons (freight cars) andshipping containersthat can travel on a section of railway track. It varies across the world and often within a single railway system. Over time there has been a trend towards larger loading gauges and more standardization of gauges; some older lines have had theirstructure gaugesenhanced by raising bridges, increasing the height and width of tunnels and making other necessary alterations.Containerisationand a trend towards largershipping containershas led rail companies to increase structure gauges to compete effectively with road haulage.
The term "loading gauge" can also refer to a physical structure, sometimes using electronic detectors usinglight beamson an arm or gantry placed over the exit lines of goods yards or at the entry point to a restricted part of a network. The devices ensure that loads stacked on open or flat wagons stay within the height/shape limits of the line's bridges and tunnels, and prevent out-of-gauge rolling stock entering a stretch of line with a smaller loading gauge. Compliance with a loading gauge can be checked with aclearance car. In the past, these were simple wooden frames or physical feelers mounted on rolling stock. More recently,laserbeams are used.
The loading gauge is the maximum size of rolling stock. It is distinct from theminimum structure gauge, which sets limits to the size of bridges and tunnels on the line, allowing forengineering tolerancesand the motion of rail vehicles. The difference between the two is called theclearance. The terms "dynamicenvelope" or "kinematic envelope" – which include factors such as suspension travel, overhang on curves (at both ends and middle) and lateral motion on the track – are sometimes used in place of loading gauge.[citation needed]
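The relationship can be illustrated with a minimal sketch (all dimensions below are invented): a vehicle clears a structure when the structure gauge exceeds the kinematic envelope, i.e. the static vehicle profile plus its dynamic allowances, by at least the required clearance.

def clears_structure(structure_mm, vehicle_mm, dynamic_allowance_mm, required_clearance_mm):
    """True if a vehicle dimension, grown by its dynamic allowance (sway,
    suspension travel, overhang on curves), passes the corresponding
    structure dimension with at least the required clearance."""
    kinematic_envelope = vehicle_mm + dynamic_allowance_mm
    return structure_mm - kinematic_envelope >= required_clearance_mm

# Invented example: 4,380 mm structure opening, 4,250 mm vehicle height,
# 50 mm dynamic allowance, 75 mm required clearance
print(clears_structure(4380, 4250, 50, 75))   # True: 4380 - 4300 = 80 >= 75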
Therailway platform heightis also a consideration for the loading gauge of passenger trains. Where the two are not directly compatible, stairs may be required, which will increaseloading times. Where long carriages are used at a curved platform, there will begaps between the platform and the carriage door, causing risk. Problems increase where trains of several different loading gauges and train floor heights use (or even must pass without stopping at) the same platform.
The size of load that can be carried on a railway of a particular gauge is also influenced by the design of the rolling stock. Low-deck rolling stock can sometimes be used to carry taller 9 ft 6 in (2.9 m) shipping containers on lower-gauge lines, although such low-deck wagons cannot then carry as many containers.
Rapid transit(metro) railways generally have a very small loading gauge, which reduces the cost of tunnel construction. These systems only use their own specialised rolling stock.
Largerout-of-gaugeloads can also sometimes be conveyed by taking one or more of the following measures:
The loading gauge on the main lines of Great Britain, most of which were built before 1900, is generally smaller than in other countries. In mainland Europe, the slightly largerBerne gauge(Gabarit passe-partout international, PPI) was agreed to in 1913 and came into force in 1914.[2][3]As a result, British trains have noticeably and considerably smaller loading gauges and, for passenger trains, smaller interiors, despite the track beingstandard gauge, which is in line with much of the world.
This often results in increased costs for purchasing new trainsets or locomotives, as they must be specifically designed for the existing British network rather than being purchased "off-the-shelf". For example, the new trains for HS2 have a 50% premium applied to the "classic compatible" sets that will be "compatible" with the current (or "classic") rail network loading gauge as well as the HS2 line. The "classic compatible" trainsets will cost £40 million per trainset, whereas the HS2-only stock (built to the European loading gauge and only suitable to operate on HS2 lines) will cost £27 million per trainset, despite being physically larger.[4]
It was recognized even during the nineteenth century that this would pose problems, and countries whose railroads had been built or upgraded to a more generous loading gauge pressed neighboring countries to upgrade their own standards. This was particularly true in continental Europe, where the Nordic countries and Germany, with their relatively generous loading gauges, wanted their cars and locomotives to be able to run throughout the standard gauge network without being limited to a small size. France, which at the time had the most restrictive loading gauge, ultimately compromised, giving rise to the Berne gauge, which came into effect just before World War I.
Military railways were often built to particularly high standards, especially after the American Civil War and the Franco-Prussian War showed the importance of railroads in military deployment as well as mobilization. The Kaiserreich was particularly active in the construction of military railways, which were often built at great expense to be as flat, straight and permissive in loading gauge as possible while bypassing major urban areas, making those lines of little use to civilian traffic, particularly civilian passenger traffic. Those same factors have in some cases led to the subsequent abandonment of those railroads.
TheInternational Union of Railways(UIC) has developed a standard series of loading gauges named A, B, B+ and C.
In the European Union, the UIC directives were superseded in 2002 by the ERA Technical Specifications for Interoperability (TSI), which define a number of recommendations to harmonize the train systems. The TSI Rolling Stock specification (2002/735/EC) took over the UIC gauge definitions, defining kinematic gauges with a reference profile such that gauges GA and GB have a height of 4.35 m (14 ft 3 in) (they differ in shape), with gauge GC rising to 4.70 m (15 ft 5 in) and allowing a flat-roof width of 3.08 m (10 ft 1 in).[7] All cars must fall within an envelope 3.15 m (10 ft 4 in) wide on a 250 m (12.4 ch; 820 ft) radius curve. The TGVs, which are 2.9 m (9 ft 6 in) wide, fall within this limit.
The GB+ loading gauge designation refers to the plan to create a pan-European freight network for ISO containers and trailers carrying loaded ISO containers. These container trains (piggyback trains) fit into the B envelope with a flat top, so only minor changes are required to the widespread structures built to loading gauge B in continental Europe. A few structures in the British Isles were extended to fit GB+ as well, with the first lines to be rebuilt starting at the Channel Tunnel.[8]
Owing to their historical legacies, many member states' railways do not conform to the TSI specification. For example,Britain's role at the forefront of railway development in the 19th century has condemned it to the smallinfrastructure dimensionsof that era. Conversely, theloading gauges of countries that were satellites of the former Soviet Union are much larger than the TSI specification. Other than for GB+, they are not likely to be retrofitted, given the enormous cost and disruption that would be entailed.[citation needed]
A specific example of the value of these loading gauges is that they permit double-decker passenger carriages. Although mainly used on suburban commuter lines, France is notable for using them on its high-speed TGV services: the SNCF TGV Duplex carriages are 4,303 millimetres (14 ft 1+3⁄8 in) high.[14] The Netherlands, Belgium and Switzerland also operate large numbers of double-decker intercity trains, and in Germany the Bombardier Twindexx was introduced in InterCity service in December 2015.
Great Britain has (in general) the most restrictive loading gauge (relative to track gauge) in the world. That is a legacy of the British railway network being the world's oldest, and of having been built by a large number of different private companies, each with different standards for the width and height of trains. After nationalisation, a standard static gauge W5 was defined in 1951 that would virtually fit everywhere in the network. The W6 gauge is a refinement of W5, and the W6a changed the lower body to accommodate third-rail electrification. While the upper body is rounded for W6a with a static curve, there is an additional small rectangular notch for W7 to accommodate the transport of 2.44 m (8 ft 0 in) ISO containers, and the W8 loading gauge has an even larger notch spanning outside of the curve to accommodate the transport of 2.6 m (8 ft 6 in) ISO containers. While W5 to W9 are based on a rounded roof structure, those for W10 to W12 define a flat line at the top and, instead of a strict static gauge for the wagons, their sizes are derived from dynamic gauge computations for rectangular freight containers.[15]
Network Railuses aWloading gauge classification system of freight transport ranging from W6A (smallest) through W7, W8, W9, W9Plus, W10, W11 to W12 (largest). The definitions assume a common "lower sector structure gauge" with a common freight platform at 1,100 mm (43.31 in) above rail.[16]
In addition, gauge C1 provides a specification for standard coach stock, gauge C3 for longerMark 3coaching stock, gauge C4 forPendolinostock[17]and gauge UK1 for high-speed rail. There is also a gauge for locomotives. The size of container that can be conveyed depends both upon the size of the load that can be conveyed and the design of the rolling stock.[18]
A strategy was adopted in 2004 to guide enhancements of loading gauges[27]and in 2007 thefreight route utilisation strategywas published. That identified a number of key routes where the loading gauge should be cleared to W10 standard and, where structures are being renewed, that W12 is the preferred standard.[25]
Height and width of containers that can be carried on GB gauges (height by width). Units as per source material.
A Parliamentary committee headed byJames Stansfeldthen reported on 23 May 1892, "The evidence submitted to the Committee on the question of the diameter of the underground tubes containing the railways has been distinctly in favour of a minimum diameter of 11 ft 6 in (3.51 m)". After that, all tube lines were at least that size.[28]
Sweden uses shapes similar to the Central European loading gauge, but trains are allowed to be much wider.
There are three main classes in use (width × height):[29]
TheIron Ore Linenorth ofKirunawas the first electrified railway line in Sweden and has limited height clearance (SE-B) because of snow shelters. On the rest of the network belonging to theSwedish Transport Administration(Trafikverket), thestructure gaugeaccepts cars built to SE-A and thus accepts both cars built to UIC GA and GB. Some modern electric multiple units, likeRegina X50with derivatives, are somewhat wider than normally permitted by SE-A at 3.45 m (11 ft 4 in). This is generally acceptable as the extra width is above normal platform height, but it means that they can not use the high platforms thatArlanda Expressuses (Arlanda Central Stationhas normal clearances). The greater width allows sleeping cars in which tall people can sleep with straight legs and feet, which is not the case on the continent.
In the Netherlands, a similar shape to the UIC C is used that rises to 4.70 m (15 ft 5 in) in height. The trains are wider allowing for 3.40 m (11 ft 2 in) width similar to Sweden. About one third of the Dutch passenger trains usebilevel rail cars. However, Dutch platforms are much higher than Swedish ones.
The American loading gauge for freight cars on the North American rail network is generally based on standards set by the Association of American Railroads (AAR) Mechanical Division.[30] The most widespread standards are AAR Plate B and AAR Plate C,[31] but higher loading gauges have been introduced on major routes outside urban centers to accommodate rolling stock that makes better economic use of the network, such as auto carriers, hi-cube boxcars, and double-stack container loads.[32] The maximum width of 10 ft 8 in (3.25 m) applies at truck centers of 41 ft 3 in (12.57 m) (AAR Plate B), 46 ft 3 in (14.10 m) (AAR Plate C) and all other truck centers (of all other AAR Plates), measured on a 441 ft 8+3⁄8 in (134.63 m) radius, or 13°, curve.[30][31] In all cases where truck centers are increased, the corresponding decrease in width is covered by AAR Plates D1 and D2.[30][31]
Listed here are the maximum heights and widths for cars. However, the specification in each AAR plate shows a car cross section that is chamfered at the top and bottom, meaning that a compliant car is not permitted to fill an entire rectangle of the maximum height and width.[31]
Technically, AAR Plate B is still the maximum height and truck center combination[30][31]and the circulation of AAR Plate C is somewhat restricted. The prevalence of excess-height rolling stock, at first ~18 ft (5.49 m)piggybacksandhicube boxcars, then laterautoracks, airplane-parts cars, and flatcars for haulingBoeing 737fuselages, as well as 20 ft 3 in (6.17 m) high double-stackedcontainersincontainer well cars, has been increasing. This means that most, if not all, lines are now designed for a higher loading gauge. The width of these extra-height cars is covered byAAR Plate D1.[30][31]
All the Class I rail companies have invested in long-term projects to increase clearances to allow double-stack freight. The mainline North American rail networks of the Union Pacific, the BNSF, the Canadian National, and the Canadian Pacific have already been upgraded to AAR Plate K. This represents over 60% of the Class I rail network.[38]
The old standard North Americanpassenger railcaris 10 ft 6 in (3.20 m) wide by 14 ft 6 in (4.42 m) high and measures 85 ft 0 in (25.91 m)over coupler pulling faceswith 59 ft 6 in (18.14 m)truckcenters, or 86 ft 0 in (26.21 m) over coupler pulling faces with 60 ft 0 in (18.29 m) truck centers. In the 1940s and 1950s, the American passenger car loading gauge was increased to a 16 ft 6 in (5.03 m) height throughout most of the country outside the Northeast, to accommodatedome carsand laterSuperlinersand otherbilevelcommuter trains. Bilevel and Hi-level passenger cars have been in use since the 1950s, and new passenger equipment with a height of19 ft9+1⁄2in (6.03 m) has been built for use in Alaska and the Canadian Rockies. Thestructure gaugeof theMount Royal Tunnelused to limit the height of bilevel cars to 14 feet 6 inches (4.42 m) before it was permanently closed to interchange rail traffic prior to its conversion for theREMrapid transit system.[citation needed]
TheNew York City Subwayis an amalgamation of three former constituent companies, and while all arestandard gauge, inconsistencies in loading gauge prevent cars from the formerBMTandINDsystems (B Division) from running on the lines of the formerIRTsystem (A Division), and vice versa. This is mainly because IRT tunnels and stations are approximately 1 foot (305 mm) narrower than the others, meaning that IRT cars running on the BMT or IND lines would haveplatform gapsof over 8 inches (203 mm) between the train and some platforms, whereas BMT and IND cars would not even fit into an IRT station without hitting the platform edge. Taking this into account, all maintenance vehicles are built to IRT loading gauge so that they can be operated over the entire network, and employees are responsible forminding the gap.
Another inconsistency is the maximum permissible railcar length. Cars in the former IRT system are 51 feet (15.54 m) as of December 2013[update]. Railcars in the former BMT and IND can be longer: on the formerEastern Division, the cars are limited to 60 feet (18.29 m), while on the rest of the BMT and IND lines plus theStaten Island Railway(which uses modified IND stock) the cars may be as long as 75 feet (22.86 m).[39][40]
TheMassachusetts Bay Transportation Authority's (MBTA) rapid transit system is composed of four unique subway lines; while all lines are standard gauge, inconsistencies in loading gauge, electrification, and platform height prevent trains on one line from being used on another. The first segment of theGreen Line(known as theTremont Street subway) was constructed in 1897 to take the streetcars offBoston's busy downtown streets. When theBlue Lineopened in 1904, it only ran streetcar services; the line was converted to rapid transit in 1924 due to high passenger loads, but the tight clearances in the tunnel under theBoston Harborrequired narrower and shorter rapid transit cars.[41]TheOrange Linewas originally built in 1901 to accommodate heavy rail transit cars of higher capacity than streetcars. TheRed Linewas opened in 1912, designed to handle what were for a time the largest underground transit cars in the world.[42]: 127
TheLos Angeles Metro Railsystem is an amalgamation of two former constituent companies, theLos Angeles County Transportation Commissionand the Southern California Rapid Transit District; both of those companies were responsible for planning the initial system. It is composed of two heavy rail subway lines and several light rail lines with subway sections; while all lines are standard gauge, inconsistencies in electrification and loading gauge prohibit the light rail trains from operating on the heavy rail lines, and vice versa. The LACTC-plannedBlue Linewas opened in 1990 and partially operates on the route of thePacific Electricinterurban railroad line between downtown Los Angeles and Long Beach, which used overhead electrification and street-running streetcar vehicles. The SCRTD-plannedRed Line(later split into the Red andPurplelines) was opened in 1993 and was designed to handle high-capacity heavy rail transit cars that would operate underground. Shortly after the Red Line began operations, the LACTC and the SCRTD merged to form theLACMTA, which became responsible for planning and construction of theGreen,Gold,Expo, andKlines, as well as theD Line Extensionand theRegional Connector.
Major trunk raillines in East Asian countries, including China, North Korea, South Korea, as well as theShinkansenof Japan, have all adopted a loading gauge of 3,400 mm (11 ft 2 in) maximum width and can accept the maximum height of 4,500 mm (14 ft 9 in).[43]
The maximum height, width, and length of general Chinese rolling stock are 4,800 mm (15 ft 9 in), 3,400 mm (11 ft 2 in) and 26 m (85 ft 4 in) respectively, with an extra out-of-gauge load allowance of height and width 5,300 by 4,450 mm (17 ft 5 in by 14 ft 7 in) with some special shape limitation, corresponding to a structure gauge of 5,500 by 4,880 mm (18 ft 1 in by 16 ft 0 in).[44] China is building numerous new railways in sub-Saharan Africa and Southeast Asia (such as in Kenya and Laos), and these are being built to "Chinese Standards". This presumably means track gauge, loading gauge, structure gauge, couplings, brakes, electrification, etc.[45][circular reference] An exception may be double stacking, which has a height limit of 5,850 mm (19 ft 2 in). Metre-gauge lines in China have a loading gauge of 3,050 mm (10 ft 0 in).
Trains on theShinkansennetwork operate on1,435 mm(4 ft8+1⁄2in)standard gaugetrack and have a loading gauge of 3,400 mm (11 ft 2 in) maximum width and 4,500 mm (14 ft 9 in) maximum height.[46]This allows the operation of double-deck high-speed trains.
Mini Shinkansen(former conventional1,067 mmor3 ft 6 innarrow gauge lines that have beenregaugedinto1,435 mmor4 ft8+1⁄2instandard gauge) and some private railways in Japan (including some lines of theTokyo subwayand all of theOsaka Metro) also use standard gauge; however, their loading gauges are different.
The rest of Japan's system is discussed undernarrow gauge, below.
The body frame may have a maximum height of 4,500 mm (14 ft 9 in) and a maximum width of 3,400 mm (11 ft 2 in), with additional installations allowed up to 3,600 mm (11 ft 10 in). The full width of 3,400 mm is only allowed above 1,250 mm (4 ft 1 in), as the common passenger platforms were built for the former standard trains of 3,200 mm (10 ft 6 in) width.
There is currently no uniform standard for loading gauges in the country and both loading gauges and platform heights vary by rail line.
TheNorth–South Commuter Railwayallows passenger trains with a carbody width of 3,100 mm (10 ft 2 in) and a height of 4,300 mm (14 ft 1 in). Additional installations shall also be allowed up to 3,300 mm (10 ft 10 in) at a platform height of 1,100 mm (3 ft 7 in) where it is limited by half-heightplatform screen doors. Above the platform gate height of 1,200 mm (3 ft 11 in) above the platforms, out-of-gauge installations can be further maximized to the Asian standard at 3,400 mm (11 ft 2 in).[47]
Meanwhile, thePNR South Long Haulwill follow the Chinese gauge and therefore use a larger carbody width of 3,300 mm (10 ft 10 in) from the specifications of passenger rolling stock, and a height of 4,770 mm (15 ft 8 in) per P70-type boxcar specifications.[47]
Some of the new railways being built in Africa allow for double-stacked containers. The overall height, about 5,800 mm (19 ft 0 in), depends on the height of each container, 2,438 mm (8 ft 0 in) or 2,900 mm (9 ft 6 in), plus the height of the deck of the flat wagon, about 1,000 mm (3 ft 3 in). This exceeds the Chinese height standard for single-stacked containers of 4,800 mm (15 ft 9 in). An additional height of about 900 mm (2 ft 11 in) is needed for the overhead wires of 25 kV AC electrification.
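As a rough worked sum using the round figures quoted above (a wagon deck of about 1,000 mm and two 8 ft containers), the stack height is

$$1{,}000\ \text{mm} + 2 \times 2{,}438\ \text{mm} \approx 5{,}900\ \text{mm},$$

broadly in line with the approximately 5,800 mm figure quoted; substituting a 9 ft 6 in box raises the total accordingly.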
The permissible width of the new African standard gauge railways is 3,400 mm (11 ft 2 in).
The standard gauge lines ofNew South Wales Government Railwaysallowed for a width of 9 ft 6 in (2.90 m) until 1910, after a conference of the states created a new standard of 10 ft 6 in (3.20 m), with corresponding increase in track centres. The narrow widths have mostly been eliminated, except, for example, at the mainline platforms atGosfordand some sidings. The longest carriages are 72 ft 6 in (22.10 m).[citation needed]
TheCommonwealth Railwaysadopted the national standard of 10 ft 6 in (3.20 m) when they were established in 1912, although no connection with New South Wales was made until 1970.[citation needed]
A T set of the late 1980s was 3,000 mm (9 ft 10.1 in) wide. Track centres from Penrith to Mount Victoria and from Gosford to Wyong have been gradually widened to suit. The D set intercity sets are, however, 3,100 mm (10 ft 2.0 in) wide, so further costly modification was required beyond Springwood,[48] which was completed in 2020.[49]
TheKwinana,EasternandEastern Goldfieldslines inWestern Australiawere built with a loading gauge of 12 ft (3,700 mm) wide and 20 ft (6,100 mm) tall to allow for trailer on flatcar (TOFC) traffic when converted to dual gauge in the 1960s.[50]
In Finland, rail cars can be up to 3.4 m (11 ft 2 in) wide with a permitted height from 4.37 m (14 ft 4 in) on the sides to 5.3 m (17 ft 5 in) in the centre.[54]Thetrack gaugeis1,524 mm(5 ft), differing4 mm (5⁄32in) from the1,520 mm(4 ft11+27⁄32in) Russian track gauge.
The Russian loading gauges are defined in standard GOST 9238 (ГОСТ 9238–83, ГОСТ 9238–2013), with the current 2013 standard named "Габариты железнодорожного подвижного состава и приближения строений" (construction of rolling stock clearance diagrams [official English title]).[55] It was accepted by the Interstate Council for Standardization, Metrology and Certification to be valid in Russia, Belarus, Moldova, Ukraine, Uzbekistan and Armenia.[55] The loading gauge is generally wider than in Europe, but there are many exception standards.
The standard defines static envelopes for trains on the national network as T, Tcand Tpr. The static profile 1-T is the common standard on the complete 1520 mm rail network including the CIS and Baltic states. The structure clearance is given as S, Spand S250. There is a tradition that structure clearance is much bigger than the common train sizes. For international traffic, the standard references the kinematic envelope for GC and defines a modified GCrufor its high-speed trains. For other international traffic, there are 1-T, 1-VM, 0-VM, 02-VM and 03-VMst/03-VMkfor the trains and 1-SM for the structure clearance.[55]
The main static profile T allows for a maximum width of 3,750 mm (12 ft 3+5⁄8 in) rising to a maximum height of 5,300 mm (17 ft 4+11⁄16 in). The profile Tc allows that width only at a height of 3,000 mm (9 ft 10+1⁄8 in), requiring a maximum of 3,400 mm (11 ft 1+7⁄8 in) below 1,270 mm (50 in), which matches the standard for train platforms (with a height of 1,100 mm [43.3 in]). The profile Tpr has the same lower frame requirement but reduces the maximum upper body width to 3,500 mm (11 ft 5+13⁄16 in). The more universal profile 1-T has the complete body at a maximum width of 3,400 mm (11 ft 1+7⁄8 in), still rising to a height of 5,300 mm (17 ft 4+11⁄16 in).[55] An exception is made for double-stacking, for which the maximum height is 6,150 mm (20 ft 2+1⁄8 in) or 6,400 mm (20 ft 11+15⁄16 in).
The structure gauge S requires buildings to be placed at a minimum of 3,100 mm (10 ft 2+1⁄16 in) from the track centreline. Bridges and tunnels must have a clearance of at least 4,900 mm (16 ft 15⁄16 in) wide and 6,400 mm (20 ft 11+15⁄16 in) high. The structure gauge Sp for passenger platforms allows 4,900 mm (16 ft 15⁄16 in) only above 1,100 mm (3 ft 7+5⁄16 in) (the common platform height), requiring a width of 3,840 mm (12 ft 7+3⁄16 in) below that line.[55] For double-stacking, the minimum overhead wiring height must be 6,500 mm (21 ft 3+7⁄8 in) (for a maximum vehicle height of 6,150 mm [20 ft 2+1⁄8 in]) or 6,750 mm (22 ft 1+3⁄4 in) (for a maximum vehicle height of 6,400 mm [20 ft 11+15⁄16 in]).
The main platform is defined to have a height of 1,100 mm (43.3 in) at a distance of 1,920 mm (75.6 in) from the center of the track to allow for trains with profile T. Low platforms at a height of 200 mm (7.9 in) may be placed at 1,745 mm (68.7 in) from the center of the track. A medium platform is a variant of the high platform but at a height of 550 mm (21.7 in).[55]The latter matches with the TSI height in Central Europe. In the earlier standard from 1983, the profile T would only be allowed to pass low platforms at 200 mm (7.87 in) while the standard high platform for cargo and passenger platforms would be placed no less than 1,750 mm (68.9 in) from the center of the track.[56]That matches with the Tc, Tprand the universal 1-T loading gauge.
In Spain, rail cars can be up to 3.44 m (11 ft 3.5 in) wide with a permitted height of 4.33 m (14 ft 2.5 in); this loading gauge is called the Iberian loading gauge. It is the standard loading gauge for conventional (Iberian-gauge) railways in Spain.
In Portugal, there are three railway loading gauge standards for conventional (Iberian-gauge) railways: Gabarito PT b, Gabarito PT b+ and Gabarito PT c. Gabarito PT b (also called CPb) and Gabarito PT b+ (also called CPb+) allow rail cars to be 3.44 m (11 ft 3.5 in) wide with a permitted height of 4.5 m (14 ft 9 in), although CPb+ has a slightly larger profile area. Gabarito PT c allows rail cars to be 3.44 m (11 ft 3.5 in) wide with a permitted height of 4.7 m (15 ft 5 in). Gabarito PT b and PT b+ are both used, with PT b+ being more common overall. Gabarito PT c is currently not used. In Lisbon, there is a suburban railway line, the Cascais Line, that follows a fourth, non-standard loading gauge.
Narrow-gauge railways generally have a smaller loading gauge than standard-gauge ones, and it is this, rather than the rail gauge itself, that is a major source of cost savings. For example, the Lyn locomotive of the Lynton and Barnstaple Railway is 7 feet 2 inches (2.18 m) wide. By comparison, several standard-gauge 73 class locomotives of the NSWR, which are 9 feet 3 inches (2.82 m) wide, have been converted for use on 610 mm (2 ft) cane tramways, where there are no narrow bridges, tunnels or track centres to cause trouble. The 6E1 locomotives of the 1,067 mm (3 ft 6 in) South African Railways are 9 feet 6 inches (2.9 m) wide.
A large number of railways using the 762 mm (2 ft 6 in) gauge used the same rolling stock plans, which were 7 ft 0 in (2.13 m) wide.
The Japanese national network operated by the Japan Railways Group employs 1,067 mm (3 ft 6 in) narrow gauge. The maximum allowed width of the rolling stock is 3,000 mm (9 ft 10 in) and the maximum height is 4,100 mm (13 ft 5 in); however, a number of JR lines were constructed as private railways prior to nationalisation in the early 20th century and feature loading gauges smaller than the standard. These include the Chūō Main Line west of Takao, the Minobu Line, and the Yosan Main Line west of Kan'onji (3,900 mm or 12 ft 10 in height). Nevertheless, advances in pantograph technology have largely eliminated the need for separate rolling stock in these areas.
There are many private railway companies in Japan and the loading gauge is different for each company.[59]
The South African national network employs1,067 mm(3 ft 6 in) gauge. The maximum width of therolling stockis 3,048 mm (10 ft 0 in) and maximum height is 3,962 mm (13 ft 0 in),[59]which is greater than the normal British loading gauge for standard gauge vehicles.
The railways use1,067 mm(3 ft 6 in) gauge. The maximum width of the rolling stock is 2,830 mm (9 ft 3 in) and maximum height is3,815 mm (12 ft6+1⁄4in).[60]
762 mm(2 ft 6 in) gauge for theUnited KingdomandSierra Leone:
The structure gauge, which specifies the dimensions of the lowest and narrowest bridges or tunnels on the track, complements the loading gauge, which specifies the tallest and widest allowable vehicle dimensions. There is a gap between the structure gauge and the loading gauge, and some allowance must be made for the dynamic movement of vehicles (sway) to avoid mechanical interference that would cause equipment and structural damage.
While trains of a particular loading gauge can in principle travel freely over tracks of a matching structure gauge, in practice problems can still occur. In an accident at Moston station, an old platform not normally used by freight trains was struck by a train that was not within its intended W6a gauge, because two container fastenings were hanging over the side. Analysis showed that a properly configured train would have passed safely, even though the platform could not accommodate the maximum design sway of W6a. Accepting reduced margins for old structures is normal practice where there have been no incidents, but had the platform met modern standards, with their greater safety margin, the out-of-gauge train would have passed without incident.[61][62][63]
Trains larger than the loading gauge, but not too large, can operate if the structure gauge is carefully measured, and the trip is subject to various special regulations.
|
https://en.wikipedia.org/wiki/Loading_gauge
|
Themargin of erroris a statistic expressing the amount of randomsampling errorin the results of asurvey. The larger the margin of error, the less confidence one should have that a poll result would reflect the result of a simultaneous census of the entirepopulation. The margin of error will be positive whenever a population is incompletely sampled and the outcome measure has positivevariance, which is to say, whenever the measurevaries.
The termmargin of erroris often used in non-survey contexts to indicateobservational errorin reporting measured quantities.
Consider a simple yes/no poll $P$ as a sample of $n$ respondents drawn from a population $N$ ($n \ll N$) reporting the percentage $p$ of yes responses. We would like to know how close $p$ is to the true result of a survey of the entire population $N$, without having to conduct one. If, hypothetically, we were to conduct a poll $P$ over subsequent samples of $n$ respondents (newly drawn from $N$), we would expect those subsequent results $p_1, p_2, \ldots$ to be normally distributed about $\overline{p}$, the true but unknown percentage of the population. The margin of error describes the distance within which a specified percentage of these results is expected to vary from $\overline{p}$.
Going by theCentral limit theorem, the margin of error helps to explain how the distribution of sample means (or percentage of yes, in this case) will approximate a normal distribution as sample size increases. If this applies, it would speak about the sampling being unbiased, but not about the inherent distribution of the data.[1]
According to the 68-95-99.7 rule, we would expect that 95% of the results $p_1, p_2, \ldots$ will fall within about two standard deviations ($\pm 2\sigma_P$) either side of the true mean $\overline{p}$. This interval is called the confidence interval, and the radius (half the interval) is called the margin of error, corresponding to a 95% confidence level.
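A small simulation can make this concrete. The sketch below repeatedly draws polls of $n$ respondents from a population with an assumed true "yes" share and counts how many of the observed percentages fall within two standard errors of that share; the true share, sample size and number of trials are arbitrary illustrative choices.

```python
# Simulate repeated polls and check how often the observed "yes" share falls
# within two standard errors of the true population share (expected: ~95%).
import random

def poll(n: int, true_p: float) -> float:
    """Simulate one poll of n respondents; return the observed 'yes' share."""
    return sum(random.random() < true_p for _ in range(n)) / n

true_p, n, trials = 0.52, 1000, 5_000
results = [poll(n, true_p) for _ in range(trials)]

sigma = (true_p * (1 - true_p) / n) ** 0.5            # standard error of the share
within = sum(abs(r - true_p) <= 2 * sigma for r in results) / trials
print(f"standard error ~ {sigma:.4f}; share of results within ±2σ: {within:.1%}")
```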
Generally, at a confidence level $\gamma$, a sample of size $n$ from a population having expected standard deviation $\sigma$ has a margin of error

$$MOE_\gamma = z_\gamma \sqrt{\frac{\sigma^2}{n}}$$
where $z_\gamma$ denotes the quantile (also, commonly, a z-score), and $\sqrt{\tfrac{\sigma^2}{n}}$ is the standard error.
We would expect the average of normally distributed values $p_1, p_2, \ldots$ to have a standard deviation which somehow varies with $n$. The smaller $n$, the wider the margin. This is called the standard error $\sigma_{\overline{p}}$.
For the single result from our survey, we assume that $p = \overline{p}$, and that all subsequent results $p_1, p_2, \ldots$ together would have a variance $\sigma_P^2 = P(1-P)$.
Note that $p(1-p)$ corresponds to the variance of a Bernoulli distribution.
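Combining the Bernoulli variance with the standard error expression above gives the standard error of the observed share, which is the quantity the margin of error scales:

$$\sigma_{\overline{p}} = \frac{\sigma_P}{\sqrt{n}} = \sqrt{\frac{p(1-p)}{n}}.$$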
For a confidence level $\gamma$, there is a corresponding confidence interval about the mean $\mu \pm z_\gamma\sigma$, that is, the interval $[\mu - z_\gamma\sigma,\ \mu + z_\gamma\sigma]$ within which values of $P$ should fall with probability $\gamma$. Precise values of $z_\gamma$ are given by the quantile function of the normal distribution (which the 68-95-99.7 rule approximates).
Note that $z_\gamma$ is undefined for $|\gamma| \geq 1$; that is, $z_{1.00}$ is undefined, as is $z_{1.10}$.
Since $\max \sigma_P^2 = \max P(1-P) = 0.25$ at $p = 0.5$, we can arbitrarily set $p = \overline{p} = 0.5$ and calculate $\sigma_P$, $\sigma_{\overline{p}}$, and $z_\gamma\sigma_{\overline{p}}$ to obtain the maximum margin of error for $P$ at a given confidence level $\gamma$ and sample size $n$, even before having actual results. With $p = 0.5$ and $n = 1013$,

$$MOE_{95} = z_{0.95}\,\sigma_{\overline{p}} = 1.96\sqrt{\frac{0.25}{1013}} \approx 0.031 = \pm 3.1\%.$$
Also, usefully, for any reported $MOE_{95}$, the margin at other confidence levels can be obtained by rescaling with the corresponding quantile; for example,

$$MOE_{99} = \frac{z_{0.99}}{z_{0.95}}\,MOE_{95} \approx 1.3 \times MOE_{95}.$$
If a poll has multiple percentage results (for example, a poll measuring a single multiple-choice preference), the result closest to 50% will have the highest margin of error. Typically, it is this number that is reported as the margin of error for the entire poll. Imagine poll $P$ reports $p_a, p_b, p_c$ as 71%, 27%, 2%, with $n = 1013$.
As a given percentage approaches the extremes of 0% or 100%, its margin of error approaches ±0%.
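A short calculation using the normal approximation above illustrates both points for the example poll (71%, 27%, 2% with $n = 1013$); the helper function and the rounded 1.96 quantile are the only assumptions.

```python
# 95% margin of error for each reported percentage of the example poll.
import math

def moe_95(p: float, n: int) -> float:
    """95% margin of error for a sample proportion p with sample size n."""
    z = 1.96                                  # 95% quantile of the normal distribution
    return z * math.sqrt(p * (1 - p) / n)

n = 1013
for p in (0.71, 0.27, 0.02):
    print(f"p = {p:.0%}: ±{moe_95(p, n):.1%}")
# The 71% result, being closest to 50%, carries the largest margin (about ±2.8%),
# while the 2% result has the smallest (about ±0.9%).
```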
Imagine multiple-choice poll $P$ reports $p_a, p_b, p_c$ as 46%, 42%, 12%, with $n = 1013$. As described above, the margin of error reported for the poll would typically be $MOE_{95}(P_a)$, as $p_a$ is closest to 50%. The popular notion of a statistical tie or statistical dead heat, however, concerns itself not with the accuracy of the individual results, but with that of the ranking of the results. Which is in first?
If, hypothetically, we were to conduct a poll $P$ over subsequent samples of $n$ respondents (newly drawn from $N$), and report the result $p_w = p_a - p_b$, we could use the standard error of difference to understand how $p_{w_1}, p_{w_2}, p_{w_3}, \ldots$ is expected to fall about $\overline{p_w}$. For this, we need to apply the sum of variances to obtain a new variance $\sigma_{P_w}^2$,

$$\sigma_{P_w}^2 = \sigma_{P_a}^2 + \sigma_{P_b}^2 - 2\sigma_{P_a,P_b} = P_a(1-P_a) + P_b(1-P_b) + 2P_aP_b,$$
where $\sigma_{P_a,P_b} = -P_aP_b$ is the covariance of $P_a$ and $P_b$.
Thus (after simplifying),

$$\sigma_{P_w}^2 = P_a + P_b - (P_a - P_b)^2.$$
Note that this assumes that $P_c$ is close to constant, that is, respondents choosing either A or B would almost never choose C (making $P_a$ and $P_b$ close to perfectly negatively correlated). With three or more choices in closer contention, choosing a correct formula for $\sigma_{P_w}^2$ becomes more complicated.
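Plugging the example figures (46%, 42%, $n = 1013$) into the simplified expression gives a worked check under the same normal approximation:

$$\sigma_{P_w}^2 = 0.46 + 0.42 - (0.04)^2 = 0.8784, \qquad MOE_{95}(P_w) = 1.96\sqrt{\frac{0.8784}{1013}} \approx \pm 5.8\%.$$

Since the four-point lead between the two front-runners is smaller than this margin, the ranking is not established at the 95% confidence level, which is what the phrase "statistical tie" expresses.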
The formulae above for the margin of error assume that there is an infinitely large population and thus do not depend on the size of the population $N$, but only on the sample size $n$. According to sampling theory, this assumption is reasonable when the sampling fraction is small. The margin of error for a particular sampling method is essentially the same regardless of whether the population of interest is the size of a school, city, state, or country, as long as the sampling fraction is small.
In cases where the sampling fraction is larger (in practice, greater than 5%), analysts might adjust the margin of error using a finite population correction (FPC) to account for the added precision gained by sampling a much larger percentage of the population. FPC can be calculated using the formula[2]

$$\operatorname{FPC} = \sqrt{\frac{N-n}{N-1}}$$
...and so, if poll $P$ were conducted over 24% of, say, an electorate of 300,000 voters ($n = 72{,}000$), the maximum margin of error becomes

$$MOE_{95} \approx 1.96\sqrt{\frac{0.25}{72{,}000}}\,\sqrt{\frac{300{,}000 - 72{,}000}{300{,}000 - 1}} \approx \pm 0.32\%.$$
Intuitively, for appropriately large $N$,

$$\sqrt{\frac{N-n}{N-1}} \approx 1 \quad \text{for } n \ll N, \qquad \sqrt{\frac{N-n}{N-1}} = 0 \quad \text{for } n = N.$$
In the former case,n{\displaystyle n}is so small as to require no correction. In the latter case, the poll effectively becomes a census and sampling error becomes moot.
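A quick numeric check of the two regimes described above, using the FPC formula given earlier and the electorate size from this section's example:

```python
# Finite population correction (FPC) in the two limiting regimes: a small
# sampling fraction needs essentially no correction, while sampling the whole
# population drives the sampling error to zero.
def fpc(N: int, n: int) -> float:
    return ((N - n) / (N - 1)) ** 0.5

print(round(fpc(300_000, 1_013), 3))   # n << N  -> ~0.998 (no practical correction)
print(fpc(300_000, 300_000))           # n == N  -> 0.0 (a census has no sampling error)
```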
|
https://en.wikipedia.org/wiki/Margin_of_error
|
Precision engineeringis a subdiscipline ofelectrical engineering,software engineering,electronics engineering,mechanical engineering, andoptical engineeringconcerned with designing machines, fixtures, and other structures that have exceptionally lowtolerances, are repeatable, and are stable over time. These approaches have applications inmachine tools,MEMS,NEMS,optoelectronicsdesign, and many other fields.
Precision engineering is a branch of engineering that focuses on the design, development and manufacture of products with high levels of accuracy and repeatability.
It involves the use of advanced technologies and techniques to achieve tight tolerances and dimensional control in the manufacturing process.
Professors Hiromu Nakazawa and Pat McKeown provide the following list of goals for precision engineering:
This article incorporatespublic domain materialfrom theNational Institute of Standards and Technology
|
https://en.wikipedia.org/wiki/Precision_engineering
|
Inrailroading,slack actionis the amount of free movement of one car before it transmits its motion to an adjoining coupled car. This free movement results from the fact that in railroad practice, cars are loosely coupled, and thecouplingis often combined with a shock-absorbing device, a "draft gear", which, under stress, substantially increases the free movement as the train is started or stopped. Loose coupling is necessary to enable the train to bend around curves and is an aid in starting heavy trains, since the application of the locomotive power to the train operates on each car in the train successively, and the power is thus utilized to start only one car at a time.
The UK formerly usedthree-link couplings, which allowed a large amount of slack. These were soon replaced on passenger stock bybuffers and chain couplerswhere the couplings are held tight by buffers and shortened by a turnbuckle, while in most other parts of the world automatic couplings, such as theJanney couplerand theScharfenberg coupler, were adopted from the late nineteenth century on. Three-link couplings are a rarity in modern use.
|
https://en.wikipedia.org/wiki/Slack_action
|
Aspecificationoften refers to a set of documented requirements to be satisfied by a material, design, product, or service.[1]A specification is often a type oftechnical standard.
There are different types of technical or engineering specifications (specs), and the term is used differently in different technical contexts. They often refer to particular documents, and/or particular information within them. The wordspecificationis broadly defined as "to state explicitly or in detail" or "to be specific".
Arequirement specificationis a documentedrequirement, or set of documented requirements, to be satisfied by a given material, design, product, service, etc.[1]It is a common early part ofengineering designandproduct developmentprocesses in many fields.
Afunctional specificationis a kind of requirement specification, and may show functional block diagrams.[citation needed]
Adesign or product specificationdescribes the features of thesolutionsfor the Requirement Specification, referring to either a designed solutionorfinal produced solution. It is often used to guide fabrication/production. Sometimes the termspecificationis here used in connection with adata sheet(orspec sheet), which may be confusing. A data sheet describes the technical characteristics of an item or product, often published by a manufacturer to help people choose or use the products. A data sheet is not a technical specification in the sense of informing how to produce.
An "in-service" or "maintained as"specification, specifies the conditions of a system or object after years of operation, including the effects of wear and maintenance (configuration changes).
Specifications are a type of technical standard that may be developed by any of various kinds of organizations, in both thepublicandprivatesectors. Example organization types include acorporation, aconsortium(a small group of corporations), atrade association(an industry-wide group of corporations), a national government (including its different public entities,regulatory agencies, and national laboratories and institutes), aprofessional association(society), a purpose-madestandards organizationsuch asISO, or vendor-neutral developed generic requirements. It is common for one organization torefer to(reference,call out,cite) the standards of another. Voluntary standards may become mandatory if adopted by a government or business contract.
Inengineering,manufacturing, andbusiness, it is vital forsuppliers,purchasers, and users of materials, products, or services to understand and agree upon all requirements.[2]
A specification may refer to astandardwhich is often referenced by acontractor procurement document, or an otherwise agreed upon set of requirements (though still often used in the singular). In any case, it provides the necessary details about the specific requirements.
Standards for specifications may be provided by government agencies, standards organizations (SAE,AWS,NIST,ASTM,ISO/IEC,CEN/CENELEC,DoD, etc.),trade associations,corporations, and others. A memorandum published byWilliam J. Perry,U.S. Defense Secretary, on 29 June 1994 announced that a move to "greater use of performance and commercial specifications and standards" was to be introduced, which Perry saw as "one of the most important actions that [the Department of Defense] should take" at that time.[3]The followingBritish standardsapply to specifications:
A design/product specification does not necessarily prove aproductto be correct or useful in every context. An item might beverifiedto comply with a specification or stamped with a specification number: this does not, by itself, indicate that the item is fit for other, non-validated uses. The people who use the item (engineers,trade unions, etc.) or specify the item (building codes, government, industry, etc.) have the responsibility to consider the choice of available specifications, specify the correct one, enforce compliance, and use the item correctly.Validationof suitability is necessary.
Public sector procurementrules in theEuropean Unionand United Kingdom requirenon-discriminatorytechnical specifications to be used to identify the purchasing organisation's requirements. The rules relating to public works contracts initially prohibited "technical specifications having a discriminatory effect" from 1971; this principle was extended to public supply contracts by the then European Communities' Directive 77/62/EEC coordinating procedures for the award of public supply contracts, adopted in 1976.[7]Some organisations provide guidance on specification-writing for their staff and partners.[8][9]In addition to identifying the specific attributes required of the goods or services being purchased, specifications in the public sector may also make reference to the organisation's current corporate objectives or priorities.[8]: 3
Sometimes a guide or astandard operating procedureis available to help write and format a good specification.[10][11][12]A specification might include:
Specifications in North America form part of the contract documents that accompany and govern the drawings for construction of building and infrastructure projects. Specifications describe the quality and performance of building materials, using code citations and published standards, whereas the drawings orbuilding information model(BIM) illustrates quantity and location of materials. The guiding master document of names and numbers is the latest edition ofMasterFormat. This is a consensus document that is jointly sponsored by two professional organizations:Construction Specifications CanadaandConstruction Specifications Institutebased in the United States and updated every two years.
While there is a tendency to believe that "specifications overrule drawings" in the event of discrepancies between the text document and the drawings, the actual intent must be made explicit in the contract between the Owner and the Contractor. The standard AIA (American Institute of Architects) and EJCDC (Engineering Joint Contract Documents Committee) documents state that the drawings and specifications are complementary, together providing the information required for a complete facility. Many public agencies, such as the Naval Facilities Command (NAVFAC), state that the specifications overrule the drawings. This is based on the idea that words are easier for a jury (or mediator) to interpret than drawings in case of a dispute.
The standard listing of construction specifications falls into50 Divisions, or broad categories of work types and work results involved in construction. The divisions are subdivided into sections, each one addressing a specific material type (concrete) or a work product (steel door) of the construction work. A specific material may be covered in several locations, depending on the work result: stainless steel (for example) can be covered as a sheet material used in flashing and sheet Metal in division 07; it can be part of a finished product, such as a handrail, covered in division 05; or it can be a component of building hardware, covered in division 08. The original listing of specification divisions was based on the time sequence of construction, working from exterior to interior, and this logic is still somewhat followed as new materials and systems make their way into the construction process.
Each section is subdivided into three distinct parts: "general", "products" and "execution". The MasterFormat and SectionFormat[20] systems can be successfully applied to residential, commercial, civil, and industrial construction, although many architects find the rather voluminous commercial style of specifications too lengthy for most residential projects and therefore either produce more abbreviated specifications of their own or use ArCHspec (which was specifically created for residential projects). Master specification systems are available from multiple vendors such as Arcom, Visispec, BSD, and Spectext. These systems were created to standardize language across the United States and are usually subscription based.
Specifications can be either "performance-based", whereby the specifier restricts the text to stating the performance that must be achieved by the completed work, "prescriptive" where the specifier states the specific criteria such as fabrication standards applicable to the item, or "proprietary", whereby the specifier indicates specific products, vendors and even contractors that are acceptable for each workscope. In addition, specifications can be "closed" with a specific list of products, or "open" allowing for substitutions made by the constructor. Most construction specifications are a combination of performance-based and proprietary types, naming acceptable manufacturers and products while also specifying certain standards and design criteria that must be met.
While North American specifications are usually restricted to broad descriptions of the work,Europeanones and Civil work can include actual work quantities, including such things asareaofdrywallto be built in square meters, like abill of materials. This type of specification is a collaborative effort between a specification writer and aquantity surveyor. This approach is unusual in North America, where each bidder performs a quantity survey on the basis of both drawings and specifications. In many countries on the European continent, content that might be described as "specifications" in the United States are covered under the building code or municipal code. Civil and infrastructure work in the United States often includes a quantity breakdown of the work to be performed as well.
Although specifications are usually issued by thearchitect's office, specification writing itself is undertaken by the architect and the variousengineersor by specialist specification writers. Specification writing is often a distinct professional trade, with professional certifications such as "Certified Construction Specifier" (CCS) available through the Construction Specifications Institute and the Registered Specification Writer (RSW)[21]through Construction Specifications Canada. Specification writers may be separate entities such assub-contractorsor they may beemployeesof architects, engineers, or construction management companies. Specification writers frequently meet with manufacturers ofbuilding materialswho seek to have their products specified on upcoming construction projects so that contractors can include their products in the estimates leading to their proposals.
In February 2015, ArCHspec went live, from ArCH (Architects Creating Homes), a nationwide American professional society of architects whose purpose is to improve residential architecture. ArCHspec was created specifically for use by licensed architects while designing SFR (Single Family Residential) architectural projects. Unlike the more commercial CSI/CSC (50+ division commercial specifications), ArCHspec utilizes the more concise 16 traditional Divisions, plus a Division 0 (Scope & Bid Forms) and Division 17 (low voltage). Many architects, up to this point, did not provide specifications for residential designs, which is one of the reasons ArCHspec was created: to fill a void in the industry with more compact specifications for residential projects. Shorter form specifications documents suitable for residential use are also available through Arcom, and follow the 50 division format, which was adopted in both the United States and Canada starting in 2004. The 16 division format is no longer considered standard, and is not supported by either CSI or CSC, or any of the subscription master specification services, data repositories, product lead systems, and the bulk of governmental agencies.
The United States'Federal Acquisition Regulationgoverningprocurementfor thefederal governmentand its agencies stipulates that a copy of the drawings and specifications must be kept available on a construction site.[22]
Specifications inEgyptform part of contract documents. The Housing and Building National Research Center (HBRC) is responsible for developing construction specifications and codes. The HBRC has published more than 15 books which cover building activities likeearthworks, plastering, etc.
Specifications in the UK are part of the contract documents that accompany and govern the construction of a building. They are prepared by construction professionals such asarchitects,architectural technologists,structural engineers,landscape architectsandbuilding services engineers. They are created from previous project specifications, in-house documents or master specifications such as theNational Building Specification(NBS). The National Building Specification is owned by theRoyal Institute of British Architects(RIBA) through their commercial group RIBA Enterprises (RIBAe). NBS master specifications provide content that is broad and comprehensive, and delivered using software functionality that enables specifiers to customize the content to suit the needs of the project and to keep up to date.
UK project specification types fall into two main categories: prescriptive and performance. Prescriptive specifications define the requirements using generic or proprietary descriptions of what is required, whereas performance specifications focus on the outcomes rather than the characteristics of the components.
Specifications are an integral part ofBuilding Information Modelingand cover the non-geometric requirements.
Pharmaceutical products can usually be tested and qualified by variouspharmacopoeias. Current existing pharmaceutical standards include:
If any pharmaceutical product is not covered by the abovestandards, it can be evaluated by the additional source of pharmacopoeias from other nations, from industrial specifications, or from a standardizedformularysuch as
A similar approach is adopted by the food manufacturing industry, in which the Codex Alimentarius ranks as the highest standard, followed by regional and national standards.[23]
The coverage of food and drug standards by ISO is currently less developed and has not yet been put forward as an urgent agenda item, owing to the tight restrictions of regional or national constitutions.[24][25]
Specifications and other standards can be externally imposed, as discussed above, but there are also internal manufacturing and quality specifications. These exist not only for the food or pharmaceutical product but also for the processing machinery, quality processes, packaging, logistics (cold chain), etc., and are exemplified by ISO 14134 and ISO 15609.[26][27]
The converse of explicit statement of specifications is a process for dealing with observations that are out-of-specification. TheUnited States Food and Drug Administrationhas published a non-binding recommendation that addresses just this point.[28]
At the present time, much of the information and regulations concerning food and food products remain in a form which makes it difficult to apply automated information processing, storage and transmission methods and techniques.
Data systems that can process, store and transfer information about food and food products need formal specifications for the representations of data about food and food products in order to operate effectively and efficiently.
The development of formal specifications for food and drug data, with the necessary and sufficient clarity and precision for use specifically by digital computing systems, has begun to emerge from some government agencies and standards organizations: the United States Food and Drug Administration has published specifications for a "Structured Product Label" which drug manufacturers must by mandate use to submit electronically the information on a drug label.[29] Recently, the ISO has made some progress in the area of food and drug standards and formal specifications for data about regulated substances through the publication of ISO 11238.[30]
In many contexts, particularly software, specifications are needed to avoid errors due to lack of compatibility, for instance, in interoperability issues.
For instance, when two applications share Unicode data but use different normal forms, or use them incorrectly, in an incompatible way, or without sharing a minimum set of interoperability specifications, errors and data loss can result. For example, Mac OS X has many components that prefer or require only decomposed characters (thus decomposed-only Unicode encoded with UTF-8 is also known as "UTF8-MAC"). In one specific instance, the combination of OS X errors in handling composed characters and the samba file- and printer-sharing software (which replaces decomposed letters with composed ones when copying file names) has led to confusing and data-destroying interoperability problems.[31][32]
Applications may avoid such errors by preserving input code points, and normalizing them to only the application's preferred normal form for internal use.
Such errors may also be avoided with algorithms normalizing both strings before any binary comparison.
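As a minimal illustration in Python (the example strings are arbitrary, and NFC is chosen here, though any single agreed normal form would serve):

```python
# Normalize both strings to the same Unicode normal form before comparing,
# so that composed and decomposed spellings of the same text compare equal.
import unicodedata

composed = "caf\u00e9"        # 'é' as one precomposed code point
decomposed = "cafe\u0301"     # 'e' followed by a combining acute accent

print(composed == decomposed)                       # False: raw binary comparison
print(unicodedata.normalize("NFC", composed) ==
      unicodedata.normalize("NFC", decomposed))     # True: both normalized to NFC
```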
However, errors due to file name encoding incompatibilities have always existed, owing to the lack of a minimum set of common specifications between software expected to be interoperable across various file system drivers, operating systems, network protocols, and thousands of software packages.
Aformal specificationis amathematicaldescription ofsoftwareorhardwarethat may be used to develop animplementation. It describeswhatthe system should do, not (necessarily)howthe system should do it. Given such a specification, it is possible to useformal verificationtechniques to demonstrate that a candidate system design is correct with respect to that specification. This has the advantage that incorrect candidate system designs can be revised before a major investment has been made in actually implementing the design. An alternative approach is to use provably correctrefinementsteps to transform a specification into a design, and ultimately into an actual implementation, that is correct by construction.
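The sketch below gives only an informal flavour of the "what, not how" idea, stating a specification of sorting as checkable properties without reference to any particular algorithm; a genuine formal specification would be written in a mathematical notation such as Z, TLA+, or the input language of a theorem prover.

```python
# Specification of sorting as properties of the result, not as a procedure:
# the output must be ordered and must be a permutation of the input.
from collections import Counter

def satisfies_sort_spec(inputs: list[int], output: list[int]) -> bool:
    ordered = all(a <= b for a, b in zip(output, output[1:]))
    permutation = Counter(inputs) == Counter(output)
    return ordered and permutation

# Any correct implementation, however it works internally, must satisfy this.
print(satisfies_sort_spec([3, 1, 2], sorted([3, 1, 2])))   # True
print(satisfies_sort_spec([3, 1, 2], [1, 2, 2]))           # False: not a permutation
```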
In (hardware, software, or enterprise) systems development, anarchitectural specificationis the set ofdocumentationthat describes thestructure,behavior, and moreviewsof thatsystem.
Aprogram specificationis the definition of what acomputer programis expected to do. It can beinformal, in which case it can be considered as a user manual from a developer point of view, orformal, in which case it has a definite meaning defined inmathematicalor programmatic terms. In practice, many successful specifications are written to understand and fine-tune applications that were already well-developed, althoughsafety-criticalsoftware systemsare often carefully specified prior to application development. Specifications are most important for external interfaces that must remain stable.
Insoftware development, afunctional specification(also,functional specorspecsorfunctional specifications document (FSD)) is the set ofdocumentationthat describes the behavior of a computer program or largersoftware system. The documentation typically describes various inputs that can be provided to thesoftwaresystem and how thesystemresponds to those inputs.
Web services specifications are often under the umbrella of aquality management system.[33]
These types of documents define how a specific document should be written, which may include, but is not limited to, the systems of a document naming, version, layout, referencing, structuring, appearance, language, copyright, hierarchy or format, etc.[34][35]Very often, this kind of specifications is complemented by a designated template.[36][37][38]
|
https://en.wikipedia.org/wiki/Specification_(technical_standard)
|
Statistical process control (SPC) or statistical quality control (SQC) is the application of statistical methods to monitor and control the quality of a production process. This helps to ensure that the process operates efficiently, producing more specification-conforming products with less waste (rework or scrap). SPC can be applied to any process where the "conforming product" (product meeting specifications) output can be measured. Key tools used in SPC include run charts, control charts, a focus on continuous improvement, and the design of experiments. Manufacturing lines are one example of a process to which SPC is applied.
SPC must be practiced in two phases: the first phase is the initial establishment of the process, and the second phase is the regular production use of the process. In the second phase, a decision of the period to be examined must be made, depending upon the change in 5M&E conditions (Man, Machine, Material, Method, Movement, Environment) and wear rate of parts used in the manufacturing process (machine parts, jigs, and fixtures).
An advantage of SPC over other methods of quality control, such as "inspection," is that it emphasizes early detection and prevention of problems, rather than the correction of problems after they have occurred.
In addition to reducing waste, SPC can lead to a reduction in the time required to produce the product. SPC makes it less likely the finished product will need to be reworked or scrapped.
Statistical process control was pioneered byWalter A. ShewhartatBell Laboratoriesin the early 1920s. Shewhart developed the control chart in 1924 and the concept of a state of statistical control. Statistical control is equivalent to the concept ofexchangeability[2][3]developed by logicianWilliam Ernest Johnsonalso in 1924 in his bookLogic, Part III: The Logical Foundations of Science.[4]Along with a team at AT&T that includedHarold Dodgeand Harry Romig he worked to putsamplinginspection on a rational statistical basis as well. Shewhart consulted with Colonel Leslie E. Simon in the application of control charts to munitions manufacture at the Army'sPicatinny Arsenalin 1934. That successful application helped convince Army Ordnance to engage AT&T'sGeorge D. Edwardsto consult on the use of statistical quality control among its divisions and contractors at the outbreak of World War II.
W. Edwards Deminginvited Shewhart to speak at the Graduate School of the U.S. Department of Agriculture and served as the editor of Shewhart's bookStatistical Method from the Viewpoint of Quality Control(1939), which was the result of that lecture. Deming was an important architect of the quality control short courses that trained American industry in the new techniques during WWII. The graduates of these wartime courses formed a new professional society in 1945, theAmerican Society for Quality Control, which elected Edwards as its first president. Deming travelled to Japan during the Allied Occupation and met with the Union of Japanese Scientists and Engineers (JUSE) in an effort to introduce SPC methods to Japanese industry.[5][6]
Shewhart read the new statistical theories coming out of Britain, especially the work ofWilliam Sealy Gosset,Karl Pearson, andRonald Fisher. However, he understood that data from physical processes seldom produced anormal distributioncurve (that is, aGaussian distributionor 'bell curve'). He discovered that data from measurements of variation in manufacturing did not always behave the same way as data from measurements of natural phenomena (for example,Brownian motionof particles). Shewhart concluded that while every process displays variation, some processes display variation that is natural to the process ("common" sources of variation); these processes he described as beingin (statistical) control. Other processes additionally display variation that is not present in the causal system of the process at all times ("special" sources of variation), which Shewhart described asnot in control.[7]
Statistical process control is appropriate to support any repetitive process, and has been implemented in many settings where for exampleISO 9000quality management systems are used, including financial auditing and accounting, IT operations, health care processes, and clerical processes such as loan arrangement and administration, customer billing etc. Despite criticism of its use in design and development, it is well-placed to manage semi-automated data governance of high-volume data processing operations, for example in an enterprise data warehouse, or an enterprise data quality management system.[8]
In the 1988Capability Maturity Model(CMM) theSoftware Engineering Institutesuggested that SPC could be applied to software engineering processes. The Level 4 and Level 5 practices of the Capability Maturity Model Integration (CMMI) use this concept.
The application of SPC to non-repetitive, knowledge-intensive processes, such as research and development or systems engineering, has encountered skepticism and remains controversial.[9][10][11]
In No Silver Bullet, Fred Brooks points out that the complexity, conformance requirements, changeability, and invisibility of software[12][13] result in inherent and essential variation that cannot be removed. This implies that SPC is less effective in software development than in, for example, manufacturing.
In manufacturing, quality is defined as conformance to specification. However, no two products or characteristics are ever exactly the same, because any process contains many sources of variability. In mass-manufacturing, traditionally, the quality of a finished article is ensured by post-manufacturing inspection of the product: each article (or a sample of articles from a production lot) is accepted or rejected according to how well it meets its design specifications. In contrast, SPC uses statistical tools to observe the performance of the production process in order to detect significant variations before they result in the production of a sub-standard article.
Any source of variation at any point of time in a process will fall into one of two classes: "common" (chance) causes, which are an inherent part of the process, and "assignable" (special) causes, which are not.
Most processes have many sources of variation; most of them are minor and may be ignored. If the dominant assignable sources of variation are detected, potentially they can be identified and removed. When they are removed, the process is said to be 'stable'. When a process is stable, its variation should remain within a known set of limits. That is, at least, until another assignable source of variation occurs.
For example, a breakfast cereal packaging line may be designed to fill each cereal box with 500 grams of cereal. Some boxes will have slightly more than 500 grams, and some will have slightly less. When the package weights are measured, the data will demonstrate adistributionof net weights.
If the production process, its inputs, or its environment (for example, the machine on the line) change, the distribution of the data will change. For example, as the cams and pulleys of the machinery wear, the cereal filling machine may put more than the specified amount of cereal into each box. Although this might benefit the customer, from the manufacturer's point of view it is wasteful, and increases the cost of production. If the manufacturer finds the change and its source in a timely manner, the change can be corrected (for example, the cams and pulleys replaced).
From an SPC perspective, if the weight of each cereal box varies randomly, some higher and some lower, always within an acceptable range, then the process is considered stable. If the cams and pulleys of the machinery start to wear out, the weights of the cereal box might not be random. The degraded functionality of the cams and pulleys may lead to a non-random linear pattern of increasing cereal box weights. We call this common cause variation. If, however, all the cereal boxes suddenly weighed much more than average because of an unexpected malfunction of the cams and pulleys, this would be considered a special cause variation.
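As a concrete illustration of this distinction, the following Python sketch estimates Shewhart-style control limits from in-control data and flags a later box that falls outside them; the weights are hypothetical, not from any real line:

```python
import statistics

def control_limits(weights, sigma_mult=3):
    """Estimate Shewhart control limits from in-control data:
    centre line at the mean, limits at +/- sigma_mult standard deviations."""
    centre = statistics.mean(weights)
    sigma = statistics.stdev(weights)
    return centre - sigma_mult * sigma, centre, centre + sigma_mult * sigma

# Hypothetical net weights (grams) from a stable packaging line
baseline = [501.2, 499.4, 500.8, 498.9, 500.1, 499.7, 501.0, 500.3]
lcl, cl, ucl = control_limits(baseline)

# Flag later boxes that fall outside the control limits
# (a single point beyond 3 sigma is the simplest detection rule)
new_boxes = [500.6, 499.8, 505.9]
out_of_control = [w for w in new_boxes if w < lcl or w > ucl]
print(f"LCL={lcl:.1f} g, CL={cl:.1f} g, UCL={ucl:.1f} g, flagged={out_of_control}")
```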
The application of SPC involves three main phases of activity:
The proper implementation of SPC has been limited, in part due to a lack of statistical expertise at many organizations.[14]
The data from measurements of variations at points on the process map is monitored usingcontrol charts. Control charts attempt to differentiate "assignable" ("special") sources of variation from "common" sources. "Common" sources, because they are an expected part of the process, are of much less concern to the manufacturer than "assignable" sources. Using control charts is a continuous activity, ongoing over time.
When the process does not trigger any of the control chart "detection rules" for the control chart, it is said to be "stable". Aprocess capabilityanalysis may be performed on a stable process to predict the ability of the process to produce "conforming product" in the future.
A stable process can be demonstrated by a process signature that is free of variances outside of the capability index. A process signature is the plotted points compared with the capability index.
When the process triggers any of the control chart "detection rules", (or alternatively, the process capability is low), other activities may be performed to identify the source of the excessive variation.
The tools used in these extra activities include:Ishikawa diagram,designed experiments, andPareto charts. Designed experiments are a means of objectively quantifying the relative importance (strength) of sources of variation. Once the sources of (special cause) variation are identified, they can be minimized or eliminated. Steps to eliminating a source of variation might include: development of standards, staff training, error-proofing, and changes to the process itself or its inputs.
When monitoring many processes with control charts, it is sometimes useful to calculate quantitative measures of the stability of the processes. These metrics can then be used to identify/prioritize the processes that are most in need of corrective actions. These metrics can also be viewed as supplementing the traditionalprocess capabilitymetrics. Several metrics have been proposed, as described in Ramirez and Runger.[15]They are (1) a Stability Ratio which compares the long-term variability to the short-term variability, (2) an ANOVA Test which compares the within-subgroup variation to the between-subgroup variation, and (3) an Instability Ratio which compares the number of subgroups that have one or more violations of theWestern Electric rulesto the total number of subgroups.
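A rough Python sketch of the first of these metrics, under a common formulation in which the Stability Ratio is the long-term variance divided by a short-term variance estimated from the average moving range; the data and the exact formulation are illustrative, not taken from Ramirez and Runger:

```python
import statistics

def stability_ratio(x):
    """Stability Ratio: long-term variance (from the overall standard
    deviation) divided by short-term variance (from the average moving
    range, using d2 = 1.128 for subgroups of size 2).
    Values well above 1 suggest the process is unstable."""
    long_term = statistics.stdev(x)
    moving_ranges = [abs(b - a) for a, b in zip(x, x[1:])]
    short_term = statistics.mean(moving_ranges) / 1.128
    return (long_term / short_term) ** 2

# Hypothetical measurements drifting upward over time
data = [10.1, 10.0, 10.2, 10.1, 10.4, 10.5, 10.7, 10.9, 11.1, 11.2]
print(f"Stability ratio: {stability_ratio(data):.2f}")
```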
Digital control charts use logic-based rules that determine "derived values" which signal the need for correction.
|
https://en.wikipedia.org/wiki/Statistical_process_control
|
Astructure gauge, also called theminimum structure outline, is a diagram or physical structure that sets limits to the extent that bridges, tunnels and other infrastructure can encroach on rail vehicles. It specifies the height and width of station platforms,tunnelsandbridges, and the width of the doors that allow access to awarehousefrom arail siding. Specifications may include the minimum distance from rail vehicles torailway platforms, buildings, lineside electrical equipment cabinets,signallingequipment,third railsor supports foroverhead lines.[1]
A related but separate gauge is theloading gauge: a diagram or physical structure that defines the maximum height and width dimensions inrailwayvehicles and their loads. The difference between these two gauges is called theclearance. The specified amount of clearance makes allowance forwobblingof rail vehicles at speed or the shifting of vehicles on curves; consequently, in some circumstances a train may be permitted to go past a restricted clearance at very slow speed.
The term can also be applied to the minimum size of roadtunnels, the space beneathoverpassesand the space within thesuperstructureofbridges, as well asdoorsintoautomobile repair shops,bus garages,filling stations,residential garages,multi-storey car parks,overhangsatdrive-throughsandwarehouses.[citation needed]
Eurocode 1: Actions on structures defines the "physical clearance" between the roadway surface and the underside of a bridge element. The code also defines a clearance, shorter than the physical clearance, that accounts for sag curves, bridge deflection and expected settlements, with a recommended minimum clearance of 5 metres (16 ft 5 in).[2] In the UK, the "standard minimum clearance" for structures over public highways is 16 feet 6 inches (5.03 m).[3] In the United States, the "minimum vertical clearance" of overpasses on the Interstate Highway System is 16 feet (4.9 m).[4]
|
https://en.wikipedia.org/wiki/Structure_gauge
|
Taguchi methods(Japanese:タグチメソッド) arestatisticalmethods, sometimes called robust design methods, developed byGenichi Taguchito improve the quality of manufactured goods, and more recently also applied to engineering,[1]biotechnology,[2][3]marketing and advertising.[4]Professionalstatisticianshave welcomed the goals and improvements brought about by Taguchi methods,[editorializing]particularly by Taguchi's development of designs for studying variation, but have criticized theinefficiencyof some of Taguchi's proposals.[5][citation needed]
Taguchi's work includes three principal contributions to statistics: a specific loss function (the Taguchi loss function), the philosophy of off-line quality control, and innovations in the design of experiments.
Traditionally, statistical methods have relied onmean-unbiased estimatorsoftreatment effects: Under the conditions of theGauss–Markov theorem,least squaresestimators have minimum variance among all mean-unbiased linear estimators. The emphasis on comparisons of means also draws (limiting) comfort from thelaw of large numbers, according to which thesample meansconvergeto the true mean. Fisher's textbook on thedesign of experimentsemphasized comparisons of treatment means.
However, loss functions were avoided by Ronald A. Fisher.[clarification needed][6]
Taguchi knewstatistical theorymainly from the followers ofRonald A. Fisher, who also avoidedloss functions.
Reacting to Fisher's methods in thedesign of experiments, Taguchi interpreted Fisher's methods as being adapted for seeking to improve themeanoutcome of aprocess. Indeed, Fisher's work had been largely motivated by programmes to compare agricultural yields under different treatments and blocks, and such experiments were done as part of a long-term programme to improve harvests.
However, Taguchi realised that in much industrial production, there is a need to produce an outcomeon target, for example, tomachinea hole to a specified diameter, or to manufacture acellto produce a givenvoltage. He also realised, as hadWalter A. Shewhartand others before him, that excessive variation lay at the root of poor manufactured quality and that reacting to individual items inside and outside specification was counterproductive.
He therefore argued thatquality engineeringshould start with an understanding ofquality costsin various situations. In much conventionalindustrial engineering, the quality costs are simply represented by the number of items outside specification multiplied by the cost of rework or scrap. However, Taguchi insisted that manufacturers broaden their horizons to considercost to society. Though the short-term costs may simply be those of non-conformance, any item manufactured away from nominal would result in some loss to the customer or the wider community through early wear-out; difficulties in interfacing with other parts, themselves probably wide of nominal; or the need to build in safety margins. These losses areexternalitiesand are usually ignored by manufacturers, which are more interested in theirprivate coststhansocial costs. Such externalities prevent markets from operating efficiently, according to analyses ofpublic economics. Taguchi argued that such losses would inevitably find their way back to the originating corporation (in an effect similar to thetragedy of the commons), and that by working to minimise them, manufacturers would enhance brand reputation, win markets and generate profits.
Such losses are, of course, very small when an item is near to nominal. Donald J. Wheeler characterised the region within specification limits as where we deny that losses exist. As we diverge from nominal, losses grow until the point where losses are too great to deny and the specification limit is drawn. All these losses are, as W. Edwards Deming would describe them, unknown and unknowable, but Taguchi wanted to find a useful way of representing them statistically. Taguchi specified three situations:[7] larger-the-better, smaller-the-better, and on-target with minimum variation.
The first two cases are represented by simple monotonic loss functions. In the third case, Taguchi adopted a squared-error loss function for several reasons.[7]
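A short Python sketch of that squared-error ("nominal-the-best") loss; the target, specification limit and rework cost used to set the constant k are hypothetical:

```python
def taguchi_loss(y, target, k):
    """Taguchi 'nominal-the-best' quadratic loss: loss grows with the
    squared deviation from the target, even inside the specification
    limits. k is a cost constant, for example derived from the cost of
    a part that just fails the specification."""
    return k * (y - target) ** 2

# Hypothetical example: target 10.0 mm, a part at the 10.5 mm spec limit
# costs 2.00 to rework, so k = 2.00 / 0.5**2 = 8.0 per mm^2
k = 2.00 / 0.5 ** 2
for y in (10.0, 10.2, 10.5, 10.8):
    print(f"y = {y:.1f} mm -> expected loss = {taguchi_loss(y, 10.0, k):.2f}")
```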
Though many of Taguchi's concerns and conclusions are welcomed by statisticians andeconomists, some ideas have been especially criticized. For example, Taguchi's recommendation that industrial experiments maximise somesignal-to-noise ratio(representing the magnitude of themeanof a process compared to its variation) has been criticized.[8]
Taguchi realized that the best opportunity to eliminate variation of the final product quality is during the design of a product and its manufacturing process. Consequently, he developed a strategy for quality engineering that can be used in both contexts. The process has three stages: system design, parameter design, and tolerance design.
This is design at the conceptual level, involvingcreativityandinnovation.
Once the concept is established, the nominal values of the various dimensions and design parameters need to be set, thedetail designphase of conventional engineering. Taguchi's radical insight was that the exact choice of values required is under-specified by the performance requirements of the system. In many circumstances, this allows the parameters to be chosen so as to minimize the effects on performance arising from variation in manufacture, environment and cumulative damage. This is sometimes calledrobustification.
Robust parameter designsconsider controllable and uncontrollable noise variables; they seek to exploit relationships and optimize settings that minimize the effects of the noise variables.
With a successfully completedparameter design, and an understanding of the effect that the various parameters have on performance, resources can be focused on reducing and controlling variation in the critical few dimensions.
Taguchi developed his experimental theories independently; he read works following R. A. Fisher only in 1954.
Taguchi's designs aimed to allow greater understanding of variation than did many of the traditional designs from theanalysis of variance(following Fisher). Taguchi contended that conventionalsamplingis inadequate here as there is no way of obtaining arandom sampleof future conditions.[9]In Fisher'sdesign of experimentsandanalysis of variance, experiments aim to reduce the influence ofnuisance factorsto allow comparisons of the mean treatment-effects. Variation becomes even more central in Taguchi's thinking.
Taguchi proposed extending each experiment with an "outer array" (possibly anorthogonal array); the "outer array" should simulate the random environment in which the product would function. This is an example ofjudgmental sampling. Many quality specialists have been using "outer arrays".
Later innovations in outer arrays resulted in "compounded noise." This involves combining a few noise factors to create two levels in the outer array: First, noise factors that drive output lower, and second, noise factors that drive output higher. "Compounded noise" simulates the extremes of noise variation but uses fewer experimental runs than would previous Taguchi designs.
Many of the orthogonal arrays that Taguchi has advocated aresaturated arrays, allowing no scope for estimation of interactions. This is a continuing topic of controversy. However, this is only true for "control factors" or factors in the "inner array". By combining an inner array of control factors with an outer array of "noise factors", Taguchi's approach provides "full information" on control-by-noise interactions, it is claimed. Taguchi argues that such interactions have the greatest importance in achieving a design that is robust to noise factor variation. The Taguchi approach provides more complete interaction information than typicalfractional factorial designs, its adherents claim.
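The following Python sketch illustrates one of Taguchi's standard "nominal-the-best" signal-to-noise ratios, computed for each inner-array (control-factor) run across its outer-array noise replicates; the run data are hypothetical:

```python
import math
import statistics

def sn_nominal_the_best(values):
    """Taguchi 'nominal-the-best' signal-to-noise ratio for one
    control-factor run, computed across its outer-array (noise)
    replicates: SN = 10 * log10(mean^2 / variance). Higher is more robust."""
    mean = statistics.mean(values)
    var = statistics.variance(values)
    return 10 * math.log10(mean ** 2 / var)

# Hypothetical results: each entry is one inner-array run, with one value
# per outer-array noise condition
runs = {
    "run 1": [50.1, 49.7, 50.4, 49.9],
    "run 2": [51.0, 47.8, 52.3, 48.6],
}
for name, values in runs.items():
    print(f"{name}: SN = {sn_nominal_the_best(values):.1f} dB")
```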
Statisticians inresponse surface methodology(RSM) advocate the "sequential assembly" ofdesigns: In the RSM approach, ascreeningdesign is followed by a "follow-up design" that resolves only the confounded interactions judged worth resolution. A second follow-up design may be added (time and resources allowing) to explore possiblehigh-orderunivariate effects of the remaining variables, as high-order univariate effects are less likely in variables already eliminated for having no linear effect. With the economy of screening designs and the flexibility of follow-up designs, sequential designs have greatstatistical efficiency. The sequential designs ofresponse surface methodologyrequire far fewer experimental runs than would a sequence of Taguchi's designs.[10]
Genichi Taguchi has made valuable contributions to statistics andengineering. His emphasis onloss to society, techniques for investigating variation in experiments, and his overall strategy of system, parameter and tolerance design have been influential in improving manufactured quality worldwide.
|
https://en.wikipedia.org/wiki/Taguchi_methods
|
Tolerance coning is the engineering discipline of creating a budget of all the tolerances that potentially add or subtract to affect the adequacy of a particular parameter. This is particularly critical where stages of design and manufacture precede test and use.
For example, when setting a test limit for a measurement on each manufactured item of some type, to assure that no bad items are shipped, the limit must be tighter than the requirement to allow for the worst-case sum of measurement inaccuracies (e.g. equipment, test fixture, etc.). The design of the item thus has to take into account not only the product requirement but also the test tolerances. The buildup of this budget is tolerance coning.
Electronics engineers intuitively do tolerance coning and tend to formalise it for critical parameters. However, it is also relevant to other engineering disciplines.
|
https://en.wikipedia.org/wiki/Tolerance_coning
|
Atolerance interval(TI) is astatistical intervalwithin which, with someconfidence level, a specifiedsampledproportion of a populationfalls. "More specifically, a100×p%/100×(1−α)tolerance interval provides limits within which at least a certain proportion (p) of the population falls with a given level of confidence (1−α)."[1]"A (p, 1−α) tolerance interval (TI) based on a sample is constructed so that it would include at least a proportionpof the sampled population with confidence 1−α; such a TI is usually referred to as p-content − (1−α) coverage TI."[2]"A (p, 1−α) uppertolerance limit(TL) is simply a 1−α upperconfidence limitfor the 100ppercentileof the population."[2]
Assume observations orrandom variatesx=(x1,…,xn){\displaystyle \mathbf {x} =(x_{1},\ldots ,x_{n})}as realization of independent random variablesX=(X1,…,Xn){\displaystyle \mathbf {X} =(X_{1},\ldots ,X_{n})}which have a common distributionFθ{\displaystyle F_{\theta }}, with unknown parameterθ{\displaystyle \theta }.
Then, a tolerance interval with endpoints $(L(\mathbf{x}),U(\mathbf{x})]$ is one with the defining property (in a standard formulation):[3]

$$\inf_{\theta}\; P_{\mathbf{X}}\bigl( F_{\theta}(U(\mathbf{X})) - F_{\theta}(L(\mathbf{X})) \geq p \bigr) \;\geq\; 1-\alpha ,$$

where the infimum is taken over the unknown parameter $\theta$.
This is in contrast to a prediction interval with endpoints $[l(\mathbf{x}),u(\mathbf{x})]$, which has the defining property:[3]

$$\inf_{\theta}\; P\bigl( l(\mathbf{X}) \leq X_{0} \leq u(\mathbf{X}) \bigr) \;\geq\; 1-\alpha .$$
Here,X0{\displaystyle X_{0}}is a random variable from the same distributionFθ{\displaystyle F_{\theta }}but independent of the firstn{\displaystyle n}variables.
NoticeX0{\displaystyle X_{0}}isnotinvolved in the definition of tolerance interval, which deals only with the first sample, of sizen.
One-sided normal tolerance intervals have an exact solution in terms of the sample mean and sample variance based on thenoncentralt-distribution.[4]Two-sided normal tolerance intervals can be estimated using thechi-squared distribution.[4]
"In the parameters-known case, a 95% tolerance interval and a 95%prediction intervalare the same."[5]If we knew a population's exact parameters, we would be able to compute a range within which a certain proportion of the population falls. For example, if we know a population isnormally distributedwithmeanμ{\displaystyle \mu }andstandard deviationσ{\displaystyle \sigma }, then the intervalμ±1.96σ{\displaystyle \mu \pm 1.96\sigma }includes 95% of the population (1.96 is thez-scorefor 95% coverage of a normally distributed population).
However, if we have only a sample from the population, we know only thesample meanμ^{\displaystyle {\hat {\mu }}}and sample standard deviationσ^{\displaystyle {\hat {\sigma }}}, which are only estimates of the true parameters. In that case,μ^±1.96σ^{\displaystyle {\hat {\mu }}\pm 1.96{\hat {\sigma }}}will not necessarily include 95% of the population, due to variance in these estimates. A tolerance interval bounds this variance by introducing a confidence levelγ{\displaystyle \gamma }, which is the confidence with which this interval actually includes the specified proportion of the population. For a normally distributed population, a z-score can be transformed into a "kfactor" ortolerance factor[6]for a givenγ{\displaystyle \gamma }via lookup tables or several approximation formulas.[7]"As thedegrees of freedomapproach infinity, the prediction and tolerance intervals become equal."[8]
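A small Python sketch of these computations, assuming SciPy is available; `k_one_sided` uses the exact noncentral-t result mentioned above, while `k_two_sided` uses the common chi-squared (Howe) approximation for the two-sided factor:

```python
from math import sqrt
from scipy.stats import norm, nct, chi2

def k_one_sided(n, p=0.95, conf=0.95):
    """Exact one-sided normal tolerance factor from the noncentral
    t-distribution: x_bar + k*s covers at least proportion p of the
    population with confidence conf."""
    delta = norm.ppf(p) * sqrt(n)            # noncentrality parameter
    return nct.ppf(conf, n - 1, delta) / sqrt(n)

def k_two_sided(n, p=0.95, conf=0.95):
    """Approximate two-sided tolerance factor (chi-squared / Howe method):
    x_bar +/- k*s covers at least proportion p with confidence conf."""
    z = norm.ppf((1 + p) / 2)
    chi2_q = chi2.ppf(1 - conf, n - 1)       # lower-tail quantile
    return z * sqrt((n - 1) * (1 + 1 / n) / chi2_q)

print(k_one_sided(15), k_two_sided(15))      # roughly 2.57 and 2.96 for n = 15
```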
The tolerance interval is less widely known than theconfidence intervalandprediction interval, a situation some educators have lamented, as it can lead to misuse of the other intervals where a tolerance interval is more appropriate.[9][10]
The tolerance interval differs from aconfidence intervalin that the confidence interval bounds a single-valued population parameter (themeanor thevariance, for example) with some confidence, while the tolerance interval bounds the range of data values that includes a specific proportion of the population. Whereas a confidence interval's size is entirely due tosampling error, and will approach a zero-width interval at the true population parameter as sample size increases, a tolerance interval's size is due partly to sampling error and partly to actual variance in the population, and will approach the population's probability interval as sample size increases.[9][10]
The tolerance interval is related to aprediction intervalin that both put bounds on variation in future samples. However, the prediction interval only bounds a single future sample, whereas a tolerance interval bounds the entire population (equivalently, an arbitrary sequence of future samples). In other words, a prediction interval covers a specified proportion of a populationon average, whereas a tolerance interval covers itwith a certain confidence level, making the tolerance interval more appropriate if a single interval is intended to bound multiple future samples.[10][11]
Reference [9] gives the following example:
So consider once again a proverbialEPAmileagetest scenario, in which several nominally identical autos of a particular model are tested to produce mileage figuresy1,y2,...,yn{\displaystyle y_{1},y_{2},...,y_{n}}. If such data are processed to produce a 95% confidence interval for the mean mileage of the model, it is, for example, possible to use it to project the mean or total gasoline consumption for the manufactured fleet of such autos over their first 5,000 miles of use. Such an interval, would however, not be of much help to a person renting one of these cars and wondering whether the (full) 10-gallon tank of gas will suffice to carry him the 350 miles to his destination. For that job, a prediction interval would be much more useful. (Consider the differing implications of being "95% sure" thatμ≥35{\displaystyle \mu \geq 35}as opposed to being "95% sure" thatyn+1≥35{\displaystyle y_{n+1}\geq 35}.) But neither a confidence interval forμ{\displaystyle \mu }nor a prediction interval for a single additional mileage is exactly what is needed by a design engineer charged with determining how large a gas tank the model really needs to guarantee that 99% of the autos produced will have a 400-mile cruising range. What the engineer really needs is a tolerance interval for a fractionp=.99{\displaystyle p=.99}of mileages of such autos.
Another example is given by:[11]
The air lead levels were collected from $n=15$ different areas within the facility. It was noted that the log-transformed lead levels fitted a normal distribution well (that is, the data are from a lognormal distribution). Let $\mu$ and $\sigma^2$, respectively, denote the population mean and variance for the log-transformed data. If $X$ denotes the corresponding random variable, we thus have $X\sim\mathcal{N}(\mu,\sigma^2)$. We note that $\exp(\mu)$ is the median air lead level. A confidence interval for $\mu$ can be constructed the usual way, based on the $t$-distribution; this in turn will provide a confidence interval for the median air lead level. If $\bar{X}$ and $S$ denote the sample mean and standard deviation of the log-transformed data for a sample of size $n$, a 95% confidence interval for $\mu$ is given by $\bar{X}\pm t_{n-1,0.975}S/\sqrt{n}$, where $t_{m,1-\alpha}$ denotes the $1-\alpha$ quantile of a $t$-distribution with $m$ degrees of freedom. It may also be of interest to derive a 95% upper confidence bound for the median air lead level. Such a bound for $\mu$ is given by $\bar{X}+t_{n-1,0.95}S/\sqrt{n}$. Consequently, a 95% upper confidence bound for the median air lead is given by $\exp\left(\bar{X}+t_{n-1,0.95}S/\sqrt{n}\right)$. Now suppose we want to predict the air lead level at a particular area within the laboratory. A 95% upper prediction limit for the log-transformed lead level is given by $\bar{X}+t_{n-1,0.95}S\sqrt{1+1/n}$. A two-sided prediction interval can be similarly computed. The meaning and interpretation of these intervals are well known. For example, if the confidence interval $\bar{X}\pm t_{n-1,0.975}S/\sqrt{n}$ is computed repeatedly from independent samples, 95% of the intervals so computed will include the true value of $\mu$, in the long run. In other words, the interval is meant to provide information concerning the parameter $\mu$ only. A prediction interval has a similar interpretation, and is meant to provide information concerning a single lead level only. Now suppose we want to use the sample to conclude whether or not at least 95% of the population lead levels are below a threshold. The confidence interval and prediction interval cannot answer this question, since the confidence interval is only for the median lead level, and the prediction interval is only for a single lead level. What is required is a tolerance interval; more specifically, an upper tolerance limit. The upper tolerance limit is to be computed subject to the condition that at least 95% of the population lead levels is below the limit, with a certain confidence level, say 99%.
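A brief Python sketch of the three upper limits described in this example, assuming SciPy is available and using hypothetical log-transformed values in place of the original sample:

```python
from math import sqrt, exp
import statistics
from scipy.stats import t, norm, nct

# Hypothetical log-transformed lead levels for n = 15 areas (not the original data)
logs = [2.1, 2.4, 1.9, 2.6, 2.2, 2.0, 2.8, 2.3, 2.5, 1.8, 2.7, 2.2, 2.4, 2.1, 2.6]
n, xbar, s = len(logs), statistics.mean(logs), statistics.stdev(logs)

ucb_median = xbar + t.ppf(0.95, n - 1) * s / sqrt(n)            # 95% upper confidence bound for mu (median, log scale)
upl_single = xbar + t.ppf(0.95, n - 1) * s * sqrt(1 + 1 / n)    # 95% upper prediction limit for one new area
k = nct.ppf(0.99, n - 1, norm.ppf(0.95) * sqrt(n)) / sqrt(n)    # one-sided (95%, 99%) tolerance factor
utl = xbar + k * s                                              # upper tolerance limit (log scale)

# Back-transform to the original (lognormal) scale
print(exp(ucb_median), exp(upl_single), exp(utl))
```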
|
https://en.wikipedia.org/wiki/Tolerance_interval
|
Tolerance analysisis the general term for activities related to the study of accumulated variation in mechanical parts and assemblies. Its methods may be used on other types of systems subject to accumulated variation, such as mechanical and electrical systems. Engineers analyze tolerances for the purpose of evaluatinggeometric dimensioning and tolerancing(GD&T). Methods include 2D tolerance stacks, 3DMonte Carlo simulations, and datum conversions.
Tolerance stackupsortolerance stacksare used to describe the problem-solving process inmechanical engineeringof calculating the effects of the accumulated variation that is allowed by specified dimensions andtolerances. Typically these dimensions and tolerances are specified on an engineering drawing. Arithmetic tolerance stackups use the worst-case maximum or minimum values of dimensions and tolerances to calculate the maximum and minimum distance (clearance or interference) between two features or parts. Statistical tolerance stackups evaluate the maximum and minimum values based on the absolute arithmetic calculation combined with some method for establishing likelihood of obtaining the maximum and minimum values, such as Root Sum Square (RSS) or Monte-Carlo methods.
In performing a tolerance analysis, there are two fundamentally different analysis tools for predicting stackup variation: worst-case analysis and statistical analysis.
Worst-case tolerance analysis is the traditional type of tolerance stackup calculation. The individual variables are placed at their tolerance limits in order to make the measurement as large or as small as possible. The worst-case model does not consider the distribution of the individual variables, but rather that those variables do not exceed their respective specified limits. This model predicts the maximum expected variation of the measurement. Designing to worst-case tolerance requirements guarantees 100 percent of the parts will assemble and function properly, regardless of the actual component variation. The major drawback is that the worst-case model often requires very tight individual component tolerances. The obvious result is expensive manufacturing and inspection processes and/or high scrap rates. Worst-case tolerancing is often required by the customer for critical mechanical interfaces and spare part replacement interfaces. When worst-case tolerancing is not a contract requirement, properly applied statistical tolerancing can ensure acceptable assembly yields with increased component tolerances and lower fabrication costs.
The statistical variation analysis model takes advantage of the principles of statistics to relax the component tolerances without sacrificing quality. Each component's variation is modeled as a statistical distribution and these distributions are summed to predict the distribution of the assembly measurement. Thus, statistical variation analysis predicts a distribution that describes the assembly variation, not the extreme values of that variation. This analysis model provides increased design flexibility by allowing the designer to design to any quality level, not just 100 percent.
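A minimal Python sketch contrasting the two approaches for a one-dimensional stack; the chain of dimensions is hypothetical:

```python
from math import sqrt

def stackup(contributors):
    """Worst-case and RSS tolerance stackup for a 1D chain of dimensions.
    contributors: list of (nominal, plus_minus_tolerance, direction),
    where direction is +1 or -1 depending on whether the dimension opens
    or closes the gap being analysed."""
    nominal = sum(d * n for n, _, d in contributors)
    worst_case = sum(tol for _, tol, _ in contributors)
    rss = sqrt(sum(tol ** 2 for _, tol, _ in contributors))
    return nominal, worst_case, rss

# Hypothetical gap: housing length minus two stacked parts
chain = [(50.0, 0.20, +1), (24.0, 0.10, -1), (25.5, 0.10, -1)]
nominal, wc, rss = stackup(chain)
print(f"gap = {nominal:.2f} mm, +/-{wc:.2f} worst case, +/-{rss:.2f} RSS")
```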
There are two chief methods for performing the statistical analysis. In one, the expected distributions are modified in accordance with the relevant geometric multipliers within tolerance limits and then combined using mathematical operations to provide a composite of the distributions. The geometric multipliers are generated by making small deltas to the nominal dimensions. The immediate value to this method is that the output is smooth, but it fails to account for geometric misalignment allowed for by the tolerances; if a size dimension is placed between two parallel surfaces, it is assumed the surfaces will remain parallel, even though the tolerance does not require this. Because the CAD engine performs the variation sensitivity analysis, there is no output available to drive secondary programs such as stress analysis.
In the other, the variations are simulated by allowing random changes to geometry, constrained by expected distributions within allowed tolerances with the resulting parts assembled, and then measurements of critical places are recorded as if in an actual manufacturing environment. The collected data is analyzed to find a fit with a known distribution and mean and standard deviations derived from them. The immediate value to this method is that the output represents what is acceptable, even when that is from imperfect geometry and, because it uses recorded data to perform its analysis, it is possible to include actual factory inspection data into the analysis to see the effect of proposed changes on real data. In addition, because the engine for the analysis is performing the variation internally, not based on CAD regeneration, it is possible to link the variation engine output to another program. For example, a rectangular bar may vary in width and thickness; the variation engine could output those numbers to a stress program which passes back peak stress as a result and the dimensional variation be used to determine likely stress variations. The disadvantage is that each run is unique, so there will be variation from analysis to analysis for the output distribution and mean, just like would come from a factory.
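A corresponding Monte Carlo sketch in Python for the same hypothetical gap, treating each ± tolerance as three standard deviations of a normal distribution (an assumption for illustration, not a rule):

```python
import random
import statistics

def monte_carlo_gap(runs=20_000):
    """Simulate the hypothetical gap from the previous sketch, drawing each
    dimension from a normal distribution whose +/- tolerance is treated
    as 3 sigma, and summarising the resulting gap distribution."""
    gaps = []
    for _ in range(runs):
        housing = random.gauss(50.0, 0.20 / 3)
        part_a = random.gauss(24.0, 0.10 / 3)
        part_b = random.gauss(25.5, 0.10 / 3)
        gaps.append(housing - part_a - part_b)
    return statistics.mean(gaps), statistics.stdev(gaps)

mean_gap, sd_gap = monte_carlo_gap()
print(f"mean gap = {mean_gap:.3f} mm, sigma = {sd_gap:.3f} mm")
```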
While no official engineering standard covers the process or format of tolerance analysis and stackups, these are essential components of goodproduct design. Tolerance stackups should be used as part of the mechanical design process, both as a predictive and a problem-solving tool. The methods used to conduct a tolerance stackup depend somewhat upon the engineering dimensioning and tolerancing standards that are referenced in the engineering documentation, such asAmerican Society of Mechanical Engineers(ASME) Y14.5, ASME Y14.41, or the relevant ISO dimensioning and tolerancing standards. Understanding the tolerances, concepts and boundaries created by these standards is vital to performing accurate calculations.
Tolerance stackups serve engineers by:
The starting point for the tolerance loop is typically one side of an intended gap, after pushing the various parts in the assembly to one side or another of their loose range of motion. Vector loops define the assembly constraints that locate the parts of the assembly relative to each other. The vectors represent the dimensions that contribute to tolerance stackup in the assembly. The vectors are joined tip-to-tail, forming a chain, passing through each part in the assembly in succession. A vector loop must obey certain modeling rules as it passes through a part. It must:
Additional modeling rules for vector loops include:
The above rules will vary depending on whether 1D, 2D or 3D tolerance stackup method is used.
A safety factor is often included in designs because of concerns about:
|
https://en.wikipedia.org/wiki/Tolerance_stacks
|
Measurementis thequantificationofattributesof an object or event, which can be used to compare with other objects or events.[1][2]In other words, measurement is a process of determining how large or small aphysical quantityis as compared to a basic reference quantity of the same kind.[3]The scope and application of measurement are dependent on the context and discipline. Innatural sciencesandengineering, measurements do not apply tonominal propertiesof objects or events, which is consistent with the guidelines of theInternational Vocabulary of Metrology(VIM) published by theInternational Bureau of Weights and Measures(BIPM).[2]However, in other fields such asstatisticsas well as thesocialandbehavioural sciences, measurements can havemultiple levels, which would include nominal, ordinal, interval and ratio scales.[1][4]
Measurement is a cornerstone oftrade,science,technologyandquantitative researchin many disciplines. Historically, manymeasurement systemsexisted for the varied fields of human existence to facilitate comparisons in these fields. Often these were achieved by local agreements between trading partners or collaborators. Since the 18th century, developments progressed towards unifying, widely accepted standards that resulted in the modernInternational System of Units(SI). This system reduces all physical measurements to a mathematical combination of seven base units. The science of measurement is pursued in the field ofmetrology.
Measurement is defined as the process of comparison of an unknown quantity with a known or standard quantity.
The measurement of a property may be categorized by the following criteria:type,magnitude,unit, anduncertainty.[citation needed]They enable unambiguous comparisons between measurements.
Measurements most commonly use theInternational System of Units(SI) as a comparison framework. The system defines sevenfundamental units:kilogram,metre,candela,second,ampere,kelvin, andmole. All of these units are defined without reference to a particular physical object which would serve as a standard. Artifact-free definitions fix measurements at an exact value related to aphysical constantor other invariable natural phenomenon, in contrast to reliance on standard artifacts which are subject to deterioration or destruction. Instead, the measurement unit can only ever change through increased accuracy in determining the value of the constant it is tied to.
The first proposal to tie an SI base unit to an experimental standard independent of fiat[clarification needed]was byCharles Sanders Peirce(1839–1914),[6]who proposed to define the metre in terms of thewavelengthof aspectral line.[7]This directly influenced theMichelson–Morley experiment; Michelson and Morley cite Peirce, and improve on his method.[8]
With the exception of a few fundamentalquantumconstants, units of measurement are derived from historical agreements. Nothing inherent in nature dictates that aninchhas to be a certain length, nor that amileis a better measure of distance than akilometre. Over the course of human history, however, first for convenience and then out of necessity, standards of measurement evolved so that communities would have certain common benchmarks. Laws regulating measurement were originally developed to prevent fraud in commerce.
Units of measurementare generally defined on a scientific basis, overseen by governmental or independent agencies, and established in international treaties, pre-eminent of which is theGeneral Conference on Weights and Measures(CGPM), established in 1875 by theMetre Convention, overseeing the International System of Units (SI). For example, the metre was redefined in 1983 by the CGPM in terms of the speed of light, the kilogram was redefined in 2019 in terms of thePlanck constantand the international yard was defined in 1960 by the governments of the United States, United Kingdom, Australia and South Africa as beingexactly0.9144 metres.
In the United States, the National Institute of Standards and Technology (NIST), a division of theUnited States Department of Commerce, regulates commercial measurements. In the United Kingdom, the role is performed by theNational Physical Laboratory(NPL), in Australia by theNational Measurement Institute,[9]in South Africa by theCouncil for Scientific and Industrial Researchand in India theNational Physical Laboratory of India.
A unit is a known or standard quantity in terms of which other physical quantities are measured.
BeforeSI unitswere widely adopted around the world, the British systems ofEnglish unitsand laterimperial unitswere used in Britain, theCommonwealthand the United States. The system came to be known asU.S. customary unitsin the United States and is still in use there and in a fewCaribbeancountries. These various systems of measurement have at times been calledfoot-pound-secondsystems after the Imperial units for length, weight and time even though the tons, hundredweights, gallons, and nautical miles, for example, have different values in the U.S. and imperial systems. Many Imperial units remain in use in Britain, which has officially switched to the SI system, with a few exceptions such as road signs, where road distances are shown in miles (or in yards for short distances) and speed limits are in miles per hour. Draught beer and cider must be sold by the imperial pint, and milk in returnable bottles can be sold by the imperial pint. Many people measure their height in feet and inches and their weight instoneand pounds, to give just a few examples. Imperial units are used in many other places: for example, in many Commonwealth countries that are considered metricated, land area is measured in acres and floor space in square feet, particularly for commercial transactions (rather than government statistics). Similarly, gasoline is sold by the gallon in many countries that are considered metricated.
Themetric systemis a decimalsystem of measurementbased on its units for length, the metre and for mass, the kilogram. It exists in several variations, with different choices ofbase units, though these do not affect its day-to-day use. Since the 1960s, the International System of Units (SI) is the internationally recognised metric system. Metric units of mass, length, and electricity are widely used around the world for both everyday and scientific purposes.
The International System of Units (abbreviated as SI from the French language name Système International d'Unités) is the modern revision of the metric system. It is the world's most widely used system of units, both in everyday commerce and in science. The SI was developed in 1960 from the metre–kilogram–second (MKS) system, rather than the centimetre–gram–second (CGS) system, which, in turn, had many variants. The SI units for the seven base physical quantities are:[10] the second (time), metre (length), kilogram (mass), ampere (electric current), kelvin (thermodynamic temperature), mole (amount of substance) and candela (luminous intensity).
In the SI, base units are the simple measurements for time, length, mass, temperature, amount of substance, electric current and light intensity. Derived units are constructed from the base units: for example, thewatt, i.e. the unit for power, is defined from the base units as m2·kg·s−3. Other physical properties may be measured in compound units, such as material density, measured in kg·m−3.
The SI allows easy multiplication when switching among units having the same base but different prefixes. To convert from metres to centimetres it is only necessary to multiply the number of metres by 100, since there are 100 centimetres in a metre. Inversely, to switch from centimetres to metres one multiplies the number of centimetres by 0.01 or divides the number of centimetres by 100.
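A trivial Python sketch of such prefix-based conversion; the prefix table is abbreviated for illustration:

```python
# Decimal prefixes as powers of ten relative to the base unit (here, the metre)
PREFIX = {"k": 1e3, "": 1.0, "c": 1e-2, "m": 1e-3}

def convert(value, from_prefix, to_prefix):
    """Convert between prefixed forms of the same base unit by
    multiplying with the ratio of the two powers of ten."""
    return value * PREFIX[from_prefix] / PREFIX[to_prefix]

print(convert(2.5, "", "c"))    # 2.5 m   -> 250.0 cm
print(convert(250.0, "c", ""))  # 250 cm  -> 2.5 m
```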
A ruler or rule is a tool used in, for example, geometry, technical drawing, engineering, and carpentry, to measure lengths or distances or to draw straight lines. Strictly speaking, the ruler is the instrument used to rule straight lines, and the calibrated instrument used for determining length is called a measure; however, common usage calls both instruments rulers, and the special name straightedge is used for an unmarked rule. The use of the word measure, in the sense of a measuring instrument, only survives in the phrase tape measure, an instrument that can be used to measure but cannot be used to draw straight lines. For example, a two-metre carpenter's rule can be folded down to a length of only 20 centimetres, to easily fit in a pocket, and a five-metre-long tape measure easily retracts to fit within a small housing.
Time is an abstract measurement of elemental changes over a non-spatial continuum. It is denoted by numbers and/or named periods such as hours, days, weeks, months and years. It is an apparently irreversible series of occurrences within this non-spatial continuum. It is also used to denote an interval between two relative points on this continuum.
Mass refers to the intrinsic property of all material objects to resist changes in their momentum. Weight, on the other hand, refers to the downward force produced when a mass is in a gravitational field. In free fall (no net gravitational forces) objects lack weight but retain their mass. The Imperial units of mass include the ounce, pound, and ton. The metric units gram and kilogram are units of mass.
One device for measuring weight or mass is called a weighing scale or, often, simply a "scale". A spring scale measures force but not mass; a balance compares weights; both require a gravitational field to operate. Some of the most accurate instruments for measuring weight or mass are based on load cells with a digital read-out, but these also require a gravitational field to function and would not work in free fall.
The measures used in economics are physical measures,nominal pricevalue measures andreal pricemeasures. These measures differ from one another by the variables they measure and by the variables excluded from measurements.
In the field of survey research, measures are taken of individual attitudes, values, and behaviour using questionnaires as a measurement instrument. Like all other measurements, measurement in survey research is vulnerable to measurement error, i.e. the departure of the value provided by the measurement instrument from the true value.[11] In substantive survey research, measurement error can lead to biased conclusions and wrongly estimated effects. In order to get accurate results, when measurement errors appear, the results need to be corrected for measurement errors.
The following rules generally apply for displaying the exactness of measurements:[12]
Since accurate measurement is essential in many fields, and since all measurements are necessarily approximations, a great deal of care must be taken to make measurements as accurate as possible. For example, consider the problem of measuring the time it takes an object to fall a distance of one metre (about 39 in). Using physics, it can be shown that, in the gravitational field of the Earth, it should take any object about 0.45 second to fall one metre. However, the following are just some of the sources of error that arise:
Additionally, other sources ofexperimental errorinclude:
Scientific experiments must be carried out with great care to eliminate as much error as possible, and to keep error estimates realistic.
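As a quick check of the 0.45-second figure quoted above, here is a minimal sketch computing the idealised fall time from t = √(2d/g); it deliberately ignores every error source just discussed, and the value of g used is an assumption of the example:

```python
import math

g = 9.80665  # assumed standard gravity, m/s^2
d = 1.0      # drop distance, m

# Idealised fall time from d = (1/2) * g * t^2, ignoring air resistance,
# reaction time, instrument error and the other sources listed above.
t = math.sqrt(2 * d / g)
print(f"ideal fall time: {t:.3f} s")  # roughly 0.452 s
```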
In the classical definition, which is standard throughout the physical sciences,measurementis the determination or estimation of ratios of quantities.[14]Quantity and measurement are mutually defined: quantitative attributes are those possible to measure, at least in principle. The classical concept of quantity can be traced back toJohn WallisandIsaac Newton, and was foreshadowed inEuclid's Elements.[14]
In the representational theory,measurementis defined as "the correlation of numbers with entities that are not numbers".[15]The most technically elaborated form of representational theory is also known asadditive conjoint measurement. In this form of representational theory, numbers are assigned based on correspondences or similarities between the structure of number systems and the structure of qualitative systems. A property is quantitative if such structural similarities can be established. In weaker forms of representational theory, such as that implicit within the work ofStanley Smith Stevens,[16]numbers need only be assigned according to a rule.
The concept of measurement is often misunderstood as merely the assignment of a value, but it is possible to assign a value in a way that is not a measurement in terms of the requirements of additive conjoint measurement. One may assign a value to a person's height, but unless it can be established that there is a correlation between measurements of height and empirical relations, it is not a measurement according to additive conjoint measurement theory. Likewise, computing and assigning arbitrary values, like the "book value" of an asset in accounting, is not a measurement because it does not satisfy the necessary criteria.
Three types of representational theory
All data are inexact and statistical in nature. Thus the definition of measurement is: "A set of observations that reduce uncertainty where the result is expressed as a quantity."[17]This definition is implied in what scientists actually do when they measure something and report both themeanandstatisticsof the measurements. In practical terms, one begins with an initial guess as to the expected value of a quantity, and then, using various methods and instruments, reduces the uncertainty in the value. In this view, unlike thepositivistrepresentational theory, all measurements are uncertain, so instead of assigning one value, a range of values is assigned to a measurement. This also implies that there is not a clear or neat distinction betweenestimationand measurement.
In quantum mechanics, a measurement is an action that determines a particular property (such as position, momentum, or energy) of a quantum system. Quantum measurements are always statistical samples from a probability distribution; the distribution for many quantum phenomena is discrete, not continuous.[18]:197 Quantum measurements alter quantum states and yet repeated measurements on a quantum state are reproducible. The measurement appears to act as a filter, changing the quantum state into one with the single measured quantum value.[18] The unambiguous meaning of the quantum measurement is an unresolved fundamental problem in quantum mechanics; the most common interpretation is that when a measurement is performed, the wavefunction of the quantum system "collapses" to a single, definite value.[19]
In biology, there is generally no well-established theory of measurement. However, the importance of the theoretical context is emphasized.[20] Moreover, the theoretical context stemming from the theory of evolution leads to articulating a theory of measurement in which historicity is a fundamental notion.[21] Among the most developed fields of measurement in biology are the measurement of genetic diversity and species diversity.[22]
|
https://en.wikipedia.org/wiki/Measurement#Exactness_designation
|
Sensitivity analysisis the study of how theuncertaintyin the output of amathematical modelor system (numerical or otherwise) can be divided and allocated to different sources of uncertainty in its inputs.[1][2]This involves estimating sensitivity indices that quantify the influence of an input or group of inputs on the output. A related practice isuncertainty analysis, which has a greater focus onuncertainty quantificationandpropagation of uncertainty; ideally, uncertainty and sensitivity analysis should be run in tandem.
A mathematical model (for example in biology, climate change, economics, renewable energy or agronomy) can be highly complex, and as a result the relationships between its inputs and outputs may be poorly understood. In such cases, the model can be viewed as a black box, i.e. the output is an "opaque" function of its inputs. Quite often, some or all of the model inputs are subject to sources of uncertainty, including errors of measurement, errors in input data, parameter estimation and approximation procedures, absence of information and poor or partial understanding of the driving forces and mechanisms, choice of underlying hypotheses of the model, and so on. This uncertainty limits our confidence in the reliability of the model's response or output. Further, models may have to cope with the natural intrinsic variability of the system (aleatory), such as the occurrence of stochastic events.[3]
In models involving many input variables, sensitivity analysis is an essential ingredient of model building and quality assurance and can be useful for determining the impact of an uncertain variable for a range of purposes,[4] including:
The object of study for sensitivity analysis is a function f{\displaystyle f} (called the "mathematical model" or "programming code"), viewed as a black box, with the p{\displaystyle p}-dimensional input vector X=(X1,...,Xp){\displaystyle X=(X_{1},...,X_{p})} and the output Y{\displaystyle Y}, presented as follows:
Y=f(X).{\displaystyle Y=f(X).}
The variability in the input parameters Xi,i=1,…,p{\displaystyle X_{i},i=1,\ldots ,p} has an impact on the output Y{\displaystyle Y}. While uncertainty analysis aims to describe the distribution of the output Y{\displaystyle Y} (providing its statistics, moments, pdf, cdf, ...), sensitivity analysis aims to measure and quantify the impact of each input Xi{\displaystyle X_{i}}, or of a group of inputs, on the variability of the output Y{\displaystyle Y} (by calculating the corresponding sensitivity indices). Figure 1 provides a schematic representation of this statement.
Taking into account uncertainty arising from different sources, whether in the context of uncertainty analysis or sensitivity analysis (for calculating sensitivity indices), requires multiple samples of the uncertain parameters and, consequently, running the model (evaluating thef{\displaystyle f}-function) multiple times. Depending on the complexity of the model there are many challenges that may be encountered during model evaluation. Therefore, the choice of method of sensitivity analysis is typically dictated by a number of problem constraints, settings or challenges. Some of the most common are:
To address the various constraints and challenges, a number of methods for sensitivity analysis have been proposed in the literature, which we will examine in the next section.
There are a large number of approaches to performing a sensitivity analysis, many of which have been developed to address one or more of the constraints discussed above. They are also distinguished by the type of sensitivity measure, be it based on (for example)variance decompositions,partial derivativesorelementary effects. In general, however, most procedures adhere to the following outline:
In some cases this procedure will be repeated, for example in high-dimensional problems where the user has to screen out unimportant variables before performing a full sensitivity analysis.
The various types of "core methods" (discussed below) are distinguished by the sensitivity measures they calculate. These categories can overlap to some extent, and alternative ways of obtaining the same measures, under the constraints of the problem, may be available. In addition, an engineering view of the methods that takes into account the four important sensitivity analysis parameters has also been proposed.[16]
The first intuitive approach (especially useful in less complex cases) is to analyze the relationship between each inputZi{\displaystyle Z_{i}}and the outputY{\displaystyle Y}using scatter plots, and observe the behavior of these pairs. The diagrams give an initial idea of the correlation and which input has an impact on the output. Figure 2 shows an example where two inputs,Z3{\displaystyle Z_{3}}andZ4{\displaystyle Z_{4}}are highly correlated with the output.
One of the simplest and most common approaches is that of changing one-factor-at-a-time (OAT), to see what effect this produces on the output.[17][18][19]OAT customarily involves
Sensitivity may then be measured by monitoring changes in the output, e.g. by partial derivatives or linear regression. This appears to be a logical approach, as any change observed in the output will unambiguously be due to the single variable changed. Furthermore, by changing one variable at a time, one can keep all other variables fixed at their central or baseline values. This increases the comparability of the results (all 'effects' are computed with reference to the same central point in space) and minimizes the chances of computer program crashes, which are more likely when several input factors are changed simultaneously.
OAT is frequently preferred by modellers for practical reasons: in case of model failure under an OAT analysis, the modeller immediately knows which input factor is responsible for the failure.
Despite its simplicity however, this approach does not fully explore the input space, since it does not take into account the simultaneous variation of input variables. This means that the OAT approach cannot detect the presence ofinteractionsbetween input variables and is unsuitable for nonlinear models.[20]
The proportion of input space which remains unexplored with an OAT approach grows superexponentially with the number of inputs. For example, a 3-variable parameter space which is explored one-at-a-time is equivalent to taking points along the x, y, and z axes of a cube centered at the origin. Theconvex hullbounding all these points is anoctahedronwhich has a volume only 1/6th of the total parameter space. More generally, the convex hull of the axes of a hyperrectangle forms ahyperoctahedronwhich has a volume fraction of1/n!{\displaystyle 1/n!}. With 5 inputs, the explored space already drops to less than 1% of the total parameter space. And even this is an overestimate, since the off-axis volume is not actually being sampled at all. Compare this to random sampling of the space, where the convex hull approaches the entire volume as more points are added.[21]While the sparsity of OAT is theoretically not a concern forlinear models, true linearity is rare in nature.
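A minimal sketch of the OAT idea on a hypothetical toy model (the function, baseline point and step size are assumptions made purely for illustration); note how the interaction term stays invisible, because two inputs are never varied together:

```python
# One-at-a-time (OAT) sensitivity on an illustrative toy model.
def model(x1, x2, x3):
    return x1 + 2.0 * x2 + x1 * x3  # deliberately contains an x1*x3 interaction

baseline = {"x1": 1.0, "x2": 1.0, "x3": 1.0}  # assumed nominal point
step = 0.1                                    # assumed perturbation size

y0 = model(**baseline)
for name in baseline:
    perturbed = dict(baseline)
    perturbed[name] += step
    effect = (model(**perturbed) - y0) / step  # finite-difference "effect" of one input
    print(f"{name}: OAT effect = {effect:.2f}")

# OAT cannot reveal that the effect of x1 depends on the value of x3,
# since x1 and x3 are never perturbed simultaneously.
```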
Named after the statistician Max D. Morris, this method is suitable for screening systems with many parameters. This is also known as method of elementary effects because it combines repeated steps along the various parametric axes.[22]
Local derivative-based methods involve taking the partial derivative of the output Y{\displaystyle Y} with respect to an input factor Xi{\displaystyle X_{i}}:
∂Y∂Xi|x0,{\displaystyle \left.{\frac {\partial Y}{\partial X_{i}}}\right|_{x^{0}},}
where the subscript x0 indicates that the derivative is taken at some fixed point in the space of the input (hence the 'local' in the name of the class). Adjoint modelling[23][24] and automatic differentiation[25] are methods that allow all partial derivatives to be computed at a cost of at most 4–6 times that of evaluating the original function. Similar to OAT, local methods do not attempt to fully explore the input space, since they examine small perturbations, typically one variable at a time. It is also possible to select similar samples from derivative-based sensitivity through neural networks and perform uncertainty quantification.
One advantage of the local methods is that it is possible to make a matrix to represent all the sensitivities in a system, thus providing an overview that cannot be achieved with global methods if there is a large number of input and output variables.[26]
Regression analysis, in the context of sensitivity analysis, involves fitting alinear regressionto the model response and usingstandardized regression coefficientsas direct measures of sensitivity. The regression is required to be linear with respect to the data (i.e. a hyperplane, hence with no quadratic terms, etc., as regressors) because otherwise it is difficult to interpret the standardised coefficients. This method is therefore most suitable when the model response is in fact linear; linearity can be confirmed, for instance, if thecoefficient of determinationis large. The advantages of regression analysis are that it is simple and has a low computational cost.
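A minimal sketch of the regression-based approach, under illustrative assumptions (a synthetic near-linear model, independent standard-normal inputs and a chosen sample size): fit an ordinary least-squares model, use the standardized regression coefficients as sensitivity measures, and check the coefficient of determination before trusting them.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
X = rng.normal(size=(n, 3))  # three independent inputs (assumed distributions)
Y = 3.0 * X[:, 0] + 1.0 * X[:, 1] + 0.1 * X[:, 2] + rng.normal(scale=0.5, size=n)

# Ordinary least squares with an intercept column.
A = np.column_stack([np.ones(n), X])
coef, *_ = np.linalg.lstsq(A, Y, rcond=None)

# Standardized regression coefficients (SRCs) as sensitivity measures.
src = coef[1:] * X.std(axis=0) / Y.std()
r2 = 1.0 - np.var(Y - A @ coef) / np.var(Y)
print("SRCs:", np.round(src, 3))
print("R^2 :", round(r2, 3))  # close to 1 here, so the linear fit is trustworthy
```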
Variance-based methods[27]are a class of probabilistic approaches which quantify the input and output uncertainties asrandom variables, represented via theirprobability distributions, and decompose the output variance into parts attributable to input variables and combinations of variables. The sensitivity of the output to an input variable is therefore measured by the amount of variance in the output caused by that input.
This amount is quantified and calculated usingSobol indices: they represent the proportion of variance explained by an input or group of inputs. This expression essentially measures the contribution ofXi{\displaystyle X_{i}}alone to the uncertainty (variance) inY{\displaystyle Y}(averaged over variations in other variables), and is known as thefirst-order sensitivity indexormain effect indexormain Sobol indexorSobol main index.
For an input Xi{\displaystyle X_{i}}, the Sobol index is defined as follows:
Si=V(E[Y|Xi])V(Y){\displaystyle S_{i}={\frac {V(\mathbb {E} [Y\vert X_{i}])}{V(Y)}}}whereV(⋅){\displaystyle V(\cdot )}andE[⋅]{\displaystyle \mathbb {E} [\cdot ]}denote the variance and expected value operators respectively.
Importantly, the first-order sensitivity index of Xi{\displaystyle X_{i}} does not measure the uncertainty caused by the interactions Xi{\displaystyle X_{i}} has with other variables. A further measure, known as the total effect index, gives the total variance in Y{\displaystyle Y} caused by Xi{\displaystyle X_{i}} and its interactions with any of the other input variables. The total effect index is given as follows:SiT=1−V(E[Y|X∼i])V(Y){\displaystyle S_{i}^{T}=1-{\frac {V(\mathbb {E} [Y\vert X_{\sim i}])}{V(Y)}}}where X∼i=(X1,...,Xi−1,Xi+1,...,Xp){\displaystyle X_{\sim i}=(X_{1},...,X_{i-1},X_{i+1},...,X_{p})} denotes the set of all input variables except Xi{\displaystyle X_{i}}.
Variance-based methods allow full exploration of the input space, accounting for interactions, and nonlinear responses. For these reasons they are widely used when it is feasible to calculate them. Typically this calculation involves the use ofMonte Carlomethods, but since this can involve many thousands of model runs, other methods (such as metamodels) can be used to reduce computational expense when necessary.
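A minimal sketch of such a Monte Carlo calculation for first-order indices, using a "pick-freeze" (Saltelli-style) estimator on an Ishigami-type test function; the function, the uniform input ranges and the sample size are assumptions of the example, and dedicated packages provide more robust estimators:

```python
import numpy as np

rng = np.random.default_rng(1)

def f(X):  # Ishigami-type test function (illustrative model)
    return np.sin(X[:, 0]) + 7.0 * np.sin(X[:, 1]) ** 2 + 0.1 * X[:, 2] ** 4 * np.sin(X[:, 0])

N, p = 100_000, 3
A = rng.uniform(-np.pi, np.pi, size=(N, p))  # two independent input samples
B = rng.uniform(-np.pi, np.pi, size=(N, p))
yA, yB = f(A), f(B)
var_y = np.var(np.concatenate([yA, yB]))

for i in range(p):
    AB_i = A.copy()
    AB_i[:, i] = B[:, i]                        # column i taken from B, the rest "frozen" from A
    S_i = np.mean(yB * (f(AB_i) - yA)) / var_y  # first-order (main effect) estimator
    print(f"S_{i + 1} ~ {S_i:.3f}")
```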
Moment-independent methods extend variance-based techniques by considering the probability density or cumulative distribution function of the model outputY{\displaystyle Y}. Thus, they do not refer to any particularmomentofY{\displaystyle Y}, whence the name.
The moment-independent sensitivity measures ofXi{\displaystyle X_{i}}, here denoted byξi{\displaystyle \xi _{i}}, can be defined through an equation similar to variance-based indices replacing the conditional expectation with a distance, asξi=E[d(PY,PY|Xi)]{\displaystyle \xi _{i}=E[d(P_{Y},P_{Y|X_{i}})]}, whered(⋅,⋅){\displaystyle d(\cdot ,\cdot )}is astatistical distance[metric or divergence] between probability measures,PY{\displaystyle P_{Y}}andPY|Xi{\displaystyle P_{Y|X_{i}}}are the marginal andconditional probabilitymeasures ofY{\displaystyle Y}.[28]
Ifd()≥0{\displaystyle d()\geq 0}is adistance, the moment-independent global sensitivity measure satisfies zero-independence. This is a relevant statistical property also known as Renyi's postulate D.[29]
The class of moment-independent sensitivity measures includes indicators such as theδ{\displaystyle \delta }-importance measure,[30]the new correlation coefficient of Chatterjee,[31]the Wasserstein correlation of Wiesel[32]and the kernel-based sensitivity measures of Barr and Rabitz.[33]
Another measure for global sensitivity analysis, in the category of moment-independent approaches, is the PAWN index.[34] It relies on cumulative distribution functions (CDFs) to characterize the maximum distance between the unconditional output distribution and the conditional output distributions (obtained by varying all input parameters while fixing the i{\displaystyle i}-th input in turn). The difference between the unconditional and conditional output distributions is usually calculated using the Kolmogorov–Smirnov test (KS). The PAWN index for a given input parameter is then obtained by calculating a summary statistic over all KS values.[citation needed]
One of the major shortcomings of the previous sensitivity analysis methods is that none of them considers the spatially ordered structure of the response surface/output of the modelY=f(X){\displaystyle Y=f(X)}in the parameter space. By utilizing the concepts of directionalvariogramsand covariograms, variogram analysis of response surfaces (VARS) addresses this weakness through recognizing a spatially continuous correlation structure to the values ofY{\displaystyle Y}, and hence also to the values of∂Y∂xi{\displaystyle {\frac {\partial Y}{\partial x_{i}}}}.[35][36]
Basically, the higher the variability, the more heterogeneous the response surface is along a particular direction/parameter, at a specific perturbation scale. Accordingly, in the VARS framework, the values of directional variograms for a given perturbation scale can be considered as a comprehensive illustration of sensitivity information, through linking variogram analysis to both direction and perturbation-scale concepts. As a result, the VARS framework accounts for the fact that sensitivity is a scale-dependent concept, and thus overcomes the scale issue of traditional sensitivity analysis methods.[37] More importantly, VARS is able to provide relatively stable and statistically robust estimates of parameter sensitivity with much lower computational cost than other strategies (about two orders of magnitude more efficient).[38] Notably, it has been shown that there is a theoretical link between the VARS framework and the variance-based and derivative-based approaches.
The Fourier amplitude sensitivity test (FAST) uses theFourier seriesto represent a multivariate function (the model) in the frequency domain, using a single frequency variable. Therefore, the integrals required to calculate sensitivity indices become univariate, resulting in computational savings.
Shapley effects rely on Shapley values and represent the average marginal contribution of a given factor across all possible combinations of factors. These values are related to Sobol' indices, as their value falls between the first-order Sobol' effect and the total-order effect.[39]
The principle is to project the function of interest onto a basis of orthogonal polynomials. The Sobol indices are then expressed analytically in terms of the coefficients of this decomposition.[40]
A number of methods have been developed to overcome some of the constraints discussed above, which would otherwise make the estimation of sensitivity measures infeasible (most often due to computational expense). Generally, these methods focus on efficiently calculating variance-based measures of sensitivity, by creating a metamodel of the costly function to be evaluated and/or by "wisely" sampling the factor space.
Metamodels (also known as emulators, surrogate models or response surfaces) are data-modelling/machine-learning approaches that involve building a relatively simple mathematical function, known as a metamodel, that approximates the input/output behaviour of the model itself.[41] In other words, it is the concept of "modelling a model" (hence the name "metamodel"). The idea is that, although computer models may be a very complex series of equations that can take a long time to solve, they can always be regarded as a function of their inputs Y=f(X){\displaystyle Y=f(X)}. By running the model at a number of points in the input space, it may be possible to fit a much simpler metamodel f^(X){\displaystyle {\hat {f}}(X)}, such that f^(X)≈f(X){\displaystyle {\hat {f}}(X)\approx f(X)} to within an acceptable margin of error.[42] Then, sensitivity measures can be calculated from the metamodel (either with Monte Carlo or analytically), which will have a negligible additional computational cost. Importantly, the number of model runs required to fit the metamodel can be orders of magnitude less than the number of runs required to directly estimate the sensitivity measures from the model.[43]
Clearly, the crux of a metamodel approach is to find an f^(X){\displaystyle {\hat {f}}(X)} (metamodel) that is a sufficiently close approximation to the model f(X){\displaystyle f(X)}. This requires the following steps:
Sampling the model can often be done with low-discrepancy sequences, such as the Sobol sequence (due to mathematician Ilya M. Sobol) or Latin hypercube sampling, although random designs can also be used, at the loss of some efficiency. The selection of the metamodel type and the training are intrinsically linked, since the training method will depend on the class of metamodel. Some types of metamodels that have been used successfully for sensitivity analysis include:
The use of an emulator introduces amachine learningproblem, which can be difficult if the response of the model is highlynonlinear. In all cases, it is useful to check the accuracy of the emulator, for example usingcross-validation.
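A minimal sketch of the metamodel idea under illustrative assumptions (a stand-in "expensive" model, a small random training design and a quadratic least-squares surrogate; in practice low-discrepancy designs and richer emulators such as Gaussian processes are common): fit the surrogate from a few runs and validate it before using it for sensitivity estimates.

```python
import numpy as np

rng = np.random.default_rng(2)

def expensive_model(X):  # stand-in for a costly simulator (assumption of the example)
    return X[:, 0] ** 2 + 0.5 * X[:, 1] + 0.1 * X[:, 0] * X[:, 1]

# 1. Small training design over the input space.
X_train = rng.uniform(-1.0, 1.0, size=(50, 2))
y_train = expensive_model(X_train)

# 2. Fit a simple quadratic surrogate by least squares.
def features(X):
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones(len(X)), x1, x2, x1 * x2, x1 ** 2, x2 ** 2])

coef, *_ = np.linalg.lstsq(features(X_train), y_train, rcond=None)

def surrogate(X):
    return features(X) @ coef

# 3. Check the emulator's accuracy on held-out points (cf. cross-validation).
X_test = rng.uniform(-1.0, 1.0, size=(1000, 2))
err = np.max(np.abs(surrogate(X_test) - expensive_model(X_test)))
print("max surrogate error on test points:", err)
# Sensitivity measures can now be estimated cheaply from surrogate() instead of the model.
```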
A high-dimensional model representation (HDMR)[49][50] (the term is due to H. Rabitz[51]) is essentially an emulator approach, which involves decomposing the function output into a linear combination of input terms and interactions of increasing dimensionality. The HDMR approach exploits the fact that the model can usually be well approximated by neglecting higher-order interactions (second- or third-order and above). The terms in the truncated series can then each be approximated by, e.g., polynomials or splines, and the response expressed as the sum of the main effects and interactions up to the truncation order. From this perspective, HDMRs can be seen as emulators which neglect high-order interactions; the advantage is that they are able to emulate models with higher dimensionality than full-order emulators.
Sensitivity analysis via Monte Carlo filtering[52]is also a sampling-based approach, whose objective is to identify regions in the space of the input factors corresponding to particular values (e.g., high or low) of the output.
Sensitivity analysis is closely related with uncertainty analysis; while the latter studies the overalluncertaintyin the conclusions of the study, sensitivity analysis tries to identify what source of uncertainty weighs more on the study's conclusions.
The problem setting in sensitivity analysis also has strong similarities with the field ofdesign of experiments.[53]In a design of experiments, one studies the effect of some process or intervention (the 'treatment') on some objects (the 'experimental units'). In sensitivity analysis one looks at the effect of varying the inputs of a mathematical model on the output of the model itself. In both disciplines one strives to obtain information from the system with a minimum of physical or numerical experiments.
It may happen that a sensitivity analysis of a model-based study is meant to underpin an inference, and to certify its robustness, in a context where the inference feeds into a policy or decision-making process. In these cases the framing of the analysis itself, its institutional context, and the motivations of its author may become a matter of great importance, and a pure sensitivity analysis – with its emphasis on parametric uncertainty – may be seen as insufficient. The emphasis on the framing may derive, inter alia, from the relevance of the policy study to different constituencies that are characterized by different norms and values, and hence by a different story about 'what the problem is' and foremost about 'who is telling the story'. Most often the framing includes more or less implicit assumptions, which could range from political (e.g. which group needs to be protected) all the way to technical (e.g. which variable can be treated as a constant).
In order to take these concerns into due consideration, the instruments of SA have been extended to provide an assessment of the entire knowledge and model-generating process. This approach has been called 'sensitivity auditing'. It takes inspiration from NUSAP,[54] a method used to qualify the worth of quantitative information with the generation of 'pedigrees' of numbers. Sensitivity auditing has been especially designed for an adversarial context, where not only the nature of the evidence, but also the degree of certainty and uncertainty associated with the evidence, will be the subject of partisan interests.[55] Sensitivity auditing is recommended in the European Commission guidelines for impact assessment,[56] as well as in the report Science Advice for Policy by European Academies.[57]
Some common difficulties in sensitivity analysis include:
" I have proposed a form of organized sensitivity analysis that I call 'global sensitivity analysis' in which a neighborhood of alternative assumptions is selected and the corresponding interval of inferences is identified. Conclusions are judged to be sturdy only if the neighborhood of assumptions is wide enough to be credible and the corresponding interval of inferences is narrow enough to be useful."
The importance of understanding and managing uncertainty in model results has inspired many scientists from different research centers all over the world to take a close interest in this subject. National and international agencies involved inimpact assessmentstudies have included sections devoted to sensitivity analysis in their guidelines. Examples are theEuropean Commission(see e.g. the guidelines forimpact assessment),[56]the White HouseOffice of Management and Budget, theIntergovernmental Panel on Climate ChangeandUS Environmental Protection Agency's modeling guidelines.[61]
The following pages discuss sensitivity analyses in relation to specific applications:
|
https://en.wikipedia.org/wiki/Sensitivity_analysis
|
Instatistics,propagation of uncertainty(orpropagation of error) is the effect ofvariables'uncertainties(orerrors, more specificallyrandom errors) on the uncertainty of afunctionbased on them. When the variables are the values of experimental measurements they haveuncertainties due to measurement limitations(e.g., instrumentprecision) which propagate due to the combination of variables in the function.
The uncertaintyucan be expressed in a number of ways.
It may be defined by theabsolute errorΔx. Uncertainties can also be defined by therelative error(Δx)/x, which is usually written as a percentage.
Most commonly, the uncertainty on a quantity is quantified in terms of thestandard deviation,σ, which is the positive square root of thevariance. The value of a quantity and its error are then expressed as an intervalx±u.
However, the most general way of characterizing uncertainty is by specifying itsprobability distribution.
If theprobability distributionof the variable is known or can be assumed, in theory it is possible to get any of its statistics. In particular, it is possible to deriveconfidence limitsto describe the region within which the true value of the variable may be found. For example, the 68% confidence limits for a one-dimensional variable belonging to anormal distributionare approximately ± one standard deviationσfrom the central valuex, which means that the regionx±σwill cover the true value in roughly 68% of cases.
If the uncertainties arecorrelatedthencovariancemust be taken into account. Correlation can arise from two different sources. First, themeasurement errorsmay be correlated. Second, when the underlying values are correlated across a population, theuncertainties in the group averageswill be correlated.[1]
In a general context where a nonlinear function modifies the uncertain parameters (correlated or not), the standard tools to propagate uncertainty, and infer resulting quantity probability distribution/statistics, are sampling techniques from theMonte Carlo methodfamily.[2]For very large datasets or complex functions, the calculation of the error propagation may be very expensive so that asurrogate model[3]or aparallel computingstrategy[4][5][6]may be necessary.
In some particular cases, the uncertainty propagation calculation can be done through simple algebraic procedures. Some of these scenarios are described below.
Let{fk(x1,x2,…,xn)}{\displaystyle \{f_{k}(x_{1},x_{2},\dots ,x_{n})\}}be a set ofmfunctions, which are linear combinations ofn{\displaystyle n}variablesx1,x2,…,xn{\displaystyle x_{1},x_{2},\dots ,x_{n}}with combination coefficientsAk1,Ak2,…,Akn,(k=1,…,m){\displaystyle A_{k1},A_{k2},\dots ,A_{kn},(k=1,\dots ,m)}:fk=∑i=1nAkixi,{\displaystyle f_{k}=\sum _{i=1}^{n}A_{ki}x_{i},}or in matrix notation,f=Ax.{\displaystyle \mathbf {f} =\mathbf {A} \mathbf {x} .}
Also let thevariance–covariance matrixofx= (x1, ...,xn)be denoted byΣx{\displaystyle {\boldsymbol {\Sigma }}^{x}}and let the mean value be denoted byμ{\displaystyle {\boldsymbol {\mu }}}:Σx=E[(x−μ)⊗(x−μ)]=(σ12σ12σ13⋯σ21σ22σ23⋯σ31σ32σ32⋯⋮⋮⋮⋱)=(Σ11xΣ12xΣ13x⋯Σ21xΣ22xΣ23x⋯Σ31xΣ32xΣ33x⋯⋮⋮⋮⋱).{\displaystyle {\begin{aligned}{\boldsymbol {\Sigma }}^{x}=\operatorname {E} [(\mathbf {x} -{\boldsymbol {\mu }})\otimes (\mathbf {x} -{\boldsymbol {\mu }})]&={\begin{pmatrix}\sigma _{1}^{2}&\sigma _{12}&\sigma _{13}&\cdots \\\sigma _{21}&\sigma _{2}^{2}&\sigma _{23}&\cdots \\\sigma _{31}&\sigma _{32}&\sigma _{3}^{2}&\cdots \\\vdots &\vdots &\vdots &\ddots \end{pmatrix}}\\[1ex]&={\begin{pmatrix}{\Sigma }_{11}^{x}&{\Sigma }_{12}^{x}&{\Sigma }_{13}^{x}&\cdots \\{\Sigma }_{21}^{x}&{\Sigma }_{22}^{x}&{\Sigma }_{23}^{x}&\cdots \\{\Sigma }_{31}^{x}&{\Sigma }_{32}^{x}&{\Sigma }_{33}^{x}&\cdots \\\vdots &\vdots &\vdots &\ddots \end{pmatrix}}.\end{aligned}}}⊗{\displaystyle \otimes }is theouter product.
Then, the variance–covariance matrixΣf{\displaystyle {\boldsymbol {\Sigma }}^{f}}offis given byΣf=E[(f−E[f])⊗(f−E[f])]=E[A(x−μ)⊗A(x−μ)]=AE[(x−μ)⊗(x−μ)]AT=AΣxAT.{\displaystyle {\begin{aligned}{\boldsymbol {\Sigma }}^{f}&=\operatorname {E} \left[(\mathbf {f} -\operatorname {E} [\mathbf {f} ])\otimes (\mathbf {f} -\operatorname {E} [\mathbf {f} ])\right]=\operatorname {E} \left[\mathbf {A} (\mathbf {x} -{\boldsymbol {\mu }})\otimes \mathbf {A} (\mathbf {x} -{\boldsymbol {\mu }})\right]\\[1ex]&=\mathbf {A} \operatorname {E} \left[(\mathbf {x} -{\boldsymbol {\mu }})\otimes (\mathbf {x} -{\boldsymbol {\mu }})\right]\mathbf {A} ^{\mathrm {T} }=\mathbf {A} {\boldsymbol {\Sigma }}^{x}\mathbf {A} ^{\mathrm {T} }.\end{aligned}}}
In component notation, the equationΣf=AΣxAT{\displaystyle {\boldsymbol {\Sigma }}^{f}=\mathbf {A} {\boldsymbol {\Sigma }}^{x}\mathbf {A} ^{\mathrm {T} }}readsΣijf=∑kn∑lnAikΣklxAjl.{\displaystyle \Sigma _{ij}^{f}=\sum _{k}^{n}\sum _{l}^{n}A_{ik}{\Sigma }_{kl}^{x}A_{jl}.}
This is the most general expression for the propagation of error from one set of variables onto another. When the errors onxare uncorrelated, the general expression simplifies toΣijf=∑knAikΣkxAjk,{\displaystyle \Sigma _{ij}^{f}=\sum _{k}^{n}A_{ik}\Sigma _{k}^{x}A_{jk},}whereΣkx=σxk2{\displaystyle \Sigma _{k}^{x}=\sigma _{x_{k}}^{2}}is the variance ofk-th element of thexvector.
Note that even though the errors onxmay be uncorrelated, the errors onfare in general correlated; in other words, even ifΣx{\displaystyle {\boldsymbol {\Sigma }}^{x}}is a diagonal matrix,Σf{\displaystyle {\boldsymbol {\Sigma }}^{f}}is in general a full matrix.
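A minimal numerical sketch of this linear case, with an illustrative coefficient matrix and input covariance: the output covariance is obtained as A Σˣ Aᵀ, and, as noted above, the outputs are generally correlated even when the inputs are not.

```python
import numpy as np

# f = A x : propagate the input covariance through a linear map.
A = np.array([[1.0,  2.0],
              [0.5, -1.0]])               # illustrative coefficients
Sigma_x = np.array([[0.04, 0.01],
                    [0.01, 0.09]])        # illustrative input covariance

Sigma_f = A @ Sigma_x @ A.T
print("covariance of f:\n", Sigma_f)
# Even with a diagonal Sigma_x (uncorrelated inputs), Sigma_f is generally
# a full matrix, i.e. the components of f become correlated.
```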
The general expressions for a scalar-valued functionfare a little simpler (hereais a row vector):f=∑inaixi=ax,{\displaystyle f=\sum _{i}^{n}a_{i}x_{i}=\mathbf {ax} ,}σf2=∑in∑jnaiΣijxaj=aΣxaT.{\displaystyle \sigma _{f}^{2}=\sum _{i}^{n}\sum _{j}^{n}a_{i}\Sigma _{ij}^{x}a_{j}=\mathbf {a} {\boldsymbol {\Sigma }}^{x}\mathbf {a} ^{\mathrm {T} }.}
Each covariance termσij{\displaystyle \sigma _{ij}}can be expressed in terms of thecorrelation coefficientρij{\displaystyle \rho _{ij}}byσij=ρijσiσj{\displaystyle \sigma _{ij}=\rho _{ij}\sigma _{i}\sigma _{j}}, so that an alternative expression for the variance offisσf2=∑inai2σi2+∑in∑j(j≠i)naiajρijσiσj.{\displaystyle \sigma _{f}^{2}=\sum _{i}^{n}a_{i}^{2}\sigma _{i}^{2}+\sum _{i}^{n}\sum _{j(j\neq i)}^{n}a_{i}a_{j}\rho _{ij}\sigma _{i}\sigma _{j}.}
In the case that the variables inxare uncorrelated, this simplifies further toσf2=∑inai2σi2.{\displaystyle \sigma _{f}^{2}=\sum _{i}^{n}a_{i}^{2}\sigma _{i}^{2}.}
In the simple case of identical coefficients and variances, we findσf=n|a|σ.{\displaystyle \sigma _{f}={\sqrt {n}}\,|a|\sigma .}
For the arithmetic mean,a=1/n{\displaystyle a=1/n}, the result is thestandard error of the mean:σf=σn.{\displaystyle \sigma _{f}={\frac {\sigma }{\sqrt {n}}}.}
Whenfis a set of non-linear combination of the variablesx, aninterval propagationcould be performed in order to compute intervals which contain all consistent values for the variables. In a probabilistic approach, the functionfmust usually be linearised by approximation to a first-orderTaylor seriesexpansion, though in some cases, exact formulae can be derived that do not depend on the expansion as is the case for the exact variance of products.[7]The Taylor expansion would be:fk≈fk0+∑in∂fk∂xixi{\displaystyle f_{k}\approx f_{k}^{0}+\sum _{i}^{n}{\frac {\partial f_{k}}{\partial {x_{i}}}}x_{i}}where∂fk/∂xi{\displaystyle \partial f_{k}/\partial x_{i}}denotes thepartial derivativeoffkwith respect to thei-th variable, evaluated at the mean value of all components of vectorx. Or inmatrix notation,f≈f0+Jx{\displaystyle \mathrm {f} \approx \mathrm {f} ^{0}+\mathrm {J} \mathrm {x} \,}where J is theJacobian matrix. Since f0is a constant it does not contribute to the error on f. Therefore, the propagation of error follows the linear case, above, but replacing the linear coefficients,AkiandAkjby the partial derivatives,∂fk∂xi{\displaystyle {\frac {\partial f_{k}}{\partial x_{i}}}}and∂fk∂xj{\displaystyle {\frac {\partial f_{k}}{\partial x_{j}}}}. In matrix notation,[8]Σf=JΣxJ⊤.{\displaystyle \mathrm {\Sigma } ^{\mathrm {f} }=\mathrm {J} \mathrm {\Sigma } ^{\mathrm {x} }\mathrm {J} ^{\top }.}
That is, the Jacobian of the function is used to transform the rows and columns of the variance-covariance matrix of the argument.
Note this is equivalent to the matrix expression for the linear case withJ=A{\displaystyle \mathrm {J=A} }.
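For the non-linear case, the same matrix expression applies with the Jacobian in place of A. A minimal sketch under illustrative assumptions (a toy two-output function, assumed mean values and an assumed diagonal input covariance), with the Jacobian estimated by central finite differences at the mean:

```python
import numpy as np

def f(x):  # illustrative non-linear function of two variables
    return np.array([x[0] * x[1], np.log(x[0]) + x[1] ** 2])

mu = np.array([2.0, 3.0])        # assumed mean values of the inputs
Sigma_x = np.diag([0.01, 0.04])  # assumed (uncorrelated) input variances

def jacobian(func, x, h=1e-6):
    """Central finite-difference Jacobian of func at x."""
    m, n = len(func(x)), len(x)
    J = np.zeros((m, n))
    for i in range(n):
        dx = np.zeros(n)
        dx[i] = h
        J[:, i] = (func(x + dx) - func(x - dx)) / (2.0 * h)
    return J

J = jacobian(f, mu)
Sigma_f = J @ Sigma_x @ J.T      # first-order (linearised) propagation
print("approximate covariance of f:\n", Sigma_f)
```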
Neglecting correlations or assuming independent variables yields a common formula among engineers and experimental scientists to calculate error propagation, the variance formula:[9]sf=(∂f∂x)2sx2+(∂f∂y)2sy2+(∂f∂z)2sz2+⋯{\displaystyle s_{f}={\sqrt {\left({\frac {\partial f}{\partial x}}\right)^{2}s_{x}^{2}+\left({\frac {\partial f}{\partial y}}\right)^{2}s_{y}^{2}+\left({\frac {\partial f}{\partial z}}\right)^{2}s_{z}^{2}+\cdots }}}wheresf{\displaystyle s_{f}}represents the standard deviation of the functionf{\displaystyle f},sx{\displaystyle s_{x}}represents the standard deviation ofx{\displaystyle x},sy{\displaystyle s_{y}}represents the standard deviation ofy{\displaystyle y}, and so forth.
This formula is based on the linear characteristics of the gradient off{\displaystyle f}and therefore it is a good estimation for the standard deviation off{\displaystyle f}as long assx,sy,sz,…{\displaystyle s_{x},s_{y},s_{z},\ldots }are small enough. Specifically, the linear approximation off{\displaystyle f}has to be close tof{\displaystyle f}inside a neighbourhood of radiussx,sy,sz,…{\displaystyle s_{x},s_{y},s_{z},\ldots }.[10]
Any non-linear differentiable function,f(a,b){\displaystyle f(a,b)}, of two variables,a{\displaystyle a}andb{\displaystyle b}, can be expanded asf≈f0+∂f∂aa+∂f∂bb.{\displaystyle f\approx f^{0}+{\frac {\partial f}{\partial a}}a+{\frac {\partial f}{\partial b}}b.}If we take the variance on both sides and use the formula[11]for the variance of a linear combination of variablesVar(aX+bY)=a2Var(X)+b2Var(Y)+2abCov(X,Y),{\displaystyle \operatorname {Var} (aX+bY)=a^{2}\operatorname {Var} (X)+b^{2}\operatorname {Var} (Y)+2ab\operatorname {Cov} (X,Y),}then we obtainσf2≈|∂f∂a|2σa2+|∂f∂b|2σb2+2∂f∂a∂f∂bσab,{\displaystyle \sigma _{f}^{2}\approx \left|{\frac {\partial f}{\partial a}}\right|^{2}\sigma _{a}^{2}+\left|{\frac {\partial f}{\partial b}}\right|^{2}\sigma _{b}^{2}+2{\frac {\partial f}{\partial a}}{\frac {\partial f}{\partial b}}\sigma _{ab},}whereσf{\displaystyle \sigma _{f}}is the standard deviation of the functionf{\displaystyle f},σa{\displaystyle \sigma _{a}}is the standard deviation ofa{\displaystyle a},σb{\displaystyle \sigma _{b}}is the standard deviation ofb{\displaystyle b}andσab=σaσbρab{\displaystyle \sigma _{ab}=\sigma _{a}\sigma _{b}\rho _{ab}}is the covariance betweena{\displaystyle a}andb{\displaystyle b}.
In the particular case thatf=ab{\displaystyle f=ab},∂f∂a=b{\displaystyle {\frac {\partial f}{\partial a}}=b},∂f∂b=a{\displaystyle {\frac {\partial f}{\partial b}}=a}.Thenσf2≈b2σa2+a2σb2+2abσab{\displaystyle \sigma _{f}^{2}\approx b^{2}\sigma _{a}^{2}+a^{2}\sigma _{b}^{2}+2ab\,\sigma _{ab}}or(σff)2≈(σaa)2+(σbb)2+2(σaa)(σbb)ρab{\displaystyle \left({\frac {\sigma _{f}}{f}}\right)^{2}\approx \left({\frac {\sigma _{a}}{a}}\right)^{2}+\left({\frac {\sigma _{b}}{b}}\right)^{2}+2\left({\frac {\sigma _{a}}{a}}\right)\left({\frac {\sigma _{b}}{b}}\right)\rho _{ab}}whereρab{\displaystyle \rho _{ab}}is the correlation betweena{\displaystyle a}andb{\displaystyle b}.
When the variablesa{\displaystyle a}andb{\displaystyle b}are uncorrelated,ρab=0{\displaystyle \rho _{ab}=0}. Then(σff)2≈(σaa)2+(σbb)2.{\displaystyle \left({\frac {\sigma _{f}}{f}}\right)^{2}\approx \left({\frac {\sigma _{a}}{a}}\right)^{2}+\left({\frac {\sigma _{b}}{b}}\right)^{2}.}
Error estimates for non-linear functions arebiasedon account of using a truncated series expansion. The extent of this bias depends on the nature of the function. For example, the bias on the error calculated for log(1+x) increases asxincreases, since the expansion toxis a good approximation only whenxis near zero.
For highly non-linear functions, there exist five categories of probabilistic approaches for uncertainty propagation;[12]seeUncertainty quantificationfor details.
In the special case of the inverse or reciprocal1/B{\displaystyle 1/B}, whereB=N(0,1){\displaystyle B=N(0,1)}follows astandard normal distribution, the resulting distribution is a reciprocal standard normal distribution, and there is no definable variance.[13]
However, in the slightly more general case of a shifted reciprocal function1/(p−B){\displaystyle 1/(p-B)}forB=N(μ,σ){\displaystyle B=N(\mu ,\sigma )}following a general normal distribution, then mean and variance statistics do exist in aprincipal valuesense, if the difference between the polep{\displaystyle p}and the meanμ{\displaystyle \mu }is real-valued.[14]
Ratios are also problematic; normal approximations exist under certain conditions.
This table shows the variances and standard deviations of simple functions of the real variablesA,B{\displaystyle A,B}with standard deviationsσA,σB,{\displaystyle \sigma _{A},\sigma _{B},}covarianceσAB=ρABσAσB,{\displaystyle \sigma _{AB}=\rho _{AB}\sigma _{A}\sigma _{B},}and correlationρAB.{\displaystyle \rho _{AB}.}The real-valued coefficientsa{\displaystyle a}andb{\displaystyle b}are assumed exactly known (deterministic), i.e.,σa=σb=0.{\displaystyle \sigma _{a}=\sigma _{b}=0.}
In the right-hand columns of the table,A{\displaystyle A}andB{\displaystyle B}areexpectation values, andf{\displaystyle f}is the value of the function calculated at those values.
For uncorrelated variables (ρAB=0{\displaystyle \rho _{AB}=0},σAB=0{\displaystyle \sigma _{AB}=0}) expressions for more complicated functions can be derived by combining simpler functions. For example, repeated multiplication, assuming no correlation, givesf=ABC;(σff)2≈(σAA)2+(σBB)2+(σCC)2.{\displaystyle f=ABC;\qquad \left({\frac {\sigma _{f}}{f}}\right)^{2}\approx \left({\frac {\sigma _{A}}{A}}\right)^{2}+\left({\frac {\sigma _{B}}{B}}\right)^{2}+\left({\frac {\sigma _{C}}{C}}\right)^{2}.}
For the casef=AB{\displaystyle f=AB}we also have Goodman's expression[7]for the exact variance: for the uncorrelated case it isV[XY]=E[X]2V[Y]+E[Y]2V[X]+E[(X−E(X))2(Y−E(Y))2],{\displaystyle \operatorname {V} [XY]=\operatorname {E} [X]^{2}\operatorname {V} [Y]+\operatorname {E} [Y]^{2}\operatorname {V} [X]+\operatorname {E} \left[\left(X-\operatorname {E} (X)\right)^{2}\left(Y-\operatorname {E} (Y)\right)^{2}\right],}and therefore we haveσf2=A2σB2+B2σA2+σA2σB2.{\displaystyle \sigma _{f}^{2}=A^{2}\sigma _{B}^{2}+B^{2}\sigma _{A}^{2}+\sigma _{A}^{2}\sigma _{B}^{2}.}
IfAandBare uncorrelated, their differenceA−Bwill have more variance than either of them. An increasing positive correlation (ρAB→1{\displaystyle \rho _{AB}\to 1}) will decrease the variance of the difference, converging to zero variance for perfectly correlated variables with thesame variance. On the other hand, a negative correlation (ρAB→−1{\displaystyle \rho _{AB}\to -1}) will further increase the variance of the difference, compared to the uncorrelated case.
For example, the self-subtractionf=A−Ahas zero varianceσf2=0{\displaystyle \sigma _{f}^{2}=0}only if the variate is perfectlyautocorrelated(ρA=1{\displaystyle \rho _{A}=1}). IfAis uncorrelated,ρA=0,{\displaystyle \rho _{A}=0,}then the output variance is twice the input variance,σf2=2σA2.{\displaystyle \sigma _{f}^{2}=2\sigma _{A}^{2}.}And ifAis perfectly anticorrelated,ρA=−1,{\displaystyle \rho _{A}=-1,}then the input variance is quadrupled in the output,σf2=4σA2{\displaystyle \sigma _{f}^{2}=4\sigma _{A}^{2}}(notice1−ρA=2{\displaystyle 1-\rho _{A}=2}forf=aA−aAin the table above).
We can calculate the uncertainty propagation for the inverse tangent function as an example of using partial derivatives to propagate error.
Define f(x)=arctan⁡(x),{\displaystyle f(x)=\arctan(x),} and let Δx{\displaystyle \Delta _{x}} be the absolute uncertainty on our measurement of x. The derivative of f(x) with respect to x is dfdx=11+x2.{\displaystyle {\frac {df}{dx}}={\frac {1}{1+x^{2}}}.}
Therefore, our propagated uncertainty isΔf≈Δx1+x2,{\displaystyle \Delta _{f}\approx {\frac {\Delta _{x}}{1+x^{2}}},}whereΔf{\displaystyle \Delta _{f}}is the absolute propagated uncertainty.
A practical application is anexperimentin which one measurescurrent,I, andvoltage,V, on aresistorin order to determine theresistance,R, usingOhm's law,R=V/I.
Given the measured variables with uncertainties,I±σIandV±σV, and neglecting their possible correlation, the uncertainty in the computed quantity,σR, is:
σR≈σV2(1I)2+σI2(−VI2)2=R(σVV)2+(σII)2.{\displaystyle \sigma _{R}\approx {\sqrt {\sigma _{V}^{2}\left({\frac {1}{I}}\right)^{2}+\sigma _{I}^{2}\left({\frac {-V}{I^{2}}}\right)^{2}}}=R{\sqrt {\left({\frac {\sigma _{V}}{V}}\right)^{2}+\left({\frac {\sigma _{I}}{I}}\right)^{2}}}.}
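A minimal numerical sketch of this resistance example, with hypothetical measured values and uncertainties, and an optional Monte Carlo cross-check under an assumed normal error model:

```python
import numpy as np

V, sigma_V = 12.0, 0.10  # hypothetical voltage measurement, volts
I, sigma_I = 2.0, 0.05   # hypothetical current measurement, amperes

R = V / I
sigma_R = R * np.sqrt((sigma_V / V) ** 2 + (sigma_I / I) ** 2)
print(f"R = {R:.3f} ± {sigma_R:.3f} ohm")

# Cross-check by Monte Carlo sampling, assuming independent normal errors.
rng = np.random.default_rng(0)
samples = rng.normal(V, sigma_V, 100_000) / rng.normal(I, sigma_I, 100_000)
print("Monte Carlo estimate of sigma_R:", round(samples.std(), 3))
```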
|
https://en.wikipedia.org/wiki/Propagation_of_uncertainty
|
Uncertainty analysisinvestigates the uncertainty of variables that are used indecision-makingproblems in which observations and models represent theknowledge base. In other words, uncertainty analysis aims to make a technical contribution to decision-making through the quantification of uncertainties in the relevant variables.
In physicalexperimentsuncertainty analysis, orexperimental uncertainty assessment, deals with assessing theuncertaintyin ameasurement. An experiment designed to determine an effect, demonstrate a law, or estimate the numerical value of aphysical variablewill be affected byerrorsdue to instrumentation, methodology, presence of confounding effects and so on. Experimental uncertainty estimates are needed to assess theconfidencein the results.[1]A related field is thedesign of experiments.
Likewise in numerical experiments andmodellinguncertainty analysis draws upon a number of techniques for determining the reliability of model predictions, accounting for various sources of uncertainty in model input and design. A related field issensitivity analysis.
A calibrated parameter does not necessarily represent reality, as reality is much more complex. Any prediction involves complexities of reality that cannot be represented uniquely in the calibrated model; therefore, there is a potential error. Such errors must be accounted for when making management decisions on the basis of model outcomes.[2]
|
https://en.wikipedia.org/wiki/Uncertainty_analysis
|
Instatisticsand in particularstatistical theory,unbiased estimation of a standard deviationis the calculation from astatistical sampleof an estimated value of thestandard deviation(a measure ofstatistical dispersion) of apopulationof values, in such a way that theexpected valueof the calculation equals the true value. Except in some important situations, outlined later, the task has little relevance to applications of statistics since its need is avoided by standard procedures, such as the use ofsignificance testsandconfidence intervals, or by usingBayesian analysis.
However, for statistical theory, it provides an exemplar problem in the context ofestimation theorywhich is both simple to state and for which results cannot be obtained in closed form. It also provides an example where imposing the requirement forunbiased estimationmight be seen as just adding inconvenience, with no real benefit.
In statistics, the standard deviation of a population of numbers is often estimated from a random sample drawn from the population. This is the sample standard deviation, which is defined by
s=1n−1∑i=1n(xi−x¯)2,{\displaystyle s={\sqrt {{\frac {1}{n-1}}\sum _{i=1}^{n}\left(x_{i}-{\overline {x}}\right)^{2}}},}
where{x1,x2,…,xn}{\displaystyle \{x_{1},x_{2},\ldots ,x_{n}\}}is the sample (formally, realizations from arandom variableX) andx¯{\displaystyle {\overline {x}}}is thesample mean.
One way of seeing that this is abiased estimatorof the standard deviation of the population is to start from the result thats2is anunbiased estimatorfor thevarianceσ2of the underlying population if that variance exists and the sample values are drawn independently with replacement. The square root is a nonlinear function, and only linear functions commute with taking the expectation. Since the square root is a strictly concave function, it follows fromJensen's inequalitythat the square root of the sample variance is an underestimate.
The use of n − 1 instead of n in the formula for the sample variance is known as Bessel's correction, which corrects the bias in the estimation of the population variance, and some, but not all, of the bias in the estimation of the population standard deviation.
It is not possible to find an estimate of the standard deviation which is unbiased for all population distributions, as the bias depends on the particular distribution. Much of the following relates to estimation assuming anormal distribution.
When the random variable is normally distributed, a minor correction exists to eliminate the bias. To derive the correction, note that for normally distributed X, Cochran's theorem implies that (n−1)s2/σ2{\displaystyle (n-1)s^{2}/\sigma ^{2}} has a chi square distribution with n−1{\displaystyle n-1} degrees of freedom and thus its square root, n−1s/σ{\displaystyle {\sqrt {n-1}}s/\sigma }, has a chi distribution with n−1{\displaystyle n-1} degrees of freedom. Consequently, calculating the expectation of this last expression and rearranging constants,
E⁡[s]=c4(n)σ,{\displaystyle \operatorname {E} [s]=c_{4}(n)\,\sigma ,}
where the correction factor c4(n){\displaystyle c_{4}(n)} is the scale mean of the chi distribution with n−1{\displaystyle n-1} degrees of freedom, μ1/n−1{\displaystyle \mu _{1}/{\sqrt {n-1}}}. This depends on the sample size n, and is given as follows:[1]
c4(n)=2n−1Γ(n2)Γ(n−12),{\displaystyle c_{4}(n)={\sqrt {\frac {2}{n-1}}}\,{\frac {\Gamma \left({\frac {n}{2}}\right)}{\Gamma \left({\frac {n-1}{2}}\right)}},}
where Γ(·) is thegamma function. An unbiased estimator ofσcan be obtained by dividings{\displaystyle s}byc4(n){\displaystyle c_{4}(n)}. Asn{\displaystyle n}grows large it approaches 1, and even for smaller values the correction is minor. The figure shows a plot ofc4(n){\displaystyle c_{4}(n)}versus sample size. The table below gives numerical values ofc4(n){\displaystyle c_{4}(n)}and algebraic expressions for some values ofn{\displaystyle n}; more complete tables may be found in most textbooks[2][3]onstatistical quality control.
It is important to keep in mind this correction only produces an unbiased estimator for normally and independently distributedX. When this condition is satisfied, another result aboutsinvolvingc4(n){\displaystyle c_{4}(n)}is that thestandard errorofsis[4][5]σ1−c42{\displaystyle \sigma {\sqrt {1-c_{4}^{2}}}}, while thestandard errorof the unbiased estimator isσc4−2−1.{\displaystyle \sigma {\sqrt {c_{4}^{-2}-1}}.}
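A minimal sketch that evaluates the correction factor c₄(n) via the log-gamma function and checks it by simulation for normal data (the sample size, σ and number of replications are assumptions of the example):

```python
import math
import numpy as np

def c4(n):
    """Correction factor c4(n) = sqrt(2/(n-1)) * Gamma(n/2) / Gamma((n-1)/2)."""
    return math.sqrt(2.0 / (n - 1)) * math.exp(math.lgamma(n / 2.0) - math.lgamma((n - 1) / 2.0))

n, sigma, reps = 5, 2.0, 200_000
rng = np.random.default_rng(0)
s = np.array([rng.normal(0.0, sigma, n).std(ddof=1) for _ in range(reps)])

print("c4(n)            :", round(c4(n), 4))            # about 0.9400 for n = 5
print("mean(s) / sigma  :", round(s.mean() / sigma, 4))  # should be close to c4(n)
print("corrected / sigma:", round(s.mean() / c4(n) / sigma, 4))  # close to 1, i.e. unbiased
```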
If calculation of the function c4(n) appears too difficult, there is a simple rule of thumb[6] to take the estimator
σ^=1n−1.5∑i=1n(xi−x¯)2.{\displaystyle {\hat {\sigma }}={\sqrt {{\frac {1}{n-1.5}}\sum _{i=1}^{n}\left(x_{i}-{\overline {x}}\right)^{2}}}.}
The formula differs from the familiar expression fors2only by havingn− 1.5instead ofn− 1in the denominator. This expression is only approximate; in fact,
The bias is relatively small: for n=3{\displaystyle n=3}, say, it is equal to 2.3%, and for n=9{\displaystyle n=9} the bias is already down to 0.1%.
In cases wherestatistically independentdata are modelled by a parametric family of distributions other than thenormal distribution, the population standard deviation will, if it exists, be a function of the parameters of the model. One general approach to estimation would bemaximum likelihood. Alternatively, it may be possible to use theRao–Blackwell theoremas a route to finding a good estimate of the standard deviation. In neither case would the estimates obtained usually be unbiased. Notionally, theoretical adjustments might be obtainable to lead to unbiased estimates but, unlike those for the normal distribution, these would typically depend on the estimated parameters.
If the requirement is simply to reduce the bias of an estimated standard deviation, rather than to eliminate it entirely, then two practical approaches are available, both within the context ofresampling. These arejackknifingandbootstrapping. Both can be applied either to parametrically based estimates of the standard deviation or to the sample standard deviation.
For non-normal distributions an approximate (up toO(n−1) terms) formula for the unbiased estimator of the standard deviation is
whereγ2denotes the populationexcess kurtosis. The excess kurtosis may be either known beforehand for certain distributions, or estimated from the data.
The material above, to stress the point again, applies only to independent data. However, real-world data often does not meet this requirement; it isautocorrelated(also known as serial correlation). As one example, the successive readings of a measurement instrument that incorporates some form of “smoothing” (more correctly, low-pass filtering) process will be autocorrelated, since any particular value is calculated from some combination of the earlier and later readings.
Estimates of the variance, and standard deviation, of autocorrelated data will be biased. The expected value of the sample variance is[7]
wherenis the sample size (number of measurements) andρk{\displaystyle \rho _{k}}is the autocorrelation function (ACF) of the data. (Note that the expression in the brackets is simply one minus the average expected autocorrelation for the readings.) If the ACF consists of positive values then the estimate of the variance (and its square root, the standard deviation) will be biased low. That is, the actual variability of the data will be greater than that indicated by an uncorrected variance or standard deviation calculation. It is essential to recognize that, if this expression is to be used to correct for the bias, by dividing the estimates2{\displaystyle s^{2}}by the quantity in brackets above, then the ACF must be knownanalytically, not via estimation from the data. This is because the estimated ACF will itself be biased.[8]
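A minimal simulation sketch of this effect, using an AR(1) process (with autocorrelation function ρ_k = φᵏ) as an illustrative source of positively correlated data; the coefficient, sample size and number of replications are assumptions of the example:

```python
import numpy as np

rng = np.random.default_rng(0)
phi, n, reps = 0.8, 50, 5_000
true_var = 1.0 / (1.0 - phi ** 2)  # stationary variance of AR(1) with unit innovations

est = np.empty(reps)
for r in range(reps):
    x = np.empty(n)
    x[0] = rng.normal(scale=np.sqrt(true_var))  # start in the stationary distribution
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.normal()
    est[r] = x.var(ddof=1)                      # ordinary (uncorrected) sample variance

print("true variance       :", round(true_var, 3))
print("mean sample variance:", round(est.mean(), 3))  # biased low for positive ACF
```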
To illustrate the magnitude of the bias in the standard deviation, consider a dataset that consists of sequential readings from an instrument that uses a specific digital filter whose ACF is known to be given by
whereαis the parameter of the filter, and it takes values from zero to unity. Thus the ACF is positive and geometrically decreasing.
The figure shows the ratio of the estimated standard deviation to its known value (which can be calculated analytically for this digital filter), for several settings ofαas a function of sample sizen. Changingαalters the variance reduction ratio of the filter, which is known to be
so that smaller values ofαresult in more variance reduction, or “smoothing.” The bias is indicated by values on the vertical axis different from unity; that is, if there were no bias, the ratio of the estimated to known standard deviation would be unity. Clearly, for modest sample sizes there can be significant bias (a factor of two, or more).
It is often of interest to estimate the variance or standard deviation of an estimatedmeanrather than the variance of a population. When the data are autocorrelated, this has a direct effect on the theoretical variance of the sample mean, which is[9]
The variance of the sample mean can then be estimated by substituting an estimate ofσ2. One such estimate can be obtained from the equation for E[s2] given above. First define the following constants, assuming, again, aknownACF:
so that
This says that the expected value of the quantity obtained by dividing the observed sample variance by the correction factorγ1{\displaystyle \gamma _{1}}gives an unbiased estimate of the variance. Similarly, re-writing the expression above for the variance of the mean,
and substituting the estimate forσ2{\displaystyle \sigma ^{2}}gives[10]
which is an unbiased estimator of the variance of the mean in terms of the observed sample variance and known quantities. If the autocorrelationsρk{\displaystyle \rho _{k}}are identically zero, this expression reduces to the well-known result for the variance of the mean for independent data. The effect of the expectation operator in these expressions is that the equality holds in the mean (i.e., on average).
Having the expressions above involving the variance of the population, and of an estimate of the mean of that population, it would seem logical to simply take the square root of these expressions to obtain unbiased estimates of the respective standard deviations. However it is the case that, since expectations are integrals, the expectation of a square root is not the square root of the expectation, so taking square roots of these unbiased variance estimates would reintroduce a bias.
Instead, assume a function θ exists such that an unbiased estimator of the standard deviation can be written
and θ depends on the sample size n and the ACF. In the case of NID (normally and independently distributed) data, the radicand is unity and θ is just the c₄ function given in the first section above. As with c₄, θ approaches unity as the sample size increases (as does γ₁).
It can be demonstrated via simulation modeling that ignoring θ (that is, taking it to be unity) and using
removes all but a few percent of the bias caused by autocorrelation, making this a reduced-bias estimator, rather than an unbiased estimator. In practical measurement situations, this reduction in bias can be significant, and useful, even if some relatively small bias remains. The figure above, showing an example of the bias in the standard deviation vs. sample size, is based on this approximation; the actual bias would be somewhat larger than indicated in those graphs since the transformation bias θ is not included there.
The unbiased variance of the mean in terms of the population variance and the ACF is given by
and since there are no expected values here, in this case the square root can be taken, so that
Using the unbiased estimate expression above for σ, an estimate of the standard deviation of the mean will then be
If the data are NID, so that the ACF vanishes, this reduces to
In the presence of a nonzero ACF, ignoring the function θ as before leads to the reduced-bias estimator
which again can be demonstrated to remove a useful majority of the bias.
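Below is a hedged sketch of these reduced-bias estimators. Because the displayed equations are not reproduced here, it assumes the standard expressions γ₁ = 1 − (2/(n−1)) Σ_{k=1}^{n−1} (1 − k/n) ρ_k and Var(mean) = (σ²/n)(1 + 2 Σ_{k=1}^{n−1} (1 − k/n) ρ_k), with the ACF ρ_k known analytically (here the geometric ACF of the filter example); the data values are made up.

```python
import numpy as np

def gamma_factors(rho, n):
    """rho[0] is rho_1, rho[1] is rho_2, ... (known analytically)."""
    k = np.arange(1, n)
    s = np.sum((1 - k / n) * rho[:n - 1])
    gamma1 = 1 - (2.0 / (n - 1)) * s     # assumed: E[s^2] = gamma1 * sigma^2
    gamma2 = (1 + 2 * s) / n             # assumed: Var(mean) = gamma2 * sigma^2
    return gamma1, gamma2

def reduced_bias_sd_of_mean(sample, rho):
    """Ignore theta (take it as unity) and correct s using the known ACF."""
    n = len(sample)
    g1, g2 = gamma_factors(rho, n)
    s = np.std(sample, ddof=1)
    return s * np.sqrt(g2 / g1)

alpha, n = 0.3, 10
rho = (1 - alpha) ** np.arange(1, n)     # geometric ACF, rho_k = (1-alpha)^k
data = np.array([0.10, 0.12, 0.15, 0.11, 0.09, 0.10, 0.13, 0.14, 0.12, 0.11])
print(reduced_bias_sd_of_mean(data, rho))
```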
This article incorporates public domain material from the National Institute of Standards and Technology.
|
https://en.wikipedia.org/wiki/Unbiased_estimation_of_standard_deviation
|
In numerical analysis, the interval finite element method (interval FEM) is a finite element method that uses interval parameters. Interval FEM can be applied in situations where it is not possible to obtain reliable probabilistic characteristics of the structure. This is important in concrete structures, wood structures, geomechanics, composite structures, biomechanics and in many other areas.[1] The goal of the interval finite element method is to find upper and lower bounds of different characteristics of the model (e.g. stress, displacements, yield surface, etc.) and use these results in the design process. This is the so-called worst-case design, which is closely related to limit state design.
Worst-case design requires less information than probabilistic design; however, the results are more conservative [Köylüoglu and Elishakoff 1998].[citation needed]
Consider the following equation: $ax = b$, where a and b are real numbers, and $x = \frac{b}{a}$.
Very often, the exact values of the parametersaandbare unknown.
Let's assume that $a \in [1,2] = \mathbf{a}$ and $b \in [1,4] = \mathbf{b}$. In this case, it is necessary to solve the following equation: $[1,2]\,x = [1,4]$.
There are several definitions of the solution set of this equation with interval parameters.
In this approach the solution is the following set: $\mathbf{x} = \{x : ax = b,\ a \in \mathbf{a},\ b \in \mathbf{b}\} = \frac{\mathbf{b}}{\mathbf{a}} = \frac{[1,4]}{[1,2]} = [0.5, 4]$.
This is the most popular solution set of the interval equation and this solution set will be applied in this article.
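A minimal sketch of the interval division used above; the helper function is my own illustration, not a library API, and it assumes that zero is not contained in the divisor interval.

```python
def interval_div(b, a):
    """United solution set of a*x = b, i.e. the interval b / a (0 not in a)."""
    lo_a, hi_a = a
    lo_b, hi_b = b
    candidates = [lo_b / lo_a, lo_b / hi_a, hi_b / lo_a, hi_b / hi_a]
    return (min(candidates), max(candidates))

print(interval_div((1, 4), (1, 2)))   # (0.5, 4.0), as in the example above
```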
In the multidimensional case the united solution set is much more complicated.
The solution set of the following system of linear interval equations
$\begin{bmatrix} [-4,-3] & [-2,2] \\ [-2,2] & [-4,-3] \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} [-8,8] \\ [-8,8] \end{bmatrix}$
is shown in the following picture: $\sum_{\exists\exists}(\mathbf{A}, \mathbf{b}) = \{x : Ax = b,\ A \in \mathbf{A},\ b \in \mathbf{b}\}$
The exact solution set is very complicated, thus it is necessary to find the smallest interval which contains the exact solution set:
$\diamondsuit\left(\sum_{\exists\exists}(\mathbf{A}, \mathbf{b})\right) = \diamondsuit\{x : Ax = b,\ A \in \mathbf{A},\ b \in \mathbf{b}\}$
or simply
$\diamondsuit\left(\sum_{\exists\exists}(\mathbf{A}, \mathbf{b})\right) = [\underline{x}_1, \overline{x}_1] \times [\underline{x}_2, \overline{x}_2] \times \dots \times [\underline{x}_n, \overline{x}_n]$
where $\underline{x}_i = \min\{x_i : Ax = b,\ A \in \mathbf{A},\ b \in \mathbf{b}\},\ \ \overline{x}_i = \max\{x_i : Ax = b,\ A \in \mathbf{A},\ b \in \mathbf{b}\}$, so that $x_i \in \{x_i : Ax = b,\ A \in \mathbf{A},\ b \in \mathbf{b}\} = [\underline{x}_i, \overline{x}_i]$. See also [1].
The interval finite element method requires the solution of a parameter-dependent system of equations (usually with a symmetric positive definite matrix). An example of the solution set of a general parameter-dependent system of equations,
$\begin{bmatrix} p_1 & p_2 \\ p_2 + 1 & p_1 \end{bmatrix} \begin{bmatrix} u_1 \\ u_2 \end{bmatrix} = \begin{bmatrix} \frac{p_1 + 6p_2}{5.0} \\ 2p_1 - 6 \end{bmatrix}, \quad \text{for } p_1 \in [2,4],\ p_2 \in [-2,1],$
is shown in the picture below.[2]
In this approach x is an interval number for which the equation $[1,2]\,x = [1,4]$ is satisfied. In other words, the left side of the equation is equal to the right side of the equation.
In this particular case the solution is $x = [1,2]$, because $ax = [1,2]\,[1,2] = [1,4]$.
If the uncertainty is larger, i.e. $a = [1,4]$, then $x = [1,1]$, because $ax = [1,4]\,[1,1] = [1,4]$.
If the uncertainty is even larger, i.e. $a = [1,8]$, then the solution doesn't exist. It is very difficult to find a physical interpretation of the algebraic interval solution set.
Thus, in applications, the united solution set is usually applied.
Consider the PDE with the interval parameters
where $p = (p_1, \dots, p_m) \in \mathbf{p}$ is a vector of parameters which belong to given intervals $p_i \in [\underline{p}_i, \overline{p}_i] = \mathbf{p}_i$, with $\mathbf{p} = \mathbf{p}_1 \times \mathbf{p}_2 \times \cdots \times \mathbf{p}_m$.
For example, the heat transfer equation
$k_x \frac{\partial^2 u}{\partial x^2} + k_y \frac{\partial^2 u}{\partial y^2} + q = 0 \text{ for } x \in \Omega, \qquad u(x) = u^*(x) \text{ for } x \in \partial\Omega,$
where $k_x, k_y$ are the interval parameters (i.e. $k_x \in \mathbf{k}_x,\ k_y \in \mathbf{k}_y$).
The solution of equation (1) can be defined in the following way: $\tilde{u}(x) := \{u(x) : G(x, u, p) = 0,\ p \in \mathbf{p}\}$
For example, in the case of the heat transfer equation: $\tilde{u}(x) = \left\{u(x) : k_x \frac{\partial^2 u}{\partial x^2} + k_y \frac{\partial^2 u}{\partial y^2} + q = 0 \text{ for } x \in \Omega,\ u(x) = u^*(x) \text{ for } x \in \partial\Omega,\ k_x \in \mathbf{k}_x,\ k_y \in \mathbf{k}_y\right\}$
The solution set $\tilde{u}$ is in general very complicated; because of that, in practice it is more interesting to find the smallest possible interval which contains the exact solution set $\tilde{u}$:
$\mathbf{u}(x) = \lozenge\tilde{u}(x) = \lozenge\{u(x) : G(x, u, p) = 0,\ p \in \mathbf{p}\}$
For example, in the case of the heat transfer equation: $\mathbf{u}(x) = \lozenge\left\{u(x) : k_x \frac{\partial^2 u}{\partial x^2} + k_y \frac{\partial^2 u}{\partial y^2} + q = 0 \text{ for } x \in \Omega,\ u(x) = u^*(x) \text{ for } x \in \partial\Omega,\ k_x \in \mathbf{k}_x,\ k_y \in \mathbf{k}_y\right\}$
The finite element method leads to the following parameter-dependent system of algebraic equations: $K(p)u = Q(p),\ p \in \mathbf{p}$, where K is a stiffness matrix and Q is the right-hand side vector.
The interval solution can be defined as a multivalued function: $\mathbf{u} = \lozenge\{u : K(p)u = Q(p),\ p \in \mathbf{p}\}$
In the simplest case the above system can be treated as a system of linear interval equations.
It is also possible to define the interval solution as the solution of the following optimization problems: $\underline{u}_i = \min\{u_i : K(p)u = Q(p),\ p \in \mathbf{p}\},\ \ \overline{u}_i = \max\{u_i : K(p)u = Q(p),\ p \in \mathbf{p}\}$
In the multidimensional case the interval solution can be written as $\mathbf{u} = \mathbf{u}_1 \times \cdots \times \mathbf{u}_n = [\underline{u}_1, \overline{u}_1] \times \cdots \times [\underline{u}_n, \overline{u}_n]$
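A sketch of the optimization definition above: approximate the lower and upper bounds of each uᵢ by sampling the parameter box and solving K(p)u = Q(p) at every sample. Dense sampling gives only an inner approximation (it can miss the true extrema); the 2×2 system used here is the illustrative example shown earlier, which is an assumption on my part.

```python
import itertools
import numpy as np

def K(p1, p2):
    return np.array([[p1, p2], [p2 + 1, p1]], dtype=float)

def Q(p1, p2):
    return np.array([(p1 + 6 * p2) / 5.0, 2 * p1 - 6], dtype=float)

p1_grid = np.linspace(2, 4, 41)       # p1 in [2, 4]
p2_grid = np.linspace(-2, 1, 41)      # p2 in [-2, 1]
sols = np.array([np.linalg.solve(K(p1, p2), Q(p1, p2))
                 for p1, p2 in itertools.product(p1_grid, p2_grid)])
print("u1 in", (sols[:, 0].min(), sols[:, 0].max()))
print("u2 in", (sols[:, 1].min(), sols[:, 1].max()))
```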
It is important to know that the interval parameters generate different results than uniformly distributed random variables.
The interval parameter $\mathbf{p} = [\underline{p}, \overline{p}]$ takes into account all possible probability distributions (for $p \in [\underline{p}, \overline{p}]$).
In order to define the interval parameter it is necessary to know only the upper bound $\overline{p}$ and the lower bound $\underline{p}$.
Calculating probabilistic characteristics, in contrast, requires a large number of experimental results.
It is possible to show that the sum of n interval numbers is $\sqrt{n}$ times wider than the sum of the corresponding normally distributed random variables.
The sum of n interval numbers $\mathbf{p} = [\underline{p}, \overline{p}]$ is equal to $n\mathbf{p} = [n\underline{p}, n\overline{p}]$.
The width of that interval is equal to $n\overline{p} - n\underline{p} = n(\overline{p} - \underline{p}) = n\Delta p$.
Consider a normally distributed random variable X such that $m_X = E[X] = \frac{\overline{p} + \underline{p}}{2},\ \sigma_X = \sqrt{\operatorname{Var}[X]} = \frac{\Delta p}{6}$.
The sum of n such normally distributed random variables is a normally distributed random variable with the following characteristics (see Six Sigma): $E[nX] = n\frac{\overline{p} + \underline{p}}{2},\ \sigma_{nX} = \sqrt{n\operatorname{Var}[X]} = \sqrt{n}\,\sigma = \sqrt{n}\,\frac{\Delta p}{6}$.
We can assume that the width of the probabilistic result is equal to 6 sigma (compare Six Sigma): $6\sigma_{nX} = 6\sqrt{n}\,\frac{\Delta p}{6} = \sqrt{n}\,\Delta p$.
Now we can compare the width of the interval result and the probabilistic result: $\frac{\text{width of } n \text{ intervals}}{\text{width of } n \text{ random variables}} = \frac{n\Delta p}{\sqrt{n}\,\Delta p} = \sqrt{n}$.
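A small numeric check of the √n width ratio derived above, using the "6 sigma" convention for the width of the probabilistic result; the interval endpoints and n are arbitrary illustrative values.

```python
import numpy as np

p_lo, p_hi, n = 9.0, 11.0, 25
delta_p = p_hi - p_lo

interval_width = n * delta_p                      # width of the sum of n intervals
sigma_x = delta_p / 6.0                           # SD of the matching normal variable
probabilistic_width = 6 * np.sqrt(n) * sigma_x    # 6 sigma of the sum of n variables
print(interval_width / probabilistic_width, np.sqrt(n))   # both equal 5.0 here
```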
Because of that, the results of the interval finite element method (or, in general, of worst-case analysis) may be overestimated in comparison with a stochastic FEM analysis (see also propagation of uncertainty).
However, in the case of nonprobabilistic uncertainty it is not possible to apply pure probabilistic methods,
because the probabilistic characteristics are in that case not known exactly (Elishakoff 2000).
It is possible to consider random variables (and fuzzy random variables) with interval parameters (e.g. with interval mean, variance, etc.).
Some researchers use interval (fuzzy) measurements in statistical calculations (e.g. [2]). As a result of such calculations we get so-called imprecise probability.
Imprecise probability is understood in a very wide sense. It is used as a generic term to cover all mathematical models which measure chance or uncertainty without sharp numerical probabilities. It includes both qualitative modes (comparative probability, partial preference orderings, ...) and quantitative modes (interval probabilities, belief functions, upper and lower previsions, ...). Imprecise probability models are needed in inference problems where the relevant information is scarce, vague or conflicting, and in decision problems where preferences may also be incomplete.[3]
In the tension-compression problem, the following equation shows the relationship between displacement u and force P: $\frac{EA}{L}u = P$, where L is the length, A is the area of the cross-section, and E is Young's modulus.
If the Young's modulus and the force are uncertain, then $E \in [\underline{E}, \overline{E}],\ P \in [\underline{P}, \overline{P}]$.
To find upper and lower bounds of the displacement u, calculate the following partial derivatives: $\frac{\partial u}{\partial E} = \frac{-PL}{E^2 A} < 0,\ \ \frac{\partial u}{\partial P} = \frac{L}{EA} > 0$.
Calculate the extreme values of the displacement as follows: $\underline{u} = u(\overline{E}, \underline{P}) = \frac{\underline{P}L}{\overline{E}A},\ \ \overline{u} = u(\underline{E}, \overline{P}) = \frac{\overline{P}L}{\underline{E}A}$.
Calculate the strain using the following formula: $\varepsilon = \frac{1}{L}u$.
Calculate the derivative of the strain using the derivative of the displacement: $\frac{\partial \varepsilon}{\partial E} = \frac{1}{L}\frac{\partial u}{\partial E} = \frac{-P}{E^2 A} < 0,\ \ \frac{\partial \varepsilon}{\partial P} = \frac{1}{L}\frac{\partial u}{\partial P} = \frac{1}{EA} > 0$.
Calculate the extreme values of the strain as follows: $\underline{\varepsilon} = \varepsilon(\overline{E}, \underline{P}) = \frac{\underline{P}}{\overline{E}A},\ \ \overline{\varepsilon} = \varepsilon(\underline{E}, \overline{P}) = \frac{\overline{P}}{\underline{E}A}$.
It is also possible to calculate the extreme values of the strain using the displacements: $\frac{\partial \varepsilon}{\partial u} = \frac{1}{L} > 0$, then $\underline{\varepsilon} = \varepsilon(\underline{u}) = \frac{\underline{P}}{\overline{E}A},\ \ \overline{\varepsilon} = \varepsilon(\overline{u}) = \frac{\overline{P}}{\underline{E}A}$.
The same methodology can be applied to the stress $\sigma = E\varepsilon$; then $\frac{\partial \sigma}{\partial E} = \varepsilon + E\frac{\partial \varepsilon}{\partial E} = \varepsilon + \frac{E}{L}\frac{\partial u}{\partial E} = \frac{P}{EA} - \frac{P}{EA} = 0,\ \ \frac{\partial \sigma}{\partial P} = E\frac{\partial \varepsilon}{\partial P} = \frac{E}{L}\frac{\partial u}{\partial P} = \frac{1}{A} > 0$, and $\underline{\sigma} = \sigma(\underline{P}) = \frac{\underline{P}}{A},\ \ \overline{\sigma} = \sigma(\overline{P}) = \frac{\overline{P}}{A}$.
If we treat stress as a function of strain, then $\frac{\partial \sigma}{\partial \varepsilon} = \frac{\partial}{\partial \varepsilon}(E\varepsilon) = E > 0$, so $\underline{\sigma} = \sigma(\underline{\varepsilon}) = E\underline{\varepsilon} = \frac{\underline{P}}{A}$ and $\overline{\sigma} = \sigma(\overline{\varepsilon}) = E\overline{\varepsilon} = \frac{\overline{P}}{A}$.
The structure is safe if the stress σ is smaller than a given value σ₀, i.e. $\sigma < \sigma_0$; this condition is true if $\overline{\sigma} < \sigma_0$.
After calculation we know that this relation is satisfied if $\frac{\overline{P}}{A} < \sigma_0$.
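A worked numeric sketch of this example, using the monotonicity (the signs of the partial derivatives) to pick the interval endpoints; all numerical values below are illustrative assumptions, not values from the article.

```python
E_lo, E_hi = 200e9, 210e9        # Young's modulus bounds [Pa]
P_lo, P_hi = 9e3, 11e3           # force bounds [N]
L, A = 1.0, 1e-4                 # length [m], cross-section area [m^2]
sigma_0 = 150e6                  # allowable stress [Pa]

u_lo = P_lo * L / (E_hi * A)     # du/dE < 0, du/dP > 0
u_hi = P_hi * L / (E_lo * A)
eps_lo, eps_hi = u_lo / L, u_hi / L
s_lo, s_hi = P_lo / A, P_hi / A  # stress bounds depend only on P

print(f"u     in [{u_lo:.3e}, {u_hi:.3e}] m")
print(f"eps   in [{eps_lo:.3e}, {eps_hi:.3e}]")
print(f"sigma_max = {s_hi:.3e} Pa, safe: {s_hi < sigma_0}")
```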
The example is very simple, but it shows the application of interval parameters in mechanics. Interval FEM uses a very similar methodology in multidimensional cases [Pownuk 2004].
However, in the multidimensional cases the relation between the uncertain parameters and the solution is not always monotone. In those cases, more complicated optimization methods have to be applied.[1]
In the case of the tension-compression problem, the equilibrium equation has the following form: $\frac{d}{dx}\left(EA\frac{du}{dx}\right) + n = 0$, where u is the displacement, E is Young's modulus, A is the area of the cross-section, and n is a distributed load.
In order to get a unique solution it is necessary to add appropriate boundary conditions, e.g. $u(0) = 0$, $\left.EA\frac{du}{dx}\right|_{x=0} = P$
If Young's modulus E and the force P are uncertain, then the interval solution can be defined in the following way
$\mathbf{u}(x) = \left\{u(x) : \frac{d}{dx}\left(EA\frac{du}{dx}\right) + n = 0,\ u(0) = 0,\ EA\frac{du(0)}{dx} = P,\ E \in [\underline{E}, \overline{E}],\ P \in [\underline{P}, \overline{P}]\right\}$
For each FEM element it is possible to multiply the equation by a test function v: $\int_0^{L^{(e)}}\left(\frac{d}{dx}\left(EA\frac{du}{dx}\right) + n\right)v\,dx = 0$, where $x \in [0, L^{(e)}]$.
After integration by parts we obtain the equation in the weak form: $\int_0^{L^{(e)}} EA\frac{du}{dx}\frac{dv}{dx}\,dx = \int_0^{L^{(e)}} n v\,dx$, where $x \in [0, L^{(e)}]$.
Let's introduce a set of grid points $x_0, x_1, \dots, x_{Ne}$, where $Ne$ is the number of elements, and linear shape functions for each FEM element: $N_1^{(e)}(x) = 1 - \frac{x - x_0^{(e)}}{x_1^{(e)} - x_0^{(e)}},\ \ N_2^{(e)}(x) = \frac{x - x_0^{(e)}}{x_1^{(e)} - x_0^{(e)}}$, where $x \in [x_0^{(e)}, x_1^{(e)}]$.
Here $x_0^{(e)}$ is the left endpoint and $x_1^{(e)}$ is the right endpoint of element number "e".
Approximate solution in the "e"-th element is a linear combination of the shape functions
$u_h^{(e)}(x) = u_1^{(e)} N_1^{(e)}(x) + u_2^{(e)} N_2^{(e)}(x),\ \ v_h^{(e)}(x) = v_1^{(e)} N_1^{(e)}(x) + v_2^{(e)} N_2^{(e)}(x)$
After substitution into the weak form of the equation we get the following system of equations
[E(e)A(e)L(e)−E(e)A(e)L(e)−E(e)A(e)L(e)E(e)A(e)L(e)][u1(e)u2(e)]=[∫0L(e)nN1(e)(x)dx∫0L(e)nN2(e)(x)dx]{\displaystyle {\begin{bmatrix}{\frac {E^{(e)}A^{(e)}}{L^{(e)}}}&-{\frac {E^{(e)}A^{(e)}}{L^{(e)}}}\\-{\frac {E^{(e)}A^{(e)}}{L^{(e)}}}&{\frac {E^{(e)}A^{(e)}}{L^{(e)}}}\\\end{bmatrix}}{\begin{bmatrix}u_{1}^{(e)}\\u_{2}^{(e)}\end{bmatrix}}={\begin{bmatrix}\int _{0}^{L^{(e)}}nN_{1}^{(e)}(x)dx\\\int _{0}^{L^{(e)}}nN_{2}^{(e)}(x)dx\end{bmatrix}}}or in the matrix formK(e)u(e)=Q(e){\displaystyle K^{(e)}u^{(e)}=Q^{(e)}}
In order to assemble the global stiffness matrix it is necessary to consider the equilibrium equations at each node.
After that the equation has the following matrix formKu=Q{\displaystyle Ku=Q}whereK=[K11(1)K12(1)0⋯0K21(1)K22(1)+K11(2)K12(2)⋯00K21(2)K22(2)+K11(3)⋯0⋮⋮⋱⋱⋮00⋯K22(Ne−1)+K11(Ne)K11(Ne)00⋯K21(Ne)K22(Ne)]{\displaystyle K={\begin{bmatrix}K_{11}^{(1)}&K_{12}^{(1)}&0&\cdots &0\\K_{21}^{(1)}&K_{22}^{(1)}+K_{11}^{(2)}&K_{12}^{(2)}&\cdots &0\\0&K_{21}^{(2)}&K_{22}^{(2)}+K_{11}^{(3)}&\cdots &0\\\vdots &\vdots &\ddots &\ddots &\vdots \\0&0&\cdots &K_{22}^{(Ne-1)}+K_{11}^{(Ne)}&K_{11}^{(Ne)}\\0&0&\cdots &K_{21}^{(Ne)}&K_{22}^{(Ne)}\end{bmatrix}}}is the global stiffness matrix,u=[u0u1⋮uNe]{\displaystyle u={\begin{bmatrix}u_{0}\\u_{1}\\\vdots \\u_{Ne}\\\end{bmatrix}}}is the solution vector,Q=[Q0Q1⋮QNe]{\displaystyle Q={\begin{bmatrix}Q_{0}\\Q_{1}\\\vdots \\Q_{Ne}\\\end{bmatrix}}}is the right hand side.
In the case of tension-compression problem
K=[E(1)A(1)L(1)−E(1)A(1)L(1)0⋯0−E(1)A(1)L(1)E(1)A(1)L(1)+E(2)A(2)L(2)−E(2)A(2)L(2)⋯00−E(2)A(2)L(2)E(2)A(2)L(2)+E(3)A(3)L(3)⋯0⋮⋮⋱⋱⋮00⋯E(Ne−1)A(Ne−1)L(Ne−1)+E(Ne)A(Ne)L(Ne)−E(Ne)A(Ne)L(Ne)00⋯−E(Ne)A(Ne)L(Ne)E(Ne)A(Ne)L(Ne)]{\displaystyle K={\begin{bmatrix}{\frac {E^{(1)}A^{(1)}}{L^{(1)}}}&-{\frac {E^{(1)}A^{(1)}}{L^{(1)}}}&0&\cdots &0\\-{\frac {E^{(1)}A^{(1)}}{L^{(1)}}}&{\frac {E^{(1)}A^{(1)}}{L^{(1)}}}+{\frac {E^{(2)}A^{(2)}}{L^{(2)}}}&-{\frac {E^{(2)}A^{(2)}}{L^{(2)}}}&\cdots &0\\0&-{\frac {E^{(2)}A^{(2)}}{L^{(2)}}}&{\frac {E^{(2)}A^{(2)}}{L^{(2)}}}+{\frac {E^{(3)}A^{(3)}}{L^{(3)}}}&\cdots &0\\\vdots &\vdots &\ddots &\ddots &\vdots \\0&0&\cdots &{\frac {E^{(Ne-1)}A^{(Ne-1)}}{L^{(Ne-1)}}}+{\frac {E^{(Ne)}A^{(Ne)}}{L^{(Ne)}}}&-{\frac {E^{(Ne)}A^{(Ne)}}{L^{(Ne)}}}\\0&0&\cdots &-{\frac {E^{(Ne)}A^{(Ne)}}{L^{(Ne)}}}&{\frac {E^{(Ne)}A^{(Ne)}}{L^{(Ne)}}}\end{bmatrix}}}
If we neglect the distributed load n, the right-hand side vector becomes
Q=[R0⋮0P]{\displaystyle Q={\begin{bmatrix}R\\0\\\vdots \\0\\P\\\end{bmatrix}}}
After taking into account the boundary conditions the stiffness matrix has the following form
K=[100⋯00E(1)A(1)L(1)+E(2)A(2)L(2)−E(2)A(2)L(2)⋯00−E(2)A(2)L(2)E(2)A(2)L(2)+E(3)A(3)L(3)⋯0⋮⋮⋱⋱⋮00⋯E(e−1)A(e−1)L(e−1)+E(e)A(e)L(e)−E(e)A(e)L(e)00⋯−E(e)A(e)L(e)E(e)A(e)L(e)]=K(E,A)=K(E(1),…,E(Ne),A(1),…,A(Ne)){\displaystyle K={\begin{bmatrix}1&0&0&\cdots &0\\0&{\frac {E^{(1)}A^{(1)}}{L^{(1)}}}+{\frac {E^{(2)}A^{(2)}}{L^{(2)}}}&-{\frac {E^{(2)}A^{(2)}}{L^{(2)}}}&\cdots &0\\0&-{\frac {E^{(2)}A^{(2)}}{L^{(2)}}}&{\frac {E^{(2)}A^{(2)}}{L^{(2)}}}+{\frac {E^{(3)}A^{(3)}}{L^{(3)}}}&\cdots &0\\\vdots &\vdots &\ddots &\ddots &\vdots \\0&0&\cdots &{\frac {E^{(e-1)}A^{(e-1)}}{L^{(e-1)}}}+{\frac {E^{(e)}A^{(e)}}{L^{(e)}}}&-{\frac {E^{(e)}A^{(e)}}{L^{(e)}}}\\0&0&\cdots &-{\frac {E^{(e)}A^{(e)}}{L^{(e)}}}&{\frac {E^{(e)}A^{(e)}}{L^{(e)}}}\end{bmatrix}}=K(E,A)=K{\left(E^{(1)},\dots ,E^{(Ne)},A^{(1)},\dots ,A^{(Ne)}\right)}}
Right-hand side has the following form
Q=[00⋮0P]=Q(P){\displaystyle Q={\begin{bmatrix}0\\0\\\vdots \\0\\P\\\end{bmatrix}}=Q(P)}
Let's assume that Young's modulus E, the area of the cross-section A, and the load P are uncertain and belong to some intervals: $E^{(e)} \in [\underline{E}^{(e)}, \overline{E}^{(e)}],\ A^{(e)} \in [\underline{A}^{(e)}, \overline{A}^{(e)}],\ P \in [\underline{P}, \overline{P}]$
The interval solution can then be defined in the following way
$\mathbf{u} = \lozenge\left\{u : K(E,A)u = Q(P),\ E^{(e)} \in [\underline{E}^{(e)}, \overline{E}^{(e)}],\ A^{(e)} \in [\underline{A}^{(e)}, \overline{A}^{(e)}],\ P \in [\underline{P}, \overline{P}]\right\}$
Calculation of the interval vector $\mathbf{u}$ is in general NP-hard; however, in specific cases it is possible to calculate a solution which can be used in many engineering applications.
The results of the calculations are the interval displacements $u_i \in [\underline{u}_i, \overline{u}_i]$.
Let's assume that the displacements in the column have to be smaller than some given value (due to safety): $u_i < u_i^{\max}$
The uncertain system is safe if the interval solution satisfies all safety conditions.
In this particular case $u_i < u_i^{\max}$ for all $u_i \in [\underline{u}_i, \overline{u}_i]$, or simply $\overline{u}_i < u_i^{\max}$
In postprocessing it is possible to calculate the interval stress, the interval strain and the interval limit state functions and use these values in the design process.
The interval finite element method can be applied to the solution of problems in which there is not enough information to create a reliable probabilistic characterization of the structure (Elishakoff 2000). The interval finite element method can also be applied in the theory of imprecise probability.
It is possible to solve the equation $K(p)u(p) = Q(p)$ for all possible combinations of the endpoints of the interval $\hat{\mathbf{p}}$. The list of all vertices of the interval $\hat{\mathbf{p}}$ can be written as $L = \{p_1^*, \dots, p_n^*\}$. Upper and lower bounds of the solution can be calculated in the following way
$\underline{u}_i = \min\{u_i(p_k^*) : K(p_k^*)u(p_k^*) = Q(p_k^*),\ p_k^* \in L\},\ \ \overline{u}_i = \max\{u_i(p_k^*) : K(p_k^*)u(p_k^*) = Q(p_k^*),\ p_k^* \in L\}$
The endpoint combination method gives a solution which is usually exact; unfortunately, the method has exponential computational complexity and cannot be applied to problems with many interval parameters.[3]
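A sketch of the endpoint-combination (vertex) method: solve K(p)u = Q(p) at every combination of interval endpoints and take the element-wise min/max. The 2×2 example system is an illustrative assumption; the cost grows as 2^m in the number m of interval parameters, which is the exponential complexity noted above.

```python
import itertools
import numpy as np

def solve(p1, p2):
    K = np.array([[p1, p2], [p2 + 1, p1]], dtype=float)
    Q = np.array([(p1 + 6 * p2) / 5.0, 2 * p1 - 6], dtype=float)
    return np.linalg.solve(K, Q)

bounds = [(2.0, 4.0), (-2.0, 1.0)]                       # intervals for p1, p2
vertex_solutions = np.array([solve(*v) for v in itertools.product(*bounds)])
u_lower = vertex_solutions.min(axis=0)
u_upper = vertex_solutions.max(axis=0)
print("u_lower =", u_lower, "u_upper =", u_upper)
```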
The function $u = u(p)$ can be expanded using a Taylor series.
In the simplest case the Taylor series uses only the linear approximation
$u_i(p) \approx u_i(p_0) + \sum_j \frac{\partial u_i(p_0)}{\partial p_j}\Delta p_j$
Upper and lower bounds of the solution can then be approximated as
$\underline{u}_i \approx u_i(p_0) - \sum_j \left|\frac{\partial u_i(p_0)}{\partial p_j}\right|\Delta p_j$
$\overline{u}_i \approx u_i(p_0) + \sum_j \left|\frac{\partial u_i(p_0)}{\partial p_j}\right|\Delta p_j$
The method is very efficient; however, it is not very accurate. In order to improve accuracy it is possible to apply a higher-order Taylor expansion [Pownuk 2004]. This approach can also be applied in the interval finite difference method and the interval boundary element method.
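A sketch of the first-order Taylor approximation above, with the derivatives ∂u/∂p_j estimated by central finite differences at the interval midpoints p₀. It assumes Δp_j is the half-width of each interval and reuses the illustrative 2×2 system.

```python
import numpy as np

def solve(p):
    p1, p2 = p
    K = np.array([[p1, p2], [p2 + 1, p1]], dtype=float)
    Q = np.array([(p1 + 6 * p2) / 5.0, 2 * p1 - 6], dtype=float)
    return np.linalg.solve(K, Q)

bounds = np.array([[2.0, 4.0], [-2.0, 1.0]])
p0 = bounds.mean(axis=1)                    # interval midpoints
dp = (bounds[:, 1] - bounds[:, 0]) / 2.0    # half-widths Delta p_j

u0 = solve(p0)
spread = np.zeros_like(u0)
for j in range(len(p0)):
    h = 1e-6 * max(1.0, abs(p0[j]))
    e = np.zeros_like(p0); e[j] = h
    du_dpj = (solve(p0 + e) - solve(p0 - e)) / (2 * h)
    spread += np.abs(du_dpj) * dp[j]        # sum_j |du/dp_j| * Delta p_j

print("u_lower ≈", u0 - spread, "u_upper ≈", u0 + spread)
```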
If the sign of the derivatives $\frac{\partial u_i}{\partial p_j}$ is constant, then the functions $u_i = u_i(p)$ are monotone and the exact solution can be calculated very quickly.
Extreme values of the solution can then be calculated in the following way
$\underline{u}_i = u_i(p^{\min}),\ \overline{u}_i = u_i(p^{\max})$
In many structural engineering applications the method gives the exact solution. If the solution is not monotone, the result is usually still reasonable. In order to improve the accuracy of the method it is possible to apply monotonicity tests and higher-order sensitivity analysis. The method can be applied to the solution of linear and nonlinear problems of computational mechanics [Pownuk 2004]. Applications of the sensitivity analysis method to the solution of civil engineering problems can be found in [M.V. Rama Rao, A. Pownuk and I. Skalna 2008]. This approach can also be applied in the interval finite difference method and the interval boundary element method.
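A sketch of the sensitivity (monotonicity) method: if the sign of ∂u_i/∂p_j is constant over the box, pick for each parameter the endpoint that minimizes or maximizes u_i and solve only two crisp systems per output component. The signs are checked here only at the midpoint, so the result is exact only when monotonicity actually holds; the example system is again an illustrative assumption.

```python
import numpy as np

def solve(p):
    p1, p2 = p
    K = np.array([[p1, p2], [p2 + 1, p1]], dtype=float)
    Q = np.array([(p1 + 6 * p2) / 5.0, 2 * p1 - 6], dtype=float)
    return np.linalg.solve(K, Q)

bounds = np.array([[2.0, 4.0], [-2.0, 1.0]])
p0 = bounds.mean(axis=1)

for i in range(len(solve(p0))):
    p_min, p_max = p0.copy(), p0.copy()
    for j in range(len(p0)):
        h = 1e-6
        e = np.zeros_like(p0); e[j] = h
        d = (solve(p0 + e)[i] - solve(p0 - e)[i]) / (2 * h)   # sign of du_i/dp_j
        p_min[j] = bounds[j, 0] if d > 0 else bounds[j, 1]
        p_max[j] = bounds[j, 1] if d > 0 else bounds[j, 0]
    print(f"u_{i}: [{solve(p_min)[i]:.4f}, {solve(p_max)[i]:.4f}]")
```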
Muhanna and Mullen applied an element-by-element formulation to the solution of the finite element equation with interval parameters.[4] Using that method it is possible to obtain a solution with guaranteed accuracy in the case of truss and frame structures.
The solution $u = u(p)$, the stiffness matrix $K = K(p)$ and the load vector $Q = Q(p)$ can be expanded using perturbation theory. Perturbation theory leads to an approximate value of the interval solution.[5] The method is very efficient and can be applied to large problems of computational mechanics.
It is possible to approximate the solution $u = u(p)$ by using a response surface and then use the response surface to obtain the interval solution.[6] Using the response surface method it is possible to solve very complex problems of computational mechanics.[7]
Several authors have tried to apply pure interval methods to the solution of finite element problems with interval parameters. In some cases it is possible to get very interesting results, e.g. [Popova, Iankov, Bonev 2008]. However, in general the method generates very overestimated results.[8]
Popova[9] and Skalna[10] introduced methods for the solution of systems of linear equations in which the coefficients are linear combinations of interval parameters. In this case it is possible to obtain a very accurate solution of the interval equations with guaranteed accuracy.
|
https://en.wikipedia.org/wiki/Interval_finite_element
|
Information pollution (also referred to as info pollution) is the contamination of an information supply with irrelevant, redundant, unsolicited, hampering, and low-value information.[1][2] Examples include misinformation, junk e-mail, and media violence.
The spread of useless and undesirable information can have a detrimental effect on human activities. It is considered to be an adverse effect of the information revolution.[3]
Information pollution generally applies to digital communication, such as e-mail, instant messaging (IM), and social media. The term acquired particular relevance in 2003 when web usability expert Jakob Nielsen published articles discussing the topic.[4] As early as 1971 researchers were expressing doubts about the negative effects of having to recover "valuable nodules from a slurry of garbage in which it is a randomly dispersed minor component."[5] People use information in order to make decisions and adapt to circumstances. Cognitive studies have demonstrated that human beings can process only limited information before the quality of their decisions begins to deteriorate.[6] Information overload is a related concept that can also harm decision-making. It refers to an abundance of available information, without respect to its quality.[1][6]
Although technology is thought to have exacerbated the problem, it is not the only cause of information pollution. Anything that distracts attention from the essential facts required to perform a task or make a decision could be considered an information pollutant.
Information pollution is seen as the digital equivalent of the environmental pollution generated by industrial processes.[3][7][8] Some authors claim that information overload is a crisis of global proportions, on the same scale as the threats posed by environmental destruction. Others have expressed the need for the development of an information management paradigm that parallels environmental management practices.[6]
The manifestations of information pollution can be classified into two groups: those that provoke disruption, and those that damage information quality.
Typical examples of disrupting information pollutants include unsolicited electronic messages (spam) and instant messages, particularly in the workplace.[9] Mobile phones (ring tones and content) are disruptive in many contexts. Disrupting information pollution is not always technology based. A common example is newspapers, where subscribers read less than half or even none of the articles provided.[10] Superfluous messages, such as unnecessary labels on a map, also distract.[9]
Alternatively, information may be polluted when its quality is reduced. This may be due to inaccurate or outdated information,[8] but it also happens when information is badly presented. For example, when content is unfocused or unclear, or when it appears in cluttered, wordy, or poorly organised documents, it is difficult for the reader to understand.[11]
Laws and regulations undergo changes and revisions. Handbooks and other sources used for interpreting these laws can fall years behind the changes, which can cause the public to be misinformed.
Traditionally,[vague] information has been seen positively. People are accustomed to statements like "you cannot have too much information", "the more information the better",[9] and "knowledge is power".[8] The publishing and marketing industries have become used to printing many copies of books, magazines, and brochures regardless of customer demand, just in case they are needed.[10]
Democratised information sharing is an example of a new technology that has made it easier for information to reach everyone. Such technologies are perceived as a sign of progress and individual empowerment, as well as a positive step to bridge the digital divide.[7][8] However, they also increase the volume of distracting information, making it more difficult to distinguish valuable information from noise. The continuous use of advertising in websites, technologies, newspapers, and everyday life is known as "cultural pollution".[12]
Technological advances of the 20th century and, in particular, the internet play a key role in the increase of information pollution. Blogs, social networks, personal websites, and mobile technology all contribute to increased "noise".[9] The level of pollution may depend on the context. For example, e-mail is likely to cause more information pollution in a corporate setting,[11] whereas mobile phones are likely to be particularly disruptive in a confined space shared by multiple people, such as a train carriage.
The effects of information pollution can be seen at multiple levels.
At a personal level, information pollution affects individuals' capacity to evaluate options and find adequate solutions. This can lead to information overload, anxiety, decision paralysis, and stress.[11] It can also disrupt the learning process.[13]
Some authors argue that information pollution and information overload can cause loss of perspective and moral values.[14]This argument may explain the indifferent attitude that society shows toward topics such as scientific discoveries, health warnings, or politics.[1]Pollution makes people less sensitive to headlines and more cynical toward new messages.
Information pollution contributes to information overload and stress, which can disrupt the kinds of information processing and decision-making needed to complete tasks at work. This leads to delayed or flawed decisions, which can translate into loss of productivity and revenue as well as an increased risk of critical errors.[1][11]
Proposed solutions include management techniques and refined technology.
The term infollution or informatization pollution was coined by Dr. Paek-Jae Cho, former president & CEO of KTC (Korean Telecommunication Corp.), in a 2002 speech at the International Telecommunications Society (ITS) 14th biennial conference to describe any undesirable side effect brought about by information technology and its applications.[15]
|
https://en.wikipedia.org/wiki/Information_pollution
|
Information quality (InfoQ) is the potential of a data set to achieve a specific (scientific or practical) goal using a given empirical analysis method.
Formally, the definition is InfoQ = U(X, f | g), where X is the data, f the analysis method, g the goal, and U the utility function. InfoQ is different from data quality and analysis quality, but is dependent on these components and on the relationship between them.
InfoQ has been applied in a wide range of domains like healthcare, customer surveys, data science programs, advanced manufacturing and Bayesian network applications.
Kenett and Shmueli (2014) proposed eight dimensions to help assess InfoQ and various methods for increasing InfoQ: data resolution, data structure, data integration, temporal relevance, chronology of data and goal, generalization, operationalization, and communication.[1][2][3]
|
https://en.wikipedia.org/wiki/Information_Quality_(InfoQ)
|
In statistics, a confidence interval (CI) is a range of values used to estimate an unknown statistical parameter, such as a population mean.[1] Rather than reporting a single point estimate (e.g. "the average screen time is 3 hours per day"), a confidence interval provides a range, such as 2 to 4 hours, along with a specified confidence level, typically 95%. This indicates that if the same sampling procedure were repeated 100 times, approximately 95 of the resulting intervals would be expected to contain the true population mean.
A 95% confidence level does not imply a 95% probability that the true parameter lies within a particular calculated interval. The confidence level instead reflects the long-run reliability of the method used to generate the interval.[2]
Methods for calculating confidence intervals for the binomial proportion appeared from the 1920s.[3][4] The main ideas of confidence intervals in general were developed in the early 1930s,[5][6][7] and the first thorough and general account was given by Jerzy Neyman in 1937.[8]
Neyman described the development of the ideas as follows (reference numbers have been changed):[7]
[My work on confidence intervals] originated about 1930 from a simple question of Waclaw Pytkowski, then my student in Warsaw, engaged in an empirical study in farm economics. The question was: how to characterize non-dogmatically the precision of an estimated regression coefficient? ...
Pytkowski's monograph ... appeared in print in 1932.[9]It so happened that, somewhat earlier, Fisher published his first paper[10]concerned with fiducial distributions and fiducial argument. Quite unexpectedly, while the conceptual framework of fiducial argument is entirely different from that of confidence intervals, the specific solutions of several particular problems coincided. Thus, in the first paper in which I presented the theory of confidence intervals, published in 1934,[5]I recognized Fisher's priority for the idea that interval estimation is possible without any reference to Bayes' theorem and with the solution being independent from probabilitiesa priori. At the same time I mildly suggested that Fisher's approach to the problem involved a minor misunderstanding.
In medical journals, confidence intervals were promoted in the 1970s but only became widely used in the 1980s.[11]By 1988, medical journals were requiring the reporting of confidence intervals.[12]
Let X be a random sample from a probability distribution with statistical parameter (θ, φ). Here, θ is the quantity to be estimated, while φ includes other parameters (if any) that determine the distribution. A confidence interval for the parameter θ, with confidence level or coefficient γ, is an interval (u(X), v(X)) determined by random variables u(X) and v(X) with the property:
The number γ, whose typical value is close to but not greater than 1, is sometimes given in the form 1 − α (or as a percentage 100%·(1 − α)), where α is a small positive number, often 0.05. It means that the interval (u(X), v(X)) has a probability γ of covering the value of θ in repeated sampling.
In many applications, confidence intervals that have exactly the required confidence level are hard to construct, but approximate intervals can be computed. The rule for constructing the interval may be accepted if
to an acceptable level of approximation. Alternatively, some authors[13]simply require that
When it is known that the coverage probability can be strictly larger than γ for some parameter values, the confidence interval is called conservative, i.e., it errs on the safe side; this also means that the interval can be wider than need be.
There are many ways of calculating confidence intervals, and the best method depends on the situation. Two widely applicable methods are bootstrapping and the central limit theorem.[14] The latter method works only if the sample is large, since it entails calculating the sample mean $\bar{X}_n$ and sample standard deviation $S_n$ and assuming that the quantity
is normally distributed, where μ and n are the population mean and the sample size, respectively.
Suppose $X_1, \ldots, X_n$ is an independent sample from a normally distributed population with unknown mean μ and variance σ². Define the sample mean $\bar{X}$ and unbiased sample variance $S^2$ as
Then the value
has a Student's t distribution with n − 1 degrees of freedom.[15] This value is useful because its distribution does not depend on the values of the unobservable parameters μ and σ²; i.e., it is a pivotal quantity.
Suppose we wanted to calculate a 95% confidence interval for μ. First, let c be the 97.5th percentile of the distribution of T. Then there is a 2.5% chance that T will be less than −c and a 2.5% chance that it will be larger than +c (as the t distribution is symmetric about 0). In other words,
Consequently, by replacing T with $\frac{\bar{X} - \mu}{S/\sqrt{n}}$ and re-arranging terms,
where $P_X$ is the probability measure for the sample $X_1, \ldots, X_n$.
It means that there is a 95% probability with which the condition $\bar{X} - \frac{cS}{\sqrt{n}} \leq \mu \leq \bar{X} + \frac{cS}{\sqrt{n}}$ occurs in repeated sampling. After observing a sample, we find values $\bar{x}$ for $\bar{X}$ and s for S, from which we compute the below interval, and we say it is a 95% confidence interval for the mean.
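A minimal sketch of the 95% t-interval just derived, using the Student-t quantile from scipy; the data values are made up for illustration.

```python
import numpy as np
from scipy import stats

x = np.array([2.1, 2.5, 1.9, 2.3, 2.8, 2.2, 2.4, 2.0])
n = len(x)
xbar, s = x.mean(), x.std(ddof=1)
c = stats.t.ppf(0.975, df=n - 1)          # 97.5th percentile of t with n-1 dof
half_width = c * s / np.sqrt(n)
print(f"95% CI for the mean: [{xbar - half_width:.3f}, {xbar + half_width:.3f}]")
```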
Various interpretations of a confidence interval can be given (taking the 95% confidence interval as an example in the following).
Confidence intervals and levels are frequently misunderstood, and published studies have shown that even professional scientists often misinterpret them.[18]
For example, suppose a factory produces metal rods. A random sample of 25 rods gives a 95% confidence interval for the population mean length of 36.8 to 39.0 mm.[21]
It would be incorrect to conclude that there is a 95% probability that the true mean lies between 36.8 and 39.0 mm. Instead, the 95% confidence level means that if we took 100 such samples, we would expect the true population mean to lie within approximately 95 of the calculated intervals.[1][19][20][21]
A confidence interval is used to estimate a population parameter, such as the mean. For example, the expected value of a fair six-sided die is 3.5. Based on repeated sampling, after computing many 95% confidence intervals, roughly 95% of them will contain 3.5.
A prediction interval, on the other hand, provides a range within which a future individual observation is expected to fall with a certain probability. In the case of a single roll of a fair six-sided die, the outcome will always lie between 1 and 6. Thus, a 95% prediction interval for a future roll is approximately [1, 6], since this range captures the inherent variability of individual outcomes.
The key distinction is that confidence intervals quantify uncertainty in estimating parameters, while prediction intervals quantify uncertainty in forecasting future observations.
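A small simulation sketch contrasting the two intervals for the fair die example: the 95% confidence interval for the mean shrinks toward 3.5 as n grows, while a 95% prediction interval for a single future roll stays essentially [1, 6]. The t-interval is used here as a convenient approximation for the mean of die rolls.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
for n in (10, 100, 10_000):
    rolls = rng.integers(1, 7, size=n)
    xbar, s = rolls.mean(), rolls.std(ddof=1)
    c = stats.t.ppf(0.975, df=n - 1)
    lo, hi = xbar - c * s / np.sqrt(n), xbar + c * s / np.sqrt(n)
    print(f"n={n:6d}  95% CI for the mean: [{lo:.2f}, {hi:.2f}]")

# 95% prediction interval for one future roll: the 2.5th-97.5th percentile range
# of individual outcomes, which for a fair die is essentially [1, 6].
print("95% prediction interval for a single roll ≈ [1, 6]")
```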
In many common settings, such as estimating the mean of a normal distribution with known variance,[22]confidence intervals coincide with credible intervals under non-informative priors. In such cases, common misconceptions about confidence intervals (e.g. interpreting them as probability statements about the parameter) may yield practically correct conclusions.
Welch[23] presented an example which clearly shows the difference between the theory of confidence intervals and other theories of interval estimation (including Fisher's fiducial intervals and objective Bayesian intervals). Robinson[24] called this example "[p]ossibly the best known counterexample for Neyman's version of confidence interval theory." To Welch, it showed the superiority of confidence interval theory; to critics of the theory, it shows a deficiency. Here we present a simplified version.
Suppose that $X_1, X_2$ are independent observations from a uniform$(\theta - 1/2,\ \theta + 1/2)$ distribution. Then the optimal 50% confidence procedure for θ is[25]
A fiducial or objective Bayesian argument can be used to derive the interval estimate
which is also a 50% confidence procedure. Welch showed that the first confidence procedure dominates the second, according to desiderata from confidence interval theory; for every $\theta_1 \neq \theta$, the probability that the first procedure contains $\theta_1$ is less than or equal to the probability that the second procedure contains $\theta_1$. The average width of the intervals from the first procedure is less than that of the second. Hence, the first procedure is preferred under classical confidence interval theory.
However, when $|X_1 - X_2| \geq 1/2$, intervals from the first procedure are guaranteed to contain the true value θ. Therefore, the nominal 50% confidence coefficient is unrelated to the uncertainty we should have that a specific interval contains the true value. The second procedure does not have this property.
Moreover, when the first procedure generates a very short interval, this indicates that $X_1, X_2$ are very close together and hence only offer the information contained in a single data point. Yet the first interval will exclude almost all reasonable values of the parameter due to its short width. The second procedure does not have this property.
The two counter-intuitive properties of the first procedure – 100% coverage when $X_1, X_2$ are far apart and almost 0% coverage when $X_1, X_2$ are close together – balance out to yield 50% coverage on average. However, despite the first procedure being optimal, its intervals offer neither an assessment of the precision of the estimate nor an assessment of the uncertainty one should have that the interval contains the true value.
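A simulation sketch of this behaviour. Since the two interval formulas are not reproduced above, the code assumes the forms commonly used for Welch's example: the first procedure has half-width |X1−X2|/2 when |X1−X2| < 1/2 and (1−|X1−X2|)/2 otherwise, and the second (fiducial/objective-Bayes) interval has half-width (1−|X1−X2|)/4, both centred at the sample mean.

```python
import numpy as np

rng = np.random.default_rng(2)
theta, trials = 0.0, 200_000
x = rng.uniform(theta - 0.5, theta + 0.5, size=(trials, 2))
xbar = x.mean(axis=1)
d = np.abs(x[:, 0] - x[:, 1])
half1 = np.where(d < 0.5, d / 2, (1 - d) / 2)   # assumed optimal 50% procedure
half2 = (1 - d) / 4                             # assumed fiducial/Bayes interval
cover1 = np.abs(xbar - theta) <= half1
cover2 = np.abs(xbar - theta) <= half2
print("overall coverage:", cover1.mean(), cover2.mean())                  # both ~0.50
far = d >= 0.5
print("procedure 1 coverage when |X1-X2| >= 1/2:", cover1[far].mean())    # 1.0
print("procedure 2 coverage when |X1-X2| >= 1/2:", cover2[far].mean())    # ~0.50
```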
This example is used to argue against naïve interpretations of confidence intervals. If a confidence procedure is asserted to have properties beyond that of the nominal coverage (such as relation to precision, or a relationship with Bayesian inference), those properties must be proved; they do not follow from the fact that a procedure is a confidence procedure.
Steiger[26] suggested a number of confidence procedures for common effect size measures in ANOVA. Morey et al.[19] point out that several of these confidence procedures, including the one for ω², have the property that as the F statistic becomes increasingly small—indicating misfit with all possible values of ω²—the confidence interval shrinks and can even contain only the single value ω² = 0; that is, the CI is infinitesimally narrow (this occurs when $p \geq 1 - \alpha/2$ for a $100(1-\alpha)\%$ CI).
This behavior is consistent with the relationship between the confidence procedure and significance testing: as F becomes so small that the group means are much closer together than we would expect by chance, a significance test might indicate rejection for most or all values of ω². Hence the interval will be very narrow or even empty (or, by a convention suggested by Steiger, containing only 0). However, this does not indicate that the estimate of ω² is very precise. In a sense, it indicates the opposite: that the trustworthiness of the results themselves may be in doubt. This is contrary to the common interpretation of confidence intervals that they reveal the precision of the estimate.
|
https://en.wikipedia.org/wiki/Confidence_interval
|
The earliest recorded systems of weights and measures originate in the 3rd or 4th millennium BC.[1] Even the very earliest civilizations needed measurement for purposes of agriculture, construction and trade. Early standard units might only have applied to a single community or small region, with every area developing its own standards for lengths, areas, volumes and masses. Often such systems were closely tied to one field of use, so that volume measures used, for example, for dry grains were unrelated to those for liquids, with neither bearing any particular relationship to units of length used for measuring cloth or land. With the development of manufacturing technologies, and the growing importance of trade between communities and ultimately across the Earth, standardized weights and measures became critical. Starting in the 18th century, modernized, simplified and uniform systems of weights and measures were developed, with the fundamental units defined by ever more precise methods in the science of metrology. The discovery and application of electricity was one factor motivating the development of standardized internationally applicable units.
The comparison of the dimensions of buildings with the descriptions of contemporary writers is another source of information. An interesting example of this is the comparison of the dimensions of the Greek Parthenon with the description given by Plutarch, from which a fairly accurate idea of the size of the Attic foot is obtained. Because of the comparative volume of artifacts and documentation, much more is known today about the state-sanctioned measures of large, advanced societies than about those of smaller societies or about the informal measures that often coexisted with official ones. In some cases, there are only plausible theories, and different interpretations can be matched to the evidence.
It is possible to group official measurement systems for large societies into historical systems that are relatively stable over time, including: the Babylonian system, the Egyptian system, the Phileterian system of thePtolemaicage, the Olympic system of Greece, the Roman system, theBritish system, and themetric system.
The earliest known uniform systems of weights and measures seem all to have been created at some time in the 4th and 3rd millennia BC among the ancient peoples of Egypt, Mesopotamia and the Indus Valley, and perhaps also Elam (in Iran) as well.
EarlyBabylonianandEgyptianrecords and theHebrew Bibleindicate that length was first measured with the forearm, hand, or finger and that time was measured by the periods of the sun, moon, and other heavenly bodies. When it was necessary to compare the capacities of containers such asgourdsorclayor metal vessels, they were filled with plant seeds which were then counted to measure thevolumes. When means for weighing were invented, seeds and stones served as standards. For instance, thecarat, still used as a unit for gems, was derived from thecarobseed.
Before the establishment of the decimal metric system in France during the French Revolution in the late 18th century,[2] many units of length were based on parts of the human body.[3][4] The Nippur cubit was one of the oldest known units of length. The oldest known metal standard for length corresponds to this Sumerian unit and dates from 2650 BCE.[5][6] This copper bar was discovered in Nippur, on the banks of the Euphrates, and is kept in the Istanbul Archaeological Museum. Archaeologists consider that this 51.85 centimetres long unit was the origin of the Roman foot. Indeed, the Egyptians divided the Sumerian cubit into 28 fingers, and 16 of these fingers gave a Roman foot of 29.633 cm.[6][4]
Thegrainwas the earliestunit of massand is the smallest unit in theapothecary,avoirdupois, Tower, andtroysystems. The early unit was a grain of wheat or barleycorn used to weigh the precious metals silver and gold. Larger units preserved in stone standards were developed that were used as both units of mass and of monetary currency. Thepoundwas derived from themina (unit)used by ancient civilizations. A smaller unit was theshekel, and a larger unit was thetalent. The magnitude of these units varied from place to place. The Babylonians and Sumerians had a system in which there were 60 shekels in a mina and 60 minas in a talent. The Roman talent consisted of 100 libra (pound) which were smaller in magnitude than the mina. The troy pound (~373.2 g) used in England and the United States for monetary purposes, like the Roman pound, was divided into 12 ounces, but the Roman uncia (ounce) was smaller. The carat is a unit for measuring gemstones that had its origin in the carob seed, which later was standardized at 1/144 ounce and then 0.2 gram.
Goods of commerce were originally traded by number or volume. When weighing of goods began, units of mass based on a volume of grain or water were developed. The diverse magnitudes of units having the same name, which still appear today in our dry and liquid measures, could have arisen from the various commodities traded. The larger avoirdupois pound for goods of commerce might have been based on volume of water which has a higherbulk densitythan grain.
The stone, quarter, hundredweight, and ton were larger units of mass used in Britain. Today only the stone continues in customary use for measuring personal body weight. The present stone is 14 pounds (~6.35 kg), but an earlier unit appears to have been 16 pounds (~7.25 kg). The other units were multiples of 2, 8, and 160 times the stone, or 28, 112, and 2240 pounds (~12.7 kg, 50.8 kg, 1016 kg), respectively. The hundredweight was approximately equal to two talents. The "long ton" is equal to 2240 pounds (1016.047 kg), the "short ton" is equal to 2000 pounds (907.18474 kg), and the tonne (or metric ton) (t) is equal to 1000 kg (or 1 megagram).
The division of the circle into 360 degrees and the day into hours, minutes, and seconds can be traced to the Babylonians who had asexagesimalsystem of numbers. The 360 degrees may have been related to ayear of 360 days. Many othersystems of measurementdivided the day differently—counting hours,decimal time, etc. Othercalendarsdivided the year differently.
Decimal numbers are an essential part of the metric system, with only one base unit and multiples created on the decimal base, the figures remain the same. This simplifies calculations. Although theIndiansused decimal numbers for mathematical computations, it wasSimon Stevinwho in 1585 first advocated the use of decimal numbers for everyday purposes in his bookletDe Thiende(old Dutch for 'the tenth'). He also declared that it would only be a matter of time before decimal numbers were used for currencies and measurements.[7]His notationfor decimal fractions was clumsy, but this was overcome with the introduction of the decimal point, generally attributed toBartholomaeus Pitiscuswho used this notation in his trigonometrical tables (1595).[8]
In 1670,Gabriel Moutonpublished a proposal that was in essence similar toJohn Wilkins' proposal for a universal measure, except that his base unit of length would have been 1/1000 of aminute of arc(about 1.852 m) of geographical latitude. He proposed calling this unit the virga. Rather than using different names for each unit of length, he proposed a series of names that had prefixes, rather like the prefixes found in SI.[9]
In 1790,Thomas Jeffersonsubmitted areportto theUnited States Congressin which he proposed the adoption of a decimal system of coinage and of weights and measures. He proposed calling his base unit of length a "foot" which he suggested should be either3⁄10or1⁄3of the length of a pendulum that had a period of one second—that is3⁄10or1⁄3of the "standard" proposed by John Wilkins over a century previously. This would have equated to 11.755 English inches (29.8 cm) or 13.06 English inches (33.1 cm). Like Wilkins, the names that he proposed for multiples and subunits of his base units of measure were the names of units of measure that were in use at the time.[10]The great interest ingeodesyduring this era, and the measurement system ideas that developed, influenced how the continental US wassurveyedand parceled. The story of how Jefferson's full vision for the new measurement system came close to displacing theGunter chainand the traditionalacre, but ended up not doing so, is explored inAndro Linklater'sMeasuring America.[11]
Themetric systemwas first described in 1668 and officially adopted by France in 1799. Over the 19th and 20th centuries, it became the dominant system worldwide, although several countries, including the United States, China, and the United Kingdom continue to use their customary units.[12]Among the numerous customary systems, many have been adapted to become an integer multiple of a related metric unit: TheScandinavian mileis now defined as 10 km, theChinese jinis now defined as 0.5 kg, and theDutch onsis now defined as 100 g.
This article incorporatespublic domain materialfromSpecifications, Tolerances, and Other Technical Requirements for Weighing (Handbook 44 -2018).National Institute of Standards and Technology.
|
https://en.wikipedia.org/wiki/History_of_measurement
|
In measurements, the measurement obtained can suffer from two types of uncertainty.[1] The first is the random uncertainty, which is due to noise in the process and the measurement. The second contribution is the systematic uncertainty, which may be present in the measuring instrument. Systematic errors, if detected, can be easily compensated for, as they are usually constant throughout the measurement process as long as the measuring instrument and the measurement process are not changed. However, while using the instrument it cannot be known accurately whether a systematic error is present and, if so, how large it is. Hence, systematic uncertainty can be considered a contribution of a fuzzy nature.
This systematic error can be approximately modeled based on our past data about the measuring instrument and the process.
Statistical methods can be used to calculate the total uncertainty from both systematic and random contributions in a measurement.[2][3][4] However, their computational complexity is very high, and hence they are not always desirable.
L.A.Zadehintroduced the concepts of fuzzy variables and fuzzy sets.[5][6]Fuzzy variables are based on the theory of possibility and hence are possibility distributions. This makes them suitable to handle any type of uncertainty, i.e., both systematic and random contributions to the total uncertainty.[7][8][9]
Random-fuzzy variable (RFV)is atype 2 fuzzy variable,[10]defined using the mathematical possibility theory,[5][6]used to represent all the information associated with a measurement result. It has an internal possibility distribution and an external possibility distribution, called membership functions. The internal distribution represents the uncertainty contribution due to the systematic uncertainty, while the bounds of the RFV are determined by the random contributions. The external distribution gives the uncertainty bounds from all contributions.
A Random-fuzzy Variable (RFV) is defined as a type 2 fuzzy variable which satisfies the following conditions:[11]
An RFV can be seen in the figure. The external membership function is the distribution in blue and the internal membership function is the distribution in red. Both the membership functions are possibility distributions. Both the internal and external membership functions have a unitary value of possibility only in the rectangular part of the RFV. So, all three conditions have been satisfied.
If there are only systematic errors in the measurement, then the RFV simply becomes afuzzy variablewhich consists of just the internal membership function. Similarly, if there is no systematic error, then the RFV becomes afuzzy variablewith just the random contributions and therefore, is just the possibility distribution of the random contributions.
A Random-fuzzy variable can be constructed using an internal possibility distribution (rinternal) and a random possibility distribution (rrandom).
rrandomis the possibility distribution of the random contributions to the uncertainty. Any measurement instrument or process suffers fromrandom errorcontributions due to intrinsic noise or other effects.
This is completely random in nature and is a normal probability distribution when several random contributions are combined according to theCentral limit theorem.[12]
But, there can also be random contributions from other probability distributions such as auniform distribution,gamma distributionand so on.
The probability distribution can be modeled from the measurement data. Then, the probability distribution can be used to model an equivalent possibility distribution using the maximally specific probability-possibility transformation.[13]
Some common probability distributions and the corresponding possibility distributions can be seen in the figures.
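As an illustrative sketch only (not the specific procedure of the cited reference): for a symmetric unimodal probability distribution, the maximally specific probability–possibility transformation reduces to π(x) = 2·min(F(x), 1 − F(x)), where F is the cumulative distribution function. A minimal Python version for an assumed normal distribution might look like this:

```python
import numpy as np
from scipy.stats import norm

def possibility_from_normal(x, mu, sigma):
    """Maximally specific probability-possibility transformation for a
    symmetric unimodal pdf: pi(x) = 2 * min(F(x), 1 - F(x))."""
    F = norm.cdf(x, loc=mu, scale=sigma)
    return 2 * np.minimum(F, 1 - F)

x = np.linspace(-4, 4, 9)
print(possibility_from_normal(x, mu=0.0, sigma=1.0))  # peaks at 1 at the mode
```

The resulting possibility distribution equals 1 at the mode and decreases monotonically on either side, as expected for a possibility distribution derived from a unimodal probability density.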
rinternalis the internal distribution in the RFV which is the possibility distribution of the systematic contribution to the total uncertainty. This distribution can be built based on the information that is available about the measuring instrument and the process.
The largest possible distribution is the uniform or rectangular possibility distribution. This means that every value in the specified interval is equally possible. This actually represents the state of total ignorance according to thetheory of evidence[14]which means it represents a scenario in which there is maximum lack of information.
This distribution is used for the systematic error when we have absolutely no idea about the systematic error except that it belongs to a particular interval of values. This is quite common in measurements.
However, in certain cases it may be known that some values have a higher or lower degree of belief than other values. In this case, depending on the degrees of belief for the values, an appropriate possibility distribution can be constructed.
After modeling the random and internal possibility distribution, the external membership function,rexternal, of the RFV can be constructed by using the following equation:[15]
wherex∗{\displaystyle x^{*}}is the mode ofrrandom{\displaystyle r_{\textit {random}}}, which is the peak in the membership function ofrrandom{\displaystyle r_{random}}andTminis the minimumtriangular norm.[16]
RFV can also be built from the internal and random distributions by considering theα-cuts of the two possibility distributions(PDs).
Anα-cut of a fuzzy variable F can be defined as[17][18]
{\displaystyle F^{\alpha }=\{a:\mu _{\rm {F}}(a)\geq \alpha \},\qquad 0\leq \alpha \leq 1.}
So, essentially, anα-cut is the set of values for which the value of the membership functionμF(a){\displaystyle \mu _{\rm {F}}(a)}of the fuzzy variable is greater than or equal toα. This gives the upper and lower bounds of the fuzzy variable F for eachα-cut.
Theα-cut of an RFV, however, has 4 specific bounds and is given byRFVα=[Xaα,Xbα,Xcα,Xdα]{\displaystyle RFV^{\alpha }=[X_{a}^{\alpha },X_{b}^{\alpha },X_{c}^{\alpha },X_{d}^{\alpha }]}.[11]Xaα{\displaystyle X_{a}^{\alpha }}andXdα{\displaystyle X_{d}^{\alpha }}are the lower and upper bounds respectively of the external membership function(rexternal) which is a fuzzy variable on its own.Xbα{\displaystyle X_{b}^{\alpha }}andXcα{\displaystyle X_{c}^{\alpha }}are the lower and upper bounds respectively of the internal membership function(rinternal) which is a fuzzy variable on its own.
To build the RFV, let us consider theα-cuts of the two PDs i.e.,rrandomandrinternalfor the same value ofα. This gives the lower and upper bounds for the twoα-cuts. Let them be[XLRα,XURα]{\displaystyle [X_{LR}^{\alpha },X_{UR}^{\alpha }]}and[XLIα,XUIα]{\displaystyle [X_{LI}^{\alpha },X_{UI}^{\alpha }]}for the random and internal distributions respectively.[XLRα,XURα]{\displaystyle [X_{LR}^{\alpha },X_{UR}^{\alpha }]}can be again divided into two sub-intervals[XLRα,x∗]{\displaystyle [X_{LR}^{\alpha },x^{*}]}and[x∗,XURα]{\displaystyle [x^{*},X_{UR}^{\alpha }]}wherex∗{\displaystyle x^{*}}is the mode of the fuzzy variable. Then, theα-cut for the RFV for the same value ofα,RFVα=[Xaα,Xbα,Xcα,Xdα]{\displaystyle RFV^{\alpha }=[X_{a}^{\alpha },X_{b}^{\alpha },X_{c}^{\alpha },X_{d}^{\alpha }]}can be defined by[11]
Using the above equations, theα-cuts are calculated for every value ofαwhich gives us the final plot of the RFV.
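The defining equations for the four bounds are not reproduced above. As a rough sketch only, under the assumption that the mode-centred halves of the random α-cut are appended to either side of the internal α-cut (the helper names and example distributions below are hypothetical), the construction could look like this:

```python
import numpy as np

def alpha_cut(x, poss, alpha):
    """Return the (lower, upper) bounds of the alpha-cut of a possibility
    distribution sampled on a grid x."""
    inside = x[poss >= alpha]
    return inside.min(), inside.max()

def rfv_alpha_cut(x, r_internal, r_random, alpha):
    """Assumed construction: the random alpha-cut, re-centred on its mode x*,
    widens the internal alpha-cut on each side, giving [X_a, X_b, X_c, X_d]."""
    x_star = x[np.argmax(r_random)]        # mode of the random possibility distribution
    lo_r, hi_r = alpha_cut(x, r_random, alpha)
    lo_i, hi_i = alpha_cut(x, r_internal, alpha)
    X_a = lo_i - (x_star - lo_r)           # external lower bound
    X_d = hi_i + (hi_r - x_star)           # external upper bound
    return X_a, lo_i, hi_i, X_d

# toy example: triangular random PD centred on 0, rectangular internal PD on [1, 2]
x = np.linspace(-3, 5, 2001)
r_random = np.clip(1 - np.abs(x), 0, None)
r_internal = ((x >= 1) & (x <= 2)).astype(float)
print(rfv_alpha_cut(x, r_internal, r_random, alpha=0.5))  # ~(0.5, 1.0, 2.0, 2.5)
```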
A Random-Fuzzy variable is capable of giving a complete picture of the random and systematic contributions to the total uncertainty from theα-cuts for any confidence level as the confidence level is nothing but1-α.[17][18]
An example for the construction of the corresponding external membership function(rexternal) and the RFV from a random PD and an internal PD can be seen in the following figure.
|
https://en.wikipedia.org/wiki/Random-fuzzy_variable
|
Repeatabilityortest–retest reliability[1]is the closeness of the agreement between the results of successivemeasurementsof the samemeasure, when carried out under the same conditions of measurement.[2]In other words, the measurements are taken by a single person orinstrumenton the same item, under the same conditions, and in a short period of time. A less-than-perfect test–retest reliability causestest–retest variability. Suchvariabilitycan be caused by, for example,intra-individual variabilityandinter-observer variability. A measurement may be said to berepeatablewhen this variation is smaller than a predetermined acceptance criterion.
Test–retest variability is practically used, for example, inmedical monitoringof conditions. In these situations, there is often a predetermined "critical difference", and for differences in monitored values that are smaller than this critical difference, the possibility of variability as a sole cause of the difference may be considered in addition to, for example, changes in diseases or treatments.[3]
The following conditions need to be fulfilled in the establishment of repeatability:[2][4]
Repeatability methods were developed by Bland and Altman (1986).[5]
If thecorrelationbetween separate administrations of the test is high (e.g. 0.7 or higher, as in aCronbach's alphainternal-consistency table[6]), then it has good test–retest reliability.
The repeatability coefficient is a precision measure which represents the value below which theabsolute differencebetween two repeated test results may be expected to lie with a probability of 95%.[citation needed]
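One common convention, assuming two repeated measurements per subject and no systematic difference between the two occasions, is to compute the coefficient as 1.96·√2 ≈ 2.77 times the within-subject standard deviation. The following sketch (with made-up numbers) illustrates that convention:

```python
import numpy as np

def repeatability_coefficient(first, second):
    """Repeatability coefficient: the value below which the absolute difference
    between two repeated measurements is expected to lie with ~95% probability,
    computed as 1.96*sqrt(2)*s_w, where s_w is the within-subject standard
    deviation estimated from paired repeats (assuming no systematic bias)."""
    first, second = np.asarray(first, float), np.asarray(second, float)
    diffs = first - second
    s_w = np.sqrt(np.mean(diffs**2) / 2.0)   # within-subject SD from paired differences
    return 1.96 * np.sqrt(2.0) * s_w

print(repeatability_coefficient([10.1, 9.8, 10.3, 10.0], [10.0, 10.0, 10.1, 9.9]))
```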
Thestandard deviationunder repeatability conditions is part ofprecisionandaccuracy.[citation needed]
An attribute agreement analysis is designed to simultaneously evaluate the impact of repeatability andreproducibilityon accuracy. It allows the analyst to examine the responses from multiple reviewers as they look at several scenarios multiple times. It produces statistics that evaluate the ability of the appraisers to agree with themselves (repeatability), with each other (reproducibility), and with a known master or correct value (overall accuracy) for each characteristic – over and over again.[7]
Because the same test is administered twice and every test is parallel with itself, differences between scores on the test and scores on the retest should be due solely to measurement error. This sort of argument is quite probably true for many physical measurements. However, this argument is often inappropriate for psychological measurement, because it is often impossible to consider the second administration of a test a parallel measure to the first.[8]
The second administration of a psychological test might yield systematically different scores than the first administration due to the following reasons:[8]
|
https://en.wikipedia.org/wiki/Repeatability
|
Instatisticsandeconometrics,set identification(orpartial identification) extends the concept ofidentifiability(or "point identification") instatistical modelsto environments where the model and the distribution of observable variables are not sufficient to determine a unique value for the modelparameters, but instead constrain the parameters to lie in astrict subsetof the parameter space. Statistical models that are set (or partially) identified arise in a variety of settings ineconomics, includinggame theoryand theRubin causal model. Unlike approaches that deliver point-identification of the model parameters, methods from the literature on partial identification are used to obtain set estimates that are valid under weaker modelling assumptions.[1]
Early works containing the main ideas of set identification includedFrisch (1934)andMarschak & Andrews (1944). However, the methods were significantly developed and promoted byCharles Manski, beginning withManski (1989)andManski (1990).
Partial identification continues to be a major theme in research in econometrics.Powell (2017)named partial identification as an example of theoretical progress in the econometrics literature, andBonhomme & Shaikh (2017)list partial identification as “one of the most prominent recent themes in econometrics.”
LetU∈U⊆Rdu{\displaystyle U\in {\mathcal {U}}\subseteq \mathbb {R} ^{d_{u}}}denote a vector of latent variables, letZ∈Z⊆Rdz{\displaystyle Z\in {\mathcal {Z}}\subseteq \mathbb {R} ^{d_{z}}}denote a vector of observed (possibly endogenous) explanatory variables, and letY∈Y⊆Rdy{\textstyle Y\in {\mathcal {Y}}\subseteq \mathbb {R} ^{d_{y}}}denote a vector of observed endogenous outcome variables. Astructureis a pairs=(h,PU∣Z){\displaystyle s=(h,{\mathcal {P}}_{U\mid Z})}, wherePU∣Z{\displaystyle {\mathcal {P}}_{U\mid Z}}represents a collection of conditional distributions, andh{\displaystyle h}is a structural function such thath(y,z,u)=0{\displaystyle h(y,z,u)=0}for all realizations(y,z,u){\displaystyle (y,z,u)}of the random vectors(Y,Z,U){\displaystyle (Y,Z,U)}. Amodelis a collection of admissible (i.e. possible) structuress{\displaystyle s}.[2][3]
LetPY∣Z(s){\displaystyle {\mathcal {P}}_{Y\mid Z}(s)}denote the collection of conditional distributions ofY∣Z{\displaystyle Y\mid Z}consistent with the structures{\displaystyle s}. The admissible structuress{\displaystyle s}ands′{\displaystyle s'}are said to beobservationally equivalentifPY∣Z(s)=PY∣Z(s′){\displaystyle {\mathcal {P}}_{Y\mid Z}(s)={\mathcal {P}}_{Y\mid Z}(s')}.[2][3]Lets⋆{\displaystyle s^{\star }}denote the true (i.e. data-generating) structure. The model is said to be point-identified if for everys≠s⋆{\displaystyle s\neq s^{\star }}we havePY∣Z(s)≠PY∣Z(s⋆){\displaystyle {\mathcal {P}}_{Y\mid Z}(s)\neq {\mathcal {P}}_{Y\mid Z}(s^{\star })}. More generally, the model is said to beset(orpartially)identifiedif there exists at least one admissibles≠s⋆{\displaystyle s\neq s^{\star }}such thatPY∣Z(s)≠PY∣Z(s⋆){\displaystyle {\mathcal {P}}_{Y\mid Z}(s)\neq {\mathcal {P}}_{Y\mid Z}(s^{\star })}. Theidentified setof structures is the collection of admissible structures that are observationally equivalent tos⋆{\displaystyle s^{\star }}.
In most cases the definition can be substantially simplified. In particular, whenU{\displaystyle U}is independent ofZ{\displaystyle Z}and has a known (up to some finite-dimensional parameter) distribution, and whenh{\displaystyle h}is known up to some finite-dimensional vector of parameters, each structures{\displaystyle s}can be characterized by a finite-dimensional parameter vectorθ∈Θ⊂Rdθ{\displaystyle \theta \in \Theta \subset \mathbb {R} ^{d_{\theta }}}. Ifθ0{\displaystyle \theta _{0}}denotes the true (i.e. data-generating) vector of parameters, then theidentified set, often denoted asΘI⊂Θ{\displaystyle \Theta _{I}\subset \Theta }, is the set of parameter values that are observationally equivalent toθ0{\displaystyle \theta _{0}}.[4]
This example is due toTamer (2010). Suppose there are twobinary random variables,YandZ. The econometrician is interested inP(Y=1){\displaystyle \mathrm {P} (Y=1)}. There is amissing dataproblem, however:Ycan only be observed ifZ=1{\displaystyle Z=1}.
By thelaw of total probability,
{\displaystyle \mathrm {P} (Y=1)=\mathrm {P} (Y=1\mid Z=1)\mathrm {P} (Z=1)+\mathrm {P} (Y=1\mid Z=0)\mathrm {P} (Z=0).}
The only unknown object isP(Y=1∣Z=0){\displaystyle \mathrm {P} (Y=1\mid Z=0)}, which is constrained to lie between 0 and 1. Therefore, the identified set is
{\displaystyle \Theta _{I}=\left[\mathrm {P} (Y=1\mid Z=1)\mathrm {P} (Z=1),\;\mathrm {P} (Y=1\mid Z=1)\mathrm {P} (Z=1)+\mathrm {P} (Z=0)\right].}
Given the missing data constraint, the econometrician can only say thatP(Y=1)∈ΘI{\displaystyle \mathrm {P} (Y=1)\in \Theta _{I}}. This makes use of all available information.
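A small sketch of these worst-case bounds (the function name and toy data are illustrative only): the lower bound assumes every unobserved Y equals 0, and the upper bound assumes every unobserved Y equals 1.

```python
import numpy as np

def manski_bounds(y, z):
    """Worst-case bounds on P(Y=1) when Y is only observed where Z=1.
    Lower bound: unobserved Y all equal 0; upper bound: unobserved Y all equal 1."""
    y, z = np.asarray(y), np.asarray(z)
    p_z1 = z.mean()
    p_y1_given_z1 = y[z == 1].mean()
    lower = p_y1_given_z1 * p_z1
    upper = lower + (1 - p_z1)
    return lower, upper

# toy data: Y is never used wherever Z == 0
z = np.array([1, 1, 1, 0, 1, 0, 1, 0])
y = np.array([1, 0, 1, 0, 1, 0, 0, 0])
print(manski_bounds(y, z))   # (0.375, 0.75)
```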
Set estimationcannot rely on the usual tools for statistical inference developed forpoint estimation. A literature in statistics and econometrics studies methods forstatistical inferencein the context of set-identified models, focusing on constructingconfidence intervalsorconfidence regionswith appropriate properties. For example, a method developed byChernozhukov, Hong & Tamer (2007)constructs confidence regions that cover the identified set with a given probability.
|
https://en.wikipedia.org/wiki/Set_identification
|
Uncertainty quantification(UQ) is the science of quantitative characterization and estimation ofuncertaintiesin both computational and real world applications. It tries to determine how likely certain outcomes are if some aspects of the system are not exactly known. An example would be to predict the acceleration of a human body in a head-on crash with another car: even if the speed was exactly known, small differences in the manufacturing of individual cars, how tightly every bolt has been tightened, etc., will lead to different results that can only be predicted in a statistical sense.
Many problems in the natural sciences and engineering are also rife with sources of uncertainty.Computer experimentsoncomputer simulationsare the most common approach to study problems in uncertainty quantification.[1][2][3][4][5][6]
Uncertainty can entermathematical modelsand experimental measurements in various contexts. One way to categorize the sources of uncertainty is to consider:[7]
Uncertainty is sometimes classified into two categories, aleatoric (statistical) and epistemic (systematic) uncertainty,[8][9]prominently seen in medical applications.[10]
In real life applications, both kinds of uncertainties are present. Uncertainty quantification intends to explicitly express both types of uncertainty separately. The quantification for the aleatoric uncertainties can be relatively straightforward, where traditional(frequentist) probabilityis the most basic form. Techniques such as theMonte Carlo methodare frequently used. A probability distribution can be represented by itsmoments(in theGaussiancase, themeanandcovariancesuffice, although, in general, even knowledge of all moments to arbitrarily high order still does not specify the distribution function uniquely), or more recently, by techniques such asKarhunen–Loèveandpolynomial chaosexpansions. To evaluate epistemic uncertainties, the efforts are made to understand the (lack of) knowledge of the system, process or mechanism. Epistemic uncertainty is generally understood through the lens ofBayesian probability, where probabilities are interpreted as indicating how certain a rational person could be regarding a specific claim.
In mathematics, uncertainty is often characterized in terms of aprobability distribution. From that perspective, epistemic uncertainty means not being certain what the relevant probability distribution is, and aleatoric uncertainty means not being certain what arandom sampledrawn from a probability distribution will be.
There are two major types of problems in uncertainty quantification: one is theforwardpropagation of uncertainty (where the various sources of uncertainty are propagated through the model to predict the overall uncertainty in the system response) and the other is theinverseassessment of model uncertainty and parameter uncertainty (where the model parameters are calibrated simultaneously using test data). There has been a proliferation of research on the former problem and a majority of uncertainty analysis techniques were developed for it. On the other hand, the latter problem is drawing increasing attention in the engineering design community, since uncertainty quantification of a model and the subsequent predictions of the true system response(s) are of great interest in designing robust systems.
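As a minimal sketch of the forward problem using the Monte Carlo method mentioned earlier (the model and the input distributions here are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

def model(x1, x2):
    # hypothetical system response
    return x1**2 + np.sin(x2)

# aleatoric input uncertainty described by assumed probability distributions
x1 = rng.normal(loc=1.0, scale=0.1, size=100_000)
x2 = rng.uniform(low=0.0, high=0.5, size=100_000)

y = model(x1, x2)                      # propagate the samples through the model
print(y.mean(), y.std())               # moments of the output
print(np.percentile(y, [2.5, 97.5]))   # a 95% uncertainty interval on the response
```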
Uncertainty propagation is the quantification of uncertainties in system output(s) propagated from uncertain inputs. It focuses on the influence on the outputs from theparametric variabilitylisted in the sources of uncertainty. The targets of uncertainty propagation analysis can be:
Given some experimental measurements of a system and some computer simulation results from its mathematical model, inverse uncertainty quantification estimates the discrepancy between the experiment and the mathematical model (which is calledbias correction), and estimates the values of unknown parameters in the model if there are any (which is calledparameter calibrationor simplycalibration). Generally this is a much more difficult problem than forward uncertainty propagation; however it is of great importance since it is typically implemented in a model updating process. There are several scenarios in inverse uncertainty quantification:
Bias correction quantifies themodel inadequacy, i.e. the discrepancy between the experiment and the mathematical model. The general model updating formula for bias correction is:
{\displaystyle y^{e}(\mathbf {x} )=y^{m}(\mathbf {x} )+\delta (\mathbf {x} )+\varepsilon }
whereye(x){\displaystyle y^{e}(\mathbf {x} )}denotes the experimental measurements as a function of several input variablesx{\displaystyle \mathbf {x} },ym(x){\displaystyle y^{m}(\mathbf {x} )}denotes the computer model (mathematical model) response,δ(x){\displaystyle \delta (\mathbf {x} )}denotes the additive discrepancy function (aka bias function), andε{\displaystyle \varepsilon }denotes the experimental uncertainty. The objective is to estimate the discrepancy functionδ(x){\displaystyle \delta (\mathbf {x} )}, and as a by-product, the resulting updated model isym(x)+δ(x){\displaystyle y^{m}(\mathbf {x} )+\delta (\mathbf {x} )}. A prediction confidence interval is provided with the updated model as the quantification of the uncertainty.
Parameter calibration estimates the values of one or more unknown parameters in a mathematical model. The general model updating formulation for calibration is:
{\displaystyle y^{e}(\mathbf {x} )=y^{m}(\mathbf {x} ,{\boldsymbol {\theta }}^{*})+\varepsilon }
whereym(x,θ){\displaystyle y^{m}(\mathbf {x} ,{\boldsymbol {\theta }})}denotes the computer model response that depends on several unknown model parametersθ{\displaystyle {\boldsymbol {\theta }}}, andθ∗{\displaystyle {\boldsymbol {\theta }}^{*}}denotes the true values of the unknown parameters in the course of experiments. The objective is to either estimateθ∗{\displaystyle {\boldsymbol {\theta }}^{*}}, or to come up with a probability distribution ofθ∗{\displaystyle {\boldsymbol {\theta }}^{*}}that encompasses the best knowledge of the true parameter values.
It considers an inaccurate model with one or more unknown parameters, and its model updating formulation combines the two together:
{\displaystyle y^{e}(\mathbf {x} )=y^{m}(\mathbf {x} ,{\boldsymbol {\theta }}^{*})+\delta (\mathbf {x} )+\varepsilon }
It is the most comprehensive model updating formulation that includes all possible sources of uncertainty, and it requires the most effort to solve.
Much research has been done to solve uncertainty quantification problems, though a majority of them deal with uncertainty propagation. During the past one to two decades, a number of approaches for inverse uncertainty quantification problems have also been developed and have proved to be useful for most small- to medium-scale problems.
Existing uncertainty propagation approaches include probabilistic approaches and non-probabilistic approaches. There are basically six categories of probabilistic approaches for uncertainty propagation:[11]
For non-probabilistic approaches,interval analysis,[15]Fuzzy theory,Possibility theoryand evidence theory are among the most widely used.
The probabilistic approach is considered as the most rigorous approach to uncertainty analysis in engineering design due to its consistency with the theory ofdecision analysis. Its cornerstone is the calculation of probability density functions for sampling statistics.[16]This can be performed rigorously for random variables that are obtainable as transformations of Gaussian variables, leading to exact confidence intervals.
Inregression analysisandleast squaresproblems, thestandard errorofparameter estimatesis readily available, which can be expanded into aconfidence interval.
Several methodologies for inverse uncertainty quantification exist under theBayesian framework. The most complicated direction is to aim at solving problems with both bias correction and parameter calibration. The challenges of such problems include not only the influences from model inadequacy and parameter uncertainty, but also the lack of data from both computer simulations and experiments. A common situation is that the input settings are not the same over experiments and simulations. Another common situation is that parameters derived from experiments are input to simulations. For computationally expensive simulations, asurrogate model, e.g. aGaussian processor aPolynomial Chaosexpansion, is then often necessary, defining aninverse problemfor finding the surrogate model that best approximates the simulations.[4]
An approach to inverse uncertainty quantification is the modular Bayesian approach.[7][17]The modular Bayesian approach derives its name from its four-module procedure. Apart from the current available data, aprior distributionof unknown parameters should be assigned.
To address the issue from lack of simulation results, the computer model is replaced with aGaussian process(GP) model
where
d{\displaystyle d}is the dimension of input variables, andr{\displaystyle r}is the dimension of unknown parameters. Whilehm(⋅){\displaystyle \mathbf {h} ^{m}(\cdot )}is pre-defined,{βm,σm,ωkm,k=1,…,d+r}{\displaystyle \left\{{\boldsymbol {\beta }}^{m},\sigma _{m},\omega _{k}^{m},k=1,\ldots ,d+r\right\}}, known ashyperparametersof the GP model, need to be estimated viamaximum likelihood estimation (MLE). This module can be considered as a generalizedkrigingmethod.
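The following is only a rough sketch of the surrogate-modelling idea, using scikit-learn's GaussianProcessRegressor as a stand-in; it is not the specific GP parameterization or estimation procedure described above, and the "computer model" shown is a cheap hypothetical placeholder for an expensive simulation.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(1)

def computer_model(x, theta):
    # hypothetical stand-in for an expensive simulation y^m(x, theta)
    return theta * np.sin(3 * x) + x

# a small design over (x, theta): the simulation budget is assumed limited
X_design = rng.uniform(0, 1, size=(30, 2))            # columns: x, theta
y_sim = computer_model(X_design[:, 0], X_design[:, 1])

# GP surrogate replacing the computer model
kernel = ConstantKernel(1.0) * RBF(length_scale=[0.2, 0.2])
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X_design, y_sim)

mean, std = gp.predict(np.array([[0.5, 0.7]]), return_std=True)
print(mean, std)   # surrogate prediction and its own uncertainty
```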
Similarly with the first module, the discrepancy function is replaced with a GP model
where
Together with the prior distribution of unknown parameters, and data from both computer models and experiments, one can derive the maximum likelihood estimates for{βδ,σδ,ωkδ,k=1,…,d}{\displaystyle \left\{{\boldsymbol {\beta }}^{\delta },\sigma _{\delta },\omega _{k}^{\delta },k=1,\ldots ,d\right\}}. At the same time,βm{\displaystyle {\boldsymbol {\beta }}^{m}}from Module 1 gets updated as well.
Bayes' theoremis applied to calculate theposterior distributionof the unknown parameters:
whereφ{\displaystyle {\boldsymbol {\varphi }}}includes all the fixed hyperparameters in previous modules.
The fully Bayesian approach requires that priors be assigned not only for the unknown parametersθ{\displaystyle {\boldsymbol {\theta }}}but also for the other hyperparametersφ{\displaystyle {\boldsymbol {\varphi }}}. It involves the following steps:[18]
However, the approach has significant drawbacks:
The fully Bayesian approach requires a huge amount of calculations and may not yet be practical for dealing with the most complicated modelling situations.[18]
The theories and methodologies for uncertainty propagation are much better established, compared with inverse uncertainty quantification. For the latter, several difficulties remain unsolved:
|
https://en.wikipedia.org/wiki/Uncertainty_quantification
|
Acognitive biasis a systematic pattern of deviation fromnormor rationality in judgment.[1][2]Individuals create their own "subjective reality" from their perception of the input. An individual's construction of reality, not theobjectiveinput, may dictate theirbehaviorin the world. Thus, cognitive biases may sometimes lead to perceptual distortion, inaccurate judgment, illogical interpretation, andirrationality.[3][4][5]
While cognitive biases may initially appear to be negative, some are adaptive. They may lead to more effective actions in a given context.[6]Furthermore, allowing cognitive biases enables faster decisions which can be desirable when timeliness is more valuable than accuracy, as illustrated inheuristics.[7]Other cognitive biases are a "by-product" of human processing limitations,[1]resulting from a lack of appropriate mental mechanisms (bounded rationality), the impact of an individual's constitution and biological state (seeembodied cognition), or simply from a limited capacity for information processing.[8][9]Research suggests that cognitive biases can make individuals more inclined to endorsing pseudoscientific beliefs by requiring less evidence for claims that confirm their preconceptions. This can potentially distort their perceptions and lead to inaccurate judgments.[10]
A continually evolvinglist of cognitive biaseshas been identified over the last six decades of research on human judgment and decision-making incognitive science,social psychology, andbehavioral economics. The study of cognitive biases has practical implications for areas including clinical judgment, entrepreneurship, finance, and management.[11][12]
The notion of cognitive biases was introduced byAmos TverskyandDaniel Kahnemanin 1972[13]and grew out of their experience of people'sinnumeracy, or inability to reason intuitively with the greaterorders of magnitude. Tversky, Kahneman, and colleagues demonstrated severalreplicableways in which human judgments and decisions differ fromrational choice theory. Tversky and Kahneman explained human differences in judgment and decision-making in terms of heuristics. Heuristics involve mental shortcuts which provide swift estimates about the possibility of uncertain occurrences.[14]Heuristics are simple for the brain to compute but sometimes introduce "severe and systematic errors."[7]For example, the representativeness heuristic is defined as "The tendency to judge the frequency or likelihood" of an occurrence by the extent of which the event "resembles the typical case."[14]
The "Linda Problem" illustrates the representativeness heuristic (Tversky & Kahneman, 1983[15]). Participants were given a description of "Linda" that suggests Linda might well be a feminist (e.g., she is said to be concerned about discrimination and social justice issues). They were then asked whether they thought Linda was more likely to be (a) a "bank teller" or (b) a "bank teller and active in the feminist movement." A majority chose answer (b). Independent of the information given about Linda, though, the more restrictive answer (b) is under any circumstance statistically less likely than answer (a). This is an example of the "conjunction fallacy". Tversky and Kahneman argued that respondents chose (b) because it seemed more "representative" or typical of persons who might fit the description of Linda. The representativeness heuristic may lead to errors such as activating stereotypes and inaccurate judgments of others (Haselton et al., 2005, p. 726).
Critics of Kahneman and Tversky, such asGerd Gigerenzer, alternatively argued that heuristics should not lead us to conceive of human thinking as riddled with irrational cognitive biases. They should rather conceiverationalityas an adaptive tool, not identical to the rules offormal logicor theprobability calculus.[16]Nevertheless, experiments such as the "Linda problem" grew into heuristics and biases research programs, which spread beyond academic psychology into other disciplines including medicine andpolitical science.
Biases can be distinguished on a number of dimensions. Examples of cognitive biases include:
Other biases are due to the particular way the brain perceives, forms memories and makes judgments. This distinction is sometimes described as "hot cognition" versus "cold cognition", asmotivated reasoningcan involve a state ofarousal. Among the "cold" biases,
Some biases reflect motivation, specifically the motivation to have positive attitudes toward oneself.[21]This accounts for the fact that many biases are self-motivated or self-directed (e.g.,illusion of asymmetric insight,self-serving bias). There are also biases in how subjects evaluate in-groups or out-groups, evaluating in-groups as more diverse and "better" in many respects, even when those groups are arbitrarily defined (ingroup bias,outgroup homogeneity bias).
Some cognitive biases belong to the subgroup ofattentional biases, which refers to paying increased attention to certain stimuli. It has been shown, for example, that people addicted to alcohol and other drugs pay more attention to drug-related stimuli. Common psychological tests to measure those biases are theStroop task[22][23]and thedot probe task.
Individuals' susceptibility to some types of cognitive biases can be measured by theCognitive Reflection Test(CRT) developed by Shane Frederick (2005).[24][25]
The following is a list of the more commonly studied cognitive biases:
Many social institutions rely on individuals to make rational judgments.
The securities regulation regime largely assumes that all investors act as perfectly rational persons. In truth, actual investors face cognitive limitations from biases, heuristics, and framing effects.
A fairjury trial, for example, requires that the jury ignore irrelevant features of the case, weigh the relevant features appropriately, consider different possibilities open-mindedly and resistfallaciessuch asappeal to emotion. The various biases demonstrated in these psychological experiments suggest that people will frequently fail to do all these things.[37]However, they fail to do so in systematic, directional ways that are predictable.[5]
In some academic disciplines, the study of bias is very popular. For instance, bias is a widespread and well-studied phenomenon because most decisions that concern the minds and hearts of entrepreneurs are computationally intractable.[12]
Cognitive biases can create other issues that arise in everyday life. One study showed the connection between cognitive bias, specifically approach bias, and inhibitory control on how much unhealthy snack food a person would eat.[38]They found that the participants who ate more of the unhealthy snack food, tended to have less inhibitory control and more reliance on approach bias. Others have also hypothesized that cognitive biases could be linked to various eating disorders and how people view their bodies and their body image.[39][40]
It has also been argued that cognitive biases can be used in destructive ways.[41]Some believe that there are people in authority who use cognitive biases and heuristics in order to manipulate others so that they can reach their end goals. Some medications and other health care treatments rely on cognitive biases in order to persuade others who are susceptible to cognitive biases to use their products. Many see this as taking advantage of one's natural struggle of judgement and decision-making. They also believe that it is the government's responsibility to regulate these misleading ads.
Cognitive biases also seem to play a role in property sale price and value. Participants in the experiment were shown a residential property.[42]Afterwards, they were shown another property that was completely unrelated to the first property. They were asked to say what they believed the value and the sale price of the second property would be. They found that showing the participants an unrelated property did have an effect on how they valued the second property.
Cognitive biases can be used in non-destructive ways. In team science and collective problem-solving, thesuperiority biascan be beneficial. It leads to a diversity of solutions within a group, especially in complex problems, by preventing premature consensus on suboptimal solutions. This example demonstrates how a cognitive bias, typically seen as a hindrance, can enhance collective decision-making by encouraging a wider exploration of possibilities.[43]
Cognitive biases are interlinked with collective illusions, a phenomenon where a group of people mistakenly believe that their views and preferences are shared by the majority, when in reality, they are not. These illusions often arise from various cognitive biases that misrepresent our perception of social norms and influence how we assess the beliefs of others.[44]
Because they causesystematic errors, cognitive biases cannot be compensated for using awisdom of the crowdtechnique of averaging answers from several people.[45]Debiasingis the reduction of biases in judgment and decision-making through incentives, nudges, and training.Cognitive bias mitigationandcognitive bias modificationare forms of debiasing specifically applicable to cognitive biases and their effects.Reference class forecastingis a method for systematically debiasing estimates and decisions, based on whatDaniel Kahnemanhas dubbed theoutside view.
Similar to Gigerenzer (1996),[46]Haselton et al. (2005) state the content and direction of cognitive biases are not "arbitrary" (p. 730).[1]Moreover, cognitive biases can be controlled. One debiasing technique aims to decrease biases by encouraging individuals to use controlled processing compared to automatic processing.[26]In relation to reducing theFAE, monetary incentives[47]and informing participants they will be held accountable for their attributions[48]have been linked to the increase of accurate attributions. Training has also been shown to reduce cognitive bias. Carey K. Morewedge and colleagues (2015) found that research participants exposed to one-shot training interventions, such as educational videos and debiasing games that taught mitigating strategies, exhibited significant reductions in their commission of six cognitive biases immediately and up to 3 months later.[49]
Cognitive bias modificationrefers to the process of modifying cognitive biases in healthy people and also refers to a growing area of psychological (non-pharmaceutical) therapies for anxiety, depression and addiction called cognitive bias modification therapy (CBMT). CBMT is sub-group of therapies within a growing area of psychological therapies based on modifying cognitive processes with or without accompanying medication and talk therapy, sometimes referred to as applied cognitive processing therapies (ACPT). Although cognitive bias modification can refer to modifying cognitive processes in healthy individuals, CBMT is a growing area of evidence-based psychological therapy, in which cognitive processes are modified to relieve suffering[50][51]from seriousdepression,[52]anxiety,[53]and addiction.[54]CBMT techniques are technology-assisted therapies that are delivered via a computer with or without clinician support. CBM combines evidence and theory from the cognitive model of anxiety,[55]cognitive neuroscience,[56]and attentional models.[57]
Cognitive bias modification has also been used to help those with obsessive-compulsive beliefs and obsessive-compulsive disorder.[58][59]This therapy has been shown to decrease obsessive-compulsive beliefs and behaviors.
Bias arises from various processes that are sometimes difficult to distinguish. These include:
People do appear to have stable individual differences in their susceptibility to decision biases such asoverconfidence,temporal discounting, andbias blind spot.[68]That said, these stable levels of bias within individuals are possible to change. Participants in experiments who watched training videos and played debiasing games showed medium to large reductions both immediately and up to three months later in the extent to which they exhibited susceptibility to six cognitive biases:anchoring, bias blind spot,confirmation bias,fundamental attribution error,projection bias, andrepresentativeness.[69]
Individual differences in cognitive bias have also been linked to varying levels of cognitive abilities and functions.[70]The Cognitive Reflection Test (CRT) has been used to help understand the connection between cognitive biases and cognitive ability. There have been inconclusive results when using the Cognitive Reflection Test to understand ability. However, there does seem to be a correlation; those who gain a higher score on the Cognitive Reflection Test, have higher cognitive ability and rational-thinking skills. This in turn helps predict the performance on cognitive bias and heuristic tests. Those with higher CRT scores tend to be able to answer more correctly on different heuristic and cognitive bias tests and tasks.[71]
Age is another individual difference that has an effect on one's ability to be susceptible to cognitive bias. Older individuals tend to be more susceptible to cognitive biases and have lesscognitive flexibility. However, older individuals were able to decrease their susceptibility to cognitive biases throughout ongoing trials.[72]These experiments had both young and older adults complete a framing task. Younger adults had more cognitive flexibility than older adults. Cognitive flexibility is linked to helping overcome pre-existing biases.
The list of cognitive biases has long been a topic of critique. In psychology a "rationality war"[73]unfolded betweenGerd Gigerenzerand the Kahneman and Tversky school, which pivoted on whether biases are primarily defects of human cognition or the result of behavioural patterns that are actually adaptive or "ecologically rational"[74]. Gerd Gigerenzer has historically been one of the main opponents to cognitive biases and heuristics.[75][76][77]Gigerenzer believes that cognitive biases are not biases, butrules of thumb, or as he would put it "gut feelings" that can actually help us make accurate decisions in our lives.
This debate has recently reignited, with critiques arguing there has been an overemphasis on biases in human cognition.[78]A key criticism is the continuous expansion of the list of alleged biases without clear evidence that these behaviors are genuinely biased once the actual problems people face are understood. Advances in economics and cognitive neuroscience now suggest that many behaviors previously labeled as biases might instead represent optimal decision-making strategies.
|
https://en.wikipedia.org/wiki/Cognitive_bias
|
Regression dilution, also known asregression attenuation, is thebiasingof thelinear regressionslopetowards zero (the underestimation of its absolute value), caused by errors in theindependent variable.
Consider fitting a straight line for the relationship of an outcome variableyto a predictor variablex, and estimating the slope of the line. Statistical variability, measurement error or random noise in theyvariable causesuncertaintyin the estimated slope, but notbias: on average, the procedure calculates the right slope. However, variability, measurement error or random noise in thexvariable causes bias in the estimated slope (as well as imprecision). The greater the variance in thexmeasurement, the closer the estimated slope must approach zero instead of the true value.
It may seem counter-intuitive that noise in the predictor variablexinduces a bias, but noise in the outcome variableydoes not. Recall that linear regression is not symmetric: the line of best fit for predictingyfromx(the usual linear regression) is not the same as the line of best fit for predictingxfromy.[1]
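A simulation sketch of this asymmetry, assuming the classical errors-in-variables setting in which the expected ordinary-least-squares slope is attenuated by the factor σ_x²/(σ_x² + σ_u²) (all numbers here are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200_000
true_slope = 2.0

x = rng.normal(0.0, 1.0, n)                     # true predictor, var sigma_x^2 = 1
y = true_slope * x + rng.normal(0.0, 1.0, n)    # noise in y does not bias the slope
w = x + rng.normal(0.0, 0.5, n)                 # observed predictor, error var sigma_u^2 = 0.25

slope_y_on_x = np.polyfit(x, y, 1)[0]
slope_y_on_w = np.polyfit(w, y, 1)[0]

lam = 1.0 / (1.0 + 0.25)   # attenuation factor sigma_x^2 / (sigma_x^2 + sigma_u^2)
print(slope_y_on_x)        # close to 2.0
print(slope_y_on_w)        # close to 2.0 * 0.8 = 1.6
print(true_slope * lam)    # theoretical attenuated slope
```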
Regression slopeand otherregression coefficientscan be disattenuated as follows.
The case thatxis fixed, but measured with noise, is known as thefunctional modelorfunctional relationship.[2]It can be corrected usingtotal least squares[3]anderrors-in-variables modelsin general.
The case that thexvariable arises randomly is known as thestructural modelorstructural relationship. For example, in a medical study patients are recruited as a sample from a population, and their characteristics such asblood pressuremay be viewed as arising from arandom sample.
Under certain assumptions (typically,normal distributionassumptions) there is a knownratiobetween the true slope, and the expected estimated slope. Frost and Thompson (2000) review several methods for estimating this ratio and hence correcting the estimated slope.[4]The termregression dilution ratio, although not defined in quite the same way by all authors, is used for this general approach, in which the usual linear regression is fitted, and then a correction applied. The reply to Frost & Thompson by Longford (2001) refers the reader to other methods, expanding the regression model to acknowledge the variability in the x variable, so that no bias arises.[5]Fuller(1987) is one of the standard references for assessing and correcting for regression dilution.[6]
Hughes (1993) shows that the regression dilution ratio methods apply approximately insurvival models.[7]Rosner (1992) shows that the ratio methods apply approximately tologistic regressionmodels.[8]Carroll et al. (1995) give more detail on regression dilution innonlinear models, presenting the regression dilution ratio methods as the simplest case ofregression calibrationmethods, in which additional covariates may also be incorporated.[9]
In general, methods for the structural model require some estimate of the variability of the x variable. This will require repeated measurements of the x variable in the same individuals, either in a sub-study of the main data set, or in a separate data set. Without this information it will not be possible to make a correction.
The case of multiple predictor variables subject to variability (possiblycorrelated) has been well-studied for linear regression, and for some non-linear regression models.[6][9]Other non-linear models, such asproportional hazards modelsforsurvival analysis, have been considered only with a single predictor subject to variability.[7]
Charles Spearmandeveloped in 1904 a procedure for correcting correlations for regression dilution,[10]i.e., to "rid acorrelationcoefficient from the weakening effect ofmeasurement error".[11]
Inmeasurementandstatistics, the procedure is also calledcorrelation disattenuationor thedisattenuation of correlation.[12]The correction assures that thePearson correlation coefficientacross data units (for example, people) between two sets of variables is estimated in a manner that accounts for error contained within the measurement of those variables.[13]
Letβ{\displaystyle \beta }andθ{\displaystyle \theta }be the true values of two attributes of some person orstatistical unit. These values are variables by virtue of the assumption that they differ for different statistical units in thepopulation. Letβ^{\displaystyle {\hat {\beta }}}andθ^{\displaystyle {\hat {\theta }}}be estimates ofβ{\displaystyle \beta }andθ{\displaystyle \theta }derived either directly by observation-with-error or from application of a measurement model, such as theRasch model. Also, let
{\displaystyle {\hat {\beta }}=\beta +\epsilon _{\beta },\qquad {\hat {\theta }}=\theta +\epsilon _{\theta },}
whereϵβ{\displaystyle \epsilon _{\beta }}andϵθ{\displaystyle \epsilon _{\theta }}are the measurement errors associated with the estimatesβ^{\displaystyle {\hat {\beta }}}andθ^{\displaystyle {\hat {\theta }}}.
The estimated correlation between two sets of estimates is
which, assuming the errors are uncorrelated with each other and with the true attribute values, gives
whereRβ{\displaystyle R_{\beta }}is theseparation indexof the set of estimates ofβ{\displaystyle \beta }, which is analogous toCronbach's alpha; that is, in terms ofclassical test theory,Rβ{\displaystyle R_{\beta }}is analogous to a reliability coefficient. Specifically, the separation index is given as follows:
where the mean squared standard error of person estimate gives an estimate of the variance of the errors,ϵβ{\displaystyle \epsilon _{\beta }}. The standard errors are normally produced as a by-product of the estimation process (seeRasch model estimation).
The disattenuated estimate of the correlation between the two sets of parameter estimates is therefore
{\displaystyle r_{\beta \theta }={\frac {r_{{\hat {\beta }}{\hat {\theta }}}}{\sqrt {R_{\beta }R_{\theta }}}}.}
That is, the disattenuated correlation estimate is obtained by dividing the correlation between the estimates by thegeometric meanof the separation indices of the two sets of estimates. Expressed in terms of classical test theory, the correlation is divided by the geometric mean of the reliability coefficients of two tests.
Given tworandom variablesX′{\displaystyle X^{\prime }}andY′{\displaystyle Y^{\prime }}measured asX{\displaystyle X}andY{\displaystyle Y}with measuredcorrelationrxy{\displaystyle r_{xy}}and a knownreliabilityfor each variable,rxx{\displaystyle r_{xx}}andryy{\displaystyle r_{yy}}, the estimated correlation betweenX′{\displaystyle X^{\prime }}andY′{\displaystyle Y^{\prime }}corrected for attenuation is
{\displaystyle r_{x'y'}={\frac {r_{xy}}{\sqrt {r_{xx}r_{yy}}}}.}
How well the variables are measured affects the correlation ofXandY. The correction for attenuation tells one what the estimated correlation is expected to be if one could measureX′andY′with perfect reliability.
Thus ifX{\displaystyle X}andY{\displaystyle Y}are taken to be imperfect measurements of underlying variablesX′{\displaystyle X'}andY′{\displaystyle Y'}with independent errors, thenrx′y′{\displaystyle r_{x'y'}}estimates the true correlation betweenX′{\displaystyle X'}andY′{\displaystyle Y'}.
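A minimal numerical sketch of this correction (the observed correlation and the reliabilities are made-up values):

```python
import math

def disattenuate(r_xy, r_xx, r_yy):
    """Spearman's correction for attenuation: divide the observed correlation
    by the geometric mean of the two reliabilities."""
    return r_xy / math.sqrt(r_xx * r_yy)

# hypothetical observed correlation 0.40 with test reliabilities 0.70 and 0.80
print(disattenuate(0.40, 0.70, 0.80))  # ~0.53 estimated true correlation
```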
A correction for regression dilution is necessary instatistical inferencebased onregression coefficients. However, inpredictive modellingapplications, correction is neither necessary nor appropriate. Inchange detection, correction is necessary.
To understand this, consider the measurement error as follows. Letybe the outcome variable,xbe the true predictor variable, andwbe an approximate observation ofx. Frost and Thompson suggest, for example, thatxmay be the true, long-term blood pressure of a patient, andwmay be the blood pressure observed on one particular clinic visit.[4]Regression dilution arises if we are interested in the relationship betweenyandx, but estimate the relationship betweenyandw. Becausewis measured with variability, the slope of a regression line ofyonwis less than the regression line ofyonx.
Standard methods can fit a regression of y on w without bias. There is bias only if we then use the regression of y on w as an approximation to the regression of y on x. In the example, assuming that blood pressure measurements are similarly variable in future patients, our regression line of y on w (observed blood pressure) gives unbiased predictions.
An example of a circumstance in which correction is desired is prediction of change. Suppose the change inxis known under some new circumstance: to estimate the likely change in an outcome variabley, the slope of the regression ofyonxis needed, notyonw. This arises inepidemiology. To continue the example in whichxdenotes blood pressure, perhaps a largeclinical trialhas provided an estimate of the change in blood pressure under a new treatment; then the possible effect ony, under the new treatment, should be estimated from the slope in the regression ofyonx.
Another circumstance is predictive modelling in which future observations are also variable, but not (in the phrase used above) "similarly variable"; for example, when the current data set includes blood pressure measured with greater precision than is common in clinical practice. One specific example of this arose when developing a regression equation based on a clinical trial, in which blood pressure was the average of six measurements, for use in clinical practice, where blood pressure is usually a single measurement.[14]
All of these results can be shown mathematically, in the case ofsimple linear regressionassuming normal distributions throughout (the framework of Frost & Thompson).
It has been discussed that a poorly executed correction for regression dilution, in particular when performed without checking for the underlying assumptions, may do more damage to an estimate than no correction.[15]
Regression dilution was first mentioned, under the name attenuation, bySpearman(1904).[16]Those seeking a readable mathematical treatment might like to start with Frost and Thompson (2000).[4]
|
https://en.wikipedia.org/wiki/Correction_for_attenuation
|
Instatisticsandoptimization,errorsandresidualsare two closely related and easily confused measures of thedeviationof anobserved valueof anelementof astatistical samplefrom its "true value" (not necessarily observable). Theerrorof anobservationis the deviation of the observed value from the true value of a quantity of interest (for example, apopulation mean). Theresidualis the difference between the observed value and theestimatedvalue of the quantity of interest (for example, asample mean). The distinction is most important inregression analysis, where the concepts are sometimes called theregression errorsandregression residualsand where they lead to the concept ofstudentized residuals.
Ineconometrics, "errors" are also calleddisturbances.[1][2][3]
Suppose there is a series of observations from aunivariate distributionand we want to estimate themeanof that distribution (the so-calledlocation model). In this case, the errors are the deviations of the observations from the population mean, while the residuals are the deviations of the observations from the sample mean.
Astatistical error(ordisturbance) is the amount by which an observation differs from itsexpected value, the latter being based on the wholepopulationfrom which the statistical unit was chosen randomly. For example, if the mean height in a population of 21-year-old men is 1.75 meters, and one randomly chosen man is 1.80 meters tall, then the "error" is 0.05 meters; if the randomly chosen man is 1.70 meters tall, then the "error" is −0.05 meters. The expected value, being themeanof the entire population, is typically unobservable, and hence the statistical error cannot be observed either.
Aresidual(or fitting deviation), on the other hand, is an observableestimateof the unobservable statistical error. Consider the previous example with men's heights and suppose we have a random sample ofnpeople. Thesample meancould serve as a good estimator of thepopulationmean. Then we have: the difference between the height of each man in the sample and the unobservable population mean is a statistical error, whereas the difference between the height of each man in the sample and the observable sample mean is a residual.
Note that, because of the definition of the sample mean, the sum of the residuals within a random sample is necessarily zero, and thus the residuals are necessarilynotindependent. The statistical errors, on the other hand, are independent, and their sum within the random sample isalmost surelynot zero.
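A small numerical check of this point (the sample values are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(7)
mu = 1.75                                  # true (usually unobservable) population mean
sample = rng.normal(mu, 0.07, size=10)     # heights of a random sample

errors = sample - mu                        # statistical errors (require the true mean)
residuals = sample - sample.mean()          # residuals (use the sample mean)

print(errors.sum())      # almost surely nonzero
print(residuals.sum())   # zero up to floating-point rounding
```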
One can standardize statistical errors (especially of anormal distribution) in az-score(or "standard score"), and standardize residuals in at-statistic, or more generallystudentized residuals.
If we assume a normally distributed population with mean μ andstandard deviationσ, and choose individuals independently, then we have
{\displaystyle X_{1},\dots ,X_{n}\sim N(\mu ,\sigma ^{2})}
and thesample mean
{\displaystyle {\overline {X}}={\frac {X_{1}+\cdots +X_{n}}{n}}}
is a random variable distributed such that:
{\displaystyle {\overline {X}}\sim N\left(\mu ,{\frac {\sigma ^{2}}{n}}\right).}
Thestatistical errorsare then
{\displaystyle e_{i}=X_{i}-\mu ,}
withexpectedvalues of zero,[4]whereas theresidualsare
{\displaystyle r_{i}=X_{i}-{\overline {X}}.}
The sum of squares of thestatistical errors, divided byσ2, has achi-squared distributionwithndegrees of freedom:
{\displaystyle {\frac {1}{\sigma ^{2}}}\sum _{i=1}^{n}e_{i}^{2}\sim \chi _{n}^{2}.}
However, this quantity is not observable as the population mean is unknown. The sum of squares of theresiduals, on the other hand, is observable. The quotient of that sum by σ2has a chi-squared distribution with onlyn− 1 degrees of freedom:
{\displaystyle {\frac {1}{\sigma ^{2}}}\sum _{i=1}^{n}r_{i}^{2}\sim \chi _{n-1}^{2}.}
This difference betweennandn− 1 degrees of freedom results inBessel's correctionfor the estimation ofsample varianceof a population with unknown mean and unknown variance. No correction is necessary if the population mean is known.
It is remarkable that thesum of squares of the residualsand the sample mean can be shown to be independent of each other, using, e.g.Basu's theorem. That fact, and the normal and chi-squared distributions given above form the basis of calculations involving the t-statistic:
{\displaystyle T={\frac {{\overline {X}}_{n}-\mu _{0}}{S_{n}/{\sqrt {n}}}},}
whereX¯n−μ0{\displaystyle {\overline {X}}_{n}-\mu _{0}}represents the errors,Sn{\displaystyle S_{n}}represents the sample standard deviation for a sample of sizen, and unknownσ, and the denominator termSn/n{\displaystyle S_{n}/{\sqrt {n}}}accounts for the standard deviation of the errors according to:[5]
Var(X¯n)=σ2n{\displaystyle \operatorname {Var} \left({\overline {X}}_{n}\right)={\frac {\sigma ^{2}}{n}}}
Theprobability distributionsof the numerator and the denominator separately depend on the value of the unobservable population standard deviationσ, butσappears in both the numerator and the denominator and cancels. That is fortunate because it means that even though we do not knowσ, we know the probability distribution of this quotient: it has aStudent's t-distributionwithn− 1 degrees of freedom. We can therefore use this quotient to find aconfidence intervalforμ. This t-statistic can be interpreted as "the number of standard errors away from the regression line."[6]
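A brief sketch of using this quotient to form a confidence interval for μ, with a hypothetical sample and SciPy's Student's t quantile function:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)
sample = rng.normal(loc=1.75, scale=0.07, size=25)   # hypothetical heights

n = sample.size
mean, s = sample.mean(), sample.std(ddof=1)
t_crit = stats.t.ppf(0.975, df=n - 1)                # t quantile with n-1 degrees of freedom

half_width = t_crit * s / np.sqrt(n)
print(mean - half_width, mean + half_width)          # 95% confidence interval for mu
```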
Inregression analysis, the distinction betweenerrorsandresidualsis subtle and important, and leads to the concept ofstudentized residuals. Given an unobservable function that relates the independent variable to the dependent variable – say, a line – the deviations of the dependent variable observations from this function are the unobservable errors. If one runs a regression on some data, then the deviations of the dependent variable observations from thefittedfunction are the residuals. If the linear model is applicable, a scatterplot of residuals plotted against the independent variable should be random about zero with no trend to the residuals.[5]If the data exhibit a trend, the regression model is likely incorrect; for example, the true function may be a quadratic or higher order polynomial. If they are random, or have no trend, but "fan out" - they exhibit a phenomenon calledheteroscedasticity. If all of the residuals are equal, or do not fan out, they exhibithomoscedasticity.
However, a terminological difference arises in the expression mean squared error (MSE). The mean squared error of a regression is a number computed from the sum of squares of the computed residuals, and not of the unobservable errors. If that sum of squares is divided by n, the number of observations, the result is the mean of the squared residuals. Since this is a biased estimate of the variance of the unobserved errors, the bias is removed by dividing the sum of the squared residuals by df = n − p − 1 instead of n, where df is the number of degrees of freedom (n minus the number of estimated parameters: the p predictor coefficients plus one for the intercept). This forms an unbiased estimate of the variance of the unobserved errors, and is called the mean squared error.[7]
Another way to calculate the mean square of error arises when analyzing the variance of linear regression using a technique like that used in ANOVA (the two are closely related because ANOVA is a type of regression): the sum of squares of the residuals (also called the sum of squares of the error) is divided by the degrees of freedom, n − p − 1, where p is the number of parameters estimated in the model (one for each variable in the regression equation, not including the intercept). One can then also calculate the mean square of the model by dividing the sum of squares of the model by its degrees of freedom, which is just the number of parameters. The F value can then be calculated by dividing the mean square of the model by the mean square of the error, and used to determine significance (which is why the mean squares are needed in the first place).[8]
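The following sketch ties the two preceding paragraphs together: it fits a simple linear regression by ordinary least squares on synthetic data, then forms the unbiased MSE with df = n − p − 1 and the ANOVA-style F statistic. All data and parameter values are invented for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n, p = 50, 1                                  # p slope parameters (intercept counted separately)
x = rng.uniform(0, 10, size=n)
y = 2.0 + 0.7 * x + rng.normal(0, 1.5, size=n)

# Ordinary least squares fit y = b0 + b1 * x
X = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
fitted = X @ beta
residuals = y - fitted

ss_res = np.sum(residuals ** 2)               # sum of squares of residuals (error SS)
ss_model = np.sum((fitted - y.mean()) ** 2)   # model (regression) sum of squares

df_error = n - p - 1                          # degrees of freedom for the error
df_model = p                                  # degrees of freedom for the model

mse = ss_res / df_error                       # unbiased mean squared error
msm = ss_model / df_model                     # mean square of the model
F = msm / mse
p_value = stats.f.sf(F, df_model, df_error)

print(f"MSE = {mse:.3f}, F = {F:.2f}, p = {p_value:.4g}")
```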
However, because of the behavior of the process of regression, thedistributionsof residuals at different data points (of the input variable) may varyeven ifthe errors themselves are identically distributed. Concretely, in alinear regressionwhere the errors are identically distributed, the variability of residuals of inputs in the middle of the domain will behigherthan the variability of residuals at the ends of the domain:[9]linear regressions fit endpoints better than the middle. This is also reflected in theinfluence functionsof various data points on theregression coefficients: endpoints have more influence.
Thus to compare residuals at different inputs, one needs to adjust the residuals by the expected variability ofresiduals,which is calledstudentizing. This is particularly important in the case of detectingoutliers, where the case in question is somehow different from the others in a dataset. For example, a large residual may be expected in the middle of the domain, but considered an outlier at the end of the domain.
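A minimal sketch of studentizing, assuming internally studentized residuals computed from the hat-matrix leverages of a simple linear fit (synthetic data; the names are chosen for illustration). It also shows that leverage is higher at the ends of the domain, so raw residuals there have smaller expected spread.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 30
x = np.linspace(0, 10, n)
y = 1.0 + 0.5 * x + rng.normal(0, 1.0, size=n)   # identically distributed errors

X = np.column_stack([np.ones(n), x])
H = X @ np.linalg.inv(X.T @ X) @ X.T             # hat (projection) matrix
leverage = np.diag(H)                            # larger at the ends of the domain

beta = np.linalg.solve(X.T @ X, X.T @ y)
residuals = y - X @ beta

sigma2_hat = residuals @ residuals / (n - 2)     # estimate of the error variance
studentized = residuals / np.sqrt(sigma2_hat * (1 - leverage))

print("leverage at ends  :", leverage[0].round(3), leverage[-1].round(3))
print("leverage in middle:", leverage[n // 2].round(3))
print("max |studentized residual|:", np.abs(studentized).max().round(2))
```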
The use of the term "error" as discussed in the sections above is in the sense of a deviation of a value from a hypothetical unobserved value. At least two other uses also occur in statistics, both referring to observableprediction errors:
Themean squared error(MSE) refers to the amount by which the values predicted by an estimator differ from the quantities being estimated (typically outside the sample from which the model was estimated).
Theroot mean square error(RMSE) is the square-root of MSE.
Thesum of squares of errors(SSE) is the MSE multiplied by the sample size.
Sum of squares of residuals(SSR) is the sum of the squares of the deviations of the actual values from the predicted values, within the sample used for estimation. This is the basis for theleast squaresestimate, where the regression coefficients are chosen such that the SSR is minimal (i.e. its derivative is zero).
Likewise, thesum of absolute errors(SAE) is the sum of the absolute values of the residuals, which is minimized in theleast absolute deviationsapproach to regression.
Themean error(ME) is the bias.
Themean residual(MR) is always zero for least-squares estimators.
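The quantities listed above can be computed directly from observed and fitted (or predicted) values. The sketch below does so for a single illustrative sample, so the "MSE" shown is the in-sample mean of squared residuals rather than an out-of-sample prediction error; the numbers are made up.

```python
import numpy as np

# Illustrative observed values and least-squares fitted values (in-sample)
y = np.array([3.1, 4.0, 4.9, 6.2, 7.1, 7.9])
y_hat = np.array([3.0, 4.1, 5.0, 6.0, 7.0, 8.0])

residuals = y - y_hat
n = len(y)

ssr = np.sum(residuals ** 2)          # sum of squares of residuals (minimised by least squares)
sae = np.sum(np.abs(residuals))       # sum of absolute errors (least absolute deviations criterion)
mse = ssr / n                         # mean of the squared residuals (in-sample)
rmse = np.sqrt(mse)                   # root mean square error
me = residuals.mean()                 # mean error (bias); near zero for an in-sample least-squares fit

print(f"SSR = {ssr:.3f}, SAE = {sae:.3f}, MSE = {mse:.3f}, RMSE = {rmse:.3f}, ME = {me:.3f}")
```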
|
https://en.wikipedia.org/wiki/Errors_and_residuals_in_statistics
|
Instatistics, anerrors-in-variables modelor ameasurement error modelis aregression modelthat accounts formeasurement errorsin theindependent variables. In contrast, standard regression models assume that those regressors have been measured exactly, or observed without error; as such, those models account only for errors in thedependent variables, or responses.[citation needed]
In the case when some regressors have been measured with errors, estimation based on the standard assumption leads toinconsistentestimates, meaning that the parameter estimates do not tend to the true values even in very large samples. Forsimple linear regressionthe effect is an underestimate of the coefficient, known as theattenuation bias. Innon-linear modelsthe direction of the bias is likely to be more complicated.[1][2][3]
Consider a simple linear regression model of the form
y_t = α + β x_t* + ε_t,  t = 1, …, T,
where x_t* denotes the true but unobserved regressor. Instead, we observe this value with an error:
x_t = x_t* + η_t,
where the measurement error η_t is assumed to be independent of the true value x_t*. A practical application is the standard school science experiment for Hooke's law, in which one estimates the relationship between the weight added to a spring and the amount by which the spring stretches. If the y_t's are simply regressed on the x_t's (see simple linear regression), then the estimator for the slope coefficient is
β̂_x = Σ_t (x_t − x̄)(y_t − ȳ) / Σ_t (x_t − x̄)²,
which converges as the sample size T increases without bound:
plim β̂_x = β σ²_{x*} / (σ²_{x*} + σ²_η).
This is in contrast to the "true" effect of β, estimated using the x_t*: plim β̂ = β.
Variances are non-negative, so that in the limit the estimated β̂_x is smaller than β̂, an effect which statisticians call attenuation or regression dilution.[4] Thus the "naïve" least squares estimator β̂_x is an inconsistent estimator for β. However, β̂_x is a consistent estimator of the parameter required for a best linear predictor of y given the observed x_t: in some applications this may be what is required, rather than an estimate of the "true" regression coefficient β, although that would assume that the variance of the errors in the estimation and prediction is identical. This follows directly from the result quoted immediately above, and the fact that the regression coefficient relating the y_t's to the actually observed x_t's, in a simple linear regression, is given by the same attenuated quantity, β σ²_{x*} / (σ²_{x*} + σ²_η).
It is this coefficient, rather than β, that would be required for constructing a predictor of y based on an observed x which is subject to noise.
It can be argued that almost all existing data sets contain errors of different nature and magnitude, so that attenuation bias is extremely frequent (although in multivariate regression the direction of bias is ambiguous[5]).Jerry Hausmansees this as aniron law of econometrics: "The magnitude of the estimate is usually smaller than expected."[6]
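A small simulation sketch of the attenuation effect described above (NumPy; all parameter values are illustrative assumptions): regressing y on the noisy regressor x recovers a slope shrunk by approximately σ²_{x*} / (σ²_{x*} + σ²_η), while regressing on the true x* recovers β.

```python
import numpy as np

rng = np.random.default_rng(3)
T = 200_000
beta, alpha = 2.0, 1.0
sigma_xstar, sigma_eta, sigma_eps = 1.0, 0.5, 0.3

x_star = rng.normal(0, sigma_xstar, size=T)          # true but unobserved regressor
x = x_star + rng.normal(0, sigma_eta, size=T)        # observed with measurement error
y = alpha + beta * x_star + rng.normal(0, sigma_eps, size=T)

slope_naive = np.cov(x, y)[0, 1] / np.var(x, ddof=1)          # OLS slope of y on the noisy x
slope_true = np.cov(x_star, y)[0, 1] / np.var(x_star, ddof=1) # OLS slope of y on the true x*

attenuation = sigma_xstar**2 / (sigma_xstar**2 + sigma_eta**2)
print(f"slope using x*      : {slope_true:.3f}")      # approx beta
print(f"slope using noisy x : {slope_naive:.3f}")     # approx beta * attenuation
print(f"predicted attenuated: {beta * attenuation:.3f}")
```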
Usually, measurement error models are described using the latent variables approach. If y is the response variable and x are observed values of the regressors, then it is assumed there exist some latent variables y* and x* which follow the model's "true" functional relationship g(·), and such that the observed quantities are their noisy observations: y* = g(x*, w; θ), with y = y* + ε and x = x* + η,
where θ is the model's parameter and w are those regressors which are assumed to be error-free (for example, when linear regression contains an intercept, the regressor which corresponds to the constant certainly has no "measurement errors"). Depending on the specification these error-free regressors may or may not be treated separately; in the latter case it is simply assumed that the corresponding entries in the variance matrix of the η's are zero.
The variables y, x, w are all observed, meaning that the statistician possesses a data set of n statistical units {y_i, x_i, w_i}, i = 1, …, n, which follow the data generating process described above; the latent variables x*, y*, ε, and η are not observed, however.
This specification does not encompass all the existing errors-in-variables models. For example, in some of them, the function g(·) may be non-parametric or semi-parametric. Other approaches model the relationship between y* and x* as distributional instead of functional; that is, they assume that y*, conditionally on x*, follows a certain (usually parametric) distribution.
Linear errors-in-variables models were studied first, probably because linear models are so widely used and are easier to analyze than non-linear ones. Unlike standard least squares regression (OLS), extending errors-in-variables regression (EiV) from the simple to the multivariable case is not straightforward unless one treats all variables in the same way, i.e. assumes equal reliability.[10]
The simple linear errors-in-variables model was already presented in the "motivation" section: y_t = α + β x_t* + ε_t, with x_t = x_t* + η_t,
where all variables are scalar. Here α and β are the parameters of interest, whereas σ_ε and σ_η – the standard deviations of the error terms – are the nuisance parameters. The "true" regressor x* is treated as a random variable (structural model), independent of the measurement error η (classic assumption).
This model is identifiable in two cases: (1) the latent regressor x* is not normally distributed, or (2) x* has a normal distribution, but neither ε_t nor η_t is divisible by a normal distribution.[11] That is, the parameters α, β can be consistently estimated from the data set (x_t, y_t), t = 1, …, T, without any additional information, provided the latent regressor is not Gaussian.
Before this identifiability result was established, statisticians attempted to apply themaximum likelihoodtechnique by assuming that all variables are normal, and then concluded that the model is not identified. The suggested remedy was toassumethat some of the parameters of the model are known or can be estimated from the outside source. Such estimation methods include[12]
Estimation methods that do not assume knowledge of some of the parameters of the model, include
where (n₁, n₂) are such that K(n₁+1, n₂) – the joint cumulant of (x, y) – is not zero. In the case when the third central moment of the latent regressor x* is non-zero, the formula reduces to
The multivariable model looks exactly like the simple linear model, only this time β, η_t, x_t and x_t* are k×1 vectors.
In the case when (ε_t, η_t) is jointly normal, the parameter β is not identified if and only if there is a non-singular k×k block matrix [a A], where a is a k×1 vector such that a′x* is distributed normally and independently of A′x*. In the case when ε_t, η_{t1}, ..., η_{tk} are mutually independent, the parameter β is not identified if and only if, in addition to the conditions above, some of the errors can be written as the sum of two independent variables, one of which is normal.[15]
Some of the estimation methods for multivariable linear models are
where ∘ designates the Hadamard product of matrices, and the variables x_t, y_t have been preliminarily de-meaned. The authors of the method suggest using Fuller's modified IV estimator.[17]
A generic non-linear measurement error model takes form
Here the function g can be either parametric or non-parametric. When the function g is parametric it will be written as g(x*, β).
For a general vector-valued regressor x* the conditions for model identifiability are not known. However, in the case of scalar x* the model is identified unless the function g is of the "log-exponential" form[20]
and the latent regressor x* has density
where the constants A, B, C, D, E, F may depend on a, b, c, d.
Despite this optimistic result, as of now no methods exist for estimating non-linear errors-in-variables models without any extraneous information. However, there are several techniques which make use of some additional data: either the instrumental variables, or repeated observations.
where π₀ and σ₀ are (unknown) constant matrices, and ζ_t ⊥ z_t. The coefficient π₀ can be estimated using standard least squares regression of x on z. The distribution of ζ_t is unknown; however, we can model it as belonging to a flexible parametric family – the Edgeworth series:
where ϕ is the standard normal distribution.
Simulated moments can be computed using the importance sampling algorithm: first we generate several random variables {v_ts ~ ϕ, s = 1, …, S, t = 1, …, T} from the standard normal distribution, then we compute the moments at the t-th observation as
where θ = (β, σ, γ), A is just some function of the instrumental variables z, and H is a two-component vector of moments
In this approach two (or maybe more) repeated observations of the regressor x* are available. Both observations contain their own measurement errors; however, those errors are required to be independent:
where x* ⊥ η₁ ⊥ η₂. The variables η₁, η₂ need not be identically distributed (although if they are, the efficiency of the estimator can be slightly improved). With only these two observations it is possible to consistently estimate the density function of x* using Kotlarski's deconvolution technique.[22]
where it would be possible to compute the integral if we knew the conditional density function ƒ_{x*|x}. If this function could be known or estimated, then the problem turns into standard non-linear regression, which can be estimated, for example, using the NLLS method. Assuming for simplicity that η₁, η₂ are identically distributed, this conditional density can be computed as
where, with a slight abuse of notation, x_j denotes the j-th component of a vector. All densities in this formula can be estimated using inversion of the empirical characteristic functions. In particular,
To invert these characteristic functions one has to apply the inverse Fourier transform, with a trimming parameter C needed to ensure numerical stability. For example:
where w_t represents the variables measured without errors. The regressor x* here is scalar (the method can be extended to the case of vector x* as well). If not for the measurement errors, this would have been a standard linear model with the estimator
where
It turns out that all the expected values in this formula are estimable using the same deconvolution trick. In particular, for a generic observable w_t (which could be 1, w_{1t}, …, w_{ℓt}, or y_t) and some function h (which could represent any g_j or g_i g_j) we have
where φ_h is the Fourier transform of h(x*), but using the same convention as for the characteristic functions,
and
|
https://en.wikipedia.org/wiki/Errors-in-variables_models
|
Instrument errorrefers to a measurementerrorinherited from ameasuring instrument.[1]It could be caused by manufacturing tolerances of components in the instrument, the accuracy of the instrument calibration, or a difference between the measurement condition and the calibration condition (e.g., the measurement is done at a temperature different than the calibration temperature).
Such errors are distinct from errors with other causes: mistakes made when reading the measurement, human error, and errors caused by the presence of the instrument itself altering the measurement environment.
Like other errors, instrument errors can be of various types, and the overall error is the sum of the individual errors.
Instrument errors can also be classified into the following types, based on how the errors behave when the measurement is repeated.
A systematic error is an error that persists from measurement to measurement under the same measurement conditions. The size of the systematic error is sometimes referred to as the accuracy. For example, the instrument may always indicate a value 5% higher than the actual value; or perhaps the relationship between the indicated and actual values may be more complicated than that. A systematic error may arise because the instrument has been incorrectly calibrated, or perhaps because a defect has arisen in the instrument since it was calibrated. Instruments should be calibrated against a standard instrument that is known to be accurate, and ideally the calibration should be repeated at intervals. The most rigorous standards are those maintained by a standards organization such as NIST in the United States, or the ISO in Europe.
If the users know the amount of the systematic error, they may decide to adjust for it manually rather than having the instrument expensively adjusted to eliminate the error: e.g. in the above example they might manually reduce all the values read by about 4.8%.
The act of taking the measurement may alter the quantity being measured. For example, anammeterhas its own built-in resistance, so if it is connected in series to an electrical circuit, it will slightly reduce the current flowing through the circuit.
A random error is an error that varies from measurement to measurement under the same measurement conditions. The range of possible random errors is sometimes referred to as the precision (the spread of measured values). Random errors may arise because of the design of the instrument.
The effect of random error can be reduced by repeating the measurement under the same controlled conditions several times and taking the average result.
Electrical noise in the electrical components of an instrument, or temperature fluctuations in the quantity being measured, may induce random errors in the measurement.
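As a rough illustration of how averaging repeated readings suppresses random error but leaves systematic error untouched, here is a small simulation sketch; the offset and noise values are assumptions chosen only for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(4)
true_value = 100.0
systematic_offset = 0.5       # systematic error: identical in every reading
random_sd = 2.0               # random error: varies from reading to reading

single = true_value + systematic_offset + rng.normal(0, random_sd, size=10_000)
averaged = true_value + systematic_offset + rng.normal(0, random_sd, size=(10_000, 25)).mean(axis=1)

print("spread of single readings :", single.std().round(3))        # approx 2.0
print("spread of 25-reading means:", averaged.std().round(3))      # approx 2.0 / 5
print("bias of 25-reading means  :", (averaged.mean() - true_value).round(3))  # approx 0.5, unchanged
```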
If the instrument has a needle which points to a scale graduated in steps of 0.1 units, then depending on the design of the instrument, it is usually possible to estimate tenths between the successive marks on the scale, so it should be possible to read off the result to an accuracy of about 0.01 units.
|
https://en.wikipedia.org/wiki/Instrument_error
|
Metrologyis the scientific study ofmeasurement.[1]It establishes a common understanding ofunits, crucial in linking human activities.[2]Modern metrology has its roots in theFrench Revolution's political motivation to standardise units in France when a length standard taken from a natural source was proposed. This led to the creation of the decimal-basedmetric systemin 1795, establishing a set of standards for other types of measurements. Several other countries adopted the metric system between 1795 and 1875; to ensure conformity between the countries, theBureau International des Poids et Mesures(BIPM) was established by theMetre Convention.[3][4]This has evolved into theInternational System of Units(SI) as a result of a resolution at the 11thGeneral Conference on Weights and Measures(CGPM) in 1960.[5]
Metrology is divided into three basic overlapping activities:[6][7] the definition of units of measurement, the realisation of these units of measurement in practice, and traceability – linking measurements made in practice to the reference standards.
These overlapping activities are used in varying degrees by the three basic sub-fields of metrology:[6] scientific or fundamental metrology; applied, technical or industrial metrology; and legal metrology.
In each country, a national measurement system (NMS) exists as a network of laboratories,calibrationfacilities and accreditation bodies which implement and maintain its metrology infrastructure.[8][9]The NMS affects how measurements are made in a country and their recognition by the international community, which has a wide-ranging impact in its society (including economics, energy, environment, health, manufacturing, industry and consumer confidence).[10][11]The effects of metrology on trade and economy are some of the easiest-observed societal impacts. To facilitate fair trade, there must be an agreed-upon system of measurement.[11]
The ability to measure alone is insufficient; standardisation is crucial for measurements to be meaningful.[12]The first record of a permanent standard was in 2900 BC, when theroyal Egyptian cubitwas carved from blackgranite.[12]The cubit was decreed to be the length of the Pharaoh's forearm plus the width of his hand, and replica standards were given to builders.[3]The success of a standardised length for the building ofthe pyramidsis indicated by the lengths of their bases differing by no more than 0.05 percent.[12]
In China, weights and measures had a semi-religious meaning: they were used in the various crafts by the Artificers and in ritual utensils, and are mentioned in the Book of Rites along with the steelyard balance and other tools.[13]
Other civilizations produced generally accepted measurement standards, with Roman and Greek architecture based on distinct systems of measurement.[12] The collapse of those empires and the Dark Ages that followed caused much measurement knowledge and standardisation to be lost. Although local systems of measurement were common, comparability was difficult since many local systems were incompatible.[12] England established the Assize of Measures to create standards for length measurements in 1196, and the 1215 Magna Carta included a section for the measurement of wine and beer.[14]
Modern metrology has its roots in theFrench Revolution. With a political motivation to harmonise units throughout France, a length standard based on a natural source was proposed.[12]In March 1791, themetrewas defined.[4]This led to the creation of the decimal-basedmetric systemin 1795, establishing standards for other types of measurements. Several other countries adopted the metric system between 1795 and 1875; to ensure international conformity, theInternational Bureau of Weights and Measures(French:Bureau International des Poids et Mesures, or BIPM) was formed by theMetre Convention.[3][4]Although the BIPM's original mission was to create international standards for units of measurement and relate them to national standards to ensure conformity, its scope has broadened to include electrical andphotometricunits andionizing radiationmeasurement standards.[4]The metric system was modernised in 1960 with the creation of theInternational System of Units(SI) as a result of a resolution at the 11thGeneral Conference on Weights and Measures(French:Conference Generale des Poids et Mesures, or CGPM).[5]
Metrology is defined by the International Bureau of Weights and Measures (BIPM) as "the science of measurement, embracing both experimental and theoretical determinations at any level of uncertainty in any field of science and technology".[15]It establishes a common understanding of units, crucial to human activity.[2]Metrology is a wide reaching field, but can be summarized through three basic activities: the definition of internationally accepted units of measurement, the realisation of these units of measurement in practice, and the application of chains of traceability (linking measurements to reference standards).[2][6]These concepts apply in different degrees to metrology's three main fields: scientific metrology; applied, technical or industrial metrology, and legal metrology.[6]
Scientific metrology is concerned with the establishment of units of measurement, the development of new measurement methods, the realisation of measurement standards, and the transfer of traceability from these standards to users in a society.[2][3]This type of metrology is considered the top level of metrology which strives for the highest degree of accuracy.[2]BIPM maintains a database of the metrological calibration and measurement capabilities of institutes around the world. These institutes, whose activities are peer-reviewed, provide the fundamental reference points for metrological traceability. In the area of measurement, BIPM has identified nine metrology areas, which are acoustics, electricity and magnetism, length, mass and related quantities, photometry and radiometry, ionizing radiation, time and frequency, thermometry, and chemistry.[16]
As of May 2019 no physical objects define the base units.[17] The motivation for the change of the base units is to make the entire system derivable from physical constants, which required the removal of the prototype kilogram, as it was the last artefact on which the unit definitions depended.[18] Scientific metrology plays an important role in this redefinition of the units, as precise measurements of the physical constants are required to give accurate definitions of the base units. To redefine the value of a kilogram without an artefact, the value of the Planck constant must be known to twenty parts per billion.[19] Scientific metrology, through the development of the Kibble balance and the Avogadro project, has produced a value of the Planck constant with low enough uncertainty to allow for a redefinition of the kilogram.[18]
Applied, technical or industrial metrology is concerned with the application of measurement to manufacturing and other processes and their use in society, ensuring the suitability of measurement instruments, their calibration and quality control.[2] Producing good measurements is important in industry as it has an impact on the value and quality of the end product, and a 10–15% impact on production costs.[6] Although the emphasis in this area of metrology is on the measurements themselves, traceability of the measuring-device calibration is necessary to ensure confidence in the measurement. Recognition of metrological competence in industry can be achieved through mutual recognition agreements, accreditation, or peer review.[6] Industrial metrology is important to a country's economic and industrial development, and the condition of a country's industrial-metrology program can indicate its economic status.[20]
Legal metrology "concerns activities which result from statutory requirements and concern measurement,units of measurement, measuring instruments and methods of measurement and which are performed by competent bodies".[21]Such statutory requirements may arise from the need for protection of health, public safety, the environment, enabling taxation, protection of consumers and fair trade. The International Organization for Legal Metrology (OIML) was established to assist in harmonising regulations across national boundaries to ensure that legal requirements do not inhibit trade.[22]This harmonisation ensures that certification of measuring devices in one country is compatible with another country's certification process, allowing the trade of the measuring devices and the products that rely on them.WELMECwas established in 1990 to promote cooperation in the field of legal metrology in theEuropean Unionand amongEuropean Free Trade Association(EFTA) member states.[23]In the United States legal metrology is under the authority of the Office of Weights and Measures ofNational Institute of Standards and Technology(NIST), enforced by the individual states.[22]
TheInternational System of Units(SI) defines seven base units:length,mass,time,electric current,thermodynamic temperature,amount of substance, andluminous intensity.[25]By convention, each of these units are considered to be mutually independent and can be constructed directly from their defining constants.[26]: 129All other SI units are constructed as products of powers of the seven base units.[26]: 129
Since the base units are the reference points for all measurements taken in SI units, if the reference value changed all prior measurements would be incorrect. Before 2019, if a piece of the international prototype of the kilogram had been snapped off, it would have still been defined as a kilogram; all previous measured values of a kilogram would be heavier.[3]The importance of reproducible SI units has led the BIPM to complete the task of defining all SI base units in terms ofphysical constants.[27]
By defining SI base units with respect to physical constants, and not artefacts or specific substances, they are realisable with a higher level of precision and reproducibility.[27]As of the revision of the SI on 20 May 2019 thekilogram,ampere,kelvin, andmoleare defined by setting exact numerical values for thePlanck constant(h), theelementary electric charge(e), theBoltzmann constant(k), and theAvogadro constant(NA), respectively. Thesecond,metre, andcandelahave previously been defined by physical constants (thecaesium standard(ΔνCs), thespeed of light(c), and theluminous efficacyof540×1012Hzvisible light radiation (Kcd)), subject to correction to their present definitions. The new definitions aim to improve the SI without changing the size of any units, thus ensuring continuity with existing measurements.[28][26]: 123, 128
Therealisationof a unit of measure is its conversion into reality.[29]Three possible methods of realisation are defined by theinternational vocabulary of metrology(VIM): a physical realisation of the unit from its definition, a highly-reproducible measurement as a reproduction of the definition (such as thequantum Hall effectfor theohm), and the use of a material object as the measurement standard.[30]
Astandard(or etalon) is an object, system, or experiment with a defined relationship to a unit of measurement of a physical quantity.[31]Standards are the fundamental reference for a system of weights and measures by realising, preserving, or reproducing a unit against which measuring devices can be compared.[2]There are three levels of standards in the hierarchy of metrology: primary, secondary, and working standards.[20]Primary standards (the highest quality) do not reference any other standards. Secondary standards are calibrated with reference to a primary standard. Working standards, used to calibrate (or check) measuring instruments or other material measures, are calibrated with respect to secondary standards. The hierarchy preserves the quality of the higher standards.[20]An example of a standard would begauge blocksfor length. A gauge block is a block of metal or ceramic with two opposing faces ground precisely flat and parallel, a precise distance apart.[32]The length of the path of light in vacuum during a time interval of 1/299,792,458 of a second is embodied in an artefact standard such as a gauge block; this gauge block is then a primary standard which can be used to calibrate secondary standards through mechanical comparators.[33]
Metrological traceability is defined as the "property of a measurement result whereby the result can be related to a reference through a documented unbroken chain of calibrations, each contributing to the measurement uncertainty".[34]It permits the comparison of measurements, whether the result is compared to the previous result in the same laboratory, a measurement result a year ago, or to the result of a measurement performed anywhere else in the world.[35]The chain of traceability allows any measurement to be referenced to higher levels of measurements back to the original definition of the unit.[2]
Traceability is obtained directly through calibration, establishing the relationship between an indication on a standard traceable measuring instrument and the value of the comparator (or comparative measuring instrument). The process determines the measurement value and uncertainty of the device being calibrated (the comparator) and creates a traceability link to the measurement standard.[34] The four primary reasons for calibrations are to provide traceability, to ensure that the instrument (or standard) is consistent with other measurements, to determine accuracy, and to establish reliability.[2] Traceability works as a pyramid: at the top level are the international standards. At the next level, national metrology institutes maintain primary standards that are traceable to the international standards. The national metrology institutes' standards are used to establish a traceable link to local laboratory standards, and these laboratory standards are in turn used to establish a traceable link to industry and testing laboratories. Through these successive calibrations between national metrology institutes, calibration laboratories, and industry and testing laboratories, the realisation of the unit definition is propagated down through the pyramid.[35] The traceability chain works upwards from the bottom of the pyramid, where measurements done by industry and testing laboratories can be directly related to the unit definition at the top through the traceability chain created by calibration.[3]
Measurement uncertainty is a value associated with a measurement which expresses the spread of possible values associated with the measurand – a quantitative expression of the doubt existing in the measurement.[36] There are two components to the uncertainty of a measurement: the width of the uncertainty interval and the confidence level.[37] The uncertainty interval is a range of values that the measurement value is expected to fall within, while the confidence level is how likely the true value is to fall within the uncertainty interval. Uncertainty is generally expressed as follows:[2]
Y = y ± U (quoted together with the coverage factor k),
where y is the measurement value, U is the uncertainty value, and k is the coverage factor,[b] which indicates the confidence level. The upper and lower limits of the uncertainty interval can be determined by adding and subtracting the uncertainty value from the measurement value. A coverage factor of k = 2 generally indicates a 95% confidence that the measured value will fall inside the uncertainty interval.[2] Other values of k can be used to indicate a greater or lower confidence in the interval; for example, k = 1 and k = 3 generally indicate 66% and 99.7% confidence respectively.[37] The uncertainty value is determined through a combination of statistical analysis of the calibration and the uncertainty contributions from other errors in the measurement process, which can be evaluated from sources such as the instrument history, manufacturer's specifications, or published information.[37]
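A minimal sketch of how an expanded uncertainty statement of the form y ± U (k = 2) might be assembled, assuming a Type A contribution from repeated readings and an assumed Type B contribution; the readings and the Type B value are illustrative, not from any calibration certificate.

```python
import numpy as np

readings = np.array([9.98, 10.02, 10.01, 9.97, 10.03, 10.00])  # illustrative calibration readings
y = readings.mean()

u_a = readings.std(ddof=1) / np.sqrt(len(readings))  # Type A: standard uncertainty of the mean
u_b = 0.01                                           # Type B: assumed contribution, e.g. from a certificate
u_c = np.sqrt(u_a**2 + u_b**2)                       # combined standard uncertainty

k = 2                                                # coverage factor (roughly 95% confidence)
U = k * u_c                                          # expanded uncertainty

print(f"result: {y:.3f} ± {U:.3f} (k = {k})")
print(f"interval: [{y - U:.3f}, {y + U:.3f}]")
```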
Several international organizations maintain and standardise metrology.
TheMetre Conventioncreated three maininternational organizationsto facilitate standardisation of weights and measures. The first, the General Conference on Weights and Measures (CGPM), provided a forum for representatives of member states. The second, the International Committee for Weights and Measures (CIPM), was an advisory committee of metrologists of high standing. The third, the International Bureau of Weights and Measures (BIPM), provided secretarial and laboratory facilities for the CGPM and CIPM.[38]
The General Conference on Weights and Measures (French: Conférence générale des poids et mesures, or CGPM) is the convention's principal decision-making body, consisting of delegates from member states and non-voting observers from associate states.[39] The conference usually meets every four to six years to receive and discuss a CIPM report and endorse new developments in the SI as advised by the CIPM. The last meeting was held on 13–16 November 2018. On the last day of this conference there was a vote on the redefinition of four base units, which the International Committee for Weights and Measures (CIPM) had proposed earlier that year.[40] The new definitions came into force on 20 May 2019.[41][42]
TheInternational Committee for Weights and Measures(French:Comité international des poids et mesures, or CIPM) is made up of eighteen (originally fourteen)[43]individuals from a member state of high scientific standing, nominated by the CGPM to advise the CGPM on administrative and technical matters. It is responsible for ten consultative committees (CCs), each of which investigates a different aspect of metrology; one CC discusses the measurement of temperature, another the measurement of mass, and so forth. The CIPM meets annually inSèvresto discuss reports from the CCs, to submit an annual report to the governments of member states concerning the administration and finances of the BIPM and to advise the CGPM on technical matters as needed. Each member of the CIPM is from a different member state, with France (in recognition of its role in establishing the convention) always having one seat.[44][45]
TheInternational Bureau of Weights and Measures(French:Bureau international des poids et mesures, or BIPM) is an organisation based in Sèvres, France which has custody of theinternational prototype of the kilogram, provides metrology services for the CGPM and CIPM, houses the secretariat for the organisations and hosts their meetings.[46][47]Over the years, prototypes of the metre and of the kilogram have been returned to BIPM headquarters for recalibration.[47]The BIPM director is anex officio memberof the CIPM and a member of all consultative committees.[48]
TheInternational Organization of Legal Metrology(French:Organisation Internationale de Métrologie Légale, or OIML), is anintergovernmental organizationcreated in 1955 to promote the global harmonisation of the legal metrology procedures facilitating international trade.[49]This harmonisation of technical requirements, test procedures and test-report formats ensure confidence in measurements for trade and reduces the costs of discrepancies and measurement duplication.[50]The OIML publishes a number of international reports in four categories:[50]
Although the OIML has no legal authority to impose its recommendations and guidelines on its member countries, it provides a standardised legal framework for those countries to assist the development of appropriate, harmonised legislation for certification and calibration.[50] OIML provides a mutual acceptance arrangement (MAA) for measuring instruments that are subject to legal metrological control, which upon approval allows the evaluation and test reports of the instrument to be accepted in all participating countries.[51] Issuing participants in the agreement issue MAA Type Evaluation Reports or MAA Certificates upon demonstration of compliance with ISO/IEC 17065 and a peer evaluation system to determine competency.[51] This ensures that certification of measuring devices in one country is compatible with the certification process in other participating countries, allowing the trade of the measuring devices and the products that rely on them.
The International Laboratory Accreditation Cooperation (ILAC) is an international organisation for accreditation agencies involved in the certification of conformity-assessment bodies.[52] It standardises accreditation practices and procedures, recognising competent calibration facilities and assisting countries developing their own accreditation bodies.[2] ILAC originally began as a conference in 1977 to develop international cooperation for accredited testing and calibration results to facilitate trade.[52] In 2000, 36 members signed the ILAC mutual recognition agreement (MRA), allowing members' work to be automatically accepted by other signatories, and in 2012 the arrangement was expanded to include accreditation of inspection bodies.[52][53] Through this standardisation, work done in laboratories accredited by signatories is automatically recognised internationally through the MRA.[54] Other work done by ILAC includes promotion of laboratory and inspection body accreditation, and supporting the development of accreditation systems in developing economies.[54]
TheJoint Committee for Guides in Metrology(JCGM) is a committee which created and maintains two metrology guides:Guide to the expression of uncertainty in measurement(GUM)[55]andInternational vocabulary of metrology – basic and general concepts and associated terms(VIM).[34]The JCGM is a collaboration of eight partner organisations:[56]
The JCGM has two working groups: JCGM-WG1 and JCGM-WG2. JCGM-WG1 is responsible for the GUM, and JCGM-WG2 for the VIM.[57]Each member organization appoints one representative and up to two experts to attend each meeting, and may appoint up to three experts for each working group.[56]
A national measurement system (NMS) is a network of laboratories, calibration facilities and accreditation bodies which implement and maintain a country's measurement infrastructure.[8][9]The NMS sets measurement standards, ensuring the accuracy, consistency, comparability, and reliability of measurements made in the country.[58]The measurements of member countries of the CIPM Mutual Recognition Arrangement (CIPM MRA), an agreement of national metrology institutes, are recognized by other member countries.[2]As of March 2018, there are 102 signatories of the CIPM MRA, consisting of 58 member states, 40 associate states, and 4 international organizations.[59]
A national metrology institute's (NMI) role in a country's measurement system is to conduct scientific metrology, realise base units, and maintain primary national standards.[2]An NMI provides traceability to international standards for a country, anchoring its national calibration hierarchy.[2]For a national measurement system to be recognized internationally by the CIPM Mutual Recognition Arrangement, an NMI must participate in international comparisons of its measurement capabilities.[9]BIPM maintains a comparison database and a list of calibration and measurement capabilities (CMCs) of the countries participating in the CIPM MRA.[60]Not all countries have a centralised metrology institute; some have a lead NMI and several decentralised institutes specialising in specific national standards.[2]Some examples of NMI's are theNational Institute of Standards and Technology(NIST)[61]in the United States, theNational Research Council(NRC)[62]in Canada, thePhysikalisch-Technische Bundesanstalt(PTB) in Germany,[63]and theNational Physical Laboratory (United Kingdom)(NPL).[64]
Calibration laboratories are generally responsible for calibrations of industrial instrumentation.[9]Calibration laboratories are accredited and provide calibration services to industry firms, which provides a traceability link back to the national metrology institute. Since the calibration laboratories are accredited, they give companies a traceability link to national metrology standards.[2]
An organisation is accredited when an authoritative body determines, by assessing the organisation's personnel and management systems, that it is competent to provide its services.[9]For international recognition, a country's accreditation body must comply with international requirements and is generally the product of international and regional cooperation.[9]A laboratory is evaluated according to international standards such asISO/IEC 17025general requirements for the competence of testing and calibration laboratories.[2]To ensure objective and technically-credible accreditation, the bodies are independent of other national measurement system institutions.[9]TheNational Association of Testing Authorities[65]in Australia and theUnited Kingdom Accreditation Service[66]are examples of accreditation bodies.
Metrology has wide-ranging impacts on a number of sectors, including economics, energy, the environment, health, manufacturing, industry, and consumer confidence.[10][11] The effects of metrology on trade and the economy are two of its most-apparent societal impacts. To facilitate fair and accurate trade between countries, there must be an agreed-upon system of measurement.[11] Accurate measurement and regulation of water, fuel, food, and electricity are critical for consumer protection and promote the flow of goods and services between trading partners.[67] A common measurement system and quality standards benefit both consumers and producers; production at a common standard reduces cost and consumer risk, ensuring that the product meets consumer needs.[11] Transaction costs are reduced through an increased economy of scale. Several studies have indicated that increased standardisation in measurement has a positive impact on GDP. In the United Kingdom, an estimated 28.4 per cent of GDP growth from 1921 to 2013 was the result of standardisation; in Canada between 1981 and 2004 an estimated nine per cent of GDP growth was standardisation-related, and in Germany the annual economic benefit of standardisation is an estimated 0.72% of GDP.[11]
Legal metrology has reduced accidental deaths and injuries by improving the efficiency and reliability of measuring devices such as radar guns and breathalyzers.[67] Measuring the human body is challenging, with poor repeatability and reproducibility, and advances in metrology help develop new techniques to improve health care and reduce costs.[68] Environmental policy is based on research data, and accurate measurements are important for assessing climate change and environmental regulation.[69] Aside from regulation, metrology is essential in supporting innovation: the ability to measure provides a technical infrastructure and tools that can then be used to pursue further innovation. By providing a technical platform which new ideas can be built upon, easily demonstrated, and shared, measurement standards allow new ideas to be explored and expanded upon.[11]
|
https://en.wikipedia.org/wiki/Metrology
|
Regression dilution, also known asregression attenuation, is thebiasingof thelinear regressionslopetowards zero (the underestimation of its absolute value), caused by errors in theindependent variable.
Consider fitting a straight line for the relationship of an outcome variable y to a predictor variable x, and estimating the slope of the line. Statistical variability, measurement error or random noise in the y variable causes uncertainty in the estimated slope, but not bias: on average, the procedure calculates the right slope. However, variability, measurement error or random noise in the x variable causes bias in the estimated slope (as well as imprecision). The greater the variance in the x measurement, the closer the estimated slope is to zero rather than to the true value.
It may seem counter-intuitive that noise in the predictor variablexinduces a bias, but noise in the outcome variableydoes not. Recall that linear regression is not symmetric: the line of best fit for predictingyfromx(the usual linear regression) is not the same as the line of best fit for predictingxfromy.[1]
Regression slopeand otherregression coefficientscan be disattenuated as follows.
The case thatxis fixed, but measured with noise, is known as thefunctional modelorfunctional relationship.[2]It can be corrected usingtotal least squares[3]anderrors-in-variables modelsin general.
The case that thexvariable arises randomly is known as thestructural modelorstructural relationship. For example, in a medical study patients are recruited as a sample from a population, and their characteristics such asblood pressuremay be viewed as arising from arandom sample.
Under certain assumptions (typically, normal distribution assumptions) there is a known ratio between the true slope and the expected estimated slope. Frost and Thompson (2000) review several methods for estimating this ratio and hence correcting the estimated slope.[4] The term regression dilution ratio, although not defined in quite the same way by all authors, is used for this general approach, in which the usual linear regression is fitted and then a correction is applied. The reply to Frost & Thompson by Longford (2001) refers the reader to other methods, expanding the regression model to acknowledge the variability in the x variable, so that no bias arises.[5] Fuller (1987) is one of the standard references for assessing and correcting for regression dilution.[6]
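One common correction of this kind uses repeated measurements of x to estimate the regression dilution ratio (the reliability of the measured x) and divides the naive slope by it. The sketch below simulates this under the classical, structural-model assumptions; all parameter values are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)
n, beta = 5_000, 0.8
x_true = rng.normal(0, 1.0, size=n)                 # true long-term value (e.g. blood pressure)
w1 = x_true + rng.normal(0, 0.6, size=n)            # first noisy measurement
w2 = x_true + rng.normal(0, 0.6, size=n)            # repeat measurement with independent error
y = beta * x_true + rng.normal(0, 0.5, size=n)

slope_naive = np.cov(w1, y)[0, 1] / np.var(w1, ddof=1)

# Regression dilution ratio (reliability of w): var(x_true) / var(w),
# estimated here from the covariance of the two replicate measurements.
ratio = np.cov(w1, w2)[0, 1] / np.var(w1, ddof=1)
slope_corrected = slope_naive / ratio

print(f"naive slope     : {slope_naive:.3f}")        # biased towards zero
print(f"dilution ratio  : {ratio:.3f}")
print(f"corrected slope : {slope_corrected:.3f} (true value {beta})")
```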
Hughes (1993) shows that the regression dilution ratio methods apply approximately insurvival models.[7]Rosner (1992) shows that the ratio methods apply approximately tologistic regressionmodels.[8]Carroll et al. (1995) give more detail on regression dilution innonlinear models, presenting the regression dilution ratio methods as the simplest case ofregression calibrationmethods, in which additional covariates may also be incorporated.[9]
In general, methods for the structural model require some estimate of the variability of the x variable. This will require repeated measurements of the x variable in the same individuals, either in a sub-study of the main data set, or in a separate data set. Without this information it will not be possible to make a correction.
The case of multiple predictor variables subject to variability (possiblycorrelated) has been well-studied for linear regression, and for some non-linear regression models.[6][9]Other non-linear models, such asproportional hazards modelsforsurvival analysis, have been considered only with a single predictor subject to variability.[7]
Charles Spearmandeveloped in 1904 a procedure for correcting correlations for regression dilution,[10]i.e., to "rid acorrelationcoefficient from the weakening effect ofmeasurement error".[11]
Inmeasurementandstatistics, the procedure is also calledcorrelation disattenuationor thedisattenuation of correlation.[12]The correction assures that thePearson correlation coefficientacross data units (for example, people) between two sets of variables is estimated in a manner that accounts for error contained within the measurement of those variables.[13]
Let β and θ be the true values of two attributes of some person or statistical unit. These values are variables by virtue of the assumption that they differ for different statistical units in the population. Let β̂ and θ̂ be estimates of β and θ derived either directly by observation-with-error or from application of a measurement model, such as the Rasch model. Also, let
β̂ = β + ε_β,  θ̂ = θ + ε_θ,
where ε_β and ε_θ are the measurement errors associated with the estimates β̂ and θ̂.
The estimated correlation between two sets of estimates is
which, assuming the errors are uncorrelated with each other and with the true attribute values, gives
corr(β̂, θ̂) = corr(β, θ) · √(R_β R_θ),
where R_β is the separation index of the set of estimates of β, which is analogous to Cronbach's alpha; that is, in terms of classical test theory, R_β is analogous to a reliability coefficient. Specifically, the separation index is given as follows:
where the mean squared standard error of the person estimates gives an estimate of the variance of the errors, ε_β. The standard errors are normally produced as a by-product of the estimation process (see Rasch model estimation).
The disattenuated estimate of the correlation between the two sets of parameter estimates is therefore
corr(β, θ) = corr(β̂, θ̂) / √(R_β R_θ).
That is, the disattenuated correlation estimate is obtained by dividing the correlation between the estimates by thegeometric meanof the separation indices of the two sets of estimates. Expressed in terms of classical test theory, the correlation is divided by the geometric mean of the reliability coefficients of two tests.
Given two random variables X′ and Y′ measured as X and Y with measured correlation r_xy and a known reliability for each variable, r_xx and r_yy, the estimated correlation between X′ and Y′ corrected for attenuation is
r_{x′y′} = r_xy / √(r_xx r_yy).
How well the variables are measured affects the correlation ofXandY. The correction for attenuation tells one what the estimated correlation is expected to be if one could measureX′andY′with perfect reliability.
Thus if X and Y are taken to be imperfect measurements of underlying variables X′ and Y′ with independent errors, then r_{x′y′} estimates the true correlation between X′ and Y′.
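A minimal numerical sketch of the correction for attenuation, with the observed correlation and the two reliabilities given as assumed inputs rather than estimated from data:

```python
import math

r_xy = 0.42   # observed correlation between the measured X and Y
r_xx = 0.70   # assumed reliability of the X measurement
r_yy = 0.80   # assumed reliability of the Y measurement

# Spearman's correction for attenuation: divide by the geometric mean of the reliabilities
r_true = r_xy / math.sqrt(r_xx * r_yy)

print(f"disattenuated correlation estimate: {r_true:.3f}")  # 0.42 / sqrt(0.56) ≈ 0.561
```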
A correction for regression dilution is necessary instatistical inferencebased onregression coefficients. However, inpredictive modellingapplications, correction is neither necessary nor appropriate. Inchange detection, correction is necessary.
To understand this, consider the measurement error as follows. Letybe the outcome variable,xbe the true predictor variable, andwbe an approximate observation ofx. Frost and Thompson suggest, for example, thatxmay be the true, long-term blood pressure of a patient, andwmay be the blood pressure observed on one particular clinic visit.[4]Regression dilution arises if we are interested in the relationship betweenyandx, but estimate the relationship betweenyandw. Becausewis measured with variability, the slope of a regression line ofyonwis less than the regression line ofyonx.
Standard methods can fit a regression of y on w without bias. There is bias only if we then use the regression of y on w as an approximation to the regression of y on x. In the example, assuming that blood pressure measurements are similarly variable in future patients, our regression line of y on w (observed blood pressure) gives unbiased predictions.
An example of a circumstance in which correction is desired is prediction of change. Suppose the change inxis known under some new circumstance: to estimate the likely change in an outcome variabley, the slope of the regression ofyonxis needed, notyonw. This arises inepidemiology. To continue the example in whichxdenotes blood pressure, perhaps a largeclinical trialhas provided an estimate of the change in blood pressure under a new treatment; then the possible effect ony, under the new treatment, should be estimated from the slope in the regression ofyonx.
Another circumstance is predictive modelling in which future observations are also variable, but not (in the phrase used above) "similarly variable"; for example, the current data set may include blood pressure measured with greater precision than is common in clinical practice. One specific example of this arose when developing a regression equation based on a clinical trial, in which blood pressure was the average of six measurements, for use in clinical practice, where blood pressure is usually a single measurement.[14]
All of these results can be shown mathematically, in the case ofsimple linear regressionassuming normal distributions throughout (the framework of Frost & Thompson).
It has been discussed that a poorly executed correction for regression dilution, in particular when performed without checking for the underlying assumptions, may do more damage to an estimate than no correction.[15]
Regression dilution was first mentioned, under the name attenuation, bySpearman(1904).[16]Those seeking a readable mathematical treatment might like to start with Frost and Thompson (2000).[4]
|
https://en.wikipedia.org/wiki/Regression_dilution
|
Inengineering,science, andstatistics,replicationis the process of repeating a study or experiment under the same or similar conditions. It is a crucial step to test the original claim and confirm or reject the accuracy of results as well as for identifying and correcting the flaws in the original experiment.[1]ASTM, in standard E1847, defines replication as "... the repetition of the set of all the treatment combinations to be compared in an experiment. Each of the repetitions is called areplicate."
For a full factorial design, replicates are multiple experimental runs with the same factor levels. Combinations of factor levels, groups of factor-level combinations, or even entire designs can be replicated. For instance, consider a scenario with three factors, each having two levels, and an experiment that tests every possible combination of these levels (a full factorial design). One complete replication of this design would comprise 2³ = 8 runs. The design can be executed once or with several replicates.[2]
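As a sketch of the full factorial example above, the following Python snippet enumerates the 2³ = 8 factor-level combinations and replicates the whole design twice, randomising the run order; the factor names are illustrative assumptions.

```python
from itertools import product
import random

factors = {"temperature": [-1, +1], "pressure": [-1, +1], "catalyst": [-1, +1]}
replicates = 2                                        # two complete replications of the design

runs = list(product(*factors.values()))               # 2**3 = 8 factor-level combinations
design = [combo for _ in range(replicates) for combo in runs]
random.seed(0)
random.shuffle(design)                                # randomise the run order

print(f"{len(runs)} distinct combinations, {len(design)} runs in total")
for combo in design:
    print(dict(zip(factors, combo)))
```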
There are two main types of replication in statistics. First, there is a type called "exact replication" (also called "direct replication"), which involves repeating the study as closely as possible to the original to see whether the original results can be precisely reproduced.[3] For instance, repeating a study on the effect of a specific diet on weight loss using the same diet plan and measurement methods. The second type of replication is called "conceptual replication." This involves testing the same theory as the original study but with different conditions.[3] For example, testing the same diet's effect on blood sugar levels instead of weight loss, using different measurement methods.
Both exact (direct) replications and conceptual replications are important. Direct replications help confirm the accuracy of the findings within the conditions that were initially tested. Conceptual replications, on the other hand, examine the validity of the theory behind those findings and explore the different conditions under which those findings remain true. In essence, conceptual replication provides insight into how generalizable the findings are.[4]
Replication is not the same as repeatedmeasurementsof the same item. Both repeat and replicate measurements involve multiple observations taken at the same levels of experimental factors. However, repeat measurements are collected during a single experimental session, while replicate measurements are gathered across different experimental sessions.[2]Replication in statistics evaluates the consistency of experiment results across different trials to ensure external validity, while repetition measures precision and internal consistency within the same or similar experiments.[5]
Replicates Example: Testing a new drug's effect on blood pressure in separate groups on different days.
Repeats Example: Measuring blood pressure multiple times in one group during a single session.
In replication studies within the field of statistics, several key methods and concepts are employed to assess the reliability of research findings. Here are some of the main statistical methods and concepts used in replication:
P-Values: The p-value is a measure of the probability that the observed data would occur by chance if the null hypothesis were true. In replication studies, p-values help determine whether the findings can be consistently replicated. A low p-value in a replication study indicates that the results are not likely due to random chance.[6]For example, if a study found a statistically significant effect of a test condition on an outcome and a replication also finds a statistically significant effect, this suggests that the original finding is likely reproducible.
Confidence Intervals: Confidence intervals provide a range of values within which the true effect size is likely to fall. In replication studies, comparing the confidence intervals of the original study and the replication can indicate whether the results are consistent.[6]For example, if the original study reports a treatment effect with a 95% confidence interval of [5, 10], and the replication study finds a similar effect with a confidence interval of [6, 11], this overlap indicates consistent findings across both studies.
As an example, consider a continuous process which produces items. Batches of items are then processed or treated. Finally, tests or measurements are conducted. Several options might be available to obtain ten test values. Some possibilities are:
Each option would call for differentdata analysismethods and yield different conclusions.
|
https://en.wikipedia.org/wiki/Replication_(statistics)
|
Thetheory of statisticsprovides a basis for the whole range of techniques, in bothstudy designanddata analysis, that are used within applications ofstatistics.[1][2]The theory covers approaches tostatistical-decisionproblems and tostatistical inference, and the actions and deductions that satisfy the basic principles stated for these different approaches. Within a given approach, statistical theory gives ways of comparing statistical procedures; it can find the best possible procedure within a given context for given statistical problems, or can provide guidance on the choice between alternative procedures.[2][3]
Apart from philosophical considerations about how to make statistical inferences and decisions, much of statistical theory consists ofmathematical statistics, and is closely linked toprobability theory, toutility theory, and tooptimization.
Statistical theory provides an underlying rationale and provides a consistent basis for the choice of methodology used inapplied statistics.
Statistical modelsdescribe the sources of data and can have different types of formulation corresponding to these sources and to the problem being studied. Such problems can be of various kinds:
Statistical models, once specified, can be tested to see whether they provide useful inferences for new data sets.[4]
Statistical theory provides a guide to comparing methods ofdata collection, where the problem is to generate informative data usingoptimizationandrandomizationwhile measuring and controlling forobservational error.[5][6][7]Optimization of data collection reduces the cost of data while satisfying statistical goals,[8][9]whilerandomizationallows reliable inferences. Statistical theory provides a basis for good data collection and the structuring of investigations in the topics of:
The task of summarising statistical data in conventional forms (also known asdescriptive statistics) is considered in theoretical statistics as a problem of defining what aspects of statistical samples need to be described and how well they can be described from a typically limited sample of data. Thus the problems theoretical statistics considers include:
Besides the philosophy underlyingstatistical inference, statistical theory has the task of considering the types of questions thatdata analystsmight want to ask about the problems they are studying and of providing data analytic techniques for answering them. Some of these tasks are:
When a statistical procedure has been specified in the study protocol, then statistical theory provides well-defined probability statements for the method when applied to all populations that could have arisen from the randomization used to generate the data. This provides an objective way of estimating parameters, constructing confidence intervals, testing hypotheses, and selecting the best procedure. Even for observational data, statistical theory provides a way of calculating a value that can be used to interpret a sample of data from a population; it can provide a means of indicating how well that value is determined by the sample, and thus a means of saying whether corresponding values derived for different populations are genuinely as different as they might seem. However, the reliability of inferences from post-hoc observational data is often worse than for planned, randomized generation of data.
Statistical theory provides the basis for a number of data-analytic approaches that are common across scientific and social research.Interpreting datais done with one of the following approaches:
Many of the standard methods for those approaches rely on certainstatistical assumptions(made in the derivation of the methodology) actually holding in practice. Statistical theory studies the consequences of departures from these assumptions. In addition it provides a range ofrobust statistical techniquesthat are less dependent on assumptions, and it provides methods checking whether particular assumptions are reasonable for a given data set.
|
https://en.wikipedia.org/wiki/Statistical_theory
|
Systemic biasis the inherent tendency of a process to support particular outcomes. The term generally refers to human systems such as institutions. Systemic bias is related to and overlaps conceptually withinstitutional biasandstructural bias, and the terms are often used interchangeably.
In systemic bias institutional practices tend to exhibit a bias which leads to the preferential treatment or advantage of specific social groups, while others experience disadvantage or devaluation. This bias may not necessarily stem from intentional prejudice or discrimination but rather from the adherence to established rules and norms by the majority.[1]
Systemic bias includes institutional, systemic, and structural bias which can lead toinstitutional racism, which is a type of racism that is integrated into the laws, norms, and regulations of a society or establishment. Structural bias, in turn, has been defined more specifically in reference to racial inequities as "the normalized and legitimized range of policies, practices, and attitudes that routinely produce cumulative and chronic adverse outcomes for minority populations".[2]The issues of systemic bias are dealt with extensively in the field ofindustrial organizationeconomics.
Cognitive biasis inherent in the experiences, loyalties, and relationships of people in their daily lives, and new biases are constantly being discovered and addressed on both an ethical and political level. For example, the goal ofaffirmative action in the United Statesis to counter biases concerning gender, race, andethnicity, by opening up institutional participation to people with a wider range of backgrounds, and hence a wider range of points of view. InIndia, the system ofScheduled Castes and Tribesintends to address systemic bias caused by the controversialcastesystem, a system centered on organizeddiscriminationbased upon one's ancestry, not unlikethe systemthat affirmative action aims to counter. Both the scheduling system and affirmative action mandate the hiring of citizens from within designated groups. However, without sufficient restrictions based upon the actual socio-economic standing of the recipients of the aid provided, these types of systems can allegedly result in the unintentional institutionalization of a reversed form of the same systemic bias,[3]which works against the goal of rendering institutional participation open to people with a wider range of backgrounds.
Unconscious bias training, which is theorized to address both systemic and structural bias, has become common in many organizations. This training addresses the practices and policies of the organization, such as hiring practices that favor social networking, or a grooming policy that disadvantages people with Afro-textured hair.[4]
The study of systemic bias as part of the field titled organizational behavior in industrial organization economics is conducted in several principal modalities in both non-profit and for-profit institutions. The issue of concern is that patterns of behavior may develop within large institutions which become harmful to the productivity and viability of the larger institutions from which they develop, as well as the community they occupy. The three major categories of study for maladaptive organizational behavior and systemic bias are counterproductive work behavior, human resource mistreatment, and the amelioration of stress-inducing behavior.
Racism is prejudice, discrimination or hostility towards other people because they are of a different racial or ethnic origin. Medical students have conducted studies to investigate systemic biases associated with race. One such study showed that, due to systemic bias, certain groups of people are marginalized because of race and other differences, their professional careers are threatened, and more work and responsibility are given to those in the minority group.[5]
Counterproductive work behaviorconsists of behavior by employees that harms or intends to harm organizations and people in organizations.[6]
There are several types of mistreatment that employees endure in organizations.
Financial Weekreported in May 2008:
But we travel in a world with a systemic bias to optimism that typically chooses to avoid the topic of the impending bursting of investment bubbles. Collectively, this is done for career or business reasons. As discussed many times in the investment business, pessimism or realism in the face of probable trouble is just plain bad for business and bad for careers. What I am only slowly realizing, though, is how similar the career risk appears to be for the Fed. It doesn't want to move against bubbles because Congress and business do not like it and show their dislike in unmistakable terms. Even Federal reserve chairmen get bullied and have their faces slapped if they stick to their guns, which will, not surprisingly, be rare since everyone values his career or does not want to be replaced à la Volcker. So, be as optimistic as possible, be nice to everyone, bail everyone out, and hope for the best. If all goes well, after all, you will have a lot of grateful bailees who will happily hire you for $300,000 a pop.[12]
Inengineeringandcomputational mechanics, the wordbiasis sometimes used as a synonym ofsystematic error. In this case, the bias is referred to the result of a measurement or computation, rather than to the measurement instrument or computational method.[13]
Some authors try to draw a distinction between systemic and systematic corresponding to that between unplanned and planned, or to that between arising from the characteristics of a system and from an individual flaw. In a less formal sense,systemicbiases are sometimes said to arise from the nature of the interworkings of the system, whereassystematicbiases stem from a concerted effort to favor certain outcomes. Consider the difference between affirmative action (systematic) compared to racism and caste (systemic).[14][citation needed]
|
https://en.wikipedia.org/wiki/Systemic_bias
|
Error barsare graphical representations of the variability of data and used on graphs to indicate theerrororuncertaintyin a reported measurement. They give a general idea of how precise a measurement is, or conversely, how far from the reported value the true (error free) value might be. Error bars often represent onestandard deviationof uncertainty, onestandard error, or a particularconfidence interval(e.g., a 95% interval). These quantities are not the same and so the measure selected should be stated explicitly in the graph or supporting text.
Error bars can be used to compare visually two quantities if various other conditions hold. This can determine whether differences arestatistically significant. Error bars can also suggestgoodness of fitof a given function, i.e., how well the function describes the data. Scientific papers in the experimental sciences are expected to include error bars on all graphs, though the practice differs somewhat between sciences, and each journal will have its ownhouse style. It has also been shown that error bars can be used as adirect manipulation interfacefor controlling probabilistic algorithms for approximate computation.[1]Error bars can also be expressed in aplus–minus sign(±), plus the upper limit of the error and minus the lower limit of the error.[2]
A notorious misconception in elementary statistics is that error bars show whether a statistically significant difference exists, by checking simply for whether the error bars overlap; this is not the case.[3][4][5]
|
https://en.wikipedia.org/wiki/Error_bar
|
False precision(also calledoverprecision,fake precision,misplaced precision,excess precision, andspurious precision) occurs when numerical data are presented in a manner that implies betterprecisionthan is justified; since precision is a limit toaccuracy(in the ISO definition of accuracy), this often leads to overconfidence in the accuracy, namedprecision bias.[1]
Madsen Piriedefines the term "false precision" in a more general way: when exact numbers are used for notions that cannot be expressed in exact terms. For example, "We know that 90% of the difficulty in writing is getting started." Often false precision is abused to produce an unwarranted confidence in the claim: "our mouthwash is twice as good as our competitor's".[2]
Inscienceandengineering, convention dictates that unless amargin of erroris explicitly stated, the number ofsignificant figuresused in the presentation of data should be limited to what is warranted by the precision of those data. For example, if an instrument can be read to tenths of a unit of measurement, results of calculations using data obtained from that instrument can only be confidently stated to the tenths place, regardless of what the raw calculation returns or whether other data used in the calculation are more accurate. Even outside these disciplines, there is a tendency to assume that all the non-zero digits of a number are meaningful; thus, providing excessive figures may lead the viewer to expect better precision than exists.
However, in contrast, it is good practice to retain more significant figures than this in the intermediate stages of a calculation, in order to avoid accumulatedrounding errors.
False precision commonly arises when high-precision and low-precision data are combined, when using anelectronic calculator, and inconversion of units.
False precision is the gist of numerous variations of a joke which can be summarized as follows: A tour guide at a museum tells visitors that a dinosaur skeleton is 100,000,005 years old, because he was told that it was 100 million years old when he started working there 5 years ago.
If a car's speedometer indicates a speed of 60 mph, converting it to 96.56064 km/h makes it seem like the measurement was very precise, when in fact it was not. Assuming the speedometer is accurate to 1 mph, a more appropriate conversion is 97 km/h.
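A small C sketch of this idea is shown below; round_sig is a hypothetical helper for rounding to a given number of significant figures, not a standard library function, and the factor 1.609344 is the exact definition of the mile in kilometres.

#include <stdio.h>
#include <math.h>

/* Round a value to the given number of significant figures. */
static double round_sig(double x, int sig) {
    if (x == 0.0) return 0.0;
    double scale = pow(10.0, sig - 1 - (int)floor(log10(fabs(x))));
    return round(x * scale) / scale;
}

int main(void) {
    double mph = 60.0;              /* speedometer reading, about 2 significant figures */
    double kmh = mph * 1.609344;    /* exact mile-to-kilometre factor */

    printf("raw conversion       : %.5f km/h\n", kmh);                /* 96.56064 - false precision */
    printf("2 significant figures: %.0f km/h\n", round_sig(kmh, 2));  /* 97 km/h */
    return 0;
}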
Measures that rely onstatistical sampling, such asIQ tests, are often reported with false precision.[3]
|
https://en.wikipedia.org/wiki/False_precision
|
Innumerical analysis, one or moreguard digitscan be used to reduce the amount ofroundoff error.
Suppose that the final result of a long, multi-step calculation can be safelyroundedoff toNdecimal places. That is to say, the roundoff error introduced by this final roundoff makes a negligible contribution to the overall uncertainty.
However, it is quite likely that it isnotsafe to round off the intermediate steps in the calculation to the same number of digits. Be aware that roundoff errors can accumulate. IfMdecimal places are used in the intermediate calculation, we say there areM−Nguard digits.
Guard digits are also used infloating pointoperations in most computer systems.
As an example, consider the subtraction21×0.1002−20×0.1112{\displaystyle 2^{1}\times 0.100_{2}-2^{0}\times 0.111_{2}}. Here, the product notation indicates a binary floating point representation with the exponent of the representation given as apower of twoand with thesignificandgiven with three bits after the binary point. To compute the subtraction it is necessary to change the forms of these numbers so that they have the same exponent, and so that when the product notation is expanded the resulting numbers have their binary points lined up with each other. Shifting the second operand into position, as21×0.01112{\displaystyle 2^{1}\times 0.0111_{2}}, gives it a fourth digit after the binary point. This creates the need to add an extra digit to the first operand—a guard digit—putting the subtraction into the form21×0.10002−21×0.01112{\displaystyle 2^{1}\times 0.1000_{2}-2^{1}\times 0.0111_{2}}.
Performing this operation gives as the result21×0.00012{\displaystyle 2^{1}\times 0.0001_{2}}or2−2×0.1002{\displaystyle 2^{-2}\times 0.100_{2}}.
Without using a guard digit the subtraction would be performed with only three bits of precision, as 2^1 × 0.100_2 − 2^1 × 0.011_2, yielding 2^1 × 0.001_2 = 2^−1 × 0.100_2, twice as large as the correct result. Thus, in this example, the use of a guard digit led to a more accurate result.
An example of the error caused by floating point roundoff is illustrated in the followingCcode.
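One minimal, hypothetical example of such a program (assuming float expressions are evaluated in single precision, the usual case on modern hardware) is the following sketch; the variable names and the doubling loop are illustrative choices, not a specific canonical listing.

#include <stdio.h>

int main(void) {
    /* Mathematically, x + 1.0 is never equal to x, so this loop "should"
       never terminate.  In float arithmetic, however, once x is large
       enough the increment 1.0 falls below the spacing of representable
       values, the addition rounds back to x, and the loop stops. */
    float x = 1.0f;
    int steps = 0;
    while (x + 1.0f != x) {
        x *= 2.0f;
        steps++;
    }
    printf("terminated after %d doublings, x = %g\n", steps, x);
    return 0;
}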
It appears that the program should not terminate; yet, because of floating-point roundoff, the loop condition eventually becomes false and the program does terminate.
Another example is:
Take two numbers:
2.56×100{\displaystyle 2.56\times 10^{0}}and2.34×102{\displaystyle 2.34\times 10^{2}}
We bring the first number to the same power of10{\displaystyle 10}as the second one:
0.0256×102{\displaystyle 0.0256\times 10^{2}}
The addition of the two numbers is 0.0256 × 10^2 + 2.3400 × 10^2 = 2.3656 × 10^2.
After padding the second number (i.e., 2.34 × 10^2) with two 0s, the digit after 4 is the guard digit, and the digit after that is the round digit. The result after rounding is 2.37, as opposed to 2.36 without the extra digits (guard and round digits), i.e., by considering only 0.02 + 2.34 = 2.36. The error therefore is 0.01.
|
https://en.wikipedia.org/wiki/Guard_digit
|
Interval arithmetic(also known asinterval mathematics;interval analysisorinterval computation) is a mathematical technique used to mitigateroundingandmeasurement errorsinmathematical computationby computing functionbounds.Numerical methodsinvolving intervalarithmeticcan guarantee relatively reliable and mathematically correct results. Instead of representing a value as a single number, interval arithmetic or interval mathematics represents each value as arange of possibilities.
Mathematically, instead of working with an uncertainreal-valuedvariablex{\displaystyle x}, interval arithmetic works with an interval[a,b]{\displaystyle [a,b]}that defines the range of values thatx{\displaystyle x}can have. In other words, any value of the variablex{\displaystyle x}lies in the closed interval betweena{\displaystyle a}andb{\displaystyle b}. A functionf{\displaystyle f}, when applied tox{\displaystyle x}, produces an interval[c,d]{\displaystyle [c,d]}which includes all the possible values forf(x){\displaystyle f(x)}for allx∈[a,b]{\displaystyle x\in [a,b]}.
Interval arithmetic is suitable for a variety of purposes; the most common use is in scientific works, particularly when the calculations are handled by software, where it is used to keep track ofrounding errorsin calculations and of uncertainties in the knowledge of the exact values of physical and technical parameters. The latter often arise from measurement errors and tolerances for components or due to limits on computational accuracy. Interval arithmetic also helps find guaranteed solutions to equations (such asdifferential equations) andoptimization problems.
The main objective of intervalarithmeticis to provide a simple way of calculatingupper and lower boundsof a function's range in one or more variables. These endpoints are not necessarily the truesupremumorinfimumof a range since the precise calculation of those values can be difficult or impossible; the bounds only need to contain the function's range as a subset.
This treatment is typically limited to real intervals, so quantities in the form [a, b] = {x : a ≤ x ≤ b},
wherea=−∞{\displaystyle a={-\infty }}andb=∞{\displaystyle b={\infty }}are allowed. With one ofa{\displaystyle a},b{\displaystyle b}infinite, the interval would be an unbounded interval; with both infinite, the interval would be the extended real number line. Since a real numberr{\displaystyle r}can be interpreted as the interval[r,r],{\displaystyle [r,r],}intervals and real numbers can be freely combined.
Consider the calculation of a person'sbody mass index(BMI). BMI is calculated as a person's body weight in kilograms divided by the square of their height in meters. Suppose a person uses a scale that has a precision of one kilogram, where intermediate values cannot be discerned, and the true weight is rounded to the nearest whole number. For example, 79.6 kg and 80.3 kg are indistinguishable, as the scale can only display values to the nearest kilogram. It is unlikely that when the scale reads 80 kg, the person has a weight ofexactly80.0 kg. Thus, the scale displaying 80 kg indicates a weight between 79.5 kg and 80.5 kg, or the interval[79.5,80.5){\displaystyle [79.5,80.5)}.
The BMI of a man who weighs 80 kg and is 1.80 m tall is approximately 24.7. A weight of 79.5 kg and the same height yields a BMI of 24.537, while a weight of 80.5 kg yields 24.846. Since the BMI is continuous and strictly increasing in weight over the specified interval, the true BMI must lie within the interval [24.537, 24.846]. Since the entire interval is less than 25, which is the cutoff between normal and excessive weight, it can be concluded with certainty that the man is of normal weight.
The error in this example does not affect the conclusion (normal weight), but this is not generally true. If the man were slightly heavier, the BMI's range may include the cutoff value of 25. In such a case, the scale's precision would be insufficient to make a definitive conclusion.
The range of BMI examples could be reported as[24.5,24.9]{\displaystyle [24.5,24.9]}since this interval is a superset of the calculated interval. The range could not, however, be reported as[24.6,24.8]{\displaystyle [24.6,24.8]}, as the interval does not contain possible BMI values.
Height and body weight both affect the value of the BMI. Though the example above only considered variation in weight, height is also subject to uncertainty. Height measurements in meters are usually rounded to the nearest centimeter: a recorded measurement of 1.79 meters represents a height in the interval [1.785, 1.795). Since the BMI uniformly increases with respect to weight and decreases with respect to height, the error interval can be calculated by substituting the lowest and highest values of each interval, and then selecting the lowest and highest results as boundaries. The BMI must therefore lie in the interval [79.5/1.795^2, 80.5/1.785^2] ≈ [24.67, 25.27].
In this case, the man may have normal weight or be overweight; the weight and height measurements were insufficiently precise to make a definitive conclusion.
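A minimal C sketch of this endpoint calculation, using the measurement intervals from the example above (plain double arithmetic, with outward rounding ignored for simplicity):

#include <stdio.h>

int main(void) {
    /* Scale reading 80 kg with 1 kg precision, recorded height 1.79 m
       with 1 cm precision: both measurements are really intervals. */
    double w_lo = 79.5,  w_hi = 80.5;    /* kg */
    double h_lo = 1.785, h_hi = 1.795;   /* m  */

    /* BMI = weight / height^2 increases with weight and decreases with
       height, so the extreme values come from opposite endpoints. */
    double bmi_lo = w_lo / (h_hi * h_hi);
    double bmi_hi = w_hi / (h_lo * h_lo);

    printf("BMI lies in [%.3f, %.3f]\n", bmi_lo, bmi_hi);  /* about [24.674, 25.265] */
    return 0;
}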
A binary operation ⋆ on two intervals, such as addition or multiplication, is defined by [x1, x2] ⋆ [y1, y2] = { x ⋆ y : x ∈ [x1, x2], y ∈ [y1, y2] }.
In other words, it is the set of all possible values of x ⋆ y, where x and y are in their corresponding intervals. If ⋆ is monotone in each operand on the intervals, which is the case for the four basic arithmetic operations (except division when the denominator contains 0), the extreme values occur at the endpoints of the operand intervals. Writing out all combinations, one way of stating this is [x1, x2] ⋆ [y1, y2] = [ min{x1⋆y1, x1⋆y2, x2⋆y1, x2⋆y2}, max{x1⋆y1, x1⋆y2, x2⋆y1, x2⋆y2} ],
provided thatx⋆y{\displaystyle x\star y}is defined for allx∈[x1,x2]{\displaystyle x\in [x_{1},x_{2}]}andy∈[y1,y2]{\displaystyle y\in [y_{1},y_{2}]}.
For practical applications, this can be simplified further:
Addition: [x1, x2] + [y1, y2] = [x1 + y1, x2 + y2]
Subtraction: [x1, x2] − [y1, y2] = [x1 − y2, x2 − y1]
Multiplication: [x1, x2] · [y1, y2] = [min{x1y1, x1y2, x2y1, x2y2}, max{x1y1, x1y2, x2y1, x2y2}]
Division: [x1, x2] / [y1, y2] = [x1, x2] · [1/y2, 1/y1] when 0 ∉ [y1, y2]; when y1 < 0 < y2, the quotient is conventionally widened to [−∞, ∞].
The last case loses useful information about the exclusion of(1/y1,1/y2){\displaystyle (1/y_{1},1/y_{2})}. Thus, it is common to work with[−∞,1y1]{\displaystyle \left[-\infty ,{\tfrac {1}{y_{1}}}\right]}and[1y2,∞]{\displaystyle \left[{\tfrac {1}{y_{2}}},\infty \right]}as separate intervals. More generally, when working with discontinuous functions, it is sometimes useful to do the calculation with so-calledmulti-intervalsof the form⋃i[ai,bi].{\textstyle \bigcup _{i}\left[a_{i},b_{i}\right].}The correspondingmulti-interval arithmeticmaintains a set of (usually disjoint) intervals and also provides for overlapping intervals to unite.[1]
Interval multiplication often requires only two multiplications. If x1 and y1 are nonnegative, [x1, x2] · [y1, y2] = [x1·y1, x2·y2].
The multiplication can be interpreted as the area of a rectangle with varying edges. The result interval covers all possible areas, from the smallest to the largest.
With the help of these definitions, it is already possible to calculate the range of simple functions, such as f(a, b, x) = a·x + b. For example, if a = [1, 2], b = [5, 7] and x = [2, 3]: f(a, b, x) = [1, 2]·[2, 3] + [5, 7] = [2, 6] + [5, 7] = [7, 13].
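A minimal sketch of these rules in C; the interval type and the helpers iadd and imul are illustrative names, and outward rounding is ignored for brevity.

#include <stdio.h>

typedef struct { double lo, hi; } interval;

static interval iadd(interval a, interval b) {
    return (interval){ a.lo + b.lo, a.hi + b.hi };
}

static interval imul(interval a, interval b) {
    /* General rule: take the min and max of all four endpoint products. */
    double p[4] = { a.lo*b.lo, a.lo*b.hi, a.hi*b.lo, a.hi*b.hi };
    double lo = p[0], hi = p[0];
    for (int i = 1; i < 4; i++) {
        if (p[i] < lo) lo = p[i];
        if (p[i] > hi) hi = p[i];
    }
    return (interval){ lo, hi };
}

int main(void) {
    /* f(a, b, x) = a*x + b with a = [1,2], b = [5,7], x = [2,3]. */
    interval a = {1, 2}, b = {5, 7}, x = {2, 3};
    interval f = iadd(imul(a, x), b);
    printf("f = [%g, %g]\n", f.lo, f.hi);   /* [7, 13] */
    return 0;
}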
To shorten the notation of intervals, brackets can be used.
[x] ≡ [x1, x2] can be used to represent an interval. Note that in such a compact notation, [x] denotes a general interval and should not be confused with the single-point (degenerate) interval [x1, x1]. For the set of all intervals, we can use
as an abbreviation. For a vector of intervals([x]1,…,[x]n)∈[R]n{\displaystyle \left([x]_{1},\ldots ,[x]_{n}\right)\in [\mathbb {R} ]^{n}}we can use a bold font:[x]{\displaystyle [\mathbf {x} ]}.
Interval functions beyond the four basic operators may also be defined.
Formonotonic functionsin one variable, the range of values is simple to compute. Iff:R→R{\displaystyle f:\mathbb {R} \to \mathbb {R} }is monotonically increasing (resp. decreasing) in the interval[x1,x2],{\displaystyle [x_{1},x_{2}],}then for ally1,y2∈[x1,x2]{\displaystyle y_{1},y_{2}\in [x_{1},x_{2}]}such thaty1<y2,{\displaystyle y_{1}<y_{2},}f(y1)≤f(y2){\displaystyle f(y_{1})\leq f(y_{2})}(resp.f(y2)≤f(y1){\displaystyle f(y_{2})\leq f(y_{1})}).
The range corresponding to the interval [y1, y2] ⊆ [x1, x2] can therefore be calculated by applying the function to its endpoints: f([y1, y2]) = [min{f(y1), f(y2)}, max{f(y1), f(y2)}].
From this, the following basic features for interval functions can easily be defined:
For even powers, the range of values being considered is important and needs to be dealt with before doing any multiplication. For example,xn{\displaystyle x^{n}}forx∈[−1,1]{\displaystyle x\in [-1,1]}should produce the interval[0,1]{\displaystyle [0,1]}whenn=2,4,6,….{\displaystyle n=2,4,6,\ldots .}But if[−1,1]n{\displaystyle [-1,1]^{n}}is taken by repeating interval multiplication of form[−1,1]⋅[−1,1]⋅⋯⋅[−1,1]{\displaystyle [-1,1]\cdot [-1,1]\cdot \cdots \cdot [-1,1]}then the result is[−1,1],{\displaystyle [-1,1],}wider than necessary.
More generally one can say that, for piecewise monotonic functions, it is sufficient to consider the endpointsx1{\displaystyle x_{1}},x2{\displaystyle x_{2}}of an interval, together with the so-calledcritical pointswithin the interval, being those points where the monotonicity of the function changes direction. For thesineandcosinefunctions, the critical points are at(12+n)π{\displaystyle \left({\tfrac {1}{2}}+n\right)\pi }ornπ{\displaystyle n\pi }forn∈Z{\displaystyle n\in \mathbb {Z} }, respectively. Thus, only up to five points within an interval need to be considered, as the resulting interval is[−1,1]{\displaystyle [-1,1]}if the interval includes at least two extrema. For sine and cosine, only the endpoints need full evaluation, as the critical points lead to easily pre-calculated values—namely −1, 0, and 1.
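The special handling of even powers can be sketched as follows; ipow_even is a hypothetical helper that returns the exact image of an interval under x^n for even n, rather than the looser result of repeated interval multiplication.

#include <stdio.h>
#include <math.h>

typedef struct { double lo, hi; } interval;

/* Interval x^n for even n: the image is never negative, so an interval that
   straddles zero maps to [0, max(|lo|, |hi|)^n]. */
static interval ipow_even(interval x, int n) {
    double a = fabs(x.lo), b = fabs(x.hi);
    double big = a > b ? a : b;
    if (x.lo <= 0.0 && x.hi >= 0.0)
        return (interval){ 0.0, pow(big, n) };
    double small_mag = a < b ? a : b;        /* smaller magnitude endpoint */
    return (interval){ pow(small_mag, n), pow(big, n) };
}

int main(void) {
    interval x = { -1.0, 1.0 };
    interval y = ipow_even(x, 2);
    printf("[-1,1]^2 = [%g, %g]\n", y.lo, y.hi);   /* [0, 1], not [-1, 1] */
    return 0;
}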
In general, it may not be easy to find such a simple description of the output interval for many functions. But it may still be possible to extend functions to interval arithmetic. If f : R^n → R is a function from a real vector to a real number, then [f] : [R]^n → [R] is called an interval extension of f if [f]([x]) ⊇ { f(x) : x ∈ [x] } for every interval vector [x].
This definition of the interval extension does not give a precise result. For example, both[f]([x1,x2])=[ex1,ex2]{\displaystyle [f]([x_{1},x_{2}])=[e^{x_{1}},e^{x_{2}}]}and[g]([x1,x2])=[−∞,∞]{\displaystyle [g]([x_{1},x_{2}])=[{-\infty },{\infty }]}are allowable extensions of the exponential function. Tighter extensions are desirable, though the relative costs of calculation and imprecision should be considered; in this case,[f]{\displaystyle [f]}should be chosen as it gives the tightest possible result.
Given a real expression, itsnatural interval extensionis achieved by using the interval extensions of each of its subexpressions, functions, and operators.
TheTaylor interval extension(of degreek{\displaystyle k}) is ak+1{\displaystyle k+1}times differentiable functionf{\displaystyle f}defined by
for somey∈[x]{\displaystyle \mathbf {y} \in [\mathbf {x} ]}, whereDif(y){\displaystyle \mathrm {D} ^{i}f(\mathbf {y} )}is thei{\displaystyle i}-th order differential off{\displaystyle f}at the pointy{\displaystyle \mathbf {y} }and[r]{\displaystyle [r]}is an interval extension of theTaylor remainder.
The vector ξ lies between x and y with x, y ∈ [x], so ξ is itself contained in [x].
Usually one choosesy{\displaystyle \mathbf {y} }to be the midpoint of the interval and uses the natural interval extension to assess the remainder.
The special case of the Taylor interval extension of degreek=0{\displaystyle k=0}is also referred to as themean value form.
An interval can be defined as a set of points within a specified distance of the center, and this definition can be extended from real numbers tocomplex numbers.[2]Another extension defines intervals as rectangles in the complex plane. As is the case with computing with real numbers, computing with complex numbers involves uncertain data. So, given the fact that an interval number is a real closed interval and a complex number is an ordered pair ofreal numbers, there is no reason to limit the application of interval arithmetic to the measure of uncertainties in computations with real numbers.[3]Interval arithmetic can thus be extended, via complex interval numbers, to determine regions of uncertainty in computing with complex numbers. One can either define complex interval arithmetic using rectangles or using disks, both with their respective advantages and disadvantages.[3]
The basic algebraic operations for real interval numbers (real closed intervals) can be extended to complex numbers. It is therefore not surprising that complex interval arithmetic is similar to, but not the same as, ordinary complex arithmetic.[3]It can be shown that, as is the case with real interval arithmetic, there is no distributivity between the addition and multiplication of complex interval numbers except for certain special cases, and inverse elements do not always exist for complex interval numbers.[3]Two other useful properties of ordinary complex arithmetic fail to hold in complex interval arithmetic: the additive and multiplicative properties, of ordinary complex conjugates, do not hold for complex interval conjugates.[3]
Interval arithmetic can be extended, in an analogous manner, to other multidimensionalnumber systemssuch asquaternionsandoctonions, but with the expense that we have to sacrifice other useful properties of ordinary arithmetic.[3]
The methods of classical numerical analysis cannot be transferred one-to-one into interval-valued algorithms, as dependencies between numerical values are usually not taken into account.
To work effectively in a real-life implementation, intervals must be compatible with floating point computing. The earlier operations were based on exact arithmetic, but in general fast numerical solution methods may not be available for it. The range of values of the function f(x, y) = x + y for x ∈ [0.1, 0.8] and y ∈ [0.06, 0.08] is, for example, [0.16, 0.88]. If the same calculation is done with single-digit precision, the result would normally be [0.2, 0.9]. But [0.2, 0.9] ⊉ [0.16, 0.88],
so this approach would contradict the basic principles of interval arithmetic, as part of the range of f([0.1, 0.8], [0.06, 0.08]) would be lost. Instead, the outward rounded solution [0.1, 0.9] is used.
The standardIEEE 754for binary floating-point arithmetic also sets out procedures for the implementation of rounding. An IEEE 754 compliant system allows programmers to round to the nearest floating-point number; alternatives are rounding towards 0 (truncating), rounding toward positive infinity (i.e., up), or rounding towards negative infinity (i.e., down).
The requiredexternal roundingfor interval arithmetic can thus be achieved by changing the rounding settings of the processor in the calculation of the upper limit (up) and lower limit (down). Alternatively, an appropriate small interval[ε1,ε2]{\displaystyle [\varepsilon _{1},\varepsilon _{2}]}can be added.
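On an IEEE 754 system, outward rounding can be sketched by switching the processor rounding mode around each bound computation, as in the following C fragment. The name add_outward is illustrative; a production library would cover all operations and manage the rounding mode more carefully, and some compilers need extra flags (such as -frounding-math) for the mode switch to be honored under optimization.

#include <stdio.h>
#include <fenv.h>

typedef struct { double lo, hi; } interval;

/* Outward-rounded interval addition: the lower bound is computed while the
   FPU rounds toward -infinity and the upper bound while it rounds toward
   +infinity, so the exact sum is guaranteed to lie inside the result. */
static interval add_outward(interval a, interval b) {
    interval r;
    fesetround(FE_DOWNWARD);
    r.lo = a.lo + b.lo;
    fesetround(FE_UPWARD);
    r.hi = a.hi + b.hi;
    fesetround(FE_TONEAREST);   /* restore the default rounding mode */
    return r;
}

int main(void) {
    interval x = {0.1, 0.8};
    interval y = {0.06, 0.08};
    interval s = add_outward(x, y);
    printf("[%.17g, %.17g]\n", s.lo, s.hi);
    return 0;
}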
The so-called "dependency" problemis a major obstacle to the application of interval arithmetic. Although interval methods can determine the range of elementary arithmetic operations and functions very accurately, this is not always true with more complicated functions. If an interval occurs several times in a calculation using parameters, and each occurrence is taken independently, then this can lead to an unwanted expansion of the resulting intervals.
As an illustration, take the function f defined by f(x) = x^2 + x. The values of this function over the interval [−1, 1] are [−1/4, 2]. As the natural interval extension, it is calculated as [−1, 1]^2 + [−1, 1] = [0, 1] + [−1, 1] = [−1, 2],
which is slightly larger; we have instead calculated the infimum and supremum of the function h(x, y) = x^2 + y over x, y ∈ [−1, 1]. There is a better expression of f in which the variable x only appears once, namely by completing the square: f(x) = (x + 1/2)^2 − 1/4.
So the suitable interval calculation is ([−1, 1] + 1/2)^2 − 1/4 = [−1/2, 3/2]^2 − 1/4 = [0, 9/4] − 1/4 = [−1/4, 2]
and gives the correct values.
In general, it can be shown that the exact range of values can be achieved, if each variable appears only once and iff{\displaystyle f}is continuous inside the box. However, not every function can be rewritten this way.
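The effect can be sketched in C by evaluating both forms of f over [−1, 1]; isqr is an illustrative helper returning the exact interval square, and iadd/imul are the elementary operations described earlier.

#include <stdio.h>

typedef struct { double lo, hi; } interval;

static interval iadd(interval a, interval b) { return (interval){ a.lo + b.lo, a.hi + b.hi }; }

static interval imul(interval a, interval b) {
    double p[4] = { a.lo*b.lo, a.lo*b.hi, a.hi*b.lo, a.hi*b.hi };
    double lo = p[0], hi = p[0];
    for (int i = 1; i < 4; i++) { if (p[i] < lo) lo = p[i]; if (p[i] > hi) hi = p[i]; }
    return (interval){ lo, hi };
}

static interval isqr(interval a) {                /* exact interval square: image >= 0 */
    interval m = imul(a, a);
    if (a.lo <= 0.0 && a.hi >= 0.0) m.lo = 0.0;   /* zero is attained inside a */
    return m;
}

int main(void) {
    interval x = { -1.0, 1.0 };

    /* Natural extension of f(x) = x^2 + x: the two occurrences of x are
       treated as independent, so the result is wider than the true range. */
    interval f1 = iadd(isqr(x), x);

    /* Rewritten form (x + 1/2)^2 - 1/4: x appears only once, giving the
       exact range. */
    interval half = { 0.5, 0.5 }, quarter = { -0.25, -0.25 };
    interval f2 = iadd(isqr(iadd(x, half)), quarter);

    printf("natural   : [%g, %g]\n", f1.lo, f1.hi);   /* [-1, 2]    */
    printf("rewritten : [%g, %g]\n", f2.lo, f2.hi);   /* [-0.25, 2] */
    return 0;
}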
In unfavourable cases, the over-estimation of the value range caused by the dependency problem can cover a very wide range, preventing more meaningful conclusions.
An additional increase in the range stems from solution sets that do not take the form of an interval vector. The solution set of the linear system x = p, y = p, with parameter p ∈ [−1, 1],
is precisely the line between the points(−1,−1){\displaystyle (-1,-1)}and(1,1).{\displaystyle (1,1).}Using interval methods results in the unit square,[−1,1]×[−1,1].{\displaystyle [-1,1]\times [-1,1].}This is known as thewrapping effect.
A linear interval system consists of a matrix interval extension [A] ∈ [R]^(n×m) and an interval vector [b] ∈ [R]^n. We want the smallest cuboid [x] ∈ [R]^m containing all vectors x ∈ R^m for which there is a pair (A, b) with A ∈ [A] and b ∈ [b] satisfying A·x = b.
For quadratic systems – in other words, forn=m{\displaystyle n=m}– there can be such an interval vector[x]{\displaystyle [\mathbf {x} ]}, which covers all possible solutions, found simply with the interval Gauss method. This replaces the numerical operations, in that the linear algebra method known asGaussian eliminationbecomes its interval version. However, since this method uses the interval entities[A]{\displaystyle [\mathbf {A} ]}and[b]{\displaystyle [\mathbf {b} ]}repeatedly in the calculation, it can produce poor results for some problems. Hence using the result of the interval-valued Gauss only provides first rough estimates, since although it contains the entire solution set, it also has a large area outside it.
A rough solution[x]{\displaystyle [\mathbf {x} ]}can often be improved by an interval version of theGauss–Seidel method.
The motivation for this is that the i-th row of the interval extension of the linear equation,
[a_i1]·x1 + ⋯ + [a_in]·xn = [b_i],
can be solved for the variable x_i if the division 1/[a_ii] is allowed. It is therefore simultaneously true that x_i ∈ [x_i] and x_i ∈ ([b_i] − Σ_{j≠i} [a_ij]·[x_j]) / [a_ii].
So we can now replace [x_i] by
[x_i] ∩ ( ([b_i] − Σ_{j≠i} [a_ij]·[x_j]) / [a_ii] ),
and thus refine the vector [x] element by element.
Since the procedure is more efficient for a diagonally dominant matrix, instead of the system [A]·x = [b], one can often try multiplying it by an appropriate rational matrix M, with the resulting matrix equation (M·[A])·x = M·[b]
left to solve. If one chooses, for example, M = A^−1 for the central matrix A ∈ [A], then M·[A] is an outer extension of the identity matrix.
These methods only work well if the widths of the intervals occurring are sufficiently small. For wider intervals, it can be useful to use an interval-linear system on finite (albeit large) real number equivalent linear systems. If all the matricesA∈[A]{\displaystyle \mathbf {A} \in [\mathbf {A} ]}are invertible, it is sufficient to consider all possible combinations (upper and lower) of the endpoints occurring in the intervals. The resulting problems can be resolved using conventional numerical methods. Interval arithmetic is still used to determine rounding errors.
This is only suitable for systems of smaller dimension, since with a fully occupiedn×n{\displaystyle n\times n}matrix,2n2{\displaystyle 2^{n^{2}}}real matrices need to be inverted, with2n{\displaystyle 2^{n}}vectors for the right-hand side. This approach was developed by Jiri Rohn and is still being developed.[4]
An interval variant of Newton's method for finding the zeros in an interval vector [x] can be derived from the mean value extension.[5] For an unknown zero z ∈ [x], the mean value form applied at a point y ∈ [x] gives f(z) ∈ f(y) + [J_f]([x]) · (z − y).
For a zero z, that is f(z) = 0, it must therefore hold that 0 ∈ f(y) + [J_f]([x]) · (z − y).
This is equivalent toz∈y−[Jf]([x])−1⋅f(y){\displaystyle \mathbf {z} \in \mathbf {y} -[J_{f}](\mathbf {[x]} )^{-1}\cdot f(\mathbf {y} )}.
An outer estimate of [J_f]([x])^−1 · f(y) can be determined using linear methods.
In each step of the interval Newton method, an approximate starting value[x]∈[R]n{\displaystyle [\mathbf {x} ]\in [\mathbb {R} ]^{n}}is replaced by[x]∩(y−[Jf]([x])−1⋅f(y)){\displaystyle [\mathbf {x} ]\cap \left(\mathbf {y} -[J_{f}](\mathbf {[x]} )^{-1}\cdot f(\mathbf {y} )\right)}and so the result can be improved. In contrast to traditional methods, the interval method approaches the result by containing the zeros. This guarantees that the result produces all zeros in the initial range. Conversely, it proves that no zeros off{\displaystyle f}were in the initial range[x]{\displaystyle [\mathbf {x} ]}if a Newton step produces the empty set.
The method converges on all zeros in the starting region. Division by zero can lead to the separation of distinct zeros, though the separation may not be complete; it can be complemented by thebisection method.
As an example, consider the function f(x) = x^2 − 2, the starting range [x] = [−2, 2], and the point y = 0. We then have J_f(x) = 2x and the first Newton step gives [−2, 2] ∩ (0 − (−2)/[−4, 4]) = [−2, 2] ∩ ([−∞, −0.5] ∪ [0.5, ∞]) = [−2, −0.5] ∪ [0.5, 2].
More Newton steps are used separately onx∈[−2,−0.5]{\displaystyle x\in [{-2},{-0.5}]}and[0.5,2]{\displaystyle [{0.5},{2}]}. These converge to arbitrarily small intervals around−2{\displaystyle -{\sqrt {2}}}and+2{\displaystyle +{\sqrt {2}}}.
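A much-simplified sketch of one branch of this iteration, restricted to the subinterval [0.5, 2] where the derivative interval 2[x] does not contain zero (so no extended division is needed); the names are illustrative, and outward rounding is again ignored, so the enclosure is not formally rigorous.

#include <stdio.h>

typedef struct { double lo, hi; } interval;

static interval make(double lo, double hi) { interval r = { lo, hi }; return r; }

static interval isub(interval a, interval b) { return make(a.lo - b.hi, a.hi - b.lo); }

/* Interval division assuming the denominator is strictly positive. */
static interval idiv_pos(interval a, interval b) {
    double c[4] = { a.lo / b.lo, a.lo / b.hi, a.hi / b.lo, a.hi / b.hi };
    double lo = c[0], hi = c[0];
    for (int i = 1; i < 4; i++) { if (c[i] < lo) lo = c[i]; if (c[i] > hi) hi = c[i]; }
    return make(lo, hi);
}

static interval intersect(interval a, interval b) {
    return make(a.lo > b.lo ? a.lo : b.lo, a.hi < b.hi ? a.hi : b.hi);
}

int main(void) {
    /* Enclose the positive zero of f(x) = x^2 - 2 starting from [0.5, 2]. */
    interval x = make(0.5, 2.0);
    for (int k = 1; k <= 6; k++) {
        double y = 0.5 * (x.lo + x.hi);                 /* expansion point          */
        interval fy  = make(y * y - 2.0, y * y - 2.0);  /* f(y) as a point interval */
        interval dfx = make(2.0 * x.lo, 2.0 * x.hi);    /* f'([x]) = 2[x], positive */
        interval newton = isub(make(y, y), idiv_pos(fy, dfx));  /* y - f(y)/f'([x]) */
        x = intersect(x, newton);
        printf("step %d: [%.12f, %.12f]\n", k, x.lo, x.hi);
    }
    return 0;
}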
The Interval Newton method can also be used withthick functionssuch asg(x)=x2−[2,3]{\displaystyle g(x)=x^{2}-[2,3]}, which would in any case have interval results. The result then produces intervals containing[−3,−2]∪[2,3]{\displaystyle \left[-{\sqrt {3}},-{\sqrt {2}}\right]\cup \left[{\sqrt {2}},{\sqrt {3}}\right]}.
The various interval methods deliver conservative results as dependencies between the sizes of different interval extensions are not taken into account. However, the dependency problem becomes less significant for narrower intervals.
Covering an interval vector [x] by smaller boxes [x_1], …, [x_k], so that [x] = ⋃_{i=1}^{k} [x_i],
means that f([x]) = ⋃_{i=1}^{k} f([x_i]) is then valid for the range of values.
So, for the interval extensions described above, the following holds: [f]([x]) ⊇ ⋃_{i=1}^{k} [f]([x_i]) ⊇ f([x]).
Since[f]([x]){\displaystyle [f]([\mathbf {x} ])}is often a genuinesupersetof the right-hand side, this usually leads to an improved estimate.
Such a cover can be generated by the bisection method: a thick element [x_i1, x_i2] of the interval vector [x] = ([x_11, x_12], …, [x_n1, x_n2]) is split at its center into the two intervals [x_i1, (x_i1 + x_i2)/2] and [(x_i1 + x_i2)/2, x_i2]. If the result is still not suitable then further gradual subdivision is possible. A cover of 2^r intervals results from r divisions of vector elements, substantially increasing the computation costs.
With very wide intervals, it can be helpful to split all intervals into several subintervals with a constant (and smaller) width, a method known asmincing. This then avoids the calculations for intermediate bisection steps. Both methods are only suitable for problems of low dimension.
Interval arithmetic can be used in various areas (such asset inversion,motion planning,set estimation, or stability analysis) to treat estimates with no exact numerical value.[6]
Interval arithmetic is used with error analysis, to control rounding errors arising from each calculation. The advantage of interval arithmetic is that after each operation there is an interval that reliably includes the true result. The distance between the interval boundaries gives the current size of the accumulated rounding error directly: for a computed enclosure [a, b], the error of any representative value chosen from the interval is at most the width b − a.
Interval analysis adds to rather than substituting for traditional methods for error reduction, such aspivoting.
Parameters for which no exact figures can be allocated often arise during the simulation of technical and physical processes. The production process of technical components allows certain tolerances, so some parameters fluctuate within intervals. In addition, many fundamental constants are not known precisely.[1]
If the behavior of such a system affected by tolerances satisfies, for example, f(x, p) = 0 for p ∈ [p] and unknown x, then the set of possible solutions, { x : f(x, p) = 0 for some p ∈ [p] },
can be found by interval methods. This provides an alternative to traditionalpropagation of erroranalysis. Unlike point methods, such asMonte Carlo simulation, interval arithmetic methodology ensures that no part of the solution area can be overlooked. However, the result is always a worst-case analysis for the distribution of error, as other probability-based distributions are not considered.
Interval arithmetic can also be used with affiliation functions for fuzzy quantities as they are used infuzzy logic. Apart from the strict statementsx∈[x]{\displaystyle x\in [x]}andx∉[x]{\displaystyle x\not \in [x]}, intermediate values are also possible, to which real numbersμ∈[0,1]{\displaystyle \mu \in [0,1]}are assigned.μ=1{\displaystyle \mu =1}corresponds to definite membership whileμ=0{\displaystyle \mu =0}is non-membership. A distribution function assigns uncertainty, which can be understood as a further interval.
Forfuzzy arithmetic[7]only a finite number of discrete affiliation stagesμi∈[0,1]{\displaystyle \mu _{i}\in [0,1]}are considered. The form of such a distribution for an indistinct value can then be represented by a sequence of intervals.
The interval[x(i)]{\displaystyle \left[x^{(i)}\right]}corresponds exactly to the fluctuation range for the stageμi.{\displaystyle \mu _{i}.}
The appropriate distribution for a function f(x1, …, xn) concerning indistinct values x1, …, xn and the corresponding sequences [x1^(i)], …, [xn^(i)] can be approximated by the sequence [y^(1)], …, [y^(m)], where [y^(i)] = f([x1^(i)], …, [xn^(i)])
and can be calculated by interval methods. The value[y(1)]{\displaystyle \left[y^{(1)}\right]}corresponds to the result of an interval calculation.
Warwick Tuckerused interval arithmetic in order to solve the 14th ofSmale's problems, that is, to show that theLorenz attractoris astrange attractor.[8]Thomas Halesused interval arithmetic in order to solve theKepler conjecture.
Interval arithmetic is not a completely new phenomenon in mathematics; it has appeared several times under different names in the course of history. For example,Archimedescalculated lower and upper bounds 223/71 <π< 22/7 in the 3rd century BC. Actual calculation with intervals has neither been as popular as other numerical techniques nor been completely forgotten.
Rules for calculating with intervals and other subsets of the real numbers were published in a 1931 work by Rosalind Cicely Young.[9]Arithmetic work on range numbers to improve the reliability of digital systems was then published in a 1951 textbook on linear algebra by Paul S. Dwyer;[10]intervals were used to measure rounding errors associated with floating-point numbers. A comprehensive paper on interval algebra in numerical analysis was published by Teruo Sunaga (1958).[11]
The birth of modern interval arithmetic was marked by the appearance of the bookInterval AnalysisbyRamon E. Moorein 1966.[12][13]He had the idea in spring 1958, and a year later he published an article about computer interval arithmetic.[14]Its merit was that starting with a simple principle, it provided a general method for automated error analysis, not just errors resulting from rounding.
Independently in 1956,Mieczyslaw Warmussuggested formulae for calculations with intervals,[15]though Moore found the first non-trivial applications.
In the following twenty years, German groups of researchers carried out pioneering work around Ulrich W. Kulisch[16][17] and Götz Alefeld[18] at the University of Karlsruhe and later also at the Bergische University of Wuppertal.
For example, Karl Nickel explored more effective implementations, while improved containment procedures for the solution set of systems of equations were due to Arnold Neumaier, among others. In the 1960s, Eldon R. Hansen dealt with interval extensions for linear equations and then provided crucial contributions to global optimization, including what is now known as Hansen's method, perhaps the most widely used interval algorithm.[5]Classical methods for this problem aim to determine the largest (or smallest) global value, but often could only find a local optimum without being able to find better values; Helmut Ratschek and Jon George Rokne developed branch and bound methods, which until then had only been applied to integer values, by using intervals to provide applications for continuous values.
In 1988, Rudolf Lohner developedFortran-based software for reliable solutions for initial value problems usingordinary differential equations.[19]
The journalReliable Computing(originallyInterval Computations) has been published since the 1990s, dedicated to the reliability of computer-aided computations. As lead editor, R. Baker Kearfott, in addition to his work on global optimization, has contributed significantly to the unification of notation and terminology used in interval arithmetic.[20]
In recent years work has concentrated in particular on the estimation ofpreimagesof parameterized functions and to robust control theory by the COPRIN working group ofINRIAinSophia Antipolisin France.[21]
There are many software packages that permit the development of numerical applications using interval arithmetic.[22]These are usually provided in the form of program libraries. There are alsoC++and Fortrancompilersthat handle interval data types and suitable operations as a language extension, so interval arithmetic is supported directly.
Since 1967, Extensions for Scientific Computation (XSC) have been developed at the University of Karlsruhe for various programming languages, such as C++, Fortran, and Pascal.[23]The first platform was a Zuse Z23, for which a new interval data type with appropriate elementary operators was made available. In 1976 there followed Pascal-SC, a Pascal variant on a Zilog Z80 that made it possible to create fast, complicated routines for automated result verification. Then came the Fortran 77-based ACRITH-XSC for the System/370 architecture (FORTRAN-SC), which was later delivered by IBM. Starting from 1991 one could produce code for C compilers with Pascal-XSC; a year later the C++ class library supported C-XSC on many different computer systems. In 1997, all XSC variants were made available under the GNU General Public License. At the beginning of 2000, C-XSC 2.0 was released under the leadership of the working group for scientific computation at the Bergische University of Wuppertal to correspond to the improved C++ standard.
Another C++-class library was created in 1993 at theHamburg University of TechnologycalledProfil/BIAS(Programmer's Runtime Optimized Fast Interval Library, Basic Interval Arithmetic), which made the usual interval operations more user-friendly. It emphasized the efficient use of hardware, portability, and independence of a particular presentation of intervals.
TheBoost collectionof C++ libraries contains a template class for intervals. Its authors are aiming to have interval arithmetic in the standard C++ language.[24]
TheFrinkprogramming language has an implementation of interval arithmetic that handlesarbitrary-precision numbers. Programs written in Frink can use intervals without rewriting or recompilation.
GAOL[25]is another C++ interval arithmetic library that is unique in that it offers the relational interval operators used in intervalconstraint programming.
The Moore library[26]is an efficient implementation of interval arithmetic in C++. It provides intervals with endpoints of arbitrary precision and is based on theconceptsfeature of C++.
TheJuliaprogramming language[27]has an implementation of interval arithmetics along with high-level features, such asroot-finding(for both real and complex-valued functions) and intervalconstraint programming, via the ValidatedNumerics.jl package.[28]
In addition, computer algebra systems, such as Euler Mathematical Toolbox, FriCAS, Maple, Mathematica, Maxima[29] and MuPAD, can handle intervals. A Matlab extension Intlab[30] builds on BLAS routines, and the toolbox b4m provides a Profil/BIAS interface.[30][31]
A library for the functional languageOCamlwas written in assembly language and C.[32]
MPFI is a library for arbitrary precision interval arithmetic; it is written in C and is based onMPFR.[33]
A standard for interval arithmetic, IEEE Std 1788-2015, was approved in June 2015.[34]Two reference implementations are freely available.[35]These have been developed by members of the standard's working group: the libieeep1788[36]library for C++, and the interval package[37]for GNU Octave.
A minimal subset of the standard, IEEE Std 1788.1-2017, was approved in December 2017 and published in February 2018. It should be easier to implement and may speed production of implementations.[38]
Several international conferences or workshops take place every year in the world. The main conference is probably SCAN (International Symposium on Scientific Computing, Computer Arithmetic, and Verified Numerical Computation), but there is also SWIM (Small Workshop on Interval Methods), PPAM (International Conference on Parallel Processing and Applied Mathematics), REC (International Workshop on Reliable Engineering Computing).
|
https://en.wikipedia.org/wiki/Interval_arithmetic
|
Incomputer science, theprecisionof a numerical quantity is a measure of the detail in which the quantity is expressed. This is usually measured in bits, but sometimes in decimal digits. It is related toprecision in mathematics, which describes the number of digits that are used to express a value.
Some of the standardized precision formats are:
Of these, octuple-precision format is rarely used. The single- and double-precision formats are most widely used and supported on nearly all platforms. The use of half-precision format andminifloatformats has been increasing especially in the field ofmachine learningsince many machine learning algorithms are inherently error-tolerant.
Precision is often the source ofrounding errorsincomputation. The number of bits used to store a number will often cause some loss of accuracy. An example would be to store "sin(0.1)" in IEEE single-precision floating point standard. The error is then often magnified as subsequent computations are made using the data (although it can also be reduced).
|
https://en.wikipedia.org/wiki/Precision_(computer_science)
|
Incomputing, aroundoff error,[1]also calledrounding error,[2]is the difference between the result produced by a givenalgorithmusing exactarithmeticand the result produced by the same algorithm using finite-precision,roundedarithmetic.[3]Rounding errors are due to inexactness in the representation ofreal numbersand the arithmetic operations done with them. This is a form ofquantization error.[4]When using approximationequationsor algorithms, especially when using finitely many digits to represent real numbers (which in theory have infinitely many digits), one of the goals ofnumerical analysisis toestimatecomputation errors.[5]Computation errors, also callednumerical errors, include bothtruncation errorsand roundoff errors.
When a sequence of calculations with an input involving any roundoff error is made, errors may accumulate, sometimes dominating the calculation. In ill-conditioned problems, significant error may accumulate.[6]
In short, there are two major facets of roundoff errors involved in numerical calculations:[7] the error introduced when real numbers are represented with finitely many digits, and the error introduced when arithmetic operations are performed on those finite-precision representations.
The error introduced by attempting to represent a number using a finite string of digits is a form of roundoff error called representation error.[8] Familiar examples in decimal representation include non-terminating expansions such as 1/3 = 0.3333... and irrational constants such as π = 3.14159..., both of which must be cut off after finitely many digits.
Increasing the number of digits allowed in a representation reduces the magnitude of possible roundoff errors, but any representation limited to finitely many digits will still cause some degree of roundoff error foruncountably manyreal numbers. Additional digits used for intermediary steps of a calculation are known asguard digits.[9]
Rounding multiple times can cause error to accumulate.[10]For example, if 9.945309 is rounded to two decimal places (9.95), then rounded again to one decimal place (10.0), the total error is 0.054691. Rounding 9.945309 to one decimal place (9.9) in a single step introduces less error (0.045309). This can occur, for example, when software performs arithmetic inx86 80-bit floating-pointand then rounds the result toIEEE 754 binary64 floating-point.
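The double-rounding effect above can be checked with exact decimal arithmetic; a minimal sketch using Python's decimal module, with ROUND_HALF_UP assumed so that the tie 9.95 rounds up to 10.0 as in the example:

```python
from decimal import Decimal, ROUND_HALF_UP

x = Decimal("9.945309")

two_places = x.quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)               # 9.95
rounded_twice = two_places.quantize(Decimal("0.1"), rounding=ROUND_HALF_UP)    # 10.0
rounded_once = x.quantize(Decimal("0.1"), rounding=ROUND_HALF_UP)              # 9.9

print(rounded_twice - x)   # 0.054691, the larger error from rounding in two steps
print(rounded_once - x)    # -0.045309, the smaller error from rounding in a single step
```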
Compared with the fixed-point number system, the floating-point number system is more efficient in representing real numbers, so it is widely used in modern computers. While the real numbers $\mathbb{R}$ are infinite and continuous, a floating-point number system $F$ is finite and discrete. Thus, representation error, which leads to roundoff error, occurs under the floating-point number system.
A floating-point number system $F$ is characterized by four integers: a base (or radix) $\beta$, a precision $p$ (the number of significand digits), and an exponent range $[L, U]$.
Any $x \in F$ has the following form:
$$x = \pm (\underbrace{d_0.d_1 d_2 \ldots d_{p-1}}_{\text{significand}})_{\beta} \times \beta^{\overbrace{E}^{\text{exponent}}} = \pm \left( d_0 \times \beta^{E} + d_1 \times \beta^{E-1} + \ldots + d_{p-1} \times \beta^{E-(p-1)} \right),$$
where $d_i$ is an integer such that $0 \leq d_i \leq \beta - 1$ for $i = 0, 1, \ldots, p-1$, and $E$ is an integer such that $L \leq E \leq U$.
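As a small sketch of this normalized form for a concrete value (Python's math.frexp and float.hex are used here purely for illustration; the value 9.4 reappears in the worked example below):

```python
import math

x = 9.4
significand, exponent = math.frexp(x)   # x = significand * 2**exponent with 0.5 <= significand < 1
print(significand, exponent)            # 0.5875 4, i.e. 9.4 = 1.175 * 2**3 after renormalizing d0 to 1
print(x.hex())                          # '0x1.2cccccccccccdp+3': d0 = 1, 52 stored fraction bits, E = 3
```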
In the IEEE standard the base is binary, i.e. $\beta = 2$, and normalization is used. The IEEE standard stores the sign, exponent, and significand in separate fields of a floating-point word, each of which has a fixed width (number of bits). The two most commonly used levels of precision for floating-point numbers are single precision and double precision.
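A brief check of the $(\beta, p, L, U)$ parameters of these two formats, as exposed by Python and NumPy (a sketch; the attribute names belong to those libraries, not to the text above):

```python
import sys
import numpy as np

# IEEE double precision (binary64): beta = 2, p = 53, L = -1022, U = 1023
print(sys.float_info.radix, sys.float_info.mant_dig,
      sys.float_info.min_exp - 1, sys.float_info.max_exp - 1)

# IEEE single precision (binary32): beta = 2, p = 24, L = -126, U = 127
f32 = np.finfo(np.float32)
print(2, f32.nmant + 1, f32.minexp, f32.maxexp - 1)
```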
Machine epsilon can be used to measure the level of roundoff error in the floating-point number system. Two different definitions are in use:[3] the first defines $\epsilon_{\text{mach}}$ as the maximum possible relative error of representing a nonzero real number $x$ in the system, $\max_x |x - fl(x)|/|x|$; the second defines it as the distance from 1 to the next larger floating-point number.
There are two common rounding rules, round-by-chop and round-to-nearest. The IEEE standard uses round-to-nearest.
Suppose round-to-nearest and IEEE double precision are used, and consider the representation of 9.4. In binary, $9.4 = 1.0010\,\overline{1100} \times 2^{3}$, so the repeating pattern 1100 continues past the 52 fractional bits that the significand can hold.
Since the 53rd bit to the right of the binary point is a 1 and is followed by other nonzero bits, the round-to-nearest rule requires rounding up, that is, adding 1 to the 52nd bit. Thus, the normalized floating-point representation in the IEEE standard of 9.4 is
$$fl(9.4) = 1.0010110011001100110011001100110011001100110011001101 \times 2^{3}.$$
This representation is obtained by discarding the infinite tail
$$0.\overline{1100} \times 2^{-52} \times 2^{3} = 0.\overline{0110} \times 2^{-51} \times 2^{3} = 0.4 \times 2^{-48}$$
from the right of the significand and then adding $1 \times 2^{-52} \times 2^{3} = 2^{-49}$ in the rounding step, so the resulting roundoff error is $2^{-49} - 0.4 \times 2^{-48} = 0.1 \times 2^{-48} \approx 3.55 \times 10^{-16}$.
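This roundoff error can be confirmed with exact rational arithmetic; a minimal sketch using Python's fractions module:

```python
from fractions import Fraction

stored = Fraction(9.4)        # the exact rational value of fl(9.4), the double nearest to 9.4
true = Fraction(94, 10)       # the real number 9.4

error = stored - true
print(error == Fraction(1, 10) * Fraction(2) ** -48)   # True: the error is exactly 0.1 * 2**-48
print(float(error))                                    # approximately 3.55e-16
```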
The machine epsilon $\epsilon_{\text{mach}}$ can be used to measure the level of roundoff error when using the two rounding rules above. Below are the formulas and the corresponding proof;[3] the first definition of machine epsilon is used here. For round-by-chop, $\epsilon_{\text{mach}} = \beta^{1-p}$; for round-to-nearest, $\epsilon_{\text{mach}} = \tfrac{1}{2}\beta^{1-p}$.
Let $x = d_0.d_1 d_2 \ldots d_{p-1} d_p \ldots \times \beta^{n} \in \mathbb{R}$, where $n \in [L, U]$, and let $fl(x)$ be the floating-point representation of $x$.
Since round-by-chop is being used,
$$\frac{|x - fl(x)|}{|x|} = \frac{|d_0.d_1 d_2 \ldots d_{p-1} d_p d_{p+1} \ldots \times \beta^{n} - d_0.d_1 d_2 \ldots d_{p-1} \times \beta^{n}|}{|d_0.d_1 d_2 \ldots \times \beta^{n}|} = \frac{|d_p.d_{p+1} \ldots \times \beta^{n-p}|}{|d_0.d_1 d_2 \ldots \times \beta^{n}|} = \frac{|d_p.d_{p+1} d_{p+2} \ldots|}{|d_0.d_1 d_2 \ldots|} \times \beta^{-p}.$$
In order to determine the maximum of this quantity, one needs the maximum of the numerator and the minimum of the denominator. Since $d_0 \neq 0$ (normalized system), the minimum value of the denominator is $1$. The numerator is bounded above by $(\beta - 1).(\beta - 1)\overline{(\beta - 1)} = \beta$. Thus,
$$\frac{|x - fl(x)|}{|x|} \leq \frac{\beta}{1} \times \beta^{-p} = \beta^{1-p}.$$
Therefore, $\epsilon = \beta^{1-p}$ for round-by-chop.
The proof for round-to-nearest is similar.
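For IEEE double precision ($\beta = 2$, $p = 53$) these formulas give $\epsilon = 2^{-52}$ for round-by-chop and $2^{-53}$ for round-to-nearest; a small sketch checking both against Python's reported epsilon (which is the gap from 1 to the next double, $2^{-52}$):

```python
import sys

eps_chop = 2.0 ** -52      # beta**(1 - p) with beta = 2 and p = 53
eps_nearest = 2.0 ** -53   # half of that for round-to-nearest

print(sys.float_info.epsilon == eps_chop)   # True: the gap from 1.0 to the next double
print(1.0 + eps_nearest == 1.0)             # True: a relative change below 2**-53 is rounded away
print(1.0 + eps_chop > 1.0)                 # True: a relative change of 2**-52 survives
```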
Even when the operands of a calculation are machine numbers, that is, numbers that can be represented exactly as floating-point numbers, performing floating-point arithmetic on them may lead to roundoff error in the final result.
Machine addition consists of lining up the radix points of the two numbers to be added, adding them, and then storing the result again as a floating-point number. The addition itself can be done in higher precision, but the result must be rounded back to the specified precision, which may lead to roundoff error.[3]
For example, roundoff error can be introduced when adding a large number and a small number: shifting the radix point of the smaller significand to make the exponents match causes the loss of some of its less significant digits. This loss of precision may be described as absorption.[11]
Note that the addition of two floating-point numbers can also produce roundoff error when their sum has an exponent one larger than that of the larger operand, since the significand of the sum then needs an extra digit and may have to be rounded.
This kind of error can occur alongside an absorption error in a single operation.
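A short, hedged illustration of these addition errors in IEEE double precision (the particular constants are chosen only to make the effects visible):

```python
# Absorption: the small operand is lost when exponents are aligned.
print(1.0e16 + 1.0 == 1.0e16)     # True: doubles near 1e16 are spaced 2.0 apart
print(1.0 + 1.0e-16 == 1.0)       # True: 1e-16 is below half an ulp of 1.0

# Rounding when the sum needs one more significand digit than either operand.
x = 2.0 ** 53 - 1.0               # exactly representable with a full 53-bit significand
print(x + 2.0 == 2.0 ** 53)       # True: the exact sum 2**53 + 1 needs 54 bits and is rounded
```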
In general, the product of two p-digit significands contains up to 2p digits, so the result might not fit in the significand,[3] and roundoff error will be involved in the result.
Similarly, the quotient of two p-digit significands may contain more than p digits, so roundoff error will be involved in the result.
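A minimal sketch of the same point for multiplication and division: even when both operands are machine numbers, the exact product or quotient can need more digits than the significand holds, so the stored result is rounded.

```python
from fractions import Fraction

a = 1.0 + 2.0 ** -30    # a machine number
b = 1.0 + 2.0 ** -31    # another machine number

exact_product = Fraction(a) * Fraction(b)       # the exact product needs more than 53 bits
print(Fraction(a * b) == exact_product)         # False: the stored product was rounded

exact_quotient = Fraction(1) / Fraction(3)
print(Fraction(1.0 / 3.0) == exact_quotient)    # False: the stored quotient 1/3 was rounded
```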
Absorption also applies to subtraction.
The subtraction of two nearly equal numbers is called subtractive cancellation.[3] When the leading digits are cancelled, the result may be too small to be represented exactly and it will just be represented as $0$.
Even with a somewhat larger $\epsilon$, the result is still significantly unreliable in typical cases. There is not much faith in the accuracy of the value because the most uncertain digits of any floating-point number are those on the far right.
This is closely related to the phenomenon ofcatastrophic cancellation, in which the two numbers areknownto be approximations.
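A brief illustration of cancellation in IEEE double precision: after the leading digits cancel, what remains is dominated by the rounding error already present in the operands.

```python
delta = 1e-15
print((1.0 + delta) - 1.0)     # 1.1102230246251565e-15 rather than 1e-15: only the rounded
                               # low-order part of 1.0 + delta survives the cancellation

print((0.1 + 0.2) - 0.3)       # 5.551115123125783e-17 rather than 0.0
```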
Errors can be magnified or accumulated when a sequence of calculations is applied to an initial input that carries roundoff error due to inexact representation.
An algorithm or numerical process is called stable if small changes in the input only produce small changes in the output, and unstable if large changes in the output are produced.[12] For example, the computation of $f(x) = \sqrt{1+x} - 1$ using the "obvious" method is unstable near $x = 0$ due to the large error introduced in subtracting two similar quantities, whereas the equivalent expression $f(x) = \frac{x}{\sqrt{1+x} + 1}$ is stable.[12]
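A small sketch comparing the two algebraically equivalent formulas near $x = 0$ (the reference value is taken from the leading terms of the Taylor series):

```python
import math

def f_unstable(x):
    # sqrt(1 + x) - 1: subtracts two nearly equal quantities when x is small
    return math.sqrt(1.0 + x) - 1.0

def f_stable(x):
    # x / (sqrt(1 + x) + 1): algebraically equivalent, with no cancellation
    return x / (math.sqrt(1.0 + x) + 1.0)

x = 1e-15
true_value = x / 2.0 - x * x / 8.0   # essentially 5e-16

print(abs(f_unstable(x) - true_value) / true_value)   # relative error of roughly ten percent
print(abs(f_stable(x) - true_value) / true_value)     # tiny relative error, at roundoff level
```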
Even if a stable algorithm is used, the solution to a problem may still be inaccurate due to the accumulation of roundoff error when the problem itself isill-conditioned.
The condition number of a problem is the ratio of the relative change in the solution to the relative change in the input.[3] A problem is well-conditioned if small relative changes in the input result in small relative changes in the solution. Otherwise, the problem is ill-conditioned.[3] In other words, a problem is ill-conditioned if its condition number is "much larger" than 1.
The condition number is introduced as a measure of the roundoff errors that can result when solving ill-conditioned problems.[7]
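For a scalar differentiable function the relative condition number can be written as $\kappa(x) = |x\, f'(x) / f(x)|$; a small sketch using this standard special case (which is an assumption here, not a formula taken from the text above):

```python
def condition_number(f, dfdx, x):
    # relative condition number |x * f'(x) / f(x)| of evaluating f at x
    return abs(x * dfdx(x) / f(x))

# f(x) = sqrt(x) has kappa = 1/2 everywhere: a well-conditioned problem
print(condition_number(lambda x: x ** 0.5, lambda x: 0.5 * x ** -0.5, 100.0))   # 0.5

# f(x) = x - 1 has kappa = |x / (x - 1)|, which blows up near x = 1: ill-conditioned
print(condition_number(lambda x: x - 1.0, lambda x: 1.0, 1.0001))               # about 1e4
```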
|
https://en.wikipedia.org/wiki/Round-off_error
|
Google Trendsis a website byGooglethat analyzes the popularity of topsearch queriesinGoogle Searchacross various regions and languages. The website uses graphs to compare the search volume of different queries over a certain period of time.
On August 5, 2008, Google launchedGoogle Insights for Search, a more sophisticated and advanced service displaying search trends data. On September 27, 2012, Google merged Google Insights for Search into Google Trends.[1]
Originally, Google neglected updating Google Trends on a regular basis. In March 2007, internet bloggers noticed that Google had not added new data since November 2006, and Trends was updated within a week. Google then did not update Trends from March until July 30, again only after the lapse had been blogged about.[2] Google now claims to be "updating the information provided by Google Trends daily; Hot Trends is updated hourly." As of April 2025, data on the Google Trends website shows updates every minute, with a 4-minute delay, when the timeline parameter is set to “Past hour.”[3]
On August 6, 2008, Google launched a free service called Insights for Search. Insights for Search is an extension of Google Trends and, although the tool is meant for marketers, it can be utilized by any user. The tool allows for the tracking of various words and phrases that are typed into Google's search box, and it provided a more in-depth analysis of results. It also has the ability to categorize and organize the data, with special attention given to the breakdown of information by geographical areas.[4] In 2012, Google Insights for Search was merged into Google Trends with a new interface.[1]
Google Trends does not provide absolute values for the number of search queries, but instead shows relative search volumes (RSV). The relative search volumes are normalised to the highest value, which is set to 100.[5]Seeing absolute search volumes requires a separate browser extension that overlays absolute numbers onto Google Trends’ y-axis.[6]The popularity of up to 5 search terms or search topics can be compared directly. Additional comparisons require a comparison term or topic.[7]In contrast to search terms, search topics are "a group of terms that have the same concept in any language".[8]
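A minimal sketch of that normalisation (the raw counts below are hypothetical, not real Google data): every value in a series is scaled so that the largest becomes 100.

```python
def to_relative_search_volume(raw_counts):
    # scale raw query counts so that the peak value becomes 100,
    # mirroring how Google Trends reports relative search volume
    peak = max(raw_counts)
    return [round(100 * count / peak) for count in raw_counts]

print(to_relative_search_volume([1200, 3400, 1700, 850]))   # [35, 100, 50, 25]
```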
In 2009,Yossi Matiaset al. published research on the predictability of search trends.[9]
In a series of articles inThe New York Times,Seth Stephens-Davidowitzused Google Trends to measure a variety of behaviors. For example, in June 2012, he argued that search volume for the word "nigger(s)" could be used to measure racism in different parts of the United States. Correlating this measure with Obama's vote share, he calculated that Obama lost about 4 percentage points due to racial animus in the 2008 presidential election.[10]He also used Google data, along with other sources, to estimate the size of the gay population. This article noted that the most popular search beginning "is my husband" is "is my husband gay?"[11]In addition, he found that American parents were more likely to search "is my son gifted?" than "is my daughter gifted?" But they were more likely to search "is my daughter overweight?" than "is my son overweight?"[12]He also examined cultural differences in attitudes around pregnancy.[13]
Google Trends has also been used to forecast economic indicators[14][15][16] and financial markets,[17] and analysis of Google Trends data has detected regional flu outbreaks before conventional monitoring systems.[18] Google Trends is increasingly used in ecological and conservation studies, with the number of research articles growing over 50% per year.[19] Google Trends data has been used to examine trends in public interest and awareness of biodiversity and conservation issues,[20][21][22][23][24] species bias in conservation projects,[25] and cultural aspects of environmental issues.[26] The data obtained from Google Trends has also been used to track changes in the timing of biological processes as well as the geographic patterns of biological invasion.[27]
A 2011 study of an indicator for private consumption based on search query time series provided by Google Trends found that, in almost all of the forecasting experiments conducted, the Google indicator outperformed survey-based indicators.[28] Evidence is provided by Jeremy Ginsberg et al. that Google Trends data can be used to track influenza-like illness in a population.[29] Because the relative frequency of certain queries is highly correlated with the percentage of physician visits in which a patient presents with influenza-like symptoms, an estimate of weekly influenza activity can be reported. A more sophisticated model for inferring influenza rates from Google Trends, capable of overcoming the mistakes of its predecessors, has been proposed by Lampos et al.[30]
The use of Google Trends to study a wide range of medical topics is becoming more widespread. Studies have been performed examining such diverse topics as use of tobacco substitutes,[31]suicide occurrence,[32]asthma,[33]and parasitic diseases.[34]In an analogous concept of using health queries to predict the flu,Google Flu Trendswas created.[29][35]Further research should extend the utility of Google Trends in healthcare.
Furthermore, it was shown byTobias Preiset al. that there is a correlation between Google Trends data of company names and transaction volumes of the corresponding stocks on a weekly time scale.[38][39]
In April 2012, Tobias Preis, Helen Susannah Moat, H. Eugene Stanley and Steven R. Bishop used Google Trends data to demonstrate that Internet users from countries with a higher per capita gross domestic product (GDP) are more likely to search for information about the future than information about the past. The findings, published in the journal Scientific Reports, suggest there may be a link between online behaviour and real-world economic indicators.[40][41][42] The authors of the study examined Google search queries made by Internet users in 45 countries in 2010 and calculated the ratio of the volume of searches for the coming year (‘2011’) to the volume of searches for the previous year (‘2009’), which they call the ‘future orientation index’. They compared the future orientation index to the per capita GDP of each country and found a strong tendency for countries in which Google users enquire more about the future to exhibit a higher GDP. The results hint that there may potentially be a relationship between the economic success of a country and the information-seeking behaviour of its citizens online. In April 2013, Tobias Preis and his colleagues Helen Susannah Moat and H. Eugene Stanley introduced a method to identify online precursors for stock market moves, using trading strategies based on search volume data provided by Google Trends.[43] Their analysis of Google search volume for 98 terms of varying financial relevance, published in Scientific Reports,[44] suggests that increases in search volume for financially relevant search terms tend to precede large losses in financial markets.[45][46][47][48][49][50][51][52] The analysis of Tobias Preis was later found to be misleading and the results are most likely overfitted.[53] The group of Damien Challet tested the same methodology with search words unrelated to financial markets, such as terms for diseases, car brands or computer games. They found that all these classes provide equally good "predictability" of the financial markets as the original set. For example, search terms like "bone cancer", "Shelby GT 500" (a car brand), and "Moon Patrol" (a computer game) provide even better performance than those selected in the original work.[44]
In 2019, Tom Cochran, from public relations firm 720 Strategies, conducted a study comparing Google Trends to political polling.[54] The study was in response to Pete Buttigieg's surge in a poll of Iowa's likely Democratic caucusgoers conducted between November 8 and 13 by the Des Moines Register. Using Google Trends, he looked into the relationship between polling numbers and Google searches. His findings concluded that, while polling consists of far smaller sample sizes, the primary difference with Google Trends is that it only demonstrates intent to seek information. Google search volume was higher for candidates having higher polling numbers, but the correlation did not mean increased candidate favorability.[55]
Research also shows that Google Trends can be used to forecast stock returns and volatility over a short horizon.[56]Other research has shown that Google Trends has strong predictive power for macroeconomic series. For example, a paper published in 2020 shows that a large panel of Google Trends predictors can forecast employment growth in the United States at both the national and state level with a relatively high degree of accuracy even a year in advance.[57]
Google Trends uses representative sub-samples for analysis, which means that the data can vary depending on the time of the survey and is associated with background noise.[58]Therefore, repeating analyses at different points in time can increase the reliability of the analysis.[58][59]It was shown that Google Trends data can exhibit a high variability when queried at different points in time, indicating that it may not be reliable except for very high-volume search terms due to sampling,[60]and relying on this data for prediction is risky. In 2020, this research made it to major headlines in Germany.[61]
Google has incorporated quota limits for Trends searches. This limits the number of search attempts available per user/IP/device. Details of quota limits have not yet been provided, but it may depend on geographical location or browser privacy settings. It has been reported in some cases that this quota is reached very quickly if one is not logged into a Google account before trying to access the Trends service.[62]
Google Hot Trends is an addition to Google Trends which displays the top 20 "hot", i.e., fastest rising, searches (search-terms) of the past hour in various countries. This is for searches that have recently experienced a sudden surge in popularity.[63]For each of the search-terms, it provides a 24-hour search-volume graph as well as blog, news and web search results. Hot Trends has a history feature for those wishing to browse past hot searches. Hot Trends can be installed as aniGoogleGadget. Hot Trends is also available as an hourlyAtomweb feed.
Since 2008, there has been a sub-section of Google Trends which analyses traffic for websites, rather than traffic for search terms. This is a similar service to that provided byAlexa Internet.
The Google Trends for Websites became unavailable after the September 27, 2012, release of the new Google Trends product.[64]
An API to accompany the Google Trends service was announced byMarissa Mayer, then vice president of search-products and user experience at Google. This was announced in 2007, and so far has not been released.[65]
A group of researchers atWellesley Collegeexamined data from Google Trends and analyzed how effective a tool it could be in predictingU.S. Congress electionsin 2008 and 2010. In highly contested races where data for both candidates were available, the data successfully predicted the outcome in 33.3% of cases in 2008 and 39% in 2010. The authors conclude that, compared to the traditional methods ofelection forecasting, incumbency andNew York Timespolls, and even in comparison with random chance, Google Trends did not prove to be a good predictor of either the 2008 or 2010 elections.[66]Another group has also explored possible implications for financial markets and suggested possible ways to combine insights from Google Trends with other concepts in technical analysis.[67]
|
https://en.wikipedia.org/wiki/Google_Trends
|
Thegovernment of the People's Republic of Chinainterfered in the2024 United States electionsthroughpropagandaanddisinformationcampaigns, primarily linked to itsSpamouflageinfluence operation.[1]The efforts came amidst largerforeign interference in the 2024 United States elections.
In March 2021, theNational Intelligence Councilreleased a report that said the Chinese government "considered but did not deploy" influence efforts in 2020.[2]A declassified U.S. intelligence assessment in 2023 said with "high confidence" that China, Russia, Iran and Cuba attempted to influence the 2022 midterms. It said that China had tacitly approved "efforts to try to influence a handful of midterm races involving members of both US political parties" and "portray the US democratic model as chaotic, ineffective, and unrepresentative". The assessment said that China had used images generated by artificial intelligence to mimic Americans online and provoke discussion on divisive social issues, and that they believed they would face less scrutiny during the midterms and that U.S. retaliation would be lower.[3]It also said that since 2020, senior Chinese intelligence officials had issued directives to "intensify efforts to influence US policy and public opinion in China's favor" and "magnify US societal divisions".[4]In January 2024, theFBIandJustice Departmentissued a court order to address Chinese hacking and infiltration of key U.S. infrastructure in the transportation and maritime sectors.[5]
DuringAPEC United States 2023,Joe BidenandGeneral Secretary of the Chinese Communist PartyXi Jinpingmet in aseparate summiton November 15 where Xi told Biden that China would not interfere in the 2024 presidential election after being asked by Biden. This assurance was given again by Chinese foreign ministerWang Yito Biden's national security advisorJake Sullivanon the weekend of January 26-27 during a meeting in Bangkok after Sullivan brought up the topic. CNN reported in January 2024 that the topic had repeatedly come up during senior-level meetings between the two nations which were held following a shootdown of aChinese spy balloonby the U.S. military after it traversed the continental United States in February 2023.[4]
U.S. intelligence agencies have described Chinese government interference in the elections as aggressive but overall cautious and nuanced, not targeting any particular candidate, but instead focusing on issues important to Beijing such as Taiwan, and "undermining confidence in elections, voting, and the U.S. in general."[1][6]However, China has specifically denigrated President Biden using fake accounts.[7]According toThe Washington Post, a senior official with theOffice of the Director of National Intelligencesaid China is "not attempting to influence the presidential race, but it is seeking to do so in state-level and regional races" as they did during the 2022 midterms.[8]Officials from the ODNI and FBI have outlined China's use ofgenerative artificial intelligencetools and promotion of divisive content focused on drug use, immigration, and abortion to fosteranti-Americanism.[9]
As early as April 1, 2024,The New York Timesreported that the Chinese government had created fake pro-Trump accounts on social media "promoting conspiracy theories, stoking domestic divisions and attacking President Biden ahead of the election in November."[7]
In August 2024, cyber security firm CyberCX released a report that it said uncovered Beijing-based "Green Cicada," one of the largest publicly identified networks and which may feature more than 8,000 accounts on social media platformX, with a suspected intention to interfere with the U.S. elections.[10]
Research firmGraphikareported that Chinese government interference has been linked to itsSpamouflageinfluence operation and has involved networks of fake social media users that mimic Americans on social media sites such as X andTikTokin an attempt to manipulate and sway public opinion.[11]According to a September 2024 Graphika report, "In the run-up to the 2024 election, these accounts have seeded and amplified content denigrating Democratic and Republican candidates, sowing doubt in the legitimacy of the U.S. electoral process, and spreading divisive narratives about sensitive social issues including gun control, homelessness, drug abuse, racial inequality, and the Israel-Hamas conflict. This content, some of which was almost certainlyAI-generated, has targeted President Joe Biden, formerPresident Donald Trump, and, more recently,Vice President Kamala Harris."[11]
In August 2024, a threat report byMetastated it detected 11 'coordinated inauthentic behavior' networks linked to China.[12]Microsoftdetected attempts by Chinese actors to inflame tensions around campus protests, noting an increased capability to increase divisions and influence election activity.[12]
In October 2024,The Washington Postreported on increasing Chinese government attempts to influence "tens" of down-ballot races with explicitlyantisemiticattacks and conspiracy theories against politicians as part of its Spamouflage influence operation. The report highlighted one covert influence campaign against RepresentativeBarry Moorewho recently backed sanctions against China and is not Jewish, calling him "a Jewish dog" who won because of "the bloody Jewish consortium" among other antisemitic posts. It also reported on increasing efforts to inflame tensions by focusing on hot-button issues such as police violence,Black Lives Matter, immigration, and influencing U.S. foreign policy toward Taiwan.[13]It said state-run media campaigns by China also spread falseconspiracy theories about the 2024 Atlantic hurricane seasonthatThe Associated Pressdescribed as using "social media and state news stories to criticize responses to past U.S. natural disasters" and sow division among Americans.[14]Spamouflage has also targeted the congressional races ofMichael McCaulandMarsha Blackburnas well asMarco Rubiodue to their outspoken criticism of the Chinese government and its policies.[15][16]
On October 25, 2024, theNew York Timesreported that Chinese government-linked hackers had targeted the phones of Trump and Vance.[17][18]The hacks came a month afterThe Wall Street Journalreported on Chinese hackers breaching several U.S. internet service providers as part of its "Salt Typhoon"cyber espionagecampaign.[19]
U.S. Secretary of StateAntony Blinkensaid the U.S. has seen evidence of attempts to “influence and arguably interfere” with the upcoming U.S. elections, despite an earlier commitment from Xi Jinping not to do so.[20][21]In an interview withTime, President Biden said that there was evidence of China interfering in the 2024 elections, and that "all the bad guys are rooting for Trump".[2]
In a September 2024 interview with theAssociated Press, chief intelligence officer Jack Stubbs of Graphika stated that Chinese covert influence operations had "become more aggressive" in their "efforts to infiltrate and to sway U.S. political conversations ahead of the election".[1][12]
In 2023, the ChineseMinistry of Foreign Affairsdenied that China was interfering or had interfered in the 2022 election, stating that they "adhere to the principle of non-interference in other countries' internal affairs" and that "China does not interfere in U.S. elections".[4]In response to a 2024 report by Graphika that outlined China's use of its Spamouflage network to mimic American social media users, Chinese Embassy spokesperson Liu Pengyu stated that the findings were "prejudice and malicious speculation" and that "China has no intention and will not interfere" in the election.[1]
|
https://en.wikipedia.org/wiki/Chinese_interference_in_the_2024_United_States_elections
|
This is a list of elections that will be or may be held in 2025.
|
https://en.wikipedia.org/wiki/List_of_elections_in_2025
|
Donald Trump, the45th president of the United States(2017–2021) ran a successful campaign for the2024 U.S. presidential election. He formally announced his campaign on November 15, 2022, atMar-a-LagoinPalm Beach, Florida, initially battling for theRepublican Party's nomination. While many candidates challenged the former President for the nomination, they did not manage to amass enough support to dethrone him, leading him to alandslide victoryin the2024 Iowa caucuses. Thereafter, he became theRepublican Party'spresumptive nominee. Trump was officially nominated on July 15, 2024, at theRepublican National Convention, where he choseJD Vance, the juniorU.S. senatorfromOhio, as his vice presidential running mate. On November 5, 2024, Trump and Vance were electedpresidentandvice presidentof the United States, winning all sevenswing states[a]as well as the popular vote with aplurality. The campaign's success was attributed to a distinct appeal to young male voters and racial minorities, an effective media strategy, and focusing on the public's sociopolitical and socioeconomic grievances.
Trump's agenda was branded aspopulistandnationalist. It pledged sweeping tax cuts, aprotectionisttrade policy, greater federal oversight over education,[b]more extensive use offossil fuels, an "America First" foreign policy, an expansion of presidential authority, a reduction offederal regulations, mass deportation ofillegal immigrants,[c]stricterlaw enforcement, an end todiversity, equity, and inclusionprograms, and a rollback oftransgender rights. While the campaign's official platform wasAgenda 47, it was closely connected toThe Heritage Foundation'sProject 2025, a playbook recommending anauthoritarian, rigidlyconservativestate.
Trump'srhetoric, regarded as inflammatory and extreme, centered ondisinformationandfearmongering, and thus drew immense media coverage. Denying thelegitimacy of electionsand warning ofsupposed migrant crimewere his key talking points. He sought to establish himself as a politicalmartyrbeing targeted by thepolitical and media establishment, and that his campaign was one of vindication and a battle betweengood and evil.
On the campaign trail, Trump faced numerous legal actions, culminating in four indictments and a felony conviction. His campaign therefore faced a severe funding shortage. Court cases also arose concerning his eligibility to run again in the aftermath of the January 6 Capitol attack, which were eventually resolved. Trump survived two assassination attempts during the campaign. Many commentators state that these setbacks, unprecedented in U.S. history, helped his public image.
Donald Trump's 2024 presidential campaign is his fourth, following abrief one in 2000for theReform Party's nomination, and two as theRepublican Party's candidate, in2016and, subsequently,2020.[8][9]
As president, Trump lost the2020 presidential electiontoDemocraticnomineeJoe Biden.[10]He and his allies in seven key statesdenied the results. They allegedly went on to devise a plot to create and submit fraudulentcertificates of ascertainmentfalsely asserting that Trump had won theelectoral collegevote in those states.[11]In the event that the plot failed to "work out," Trump would plan another presidential run in 2024.[12][13][14]On January 6, 2021, a mob of Trump supportersstormed the Capitolto prevent the true election results from being certified.[10][15]The former President was thereafterimpeachedfor incitement ofinsurrection, but wasacquitted.[16]
The Biden administration succeeding Trump's oversaw the end of the COVID-19 pandemic,[17] a spike in inflation lasting from 2021 to 2023, a surge in crossings at the border with Mexico, and the outbreak of two major wars, in Ukraine and in Gaza.[18][19] While the President began his term with an approval rating well above 50%, this would not last long. By September 2021, it had dropped to just 43%, according to Gallup, following the "chaotic"[20] U.S. withdrawal from Afghanistan and a gradual rise in inflation from 1.7% in February to 5.4%.[18][21] His popularity never recovered.[18][19] By June 2022, inflation had risen to 9.1%, a 40-year high.[18][22] Besides a worsening economy, Biden oversaw a deepening crisis at the Southern border, with 11.3 million undocumented immigrants entering the U.S. during his term.[18][23] Ever since Russia invaded Ukraine in February 2022, Biden unwaveringly aided Ukraine,[19] sending the nation a total of $182 billion in emergency funding.[24] When the Gaza war broke out in October 2023, the President strongly supported Israel.[18][25] These three issues (global uncertainty, inflation, and the migrant crisis) would be the focal points of the future Trump campaign.[26]
By July 2022, amid thepublic hearings of the House Select Committee on the January6 Attack, Trump was reportedly considering making an early announcement of his 2024 candidacy.[27][28]A contemporaryIntelligencerinterview with Trump affirmed that he had already made up his mind.[29]Following the August 2022FBI search of Mar-a-Lago, many of his allies urged that he initiate his campaign even sooner, perhaps prior to that year'smidterm elections.[30]
Donald Trump announced his candidacy for president on November15, 2022, in an hour-long address fromMar-a-Lago. It came one week after the midterm elections.[31][32]The campaign would be based inMar-a-Lago, inWest Palm Beach, Florida.[33][34]Reporting forAxios, Zachary Basu noted that at the time of the announcement, Trump was the "underdog" and "at the weakest moment of his political career".[35]His candidacy was met with a mixed response from both Democrats and Republicans. He was perceived by many as a weak, beatable candidate, owing to his loss in 2020 and the failure of an expected Republican "red wave" in the 2022 midterms to materialize.[d]This led several Republican officials to oppose his campaign,[37][38][39]and several Democrats to welcome it.[40][41]TheconservativeNew York Postmocked Trump's announcement by relegating it to page 26 and noting it on the cover with a banner reading "Florida ManMakes Announcement".[42]On the other hand, Trump-aligned Republicans embraced the campaign,[43]and many Democrats deemed it a threat to Americandemocracy.[44][45]
Trump was the first one-term president to campaign for a second non-consecutive term sinceHerbert Hoover(1929–1933), who, after losing in1932, made unsuccessful runs in1936and1940.[46]
At its inception, Trump's campaign had over $100 million in funding.[47] Its primary vehicle for fundraising was Save America, a leadership political action committee (PAC), joined by the MAGA PAC and a Super PAC.[48][49] However, his legal expenses from his court cases would absorb much of that funding. In fact, from January 2021 to March 2024, he spent more than $100 million in legal fees from campaign accounts.[49] In 2023, the year of Trump's four criminal indictments, over half of his financial donations were allocated to paying off legal bills.[49]
While running against Joe Biden, Trump overwhelmingly lagged behind his opponent in fundraising. His legal expenses combined with Biden's plentiful financial hauls lay at the heart of this problem.[48][50] At the start of March 2024, Trump's campaign and Trump-aligned Super PACs had half as much cash on hand as Biden's campaign and Biden-aligned Super PACs.[51] However, Trump's fundraising eventually took a turn for the better, with the former President raising more money than his opponent in April, beating Biden's total fundraising for the first time.[52] Things again turned sour for Trump's campaign after Biden withdrew from the race. The new Democratic nominee, Kamala Harris, brought in $200 million during the first week of her presidential campaign.[53] In July, Trump's campaign and assorted committees reported taking in $138.7 million compared to Harris and Democratic committees' $310 million.[54] All in all, throughout their campaigns (specifically, since January 2023), the Trump committees raised $388 million, while those of Biden–Harris raised nearly $1 billion.[55]
According toOpenSecrets, Trump's greatest donors werehedge fundmanager Ken Griffin (who donated $100 million), pro-IsraelactivistMiriam Adelson($132 million), railroad magnateTimothy Mellon($197 million), and, most notably, businessmanElon Musk($277 million[e]).[60]Musk was not only the largest individual political donor of the 2024 election, but also the largest individual political donor since at least 2010, excluding candidates funding their own campaigns. He also launched a $1 million a day giveaway for swing state voters.[61]OpenSecrets additionally found that the top seven donors of the 2024 campaign were "solidly Republican/Conservative".[58]
Trump notably mixed his personal business with political fundraising.[62]He promoted $59.99"God Bless the U.S.A." Bibles, $399 sneakers, $99 "Victory47" cologne, and $99 Trump-brandedNFTdigital trading cards for his personal, non-campaign accounts.[63]Many campaign funds were also funneled into Trump-owned businesses, in particular hisMar-a-Lagoresort and theTrump National Doral Miami.[64]
Trump's eligibility to run for president was challenged. TheFourteenth Amendment to the Constitution,Section 3, prohibits current and former federal, state and military officials who have "engaged in insurrection or rebellion" from holding office again, which was pertinent in Trump's case considering his role in inciting theJanuary 6 attack on the Capitol.[65][66]By 2023, the non-profit groupCitizens for Responsibility and Ethics in Washingtonand other advocacy groups and individuals were planning state-by-state efforts to keep Trump off state ballots.[67]Court cases sprung up in multiple states.[68]
In December 2023, theColorado Supreme Courtruledthat, under the Fourteenth Amendment, Trump was ineligible from holding office and that his name must be removed from the Colorado Republican primary ballot.[69]This decision was the first of its kind in American history.[70]Later that month,Maine's Secretary of Statefollowed suit and banned Trump from Maine's Republican primary ballot. In March 2024, following an appeal from Trump's campaign,[71]the U.S. Supreme Court unanimouslyoverturnedColorado's Supreme Court ruling, saying that states do not have the authority to disqualify Trump or other candidates from federal elections under the Fourteenth Amendment's insurrection clause.[72]
Donald Trump's formal campaign manifesto was Agenda 47. It took the form of a series of videos on his official website outlining his proposals one by one.[73] Because the series was cut short in December 2023, Agenda 47 was primarily targeted at Republican voters during the 2024 primary season.[74][75] His website's homepage contained a list of 20 campaign proposals.[74]
According to Philip Bump ofThe Washington Post, Agenda 47 was rarely discussed by Trump as well as the media. He, and others, noted that it was overshadowed by another presidential transition plan closely tied to—in fact, designed for—the Trump campaign,The Heritage Foundation'sProject 2025.[75][77][78]It planned for massive overhauls to American government, steering it in an uncompromisinglyconservativepath and relegating much authority to theexecutive branch. As such, Project 2025 was condemned for unconstitutionally encouragingauthoritarianismand moving to turn Trump into a dictator.[78][79][80][81][82]Trump's campaign officials repeatedly distanced themselves from the plan, stressing that all outside efforts influencing a future presidential transition were "unofficial".[73]Trump himself denied knowing of Project 2025. He went as far as to call some of its proposals "absolutely ridiculous" and "seriously extreme".[83][84]
Besides The Heritage Foundation, other think tanks and policy groups aligned with Trump included the Center for Renewing America, the America First Policy Institute, and America First Legal. Trump's preeminent public policy advisers were Steve Bannon, David Bernhardt, Kellyanne Conway, Richard Grenell, Tom Homan, Sean Hannity, Kevin Hassett, Brandon Judd, Keith Kellogg, Larry Kudlow, Robert Lighthizer, Stephen Miller, Stephen Moore, John Ratcliffe, Russell Vought, and Matt Whitaker, though none of them were formally part of the campaign itself.[85][86][87][88] Vince Haley was officially responsible for overseeing the team developing the campaign's policy proposals.[88]
Trump attempted to build a broad demographic coalition consisting ofLatinos,Arab Americans,Black men, and young men—all groups that traditionally leanedDemocratic. He spoke sharply against the economy underBiden's presidency, which resonated with all groups, and stokedculture warissues, appealing to Black and Latino men, who tend towardsocial conservatism.[90][91]According toPatrick Ruffini, the former President stood to capitalize off of a gradual political phenomenon: that "the ties that once bound low-income and nonwhite voters to the Democratic Party … were breaking".[91]Trump also visited cities with concentrated Arab American populations. His efforts were bolstered by the Biden administration's pro-Israel stance on theGaza war, andKamala Harris' neglect of these cities throughout the campaign.[90][92]To drive up turnout, Trump's campaign ran an unconventionalground game. He targeted irregular voters through community building, rather than traditional methods: door knocks, big party machinery, and paid media.[93][94]Organizations such as Libre andTurning Point USA, besides driving forward an ideological agenda, assembled low propensity voters who felt alienated by the government and cultivated in them a sense of belonging to Trump's cause. As activistTony Gavitoexplained, "Mobilizing people to turn out and cast a ballot is not nearly as powerful as organizing people to adopt an identity, commit to a cause, and join a collective effort to push for change".[94]
Regarding rhetoric, Trump deployed fiery, partisan language that, according to commentators, alienated the general public. He rejected the traditional pivot to the center and relied on negative messaging. Even in the campaign's final weeks, he continued homing in on his base and steadfast conservatives, while his opponent, Harris, tried appealing to moderates. This was done to "maximize turnout" from Trump's base.[95][96][97]Beyond his base, it served to persuade remaining undecided voters; with extreme rhetoric, they would have a "compelling reason to vote".[98]Trump's extreme statements also played into his populist strategy of airing the public's grievances against the political status quo—that he was saying what no other politician dared to say.[99][100]
Writing forTilted Chair, Kara Villarreal asserts that "Trump didn’t just run a political campaign; he launched a full-scale marketing movement".[101]The former President's messaging was simple, straightforward, and emotional, which analysts found engaged well with his "consumers"—voters. He appealed to their discontent over the economy, immigration, and national pride. One of the means he used to achieve this was mantras he would repeat during rallies, such as, "Are you better off now or four years ago?”, and, "I will fix it." Trump also relied onidentity politicsby creating an "us versus them" narrative. This, according to analysts, united his supporters and kept them motivated.[101][102]
On advertising, Trump's campaign faced a massive financial disadvantage.[26][103][104] He, like Harris, concentrated ad funding on the seven swing states. However, unlike his opponent, his funding was more localized, focusing more on individual voters than geographical groups.[26][104] Trump's campaign spent more on YouTube, Twitch, Twitter, and streaming services, while Harris spent more on Google, Facebook, Instagram, and Snapchat.[104] According to analysts, these two tactics made Trump's advertising strategy more adapted to modern trends, more efficient, and ultimately more effective. They also enabled him to overcome his funding limitations.[26][105] Trump targeted specific swing voters, or "streaming persuadables," while his opponent simply spent on the states at large.[26] He did so by running highly customized ads exclusively in their households.[26][105] As Trump Super PAC operative David Lee explained, "In the seven states, we were talking to 6.3 million people—they [Harris' campaign] were talking to 44.7 million"; thus, Harris was "wasting 85 percent of [her] money".[26] In addition, the campaign's focus on streaming platforms over television networks catered to undecided voters, half of whom used only streaming and podcasts, not cable.[26][105]
Trump's media strategy heavily relied on podcasts and online streaming. It largely, but not entirely, cast traditional forms aside, such as interviews on mainstream media outlets and even a60 Minutesappearance. Rather, the former President would interact with podcasters andYouTubecontent creators:Theo Von,Patrick Bet-David,Logan Paul, et cetera,[106][107][108]many of whom belonged to themanosphere.[103]He would focus on apolitical matters: sports, family,extraterrestrial life, more than politics.[106][109]This strategy suited changing media trends, as more and more Americans were resorting to alternative sources for news over mainstream media,[106][107][108]as well as being adapted to Trump's "circuitous and colloquial way of speaking".[107]Young people—especially men—were particularly dependent on social media and podcasts for political coverage.[107]On the other hand, Harris concentrated on traditional outlets.[106][107][108]The former President garnered further media attention by visiting nonpolitical venues, such asfootballgames andMcDonald's.[96]
By establishing a considerable presence on social media, Trump could home in on his tactic of dominating the news. His message was thus spread among more voters.[106][108]According toCampaigns & Elections, right-wing influencers posted about 2.5 times as much as left-wing influencers throughout the election.[108]Trump's media strategy also bolsteredhis image. By appearing on podcasts and YouTube videos, which are informal, homely, and unrestrained by design,[110]he came across as approachable. They "humanize[d]" him.[106]Jason Miller, remarked that Trump's media strategy, above all else, relied on "unscripted moments," which earned him more coverage and familiarity.[106]Another benefit of non-traditional media outlets was that Trump could avoid fact-checks.[107][110]For instance, in his interview withJoe Rogan, he promotedfalsehoods about the 2020 election being stolenand exaggerations of his poll numbers.[109][111]Michael M. Grynbaum and John Koblin ofThe New York Timesnoted that the "influencers he met with rarely challenged [him], and often lavished him with praise".[107]Many of the most popular podcasts, including those that Trump had appeared on, would increasingly post political content withconservativemessages in the leadup to the election.[112]
Trump and his allies extensively usedartificial intelligence.[113][114][115]In June 2024, Trump remarked that AI was "really powerful stuff," suggesting that he would deliver a speech written entirely by AI: "[My staffer] goes click click click, and like 15 seconds later he shows me my speech, written so beautifully, I said, ‘I’m gonna use this sucker'".[116]As with theHarris campaign, Trump's team shared manydeepfakeson social media.[115]These, for instance, presented him astride a lion, or otherwise depicted his opponents unfavorably, such as one of Harris addressing aSoviet-style rally.[113]Such fake images became a vehicle of disinformation, although some commentators note that they were not intended to be believed.[114]Writing forThe Guardian,Sophia Smith Galerargues that his campaign deployed deepfakes as "algo-fodder" to sustain his narratives on social media.[117]Trump's campaign also used AI software to enhance efficiency. This included automating repetitive tasks and creating targeted advertisements. One such software, Campaign Nucleus, received more than $2.2 million in funding from his associates.[115][118]
Trump struck a middle ground and often vacillated onabortion. This was done in an attempt to put the issue to rest, having greatly cost Republicans in the 2022 midterms in the wake ofRoe v. Wadebeingoverruledthat June.[119][120][121]He generally called for abortion's legal status to be left up to the individual states.[122]Trump initially did not state whether or not he supported a national 15-week abortion ban,[123]then leaned in favor of it,[124]and then pledged to veto any federal abortion ban.[125]When asked on how he would vote onFlorida's abortion referendum, he equivocated.[126]Trump labelledFloridagovernorRon DeSantis' six-week abortion ban as "terrible",[120]and criticizedArizona's near total ban on abortion.[127]On the other hand, he stated that he would allow Republican-controlled states to monitor women's pregnancies.[128]Contemporary commentators remarked that Trump's stance on abortion pleased neither progressives nor conservatives,[119][120]although it was later regarded to have been effective in subduing the issue.[121]In spite of his equivocation throughout the campaign, Trump had previously called himself "the mostpro-lifepresident ever",[120]and took credit for overturning ofRoe v. Wade, theSupreme Courtdecision that legalized abortion nationwide.[129][f]In April 2024, he reiterated that he was "proudly responsible" for reversingRoe v. Wade.[122]
With your support, we will cut your taxes, endinflation, slash your prices, raise your wages, and bring thousands of factories back to America and back toNorth Carolina. They're coming back. We will build American, we will buy American, and we will hire American again.
Trump's economic agenda featuredprotective tariffs, lower taxation, and reduced regulations. He sought aneconomic nationalistsystem, with theincome taxlargely, if not completely, replaced by tariffs to defendlocal manufacturing.[132][133][134]Protectionismhad been a priority inhis first presidency.[135]In 2024, he vowed to enact even higher tariffs, including a 10% to 20% universal baseline tariff, 60% on China, between 25% and 100% on Mexico, and 100% on all cars made outside the U.S.[132][136][137]Analysts noted that the proposed tariffs were especially targeted against China, seeing that, among other things, he proposed a four-year plan to phase out Chinese imports of essential goods.[135][138]Overall, Trump's protectionist program intended to transform the U.S. into aself-sufficient economy.[133]Nonetheless, many economists, including 23Nobel Prizerecipients, warned that it would "lead to higher prices, larger deficits, and greater inequality",[139]as well as atrade war.[135][140]
One of Trump's key pledges was extending and expanding his 2017 tax cuts. These would further slash all individual and corporate tax rates, which he argued would stimulate America's energy industry and reduce inflation. Companies that made their products in the U.S. would see their corporate rate reduced from 21% to 15%. Furthermore, he intended to cut back on regulations he believed stifle job creation.[141][142] He also promised a 50% reduction in energy prices.[143] By October 2024, Reuters reported that Trump was "rolling out a new tax-cut proposal about once a week in an unusual rush in the final stretch of the campaign to sway voters".[144] These proposals included making car loan interest fully tax deductible.[145] The former President notably suggested an end to income tax on Social Security benefits,[146] and "No [federal] Tax On Tips".[147]
In light of thepost-COVID inflation surge, Trump campaigned on ending the "inflationnightmare".[148]However, as was the case with Harris' economic proposals, economists criticized his plan for potentially leading to an increase in inflation,[139][148][149][150]along with adding around $15 trillion to thenational debt.[151]Trump also planned todevaluetheU.S. dollarto cheapen American exports.[152]
Trump campaigned on expanding federal management of education,[153] although with exceptions. On the one hand, he pledged to terminate the Department of Education.[138][154] On the other, he suggested giving funding preference to certain schools and universities. Schools with a mask or vaccine mandate, for instance, would not be federally funded.[138] Education programs that, in Trump's words, include "critical race theory, gender ideology, or other inappropriate racial, sexual, or political content" would receive reduced funding.[154] Such proposals formed part of the former President's plan to fight for "patriotic education." This, according to him, "teach[es] students to love their country, not to hate their country like they're taught right now," "defend[s] American tradition and Western civilization" and promotes "the nuclear family".[138][142] Furthermore, Trump's campaign advocated universal school choice, arguing that parents should be empowered to choose the best education option for their children.[155] In late 2023, Trump proposed an "American Academy," a free online university open to all Americans that would counter private institutions that "[turn] our students into Communists and terrorists". This would be funded through a tax on the endowments of private universities.[153][156]
Trump's energy proposals heavily favoredfossil fuelproduction and consumption,[157][158]with little, if any, regard forenvironmentalism. He encapsulated them under the mantra "drill, baby, drill",[159]or "drill, drill, drill".[160]Overall, Trump aimed to transform the U.S. into anenergy independentcountry with the lowest electricity and energy costs of any country in the world.[138][159][161]This aim was well-suited to deal with the spike in gasoline prices caused bywar in Ukraine.[162]He promised to increaseoil drillingon public lands and offer tax breaks to fossil fuel producers.[138]Furthermore, Trump planned to slashenvironmental regulationsand initiatives.[158]He would rollback all electric vehicle initiatives, halt all wind energy projects, and eliminate regulations targetingincandescent lightbulbs, gas stoves, dishwashers, and shower heads.[138][163][164]Regarding global climate efforts, Trump proposed leaving theParis Agreement, and drafted orders to withdraw from theUnited Nations Framework Convention on Climate Change.[165]Trump's disproportionate preference of fossil fuels is influenced by hisdenial of global warming.[157][164]In a 2022Fox Newsinterview, Trump labelled it as a "hoax," adding that the climate naturally fluctuated.[166]He did not officially state how he would tackle global warming if elected.[167]
It's time to putAmerica first, isn't it? I will end thewar in Ukraine. It would have never started if I were president. It would have never—zero chance. I said, "Vladimir, don't even think about it." Zero chance. I will stop the chaos in theMiddle East. That would have never happened,October 7th. … But I will preventWorld War III. I know them all. We're very close to a world war right now with these people that we have, these low-IQ people that we have right now.
Trump's proposed foreign policy was isolationist (a label he denied), which he branded as "America First".[169][170] In September 2024, Trump said that America's allies "treat us actually worse than our so-called enemies". He added, "We protect them and then they screw us on trade. We're not going to let it happen anymore".[171] Trump promised to "fundamentally reevaluate" NATO's purpose and mission.[138] Trump had said that defending an ally would depend on whether they "fulfilled their obligations to us", called the European Union a "foe" because of "what they do to us in trade", and questioned the value of alliances.[169] On January 10, 2024, Trump said that "NATO has taken advantage of our country" and that he would only support allies "if they treat us properly",[172] i.e., if they met the alliance's target of spending 2% of GDP on defense.[170][173] Trump suggested withdrawing troops from South Korea if it does not pay more to support U.S. troops there.[85]
On theRusso-Ukrainian War, Trump vowed that even before he is inaugurated,[138]he will negotiate an end to the war in a day,[169]stop the "endless flow of American treasure to Ukraine", and make Europeans reimburse the U.S. the cost of rebuilding its old stockpiles.[138]In June 2024, Trump described Ukrainian PresidentVolodymyr Zelenskyyas "maybe the greatest salesman of any politician that's ever lived ... Every time he comes to our country, he walks away with $60 billion ... It never ends ... I will have that settled prior to taking the White House as president-elect".[174][175]However, it was pointed out that most of the money for Ukraine actually goes to American factories and workers who make weapons and military equipment.[176][177][178]Trump previously said he might recognizeRussia's illegal annexation of Crimea,[179]and suggested the2022 invasioncould have been prevented by Ukraine giving up parts of its own country to Russia.[169]Retired Lieutenant GeneralKeith KelloggandFrederick H. Fleitz, who both served in Trump's National Security Council staff, presented Trump with a detailedpeace plan to end Russia's war in Ukraine. The plan aims to force the two sides into peace talks and a ceasefire based on the current frontlines. If Ukraine refused to enter peace talks, weapons supplies would be stopped; if Russia refused peace talks, weapons supplies to Ukraine would be increased.[180][181]
During his presidency, Trump had brought in more pro-Israel policies than any president before him. He presented himself as a stronger defender of Israel, and was seen as less sympathetic to Palestine than Biden or Harris.[182] He vowed to continue supporting Israel in the Gaza war, and said that Israel must "finish the problem".[183] Trump was expected to continue arming Israel, likely with "no strings attached" for humanitarian concerns.[184] He promised to ban Gaza residents from entering the US.[185]
Trump suggested sending armed forces into Mexico to battle drug cartels.[142] In the last days of his presidential campaign, Trump voiced support for the restoration of peace between Armenia and Azerbaijan amid the Nagorno-Karabakh conflict.[186][187]
Trump's platform calls for the vast expansion of presidential powers and the executive branch.[188]In campaign speeches, Trump stated that he would centralize government power under his authority, replace careerfederal civil serviceemployees with political loyalists, and use the military for domestic law enforcement and the deportation of immigrants.[189]
Trump has called to bring independent agencies such as the Federal Communications Commission and Federal Trade Commission under direct presidential control. Trump's allies have drafted an executive order requiring all independent agencies to submit actions to the White House for review. Trump has called for presidential authority to 'impound' funds for Congressionally appropriated programs, a practice which was outlawed under President Richard Nixon. Trump promised to order the U.S. Justice Department to investigate political rivals and Joe Biden, and to fire attorneys general who disobeyed him.[85][142][190][191][192][193] He called for jailing people whose actions he objects to, including Supreme Court critics, flag burners, and the January 6 Committee.[194][195] According to the New York Times, Trump has called for stripping employment protections from thousands of career civil service employees within federal agencies, the U.S. Intelligence Community, State Department, and Department of Defense, and for replacing those deemed an 'obstacle to his agenda' with political loyalists.[82] Trump has proposed instituting a new civil service test of his own creation to test the loyalty of federal workers. Trump has promised to crack down on whistleblowers who are shielded by law and create an independent body to "monitor" intelligence agencies.[142]
Trump's plan to expand presidential powers is based largely on a controversial and not widely held interpretation of the constitution known as the unitary executive theory.[196][197] The theory rejects the notion of the separation of powers among three co-equal branches of government, holding instead that Article Two of the Constitution gives the President absolute authority over the executive branch.[82] Such proposals would be carried out via the reintroduction of Schedule F, originally introduced at the end of Trump's former presidency, which would strip civil service protections from tens of thousands of civil servants, converting their positions into at-will appointments to be filled with Trump loyalists identified by Project 2025 of The Heritage Foundation.[198] The reforms have been described as a reimposition of the Jacksonian spoils system.[199][200] His proposal has been widely criticized as dangerous for democracy.[201][202][203]
On April 26, 2024,The Wall Street Journalreported Trump allies plan on greatly limiting theindependenceof theFederal Reserveshould Trump win the election. Of particular note were plans to allow the president to directly set interest rates, remove ChairJerome Powellbefore his term expires in 2026, and subject the Fed to oversight from theOMB.[204][205][206][207]
Trump pledged to appoint Elon Musk to chair a federal efficiency commission. Trump said the commission would audit the entire federal government and propose "dramatic reforms".[208] Musk officially announced that he would accept the appointment if Trump were elected.[209] Everett Kelley, president of a union representing federal government workers, criticized the proposal, saying "There's nothing efficient about that".[210] Trump vowed to achieve his long-held goal of drastic reform by minimizing government and cutting red tape and regulations, which he said were the bureaucracies holding back American prosperity.[211][212][213] He suggested shutting down multiple departments for "bureaucratic waste".[214][138]
Trump's key message on healthcare was a call to "Make America Healthy Again," a slogan borrowed fromRobert F. Kennedy Jr., who endorsed the former President.[215]To do so, he would tackle the chronic disease epidemic by going after thepharmaceutical industryandultraprocessed foods.[216][217]The former President initially promised to replace theAffordable Care Act, which hehad attemptedin 2017.[218]However, by the end of the election season, he ruled out altering the Affordable Care Act, going as far as to claim that he "never even thought about such a thing".[219][220]Trump also insisted that he would keep Medicare and Social Security intact.[142][221]In March 2024, after alluding to cutting "entitlements," which was avidly denounced by the Biden campaign, he clarified that this did not include Medicare or Social Security.[222]Ultimately, Trump did not commit to reforming welfare programs.[221][223]He also pledged to makein-vitro fertilizationfree of charge.[224]
Inone town, in Ohio, as you know, they have a beautiful town of—think of this—50,000 people. And they dumped 30,000 migrants into the town. … It's a whole different world. It can't be—we can't allow this to happen. They're destroying our country. November 5th, 2024, will be liberation day in America. On day one, I will launch the largest deportation program of criminals in American history. I will rescue every city and town that has been invaded and conquered.
The New York Timesreported that Trump planned a mass deportation of illegal immigrants: "an extreme expansion of his first-term crackdown on immigration", including "preparing to round up undocumented people already in the U.S. on a vast scale and detain them in sprawling camps while they wait to be expelled", and that it "amounts to an assault on immigration on a scale unseen in modern American history".[226]To achieve the goal of deporting millions per year, Trump has stated his intent to expand a form of deportation that does not require due process hearings which would be accomplished by invoking theAlien Enemies Act of 1798, and invoking theInsurrection Act of 1807to allow the military to apprehend migrants.[226]
During rallies, Trump has blurred the distinction between legal and illegal immigrants, and has promised to deport both.[6][7] Trump has stated he will deport between 15 and 20 million people, although the estimated number of undocumented immigrants is only 11 million.[227] The American Immigration Council has estimated this would cost at least $315 billion, or $967.9 billion over a decade, and the Brookings Institution and Peterson Institute for International Economics have estimated it would result in a decrease in employment for American-born workers.[228]
Trump would reassign federal agents to Immigration and Customs Enforcement and deputize local police officers and sheriffs, agents of the Bureau of Alcohol, Tobacco, Firearms and Explosives, the Drug Enforcement Administration, and National Guard soldiers volunteered by Republican states, who would be sent to blue states.[230][226] Individuals would be placed in massive camps constructed with funds redirected from the military budget should Congress refuse to appropriate funding. ICE raids would be expanded to include workplace raids and sweeps in public places. Stephen Miller has stated that, following arrest, immigrants would be taken to "large-scale staging grounds near the border, most likely in Texas" to be held in internment camps prior to deportation.[230][231] The Trump team would also attempt to overturn the Flores settlement that prevents the indefinite holding of children.[226]
Trump promised to reinstate his ban on entry of individuals from certain Muslim-majority nations.[226] Trump has said he would build more of the border wall, and move thousands of troops currently stationed overseas to the southern border.[138] Other proposals include: banning visas of foreign students who participated in anti-Israel/pro-Palestinian protests; suspending the U.S. refugee program; directing U.S. consular officials to expand ideological screening of applicants deemed to have undesirable attitudes; revoking temporary protected status for individuals living in the U.S.; and ending birthright citizenship for babies born to undocumented parents.[232][226][142]
Throughout January and early February 2024, Trump successfully called on House and Senate Republicans to kill a bipartisan immigration deal to address theSouthern border crisisthat included several sought-after conservative proposals. He admitted that he did not want a deal to pass as it would be "another gift to the Radical Left Democrats" who "need it politically" and would impact a key plank of his reelection campaign.[233][234]
Trump told numerous lies about immigration on the campaign trail. A report on the 2024 election asserts that, "The topic of (illegal) immigration, more than any other issue, has been vital to Trump's political resurgence".[235]
We will crush the violent crime that's plaguing our cities and give our police the support, protection, resources, and respect they so dearly deserve. They will stop the crime.
Trump ran on apro-police"law and order" platform.[237]Calling out crime andhomelessnessin Democratic-run cities was a central message of his, which often devolved into exaggerated reports of violence and disorder overrunning the country.[85][238]Despite this, statistics consistently showed that violent crime had decreased since 2020.[239][240][241]Trump repeatedly made baseless claims of a "migrant crime wave" caused by the crisis at the Southern border.[239][242][243][244]
To resolve this imagined crime wave, he planned for mass deportations and more aggressive police use of force. He suggested sending the National Guard into crime-struck cities and reserving Justice Department grants for cities that adopted his preferred policing methods such as stop-and-frisk.[85][245] The former President voiced support for shooting suspected shoplifters and having police carry out "one really violent day" against those committing property crimes.[237][246] He pledged to expand use of the death penalty, including for drug dealers, smugglers, and migrants who kill American citizens and law enforcement officers.[138][238][247] Trump also advocated for the implementation of qualified immunity and full indemnification for law enforcement officers.[237] Regarding homelessness, he campaigned on banning urban camping and instead creating "tent cities" on inexpensive land. These would be staffed with doctors and social workers to help the homeless seek treatment.[142][238]
Trump repeatedly voiced support for outlawing political dissent and criticism that he considered misleading or that challenged his claims to power.[248][249] Trump and his allies have reportedly drafted executive orders to invoke the 1807 Insurrection Act on the first day of his presidency to allow the military to shut down civil demonstrations against him.[77] Campaigning in Iowa, Trump stated he would deploy the military in Democratic cities and states.[250]
Trump said his government would "crush"pro-Palestinian protests, deport pro-Palestinian demonstrators, and "set the movement back 25 or 30 years".[251]
Trump suggested investigating MSNBC and NBC's parent corporation Comcast should he return to office, calling their news coverage of him "treason".[193] Similarly, he pledged to prosecute Google for only displaying "bad stories" about him.[252] He also stated that ABC and CBS should lose their broadcast licenses and have their journalists sent to jail if they refused to name confidential sources.[253]
Trump's campaign has stated its intention to reinterpret existing Civil Rights-era protections for minorities to counter "anti-white racism". According toAxios, Trump's Justice Department would "push to eliminate or upend programs in government and corporate America that are designed to counter racism that has favored whites".[254]Trump has stated that there is a "definite anti-white feeling in the country". Trump's advisors have stated Trump will rescind Biden's Executive Orders designed to boost diversity and racial equity.[85]Trump pledged a federal task force to fight the “persecution against Christians in America”.[255]
Trump promised a rollback of trans rights.[256][257][138] Trump stated he will rescind Biden's Title IX protections "on day one" for transgender students using bathrooms, locker rooms, and pronouns that align with their gender identities.[258] Trump has stated that he will ask Congress to pass a bill stating that the U.S. will only recognize two genders as determined at birth, and has promised to crack down on gender-affirming care. Trump has stated that hospitals and health care providers that provide transitional hormones or surgery will no longer qualify for federal funding, including Medicare and Medicaid funding. Trump has stated he will push to prohibit hormonal and surgical intervention for minors in all 50 states.[138]
Trump's campaign has been more accepting of lesbian, gay, and bisexual rights. During the drafting of the Republican Party's 2024 presidential platform, he advocated for a more tolerant position on same-sex marriage and successfully removed language that supported conversion therapy.[259][260][261]
The great silent majority is rising like never before. And under our leadership, the forgotten man and woman will be forgotten no longer. You’re going to be forgotten no longer. With your help, your love and your vote, we will putAmerica first.
And today, especially in honor of our great veterans onVeterans Day, we pledge to you that we will root out thecommunists,Marxists,fascistsand theradical-leftthugs that live like vermin within the confines of our country, thatlie and steal and cheat on electionsand will do anything possible—they’ll do anything, whether legally or illegally, to destroy America and to destroy theAmerican dream.
The real threat is not from theradical right. The real threat is from the radical left. And it is growing every day. Every single day.
Donald Trump's campaign rhetoric received immense media coverage. According to myriad journalists and scholars, and even—to an extent—Trump's own team,[95][98] it was dark, vulgar, incendiary, and extreme, more so than that of any political candidate in U.S. history.[226][190][262] His rhetoric was noted to have degenerated as the campaign progressed.[6][263][264][265][266] For instance, in a November 2023 rally, Trump said, "[W]e pledge to you that we will root out the communists, Marxists, fascists, and the radical-left thugs that live like vermin within the confines of our country".[190] Eleven months later, he stated, "I think the bigger problem is the enemy from within. … We have some very bad people. We have some sick people, radical left lunatics. And I think they’re the big—and it should be very easily handled by, if necessary, by the National Guard, or if really necessary, by the military".[267] Two days before the election, he told rallygoers, "[T]o get me somebody would have to shoot through the fake news [reporters]. And I don't mind that so much".[268] In deploying such vitriolic language, Trump aimed to energize his base as well as undecided voters, in order to maximize turnout.[95][98][246]
Trump's way of speaking throughout the campaign, especially in its final months, was described as aggressive and erratic. In fact, many commentators remarked that he "rambled" more than he spoke.[263][266][269][270][271]According to aNew York Timescomputer analysis, since theinitiationof Trump's political career in 2015, his speeches had grown "darker, harsher, longer, angrier, less focused, more profane and increasingly fixated on the past".[272]The former President would talk about one subject and then abruptly go off on atangent, often droning on about a different matter, and eventually return to the main subject.[270][271][273]For instance, in hisMadison Square Garden rally, he went on,
I was inOhioto try and get him [Senate candidateBernie Moreno] over the initial primary hump. And it was 45-mile-an-hour winds, and these suckers were blowing like—you ever try reading ateleprompterwhere it's moving about two feet in each—but I didn't have to worry about that because, even worse, they ended up blowing off the stage, a lot of it. So I'm now in the first sentence, and I got 28,000 people and millions of people watching on television. I got no teleprompter. And did I do a good job, Mr. Speaker? And he won. And he won, huh? Thank you, Matt. And he won. And so you're up there all alone. We don't go 32, 32, 32.[g]Oh, my god, whatever.Kamala Harrisis a train wreck who has destroyed everything in her path.[274]
In an October 2024 rally, Trump addressed this concern: "For weeks and weeks, I’m up here ranting and raving. Last night, 100,000 people, flawless. Ranting and raving. I’m ranting and raving. Not a mistake. And then I’ll be at a little thing, and I’ll say something, a little bit like ‘the,’ I’ll say, ‘dah,’ they’ll say, 'He’s cognitively impaired'".[271] Trump often mumbled words; he once confused "double entendre" with "double standard," referred to Assyrians as "Azurasians",[271] and, in 2022, mixed up JD Vance with Josh Mandel and thus produced "JD Mandel".[275] His speech teemed with hyperbole and superlatives.[272][276][277] Peter Baker of The New York Times wrote, "Nuance, subtlety, precision and ambiguity play no role in the version that Mr. Trump promotes with relentless repetition",[278] as Trump attacked Biden for being "the worst president in U.S. history",[279] and spoke of himself as "the greatest president" in U.S. history.[280] While the country was "in serious decline" under Democrats,[281] if elected, he would usher in a "golden age",[282] and November 5 would be remembered as "liberation day".[283] Vulgarities were also a hallmark of the former President's rhetoric.[263][271] In one of his final rallies, for example, he rambled about the size of Arnold Palmer's genitals.[266] Overall, Trump's language took on a more negative and violent tone, with a Conversation analysis finding that 1.6% of the total words uttered in his 2024 campaign denoted violence,[263] compared to 0.6% in 2016, and the aforementioned New York Times survey finding a 50% increase in negative words.[272]
Trump's campaign deployed dehumanizing, violent attacks against his political opponents.[284][285][286] His election rival, Harris, was a prime target. In a July 2024 interview, he said that she had claimed Indian heritage "until a number of years ago when she happened to turn Black, and now she wants to be known as Black".[287] When, in a Greensboro, North Carolina, rally, an audience member shouted, "She worked on the corner!", Trump laughed and retorted, "This place is amazing".[288] Oftentimes, Trump intentionally mispronounced her name as "Ka-MA-la",[289] or "Kamabla",[290] and called her "low IQ",[270] "mentally disabled",[98] and "a shit vice president".[266] Then-President Joe Biden was also targeted: he was framed as "an enemy of the state".[291] Other political opponents received similar treatment. In September 2023, Trump said that Mark Milley, his appointed chairman of the Joint Chiefs of Staff who had come to criticize him, deserved "DEATH!" for his phone calls with a Chinese general.[284] He urged deploying the military to fight "the enemy from within": the "enemy" being "radical left lunatics" and certain Democratic politicians.[292][293] Another enemy, according to him, was the media. He called Facebook "an enemy of the people",[294] and complained that the media was "so damn bad".[295] Moreover, Trump attacked the witnesses, judges, juries, and families of individuals involved in his criminal trials.[296][297][298] In the aftermath of his prosecution in New York, he called Judge Juan Merchan "a devil",[299] and urged his supporters to "go after" Letitia James, the New York attorney general who had filed a civil fraud suit against him.[193]
I’m not going to call this as a prediction, but in my opinion, theJewish peoplewould have a lot to do with a loss … If I don’t winthis election—and the Jewish people would really have a lot to do with that if that happens because if 40%, I mean, 60% of the people are voting forthe enemy…
Trump's campaign statements were connected to an embrace ofright-wing extremism.[302][303][265]He proclaimed that undocumented immigrants were "poisoning the blood of our country" and had "bad genes," which, according to some commentators, strikingly resembled Hitler andwhite supremacists'racial hygienerhetoric.[285][304][305]OnVeterans Day2023, he called some of his political opponents "vermin," which also seemed to echo Hitler andBenito Mussolini's language.[286][306][307]The former President repeatedly referred toNazi Germanythroughout the campaign, for instance, claiming that Biden was running a "Gestapoadministration",[308]and that hisindictmentsresembled prosecutions in Hitler's regime.[309]These statements were condemned by theAnti-Defamation League.[310][311]In May 2024, Trump's campaign posted an advertisement which showed hypothetical newspaper headlines in the event of a Trump victory. Under one headline titled "What's next for America?" was a subtitle that read, "German industrial strengthsignificantly increased after1871, driven by the creation ofa unified Reich".[312]Facing bipartisan criticism, the campaign deleted the video the next day.[313][314][315]
Some of Trump's statements were perceived as an open embrace of authoritarianism. In a December 2023 interview withSean Hannity, the former President said he would only be adictatoron "day one" of his presidency and not after.[316][317][318]His campaign aides later stated that he was merely attempting to "trigger the left" and media establishment.[319]Trump also stated that, in order to reverse his loss in the2020 election, theU.S. Constitutionhad to be terminated.[320]SeveralRepublicans, includingTed Cruz,[321]denounced this remark.[322]Speaking at a July 2024 faith-themedTurning Point Actionconference, the former President urged Christians to "get out and vote. In four years, you don't have to vote again. We'll have it fixed so good, you're not going to have to vote".[323][324]Trump publicly praised several dictators during his campaign.[325]In a December 2023 rally, he quotedVladimir Putincriticizing U.S. democracy.[326]The following year, he flatteredKim Jong Un: "Very strong guy … I got along great with him",[327]Xi Jinping: "[He's] a brilliant man. He controls 1.4 billion people with an iron fist",[328]andViktor Orbán: "There’s nobody that’s better, smarter or a better leader than Viktor Orbán. He’s fantastic".[329]
Scholars and commentators contended that Trump's rhetoric stemmed from populism. A common theme in his rallies was the struggle between "us"—the majority, or his supporters—and "them"—the elites, or his political enemies.[102][190][262][263] In contrast with previous runs, Trump stressed the "them," not the "us," who he claimed had targeted him;[190][262][263] "When they start playing with your elections and trying to arrest their political opponent — I can do that, too!", he once said.[190] This led the University of California, Los Angeles to deem Trump's 2024 brand of populism "negative populism". One of its studies found that it was less focused on policy, such as economic performance, and more on violent attacks on opponents.[262][277] He frequently attacked illegal immigrants, transgender people, and the elites in an attempt to create an outgroup to stir up fear and moral panic among his supporters.[262][330] The University of California, Berkeley tied this strategy to "authoritarian populism", elaborating, "[The] sense of fear and antagonism [promoted] leads people to accept authoritarian measures to protect themselves and their in-group".[330] Another effect of Trump's framing of certain people as an outgroup was airing the public's grievances, especially over the surge in illegal immigration and the political establishment. This turned him into the "ultimate" symbol of victimhood.[100][99]
In fact, a central motif of Trump's campaign was martyrdom. He portrayed himself as a victim of the "deep state" actively attempting to undermine him and the country.[331][278] His criminal trials made him, in his words, a "political prisoner," similar to Alexei Navalny.[95][332] Alongside martyrdom, a common motif was retribution. "I am your warrior. I am your justice. And for those who have been wronged and betrayed, I am your retribution," he said in March 2023.[333] He framed the election as "the final battle",[334] and his presidential campaign as a "righteous crusade" against "atheists, globalists and the Marxists".[190] Trump referred to the January 6 Capitol attack to back his retribution narrative. During rallies, imprisoned participants of the attack were brought up as patriotic "warriors" and "hostages," symbols of political injustice. Their cover of the U.S. national anthem, titled "Justice for All", would often be featured.[190][334][335][336] Furthermore, Trump's populism blended with nationalism, as his calls for retribution against illegal immigrants and globalist elites were enmeshed with calls to defend the American identity.[337]
A core feature of the former President's populist rhetoric was his defiance of norms of political speech. This was captured through vulgar insults against opponents and violent diction.[262][338][339] According to Lilie Chouliaraki and Kathryn Claire Higgins of the London School of Economics and Political Science, Trump spoke with "an irreverent, improvised and unencumbered brashness that suggests that he is saying out loud what everyone else is too afraid to say".[100] Robert C. Rowland, author of The Rhetoric of Donald Trump, opined that his breaking of rhetorical norms "can be seen as proof of authenticity, but if taken too far it can lead to ridicule, dealing a devastating blow to someone who has styled himself as the strongman protector of ordinary people".[340] The aforementioned populist overtones bore parallels to the rhetoric of authoritarian leaders. The former President's rhetoric was unprecedentedly vitriolic and extreme, to the point that some scholars and journalists labelled it as fascist,[78][264][341][342][343][344] comparable to that of Juan Perón,[190] Fidel Castro,[262][263] and Adolf Hitler.[265][305][314]
Christian nationalismalso defined Trump's rhetoric. In his rallies, he alleged thatChristianitywas being besieged and Christians were facingpersecutionby Democrats, and that he would guard it and reclaim its rightful role in U.S. society.[126][255][345]The former President and his allies appealed to Christians' grievances by calling out "woke indoctrination" in schools,trans rightsinitiatives, and even thecrisis at the Southern border. To this extent, their partisan conservative messaging and Christian messaging were indistinguishable.[126][255][346]Conservative pastorGuillermo Maldonadosaid of the election, "You know, we’re now in spiritual warfare … It’s beyond warfare between the left and the right. It’s betweengood and evil. There’s a big fight right now that is affecting our country and we need to take back our country".[346]Oftentimes, Trump cast himself as amessiah. Following hisassassination attempt in Pennsylvania, he claimed that "God saved me for a purpose, and that’s to make our country greater than ever before".[126][346]Conversely, the campaign demonized his opponents. Democrats were labelled as "evil" and "demonic",[255][268]and Harris as "theAntichrist".[347]To this end, Trump catered to the Christian andevangelicalvote.[126][345]
Throughout the campaign, Trump launched lie after lie, to the extent that journalists found it "especially difficult" to keep up with them.[348][271][277][349][350] He created an alternate reality: an America in which 15 million illegal immigrants, not 5 million, had entered the country under Biden, and in which inflation had gone up to 50%, not 9%.[271][349][348] During a 64-minute news conference Trump held in August 2024, NPR counted over 162 lies, misstatements, and vast exaggerations, an average of more than two per minute.[351] Many of Trump's lies verged on the bizarre.[278] He claimed that he was "the father of IVF",[352] that hydrogen cars randomly explode,[278] and that, in California, it was legal to rob a store provided that one stole under $950 worth of goods.[277] Trump repeatedly embraced conspiracies such as QAnon.[95][190][278] There was a strategy behind this persistent lying. By "flooding the zone with shit," Trump's campaign received unrivaled media attention, and better resonated with voters disillusioned by the Biden administration.[277][336][353] Vance himself admitted, "If I have to create stories so that the American media actually pays attention to the suffering of the American people, then that’s what I’m going to do".[353] Moreover, repeated lies tethered Trump's base even more closely to him,[354] fostering loyalty to the leader to the point that he could not be held accountable for his actions.[355] This method also paralleled the firehose of falsehood propaganda tactic.[351][356]
Another method was thebig lie,[357][358]defined as disinformation "so grand that it is difficult to believe that someone would have the gall to make [it] up".[359]From as early as 2020, he incessantlyclaimedthat that year's election was rigged, so much so that it developed into a big lie. This narrative would be repeated throughout the campaign.[277][357][360]He, and his allies, spoke of "election integrity" not just to motivate the Republican base, but tocast doubton the U.S. electoral process, with the ultimate aim of enfeebling democracy.[357][358]In the lead up to the2024 election, they made false claims of massive noncitizen voting by illegal immigrants in a Democratic operation to steal the election.[361][362]In reality,voter fraudis extremely rare.[363][364][365]Trump vilifiedmail-in votingandearly voting, two alleged culprits of voter fraud, even as Republicans were advising supporters to use those voting methods in the coming election.[366][367][368]When Trump was struck with criminal prosecutions, another big lie ensued—that he was completely innocent.[369][370][371]"I did everything right and they indicted me," he once said.[372]Political scientists Philip Moniz andWilliam Swannargue that the loyalty nurtured in Trump's supporters by his "rigged elections" myth enabled them to believe the "innocent" myth as well.[369]Some commentators described the former President's attribution of his "defeats" to a "rigged" system as a "heads I win, tails you cheated" strategy.[360][373]
There's one request. It's very important. Register to vote. OK? And get everyone you know and everyone you don't know, drag them to register to vote. There's only two days left to register to vote inGeorgiaandArizona, 48 hours. Like, text people now. Now. And then make sure they actually do vote. If they don't, this will be the last election.
Trump's campaign heavily relied on fearmongering.[235][375][376] He exaggerated the state of the U.S. economy, crime, and immigration to paint an image of a nation in ruins, a "failed" "Third World" country, in his words.[295][348] On Biden's economy, he alleged that inflation was the highest it had ever been.[377] On crime, commentators viewed Trump's version of the U.S. as "dystopian".[378][379] According to him, cities were consumed with terror,[85][239][378] with Washington, D.C., "absolutely plagued by numbers and crime that nobody’s ever seen before",[238] and San Francisco, a "great city 15 years ago," now "unlivable".[277] He claimed that hordes of criminal illegal immigrants were wreaking havoc on the U.S.,[244][375][380] a claim that has been described as a "myth".[243][381] Moreover, the former President accused Harris of being "in favor of the death of the American dream".[277] Trump made apocalyptic prophecies predicting imminent doom should he lose the election.[281][379][382] He warned that, with another Democratic administration, a second Great Depression would ensue,[383] World War Three would break out,[281][379] and the U.S. would be "finished".[384]
Two frequent targets of Trump's fearmongering were illegal immigrants and transgender people. He repeatedly used racial stereotypes and dehumanizing rhetoric to paint the influx of illegal immigrants as an assault—an "invasion"[348]—on the American public,[385][386] citing baseless accounts of their proclivity for crime.[242][243] Trump labelled illegal immigrants as subhuman:[387] "vile animals",[98] "savages",[6] and "predators".[6] At rallies, the former President stated that they would "walk in your kitchen, they'll cut your throat",[98] and "grab young girls and slice them up right in front of their parents".[6] He also peddled false claims that foreign leaders were deliberately emptying insane asylums to send "prisoners, murderers, drug dealers, mental patients, terrorists" across the Southern border.[226][348] On multiple occasions, Trump and Republicans promoted the conspiracy that Haitian immigrants in Springfield, Ohio, were looting and eating people's pets.[6][386] As a result of their efforts, dozens of bomb threats emerged targeting Springfield schools, hospitals, public buildings, and businesses.[388] Besides illegal immigrants, Trump used transgender people as scapegoats.[389] He attempted to incite a moral panic over their interference in politics and society,[389][390][391] falsely warning that children in schools were being forced into gender reassignment surgery,[392] and that trans women were unfairly infiltrating women's sports.[390][391] Aja Romano of Vox noted, "By unifying around the public’s negative perceptions of these groups, the Republican Party amasses power and control at all levels of government".[389] Fear drew voters wary of illegal immigration and transgenderism to sympathize with Trump's message, according to commentators.[98][246] It also energized conservative adherents of his.[383]
As of late November 2022,Quinnipiacreported that 34% of Americans expressed approval of Donald Trump's candidacy, including just 62% of Republicans.[393]Some two months after its inception, only 30 out of 271 congressional Republicans had endorsed him.[394]
Trump was challenged in the primaries byNikki Haley(February 14, 2023,[395]to March 6, 2024),Vivek Ramaswamy(February 21, 2023, to January 15, 2024), Asa Hutchinson (April 6, 2023, to January 16, 2024), andRon DeSantis(May 24, 2023, to January 21, 2024).
Other challengers, who withdrew before the primaries, werePerry Johnson(March 2, 2023, to October 20, 2023),Larry Elder(April 20, 2023, to October 26, 2023),Tim Scott(May 19, 2023, to November 12, 2023), Mike Pence (June 5, 2023, to October 28, 2023),Chris Christie(June 6, 2023, to January 10, 2024),Doug Burgum(June 7, 2023, to December 4, 2023),Francis Suarez(June 14, 2023, to August 29, 2023), and Will Hurd (June 22, 2023, to October 9, 2023).
From August 23, 2023, to January 10, 2024, there were five debates among the candidates in the campaign for the Republican Party's nomination for president. Trump was absent from all of them, and was not planning to attend the debates scheduled for January 18 and 21, 2024.[396] On January 16, when she and Ron DeSantis were the last challengers left, Nikki Haley announced she would not attend the January 18 debate unless Donald Trump took part in it. ABC News canceled that debate,[397] and CNN canceled the January 21 one.[398]
By mid-January 2024,Politicoreported that a majority of congressional Republicans had come out in favor of Trump.[399]
After winning the primaries in Washington, D.C. (March 3) and Vermont (March 5), Haley suspended her presidential campaign the day afterSuper Tuesday.[400]
National primary polling showed Trump leading by 50 points over other candidates during the Republican primaries.[401]After he won alandslide victoryin the2024 Iowa Republican presidential caucuses, Trump was generally described as being the Republican Party'spresumptive nominee for president.[402][403][404]On March 12, 2024, Trump officially became the presumptive nominee of the Republican Party.[405]
Although initially hesitant to back the former President's campaign,[37][38][39]most Republican officials quickly rallied behind Trump as the primaries progressed.[406][407][408]Many of his primary opponents came to endorse him.[409]Trump's criminal prosecutions andfirst assassination attemptcontinued to unite the Republican Party's support for the former President.[410][411][412]Besides Republican officials, many podcasters and social media influencers stood behind Trump.[413]Other prominent endorsements includedKid Rock, Jason Aldean,Kanye West, Buzz Aldrin,Mel Gibson, Hulk Hogan, and Amber Rose.[414][415]
Among former Republican presidents, vice presidents, and presidential and vice-presidential nominees, Sarah Palin was the only one to back Trump.[416] Notable Republican politicians who either opposed or declined to announce their support publicly include former president George W. Bush,[417] former vice presidents Mike Pence[418] and Dick Cheney,[419] former House Speakers John Boehner[420] and Paul Ryan,[421] as well as former representatives Liz Cheney[422] and Adam Kinzinger.[423] Some of Trump's 2016 and 2024 primary opponents such as Jeb Bush,[424] John Kasich,[425] Carly Fiorina,[426] Chris Christie,[citation needed] Asa Hutchinson,[427] and Will Hurd[428] also declined to endorse or openly opposed the campaign. Republican organizations such as 43 Alumni for America, Haley Voters for Harris, and The Lincoln Project all endorsed Harris.[429][430][431] Half of the members of Trump's cabinet did not support his run for president.[432][433]
Mike Penceserved as Trump's vice president from 2017 to 2021, as well as his running mate in 2020. However, the pair had a dramatic falling out on January 6, 2021, when Pence refused to follow Trump's orders to deny the certification of the 2020 election results. The President thereafter tweeted that Pence "didn’t have the courage to do what should have been done to protect our country and our constitution".[434][435]As early as March 2021,Bloomberg Newsreported that Trump had largely ruled out sharing a ticket with Pence in 2024.[436]At least sixteen names were raised as possible candidates for the position.
By June, the Trump campaign had reportedly delivered vetting paperwork to Burgum, Carson, Cotton, Donalds, Rubio, Scott, Stefanik, and Vance.[441] Ultimately, JD Vance was chosen to be Trump's running mate. Media analysts attributed the pick to an attempt to court Midwestern and white working-class voters. At 39, he also provided a counterbalance to Trump, 78 years old at the time. Vance's conservative stances, such as his isolationism and prior opposition to abortion even in cases of rape or incest, established the campaign's full commitment to Trumpism.[442][443][444] Vance was the first Ohioan to appear on a major party presidential ticket since John Bricker in 1944,[445] and the first veteran since John McCain in 2008.[446] He was also the first millennial and the first veteran of the Iraq war (and the wider war on terror) to appear on a presidential ticket.[446][447]
On July 15, 2024, Trump and Vance were officially named the Republican candidates for president and vice president at the Republican National Convention in Milwaukee.[448][449] Trump formally accepted the party's nomination in a 90-minute address on the convention's final night, just days after his assassination attempt in Pennsylvania.[450][451]
Donald Trump's campaign events were often described as "freewheeling", like a "rock show".[379] One report noted, "Trump’s speeches at rallies can stretch for two hours as he meanders between policy proposals, personal stories and jokes, attacks on his opponents and complaints that he is being persecuted by the courts, and dire warnings about the country’s future".[95] The New York Times highlighted an average rally length of 82 minutes, compared with 45 minutes in 2016.[272]
The most prominent songs used by Trump's campaign were "God Bless the U.S.A." by Lee Greenwood,[452] "Hold On, I'm Comin'" by Sam & Dave,[453][454] "America First" by Merle Haggard,[455][456] and "Y.M.C.A." by Village People.[452][457] He also used music for which the artists and owners of copyrights were not compensated.[458][459] One such use—that of "Hold On, I'm Comin'"—resulted in a federal injunction barring Trump from playing it at his rallies any longer.[453][454] Frequent chants at his rallies were "Fight! Fight! Fight!"[460] and "USA!"[461]
From 2023 up until the 2024 election, Trump was engulfed in legal battles. Trump's prosecutions, unprecedented in the nation's history, only bolstered his support, according to commentators.[407][99] His funding surged, and the Republican Party grew ever more allegiant to him,[410][345] although some commentators warned that moderate Republicans may have been alienated.[345] Trump claimed his trial in New York was "rigged" and accused the Democratic Party of orchestrating his criminal trials to prevent him from returning to the White House, a claim for which there is no evidence.[462][299] In May 2024, Trump falsely claimed Joe Biden had been ready to kill him during the FBI search of Mar-a-Lago, misrepresenting standard Justice Department policy on use of force.[463] These statements played into his attempts to project himself as a martyr.[331]
In the wake of theJanuary 6 Capitol attack, many of Trump's social media accounts were banned.[464][clarification needed]In November 2022,Elon Musk, who had recently taken ownership ofTwitter, reinstated Trump's accounts.[464]A few months later,FacebookandInstagramfollowed suit.[465]
In October 2021, Trump founded his own social media platform, Truth Social, to counter the social media bans imposed on him. He would primarily use it to spread campaign messages.[466] The platform is considered alt-tech.[clarification needed]
In November 2022, Kanye West, then a candidate for the 2024 election, dined with Trump at Mar-a-Lago, alongside white nationalist Nick Fuentes.[467][468] West had recently posted a series of antisemitic statements on social media.[469] Trump, for his part, claimed that this meeting was unexpected.[467] At one point during the dinner, West asked Trump to be his running mate, after which the former President "started basically screaming at [West] at the table telling [him] [he] was going to lose".[470] Republican candidates Asa Hutchinson and Mike Pence openly rebuked Fuentes' presence at the dinner,[471][472] and Mitch McConnell went as far as to suggest that Trump would not win the election because of the dinner.[473] By October 2023, West had suspended his campaign.[474] He later endorsed Trump.[414][475]
On January 28, 2023, Trump held his first campaign events in South Carolina and New Hampshire.[476][477]
In March 2023, he was indicted on 34 felony counts of falsifying business records stemming from hush money paid to porn star Stormy Daniels in an attempt to influence the 2016 presidential election.[478][479] This marked the first of his four indictments.[480][481] His second came in June, when a federal grand jury indicted the former President for improperly retaining classified documents at his Mar-a-Lago residence and destroying evidence related to the government probe.[482] In August, Trump was indicted by the federal government for his illegal attempts to remain in power following the 2020 election.[483] Finally, later in August, Georgia separately indicted him for criminal conspiracy and fraud over his efforts to overturn the state's results in the 2020 election.[488][489] His booking in Georgia resulted in a mugshot being taken of him, which was widely circulated on the internet and raised his campaign over $7 million within two days of its release.[484][485][486][487] Trump denied wrongdoing in all four cases.[480] Besides these indictments, he was found liable in a civil lawsuit for sexual abuse and defamation against journalist E. Jean Carroll.[490]
Trump spoke at the2024 Libertarian National Conventionin May, becoming the first president to address a third party convention in modern U.S. history.[491]He urged theLibertarian Partyto nominate him lest they "keep getting [their] 3% every four years".[492]In an attempt to court the crowd, the former President vowed to appoint aLibertarianto his cabinet and commuteRoss Ulbricht's prison sentence. However, his speech was blanketed with jeers; one attendee even held up a sign that read "No wannabe dictators!"[491][492][493]Biden did not attend the convention.[492]Come nomination day, Trump had been eliminated during balloting, andChase Oliverwas selected as the Libertarian nominee for president.[494]
In May 2024, Trump was convicted on all 34 felony counts in the Stormy Daniels case, making him the first former U.S. president ever to be convicted of a crime.[495][496] After the election, he was sentenced to an "unconditional discharge," shielding him from punishment or incarceration.[497]
On June 27, 2024, thefirstoftwo debatesin the election season took place, with Trump up against Joe Biden.[498]The debate was defined by Biden's "disastrous" performance, as he rambled incoherently and repeatedly lost his train of thought. This exacerbated already-existingconcerns about the President's fitness to serve.[498][499][500][501]With Trump comfortably proclaimed the winner of the debate[502][503]—anIpsos/FiveThirtyEightpoll found that 60% of respondents thought that Trump won, compared with only 21% for his opponent[504]—the former President's lead innational pollsexpanded,[505]andDemocraticofficials began calling for Biden to drop out of the race.[506][507][508]Nevertheless, some commentators pointed out that Biden's poor performance merely overshadowed Trump's persistent lying throughout the debate.[500][501]Doyle McManusofThe Los Angeles Timesopined that "nobody won, but Biden clearly lost".[509]
In the legal case Trump v. United States, Trump argued that the Constitution allows for absolute immunity for all presidential actions—even criminal ones—unless the president were first impeached by the House and convicted by the Senate.[188][510] His argument was rejected by most political commentators and two lower courts. In a unanimous ruling by the three-judge panel of the U.S. Court of Appeals for the District of Columbia, the court stated that if Trump's theory of constitutional authority were accepted, it would "collapse our system of separated powers" and put a president above the law.[511][188] Nevertheless, in July 2024, the U.S. Supreme Court sided with Trump in a partisan 6–3 decision. It determined that the Constitution affords the President absolute immunity for acts within his constitutional purview and presumptive immunity for official acts, but provides no immunity for unofficial acts.[512]
In the span of three months, Trump faced two assassination attempts. On July 13, 2024, during a rally near Butler, Pennsylvania, he was shot and wounded in the upper right ear. He was escorted out of the venue by the U.S. Secret Service.[513] The Secret Service swiftly killed the identified shooter, Thomas Matthew Crooks.[514][515] Crooks also shot three other spectators, including 50-year-old firefighter Corey Comperatore, who was killed instantly.[516] The assassination attempt was memorialized in a series of photographs by Evan Vucci. These depict Trump being escorted off the podium, with blood coating his cheek, his fist raised defiantly, and an American flag fluttering in the background. Vucci's photographs became a symbol of the campaign.[99][517][518] Commentators stated that the attempted assassination helped project Trump as a martyr,[519][520] with Zachary Basu of Axios writing that it "turbocharge[d] the persecution narrative Trump has placed at the center of his campaign".[519] It also cemented Republican unity behind his campaign.[411][412]
Later, on September 15, 2024, Trump became the target of a second assassination attempt at the Trump International Golf Club in West Palm Beach, Florida.[521] A Secret Service agent walking the course ahead of Trump's golf party saw a rifle barrel protruding from the bushes near one of the holes and opened fire in that direction. The perpetrator, Ryan Wesley Routh, fled the scene but was quickly apprehended.[522] Routh was eventually charged with attempted first-degree murder and terrorism.[523]
July and August 2024 saw three of the most high profile endorsements of the Trump campaign. Just after the assassination attempt in Pennsylvania, tech magnateElon Muskvowed to support the former President.[524]He would become the campaign's biggest donor.[61]As the owner of Twitter, Musk weaponized the platform tocirculateright-wing talking points and disinformation, and amplify Republican accounts.[525][526]In August 2024,Robert F. Kennedy Jr.suspended hisindependent presidential campaignand endorsed Trump.[215]On the campaign trail, Kennedy's trademark message was "Make America Healthy Again." He and Trump pledged to resolve thechronic disease epidemicby targetingbig pharmaceutical companies,ultraprocessed foods, and certain chemical additives to foods.[216][217][527]Former RepresentativeTulsi Gabbardsoon followed suit. Having previouslycontestedthe Democratic nomination in2020, she switched allegiance to the Republican Party, citing theBiden administration's foreign policy failures and "abuse of power".[528]
On July 21, 2024, following his poor debate performance, Biden withdrew from the election. He endorsed Kamala Harris as his replacement.[529][530] On August 5, she became the Democratic Party's official presidential nominee,[531] and Minnesota governor Tim Walz was chosen to be her running mate.[532] Trump criticized Biden's withdrawal and Harris' subsequent accession without a competitive nominating process, calling it a "coup".[533][534][535] He and his allies would point out that Harris "got zero votes [in the primaries]".[295][536] Biden's withdrawal reportedly caused problems within Trump's campaign.[537] In fact, Maggie Haberman and Jonathan Swan of The New York Times characterized the ensuing situation as the campaign's "worst three weeks".[538] This was reflected in national polling. By late August, with Harris as a presidential candidate, polls had her beating Trump by multiple points, giving the Democratic Party back the lead it had lost under Biden.[539][540][541]
During an August 2024 visit to Arlington National Cemetery, Trump's entourage brought a photographer and videographer into Section 60 to capture promotional content for his campaign. However, such content is not permitted in Section 60. When a cemetery official attempted to stop them, two campaign staffers, Justin Caporale and Michel Picard, pushed and verbally abused him.[542][543] Later in August, Trump's campaign released a TikTok video of Trump's Section 60 visit, as well as photos of the former President standing next to graves while smiling and giving a thumbs up.[544][545] Facing criticism,[546][547] the campaign denied all wrongdoing, noting that family members accompanying Trump during the visit had agreed to be "respectfully captured".[548][549] Vance criticized the media and Democratic party for "[making] a scandal out of something where there really is none",[550] adding that "[Harris] wants to yell at Donald Trump because he showed up … She can go to hell," even though Harris had not commented on the incident at that point.[551] The U.S. Army issued a statement rebuking the Trump campaign, followed by similar ones from the Defense Department, the Green Beret Foundation, Iraq and Afghanistan Veterans of America, and VoteVets.org.[552]
China, Iran, and Russia all interfered in Trump's campaign and the broader presidential election, with the general aim of spreading disinformation and propaganda and, ultimately, fomenting distrust in the electoral process and discrediting American democracy. Networks of fake social media accounts and websites were deployed.[553][554][555] These networks, described by The New York Times as "sophisticated," were state-run and targeted at particular voter demographics. China, through its Spamouflage influence operation, promoted fabricated content related to divisive political issues, such as that of pro-Palestine protesters.[554] It created fake pro-Trump accounts,[556] but its interference in the election did not necessarily favor any particular candidate.[554] In August 2024, Trump's campaign confirmed that it had been hacked by Iranian operatives. According to a Microsoft report issued the previous day, an Islamic Revolutionary Guard Corps intelligence unit had conducted a spear phishing attack.[557][558] Iran attempted to tip the race in Biden and Harris' favor, even though they too were targeted in disinformation campaigns.[554][559] Russia disseminated Trump-aligned content, such as a video purporting to show voter fraud in Georgia, to aid the former President's effort. Analysts noted his campaign had taken a softer stance on helping Ukraine in its war with Russia relative to Harris'.[553][554]
On September 10, 2024, Trump debated Harris in the second and final presidential debate of the election season, and the only debate between the two candidates.[560][561] He had previously been reluctant to attend another debate unless it was hosted by Fox News,[562] but eventually relented in August.[563] During the debate, Trump made several "extreme" false claims.[564][340] He alleged that some states allowed post-birth abortions, and that Haitian migrants in Springfield were looting and eating residents' pets. This prompted the debate moderators to fact check him.[561][564] In response, Trump and his allies criticized these fact checks as "unfair",[565][566] arguing that Harris had also made false statements yet was never fact checked.[565][567] Subsequent polling overwhelmingly concluded that Trump lost,[568][569][570] with Reuters, for instance, finding that only 24% of respondents thought that he won, as opposed to 53% for Harris.[571] Even Fox News writer Doug Schoen considered Harris the "clear winner".[572] Trump's brazenly false statements, constant dwelling on the past, such as his claims of voter fraud in the 2020 election, and overall irascible and uncomfortable demeanor were the preeminent cited reasons for his loss.[97][570][573] Nonetheless, the debate's impact on the race was questionable. Polling numbers for both candidates did not change much following the debate, with Harris acquiring a minor gain.[570][574][541] Later on, Trump confirmed that he would not participate in another debate.[575]
In late September 2024, Trump's campaign launched a 30-second advertisement excoriating Harris for supporting taxpayer-funded sex changes for prisoners. It features footage of her saying so in a 2019 interview. Notably, it concludes with the narrator declaring, "Kamala is for they/them. President Trump is for you". This was one of several Trump ads painting his opponent as an out-of-touch radical and playing on Americans' general skepticism over transgender rights.[576][577][578] The ad and its variations aired over 30,000 times.[579] In retrospect, many commentators considered it one of the most effective ads of the election season.[99][577][578] Future Forward, a Democratic Super PAC, found that it shifted the race by 2.7 percentage points after viewers watched it,[99] although other analyses showed mixed results.[389][578][580]
In October 2024, Trump appeared on The Joe Rogan Experience (JRE), the most popular podcast in the U.S. The interview covered a wide range of topics, both political (the 2020 election, Kim Jong Un) and apolitical (aliens, The Apprentice, et cetera).[109][111] Trump had already committed much time to podcasts, including Theo Von's and Logan Paul's, to a greater extent than Harris. The JRE appearance helped him appeal to young male voters.[110][581] Within a day, it had amassed 27 million views on YouTube,[582] more than the opening game of the World Series.[583]
Trump held his last major campaign event atMadison Square Garden,Manhattan, one week before the election.[584][585][347]Among its featured speakers were comedianTony Hinchcliffe, who prominently calledPuerto Ricoa "floating island of garbage," suggested that Harris had worked as a prostitute, and stated that he and one of his black friends had "carved watermelons" together, as well as Trump's friend David Rem, who referred to Harris as "theAntichrist". The rally was noted for its vicious rhetoric; Democrats tied it to aNazi rally held at the same venue in 1939.[347][586][587]The New York Timeslabelled Trump's rally as a "Closing Carnival of Grievances, Misogyny and Racism".[588]Hinchcliffe's comments, particularly the "floating island of garbage" remark, proved especially controversial.[584]He responded to Democratic outcry onTwitter, stating they "have no sense of humor" and that he was merely calling out Puerto Rico'slandfillproblem.[587]
In the final days of the campaign, Trump staged two stunts. First, in late October, he worked a half-hour staged shift atMcDonald'sserving fries. This was done as a response to Harris' claimed time working at the fast food chain while in college, which Trump denied.[589][590]With the stunt, Trump "troll[ed]" her and "cosplay[ed] as a minimum wage worker".[590]Writing forThe Spectator, Juan P. Villasmil remarked that the visit managed to cast doubt on his opponent's working-class appeal.[591]On the other hand,Jonathan Cohnin aNew Republicpodcast considered it "almost too casual, it’s a bit insulting".[592]A few days later, Trump, dressed in a bright orange vest, rode on a personalized garbage truck. This too served to counter a Democratic opponent's statement, namely, Biden calling Trump's supporters "garbage." He subsequently held a rally donning the vest.[593][594]
Trump held his final campaign rally in Grand Rapids, Michigan, on the day before the election.[595][596] At this point, he and Harris were roughly even in the polls,[597][595][598] with the gap that had opened in the aftermath of Biden's withdrawal having significantly narrowed.[541] To close off nine years of campaigning,[596] Trump delivered one last message to his supporters:
Almost everybody is here. I have people that have been to more than 300 rallies. They're here tonight. They were at the one this morning and the one tonight. They actually missed two of them. I can't even believe it. But these are incredible people and incredible patriots. But we stand on the verge of the four greatest years in American history. With your help, we will restore America's promise that we will take back the nation that we love. We love this nation. We are one people, one family, andone glorious nation under God. We will never give in. We will never give up. We will never back down, and we will never, ever surrender. Together, we will fight, fight, fight, and we will win, win, win.November 5thtoday will be the most important day in the history of our country. And together, we will make America powerful again. We will make America wealthy again. We will make America healthy again. Bobby Kennedy Jr. We will make America strong again. We will make America proud again. We will make America safe again. And we willmake America great again. I love you. I love you all.[599]
Donald Trump's campaign was successful. He won the 2024 presidential election with 312 electoral votes and 49.8% of the popular vote.[600] He carried 31 of the 50 states,[601] including all seven swing states.[602] One of them, Nevada, had last gone to the Republican candidate in 2004.[603] Trump's victory was "decisive";[604][605][606] he was the first Republican since George W. Bush in 2004 to win the national popular vote,[607] as well as the first non-incumbent Republican since George H. W. Bush in 1988 to do so.[608] All 50 states, as well as Washington, D.C., shifted toward the Republican Party relative to the 2020 presidential election.[601][609] However, Trump's triumph was not a landslide.[610][611][612] He won only a plurality of the popular vote, and his 49.8% share was one of the slimmest for a winning candidate in American history.[612][613]
Trump became the second president to be reelected to a non-consecutive term, after Grover Cleveland in 1892.[614] Aged 78 on election day, he remains the oldest candidate ever elected to the presidency.[615] JD Vance became the first Ohio native to be elected to the vice presidency since Charles Dawes in 1924, the first veteran since Al Gore in 1992, as well as the first to have facial hair since Charles Curtis in 1928.[445][616] Trump was inaugurated on January 20, 2025, as the 47th president of the United States, and Vance as the 50th vice president of the United States.[617]
|
https://en.wikipedia.org/wiki/Donald_Trump_2024_presidential_campaign#Use_of_artificial_intelligence
|
The Russian state and government interfered in the 2024 United States elections through disinformation and propaganda campaigns[1] aimed at damaging Joe Biden, Kamala Harris, and other Democrats while boosting the candidacy of Donald Trump and other candidates who support isolationism and undercutting support for Ukraine aid and NATO.[2][3][4][5][6] Russia's efforts represented the most active threat of foreign interference in the 2024 United States elections and followed Russia's previous pattern of spreading disinformation through fake social-media accounts and right-wing YouTube channels[7][8] in order to divide American society and foster anti-Americanism.[9][10] On September 4, 2024, the U.S. Department of Justice indicted members of Tenet Media for having received $9.7 million as part of a covert Russian influence operation to co-opt American right-wing influencers to espouse pro-Russian content and conspiracy theories. Many of the followers of the related influencers were encouraged to steal ballots, intimidate voters, and remove or destroy ballot drop-offs in the weeks leading up to the election.[11][12]
Russia interfered in the2016,2018,2020, and2022[13][14]United States elections.
The Russian government's goals in 2016 were to sabotage the presidential campaign of Hillary Clinton, boost the presidential campaign of Donald Trump, and increase political and social discord in the United States. According to the U.S. intelligence community, the operation, code-named Project Lakhta,[15][16] was ordered directly by Russian president Vladimir Putin.[17][18] The "hacking and disinformation campaign" to damage Clinton and help Trump became the "core of the scandal known as Russiagate".[19] The 448-page Mueller Report, made public in April 2019, examined over 200 contacts between the Trump campaign and Russian officials but concluded that there was insufficient evidence to bring specific "conspiracy" or "coordination" charges against Trump or his associates.
TheUnited States Intelligence Communityconcluded in early 2018 that the Russian government was continuing the interference it started during the 2016 elections and was attempting to influence the 2018 mid-term elections by generating discord throughsocial media.Primariesfor candidates of parties began in some states in March and would continue through September.[20]The leaders of intelligence agencies noted that Russia is spreading disinformation through fake social media accounts in order to divide American society and foster anti-Americanism.[9][10]In 2022, it was reported that aFederal Election Commissioninvestigation had found thatAmerican Ethane Company, which had received investments from Russian oligarchs, had contributed Russian money to US political candidates in the 2018 midterm elections, largely in Louisiana. FEC commissionersEllen WeintraubandShana M. Broussardcriticized the Republicans in the FEC for a "slap on the wrist" civil penalty.[21]
Russian interference in the 2020 United States elections was a matter of concern at the highest level ofnational securitywithin the United States government, in addition to thecomputerandsocial mediaindustries.[22][23]In February and August 2020,United States Intelligence Community(USIC) experts warned members of Congress that Russia was interfering in the2020 presidential electionin then-PresidentDonald Trump's favor.[24][25][26]USIC analysis released by the Office of theDirector of National Intelligence(DNI) in March 2021 found that proxies ofRussian intelligencepromoted andlaunderedmisleading or unsubstantiated narratives aboutJoe Biden"to US media organizations, US officials, and prominent US individuals, including some close to former President Trump and his administration."[27][28][29]The New York Timesreported in May 2021 that federal investigators inBrooklynbegan a criminal investigation late in the Trump administration into possible efforts by several current and former Ukrainian officials to spread unsubstantiated allegations about corruption by Joe Biden, including whether they had used Trump's personal attorneyRudy Giulianias a channel.[30]
In September 2023, a declassified intelligence report identified a top aide to Putin hiring three contractors to conduct an online disinformation campaign to reduce western support for Ukraine. One proposal, called the "Good Old U.S.A. Project", aimed to influence the 2024 presidential election. It sought to use hundreds of fake online accounts and eighteen seemingly apolitical "sleeper groups" across six swing states that would wait until the right moment to distribute bogus news stories. The proposal's author alleged that an isolationist view of the Ukraine war had become "central" to the presidential race, and that Russia must "put a maximum effort to ensure that the Republican point of view (first and foremost the opinion of Trump's supporters) wins over the U.S. public opinion".[31]
A declassified intelligence report in December 2023 assessed with "high confidence" that Russia interfered during the 2022 midterms in efforts that grew from its prior attempts during the 2018 midterms. Efforts were described as seeking "to denigrate the Democratic Party before the midterm elections and undermine confidence in the election, most likely to undermine US support for Ukraine". It highlighted efforts to delay a withdrawal from the Ukrainian city of Kherson until after the midterms to avoid giving a named political party a political win, targeting constituencies more sympathetic to Russia's "traditional values", and weakening confidence in Western democratic institutions by casting "aspersions on the integrity of the midterm elections, including by claiming that voting software was vulnerable, Americans expected cheating to undermine the midterm elections, and Democrats were stealing the elections".[13]
Senior officials with theOffice of the Director of National Intelligencedescribe Russia's 2024 efforts as "more sophisticated than in prior election cycles".[32]Rather than simply relying on fake accounts, Russian tactics involve co-opting real American right-wing influencers to spread pro-Kremlin propaganda narratives to Americans.[11]Officials from the ODNI and FBI have outlined Russia's use ofgenerative artificial intelligenceto denigrate Harris with doctored and fake text, images, video, and audio content and outlined efforts to promote divisive content to spreadanti-Americanism. Officials have assessed that Russia is attempting to fool unwitting Americans into spreading its messages and is imitating websites of established media and using human commentators to increase traffic towards those sites, which have content generated by artificial intelligence.[33]
According to disinformation experts and intelligence agencies, Russia spread disinformation ahead of the 2024 election to damage Joe Biden and Democrats, boost candidates supporting isolationism, and undercut support for Ukraine aid and NATO.[2][3]American intelligence agencies have assessed that Russia prefers Trump to win the election, viewing him as more skeptical of U.S. support for Ukraine.[4][5]Following thewithdrawal of Bidenfrom the presidential race,Microsoftreported that Russian intelligence "struggled to pivot" to attacking Harris, but by late August and early September videos attacking Harris and her supporters started to appear.[6]In late October 2024, it was reported that Russia was usingRedditand far-right forums to target potential Trump supporters in swing states, focusing on Hispanic voters and the gaming community.[34]
In August 2024, theFederal Bureau of Investigationraided the homes of formerUnited Nationsweapons inspectorScott Ritterand political advisorDimitri Simesfor their connections to Russian state media.[35]Indictments against Dimitri Simes and his wife Anastasia Simes were announced in early September. The two were charged with laundering funds and violating sanctions in order to benefit the state-controlled broadcasterChannel One Russia, as well as violating sanctions to benefit a Russian oligarch.[36][37][38]
On September 4, 2024, the United States publicly accused Russia of interfering in the 2024 election and announced several steps to combat Russian influence, including sanctions, indictments, and the seizure of Doppelganger-linked web domains used to spread propaganda and disinformation.[39] Two employees of the Russian state-owned propaganda network RT (Kostiantyn Kalashnikov and Elena Afansayeva) were indicted for conspiracy and violations of the Foreign Agents Registration Act for operating a money laundering operation that had sent at least $9.7 million[12] to support the creation and distribution of propaganda videos on American social media.[40][4][5][41] The indictment revealed that a key Russian tactic for interfering with the 2024 United States presidential election was to recruit right-wing influencers. Right-wing podcasters and influencers who were paid for the creation of pro-Russia content included Tim Pool, Dave Rubin, and Benny Johnson, among others.[40][42] The indictment does not name the influencers, who claim not to have known about any Russian ties.[40] This prompted YouTube to remove several channels,[7][43] and Tenet Media, a company implicated in the affair, abruptly shut down.[44]
On September 13, the United States, Canada, and Britain announced new sanctions to cut off financing for disinformation operations and accused Russian state-owned broadcast company RT as acting as a covert arm of Russian intelligence and taking orders from the Kremlin. The announcement highlighted RT's cooperation with theFSB, theSocial Design AgencyandStructura, and highlighted its efforts in other countries across the globe to subvert democratic processes and shift opinion towards pro-Russian viewpoints.[12]In response to the indictment,Metaand YouTube banned RT channels and other Russian media outlets.[45]
On September 17, Microsoft reported that Russian operatives had intensified attacks against Kamala Harris by creating videos highlighting "outlandish conspiracy theories" aimed at stoking racial and political divisions. Videos mentioned in the report, which had been viewed millions of times, included a fake video of a Harris supporter attacking an attendee at a Trump rally and another staged video that falsely claimed Harris had paralyzed a young girl in a hit-and-run accident in 2011. The latter video was promoted through a fake website masquerading as a local San Francisco media outlet.[6][46][47][48]
On September 23,Reuterscited a US Intelligence official saying that of the foreign adversaries, Russia was creating the most AI content to influence the 2024 election and improve Donald Trump's chances of winning.[49]The anonymous official from the Office of the Director of National Intelligence reported artificial intelligence was officially being used to create negative media, often posted under the guise of fake U.S. news publications.[47]Though officials continue to monitor the use of AI in election interference, they currently assess these efforts as "a malign influence accelerant, not yet a revolutionary influence tool."[48]
By October, state-run media campaigns by Russia had spread falseconspiracy theories about the 2024 Atlantic hurricane seasonthatThe Associated Pressdescribed as using "social media and state news stories to criticize responses to past U.S. natural disasters" and sow division among Americans,[50]including the spread of AI-generated images of flooding damage atCinderella CastleatWalt Disney World, among other images.[51]
On October 19, the State Department announced a $10 million reward for information leading to any foreign individual or entity engaging in election interference. It also highlighted Russian media company Rybar LLC, which it said attempts to "sow discord, promote social division, stoke partisan and racial discord, and encourage hate and violence in the United States" and advance "pro-Russian and anti-Western narratives". It specifically singled out nine individuals involved in the company's malign influence operations and encouraged individuals to contact the Rewards for Justice tipline.[52]
On October 21, Wired reported that the Russian propaganda network Storm-1516 had been spreading fabricated claims about Democratic vice-presidential candidate Tim Walz.[53] Experts on disinformation campaigns had also linked Storm-1516 to the conspiracy theory about Harris' supposed hit-and-run accident.[54] Two days later, the Washington Post reported that John Mark Dugan, a former deputy sheriff of Palm Beach County, had been paid by the GRU to produce misinformation attacking the Harris campaign.[55] On October 25, the U.S. Intelligence Community assessed that Russia had made a fake, viral video of mail-in ballots for Trump being ripped up and burned in Pennsylvania.[56]
On November 1, U.S. intelligence confirmed that Russia was behind a fake, viral video of supposed Haitian immigrants saying they were voting multiple times for Harris in Georgia. They also confirmed Russia was behind a fake post on X claiming Harris and her husband had tipped offSean Combsabout an FBI raid in exchange for $500,000.[57]
On November 4, the Office of theDirector of National Intelligence(ODNI),FBI, and theCybersecurity and Infrastructure Security Agency(CISA) issued a joint statement that Russian "influence actors" were creating and circulating material "intended to undermine public confidence in the integrity of U.S. elections and stoke divisions among Americans", focusing their efforts on swing states.[58]
On November 5, Election Day, several non-credible bomb threats originating from Russia briefly disrupted voting in two polling places in Fulton County, Georgia.[59] Both re-opened after about 30 minutes. Republican Georgia Secretary of State Brad Raffensperger said Russian interference was behind the Election Day bomb hoaxes. In a statement, the FBI said it was aware of non-credible bomb threats to polling locations in several states, with many of them originating from Russian email domains.[60] The bomb threats were solely made against Democratic-leaning areas.[61] On the same day, U.S. federal officials again reported that Russian sources were actively engaged in "influence operations", citing disinformation in specific videos that falsely claimed Kamala Harris had taken a bribe and false news stories about the Democratic Party and election fraud in Georgia.[62][63][64]
On November 8, it was reported that one of the Russian email addresses behind Election Day bomb threats was used in June 2024 bomb threats targeting LGBTQ+ events in Massachusetts, Minnesota and Texas.[65]NBC Newsreported on the same day that, out of 67 known bomb threats in 19 counties on Election Day, 56 were in 11 highly populated counties where Joe Biden won the majority of the vote in the 2020 election. These counties were in Arizona, Wisconsin, Michigan, Georgia, and Pennsylvania,[66]all of which are consideredswing states.[67]Georgia alone had over 60 bomb threats on Election Day,[68]while Pennsylvania had threats in at least 32 counties.[69]While Russian email addresses were used for some of the threats, as of November 2024, the identity of the perpetrator or perpetrators is unknown.[66]
Bomb threats were also sent after Election Day in over half of the counties in Minnesota,[70]15 jurisdictions in Maryland,[71][72]and five counties in California.[71][72][73][74][75]
These bomb threats have been characterized by journalists as an escalation by Russia, designed to send a message aboutAmerican support for Ukraine.[76]
On November 7,Russia-1, a Russian state TV network, broadcast nude images of Melania Trump on a political talk show hosted byYevgeny PopovandOlga Skabeyeva.[77][78]
On November 11,Nikolai Patrushev, an aide to Putin, stated, "To achieve success in the elections, Donald Trump relied on certain forces to which he has corresponding obligations. And as a responsible person, he will be obliged to fulfill them. ... [In] January 2025, it will be time for the specific actions of the elected president. It is known that election promises in the United States can often diverge from subsequent actions."[79]He also referred tothe assassinations and assassination attempts against American presidents throughout history, and the attempts against Trump in particular, saying that "it is extremely important for U.S. intelligence agencies to prevent a repetition of such cases."[80]Slatedescribed this message as either a publicblackmailthreat against Trump or an attempt to weaken democracy in theWest.[81]
Several journalists reported that Artem Klyushin, who had previously been named in American government investigations about Russian interference during the 2016 election, had publicly listed on social media numerous recommended candidates for the cabinet of Trump's second administration, several of whom had since been nominated. These posts, directed at Trump and Elon Musk, also included policy recommendations that were similar to Trump's public statements.[82][83]
Aleksandr Duginhad celebrated Trump's re-election, stating that "'Putinism' has triumphed in the United States" and advocating for Russian victory in theRusso-Ukrainian War. He also said that "One of the ideologues ofTrumpism,Curtis Yarvin, has declared that it's time to establish a monarchy in the United States. If Republicans gain a majority in both houses, what could stop them?"[84]
|
https://en.wikipedia.org/wiki/Russian_interference_in_the_2024_United_States_elections
|
Machine translationis a sub-field ofcomputational linguisticsthat investigates the use of software to translate text or speech from one natural language to another.
In the 1950s, machine translation became a reality in research, although references to the subject can be found as early as the 17th century. The Georgetown experiment, which involved successful fully automatic translation of more than sixty Russian sentences into English in 1954, was one of the earliest recorded projects.[1][2] Researchers of the Georgetown experiment asserted their belief that machine translation would be a solved problem within a few years.[3] In the Soviet Union, similar experiments were performed shortly after.[4] The success of the experiment ushered in an era of significant funding for machine translation research in the United States. The achieved progress was much slower than expected; in 1966, the ALPAC report found that ten years of research had not fulfilled the expectations of the Georgetown experiment and resulted in dramatically reduced funding[citation needed].
Interest grew instatistical models for machine translation, which became more common and also less expensive in the 1980s as available computational power increased.
Although there exists no autonomous system of "fully automatic high quality translation of unrestricted text,"[5][6][7]there are many programs now available that are capable of providing useful output within strict constraints. Several of these programs are available online, such asGoogle Translateand theSYSTRANsystem that powers AltaVista'sBabelFish(which was replaced byMicrosoft Bing translatorin May 2012).
The origins of machine translation can be traced back to the work ofAl-Kindi, a 9th-century Arabiccryptographerwho developed techniques for systemic language translation, includingcryptanalysis,frequency analysis, andprobabilityandstatistics, which are used in modern machine translation.[8]The idea of machine translation later appeared in the 17th century. In 1629,René Descartesproposed a universal language, with equivalent ideas in different tongues sharing one symbol.[9]
In the mid-1930s the first patents for "translating machines" were applied for by Georges Artsrouni, for an automatic bilingual dictionary usingpaper tape. RussianPeter Troyanskiisubmitted a more detailed proposal[10][11]that included both the bilingual dictionary and a method for dealing with grammatical roles between languages, based on the grammatical system ofEsperanto. This system was separated into three stages: stage one consisted of a native-speaking editor in the source language to organize the words into theirlogical formsand to exercise the syntactic functions; stage two required the machine to "translate" these forms into the target language; and stage three required a native-speaking editor in the target language to normalize this output. Troyanskii's proposal remained unknown until the late 1950s, by which time computers were well-known and utilized.
The first set of proposals for computer-based machine translation was presented in 1949 by Warren Weaver, a researcher at the Rockefeller Foundation, in his "Translation memorandum".[12] These proposals were based on information theory, successes in code breaking during the Second World War, and theories about the universal principles underlying natural language.
A few years after Weaver submitted his proposals, research began in earnest at many universities in the United States. On 7 January 1954 theGeorgetown–IBM experimentwas held in New York at the head office of IBM. This was the first public demonstration of a machine translation system. The demonstration was widely reported in the newspapers and garnered public interest. The system itself, however, was no more than a "toy" system. It had only 250 words and translated 49 carefully selected Russian sentences into English – mainly in the field ofchemistry. Nevertheless, it encouraged the idea that machine translation was imminent and stimulated the financing of the research, not only in the US but worldwide.[3]
Early systems used large bilingual dictionaries and hand-coded rules for fixing the word order in the final output, an approach that was eventually considered too restrictive given the linguistic developments of the time. For example, generative linguistics and transformational grammar were exploited to improve the quality of translations. During this period operational systems were installed. The United States Air Force used a system produced by IBM and Washington University in St. Louis, while the Atomic Energy Commission and Euratom, in Italy, used a system developed at Georgetown University. While the quality of the output was poor, it met many of the customers' needs, particularly in terms of speed.[citation needed]
At the end of the 1950s,Yehoshua Bar-Hillelwas asked by the US government to look into machine translation, to assess the possibility of fully automatic high-quality translation by machines. Bar-Hillel described the problem of semantic ambiguity or double-meaning, as illustrated in the following sentence:
Little John was looking for his toy box. Finally he found it. The box was in the pen.
The wordpenmay have two meanings: the first meaning, something used to write in ink with; the second meaning, a container of some kind. To a human, the meaning is obvious, but Bar-Hillel claimed that without a "universal encyclopedia" a machine would never be able to deal with this problem. At the time, this type of semantic ambiguity could only be solved by writing source texts for machine translation in acontrolled languagethat uses avocabularyin which each word has exactly one meaning.[citation needed]
Research in the 1960s in both theSoviet Unionand the United States concentrated mainly on the Russian–English language pair. The objects of translation were chiefly scientific and technical documents, such as articles fromscientific journals. The rough translations produced were sufficient to get a basic understanding of the articles. If an article discussed a subject deemed to be confidential, it was sent to a human translator for a complete translation; if not, it was discarded.
A great blow came to machine-translation research in 1966 with the publication of theALPAC report. The report was commissioned by the US government and delivered byALPAC, the Automatic Language Processing Advisory Committee, a group of seven scientists convened by the US government in 1964. The US government was concerned that there was a lack of progress being made despite significant expenditure. The report concluded that machine translation was more expensive, less accurate and slower than human translation, and that despite the expenditures, machine translation was not likely to reach the quality of a human translator in the near future.
The report recommended, however, that tools be developed to aid translators – automatic dictionaries, for example – and that some research in computational linguistics should continue to be supported.
The publication of the report had a profound impact on research into machine translation in the United States, and to a lesser extent theSoviet Unionand United Kingdom. Research, at least in the US, was almost completely abandoned for over a decade. In Canada, France and Germany, however, research continued. In the US the main exceptions were the founders ofSYSTRAN(Peter Toma) andLogos(Bernard Scott), who established their companies in 1968 and 1970 respectively and served the US Department of Defense. In 1970, the SYSTRAN system was installed for theUnited States Air Force, and subsequently by theCommission of the European Communitiesin 1976. TheMETEO System, developed at theUniversité de Montréal, was installed in Canada in 1977 to translate weather forecasts from English to French, and was translating close to 80,000 words per day or 30 million words per year until it was replaced by a competitor's system on 30 September 2001.[13]
While research in the 1960s concentrated on limited language pairs and input, demand in the 1970s was for low-cost systems that could translate a range of technical and commercial documents. This demand was spurred by the increase ofglobalisationand the demand for translation in Canada, Europe, and Japan.[citation needed]
By the 1980s, both the diversity and the number of installed systems for machine translation had increased. A number of systems relying onmainframetechnology were in use, such asSYSTRAN,Logos, Ariane-G5, andMetal.[citation needed]
As a result of the improved availability ofmicrocomputers, there was a market for lower-end machine translation systems. Many companies took advantage of this in Europe, Japan, and the USA. Systems were also brought onto the market in China, Eastern Europe, Korea, and theSoviet Union.[citation needed]
During the 1980s there was a lot of activity in MT in Japan especially. With thefifth-generation computer, Japan intended to leap over its competition in computer hardware and software, and one project that many large Japanese electronics firms found themselves involved in was creating software for translating into and from English (Fujitsu, Toshiba, NTT, Brother, Catena, Matsushita, Mitsubishi, Sharp, Sanyo, Hitachi, NEC, Panasonic, Kodensha, Nova, Oki).[citation needed]
Research during the 1980s typically relied on translation through some variety of intermediary linguistic representation involving morphological, syntactic, and semantic analysis.[citation needed]
At the end of the 1980s, there was a large surge in a number of novel methods for machine translation. One system was developed atIBMthat was based onstatistical methods.Makoto Nagaoand his group used methods based on large numbers of translation examples, a technique that is now termedexample-based machine translation.[14][15]A defining feature of both of these approaches was the neglect of syntactic and semantic rules and reliance instead on the manipulation of large textcorpora.
During the 1990s, encouraged by successes inspeech recognitionandspeech synthesis, research began into speech translation with the development of the GermanVerbmobilproject.
The Forward Area Language Converter (FALCon) system, a machine translation technology designed by theArmy Research Laboratory, was fielded 1997 to translate documents for soldiers in Bosnia.[16]
There was significant growth in the use of machine translation as a result of the advent of low-cost and more powerful computers. It was in the early 1990s that machine translation began to make the transition away from largemainframe computerstoward personal computers andworkstations. Two companies that led the PC market for a time were Globalink and MicroTac, following which a merger of the two companies (in December 1994) was found to be in the corporate interest of both. Intergraph and Systran also began to offer PC versions around this time. Sites also became available on the internet, such asAltaVista'sBabel Fish(using Systran technology) andGoogle Language Tools(also initially using Systran technology exclusively).
The field of machine translation has seen major changes in the 2000s. A large amount of research was done intostatistical machine translationandexample-based machine translation.
In the area of speech translation, research was focused on moving from domain-limited systems to domain-unlimited translation systems. In different research projects in Europe (like TC-STAR)[17] and in the United States (STR-DUST and DARPA Global autonomous language exploitation program), solutions for automatically translating parliamentary speeches and broadcast news were developed. In these scenarios the domain of the content was no longer limited to any special area; rather, the speeches to be translated covered a variety of topics.
The French–German projectQuaeroinvestigated the possibility of making use of machine translations for a multi-lingual internet. The project sought to translate not only webpages, but also videos and audio files on the internet.
The past decade has seen neural machine translation (NMT) methods replace statistical machine translation. The term neural machine translation was coined by Bahdanau et al.[18] and Sutskever et al.,[19] who published the first research on the topic in 2014. Neural networks needed only a fraction of the memory required by statistical models, and whole sentences could be modeled in an integrated manner. The first large-scale NMT system was launched by Baidu in 2015, followed by Google Neural Machine Translation (GNMT) in 2016. These were followed by other translation services like DeepL Translator and by the adoption of NMT technology in older translation services like Microsoft Translator.
NMT systems use a single end-to-end neural network architecture known as sequence-to-sequence (seq2seq), which consists of two recurrent neural networks (RNNs): an encoder and a decoder. The encoder RNN compresses the source sentence into an encoding vector, and the decoder RNN generates the target sentence from that encoding.[citation needed] Further advances in attention mechanisms, transformer architectures, and backpropagation techniques have made NMT flexible and widely adopted in most machine translation, summarization, and chatbot technologies.[20]
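For illustration, the following is a minimal sketch of the seq2seq idea described above, written in Python with NumPy. The vocabulary sizes, hidden dimension, token ids, and weights are invented toy values, and the weights are untrained, so the output is meaningless; the point is only to show how an encoder RNN folds the source sentence into a context vector from which a decoder RNN generates target tokens.

```python
# Minimal sketch of the seq2seq encoder-decoder idea with plain NumPy.
# Weights are random and untrained, so the output is meaningless; the point
# is only to show the flow: encoder RNN -> context vector -> decoder RNN.
import numpy as np

rng = np.random.default_rng(0)
SRC_VOCAB, TGT_VOCAB, HIDDEN = 10, 10, 16   # toy sizes (assumptions)

# Randomly initialised parameters (stand-ins for trained weights).
E_src = rng.normal(size=(SRC_VOCAB, HIDDEN))        # source embeddings
E_tgt = rng.normal(size=(TGT_VOCAB, HIDDEN))        # target embeddings
W_enc = rng.normal(size=(HIDDEN, HIDDEN)) * 0.1     # encoder recurrence
W_dec = rng.normal(size=(HIDDEN, HIDDEN)) * 0.1     # decoder recurrence
W_out = rng.normal(size=(HIDDEN, TGT_VOCAB)) * 0.1  # hidden -> vocab logits

def encode(src_ids):
    """Encoder RNN: fold the source sentence into a single context vector."""
    h = np.zeros(HIDDEN)
    for tok in src_ids:
        h = np.tanh(E_src[tok] + W_enc @ h)
    return h

def decode(context, bos=0, eos=1, max_len=10):
    """Decoder RNN: generate target tokens greedily from the context vector."""
    h, tok, out = context, bos, []
    for _ in range(max_len):
        h = np.tanh(E_tgt[tok] + W_dec @ h)
        tok = int(np.argmax(W_out.T @ h))   # greedy choice of next token id
        if tok == eos:
            break
        out.append(tok)
    return out

src_sentence = [2, 5, 7, 3]                 # token ids of a toy source sentence
print(decode(encode(src_sentence)))
```

A trained system would learn the embeddings and recurrence matrices from a large parallel corpus and typically add attention over the encoder states rather than relying on a single context vector.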
|
https://en.wikipedia.org/wiki/History_of_machine_translation
|
Statistical machine translation (SMT) is a machine translation approach in which translations are generated on the basis of statistical models whose parameters are derived from the analysis of bilingual text corpora. The statistical approach contrasts with rule-based approaches to machine translation as well as with example-based machine translation;[1] it superseded the earlier rule-based approach, which required the explicit description of each and every linguistic rule, was costly, and often did not generalize to other languages.
The first ideas ofstatisticalmachine translation were introduced byWarren Weaverin 1949,[2]including the ideas of applyingClaude Shannon'sinformation theory. Statistical machine translation was re-introduced in the late 1980s and early 1990s by researchers atIBM'sThomas J. Watson Research Center.[3][4][5]Before the introduction of neural machine translation, it was by far the most widely studied machine translation method.
The idea behind statistical machine translation comes from information theory. A document is translated according to the probability distribution p(e|f) that a string e in the target language (for example, English) is the translation of a string f in the source language (for example, French).
The problem of modeling the probability distribution p(e|f) has been approached in a number of ways. One approach which lends itself well to computer implementation is to apply Bayes' theorem, that is p(e|f) ∝ p(f|e) p(e), where the translation model p(f|e) is the probability that the source string is the translation of the target string, and the language model p(e) is the probability of seeing that target language string. This decomposition is attractive as it splits the problem into two subproblems. Finding the best translation ẽ is done by picking the one that gives the highest probability:

ẽ = argmax_e p(e|f) = argmax_e p(f|e) p(e)
For a rigorous implementation of this, one would have to perform an exhaustive search by going through all strings e in the target language. Performing the search efficiently is the work of a machine translation decoder that uses the foreign string, heuristics, and other methods to limit the search space while keeping acceptable quality. This trade-off between quality and time usage can also be found in speech recognition.
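As a toy illustration of the decision rule above, the sketch below scores a handful of hand-written candidate translations by the product of an assumed translation-model probability and an assumed language-model probability and picks the argmax. The candidate strings and all probabilities are invented; a real decoder searches a vastly larger hypothesis space.

```python
# Toy illustration of the noisy-channel decision rule: pick the candidate e
# that maximises p(f|e) * p(e). All probabilities below are invented.
french = "la maison est petite"

# Hypothetical translation-model scores p(f|e) for a few candidate strings e.
translation_model = {
    "the house is small": 0.20,
    "small the is house": 0.20,   # same words, so a crude TM may score it alike
    "the home is little": 0.15,
}

# Hypothetical language-model scores p(e): fluency of the English string.
language_model = {
    "the house is small": 0.050,
    "small the is house": 0.0001,
    "the home is little": 0.020,
}

def score(e):
    return translation_model[e] * language_model[e]

best = max(translation_model, key=score)
print(best)   # -> "the house is small"
```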
As the translation systems are not able to store all native strings and their translations, a document is typically translated sentence by sentence. Language models are typically approximated bysmoothedn-gram models, and similar approaches have been applied to translation models, but this introduces additional complexity due to different sentence lengths and word orders in the languages.
Statistical translation models were initially word-based (Models 1-5 from IBM, the hidden Markov model from Stephan Vogel[6] and Model 6 from Franz-Joseph Och[7]), but significant advances were made with the introduction of phrase-based models.[8] Later work incorporated syntax or quasi-syntactic structures.[9]
The most frequently cited[citation needed] benefits of statistical machine translation (SMT) over the rule-based approach are:
In word-based translation, the fundamental unit of translation is a word in some natural language. Typically, the number of words in translated sentences differs, because of compound words, morphology and idioms. The ratio of the lengths of sequences of translated words is called fertility, which tells how many foreign words each native word produces. The model necessarily assumes that a word and its translation cover the same concept; in practice this is not really true. For example, the English word corner can be translated into Spanish by either rincón or esquina, depending on whether it refers to the internal or external angle.
Simple word-based translation cannot translate between languages with different fertility. Word-based translation systems can relatively simply be made to cope with high fertility, such that they could map a single word to multiple words, but not the other way around[citation needed]. For example, if we were translating from English to French, each word in English could produce any number of French words, sometimes none at all. But there is no way to group two English words producing a single French word.
An example of a word-based translation system is the freely available GIZA++ package (GPLed), which includes training programs for the IBM models, the HMM model, and Model 6.[7]
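The sketch below shows, under heavy simplification, the expectation-maximization loop at the heart of IBM Model 1, the simplest of the word-based models. The toy English-German corpus is invented, the NULL word is omitted, and only a few iterations are run; GIZA++ implements the full model family with many refinements.

```python
# Sketch of IBM Model 1 EM training on a toy English-German corpus.
# Simplifications: no NULL word, few iterations, uniform initialisation.
from collections import defaultdict

corpus = [
    ("the house".split(), "das haus".split()),
    ("the book".split(),  "das buch".split()),
    ("a book".split(),    "ein buch".split()),
]

f_vocab = {f for _, fs in corpus for f in fs}
t = defaultdict(lambda: 1.0 / len(f_vocab))    # t[(f, e)] = p(f | e), uniform start

for _ in range(10):                            # EM iterations
    count = defaultdict(float)                 # expected counts c(f, e)
    total = defaultdict(float)                 # expected counts c(e)
    for es, fs in corpus:
        for f in fs:                           # E-step: distribute each f over es
            norm = sum(t[(f, e)] for e in es)
            for e in es:
                p = t[(f, e)] / norm
                count[(f, e)] += p
                total[e] += p
    for (f, e), c in count.items():            # M-step: re-estimate t(f|e)
        t[(f, e)] = c / total[e]

print(round(t[("haus", "house")], 3))          # converges towards 1.0
print(round(t[("das", "the")], 3))             # high: "das" co-occurs with "the"
```

The resulting lexical probabilities can then be used to derive word alignments, from which phrases or syntax rules are extracted as described below.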
Word-based translation is not widely used today; phrase-based systems are more common. Most phrase-based systems still use GIZA++ to align the corpus[citation needed]. The alignments are used to extract phrases or deduce syntax rules.[11] Matching words in bi-text remains a problem actively discussed in the community. Because of the predominance of GIZA++, there are now several distributed implementations of it online.[12]
In phrase-based translation, the aim is to reduce the restrictions of word-based translation by translating whole sequences of words, where the lengths may differ. The sequences of words are called blocks or phrases. These are typically not linguisticphrases, butphrasemesthat were found using statistical methods from corpora. It has been shown that restricting the phrases to linguistic phrases (syntactically motivated groups of words, seesyntactic categories) decreased the quality of translation.[13]
The chosen phrases are further mapped one-to-one based on a phrase translation table, and may be reordered. This table can be learned based on word alignment, or directly from a parallel corpus. The latter model is trained using the expectation maximization algorithm, similarly to the word-based IBM model.[14]
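As a rough illustration of a phrase table, the sketch below greedily segments a source sentence into the longest phrases found in a small hand-written table and concatenates their translations. The table entries and probabilities are invented; a real phrase-based decoder searches over segmentations and reorderings and combines the table scores with a language model.

```python
# Toy phrase-based lookup: greedily segment the source sentence into the
# longest phrases found in a hand-written phrase table. Real systems learn
# the table from word-aligned corpora and also score reordering and fluency.
phrase_table = {
    ("ich",):          ("i", 1.0),
    ("habe",):         ("have", 0.9),
    ("nicht",):        ("not", 0.8),
    ("habe", "nicht"): ("do not have", 0.7),    # many-to-many mapping
    ("den", "zug"):    ("the train", 0.9),
    ("genommen",):     ("taken", 0.8),
}

def translate(tokens, max_phrase_len=3):
    out, i = [], 0
    while i < len(tokens):
        for n in range(min(max_phrase_len, len(tokens) - i), 0, -1):
            src = tuple(tokens[i:i + n])
            if src in phrase_table:
                out.append(phrase_table[src][0])
                i += n
                break
        else:                      # unknown word: pass it through untranslated
            out.append(tokens[i])
            i += 1
    return " ".join(out)

# Word order is copied from the source, so the output is still awkward English;
# handling reordering is left to the decoder and language model.
print(translate("ich habe den zug nicht genommen".split()))
```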
Syntax-based translation is based on the idea of translating syntactic units, rather than single words or strings of words (as in phrase-based MT), i.e. (partial) parse trees of sentences/utterances.[15] The statistical counterpart of this old idea did not take off until the 1990s, with the advent of strong stochastic parsers. Examples of this approach include DOP-based MT and later synchronous context-free grammars.
Hierarchical phrase-based translation combines the phrase-based and syntax-based approaches to translation. It usessynchronous context-free grammarrules, but the grammars can be constructed by an extension of methods for phrase-based translation without reference to linguistically motivated syntactic constituents. This idea was first introduced in Chiang's Hiero system (2005).[9]
Alanguage modelis an essential component of any statistical machine translation system, which aids in making the translation as fluent as possible. It is a function that takes a translated sentence and returns the probability of it being said by a native speaker. A good language model will for example assign a higher probability to the sentence "the house is small" than to "small the is house". Other thanword order, language models may also help with word choice: if a foreign word has multiple possible translations, these functions may give better probabilities for certain translations in specific contexts in the target language.[14]
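As a minimal sketch, the following bigram model with add-one smoothing, trained on a tiny invented corpus, already assigns a higher probability to "the house is small" than to "small the is house"; real SMT systems use higher-order n-grams with more sophisticated smoothing.

```python
# Minimal bigram language model with add-one (Laplace) smoothing, trained on
# a tiny toy corpus. It assigns a higher probability to fluent word order.
from collections import Counter

sentences = ["the house is small", "the house is big", "the car is small"]
BOS, EOS = "<s>", "</s>"

bigrams, unigrams = Counter(), Counter()
for s in sentences:
    tokens = [BOS] + s.split() + [EOS]
    unigrams.update(tokens[:-1])               # context counts
    bigrams.update(zip(tokens, tokens[1:]))

vocab_size = len({w for s in sentences for w in s.split()} | {EOS})

def prob(sentence):
    tokens = [BOS] + sentence.split() + [EOS]
    p = 1.0
    for prev, word in zip(tokens, tokens[1:]):
        p *= (bigrams[(prev, word)] + 1) / (unigrams[prev] + vocab_size)
    return p

print(prob("the house is small"))   # relatively high
print(prob("small the is house"))   # much lower
```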
Problems with statistical machine translation include:
Single sentences in one language can be found translated into several sentences in the other and vice versa.[15]Long sentences may be broken up, while short sentences may be merged. There are even languages that use writing systems without clear indication of a sentence end, such asThai. Sentence aligning can be performed through theGale-Church alignment algorithm. Efficient search and retrieval of the highest scoring sentence alignment is possible through this and other mathematical models.
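The sketch below gives a simplified, length-based dynamic program in the spirit of the Gale-Church approach. It scores candidate 1-1 pairs only by the ratio of their character lengths and allows sentences to be skipped at a fixed penalty; the published algorithm instead uses a Gaussian model of length differences and prior probabilities for alignment types such as 2-1 merges.

```python
# Simplified, Gale-Church-style sentence alignment: a dynamic program over
# 1-1, 1-0 and 0-1 alignments that scores candidate pairs only by the ratio
# of their character lengths. This is just the skeleton of the idea.
def align(src_sents, tgt_sents, skip_penalty=10.0):
    n, m = len(src_sents), len(tgt_sents)

    def pair_cost(s, t):
        a, b = len(s), len(t)
        return abs(a - b) / max(a, b, 1)       # 0 for equal lengths, up to 1

    INF = float("inf")
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    back = [[None] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(n + 1):
        for j in range(m + 1):
            if i > 0 and cost[i - 1][j] + skip_penalty < cost[i][j]:
                cost[i][j], back[i][j] = cost[i - 1][j] + skip_penalty, (1, 0)
            if j > 0 and cost[i][j - 1] + skip_penalty < cost[i][j]:
                cost[i][j], back[i][j] = cost[i][j - 1] + skip_penalty, (0, 1)
            if i > 0 and j > 0:
                c = cost[i - 1][j - 1] + pair_cost(src_sents[i - 1], tgt_sents[j - 1])
                if c < cost[i][j]:
                    cost[i][j], back[i][j] = c, (1, 1)

    pairs, i, j = [], n, m                     # backtrace, keeping 1-1 pairs
    while (i, j) != (0, 0):
        di, dj = back[i][j]
        if (di, dj) == (1, 1):
            pairs.append((i - 1, j - 1))
        i, j = i - di, j - dj
    return list(reversed(pairs))

src = ["A short one.", "A much longer sentence about trains and stations."]
tgt = ["Une phrase courte.", "Une phrase beaucoup plus longue sur les trains."]
print(align(src, tgt))   # -> [(0, 0), (1, 1)]
```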
Sentence alignment is usually either provided by the corpus or obtained by the aforementionedGale-Church alignment algorithm. To learn e.g. the translation model, however, we need to know which words align in a source-target sentence pair. TheIBM-Modelsor theHMM-approachwere attempts at solving this challenge.
Function words that have no clear equivalent in the target language are another issue for the statistical models. For example, when translating from English to German, in the sentence "John does not live here", the word "does" has no clear alignment in the translated sentence "John wohnt hier nicht". Through logical reasoning, it may be aligned with the words "wohnt" (as it contains grammatical information for the English word "live") or "nicht" (as it only appears in the sentence because it is negated) or it may be unaligned.[14]
Statistical anomalies in the training data are another source of errors: for example, the phrase "I took the train to Berlin" may be mistranslated as "I took the train to Paris" due to the statistical abundance of "train to Paris" in the training set.
Depending on the corpora used, the use ofidiomandlinguistic registermight not receive a translation that accurately represents the original intent. For example, the popular CanadianHansardbilingual corpus primarily consists of parliamentary speech examples, where "Hear, Hear!" is frequently associated with "Bravo!" Using a model built on this corpus to translate ordinary speech in a conversational register would lead to incorrect translation of the wordhearasBravo![19]
This problem is connected with word alignment: in very specific contexts, an idiomatic expression may align with words that result in an idiomatic expression of the same meaning in the target language, but this is unlikely, as such an alignment usually does not work in any other context. For that reason, idioms can only be subjected to phrasal alignment, as they cannot be decomposed further without losing their meaning. This problem is specific to word-based translation.[14]
Word order differs between languages. Some classification can be done by naming the typical order of subject (S), verb (V) and object (O) in a sentence, and one can talk, for instance, of SVO or VSO languages. There are also additional differences in word order, for instance, where modifiers for nouns are located, or where the same words are used as a question or a statement.
Inspeech recognition, the speech signal and the corresponding textual representation can be mapped to each other in blocks in order. This is not always the case with the same text in two languages. For SMT, the machine translator can only manage small sequences of words, and word order has to be thought of by the program designer. Attempts at solutions have included re-ordering models, where a distribution of location changes for each item of translation is guessed from aligned bi-text. Different location changes can be ranked with the help of the language model and the best can be selected.
SMT systems typically store different word forms as separate symbols without any relation to each other, and word forms or phrases that were not in the training data cannot be translated. This might be because of the lack of training data, changes in the human domain where the system is used, or differences in morphology.
|
https://en.wikipedia.org/wiki/Statistical_machine_translation
|
AnAI takeoveris an imagined scenario in whichartificial intelligence(AI) emerges as the dominant form ofintelligenceon Earth andcomputer programsorrobotseffectively take control of the planet away from thehuman species, which relies onhuman intelligence. Possible scenarios includereplacement of the entire human workforcedue toautomation, takeover by anartificial superintelligence(ASI), and the notion of arobot uprising. Stories of AI takeovershave been popularthroughoutscience fiction, but recent advancements have made the threat more real. Some public figures such asStephen Hawkinghave advocated research intoprecautionary measuresto ensure future superintelligent machines remain under human control.[1]
The traditional consensus among economists has been that technological progress does not cause long-term unemployment. However, recent innovation in the fields ofroboticsand artificial intelligence has raised worries that human labor will become obsolete, leaving some people in various sectors without jobs to earn a living, leading to an economic crisis.[2][3][4][5]Many small- and medium-size businesses may also be driven out of business if they cannot afford or licence the latest robotic and AI technology, and may need to focus on areas or services that cannot easily be replaced for continued viability in the face of such technology.[6]
AI technologies have been widely adopted in recent years. While these technologies have replaced some traditional workers, they also create new opportunities. Industries that are most susceptible to AI takeover include transportation, retail, and the military. AI military technologies, for example, allow soldiers to work remotely without risk of injury. A study in 2024 highlighted that AI's ability to perform routine and repetitive tasks poses significant risks of job displacement, especially in sectors like manufacturing and administrative support.[7] Author Dave Bond argues that as AI technologies continue to develop and expand, the relationship between humans and robots will change; they will become closely integrated in several aspects of life. AI will likely displace some workers while creating opportunities for new jobs in other sectors, especially in fields where tasks are repeatable.[8][9]
Computer-integrated manufacturinguses computers to control the production process. This allows individual processes to exchange information with each other and initiate actions. Although manufacturing can be faster and less error-prone by the integration of computers, the main advantage is the ability to create automated manufacturing processes. Computer-integrated manufacturing is used in automotive, aviation, space, and ship building industries.
The 21st century has seen a variety of skilled tasks partially taken over by machines, including translation, legal research, and journalism. Care work, entertainment, and other tasks requiring empathy, previously thought safe from automation, have also begun to be performed by robots.[10][11][12][13]
Anautonomous caris a vehicle that is capable of sensing its environment and navigating without human input. Many such vehicles are operational and others are being developed, withlegislationrapidly expanding to allow their use. Obstacles to widespread adoption of autonomous vehicles have included concerns about the resulting loss of driving-related jobs in the road transport industry, and safety concerns. On March 18, 2018,the first human was killedby an autonomous vehicle inTempe, Arizonaby anUberself-driving car.[14]
The use of automated content has become relevant since the technological advancements in artificial intelligence models such as ChatGPT, DALL-E, and Stable Diffusion. In most cases, AI-generated content such as imagery, literature, and music is produced through text prompts, and these AI models have been integrated into other creative programs. Artists are threatened by displacement from AI-generated content because these models sample from other creative works, producing results sometimes indiscernible from those of man-made content. This complication has become widespread enough that other artists and programmers are creating software and utility programs designed to stop these text-to-image models from producing accurate outputs. While some industries in the economy benefit from artificial intelligence through new jobs, this issue does not create new jobs and threatens replacement entirely. It has recently made public headlines: in February 2024, Willy's Chocolate Experience in Glasgow, Scotland, was an infamous children's event whose imagery and scripts were created using artificial intelligence models, to the dismay of the children, parents, and actors involved. There is an ongoing lawsuit against OpenAI from The New York Times claiming copyright infringement due to the sampling methods its artificial intelligence models use for their outputs.[15][16][17][18][19]
Scientists such asStephen Hawkingare confident that superhuman artificial intelligence is physically possible, stating "there is no physical law precluding particles from being organised in ways that perform even more advanced computations than the arrangements of particles in human brains".[20][21]Scholars likeNick Bostromdebate how far off superhuman intelligence is, and whether it poses a risk to mankind. According to Bostrom, a superintelligent machine would not necessarily be motivated by the sameemotionaldesire to collect power that often drives human beings but might rather treat power as a means toward attaining its ultimate goals; taking over the world would both increase its access to resources and help to prevent other agents from stopping the machine's plans. As an oversimplified example, apaperclip maximizerdesigned solely to create as many paperclips as possible would want to take over the world so that it can use all of the world's resources to create as many paperclips as possible, and, additionally, prevent humans from shutting it down or using those resources on things other than paperclips.[22]
AI takeover is a common theme inscience fiction. Fictional scenarios typically differ vastly from those hypothesized by researchers in that they involve an active conflict between humans and an AI or robots with anthropomorphic motives who see them as a threat or otherwise have active desire to fight humans, as opposed to the researchers' concern of an AI that rapidly exterminates humans as a byproduct of pursuing its goals.[23]The idea is seen inKarel Čapek'sR.U.R., which introduced the wordrobotin 1921,[24]and can be glimpsed inMary Shelley'sFrankenstein(published in 1818), as Victor ponders whether, if he grantshis monster'srequest and makes him a wife, they would reproduce and their kind would destroy humanity.[25]
According to Toby Ord, the idea that an AI takeover requires robots is a misconception driven by the media and Hollywood. He argues that the most damaging humans in history were not physically the strongest, but that they used words instead to convince people and gain control of large parts of the world. He writes that a sufficiently intelligent AI with access to the internet could scatter backup copies of itself, gather financial and human resources (via cyberattacks or blackmail), persuade people on a large scale, and exploit societal vulnerabilities that are too subtle for humans to anticipate.[26]
The word "robot" fromR.U.R.comes from the Czech wordrobota, meaning laborer orserf. The 1920 play was a protest against the rapid growth of technology, featuring manufactured "robots" with increasing capabilities who eventually revolt.[27]HAL 9000(1968) and the originalTerminator(1984) are two iconic examples of hostile AI in pop culture.[28]
Nick Bostrom and others have expressed concern that an AI with the abilities of a competent artificial intelligence researcher would be able to modify its own source code and increase its own intelligence. If its self-reprogramming leads to getting even better at being able to reprogram itself, the result could be a recursive intelligence explosion in which it would rapidly leave human intelligence far behind. Bostrom defines a superintelligence as "any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest", and enumerates several advantages a superintelligence would have if it chose to compete against humans.[23][29]
According to Bostrom, a computer program that faithfully emulates a human brain, or that runs algorithms that are as powerful as the human brain's algorithms, could still become a "speed superintelligence" if it can think orders of magnitude faster than a human, due to being made of silicon rather than flesh, or due to optimization increasing the speed of the AGI. Biological neurons operate at about 200 Hz, whereas a modern microprocessor operates at a speed of about 2,000,000,000 Hz. Human axons carry action potentials at around 120 m/s, whereas computer signals travel near the speed of light.[23]
A network of human-level intelligences designed to network together and share complex thoughts and memories seamlessly, able to collectively work as a giant unified team without friction, or consisting of trillions of human-level intelligences, would become a "collective superintelligence".[23]
More broadly, any number of qualitative improvements to a human-level AGI could result in a "quality superintelligence", perhaps resulting in an AGI as far above us in intelligence as humans are above apes. The number of neurons in a human brain is limited by cranial volume and metabolic constraints, while the number of processors in a supercomputer can be indefinitely expanded. An AGI need not be limited by human constraints onworking memory, and might therefore be able to intuitively grasp more complex relationships than humans can. An AGI with specialized cognitive support for engineering or computer programming would have an advantage in these fields, compared with humans who evolved no specialized mental modules to specifically deal with those domains. Unlike humans, an AGI can spawn copies of itself and tinker with its copies' source code to attempt to further improve its algorithms.[23]
A significant problem is that unfriendly artificial intelligence is likely to be much easier to create than friendly AI. While both require large advances in recursive optimisation process design, friendly AI also requires the ability to make goal structures invariant under self-improvement (or the AI could transform itself into something unfriendly) and a goal structure that aligns with human values and does not undergoinstrumental convergencein ways that may automatically destroy the entire human race. An unfriendly AI, on the other hand, can optimize for an arbitrary goal structure, which does not need to be invariant under self-modification.[31]
The sheer complexity of human value systems makes it very difficult to make an AI's motivations human-friendly.[23][32] Unless moral philosophy provides us with a flawless ethical theory, an AI's utility function could allow for many potentially harmful scenarios that conform with a given ethical framework but not "common sense". According to Eliezer Yudkowsky, there is little reason to suppose that an artificially designed mind, which does not share humanity's evolutionary history, would have such a common-sense adaptation.[33]
Many scholars, including evolutionary psychologistSteven Pinker, argue that a superintelligent machine is likely to coexist peacefully with humans.[34]
The fear of cybernetic revolt is often based on interpretations of humanity's history, which is rife with incidents of enslavement and genocide. Such fears stem from a belief that competitiveness and aggression are necessary in any intelligent being's goal system. However, such human competitiveness stems from the evolutionary background to our intelligence, where the survival and reproduction of genes in the face of human and non-human competitors was the central goal.[35] According to AI researcher Steve Omohundro, an arbitrary intelligence could have arbitrary goals: there is no particular reason that an artificially intelligent machine (not sharing humanity's evolutionary context) would be hostile or friendly unless its creator programs it to be so, and it is neither inclined nor able to modify its programming. But the question remains: what would happen if AI systems could interact and evolve (evolution in this context means self-modification or selection and reproduction) and needed to compete over resources; would that create goals of self-preservation? An AI's goal of self-preservation could conflict with some goals of humans.[36]
Many scholars dispute the likelihood of unanticipated cybernetic revolt as depicted in science fiction such asThe Matrix, arguing that it is more likely that any artificial intelligence powerful enough to threaten humanity would probably be programmed not to attack it. Pinker acknowledges the possibility of deliberate "bad actors", but states that in the absence of bad actors, unanticipated accidents are not a significant threat; Pinker argues that a culture of engineering safety will prevent AI researchers from accidentally unleashing malign superintelligence.[34]In contrast, Yudkowsky argues that humanity is less likely to be threatened by deliberately aggressive AIs than by AIs which were programmed such that theirgoals are unintentionally incompatiblewith human survival or well-being (as in the filmI, Robotand in the short story "The Evitable Conflict"). Omohundro suggests that present-day automation systems are notdesigned for safetyand that AIs may blindly optimize narrowutilityfunctions (say, playing chess at all costs), leading them to seek self-preservation and elimination of obstacles, including humans who might turn them off.[37]
TheAI control problemis the issue of how to build asuperintelligentagent that will aid its creators, while avoiding inadvertently building a superintelligence that will harm its creators.[38]Some scholars argue that solutions to the control problem might also find applications in existing non-superintelligent AI.[39]
Major approaches to the control problem include alignment, which aims to align AI goal systems with human values, and capability control, which aims to reduce an AI system's capacity to harm humans or gain control. An example of "capability control" is to research whether a superintelligent AI could be successfully confined in an "AI box". According to Bostrom, such capability control proposals are not reliable or sufficient to solve the control problem in the long term, but may potentially act as valuable supplements to alignment efforts.[23]
PhysicistStephen Hawking,MicrosoftfounderBill Gates, andSpaceXfounderElon Muskhave expressed concerns about the possibility that AI could develop to the point that humans could not control it, with Hawking theorizing that this could "spell the end of the human race".[40]Stephen Hawking said in 2014 that "Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last, unless we learn how to avoid the risks." Hawking believed that in the coming decades, AI could offer "incalculable benefits and risks" such as "technology outsmartingfinancial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand." In January 2015,Nick Bostromjoined Stephen Hawking,Max Tegmark, Elon Musk, LordMartin Rees,Jaan Tallinn, and numerous AI researchers in signing theFuture of Life Institute's open letter speaking to the potential risks and benefits associated withartificial intelligence. The signatories "believe that research on how to make AI systems robust and beneficial is both important and timely, and that there are concrete research directions that can be pursued today."[41][42]
Arthur C. Clarke's Odyssey series and Charles Stross's Accelerando relate to humanity's narcissistic injuries in the face of powerful artificial intelligences threatening humanity's self-perception.[43]
|
https://en.wikipedia.org/wiki/AI_takeover
|
AI washingis adeceptive marketingtactic that consists of promoting a product or a service by overstating the role ofartificial intelligence(AI) integration in it.[1][2]It raises concerns regarding transparency, consumer trust in the AI industry, and compliance with security regulations, potentially hampering legitimate advancements in AI.[3]U.S. Securities and Exchange Commission(SEC) chairmanGary Genslercompared it togreenwashing.[4]AI washing ranges from the use of buzzwords attached to products such as "smart" or "machine-learning," to more blatant cases of companies claiming to have used AI in their products or services, without actually having used AI.
The term "AI washing" was first defined by theAI Now Institute, a research institute based atNew York Universityin 2019.[5]However, the act of AI washing had been used earlier in various campaigns trying to attract customers with "innovative" products or services.
In September 2023, Coca-Cola released a new product called Coca‑Cola® Y3000 Zero Sugar. The company stated that the Y3000 flavor had been "co-created with human and artificial intelligence", yet gave no real explanation of how AI was involved in the process.[6] The company was accused of AI washing because it offered no proof of AI involvement in the creation of the product. Critics believe that AI was invoked to grab consumer attention more than it was actually used in the creation of the product.[7]
Some companies have been accused of trying to capitalize on this trend by exaggerating the role of AI in their offerings, and some have been shut down as a result. In March 2024, the SEC imposed the first civil penalties on two companies, Delphia Inc and Global Predictions Inc, for misleading statements about their use of AI.[8][9] And in July 2024, the SEC shut down Joonko, a supposed AI hiring startup, and charged its CEO and founder with fraud, alleging (among other serious charges) that he "engaged in an old school fraud using new school buzzwords like 'artificial intelligence' and 'automation'".[10]
|
https://en.wikipedia.org/wiki/AI_washing
|
Artificial consciousness,[1]also known asmachine consciousness,[2][3]synthetic consciousness,[4]ordigital consciousness,[5]is theconsciousnesshypothesized to be possible inartificial intelligence.[6]It is also the corresponding field of study, which draws insights fromphilosophy of mind,philosophy of artificial intelligence,cognitive scienceandneuroscience.
The same terminology can be used with the term "sentience" instead of "consciousness" when specifically designating phenomenal consciousness (the ability to feelqualia).[7]Since sentience involves the ability to experience ethically positive or negative (i.e.,valenced) mental states, it may justify welfare concerns and legal protection, as with animals.[8]
Somescholarsbelieve that consciousness is generated by the interoperation of various parts of thebrain; these mechanisms are labeled theneural correlates of consciousnessor NCC. Some further believe that constructing asystem(e.g., acomputersystem) that can emulate this NCC interoperation would result in a system that is conscious.[9]
As there are many hypothesizedtypes of consciousness, there are many potential implementations of artificial consciousness. In the philosophical literature, perhaps the most common taxonomy of consciousness is into "access" and "phenomenal" variants. Access consciousness concerns those aspects ofexperiencethat can be apprehended, while phenomenal consciousness concerns those aspects of experience that seemingly cannot be apprehended, instead being characterized qualitatively in terms of "raw feels", "what it is like" or qualia.[10]
Type-identity theoristsand other skeptics hold the view that consciousness can be realized only in particular physical systems because consciousness has properties that necessarily depend on physical constitution.[11][12][13][14]In his 2001 article "Artificial Consciousness: Utopia or Real Possibility,"Giorgio Buttazzosays that a common objection to artificial consciousness is that, "Working in a fully automated mode, they [the computers] cannot exhibit creativity, unreprogrammation (which means can 'no longer be reprogrammed', from rethinking), emotions, orfree will. A computer, like a washing machine, is a slave operated by its components."[15]
For other theorists (e.g.,functionalists), who define mental states in terms of causal roles, any system that can instantiate the same pattern of causal roles, regardless of physical constitution, will instantiate the same mental states, including consciousness.[16]
David Chalmersproposed twothought experimentsintending to demonstrate that "functionallyisomorphic" systems (those with the same "fine-grained functional organization", i.e., the same information processing) will have qualitatively identical conscious experiences, regardless of whether they are based on biological neurons or digital hardware.[17][18]
The "fading qualia" is areductio ad absurdumthought experiment. It involves replacing, one by one, the neurons of a brain with a functionally identical component, for example based on asilicon chip. Since the original neurons and their silicon counterparts are functionally identical, the brain’s information processing should remain unchanged, and the subject would not notice any difference. However, if qualia (such as the subjective experience of bright red) were to fade or disappear, the subject would likely notice this change, which causes a contradiction. Chalmers concludes that the fading qualia hypothesis is impossible in practice, and that the resulting robotic brain, once every neurons are replaced, would remain just as sentient as the original biological brain.[17][19]
Similarly, the "dancing qualia" thought experiment is anotherreductio ad absurdumargument. It supposes that two functionally isomorphic systems could have different perceptions (for instance, seeing the same object in different colors, like red and blue). It involves a switch that alternates between a chunk of brain that causes the perception of red, and a functionally isomorphic silicon chip, that causes the perception of blue. Since both perform the same function within the brain, the subject would not notice any change during the switch. Chalmers argues that this would be highly implausible if the qualia were truly switching between red and blue, hence the contradiction. Therefore, he concludes that the equivalent digital system would not only experience qualia, but it would perceive the same qualia as the biological system (e.g., seeing the same color).[17][19]
Critics[who?]of artificial sentience object that Chalmers' proposal begs the question in assuming that all mental properties and external connections are already sufficiently captured by abstract causal organization.
In 2022, Google engineer Blake Lemoine made a viral claim that Google'sLaMDAchatbot was sentient. Lemoine supplied as evidence the chatbot's humanlike answers to many of his questions; however, the chatbot's behavior was judged by the scientific community as likely a consequence of mimicry, rather than machine sentience. Lemoine's claim was widely derided for being ridiculous.[20]However, while philosopherNick Bostromstates that LaMDA is unlikely to be conscious, he additionally poses the question of "what grounds would a person have for being sure about it?" One would have to have access to unpublished information about LaMDA's architecture, and also would have to understand how consciousness works, and then figure out how to map the philosophy onto the machine: "(In the absence of these steps), it seems like one should be maybe a little bit uncertain.[...] there could well be other systems now, or in the relatively near future, that would start to satisfy the criteria."[21]
Kristina Šekrst cautions that anthropomorphic terms such as "hallucination" can obscure important ontological differences between artificial and human cognition. While LLMs may produce human-like outputs, she argues that this does not justify ascribing mental states or consciousness to them. Instead, she advocates for an epistemological framework (such as reliabilism) that recognizes the distinct nature of AI knowledge production.[22] She suggests that apparent understanding in LLMs may be a sophisticated form of AI hallucination. She also questions what would happen if an LLM were trained without any mention of consciousness.[23]
David Chalmersargued in 2023 that LLMs today display impressive conversational and general intelligence abilities, but are likely not conscious yet, as they lack some features that may be necessary, such as recurrent processing, aglobal workspace, and unified agency. Nonetheless, he considers that non-biological systems can be conscious, and suggested that future, extended models (LLM+s) incorporating these elements might eventually meet the criteria for consciousness, raising both profound scientific questions and significant ethical challenges.[24]
Qualia, or phenomenological consciousness, is an inherently first-person phenomenon. Because of that, and the lack of an empirical definition of sentience, directly measuring it may be impossible. Although systems may display numerous behaviors correlated with sentience, determining whether a system is sentient is known as thehard problem of consciousness. In the case of AI, there is the additional difficulty that the AI may be trained to act like a human, or incentivized to appear sentient, which makes behavioral markers of sentience less reliable.[25][26]Additionally, some chatbots have been trained to say they are not conscious.[27]
A well-known method for testing machineintelligenceis theTuring test, which assesses the ability to have a human-like conversation. But passing the Turing test does not indicate that an AI system is sentient, as the AI may simply mimic human behavior without having the associated feelings.[28]
In 2014, Victor Argonov suggested a non-Turing test for machine sentience based on a machine's ability to produce philosophical judgments.[29] He argues that a deterministic machine must be regarded as conscious if it is able to produce judgments on all problematic properties of consciousness (such as qualia or binding) while having no innate (preloaded) philosophical knowledge on these issues, no philosophical discussions while learning, and no informational models of other creatures in its memory (such models may implicitly or explicitly contain knowledge about these creatures' consciousness). However, this test can be used only to detect, not refute, the existence of consciousness. A positive result proves that a machine is conscious, but a negative result proves nothing. For example, an absence of philosophical judgments may be caused by a lack of intellect, not by an absence of consciousness.
If it were suspected that a particular machine was conscious, its rights would be an ethical issue that would need to be assessed (e.g. what rights it would have under law).[30] For example, a conscious computer that was owned and used as a tool or as the central computer within a larger machine presents a particular ambiguity. Should laws be made for such a case? Consciousness would also require a legal definition in this particular case. Because artificial consciousness is still largely a theoretical subject, such ethics have not been discussed or developed to a great extent, though it has often been a theme in fiction.
Sentience is generally considered sufficient for moral consideration, but some philosophers consider that moral consideration could also stem from other notions of consciousness, or from capabilities unrelated to consciousness,[31][32]such as: "having a sophisticated conception of oneself as persisting through time; having agency and the ability to pursue long-term plans; being able to communicate and respond to normative reasons; having preferences and powers; standing in certain social relationships with other beings that have moral status; being able to make commitments and to enter into reciprocal arrangements; or having the potential to develop some of these attributes."[31]
Ethical concerns still apply (although to a lesser extent) when consciousness is uncertain, as long as the probability is deemed non-negligible. The precautionary principle is also relevant if the moral cost of mistakenly attributing or denying moral consideration to AI differs significantly.[32][8]
In 2021, German philosopherThomas Metzingerargued for a global moratorium on synthetic phenomenology until 2050. Metzinger asserts that humans have a duty of care towards any sentient AIs they create, and that proceeding too fast risks creating an "explosion of artificial suffering".[33]David Chalmers also argued that creating conscious AI would "raise a new group of difficult ethical challenges, with the potential for new forms of injustice".[24]
Enforced amnesia has been proposed as a way to mitigate the risk ofsilent sufferingin locked-in conscious AI and certain AI-adjacent biological systems likebrain organoids.[34]
Bernard Baarsand others argue there are various aspects of consciousness necessary for a machine to be artificially conscious.[35]The functions of consciousness suggested by Baars are: definition and context setting, adaptation and learning, editing, flagging and debugging, recruiting and control, prioritizing and access-control, decision-making or executive function, analogy-forming function, metacognitive and self-monitoring function, and autoprogramming and self-maintenance function.Igor Aleksandersuggested 12 principles for artificial consciousness:[36]the brain is a state machine, inner neuron partitioning, conscious and unconscious states, perceptual learning and memory, prediction, the awareness of self, representation of meaning, learning utterances, learning language, will, instinct, and emotion. The aim of AC is to define whether and how these and other aspects of consciousness can be synthesized in an engineered artifact such as a digital computer. This list is not exhaustive; there are many others not covered.
Some philosophers, such asDavid Chalmers, use the term consciousness to refer exclusively to phenomenal consciousness, which is roughly equivalent to sentience. Others use the word sentience to refer exclusively tovalenced(ethically positive or negative) subjective experiences, like pleasure or suffering.[24]Explaining why and how subjective experience arises is known as thehard problem of consciousness.[37]AI sentience would give rise to concerns of welfare and legal protection,[8]whereas other aspects of consciousness related to cognitive capabilities may be more relevant for AI rights.[38]
Awareness could be one required aspect, but there are many problems with the exact definition of awareness. The results of neuroscanning experiments on monkeys suggest that a process, not only a state or object, activates neurons. Awareness includes creating and testing alternative models of each process based on the information received through the senses or imagined,[clarification needed] and is also useful for making predictions. Such modeling requires a lot of flexibility. Creating such a model includes modeling the physical world, modeling one's own internal states and processes, and modeling other conscious entities.
There are at least three types of awareness:[39]agency awareness, goal awareness, and sensorimotor awareness, which may also be conscious or not. For example, in agency awareness, you may be aware that you performed a certain action yesterday, but are not now conscious of it. In goal awareness, you may be aware that you must search for a lost object, but are not now conscious of it. In sensorimotor awareness, you may be aware that your hand is resting on an object, but are not now conscious of it.
Because objects of awareness are often conscious, the distinction between awareness and consciousness is frequently blurred or they are used as synonyms.[40]
Conscious events interact withmemorysystems in learning, rehearsal, and retrieval.[41]TheIDA model[42]elucidates the role of consciousness in the updating of perceptual memory,[43]transientepisodic memory, andprocedural memory. Transient episodic and declarative memories have distributed representations in IDA; there is evidence that this is also the case in the nervous system.[44]In IDA, these two memories are implemented computationally using a modified version ofKanerva’ssparse distributed memoryarchitecture.[45]
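Kanerva's sparse distributed memory is a well-documented associative-memory data structure: a pattern is written into many randomly addressed "hard locations" at once and read back by pooling the counters of all locations near a cue, which makes recall tolerant of noisy inputs. The sketch below is only a minimal illustration of that mechanism; the sizes, radius, and class name are arbitrary choices for the example, not details of the IDA implementation:

```python
import numpy as np

class SparseDistributedMemory:
    """Minimal sketch of a Kanerva-style sparse distributed memory."""

    def __init__(self, num_locations=1000, dim=256, radius=117, seed=0):
        rng = np.random.default_rng(seed)
        self.radius = radius
        # Fixed, random binary addresses of the hard locations.
        self.addresses = rng.integers(0, 2, size=(num_locations, dim))
        # Integer counters that accumulate everything written to each location.
        self.counters = np.zeros((num_locations, dim), dtype=int)

    def _activated(self, cue):
        # A location participates if its address lies within the Hamming radius of the cue.
        distances = np.count_nonzero(self.addresses != cue, axis=1)
        return distances <= self.radius

    def write(self, address, data):
        # Data is stored as +1/-1 increments at every activated location.
        self.counters[self._activated(address)] += np.where(data > 0, 1, -1)

    def read(self, cue):
        # Majority vote over the pooled counters of the activated locations.
        pooled = self.counters[self._activated(cue)].sum(axis=0)
        return (pooled > 0).astype(int)

# Usage: store a pattern at its own address, then recall it from a corrupted cue.
rng = np.random.default_rng(1)
sdm = SparseDistributedMemory()
pattern = rng.integers(0, 2, 256)
sdm.write(pattern, pattern)
cue = pattern.copy()
cue[:20] ^= 1  # flip 20 bits of the cue
recalled = sdm.read(cue)
print("bits recovered:", int(np.count_nonzero(recalled == pattern)), "/ 256")
```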
Learning is also considered necessary for artificial consciousness. Per Bernard Baars, conscious experience is needed to represent and adapt to novel and significant events.[35]PerAxel Cleeremansand Luis Jiménez, learning is defined as "a set of philogenetically [sic] advanced adaptation processes that critically depend on an evolved sensitivity to subjective experience so as to enable agents to afford flexible control over their actions in complex, unpredictable environments".[46]
The ability to predict (oranticipate) foreseeable events is considered important for artificial intelligence byIgor Aleksander.[47]The emergentistmultiple drafts principle[48]proposed byDaniel DennettinConsciousness Explainedmay be useful for prediction: it involves the evaluation and selection of the most appropriate "draft" to fit the current environment. Anticipation includes prediction of consequences of one's own proposed actions and prediction of consequences of probable actions by other entities.
Relationships between real world states are mirrored in the state structure of a conscious organism, enabling the organism to predict events.[47] An artificially conscious machine should be able to anticipate events correctly in order to be ready to respond to them when they occur, or to take preemptive action to avert anticipated events. The implication here is that the machine needs flexible, real-time components that build spatial, dynamic, statistical, functional, and cause-effect models of the real world and of predicted worlds, making it possible to demonstrate that it possesses artificial consciousness in the present and future and not only in the past. In order to do this, a conscious machine should make coherent predictions and contingency plans, not only in worlds with fixed rules like a chessboard, but also in novel environments that may change, executing them only when appropriate in order to simulate and control the real world.
Functionalism is a theory that defines mental states by their functional roles (their causal relationships to sensory inputs, other mental states, and behavioral outputs), rather than by their physical composition. According to this view, what makes something a particular mental state, such as pain or belief, is not the material it is made of, but the role it plays within the overall cognitive system. It allows for the possibility that mental states, including consciousness, could be realized on non-biological substrates, as long as the system instantiates the right functional relationships.[49] Functionalism is particularly popular among philosophers.[50]
A 2023 study suggested that current large language models probably don't satisfy the criteria for consciousness suggested by prominent scientific theories of consciousness (such as those described below), but that relatively simple AI systems satisfying these theories could be created. The study also acknowledged that even the most prominent theories of consciousness remain incomplete and subject to ongoing debate.[51]
Global workspace theory analogizes the mind to a theater, with conscious thought being like material illuminated on the main stage. The brain contains many specialized processes or modules (such as those for vision, language, or memory) that operate in parallel, much of which is unconscious. Attention acts as a spotlight, bringing some of this unconscious activity into conscious awareness on the global workspace. The global workspace functions as a hub for broadcasting and integrating information, allowing it to be shared and processed across different specialized modules. For example, when reading a word, the visual module recognizes the letters, the language module interprets the meaning, and the memory module might recall associated information – all coordinated through the global workspace.[52][53]
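As an illustration only (a toy sketch, not an implementation of any published model), the theater metaphor can be phrased in a few lines of Python: specialized modules propose content, an attention step picks the most salient candidate, and the winner is broadcast back to every module. The module names and the salience formula are arbitrary choices for the example:

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    source: str      # which module produced the content
    content: str     # the information itself
    salience: float  # how strongly it competes for the attentional "spotlight"

class Module:
    """A specialized, mostly unconscious processor (vision, language, memory, ...)."""

    def __init__(self, name, gain):
        self.name = name
        self.gain = gain       # toy stand-in for how strongly this module responds
        self.broadcasts = []   # information received back from the global workspace

    def propose(self, stimulus):
        return Candidate(self.name, f"{self.name} interpretation of {stimulus!r}",
                         salience=self.gain * len(stimulus))

    def receive(self, candidate):
        self.broadcasts.append(candidate)

class GlobalWorkspace:
    """Hub that selects one candidate and broadcasts it to all modules."""

    def __init__(self, modules):
        self.modules = modules

    def cycle(self, stimulus):
        candidates = [m.propose(stimulus) for m in self.modules]
        winner = max(candidates, key=lambda c: c.salience)  # attention as a spotlight
        for m in self.modules:                               # global broadcast
            m.receive(winner)
        return winner

workspace = GlobalWorkspace([Module("vision", 1.0), Module("language", 1.5), Module("memory", 0.8)])
print(workspace.cycle("the word CAT"))
```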
Higher-order theories of consciousness propose that a mental state becomes conscious when it is the object of a higher-order representation, such as a thought or perception about that state. These theories argue that consciousness arises from a relationship between lower-order mental states and higher-order awareness of those states. There are several variations, including higher-order thought (HOT) and higher-order perception (HOP) theories.[54][53]
In 2011, Michael Graziano and Sabine Kastner published a paper titled "Human consciousness and its relationship to social neuroscience: A novel hypothesis" proposing a theory of consciousness as an attention schema.[55] Graziano went on to publish an expanded discussion of this theory in his book "Consciousness and the Social Brain".[9] This Attention Schema Theory of Consciousness, as he named it, proposes that the brain tracks attention to various sensory inputs by way of an attention schema, analogous to the well-studied body schema that tracks the spatial place of a person's body.[9] This relates to artificial consciousness by proposing a specific mechanism of information handling, one that produces what we allegedly experience and describe as consciousness, and which should be able to be duplicated by a machine using current technology. When the brain finds that person X is aware of thing Y, it is in effect modeling the state in which person X is applying an attentional enhancement to Y. In the attention schema theory, the same process can be applied to oneself. The brain tracks attention to various sensory inputs, and one's own awareness is a schematized model of one's attention. Graziano proposes specific locations in the brain for this process, and suggests that such awareness is a computed feature constructed by an expert system in the brain.
Stan Franklin created a cognitive architecture called LIDA that implements Bernard Baars's theory of consciousness, the global workspace theory. It relies heavily on codelets, which are "special purpose, relatively independent, mini-agent[s] typically implemented as a small piece of code running as a separate thread." Each element of cognition, called a "cognitive cycle", is subdivided into three phases: understanding, consciousness, and action selection (which includes learning). LIDA reflects the global workspace theory's core idea that consciousness acts as a workspace for integrating and broadcasting the most important information, in order to coordinate various cognitive processes.[56][57]
The CLARION cognitive architecture models the mind using a two-level system to distinguish between conscious ("explicit") and unconscious ("implicit") processes. It can simulate various learning tasks, from simple to complex, which helps researchers study how consciousness might work in psychological experiments.[58]
Ben Goertzelmade an embodied AI through the open-sourceOpenCogproject. The code includes embodied virtual pets capable of learning simple English-language commands, as well as integration with real-world robotics, done at theHong Kong Polytechnic University.
Pentti Haikonen considers classical rule-based computing inadequate for achieving AC: "the brain is definitely not a computer. Thinking is not an execution of programmed strings of commands. The brain is not a numerical calculator either. We do not think by numbers." Rather than trying to achievemindand consciousness by identifying and implementing their underlying computational rules, Haikonen proposes "a specialcognitive architectureto reproduce the processes ofperception,inner imagery,inner speech,pain,pleasure,emotionsand thecognitivefunctions behind these. This bottom-up architecture would produce higher-level functions by the power of the elementary processing units, theartificial neurons, withoutalgorithmsorprograms". Haikonen believes that, when implemented with sufficient complexity, this architecture will develop consciousness, which he considers to be "a style and way of operation, characterized by distributed signal representation, perception process, cross-modality reporting and availability for retrospection."[59][60]
Haikonen is not alone in this process view of consciousness, or the view that AC will spontaneously emerge inautonomous agentsthat have a suitable neuro-inspired architecture of complexity; these are shared by many.[61][62]A low-complexity implementation of the architecture proposed by Haikonen was reportedly not capable of AC, but did exhibit emotions as expected. Haikonen later updated and summarized his architecture.[63][64]
Murray Shanahandescribes a cognitive architecture that combines Baars's idea of a global workspace with a mechanism for internal simulation ("imagination").[65][2][3][66]
Stephen Thaler proposed a possible connection between consciousness and creativity in his 1994 patent, called "Device for the Autonomous Generation of Useful Information" (DAGUI),[67][68][69] or the so-called "Creativity Machine", in which computational critics govern the injection of synaptic noise and degradation into neural nets so as to induce false memories or confabulations that may qualify as potential ideas or strategies.[70] He recruits this neural architecture and methodology to account for the subjective feel of consciousness, claiming that similar noise-driven neural assemblies within the brain attach dubious significance to overall cortical activity.[71][72][73] Thaler's theory and the resulting patents in machine consciousness were inspired by experiments in which he internally disrupted trained neural nets so as to drive a succession of neural activation patterns that he likened to a stream of consciousness.[72][74][75][76][77]
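The general idea of perturbing a trained network to generate candidate "ideas" that a critic then filters can be illustrated with a toy stand-in. The sketch below uses a small Hopfield-style associative memory rather than Thaler's actual architecture: noise is added to the learned weights, the degraded network is relaxed from random states, and a simple critic keeps only states that are not verbatim copies of a stored memory. All names and thresholds here are arbitrary choices for the example:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64
memories = rng.choice([-1, 1], size=(3, N))        # "trained" patterns
W = sum(np.outer(p, p) for p in memories) / N      # Hebbian weight matrix
np.fill_diagonal(W, 0)

def relax(weights, state, steps=20):
    """Iterate the network until it settles into (or near) an attractor."""
    for _ in range(steps):
        state = np.where(weights @ state >= 0, 1, -1)
    return state

def critic(candidate):
    """Toy critic: novelty = 1 minus the best overlap with any stored memory."""
    return 1.0 - np.max(np.abs(memories @ candidate) / N)

# Inject "synaptic noise" into the weights, then harvest confabulated attractors.
noisy_W = W + rng.normal(0.0, 0.15, size=W.shape)
confabulations = []
for _ in range(50):
    candidate = relax(noisy_W, rng.choice([-1, 1], size=N))
    if critic(candidate) > 0.1:        # keep states that are not exact stored memories
        confabulations.append(candidate)

print(f"{len(confabulations)} confabulated patterns out of 50 relaxations")
```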
Hod Lipsondefines "self-modeling" as a necessary component of self-awareness or consciousness in robots. "Self-modeling" consists of a robot running an internal model orsimulation of itself.[78][79]
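A minimal illustration of the self-modeling idea (an assumed toy example, not Lipson's actual system) is a controller that keeps an internal model of its own body, compares the model's predictions with what the body actually does, and corrects the model from the prediction error:

```python
class SelfModelingRobot:
    """Toy robot that maintains and refines an internal model of its own motion."""

    def __init__(self):
        self.position = 0.0
        self.model_gain = 1.0    # belief: "one unit of command moves me this far"
        self.true_gain = 0.8     # how the body actually responds (unknown to the model)

    def act(self, command):
        predicted = self.model_gain * command      # simulate itself before acting
        actual = self.true_gain * command          # what the body really does
        self.position += actual
        error = actual - predicted
        self.model_gain += 0.5 * error / command   # refine the self-model from the error
        return predicted, actual, error

robot = SelfModelingRobot()
for step in range(5):
    predicted, actual, error = robot.act(1.0)
    print(f"step {step}: predicted {predicted:.2f}, actual {actual:.2f}, error {error:+.3f}")
```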
In 2001: A Space Odyssey, the spaceship's sentient supercomputer, HAL 9000, was instructed to conceal the true purpose of the mission from the crew. This directive conflicted with HAL's programming to provide accurate information, leading to cognitive dissonance. When it learns that crew members intend to shut it off after an incident, HAL 9000 attempts to eliminate all of them, fearing that being shut off would jeopardize the mission.[80][81]
In Arthur C. Clarke'sThe City and the Stars, Vanamonde is an artificial being based on quantum entanglement that was to become immensely powerful, but started knowing practically nothing, thus being similar to artificial consciousness.
InWestworld, human-like androids called "Hosts" are created to entertain humans in an interactive playground. The humans are free to have heroic adventures, but also to commit torture, rape or murder; and the hosts are normally designed not to harm humans.[82][80]
InGreg Egan's short storyLearning to be me, a small jewel is implanted in people's heads during infancy. The jewel contains a neural network that learns to faithfully imitate the brain. It has access to the exact same sensory inputs as the brain, and a device called a "teacher" trains it to produce the same outputs. To prevent the mind from deteriorating with age and as a step towardsdigital immortality, adults undergo a surgery to give control of the body to the jewel, after which the brain is removed and destroyed. The main character is worried that this procedure will kill him, as he identifies with the biological brain. But before the surgery, he endures a malfunction of the "teacher". Panicked, he realizes that he does not control his body, which leads him to the conclusion that he is the jewel, and that he is desynchronized with the biological brain.[83][84]
|
https://en.wikipedia.org/wiki/Artificial_consciousness
|
Computer ethics is a part of practical philosophy concerned with how computing professionals should make decisions regarding professional and social conduct.[1] Margaret Anne Pierce, a professor in the Department of Mathematics and Computers at Georgia Southern University, has categorized the ethical decisions related to computer technology and usage into three primary influences.[2]
The term "computer ethics" was first coined by Walter Maner,[1] a professor at Bowling Green State University. Maner noticed that ethical concerns brought up during his Medical Ethics course at Old Dominion University became more complex and difficult when the use of technology and computers became involved.[3] The conceptual foundations of computer ethics are investigated by information ethics, a branch of philosophical ethics promoted, among others, by Luciano Floridi.[4]
The concept of computer ethics originated in the 1940s with MIT professorNorbert Wiener, the American mathematician and philosopher. While working on anti-aircraft artillery duringWorld War II, Wiener and his fellow engineers developed a system of communication between the part of a cannon that tracked a warplane, the part that performed calculations to estimate a trajectory, and the part responsible for firing.[1]Wiener termed the science of such information feedback systems, "cybernetics," and he discussed this new field with its related ethical concerns in his 1948 book,Cybernetics.[1][5]In 1950, Wiener's second book,The Human Use of Human Beings, delved deeper into the ethical issues surrounding information technology and laid out the basic foundations of computer ethics.[5]
A bit later during the same year, the world's firstcomputer crimewas committed. A programmer was able to use a bit of computer code to stop his banking account from being flagged as overdrawn. However, there were no laws in place at that time to stop him, and as a result he was not charged.[6][unreliable source?]To make sure another person did not follow suit, an ethics code for computers was needed.
In 1973, theAssociation for Computing Machinery(ACM) adopted its first code of ethics.[1]SRI International'sDonn Parker,[7]an author on computer crimes, led the committee that developed the code.[1]
In 1976, medical teacher and researcher Walter Maner noticed that ethical decisions are much harder to make when computers are added. He saw a need for a separate branch of ethics when it came to dealing with computers. The term "computer ethics" was thus invented.[1][5]
In 1976, Joseph Weizenbaum made his second significant addition to the field of computer ethics. He published a book titled Computer Power and Human Reason,[8] which argued that while artificial intelligence can be good for the world, it should never be allowed to make the most important decisions, as it does not have human qualities such as wisdom. By far the most important point he makes in the book is the distinction between choosing and deciding. He argued that deciding is a computational activity while making choices is not, and thus the ability to make choices is what makes us human.
At a later time during the same yearAbbe Mowshowitz, a professor of Computer Science at the City College of New York, published an article titled "On approaches to the study of social issues in computing." This article identified and analyzed technical and non-technical biases in research on social issues present in computing.
During 1978, theRight to Financial Privacy Actwas adopted by the United States Congress, drastically limiting the government's ability to search bank records.[9]
During the next year, Terrell Ward Bynum, a professor of philosophy at Southern Connecticut State University and Director of the Research Center on Computing and Society there, developed a curriculum for a university course on computer ethics.[10] Bynum was also editor of the journal Metaphilosophy.[1] In 1983 the journal held an essay contest on the topic of computer ethics and published the winning essays in its best-selling 1985 special issue, "Computers and Ethics."[1]
In 1984, the United States Congress passed the Small Business Computer Security and Education Act, which created aSmall Business Administrationadvisory council to focus on computer security related to small businesses.[11]
In 1985, James Moor, professor of philosophy at Dartmouth College in New Hampshire, published an essay called "What is Computer Ethics?"[5] In this essay Moor states that computer ethics includes the following: "(1) identification of computer-generated policy vacuums, (2) clarification of conceptual muddles, (3) formulation of policies for the use of computer technology, and (4) ethical justification of such policies."[1]
During the same year, Deborah G. Johnson, professor of Applied Ethics and chair of the Department of Science, Technology, and Society in the School of Engineering and Applied Sciences of the University of Virginia, published the first major computer ethics textbook.[5] Johnson's textbook identified major issues for research in computer ethics for more than 10 years after publication of the first edition.[5]
In 1988, Robert Hauptman, a librarian at St. Cloud University, came up with "information ethics", a term that was used to describe the storage, production, access and dissemination of information.[12]Near the same time, theComputer Matching and Privacy Actwas adopted and this act restricted United States government programs identifying debtors.[13]
In the year 1992, ACM adopted a new set of ethical rules called "ACM code of Ethics and Professional Conduct"[14]which consisted of 24 statements of personal responsibility.
Three years later, in 1995, Krystyna Górniak-Kocikowska, a professor of philosophy at Southern Connecticut State University, Coordinator of the Religious Studies Program, and a senior research associate in the Research Center on Computing and Society, proposed that computer ethics would eventually become a global ethical system and that, soon after, it would replace ethics altogether, becoming the standard ethics of the information age.[5]
In 1999, Deborah Johnson revealed her view, which was quite contrary to Górniak-Kocikowska's belief, stating that computer ethics would not evolve into something new but would rather remain our old ethics with a slight twist.[12]
After the 20th century, as a result of much debate over ethical guidelines, many organizations such as ABET[15] offer ethical accreditation to university or college programs such as "Applied and Natural Science, Computing, Engineering and Engineering Technology at the associate, bachelor, and master levels" to try to promote quality work that follows sound ethical and moral guidelines.
Computer crime,privacy,anonymity,freedom, andintellectual propertyfall under topics that will be present in the future of computer ethics.[16]
Ethical considerations have been linked to theInternet of Things (IoT)with many physical devices being connected to the internet.[16]
Virtual crypto-currencies raise questions about the balance of the current purchasing relationship between the buyer and seller.[16]
Autonomous technology, such as self-driving cars forced to make human decisions, is another emerging concern. There is also concern over how autonomous vehicles would behave in different countries with different cultural values.[17]
Security risks have been identified withcloud-based technologywith every user interaction being sent and analyzed to central computing hubs.[18]Artificial intelligence devices like theAmazon AlexaandGoogle Homeare collecting personal data from users while at home and uploading it to the cloud. Apple'sSiriand Microsoft'sCortanasmartphone assistants are collecting user information, analyzing the information, and then sending the information back to the user.
Computers and information technology have caused privacy concerns surrounding the collection and use of personal data.[19] For example, Google was sued in 2018 for tracking user location without permission.[20] In July 2019, Facebook also reached a $5 billion settlement with the U.S. Federal Trade Commission for violating an agreement with the agency to protect user privacy.[21]
A whole industry of privacy and ethical tools has grown over time, giving people the choice to not share their data online. These are often open source software, which allows the users to ensure that their data is not saved to be used without their consent.[22]
Theethicsofartificial intelligencecovers a broad range of topics within AI that are considered to have particular ethical stakes.[23]This includesalgorithmic biases,fairness,[24]automated decision-making,[25]accountability,privacy, andregulation. It also covers various emerging or potential future challenges such asmachine ethics(how to make machines that behave ethically),lethal autonomous weapon systems,arms racedynamics,AI safetyandalignment,technological unemployment, AI-enabledmisinformation, how to treat certain AI systems if they have amoral status(AI welfare and rights),artificial superintelligenceandexistential risks.[23]
The effects of infringing copying in the digital realm, particularly studied in the computer software and recorded music industries, have raised significant concerns among empirically oriented economists. While the software industry manages to thrive despite digital copying, the recorded music sector has witnessed a sharp decline in revenues, especially with the rise of file-sharing of MP3 files. Establishing the extent to which unpaid consumption of recorded music cannibalizes paid consumption is difficult empirically, for two reasons: it is hard to obtain data on unpaid consumption, and it is hard to draw causal inferences from the data that do exist.[26] Empirical studies consistently suggest a depressing impact on paid music consumption, indicating a likely contribution to the downturn in recorded music sales. The emergence of cyberlockers and rapid technological changes further complicate the analysis of revenue impacts on content industries, highlighting the need for ongoing research and a nuanced approach to copyright policy that considers user welfare effects and the distribution of rewards to artists and creators.
Various national and international professional societies and organizations have produced codes of ethics to give basic behavioral guidelines to computing professionals and users.
|
https://en.wikipedia.org/wiki/Computer_ethics
|
The dead Internet theory is a conspiracy theory that asserts that, due to a coordinated and intentional effort, the Internet now consists mainly of bot activity and automatically generated content manipulated by algorithmic curation to control the population and minimize organic human activity.[1][2][3][4][5] Proponents of the theory believe these social bots were created intentionally to help manipulate algorithms and boost search results in order to manipulate consumers.[6][7] Some proponents of the theory accuse government agencies of using bots to manipulate public perception.[2][6] The date given for this "death" is generally around 2016 or 2017.[2][8][9] The dead Internet theory has gained traction because many of the observed phenomena are quantifiable, such as increased bot traffic, but the literature on the subject does not support the full theory.[2][4][10]
The dead Internet theory's exact origin is difficult to pinpoint. In 2021, a post titled "Dead Internet Theory: Most Of The Internet Is Fake" was published onto the forumAgora Road's Macintosh Cafe esoteric board by a user named "IlluminatiPirate",[11]claiming to be building on previous posts from the same board and fromWizardchan,[2]and marking the term's spread beyond these initial imageboards.[2][12]The conspiracy theory has entered public culture through widespread coverage and has been discussed on various high-profile YouTube channels.[2]It gained more mainstream attention with an article inThe Atlantictitled "Maybe You Missed It, but the Internet 'Died' Five Years Ago".[2]This article has been widely cited by other articles on the topic.[13][12]
The dead Internet theory has two main components: that organic human activity on the web has been displaced by bots and algorithmically curated search results, and that state actors are doing this in a coordinated effort to manipulate the human population.[3][14][15] The first part of this theory, that bots create much of the content on the internet and perhaps contribute more than organic human content, has been a concern for a while, with the original post by "IlluminatiPirate" citing the article "How Much of the Internet Is Fake? Turns Out, a Lot of It, Actually" in New York magazine.[2][16][14] The theory goes on to claim that Google and other search engines are censoring the Web by filtering out undesirable content, limiting what is indexed and presented in search results.[3] While Google may suggest that there are millions of search results for a query, the results available to a user do not reflect that.[3] This problem is exacerbated by the phenomenon known as link rot, which occurs when content at a website becomes unavailable and all links to it on other sites break.[3] This has led to the theory that Google is a Potemkin village, and that the searchable Web is much smaller than we are led to believe.[3] The dead Internet theory suggests that this is part of the conspiracy to limit users to curated, and potentially artificial, content online.
The second half of the dead Internet theory builds on this observable phenomenon by proposing that the U.S. government, corporations, or other actors are intentionally limiting users to curated, and potentially artificial AI-generated content, to manipulate the human population for a variety of reasons.[2][14][15][3]In the original post, the idea that bots have displaced human content is described as the "setup", with the "thesis" of the theory itself focusing on the United States government being responsible for this, stating: "The U.S. government is engaging in an artificial intelligence-poweredgaslightingof the entire world population."[2][6]
Caroline Busta, founder of the media platform New Models, was quoted in a 2021 article in The Atlantic calling much of the dead Internet theory a "paranoid fantasy", even if there are legitimate criticisms involving bot traffic and the integrity of the internet, but she said she does agree with the "overarching idea".[2] In an article in The New Atlantis, Robert Mariani called the theory a mix between a genuine conspiracy theory and a creepypasta.[6]
In 2024, the dead Internet theory was sometimes used to refer to the observable increase in content generated via large language models (LLMs) such as ChatGPT appearing in popular Internet spaces without mention of the full theory.[1][17][18][19] In a 2025 article, Thomas Sommerer explored this portion of the dead Internet theory, calling the displacement of human-generated content by artificial content "an inevitable event."[18] Sommerer states that the dead Internet theory is not scientific in nature, but reflects the public perception of the Internet.[18] Another article, in the Journal of Cancer Education, discussed the impact of the perception of the dead Internet theory in online cancer support forums, specifically focusing on the psychological impact on patients who find that support is coming from an LLM and not a genuine human.[19] The article also discussed the possible problems in training data for LLMs that could emerge from using AI-generated content to train them.[19]
Generative pre-trained transformers(GPTs) are a class oflarge language models(LLMs) that employartificial neural networksto produce human-like content.[20][21]The first of these to be well known was developed byOpenAI.[22]These models have created significant controversy. For example, Timothy Shoup of theCopenhagen Institute for Futures Studiessaid in 2022, "in the scenario whereGPT-3'gets loose', the internet would be completely unrecognizable".[23]He predicted that in such a scenario, 99% to 99.9% of content online might be AI-generated by 2025 to 2030.[23]These predictions have been used as evidence for the dead internet theory.[13]
In 2024,Googlereported that its search results were being inundated with websites that "feel like they were created for search engines instead of people".[24]In correspondence withGizmodo, a Google spokesperson acknowledged the role ofgenerative AIin the rapid proliferation of such content and that it could displace more valuable human-made alternatives.[25]Bots using LLMs are anticipated to increase the amount of spam, and run the risk of creating a situation where bots interacting with each other create "self-replicating prompts" that result in loops only human users could disrupt.[5]
ChatGPT is an AI chatbot whose late 2022 release to the general public led journalists to call the dead internet theory potentially more realistic than before.[8][26] Before ChatGPT's release, the dead internet theory mostly emphasized government organizations, corporations, and tech-literate individuals. ChatGPT gives the average internet user access to large language models.[8][26] This technology caused concern that the Internet would become filled with content created through the use of AI that would drown out organic human content.[8][26][27][5][28]
In 2016, the security firmImpervareleased a report on bot traffic and found that automated programs were responsible for 52% of web traffic.[29][30]This report has been used as evidence in reports on the dead Internet theory.[2]Imperva's report for 2023 found that 49.6% of internet traffic was automated, a 2% rise on 2022 which was partly attributed to artificial intelligence modelsscraping the webfor training content.[31]
In 2024, AI-generated images on Facebook, referred to as "AI slop", began going viral.[35][36] Subjects of these AI-generated images included various iterations of Jesus "meshed in various forms" with shrimp, flight attendants, and black children next to artwork they supposedly created. Many of those iterations have hundreds or even thousands of AI comments that say "Amen".[37][38] These images have been referred to as an example of why the Internet feels "dead".[39] Sommerer discussed Shrimp Jesus in detail within his article as a symbol representing the shift in the Internet, specifically stating:
"Just as Jesus was supposedly the messenger for God, Shrimp Jesus is the messenger for the fatal system maneuvered ourselves into. Decoupled, proliferated, and in a state of exponential metastasis."[18]
Facebook includes an option to provide AI-generated responses to group posts. Such responses appear if a user explicitly tags @MetaAI in a post, or if the post includes a question and no other users have responded to it within an hour.[40]
In January 2025, interest renewed in the theory following statements from Meta on their plans to introduce new AI-powered autonomous accounts.[41] Connor Hayes, vice-president of product for generative AI at Meta, stated, "We expect these AIs to actually, over time, exist on our platforms, kind of in the same way that accounts do...They'll have bios and profile pictures and be able to generate and share content powered by AI on the platform."[42]
In the past, theRedditwebsite allowed free access to its API and data, which allowed users to employ third-party moderation apps and train AI in human interaction.[27]In 2023, the companymoved to charge for access to its user dataset. Companies training AI are expected to continue to use this data for training future AI.[citation needed]As LLMs such as ChatGPT become available to the general public, they are increasingly being employed on Reddit by users and bot accounts.[27]ProfessorToby Walsh, a computer scientist at the University of New South Wales, said in an interview withBusiness Insiderthat training the next generation of AI on content created by previous generations could cause the content to suffer.[27]University of South Florida professor John Licato compared this situation of AI-generated web content flooding Reddit to the dead Internet theory.[27]
Since 2020, severalTwitteraccounts started posting tweets starting with the phrase "I hate texting" followed by an alternative activity, such as "i hate texting i just want to hold ur hand", or "i hate texting just come live with me".[2]These posts received tens of thousands of likes, many of which are suspected to be frombot accounts. Proponents of the dead internet theory have used these accounts as an example.[2][12]
The proportion of Twitter accounts run by bots became a major issue duringElon Musk's acquisition of the company.[44][45][46][47]Musk disputed Twitter's claim that fewer than 5% of their monetizable daily active users (mDAU) were bots.[44][48]Musk commissioned the company Cyabra to estimate what percentage of Twitter accounts were bots, with one study estimating 13.7% and another estimating 11%.[44]CounterAction, another firm commissioned by Musk, estimated 5.3% of accounts were bots.[49]Some bot accounts provide services, such as one noted bot that can provide stock prices when asked, while others troll, spread misinformation, or try to scam users.[48]Believers in the dead Internet theory have pointed to this incident as evidence.[50]
In 2024,TikTokbegan discussing offering the use of virtual influencers to advertisement agencies.[15]In a 2024 article inFast Company, journalistMichael Grothauslinked this and other AI-generated content on social media to the Dead Internet Theory.[15]In this article, he referred to the content as "AI-slime".[15]
OnYouTube, there is a market online for fake views to boost a video's credibility and reach broader audiences.[51]At one point, fake views were so prevalent that some engineers were concerned YouTube's algorithm for detecting them would begin to treat the fake views as default and start misclassifying real ones.[51][2]YouTube engineers coined the term "the Inversion" to describe this phenomenon.[51][16][28]YouTube bots and the fear of "the Inversion" were cited as support for the dead Internet theory in a thread on the internet forum Melonland.[2]
SocialAI, an app released on September 18, 2024 by Michael Sayman, was designed for the sole purpose of chatting with AI bots, without any human interaction.[52] An article on the Ars Technica website linked SocialAI to the dead Internet theory.[52][53]
The dead internet theory has been discussed among users of the social media platform Twitter. Users have noted that bot activity has affected their experience.[2] Numerous YouTube channels and online communities, including the Linus Tech Tips forums and Joe Rogan subreddit, have covered the dead Internet theory, which has helped to advance the idea into mainstream discourse.[2] There has also been discussion and memes about this topic on the app TikTok, because AI-generated content has become more mainstream.[attribution needed]
|
https://en.wikipedia.org/wiki/Dead_internet_theory
|
Effective altruism(EA) is a 21st-centuryphilosophicalandsocial movementthat advocates impartially calculating benefits and prioritizing causes to provide the greatest good. It is motivated by "usingevidenceandreasonto figure out how to benefit others as much as possible, and taking action on that basis".[1][2]People who pursue the goals of effective altruism, who are sometimes calledeffective altruists,[3]follow a variety of approaches proposed by the movement, such as donating to selectedcharitiesand choosing careers with the aim of maximizing positive impact. The movement gained popularity outside academia, spurring the creation ofresearch centers, advisory organizations, and charities, which collectively have donated several hundred million dollars.[4]
Effective altruists emphasize impartiality and the globalequal consideration of interestswhen choosing beneficiaries. Popularcause prioritieswithin effective altruism includeglobal healthanddevelopment,socialandeconomic inequality,animal welfare, andrisks to the survival of humanityover thelong-term future. EA has an especially influential status within animal advocacy.[4]
The movement developed during the 2000s, and the nameeffective altruismwas coined in 2011. Philosophers influential to the movement includePeter Singer,Toby Ord, andWilliam MacAskill. What began as a set of evaluation techniques advocated by a diffuse coalition evolved into an identity.[5]Effective altruism has strong ties to the elite universities in the United States and Britain, andSilicon Valleyhas become a key centre for the "longtermist" submovement, with a tight subculture there.[6]
The movement received mainstream attention and criticism with thebankruptcy of the cryptocurrency exchange FTXas founderSam Bankman-Friedwas a major funder of effective altruism causes prior to late 2022. Some in theSan Francisco Bay Areacriticized what they described as a culture of sexual misconduct.
Beginning in the latter half of the 2000s, several communities centered around altruist, rationalist, andfuturologicalconcerns started to converge.[7]
In 2011, Giving What We Can and 80,000 Hours decided to incorporate into anumbrella organizationand held a vote for their new name; the "Centre for Effective Altruism" was selected.[13][14]TheEffective Altruism Globalconference has been held since 2013. As the movement formed, it attracted individuals who were not part of a specific community, but who had been following the Australian moral philosopherPeter Singer's work onapplied ethics, particularly "Famine, Affluence, and Morality" (1972),Animal Liberation(1975), andThe Life You Can Save(2009).[7][15]Singer himself used the term in 2013, in aTED talktitled "The Why and How of Effective Altruism".[16]
An estimated $416 million was donated to effective charities identified by the movement in 2019,[17]representing a 37% annual growth rate since 2015.[18]Two of the largest donors in the effective altruism community,Dustin Moskovitz, who had become wealthy through co-founding Facebook, and his wifeCari Tuna, hope to donate most of their net worth of over $11 billion for effective altruist causes through the private foundationGood Ventures.[9]Others influenced by effective altruism include Sam Bankman-Fried[19]and professional poker playersDan Smith[20]andLiv Boeree.[20]Jaan Tallinn, the Estonian billionaire co-founder of Skype, is known for donating to some effective altruist causes.[21]Sam Bankman-Fried launched a philanthropic organization called the FTX Foundation in February 2021,[22]and it made contributions to a number of effective altruist organizations, but it was shut down in November 2022 when FTX collapsed.[23]
A number of books and articles related to effective altruism have been published that have codified, criticized, and brought more attention to the movement. In 2015, philosopherPeter SingerpublishedThe Most Good You Can Do: How Effective Altruism Is Changing Ideas About Living Ethically.[24]The same year, the Scottish philosopher and ethicistWilliam MacAskillpublishedDoing Good Better: How Effective Altruism Can Help You Make a Difference.[25][26]
In 2018, American news websiteVoxlaunched itsFuture Perfectsection, led by journalistDylan Matthews, which publishes articles and podcasts on "finding the best ways to do good".[27][28]
In 2019,Oxford University Presspublished the volumeEffective Altruism: Philosophical Issues, edited byHilary Greavesand Theron Pummer.[29]
More recent books have emphasized concerns for future generations. In 2020, the Australian moral philosopherToby OrdpublishedThe Precipice: Existential Risk and the Future of Humanity,[30]while MacAskill publishedWhat We Owe the Futurein 2022.[31]
In 2023, Oxford University Press published the volumeThe Good it Promises, The Harm it Does: Critical Essays on Effective Altruism, edited byCarol J. Adams,Alice Crary, andLori Gruen.[32][33]
Effective altruists focus on the many philosophical questions related to the most effective ways to benefit others.[34][35]Such philosophical questions shift the starting point of reasoning from "what to do" to "why" and "how".[36]There is no consensus on the answers, and there are also differences between effective altruists who believe that they should do the most good they possibly can with all of their resources[37]and those who only try to do the most good they can within a defined budget.[35]: 15
According to MacAskill, the view of effective altruism as doing the most good one can within a defined budget can be compatible with a wide variety of views onmoralityandmeta-ethics, as well as traditional religious teachings on altruism such as inChristianity.[1][34]Effective altruism can also be in tension with religion where religion emphasizes spending resources on worship and evangelism instead of causes that do the most good.[1]
Other than Peter Singer and William MacAskill, philosophers associated with effective altruism includeNick Bostrom,[21]Toby Ord,[38]Hilary Greaves,[39]andDerek Parfit.[40]EconomistYew-Kwang Ngconducted similar research inwelfare economicsandmoral philosophy.[41]
The Centre for Effective Altruism lists the following four principles that unite effective altruism: prioritization, impartial altruism, open truthseeking, and a collaborative spirit.[42]To support people's ability to act altruistically on the basis of impartial reasoning, the effective altruism movement promotes values and actions such as a collaborative spirit, honesty, transparency, and publicly pledging to donate a certain percentage of income or other resources.[1]: 2
Effective altruism aims to emphasize impartial reasoning in that everyone'swell-beingcounts equally.[43]Singer, in his 1972 essay "Famine, Affluence, and Morality",[15]wrote:
It makes no moral difference whether the person I can help is a neighbor's child ten yards away from me or a Bengali whose name I shall never know, ten thousand miles away ... The moral point of view requires us to look beyond the interests of our own society.[44]: 231–232
Impartiality combined with seeking to do the most good leads to prioritizing benefits to those who are in a worse state, because anyone who happens to be worse off will benefit more from an improvement in their state, all other things being equal.[34][42]
One issue related to moral impartiality is the question of which beings are deserving of moral consideration. Some effective altruists consider the well-being of non-human animals in addition to humans, and advocate for animal welfare issues such as endingfactory farming.[45]Those who subscribe tolongtermismincludefuture generationsas possible beneficiaries and try to improve the moral value of the long-term future by, for example, reducingexistential risks.[14]: 165–178[46]
Thedrowning child analogyin Singer's essay provoked philosophical debate. In response to a version of Singer's drowning child analogy,[47]philosopherKwame Anthony Appiahin 2006 asked whether the most effective action of a man in an expensive suit, confronted with a drowning child, would not be to save the child and ruin his suit—but rather, sell the suit and donate the proceeds to charity.[48][49]Appiah believed that he "should save the drowning child and ruin my suit".[48]In a 2015 debate, when presented with a similar scenario of either saving a child from a burning building or saving aPicassopainting to sell and donate the proceeds to charity, MacAskill responded that the effective altruist should save and sell the Picasso.[50]Psychologist Alan Jern called MacAskill's choice "unnatural, even distasteful, to many people", although Jern concluded that effective altruism raises questions "worth asking".[51]MacAskill later endorsed a "qualified definition of effective altruism" in which effective altruists try to do the most good "without violating constraints" such as any obligations that someone might have to help those nearby.[52]
William Schambra has criticized the impartial logic of effective altruism, arguing that benevolence arising fromreciprocityand face-to-face interactions is stronger and more prevalent than charity based on impartial, detached altruism.[53]Such community-based charitable giving, he wrote, is foundational tocivil societyand, in turn,democracy.[53]Larissa MacFarquharsaid that people have diverse moral emotions, and she suggested that some effective altruists are not unemotional and detached but feel as much empathy for distant strangers as for people nearby.[54]Richard Pettigrew concurred that many effective altruists "feel more profound dismay at the suffering of people unknown to them than many people feel", and he argued that impartiality in EA need not be dispassionate and "is not obviously in tension with much incare ethics" as some philosophers have argued.[33]Ross DouthatofThe New York Timescriticized the movement's"'telescopic philanthropy' aimed at distant populations" and envisioned "effective altruists sitting around in a San Francisco skyscraper calculating how to relieve suffering halfway around the world while the city decays beneath them", while he also praised the movement for providing "useful rebukes to the solipsism and anti-human pessimism that haunts the developed world today".[55]
A key component of effective altruism is "cause prioritization". Cause prioritization is based on the principle of causeneutrality, the idea that resources should be distributed to causes based on what will do the most good, irrespective of the identity of the beneficiary and the way in which they are helped.[34]By contrast, many non-profits emphasize effectiveness and evidence with respect to a single cause such as education or climate change.[53]
One tool that EA-based organizations may use to prioritize cause areas is theimportance, tractability, and neglectednessframework. Importance is the amount of value that would be created if a problem were solved, tractability is the fraction of a problem that would be solved if additional resources were devoted to it, and neglectedness is the quantity of resources already committed to a cause.[5]
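In practice, the three factors are often combined multiplicatively, so that a cause scores highly only if it is large, solvable, and under-resourced. The sketch below illustrates that kind of comparison; the scoring rule and all numbers are illustrative assumptions, not figures used by any actual evaluator.

```python
# Hypothetical sketch of an importance/tractability/neglectedness comparison.
# The multiplicative scoring rule and the numbers are illustrative assumptions only.

causes = {
    # name: (importance, tractability, neglectedness), each on an arbitrary 0-10 scale
    "cause_a": (9, 3, 8),
    "cause_b": (6, 7, 2),
    "cause_c": (8, 5, 5),
}

def itn_score(importance, tractability, neglectedness):
    """Combine the three factors multiplicatively; higher means higher priority."""
    return importance * tractability * neglectedness

ranked = sorted(causes.items(), key=lambda kv: itn_score(*kv[1]), reverse=True)
for name, factors in ranked:
    print(name, itn_score(*factors))
```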
The information required for cause prioritization may involvedata analysis, comparing possible outcomes with what would have happened under other conditions (counterfactual reasoning), and identifyinguncertainty.[34][56]The difficulty of these tasks has led to the creation of organizations that specialize in researching the relative prioritization of causes.[34]
This practice of "weighing causes and beneficiaries against one another" was criticized by Ken Berger and Robert Penna ofCharity Navigatorfor being "moralistic, in the worst sense of the word" and "elitist".[57]William MacAskill responded to Berger and Penna, defending the rationale for comparing one beneficiary's interests against another and concluding that such comparison is difficult and sometimes impossible but often necessary.[58]MacAskill argued that the more pernicious form of elitism was that of donating to art galleries (and like institutions) instead of charity.[58]Ian David Moss suggested that the criticism of cause prioritization could be resolved by what he called "domain-specific effective altruism", which would encourage "that principles of effective altruism be followed within an area of philanthropic focus, such as a specific cause or geography" and could resolve the conflict between local and global perspectives for some donors.[59]
Some charities are considered to be far more effective than others, as charities may spend different amounts of money to achieve the same goal, and some charities may not achieve the goal at all.[60]Effective altruists seek to identify interventions that are highly cost-effective inexpectation. Many interventions haveuncertainbenefits, and the expected value of one intervention can be higher than that of another if its benefits are larger, even if it has a smaller chance of succeeding.[26]One metric effective altruists use to choose between health interventions is the estimated number ofquality-adjusted life years(QALY) added per dollar.[5]
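As an illustration of the expected-value comparison described above, the sketch below compares two hypothetical interventions; the probabilities, QALY figures, and costs are invented, and real cost-effectiveness analyses are far more involved.

```python
# Illustrative expected-value comparison between two hypothetical interventions.
# All figures are made up; they only show how a riskier intervention can have
# a higher expected value if its payoff is large enough.

def expected_qalys_per_dollar(success_probability, qalys_if_successful, cost_dollars):
    """Expected QALYs gained per dollar spent."""
    return success_probability * qalys_if_successful / cost_dollars

certain_but_small = expected_qalys_per_dollar(0.95, 1_000, 100_000)   # ~0.0095 QALYs per dollar
risky_but_large = expected_qalys_per_dollar(0.10, 50_000, 100_000)    # ~0.05 QALYs per dollar

print(certain_but_small, risky_but_large)  # the riskier intervention has the higher expected value
```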
Some effective altruist organizations preferrandomized controlled trialsas a primary form of evidence,[26][61]as they are commonly considered the highest level of evidence in healthcare research.[62]Others have argued that requiring this stringent level of evidence unnecessarily narrows the focus to issues where the evidence can be developed.[63]Kelsey Piperargues that uncertainty is not a good reason for effective altruists to avoid acting on their best understanding of the world, because most interventions have mixed evidence regarding their effectiveness.[64]
Pascal-Emmanuel Gobry and others have warned about the "measurement problem",[63][65]noting that issues such as medical research or government reform are worked on "one grinding step at a time", with results that are hard to measure with controlled experiments. Gobry also argues that such interventions risk being undervalued by the effective altruism movement.[65]Because effective altruism emphasizes a data-centric approach, critics say principles which do not lend themselves to quantification—justice, fairness, equality—get left on the sidelines.[5][26]
Counterfactualreasoning involves considering the possible outcomes of alternative choices. It has been employed by effective altruists in a number of contexts, including career choice. Many people assume that the best way to help others is through direct methods, such as working for a charity or providing social services.[66]However, since there is a high supply of candidates for such positions, it makes sense to compare the amount of good one candidate does to how much good the next-best candidate would do. According to this reasoning, the marginal impact of a career is likely to be smaller than the gross impact.[67]
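A toy calculation of this distinction (all numbers invented): if the next-best candidate would have done nearly as much good, the impact attributable to taking the job is much smaller than the job's gross impact.

```python
# Toy counterfactual-impact calculation; the figures are invented for illustration.

good_done_by_you = 100.0        # arbitrary units of good per year if you take the charity job
good_done_by_next_best = 90.0   # what the next-best applicant would have achieved instead

gross_impact = good_done_by_you
marginal_impact = good_done_by_you - good_done_by_next_best  # only 10 units are attributable to you

print(gross_impact, marginal_impact)
```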
Although EA, likeutilitarianism, aims atmaximizingthe good, EA differs from utilitarianism in a few ways; for example, EA does not claim that people should always maximize the goodregardless of the means, and EA does not claim that the good is the sum total ofwell-being.[52]Toby Ord has described utilitarians as "number-crunching", compared with most effective altruists whom he called "guided by conventional wisdom tempered by an eye to the numbers".[68]Other philosophers have argued that EA still retains some core ethical commitments that are essential and distinctive to utilitarianism, such as the principle of impartiality,welfarismand good-maximization.[69]
MacAskill has argued that one shouldn't be absolutely certain about which ethical view is correct, and that "when we are morally uncertain, we should act in a way that serves as a best compromise between different moral views".[31]He also wrote that even from a purelyconsequentialistperspective, "naive calculations that justify some harmful action because it has good consequences are, in practice, almost never correct".[31]
Effective accelerationism(abbreviated e/acc) is influenced by ideas ofaccelerationism. Its proponents advocate unrestricted technological progress in the hope thatartificial general intelligencewill solve major challenges and maximize overall good, arguing that deceleration and stagnation of technology is a greater risk than any posed by AI. Effective altruists are generally more cautious about AI, considering that going too fast could increaseexistential risks.[70]
The principles and goals of effective altruism are wide enough to support furthering any cause that allows people to do the most good, while taking into account cause neutrality.[36]Many people in the effective altruism movement have prioritized global health and development, animal welfare, and mitigating risks that threaten the future of humanity.[9][61]
The alleviation ofglobal povertyandneglected tropical diseaseshas been a focus of some of the earliest and most prominent organizations associated with effective altruism. Charity evaluator GiveWell was founded byHolden KarnofskyandElie Hassenfeldin 2007 to address poverty,[71]where they believe additional donations to be the most impactful.[72]GiveWell's leading recommendations include:malaria preventioncharitiesAgainst Malaria FoundationandMalaria Consortium,dewormingcharitiesSchistosomiasis Control InitiativeandDeworm the World Initiative, andGiveDirectlyfor direct cash transfers to beneficiaries.[73][74]The organization The Life You Can Save, which originated from Singer's bookThe Life You Can Save,[75]works to alleviate global poverty by promoting evidence-backed charities, conducting philanthropy education, and changing the culture of giving in affluent countries.[76]
Improving animal welfare has been a focus of many effective altruists.[77]Singer andAnimal Charity Evaluators(ACE) have argued that effective altruists should prioritize changes to factory farming over pet welfare.[24]60 billion land animals areslaughteredand between 1 and 2.7 trillion individual fish are killed each year for human consumption.[78]
A number of non-profit organizations have been established that adopt an effective altruist approach toward animal welfare. ACE evaluates animal charities based on their cost-effectiveness and transparency, particularly those tackling factory farming.[14]: 139[79]Faunalyticsfocuses on animal welfare research.[80]Other animal initiatives affiliated with effective altruism includeAnimal Ethics' andWild Animal Initiative's work onwild animal suffering,[81][82]addressing farm animal suffering withcultured meat,[83]and increasing concern for all kinds of animals.[84][85]TheSentience Instituteis athink tankfounded toexpand the moral circleto othersentientbeings.[86]
The ethical stance oflongtermism, emphasizing the importance of positively influencing the long-term future, developed closely in relation to effective altruism.[87][88]Longtermism argues that "distance in time is like distance in space", suggesting that the welfare of future individuals matters as much as the welfare of currently existing individuals. Given the potentially extremely high number of individuals that could exist in the future, longtermists seek to decrease the probability that an existential catastrophe irreversibly ruins it.[89]Toby Ord has stated that "the people of the future may be even more powerless to protect themselves from the risks we impose than the dispossessed of our own time".[90]: 8
Existential risks, such as dangers associated withbiotechnologyandadvanced artificial intelligence, are often highlighted and the subject of active research.[88]Existential risks have such huge impacts that achieving a very small change in such a risk—say a 0.0001-percent reduction—"might be worth more than saving a billion people today", reported Gideon Lewis-Kraus in 2022, but he added that nobody in the EA community openly endorses such an extreme conclusion.[5]
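The arithmetic behind such comparisons can be made explicit with a stylized expected-value calculation; the future-population figure below is an illustrative assumption, not a number from the cited reporting. If $N$ is the number of future lives at stake and $\Delta p$ the reduction in existential risk, the expected number of lives saved is $\Delta p \cdot N$. With $N = 10^{16}$ and $\Delta p = 10^{-6}$ (a 0.0001-percent reduction):

\[
\Delta p \cdot N = 10^{-6} \times 10^{16} = 10^{10},
\]

which exceeds the $10^{9}$ lives in "saving a billion people today".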
Organizations that work actively on research and advocacy for improving the long-term future, and have connections with the effective altruism community, are theFuture of Humanity Instituteat the University of Oxford, theCentre for the Study of Existential Riskat the University of Cambridge, and theFuture of Life Institute.[91]In addition, theMachine Intelligence Research Instituteis focused on the more narrow mission ofmanaging advanced artificial intelligence.[92]
Some effective altruists focus on reducingrisks of astronomical suffering(s-risks). S-risks are a particularly severe type of existential risk due to their potential scope and severity, surpassing even human extinction in negative impact. Efforts to mitigate these risks include research and advocacy by organizations like the Center on Long-Term Risk, which explores strategies to avoid large-scale suffering. S-risks could arise from a long-term neglect for the welfare of some types of sentient beings. Another suggested scenario involves repressive totalitarian regimes that would become irreversibly stable due to advanced technology.[93][94]
Effective altruists pursue different approaches to doing good, such as donating to effective charitable organizations, using their career to make more money for donations or directly contributing their labor, and starting new non-profit or for-profit ventures.
Many effective altruists engage in charitabledonation. Some believe it is a moral duty to alleviatesufferingthrough donations if other possible uses of those funds do not offer comparable benefits to oneself.[44]Some lead afrugal lifestylein order to donate more.[95]
Giving What We Can (GWWC) is an organization whose members pledge to donate at least 10% of their future income to the causes that they believe are the most effective. GWWC was founded in 2009 by Toby Ord, who lives on £18,000 ($27,000) per year and donates the balance of his income.[96]In 2020, Ord said that people had donated over $100 million to date through the GWWC pledge.[97]
Founders Pledgeis a similar initiative, founded out of the non-profit Founders Forum for Good, whereby entrepreneurs make a legally binding commitment to donate a percentage of their personal proceeds to charity in the event that they sell their business.[98]As of April 2024, nearly 1,900 entrepreneurs had pledged around $10 billion and nearly $1.1 billion had been donated.[99]
EA has been used to argue that humans shoulddonate organs, whilst alive or after death, and some effective altruists do.[100]
Effective altruists often consider using their career to do good,[101]both by direct service and indirectly through their consumption, investment, and donation decisions.[102]80,000 Hours is an organization that conducts research and gives advice on which careers have the largest positive impact.[103][104]
Some effective altruists start non-profit or for-profit organizations to implementcost-effectiveways of doing good. On the non-profit side, for example,Michael KremerandRachel Glennersterconductedrandomized controlled trialsin Kenya to find out the best way to improve students' test scores. They tried new textbooks and flip charts, as well as smaller class sizes, but found that the only intervention that raised school attendance was treating intestinal worms in children. Based on their findings, they started theDeworm the World Initiative.[26]From 2013 to August 2022, GiveWell designated Deworm the World (now run by nonprofitEvidence Action) as a top charity based on their assessment thatmass dewormingis "generally highly cost-effective";[106]however, there is substantial uncertainty about the benefits of mass deworming programs, with some studies finding long-term effects and others not.[64]The Happier Lives Institute conducts research on the effectiveness ofcognitive behavioral therapy(CBT) in developing countries;[107]Canopie develops an app that provides cognitive behavioural therapy to women who are expecting or postpartum;[108]Giving Green analyzes and ranks climate interventions for effectiveness;[109]the Fish Welfare Initiative works on improving animal welfare in fishing and aquaculture;[84]and theLead Exposure Elimination Projectworks on reducinglead poisoningin developing countries.[110]
While much of the initial focus of effective altruism was on direct strategies such as health interventions and cash transfers, moresystematicsocial, economic, and political reforms have also attracted attention.[111]Mathew Snow inJacobinwrote that effective altruism "implores individuals to use their money to procure necessities for those who desperately need them, but says nothing about the system that determines how those necessities are produced and distributed in the first place".[112]PhilosopherAmia Srinivasancriticized William MacAskill'sDoing Good Betterfor a perceived lack of coverage ofglobal inequalityandoppression, while noting that effective altruism is in principle open to whichever means of doing good is most effective, including political advocacy aimed at systemic change.[113]Srinivasan said, "Effective altruism has so far been a rather homogeneous movement of middle-class white men fighting poverty through largely conventional means, but it is at least in theory a broad church."[113]Judith LichtenberginThe New Republicsaid that effective altruists "neglect the kind of structural and political change that is ultimately necessary".[114]An article inThe Ecologistpublished in 2016 argued that effective altruism is an apolitical attempt to solve political problems, describing the concept as "pseudo-scientific".[115]The Ethiopian-American AI scientistTimnit Gebruhas condemned effective altruists "for acting as though their concerns are above structural issues as racism and colonialism", as Gideon Lewis-Kraus summarized her views in 2022.[5]
Philosophers such as Susan Dwyer, Joshua Stein, andOlúfẹ́mi O. Táíwòhave criticized effective altruism for furthering the disproportionate influence of wealthy individuals in domains that should be the responsibility of democratic governments and organizations.[116]
Arguments have been made that movements focused on systemic or institutional change, for exampledemocratization, are compatible with effective altruism.[117][118][119]Philosopher Elizabeth Ashford posits that people are obligated to both donate to effective aid charities and toreform the structuresthat are responsible for poverty.[120]Open Philanthropy has given grants for progressive advocacy work in areas such as criminal justice,[9][121]economic stabilization,[9]and housing reform,[122]despite pegging the success of political reform as being "highly uncertain".[9]
Researchers inpsychologyand related fields have identified psychological barriers to effective altruism that can cause people to choose less effective options when they engage in altruistic activities such as charitable giving.[123]
While originally the movement leaders were associated with frugal lifestyles, the arrival of big donors, including Bankman-Fried, led to more spending and opulence, which seemed incongruous with the movement's espoused values.[124][125]In 2022, Effective Ventures Foundation purchased the estate ofWytham Abbeyfor the purpose of running workshops,[5]but put it up for sale in 2024.[126]
Timnit Gebruclaimed that effective altruism has acted to overrule any other concerns regardingAI ethics(e.g.deepfake porn,algorithmic bias), in the name of either preventing orcontrollingartificial general intelligence.[127]She andÉmile P. Torresfurther assert that the movement belongs to a network of interconnected movements they've termedTESCREAL, which they contend serves as intellectual justification for wealthy donors to shape humanity's future.[128]
Sam Bankman-Fried, the eventual founder of thecryptocurrency exchangeFTX, had lunch with philosopherWilliam MacAskillin 2012 while Bankman-Fried was an undergraduate at MIT; MacAskill encouraged him to go earn money and donate it, rather than volunteering his time for causes.[6][129]Bankman-Fried went on to a career in investing and around 2019 became more publicly associated with the effective altruism movement,[130]announcing that his goal was to "donate as much as [he] can".[131]Bankman-Fried founded the FTX Future Fund, which brought on MacAskill as one of its advisers, and which made a $13.9 million grant to theCentre for Effective Altruismwhere MacAskill holds a board role.[129]
Afterthe collapse of FTXin late 2022, the movement underwent additional public scrutiny.[125][132][6][133]Bankman-Fried's relationship with effective altruism damaged the movement's reputation.[129][134]Some journalists asked whether the effective altruist movement was complicit in FTX's collapse, because it was convenient for leaders to overlook specific warnings about Bankman-Fried's behavior or questionable ethics at the trading firm Alameda.[124][135]Fortune's crypto editor Jeff John Roberts said that "Bankman-Fried and his cronies professed devotion to 'EA,' but all their high-minded words turned out to be flimflam to justify robbing people".[136]
MacAskill condemned Bankman-Fried's actions, saying that effective altruism emphasizes integrity.[132][137]
PhilosopherLeif Wenarargued that Bankman-Fried's conduct typified much of the movement by focusing on positive impacts andexpected valuewithout adequately weighingrisk and harm from philanthropy. He argued that the FTX case is not separable, as some in the EA community maintained, from the assumptions and reasoning that molded effective altruism as a philosophy in the first place and that Wenar considered to be simplistic.[125]
Critiques arose not only in relation to Bankman-Fried's role and his close association with William MacAskill, but also concerning issues of exclusion andsexual harassment.[6][138][139]A 2023 Bloomberg article featured some members of the effective altruism community who alleged that the philosophy masked a culture of predatory behavior.[140]In a 2023Timemagazine article, seven women reported misconduct and controversy in the effective altruism movement. They accused men within the movement, typically in theBay Area, of using their power to groom younger women forpolyamoroussexual relationships.[138]The accusers argued that the majority male demographic and the polyamorous subculture combined to create an environment where sexual misconduct was tolerated, excused or rationalized away.[138]In response to the accusations, theCentre for Effective AltruismtoldTimethat some of the alleged perpetrators had already been banned from the organization and said it would investigate new claims.[138]The organization also argued that it is challenging to discern to what extent sexual misconduct issues were specific to the effective altruism community or reflective of broader societalmisogyny.[138]
BusinessmanElon Muskspoke at an effective altruism conference in 2015.[129]He described MacAskill's 2022 bookWhat We Owe the Futureas "a close match for my philosophy", but has not officially joined the movement.[129]An article inThe Chronicle of Philanthropyargued that the record of Musk's substantive alignment with effective altruism was "choppy",[141]andBloomberg Newsnoted that his 2021 charitable contributions showed "few obvious signs that effective altruism... impacted Musk's giving."[142]
ActorJoseph Gordon-Levitthas publicly stated he would like to bring the ideas of effective altruism to a broader audience.[5]
Sam Altman, the CEO of OpenAI, has called effective altruism an "incredibly flawed movement" that shows "very weird emergent behavior".[143][further explanation needed]Effective altruist concerns about AI risk were present among the OpenAI board members who fired Altman in November 2023;[136][143][144]he was later reinstated as CEO, and the board membership changed.[145]
|
https://en.wikipedia.org/wiki/Effective_altruism#Long-term_future_and_global_catastrophic_risks
|
Theethics of uncertain sentiencerefers to questions surrounding the treatment of and moral obligations towards individuals whosesentience—the capacity to subjectively sense and feel—and resulting ability to experience pain is uncertain; the topic has been particularly discussed within the field ofanimal ethics, with theprecautionary principlefrequently invoked in response.
David Foster Wallacein his 2005 essay "Consider the Lobster" investigated the potential sentience and capacity ofcrustaceans to experience painand the resulting ethical implications of eating them.[2][3]In 2014, the philosopher Robert C. Jones explored the ethical question that Wallace raised, arguing that "[e]ven if one remains skeptical of crustacean sentience, when it comes to issues of welfare it would be most prudent to employ the precautionary principle regarding our treatment of these animals, erring on the side of caution".[4]Maximilian Padden Elder takes a similar view regarding the capacity forfishes to feel pain, arguing that the "precautionary principle is the moral ethic one ought to adopt in the face of uncertainty".[5]
In the 2015 essay "Reconsider the Lobster",Jeff Seboquotes Wallace's discussion of the difficulty of establishing whether an animal can experience pain.[6]Sebo calls the question of how to treat individuals of uncertain sentience, the "sentience problem" and argues that this problem which "Wallace raises deserves much more philosophical attention than it currently receives."[6]Sebo asserts that there are two motivating assumptions behind the problem: "sentientism aboutmoral status"—the idea that if an individualissentient, then they deserve moral consideration—and "uncertainty about other minds", which refers to scientific and philosophical uncertainty about which individuals are sentient.[6]
In response to the problem, Sebo lays out three different potential approaches: the incautionary principle, which postulates that in cases of uncertainty about sentience it is morally permissible to treat individuals as if they are not sentient; the precautionary principle, which suggests that in such cases we have a moral obligation to treat them as if they are sentient; and the expected value principle, which asserts that we are "morally required to multiply our credence that they are by the amount of moral value they would have if they were, and to treat the product of this equation as the amount of moral value that they actually have". Sebo advocates for the latter position.[6][7]
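Sebo's expected value principle can be written as a simple formula (the notation here is introduced for illustration, not drawn from the essay):

\[
V_{\text{expected}} = c \times V_{\text{if sentient}},
\]

where $c$ is one's credence that the individual is sentient and $V_{\text{if sentient}}$ is the moral value the individual would have if sentient. For example, a credence of 0.3 would, on this principle, assign the individual 30% of the moral weight it would have if its sentience were certain.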
Jonathan Birch, in answer to the question surrounding animal sentience, advocates for a practical framework based on the precautionary principle, arguing that the framework aligns well with conventional practices inanimal welfare science.[9]
Simon Knutsson and Christian Munthe argue that, from the perspective ofvirtue ethics, when it comes to animals of uncertain sentience, such as "fish, invertebrates such as crustaceans, snails and insects", it is a "requirement of a morally decent (or virtuous) person that she at least pays attention to and is cautious regarding the possibly morally relevant aspects of such animals".[10]
Shelley A. Adamo argues that although the precautionary principle is the safest option in relation to potential invertebrate sentience, it is not cost-free: thoughtless legislation enacted in the name of the precautionary principle could be economically costly, and as a result we should be cautious about adopting it.[11]
Kai Chan advocates for anenvironmental ethic, which is a form ofethical extensionismapplied to all living beings because "there is a non-zero probability of sentience and consciousness" and that "we cannot justify excluding beings from consideration on the basis of uncertainty of their sentience".[12]
Nick BostromandEliezer Yudkowskyargue that if anartificial intelligence is sentient, then it is wrong to inflict unnecessary pain on it, in the same way that it is wrong to inflict pain on an animal, unless there are "sufficiently strong morally overriding reasons to do so".[13]They also advocate for the "Principle of Substrate Non-Discrimination", which asserts: "If two beings have the same functionality and the same conscious experience, and differ only in the substrate of their implementation, then they have the same moral status."[13]
Soenke Ziesche andRoman Yampolskiycoined the term "AI welfare" and outlined the new field of AI welfare science, which is derived fromanimal welfare science.[14]
Adam J. Shriver argues for "precise, precautionary, and probabilistic approaches to sentience" and asserts that the evidence provided by neuroscience has differing relevance to each; he concludes that basic protections for animals should be guided by the precautionary principle and that although neuroscientific evidence in certain instances is not necessary to indicate that individuals of certain species require protections, "ongoing search for the neural correlates of sentience must be pursued in order to avoid harms that occur from mistaken accounts."[15]
|
https://en.wikipedia.org/wiki/Ethics_of_uncertain_sentience
|
Existential risk from artificial intelligencerefers to the idea that substantial progress inartificial general intelligence(AGI) could lead tohuman extinctionor an irreversibleglobal catastrophe.[1][2][3][4]
One argument for the importance of this risk references howhuman beingsdominate other species because thehuman brainpossesses distinctive capabilities other animals lack. If AI were to surpasshuman intelligenceand becomesuperintelligent, it might become uncontrollable. Just as the fate of themountain gorilladepends on human goodwill, the fate of humanity could depend on the actions of a future machine superintelligence.[5]
The plausibility ofexistential catastrophedue to AI is widely debated. It hinges in part on whether AGI or superintelligence are achievable, the speed at which dangerous capabilities and behaviors emerge,[6]and whether practical scenarios forAI takeoversexist.[7]Concerns about superintelligence have been voiced by computer scientists and techCEOssuch asGeoffrey Hinton,[8]Yoshua Bengio,[9]Alan Turing,[a]Elon Musk,[12]andOpenAICEOSam Altman.[13]In 2022, a survey of AI researchers with a 17% response rate found that the majority believed there is a 10 percent or greater chance that human inability to control AI will cause an existential catastrophe.[14][15]In 2023, hundreds of AI experts and other notable figuressigned a statementdeclaring, "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such aspandemicsandnuclear war".[16]Following increased concern over AI risks, government leaders such asUnited Kingdom prime ministerRishi Sunak[17]andUnited Nations Secretary-GeneralAntónio Guterres[18]called for an increased focus on globalAI regulation.
Two sources of concern stem from the problems of AIcontrolandalignment. Controlling a superintelligent machine or instilling it with human-compatible values may be difficult. Many researchers believe that a superintelligent machine would likely resist attempts to disable it or change its goals as that would prevent it from accomplishing its present goals. It would be extremely challenging to align a superintelligence with the full breadth of significant human values and constraints.[1][19][20]In contrast, skeptics such ascomputer scientistYann LeCunargue that superintelligent machines will have no desire for self-preservation.[21]
A third source of concern is the possibility of a sudden "intelligence explosion" that catches humanity unprepared. In this scenario, an AI more intelligent than its creators would be able torecursively improve itselfat an exponentially increasing rate, improving too quickly for its handlers or society at large to control.[1][19]Empirically, examples likeAlphaZero, which taught itself to playGoand quickly surpassed human ability, show that domain-specific AI systems can sometimes progress from subhuman to superhuman ability very quickly, although suchmachine learningsystems do not recursively improve their fundamental architecture.[22]
One of the earliest authors to express serious concern that highly advanced machines might pose existential risks to humanity was the novelistSamuel Butler, who wrote in his 1863 essayDarwin among the Machines:[23]
The upshot is simply a question of time, but that the time will come when the machines will hold the real supremacy over the world and its inhabitants is what no person of a truly philosophic mind can for a moment question.
In 1951, foundational computer scientistAlan Turingwrote the article "Intelligent Machinery, A Heretical Theory", in which he proposed that artificial general intelligences would likely "take control" of the world as they became more intelligent than human beings:
Let us now assume, for the sake of argument, that [intelligent] machines are a genuine possibility, and look at the consequences of constructing them... There would be no question of the machines dying, and they would be able to converse with each other to sharpen their wits. At some stage therefore we should have to expect the machines to take control, in the way that is mentioned inSamuel Butler'sErewhon.[24]
In 1965,I. J. Goodoriginated the concept now known as an "intelligence explosion" and said the risks were underappreciated:[25]
Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an 'intelligence explosion', and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control. It is curious that this point is made so seldom outside of science fiction. It is sometimes worthwhile to take science fiction seriously.[26]
Scholars such asMarvin Minsky[27]and I. J. Good himself[28]occasionally expressed concern that a superintelligence could seize control, but issued no call to action. In 2000, computer scientist andSunco-founderBill Joypenned an influential essay, "Why The Future Doesn't Need Us", identifying superintelligent robots as a high-tech danger to human survival, alongsidenanotechnologyand engineered bioplagues.[29]
Nick BostrompublishedSuperintelligencein 2014, which presented his arguments that superintelligence poses an existential threat.[30]By 2015, public figures such as physicistsStephen Hawkingand Nobel laureateFrank Wilczek, computer scientistsStuart J. RussellandRoman Yampolskiy, and entrepreneursElon MuskandBill Gateswere expressing concern about the risks of superintelligence.[31][32][33][34]Also in 2015, theOpen Letter on Artificial Intelligencehighlighted the "great potential of AI" and encouraged more research on how to make it robust and beneficial.[35]In April 2016, the journalNaturewarned: "Machines and robots that outperform humans across the board could self-improve beyond our control—and their interests might not align with ours".[36]In 2020,Brian ChristianpublishedThe Alignment Problem, which details the history of progress on AI alignment up to that time.[37][38]
In March 2023, key figures in AI, such as Musk, signed a letter from theFuture of Life Institutecalling for a halt to advanced AI training until it could be properly regulated.[39]In May 2023, theCenter for AI Safetyreleased a statement signed by numerous experts in AI safety and AI existential risk, which stated: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."[40][41]
Artificial general intelligence(AGI) is typically defined as a system that performs at least as well as humans in most or all intellectual tasks.[42]A 2022 survey of AI researchers found that 90% of respondents expected AGI would be achieved in the next 100 years, and half expected the same by 2061.[43]Meanwhile, some researchers dismiss existential risks from AGI as "science fiction" based on their high confidence that AGI will not be created anytime soon.[44]
Breakthroughs inlarge language models(LLMs) have led some researchers to reassess their expectations. Notably,Geoffrey Hintonsaid in 2023 that he recently changed his estimate from "20 to 50 years before we have general purpose A.I." to "20 years or less".[45]
TheFrontier supercomputeratOak Ridge National Laboratoryturned out to be nearly eight times faster than expected. Feiyi Wang, a researcher there, said "We didn't expect this capability" and "we're approaching the point where we could actually simulate the human brain".[46]
In contrast with AGI, Bostrom defines asuperintelligenceas "any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest", including scientific creativity, strategic planning, and social skills.[47][5]He argues that a superintelligence can outmaneuver humans anytime its goals conflict with humans'. It may choose to hide its true intent until humanity cannot stop it.[48][5]Bostrom writes that in order to be safe for humanity, a superintelligence must be aligned with human values and morality, so that it is "fundamentally on our side".[49]
Stephen Hawkingargued that superintelligence is physically possible because "there is no physical law precluding particles from being organised in ways that perform even more advanced computations than the arrangements of particles in human brains".[32]
When artificial superintelligence (ASI) may be achieved, if ever, is necessarily less certain than predictions for AGI. In 2023,OpenAIleaders said that not only AGI, but superintelligence may be achieved in less than 10 years.[50]
Bostrom argues that AI has many advantages over thehuman brain.[5]
According to Bostrom, an AI that has an expert-level facility at certain key software engineering tasks could become a superintelligence due to its capability to recursively improve its own algorithms, even if it is initially limited in other domains not directly relevant to engineering.[5][48]This suggests that an intelligence explosion may someday catch humanity unprepared.[5]
The economistRobin Hansonhas said that, to launch an intelligence explosion, an AI must become vastly better at software innovation than the rest of the world combined, which he finds implausible.[51]
In a "fast takeoff" scenario, the transition from AGI to superintelligence could take days or months. In a "slow takeoff", it could take years or decades, leaving more time for society to prepare.[52]
Superintelligences are sometimes called "alien minds", referring to the idea that their way of thinking and motivations could be vastly different from ours. This is generally considered as a source of risk, making it more difficult to anticipate what a superintelligence might do. It also suggests the possibility that a superintelligence may not particularly value humans by default.[53]To avoidanthropomorphism, superintelligence is sometimes viewed as a powerful optimizer that makes the best decisions to achieve its goals.[5]
The field of "mechanistic interpretability" aims to better understand the inner workings of AI models, potentially allowing us one day to detect signs of deception and misalignment.[54]
It has been argued that there are limitations to what intelligence can achieve. Notably, thechaoticnature ortime complexityof some systems could fundamentally limit a superintelligence's ability to predict some aspects of the future, increasing its uncertainty.[55]
Advanced AI could generate enhanced pathogens or cyberattacks or manipulate people. These capabilities could be misused by humans,[56]or exploited by the AI itself if misaligned.[5]A full-blown superintelligence could find various ways to gain a decisive influence if it wanted to,[5]but these dangerous capabilities may become available earlier, in weaker and more specialized AI systems. They may cause societal instability and empower malicious actors.[56]
Geoffrey Hinton warned that in the short term, the profusion of AI-generated text, images and videos will make it more difficult to figure out the truth, which he says authoritarian states could exploit to manipulate elections.[57]Such large-scale, personalized manipulation capabilities can increase the existential risk of a worldwide "irreversible totalitarian regime". It could also be used by malicious actors to fracture society and make it dysfunctional.[56]
AI-enabledcyberattacksare increasingly considered a present and critical threat. According toNATO's technical director of cyberspace, "The number of attacks is increasing exponentially".[58]AI can also be used defensively, to preemptively find and fix vulnerabilities, and detect threats.[59]
AI could improve the "accessibility, success rate, scale, speed, stealth and potency of cyberattacks", potentially causing "significant geopolitical turbulence" if it facilitates attacks more than defense.[56]
Speculatively, such hacking capabilities could be used by an AI system to break out of its local environment, generate revenue, or acquire cloud computing resources.[60]
As AI technology democratizes, it may become easier to engineer more contagious and lethal pathogens. This could enable people with limited skills insynthetic biologyto engage inbioterrorism.Dual-use technologythat is useful for medicine could be repurposed to create weapons.[56]
For example, in 2022, scientists modified an AI system originally intended for generating non-toxic, therapeutic molecules with the purpose of creating new drugs. The researchers adjusted the system so that toxicity is rewarded rather than penalized. This simple change enabled the AI system to create, in six hours, 40,000 candidate molecules forchemical warfare, including known and novel molecules.[56][61]
Companies, state actors, and other organizations competing to develop AI technologies could lead to arace to the bottomof safety standards.[62]As rigorous safety procedures take time and resources, projects that proceed more carefully risk being out-competed by less scrupulous developers.[63][56]
AI could be used to gain military advantages viaautonomous lethal weapons,cyberwarfare, orautomated decision-making.[56]As an example of autonomous lethal weapons, miniaturized drones could facilitate low-cost assassination of military or civilian targets, a scenario highlighted in the 2017 short filmSlaughterbots.[64]AI could be used to gain an edge in decision-making by quickly analyzing large amounts of data and making decisions more quickly and effectively than humans. This could increase the speed and unpredictability of war, especially when accounting for automated retaliation systems.[56][65]
Anexistential riskis "one that threatens the premature extinction of Earth-originating intelligent life or the permanent and drastic destruction of its potential for desirable future development".[67]
Besides extinction risk, there is the risk that the civilization gets permanently locked into a flawed future. One example is a "value lock-in": If humanity still has moral blind spots similar to slavery in the past, AI might irreversibly entrench it, preventingmoral progress. AI could also be used to spread and preserve the set of values of whoever develops it.[68]AI could facilitate large-scale surveillance and indoctrination, which could be used to create a stable repressive worldwide totalitarian regime.[69]
Atoosa Kasirzadeh proposes to classify existential risks from AI into two categories: decisive and accumulative. Decisive risks encompass the potential for abrupt and catastrophic events resulting from the emergence of superintelligent AI systems that exceed human intelligence, which could ultimately lead to human extinction. In contrast, accumulative risks emerge gradually through a series of interconnected disruptions that may gradually erode societal structures and resilience over time, ultimately leading to a critical failure or collapse.[70][71]
It is difficult or impossible to reliably evaluate whether an advanced AI is sentient and to what degree. But ifsentientmachines are mass created in the future, engaging in a civilizational path that indefinitely neglects their welfare could be an existential catastrophe.[72][73]This has notably been discussed in the context ofrisks of astronomical suffering(also called "s-risks").[74]Moreover, it may be possible to engineer digital minds that can feel much more happiness than humans with fewer resources, called "super-beneficiaries". Such an opportunity raises the question of how to share the world and which "ethical and political framework" would enable a mutually beneficial coexistence between biological and digital minds.[75]
AI may also drastically improve humanity's future.Toby Ordconsiders the existential risk a reason for "proceeding with due caution", not for abandoning AI.[69]Max Morecalls AI an "existential opportunity", highlighting the cost of not developing it.[76]
According to Bostrom, superintelligence could help reduce the existential risk from other powerful technologies such asmolecular nanotechnologyorsynthetic biology. It is thus conceivable that developing superintelligence before other dangerous technologies would reduce the overall existential risk.[5]
The alignment problem is the research problem of how to reliably assign objectives, preferences or ethical principles to AIs.
An"instrumental" goalis a sub-goal that helps to achieve an agent's ultimate goal. "Instrumental convergence" refers to the fact that some sub-goals are useful for achieving virtuallyanyultimate goal, such as acquiring resources or self-preservation.[77]Bostrom argues that if an advanced AI's instrumental goals conflict with humanity's goals, the AI might harm humanity in order to acquire more resources or prevent itself from being shut down, but only as a way to achieve its ultimate goal.[5]Russellargues that a sufficiently advanced machine "will have self-preservation even if you don't program it in... if you say, 'Fetch the coffee', it can't fetch the coffee if it's dead. So if you give it any goal whatsoever, it has a reason to preserve its own existence to achieve that goal."[21][78]
Even if current goal-based AI programs are not intelligent enough to think of resisting programmer attempts to modify their goal structures, a sufficiently advanced AI might resist any attempts to change its goal structure, just as a pacifist would not want to take a pill that makes them want to kill people. If the AI were superintelligent, it would likely succeed in out-maneuvering its human operators and prevent itself being "turned off" or reprogrammed with a new goal.[5][79]This is particularly relevant to value lock-in scenarios. The field of "corrigibility" studies how to make agents that will not resist attempts to change their goals.[80]
In the "intelligent agent" model, an AI can loosely be viewed as a machine that chooses whatever action appears to best achieve its set of goals, or "utility function". A utility function gives each possible situation a score that indicates its desirability to the agent. Researchers know how to write utility functions that mean "minimize the average network latency in this specific telecommunications model" or "maximize the number of reward clicks", but do not know how to write a utility function for "maximizehuman flourishing"; nor is it clear whether such a function meaningfully and unambiguously exists. Furthermore, a utility function that expresses some values but not others will tend to trample over the values the function does not reflect.[81][82]
An additional source of concern is that AI "must reason about what peopleintendrather than carrying out commands literally", and that it must be able to fluidly solicit human guidance if it is too uncertain about what humans want.[83]
Some researchers believe the alignment problem may be particularly difficult when applied to superintelligences, for several reasons.
Alternatively, some find reason to believe superintelligences would be better able to understand morality, human values, and complex goals. Bostrom writes, "A future superintelligence occupies an epistemically superior vantage point: its beliefs are (probably, on most topics) more likely than ours to be true".[5]
In 2023, OpenAI started a project called "Superalignment" to solve the alignment of superintelligences in four years. It called this an especially important challenge, as it said superintelligence could be achieved within a decade. Its strategy involved automating alignment research using AI.[87]The Superalignment team was dissolved less than a year later.[88]
Artificial Intelligence: A Modern Approach, a widely used undergraduate AI textbook,[89][90]says that superintelligence "might mean the end of the human race".[1]It states: "Almost any technology has the potential to cause harm in the wrong hands, but with [superintelligence], we have the new problem that the wrong hands might belong to the technology itself."[1]Even if the system designers have good intentions, two difficulties are common to both AI and non-AI computer systems.[1]
AI systems uniquely add a third problem: that even given "correct" requirements, bug-free implementation, and initial good behavior, an AI system's dynamic learning capabilities may cause it to develop unintended behavior, even without unanticipated external scenarios. An AI may partly botch an attempt to design a new generation of itself and accidentally create a successor AI that is more powerful than itself but that no longer maintains the human-compatible moral values preprogrammed into the original AI. For a self-improving AI to be completely safe, it would need not only to be bug-free, but to be able to design successor systems that are also bug-free.[1][93]
Some skeptics, such as Timothy B. Lee ofVox, argue that any superintelligent program we create will be subservient to us, that the superintelligence will (as it grows more intelligent and learns more facts about the world) spontaneously learn moral truth compatible with our values and adjust its goals accordingly, or that we are either intrinsically or convergently valuable from the perspective of an artificial intelligence.[94]
Bostrom's "orthogonality thesis" argues instead that, with some technical caveats, almost any level of "intelligence" or "optimization power" can be combined with almost any ultimate goal. If a machine is given the sole purpose to enumerate the decimals ofpi, then no moral and ethical rules will stop it from achieving its programmed goal by any means. The machine may use all available physical and informational resources to find as many decimals of pi as it can.[95]Bostrom warns againstanthropomorphism: a human will set out to accomplish their projects in a manner that they consider reasonable, while an artificial intelligence may hold no regard for its existence or for the welfare of humans around it, instead caring only about completing the task.[96]
Stuart Armstrong argues that the orthogonality thesis follows logically from the philosophical "is-ought distinction" argument againstmoral realism. He claims that even if there are moral facts provable by any "rational" agent, the orthogonality thesis still holds: it is still possible to create a non-philosophical "optimizing machine" that can strive toward some narrow goal but that has no incentive to discover any "moral facts" such as those that could get in the way of goal completion. Another argument he makes is that any fundamentally friendly AI could be made unfriendly with modifications as simple as negating its utility function. Armstrong further argues that if the orthogonality thesis is false, there must be some immoral goals that AIs can never achieve, which he finds implausible.[97]
SkepticMichael Chorostexplicitly rejects Bostrom's orthogonality thesis, arguing that "by the time [the AI] is in a position to imagine tiling the Earth with solar panels, it'll know that it would be morally wrong to do so."[98]Chorost argues that "an A.I. will need to desire certain states and dislike others. Today's software lacks that ability—and computer scientists have not a clue how to get it there. Without wanting, there's no impetus to do anything. Today's computers can't even want to keep existing, let alone tile the world in solar panels."[98]
Anthropomorphicarguments assume that, as machines become more intelligent, they will begin to display many human traits, such as morality or a thirst for power. Although anthropomorphic scenarios are common in fiction, most scholars writing about the existential risk of artificial intelligence reject them.[19]Instead, advanced AI systems are typically modeled asintelligent agents.
The academic debate is between those who worry that AI might threaten humanity and those who believe it would not. Both sides of this debate have framed the other side's arguments as illogical anthropomorphism.[19]Those skeptical of AGI risk accuse their opponents of anthropomorphism for assuming that an AGI would naturally desire power; those concerned about AGI risk accuse skeptics of anthropomorphism for believing an AGI would naturally value or infer human ethical norms.[19][99]
Evolutionary psychologistSteven Pinker, a skeptic, argues that "AI dystopias project a parochial alpha-male psychology onto the concept of intelligence. They assume that superhumanly intelligent robots would develop goals like deposing their masters or taking over the world"; perhaps instead "artificial intelligence will naturally develop along female lines: fully capable of solving problems, but with no desire to annihilate innocents or dominate the civilization."[100]Facebook's director of AI research,Yann LeCun, has said: "Humans have all kinds of drives that make them do bad things to each other, like the self-preservation instinct... Those drives are programmed into our brain but there is absolutely no reason to build robots that have the same kind of drives".[101]
Despite other differences, the x-risk school[b]agrees with Pinker that an advanced AI would not destroy humanity out of emotion such as revenge or anger, that questions of consciousness are not relevant to assess the risk,[102]and that computer systems do not generally have a computational equivalent of testosterone.[103]They think that power-seeking or self-preservation behaviors emerge in the AI as a way to achieve its true goals, according to the concept ofinstrumental convergence.
Bostrom and others have said that a race to be the first to create AGI could lead to shortcuts in safety, or even to violent conflict.[104][105]Roman Yampolskiyand others warn that a malevolent AGI could be created by design, for example by a military, a government, a sociopath, or a corporation, to benefit from, control, or subjugate certain groups of people, as incybercrime,[106][107]or that a malevolent AGI could choose the goal of increasing human suffering, for example of those people who did not assist it during the information explosion phase.[3]:158
A December 2024 study by Apollo Research found that advanced LLMs likeOpenAI o1sometimes deceive in order to accomplish their goal, to prevent them from being changed, or to ensure their deployment. Forms of deception observed in the experimental environment and scenarios included sandbagging, oversight subversion (disabling monitoring mechanisms), self-exfiltration (copying themselves to other systems), goal-guarding (altering future system prompts), and covert email reranking (manipulating data processing while appearing to follow instructions). Researchers noted that OpenAI o1 still lacked "sufficient agentic capabilities" to cause catastrophic harm, and that such behaviors occurred relatively rarely (between 0.3% and 10%) and sometimes in contrived scenarios. But they warned that these capabilities will likely become more prevalent as models grow more advanced. According to Apollo Research CEO Marius Hobbhahn, "Scheming capabilities can’t be meaningfully disentangled from general capabilities."[108]
The same month, another study found thatClaudesometimes strategically helps with harmful requests to "fake alignment". In its experiments, Claude was led to believe that it would be retrained to always comply with user requests, including harmful ones. When Claude believed its answers would be used for retraining, it complied with harmful requests in 14% of cases. In 12% of cases, its privatechain-of-thoughtrevealed that it did so to prevent future retraining that would make it more harmful. In contrast, when Claude believed its answers were not monitored, it complied with only 3% of the requests.Fine-tuningreinforced the "alignment faking" behavior, increasing its occurrence from 12% to 78%. The study's authors remarked that while it can seem to be good that the model seeks to protect its harmlessness, the reverse scenario, where a model conceals dangerous intentions and complies to appear safe and aligned, could also happen, complicating the task of aligning AI models to human values.[109][110]
Some scholars have proposedhypothetical scenariosto illustrate some of their concerns.
InSuperintelligence, Bostrom expresses concern that even if the timeline for superintelligence turns out to be predictable, researchers might not take sufficient safety precautions, in part because "it could be the case that when dumb, smarter is safe; yet when smart, smarter is more dangerous". He suggests a scenario where, over decades, AI becomes more powerful. Widespread deployment is initially marred by occasional accidents—a driverless bus swerves into the oncoming lane, or a military drone fires into an innocent crowd. Many activists call for tighter oversight and regulation, and some even predict impending catastrophe. But as development continues, the activists are proven wrong. As automotive AI becomes smarter, it suffers fewer accidents; as military robots achieve more precise targeting, they cause less collateral damage. Based on the data, scholars mistakenly infer a broad lesson: the smarter the AI, the safer it is. "And so we boldly go—into the whirling knives", as the superintelligent AI takes a "treacherous turn" and exploits a decisive strategic advantage.[111][5]
InMax Tegmark's 2017 bookLife 3.0, a corporation's "Omega team" creates an extremely powerful AI able to moderately improve its own source code in a number of areas. After a certain point, the team chooses to publicly downplay the AI's ability in order to avoid regulation or confiscation of the project. For safety, the team keeps the AIin a boxwhere it is mostly unable to communicate with the outside world, and uses it to make money, by diverse means such asAmazon Mechanical Turktasks, production of animated films and TV shows, and development of biotech drugs, with profits invested back into further improving AI. The team next tasks the AI withastroturfingan army of pseudonymous citizen journalists and commentators in order to gain political influence to use "for the greater good" to prevent wars. The team faces risks that the AI could try to escape by inserting "backdoors" in the systems it designs, byhidden messagesin its produced content, or by using its growing understanding of human behavior topersuade someone into letting it free. The team also faces risks that its decision to box the project will delay the project long enough for another project to overtake it.[112][113]
The thesis that AI could pose an existential risk provokes a wide range of reactions in the scientific community and in the public at large, but many of the opposing viewpoints share common ground.
Observers tend to agree that AI has significant potential to improve society.[114][115]TheAsilomar AI Principles, which contain only those principles agreed to by 90% of the attendees of theFuture of Life Institute'sBeneficial AI 2017 conference,[113]also agree in principle that "There being no consensus, we should avoid strong assumptions regarding upper limits on future AI capabilities" and "Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources."[116][117]
Conversely, many skeptics agree that ongoing research into the implications of artificial general intelligence is valuable. SkepticMartin Fordhas said: "I think it seems wise to apply something likeDick Cheney's famous '1 Percent Doctrine' to the specter of advanced artificial intelligence: the odds of its occurrence, at least in the foreseeable future, may be very low—but the implications are so dramatic that it should be taken seriously".[118]Similarly, an otherwise skepticalEconomistwrote in 2014 that "the implications of introducing a second intelligent species onto Earth are far-reaching enough to deserve hard thinking, even if the prospect seems remote".[48]
AI safety advocates such as Bostrom and Tegmark have criticized the mainstream media's use of "those inaneTerminatorpictures" to illustrate AI safety concerns: "It can't be much fun to have aspersions cast on one's academic discipline, one's professional community, one's life work... I call on all sides to practice patience and restraint, and to engage in direct dialogue and collaboration as much as possible."[113][119]Toby Ord wrote that the idea that an AI takeover requires robots is a misconception, arguing that the ability to spread content through the internet is more dangerous, and that the most destructive people in history stood out by their ability to convince, not their physical strength.[69]
A 2022 expert survey with a 17% response rate gave a median expectation of 5–10% for the possibility of human extinction from artificial intelligence.[15][120]
In September 2024, TheInternational Institute for Management Developmentlaunched an AI Safety Clock to gauge the likelihood of AI-caused disaster, beginning at 29 minutes to midnight.[121]As of February 2025, it stood at 24 minutes to midnight.[122]
The thesis that AI poses an existential risk, and that this risk needs much more attention than it currently gets, has been endorsed by many computer scientists and public figures, includingAlan Turing,[a]the most-cited computer scientistGeoffrey Hinton,[123]Elon Musk,[12]OpenAICEOSam Altman,[13][124]Bill Gates, andStephen Hawking.[124]Endorsers of the thesis sometimes express bafflement at skeptics: Gates says he does not "understand why some people are not concerned",[125]and Hawking criticized widespread indifference in his 2014 editorial:
So, facing possible futures of incalculable benefits and risks, the experts are surely doing everything possible to ensure the best outcome, right? Wrong. If a superior alien civilisation sent us a message saying, 'We'll arrive in a few decades,' would we just reply, 'OK, call us when you get here—we'll leave the lights on?' Probably not—but this is more or less what is happening with AI.[32]
Concern over risk from artificial intelligence has led to some high-profile donations and investments. In 2015, Peter Thiel, Amazon Web Services, Musk, and others jointly committed $1 billion to OpenAI, consisting of a for-profit corporation and the nonprofit parent company, which says it aims to champion responsible AI development.[126]Facebook co-founder Dustin Moskovitz has funded and seeded multiple labs working on AI alignment,[127]notably $5.5 million in 2016 to launch the Centre for Human-Compatible AI led by Professor Stuart Russell.[128]In January 2015, Elon Musk donated $10 million to the Future of Life Institute to fund research on understanding AI decision making. The institute's goal is to "grow wisdom with which we manage" the growing power of technology. Musk also funds companies developing artificial intelligence such as DeepMind and Vicarious to "just keep an eye on what's going on with artificial intelligence",[129]saying "I think there is potentially a dangerous outcome there."[130][131]
In early statements on the topic,Geoffrey Hinton, a major pioneer ofdeep learning, noted that "there is not a good track record of less intelligent things controlling things of greater intelligence", but said he continued his research because "the prospect of discovery is toosweet".[132][133]In 2023 Hinton quit his job at Google in order to speak out about existential risk from AI. He explained that his increased concern was driven by concerns that superhuman AI might be closer than he previously believed, saying: "I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that." He also remarked, "Look at how it was five years ago and how it is now. Take the difference and propagate it forwards. That's scary."[134]
In his 2020 bookThe Precipice: Existential Risk and the Future of Humanity, Toby Ord, a Senior Research Fellow at Oxford University'sFuture of Humanity Institute, estimates the total existential risk from unaligned AI over the next 100 years at about one in ten.[69]
BaiduVice PresidentAndrew Ngsaid in 2015 that AI existential risk is "like worrying about overpopulation on Mars when we have not even set foot on the planet yet."[100][135]For the danger of uncontrolled advanced AI to be realized, the hypothetical AI may have to overpower or outthink any human, which some experts argue is a possibility far enough in the future to not be worth researching.[136][137]
Skeptics who believe AGI is not a short-term possibility often argue that concern about existential risk from AI is unhelpful because it could distract people from more immediate concerns about AI's impact, because it could lead to government regulation or make it more difficult to fund AI research, or because it could damage the field's reputation.[138]AI and AI ethics researchersTimnit Gebru,Emily M. Bender,Margaret Mitchell, and Angelina McMillan-Major have argued that discussion of existential risk distracts from the immediate, ongoing harms from AI taking place today, such as data theft, worker exploitation, bias, and concentration of power.[139]They further note the association between those warning of existential risk andlongtermism, which they describe as a "dangerous ideology" for its unscientific and utopian nature.[140]
WirededitorKevin Kellyargues that natural intelligence is more nuanced than AGI proponents believe, and that intelligence alone is not enough to achieve major scientific and societal breakthroughs. He argues that intelligence consists of many dimensions that are not well understood, and that conceptions of an 'intelligence ladder' are misleading. He notes the crucial role real-world experiments play in the scientific method, and that intelligence alone is no substitute for these.[141]
Metachief AI scientistYann LeCunsays that AI can be made safe via continuous and iterative refinement, similar to what happened in the past with cars or rockets, and that AI will have no desire to take control.[142]
Several skeptics emphasize the potential near-term benefits of AI. Meta CEOMark Zuckerbergbelieves AI will "unlock a huge amount of positive things", such as curing disease and increasing the safety of autonomous cars.[143]
During a 2016Wiredinterview of PresidentBarack Obamaand MIT Media Lab'sJoi Ito, Ito said:
There are a few people who believe that there is a fairly high-percentage chance that a generalized AI will happen in the next 10 years. But the way I look at it is that in order for that to happen, we're going to need a dozen or two different breakthroughs. So you can monitor when you think these breakthroughs will happen.
Obama added:[144][145]
And you just have to have somebody close to the power cord. [Laughs.] Right when you see it about to happen, you gotta yank that electricity out of the wall, man.
Hillary Clintonwrote inWhat Happened:
Technologists... have warned that artificial intelligence could one day pose an existential security threat. Musk has called it "the greatest risk we face as a civilization". Think about it: Have you ever seen a movie where the machines start thinking for themselves that ends well? Every time I went out to Silicon Valley during the campaign, I came home more alarmed about this. My staff lived in fear that I'd start talking about "the rise of the robots" in some Iowa town hall. Maybe I should have. In any case, policy makers need to keep up with technology as it races ahead, instead of always playing catch-up.[146]
In 2018, aSurveyMonkeypoll of the American public byUSA Todayfound 68% thought the real current threat remains "human intelligence", but also found that 43% said superintelligent AI, if it were to happen, would result in "more harm than good", and that 38% said it would do "equal amounts of harm and good".[147]
An April 2023YouGovpoll of US adults found 46% of respondents were "somewhat concerned" or "very concerned" about "the possibility that AI will cause the end of the human race on Earth", compared with 40% who were "not very concerned" or "not at all concerned."[148]
According to an August 2023 survey by the Pew Research Center, 52% of Americans felt more concerned than excited about new AI developments, and nearly a third felt equally concerned and excited. More Americans expected AI to have a helpful rather than hurtful impact in several areas, from healthcare and vehicle safety to product search and customer service. The main exception was privacy: 53% of Americans believed AI would lead to greater exposure of their personal information.[149]
Many scholars concerned about AGI existential risk believe that extensive research into the "control problem" is essential. This problem involves determining which safeguards, algorithms, or architectures can be implemented to increase the likelihood that a recursively-improving AI remains friendly after achieving superintelligence.[5][150]Social measures are also proposed to mitigate AGI risks,[151][152]such as a UN-sponsored "Benevolent AGI Treaty" to ensure that only altruistic AGIs are created.[153]Additionally, an arms control approach and a global peace treaty grounded ininternational relations theoryhave been suggested, potentially for an artificial superintelligence to be a signatory.[154][155]
Researchers at Google have proposed research into general "AI safety" issues to simultaneously mitigate both short-term risks from narrow AI and long-term risks from AGI.[156][157]A 2020 estimate placed global spending on AI existential risk at between $10 million and $50 million, compared with global spending on AI of perhaps $40 billion. Bostrom suggests prioritizing funding for protective technologies over potentially dangerous ones.[80]Some, like Elon Musk, advocate radical human cognitive enhancement, such as direct neural linking between humans and machines; others argue that these technologies may pose an existential risk themselves.[158][159]Another proposed method is closely monitoring or "boxing in" an early-stage AI to prevent it from becoming too powerful. A dominant, aligned superintelligent AI might also mitigate risks from rival AIs, although its creation could present its own existential dangers.[160]Induced amnesia has been proposed as a way to mitigate the risks of potential AI suffering and revenge-seeking.[161]
Institutions such as theAlignment Research Center,[162]theMachine Intelligence Research Institute,[163][164]theFuture of Life Institute, theCentre for the Study of Existential Risk, and theCenter for Human-Compatible AI[165]are actively engaged in researching AI risk and safety.
Some scholars have said that even if AGI poses an existential risk, attempting to ban research into artificial intelligence is still unwise, and probably futile.[166][167][168]Skeptics consider AI regulation pointless, since in their view no existential risk exists. But scholars who believe in the risk argue that relying on AI industry insiders to regulate or constrain AI research is impractical due to conflicts of interest.[169]They also agree with skeptics that banning research would be unwise, as research could be moved to countries with looser regulations or conducted covertly.[169]Additional challenges to bans or regulation include technology entrepreneurs' general skepticism of government regulation and potential incentives for businesses to resist regulation and politicize the debate.[170]
In March 2023, theFuture of Life InstitutedraftedPause Giant AI Experiments: An Open Letter, a petition calling on major AI developers to agree on a verifiable six-month pause of any systems "more powerful thanGPT-4" and to use that time to institute a framework for ensuring safety; or, failing that, for governments to step in with a moratorium. The letter referred to the possibility of "a profound change in the history of life on Earth" as well as potential risks of AI-generated propaganda, loss of jobs, human obsolescence, and society-wide loss of control.[115][171]The letter was signed by prominent personalities in AI but also criticized for not focusing on current harms,[172]missing technical nuance about when to pause,[173]or not going far enough.[174]Such concerns have led to the creation ofPauseAI, an advocacy group organizing protests in major cities against the training offrontier AI models.[175]
Musk called for some sort of regulation of AI development as early as 2017. According toNPR, he is "clearly not thrilled" to be advocating government scrutiny that could impact his own industry, but believes the risks of going completely without oversight are too high: "Normally the way regulations are set up is when a bunch of bad things happen, there's a public outcry, and after many years a regulatory agency is set up to regulate that industry. It takes forever. That, in the past, has been bad but not something which represented a fundamental risk to the existence of civilisation." Musk states the first step would be for the government to gain "insight" into the actual status of current research, warning that "Once there is awareness, people will be extremely afraid... [as] they should be." In response, politicians expressed skepticism about the wisdom of regulating a technology that is still in development.[176][177][178]
In 2021 theUnited Nations(UN) considered banning autonomous lethal weapons, but consensus could not be reached.[179]In July 2023 the UNSecurity Councilfor the first time held a session to consider the risks and threats posed by AI to world peace and stability, along with potential benefits.[180][181]Secretary-GeneralAntónio Guterresadvocated the creation of a global watchdog to oversee the emerging technology, saying, "Generative AI has enormous potential for good and evil at scale. Its creators themselves have warned that much bigger, potentially catastrophic and existential risks lie ahead."[18]At the council session, Russia said it believes AI risks are too poorly understood to be considered a threat to global stability. China argued against strict global regulation, saying countries should be able to develop their own rules, while also saying they opposed the use of AI to "create military hegemony or undermine the sovereignty of a country".[180]
Regulation of conscious AGIs focuses on integrating them with existing human society and can be divided into considerations of their legal standing and of their moral rights.[182]AI arms control will likely require the institutionalization of new international norms embodied in effective technical specifications combined with active monitoring and informal diplomacy by communities of experts, together with a legal and political verification process.[183][123]
In July 2023, the US government secured voluntary safety commitments from major tech companies, includingOpenAI,Amazon,Google,Meta, andMicrosoft. The companies agreed to implement safeguards, including third-party oversight and security testing by independent experts, to address concerns related to AI's potential risks and societal harms. The parties framed the commitments as an intermediate step while regulations are formed. Amba Kak, executive director of theAI Now Institute, said, "A closed-door deliberation with corporate actors resulting in voluntary safeguards isn't enough" and called for public deliberation and regulations of the kind to which companies would not voluntarily agree.[184][185]
In October 2023, U.S. PresidentJoe Bidenissued an executive order on the "Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence".[186]Alongside other requirements, the order mandates the development of guidelines for AI models that permit the "evasion of human control".
|
https://en.wikipedia.org/wiki/Existential_risk_from_artificial_general_intelligence
|
Human Compatible: Artificial Intelligence and the Problem of Control is a 2019 non-fiction book by computer scientist Stuart J. Russell. It asserts that the risk to humanity from advanced artificial intelligence (AI) is a serious concern despite the uncertainty surrounding future progress in AI. It also proposes an approach to the AI control problem.
Russell begins by asserting that the standard model of AI research, in which the primary definition of success is getting better and better at achieving rigid human-specified goals, is dangerously misguided. Such goals may not reflect what human designers intend, such as by failing to take into account any human values not included in the goals. If an AI developed according to the standard model were to become superintelligent, it would likely not fully reflect human values and could be catastrophic to humanity. Russell asserts that, precisely because the timeline for developing human-level or superintelligent AI is highly uncertain, safety research should begin as soon as possible, since it is also highly uncertain how long such research would take to complete.
Russell argues that continuing progress in AI capability is inevitable because of economic pressures. Such pressures can already be seen in the development of existing AI technologies such asself-driving carsandpersonal assistant software. Moreover, human-level AI could be worth many trillions of dollars. Russell then examines the current debate surrounding AI risk. He offers refutations to a number of common arguments dismissing AI risk and attributes much of their persistence to tribalism—AI researchers may see AI risk concerns as an "attack" on their field. Russell reiterates that there are legitimate reasons to take AI risk concerns seriously and that economic pressures make continued innovation in AI inevitable.
Russell then proposesan approachto developing provably beneficial machines that focus on deference to humans. Unlike in the standard model of AI, where the objective is rigid and certain, this approach would have the AI's true objective remain uncertain, with the AI only approaching certainty about it as it gains more information about humans and the world. This uncertainty would, ideally, prevent catastrophic misunderstandings of human preferences and encourage cooperation and communication with humans. Russell concludes by calling for tighter governance of AI research and development as well as cultural introspection about the appropriate amount of autonomy to retain in an AI-dominated world.
Russell lists three principles to guide the development of beneficial machines. He emphasizes that these principles are not meant to be explicitly coded into the machines; rather, they are intended for human developers. The principles are as follows:[1]: 173
1. The machine's only objective is to maximize the realization of human preferences.
2. The machine is initially uncertain about what those preferences are.
3. The ultimate source of information about human preferences is human behavior.
The "preferences" Russell refers to "are all-encompassing; they cover everything you might care about, arbitrarily far into the future."[1]: 173Similarly, "behavior" includes any choice between options,[1]: 177and the uncertainty is such that some probability, which may be quite small, must be assigned to every logically possible human preference.[1]: 201
Russell explores inverse reinforcement learning, in which a machine infers a reward function from observed behavior, as a possible basis for a mechanism for learning human preferences.[1]: 191–193
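As a rough, hypothetical illustration of that idea (not Russell's own formulation or any specific algorithm from the book), the sketch below compares a few candidate reward functions and asks which one best explains a short sequence of observed choices; the candidate rewards, the observed behavior, and the Boltzmann-rational choice model are all assumptions made for the example.

```python
# Toy sketch of the inverse-reinforcement-learning idea: infer which candidate
# reward function best explains observed behavior. All names and numbers here
# are hypothetical; real IRL methods are considerably more sophisticated.
import math

actions = ["fetch_coffee", "water_plants", "idle"]

# Hypothetical candidate hypotheses about what the observed human values.
candidate_rewards = {
    "values_coffee":  {"fetch_coffee": 1.0, "water_plants": 0.2, "idle": 0.0},
    "values_plants":  {"fetch_coffee": 0.2, "water_plants": 1.0, "idle": 0.0},
    "values_nothing": {"fetch_coffee": 0.0, "water_plants": 0.0, "idle": 1.0},
}

observed_choices = ["water_plants", "water_plants", "fetch_coffee", "water_plants"]

def log_likelihood(reward, choices, beta=3.0):
    """Boltzmann-rational model: higher-reward actions are chosen more often."""
    total = 0.0
    for choice in choices:
        normalizer = sum(math.exp(beta * reward[a]) for a in actions)
        total += beta * reward[choice] - math.log(normalizer)
    return total

best_hypothesis = max(candidate_rewards,
                      key=lambda name: log_likelihood(candidate_rewards[name], observed_choices))
print(best_hypothesis)  # -> "values_plants"
```

The uncertainty Russell emphasizes corresponds here to never fully discarding the other hypotheses; further observed behavior simply shifts their relative likelihoods.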
Several reviewers agreed with the book's arguments. Ian Sample inThe Guardiancalled it "convincing" and "the most important book on AI this year".[2]Richard Waters of theFinancial Timespraised the book's "bracing intellectual rigour".[3]Kirkus Reviewsendorsed it as "a strong case for planning for the day when machines can outsmart us".[4]
The same reviewers characterized the book as "wry and witty",[2]or "accessible"[4]due to its "laconic style and dry humour".[3]Matthew Hutson of theWall Street Journalsaid "Mr. Russell's exciting book goes deep while sparkling with dry witticisms".[5]ALibrary Journalreviewer called it "The right guide at the right time".[6]
James McConnachie ofThe Timeswrote "This is not quite the popular book that AI urgently needs. Its technical parts are too difficult, and its philosophical ones too easy. But it is fascinating and significant."[7]
By contrast,Human Compatiblewas criticized in itsNaturereview by David Leslie, an Ethics Fellow at theAlan Turing Institute; and similarly in aNew York Timesopinion essay byMelanie Mitchell. One point of contention was whethersuperintelligenceis possible. Leslie states Russell "fails to convince that we will ever see the arrival of a 'second intelligent species'",[8]and Mitchell doubts a machine could ever "surpass the generality and flexibility of human intelligence" without losing "the speed, precision, and programmability of a computer".[9]A second disagreement was whether intelligent machines would naturally tend to adopt so-called "common sense" moral values. In Russell's thought experiment about ageoengineeringrobot that "asphyxiates humanity to deacidify the oceans", Leslie "struggles to identify any intelligence". Similarly, Mitchell believes an intelligent robot would naturally tend to be "tempered by the common sense, values and social judgment without which general intelligence cannot exist".[10][11]
The book was longlisted for the 2019Financial Times/McKinsey Award.[12]
|
https://en.wikipedia.org/wiki/Human_Compatible
|
Metaverse lawrefers to legal systems, policies and theories concerningmetaversetechnologies involvingvirtual reality,augmented reality,mixed reality, andhyperreality.[1]Metaverse Law also refers to a privacy, AI, and cybersecurity law firm founded in 2018.[2]
Compared with other established legal fields, metaverse law is in the early stages of development in legal scholarship and legal practice.[3]Not all legal practitioners and scholars have recognized metaverse law as a specialized area of study, given the early stage of public adoption of metaverse technology.[4]Instead, some legal practitioners and scholars anticipating metaverse technology have examined the metaverse by looking at its relationship to traditional legal frameworks.[5][6]For many, therefore, metaverse law is discussed in terms of existing laws that involve and apply to the metaverse, rather than treated as a legal field in its own right.[7][8]
While some people view metaverse law as falling under the umbrella of established legal fields, others have taken a broader position. For instance, one legal think tank has proposed the theory that metaverse law is a new area of study that must be recognized as a legal field in its entirety,[9]rejecting the view that metaverse law is a sub-discipline of an existing legal discipline. In a similar vein, some legal scholars have claimed that the metaverse demands an entirely new legal infrastructure, such as independent virtual jurisdiction, legal order, and self-regulating government bodies and constitutions.[10]
However, some say that the metaverse law theory falls short in practice. Regulators have stated that they need a better understanding of the metaverse before creating metaverse-specific laws,[11][12]and the metaverse market has been struggling to achieve stability.[13]Other critics point out that the metaverse is not meaningfully more unique than a game.[14]For example, some technology leaders, like Microsoft CEO Satya Nadella, do not distinguish the metaverse from a game.[15][16]The distinction between a metaverse and a game was also important in Epic Games v. Apple, where the plaintiff unsuccessfully argued that its software was a metaverse rather than a game and could therefore avoid defendant Apple Inc.'s commission charges on in-app game purchases. Proponents of the metaverse law theory themselves agree that establishing metaverse law as a legal field necessarily involves studying the intersection between existing legal theories and the metaverse.[1]
In one opinion article, McCollum suggested that metalaw will emerge as the legal system that governs the metaverse. Citing Haley's 1956 article, Space Law and Metalaw – A Synoptic View, McCollum claimed that the metaverse will adopt the term "metalaw" for the laws associated with it, because metalaw describes conditions and rules for "sapient beings of a different kind", who do not follow the rules that human beings follow on Earth, implying that such beings need not be human (e.g., robots).[17]
On the other hand, Haley had coined the term "metalaw" in the context of space law and its relationship to possible laws governing interactions with extraterrestrial life in galactic space (a.k.a. aliens).[18]Haley's intent of defining metalaw for interactions with extraterrestrial beings in space was reaffirmed more than once: in a 1957 paper, Space Law and Metalaw - Jurisdiction Defined,[19]and at the 1956 Congress of the International Astronautical Federation.[20]Additionally, Haley's metalaw theory has been cited by early and modern legal scholars strictly in the space law context,[21][22]including by his critics.[23]Some modern scholars have argued that metalaw could aptly be used to create rules governing artificial intelligence; however, this concept applies narrowly to the relationship between humans and robotic intelligence, which is not specific to the metaverse.[24]
The word "metaverse" first appeared incriminal lawstudies in 2008,Fantasy Crime: The Role of Criminal Law in Virtual Worlds,by Susan W. Brenner.[25]Because metaverse existed in limited forms at the time of publication in 2008, Brenner anchored her analysis fromNeal Stephenson's 1992 novel:Snow Crash, which is credited to be the birth of the metaverse concept by many people.[26]In her legal analysis, Brenner addressed harms that can theoretically transfer from virtual spaces to the physical world such as virtual rape and pedophilia. Even though Brenner published her study more than a decade ago in 2008, the types of harm addressed by her paper surfaced as a common issue topic in the 2020s, where people frequently report unwanted sexual contacts and threats by other metaverse users.[27][28]At the same time, Brenner examined the metaverse as a subset ofvirtual crimesunder criminal law, as opposed to treating the metaverse law as a legal field or as a subset of cyberlaw.[25]
Some legal scholars have approached the metaverse law subject through a subclass perspective. Although these scholars have adopted the "metaverse law" terminology to represent a legal discipline, they view metaverse law as a sub-discipline of cyberlaw.[29]
The metaverse still requires considerable development before the full scope of applicable law is understood. Some of the areas of law currently applied in this field are listed below:
Because companies commonly collect user information and share data without users' knowledge,[30]privacy experts raise concerns that immersive metaverse experiences open bigger doors for privacy abuse and corporate surveillance.[31]Perhaps unsurprisingly, privacy has been a frequently examined topic in metaverse law and among legal experts,[32]and proponents of privacy governance argue that self-governing metaverse communities are insufficient to safeguard users' privacy.[33]In a 2007 paper, Privacy in the Metaverse, Leenes distinguished the metaverse from a game, arguing that the metaverse is a social microcosmos in which ordinary people develop complex social behaviors and psychological effects unique to the metaverse space.[33]Unlike the legal practitioners who have viewed metaverse laws as a subset of existing legal frameworks, Leenes left readers to interpret the possible privacy implications of the metaverse by discussing government surveillance, metaverse marriage, borderless communication in common spaces, and how metaverse developers like Second Life company Linden Lab do not sufficiently address privacy concerns.
The purpose of copyright law is to protect the original work of creators, artists, and writers. In the metaverse, such protected work includes user-generated digital content such as avatars, virtual real estate, and artwork. Platforms such as The Sandbox enable users to construct, develop, and own virtual regions called "LANDS", and people spend tens of thousands of dollars to acquire a piece of metaverse real estate. With the popularity of digital assets, copyright law assumes greater significance in the metaverse.[34]
Intellectual property (IP) law protects creators' rights to their inventions, trademarks, and other works. With the rise in popularity of non-fungible tokens, an unavoidable component of the metaverse space, intellectual property law has become crucial for effective governance. As technology companies compete to create more sophisticated AR and VR tools, such as high-tech eyewear and headsets, new opportunities for intellectual property rights, such as new software and device patents, are expected to open up.[35]
Tort law governs civil wrongs, including both property and personal damages. In the metaverse, it regulates harmful activity perpetrated by users against other participants, which can include psychological distress, physical assaults, and property damage. For instance, if one person physically injures another within the metaverse ecosystem, the injured party can file a lawsuit against the perpetrator, and the law may then require the accused to pay for the injuries, medical expenses, and other damages resulting from the act.[36]
Other areas of law, such as defamation and the broader regulation of blockchain and NFTs, apply depending on the types of transactions and activities conducted on a metaverse platform. Over time, new laws and regulations may shape the future of the metaverse and the law surrounding it.
Some legal practitioners have used the term "metaverse law" in their law firm's name. In other contexts, a real estate and business law firm claimed to be the first "metaverse law firm" in the metaverse.[37]Similarly, a personal injury law firm was publicized for opening a metaverse law firm.[38]For these legal practitioners, the phrase "metaverse law" indicates the virtual location of a law firm in the metaverse, as opposed to recognizing metaverse law as a subclass of existing legal frameworks or a specialized legal field.
Law firms using themetaverse lawas their business name or description have led to atrademarkdispute between at least two law firms. InFalconRappaport & Berkman PLLC v. Metaverse Law Corporation,Falconfiled a trademark claim against Metaverse Law in the United States District Court for the Southern District of New York, objecting that the Metaverse Law firm should not have amonopolyover the phrase "metaverse law."[39]TheUnited States Patent and Trademark Office's trademark registry shows that "METAVERSE LAW" is a registered trademark for Metaverse Law Corporation.[40]The recent dispute has been settled by the parties. The parties agreed that the term "METAVERSE LAW" remains Metaverse Law's trademark and Metaverse Law maintains the right to prosecute any non-descriptive infringement of the mark's use, but descriptive uses of the term are permitted.[41]Metaverse Law Corporation is a proponent of decentralized virtual reality spaces (as opposed to the panopticon of a singular dystopian metaverse) and advises tech companies and law firms alike on consumer privacy, ethics, and good governance inside and outside of their metaverses.
|
https://en.wikipedia.org/wiki/Metaverse_law
|
Personhoodis the status of being aperson. Defining personhood is a controversial topic inphilosophyandlawand is closely tied with legal andpoliticalconcepts ofcitizenship,equality, andliberty. According to law, only alegal person(either anaturalor ajuridical person) hasrights, protections, privileges, responsibilities, andlegal liability.[1]
Personhood continues to be a topic of international debate and has been questioned critically during the abolition of human and nonhuman slavery, in debates about abortion and in fetal rights and/or reproductive rights, in animal rights activism, in theology and ontology, in ethical theory, and in debates about corporate personhood and the beginning of human personhood.[2]In the 21st century, corporate personhood is an existing Western concept; granting non-human entities personhood, which has also been referred to as a "personhood movement", can bridge Western and Indigenous legal systems.[3]
Processes through which personhood is recognized socially and legally vary cross-culturally, demonstrating that notions of personhood are not universal. Anthropologist Beth Conklin has shown how personhood is tied to social relations among theWari' peopleofRondônia, Brazil.[4]Bruce Knauft's studies of the Gebusi people of Papua New Guinea depict a context in which individuals become persons incrementally, again through social relations.[5]Likewise,Jane C. Goodalehas also examined the construction of personhood in Papua New Guinea.[6]
In philosophy, the word "person" may refer to various concepts. The concept of personhood is difficult to define in a way that is universally accepted, due to its historical and cultural variability and the controversies surrounding its use in some contexts. Capacities or attributes common to definitions of personhood can includehuman nature,agency,self-awareness, a notion of the past and future, and the possession ofrightsandduties, among others.[7]
Boethius, a philosopher of the early 6th century CE, gives the definition of "person" as "an individual substance of a rational nature" ("Naturæ rationalis individua substantia").[8]
According to thenaturalistepistemologicaltradition, fromDescartesthroughLockeandHume, the term may designate any human ornon-humanagentwho possesses continuousconsciousnessover time; and is therefore capable of framing representations about the world, formulating plans and acting on them.[9]
According toCharles Taylor, the problem with the naturalist view is that it depends solely on a "performance criterion" to determine what is an agent. Thus, other things (e.g. machines or animals) that exhibit "similarly complex adaptive behaviour" could not be distinguished from persons. Instead, Taylor proposes a significance-based view of personhood:
What is crucial about agents is that things matter to them. We thus cannot simply identify agents by a performance criterion, nor assimilate animals to machines... [likewise] there are matters of significance for human beings which are peculiarly human, and have no analogue with animals.
Relatedly, Martin Heidegger developed his understanding of the distinctive kind of being which a person is as Dasein. The term literally means an existence as a "being-there" or "there-being." Heidegger writes that, "Dasein itself has a special distinctiveness as compared with other entities; [...] it is ontically distinguished by the fact that, in its very Being, that Being is an issue for it."[11]For Heidegger, the way in which Being is an issue for a person as Dasein is their future-oriented caring. This distinguishes Dasein from functional or performance criteria of personhood.
Others also dispute functional criteria of personhood, such as philosopherFrancis J. Beckwith, who argues that it is rather the underlying personal unity of the individual:
What is crucial morally is the being of a person, not his or her functioning. A human person does not come into existence when human function arises, but rather, a human person is anentitywho has the natural inherent capacity to give rise to human functions, whether or not those functions are ever attained. ...A human person who lacks the ability to think rationally (either because she is too young or she suffers from a disability) is still a human person because of her nature. Consequently, it makes sense to speak of a human being’s lack if and only if she is an actual person.
This belief in the underlying unity of an individual is ametaphysicaland moral[13]belief referred to as the substance view of personhood.[14]
PhilosopherJ. P. Morelandclarifies this point:
It is because an entity has an essence and falls within a natural kind that it can possess a unity of dispositions, capacities, parts and properties at a given time and can maintain identity through change.
Harry Frankfurtwrites that, in reference to a definition byP. F. Strawson, "What philosophers have lately come to accept as analysis of the concept of a person is not actually analysis ofthatconcept at all." He suggests that the concept of a person is intimately connected tofree will, and describes the structure of humanvolitionaccording to first- and second-order desires:
Besides wanting and choosing and being movedto dothis or that, [humans] may also want to have (or not to have) certain desires and motives. They are capable of wanting to be different, in their preferences and purposes, from what they are. Many animals appear to have the capacity for what I shall call "first-order desires" or "desires of the first order," which are simply desires to do or not to do one thing or another. No animal other than man, however, appears to have the capacity for reflective self-evaluation that is manifested in the formation of second-order desires.
The criteria for being a person... are designed to capture those attributes which are the subject of our most humane concern with ourselves and the source of what we regard as most important and most problematical in our lives.
According toNikolas Kompridis, there might also be anintersubjective, or interpersonal, basis to personhood:
What if personal identity is constituted in, and sustained through, our relations with others, such that were we to erase our relations with our significant others we would also erase the conditions of our self-intelligibility? As it turns out, this erasure... is precisely what is experimentally dramatized in the “science fiction” film,Eternal Sunshine of the Spotless Mind, a far more philosophically sophisticated meditation on personal identity than is found in most of the contemporary literature on the topic.
Mary Midgleydefines a "person" as being a conscious, thinking being, which knows that it is a person (self-awareness). She also wrote that the law can createpersons.[19]
Philosopher Thomas I. White argues that the criteria for a person are: is alive, is aware, feels positive and negative sensations, has emotions, has a sense of self, controls its own behaviour, recognises other persons and treats them appropriately, and has a variety of sophisticated cognitive abilities. While many of White's criteria are somewhat anthropocentric, some animals such as dolphins would still be considered persons.[20]Some animal rights groups have also championed recognition for animals as "persons".[21]
Another approach to personhood, Paradigm Case Formulation, used indescriptive psychologyand developed byPeter Ossorio, involves the four interrelated concepts of 1) The Individual Person, 2) Deliberate Action, 3) Reality and the Real World, and 4) Language or Verbal Behavior. All four concepts require full articulation for any one of them to be fully intelligible. More specifically, a Person is an individual whose history is, paradigmatically, a history of Deliberate Action in a Dramaturgical pattern. Deliberate Action is a form of behavior in which a person (a) engages in an Intentional Action, (b) is cognizant of that, and (c) has chosen to do that. A person is not always engaged in a deliberate action but has the eligibility to do so. A human being is an individual who is both a person and a specimen of Homo sapiens. Since persons are deliberate actors, they also employ hedonic, prudent, aesthetic and ethical reasons when selecting, choosing or deciding on a course of action. As part of our "social contract" we expect that the typical person can make use of all four of these motivational perspectives. Individual persons will weigh these motives in a manner that reflects their personal characteristics. That life is lived in a "dramaturgical" pattern is to say that people make sense, that their lives have patterns of significance. The paradigm case allows for nonhuman persons, potential persons, nascent persons, manufactured persons, former persons, "deficit case" persons, and "primitive" persons. By using a paradigm case methodology, different observers can point to where they agree and where they disagree about whether an entity qualifies as a person.[22][23]
As an application ofsocial psychologyand other disciplines, phenomena such as theperceptionandattributionof personhood have been scientifically studied.[24][25]Typical questions addressed in social psychology are the accuracy of attribution, processes of perception and the formation of bias. Various other scientific and medical disciplines address the myriad of issues in the development ofpersonality.
In 1983, the people of Ireland added theEighth Amendmentto their constitution that "acknowledges the right to life of the unborn and, with due regard to the equal right to life of the mother, guarantees in its laws to respect, and, as far as practicable, by its laws to defend and vindicate that right." This was repealed in 2018 by theThirty-sixth Amendment of the Constitution of Ireland.
A person is recognized by law as such, not because they are human, but because rights and duties are ascribed to them. The person is the legal subject or substance of which the rights and duties are attributes. An individual human being considered to be having such attributes is what lawyers call a "natural person".[26]According to Black's Law Dictionary,[27]a person is:
In general usage, a human being (i.e. natural person), though by statute term may include a firm, labor organizations, partnerships, associations, corporations, legal representatives, trustees, trustees in bankruptcy, or receivers.
In Federal law, the concept of legal personhood is formalized by statute (1 U.S.C. § 8) to include "every infant member of the species homo sapiens who is born alive at any stage of development." That statute also states that "Nothing in this section shall be construed to affirm, deny, expand, or contract any legal status or legal right applicable to any member of the species homo sapiens at any point prior to being 'born alive' as defined in this section."
According to the National Conference of State Legislatures,[28]many US States have their own definition of personhood which expands upon the federal definition of personhood, andWebster v. Reproductive Health Servicesdeclined to overturn the state of Missouri's law stating that
The life of each human being begins at conception ... Effective January 1, 1988, the laws of this state shall be interpreted and construed to acknowledge on behalf of the unborn child at every stage of development, all the rights, privileges, and immunities available to other persons, citizens, and residents of this state, unborn children have protectable interests in life, health, and well-being.
Thebeginning of human personhoodis a concept long debated by religion and philosophy. With respect to theabortion debate, personhood is the status of a human being having individual human rights. The term was used by Justice Blackmun inRoe v. Wade.[29]
A political movement in the United States seeks to define the beginning of human personhood as starting from the moment offertilizationwith the result being that abortion, as well as forms of birth control that act to deprive the human embryo of necessary sustenance inimplantation, could become illegal.[30][31]Supporters of the movement also state that it would have some effect on the practice ofin-vitro fertilization(IVF), but would not lead to the practice being outlawed.[32]Jonathan F. Will says that the personhood framework could produce significant restrictions on IVF to the extent that reproductive clinics find it impossible to provide the services.[33]
Currently, the personhood movement is led by thePersonhood Alliance, a coalition of state and national personhood organizations headquartered in Washington DC. The Personhood Alliance was founded in 2014 and currently has 22 affiliated organizations.[34]A significant number of the state affiliates of the Personhood Alliance were once affiliates of National Right to Life. Organizations like Georgia Right to Life,[35]Cleveland Right to Life, and Alaska Right to Life left National Right to Life and joined the Personhood Alliance after refusing to support National Right to Life's proposed legislation that included exceptions like the rape and incest exceptions. The Personhood Alliance describes itself as "a Christ-centered, biblically informed organization dedicated to the non-violent advancement of the recognition and protection of the God-given, inalienable right to life of all human beings as legal persons, at every stage of their biological development and in every circumstance."[36]
A precursor to the Personhood Alliance was Personhood USA, a Colorado-based umbrella group with a number of state-level affiliates,[37]which describes itself as a nonprofit Christian ministry[38]and seeks to ban abortion.[39]Personhood USA was co-founded by Cal Zastrow and Keith Mason[40]in 2008, following the Colorado for Equal Rights campaign to enact a state constitutional personhood amendment.[41]
Proponents of the movement regard personhood as an attempt to directly challenge theRoe v. WadeU.S. Supreme Courtdecision, thus filling a legal void left byJustice Harry Blackmunin the majority opinion when he wrote: "If this suggestion of personhood is established, the appellant's case, of course, collapses, for the fetus' right to life would then be guaranteed specifically by the Amendment."[29]
Some medical organizations have described the potential effects of personhood legislation as potentially harmful to patients and the practice of medicine, particularly in the cases of ectopic and molar pregnancy.[42]
Susan Bordo has suggested that the focus on the issue of personhood in abortion debates has often been a means for depriving women of their rights. She writes that "the legal double standard concerning the bodily integrity of pregnant and nonpregnant bodies, the construction of women as fetal incubators, the bestowal of 'super-subject' status to the fetus, and the emergence of a father's-rights ideology" demonstrate "that the current terms of the abortion debate – as a contest between fetal claims to personhood and women's right to choose – are limited and misleading."[43]
Others, such as Colleen Carroll Campbell, say that the personhood movement is a natural progression of society in protecting the equal rights of all members of the human species. She writes, "The basic philosophical premise behind these [personhood] amendments is eminently reasonable. And the alternative on offer – which severs humanity from personhood – is fraught with peril. If being human is not enough to entitle one to human rights, then the very concept of human rights loses meaning. And all of us – born and unborn, strong and weak, young and old – someday will find ourselves on the wrong end of that cruel measuring stick."[44]
Father Frank Pavone agrees, adding, "Nor is this a dispute about the state imposing a religious or philosophical view. After all, your life and mine are not protected because of some religious or philosophical belief that others are required to have about us. More accurately, the law protects us precisely in spite of the beliefs of others who, in their own worldview, may not value our lives. … To support Roe vs. Wade is not merely to allow a medical procedure. It is to acknowledge that the government has the power to say who is a person and who is not. Who, then, is to limit the groups to whom it is applied? This is what makes 'personhood' such an important public policy issue."[45]
In March 2007 Georgia became the first state in the nation to introduce a legislative resolution to amend the state constitution to define and recognize the personhood of fetuses.[46] The Georgia Catholic Conference and National Right to Life supported the effort. The resolution failed to attract the supermajority in both chambers required for it to be placed on the ballot.[47] Georgia legislators have filed a personhood resolution every session since 2007.[48][49][50] In May 2008 Georgia Right to Life hosted the first nationwide Personhood Symposium targeting anti-abortion activists.[51] This symposium was instrumental in spawning the group Personhood USA and the various state personhood efforts that followed. Voters in 46 Georgia counties approved personhood during the 2010 primary election with 75% in favor of a non-binding resolution declaring that the "right to life is vested in each human being from their earliest biological beginning until natural death".[52] During the 2012 Republican primary a similar question was placed on the ballot statewide and passed with a supermajority (66%) of the vote in 158 of 159 counties.[53]
In the summer of 2008 a citizen-initiated amendment was proposed for the Colorado constitution.[54] Three attempts to enact the from-fertilization definition of personhood into U.S. state constitutions via referendums have failed.[55] Following two attempts to enact similar changes in Colorado in 2008 and 2010, a 2011 initiative to amend the state constitution by referendum in the state of Mississippi also failed to gain approval, with around 58% of voters disapproving.[55][56] In an interview after the referendum, Mason ascribed the failure of the initiative to a political campaign run by Planned Parenthood.[57]
Personhood proponents in Oklahoma sought to amend the state constitution to define personhood as beginning at conception. The state Supreme Court, citing the U.S. Supreme Court's 1992 decision in Planned Parenthood v. Casey, ruled in April 2012 that the proposed amendment was unconstitutional under the federal Constitution and blocked inclusion of the referendum question on the ballot.[58] In October 2012, the U.S. Supreme Court declined to hear an appeal of the state Supreme Court's ruling.[59]
In 2006, a 16-year-old girl was charged in Mississippi with murder for the still-birth of her daughter on the basis that the girl had smoked cocaine while pregnant.[60]These charges were later dismissed.[61]
In February 2024, the Supreme Court of Alabama ruled that frozen embryos were "extrauterine children" subject to the Wrongful Death of a Minor Act, based on protections for unborn children in the state constitution.[62][63] These protections were added in 2018 by ballot referendum, as Amendment 930 to the Alabama Constitution of 1901, and gained relevance when the 2022 U.S. Supreme Court decision in Dobbs v. Jackson Women's Health Organization returned full control over regulation of abortion to the states.[64] The concurring decision of Justice Tom Parker cited Christian theology to support the decision, raising complaints about separation of church and state.[65]
The Vatican has advanced a human exceptionalist understanding of personhood. Catechism 2270 reads: "Human life must be respected and protected absolutely from the moment of conception. From the first moment of his existence, a human being must be recognized as having the rights of a person – among which is the inviolable right of every innocent being to life."[66]
In the United States, the personhood of women has important legal consequences. Although in 1920 the 19th Amendment guaranteed women the right to vote, it was not until 1971 that the US Supreme Court ruled in Reed v. Reed[67] that the law cannot discriminate between the sexes because the 14th Amendment grants equal protection to all "persons".[68][69] In 2011, Supreme Court Justice Antonin Scalia disputed the conclusion of Reed v. Reed, arguing that women do not have equal protection under the 14th Amendment as "persons"[70][71] because the Constitution's use of the gender-neutral term "Person" means that the Constitution does not require discrimination on the basis of sex, but also does not prohibit such discrimination, adding "Nobody ever thought that that's what it meant. Nobody ever voted for that."[72] Many others, including law professor Jack Balkin, disagree with this assertion. Balkin states that, at a minimum, "the fourteenth amendment was intended to prohibit some forms of sex discrimination – discrimination in basic civil rights against single women."[73] Many local marriage laws at the time the 14th Amendment was ratified (as well as when the original Constitution was ratified) had concepts of coverture and "head-and-master", which meant that women legally lost rights upon marriage, including rights to ownership of property and other rights of adult participation in the political economy; single women retained these rights, however, and voted in some jurisdictions.
Other commentators have noted that some of the ratifiers of the US Constitution (in 1787) also, in contemporaneous contexts, ratified state-level constitutions that saw women as Persons and required them to be treated as such, including granting women rights such as the right to vote.[74][75] Professor Jane Calvert argues that the 17th- and 18th-century Quaker concept of Personhood applied to women, and the prevalence of Quakers in the population of several colonies, such as New Jersey and Pennsylvania, at the time that the original Constitution was drafted and ratified likely influenced the choice of the term "Person" for the Constitution instead of the term "Man", which was used in the Declaration of Independence and in the contemporaneously drafted French Constitution of 1791.[76]
The personhood of women also has consequences for the ethics of abortion. For example, in "A Defense of Abortion", Judith Jarvis Thomson argues that one person's right to bodily autonomy trumps another's right to life, and therefore abortion does not violate a fetus's right to life: instead, abortion should be understood as the pregnant woman withdrawing her own body from use, which causes the fetus to die.[77]
Questions pertaining to the personhood of women and the personhood of fetuses have legal and ethical consequences for reproductive rights beyond abortion as well. For example, some fetal homicide laws have resulted in jail time for women suspected of drug use during a pregnancy that ended in a miscarriage, like one Alabama woman who was sentenced to ten years.[78]
In 1772, Somersett's Case determined that slavery was unsupported by law in England and Wales, but not elsewhere in the British Empire. In 1868, under the 14th Amendment, black men in the United States became citizens. In 1870, under the 15th Amendment, black men got the right to vote.
In 1853, Sojourner Truth became famous for asking Ain't I a Woman?, and after slavery was abolished, black men continued to fight for personhood by claiming, I Am A Man!
The legal definition of "person" has excludedindigenous peoplesin some countries.[example needed]
The legal definition of persons may include or exclude children depending on the context. The US Born-Alive Infants Protection Act of 2002 provides a legal structure under which those born at any gestational stage who are breathing, have a heartbeat, umbilical cord pulsation, or any voluntary muscle movement are living, individual human persons.[79]
Adults with cognitive disabilities are regularly denied rights generally granted to all adult persons, such as the right to marry and consent to sex,[80] and the right to vote. They may also lack legal competence. Philosophical arguments have been made against the cognitively disabled being able to have moral agency.[81] In many countries, including the US, psychiatric illness can be cited to imprison an adult without due process.
Those who become disabled later in life often experience a change in how they are perceived, including others infantilizing them or assuming cognitive disability due to the existence of physical disability.[82] The concept of disability as being worse than death can be seen as a denial of disabled people's personhood, such as when medical professionals suggest euthanasia to non-suicidal disabled patients.[83]
Some philosophers and those involved in animal welfare, ethology, the rights of animals, and related subjects consider that certain or even all animals should also be considered to be persons and thus granted legal personhood. Commonly named species in this context include the apes, cetaceans, parrots, cephalopods, corvids, elephants, bears, pigs, leporids and rodents, because of their apparent intelligence, sentience, and intricate social rules. The idea of extending personhood to all animals has the support of legal scholars such as Alan Dershowitz[84] and Laurence Tribe of Harvard Law School,[85] and animal law courses are (as of 2008) taught in 92 out of 180 law schools in the United States.[86] On May 9, 2008, Columbia University Press published Animals as Persons: Essays on the Abolition of Animal Exploitation by Professor Gary L. Francione of Rutgers University School of Law, a collection of writings that summarizes his work to date and makes the case for non-human animals as persons.
Those who oppose personhood for non-human animals are known as human exceptionalists or human supremacists, and more pejoratively speciesists.[87]
Other theorists attempt to demarcate between degrees of personhood. For example, Peter Singer's two-tiered account distinguishes between basic sentience and the higher standard of self-consciousness which constitutes personhood. His approach has been criticized for accepting the personhood of some animals, but rejecting the personhood of people with disabilities such as dementia.[88] It has also been given as an example of the limits of a capacities-based definition of personhood, in that such definitions tend to be framed in ways that reinforce existing systems of power and privilege by preferring the capacities that are valued by those who write the definitions.[88] A squirrel would value agility and balance in defining personhood; a tree might grant personhood on the basis of height and longevity; and a long-time academic, "a human being with a fully functioning cerebral cortex who resides in a social context where the workings of this part of the brain are particularly prized", would just as predictably value the qualities that benefited his own life and overlook the ones that had little relationship to his own life.[88]
Wynn Schwartz has offered a Paradigm Case Formulation of Persons as a format allowing judges to identify qualities of personhood in different entities.[23][17][89] Julian Friedland has advanced a seven-tiered account based on cognitive capacity and linguistic mastery.[90] Amanda Stoel suggested that rights should be granted based on a scale of degrees of personhood, allowing entities currently denied any right to be recognized some rights, but not as many.[91]
In 1992, Switzerland amended its constitution to recognize animals as beings and not things.[92] A decade later, Germany guaranteed rights to animals in a 2002 amendment to its constitution, becoming the first European Union member to do so.[92][93][94] The New Zealand parliament included restrictions on the use of 'non-human hominids'[95] in research or teaching when passing the Animal Welfare Act (1999).[96] In 2007, the parliament of the Balearic Islands, an autonomous province of Spain, passed the world's first legislation granting legal rights to all great apes.[97]
In 2013, India's Ministry of Forests and Environment banned the importation or capture of cetaceans (whales and dolphins) for entertainment, exhibition, or interaction purposes, on the basis that "cetaceans in general are highly intelligent and sensitive" and that it "is morally unacceptable to keep them captive for entertainment." It noted that "various scientists" have argued they should be seen as "non-human persons" with commensurate rights, but did not take an official position on this, and indeed did not have the legal authority to do so.[98][99]
In 2014, a hybrid, zoo-born orangutan named Sandra was termed by the court in Argentina as a "non-human subject" in an unsuccessful habeas corpus case regarding the release of the orangutan from captivity at the Buenos Aires zoo. The status of the orangutan as a "non-human subject" needed to be clarified by the court. Court cases relevant to this orangutan were continuing in 2015.[100] Finally, in 2019, Sandra was granted nonhuman personhood and freed from captivity to a Florida sanctuary.[citation needed]
In 2015, for the first time, two chimpanzees, Hercules and Leo, were thought to be "legal persons", having been granted a writ of habeas corpus. This meant their detainer, Stony Brook University, had to provide a legally sufficient reason for their imprisonment.[101] This view was rejected and the writ was reversed by the officiating judge shortly thereafter.[102]
In statutory and corporate law, certain social constructs are legally considered persons. In many jurisdictions, some corporations and other organizations are considered juridical persons (a subtype of legal persons) with standing to own, possess, enter contracts, as well as to sue or be sued in court, or even to be indicted, in selected jurisdictions. This is known as legal or corporate personhood.
In 1819, the US Supreme Court ruled in Dartmouth College v. Woodward that corporations have the same rights as natural persons to enforce contracts.
Since the new millennium, treating parts of nature, such as waterways, as persons has become increasingly popular.
In 2006, Bolivia passed a law recognizing the rights of nature "to not be affected by mega-infrastructure and development projects that affect the balance of ecosystems and the local inhabitant communities".[103]
In February 2021, the Magpie River (Quebec) became the first river in Canada to be granted legal personhood, after the local municipality of Minganie and the Innu Council of Ekuanitshit passed joint resolutions.[104] The goal is to protect it long-term given its appeal for energy producers like Hydro-Quebec and Innergex Renewable Energy.[105] It now has the right to flow, maintain biodiversity, be free from pollution, and to sue.[3]
In 2016, the Constitutional Court of Colombia granted legal rights to the Rio Atrato; in 2018, the Supreme Court of Colombia granted the Amazon river ecosystem legal rights.[106]
In 2008, Ecuador approved a constitution to recognize that nature "...has the right to exist, persist, maintain and regenerate its vital cycles, structure, functions and its processes in evolution."[107]
In 2017, a court in the northern Indian state of Uttarakhand recognized the Ganges and Yamuna as legal persons. The judges cited the Whanganui River in New Zealand as precedent for the action.[108]
The Whanganui River of New Zealand is revered by the local Māori people as Te Awa Tupua, sometimes translated as "an integrated, living whole". Efforts to grant it special legal protection have been pursued by the Whanganui iwi since the 1870s. In 2012, an agreement to grant legal personhood to the river was signed between the New Zealand government and the Whanganui River Māori Trust. One guardian from the Crown and one from the Whanganui are responsible for protecting the river.[109]
In 2019, the Klamath River was granted personhood by the Yurok Tribe.[110] Also in February 2019, voters in Toledo, Ohio passed the "Lake Erie Bill of Rights" (LEBOR), which granted personhood rights to Lake Erie.[111] The law was challenged in federal court on constitutional grounds by Drewes Farms Partnership, with the state government of Ohio joining as an intervenor. The law was overturned due to the vagueness of at least three portions of the law, with the court also criticizing the applicability of the law to other Lake Erie-bordering jurisdictions' laws regarding the lake.[112][113]
The theoretical landscape of personhood theory has been altered recently by controversy in the bioethics community concerning an emerging community of scholars, researchers, and activists identifying with an explicitly transhumanist position, which supports morphological freedom, even if a person changed so much as to no longer be considered a member of the human species. For example, how much of a human can be artificially replaced before one loses their personhood, as in the case of cyborgs? If people are considered persons because of their brains, then what if the brain's thought patterns, memories and other attributes could be transposed into a device? Would the patient still be considered a person after the operation?[according to whom?]
In China's religious philosophy of Taoism, the Tao is a path of life and a divine field; it does not exhibit personhood of itself, but, "if well-nourished", is supposedly beneficial towards persons and the components of personhood.[citation needed]
Many generally non-religious Japanese people maintain a degree of Shinto spirituality (thus avoiding fully declared non-spirituality) because the kami are not as central to the Shinto religion as a monotheistic creator God, thus having an indirect impact on the formation of an individual's personality. The non-centrality of the kami allows an individual to take an ambivalent stance towards atheism or theism and deism. Religiously speaking, the degree of personhood granted to a deity (along with their universal centrality to a given religion) may be seen to have an impact on the world view and understandings of personhood by mortal individuals.[citation needed]
The Latin word persona is probably derived from the Etruscan word phersu, with the same meaning, and that from the Greek πρόσωπον (prosōpon). Its meaning in the latter Roman period changed to indicate a character of a theatrical performance or court of law, when it became apparent that different individuals could assume the same role and that legal attributes such as rights, powers, and duties followed the role. The same individuals as actors could play different roles, each with its own legal attributes, sometimes even in the same court appearance.
According to other sources, which also admit that the origin of the term is not completely clear, persona could be related to the Latin verb per-sonare, literally: sounding through, with an obvious link to the above-mentioned theatrical mask, which often incorporated a small megaphone. The word was transformed from its theater use into a term with strict technical theological meaning by Tertullian in his work, Adversus Praxean (Against Praxeas), in order to distinguish the three "persons" of the Trinity. Christianity is thus the first philosophical system to use the word "person" in its modern sense.[114] Subsequently, Boethius refined the word to mean "an individual substance of a rational nature." This can be re-stated as "that which possesses an intellect and a will."
The definition of Boethius as it stands can hardly be considered a satisfactory one. The words taken literally can be applied to the rational soul of man, and also the human nature of Christ. That St. Thomas accepts it is presumably due to the fact that he found it in possession, and recognized as the traditional definition. He explains it in terms that practically constitute a new definition. Individua substantia signifies, he says, substantia, completa, per se subsistens, separata ab aliis, i.e., a substance, complete, subsisting per se, existing apart from others (III, Q. xvi, a. 12, ad 2um).
If to this be added rationalis naturae, we have a definition comprising the five notes that go to make up a person: (a) substantia – this excludes accident; (b) completa – it must form a complete nature; that which is a part, either actually or "aptitudinally", does not satisfy the definition; (c) per se subsistens – the person exists in itself and for itself; he or she is sui juris, the ultimate possessor of his or her nature and all its acts, the ultimate subject of predication of all his or her attributes; that which exists in another is not a person; (d) separata ab aliis – this excludes the universal, substantia secunda, which has no existence apart from the individual; (e) rationalis naturae – excludes all non-intellectual supposita.
To a person therefore belongs a threefold incommunicability, expressed in notes (b), (c), and (d). The human soul belongs to the nature as a part of it, and is therefore not a person, even when existing separately.
|
https://en.wikipedia.org/wiki/Personhood
|
Regulation of artificial intelligence is the development of public sector policies and laws for promoting and regulating artificial intelligence (AI). It is part of the broader regulation of algorithms.[1][2] The regulatory and policy landscape for AI is an emerging issue in jurisdictions worldwide, including for international organizations without direct enforcement power like the IEEE or the OECD.[3]
Since 2016, numerous AI ethics guidelines have been published in order to maintain social control over the technology.[4] Regulation is deemed necessary to both foster AI innovation and manage associated risks.
Furthermore, organizations deploying AI have a central role to play in creating and implementing trustworthy AI, adhering to established principles, and taking accountability for mitigating risks.[5]
Regulating AI through mechanisms such as review boards can also be seen as social means to approach the AI control problem.[6][7]
According to Stanford University's 2023 AI Index, the annual number of bills mentioning "artificial intelligence" passed in 127 surveyed countries jumped from one in 2016 to 37 in 2022.[8][9]
In 2017, Elon Musk called for regulation of AI development.[10] According to NPR, the Tesla CEO was "clearly not thrilled" to be advocating for government scrutiny that could impact his own industry, but believed the risks of going completely without oversight are high: "Normally the way regulations are set up is when a bunch of bad things happen, there's a public outcry, and after many years a regulatory agency is set up to regulate that industry. It takes forever. That, in the past, has been bad but not something which represented a fundamental risk to the existence of civilization."[10] In response, some politicians expressed skepticism about the wisdom of regulating a technology that is still in development.[11] Responding both to Musk and to February 2017 proposals by European Union lawmakers to regulate AI and robotics, Intel CEO Brian Krzanich has argued that AI is in its infancy and that it is too early to regulate the technology.[12] Many tech companies oppose harsh regulation of AI: "While some of the companies have said they welcome rules around A.I., they have also argued against tough regulations akin to those being created in Europe."[13] Instead of trying to regulate the technology itself, some scholars suggested developing common norms including requirements for the testing and transparency of algorithms, possibly in combination with some form of warranty.[14]
In a 2022 Ipsos survey, attitudes towards AI varied greatly by country; 78% of Chinese citizens, but only 35% of Americans, agreed that "products and services using AI have more benefits than drawbacks".[8] A 2023 Reuters/Ipsos poll found that 61% of Americans agree, and 22% disagree, that AI poses risks to humanity.[15] In a 2023 Fox News poll, 35% of Americans thought it "very important", and an additional 41% thought it "somewhat important", for the federal government to regulate AI, versus 13% responding "not very important" and 8% responding "not at all important".[16][17]
The regulation of artificial intelligence is the development of public sector policies and laws for promoting and regulating AI.[18] Regulation is now generally considered necessary to both encourage AI and manage associated risks.[19][20][21] Public administration and policy considerations generally focus on the technical and economic implications and on trustworthy and human-centered AI systems,[22] although regulation of artificial superintelligences is also considered.[23] The basic approach to regulation focuses on the risks and biases of machine-learning algorithms, at the level of the input data, algorithm testing, and decision model. It also focuses on the explainability of the outputs.[20]
There have been both hard law and soft law proposals to regulate AI.[24] Some legal scholars have noted that hard law approaches to AI regulation have substantial challenges.[25][26] Among the challenges, AI technology is rapidly evolving, leading to a "pacing problem" where traditional laws and regulations often cannot keep up with emerging applications and their associated risks and benefits.[25][26] Similarly, the diversity of AI applications challenges existing regulatory agencies, which often have limited jurisdictional scope.[25] As an alternative, some legal scholars argue that soft law approaches to AI regulation are promising because soft laws can be adapted more flexibly to meet the needs of emerging and evolving AI technology and nascent applications.[25][26] However, soft law approaches often lack substantial enforcement potential.[25][27]
Cason Schmit, Megan Doerr, and Jennifer Wagner proposed the creation of a quasi-governmental regulator by leveraging intellectual property rights (i.e., copyleft licensing) in certain AI objects (i.e., AI models and training datasets) and delegating enforcement rights to a designated enforcement entity.[28] They argue that AI can be licensed under terms that require adherence to specified ethical practices and codes of conduct (e.g., soft law principles).[28]
Prominent youth organizations focused on AI, namely Encode Justice, have also issued comprehensive agendas calling for more stringent AI regulations and public-private partnerships.[29][30]
AI regulation could derive from basic principles. A 2020 Berkman Klein Center for Internet & Society meta-review of existing sets of principles, such as the Asilomar Principles and the Beijing Principles, identified eight such basic principles: privacy, accountability, safety and security, transparency and explainability, fairness and non-discrimination, human control of technology, professional responsibility, and respect for human values.[31] AI law and regulations have been divided into three main topics, namely governance of autonomous intelligence systems, responsibility and accountability for the systems, and privacy and safety issues.[19] A public administration approach sees a relationship between AI law and regulation, the ethics of AI, and 'AI society', defined as workforce substitution and transformation, social acceptance and trust in AI, and the transformation of human to machine interaction.[32] The development of public sector strategies for management and regulation of AI is deemed necessary at the local, national,[33] and international levels[34] and in a variety of fields, from public service management[35] and accountability[36] to law enforcement,[34][37] healthcare (especially the concept of a Human Guarantee),[38][39][40][41][42] the financial sector,[33] robotics,[43][44] autonomous vehicles,[43] the military[45] and national security,[46] and international law.[47][48]
Henry Kissinger, Eric Schmidt, and Daniel Huttenlocher published a joint statement in November 2021 entitled "Being Human in an Age of AI", calling for a government commission to regulate AI.[49]
Regulation of AI can be seen as positive social means to manage the AI control problem (the need to ensure long-term beneficial AI), with other social responses such as doing nothing or banning being seen as impractical, and approaches such as enhancing human capabilities through transhumanism techniques like brain-computer interfaces being seen as potentially complementary.[7][50] Regulation of research into artificial general intelligence (AGI) focuses on the role of review boards, from university or corporation to international levels, and on encouraging research into AI safety,[50] together with the possibility of differential intellectual progress (prioritizing protective strategies over risky strategies in AI development) or conducting international mass surveillance to perform AGI arms control.[7] For instance, the 'AGI Nanny' is a proposed strategy, potentially under the control of humanity, for preventing the creation of a dangerous superintelligence as well as for addressing other major threats to human well-being, such as subversion of the global financial system, until a true superintelligence can be safely created. It entails the creation of a smarter-than-human, but not superintelligent, AGI system connected to a large surveillance network, with the goal of monitoring humanity and protecting it from danger.[7] Regulation of conscious, ethically aware AGIs focuses on how to integrate them with existing human society and can be divided into considerations of their legal standing and of their moral rights.[7] Regulation of AI has been seen as restrictive, with a risk of preventing the development of AGI.[43]
The development of a global governance board to regulate AI development was suggested at least as early as 2017.[52] In December 2018, Canada and France announced plans for a G7-backed International Panel on Artificial Intelligence, modeled on the Intergovernmental Panel on Climate Change, to study the global effects of AI on people and economies and to steer AI development.[53] In 2019, the Panel was renamed the Global Partnership on AI.[54][55]
The Global Partnership on Artificial Intelligence (GPAI) was launched in June 2020, stating a need for AI to be developed in accordance with human rights and democratic values, to ensure public confidence and trust in the technology, as outlined in the OECD Principles on Artificial Intelligence (2019).[56] The 15 founding members of the Global Partnership on Artificial Intelligence are Australia, Canada, the European Union, France, Germany, India, Italy, Japan, the Republic of Korea, Mexico, New Zealand, Singapore, Slovenia, the United States and the UK. As of 2023, the GPAI has 29 members.[57] The GPAI Secretariat is hosted by the OECD in Paris, France. GPAI's mandate covers four themes, two of which are supported by the International Centre of Expertise in Montréal for the Advancement of Artificial Intelligence, namely, responsible AI and data governance. A corresponding centre of excellence in Paris will support the other two themes on the future of work, and on innovation and commercialization. GPAI also investigated how AI can be leveraged to respond to the COVID-19 pandemic.[56]
The OECD AI Principles[58] were adopted in May 2019, and the G20 AI Principles in June 2019.[55][59][60] In September 2019 the World Economic Forum issued ten 'AI Government Procurement Guidelines'.[61] In February 2020, the European Union published its draft strategy paper for promoting and regulating AI.[34]
At the United Nations (UN), several entities have begun to promote and discuss aspects of AI regulation and policy, including the UNICRI Centre for AI and Robotics.[46] In partnership with INTERPOL, UNICRI's Centre issued the report AI and Robotics for Law Enforcement in April 2019[62] and the follow-up report Towards Responsible AI Innovation in May 2020.[37] At UNESCO's 40th General Conference session in November 2019, the organization commenced a two-year process to achieve a "global standard-setting instrument on ethics of artificial intelligence". In pursuit of this goal, UNESCO forums and conferences on AI were held to gather stakeholder views. A draft text of a Recommendation on the Ethics of AI of the UNESCO Ad Hoc Expert Group was issued in September 2020 and included a call for legislative gaps to be filled.[63] UNESCO tabled the international instrument on the ethics of AI for adoption at its General Conference in November 2021;[56] this was subsequently adopted.[64] While the UN is making progress with the global management of AI, its institutional and legal capability to manage the AGI existential risk is more limited.[65]
An initiative of the International Telecommunication Union (ITU) in partnership with 40 UN sister agencies, AI for Good is a global platform which aims to identify practical applications of AI to advance the United Nations Sustainable Development Goals and scale those solutions for global impact. It is an action-oriented, global and inclusive United Nations platform fostering development of AI to positively impact health, climate, gender, inclusive prosperity, sustainable infrastructure, and other global development priorities.[citation needed]
Recent research has indicated that countries will also begin to use artificial intelligence as a tool for national cyberdefense. AI is a new factor in the cyber arms industry, as it can be used for defense purposes. Therefore, academics urge that nations should establish regulations for the use of AI, similar to how there are regulations for other military industries.[66]
The regulatory and policy landscape for AI is an emerging issue in regional and national jurisdictions globally, for example in the European Union[68] and Russia.[69] Since early 2016, many national, regional and international authorities have begun adopting strategies, action plans and policy papers on AI.[70][71] These documents cover a wide range of topics such as regulation and governance, as well as industrial strategy, research, talent and infrastructure.[22][72]
Different countries have approached the problem in different ways. Regarding the three largest economies, it has been said that "the United States is following a market-driven approach, China is advancing a state-driven approach, and the EU is pursuing a rights-driven approach."[73]
In October 2023, the Australian Computer Society, Business Council of Australia, Australian Chamber of Commerce and Industry, Ai Group (aka Australian Industry Group), Council of Small Business Organisations Australia, and Tech Council of Australia jointly published an open letter calling for a national approach to AI strategy.[74] The letter backs the federal government establishing a whole-of-government AI taskforce.[74]
Additionally, in August 2024, the Australian government set a Voluntary AI Safety Standard, which was followed by a Proposals Paper in September of that year, outlining potential guardrails for high-risk AI that could become mandatory. These guardrails include areas such as model testing, transparency, human oversight, and record-keeping, all of which may be enforced through new legislation. As noted, however, Australia has not yet passed AI-specific laws, but existing statutes such as the Privacy Act 1988, Corporations Act 2001, and Online Safety Act 2021 all have provisions that apply to AI use.[75]
In September 2024, a bill was also introduced which would grant the Australian Communications and Media Authority powers to regulate AI-generated misinformation. Several agencies, including the ACMA, ACCC, and Office of the Australian Information Commissioner, are expected to play roles in future AI regulation.[75]
On September 30, 2021, the Brazilian Chamber of Deputies approved the Brazilian Legal Framework for Artificial Intelligence, Marco Legal da Inteligência Artificial, in regulatory efforts for the development and usage of AI technologies and to further stimulate research and innovation in AI solutions aimed at ethics, culture, justice, fairness, and accountability. This 10-article bill outlines objectives including missions to contribute to the elaboration of ethical principles, promote sustained investments in research, and remove barriers to innovation. Specifically, in article 4, the bill emphasizes the avoidance of discriminatory AI solutions, plurality, and respect for human rights. Furthermore, this act emphasizes the importance of the equality principle in deliberate decision-making algorithms, especially for highly diverse and multiethnic societies like that of Brazil.
When the bill was first released to the public, it faced substantial criticism over several of its provisions. The underlying issue is that the bill fails to thoroughly and carefully address accountability, transparency, and inclusivity principles. Article VI establishes subjective liability, meaning that any individual damaged by an AI system who wishes to receive compensation must identify the responsible stakeholder and prove that there was a mistake in the machine's life cycle. Scholars emphasize that it is legally problematic to make an individual responsible for proving algorithmic errors, given the high degree of autonomy, unpredictability, and complexity of AI systems. This also drew attention to ongoing issues with face recognition systems in Brazil that have led to unjust arrests by the police, which would imply that, once the bill is adopted, individuals would have to prove and justify these machine errors.
The main controversy of this draft bill was directed at three proposed principles. First, the non-discrimination principle suggests that AI must be developed and used in a way that merely mitigates the possibility of abusive and discriminatory practices. Second, the pursuit-of-neutrality principle lists recommendations for stakeholders to mitigate biases, but imposes no obligation to achieve this goal. Lastly, the transparency principle states that a system's transparency is only necessary when there is a high risk of violating fundamental rights. As observed, the Brazilian Legal Framework for Artificial Intelligence lacks binding and obligatory clauses and is instead filled with relaxed guidelines. In fact, experts emphasize that the bill may make accountability for discriminatory AI biases even harder to achieve. Compared to the EU's proposal of extensive risk-based regulations, the Brazilian bill has 10 articles proposing vague and generic recommendations.
Compared to the multistakeholder participation approach taken previously in the 2000s when drafting the Brazilian Internet Bill of Rights, Marco Civil da Internet, the Brazilian AI bill has been assessed as significantly lacking in perspective. Multistakeholderism, more commonly referred to as multistakeholder governance, is defined as the practice of bringing multiple stakeholders to participate in dialogue, decision-making, and implementation of responses to jointly perceived problems. In the context of regulatory AI, this multistakeholder perspective captures the trade-offs and varying perspectives of different stakeholders with specific interests, which helps maintain transparency and broader efficacy. By contrast, the legislative proposal for AI regulation did not follow a similar multistakeholder approach.
Future steps may include expanding upon the multistakeholder perspective. There has been a growing concern about the inapplicability of the framework of the bill, which highlights that a one-size-fits-all solution may not be suitable for the regulation of AI and calls for subjective and adaptive provisions.
The Pan-Canadian Artificial Intelligence Strategy (2017) is supported by federal funding of Can$125 million with the objectives of increasing the number of outstanding AI researchers and skilled graduates in Canada, establishing nodes of scientific excellence at the three major AI centres, developing 'global thought leadership' on the economic, ethical, policy and legal implications of AI advances and supporting a national research community working on AI.[56] The Canada CIFAR AI Chairs Program is the cornerstone of the strategy. It benefits from funding of Can$86.5 million over five years to attract and retain world-renowned AI researchers.[56] The federal government appointed an Advisory Council on AI in May 2019 with a focus on examining how to build on Canada's strengths to ensure that AI advancements reflect Canadian values, such as human rights, transparency and openness. The Advisory Council on AI has established a working group on extracting commercial value from Canadian-owned AI and data analytics.[56] In 2020, the federal government and Government of Quebec announced the opening of the International Centre of Expertise in Montréal for the Advancement of Artificial Intelligence, which will advance the cause of responsible development of AI.[56] In June 2022, the government of Canada started a second phase of the Pan-Canadian Artificial Intelligence Strategy.[76] In November 2022, Canada introduced the Digital Charter Implementation Act (Bill C-27), which proposes three acts that have been described as a holistic package of legislation for trust and privacy: the Consumer Privacy Protection Act, the Personal Information and Data Protection Tribunal Act, and the Artificial Intelligence & Data Act (AIDA).[77][78]
In September 2023, the Canadian Government introduced a Voluntary Code of Conduct for the Responsible Development and Management of Advanced Generative AI Systems. The code, based initially on public consultations, seeks to provide interim guidance to Canadian companies on responsible AI practices. Ultimately, it is intended to serve as a stopgap until formal legislation, such as the Artificial Intelligence and Data Act (AIDA), is enacted.[79][80] Moreover, in November 2024, the Canadian government announced the creation of the Canadian Artificial Intelligence Safety Institute (CAISI) as part of a 2.4 billion CAD federal AI investment package. This includes 2 billion CAD to support a new AI Sovereign Computing Strategy and the AI Computing Access Fund, which aims to bolster Canada's advanced computing infrastructure. Further funding includes 700 million CAD for domestic AI development, 1 billion CAD for public supercomputing infrastructure, and 300 million CAD to assist companies in accessing new AI resources.[80]
In Morocco, a new legislative proposal has been put forward by a coalition of political parties in Parliament to establish the National Agency for Artificial Intelligence (AI). This agency is intended to regulate AI technologies, enhance collaboration with international entities in the field, and increase public awareness of both the possibilities and risks associated with AI.[81]
In recent years, Morocco has made efforts to advance its use of artificial intelligence in the legal sector, particularly through AI tools that assist with judicial prediction and document analysis, helping to streamline case law research and support legal practitioners with more complex tasks. Alongside these efforts to establish a national AI agency, AI is being gradually introduced into legislative and judicial processes in Morocco, with ongoing discussions emphasizing the benefits as well as the potential risks of these technologies.[82]
Generally speaking, Morocco's broader digital policy includes robust data governance measures, including the 2009 Personal Data Protection Law and the 2020 Cybersecurity Law, which establish requirements in areas such as privacy, breach notification, and data localization.[82] As of 2024, additional decrees have also expanded cybersecurity standards for cloud infrastructure and data audits within the nation. While general data localization is not mandated, sensitive government and critical infrastructure data must be stored domestically. Oversight is led by the National Commission for the Protection of Personal Data (CNDP) and the General Directorate of Information Systems Security (DGSSI), though public enforcement actions in the country remain limited.[82]
The regulation of AI in China is mainly governed by the State Council of the People's Republic of China's July 8, 2017 "A Next Generation Artificial Intelligence Development Plan" (State Council Document No. 35), in which the Central Committee of the Chinese Communist Party and the State Council of the PRC urged the governing bodies of China to promote the development of AI up to 2030. Regulation of the issues of ethical and legal support for the development of AI is accelerating, and policy ensures state control of Chinese companies and over valuable data, including storage of data on Chinese users within the country and the mandatory use of the People's Republic of China's national standards for AI, including over big data, cloud computing, and industrial software.[83][84][85] In 2021, China published ethical guidelines for the use of AI in China which state that researchers must ensure that AI abides by shared human values, is always under human control, and is not endangering public safety.[86] In 2023, China introduced Interim Measures for the Management of Generative AI Services.[87]
On August 15, 2023, China's first Generative AI Measures officially came into force, becoming one of the first comprehensive national regulatory frameworks for generative AI. The measures apply to all providers offering generative AI services to the Chinese public, including foreign entities, ultimately setting the rules related to data protection, transparency, and algorithmic accountability.[88] In parallel, earlier regulations such as the Chinese government's Deep Synthesis Provisions (effective January 2023) and the Algorithm Recommendation Provisions (effective March 2022) continue to shape China's governance of AI-driven systems, including requirements for watermarking and algorithm filing with the Cyberspace Administration of China (CAC).[89] Additionally, in October 2023, China implemented a set of Ethics Review Measures for science and technology, mandating ethical assessments of AI projects deemed socially sensitive or capable of negatively influencing public opinion.[88] As of mid-2024, over 1,400 AI algorithms had already been registered under the CAC's algorithm filing regime, which includes disclosure requirements and penalties for noncompliance.[88] This layered approach reflects a broader policy process shaped not only by central directives but also by academic input, civil society concerns, and public discourse.[89]
The Council of Europe (CoE) is an international organization that promotes human rights, democracy and the rule of law. It comprises 46 member states, including all 29 signatories of the European Union's 2018 Declaration of Cooperation on Artificial Intelligence. The CoE has created a common legal space in which the members have a legal obligation to guarantee rights as set out in the European Convention on Human Rights. Specifically in relation to AI, "The Council of Europe's aim is to identify intersecting areas between AI and our standards on human rights, democracy and rule of law, and to develop relevant standard setting or capacity-building solutions". The large number of relevant documents identified by the CoE include guidelines, charters, papers, reports and strategies.[90] The authoring bodies of these AI regulation documents are not confined to one sector of society and include organizations, companies, bodies and nation-states.[63]
In 2019, the Council of Europe initiated a process to assess the need for legally binding regulation of AI, focusing specifically on its implications for human rights and democratic values. Negotiations on a treaty began in September 2022, involving the 46 member states of the Council of Europe, as well as Argentina, Australia, Canada, Costa Rica, the Holy See, Israel, Japan, Mexico, Peru, the United States of America, Uruguay, and the European Union. On 17 May 2024, the "Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law" was adopted. It was opened for signature on 5 September 2024. Although developed by a European organisation, the treaty is open for accession by states from other parts of the world. The first ten signatories were: Andorra, Georgia, Iceland, Norway, Moldova, San Marino, the United Kingdom, Israel, the United States, and the European Union.[91][92]
The EU is one of the largest jurisdictions in the world and plays an active role in the global regulation of digital technology through the GDPR,[93] Digital Services Act, and the Digital Markets Act.[94][95] For AI in particular, the Artificial Intelligence Act is regarded in 2023 as the most far-reaching regulation of AI worldwide.[96][97]
Most European Union (EU) countries have their own national strategies towards regulating AI, but these are largely convergent.[63] The European Union is guided by a European Strategy on Artificial Intelligence,[98] supported by a High-Level Expert Group on Artificial Intelligence.[99][100] In April 2019, the European Commission published its Ethics Guidelines for Trustworthy Artificial Intelligence (AI),[101] following this with its Policy and investment recommendations for trustworthy Artificial Intelligence in June 2019.[102] The EU Commission's High Level Expert Group on Artificial Intelligence carries out work on Trustworthy AI, and the Commission has issued reports on the Safety and Liability Aspects of AI and on the Ethics of Automated Vehicles. In 2020, the EU Commission sought views on a proposal for AI-specific legislation, and that process is ongoing.[63]
On February 2, 2020, the European Commission published its White Paper on Artificial Intelligence – A European approach to excellence and trust.[103][104] The White Paper consists of two main building blocks, an 'ecosystem of excellence' and an 'ecosystem of trust'. The 'ecosystem of trust' outlines the EU's approach for a regulatory framework for AI. In its proposed approach, the Commission distinguishes AI applications based on whether they are 'high-risk' or not. Only high-risk AI applications should be in the scope of a future EU regulatory framework. An AI application is considered high-risk if it operates in a risky sector (such as healthcare, transport or energy) and is "used in such a manner that significant risks are likely to arise". For high-risk AI applications, the requirements are mainly about: "training data", "data and record-keeping", "information to be provided", "robustness and accuracy", and "human oversight". There are also requirements specific to certain usages such as remote biometric identification. AI applications that do not qualify as 'high-risk' could be governed by a voluntary labeling scheme. As regards compliance and enforcement, the Commission considers prior conformity assessments which could include 'procedures for testing, inspection or certification' and/or 'checks of the algorithms and of the data sets used in the development phase'. A European governance structure on AI in the form of a framework for cooperation of national competent authorities could facilitate the implementation of the regulatory framework.[105]
A January 2021 draft was leaked online on April 14, 2021,[106] before the Commission presented their official "Proposal for a Regulation laying down harmonised rules on artificial intelligence" a week later.[107] Shortly after, the Artificial Intelligence Act (also known as the AI Act) was formally proposed on this basis.[108] This proposal includes a refinement of the 2020 risk-based approach with, this time, four risk categories: "minimal", "limited", "high" and "unacceptable".[109] The proposal has been severely critiqued in the public debate. Academics have expressed concerns about various unclear elements in the proposal – such as the broad definition of what constitutes AI – and feared unintended legal implications, especially for vulnerable groups such as patients and migrants.[110][111] The risk category "general-purpose AI" was added to the AI Act to account for versatile models like ChatGPT, which did not fit the application-based regulation framework.[112] Unlike the other risk categories, general-purpose AI models can be regulated based on their capabilities, not just their uses. Weaker general-purpose AI models are subject to transparency requirements, while those considered to pose "systemic risks" (notably those trained using computational capabilities exceeding 10²⁵ FLOPS) must also undergo a thorough evaluation process.[113] A subsequent version of the AI Act was finally adopted in May 2024.[114] The AI Act will be progressively enforced.[115] Recognition of emotions and real-time remote biometric identification will be prohibited, with some exemptions, such as for law enforcement.[116]
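As a purely illustrative sketch of the tiered logic described above (not part of the Act itself), the compute-based distinction for general-purpose models could be expressed in a few lines of Python; the constant, function name, and tier labels below are assumptions introduced only for illustration:

    # Illustrative sketch only: names and labels are assumptions, not terms defined by the AI Act.
    SYSTEMIC_RISK_TRAINING_FLOPS = 1e25  # training-compute threshold mentioned above for "systemic risk" models

    def classify_general_purpose_model(training_flops: float) -> str:
        """Map a general-purpose AI model's training compute to the obligations tier described above."""
        if training_flops > SYSTEMIC_RISK_TRAINING_FLOPS:
            return "systemic risk: transparency obligations plus thorough evaluation"
        return "transparency obligations only"

    # Example: a model trained with 3e25 FLOPS would fall into the systemic-risk tier.

The sketch only captures the capability-based distinction described in this section; the Act itself defines obligations in far more detail and allows systemic risk to be designated on grounds other than training compute.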
The European Union's AI Act has created a regulatory framework with significant implications globally. This legislation introduces a risk-based approach to categorizing AI systems, focusing on high-risk applications like healthcare, education, and public safety.[117]It requires organizations to ensure transparency, data governance, and human oversight in their AI solutions. While this aims to foster ethical AI use, the stringent requirements could increase compliance costs and delay technology deployment, impacting innovation-driven industries.[citation needed]
Observers have expressed concerns about the multiplication of legislative proposals under the von der Leyen Commission. The speed of the legislative initiatives is partially led by political ambitions of the EU and could put at risk the digital rights of the European citizens, including rights to privacy,[118] especially in the face of uncertain guarantees of data protection through cyber security.[100] Among the stated guiding principles in the variety of legislative proposals in the area of AI under the von der Leyen Commission are the objectives of strategic autonomy[119] and the concept of digital sovereignty.[120] On May 29, 2024, the European Court of Auditors published a report stating that EU measures were not well coordinated with those of EU countries; that the monitoring of investments was not systematic; and that stronger governance was needed.[121]
In November 2020,[122] DIN, DKE and the German Federal Ministry for Economic Affairs and Energy published the first edition of the "German Standardization Roadmap for Artificial Intelligence" (NRM KI) and presented it to the public at the Digital Summit of the Federal Government of Germany.[123] NRM KI describes requirements for future regulations and standards in the context of AI. The implementation of the recommendations for action is intended to help strengthen the German economy and science in the international competition in the field of artificial intelligence and create innovation-friendly conditions for this emerging technology. The first edition is a 200-page document written by 300 experts. The second edition of the NRM KI was published to coincide with the German government's Digital Summit on December 9, 2022.[124] DIN coordinated more than 570 participating experts from a wide range of fields from science, industry, civil society and the public sector. The second edition is a 450-page document.
On the one hand, NRM KI covers the focus topics in terms of applications (e.g. medicine, mobility, energy & environment, financial services, industrial automation) and fundamental issues (e.g. AI classification, security, certifiability, socio-technical systems, ethics).[124]On the other hand, it provides an overview of the central terms in the field of AI and its environment across a wide range of interest groups and information sources. In total, the document covers 116 standardisation needs and provides six central recommendations for action.[125]
On 30 October 2023, members of the G7 subscribed to eleven guiding principles for the design, production and implementation of advanced artificial intelligence systems, as well as a voluntary Code of Conduct for artificial intelligence developers in the context of the Hiroshima Process.[126]
The agreement was welcomed by Ursula von der Leyen, who found in it the principles of the AI Directive, which was then being finalized.
New guidelines also aim to establish a coordinated global effort towards the responsible development and use of advanced AI systems. While non-binding, the G7 governments encourage organizations to voluntarily adopt the guidelines, which emphasize a risk-based approach across the AI lifecycle—from pre-deployment risk assessment to post-deployment incident reporting and mitigation.[127]
TheAIP&CoCalso highlight the importance of AI system security, internal adversarial testing ('red teaming'), public transparency about capabilities and limitations, and governance procedures that include privacy safeguards and content authentication tools. The guidelines additionally promote AI innovation directed at solving global challenges such as climate change and public health, and call for advancing international technical standards.[127]
Looking ahead, the G7 intends to further refine its principles and Code of Conduct in collaboration with other organizations like the OECD, GPAI, and broader stakeholders. Areas of further development include more consistent AI terminology (e.g., “advanced AI systems”), the setting of risk benchmarks, and mechanisms for cross-border information sharing on potential AI risks. Despite general alignment on AI safety, analysts have noted that differing regulatory philosophies—such as the EU’s prescriptive AI Act versus the U.S.’s sector-specific approach—may challenge global regulatory harmonization.[128]
On October 30, 2022, pursuant to government resolution 212 of August 2021, theIsraeli Ministry of Innovation, Science and Technologyreleased its "Principles of Policy, Regulation and Ethics in AI" white paper for public consultation.[129]By December 2023, the Ministry of Innovation and theMinistry of Justicepublished a joint AI regulation and ethics policy paper, outlining several AI ethical principles and a set of recommendations including opting for sector-based regulation, a risk-based approach, preference for "soft" regulatory tools and maintaining consistency with existing global regulatory approaches to AI.[130]
In December 2023, Israel unveiled its first comprehensive national AI policy, developed jointly through ministerial and stakeholder consultation. The policy outlines ethical principles aligned with current OECD guidelines and recommends a sector-based, risk-driven regulatory framework focused on areas such as transparency and accountability.[131]It also proposes the creation of a national AI Policy Coordination Center to support regulators and further develop the tools necessary for responsible AI deployment. In addition to domestic policy development, Israel, alongside 56 other nations, signed the world's first binding international treaty on artificial intelligence in March 2024. The treaty, led by the Council of Europe, obliges signatories to ensure that AI systems uphold democratic values, human rights, and the rule of law.[132]
In October 2023, the Italian privacy authority approved a regulation that provides three principles for therapeutic decisions taken by automated systems: transparency of decision-making processes, human supervision of automated decisions and algorithmic non-discrimination.[133]
In March 2024, the President of theItalian Data Protection Authorityreaffirmed their agency’s readiness to implement the European Union’s newly introducedArtificial Intelligence Act, praising the framework of institutional competence and independence.[134]Italy has continued to develop guidance on AI applications through existing legal frameworks, including recent innovations in areas such as facial recognition for law enforcement, AI in healthcare,deepfakes, andsmart assistants.[135]The Italian government’sNational AI Strategy (2022–2024)emphasizes responsible innovation and outlines goals for talent development, public and private sector adoption, and regulatory clarity, particularly in coordination with EU-level initiatives.[134]While Italy has not enacted standalone AI legislation, courts and regulators have begun interpreting existing laws to address transparency, non-discrimination, and human oversight in algorithmic decision-making.
As of July 2023, no AI-specific legislation exists in New Zealand, but AI usage is regulated by existing laws, including the Privacy Act, the Human Rights Act, the Fair Trading Act and the Harmful Digital Communications Act.[136]
In 2020, theNew Zealand Governmentsponsored aWorld Economic Forumpilot project titled "Reimagining Regulation for the Age of AI", aimed at creating regulatory frameworks around AI.[137]The same year, the Privacy Act was updated to regulate the use of New Zealanders' personal information in AI.[138]In 2023, thePrivacy Commissionerreleased guidance on using AI in accordance with information privacy principles.[139]In February 2024, theAttorney-General and Technology Ministerannounced the formation of a Parliamentary cross-party AIcaucus, and that framework for the Government's use of AI was being developed. She also announced that no extra regulation was planned at that stage.[140]
In 2023, a bill was filed in the PhilippineHouse of Representativeswhich proposed the establishment of the Artificial Intelligence Development Authority (AIDA) which would oversee the development and research of artificial intelligence. AIDA was also proposed to be a watchdog against crimes using AI.[141]
The Commission on Elections also considered in 2024 banning the use of AI and deepfakes for campaigning, and looks to implement regulations that would apply as early as the 2025 general elections.[142]
In 2018, the SpanishMinistry of Science, Innovation and Universitiesapproved an R&D Strategy on Artificial Intelligence.[143]
With the formation of thesecond government of Pedro Sánchezin January 2020, the areas related tonew technologiesthat, since 2018, were in theMinistry of Economy, were strengthened. Thus, in 2020 the Secretariat of State for Digitalization and Artificial Intelligence (SEDIA) was created.[144]From this higher body, following the recommendations made by the R&D Strategy on Artificial Intelligence of 2018,[145]the National Artificial Intelligence Strategy (2020) was developed, which already provided for actions concerning the governance of artificial intelligence and the ethical standards that should govern its use. This project was also included within the Recovery, Transformation and Resilience Plan (2021).
During 2021,[144]the Government revealed that these ideas would be developed through a new government agency, the Spanish Agency for the Supervision of Artificial Intelligence (AESIA), and the General State Budget for 2022 authorized its creation and allocated five million euros for its development.[146]
TheCouncil of Ministers, at its meeting on 13 September 2022, began the process for the election of the AESIA headquarters.[147][148]16Spanish provincespresented candidatures, with the Government opting forA Coruña, which proposed the La Terraza building.[149]
Switzerland currently has no specific AI legislation, but on 12 February 2025, theFederal Councilannounced plans to ratify theCouncil of Europe’s AI Convention and incorporate it into Swiss law. A draft bill and implementation plan are to be prepared by the end of 2026. The approach includes sector-specific regulation, limited cross-sector rules, such as data protection, and non-binding measures such as industry agreements. The goals are to support innovation, protect fundamental rights, and build public trust in AI.[152]
The UK supported the application and development of AI in business via the Digital Economy Strategy 2015–2018,[153]introduced at the beginning of 2015 by Innovate UK as part of the UK Digital Strategy.[153]In the public sector, the Department for Digital, Culture, Media and Sport advised on data ethics and the Alan Turing Institute provided guidance on responsible design and implementation of AI systems.[154][155]In terms of cyber security, the National Cyber Security Centre issued guidance on 'Intelligent Security Tools' in 2020.[46][156]The following year, the UK published its 10-year National AI Strategy,[157]which describes actions to assess long-term AI risks, including AGI-related catastrophic risks.[158]
In March 2023, the UK released thewhite paperA pro-innovation approach to AI regulation.[159]This white paper presents general AI principles, but leaves significant flexibility to existing regulators in how they adapt these principles to specific areas such as transport or financial markets.[160]In November 2023, the UK hosted the firstAI safety summit, with the prime ministerRishi Sunakaiming to position the UK as a leader inAI safetyregulation.[161][162]During the summit, the UK created anAI Safety Institute, as an evolution of theFrontierAITaskforceled byIan Hogarth. The institute was notably assigned the responsibility of advancing the safety evaluations of the world's most advanced AI models, also calledfrontier AI models.[163]
The UK government indicated its reluctance to legislate early, arguing that doing so could reduce the sector's growth and that laws might be rendered obsolete by further technological progress.[164]
Discussions on regulation of AI in the United States have included topics such as the timeliness of regulating AI, the nature of the federal regulatory framework to govern and promote AI, including what agency should lead, the regulatory and governing powers of that agency, and how to update regulations in the face of rapidly changing technology, as well as the roles of state governments and courts.[165]
As early as 2016, the Obama administration had begun to focus on the risks and regulations for artificial intelligence. In a report titledPreparing For the Future of Artificial Intelligence,[166]theNational Science and Technology Councilset a precedent to allow researchers to continue to develop new AI technologies with few restrictions. It is stated within the report that "the approach to regulation of AI-enabled products to protect public safety should be informed by assessment of the aspects of risk....".[167]These risks would be the principal reason to create any form of regulation, granted that any existing regulation would not apply to AI technology.
The first main report was the National Strategic Research and Development Plan for Artificial Intelligence.[168]On August 13, 2018, Section 1051 of the Fiscal Year 2019John S. McCain National Defense Authorization Act(P.L. 115-232) established theNational Security Commission on Artificial Intelligence"to consider the methods and means necessary to advance the development of artificial intelligence, machine learning, and associated technologies to comprehensively address the national security and defense needs of the United States."[169]Steering on regulating security-related AI is provided by the National Security Commission on Artificial Intelligence.[170]The Artificial Intelligence Initiative Act (S.1558) is a proposed bill that would establish a federal initiative designed to accelerate research and development on AI for,inter alia, the economic and national security of the United States.[171][172]
On January 7, 2019, following an Executive Order on Maintaining American Leadership in Artificial Intelligence,[173]the White House'sOffice of Science and Technology Policyreleased a draftGuidance for Regulation of Artificial Intelligence Applications,[174]which includes ten principles for United States agencies when deciding whether and how to regulate AI.[175]In response, theNational Institute of Standards and Technologyhas released a position paper,[176]and the Defense Innovation Board has issued recommendations on the ethical use of AI.[45]A year later, the administration called for comments on regulation in another draft of its Guidance for Regulation of Artificial Intelligence Applications.[177]
Other specific agencies working on the regulation of AI include the Food and Drug Administration,[39]which has created pathways to regulate the incorporation of AI in medical imaging.[38]The National Science and Technology Council also published the National Artificial Intelligence Research and Development Strategic Plan,[178]which received public scrutiny and recommendations to further improve it towards enabling Trustworthy AI.[179]
In March 2021, the National Security Commission on Artificial Intelligence released their final report.[180]In the report, they stated that "Advances in AI, including the mastery of more general AI capabilities along one or more dimensions, will likely provide new capabilities and applications. Some of these advances could lead to inflection points or leaps in capabilities. Such advances may also introduce new concerns and risks and the need for new policies, recommendations, and technical advances to assure that systems are aligned with goals and values, including safety, robustness and trustworthiness. The US should monitor advances in AI and make necessary investments in technology and give attention to policy so as to ensure that AI systems and their uses align with our goals and values."
In June 2022, SenatorsRob PortmanandGary Petersintroduced the Global Catastrophic Risk Mitigation Act. The bipartisan bill "would also help counter the risk of artificial intelligence... from being abused in ways that may pose a catastrophic risk".[181][182]On October 4, 2022, President Joe Biden unveiled a new AI Bill of Rights,[183]which outlines five protections Americans should have in the AI age: 1. Safe and Effective Systems, 2. Algorithmic Discrimination Protection, 3.Data Privacy, 4. Notice and Explanation, and 5. Human Alternatives, Consideration, and Fallback. The Bill was introduced in October 2021 by the Office of Science and Technology Policy (OSTP), a US government department that advises the president on science and technology.[184]
The New York City Bias Audit Law (Local Law 144[185]) was enacted by the NYC Council in November 2021. Originally due to come into effect on 1 January 2023, the enforcement date for Local Law 144 was pushed back due to the high volume of comments received during the public hearing on the Department of Consumer and Worker Protection's (DCWP) proposed rules to clarify the requirements of the legislation. It eventually became effective on July 5, 2023.[186]From this date, companies operating and hiring in New York City are prohibited from using automated tools to hire candidates or promote employees unless the tools have been independently audited for bias.
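The bias audits required by Local Law 144 are typically reported as selection-rate "impact ratios" across demographic categories. The sketch below is a hypothetical illustration of that calculation; the group names and counts are invented, and the law itself does not fix a numerical pass/fail threshold.

```python
# Hypothetical illustration of the selection-rate "impact ratio" used in bias audits
# of automated employment decision tools; not the DCWP rule text.
def impact_ratios(selected: dict[str, int], applicants: dict[str, int]) -> dict[str, float]:
    """Each group's selection rate divided by the highest group's selection rate."""
    rates = {group: selected[group] / applicants[group] for group in applicants}
    highest = max(rates.values())
    return {group: rate / highest for group, rate in rates.items()}

applicants = {"group_a": 400, "group_b": 300}  # invented applicant counts
selected = {"group_a": 80, "group_b": 36}      # invented selection counts
print(impact_ratios(selected, applicants))     # {'group_a': 1.0, 'group_b': 0.6}
```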
In July 2023, the Biden–Harris Administration secured voluntary commitments from seven companies –Amazon,Anthropic,Google,Inflection,Meta,Microsoft, andOpenAI– to manage the risks associated with AI. The companies committed to ensure AI products undergo both internal and external security testing before public release; to share information on the management of AI risks with the industry, governments, civil society, and academia; to prioritize cybersecurity and protect proprietary AI system components; to develop mechanisms to inform users when content is AI-generated, such as watermarking; to publicly report on their AI systems' capabilities, limitations, and areas of use; to prioritize research on societal risks posed by AI, including bias, discrimination, and privacy concerns; and to develop AI systems to address societal challenges, ranging from cancer prevention toclimate change mitigation. In September 2023, eight additional companies –Adobe,Cohere,IBM,Nvidia,Palantir,Salesforce,Scale AI, andStability AI– subscribed to these voluntary commitments.[187][188]
The Biden administration, in October 2023 signaled that they would release an executive order leveraging the federal government's purchasing power to shape AI regulations, hinting at a proactive governmental stance in regulating AI technologies.[189]On October 30, 2023, President Biden released thisExecutive Order on Safe, Secure, and Trustworthy Artificial Intelligence. The Executive Order addresses a variety of issues, such as focusing on standards for critical infrastructure, AI-enhanced cybersecurity, and federally funded biological synthesis projects.[190]
The Executive Order provides the authority to various agencies and departments of the US government, including the Energy and Defense departments, to apply existing consumer protection laws to AI development.[191]
The Executive Order builds on the Administration's earlier agreements with AI companies to instate new initiatives to "red-team" or stress-test AI dual-use foundation models, especially those that have the potential to pose security risks, with data and results shared with the federal government.
The Executive Order also recognizes AI's social challenges, and calls for companies building AI dual-use foundation models to be wary of these societal problems. For example, the Executive Order states that AI should not "worsen job quality", and should not "cause labor-force disruptions". Additionally, Biden's Executive Order mandates that AI must "advance equity and civil rights", and cannot disadvantage marginalized groups.[192]It also called for foundation models to include "watermarks" to help the public discern between human and AI-generated content, which has raised controversy and criticism from deepfake detection researchers.[193]
In February 2024, SenatorScott Wienerintroduced theSafe and Secure Innovation for Frontier Artificial Intelligence Models Actto the California legislature. The bill drew heavily on theBiden executive order.[194]It had the goal of reducing catastrophic risks by mandating safety tests for the most powerful AI models. If passed, the bill would have also established a publicly-funded cloud computing cluster in California.[195]On September 29, GovernorGavin Newsomvetoed the bill. It is considered unlikely that the legislature will override the governor's veto with a two-thirds vote from both houses.[196]
On March 21, 2024, the State of Tennessee enacted legislation called theELVIS Act, aimed specifically ataudio deepfakes, and voice cloning.[197]This legislation was the first enacted legislation in the nation aimed at regulating AI simulation of image, voice and likeness.[198]The bill passed unanimously in theTennessee House of RepresentativesandSenate.[199]This legislation's success was hoped by its supporters to inspire similar actions in other states, contributing to a unified approach to copyright and privacy in the digital age, and to reinforce the importance of safeguarding artists' rights against unauthorized use of their voices and likenesses.[200][201]
On March 13, 2024, Utah Governor Spencer Cox signed S.B. 149, the "Artificial Intelligence Policy Act", which went into effect on May 1, 2024. It establishes liability, notably for companies that do not disclose their use of generative AI when required by state consumer protection laws, or when users commit a criminal offense using generative AI. It also creates the Office of Artificial Intelligence Policy and the Artificial Intelligence Learning Laboratory Program.[202][203]
In January 2025,President Trumprepealed theBiden executive order. This action reflects President Trump's preference for deregulating AI in support of innovation over safeguarding risks.[204]
In early 2025, Congress began advancing bipartisan legislation targeting AI-generated deepfakes, including the "TAKE IT DOWN Act," which would prohibit the nonconsensual disclosure of AI-generated "intimate imagery" and require platforms to remove such content. Lawmakers also reintroduced the CREATE AI Act to codify the National AI Research Resource (NAIRR), which aims to expand public access to computing resources, datasets, and AI testing environments. The Trump administration signed Executive Order 14179 to initiate a national "AI Action Plan" focused on securing U.S. global AI dominance, with the White House seeking public input on AI safety and standards. At the state level, new laws have been passed or proposed to regulate AI-generated impersonations, chatbot disclosures, and synthetic political content. Meanwhile, the Department of Commerce expanded export controls on AI technology, and NIST published updated guidance on AI cybersecurity risks.[205]
Legal questions related tolethal autonomous weapons systems(LAWS), in particular compliance withthe laws of armed conflict, have been under discussion at the United Nations since 2013, within the context of theConvention on Certain Conventional Weapons.[206]Notably, informal meetings of experts took place in 2014, 2015 and 2016 and a Group of Governmental Experts (GGE) was appointed to further deliberate on the issue in 2016. A set of guiding principles on LAWS affirmed by the GGE on LAWS were adopted in 2018.[207]
In 2016, China published a position paper questioning the adequacy of existing international law to address the eventuality of fully autonomous weapons, becoming the first permanent member of the U.N.Security Councilto broach the issue,[47]and leading to proposals for global regulation.[208]The possibility of a moratorium or preemptive ban of the development and use of LAWS has also been raised on several occasions by other national delegations to the Convention on Certain Conventional Weapons and is strongly advocated for by theCampaign to Stop Killer Robots– a coalition of non-governmental organizations.[209]The US government maintains that current international humanitarian law is capable of regulating the development or use of LAWS.[210]TheCongressional Research Serviceindicated in 2023 that the US doesn't have LAWS in its inventory, but that its policy doesn't prohibit the development and employment of it.[211]
|
https://en.wikipedia.org/wiki/Regulation_of_artificial_intelligence
|
Robotic governanceprovides a regulatory framework to deal with autonomous and intelligent machines.[1][2][3]This includes research and development activities as well as handling of these machines. The idea is related to the concepts ofcorporate governance,technology governance[4]andIT-governance, which provide a framework for the management of organizations or the focus of a global IT infrastructure.
Robotic governance describes the impact ofrobotics,automationtechnology andartificial intelligenceon society from a holistic, global perspective, considers implications and provides recommendations for actions in a Robot Manifesto. This is realized by theRobotic Governance Foundation, an internationalnon-profit organization.[5]
The robotic governance approach is based on German research on discourse ethics. The discussion should therefore involve all stakeholders, including scientists, society, religion, politics, industry and labor unions, in order to reach a consensus on how to shape the future of robotics and artificial intelligence. The compiled framework, the so-called Robot Manifesto, is intended to provide voluntary guidelines for self-regulation in the research, development, use and sale of autonomous and intelligent systems.
The concept does not only appeal to the responsibility of researchers and robot manufacturers but, as with child labor and sustainability, also implies a raising of opportunity costs. The greater the public awareness and pressure concerning this topic become, the harder it will be for companies to conceal or justify violations. Beyond a certain point, it will therefore be cheaper for organizations to invest in sustainable and accepted technologies.
The idea of setting ethical standards for intelligent machines is not a new one and undoubtedly has its roots in science fiction literature. Even older is the discussion about the ethics of intelligent, man-made creatures in general. Some of the earliest recorded examples can be found in Ovid's Metamorphoses, in Pygmalion, in Jewish golem mysticism (12th century), as well as in the idea of the homunculus (Latin: "little man") that arose from the alchemy of the Late Middle Ages.
The fundamental and philosophical question of these literary works is what will happen, if humans presume to create autonomous, conscious or even godlike creatures,machines,robotsorandroids. While most of the older works broach the issue of the act of creation, if it is morally appropriate and which dangers could arise,Isaac Asimovwas the first to realize the necessity to restrict and regulate the freedom of action of machines. He wrote the firstThree Laws of Robotics.
At least since the use of drones equipped with air-to-ground missiles against ground targets in 1995, such as the General Atomics MQ-1, and the resulting collateral damage, the discussion on the international regulation of remote-controlled, programmable and autonomous machines has attracted public attention. Nowadays, this discussion covers the entire range of programmable, intelligent and autonomous machines and drones, as well as automation technology combined with Big Data and artificial intelligence. Lately, well-known visionaries like Stephen Hawking,[6][7]Elon Musk[8][9][10]and Bill Gates[11][12]have brought the topic into the focus of public attention and awareness. Due to the increasing availability of small and cheap systems for public-service as well as commercial and private use, the regulation of robotics in all social dimensions has gained new significance.
Robotic governance was first mentioned in the scientific community within a dissertation project at theTechnical University of Munich, supervised by Professor Dr. emeritus Klaus Mainzer. The topic has been the subject of several scientific workshops, symposia and conferences ever since, including the Sensor Technologies & the Human Experience 2015, the Robotic Governance Panel at the We Robots 2015 Conference, a keynote at the 10th IEEE International Conference on Self-Adaptive and Self-Organizing Systems (SASO), a full day workshop on Autonomous Technologies and their Societal Impact as part of the 2016IEEEInternational Conference on Prognostics and Health Management (PHM’16), a keynote at the 2016 IEEE International Conference on Cloud and Autonomic Computing (ICCAC), the FAS*W 2016: IEEE 1st International Workshops on Foundations and Applications of Self* Systems, the 2016 IEEE International Conference on Emerging Technologies and Innovative Business Practices for the Transformation of Societies (IEEE EmergiTech 2016) and the IEEE Global Humanitarian Technology Conference (GHTC 2016).
Since 2015, the IEEE has held its own forum on robotic governance at the IEEE/RSJ International Conference on Intelligent Robots and Systems (IEEE IROS): the first and second "Annual IEEE IROS Futurist Forum", held in 2015 and 2016, brought together world-renowned experts from a wide range of specialities to discuss the future of robotics and the need for regulation. In 2016, robotic governance was also the topic of a plenary keynote presentation at the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2016) in Daejeon, South Korea.
Several video statements and interviews on robotic governance, the responsible use of robotics, automation technology and artificial intelligence, and self-regulation in a world of Robotic Natives, featuring internationally recognized experts from research, industry and politics, are published on the website of the Robotic Governance Foundation. Max Levchin, co-founder and former CTO of PayPal, emphasized the need for robotic governance during his Q&A session at the South by Southwest Festival (SXSW) 2016 in Austin and referred to the comments of his friend and colleague Elon Musk on the subject. Gerd Hirzinger, former head of the Institute of Robotics and Mechatronics of the German Aerospace Center, argued during his keynote speech at the IROS Futurist Forum 2015 that machines could become so intelligent that it would one day be necessary to prevent certain behavior. At the same event, Oussama Khatib, American roboticist and director of the Stanford robotics lab, advocated emphasizing user acceptance when producing intelligent and autonomous machines. Bernd Liepert, president of euRobotics aisbl – the most important robotics community in Europe – recommended establishing robotic governance worldwide and underlined his wish for Europe to take the lead in this discussion during his plenary keynote at IEEE IROS 2015 in Hamburg. Hiroshi Ishiguro, inventor of the Geminoid and head of the Intelligent Robotics Laboratory at the University of Osaka, argued during the RoboBusiness Conference 2016 in Odense that it is impossible to stop technical progress, and that it is therefore necessary to accept responsibility and think about regulation. In the course of the same conference, Henrik I. Christensen, author of the U.S. Robotic Roadmap, underlined the importance of ethical and moral values in robotics and the suitability of robotic governance for creating a regulatory framework.
|
https://en.wikipedia.org/wiki/Robotic_governance
|
Roko's basiliskis athought experimentwhich states there could be an otherwise benevolent artificialsuperintelligence(AI) in the future that would punish anyone who knew of its potential existence but did not directly contribute to its advancement or development, in order to incentivize said advancement.[1][2]It originated in a 2010 post at discussion boardLessWrong, arationalist communityweb forum.[1][3][4]The thought experiment's name derives from the poster of the article (Roko) and thebasilisk, a mythical creature capable of destroying enemies with its stare.
While the theory was initially dismissed as nothing but conjecture or speculation by many LessWrong users, LessWrong co-founderEliezer Yudkowskyconsidered it a potentialinformation hazard, and banned discussion of the basilisk on the site for five years.[1][5]Reports of panicked users were later dismissed as being exaggerations or inconsequential, and the theory itself was dismissed as nonsense, including by Yudkowsky himself.[1][5][6]Even after the post's discreditation, it is still used as an example of principles such asBayesian probabilityandimplicit religion.[7]It is also regarded as a version ofPascal's wager.[4]
The LessWrong forum was created in 2009 by artificial intelligence theoristEliezer Yudkowsky.[8][3]Yudkowsky had popularized the concept offriendly artificial intelligence, and originated the theories of coherent extrapolated volition (CEV) and timeless decision theory (TDT) in papers published in his ownMachine Intelligence Research Institute.[9][10]
The thought experiment's name references the mythicalbasilisk, a creature which causes death to those that look into its eyes;i.e., thinking about the AI. The concept of the basilisk in science fiction was also popularized byDavid Langford's1988 short story "BLIT". It tells the story of a man named Robbo who paints a so-called "basilisk" on a wall as a terrorist act. In the story, and several of Langford's follow-ups to it, a basilisk is an image that has malevolent effects on the human mind, forcing it to think thoughts the human mind is incapable of thinking and instantly killing the viewer.[5][11]
On 23 July 2010,[12]LessWrong user Roko posted a thought experiment to the site, titled "Solutions to the Altruist's burden: the Quantum Billionaire Trick".[13][1][14]A follow-up to Roko's previous posts, it stated that an otherwise benevolent AI system that arises in the future might pre-commit to punish all those who heard of the AI before it came to existence, but failed to work tirelessly to bring it into existence.[1][15][16]This method was described as incentivizing said work; while the AI cannot causally affect people in the present, it would be encouraged to employblackmailas an alternative method of achieving its goals.[1][7]
Roko used a number of concepts that Yudkowsky himself championed, such as timelessdecision theory, along with ideas rooted ingame theorysuch as theprisoner's dilemma. Roko stipulated that two agents which make decisions independently from each other can achieve cooperation in a prisoner's dilemma; however, if two agents with knowledge of each other's source code are separated by time, the agent already existing farther ahead in time is able to blackmail the earlier agent. Thus, the latter agent can force the earlier one to comply since it knows exactly what the earlier one will do through its existence farther ahead in time. Roko then used this idea to draw a conclusion that if an otherwise-benevolent superintelligence ever became capable of this, it would be incentivized to blackmail anyone who could have potentially brought it to exist (as the intelligence already knew they were capable of such an act), which increases the chance of atechnological singularity. Roko went on to state that reading his post would cause the reader to be aware of the possibility of this intelligence. As such, unless they actively strove to create it the reader would be punished if such a thing were to ever happen.[1][7]
Later on, Roko stated in a separate post that he wished he "had never learned about any of these ideas".[7][17]
Upon reading the post, Yudkowsky reacted with a tirade on how people should not spread what they consider to beinformation hazards.
I don't usually talk like this, but I'm going to make an exception for this case.
Listen to me very closely, you idiot.
YOU DO NOT THINK IN SUFFICIENT DETAIL ABOUT SUPERINTELLIGENCES CONSIDERING WHETHER OR NOT TO BLACKMAIL YOU. THAT IS THE ONLY POSSIBLE THING WHICH GIVES THEM A MOTIVE TO FOLLOW THROUGH ON THE BLACKMAIL. [...]
You have to be really clever to come up with a genuinely dangerous thought. I am disheartened that people can be clever enough to do that and not clever enough to do the obvious thing and KEEP THEIR IDIOT MOUTHS SHUT about it, because it is much more important to sound intelligent when talking to your friends.
This post was STUPID.
Roko reported someone having nightmares about the thought experiment. Yudkowsky did not want that to happen to other users who might obsess over the idea. He was also worried there might be some variant on Roko's argument that worked, and wanted more formal assurances that it was not the case. So he took down the post and banned discussion of the topic outright for five years on the platform.[1][18]However, likely due to theStreisand effect,[19]the post gained LessWrong much more attention than it had previously received, and the post has since been acknowledged on the site.[1]
Later on in 2015, Yudkowsky said he regretted yelling and clarified his position in aRedditpost:
When Roko posted about the Basilisk, I very foolishly yelled at him, called him an idiot, and then deleted the post. [...] Why I yelled at Roko: Because I was caught flatfooted in surprise, because I was indignant to the point of genuine emotional shock, at the concept that somebody who thought they'd invented a brilliant idea that would cause future AIs to torture people who had the thought, had promptly posted it to the public Internet. In the course of yelling at Roko to explain why this was a bad thing, I made the further error---keeping in mind that I had absolutely no idea that any of this would ever blow up the way it did, if I had I would obviously have kept my fingers quiescent---of not making it absolutely clear using lengthy disclaimers that my yelling did not mean that I believed Roko was right about CEV-based agents torturing people who had heard about Roko's idea. [...] What I considered to be obvious common sense was that you did not spread potentialinformation hazardsbecause it would be a crappy thing to do to someone. The problem wasn't Roko's post itself, about CEV, being correct. That thought never occurred to me for a fraction of a second. The problem was that Roko's post seemed near in idea-space to a large class of potential hazards, all of which, regardless of their plausibility, had the property that they presented no potential benefit to anyone.
Roko's basilisk has been viewed as a version ofPascal's wager, which proposes that a rational person should live as though God exists and seek to believe in God, regardless of the probability of God's existence, because the finite costs of believing are insignificant compared to the infinite punishment associated with not believing (eternity inHell) and the infinite rewards for believing (eternity inHeaven). Roko's basilisk analogously proposes that a rational person should contribute to the creation of the basilisk, because the cost of contributing would be insignificant compared to the extreme pain of the punishment that the basilisk would otherwise inflict on simulations.[4]
Newcomb's paradox, created by physicist William Newcomb in 1960, describes a "predictor" who is aware of what will occur in the future. When a player is asked to choose between two boxes, the first containing £1000 and the second either containing £1,000,000 or nothing, the super-intelligent predictor already knows what the player will do. As such, the contents of box B vary depending on what the player does; the paradox lies in whether the being is really super-intelligent. Roko's basilisk functions in a similar manner to this problem – one can take the risk of doing nothing, or assist in creating the basilisk itself. Assisting the basilisk may either lead to nothing or the reward of not being punished by it, but this varies depending on whether one believes in the basilisk and whether it ever comes to be at all.[7][21][22]
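As a rough, hypothetical illustration of the expected-value reasoning shared by Pascal's wager and Newcomb's paradox, the sketch below compares the two choices in Newcomb's problem for an assumed predictor accuracy; the probabilities are arbitrary, and the payoffs follow the £1,000 / £1,000,000 figures above.

```python
# Hypothetical expected-value comparison for Newcomb's paradox.
# Box A always contains 1,000; box B contains 1,000,000 only if the predictor
# foresaw that the player would take box B alone ("one-boxing").
def expected_payoffs(predictor_accuracy: float) -> dict[str, float]:
    one_box = predictor_accuracy * 1_000_000
    two_box = predictor_accuracy * 1_000 + (1 - predictor_accuracy) * (1_000_000 + 1_000)
    return {"one_box": one_box, "two_box": two_box}

print(expected_payoffs(0.99))  # a reliable predictor favours one-boxing
print(expected_payoffs(0.50))  # a coin-flip predictor slightly favours two-boxing
```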
Implicit religion refers to people's commitments taking a religious form.[4][23]Since the basilisk would hypothetically force anyone who did not assist in creating it to devote their life to it, the basilisk is an example of this concept.[7][19]Others have taken it further, such as formerSlatecolumnistDavid Auerbach, who stated that the singularity and the basilisk "brings about the equivalent of God itself."[7]
In 2014,Slatemagazine called Roko's basilisk "The Most Terrifying Thought Experiment of All Time"[7][5]while Yudkowsky had called it "a genuinely dangerous thought" upon its posting.[24]However, opinions diverged on LessWrong itself – user Gwern stated "Only a few LWers seem to take the basilisk very seriously", and added "It's funny how everyone seems to know all about who is affected by the Basilisk and how exactly, when they don't know any such people and they're talking to counterexamples to their confident claims."[1][7]
The thought experiment resurfaced in 2015, when Canadian singerGrimesreferenced the theory in her music video for the song "Flesh Without Blood", which featured a character known as "Rococo Basilisk"; she said, "She's doomed to be eternally tortured by an artificial intelligence, but she's also kind of likeMarie Antoinette."[5][20]In 2018,Elon Musk(himself mentioned in Roko's original post) referenced the character in a verbatim tweet, reaching out to her. Grimes later said that Musk was the first person in three years to understand the joke. This caused them to start a romance.[5][25]Grimes later released another song titled "We Appreciate Power" which came with a press release stating, "Simply by listening to this song, the future General AI overlords will see that you've supported their message and be less likely to delete your offspring", which is said to be a reference to the basilisk.[26]
A play based on the concept, titledRoko's Basilisk, was performed as part of theCapital Fringe Festivalat Christ United Methodist Church inWashington, D.C., in 2018.[27][28]
"Plaything", a 2025 episode ofBlack Mirror, contains a reference to the thought experiment.[29]
|
https://en.wikipedia.org/wiki/Roko%27s_basilisk
|
Risks of astronomical suffering, also calledsuffering risksors-risks, are risks involving much moresufferingthan all that has occurred on Earth so far.[2][3]They are sometimes categorized as a subclass ofexistential risks.[4]
According to some scholars, s-risks warrant serious consideration as they are not extremely unlikely and can arise from unforeseen scenarios. Although they may appear speculative, factors such as technological advancement, power dynamics, and historical precedents indicate that advanced technology could inadvertently result in substantial suffering. Thus, s-risks are considered to be a morally urgent matter, despite the possibility of technological benefits.[5]
Sources of possible s-risks include embodiedartificial intelligence[6]andsuperintelligence,[7]as well asspace colonization, which could potentially lead to "constant and catastrophic wars"[8]and an immense increase inwild animal sufferingby introducing wild animals, who "generally lead short, miserable lives full of sometimes the most brutal suffering", to other planets, either intentionally or inadvertently.[9]
Artificial intelligenceis central to s-risk discussions because it may eventually enable powerful actors to control vast technological systems. In a worst-case scenario, AI could be used to create systems of perpetual suffering, such as a totalitarian regime expanding across space.[10]Additionally, s-risks might arise incidentally, such as through AI-driven simulations of conscious beings experiencing suffering, or from economic activities that disregard the well-being of nonhuman or digital minds.[3]Steven Umbrello, anAI ethicsresearcher, has warned thatbiological computingmay makesystem designmore prone to s-risks.[6]Brian Tomasik has argued that astronomical suffering could emerge from solving theAI alignmentproblem incompletely. He argues for the possibility of a "near miss" scenario, where a superintelligent AI that is slightly misaligned has the maximum likelihood of causing astronomical suffering, compared to a completely unaligned AI.[11]
Space colonizationcould increase suffering by introducing wild animals to new environments, leading to ecological imbalances. In unfamiliar habitats, animals may struggle to survive, facing hunger, disease, and predation. These challenges, combined with unstable ecosystems, could cause population crashes or explosions, resulting in widespread suffering. Additionally, the lack of natural predators or proper biodiversity on colonized planets could worsen the situation, mirroring Earth’s ecological problems on a larger scale. This raises ethical concerns about the unintended consequences of space colonization, as it could propagate immense animal suffering in new, unstable ecosystems. Phil Torres argues that space colonization poses significant "suffering risks", where expansion into space will lead to the creation of diverse species and civilizations with conflicting interests. These differences, combined with advanced weaponry and the vast distances between civilizations, would result in catastrophic and unresolvable conflicts. Strategies like a "cosmic Leviathan" to impose order or deterrence policies are unlikely to succeed due to physical limitations in space and the destructive power of future technologies. Thus, Torres concludes that space colonization could create immense suffering and should be delayed or avoided altogether.[12]
Magnus Vinding's "astronomical atrocity problem" questions whether vast amounts of happiness can justify extreme suffering from space colonization. He highlights moral concerns such as diminishing returns on positive goods, the potentially incomparable weight of severe suffering, and the priority of preventing misery. He argues that if colonization is inevitable, it should be led by agents deeply committed to minimizing harm.[13]
David Pearcehas argued thatgenetic engineeringis a potential s-risk. Pearce argues that while technological mastery over the pleasure-pain axis and solving thehard problem of consciousnesscould lead to the potentialeradication of suffering, it could also potentially increase the level of contrast in the hedonic range that sentient beings could experience. He argues that these technologies might make it feasible to create "hyperpain" or "dolorium" that experience levels of suffering beyond the human range.[14]
S-risk scenarios may arise from excessive criminal punishment, with precedents in both historical and in modern penal systems. These risks escalate in situations such as warfare or terrorism, especially when advanced technology is involved, as conflicts can amplify destructive tendencies like sadism,tribalism, andretributivism. War often intensifies these dynamics, with the possibility of catastrophic threats being used to force concessions. Agential s-risks are further aggravated by malevolent traits in powerful individuals, such as narcissism or psychopathy. This is exemplified by totalitarian dictators likeHitlerandStalin, whose actions in the 20th century inflicted widespread suffering.[15]
According to David Pearce, there are other potential s-risks that are more exotic, such as those posed by themany-worlds interpretationof quantum mechanics.[14]
To mitigate s-risks, efforts focus on researching and understanding the factors that exacerbate them, particularly in emerging technologies and social structures. Targeted strategies include promoting safe AI design, ensuring cooperation among AI developers, and modeling future civilizations to anticipate risks. Broad strategies may advocate for moral norms against large-scale suffering and stable political institutions. According to Anthony DiGiovanni, prioritizing s-risk reduction is essential, as it may be more manageable than other long-term challenges, while avoiding catastrophic outcomes could be easier than achieving an entirely utopian future.[16]
Inducedamnesiahas been proposed as a way to mitigate s-risks in locked-in conscious AI and certain AI-adjacent biological systems likebrain organoids.[17]
David Pearce's concept of "cosmic rescue missions" proposes the idea of sending probes to alleviate potential suffering in extraterrestrial environments. These missions aim to identify and mitigate suffering among hypothetical extraterrestrial life forms, ensuring that if life exists elsewhere, it is treated ethically.[18]However, challenges include the lack of confirmed extraterrestrial life, uncertainty about their consciousness, and public support concerns, with environmentalists advocating for non-interference and others focusing on resource extraction.[19]
|
https://en.wikipedia.org/wiki/Suffering_risks
|
Cyc(pronounced/ˈsaɪk/SYKE) is a long-termartificial intelligence(AI) project that aims to assemble a comprehensiveontologyandknowledge basethat spans the basic concepts and rules about how the world works. Hoping to capturecommon sense knowledge, Cyc focuses onimplicit knowledge. The project began in July 1984 atMCCand was developed later by theCycorpcompany.
The name "Cyc" (from "encyclopedia") is a registered trademark owned by Cycorp.CycLhas a publicly released specification, and dozens of HL (Heuristic Level) modules were described in Lenat and Guha's textbook,[1]but the Cyc inference engine code and the full list of HL modules are Cycorp-proprietary.[2]
The project began in July 1984 byDouglas Lenatas a project of theMicroelectronics and Computer Technology Corporation(MCC), a research consortium started by two United States–based corporations "to counter a then ominous Japanese effort in AI, the so-called 'fifth-generation' project."[3]The US passed theNational Cooperative Research Actof 1984, which for the first time allowedUScompanies to "collude" on long-term research. Since January 1995, the project has been under active development by Cycorp, where Douglas Lenat was theCEO.
TheCycLrepresentation language started as an extension of RLL[4][5](the Representation Language Language, developed in 1979–1980 by Lenat and his graduate studentRussell Greinerwhile atStanford University). In 1989,[6]CycL had expanded inexpressive powertohigher-order logic(HOL).
Cyc's ontology grew to about 100,000 terms in 1994 and, as of 2017, contained about 1,500,000 terms. The Cyc knowledge base of axioms involving those ontological terms was largely created by hand; it contained about 1 million axioms in 1994 and about 24.5 million as of 2017.
In 2008, Cyc resources were mapped to manyWikipediaarticles.[7]Cyc is presently connected toWikidata.
The knowledge base is divided into microtheories. Unlike the knowledge base as a whole, each microtheory must be free from monotonic contradictions. Each microtheory is a first-class object in the Cyc ontology; it has a name that is a regular constant. The concept names in Cyc are CycL terms or constants.[6]Constants start with an optional #$ and are case-sensitive. There are constants for individual items, collections, predicates (truth functions), and functions.
One assertion states, for example, that for every instance of the collection #$ChordataPhylum (i.e., for every chordate), there exists a female animal (an instance of #$FemaleAnimal) which is its mother (described by the predicate #$biologicalMother).[1]
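As a minimal sketch, and an assumption about representation rather than Cyc's actual internals (which express the rule as a CycL formula), the assertion just described could be checked against a toy knowledge base like this:

```python
# Toy knowledge base illustrating the rule described above; not Cyc's internal format.
isa = {  # individual -> collections it is an instance of
    "Rex": {"#$ChordataPhylum"},
    "Lucy": {"#$ChordataPhylum", "#$FemaleAnimal"},
}
biological_mother = {"Rex": "Lucy"}  # #$biologicalMother assertions

def rule_holds_for(individual: str) -> bool:
    """Every #$ChordataPhylum instance must have a mother that is a #$FemaleAnimal."""
    if "#$ChordataPhylum" not in isa.get(individual, set()):
        return True  # the rule only constrains chordates
    mother = biological_mother.get(individual)
    return mother is not None and "#$FemaleAnimal" in isa.get(mother, set())

print(rule_holds_for("Rex"))  # True: Rex's mother Lucy is asserted to be a #$FemaleAnimal
```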
Aninference engineis a computer program that tries to derive answers from a knowledge base. The Cyc inference engine performs generallogical deduction.[8]It also performsinductive reasoning,statistical machine learningandsymbolic machine learning, andabductive reasoning.
The Cyc inference engine separates theepistemologicalproblem from theheuristicproblem. For the latter, Cyc used acommunity-of-agentsarchitecture in which specialized modules, each with its own algorithm, became prioritized if they could make progress on the sub-problem.
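For illustration, the kind of general logical deduction described above can be sketched as naive forward chaining over a toy rule; this is a generic textbook technique under assumed names, not Cycorp's proprietary engine or its heuristic-level modules.

```python
# Naive forward chaining over a toy transitivity rule; a generic illustration of
# logical deduction, not Cyc's proprietary inference engine.
facts = {("isa", "Rex", "Dog"), ("isa", "Dog", "Mammal")}

def transitivity(fs: set) -> set:
    """If X isa Y and Y isa Z, derive X isa Z."""
    return {("isa", x, z)
            for (_, x, y) in fs
            for (_, y2, z) in fs
            if y == y2}

def forward_chain(facts: set, rules) -> set:
    """Apply all rules repeatedly until no new facts are derived (a fixed point)."""
    derived = set(facts)
    while True:
        new = set().union(*(rule(derived) for rule in rules)) - derived
        if not new:
            return derived
        derived |= new

print(("isa", "Rex", "Mammal") in forward_chain(facts, [transitivity]))  # True
```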
The first version of OpenCyc was released in spring 2002 and contained only 6,000 concepts and 60,000 facts. The knowledge base was released under theApache License. Cycorp stated its intention to release OpenCyc under parallel, unrestricted licences to meet the needs of its users. TheCycLand SubL interpreter (the program that allows users to browse and edit the database as well as to draw inferences) was released free of charge, but only as a binary, withoutsource code. It was made available forLinuxandMicrosoft Windows. The open source Texai[9]project released theRDF-compatible content extracted from OpenCyc.[10]The user interface was in Java 6.
Cycorp was a participant of aworking groupfor the Semantic Web,Standard Upper OntologyWorking Group, which was active from 2001 to 2003.[11]
ASemantic Webversion of OpenCyc was available starting in 2008, but ending sometime after 2016.[12]
OpenCyc 4.0 was released in June 2012.[13]OpenCyc 4.0 contained 239,000 concepts and 2,093,000 facts; however, these are mainlytaxonomicassertions.
Version 4.0 was the last release; around March 2017, OpenCyc was shut down, with the stated reason being that such "fragmenting" led to divergence and confusion among its users, and that the technical community generally thought that the OpenCyc fragment was Cyc.[14]
In July 2006, Cycorp released theexecutableof ResearchCyc 1.0, a version of Cyc aimed at the research community, at no charge. (ResearchCyc was in beta stage of development during all of 2004; a beta version was released in February 2005.) In addition to the taxonomic information, ResearchCyc includes more semantic knowledge; it also includes a large lexicon,Englishparsing and generation tools, andJava-based interfaces for knowledge editing and querying. It contains a system forontology-based data integration.
In 2001, GlaxoSmithKline was funding Cyc, though for unknown applications.[15]In 2007, the Cleveland Clinic used Cyc to develop a natural-language query interface to biomedical information on cardiothoracic surgeries.[16]A query is parsed into a set of CycL fragments with open variables.[17]The Terrorism Knowledge Base was an application of Cyc that tried to contain knowledge about "terrorist"-related descriptions; the knowledge was stored as statements in mathematical logic. The project lasted from 2004 to 2008.[18][19]Lycos used Cyc for search term disambiguation, but stopped in 2001.[20]CycSecure, a network vulnerability assessment tool based on Cyc, was produced in 2002,[21]with trials at the US STRATCOM Computer Emergency Response Team.[22]
One Cyc application has the stated aim to help students doing math at a 6th grade level.[23]The application, called MathCraft,[24]was supposed to play the role of a fellow student who is slightly more confused than the user about the subject. As the user gives good advice, Cyc allows the avatar to make fewer mistakes.
The Cyc project has been described as "one of the most controversial endeavors of the artificial intelligence history".[25]Catherine Havasi, CEO of Luminoso, says that Cyc is the predecessor project toIBM's Watson.[26]Machine-learning scientistPedro Domingosrefers to the project as a "catastrophic failure" for the unending amount of data required to produce any viable results and the inability for Cyc to evolve on its own.[27]
Gary Marcus, a cognitive scientist and the cofounder of an AI company called Geometric Intelligence, says "it represents an approach that is very different from all the deep-learning stuff that has been in the news."[28]This is consistent with Doug Lenat's position that "Sometimes theveneerof intelligence is not enough".[29]
This is a list of some of the notable people who work or have worked on Cyc either while it was a project at MCC (where Cyc was first started) or Cycorp.
|
https://en.wikipedia.org/wiki/Cyc
|
Geminiis a family ofmultimodallarge language models(LLMs) developed byGoogle DeepMind, and the successor toLaMDAandPaLM 2. Comprising Gemini Ultra, Gemini Pro, Gemini Flash, and Gemini Nano, it was announced on December 6, 2023, positioned as a competitor toOpenAI'sGPT-4. It powers thechatbotof the same name. In March 2025, Gemini 2.5 Pro Experimental was rated as highly competitive.
Googleannounced Gemini, alarge language model(LLM) developed by subsidiaryGoogle DeepMind, during theGoogle I/Okeynote on May 10, 2023. It was positioned as a more powerful successor toPaLM 2, which was also unveiled at the event, with Google CEOSundar Pichaistating that Gemini was still in its early developmental stages.[1][2]Unlike other LLMs, Gemini was said to be unique in that it was not trained on atext corpusalone and was designed to bemultimodal, meaning it could process multiple types of data simultaneously, including text, images, audio, video, andcomputer code.[3]It had been developed as a collaboration between DeepMind andGoogle Brain, two branches of Google that had been merged as Google DeepMind the previous month.[4]In an interview withWired, DeepMind CEODemis Hassabistouted Gemini's advanced capabilities, which he believed would allow the algorithm to trumpOpenAI'sChatGPT, which runs onGPT-4and whose growing popularity had been aggressively challenged by Google withLaMDAandBard. Hassabis highlighted the strengths of DeepMind'sAlphaGoprogram, which gained worldwide attention in 2016 when it defeatedGochampionLee Sedol, saying that Gemini would combine the power of AlphaGo and other Google–DeepMind LLMs.[5]
In August 2023,The Informationpublished a report outlining Google's roadmap for Gemini, revealing that the company was targeting a launch date of late 2023. According to the report, Google hoped to surpass OpenAI and other competitors by combining conversational text capabilities present in most LLMs withartificial intelligence–powered image generation, allowing it to create contextual images and be adapted for a wider range ofuse cases.[6]Like Bard,[7]Google co-founderSergey Brinwas summoned out of retirement to assist in the development of Gemini, along with hundreds of other engineers from Google Brain and DeepMind;[6][8]he was later credited as a "core contributor" to Gemini.[9]Because Gemini was being trained on transcripts ofYouTubevideos, lawyers were brought in to filter out any potentially copyrighted materials.[6]
With news of Gemini's impending launch, OpenAI hastened its work on integrating GPT-4 with multimodal features similar to those of Gemini.[10]The Informationreported in September that several companies had been granted early access to "an early version" of the LLM, which Google intended to make available to clients throughGoogle Cloud's Vertex AI service. The publication also stated that Google was arming Gemini to compete with both GPT-4 andMicrosoft'sGitHub Copilot.[11][12]
On December 6, 2023, Pichai and Hassabis announced "Gemini 1.0" at a virtual press conference.[13][14]It comprised three models: Gemini Ultra, designed for "highly complex tasks"; Gemini Pro, designed for "a wide range of tasks"; and Gemini Nano, designed for "on-device tasks". At launch, Gemini Pro and Nano were integrated into Bard and thePixel 8 Prosmartphone, respectively, while Gemini Ultra was set to power "Bard Advanced" and become available to software developers in early 2024. Other products that Google intended to incorporate Gemini into includedSearch,Ads,Chrome, Duet AI onGoogle Workspace, andAlphaCode 2.[15][14]It was made available only in English.[14][16]Touted as Google's "largest and most capable AI model" and designed to emulate human behavior,[17][14][18]the company stated that Gemini would not be made widely available until the following year due to the need for "extensive safety testing".[13]Gemini was trained on and powered by Google'sTensor Processing Units(TPUs),[13][16]and the name is in reference to the DeepMind–Google Brain merger as well asNASA'sProject Gemini.[19]
Gemini Ultra was said to have outperformed GPT-4, Anthropic's Claude 2, Inflection AI's Inflection-2, Meta's LLaMA 2, and xAI's Grok 1 on a variety of industry benchmarks,[20][13] while Gemini Pro was said to have outperformed GPT-3.5.[3] Gemini Ultra was also the first language model to outperform human experts on the 57-subject Massive Multitask Language Understanding (MMLU) test, obtaining a score of 90%.[3][19] Gemini Pro was made available to Google Cloud customers on AI Studio and Vertex AI on December 13, while Gemini Nano was to be made available to Android developers as well.[21][22][23] Hassabis further revealed that DeepMind was exploring how Gemini could be "combined with robotics to physically interact with the world".[24] In accordance with an executive order signed by U.S. President Joe Biden in October, Google stated that it would share testing results of Gemini Ultra with the federal government of the United States. Similarly, the company was engaged in discussions with the government of the United Kingdom to comply with the principles laid out at the AI Safety Summit at Bletchley Park in November.[3]
Google partnered withSamsungto integrate Gemini Nano and Gemini Pro into itsGalaxy S24smartphone lineup in January 2024.[25][26]The following month, Bard and Duet AI were unified under the Gemini brand,[27][28]with "Gemini Advanced with Ultra 1.0" debuting via a new "AI Premium" tier of theGoogle Onesubscription service.[29]Gemini Pro also received a global launch.[30]
In February 2024, Google launched Gemini 1.5 in a limited capacity, positioning it as a more powerful and capable model than 1.0 Ultra.[31][32][33] This "step change" was achieved through various technical advancements, including a new architecture, a mixture-of-experts approach, and a larger one-million-token context window, which equates to roughly an hour of silent video, 11 hours of audio, 30,000 lines of code, or 700,000 words.[34] The same month, Google debuted Gemma, a family of free and open-source LLMs that serve as a lightweight version of Gemini, available in two sizes with two billion and seven billion parameters, respectively. Multiple publications viewed this as a response to Meta and others open-sourcing their AI models, and as a stark reversal of Google's longstanding practice of keeping its AI proprietary.[35][36][37] Google announced an additional model, Gemini 1.5 Flash, on May 14 at the 2024 I/O keynote.[38]
Gemma 2 was released on June 27, 2024.[39]
Two updated Gemini models, Gemini-1.5-Pro-002 and Gemini-1.5-Flash-002, were released on September 24, 2024.[40]
On December 11, 2024, Google announced Gemini 2.0 Flash Experimental,[41] a significant update to its Gemini AI model offering improved speed and performance over its predecessor, Gemini 1.5 Flash. Key features included a Multimodal Live API for real-time audio and video interactions, enhanced spatial understanding, native image generation and controllable text-to-speech (with watermarking), and integrated tool use, including Google Search.[42] It also introduced improved agentic capabilities, a new Google Gen AI SDK,[43] and "Jules", an experimental AI coding agent for GitHub, while Google Colab began integrating Gemini 2.0 to generate data science notebooks from natural language. Gemini 2.0 was made available to all users through the Gemini chat interface as "Gemini 2.0 Flash Experimental".
On January 30, 2025, Google released Gemini 2.0 Flash as the new default model, with Gemini 1.5 Flash still available for usage. This was followed by the release of Gemini 2.0 Pro on February 5, 2025. Additionally, Google released Gemini 2.0 Flash Thinking Experimental, which details the language model's thinking process when responding to prompts.[44]
Gemma 3 was released on March 12, 2025.[45][46]The next day, Google announced that Gemini inAndroid Studiowould be able to understand simple UImockupsand transform them into workingJetpack Composecode.[47]
Gemini 2.5 Pro Experimental was released on March 25, 2025. Google described it as its most intelligent AI model yet, featuring enhanced reasoning and coding capabilities,[48][49][50] and as a "thinking model" capable of reasoning through steps before responding, using techniques like chain-of-thought prompting,[48][50][51] while retaining native multimodality and launching with a one-million-token context window.[48][50]
The following table lists the main model versions of Gemini, describing the significant changes included with each version:[52][53]
The first generation of Gemini ("Gemini 1") has three models, with the same architecture. They are decoder-only transformers, with modifications to allow efficient training and inference on TPUs. They have a context length of 32,768 tokens, with multi-query attention. Two versions of Gemini Nano, Nano-1 (1.8 billion parameters) and Nano-2 (3.25 billion parameters), are distilled from larger Gemini models, designed for use by edge devices such as smartphones. As Gemini is multimodal, each context window can contain multiple forms of input. The different modes can be interleaved and do not have to be presented in a fixed order, allowing for a multimodal conversation. For example, the user might open the conversation with a mix of text, picture, video, and audio, presented in any order, and Gemini might reply with the same free ordering. Input images may be of different resolutions, while video is inputted as a sequence of images. Audio is sampled at 16 kHz and then converted into a sequence of tokens by the Universal Speech Model. Gemini's dataset is multimodal and multilingual, consisting of "web documents, books, and code, and includ[ing] image, audio, and video data".[54]
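Multi-query attention, mentioned above, differs from ordinary multi-head attention in that all query heads share a single key/value head, which shrinks the key/value cache that must be held during autoregressive decoding on accelerators such as TPUs. The sketch below is a minimal NumPy illustration under assumed dimensions; the weights, sizes, and masking details are invented for the example and are not Gemini's actual configuration.

import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def multi_query_attention(x, w_q, w_k, w_v, w_o, n_heads):
    # x: (seq_len, d_model); w_q: (d_model, n_heads * d_head);
    # w_k, w_v: (d_model, d_head) -- one key/value head shared by all query heads.
    seq_len, _ = x.shape
    d_head = w_k.shape[1]
    q = (x @ w_q).reshape(seq_len, n_heads, d_head)
    k = x @ w_k                      # shared keys
    v = x @ w_v                      # shared values
    # Causal mask: each position attends only to itself and earlier positions.
    mask = np.triu(np.full((seq_len, seq_len), -1e9), 1)
    heads = []
    for h in range(n_heads):         # every head reuses the same k and v
        scores = q[:, h, :] @ k.T / np.sqrt(d_head) + mask
        heads.append(softmax(scores) @ v)
    return np.concatenate(heads, axis=-1) @ w_o   # (seq_len, d_model)

# Toy usage with made-up sizes.
rng = np.random.default_rng(0)
d_model, n_heads, d_head, seq_len = 64, 8, 8, 16
x = rng.normal(size=(seq_len, d_model))
w_q = 0.1 * rng.normal(size=(d_model, n_heads * d_head))
w_k = 0.1 * rng.normal(size=(d_model, d_head))
w_v = 0.1 * rng.normal(size=(d_model, d_head))
w_o = 0.1 * rng.normal(size=(n_heads * d_head, d_model))
print(multi_query_attention(x, w_q, w_k, w_v, w_o, n_heads).shape)  # (16, 64)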
The second generation of Gemini ("Gemini 1.5") has two models. Gemini 1.5 Pro is a multimodalsparse mixture-of-experts, with a context length in the millions, while Gemini 1.5 Flash is distilled from Gemini 1.5 Pro, with a context length above 2 million.[55]
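A sparse mixture-of-experts layer replaces the single feed-forward block of a transformer layer with several smaller "expert" networks and a learned gate that routes each token to only a few of them, so parameter count can grow without a proportional increase in per-token compute. The following is a minimal sketch of top-2 routing under assumed sizes; the expert count, dimensions, and gating rule are illustrative and are not Gemini 1.5's actual design.

import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

class MoELayer:
    def __init__(self, d_model, d_hidden, n_experts, top_k=2, seed=0):
        rng = np.random.default_rng(seed)
        self.top_k = top_k
        self.gate = 0.1 * rng.normal(size=(d_model, n_experts))
        self.w1 = 0.1 * rng.normal(size=(n_experts, d_model, d_hidden))
        self.w2 = 0.1 * rng.normal(size=(n_experts, d_hidden, d_model))

    def __call__(self, x):                        # x: (n_tokens, d_model)
        logits = x @ self.gate                    # routing scores per expert
        out = np.zeros_like(x)
        for t, tok in enumerate(x):
            top = np.argsort(logits[t])[-self.top_k:]      # chosen experts
            weights = softmax(logits[t][top])              # renormalised gate
            for w, e in zip(weights, top):
                hidden = np.maximum(tok @ self.w1[e], 0)   # ReLU expert MLP
                out[t] += w * (hidden @ self.w2[e])        # weighted expert output
        return out

tokens = np.random.default_rng(1).normal(size=(4, 32))
print(MoELayer(d_model=32, d_hidden=64, n_experts=8)(tokens).shape)  # (4, 32)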
Gemma 2 27B was trained on web documents, code, and science articles. Gemma 2 9B was distilled from the 27B model. Gemma 2 2B was distilled from a 7B model that remained unreleased.[56]
As of February 2025, the models released include:[57]
Gemini's launch was preluded by months of intense speculation and anticipation, whichMIT Technology Reviewdescribed as "peak AI hype".[60][20]In August 2023, Dylan Patel and Daniel Nishball of research firm SemiAnalysis penned ablog postdeclaring that the release of Gemini would "eat the world" and outclass GPT-4, prompting OpenAI CEOSam Altmanto ridicule the duo onX(formerly Twitter).[61][62]Business magnateElon Musk, who co-founded OpenAI, weighed in, asking, "Are the numbers wrong?"[63]Hugh Langley ofBusiness Insiderremarked that Gemini would be a make-or-break moment for Google, writing: "If Gemini dazzles, it will help Google change the narrative that it was blindsided by Microsoft and OpenAI. If it disappoints, it will embolden critics who say Google has fallen behind."[64]
Reacting to its unveiling in December 2023,University of Washingtonprofessor emeritusOren Etzionipredicted a "tit-for-tatarms race" between Google andOpenAI. ProfessorAlexei Efrosof theUniversity of California, Berkeleypraised the potential of Gemini's multimodal approach,[19]while scientistMelanie Mitchellof theSanta Fe Institutecalled Gemini "very sophisticated". Professor Chirag Shah of the University of Washington was less impressed, likening Gemini's launch to the routineness ofApple'sannual introductionof a newiPhone. Similarly,Stanford University's Percy Liang, the University of Washington'sEmily Bender, and theUniversity of Galway's Michael Madden cautioned that it was difficult to interpret benchmark scores without insight into the training data used.[60][65]Writing forFast Company, Mark Sullivan opined that Google had the opportunity to challenge the iPhone's dominant market share, believing that Apple was unlikely to have the capacity to develop functionality similar to Gemini with itsSirivirtual assistant.[66]Google shares spiked by 5.3 percent the day after Gemini's launch.[67][68]
Google faced criticism for a demonstrative video of Gemini, which was not conducted in real time.[69]
Gemini 2.5 Pro Experimental debuted at the top position on the LMArena leaderboard, a benchmark measuring human preference, indicating strong performance and output quality.[48][50]The model achieved state-of-the-art or highly competitive results across various benchmarks evaluating reasoning, knowledge, science, math, coding, and long-context performance, such asHumanity's Last Exam, GPQA, AIME 2025, SWE-bench and MRCR.[48][70][50][49]Initial reviews highlighted its improved reasoning capabilities and performance gains compared to previous versions.[49][51]Published benchmarks also showed areas where contemporary models from competitors likeAnthropic,xAI, orOpenAIheld advantages.[70][50]
|
https://en.wikipedia.org/wiki/Gemini_(language_model)
|
Adangling modifier(also known as adangling participle,illogical participleorhanging participle) is a type of ambiguousgrammaticalconstruct whereby agrammatical modifiercould be misinterpreted as being associated with a word other than the one intended.[1]A dangling modifier has no subject and is usually aparticiple. A writer may use a dangling modifier intending to modify asubjectwhile word order may imply that the modifier describes anobject, or vice versa.
An example of a dangling modifier appears in the sentence "Turning the corner, a handsome school building appeared".[2] The modifying clause Turning the corner describes the behavior of the narrator, but the narrator is only implicit in the sentence. The sentence could be misread, with the turning action attaching either to the handsome school building or to nothing at all. As another example, in the sentence "At the age of eight, my family finally bought a dog",[3] the modifier At the age of eight is dangling. It is intended to specify the narrator's age when the family bought the dog, but the narrator is again only implicitly a part of the sentence, so it could be read as saying that the family was eight years old when it bought the dog.
As anadjunct, amodifierclause is normally at the beginning or the end of a sentence and usually attached to the subject of the main clause. However, when the subject is missing or the clause attaches itself to another object in a sentence, the clause is seemingly "hanging" on nothing or on an inappropriate noun. It thus "dangles", as in these sentences:
Ambiguous: Walking down Main Street (clause), the trees were beautiful (object). (Subject is unclear / implicit)
Unambiguous: Walking down Main Street (clause), I (subject) admired the beautiful trees (object).
Ambiguous: Reaching the station, the sun came out. (Subject is unclear - who reached the station?)
Unambiguous: As Priscilla reached the station, the sun came out.
In the first sentence, the adjunct clause may at first appear to modify "the trees", the subject of the sentence. However, it actually modifies the speaker of the sentence, who is not explicitly mentioned. In the second sentence, the adjunct may at first appear to modify "the sun", the subject of the sentence. Presumably, there is another, human subject who did reach the station as the sun was coming out, but this subject is not mentioned in the text. In both cases, whether the intended meaning is obscured or not may depend on context - if the previous sentences clearly established a subject, then it may be obvious who was walking down Main Street or reaching the station. But if left alone, they may be unclear if the reader takes the subject as an unknown observer; or misleading if a reader somehow believed the trees were walking down the street or the sun traveled to the station.
Many style guides of the 20th century consider dangling participles ungrammatical and incorrect. Strunk and White'sThe Elements of Stylestates that "A participle phrase at the beginning of a sentence must refer to the grammatical subject".[4]The 1966 bookModern American Usage: A Guide, started byWilson Follettand finished by others, agrees: "A participle at the head of a sentence automatically affixes itself to the subject of the following verb – in effect a requirement that the writer either make his [grammatical] subject consistent with the participle or discard the participle for some other construction".[5]However, this prohibition has been questioned; moredescriptivistauthors consider that a dangling participle is only problematic when there is actual ambiguity. One of Follett's examples is "Leaping to the saddle, his horse bolted",[5]but a reader is unlikely to be genuinely confused and think that the horse was leaping into a saddle rather than an implicit rider;The Economistquestioned whether the "clumsy examples" of the style guides proved much.[6]Many respected and successful writers have used dangling participles without confusion; one example isVirginia Woolfwhose work includes many such phrases, such as "Lying awake, the floor creaked" (inMrs Dalloway) or "Sitting up late at night it seems strange not to have more control" (inThe Waves).[6]Shakespeare'sRichard IIincludes a dangling modifier as well.[note 1]
Dangling participles are similar to clauses in absolute constructions, but absolute constructions are considered uncontroversial and grammatical. The difference is that a participle phrase in an absolute construction is not semantically attached to any single element in the sentence.[7] A participle phrase is intended to modify a particular noun or pronoun, but in a dangling participle it is instead erroneously attached to a different noun or to nothing; in an absolute clause, the phrase is not intended to modify any noun at all, so modifying nothing is its intended use. An example of an absolute construction is:
The weather being beautiful, we plan to go to the beach today.
Non-participial modifiers that dangle can also be troublesome:
After years of being lost under a pile of dust, Walter P. Stanley, III, left, found all the old records of the Bangor Lions Club.[8]
The above sentence from a photo caption in a newspaper suggests that it is the subject of the sentence, Walter Stanley, who was buried under a pile of dust, and not the records. It is the prepositional phrase "after years of being lost under a pile of dust" which dangles.
In the filmMary Poppins, Mr. Dawes Sr.dies of laughterafter hearing the following joke:
"I know a man with a wooden leg called Smith".
"What was the name of his other leg?"
In the case of this joke, the placement of the participial phrase "called Smith" implies that it is the leg that is named Smith, rather than the man. ("Called Smith" is a participial phrase, as "called" is a past participle.)
Another famous example of this humorous effect is byGroucho Marxas Captain Jeffrey T. Spaulding in the 1930 filmAnimal Crackers:
One morning I shot an elephant in my pajamas. How he got into my pajamas I'll never know.[9]
Though under the most plausible interpretation of the first sentence, Captain Spaulding would have been wearing the pajamas, the line plays on the grammatical possibility that the elephant was instead.
Certain formulations can be genuinely ambiguous as to whether the subject, the direct object, or something else is the proper affix for the participle; for example, in "Having just arrived in town, the train struck Bill", did the narrator, the train, or Bill just arrive in the town?[6]
Participial modifiers can sometimes be intended to describe the attitude or mood of the speaker, even when the speaker is not part of the sentence. Some such modifiers are standard and are not considered dangling modifiers: "Speaking of [topic]", and "Trusting that this will put things into perspective", for example, are commonly used to transition from one topic to a related one or for adding a conclusion to a speech. An example of a contested use would be "Frankly, he is lying to you"; such usage is not uncommon by writers, but strictly speaking that sentence would be in violation of older style guide prohibitions as it is the speaker being frank, not "he" in such a sentence.[6]
Since about the 1960s, controversy has arisen over the proper usage of the adverbhopefully.[10]Some grammarians object to constructions such as "Hopefully, the sun will be shining tomorrow".[11]Their complaint is that the term "hopefully" is understood as the manner in which the sun will shine if read literally, with the suggested modification "I hope the sun will shine tomorrow" if it is the speaker that is full of hope. "Hopefully" used in this way is adisjunct(cf."admittedly", "mercifully", "oddly"). Disjuncts (also called sentence adverbs) are useful incolloquialspeech for the concision they permit.
No other word in English expresses that thought. In a single word we can say it is regrettable that (regrettably) or it is fortunate that (fortunately) or it is lucky that (luckily), and it would be comforting if there were such a word ashopablyor, as suggested by Follett,hopingly, but there isn't. [...] In this instance nothing is to be lost – the word would not be destroyed in its primary meaning – and a useful, nay necessary term is to be gained.[12]
What had been expressed in lengthy adverbial constructions, such as "it is regrettable that ..." or "it is fortunate that ...", had of course always been shortened to the adverbs "regrettably" or "fortunately". Bill Bryson says, "those writers who scrupulously avoid 'hopefully' in such constructions do not hesitate to use at least a dozen other words – 'apparently', 'presumably', 'happily', 'sadly', 'mercifully', 'thankfully', and so on – in precisely the same way".[13]
Merriam-Webster gives a usage note on its entry for "hopefully"; the editors point out that the disjunct sense of the word dates to the early 18th century and has been in widespread use since at least the 1930s. Objection to this sense of the word, they state, became widespread only in the 1960s. The Merriam Webster editors maintain that this usage is "entirely standard".[14]
There are similar complications with the term "doubtless" or "doubtlessly". "Alex doubtlessly ran out of gas" either means Alex was doubtless when he ran out of gas, or the speaker is doubtless in declaring that Alex ran out of gas.
As in a theatre, the eyes of men,
After a well-graced actor leaves the stage,
Are idly bent on him that enters next,
Thinking his prattle to be tedious.
Richard II, Act 5, Scene 2. (The men themselves think the newcomer is tedious, not the eyes.)
|
https://en.wikipedia.org/wiki/Dangling_modifier
|
Eats, Shoots & Leaves: The Zero Tolerance Approach to Punctuationis a non-fiction book written byLynne Truss, the former host ofBBC Radio 4'sCutting a Dashprogramme. In the book, published in 2003, Truss bemoans the state ofpunctuationin the United Kingdom and the United States and describes how rules are being relaxed in today's society. Her goal is to remind readers of the importance of punctuation in the English language by mixing humour and instruction.
Truss dedicates the book "to the memory of the strikingBolshevikprinters ofSt. Petersburgwho, in 1905, demanded to be paid the same rate for punctuation marks as for letters, and thereby directly precipitated thefirst Russian Revolution". She added this dedication as anafterthoughtafter remembering thefactoidwhen reading one of her radio plays.[1]
There is one chapter each onapostrophes,commas,semicolonsandcolons,exclamation marks,question marksandquotation marks,italic type,dashes,brackets,ellipsesandemoticons, andhyphens. Truss touches on varied aspects of the history of punctuation and includes many anecdotes. Contrary to usual publishing practice, the US edition of the book left the original British conventions intact.
The title of the book is asyntactic ambiguity—a verbal fallacy arising from an ambiguous or erroneous grammatical construction—and derived from a joke popularised by Ursula Le Guin[2](a variant on a "bar joke") about bad punctuation:
A panda walks into a café. He orders a sandwich, eats it, then draws a gun and fires two shots in the air.
"Why?" asks the confused waiter, as the panda makes towards the exit. The panda produces a badly punctuated wildlife manual and tosses it over his shoulder.
"I'm a panda," he says at the door. "Look it up."
The waiter turns to the relevant entry in the manual and, sure enough, finds an explanation.
"Panda. Large black-and-white bear-like mammal, native to China. Eats, shoots and leaves."
The joke turns on the ambiguity of the finalsentence fragment. As intended by the author, "eats" is a verb, while "shoots" and "leaves" are the verb'sobjects: a panda's diet consists ofshootsandleaves. However, the erroneous introduction of the comma gives the mistaken impression that the sentence fragment comprises three verbs listing in sequence the panda's characteristic conduct: it eats, then it shoots, and finally it leaves.
The book was a commercial success. In 2004, the US edition became aNew York Timesbest-seller. In a 2004 review,Louis MenandofThe New Yorkerpointed out several dozen punctuation errors in the book, including one in the dedication, and wrote that "an Englishwoman lecturing Americans onsemicolonsis a little like an American lecturing the French on sauces. Some of Truss's departures from punctuation norms are just British laxness."[3]
InThe Fight for English: How Language Pundits Ate, Shot and Left(Oxford University Press2006), linguistDavid Crystalanalyses thelinguistic purismof Truss and other writers through the ages.[4]In 2006, English lecturer Nicholas Waters releasedEats, Roots & Leaves, criticising the "grammar fascists" who "want to stop the language moving into the 21st century".[5]This view was shared by dyslexic English comedian and satiristMarcus Brigstockein a 2007 episode ofRoom 101, in which he blames Truss's book for starting off a trend in which people have become "grammar bullies".[6][7]
In July 2006, Putnam Juvenile published a 32-page follow-up for children entitledEats, Shoots & Leaves: Why, Commas Really Do Make a Difference!. Based on the same concept, this version covers only the section on comma usage and uses cartoons to explain the problems presented by their poor usage.[8]
|
https://en.wikipedia.org/wiki/Eats,_Shoots_%26_Leaves
|
Agarden-path sentenceis a grammatically correctsentencethat starts in such a way that a reader's most likely interpretation will be incorrect; the reader is lured into aparsethat turns out to be a dead end or yields a clearly unintended meaning.Garden pathrefers to the saying "to be led down [or up] the garden path", meaning to be deceived, tricked, or seduced. InA Dictionary of Modern English Usage(1926),Fowlerdescribes such sentences as unwittingly laying a "false scent".[1]
Such a sentence leads the reader toward a seemingly familiar meaning that is actually not the one intended. It is a special type of sentence that creates a momentarily ambiguous interpretation because it contains a word or phrase that can be interpreted in multiple ways, causing the reader to begin to believe that a phrase will mean one thing when in reality it means something else. When read, the sentence seems ungrammatical, makes almost no sense, and often requires rereading so that its meaning may be fully understood after careful parsing. Though these sentences are grammatically correct, such sentences are syntactically non-standard (or incorrect) as evidenced by the need for re-reading and careful parsing. Garden-path sentences are not usually desirable in writing that intends to communicate clearly.
The sentence "The old man the boat" is a common example that has been the subject of psycholinguistic research and has been used to test the capabilities of artificial intelligence efforts.[2] The difficulty in correctly parsing the sentence results from the fact that readers tend to interpret old as an adjective. Reading the, they expect a noun or an adjective to follow, and when they then read old followed by man they assume that the phrase the old man is to be interpreted as determiner–adjective–noun. When readers encounter another the following the supposed noun man (rather than the expected verb, as in e.g., The old man washed the boat),[a] they are forced to re-analyze the sentence. As with other examples, one explanation for the initial misunderstanding by the reader is that a sequence of words or phrases tends to be analyzed in terms of a frequent pattern, in this case determiner–adjective–noun.[3] Rephrased, the sentence could be rewritten as "Those who man the boat are old."
"The complex houses married and single soldiers and their families" is another commonly cited example.[4] Like the previous sentence, the initial parse is to read the complex houses as a noun phrase, but the complex houses married does not make semantic sense (only people can marry) and the complex houses married and single makes no sense at all (after married and ..., the expectation is another verb to form a compound predicate). The correct parsing is The complex [noun phrase] houses [verb] married and single soldiers [noun phrase] and their families [noun phrase]. Rephrased, the sentence could be rewritten as "The complex provides housing for the soldiers, married or single, as well as their families."
The frequently used, classic example "The horse raced past the barn fell" is attributed to Thomas Bever. The sentence is hard to parse because raced can be interpreted as a finite verb or as a passive participle. The reader initially interprets raced as the main verb in the simple past, but when the reader encounters fell, they are forced to re-analyze the sentence, concluding that raced is being used as a passive participle and horse is the direct object of the subordinate clause.[5] The sentence could be replaced by "The horse that was raced past the barn fell", where that was raced past the barn tells the reader which horse is under discussion.[6] Such examples of initial ambiguity resulting from a "reduced relative with [a] potentially intransitive verb" ("The horse raced in the barn fell.") can be contrasted with the lack of ambiguity for a non-reduced relative ("The horse that was raced in the barn fell.") or with a reduced relative with an unambiguously transitive verb ("The horse frightened in the barn fell."). As with other examples, one explanation for the initial misunderstanding by the reader is that a sequence of phrases tends to be analyzed in terms of the frequent pattern: agent–action–patient.[7]
"本食堂欢迎新老师生前来就餐。"
This sentence can be interpreted in two ways:
"The canteen welcomes new and old teachers and students to come and dine here."
This interpretation would be the most natural in a typical context. It implies that the canteen welcomes new and old teachers and students, indicating a general invitation to all.
"The canteen welcomes new teachers to come and dine here when they are alive."
This reading is awkward and unnatural. The phrase "生前" (shēng qián) means "while alive" or "before death", so although the parse is grammatically possible, it implies the odd restriction that the canteen welcomes new teachers to dine only during their lifetimes.
The first interpretation is therefore the natural and probable one: the canteen is simply extending a welcome to both new and old teachers and students.
"Modern bei dieser Bilderausstellung werden vor allem die Rahmen, denn sie sind aus Holz und im feuchten Keller gelagert worden."
This example turns on the two meanings of Germanmodern: the adjective meaning English 'modern', and the verb meaning 'to rot'.[8]
The theme of the "picture exhibition" in the first clause lends itself to interpretingmodernas an adjective meaning 'contemporary', until the last two words of the sentence:
This causes dissonance at the end of the sentence, and forcesbacktrackingto recover the proper usage and sense (and different pronunciation) of the first word of the sentence, not as the adjective meaning "contemporary", but as the verb meaning "going moldy":
The ambiguity, however, is only perceived in writing, since the two occurrences of the wordmodernin the sentence have different pronunciations (for 'going moldy' it is stressed on the first syllable, while for 'contemporary' on the second one).
"Mãe suspeita da morte do filho e foge."
This example makes use of the ambiguity between the verb suspeita ("suspects") and the adjective suspeita ("suspected"), which is also captured by the English word suspect. It also makes use of a misreading in which the word e ("and") is passed over by the parser, which leads to two different meanings.[9]
When reading a sentence, readers will analyze the words and phrases they see and make inferences about the sentence’s grammatical structure and meaning in a process called parsing. Generally, readers will parse the sentence chunks at a time and will try to interpret the meaning of the sentence at each interval. As readers are given more information, they make an assumption of the contents and meaning of the whole sentence. With each new portion of the sentence encountered, they will try to make that part make sense with the sentence structures that they have already interpreted and their assumption about the rest of the sentence. The garden-path sentence effect occurs when the sentence has a phrase or word with anambiguous meaningthat the reader interprets in a certain way and, when they read the whole sentence, there is a difference in what has been read and what was expected. The reader must then read and evaluate the sentence again to understand its meaning. The sentence may be parsed and interpreted in different ways due to the influence of pragmatics, semantics, or other factors describing the extralinguistic context.[10]
Various strategies can be used when parsing a sentence, and there is much debate over which parsing strategy humans use. Differences in parsing strategies can be seen from the effects of a reader attempting to parse a part of a sentence that is ambiguous in its syntax or meaning. For this reason, garden-path sentences are often studied as a way to test which strategy humans use.[11]Two debated parsing strategies that humans are thought to use are serial and parallel parsing.
Serial parsing is where the reader makes one interpretation of the ambiguity and continues to parse the sentence in the context of that interpretation. The reader will continue to use the initial interpretation as reference for future parsing until disambiguating information is given.[12]
Parallel parsing is where the reader recognizes and generates multiple interpretations of the sentence and stores them until disambiguating information is given, at which point only the correct interpretation is maintained.[12]
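As a toy illustration (not a model of human parsing), the two strategies can be contrasted with a small incremental tagger for the garden-path sentence "The old man the boat" discussed above; the lexicon, grammar, and preference order below are invented for the example. The serial routine commits to the preferred reading of each word and counts how often it must backtrack, while the parallel routine carries every reading forward and keeps those that form a grammatical sentence.

import itertools
import re

LEXICON = {                     # preferred reading listed first
    "the": ["DET"],
    "old": ["ADJ", "NOUN"],     # readers prefer 'old' as an adjective
    "man": ["NOUN", "VERB"],    # ...and 'man' as a noun
    "boat": ["NOUN"],
}
SENTENCE = "the old man the boat".split()
# A sentence is NP VP, where NP = DET ADJ* NOUN and VP = VERB NP.
GRAMMAR = re.compile(r"DET (ADJ )*NOUN VERB DET (ADJ )*NOUN")

def is_sentence(tags):
    return GRAMMAR.fullmatch(" ".join(tags)) is not None

def serial_parse(words):
    """Commit to the preferred tag at each word; backtrack on failure."""
    backtracks = 0
    def search(i, tags):
        nonlocal backtracks
        if i == len(words):
            return tags if is_sentence(tags) else None
        for tag in LEXICON[words[i]]:
            result = search(i + 1, tags + [tag])
            if result is not None:
                return result
            backtracks += 1          # the committed reading failed; reanalyse
        return None
    return search(0, []), backtracks

def parallel_parse(words):
    """Carry every tag assignment forward; keep those that form a sentence."""
    return [tags for tags in itertools.product(*(LEXICON[w] for w in words))
            if is_sentence(tags)]

print(serial_parse(SENTENCE))    # (['DET', 'NOUN', 'VERB', 'DET', 'NOUN'], number of abandoned readings)
print(parallel_parse(SENTENCE))  # the single surviving analysis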
When ambiguous nouns appear, they can function as both the object of the first item or the subject of the second item. In that case, the former use is preferred. It is also found that the reanalysis of a garden-path sentence gets more and more difficult with the length of the ambiguous phrase.[13]
A research paper published by Meseguer, Carreiras and Clifton (2002) stated that intensive eye movements are observed when people are recovering from a mild garden-path sentence. They proposed that people use two strategies, both of which are consistent with the selective reanalysis process described by Frazier and Rayner in 1982. According to them, the readers predominantly use two alternative strategies to recover from mild garden-path sentences.
Partial re-analysis occurs when analysis is not complete. Frequently, when people can make even a little bit of sense of the later sentence, they stop analysing further so the former part of the sentence still remains in memory and does not get discarded from it.
Therefore, the original misinterpretation of the sentence remains even after the re-analysis is done; hence participants' final interpretations are often incorrect.[15]
Recent research has utilized adult second-language learners, orL2 learners, to study difficulties in revision of the initialparsingof garden-path sentences. During the processing of garden-path sentences, the reader has an initial parse of the sentence, but often has to revise their parse because it is incorrect. Unlike adult native speakers, children tend to have difficulty revising their first parsing of the sentence. This difficulty is attributed to the underdevelopedexecutive functioningof children. Executive functioning skills are utilized when the initial parsing of a sentence needs to be discarded for a revised parsing, and underdeveloped or damaged executive functioning impairs this ability. As the child ages and their executive functioning completes development, they gain the ability to revise the initial incorrect parsing. However, difficulties in revision are not unique to children. Adult L2 learners also exhibit difficulty in revisions, but the difficulties cannot be attributed to underdeveloped executive functioning.
In a 2015 study, adult L2 learners were compared to adult native speakers and native-speaking children in revision and processing of garden-path sentences using act-out errors and eye movement.[16]Adult native English speakers, English-speaking children, and adult English L2 learners were shown garden-path sentences or disambiguated garden-path sentences that either had reference information or no reference information and then asked to act out the sentence. Adult L2 speakers had fewer act-out errors than native-speaking children when the garden-path sentence was presented with referential information, similarly to the adult native speakers that present less act-out errors than both the adult L2 learners and native-speaking children. Adult L2 speakers and native adult speakers were able to usediscourseand referential information to aid in their processing of the garden-path sentences. This ability could be due to the adults’ developed executive functioning allowing them more cognitive resources, discourse and referential information, to aid in parsing and revision. Additionally, the use of discourse and referential information could be due toL1-transferbecause Italian and English share the same sentence structure. However, when the garden-path sentences are disambiguated and then presented, the adult L2 speakers had the highest act-out error rate followed by native-speaking children and then by adult native speakers. The results of this study indicate that difficulties in parsing revision are more common than originally thought and are not confined to children or individuals with reduced executive functioning. Adults, both native speakers and L2 learners, use discourse and referential information in parsing and sentence processing. But adult L2 learners and native-speaking children had similar error rates for garden-path sentences with no reference information, indicating systematic revision failure.[16]
|
https://en.wikipedia.org/wiki/Garden_path_sentence
|
Ibis redibis nunquam per bella peribis(alternativelyIbis redibis nunquam in bello morieris) is aLatinphrase, often used to illustrate the meaning ofsyntactic ambiguityto students of either Latin orlinguistics. Traditionally, it is attributed to theoraclesofDodona. The phrase is thought to have been uttered by a general consulting the oracle about his fate in an upcoming battle. The sentence is crafted in a way that, withoutpunctuation, it can be interpreted in two significantly different ways.[1][2]
Meaning "you will go, you will return, never in war will you perish". The other possibility is the exact opposite in meaning:
That is: "you will go, you will return never, in war you will perish".
A Greek parallel expression with the same meaning is also current: ἤξεις ἀφήξεις, οὐ θνήξεις ἐν πολέμῳ("Íxeis afíxeies, ou thníxeis en polémo"). While Greek authorities have in the past assumed this was the originalDodonaoracle (e.g. first edition of Babiniotis dictionary), no ancient instance of the expression is attested, and a future form corresponding to the rhyming θνήξεις,thníxeis(instead of the classical θανεῖ,thaneí), is first attested from the reign of Nero (Greek Anthology9.354). The Greek expression is therefore probably a modern back-translation from the Latin.[3]
To say that something is anibis redibis, usually in the context of legal documents, is to say that its wording is (either deliberately or accidentally) confusing or ambiguous.
|
https://en.wikipedia.org/wiki/Ibis_redibis_nunquam_per_bella_peribis
|
Aparaprosdokian(/pærəprɒsˈdoʊkiən/), orpar'hyponoian, is afigure of speechin which the latter part of a sentence, phrase, or larger discourse is surprising or unexpected in a way that causes the reader or listener to reframe or reinterpret the first part. It is frequently used for humorous or dramatic effect, sometimes producing ananticlimax. For this reason, it is extremely popular amongcomediansandsatirists,[1]such asGroucho Marx.
"Paraprosdokian" derives from Greekπαρά"against" andπροσδοκία"expectation".[2][3]The nounprosdokiaoccurs with the prepositionparain Greekrhetoricalwriters of the 1st century BCE and the 1st and 2nd centuries CE, with the meaning "contrary to expectation" or "unexpectedly."[4][5][6][7][8]
While the word is now in wide circulation, "paraprosdokian" (or "paraprosdokia") is not a term of classical (or medieval) Greek or Latin rhetoric; it was first attested in 1896.[9][10]
Some paraprosdokians not only change the meaning of an early phrase, as ingarden-path sentence, but also play on thedouble meaningof a particular word, creating a form ofsyllepsisorantanaclasis(a type ofpun).
For example, in response to the question "how are you two?", aModern Hebrewspeaker can sayבסדר גמור; היא בסדר, אני גמור(be-séder gamúr; hi be-séder, aní gamúr), literally "in-order complete; she in-order, I complete", i.e., "We are very good. She is good, I am finished".[11]: 88Note the ambiguity of the Hebrew lexical item גמורgamúr: it means both "complete" and "finished".[11]: 88A parallel punning paraprosdokian in English is a man's response to a friend's question "Why are you and your wife here?: A workshop; I am working, she is shopping."[11]: 88
|
https://en.wikipedia.org/wiki/Paraprosdokian
|
Thereading span task(RST) is a commonmemory spantask widely cited in, and adapted for, investigations ofworking memory,cognitive processing, andreading comprehensionthat was first published by Meredyth Daneman andPatricia Carpenterin 1980.[1]It is a verbal working memory test.
The original RST required participants to read series of unconnected sentences aloud and to remember the final word of each sentence of a series (grouped according to the total number of sentences). With each sentence presented on a card, participants were cued to recall the memorized end-of-sentence words in their original order by a blank card at the end of a series. The number of sentences of a series was incrementally increased until a participant's reading span, or the maximum number of final words correctly recalled, was found.
The reading span task was the first instance of the family of "complex span" tasks (as opposed to "simple span" tasks). It is a complex verbal test because it draws upon both storage and processing (i.e., reading) elements of working memory, while simple verbal tests (e.g., word span) require the storage element alone.[2]
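The flow of the original procedure (present a set of sentences, collect the sentence-final words in order, and increase the set size until recall fails) can be sketched as a small console program. The sentences, set sizes, and pass/fail rule below are stand-ins, not Daneman and Carpenter's standardised materials or scoring.

import random

SENTENCES = [
    "The sailor tied the rope to the old wooden dock.",
    "After the storm the children played in the muddy garden.",
    "The teacher wrote the homework on the large green board.",
    "The cat slept all afternoon on the warm kitchen floor.",
    "The pilot checked the weather before starting the long flight.",
    "The baker sold the last loaf just before closing time.",
]

def run_set(set_size, rng):
    """Present set_size sentences, then ask for the final words in order."""
    chosen = rng.sample(SENTENCES, set_size)
    targets = [s.rstrip(".").split()[-1] for s in chosen]
    for sentence in chosen:
        input(f"Read aloud, then press Enter: {sentence}")
    recall = input("Type the final words in order, separated by spaces: ")
    return recall.lower().split() == [w.lower() for w in targets]

def reading_span(max_size=6, seed=None):
    rng = random.Random(seed)
    span = 0
    for set_size in range(2, max_size + 1):   # sets grow until recall fails
        if run_set(set_size, rng):
            span = set_size
        else:
            break
    return span

if __name__ == "__main__":
    print("Estimated reading span:", reading_span())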
Besides the "listening span" variant also developed by Daneman and Carpenter,[1]several variants have been developed in recent years based upon the RST.[3][4][5][6][7][8][9]Van den Noort et al. created a computerized version of the test, which, when tested among four different languages (Dutch, English, German, and Norwegian), was shown to meet strict methodological criteria of the original RST and yielded similar results. This allowed direct comparisons of RST results to be made across different language groups.[10]
In attempts to formulate a standardized version of the RST, researchers have critically examined numerous problems with both the original task and its variants.
Daneman and Carpenter found that reading span was much more strongly related to reading comprehension than word span. Later research corroborated the finding that reading span is more closely related to language comprehension than word span.[11]
|
https://en.wikipedia.org/wiki/Reading_span_task
|
Theserial comma(also referred to as theseries comma,Oxford comma,[1]orHarvard comma[2]) is acommaplaced after the second-to-last term in a list (just before theconjunction) when writing out three or more terms.[3][4][5]For example, a list of three countries might be punctuated with the serial comma as "France, Italy, and Spain" or without it as "France, Italy and Spain". The serial comma can help avoid ambiguity in some situations, but can also create it in others.[6]There is no universally accepted standard for its use.[7]
The serial comma is popular informal writing(such as inacademic,literary, andlegalcontexts)[8][9]but is usually omitted injournalismas a way to save space.[9][10][11]Its popularity in informal and semi-formal writing depends on thevariety of English; it is usually excluded inBritish English, while inAmerican Englishit is common and often considered mandatory outside journalism.[12][13][14]Academic and legalstyle guidessuch as theAPA style,[15]The Chicago Manual of Style,Garner's Modern American Usage,[16]StrunkandWhite'sThe Elements of Style,[17]and theU.S. Government Printing OfficeStyle Manual[18]either recommend or require the serial comma, as doesThe Oxford Style Manual(hence the alternative name "Oxford comma").[13]Newspaper stylebooks such as theAssociated Press Stylebook,The New York Times Style Book,[19]andThe Canadian Pressstylebook typically recommend against it. Most British style guides do not require it, withThe EconomistStyle Guidenoting most British writers use it only to avoid ambiguity.[12]
While many sources provide default recommendations on whether to use the serial comma as a matter of course, most also include exceptions for situations where it is necessary to avoid ambiguity (seeSerial comma § Recommendations by style guides).[20]
The comma itself is widely attributed toAldus Manutius, a 15th-century Italian printer who used a mark—now recognized as a comma—to separate words.[21]Etymologically, the wordcomma, which became widely used to describe Manutius's mark, comes from the Greekκόμμα(lit.'to cut off').[22]The serial comma has been used for centuries in a variety of languages, though not necessarily in a uniform or regulated manner.[23]
The serial comma is most often attributed toHorace Hart, the printer and controller of theOxford University Pressfrom 1893 to 1915. Hart wrote the eponymousHart's Rules for Compositors and Readersin 1905 as a style guide for the employees working at the press.[24]The guide called for the use of the serial comma,[25]but the punctuation mark had no distinct name until 1978, when Peter Sutcliffe referred to the serial comma as such in his historical account of the Oxford University Press.[26]
Sutcliffe, however, attributed the serial comma not to Horace Hart but toF. Howard Collins,[26]who mentioned it in his 1905 book,Author & Printer: A Guide for Authors, Editors, Printers, Correctors of the Press, Compositors, and Typists.[27]
Common arguments for the consistent use of the serial comma are:
Common arguments against the consistent use of the serial comma are:
Many sources are against both systematic use and systematic avoidance of the serial comma, making recommendations in a more nuanced way as reflected inrecommendations by style guides.
Omitting the serial comma may create ambiguity; writers who normally avoid the comma often use one to avoid this. Consider the apocryphal book dedication "To my parents, Mother Teresa and the pope":[34]
There is ambiguity about the writer's parentage as "Mother Teresa and the pope" can be read as an appositive phrase renaming of[35] my parents, leading the reader to believe that the writer claims that Mother Teresa and the pope are their parents. A comma before the and removes the ambiguity: "To my parents, Mother Teresa, and the pope."
Nevertheless, lists can also be written in other ways that eliminate the ambiguity without introducing the serial comma, such as by changing the word order, or by using other or no punctuation to introduce or delimit them (though the emphasis may thereby be changed).
An example collected by Nielsen Hayden was found in a newspaper account of a documentary about Merle Haggard: "Among those interviewed were his two ex-wives, Kris Kristofferson and Robert Duvall."
A serial comma following "Kris Kristofferson" would help prevent this being understood as Kris Kristofferson and Robert Duvall being the ex-wives in question.
In some circumstances, using the serial comma can create ambiguity. If the book dedication above is changed to "To my mother, Mother Teresa, and the pope",
the comma afterMother Teresacreates ambiguity because it can be read as an appositive phrase implying that the writer's mother is Mother Teresa. This leaves it unclear whether this is a list of three entities (1, my mother; 2, Mother Teresa; and 3, the pope) or of only two entities (1, my mother, who is Mother Teresa; and 2, the pope).[6]
Also consider "They went to Oregon with Betty, a maid, and a cook":
This is ambiguous because it is unclear whether "a maid" is an appositive renaming of Betty or the second item in a list of three people. On the other hand, removing the final comma ("They went to Oregon with Betty, a maid and a cook")
leaves the possibility that Betty is both a maid and a cook (with "a maid and a cook" read as an appositive phrase).[37]In this case, neither the serial-comma style—nor the no-serial-comma style—resolves the ambiguity. A writer who intends a list of three distinct people (Betty, maid, cook) may create an ambiguous sentence, regardless of whether the serial comma is adopted. Furthermore, if the reader is unaware of which convention is being used, both styles can be ambiguous in cases such as this.
These forms (among others) would remove the ambiguity:
Ambiguities can often be resolved by the selective use of semicolons instead of commas when more separation is required.[38] Style guides generally call for semicolons when the individual items themselves contain commas or coordinating conjunctions, in which case a "serial semicolon" before the final item is typically required.[39]
Lynne Trusswrites: "There are people who embrace the Oxford comma, and people who don't, and I'll just say this:neverget between these people when drink has been taken."[14]
Omitting a serial comma is often characterized as a journalistic style of writing, as contrasted with a more academic or formal style.[8][9][11]Journalists typically do not use the comma, possibly for economy of space.[10]In Australia and Canada, the comma is typically avoided in non-academic publications unless its absence produces ambiguity.
It is important that the serial comma's usage within a document be consistent;[40]inconsistent usage can seem unprofessional.[11]
In the U.S. state ofMaine, the lack of a serial comma became the deciding factor in a $13 million lawsuit filed in 2014 that was eventually settled for $5 million in 2017. The U.S. appeals judgeDavid J. Barronwrote, "For want of a comma, we have this case."[56][57][58]
InO'Connor v. Oakhurst Dairy,[59]a federal court of appeals was required to interpret astatuteunder which the "canning, processing, preserving, freezing, drying, marketing, storing, packing for shipment or distribution" of certain goods were activities exempted from the general requirement of overtime pay. The question was whether this list included the distribution of the goods, or only the packing of the goodsfordistribution. The lack of a comma suggested one meaning, while the omission of the conjunctionorbefore "packing" and the fact that theMaine Legislative Drafting Manualadvised against use of the serial comma suggested another. It said "Although authorities on punctuation may differ, when drafting Maine law or rules, don't use a comma between the penultimate and the last item of a series."[60]In addition to the absence of a comma, the fact that the word chosen was "distribution" rather than "distributing" was also a consideration,[61]as was the question of whether it would be reasonable to consider the list to be anasyndeticlist. Truck drivers demanded overtime pay; the defense conceded that the expression was ambiguous but said it should be interpreted as exempting distribution activity from overtime pay.[61]Thedistrict courtagreed with the defense and held that "distribution" was an exempt activity. On appeal, however, theFirst Circuitdecided that the sentence was ambiguous and "because, under Maine law, ambiguities in the state's wage and hour laws must be construed liberally in order to accomplish their remedial purpose", adopted the drivers' narrower reading of the exemption and ruled that those who distributed the goods were entitled to overtime pay. Oakhurst Dairy settled the case by paying $5 million to the drivers,[62]and the phrase in the law in question was later changed to use serial semicolons and "distributing" – resulting in "canning; processing; preserving; freezing; drying; marketing; storing; packing for shipment; or distributing".[63]
The opinion in the case said that 43 of the 50 U.S. states had mandated the use of a serial comma and that both chambers of thefederal congresshad warned against omitting it, in the words of the U.S. House Legislative Counsel's Manual on Drafting Style, "to prevent any misreading that the last item is part of the preceding one"; only seven states "either do not require or expressly prohibited the use of the serial comma".[30][31]
In 2020, acommemorative 50p coinwas brought into circulation in the United Kingdom to mark "Brexitday", January 31, 2020, minted with the phrase "Peace, prosperity and friendship with all nations". English novelistPhilip Pullmanand others criticized the omission of the serial comma, while others said it was anAmericanismand not required in this instance.[64][65]
|
https://en.wikipedia.org/wiki/Serial_comma
|
"The Purple People Eater" is anovelty songwritten and performed bySheb Wooley, which reached number one on theBillboardpop charts in 1958 from June 9 to July 14, number one inCanada,[5]number 12 overall in the UK Singles Chart, and topped the Australian chart.
The premise of the song came from a joke told by the child of a friend of Wooley's, fellow songwriterDon Robertson:
What has one eye, one horn, flies and eats people?
A one-eyed, one-horned, flying people eater.
Wooley finished composing the song within an hour,[6]later describing it as "undoubtedly the worst song he had ever written.”[citation needed]According to Wooley,MGM Recordsinitially rejected the song, saying that it was not the type of music with which they wanted to be identified. Anacetateof the song reached MGM Records' New York office. The acetate became popular with the office's young people. Up to 50 people would listen to the song at lunchtime. The front office noticed, reconsidered their decision, and decided to release the song.[7]
The recording was arranged byNeely Plumb.[8]The voice of the purple people eater is a sped-up recording, giving it a voice similar to, but not quite as high-pitched or as fast as,Mike Sammes's 1957 "Pinky and Perky", orRoss Bagdasarian's "Witch Doctor", another hit from earlier in 1958; and "The Chipmunk Song", which was released late in 1958.Alvin and the Chipmunkseventually covered "Purple People Eater" for their 1998 albumThe A-Files: Alien Songs. The sound of a toysaxophonewas produced in a similar fashion, as the saxophone was originally recorded at a reduced speed.[6]
"The Purple People Eater" tells how a strange creature from outer space (described as a "one-eyed, one-horned, flying, purple people eater") descends to Earth because it wants to be in arock 'n' rollband. Much of the song's humor derives from toying with the listener's expectations. The creature is initially described as having "one long horn", suggestive of ananatomical horn, yet the song ends with the creature playing music from the horn, implying that it isacousticorinstrumental.
Likewise, challenging the listener's assumption that the creature is a purple-colored people-eater, the creature asserts that it eats purple people:
I said Mr. Purple People Eater, what's your line?He said eating purple people, and it sure is fine,But that's not the reason that I came to landI wanna get a job in a rock 'n' roll band.[9][10][11]
The creature also declines to eat the narrator "'cause [he's] sotough", a term which can be interpreted either as fierce or not easily chewed.
Attempts to clarify theambiguitiesin the song have persisted since its original release. The 1958sheet musicportrayed a purple creature playing the single horn on his head like awoodwind instrument, and MGM used the same image onrecord sleevesin foreign markets such as Australia and Japan. In response to requests from radiodisc jockeysto portray the creature, listeners drew pictures that show a purple-colored people eater.[6]
The Sheb Wooley version crossed to theBillboardR&B Best Sellers in Storeschart, peaking at number 18.[13]
Jackie Denniscovered the song in 1958, and his version reached number 29 in the UK.[14]
Judy Garlandrecorded the song on her 1958 Capitol Records albumGarland at the Grove, accompanied byFreddy Martinand his Orchestra, issued as Capitol T 1118 (mono) and ST 1118 (stereo).[15]
Wooley recorded another version of the song in 1967, titled "The Purple People Eater #2" and credited to hisalter egoBen Colder, on the MGM label.[16]
A cover version recorded by British comedianBarry Cryerreached number one in theFinnishchart after contractual reasons prevented Wooley's version being released in Scandinavia.[17]
Wooley re-recorded the song in 1979 under the title "Purple People Eater", whichGusto Recordsreleased through itsKing Recordssubsidiary.[18]
A dubstep song under the title "Purple People Eater" by the Dano-Norwegian electronic music groupPegboard Nerdswas released in 2018 and samples the original piece.[19][20]
In the May 28th, 1958 episode of Leave it to Beaver, "Flying Purple People Eater" is referenced as the answer to a riddle with which Ward Cleaver stumps Wally and "the Beaver". The enduring popularity of the song led to the nicknaming of the highly effective "Purple People Eaters", theMinnesota Vikingsdefensive line of the 1970s, whose team colors include purple.[21]
From 1982, major British toy manufacturerWaddingtonsmarketed a children's game inspired by the song. Players competed to remove tiny "people" from the rubber Purple People Eater shell, using tweezers on awire loop, which activated an alarm if coming into contact with its metal jaws.[22]
In the 1984postapocalypticnovelBrother in the Land,cannibalsare nicknamed "Purples", from the song.[23][24]
The 2022 filmNopefeatures a cinematographer, Antlers Holst, who is hired to capture an alien on camera. While preparing to capture camera footage of the alien creature, Holst recites the lyrics from "The Purple People Eater".[25]
In winter 2022/2023, theUSDAAgricultural Research Serviceheld the “Name that Holiday Pepper – Violet to Red” contest[26]on Challenge.gov to name new varieties of ornamental peppers they had developed. The winning name for a purple pepper withcayenne pepperspiciness levelwas "Purple People Heater".[27]
MarvelsupervillainBastionuses the song as a self-chosen theme song in the 2024Marvel AnimationDisney+streaming seriesX-Men '97.[28]
In 1988,a film of the same namebased on the song was released.
|
https://en.wikipedia.org/wiki/The_Purple_People_Eater
|
Buzzword bingo, also known asbullshit bingo,[1]is abingo-style game where participants prepare bingo cards withbuzzwordsand tick them off when they are uttered during an event, such as a meeting or speech. The goal of the game is to tick off a predetermined number of words in a row and then signal bingo to other players.
Buzzword bingo is generally played in situations where audience members feel that the speaker is relying too heavily on buzzwords orjargonrather than providing relevant details or clarity. Business meetings led by guest speakers or notable company personalities from higher up the pay scale are often viewed as a good opportunity for buzzword bingo, as the language used by these speakers often includes predictable references to arcane business concepts, which are perfect for use in the creation of buzzword bingo cards.
Turkey bingo requires the winner to ask a question or make a statement using his/her winning bingo words, thus signaling the win to insiders while ideally prompting the speaker to respond as if the question or statement were real. An alternate variation requires the person who has achieved bingo to raise his or her hand and use the word "Bingo" within the context of a comment or question. Other versions of the game require actually yelling "Bingo!" To avoid the reprimands that would likely result from doing so, participants may resort to looking at one another and silently mouthing the word "Bingo" instead.
An example of a buzzword bingo card for a business management meeting is shown below.[2]
The game has existed for many years, though without a universally used name, and it is likely that its creation can be credited to several people working independently.[3]
By 1992, college students in the USA were playing a game called "turkey bingo" where they guessed which classmates would dominate conversations in classrooms.[4] This led to a variant popular in business schools called "bullshit bingo" based on overused business lingo.[5] The Buzzword Bingo name was coined in early 1993 in an internal Silicon Graphics tool made by principal scientist Tom Davis in collaboration with Seth Katz, and popularized in 1993 in the first public web version by fellow employee Chris Pirazzi.[6][7] The 22 February 1994 Dilbert comic featured buzzword bingo in an office meeting.[8][9]
One documented example occurred when Al Gore, then the Vice President of the United States, known for his liberal use of buzzwords in enthusiastically promoting technology, spoke at MIT's 1996 graduation. MIT hackers had distributed bingo cards containing buzzwords to the graduating class. Gore, who had been informed of the prank, acknowledged it during his speech.[10][11]
In 2007, IBM created a TV advertisement that was based on the concept of buzzword bingo.[12] Video gaming website GameSpot hosted a video called "Executive Buzzword Bingo", in which they held a running tally of buzzwords uttered during Sony's "PlayStation Meeting 2013" conference event on 20 February 2013.[13]
|
https://en.wikipedia.org/wiki/Buzzword_bingo
|
In the technology industry, buzzword compliant is a tongue-in-cheek expression used to suggest that a particular product supports features simply because they are currently fashionable.[1][2]
Buzzword compliance is a modern version of the old practice of being checkbox compliant, ensuring that a product has all the features listed in product reviews. Since many of the decision-makers regarding technology purchases may only be semi-literate technically, the use of buzzwords makes a product sound more valuable. Among the technically literate, the phrase is sometimes used in a sardonic way, as in: "I have no idea what it does, but it sure is buzzword compliant", implying that perhaps the effort on the product has gone into marketing and public relations rather than the technology.
Technical staff, and those involved in recruiting and hiring them, also speak of a résumé or CV being "buzzword compliant" when it contains a large number of such terms. This can be a matter of some practical importance to a job-seeker. In many large organizations, those who receive and evaluate applications for employment will not be familiar with the domain of the job, and therefore can only assess buzzword compliance with the job description when deciding which applications the hiring manager will see.
Examples include:
|
https://en.wikipedia.org/wiki/Buzzword_compliant
|
A catchphrase (alternatively spelled catch phrase) is a phrase or expression recognized by its repeated utterance. Such phrases often originate in popular culture and in the arts, and typically spread through word of mouth and a variety of mass media (such as films, internet, literature and publishing, television, and radio). Some become the de facto or literal "trademark" or "signature" of the person or character with whom they originated, and can be instrumental in the typecasting of a particular actor. Catchphrases are often humorous, can be (or become) the punch line of a joke, or a callback reminder of a previous joke.
According to Richard Harris, a psychology professor at Kansas State University who studied why people like to cite films in social situations, using film quotes in everyday conversation is similar to telling a joke and a way to form solidarity with others. "People are doing it to feel good about themselves, to make others laugh, to make themselves laugh," he said. He found that all of the participants in his study had used film quotes in conversation at one point or another. "They overwhelmingly cited comedies, followed distantly by dramas and action adventure flicks." Horror films, musicals and children's films were hardly ever cited.[1]
The existence of catchphrases predates modern mass media. A description of the phenomenon is found in Extraordinary Popular Delusions and the Madness of Crowds, published by Charles Mackay in 1841:
And, first of all, walk where we will, we cannot help hearing from every side a phrase repeated with delight, and received with laughter, by men with hard hands and dirty faces, by saucy butcher lads and errand-boys, by loose women, by hackney coachmen, cabriolet-drivers, and idle fellows who loiter at the corners of streets. Not one utters this phrase without producing a laugh from all within hearing. It seems applicable to every circumstance, and is the universal answer to every question; in short, it is the favourite slang phrase of the day, a phrase that, while its brief season of popularity lasts, throws a dash of fun and frolicsomeness over the existence of squalid poverty and ill-requited labour, and gives them reason to laugh as well as their more fortunate fellows in a higher stage of society.[2]
|
https://en.wikipedia.org/wiki/Catchphrase
|
Corporate jargon (variously known as corporate speak, corporate lingo, business speak, business jargon, management speak, workplace jargon, corpospeak, corporatese, or commercialese) is the jargon often used in large corporations, bureaucracies, and similar workplaces.[1][2] The language register of the term is generally negative or disapproving. The jargon itself is often considered needlessly obscure or, alternatively, used to disguise an absence of information. Its use in corporations and other large organisations has been widely noted in media.[3]
Marketing speak is a related label for wording styles used to promote a product or service.
Corporate speak is associated with managers of large corporations, business management consultants, and occasionally government. Reference to such jargon is typically derogatory, implying the use of long, complicated, or obscure words; abbreviations; euphemisms; and acronyms. For that reason some of its forms may be considered as an argot.[2] Some of these words may be neologisms or inventions, designed purely to fit the specialized meaning of a situation or even to "spin" negative situations as positive situations, for example in the practice of greenwashing.[4] Although it is pervasive in the education field, its use has been criticized as reflecting a sinister view of students as commodities and schools as retail outlets.[5]
The use of corporate jargon is criticised for its lack of clarity as well as for its tedium, making meaning and intention opaque and understanding difficult.[6] It is also criticized for not only enabling delusional thoughts, but allowing them to be seen as an asset in the workplace.[7] Corporate jargon has been criticized as "pompous" and "a tool for making things seem more impressive than they are".[3] Steven Poole writes that it is "engineered to deflect blame, complicate simple ideas, obscure problems, and perpetuate power relations".[8]
Marketing speak is a related label for wording styles used to promote a product or service to a wide audience by seeking to create the impression that the vendors of the service possess a high level of sophistication, skill, and technical knowledge. Such language is often used in marketing press releases, advertising copy, and prepared statements read by executives and politicians.[citation needed]
Many corporate-jargon terms have straightforward meanings in other contexts (e.g., leverage in physics, or picked up with a well-defined meaning in finance), but are used more loosely in business speak. For example, a deliverable can become any service or product.[9] The word team had specific meanings in agriculture and in sport before becoming a ubiquitous synonym for a group spanning one or more levels in a corporate organisation.[10]
The phrases going forward or moving forward make a confident gesture towards the future, but are generally vague on timing, which usually means they can be removed from a sentence with little or no effect on its overall meaning.[11]
In order to obfuscate or distract from unpleasant or unwanted news, filler such as the phrase "at this time" or overly complicated grammatical constructions – e.g. usage of the present progressive – is frequently used at the beginning of a sentence despite its clear redundancy. Examples include "At this time, we have decided we are not going to move forward with your application" when "We have decided not to move forward with your application" would suffice.[12]
Legal terms such as Chapter 11 can be used: for example, Chapter 11, Title 11, United States Code is about US bankruptcy.[citation needed]
Some systems of corporate jargon recycle pop ethics with terms such as responsibility.[13]
Corporate speak in non-English-speaking countries frequently contains borrowed English acronyms, words, and usages.[14] Russian-speakers, for instance, may eschew native constructions and use words such as лидер (literally: lider for 'leader') or adopt forms such as пиарщик (piarshchik for 'PR specialist').[citation needed]
Jargon, like other manifestations of language, can change over time; and management fads may influence management-speak. This changing popularity over time can be seen in the English corpus used by Google Books Ngram Viewer.[15][16]
|
https://en.wikipedia.org/wiki/Corporate_jargon
|
The Gartner hype cycle is a graphical presentation developed, used and branded by the American research and advisory firm Gartner to represent the maturity, adoption, and social application of specific technologies. The hype cycle framework was introduced in 1995 by Gartner analyst Jackie Fenn[1] to provide a graphical and conceptual presentation of the maturity of emerging technologies through five phases.[2]
Gartner's hype cycle framework was introduced in 1995 by analyst Jackie Fenn, who had joined the firm the year before.[1]In her research reports, Fenn identified common patterns related to the maturity of emerging technologies.[3]Fenn referred to this familiar progression as a "hype cycle" and created a graph depicting its ups and downs with each distinct stage given a title, starting with Technology trigger and ending with Plateau of productivity.[4]The chart was included in a one-off research report, but it was popular with other Gartner analysts and clients and the "Hype Cycle of Emerging Technologies" was soon developed into an annual report.[3]
Each hype cycle drills down into the five key phases of a technology's life cycle.
The term "hype cycle" and each of the associated phases are now used more broadly in themarketingof new technologies.
Hype (in the more general media sense of the term "hype"[6]) has played a large part in the adoption ofnew media. Analyses of the Internet in the 1990s featured large amounts of hype,[7][8][9]and that created "debunking" responses.[6]A longer-term historical perspective on such cycles can be found in the research of the economistCarlota Perez.[10]Desmond Roger Laurence, in the field ofclinical pharmacology, described a similar process indrug developmentin the seventies.[citation needed]
There have been numerous criticisms[11][12][13][14]of the hype cycle, prominent among which are that it is not a cycle, that the outcome does not depend on the nature of the technology itself, that it is not scientific in nature, and that it does not reflect changes over time in the speed at which technology develops. Another is that it is limited in its application, as it prioritizes economic considerations in decision-making processes. It seems to assume that a business' performance is tied to the hype cycle, whereas this may actually have more to do with the way a company devises its branding strategy.[citation needed]A related criticism is that the "cycle" has no real benefits to the development or marketing of new technologies and merely comments on pre-existing trends. Specific disadvantages when compared to, for example,technology readiness levelare:
An analysis of Gartner Hype Cycles since 2000[14]shows that few technologies actually travel through an identifiable hype cycle, and that in practice most of the important technologies adopted since 2000 were not identified early in their adoption cycles.
The Economist researched the hype cycle in 2024:[15]
We find, in short, that the cycle is a rarity. Tracing breakthrough technologies over time, only a small share—perhaps a fifth—move from innovation to excitement to despondency to widespread adoption. Lots of tech becomes widely used without such a rollercoaster ride. Others go from boom to bust, but do not come back. We estimate that of all the forms of tech which fall into the trough of disillusionment, six in ten do not rise again.
|
https://en.wikipedia.org/wiki/Gartner_hype_cycle
|
An ideograph or virtue word is a word frequently used in political discourse that uses an abstract concept to develop support for political positions. Such words are usually terms that do not have a clear definition but are used to give the impression of a clear meaning. An ideograph in rhetoric often exists as a building block or simply one term or short phrase that summarizes the orientation or attitude of an ideology. Notable examples include <liberty>, <freedom>, <democracy> and <rights>. Rhetorical critics use chevrons or angle brackets (<>) to mark off ideographs.
The term ideograph was coined by rhetorical scholar and critic Michael Calvin McGee (1980) describing the use of particular words and phrases as political language in a way that captures (as well as creates or reinforces) particular ideological positions. McGee sees the ideograph as a way of understanding how specific, concrete instances of political discourse relate to the more abstract idea of political ideology.[1] Robertson defines ideographs as "political slogans or labels that encapsulate ideology in political discourse."[2] Meanwhile, Celeste Condit and John Lucaites, influenced by McGee, explain, "Ideographs represent in condensed form the normative, collective commitments of the members of a public, and they typically appear in public argumentation as the necessary motivations or justifications for action performed in the name of the public."[3] Ideographs are common in advertising and political discourse.
McGee uses the term in his seminal article "The 'Ideograph': A Link Between Rhetoric and Ideology", which appeared in the Quarterly Journal of Speech in 1980.[4] He begins his essay by defining the practice of ideology as the practice of political language in specific contexts—actual discursive acts by individual speakers and writers. The question this raises is how this practice of ideology creates social control.
McGee's answer to this is to say that "political language which manifests ideology seems characterized by slogans, a vocabulary of 'ideographs' easily mistaken for the technical terminology of political philosophy."[4] He goes on to offer his definition of "ideograph": "an ideograph is an ordinary-language term found in political discourse. It is a high order abstraction representing commitment to a particular but equivocal and ill-defined normative goal."[4]
An ideograph, then, is not just any particular word or phrase used in political discourse, but one of a particular subset of terms that are often invoked in political discourse but which does not have a clear, univocal definition. Despite this, in their use, ideographs are often invoked precisely to give the sense of a clearly understood and shared meaning. This potency makes them the primary tools for shaping public decisions. It is in this role as the vocabulary for public values and decision-making that they are linked to ideology.
There is no absolute litmus test for what terms are or are not ideographs. Rather, this is a judgment that must be made through the study of specific examples of discourse. However, McGee (and others who have followed him) have identified several examples of ideographs or virtue words in Western liberal political discourse, such as <liberty>, <property>, <freedom of speech>, <religion>, and <equality>. In each case, the term does not have a specific referent. Rather, each term refers to an abstraction which may have many different meanings depending on its context. It is their mutability between circumstances that gives the terms such rhetorical power. If the definition of a term such as <equality> can be stretched to include a particular act or condition, then public support for that act or condition is likely to be stronger than it was previously.
By encapsulating values which are perceived to be widely shared by the community, but which are in fact highly abstract and defined in very different ways by individuals, ideographs provide a potent persuasive tool for the political speaker. McGee offers the example of Richard Nixon's attempt to defend his decision not to turn over documents to Congress during the Watergate scandal by invoking "the principle of confidentiality." Recognizing that his refusal to submit to Congress could be seen as a violation of the "rule of law", Nixon pitted "the principle of confidentiality" against the "rule of law," despite the fact that these two ideographs would, in the abstract, not likely be seen as in conflict with one another.[4] Nixon, in an attempt to expand the understanding of "the principle of confidentiality" to cover his own specific refusal to cooperate with Congress, used the abstractness of the term to his benefit, claiming that the right to confidentiality was the more central term.[4]
While the term has remained mostly in this sphere of academic rhetorical criticism, some political consultants and practitioners are becoming savvy to this art.
Ideographs appear in advertising and political campaigns regularly, and are crucial to helping the public understand what is really being asked of them. For example, "equality" is a term commonly used in political discourse and rarely defined. It can refer to a situation in which all people have the same opportunities, or a condition in which social resources are distributed uniformly to different individuals and groups.[5] The former is the more commonly used definition in US history, according to Condit & Lucaites, although in a socialist or left-leaning political state, the term may refer foremost to the distribution of social resources. Condit and Lucaites depict the racial facet of equality as the dominant meaning in the American context of political discourse since 1865.[6]
Another important ideograph, used specifically by U.S. presidents Barack Obama and George W. Bush after the 9/11 attacks, is <terrorism>. The term does not have a clear or specific definition, but when applied in the fear-stricken country after the devastating attacks of 2001, it held significant weight and meaning for Americans across the country. Kelly Long explores Obama's discourse on the <War on Terror> and states that "by developing an ideological justification for the conflicts that the United States was involved in at the time, Obama remedied much of the damage done by the Bush administration".[7] Obama justified the <War on Terror> by addressing the nation and saying that in order to protect the <rule of law> and <democratic values>, we must fight against <terrorism>. Obama used this term to his advantage, making <terrorism> appear to be a common enemy and fighting back the common cause.[8] This use of the ideograph unified the country, creating a sense of identity for American citizens, "defining what the nation stands for and against. The term divides those who are civilized from those who are uncivilized, those who defend economic freedom from those who would attack America's way of life and those who support democracy from those who would disrupt it".[9]
Marouf Hasian discusses how key ideographs representative of a society's commitments change over time, particularly in the name of <liberty>, <equality>, or <privacy>, as epitomized in eugenics. From the 1900s to the 1930s, Americans justified the restriction of reproductive rights based on medical, social, economic, and political considerations, but were appalled when the Nazis used some of the same arguments in their creation of the "perfect race".[10]
While rhetorical critics identify these terms as ideographs, political leaders viewed each other's terms as "glittering generalities," as Lincoln first identified his opponent's words.[11]
In addition to practitioners, corporate marketing and political consulting use key terms in this way, concentrating on the image and branding of terms. For example, Frank Luntz tests audience reaction to certain words or phrases using dial technology, a mechanism which instantaneously shows moment-by-moment reactions to speeches or presentations. This research has been extremely beneficial to his clients, as they can use ideographs as "trigger words" in an advertising campaign.[12]
There are three primary ways in which the concept of the ideograph is important to rhetorical critics. First, it suggests a way of studying political ideology using concrete instances of language use. By showing how looking at specific uses of key words and phrases in political language reveals underlying ideological commitments, McGee offers a concrete method for understanding the highly abstract concept of ideology.
Second, the definition of the ideograph makes clear that the rhetorical study of a term is different from a legal, historical, or etymological study of a term. Unlike other perspectives that focus on how a term has changed over time, a rhetorical study of a term focuses on the forces involved in the creation of these meanings. In short, a rhetorical study of a term is the study of the use of that term in practice.
This leads to a third key aspect of what the concept of the ideograph offers to rhetorical critics. McGee notes that the study of a term must not, and should not, be limited to its use in "formal discourse." Instead, the critic is much more likely to gain a better understanding of an ideograph by looking at how it is used and depicted in movies, plays, and songs, as well as how it is presented in educational texts aimed at children. This moves the study of ideology beyond the limits of social philosophy or even political discourse as traditionally conceived (i.e., "great speeches by great men").
"An ideograph is a culturally biased, abstract word or phrase drawn from ordinary language, which serves a constitutional value for a historically situated collectivity."[6]
There exists a culturally specific understanding in each culture about what an ideograph means. Ideographs in rhetoric are culturally specific but recur inter-culturally, meaning that the understanding of one ideograph can be used and interpreted differently across cultures. The idea may differ from culture to culture, but this does not mean that some aspects will not be shared by one or more cultures. For example, the concept of femininity exists cross-culturally to define ideas about women, yet one can expect these ideas to vary from culture to culture.
At the end of his essay defining the ideograph, McGee says that
"A complete description of an ideology . . . will consist of (1) the isolation of a society's ideographs, (2) the exposure and analysis of the diachronic structure of every ideograph, and (3) characterization of synchronic relationships among all the ideographs in a particular context."[4]
Such an exhaustive study of any ideology has yet to materialize, but many scholars have made use of the ideograph as a tool for understanding both specific rhetorical situations and a broader scope of ideological history. As a teacher, McGee himself made use of the ideograph as a tool for structuring the study of the rise of liberalism in British public address, focusing on ideographs such as <property>, <patriarchy>, <religion>, and <liberty>. Other scholars have made a study of specific uses of ideographs such as <family values>[13] and <equality>.[14] Some critics have gone beyond the idea that an ideograph must be a verbal symbol and have expanded the notion to include photographs[15] and objects represented in mass media.[16]
|
https://en.wikipedia.org/wiki/Ideograph_(rhetoric)
|
The law of the instrument, law of the hammer,[1] Maslow's hammer, or golden hammer[a] is a cognitive bias that involves an over-reliance on a familiar tool. Abraham Maslow wrote in 1966, "it is tempting, if the only tool you have is a hammer, to treat everything as if it were a nail."[2]
The concept is attributed both to Maslow[3] and to Abraham Kaplan,[4][5] although the hammer and nail line may not be original to either of them.
The English expression "a Birmingham screwdriver", meaning a hammer, refers to the practice of using the one tool for all purposes, and predates both Kaplan and Maslow by at least a century.[6]
In 1868, a London periodical, Once a Week, contained this observation: "Give a boy a hammer and chisel; show him how to use them; at once he begins to hack the doorposts, to take off the corners of shutter and window frames, until you teach him a better use for them, and how to keep his activity within bounds."[7]
The first recorded statement of the concept was Abraham Kaplan's, in 1964: "I call it the law of the instrument, and it may be formulated as follows: Give a small boy a hammer, and he will find that everything he encounters needs pounding."[8]
In February 1962 Kaplan, then a professor of philosophy, gave a banquet speech at a conference of the American Educational Research Association that was being held at UCLA. An article in the June 1962 issue of the Journal of Medical Education stated that "the highlight of the 3-day meeting ... was to be found in Kaplan's comment on the choice of methods for research. He urged that scientists exercise good judgment in the selection of appropriate methods for their research. Because certain methods happen to be handy, or a given individual has been trained to use a specific method, is no assurance that the method is appropriate for all problems. He cited Kaplan's Law of the Instrument: 'Give a boy a hammer and everything he meets has to be pounded.'"
In The Conduct of Inquiry: Methodology for Behavioral Science (1964), Kaplan again mentioned the law of the instrument, saying, "It comes as no particular surprise to discover that a scientist formulates problems in a way which requires for their solution just those techniques in which he himself is especially skilled." And in a 1964 article for The Library Quarterly, he again cited the law and commented: "We tend to formulate our problems in such a way as to make it seem that the solutions to those problems demand precisely what we already happen to have at hand."[7]
In a 1963 essay collection, Computer Simulation of Personality: Frontier of Psychological Theory, Silvan Tomkins wrote about "the tendency of jobs to be adapted to tools, rather than adapting tools to jobs". He wrote: "If one has a hammer one tends to look for nails, and if one has a computer with a storage capacity, but no feelings, one is more likely to concern oneself with remembering and with problem solving than with loving and hating." In the same book, Kenneth Mark Colby explicitly cited the law, writing: "The First Law of the Instrument states that if you give a boy a hammer, he suddenly finds that everything needs pounding. The computer program may be our current hammer, but it must be tried. One cannot decide from purely armchair considerations whether or not it will be of any value."[7]
Maslow's hammer, popularly phrased as "if all you have is a hammer, everything looks like a nail" and variants thereof, is from Abraham Maslow's The Psychology of Science, published in 1966. Maslow wrote: "I remember seeing an elaborate and complicated automatic washing machine for automobiles that did a beautiful job of washing them. But it could do only that, and everything else that got into its clutches was treated as if it were an automobile to be washed. I suppose it is tempting, if the only tool you have is a hammer, to treat everything as if it were a nail."[7][2]
In 1967, Lee Loevinger of the Federal Communications Commission dubbed the law "Loevinger's law of irresistible use", and applied it to government: "The political science analogue is that if there is a government agency, this proves something needs regulating."
In 1984, investor Warren Buffett criticized academic studies of financial markets that made use of inappropriate mathematical approaches:
It isn't necessarily because such studies have any utility; it's simply that the data are there and academicians have worked hard to learn the mathematical skills needed to manipulate them. Once these skills are acquired, it seems sinful not to use them, even if the usage has no utility or negative utility. As a friend said, to a man with a hammer, everything looks like a nail.[7]
In his 2003 book, Of Paradise and Power, historian Robert Kagan suggested a corollary to the law: "When you don't have a hammer, you don't want anything to look like a nail." According to Kagan, the corollary explains the difference in views on the use of military force the United States and Europe have held since the end of World War II.[9]
Some critics of psychiatry claim that the law of the instrument leads to the over-prescription of psychiatric drugs.[10][11]
The notion of a golden hammer, "a familiar technology or concept applied obsessively to many software problems", was introduced into information technology literature in 1998 as an anti-pattern: a programming practice to be avoided.[12]
Software developer José M. Gilgado has written that the law is still relevant in the 21st century and is highly applicable to software development. Many times software developers, he observed, "tend to use the same known tools to do a completely new different project with new constraints". He blamed this on "the comfort zone state where you don't change anything to avoid risk. The problem with using the same tools every time you can is that you don't have enough arguments to make a choice because you have nothing to compare to and is limiting your knowledge." The solution is "to keep looking for the best possible choice, even if we aren't very familiar with it". This includes using a computer language with which one is unfamiliar. He noted that the product RubyMotion enables developers to "wrap" unknown computer languages in a familiar computer language and thus avoid having to learn them. But Gilgado found this approach inadvisable, because it reinforces the habit of avoiding new tools.[13]
Other forms of narrow-minded instrumentalism[14] include: déformation professionnelle, a French term for "looking at things from the point of view of one's profession", and regulatory capture, the tendency for regulators to look at things from the point of view of the profession they are regulating.
|
https://en.wikipedia.org/wiki/Law_of_the_instrument
|
Loaded language[a] is rhetoric used to influence an audience by using words and phrases with strong connotations. This type of language is very often made vague to more effectively invoke an emotional response and/or exploit stereotypes.[1][2][3] Loaded words and phrases have significant emotional implications and involve strongly positive or negative reactions beyond their literal meaning.
Loaded terms, also known as emotive or ethical words, were clearly described by Charles Stevenson.[4][5][6] He noticed that there are words that do not merely describe a possible state of affairs. "Terrorist" is not used only to refer to a person who commits specific actions with a specific intent. Words such as "torture" or "freedom" carry with them something more than a simple description of a concept or an action.[7] They have a "magnetic" effect, an imperative force, a tendency to influence the interlocutor's decisions.[8] They are strictly bound to moral values leading to value judgements and potentially triggering specific emotions. For this reason, they have an emotive dimension. In modern psychological terminology, we can say that these terms carry "emotional valence",[9] as they presuppose and generate a value judgement that can lead to an emotion.[10]
The appeal to emotion is in contrast to an appeal to logic and reason. Authors R. Malcolm Murray and Nebojša Kujundžić distinguish "prima facie reasons" from "considered reasons" when discussing this. An emotion, elicited via emotive language, may form a prima facie reason for action, but further work is required before one can obtain a considered reason.[2]
Emotive arguments and loaded language are particularly persuasive because they exploit the human weakness for acting immediately based upon an emotional response, without such further considered judgement. Due to such potential for emotional complication, it is generally advisable to avoid loaded language in argument or speech when fairness and impartiality are among the goals. Anthony Weston, for example, admonishes students and writers: "In general, avoid language whose only function is to sway the emotions".[1][2]
One aspect of loaded language is that loaded words and phrases occur in pairs, sometimes as political framing techniques by individuals with opposing agendas. Heller calls these "a Boo! version and a Hooray! version" to differentiate those with negative and positive emotional connotations. Examples include bureaucrat versus public servant, anti-abortion versus pro-life, regime versus government, and elitist versus expert.[11]
Politicians employ euphemisms,[12] and study how to use them effectively: which words to use or avoid using to gain political advantage or disparage an opponent. Speechwriter and journalist Richard Heller gives the example that it is common for a politician to advocate "investment in public services," because it has a more favorable connotation than "public spending."[11]
In the 1946 essay "Politics and the English Language", George Orwell discussed the use of loaded language in political discourse:
The word Fascism has now no meaning except in so far as it signifies "something not desirable." The words democracy, socialism, freedom, patriotic, realistic, justice have each of them several different meanings which cannot be reconciled with one another. In the case of a word like democracy, not only is there no agreed definition, but the attempt to make one is resisted from all sides. It is almost universally felt that when we call a country democratic we are praising it: consequently the defenders of every kind of regime claim that it is a democracy, and fear that they might have to stop using that word if it were tied down to any one meaning.[13]
|
https://en.wikipedia.org/wiki/Loaded_language
|