Logology is the study of all things related to science and its practitioners—philosophical, biological, psychological, societal, historical, political, institutional, financial. The term "logology" is back-formed from the suffix "-logy", as in "geology", "anthropology", etc., in the sense of the "study of science".[1][2]
The word "logology" provides grammatical variants not available with the earlier terms "science of science" and "sociology of science", such as "logologist", "logologize", "logological", and "logologically".[a] The emerging field of metascience is a subfield of logology.
The early 20th century brought calls, initially from sociologists, for the creation of a new, empirically based science that would study the scientific enterprise itself.[5] The early proposals were put forward with some hesitancy and tentativeness.[6][b] The new meta-science would be given a variety of names,[8] including "science of knowledge", "science of science", "sociology of science", and "logology".
Florian Znaniecki, who is considered to be the founder of Polish academic sociology, and who in 1954 also served as the 44th president of the American Sociological Association, opened a 1923 article:[9]
[T]hough theoretical reflection on knowledge—which arose as early as Heraclitus and the Eleatics—stretches... unbroken... through the history of human thought to the present day... we are now witnessing the creation of a new science of knowledge [author's emphasis] whose relation to the old inquiries may be compared with the relation of modern physics and chemistry to the 'natural philosophy' that preceded them, or of contemporary sociology to the 'political philosophy' of antiquity and the Renaissance. [T]here is beginning to take shape a concept of a single, general theory of knowledge... permitting of empirical study.... This theory... is coming to be distinguished clearly from epistemology, from normative logic, and from a strictly descriptive history of knowledge.[10]
A dozen years later, Polish husband-and-wife sociologists Stanisław Ossowski and Maria Ossowska (the Ossowscy) took up the same subject in an article on "The Science of Science"[11] whose 1935 English-language version first introduced the term "science of science" to the world.[12] The article postulated that the new discipline would subsume such earlier ones as epistemology, the philosophy of science, the psychology of science, and the sociology of science.[13] The science of science would also concern itself with questions of a practical character, such as social and state policy in relation to science: the organization of institutions of higher learning, of research institutes, and of scientific expeditions, and the protection of scientific workers, etc. It would concern itself as well with historical questions: the history of the conception of science, of the scientist, of the various disciplines, and of learning in general.[14]
In their 1935 paper, the Ossowscy mentioned the German philosopher Werner Schingnitz (1899–1953) who, in fragmentary 1931 remarks, had enumerated some possible types of research in the science of science and had proposed his own name for the new discipline: scientiology. The Ossowscy took issue with the name:
Those who wish to replace the expression 'science of science' by a one-word term [that] sound[s] international, in the belief that only after receiving such a name [will] a given group of [questions be] officially dubbed an autonomous discipline, [might] be reminded of the name 'mathesiology', proposed long ago for similar purposes [by the French mathematician and physicist André-Marie Ampère (1775–1836)].[15]
Yet, before long, in Poland, the unwieldy three-word term nauka o nauce, or science of science, was replaced by the more versatile one-word term naukoznawstwo, or logology, and its natural variants: naukoznawca or logologist, naukoznawczy or logological, and naukoznawczo or logologically. And just after World War II, only 11 years after the Ossowscy's landmark 1935 paper, the year 1946 saw the founding of the Polish Academy of Sciences' quarterly Zagadnienia Naukoznawstwa (Logology) – long before similar journals in many other countries.[16][c]
The new discipline also took root elsewhere—in English-speaking countries, without the benefit of a one-word name.
The word "science", from theLatin"scientia" (meaning "knowledge"), signifies somewhat different things in different languages. InEnglish, "science", when unqualified, generally refers to theexact,natural, orhard sciences.[18]The corresponding terms in other languages, for exampleFrench,German, andPolish, refer to a broader domain that includes not only the exact sciences (logicandmathematics) and the natural sciences (physics,chemistry,biology,Earth sciences,astronomy, etc.) but also theengineering sciences,social sciences(human geography,psychology,cultural anthropology,sociology,political science,economics,linguistics,archaeology, etc.), andhumanities(philosophy,history,classics,literary theory, etc.).[19][d]
University of Amsterdamhumanities professorRens Bodpoints out that science—defined as a set ofmethodsthat describes and interpretsobservedorinferredphenomena, past or present, aimed at testinghypothesesand buildingtheories—applies to such humanities fields asphilology,art history,musicology,philosophy,religious studies,historiography, andliterary studies.[19]
Bod gives a historic example of scientifictextual analysis. In 1440 the Italian philologistLorenzo Vallaexposed theLatindocumentDonatio Constantini, or The Donation of Constantine – which was used by theCatholic Churchto legitimize its claim to lands in theWestern Roman Empire– as aforgery. Valla used historical, linguistic, and philological evidence, includingcounterfactual reasoning, to rebut the document. Valla found words and constructions in the document that could not have been used by anyone in the time ofEmperor Constantine I, at the beginning of the fourth century C.E. For example, thelate Latinwordfeudum, meaning fief, referred to thefeudal system, which would not come into existence until themedievalera, in the seventh century C.E. Valla's methods were those of science, and inspired the later scientifically-minded work of Dutch humanistErasmus of Rotterdam(1466–1536),Leiden UniversityprofessorJoseph Justus Scaliger(1540–1609), and philosopherBaruch Spinoza(1632–1677).[19]Here it is not theexperimental methoddominant in theexactandnatural sciences, but thecomparative methodcentral to thehumanities, that reigns supreme.
Science's search for thetruthabout various aspects ofrealityentails the question of the veryknowabilityof reality. PhilosopherThomas Nagelwrites: "[In t]he pursuit ofscientific knowledgethrough the interaction betweentheoryandobservation... we test theories against their observational consequences, but we also question or reinterpret our observations in light of theory. (The choice betweengeocentricandheliocentric theoriesat the time of theCopernican Revolutionis a vivid example.) ...
How things seem is the starting point for all knowledge, and its development through further correction, extension, and elaboration is inevitably the result of more seemings—consideredjudgmentsabout the plausibility and consequences of different theoreticalhypotheses. The only way to pursue the truth is to consider what seems true, after careful reflection of a kind appropriate to the subject matter, in light of all the relevant data, principles, and circumstances."[21]
The question of knowability is approached from a different perspective by physicist-astronomerMarcelo Gleiser: "What we observe is notnatureitself but nature as discerned throughdatawe collect frommachines. In consequence, the scientificworldviewdepends on theinformationwe can acquire through ourinstruments. And given that our tools are limited, our view of theworldis necessarilymyopic. We can see only so far into the nature of things, and our ever shifting scientific worldview reflects this fundamental limitation on how we perceivereality." Gleiser cites the condition ofbiologybefore and after the invention of themicroscopeorgene sequencing; ofastronomybefore and after thetelescope; ofparticle physicsbefore and aftercollidersor fast electronics. "[T]he theories we build and the worldviews we construct change as our tools of exploration transform. This trend is the trademark of science."[22]
Writes Gleiser: "There is nothing defeatist in understanding the limitations of the scientific approach to knowledge.... What should change is a sense of scientific triumphalism—the belief that no question is beyond the reach of scientific discourse.[22][e]
"There are clear unknowables in science—reasonable questions that, unless currently accepted laws of nature are violated, we cannot find answers to. One example is themultiverse: the conjecture that ouruniverseis but one among a multitude of others, each potentially with a different set oflaws of nature. Other universes lie outside our causal horizon, meaning that we cannot receive or send signals to them. Any evidence for their existence would be circumstantial: for example, scars in the radiation permeating space because of a past collision with a neighboring universe."[24]
Gleiser gives three further examples of unknowables, involving the origins of theuniverse; oflife; and ofmind:[24][f]
"Scientific accounts of the origin of theuniverseare incomplete because they must rely on a conceptual framework to even begin to work:energy conservation,relativity,quantum physics, for instance. Why does the universe operate under these laws and not others?[24]
"Similarly, unless we can prove that only one or very fewbiochemical pathwaysexist from nonlife tolife, we cannot know for sure how life originated on Earth.[24]
"Forconsciousness, the problem is the jump from thematerialto thesubjective—for example, from firingneuronsto theexperienceofpainor thecolorred. Perhaps some kind of rudimentary consciousness could emerge in a sufficiently complex machine. But how could we tell? How do we establish—as opposed to conjecture—that something is conscious?"[24]Paradoxically, writes Gleiser, it is through our consciousness that we make sense of the world, even if imperfectly. "Can we fully understand something of which we are a part?"[24]
Among all the sciences (i.e.,disciplinesof learning, writ large) there seems to exist an inverse relation betweenprecisionandintuitiveness. The most intuitive of the disciplines, aptly termed the "humanities", relate to common human experience and, even at their most exact, are thrown back on thecomparative method; less intuitive and more precise than the humanities are thesocial sciences; while, at the base of the inverted pyramid of the disciplines,physics(concerned withmattergy– thematterandenergycomprising theuniverse) is, at its deepest, the most precise discipline and at the same time utterly non-intuitive.[g][h]
Theoretical physicist and mathematician Freeman Dyson explains that "[s]cience consists of facts and theories":
"Facts are supposed to be true or false. They are discovered by observers or experimenters. A scientist who claims to have discovered a fact that turns out to be wrong is judged harshly....
"Theories have an entirely different status. They are free creations of the human mind, intended to describe our understanding of nature. Since our understanding is incomplete, theories are provisional. Theories are tools of understanding, and a tool does not need to be precisely true in order to be useful. Theories are supposed to be more-or-less true... A scientist who invents a theory that turns out to be wrong is judged leniently."[26]
Dyson cites a psychologist's description of how theories are born: "We can't live in a state of perpetual doubt, so we make up the best story possible and we live as if the story were true." Dyson writes: "The inventor of a brilliant idea cannot tell whether it is right or wrong." The passionate pursuit of wrong theories is a normal part of the development of science.[27] Dyson cites, after Mario Livio, five famous scientists who made major contributions to the understanding of nature but also believed firmly in a theory that proved wrong.[27]
Charles Darwin explained the evolution of life with his theory of natural selection of inherited variations, but he believed in a theory of blending inheritance that made the propagation of new variations impossible.[27] He never read Gregor Mendel's studies that showed that the laws of inheritance would become simple when inheritance was considered as a random process. Though Darwin in 1866 did the same experiment that Mendel had, Darwin did not get comparable results because he failed to appreciate the statistical importance of using very large experimental samples. Eventually, Mendelian inheritance by random variation would, no thanks to Darwin, provide the raw material for Darwinian selection to work on.[28]
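The role of sample size here can be illustrated with a small simulation (a hypothetical sketch, not Darwin's or Mendel's actual data): in a simulated Mendelian cross, the expected 3:1 ratio of dominant to recessive phenotypes emerges clearly only when the number of offspring counted is large.

```python
import random

def dominant_fraction(n_offspring, seed=0):
    """Simulate an Aa x Aa cross: each offspring shows the recessive
    phenotype (aa) with probability 1/4 and the dominant phenotype
    otherwise; return the observed fraction of dominant phenotypes
    (expected value: 0.75)."""
    rng = random.Random(seed)
    dominant = sum(1 for _ in range(n_offspring) if rng.random() >= 0.25)
    return dominant / n_offspring

# Small samples scatter widely around the 3:1 ratio; only large samples
# settle near it, which is the statistical point about sample size noted above.
for n in (8, 40, 400, 8000):
    print(n, round(dominant_fraction(n), 3))
```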
William Thomson (Lord Kelvin) discovered basic laws of energy and heat, then used these laws to calculate an estimate of the age of the Earth that was too short by a factor of fifty. He based his calculation on the belief that the Earth's mantle was solid and could transfer heat from the interior to the surface only by conduction. It is now known that the mantle is partly fluid and transfers most of the heat by the far more efficient process of convection, which carries heat by a massive circulation of hot rock moving upward and cooler rock moving downward. Kelvin could see the eruptions of volcanoes bringing hot liquid from deep underground to the surface; but his skill in calculation blinded him to processes, such as volcanic eruptions, that could not be calculated.[27]
Linus Pauling discovered the chemical structure of protein and proposed a completely wrong structure for DNA, which carries hereditary information from parent to offspring. Pauling guessed a wrong structure for DNA because he assumed that a pattern that worked for protein would also work for DNA. He overlooked the gross chemical differences between protein and DNA. Francis Crick and James Watson paid attention to the differences and found the correct structure for DNA that Pauling had missed a year earlier.[27]
Astronomer Fred Hoyle discovered the process by which the heavier elements essential to life are created by nuclear reactions in the cores of massive stars. He then proposed a theory of the history of the universe known as steady-state cosmology, which has the universe existing forever without an initial Big Bang (as Hoyle derisively dubbed it). He held his belief in the steady state long after observations proved that the Big Bang had happened.[27]
Albert Einstein discovered the theory of space, time, and gravitation known as general relativity, and then added a cosmological constant, later known as dark energy. Subsequently, Einstein withdrew his proposal of dark energy, believing it unnecessary. Long after his death, observations suggested that dark energy really exists, so that Einstein's addition to the theory may have been right; and his withdrawal, wrong.[27]
To Mario Livio's five examples of scientists who blundered, Dyson adds a sixth: himself. Dyson had concluded, on theoretical principles, that what was to become known as the W-particle, a charged weak boson, could not exist. An experiment conducted at CERN, in Geneva, later proved him wrong. "With hindsight I could see several reasons why my stability argument would not apply to W-particles. [They] are too massive and too short-lived to be a constituent of anything that resembles ordinary matter."[29]
Harvard University historian of science Naomi Oreskes points out that the truth of scientific findings can never be assumed to be finally, absolutely settled.[30] The history of science offers many examples of matters that scientists once thought to be settled and which have proven not to be, such as the concepts of Earth being the center of the universe, the absolute nature of time and space, the stability of continents, and the cause of infectious disease.[30]
Science, writes Oreskes, is not a fixed, immutable set of discoveries but "a process of learning and discovery [...]. Science can also be understood as an institution (or better, a set of institutions) that facilitates this work."[30]
It is often asserted that scientific findings are true because scientists use "the scientific method". But, writes Oreskes, "we can never actually agree on what that method is. Some will say it is empiricism: observation and description of the world. Others will say it is the experimental method: the use of experience and experiment to test hypotheses. (This is cast sometimes as the hypothetico-deductive method, in which the experiment must be framed as a deduction from theory, and sometimes as falsification, where the point of observation and experiment is to refute theories, not to confirm them.) Recently a prominent scientist claimed the scientific method was to avoid fooling oneself into thinking something is true that is not, and vice versa."[30]
In fact, writes Oreskes, the methods of science have varied between disciplines and across time. "Many scientific practices, particularly statistical tests of significance, have been developed with the idea of avoiding wishful thinking and self-deception, but that hardly constitutes 'the scientific method.'"[30]
Science, writes Oreskes, "is not simple, and neither is the natural world; therein lies the challenge of science communication. [...] Our efforts to understand and characterize the natural world are just that: efforts. Because we're human, we often fall flat."[30]
"Scientific theories", according to Oreskes, "are not perfect replicas of reality, but we have good reason to believe that they capture significant elements of it."[30]
Steven Weinberg, 1979 Nobel laureate in physics, and a historian of science, writes that the core goal of science has always been the same: "to explain the world"; and in reviewing earlier periods of scientific thought, he concludes that only since Isaac Newton has that goal been pursued more or less correctly. He decries the "intellectual snobbery" that Plato and Aristotle showed in their disdain for science's practical applications, and he holds Francis Bacon and René Descartes to have been the "most overrated" among the forerunners of modern science (they tried to prescribe rules for conducting science, which "never works").[31]
Weinberg draws parallels between past and present science, as when a scientific theory is "fine-tuned" (adjusted) to make certain quantities equal, without any understanding of why they should be equal. Such adjusting vitiated the celestial models of Plato's followers, in which different spheres carrying the planets and stars were assumed, with no good reason, to rotate in exact unison. But, Weinberg writes, a similar fine-tuning also besets current efforts to understand the "dark energy" that is speeding up the expansion of the universe.[32]
Ancient science has been described as having gotten off to a good start, then faltered. The doctrine of atomism, propounded by the pre-Socratic philosophers Leucippus and Democritus, was naturalistic, accounting for the workings of the world by impersonal processes, not by divine volitions. Nevertheless, these pre-Socratics come up short for Weinberg as proto-scientists, in that they apparently never tried to justify their speculations or to test them against evidence.[32]
Weinberg believes that science faltered early on due to Plato's suggestion that scientific truth could be attained by reason alone, disregarding empirical observation, and due to Aristotle's attempt to explain nature teleologically—in terms of ends and purposes. Plato's ideal of attaining knowledge of the world by unaided intellect was "a false goal inspired by mathematics"—one that for centuries "stood in the way of progress that could be based only on careful analysis of careful observation." And it "never was fruitful" to ask, as Aristotle did, "what is the purpose of this or that physical phenomenon."[32]
A scientific field in which the Greek and Hellenistic world did make progress was astronomy. This was partly for practical reasons: the sky had long served as compass, clock, and calendar. Also, the regularity of the movements of heavenly bodies made them simpler to describe than earthly phenomena. But not too simple: though the sun, moon and "fixed stars" seemed regular in their celestial circuits, the "wandering stars"—the planets—were puzzling; they seemed to move at variable speeds, and even to reverse direction. Writes Weinberg: "Much of the story of the emergence of modern science deals with the effort, extending over two millennia, to explain the peculiar motions of the planets."[33]
The challenge was to make sense of the apparently irregular wanderings of the planets on the assumption that all heavenly motion is actually circular and uniform in speed. Circular, because Plato held the circle to be the most perfect and symmetrical form; and therefore circular motion, at uniform speed, was most fitting for celestial bodies. Aristotle agreed with Plato. In Aristotle's cosmos, everything had a "natural" tendency to motion that fulfilled its inner potential. For the cosmos' sublunary part (the region below the Moon), the natural tendency was to move in a straight line: downward, for earthen things (such as rocks) and water; upward, for air and fiery things (such as sparks). But in the celestial realm things were not composed of earth, water, air, or fire, but of a "fifth element", or "quintessence", which was perfect and eternal. And its natural motion was uniformly circular. The stars, the Sun, the Moon, and the planets were carried in their orbits by a complicated arrangement of crystalline spheres, all centered around an immobile Earth.[34]
The Platonic-Aristotelian conviction that celestial motions must be circular persisted stubbornly. It was fundamental to the astronomer Ptolemy's system, which improved on Aristotle's in conforming to the astronomical data by allowing the planets to move in combinations of circles called "epicycles".[34]
It even survived the Copernican Revolution. Copernicus was conservative in his Platonic reverence for the circle as the heavenly pattern. According to Weinberg, Copernicus was motivated to dethrone the Earth in favor of the Sun as the immobile center of the cosmos largely by aesthetic considerations: he objected to the fact that Ptolemy, though faithful to Plato's requirement that heavenly motion be circular, had departed from Plato's other requirement that it be of uniform speed. By putting the sun at the center—actually, somewhat off-center—Copernicus sought to honor circularity while restoring uniformity. But to make his system fit the observations as well as Ptolemy's system, Copernicus had to introduce still more epicycles. That was a mistake that, writes Weinberg, illustrates a recurrent theme in the history of science: "A simple and beautiful theory that agrees pretty well with observation is often closer to the truth than a complicated ugly theory that agrees better with observation."[34]
The planets, however, do not move in perfect circles but in ellipses. It was Johannes Kepler, about a century after Copernicus, who reluctantly (for he too had Platonic affinities) realized this. Thanks to his examination of the meticulous observations compiled by astronomer Tycho Brahe, Kepler "was the first to understand the nature of the departures from uniform circular motion that had puzzled astronomers since the time of Plato."[34]
The replacement of circles by supposedly ugly ellipses overthrew Plato's notion of perfection as the celestial explanatory principle. It also destroyed Aristotle's model of the planets carried in their orbits by crystalline spheres; writes Weinberg, "there is no solid body whose rotation can produce an ellipse." Even if a planet were attached to an ellipsoid crystal, that crystal's rotation would still trace a circle. And if the planets were pursuing their elliptical motion through empty space, then what was holding them in their orbits?[34]
Science had reached the threshold of explaining the world not geometrically, according to shape, but dynamically, according to force. It was Isaac Newton who finally crossed that threshold. He was the first to formulate, in his "laws of motion", the concept of force. He demonstrated that Kepler's ellipses were the very orbits the planets would take if they were attracted toward the Sun by a force that decreased as the square of the planet's distance from the Sun. And by comparing the Moon's motion in its orbit around the Earth to the motion of, perhaps, an apple as it falls to the ground, Newton deduced that the forces governing them were quantitatively the same. "This," writes Weinberg, "was the climactic step in the unification of the celestial and terrestrial in science."[34]
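Newton's comparison can be restated as a worked equation (a standard textbook reconstruction, offered here only as an illustration; the notation is not Weinberg's): if gravity weakens as the inverse square of distance, then the Moon, orbiting at roughly 60 Earth radii, should fall toward the Earth about 3,600 times more gently than an apple at the surface.

```latex
% Inverse-square comparison of the apple and the Moon (illustrative reconstruction).
\[
  F = \frac{G M m}{r^{2}}
  \qquad\Longrightarrow\qquad
  \frac{a_{\text{Moon}}}{g}
  = \left(\frac{R_{\text{Earth}}}{r_{\text{Moon}}}\right)^{2}
  \approx \left(\frac{1}{60}\right)^{2}
  = \frac{1}{3600}.
\]
```

Numerically, g/3600 is about 2.7 × 10⁻³ m/s², which agrees with the Moon's centripetal acceleration in its roughly 27-day orbit; this is the quantitative agreement referred to above.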
By formulating a unified explanation of the behavior of planets, comets, moons, tides, and apples, writes Weinberg, Newton "provided an irresistible model for what a physical theory should be"—a model that fit no preexisting metaphysical criterion. In contrast to Aristotle, who claimed to explain the falling of a rock by appeal to its inner striving, Newton was unconcerned with finding a deeper cause for gravity.[34] He declared in a postscript to the second, 1713 edition of his Philosophiæ Naturalis Principia Mathematica: "I have not as yet been able to deduce from phenomena the reason for these properties of gravity, and I do not feign hypotheses. It is enough that gravity really exists and acts according to the laws that we have set forth."[35] What mattered were his mathematically stated principles describing this force, and their ability to account for a vast range of phenomena.[34]
About two centuries later, in 1915, a deeper explanation for Newton's law of gravitation was found in Albert Einstein's general theory of relativity: gravity could be explained as a manifestation of the curvature in spacetime resulting from the presence of matter and energy. Successful theories like Newton's, writes Weinberg, may work for reasons that their creators do not understand—reasons that deeper theories will later reveal. Scientific progress is not a matter of building theories on a foundation of reason, but of unifying a greater range of phenomena under simpler and more general principles.[34]
Naomi Oreskes cautions against making "the classic error of conflating absence of evidence with evidence of absence [emphases added]." She cites two examples of this error that were perpetrated in 2016 and 2023.[36]
In 2016 the Cochrane Library, a collection of databases in medicine and other healthcare specialties, published a report that was widely understood to indicate that flossing one's teeth confers no advantage to dental health. But the American Academy of Periodontology, dental professors, deans of dental schools, and clinical dentists all held that clinical practice shows differences in tooth and gum health between those who floss and those who don't.[37]
Oreskes explains that "Cochrane Reviews base their findings on randomized controlled trials (RCTs), often called the 'gold standard' of scientific evidence." But many questions can't be answered well using this method, and some can't be answered at all. "Nutrition is a case in point. [Y]ou can't control what people eat, and when you ask... what they have eaten, many people lie. Flossing is similar. One survey concluded that one in four Americans who claimed to floss regularly was fibbing."[38]
In 2023 Cochrane published a report determining that wearing surgical masks "probably makes little or no difference" in slowing the spread of respiratory illnesses such as COVID-19. Mass media reduced this to the claim that masks did not work. The Cochrane Library's editor-in-chief objected to such characterizations of the review; she said the report had not concluded that "masks don't work", but rather that the "results were inconclusive." The report had made clear that its conclusions were about the quality and capaciousness of available evidence, which the authors felt was insufficient to prove that masking was effective. The report's authors were "uncertain whether wearing [surgical] masks or N95/P2 respirators helps to slow the spread of respiratory viruses." Still, they were also uncertain about that uncertainty [emphasis added], stating that their confidence in their conclusion was "low to moderate."[39]
Subsequently the report's lead author confused the public by stating that mask-wearing "Makes no difference – none of it", and that Covid policies were "evidence-free": he thus perpetrated what Oreskes calls "the [...] error of conflating absence of evidence with evidence of absence." Studies have in fact shown that U.S. states with mask mandates saw a substantial decline in Covid spread within days of mandate orders being signed; in the period from 31 March to 22 May 2020, more than 200,000 cases were avoided.[40]
Oreskes calls the Cochrane report's neglect of the epidemiological evidence – because it didn't meet Cochrane's rigid standard – "methodological fetishism," when scientists "fixate on a preferred methodology and dismiss studies that don't follow it."[41]
The term "artificial intelligence" (AI) was coined in 1955 byJohn McCarthywhen he and othercomputer scientistswere planning a workshop and did not want to inviteNorbert Wiener, the brilliant, pugnacious, and increasingly philosophical (rather than practical) author onfeedback mechanismswho had coined the term "cybernetics". The new termartificial intelligence, writesKenneth Cukier, "set in motion decades of semantic squabbles ('Can machines think?') and fueled anxieties over malicious robots... If McCarthy... had chosen a blander phrase—say, 'automation studies'—the concept might not have appealed as much toHollywood[movie] producers and [to] journalists..."[42]SimilarlyNaomi Oreskeshas commented: "[M]achine 'intelligence'... isn't intelligence at all but something more like 'machine capability.'"[43]
As machines have become increasingly capable, specific tasks considered to require "intelligence", such asoptical character recognition, have often been removed from the definition of AI, a phenomenon known as the "AI effect". It has been quipped that "AI is whatever hasn't been done yet."[44][i]
Since 1950, whenAlan Turingproposed what has come to be called the "Turing test," there has been speculation whether machines such as computers can possess intelligence; and, if so,whether intelligent machines could become a threat to human intellectual and scientific ascendancy—or even an existential threat to humanity.[46]John Searlepoints out common confusion about the correct interpretation of computation and information technology. "For example, one routinely reads that in exactly the same sense in whichGarry Kasparov… beatAnatoly Karpovinchess, the computer calledDeep Blueplayed and beat Kasparov.... [T]his claim is [obviously] suspect. In order for Kasparov to play and win, he has to be conscious that he is playing chess, and conscious of a thousand other things... Deep Blue is conscious of none of these things because it is not conscious of anything at all. Why isconsciousnessso important? You cannot literally play chess or do much of anything else cognitive if you are totally disassociated from consciousness."[46]
Searle explains that, "in the literal, real, observer-independent sense in which humans compute, mechanical computers do not compute. They go through a set of transitions in electronic states that we can interpret computationally. The transitions in those electronic states are absolute or observer-independent, butthe computation is observer-relative. The transitions in physical states are just electrical sequences unless some conscious agent can give them a computational interpretation.... There is no psychological reality at all to what is happening in the [computer]."[47]
"[A] digital computer", writes Searle, "is a syntactical machine. It manipulates symbols and does nothing else. For this reason, the project of creating human intelligence by designing a computer program that will pass theTuring Test... is doomed from the start. The appropriately programmed computer has asyntax[rules for constructing or transforming the symbols and words of a language] but nosemantics[comprehension of meaning].... Minds, on the other hand, have mental or semantic content."[48]
Like Searle,Christof Koch, chief scientist and president of theAllen Institute for Brain Science, inSeattle, is doubtful about the possibility of "intelligent" machines attainingconsciousness, because "[e]ven the most sophisticatedbrain simulationsare unlikely to produce consciousfeelings." According to Koch,
Whether machines can becomesentient[is important] forethicalreasons. If computers experience life through their own senses, they cease to be purely a means to an end determined by their usefulness to... humans. Per GNW [theGlobal Neuronal Workspacetheory], they turn from mere objects into subjects... with apoint of view.... Once computers'cognitive abilitiesrival those of humanity, their impulse to push for legal and politicalrightswill become irresistible – the right not to be deleted, not to have their memories wiped clean, not to sufferpainand degradation. The alternative, embodied by IIT [Integrated Information Theory], is that computers will remain only supersophisticated machinery, ghostlike empty shells, devoid of what we value most: the feeling of life itself."[49]
Professor of psychology and neural scienceGary Marcuspoints out a so far insuperable stumbling block to artificial intelligence: an incapacity for reliabledisambiguation. "[V]irtually every sentence [that people generate] isambiguous, often in multiple ways. Our brain is so good at comprehendinglanguagethat we do not usually notice."[50]A prominent example is known as the "pronoun disambiguation problem" ("PDP"): a machine has no way of determining to whom or what apronounin a sentence—such as "he", "she" or "it"—refers.[51]
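A classic illustration of the problem (a standard Winograd-schema-style example, not taken from Marcus's text) is sketched below: two sentences that differ by a single word, in which the pronoun "it" must be resolved to different nouns, something grammar alone cannot decide.

```python
# Two sentences differing by one word; the pronoun "it" refers to a
# different noun in each, and no purely grammatical rule can tell which.
winograd_pair = [
    ("The trophy didn't fit in the suitcase because it was too big.",
     "it = the trophy"),
    ("The trophy didn't fit in the suitcase because it was too small.",
     "it = the suitcase"),
]
for sentence, referent in winograd_pair:
    print(sentence, "->", referent)
```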
Marcus has described current large language models as "approximations to [...] language use rather than language understanding".[52]
Computer scientist Pedro Domingos writes: "AIs are like autistic savants and will remain so for the foreseeable future.... AIs lack common sense and can easily make errors that a human never would... They are also liable to take our instructions too literally, giving us precisely what we asked for instead of what we actually wanted."[53]
Kai-Fu Lee, a Beijing-based venture capitalist, artificial-intelligence (AI) expert with a Ph.D. in computer science from Carnegie Mellon University, and author of the 2018 book AI Superpowers: China, Silicon Valley, and the New World Order,[54] emphasized in a 2018 PBS Amanpour interview with Hari Sreenivasan that AI, with all its capabilities, will never be capable of creativity or empathy.[55] Bill Gates, interviewed in 2025 by Walter Isaacson on Amanpour and Company, similarly said that artificial intelligence possesses no sentience and is incapable of human feeling or understanding.[56]
Paul Scharre writes in Foreign Affairs that "Today's AI technologies are powerful but unreliable."[57][j] George Dyson, historian of computing, writes (in what might be called "Dyson's Law") that "Any system simple enough to be understandable will not be complicated enough to behave intelligently, while any system complicated enough to behave intelligently will be too complicated to understand."[59] Computer scientist Alex Pentland writes: "Current AI machine-learning algorithms are, at their core, dead simple stupid. They work, but they work by brute force."[60]
"Artificial intelligence" is synonymous with "machine intelligence." The more perfectly adapted an AI program is to a given task, the less applicable it will be to other specific tasks. An abstracted, AI general intelligence is a remote prospect, if feasible at all. Melanie Mitchell notes that an AI program called AlphaGo bested one of the world's best Go players, but that its "intelligence" is nontransferable: it cannot "think" about anything except Go. Mitchell writes: "We humans tend to overestimate AI advances and underestimate the complexity of our own intelligence."[61] Writes Paul Taylor: "Perhaps there is a limit to what a computer can do without knowing that it is manipulating imperfect representations of an external reality."[62]
Humankind may not be able to outsource, to machines, its creative efforts in the sciences, technology, and culture.
Gary Marcus cautions against being taken in by deceptive claims about artificial general intelligence capabilities that are put out in press releases by self-interested companies which tell the press and public "only what the companies want us to know."[63] Marcus writes:
Although deep learning has advanced the ability of machines to recognize patterns in data, it has three major flaws. The patterns that it learns are, ironically, superficial, not conceptual; the results it creates are hard to interpret; and the results are difficult to use in the context of other processes, such as memory and reasoning. As Harvard University computer scientist Les Valiant noted, "The central challenge [going forward] is to unify the formulation of... learning and reasoning."[64]
James Gleick writes: "Agency is what distinguishes us from machines. For biological creatures, reason and purpose come from acting in the world and experiencing the consequences. Artificial intelligences – disembodied, strangers to blood, sweat, and tears – have no occasion for that."[65]
A central concern for science and scholarship is the reliability and reproducibility of their findings. Of all fields of study, none is capable of such precision as physics. But even there the results of studies, observations, and experiments cannot be considered absolutely certain and must be treated probabilistically; hence, statistically.[66]
In 1925 British geneticist and statistician Ronald Fisher published Statistical Methods for Research Workers, which established him as the father of modern statistics. He proposed a statistical test that summarized the compatibility of data with a given proposed model and produced a "p value". He counselled pursuing results with p values below 0.05 and not wasting time on results above that. Thus arose the idea that a p value less than 0.05 constitutes "statistical significance" – a mathematical definition of "significant" results.[67]
The use of p values, ever since, to determine the statistical significance of experimental results has contributed to an illusion of certainty and to reproducibility crises in many scientific fields,[68] especially in experimental economics, biomedical research, and psychology.[69]
Every statistical model relies on a set of assumptions about how data are collected and analyzed and about how researchers decide to present their results. These results almost always center on null-hypothesis significance testing, which produces a p value. Such testing does not address the truth head-on but obliquely: significance testing is meant to indicate only whether a given line of research is worth pursuing further. It does not say how likely the hypothesis is to be true, but instead addresses an alternative question: if the hypothesis were false, how unlikely would the data be? The importance of "statistical significance", reflected in the p value, can be exaggerated or overemphasized – something that readily occurs with small samples. That has caused replication crises.[66]
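As a concrete sketch (an illustrative example, not drawn from the sources cited here), a simple permutation test shows what a p value actually reports: the fraction of chance outcomes, under the assumption of no real effect, that would look at least as extreme as the data actually observed.

```python
import random

def permutation_p_value(group_a, group_b, n_permutations=10_000, seed=0):
    """Two-sided permutation test for a difference in means: the p value
    is the fraction of random relabelings of the data that produce a
    difference at least as large as the one actually observed."""
    rng = random.Random(seed)
    observed = abs(sum(group_a) / len(group_a) - sum(group_b) / len(group_b))
    pooled = list(group_a) + list(group_b)
    extreme = 0
    for _ in range(n_permutations):
        rng.shuffle(pooled)
        a, b = pooled[:len(group_a)], pooled[len(group_a):]
        if abs(sum(a) / len(a) - sum(b) / len(b)) >= observed:
            extreme += 1
    return extreme / n_permutations

# Hypothetical measurements: a p value below 0.05 would be labeled
# "significant" by Fisher's convention, but it is not the probability
# that the hypothesis itself is true.
treated = [12.1, 13.4, 11.8, 14.0, 12.9]
control = [11.2, 11.9, 10.8, 12.0, 11.5]
print(permutation_p_value(treated, control))
```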
Some scientists have advocated "redefining statistical significance", shifting its threshold from 0.05 to 0.005 for claims of new discoveries. Others say such redefining does no good because the real problem is the very existence of a threshold.[70]
Some scientists prefer to use Bayesian methods, a more direct statistical approach that takes initial beliefs, adds in new evidence, and updates the beliefs. Another alternative procedure is to use the surprisal, a mathematical quantity that adjusts p values to produce bits – as in computer bits – of information; in that perspective, 0.05 is a weak standard.[70]
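The surprisal conversion mentioned above is a one-line calculation (a sketch assuming the common definition S = -log2(p)): a p value of 0.05 corresponds to only about 4.3 bits of evidence against the null hypothesis, which is why it is described here as a weak standard.

```python
import math

def surprisal_bits(p_value):
    """Convert a p value into bits of information: S = -log2(p)."""
    return -math.log2(p_value)

for p in (0.05, 0.005, 0.0001):
    print(p, round(surprisal_bits(p), 2), "bits")  # 0.05 -> about 4.32 bits
```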
When Ronald Fisher embraced the concept of "significance" in the early 20th century, it meant "signifying" but not "important". Statistical "significance" has since acquired an excessive connotation of confidence in the validity of the experimental results. Statistician Andrew Gelman says, "The original sin is people wanting certainty when it's not appropriate." "Ultimately", writes Lydia Denworth, "a successful theory is one that stands up repeatedly to decades of scrutiny."[70]
Increasingly, attention is being given to the principles of open science, such as publishing more detailed research protocols and requiring authors to follow prespecified analysis plans and to report when they deviate from them.[70]
Fifty years before Florian Znaniecki published his 1923 paper proposing the creation of an empirical field of study to study the field of science, Aleksander Głowacki (better known by his pen name, Bolesław Prus) had made the same proposal. In an 1873 public lecture "On Discoveries and Inventions",[71] Prus said:
Until now there has been no science that describes the means for making discoveries and inventions, and the generality of people, as well as many men of learning, believe that there never will be. This is an error. Someday a science of making discoveries and inventions will exist and will render services. It will arise not all at once; first only its general outline will appear, which subsequent researchers will correct and elaborate, and which still later researchers will apply to individual branches of knowledge.[72]
Prus defines "discovery" as "the finding out of a thing that has existed and exists in nature, but which was previously unknown to people";[73] and "invention" as "the making of a thing that has not previously existed, and which nature itself cannot make."[74]
He illustrates the concept of "discovery":
Until 400 years ago, people thought that the Earth comprised just three parts: Europe, Asia, and Africa; it was only in 1492 that the Genoese, Christopher Columbus, sailed out from Europe into the Atlantic Ocean and, proceeding ever westward, after [10 weeks] reached a part of the world that Europeans had never known. In that new land he found copper-colored people who went about naked, and he found plants and animals different from those in Europe; in short, he had discovered a new part of the world that others would later name "America." We say that Columbus had discovered America, because America had already long existed on Earth.[75]
Prus illustrates the concept of "invention":
[As late as] 50 years ago, locomotives were unknown, and no one knew how to build one; it was only in 1828 that the English engineer Stephenson built the first locomotive and set it in motion. So we say that Stephenson invented the locomotive, because this machine had not previously existed and could not by itself have come into being in nature; it could only have been made by man.[74]
According to Prus, "inventions and discoveries are natural phenomena and, as such, are subject to certain laws." Those are the laws of "gradualness", "dependence", and "combination".[76]
1. The law of gradualness. No discovery or invention arises at once perfected, but is perfected gradually; likewise, no invention or discovery is the work of a single individual but of many individuals, each adding his little contribution.[77]
2. The law of dependence. An invention or discovery is conditional on the prior existence of certain known discoveries and inventions. ... If the rings of Saturn can [only] be seen through telescopes, then the telescope had to have been invented before the rings could have been seen. [...][78]
3. The law of combination. Any new discovery or invention is a combination of earlier discoveries and inventions, or rests on them. When I study a new mineral, I inspect it, I smell it, I taste it ... I combine the mineral with a balance and with fire... in this way I learn ever more of its properties.[79][k]
Each of Prus' three "laws" entails important corollaries. The law of gradualness implies the following:[81]
a) Since every discovery and invention requires perfecting, let us not pride ourselves only on discovering or inventing something completely new, but let us also work to improve or get to know more exactly things that are already known and already exist. [...][81]
b) The same law of gradualness demonstrates the necessity of expert training. Who can perfect a watch, if not a watchmaker with a good comprehensive knowledge of his métier? Who can discover new characteristics of an animal, if not a naturalist?[81]
From the law of dependence flow the following corollaries:[81]
a) No invention or discovery, even one seemingly without value, should be dismissed, because that particular trifle may later prove very useful. There would seem to be no simpler invention than the needle, yet the clothing of millions of people, and the livelihoods of millions of seamstresses, depend on the needle's existence. Even today's beautiful sewing machine would not exist, had the needle not long ago been invented.[82]
b) The law of dependence teaches us that what cannot be done today, might be done later. People give much thought to the construction of a flying machine that could carry many persons and parcels. The inventing of such a machine will depend, among other things, on inventing a material that is, say, as light as paper and as sturdy and fire-resistant as steel.[83]
Finally, Prus' corollaries to his law of combination:[83]
a) Anyone who wants to be a successful inventor, needs to know a great many things—in the most diverse fields. For if a new invention is a combination of earlier inventions, then the inventor's mind is the ground on which, for the first time, various seemingly unrelated things combine. Example: The steam engine combines the kettle for cooking Rumford's Soup, the pump, and the spinning wheel.[83]
[...] What is the connection among zinc, copper, sulfuric acid, a magnet, a clock mechanism, and an urgent message? All these had to come together in the mind of the inventor of the telegraph... [...][84]
The greater the number of inventions that come into being, the more things a new inventor must know; the first, earliest and simplest inventions were made by completely uneducated people—but today's inventions, particularly scientific ones, are products of the most highly educated minds. [...][85]
b) A second corollary concerns societies that wish to have inventors. I said that a new invention is created by combining the most diverse objects; let us see where this takes us.[85]
Suppose I want to make an invention, and someone tells me: Take 100 different objects and bring them into contact with one another, first two at a time, then three at a time, finally four at a time, and you will arrive at a new invention. Imagine that I take a burning candle, charcoal, water, paper, zinc, sugar, sulfuric acid, and so on, 100 objects in all, and combine them with one another, that is, bring into contact first two at a time: charcoal with flame, water with flame, sugar with flame, zinc with flame, sugar with water, etc. Each time, I shall see a phenomenon: thus, in fire, sugar will melt, charcoal will burn, zinc will heat up, and so on. Now I will bring into contact three objects at a time, for example, sugar, zinc and flame; charcoal, sugar and flame; sulfuric acid, zinc and water; etc., and again I shall experience phenomena. Finally I bring into contact four objects at a time, for example, sugar, zinc, charcoal, and sulfuric acid. Ostensibly this is a very simple method, because in this fashion I could make not merely one but a dozen inventions. But will such an effort not exceed my capability? It certainly will. A hundred objects, combined in twos, threes and fours, will make over 4 million combinations; so if I made 100 combinations a day, it would take me over 110 years to exhaust them all![86]
But if by myself I am not up to the task, a sizable group of people will be. If 1,000 of us came together to produce the combinations that I have described, then any one person would only have to carry out slightly more than 4,000 combinations. If each of us performed just 10 combinations a day, together we would finish them all in less than a year and a half: 1,000 people would make an invention which a single man would have to spend more than 110 years to make…[87][l]
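Prus's arithmetic can be checked directly (a quick verification using the modern combination formula, which is not part of the lecture itself): 100 objects taken two, three, and four at a time give about 4.1 million combinations, which at 100 a day would occupy one person for over 110 years, while 1,000 people doing 10 a day would finish in well under a year and a half.

```python
from math import comb

# Combinations of 100 objects taken two, three, and four at a time.
total = comb(100, 2) + comb(100, 3) + comb(100, 4)
print(total)                      # 4,087,875 -> "over 4 million"
print(total / 100 / 365)          # one person, 100 a day: about 112 years
print(total / (1000 * 10) / 365)  # 1,000 people, 10 each a day: about 1.1 years
```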
The conclusion is quite clear: a society that wants to win renown with its discoveries and inventions has to have a great many persons working in every branch of knowledge. One or a few men of learning and genius mean nothing today, or nearly nothing, because everything is now done by large numbers. I would like to offer the following simile: Inventions and discoveries are like a lottery; not every player wins, but from among the many players a few must win. The point is not that John or Paul, because they want to make an invention and because they work for it, shall make an invention; but where thousands want an invention and work for it, the invention must appear, as surely as an unsupported rock must fall to the ground.[87][m]
But, asks Prus, "What force drives [the] toilsome, often frustrated efforts [of the investigators]? What thread will clew these people through hitherto unexplored fields of study?"[88][n]
[T]he answer is very simple: man is driven to efforts, including those of making discoveries and inventions, by needs; and the thread that guides him is observation: observation of the works of nature and of man.[88]
I have said that the mainspring of all discoveries and inventions is needs. In fact, is there any work of man that does not satisfy some need? We build railroads because we need rapid transportation; we build clocks because we need to measure time; we build sewing machines because the speed of [unaided] human hands is insufficient. We abandon home and family and depart for distant lands because we are drawn by curiosity to see what lies elsewhere. We forsake the society of people and we spend long hours in exhausting contemplation because we are driven by a hunger for knowledge, by a desire to solve the challenges that are constantly thrown up by the world and by life![88]
Needs never cease; on the contrary, they are always growing. While the pauper thinks about a piece of bread for lunch, the rich man thinks about wine after lunch. The foot traveler dreams of a rudimentary wagon; the railroad passenger demands a heater. The infant is cramped in its cradle; the mature man is cramped in the world. In short, everyone has his needs, and everyone desires to satisfy them, and that desire is an inexhaustible source of new discoveries, new inventions, in short, of all progress.[89]
But needs are general, such as the needs for food, sleep and clothing; and special, such as needs for a new steam engine, a new telescope, a new hammer, a new wrench. To understand the former needs, it suffices to be a human being; to understand the latter needs, one must be a specialist—an expert worker. Who knows better than a tailor what it is that tailors need, and who better than a tailor knows how to find the right way to satisfy the need?[90]
Now consider how observation can lead man to new ideas; and to that end, as an example, let us imagine how, more or less, clay products came to be invented.[90]
Suppose that somewhere there lived on clayey soil a primitive people who already knew fire. When rain fell on the ground, the clay turned doughy; and if, shortly after the rain, a fire was set on top of the clay, the clay under the fire became fired and hardened. If such an event occurred several times, the people might observe and thereafter remember that fired clay becomes hard like stone and does not soften in water. One of the primitives might also, when walking on wet clay, have impressed deep tracks into it; after the sun had dried the ground and rain had fallen again, the primitives might have observed that water remains in those hollows longer than on the surface. Inspecting the wet clay, the people might have observed that this material can be easily kneaded in one's fingers and accepts various forms.[91]
Some ingenious persons might have started shaping clay into various animal forms [...] etc., including something shaped like a tortoise shell, which was in use at the time. Others, remembering that clay hardens in fire, might have fired the hollowed-out mass, thereby creating the first [clay] bowl.[92]
After that, it was a relatively easy matter to perfect the new invention; someone else could discover clay more suitable for such manufactures; someone else could invent a glaze, and so on, with nature and observation at every step pointing out to man the way to invention. [...][92]
[This example] illustrates how people arrive at various ideas: by closely observing all things and wondering about all things.[92]
Take another example. [S]ometimes, in a pane of glass, we find disks and bubbles, looking through which we see objects more distinctly than with the naked eye. Suppose that an alert person, spotting such a bubble in a pane, took out a piece of glass and showed it to others as a toy. Possibly among them there was a man with weak vision who found that, through the bubble in the pane, he saw better than with the naked eye. Closer investigation showed that bilaterally convex glass strengthens weak vision, and in this way eyeglasses were invented. People may first have cut glass for eyeglasses from glass panes, but in time others began grinding smooth pieces of glass into convex lenses and producing proper eyeglasses.[93]
The art of grinding eyeglasses was known almost 600 years ago. A couple of hundred years later, the children of a certain eyeglass grinder, while playing with lenses, placed one in front of another and found that they could see better through two lenses than through one. They informed their father about this curious occurrence, and he began producing tubes with two magnifying lenses and selling them as a toy. Galileo, the great Italian scientist, on learning of this toy, used it for a different purpose and built the first telescope.[94]
This example, too, shows us that observation leads man by the hand to inventions. This example again demonstrates the truth of gradualness in the development of inventions, but above all also the fact that education amplifies man's inventiveness. A simple lens-grinder formed two magnifying glasses into a toy—while Galileo, one of the most learned men of his time, made a telescope. As Galileo's mind was superior to the craftsman's mind, so the invention of the telescope was superior to the invention of a toy.[94][...]
The three laws [that have been discussed here] are immensely important and do not apply only to discoveries and inventions, but they pervade all of nature. An oak does not immediately become an oak but begins as an acorn, then becomes a seedling, later a little tree, and finally a mighty oak: we see here the law of gradualness. A seed that has been sown will not germinate until it finds sufficient heat, water, soil and air: here we see the law of dependence. Finally, no animal or plant, or even stone, is something homogeneous and simple but is composed of various organs: here we see the law of combination.[95]
Prus holds that, over time, the multiplication of discoveries and inventions has improved the quality of people's lives and has expanded their knowledge. "This gradual advance of civilized societies, this constant growth in knowledge of the objects that exist in nature, this constant increase in the number of tools and useful materials, is termed progress, or the growth of civilization."[96] Conversely, Prus warns, "societies and people that do not make inventions or know how to use them, lead miserable lives and ultimately perish."[97][o]
A fundamental feature of the scientific enterprise isreproducibilityof results. "For decades", writes Shannon Palus, "it has been... anopen secretthat a [considerable part] of the literature in some fields is plain wrong." This effectively sabotages the scientific enterprise and costs the world many billions of dollars annually in wasted resources. Militating against reproducibility is scientists' reluctance to share techniques, for fear of forfeiting one's advantage to other scientists. Also,scientific journalsandtenurecommittees tend to prize impressive new results rather than gradual advances that systematically build on existing literature. Scientists who quietly fact-check others' work or spend extra time ensuring that their ownprotocolsare easy for other researchers to understand, gain little for themselves.[98]
With a view to improving reproducibility of scientific results, it has been suggested that research-funding agencies finance only projects that include a plan for making their worktransparent. In 2016 the U.S.National Institutes of Healthintroduced new application instructions and review questions to encourage scientists to improve reproducibility. The NIH requests more information on how the study builds on previous work, and a list of variables that could affect the study, such as the sex of animal subjects—a previously overlooked factor that led many studies to describe phenomena found in male animals as universal.[99]
Likewise, the questions that a funder can ask in advance could be asked by journals and reviewers. One solution is "registered reports", a preregistration of studies whereby a scientist submits, for publication, research analysis and design plans before actually doing the study.Peer reviewersthen evaluate themethodology, and the journal promises to print the results, no matter what they are. In order to prevent over-reliance on preregistered studies—which could encourage safer, less venturesome research, thus over-correcting the problem—the preregistered-studies model could be operated in tandem with the traditional results-focused model, which may sometimes be more friendly toserendipitousdiscoveries.[99]
The "replication crisis" is compounded by a finding, published in a study summarized in 2021 by historian of scienceNaomi Oreskes, that nonreplicable studies are cited oftener than replicable ones: in other words, that bad science seems to get more attention than good science. If a substantial proportion of science is unreplicable, it will not provide a valid basis for decision-making and may delay the use of science for developing new medicines and technologies. It may also undermine the public's trust, making it harder to get peoplevaccinatedor act againstclimate change.[100]
The study tracked papers – in psychology journals, economics journals, and inScienceandNature– with documented failures of replication. The unreplicable papers were cited more than average, even after news of their unreplicability had been published.[100]
"These results," writes Oreskes, "parallel those of a 2018 study. An analysis of 126,000 rumor cascades onTwittershowed that false news spread faster and reached more people than verified true claims. [I]t was people, not [ro]bots, who were responsible for the disproportionate spread of falsehoods online."[100]
A 2016Scientific Americanreport highlights the role ofrediscoveryin science.Indiana University Bloomingtonresearchers combed through 22 million scientific papers published over the previous century and found dozens of "Sleeping Beauties"—studies that lay dormant for years before getting noticed.[101]The top finds, which languished longest and later received the most intense attention from scientists, came from the fields of chemistry, physics, and statistics. The dormant findings were wakened by scientists from other disciplines, such asmedicine, in search of fresh insights, and by the ability to test once-theoretical postulations.[101]Sleeping Beauties will likely become even more common in the future because of increasing accessibility of scientific literature.[101]TheScientific Americanreport lists the top 15 Sleeping Beauties: 7 inchemistry, 5 inphysics, 2 instatistics, and 1 inmetallurgy.[101]Examples include:
Herbert Freundlich's "Concerning Adsorption in Solutions" (1906), the first mathematical model ofadsorption, whenatomsormoleculesadhere to a surface. Today bothenvironmental remediationanddecontaminationin industrial settings rely heavily on adsorption.[101]
A. Einstein,B. PodolskyandN. Rosen, "Can Quantum-Mechanical Description of Physical Reality Be Considered Complete?"Physical Review, vol. 47 (May 15, 1935), pp. 777–780. This famousthought experimentinquantum physics—now known as theEPR paradox, after the authors' surname initials—was discussedtheoreticallywhen it first came out. It was not until the 1970s thatphysicshad the experimental means to testquantum entanglement.[101]
J[ohn] Turkevich, P. C. Stevenson, J. Hillier, "A Study of the Nucleation and Growth Processes in the Synthesis of Colloidal Gold", Discuss. Faraday Soc., 1951, 11, pp. 55–75, explains how to suspend gold nanoparticles in liquid. It owes its awakening to medicine, which now employs gold nanoparticles to detect tumors and deliver drugs.[101]
William S. Hummers and Richard E. Offeman, "Preparation of Graphitic Oxide", Journal of the American Chemical Society, vol. 80, no. 6 (March 20, 1958), p. 1339, introduced Hummers' Method, a technique for making graphite oxide. Recent interest in graphene's potential has brought the 1958 paper to attention. Graphite oxide could serve as a reliable intermediate for the 2-D material.[101]
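The adsorption model introduced in Freundlich's 1906 paper is now usually written as the Freundlich isotherm. A standard modern form (stated here from general usage, not quoted from the paper itself) is, in LaTeX notation:

\[ \frac{x}{m} \;=\; K_F \, c^{1/n} \]

where x/m is the amount of adsorbate taken up per unit mass of adsorbent, c is the equilibrium concentration of the adsorbate in solution, and K_F and n are empirical constants obtained by fitting experimental data; 1/n is typically less than 1, reflecting the observation that uptake grows less than proportionally with concentration.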
Historians and sociologists have remarked the occurrence, inscience, of "multiple independent discovery". SociologistRobert K. Mertondefined such "multiples" as instances in which similardiscoveriesare made by scientists working independently of each other.[102]"Sometimes the discoveries are simultaneous or almost so; sometimes a scientist will make a new discovery which, unknown to him, somebody else has made years before."[103][104]Commonly cited examples of multiple independent discovery are the 17th-century independent formulation ofcalculusbyIsaac Newton,Gottfried Wilhelm Leibniz, and others;[105]the 18th-century independent discovery ofoxygenbyCarl Wilhelm Scheele,Joseph Priestley,Antoine Lavoisier, and others; and the 19th-century independent formulation of thetheory of evolutionofspeciesbyCharles DarwinandAlfred Russel Wallace.[106]
Merton contrasted a "multiple" with a "singleton" — a discovery that has been made uniquely by a single scientist or group of scientists working together.[107]He believed that it is multiple discoveries, rather than unique ones, that represent thecommonpattern in science.[108]
Multiple discoveries in the history of science provide evidence forevolutionarymodels of science and technology, such asmemetics(the study of self-replicating units of culture),evolutionary epistemology(which applies the concepts ofbiological evolutionto study of the growth of human knowledge), andcultural selection theory(which studies sociological and cultural evolution in a Darwinian manner). Arecombinant-DNA-inspired "paradigmof paradigms", describing a mechanism of "recombinant conceptualization", predicates that a newconceptarises through the crossing of pre-existing concepts andfacts. This is what is meant when one says that a scientist, scholar, or artist has been "influenced by" another —etymologically, that a concept of the latter's has "flowed into" the mind of the former.[109]
The phenomenon of multiple independent discoveries and inventions can be viewed as a consequence ofBolesław Prus' three laws of gradualness, dependence, and combination (see "Discoveries and inventions", above). The first two laws may, in turn, be seen as corollaries to the third law, since the laws of gradualness and dependence imply the impossibility of certain scientific or technological advances pending the availability of certain theories, facts, or technologies that must be combined to produce a given scientific or technological advance.
Technology – the application of discoveries to practical matters – showed a remarkable acceleration in what economist Robert J. Gordon has identified as "the special century" that ended about 1970. By then, he writes, all the key technologies of modern life were in place: sanitation, electricity, mechanized agriculture, highways, air travel, telecommunications, and the like. The one signature technology of the 21st century has been the iPhone. Meanwhile, a long list of much-publicized potential major technologies remains in the prototype phase, including self-driving cars, flying cars, augmented-reality glasses, gene therapy, and nuclear fusion. An urgent goal for the 21st century, writes Gordon, is to undo some of the consequences of the last great technology boom by developing affordable zero- and negative-emissions technologies.[110]
Technologyis the sum oftechniques,skills,methods, andprocessesused in the production ofgoodsorservicesor in the accomplishment of objectives, such asscientific investigation. Paradoxically, technology, so conceived, has sometimes been noted to take primacy over the ends themselves – even to their detriment. Laura Grego and David Wright, writing in 2019 inScientific American, observe that "Current U.S.missile defenseplans are being driven largely bytechnology,politicsandfear. Missile defenses will not allow us to escape our vulnerability tonuclear weapons. Instead large-scale developments will create barriers to taking real steps towardreducing nuclear risks—by blocking further cuts innuclear arsenalsand potentially spurring new deployments."[111]
Yale University physicist-astronomer Priyamvada Natarajan, writing of the virtually simultaneous 1846 discovery of the planet Neptune by Urbain Le Verrier and John Couch Adams (after other astronomers, as early as Galileo Galilei in 1612, had unwittingly observed the planet), comments:
The episode is but one of many that proves science is not a dispassionate, neutral, and objective endeavor but rather one in which the violent clash of ideas and personal ambitions often combines withserendipityto propel new discoveries.[112]
A practical question concerns the traits that enable some individuals to achieve extraordinary results in their fields of work—and how suchcreativitycan be fostered.Melissa Schilling, a student ofinnovationstrategy, has identified some traits shared by eight major innovators innatural scienceortechnology:Benjamin Franklin(1706–90),Thomas Edison(1847–1931),Nikola Tesla(1856–1943),Maria Skłodowska Curie(1867–1934),Dean Kamen(born 1951),Steve Jobs(1955–2011),Albert Einstein(1879–1955), andElon Musk(born 1971).[113]
Schilling chose innovators in natural science and technology rather than in other fields because she found much more consensus about important contributions to natural science and technology than, for example, to art or music.[114]She further limited the set to individuals associated withmultipleinnovations. "When an individual is associated with only a single major invention, it is much harder to know whether the invention was caused by the inventor's personal characteristics or by simply being at the right place at the right time."[115]
The eight individuals were all extremely intelligent, but "that is not enough to make someone a serial breakthrough innovator."[113]Nearly all these innovators showed very high levels ofsocial detachment, or separateness (a notable exception being Benjamin Franklin).[116]"Their isolation meant that they were less exposed to dominant ideas and norms, and their sense of not belonging meant that even when exposed to dominant ideas and norms, they were often less inclined to adopt them."[117]From an early age, they had all shown extreme faith in their ability to overcome obstacles—whatpsychologycalls "self-efficacy".[117]
"Most [of them, writes Schilling] were driven byidealism, a superordinate goal that was more important than their own comfort, reputation, or families. Nikola Tesla wanted to free mankind from labor through unlimited freeenergyand to achieve internationalpeacethrough globalcommunication. Elon Musk wants to solve the world's energy problems and colonizeMars. Benjamin Franklin was seeking greater social harmony and productivity through the ideals ofegalitarianism,tolerance, industriousness, temperance, and charity. Marie Curie had been inspired byPolish Positivism's argument thatPoland, which was under Tsarist Russian rule, could be preserved only through the pursuit of education and technological advance by all Poles—including women."[118]
Most of the innovators also worked hard and tirelessly because they found work extremely rewarding. Some had an extremely high need for achievement. Many also appeared to find workautotelic—rewarding for its own sake.[119]A surprisingly large portion of the breakthrough innovators have beenautodidacts—self-taught persons—and excelled much more outside the classroom than inside.[120]
"Almost all breakthrough innovation," writes Schilling, "starts with an unusual idea or with beliefs that break withconventional wisdom.... However, creative ideas alone are almost never enough. Many people have creative ideas, even brilliant ones. But usually we lack the time, knowledge, money, or motivation to act on those ideas." It is generally hard to get others' help in implementing original ideas because the ideas are often initially hard for others to understand and value. Thus each of Schilling's breakthrough innovators showedextraordinaryeffort and persistence.[121]Even so, writes Schilling, "being at the right place at the right time still matter[ed]."[122]
When Swiss botanistSimon Schwendenerdiscovered in the 1860s thatlichenswere asymbioticpartnership between afungusand analga, his finding at first met with resistance from the scientific community. After his discovery that the fungus—which cannot make its own food—provides the lichen's structure, while the alga's contribution is itsphotosyntheticproduction of food, it was found that in some lichens acyanobacteriumprovides the food—and a handful of lichen species containbothan alga and a cyanobacterium, along with the fungus.[123]
A self-taught naturalist,Trevor Goward, has helped create aparadigm shiftin the study of lichens and perhaps of all life-forms by doing something that people did in pre-scientific times: going out into nature and closely observing. His essays about lichens were largely ignored by most researchers because Goward has no scientific degrees and because some of his radical ideas are not supported by rigorous data.[124]
When Goward toldToby Spribille, who at the time lacked a high-school education, about some of his lichenological ideas, Goward recalls, "He said I was delusional." Ultimately Spribille passed a high-school equivalency examination, obtained a Ph.D. in lichenology at theUniversity of Grazin Austria, and became an assistant professor of the ecology and evolution of symbiosis at theUniversity of Alberta. In July 2016 Spribille and his co-authors published a ground-breaking paper inSciencerevealing that many lichens contain a second fungus.
Spribille credits Goward with having "a huge influence on my thinking. [His essays] gave me license to think about lichens in [an unorthodox way] and freed me to see the patterns I worked out inBryoriawith my co-authors." Even so, "one of the most difficult things was allowing myself to have an open mind to the idea that 150 years of literature may have entirely missed the theoretical possibility that there would be more than one fungal partner in the lichen symbiosis." Spribille says that academia's emphasis on the canon of what others have established as important is inherently limiting.[125]
Contrary to previous studies indicating that higherintelligencemakes for betterleadersin various fields of endeavor, later research suggests that, at a certain point, a higherIQcan be viewed as harmful.[126]Decades ago, psychologistDean Simontonsuggested that brilliant leaders' words may go over people's heads, their solutions could be more complicated to implement, and followers might find it harder to relate to them. At last, in the July 2017Journal of Applied Psychology, he and two colleagues published the results of actual tests of the hypothesis.[126][127]
The study examined 379 business leaders, men and women, in 30 countries, in fields including banking, retail, and technology. The managers took IQ tests (IQ being an imperfect but robust predictor of performance in many areas), and each was rated on leadership style and effectiveness by an average of 8 co-workers. IQ correlated positively with ratings of leadership effectiveness, strategy formation, vision, and several other characteristics—up to a point. The ratings peaked at an IQ of about 120, higher than that of some 80% of office workers. Beyond that, the ratings declined. The researchers suggested that the ideal IQ could be higher or lower in various fields, depending on whether technical or social skills are more valued in a given work culture.[126]
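The "rise, then fall" pattern the researchers describe is an example of a curvilinear relation. A minimal sketch of how such a peak can be located, using invented numbers and a simple quadratic fit rather than the study's actual data or statistical models, might look like this:

# Minimal illustration of detecting a curvilinear ("rise, then fall") relation.
# The data below are invented; the published study used different data and
# more elaborate models.
import numpy as np

iq     = np.array([85, 95, 100, 105, 110, 115, 120, 125, 130, 140])
rating = np.array([4.1, 4.6, 4.9, 5.2, 5.5, 5.7, 5.8, 5.7, 5.5, 5.0])

# Fit rating = a*iq^2 + b*iq + c; a negative "a" indicates an inverted-U shape.
a, b, c = np.polyfit(iq, rating, deg=2)

# The fitted curve peaks where its derivative 2*a*iq + b equals zero.
peak_iq = -b / (2 * a)
print(f"quadratic coefficient a = {a:.4f} (negative => inverted U)")
print(f"estimated peak of the fitted curve: IQ ≈ {peak_iq:.0f}")

With these invented numbers the fitted peak falls in the neighborhood of IQ 120; the point of the sketch is the shape of the analysis, not the particular values.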
Psychologist Paul Sackett, not involved in the research, comments: "To me, the right interpretation of the work would be that it highlights a need to understand what high-IQ leaders do that leads to lower perceptions by followers. The wrong interpretation would be,'Don't hire high-IQ leaders.'"[126]The study'slead author, psychologistJohn Antonakis, suggests that leaders should use their intelligence to generate creativemetaphorsthat will persuade and inspire others. "I think the only way a smart person can signal their intelligence appropriately and still connect with the people," says Antonakis, "is to speak incharismaticways."[126]
Academic specializationproduces great benefits for science and technology by focusing effort on discrete disciplines. But excessively narrow specialization can act as a roadblock to productive collaboration between traditional disciplines.
In 2017, inManhattan,James Harris Simons, a noted mathematician and retired founder of one of the world's largesthedge funds, inaugurated theFlatiron Institute, a nonprofit enterprise whose goal is to apply his hedge fund's analytical strategies to projects dedicated to expanding knowledge and helping humanity.[128]He has established computational divisions for research in astrophysics, biology, and quantum physics,[129]and an interdisciplinary division forclimate modellingthat interfaces geology, oceanography, atmospheric science, biology, and climatology.[130]
The latter, fourth Flatiron Institute division was inspired by a 2017 presentation to the institute's leadership byJohn Grotzinger, a "bio-geoscientist" from theCalifornia Institute of Technology, who explained the challenges of climate modelling. Grotzinger was a specialist in historical climate change—specifically, what had caused the greatPermian extinction, during which virtually all species died. To properly assess this cataclysm, one had to understand both the rock record and the ocean's composition, butgeologistsdid not interact much withphysical oceanographers. Grotzinger's own best collaboration had resulted from a fortuitous lunch with an oceanographer. Climate modelling was an intrinsically difficult problem made worse by theinformation silosofacademia. "If you had it all under one umbrella... it could result [much sooner] in a major breakthrough." Simons and his team found Grotzinger's presentation compelling, and the Flatiron Institute decided to establish its fourth and final computational division.[130]
Sociologist Harriet Zuckerman, in her 1977 study of natural-science Nobel laureates in the United States, was struck by the fact that more than half (48) of the 92 laureates who did their prize-winning research in the U.S. by 1972 had worked as students, postdoctoral fellows, or junior collaborators under older Nobel laureates. Furthermore, those 48 future laureates had worked under a total of 71 laureate masters.[131][p]
Social viscosity means that not every qualified novice scientist attains access to the most productive centers of scientific thought. Nevertheless, writes Zuckerman, "To some extent, students of promise can choose masters with whom to work and masters can choose among the cohorts of students who present themselves for study. This process of bilateral assortative selection is conspicuously at work among the ultra-elite of science. Actual and prospective members of that elite select their scientist parents and therewith their scientist ancestors just as later they select their scientist progeny and therewith their scientist descendants."[133]
Zuckerman writes: "[T]he lines of elite apprentices to elite masters who had themselves been elite apprentices, and so on indefinitely, often reach far back into thehistory of science, long before 1900, when [Alfred] Nobel's will inaugurated what now amounts to the International Academy of Sciences. As an example of the many long historical chains of elite masters and apprentices, consider the German-born English laureateHans Krebs(1953), who traces his scientific lineage [...] back through his master, the 1931 laureateOtto Warburg. Warburg had studied withEmil Fis[c]her[1852–1919], recipient of a prize in 1902 at the age of 50, three years before it was awarded [in 1905] tohisteacher,Adolf von Baeyer[1835–1917], at age 70. This lineage of four Nobel masters and apprentices has its own pre-Nobelian antecedents. Von Baeyer had been the apprentice ofF[riedrich] A[ugust] Kekulé[1829–1896], whose ideas ofstructural formulaerevolutionizedorganic chemistryand who is perhaps best known for the often retold story about his having hit upon the ring structure ofbenzenein a dream (1865). Kekulé himself had been trained by the greatorganic chemistJustus von Liebig(1803–1873), who had studied at theSorbonnewith the masterJ[oseph] L[ouis] Gay-Lussac(1778–1850), himself once apprenticed toClaude Louis Berthollet(1748–1822). Among his many institutional and cognitive accomplishments, Berthollet helped found theÉcole Polytechnique, served as science advisor toNapoleoninEgypt, and, more significant for our purposes here, worked with[Antoine] Lavoisier[1743–1794] to revise the standard system ofchemical nomenclature."[134]
Sociologist Michael P. Farrell has studied close creative groups and writes: "Most of the fragile insights that laid the foundation of a new vision emerged not when the whole group was together, and not when members worked alone, but when they collaborated and responded to one another in pairs."[135] François Jacob, who, with Jacques Monod, pioneered the study of gene regulation, notes that by the mid-20th century most research in molecular biology was conducted by twosomes. "Two are better than one for dreaming up theories and constructing models," writes Jacob. "For with two minds working on a problem, ideas fly thicker and faster. They are bounced from partner to partner.... And in the process, illusions are sooner nipped in the bud." As of 2018, in the previous 35 years, some half of Nobel Prizes in Physiology or Medicine had gone to scientific partnerships.[136] James Somers describes a remarkable partnership between Google's top software engineers, Jeff Dean and Sanjay Ghemawat.[137]
Twosome collaborations have also been prominent in creative endeavors outside thenatural sciencesandtechnology; examples areClaude Monet's andPierre-Auguste Renoir's 1869 joint creation ofImpressionism,Pablo Picasso's andGeorges Braque's six-year collaborative creation ofCubism, andJohn Lennon's andPaul McCartney's collaborations onBeatlessongs. "Everyone", writes James Somers, "falls into creative ruts, but two people rarely do so at the same time."[138]
The same point was made by Francis Crick who, with James Watson, discovered the structure of the genetic material, DNA. At the end of a PBS television documentary on James Watson, a video clip shows Crick explaining to Watson that their collaboration had been crucial to their discovery because, when one of them was wrong, the other would set him straight.[139]
What has been dubbed "Big Science" emerged from the United States'World War IIManhattan Projectthat produced the world's firstnuclear weapons; and Big Science has since been associated withphysics, which requires massiveparticle accelerators. Inbiology, Big Science debuted in 1990 with theHuman Genome Projectto sequence humanDNA. In 2013neurosciencebecame a Big Science domain when the U.S. announced aBRAIN Initiativeand theEuropean Unionannounced aHuman Brain Project. Major new brain-research initiatives were also announced by Israel, Canada, Australia, New Zealand, Japan, and China.[140]
Earlier successful Big Science projects had habituated politicians,mass media, and the public to view Big Science programs with sometimes uncritical favor.[141]
The U.S.'s BRAIN Initiative was inspired by concern about the spread and cost ofmental disordersand by excitement about new brain-manipulation technologies such asoptogenetics.[142]After some early false starts, the U.S.National Institute of Mental Healthlet the country's brain scientists define the BRAIN Initiative, and this led to an ambitious interdisciplinary program to develop new technological tools to better monitor, measure, and simulate the brain. Competition in research was ensured by the National Institute of Mental Health'speer-review process.[141]
In the European Union, theEuropean Commission's Human Brain Project got off to a rockier start because political and economic considerations obscured questions concerning the feasibility of the Project's initial scientific program, based principally oncomputer modelingofneural circuits. Four years earlier, in 2009, fearing that the European Union would fall further behind the U.S. in computer and other technologies, the European Union had begun creating a competition for Big Science projects, and the initial program for the Human Brain Project seemed a good fit for a European program that might take a lead in advanced and emerging technologies.[142]Only in 2015, after over 800 European neuroscientists threatened to boycott the European-wide collaboration, were changes introduced into the Human Brain Project, supplanting many of the original political and economic considerations with scientific ones.[143]
As of 2019, theEuropean Union'sHuman Brain Projecthad not lived up to its extravagant promise.[144]
Nathan Myhrvold, formerMicrosoftchief technology officer and founder ofMicrosoft Research, argues that the funding ofbasic sciencecannot be left to theprivate sector—that "without government resources, basic science will grind to a halt."[145]He notes thatAlbert Einstein'sgeneral theory of relativity, published in 1915, did not spring full-blown from his brain in a eureka moment; he worked at it for years—finally driven to complete it by a rivalry with mathematicianDavid Hilbert.[145]The history of almost any iconic scientific discovery or technological invention—thelightbulb, thetransistor,DNA, even theInternet—shows that the famous names credited with the breakthrough "were only a few steps ahead of a pack of competitors." Some writers and elected officials have used this phenomenon of "parallel innovation" to argue against public financing of basic research: government, they assert, should leave it to companies to finance the research they need.[145]
Myhrvold writes that such arguments are dangerously wrong: without government support, most basic scientific research will never happen. "This is most clearly true for the kind of pure research that has delivered... great intellectual benefits but no profits, such as the work that brought us theHiggs boson, or the understanding that a supermassiveblack holesits at the center of theMilky Way, or the discovery ofmethaneseas on the surface ofSaturn's moonTitan. Company research laboratories used to do this kind of work: experimental evidence for theBig Bangwas discovered atAT&T'sBell Labs, resulting in aNobel Prize. Now those days are gone."[145]
Even in applied fields such asmaterials scienceandcomputer science, writes Myhrvold, "companies now understand that basic research is a form ofcharity—so they avoid it." Bell Labs scientists created thetransistor, but that invention earned billions forIntelandMicrosoft.Xerox PARCengineers invented the moderngraphical user interface, butAppleand Microsoft profited most.IBMresearchers pioneered the use of giantmagnetoresistanceto boosthard-diskcapacity but soon lost the disk-drive business toSeagateandWestern Digital.[145]
Company researchers now have to focus narrowly on innovations that can quickly bring revenue; otherwise the research budget could not be justified to the company's investors. "Those who believe profit-driven companies will altruistically pay for basic science that has wide-ranging benefits—but mostly to others and not for a generation—are naive.... Ifgovernmentwere to leave it to theprivate sectorto pay forbasic research, mostsciencewould come to a screeching halt. What research survived would be done largely in secret, for fear of handing the next big thing to a rival."[145]
Governmental investment is equally vital in the field of biological research. According toWilliam A. Haseltine, a formerHarvard Medical Schoolprofessor and founder of that university's cancer and HIV / AIDS research departments, early efforts to control theCOVID-19 pandemicwere hampered by governments and industry everywhere having "pulled the plug oncoronavirusresearch funding in 2006 after the firstSARS[...] pandemic faded away and again in the years immediately following theMERS[outbreak, also caused by a coronavirus] when it seemed to be controllable.[146][...] The development of promising anti-SARS and MERS drugs, which might have been active against SARS–CoV-2 [in the Covid-19 pandemic] as well, was left unfinished for lack of money."[147]Haseltine continues:
We learned from theHIVcrisis that it was important to have research pipelines already established. [It was c]ancer research in the 1950s, 1960s and 1970s [that] built a foundation for HIV / Aids studies. [During those decades t]he government [had] responded to public concerns, sharply increasing federal funding of cancer research [...]. These efforts [had] culminated in Congress's approval of PresidentRichard Nixon'sNational Cancer Actin 1971. This [had] built the science we needed to identify and understand HIV in the 1980s, although of course no one knew that payoff was coming.[147]
In the 1980s theReagan administrationdid not want to talk about AIDS or commit much funding to HIV research. [But o]nce the news broke that actorRock Hudsonwas seriously ill with AIDS, [...] $320 million [were added to] the fiscal 1986 budget for AIDS research. [...] I helped [...] design this first congressionally funded AIDS research program withAnthony Fauci, the doctor now leading [the U.S.] fight against COVID-19.[147][...]
[The] tool set for virus and pharmaceutical research has improved enormously in the past 36 years since HIV was discovered. What used to take five or 10 years in the 1980s and 1990s in many cases now can be done in five or 10 months. We can rapidly identify and synthesize chemicals to predict which drugs will be effective. We can docryoelectron microscopyto probe virus structures and simulate molecule-by-molecule interactions in a matter of weeks – something that used to take years. The lesson is to never let down our guard when it comes to funding antiviral research. We would have no hope of beating COVID-19 if it were not for the molecular biology gains we made during earlier virus battles. What we learn this time around will help us [...] during the next pandemic, but we must keep the money coming.[147]
A complementary perspective on the funding of scientific research is given by D.T. Max, writing about theFlatiron Institute, a computational center set up in 2017 inManhattanto provide scientists with mathematical assistance. The Flatiron Institute was established byJames Harris Simons, a mathematician who had used mathematicalalgorithmsto make himself aWall Streetbillionaire. The institute has three computational divisions dedicated respectively toastrophysics,biology, andquantum physics, and is working on a fourth division forclimate modelingthat will involve interfaces ofgeology,oceanography,atmospheric science,biology, andclimatology.[130]
The Flatiron Institute is part of a trend in the sciences toward privately funded research. In the United States,basic sciencehas traditionally been financed by universities or the government, but private institutes are often faster and more focused. Since the 1990s, whenSilicon Valleybegan producing billionaires, private institutes have sprung up across the U.S. In 1997Larry Ellisonlaunched theEllison Medical Foundationto study the biology ofaging. In 2003Paul Allenfounded theAllen Institute for Brain Science. In 2010Eric Schmidtfounded theSchmidt Ocean Institute.[148]
These institutes have done much good, partly by providing alternatives to more rigid systems. Butprivate foundationsalso have liabilities. Wealthy benefactors tend to direct their funding toward their personal enthusiasms. And foundations are not taxed; much of the money that supports them would otherwise have gone to the government.[148]
John P.A. Ioannidis, ofStanford University Medical School, writes that "There is increasing evidence that some of the ways we conduct, evaluate, report and disseminate research are miserably ineffective. A series of papers in 2014 inThe Lancet... estimated that 85 percent of investment inbiomedical researchis wasted. Many other disciplines have similar problems."[149]Ioannidis identifies some science-funding biases that undermine the efficiency of the scientific enterprise, and proposes solutions:
Funding too few scientists: "[M]ajor success [in scientific research] is largely the result of luck, as well as hard work. The investigators currently enjoying huge funding are not necessarily genuine superstars; they may simply be the best connected." Solutions: "Use a lottery to decide which grant applications to fund (perhaps after they pass a basic review).... Shift... funds from senior people to younger researchers..."[149] (A minimal sketch of such a funding lottery appears after this list of funding biases.)
No reward fortransparency: "Many scientific protocols, analysis methods, computational processes and data are opaque. [M]any top findings cannot bereproduced. That is the case for two out of three top psychology papers, one out of three top papers in experimental economics and more than 75 percent of top papers identifying new cancer drug targets. [S]cientists are not rewarded for sharing their techniques." Solutions: "Create better infrastructure for enabling transparency, openness and sharing. Make transparency a prerequisite for funding. [P]referentially hire, promote or tenure... champions of transparency."[149]
No encouragement forreplication: Replication is indispensable to thescientific method. Yet, under pressure to produce newdiscoveries, researchers tend to have little incentive, and much counterincentive, to try replicating results of previous studies. Solutions: "Funding agencies must pay for replication studies. Scientists' advancement should be based not only on their discoveries but also on their replication track record."[149]
No funding for young scientists: "Werner Heisenberg,Albert Einstein,Paul DiracandWolfgang Paulimade their top contributions in their mid-20s." But the average age of biomedical scientists receiving their first substantial grant is 46. The average age for a full professor in the U.S. is 55. Solutions: "A larger proportion of funding should be earmarked for young investigators. Universities should try to shift the aging distribution of their faculty by hiring more young investigators."[149]
Biased funding sources: "Most funding forresearch and developmentin the U.S. comes not from the government but from private, for-profit sources, raising unavoidableconflicts of interestand pressure to deliver results favorable to the sponsor." Solutions: "Restrict or even ban funding that has overt conflicts of interest.Journalsshould not accept research with such conflicts. For less conspicuous conflicts, at a minimum ensure transparent and thorough disclosure."[150][q]
Funding the wrong fields: "Well-funded fields attract more scientists to work for them, which increases their lobbying reach, fueling avicious circle. Some entrenched fields absorb enormous funding even though they have clearly demonstrated limited yield or uncorrectable flaws." Solutions: "Independent, impartial assessment of output is necessary for lavishly funded fields. More funds should be earmarked for new fields and fields that are high risk. Researchers should be encouraged to switch fields, whereas currently they are incentivized to focus in one area."[150]
Not spending enough: The U.S. military budget ($886 billion) is 24 times the budget of theNational Institutes of Health($37 billion). "Investment in science benefits society at large, yet attempts to convince the public often make matters worse when otherwise well-intentioned science leaders promise the impossible, such as promptly eliminating all cancer orAlzheimer's disease." Solutions: "We need to communicate how science funding is used by making the process of science clearer, including the number of scientists it takes to make major accomplishments.... We would also make a more convincing case for science if we could show that we do work hard on improving how we run it."[150]
Rewarding big spenders: "Hiring, promotion andtenuredecisions primarily rest on a researcher's ability to secure high levels of funding. But the expense of a project does not necessarily correlate with its importance. Such reward structures select mostly for politically savvy managers who know how to absorb money." Solutions: "We should reward scientists for high-quality work, reproducibility and social value rather than for securing funding. Excellent research can be done with little to no funding other than protected time. Institutions should provide this time and respect scientists who can do great work without wasting tons of money."[150]
No funding for high-risk ideas: "The pressure that taxpayer money be 'well spent' leads government funders to back projects most likely to pay off with a positive result, even if riskier projects might lead to more important, but less assured, advances. Industry also avoids investing in high-risk projects...Innovationis extremely difficult, if not impossible, to predict..." Solutions: "Fund excellent scientists rather than projects and give them freedom to pursue research avenues as they see fit. Some institutions such asHoward Hughes Medical Institutealready use this model with success." It must be communicated to the public and to policy-makers that science is a cumulative investment, that no one can know in advance which projects will succeed, and that success must be judged on the total agenda, not on a single experiment or result.[150]
Lack of good data: "There is relatively limited evidence about which scientific practices work best. We need more research on research ('meta-research') to understand how to best perform, evaluate, review, disseminate and reward science." Solutions: "We should invest in studying how to get the best science and how to choose and reward the best scientists."[150]
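The grant lottery proposed under "Funding too few scientists" above is mechanically simple. A minimal sketch, with invented proposal names, review scores, threshold, and slot count, might look like this:

# Minimal sketch of a grant lottery: applications that pass a basic quality
# review enter a random draw for the available slots. All names and numbers
# below are invented for illustration.
import random

applications = {
    "proposal-A": 7.1, "proposal-B": 5.9, "proposal-C": 8.4,
    "proposal-D": 6.6, "proposal-E": 4.2, "proposal-F": 7.8,
}
REVIEW_THRESHOLD = 6.0   # minimum review score to enter the lottery
SLOTS = 2                # number of grants that can be funded

eligible = [name for name, score in applications.items()
            if score >= REVIEW_THRESHOLD]
funded = random.sample(eligible, k=min(SLOTS, len(eligible)))
print("eligible:", eligible)
print("funded:  ", funded)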
Naomi Oreskes, professor of thehistory of scienceatHarvard University, writes about the desirability of diversity in the backgrounds of scientists.
The history of science is rife with [...] cases of misogyny, prejudice and bias. For centuries biologists promoted false theories of female inferiority, and scientific institutions typically barred women's participation. Historian of science [...] Margaret Rossiter has documented how, in the mid-19th century, female scientists created their own scientific societies to compensate for their male colleagues' refusal to acknowledge their work. Sharon Bertsch McGrayne filled an entire volume with the stories of women who should have been awarded the Nobel Prize for work that they did in collaboration with male colleagues – or, worse, that had been stolen by them. [...] Racial bias has been at least as pernicious as gender bias; it was scientists, after all, who codified the concept of race as a biological category that was not simply descriptive but also hierarchical.[152]
[...][C]ognitive scienceshows that humans are prone to bias, misperception, motivated reasoning and other intellectual pitfalls. Because reasoning is slow and difficult, we rely onheuristics– intellectual shortcuts that often work but sometimes fail spectacularly. (Believing that men are, in general, better than women in math is one tiring example.) [...][152]
[...] Science is a collective effort, and it works best when scientific communities are diverse. [H]eterogeneous communities are more likely than homogeneous ones to be able to identify blind spots and correct them. Science does not correct itself; scientists correct one another through critical interrogation. And that means being willing to interrogate not just claims about the external world but claims about [scientists'] own practices and processes as well.[152]
Claire Pomeroy, president of theLasker Foundation, which is dedicated to advancingmedical research, points out thatwomen scientistscontinue to be subjected todiscriminationin professional advancement.[153]
Though the percentage of doctorates awarded to women inlife sciencesin the United States increased from 15 to 52 percent between 1969 and 2009, only a third of assistant professors and less than a fifth of full professors in biology-related fields in 2009 were women. Women make up only 15 percent of permanent department chairs inmedical schoolsand barely 16 percent of medical-school deans.[153]
The problem is a culture of unconsciousbiasthat leaves many women feeling demoralized and marginalized. In one study, science faculty were given identicalrésumésin which the names and genders of two applicants were interchanged; both maleandfemale faculty judged the male applicant to be more competent and offered him a higher salary.[153]
Unconscious bias also appears as "microassaults" againstwomen scientists: purportedly insignificantsexistjokes and insults that accumulate over the years and undermine confidence and ambition. Writes Claire Pomeroy: "Each time it is assumed that the only woman in the lab group will play the role of recording secretary, each time a research plan becomes finalized in the men's lavatory between conference sessions, each time a woman is not invited to go out for a beer after the plenary lecture to talk shop, the damage is reinforced."[153]
"When I speak to groups of women scientists," writes Pomeroy, "I often ask them if they have ever been in a meeting where they made a recommendation, had it ignored, and then heard a man receive praise and support for making the same point a few minutes later. Each time the majority of women in the audience raise their hands. Microassaults are especially damaging when they come from ahigh-schoolscience teacher, collegementor, university dean or a member of the scientific elite who has been awarded a prestigious prize—the very people who should be inspiring and supporting the next generation of scientists."[153]
Sexual harassmentis more prevalent inacademiathan in any other social sector except themilitary. A June 2018 report by theNational Academies of Sciences, Engineering, and Medicinestates that sexual harassment hurts individuals, diminishes the pool of scientific talent, and ultimately damages the integrity of science.[154]
Paula Johnson, co-chair of the committee that drew up the report, describes some measures for preventing sexual harassment in science. One would be to replace trainees' individualmentoringwith group mentoring, and to uncouple the mentoring relationship from the trainee's financial dependence on the mentor. Another way would be to prohibit the use ofconfidentiality agreementsin connection with harassment cases.[154]
A novel approach to the reporting of sexual harassment, dubbed Callisto and adopted by some institutions of higher education, lets aggrieved persons record experiences of sexual harassment, date-stamped, without formally reporting them. The program lets people see whether others have recorded experiences of harassment involving the same individual, and share information anonymously.[154]
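The matching-escrow idea behind such systems can be illustrated with a minimal sketch. This is a hypothetical simplification, not Callisto's actual implementation: real systems add encryption, identity verification, and legal review, and the names and threshold below are invented.

# Simplified sketch of an information escrow for harassment reports:
# each report is stored with a timestamp, and reports are released to the
# institution only when at least two distinct complainants name the same person.
from collections import defaultdict
from datetime import datetime, timezone

escrow = defaultdict(list)     # accused identifier -> list of stored reports
RELEASE_THRESHOLD = 2          # release only on a match between complainants

def record_report(accused_id: str, reporter_id: str, details: str) -> None:
    escrow[accused_id].append({
        "reporter": reporter_id,
        "details": details,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    })

def reports_to_release() -> dict:
    """Return only the entries where independent reports match."""
    return {aid: reps for aid, reps in escrow.items()
            if len({r["reporter"] for r in reps}) >= RELEASE_THRESHOLD}

record_report("person-X", "reporter-1", "conference incident")
record_report("person-X", "reporter-2", "lab incident")
print(reports_to_release())   # person-X now meets the matching threshold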
PsychologistAndrei Cimpian andphilosophyprofessorSarah-Jane Lesliehave proposed a theory to explain why American women andAfrican-Americansare often subtly deterred from seeking to enter certain academic fields by a misplaced emphasis ongenius.[155]Cimpian and Leslie had noticed that their respective fields are similar in their substance but hold different views on what is important for success. Much more than psychologists, philosophers value a certainkind of person: the "brilliant superstar" with an exceptional mind. Psychologists are more likely to believe that the leading lights in psychology grew to achieve their positions through hard work and experience.[156]In 2015, women accounted for less than 30% of doctorates granted in philosophy; African-Americans made up only 1% of philosophy Ph.D.s. Psychology, on the other hand, has been successful in attracting women (72% of 2015 psychology Ph.D.s) and African-Americans (6% of psychology Ph.D.s).[157]
An early insight into these disparities was provided to Cimpian and Leslie by the work of psychologistCarol Dweck. She and her colleagues had shown that a person's beliefs aboutabilitymatter a great deal for that person's ultimate success. A person who sees talent as a stable trait is motivated to "show off this aptitude" and to avoid makingmistakes. By contrast, a person who adopts a "growthmindset" sees his or her current capacity as a work in progress: for such a person, mistakes are not an indictment but a valuable signal highlighting which of their skills are in need of work.[158]Cimpian and Leslie and their collaborators tested the hypothesis that attitudes, about "genius" and about the unacceptability of making mistakes, within various academic fields may account for the relative attractiveness of those fields for American women and African-Americans. They did so by contacting academic professionals from a wide range of disciplines and asking them whether they thought that some form of exceptional intellectual talent was required for success in their field. The answers received from almost 2,000 academics in 30 fields matched the distribution of Ph.D.s in the way that Cimpian and Leslie had expected: fields that placed more value on brilliance also conferred fewer Ph.D.s on women and African-Americans. The proportion of women and African-American Ph.D.s in psychology, for example, was higher than the parallel proportions for philosophy, mathematics, or physics.[159]
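The field-level test described here can be illustrated with a minimal sketch: average each field's "brilliance required" survey ratings and correlate them with the field's share of Ph.D.s awarded to women. All numbers below are invented; the actual survey covered 30 fields and nearly 2,000 academics and used more detailed analyses.

# Illustrative field-level correlation between mean "brilliance required"
# survey ratings (one value per field) and the share of Ph.D.s awarded to
# women in that field. Invented numbers only.
import numpy as np

brilliance_rating = np.array([5.2, 4.8, 4.1, 3.5, 3.0])
share_women_phd   = np.array([0.25, 0.30, 0.45, 0.60, 0.72])

r = np.corrcoef(brilliance_rating, share_women_phd)[0, 1]
print(f"field-level correlation: r = {r:.2f}")  # strongly negative with these numbers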
Further investigation showed that non-academics share similar ideas of which fields require brilliance. Exposure to these ideas at home or school could discourage young members ofstereotypedgroups from pursuing certain careers, such as those in the natural sciences or engineering. To explore this, Cimpian and Leslie asked hundreds of five-, six-, and seven-year-old boys and girls questions that measured whether they associated being "really, really smart" (i.e., "brilliant") with their sex. The results, published in January 2017 inScience, were consistent with scientific literature on the early acquisition of sex stereotypes. Five-year-old boys and girls showed no difference in their self-assessment; but by age six, girls were less likely to think that girls are "really, really smart." The authors next introduced another group of five-, six-, and seven-year-olds to unfamiliar gamelike activities that the authors described as being "for children who are really, really smart." Comparison of boys' and girls' interest in these activities at each age showed no sex difference at age five but significantly greater interest from boys at ages six and seven—exactly the ages when stereotypes emerge.[160]
Cimpian and Leslie conclude that, "Given current societal stereotypes, messages that portray [genius or brilliance] as singularly necessary [for academic success] may needlessly discourage talented members of stereotyped groups."[160]
Largely as a result of his growing popularity as a science popularizer, astronomer Carl Sagan (later the creator of the 1980 PBS TV Cosmos series) came to be ridiculed by his scientific peers; he failed to receive tenure at Harvard University in the 1960s and was denied membership in the National Academy of Sciences in the 1990s. The eponymous "Sagan effect" persists: as a group, scientists still discourage individual investigators from engaging with the public unless they are already well-established senior researchers.[161][162]
The operation of the Sagan effect deprives society of the full range of expertise needed to make informed decisions about complex questions, includinggenetic engineering,climate change, andenergyalternatives. Fewer scientific voices mean fewer arguments to counterantiscienceorpseudoscientificdiscussion. The Sagan effect also creates the false impression that science is the domain of older white men (who dominate the senior ranks), thereby tending to discourage women and minorities from considering science careers.[161]
A number of factors contribute to the Sagan effect's durability. At the height of theScientific Revolutionin the 17th century, many researchers emulated the example ofIsaac Newton, who dedicated himself to physics and mathematics and never married. These scientists were viewed as pure seekers of truth who were not distracted by more mundane concerns. Similarly, today anything that takes scientists away from their research, such as having a hobby or taking part in public debates, can undermine their credibility as researchers.[163]
Another, more prosaic factor in the Sagan effect's persistence may be professionaljealousy.[163]
However, there appear to be some signs that engaging with the rest of society is becoming less hazardous to a career in science. So many people have social-media accounts now that becoming a public figure is not as unusual for scientists as previously. Moreover, as traditional funding sources stagnate, going public sometimes leads to new, unconventional funding streams. A few institutions such asEmory Universityand theMassachusetts Institute of Technologymay have begun to appreciate outreach as an area of academic activity, in addition to the traditional roles of research, teaching, and administration. Exceptional among federal funding agencies, theNational Science Foundationnow officially favors popularization.[164][162]
Like infectious diseases, ideas in academia are contagious. But why some ideas gain great currency while equally good ones remain in relative obscurity has been unclear. A team of computer scientists has used an epidemiological model to simulate how ideas move from one academic institution to another. The model-based findings, published in October 2018, show that ideas originating at prestigious institutions cause bigger "epidemics" than equally good ideas from less prominent places. The finding reveals a big weakness in how science is done: many highly trained people with good ideas do not obtain posts at the most prestigious institutions, and much good work published by researchers at less prestigious places is overlooked by other scientists and scholars simply because they are not paying attention to it.[165]
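A toy version of such a model (not the published model itself) can be sketched as a simple contagion process over a set of institutions, where "prestige" is modeled only as the number of other institutions a node regularly reaches; the parameter values are invented.

# Toy SIR-style contagion of an idea across institutions. An idea seeded at a
# "high-prestige" node (many contacts) tends to spread further than one seeded
# at a "low-prestige" node, even though the idea itself is identical.
import random

random.seed(0)
N = 200                                                  # institutions
prestige = [random.randint(1, 20) for _ in range(N)]     # contacts per step

def epidemic_size(seed: int, p_adopt: float = 0.1, steps: int = 30) -> int:
    adopted, active = {seed}, [seed]
    for _ in range(steps):
        new = []
        for inst in active:
            # more "prestigious" institutions reach more potential adopters
            for other in random.sample(range(N), prestige[inst]):
                if other not in adopted and random.random() < p_adopt:
                    adopted.add(other)
                    new.append(other)
        active = new
    return len(adopted)

def mean_size(seed: int, trials: int = 200) -> float:
    return sum(epidemic_size(seed) for _ in range(trials)) / trials

high = max(range(N), key=lambda i: prestige[i])
low  = min(range(N), key=lambda i: prestige[i])
print("mean spread from a high-prestige seed:", mean_size(high))
print("mean spread from a low-prestige seed: ", mean_size(low))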
Naomi Oreskesremarks on another drawback to deprecatingpublic universitiesin favor ofIvy Leagueschools: "In 1970 most jobs did not require a college degree. Today nearly all well-paying ones do. With the rise ofartificial intelligenceand the continuedoutsourcingof low-skilled and de-skilled jobs overseas, that trend most likely will accelerate. Those who care aboutequityofopportunityshould pay less attention to the lucky few who get intoHarvardor other highly selective private schools and more to public education, because for most Americans, the road to opportunity runs through public schools."[166]
Resistance, among some of the public, to acceptingvaccinationand the reality ofclimate changemay be traceable partly to several decades of partisan attacks on government, leading to distrust of government science and then of science generally.[167]
Many scientists themselves have been loth to involve themselves in public policy debates for fear of losing credibility: they worry that if they participate in public debate on a contested question, they will be viewed as biased and discounted as partisan. However, studies show that most people want to hear from scientists on matters within their areas of expertise. Research also suggests that scientists can feel comfortable offering policy advice within their fields. "Theozonestory", writesNaomi Oreskes, "is a case in point: no one knew better than ozone scientists about the cause of the dangerous hole and therefore what needed to be done to fix it."[168]
Oreskes, however, identifies a factor that does "turn off" the public: scientists' frequent use ofjargon– of expressions that tend to be misinterpreted by, or incomprehensible to, laypersons.[167]
In climatological parlance, "positive feedback" refers to amplifying feedback loops, such as the ice-albedo feedback. ("Albedo", another piece of jargon, simply means "reflectivity".) The positive loop in question develops when global warming causes Arctic ice to melt, exposing water that is darker and reflects less of the sun's warming rays, leading to more warming, which leads to more melting... and so on. In climatology, such positive feedback is a bad thing; but for most laypersons, "it conjures reassuring images, such as receiving praise from your boss."[167]
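The amplifying character of such a loop can be made concrete with a generic back-of-the-envelope feedback calculation (not taken from the cited article): if an initial warming triggers a feedback that returns a fraction f of any warming as additional warming, with 0 < f < 1, the increments form a geometric series, in LaTeX notation:

\[ \Delta T \;=\; \Delta T_0 \left( 1 + f + f^2 + \cdots \right) \;=\; \frac{\Delta T_0}{1 - f} \]

A feedback fraction of f = 0.5 thus doubles the initial warming, and the amplification grows without bound as f approaches 1, which is why climatologists regard a strong "positive" feedback as bad news rather than as praise from the boss.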
When astronomers say "metals," they mean anyelementheavier thanhelium, which includesoxygenandnitrogen, a usage that is massively confusing not just to laypersons but also tochemists. [To astronomers] [t]heBig Dipperisn't aconstellation[...] it is an "asterism" [...] InAI, there is machine "intelligence," which isn't intelligence at all but something more like "machine capability." Inecology, there are "ecosystem services," which you might reasonably think refers to companies that clean upoil spills, but it is [actually] ecological jargon for all the good things that thenatural worlddoes for us. [T]hen there's [...] the theory of "communication accommodation," which means speaking so that the listener can understand.[167]
"[R]esearchers," writesNaomi Oreskes, "are often judged more by the quantity of their output than its quality. Universities [emphasize] metrics such as the numbers of published papers andcitationswhen they make hiring,tenureand promotion decisions."[169]
When publication in legitimate peer-reviewed journals is not feasible – for any of a number of possible reasons – the pressure to publish often creates a perverse incentive to turn to "predatory journals", which do not uphold scientific standards. Some 8,000 such journals publish 420,000 papers annually – roughly a sixth of the scientific community's annual output of 2.5 million papers. The papers published in a predatory journal are listed in scientific databases alongside those in legitimate journals, making it hard to discern the difference.[170]
One reason why some scientists publish in predatory journals is that prestigious scientific journals may charge scientists thousands of dollars for publishing, whereas a predatory journal typically charges less than $200. (Hence authors of papers in the predatory journals are disproportionately located in less wealthy countries and institutions.)[171]
Publishing in predatory journals can be life-threatening when physicians and patients accept spurious claims about medical treatments; and invalid studies can wrongly influence public policy. More such predatory journals are appearing every year. In 2008Jeffrey Beall, aUniversity of Coloradolibrarian, developed a list of predatory journals which he updated for several years.[172]
Naomi Oreskes argues that, "[t]o put an end to predatory practices, universities and other research institutions need to find ways to correct the incentives that lead scholars to prioritize publication quantity... Setting a maximum limit on the number of articles that hiring or funding committees can consider might help... as could placing less importance on the number of citations an author gets. After all, the purpose of science is not merely to produce papers. It is to produce papers that tell us something truthful and meaningful about the world."[173]
Theperverse incentiveto "publish or perish" is often facilitated bythe fabrication of data. A classic example is the identical-twin-studies results ofCyril Burt, which – soon after Burt's death – were found to have been based on fabricated data.
Writes Gideon Lewis-Kraus:
"One of the confounding things about thesocial sciencesis thatobservational evidencecan produce onlycorrelations. [For example, t]o what extent isdishonesty[which is the subject of a number of social-science studies] a matter ofcharacter, and to what extent a matter of situation?Research misconductis sometimes explained away byincentives– thepublishingrequirements for thejobmarket, or the acclaim that can lead toconsultingfees andDavosappearances. [...] The differences betweenp-hackingandfraudis one of degree. And once it becomes customary within a field to inflate results, the field selects for researchers inclined to do so."[174]
Joe Simmons, a behavioral-science professor, writes:
"[A] field cannot rewardtruthif it does not or cannot decipher it, so it rewards other things instead. Interestingness.Novelty. Speed. Impact. Fantasy. And it effectively punishes the opposite.Intuitive Findings. Incremental Progress. Care.Curiosity.Reality."[175]
Harvard University historian of science Naomi Oreskes writes that a theme at the 2024 World Economic Forum in Davos, Switzerland, was a "perceived need to 'accelerate breakthroughs in research and technology.'"[176]
"[R]ecent years", however, writes Oreskes, "[have] seen important papers, written by prominent scientists and published in prestigious journals,retractedbecause of questionable data or methods." For example, the Davos meeting took place after the resignations – over questionably reliable academic papers – in 2023 ofStanford UniversitypresidentMarc Tessier-Lavigneand, in 2024, ofHarvard UniversitypresidentClaudine Gay. "In one interesting case,Frances H. Arnoldof theCalifornia Institute of Technology, who shared the 2018Nobel Prize in Chemistry, voluntarily retracted a paper when her lab was unable toreplicateher results – but after the paper had been published." Such incidents, suggests Oreskes, are likely to erode public trust in science and in experts generally.[177]
Academics at leading universities in the United States and Europe are subject to perverse incentives to produce results – and lots of them – quickly. A study has put the number of papers published around 2023 by scientists and other scholars at over seven million annually, compared with fewer than a million in 1980. Another study found 265 authors – two-thirds in the medical and life sciences – who published on average a paper every five days.[178]
"Good science [and scholarship take] time", writes Oreskes. "More than 50 years elapsed between the 1543 publication ofCopernicus's magnum opus... and the broad scientific acceptance of theheliocentric model... Nearly a century passed between biochemistFriedrich Miescher's identification of theDNAmolecule and suggestion that it might be involved in inheritance and the elucidation of itsdouble-helixstructure in the 1950s. And it took just about half a century for geologists and geophysicists to accept geophysicistAlfred Wegener's idea ofcontinental drift."[179]
|
https://en.wikipedia.org/wiki/Logology_(science_of_science)#Multiple_discovery
|
The Matilda effect is a bias against acknowledging the achievements of women scientists whose work is attributed to their male colleagues. This phenomenon was first described by suffragist and abolitionist Matilda Joslyn Gage (1826–1898) in her essay, "Woman as Inventor" (first published as a tract in 1870 and in the North American Review in 1883). The term Matilda effect was coined in 1993 by science historian Margaret W. Rossiter.[1][2]
Rossiter provides several examples of this effect. Trotula (Trota of Salerno), a 12th-century Italian woman physician, wrote books which, after her death, were attributed to male authors. Nineteenth- and twentieth-century cases illustrating the Matilda effect include those of Nettie Stevens,[3]Lise Meitner, Marietta Blau, Rosalind Franklin, and Jocelyn Bell Burnell.
The Matilda effect was compared to theMatthew effect, whereby an eminent scientist often gets more credit than a comparatively unknown researcher, even if their work is shared or similar.[4][5]
In 2012, Marieke van den Brink and Yvonne Benschop from Radboud University Nijmegen showed that in the Netherlands the sex of professorship candidates influences the evaluation made of them.[6]Similar cases are described by Andrea Cerroni and Zenia Simonella in a study[7]corroborated further by a Spanish study.[8]On the other hand, several studies found no difference between citations and impact of publications of male authors and those of female authors.[9][10][11]
Swiss researchers have indicated that the mass media ask male scientists to contribute to shows more often than they ask their female fellow scientists.[12]
According to one U.S. study, "although overt gender discrimination generally continues to decline in American society," "women continue to be disadvantaged with respect to the receipt of scientific awards and prizes, particularly for research."[13]
Examples of women subjected to the Matilda effect:
Examples of men scientists favored over women scientists forNobel Prizes:
The Spanish Association of Women Researchers and Technologists (AMIT) has created a movement called "No more Matildas" that honours Matilda Joslyn Gage.[33]The campaign's goal is to increase the number of women in science from an early age and to eliminate stereotypes.
Ben Barres(1954–2017) was a neurobiologist atStanford University Medical Schoolwho transitioned from female to male. He spoke of his scientific achievements having been perceived differently, depending on what sex others thought he was at the time.[34]Prior to his transition to male, Barres' scientific achievements were ascribed to men or devalued, but after transitioning to male, his achievements were credited to him and lauded.
|
https://en.wikipedia.org/wiki/Matilda_effect
|
The Matthew effect, sometimes called the Matthew principle or cumulative advantage,[1]is the tendency of individuals to accrue social or economic success in proportion to their initial level of popularity, friends, and wealth. It is sometimes summarized by the adage or platitude "the rich get richer and the poor get poorer".[2][3]Also termed the "Matthew effect of accumulated advantage", taking its name from the Parable of the Talents in the biblical Gospel of Matthew, it was coined by sociologists Robert K. Merton and Harriet Zuckerman in 1968.[4][5]
Early studies of Matthew effects were primarily concerned with the inequality in the way scientists were recognized for their work. However, Norman W. Storer, of Columbia University, led a new wave of research; he found that the inequality that existed in the social sciences also existed in other institutions.[6]
Later, in network science, a form of the Matthew effect was discovered in internet networks and called preferential attachment. The mathematics used for this network analysis of the internet was later reapplied to the Matthew effect in general, whereby wealth or credit is distributed among individuals according to how much they already have. This has the net effect of making it increasingly difficult for low-ranked individuals to increase their totals because they have fewer resources to risk over time, and increasingly easy for high-ranked individuals to preserve a large total because they have a large amount to risk.[7]
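This rich-get-richer allocation rule is easy to make concrete. The sketch below is an illustrative toy model, not taken from the cited sources (the starting holdings and round count are assumptions): one unit of credit is awarded per round with probability proportional to what each individual already holds, so a small initial edge tends to widen over time.

```python
import random

def simulate_cumulative_advantage(initial, rounds, seed=0):
    """Award one unit of credit per round, with probability proportional
    to each individual's current holdings (a rich-get-richer process)."""
    rng = random.Random(seed)
    wealth = list(initial)
    for _ in range(rounds):
        winner = rng.choices(range(len(wealth)), weights=wealth, k=1)[0]
        wealth[winner] += 1
    return wealth

# Two individuals start almost equal; the small initial edge usually grows
# into a large absolute gap as the rounds accumulate.
print(simulate_cumulative_advantage([11, 10], rounds=10_000))
```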
The concept is named according to two of the parables of Jesus in the synoptic Gospels (Table 2 of the Eusebian Canons). The concept concludes both synoptic versions of the parable of the talents:
For to every one who has will more be given, and he will have abundance; but from him who has not, even what he has will be taken away.
I tell you, that to every one who has will more be given; but from him who has not, even what he has will be taken away.
The concept concludes two of the three synoptic versions of the parable of thelamp under a bushel(absent in the version of Matthew):
For to him who has will more be given; and from him who has not, even what he has will be taken away.
Take heed then how you hear; for to him who has will more be given, and from him who has not, even what he thinks that he has will be taken away.
The concept is presented again in Matthew outside of a parable duringChrist's explanation to his disciples of the purpose of parables:
And he answered them, "To you it has been given to know the secrets of the kingdom of heaven, but to them it has not been given. For to him who has will more be given, and he will have abundance; but from him who has not, even what he has will be taken away."
Before it was called "the Matthew effect", Udny Yule noticed the effect in flower populations in 1925; in population-growth studies the process is called the Yule process in his honor.
In the sociology of science, the first description of the Matthew effect was given by Price in 1976.[8](He referred to the process as a "cumulative advantage" process.) His was also the first application of the process to the growth of a network, producing what would now be called a scale-free network. It is in the context of network growth that the process is most frequently studied today. Price also promoted preferential attachment as a possible explanation for power laws in many other phenomena, including Lotka's law of scientific productivity and Bradford's law of journal use.
"Matthew effect" was a term coined byRobert K. MertonandHarriet Anne Zuckermanto describe how, among other things, eminent scientists will often get more credit than a comparatively unknown researcher, even if their work is similar; it also means that credit will usually be given to researchers who are already famous.[4][5]For example, a prize will almost always be awarded to the most senior researcher involved in a project, even if all the work was done by agraduate student. This was later formulated byStephen StiglerasStigler's law of eponymy– "No scientific discovery is named after its original discoverer" – with Stigler explicitly naming Merton as the true discoverer, making his "law" an example of itself. Merton and Zuckerman further argued that in the scientific community the Matthew effect reaches beyond simple reputation to influence the wider communication system, playing a part in social selection processes and resulting in a concentration of resources and talent. They gave as an example the disproportionate visibility given to articles from acknowledged authors, at the expense of equally valid or superior articles written by unknown authors. They also noted that the concentration of attention on eminent individuals can lead to an increase in their self-assurance, pushing them to perform research in important but risky problem areas.[4]
The Matthew Effect also relates to broader patterns of scientific productivity, which can be explained by additional sociological concepts in science, such as the sacred spark, cumulative advantage, and search costs minimization by journal editors. The sacred spark paradigm suggests that scientists differ in their initial abilities, talent, skills, persistence, work habits, etc. that provide particular individuals with an early advantage. These factors have a multiplicative effect which helps these scholars succeed later. The cumulative advantage model argues that an initial success helps a researcher gain access to resources (e.g., teaching release, best graduate students, funding, facilities, etc.), which in turn results in further success. Search costs minimization by journal editors takes place when editors try to save time and effort by consciously or subconsciously selecting articles from well-known scholars. Whereas the exact mechanism underlying these phenomena is yet unknown, it is documented that a minority of all academics produce the most research output and attract the most citations.[9]
In addition to its influence on recognition and productivity, the Matthew Effect can also be observed in the distribution of scientific resources, such as funding. A large Matthew effect was discovered in a study of science funding in the Netherlands, where winners just above the funding threshold were found to accumulate more than twice as much funding during the subsequent eight years as non-winners with near-identical review scores that fell just below the threshold.[10]
In education, the term "Matthew effect" has been adopted by psychologistKeith Stanovich[11]and popularised by education theoristAnthony Kellyto describe a phenomenon observed in research on how new readers acquire the skills to read. Effectively, early success in acquiring reading skills usually leads to later successes in reading as the learner grows, while failing to learn to read before the third or fourth year of schooling may be indicative of lifelong problems in learning new skills.[12]
This is because children who fall behind in reading would read less, increasing the gap between them and their peers. Later, when students need to "read to learn" (where before they were learning to read), their reading difficulty creates difficulty in most other subjects. In this way they fall further and further behind in school, dropping out at a much higher rate than their peers.[13]This effect has been used in legal cases, such asBrody v. Dare County Board of Education.[14]Such cases argue that early education intervention is essential fordisabledchildren, and that failing to do so negatively impacts those children.[15]
A 2014 review of Matthew effect in education found mixed empirical evidence, where Matthew effect tends to describe the development of primary school skills, while a compensatory pattern was found for skills with ceiling effects.[16]A 2016 study on reading comprehension assessments for 99 thousand students found a pattern of stable differences, with some narrowing of the gap for students with learning disabilities.[17]
In network science, the Matthew effect was noticed as preferential attachment of earlier nodes in a network, whereby these nodes tend to attract more links early on.[18]
The application of preferential attachment to the growth of the World Wide Web was proposed by Barabási and Albert in 1999.[19]Barabási and Albert also coined the name "preferential attachment", and suggested that the process might apply to the growth of other networks as well. For growing networks, the precise functional form of preferential attachment can be estimated by maximum likelihood estimation.[20]
Due to preferential attachment, Matjaž Perc writes "a node that acquires more connections than another one will increase its connectivity at a higher rate, and thus an initial difference in the connectivity between two nodes will increase further as the network grows, while the degree of individual nodes will grow proportional with the square root of time."[7]The Matthew Effect therefore explains the growth of some nodes in vast networks such as the Internet.[21]
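A minimal sketch of this preferential-attachment dynamic follows; it is an assumption-laden toy model rather than the exact models used in the cited studies. Each newcomer links to one existing node chosen with probability proportional to its degree, and the degree of an early node then grows roughly in proportion to the square root of time, as Perc describes.

```python
import math
import random

def grow_network(steps, seed=1):
    """Grow a network by preferential attachment: each new node links to one
    existing node chosen with probability proportional to its current degree."""
    rng = random.Random(seed)
    degree = [1, 1]                 # start from two nodes joined by one link
    history = []
    for t in range(1, steps + 1):
        target = rng.choices(range(len(degree)), weights=degree, k=1)[0]
        degree[target] += 1
        degree.append(1)            # the newcomer enters with a single link
        history.append((t, degree[0]))
    return history

# The degree of node 0 divided by sqrt(t) stays roughly constant as t grows.
for t, k in grow_network(100_000)[9_999::20_000]:
    print(t, k, round(k / math.sqrt(t), 2))
```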
A model for career progress quantitatively incorporates the Matthew Effect in order to predict the distribution of individual career length in competitive professions. The model predictions are validated by analyzing the empirical distributions of career length for careers in science and professional sports (e.g.Major League Baseball).[22]As a result, the disparity between the large number of short careers and the relatively small number of extremely long careers can be explained by the "rich-get-richer" mechanism, which in this framework, provides more experienced and more reputable individuals with a competitive advantage in obtaining new career opportunities.
Bask (2024) reviewed theoretical research on academic career progression and found that Feichtinger et al. developed a model where a researcher's reputation grows through scientific effort but declines without continual activity.[23]Their model incorporates the Matthew effect, in that researchers with high initial reputations benefit more from their efforts, while those with low reputations may see theirs diminish even with similar effort. They showed that if a researcher starts with low reputation, their career is likely to decline and eventually end, whereas researchers starting with high reputation may either sustain a successful career or face early exit depending on their effort over time.[23]
Experiments manipulating download counts or bestseller lists for books and music have shown consumer activity follows the apparent popularity.[24][25][26]
Social influence often induces a rich-get-richer phenomenon where popular products tend to become even more popular.[27]An example of the Matthew Effect's role on social influence is an experiment by Salganik, Dodds, and Watts in which they created an experimental virtual market named MUSICLAB. In MUSICLAB, people could listen to music and choose to download the songs they enjoyed the most. The song choices were unknown songs produced by unknown bands. There were two groups tested; one group was given zero additional information on the songs and one group was told the popularity of each song and the number of times it had previously been downloaded.[28]As a result, the group that saw which songs were the most popular and were downloaded the most were then biased to choose those songs as well. The songs that were most popular and downloaded the most stayed at the top of the list and consistently received the most plays. To summarize the experiment's findings, the performance rankings had the largest effect boosting expected downloads the most. Download rankings had a decent effect; however, not as impactful as the performance rankings.[29]Abeliuk et al. (2016) also proved that when utilizing "performance rankings", a monopoly will be created for the most popular songs.[30]
The ideas of cumulative inequality theory were developed by Kenneth Ferraro and colleagues as an integrative or middle-range theory. Originally specified in five axioms and nineteen propositions, cumulative inequality theory incorporates elements from several other theories and perspectives, many of them related to the study of society.
In recent years, Ferraro and several other researchers have been testing and elaborating elements of the theory on a variety of topics to provide evidence for the theoretical framework. The studies below illustrate some uses of the theory in sociological research. As the theory holds, "social systems generate inequality, which is manifested over the life course via demographic and developmental processes."[31]
McDonough, Worts, Booker, et al. (2015) for example studied cumulative disadvantage in the generations of health inequality among mothers in Britain and the United States. The study examined "if adverse circumstances early in the life course cumulate as health harming biographical patterns across working and family caregiving years."[32]Also, it was examined if institutional context moderated cumulative effects of micro level processes. The results showed that existing health disparities of women in midlife, during work and family rearing time, were intensified by cumulative disadvantages caused by adversities in early life. Thus, the accumulation of disadvantage had negative connotations for the well-being of women's occupational experiences and family life.
McLean (2010), on the other hand, studied U.S. combat and non-combat veterans through the lens of cumulative disadvantage. He found that accumulated disadvantages caused by disability and unemployment were more likely to affect the lives of combat veterans than those of non-combat veterans. Combat veterans suffered physical and emotional trauma that had a disabling effect, which impeded their ability to obtain employment. The research is crucial for social-policy implementation that assists United States veterans in finding and retaining employment suitable to their personal conditions.[citation needed]
Woolredge, Frank, Coulette, et al. (2016), in turn, studied the prison sentencing of racial groups, specifically of African American males with prior felony convictions. They examined how pre-trial processes affect trial outcomes. It was determined that cumulative disadvantage existed for African American males and young men; the results were measured by set bail amounts, pre-trial detention, prison sentencing, and the absence of reductions in sentence length. The research strives to create changes in the justice system that reduce incarceration rates of African American males by reducing bail amounts and pre-trial imprisonment. Further studies are important to decrease the incarceration of minority groups in society and to create an unbiased justice system.[citation needed]
Additionally, Ferraro & Moore (2003) have applied the theory to the study of long-term consequences of early obesity for midlife health and socioeconomic attainment. The study shows that obesity experienced in early life leads to lower-body disability as well as elevated health risk factors.[33]Moreover, the research mentions a risk that has been brought to attention in recent years: it ties being overweight to negative stigma (DeJong 1980), which has influenced labor market positioning[34]and wages.[35]
Lastly, Crystal, Shea, & Reyes (2016) studied the effects of cumulative advantage in increasing within-cohort economic inequalities over different periods of time. The study utilized economic measures such as annual wealth value and household size, and inequality within age cohorts was analyzed using the Gini coefficient. The study covered the years 1980 to 2010. The results showed that individuals aged 65 and over had higher rates of inequality, and that inequality increased significantly for baby boomers and during economic recessions and times of war. The research aims to estimate the possible impacts of Social Security changes on older adults in American society.
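The Gini coefficient used in that study is a standard summary of concentration: 0 means everyone holds the same amount, and values near 1 mean one person holds nearly everything. A common computation on sorted data is sketched below; it is an illustration with made-up numbers, not the study's actual procedure or data.

```python
def gini(values):
    """Gini coefficient from a list of non-negative incomes or wealth values:
    0.0 means perfect equality; values near 1.0 mean extreme concentration."""
    xs = sorted(values)
    n, total = len(xs), sum(xs)
    if n == 0 or total == 0:
        return 0.0
    # Standard rank-weighted formula on data sorted in ascending order.
    rank_weighted = sum((i + 1) * x for i, x in enumerate(xs))
    return (2 * rank_weighted) / (n * total) - (n + 1) / n

print(gini([20, 20, 20, 20, 20]))   # 0.0 - everyone holds the same amount
print(gini([0, 0, 0, 0, 100]))      # 0.8 - one person holds everything
```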
In conclusion, cumulative inequality (or cumulative disadvantage) theory is being applied broadly to topics that bear on public policy and on the view of our role within society. Further benefits of the theory remain to be seen in the coming years.
The concept of cumulative advantage, based on Merton and Zuckerman's Matthew Effect, has been widely applied to the study oflife courseinequality.[36][37]Dannefer (2003) argued that inequalities in resources, health, and social status systematically widen over time, shaped by social institutions, economic structures, and psychosocial factors like perceived agency and self-efficacy. Early advantages or disadvantages become amplified, producing growing disparities as individuals age. Pallas (2009) further highlighted how cumulative advantage involves shifts between different types of capital, such as human, economic, and symbolic, complicating efforts to measure inequality over time.[38]
Research has expanded cumulative advantage beyond aging to domains such as education, work, health, and wealth.[37]In education, early academic differences lead to greater access to opportunities and resources, compounding over time. In the workforce, initial job placements and early career achievements create divergent paths in earnings and occupational mobility. Family background and neighborhood contexts also play a role, reinforcing early disparities across the life course.[37]
Open Scienceis "the movement to make scientific research (including publications, data, physical samples, and software) and its dissemination accessible to all levels of society, amateur or professional". One of its key motivations is increasing equity in scientific endeavors. However, Ross-Hellauer, T. et. al. (2022) argue that Open Science's ambition to reduce inequalities in academia may inadvertently perpetuate or exacerbate existing disparities caused by cumulative advantage.[39]As Open Science progresses, it faces the challenge of balancing its goals of openness and accessibility with the risk that its practices could reinforce the privileges of the more advantaged, particularly in terms of access to knowledge, technology, and funding. The authors make this critique to urge professionals to reflect "upon the ways in which implementation may run counter to ideals".[39]
|
https://en.wikipedia.org/wiki/Matthew_effect
|
The concept of multiple discovery (also known as simultaneous invention)[1][self-published source]is the hypothesis that most scientific discoveries and inventions are made independently and more or less simultaneously by multiple scientists and inventors.[2][page needed]The concept of multiple discovery opposes a traditional view—the "heroic theory" of invention and discovery.[not verified in body]Multiple discovery is analogous to convergent evolution in biological evolution.[according to whom?][clarification needed]
When Nobel laureates are announced annually—especially in physics, chemistry, physiology, medicine, and economics—increasingly, in the given field, rather than just a single laureate, there are two, or the maximally permissible three, who often have independently made the same discovery.[according to whom?][citation needed]Historians and sociologists have remarked on the occurrence, in science, of "multiple independent discovery". Robert K. Merton defined such "multiples" as instances in which similar discoveries are made by scientists working independently of each other.[3][4]Merton contrasted a "multiple" with a "singleton"—a discovery that has been made uniquely by a single scientist or group of scientists working together.[5]As Merton said, "Sometimes the discoveries are simultaneous or almost so; sometimes a scientist will make a new discovery which, unknown to him, somebody else has made years before."[4][page needed][6]
Commonly cited examples of multiple independent discovery are the 17th-century independent formulation of calculus by Isaac Newton, Gottfried Wilhelm Leibniz and others;[7][page needed]the 18th-century discovery of oxygen by Carl Wilhelm Scheele, Joseph Priestley, Antoine Lavoisier and others;[citation needed]and the theory of evolution of species, independently advanced in the 19th century by Charles Darwin and Alfred Russel Wallace.[8][better source needed]What holds for discoveries also goes for inventions.[according to whom?][citation needed]Examples are the blast furnace (invented independently in China, Europe and Africa),[citation needed]the crossbow (invented independently in China, Greece, Africa, northern Canada, and the Baltic countries),[citation needed]magnetism (discovered independently in Greece, China, and India),[citation needed]the computer mouse (both rolling and optical), powered flight, and the telephone.

Multiple independent discovery, however, is not limited to only a few historic instances involving giants of scientific research. Merton believed that it is multiple discoveries, rather than unique ones, that represent the common pattern in science.[9]
Multiple discoveries in the history of science provide evidence for evolutionary models of science and technology, such as memetics (the study of self-replicating units of culture), evolutionary epistemology (which applies the concepts of biological evolution to the study of the growth of human knowledge), and cultural selection theory (which studies sociological and cultural evolution in a Darwinian manner).[citation needed]
A recombinant-DNA-inspired "paradigm of paradigms" has been posited that describes a mechanism of "recombinant conceptualization".[10]This paradigm predicates that a new concept arises through the crossing of pre-existing concepts and facts.[10][11]This is what is meant when one says that a scientist or artist has been "influenced by" another—etymologically, that a concept of the latter's has "flowed into" the mind of the former.[10]Not every new concept so formed will be viable: adapting social Darwinist Herbert Spencer's phrase, only the fittest concepts survive.[10]
Multiple independent discovery and invention, like discovery and invention generally, have been fostered by the evolution of means of communication: roads, vehicles, sailing vessels, writing, printing, institutions of education, reliable postal services,[12]telegraphy, and mass media, including the internet.[according to whom?][citation needed]Gutenberg's invention of printing (which itself involved a number of discrete inventions) substantially facilitated the transition from the Middle Ages to modern times.[citation needed]All these communication developments have catalyzed and accelerated the process of recombinant conceptualization,[clarification needed]and thus also of multiple independent discovery.[citation needed]
Multiple independent discoveries show an increased incidence beginning in the 17th century. This may accord with the thesis of British philosopher A.C. Grayling that the 17th century was crucial in the creation of the modern world view, freed from the shackles of religion, the occult, and uncritical faith in the authority of Aristotle. Grayling speculates that Europe's Thirty Years' War (1618–1648), with the concomitant breakdown of authority, made freedom of thought and open debate possible, so that "modern science... rests on the heads of millions of dead." He also notes "the importance of the development of a reliable postal service... in enabling savants... to be in scholarly communication.... [T]he cooperative approach, first recommended by Francis Bacon, was essential to making science open to peer review and public verification, and not just a matter of the lone [individual] issuing... idiosyncratic pronouncements."[12]
The paradigm of recombinant conceptualization (see above)—more broadly, of recombinant occurrences—that explains multiple discovery in science and the arts, also elucidates the phenomenon of historic recurrence, wherein similar events are noted in the histories of countries widely separated in time and geography. It is the recurrence of patterns that lends a degree of prognostic power—and, thus, additional scientific validity—to the findings of history.[13][page needed]

Lamb and Easton, and others, have argued that science and art are similar with regard to multiple discovery.[2][page needed][10]When two scientists independently make the same discovery, their papers are not word-for-word identical, but the core ideas in the papers are the same; likewise, two novelists may independently write novels with the same core themes, though their novels are not identical word-for-word.[2][page needed]
AfterIsaac NewtonandGottfried Wilhelm Leibnizhad exchanged information on their respective systems ofcalculusin the 1670s, Newton in the first edition of hisPrincipia(1687), in ascholium, apparently accepted Leibniz's independent discovery of calculus. In 1699, however, a Swiss mathematician suggested to Britain'sRoyal Societythat Leibniz had borrowed his calculus from Newton. In 1705 Leibniz, in an anonymous review of Newton'sOpticks, implied that Newton'sfluxions(Newton's term fordifferential calculus) were an adaptation of Leibniz's calculus. In 1712 the Royal Society appointed a committee to examine the documents in question; the same year, the Society published a report, written by Newton himself, asserting his priority. Soon after Leibniz died in 1716, Newton denied that his own 1687Principiascholium"allowed [Leibniz] the invention of thecalculus differentialisindependently of my own"; and the third edition of Newton'sPrincipia(1726) omitted the tell-tale scholium. It is now accepted that Newton and Leibniz discovered calculus independently of each other.[14]
In another classic case of multiple discovery, the two discoverers showed morecivility. By June 1858Charles Darwinhad completed over two-thirds of hisOn the Origin of Specieswhen he received a startling letter from a naturalist,Alfred Russel Wallace, 13 years his junior, with whom he had corresponded. The letter summarized Wallace'stheory of natural selection, with conclusions identical to Darwin's own. Darwin turned for advice to his friendCharles Lyell, the foremost geologist of the day. Lyell proposed that Darwin and Wallace prepare a joint communication to the scientific community. Darwin being preoccupied with his mortally ill youngest son, Lyell enlisted Darwin's closest friend,Joseph Hooker, director ofKew Gardens, and together on 1 July 1858 they presented to theLinnean Societya joint paper that brought together Wallace's abstract with extracts from Darwin's earlier, 1844 essay on the subject. The paper was also published that year in the Society's journal. Neither the public reading of the joint paper nor its publication attracted interest; but Wallace, "admirably free from envy or jealousy," had been content to remain in Darwin's shadow.[8][better source needed]
|
https://en.wikipedia.org/wiki/Multiple_discovery
|
This is a list ofpriority disputesinhistory of scienceand science-related fields (such asmathematics).
|
https://en.wikipedia.org/wiki/Priority_disputes
|
Stigler's law of eponymy, proposed by University of Chicago statistics professor Stephen Stigler in his 1980 publication "Stigler's law of eponymy",[1]states that "no scientific discovery is named after its original discoverer." Examples include Hubble's law, which was derived by Georges Lemaître two years before Edwin Hubble; the Pythagorean theorem, which was known to Babylonian mathematicians before Pythagoras; and Halley's Comet, which had been observed by astronomers since at least 240 BC (although its official designation is due to the first-ever mathematical prediction of such an astronomical phenomenon in the sky, not to its discovery).
Stigler attributed the discovery of Stigler's law to sociologist Robert K. Merton, crediting Merton as its discoverer so that the law would itself be an example of the law. The same observation had previously also been made by many others.[2]
Historical acclaim for discoveries is often assigned to persons of note who bring attention to an idea that is not yet widely known, whether or not that person was its original inventor – theories may be named long after their discovery. In the case ofeponymy, the idea becomes named after that person, even if that person is acknowledged byhistorians of sciencenot to be the one who discovered it. Often, several people willarrive at a new idea around the same time, as in the case ofcalculus. It can be dependent on the publicity of the new work and the fame of its publisher as to whether the scientist's name becomes historically associated.
There is a similar quote attributed toMark Twain:
It takes a thousand men to invent a telegraph, or a steam engine, or a phonograph, or a photograph, or a telephone or any other important thing—and the last man gets the credit and we forget the others. He added his little mite—that is all he did. These object lessons should teach us that ninety-nine parts of all things that proceed from the intellect are plagiarisms, pure and simple; and the lesson ought to make us modest. But nothing can do that.[3]
Stephen Stigler's father, the economist George Stigler, also examined the process of discovery in economics. He said, "If an earlier, valid statement of a theory falls on deaf ears, and a later restatement is accepted by the science, this is surely proof that the science accepts ideas only when they fit into the then-current state of the science." He gave several examples in which the original discoverer was not recognized as such.[4]Similar arguments regarding accepted ideas and the state of science were made by Thomas Kuhn in The Structure of Scientific Revolutions.[5]
The term Matthew effect was coined by Robert K. Merton to describe how eminent scientists get more credit than a comparatively unknown researcher, even if their work is similar, so that credit will usually be given to researchers who are already famous. Merton notes:
This pattern of recognition, skewed in favor of the established scientist, appears principally
(i) in cases of collaboration and
(ii) in cases of independent multiple discoveries made by scientists of distinctly different rank.[6]
The effect applies specifically to women through theMatilda effect.
Boyer's law was named by Hubert Kennedy in 1972. It says, "Mathematical formulas and theorems are usually not named after their original discoverers" and was named after Carl Boyer, whose book A History of Mathematics contains many examples of this law. Kennedy observed that "it is perhaps interesting to note that this is probably a rare instance of a law whose statement confirms its own validity".[7]
"Everything of importance has been said before by somebody who did not discover it" is anadageattributed toAlfred North Whitehead.[8]
|
https://en.wikipedia.org/wiki/Stigler%27s_law_of_eponymy
|
Synchronicity (German: Synchronizität) is a concept introduced by Carl Jung, founder of analytical psychology, to describe events that coincide in time and appear meaningfully related, yet lack a discoverable causal connection.[1]Jung held that this was a healthy function of the mind, although it can become harmful within psychosis.[2][3]

Jung developed the theory as a hypothetical noncausal principle serving as the intersubjective or philosophically objective connection between these seemingly meaningful coincidences. After coining the term in the late 1920s[4]Jung developed the concept with physicist Wolfgang Pauli through correspondence and in their 1952 work The Interpretation of Nature and the Psyche.[5][6][7][8]This culminated in the Pauli–Jung conjecture.[9][10][11][12][13]Jung and Pauli's view was that, just as causal connections can provide a meaningful understanding of the psyche and the world, so too may acausal connections.[14]
A 2016 study found 70% of therapists agreed synchronicity experiences could be useful for therapy. Analytical psychologists hold that individuals must understand the compensatory meaning of these experiences to "enhanceconsciousnessrather than merely build upsuperstitiousness". However, clients who disclose synchronicity experiences report not being listened to, accepted, or understood. The experience of overabundance of meaningful coincidences can be characteristic ofschizophrenic delusion.[15]
Jung used synchronicity in arguing for the existence of the paranormal.[16]This idea was explored byArthur KoestlerinThe Roots of Coincidence[17]and taken up by theNew Agemovement. Unlikemagical thinking, which believes causally unrelated events to have paranormal causal connection, synchronicity supposes events may be causally unrelated yet have unknown noncausal connection.
The objection from a scientific standpoint is that this is neithertestablenorfalsifiable, so does not fall within empirical study.[18]Scientific scepticismregards it aspseudoscience. Jung stated that synchronicity events are chance occurrences from a statistical point of view, but meaningful in that they may seem to validate paranormal ideas. No empirical studies of synchronicity based on observablemental statesandscientific datawere conducted by Jung to draw his conclusions, though studies have since been done (see§ Studies). While someone may experience a coincidence as meaningful, this alone cannot prove objective meaning to the coincidence.
Statistical laws, or probability, show how unexpected occurrences can be inevitable or more likely than people assume. These explain coincidences such as synchronicity experiences as chance events which have been misinterpreted by confirmation biases, spurious correlations, or underestimated probability.[19][20]
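The birthday problem is the textbook case of such an underestimated probability: in a group of just 23 people, the chance that at least two share a birthday already exceeds one half. The short calculation below is an illustrative aside, not drawn from the cited sources.

```python
def prob_shared_birthday(n, days=365):
    """Probability that at least two of n people share a birthday,
    assuming birthdays are independent and uniformly distributed."""
    p_all_distinct = 1.0
    for k in range(n):
        p_all_distinct *= (days - k) / days
    return 1.0 - p_all_distinct

for n in (10, 23, 41, 60):
    print(n, round(prob_shared_birthday(n), 3))
# 23 people already give a probability above 0.5; 60 people give about 0.99.
```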
Synchronicity arose with Jung's use of the ancient Chinese divination text I Ching. It has 64 hexagrams, each built from two trigrams or bagua. A divination is made by seemingly random numerical happenings for which the I Ching text gives detailed situational analysis. Richard Wilhelm, a translator of Chinese, provided Jung with validation. Jung met Wilhelm in Darmstadt, Germany, where Hermann von Keyserling hosted the Gesellschaft für Freie Philosophie. In 1923 Wilhelm was in Zurich, as was Jung, attending the psychology club, where Wilhelm promulgated the I Ching. Finally, in Jung's words:
I Chingwas published with Wilhelm's commentary. I instantly obtained the book and found to my gratification that Wilhelm took much the same view of the meaningful connections as I had. But he knew the entire literature and could therefore fill in the gaps which had been outside my competence.
Jung coined the termsynchronicityas part of a lecture in May 1930,[14]or as early as 1928,[4]at first for use in discussingChinese religious and philosophicalconcepts.[14][21]His first public articulation of the term came in 1930 at the memorial address for Richard Wilhelm where Jung stated:[21]
The science [i.e.cleromancy] of theI Chingis based not on the causality principle but on one which—hitherto unnamed because not familiar to us—I have tentatively called thesynchronisticprinciple.
TheI Chingis one of the five classics ofConfucianism. By selecting a passage according to the traditional chance operations such as tossing coins and counting outyarrow stalks, the text is supposed to give insights into a person's inner states. Jung characterised this as the belief in synchronicity, and himself believed the text to give apt readings in his own experiences.[22]He would later also recommend this practice to certain of his patients.[23]Jung argued that synchronicity could be found diffused throughoutChinese philosophymore broadly and in variousTaoist concepts.[21]Jung also drew heavily from German philosophersGottfried Leibniz, whose own exposure toI Chingdivinationin the 17th century was the primary precursor to the theory of synchronicity in the West,[21]Arthur Schopenhauer, whom Jung placed alongside Leibniz as the two philosophers most influential to his formulation of the concept,[21][22]andJohannes Kepler.[18]He points to Schopenhauer, especially, as providing an early conception of synchronicity in the quote:[22]
All the events in a man's life would accordingly stand in two fundamentally different kinds of connection: firstly, in the objective, causal connection of the natural process; secondly, in a subjective connection which exists only in relation to the individual who experiences it, and which is thus as subjective as his own dreams[.]
As withPaul Kammerer's theory of serialitydeveloped in the late 1910s, Jung looked to hidden structures of nature for an explanation of coincidences.[24]In 1932, physicistWolfgang Pauliand Jung began what would become an extended correspondence in which they discussed and collaborated on various topics surrounding synchronicity, contemporary science, and what is now known as thePauli effect.[25]Jung also built heavily upon the idea ofnuminosity, a concept originating in the work of German religious scholarRudolf Otto, which describes the feeling ofgravitasfound inreligious experiences, and which perhaps brought greatest criticism upon Jung's theory.[26]Jung also drew from parapsychologistJ. B. Rhinewhose work in the 1930s had at the timeappearedto validate certain claims aboutextrasensory perception.[18]It was not until a 1951Eranos conferencelecture, after having gradually developed the concept for over two decades, that Jung gave his first major outline of synchronicity.[14]The following year, Jung and Pauli published their 1952 workThe Interpretation of Nature and the Psyche(German:Naturerklärung und Psyche), which contained Jung's central monograph on the subject, "Synchronicity: An Acausal Connecting Principle".[14]
Other notable influences and precursors to synchronicity can be found in: the theological concept ofcorrespondences,[27][28]sympathetic magic,[29]astrology,[23]andalchemy.[18]
ThePauli–Jung conjectureis a collaboration inmetatheorybetween physicistWolfgang Pauliand analytical psychologistCarl Jung, centered on the concept of synchronicity. It was mainly developed between the years 1946 and 1954, four years before Pauli's death, and speculates on adouble-aspectperspective within the disciplines of both collaborators.[9][30]Pauli additionally drew on various elements ofquantum theorysuch ascomplementarity,nonlocality, and theobserver effectin his contributions to the project.[9][31][32]Jung and Pauli thereby "offered the radical and brilliant idea that the currency of these correlations is not (quantitative) statistics, as in quantum physics, but (qualitative) meaning".[33]
Contemporary physicist T. Filk writes thatquantum entanglement, being "a particular type of acausal quantum correlations", was plausibly taken by Pauli as "a model for the relationship between mind and matter in the framework [...] he proposed together with Jung".[31]Specifically, quantum entanglement may be the physical phenomenon which most closely represents the concept of synchronicity.[31]
Inanalytical psychology, the recognition of seemingly-meaningful coincidences is a mechanism by which unconscious material is brought to the attention of the conscious mind. A harmful or developmental outcome can then result only from the individual's response to such material.[2][22]Jung proposed that the concept could havepsychiatricuse in mitigating the negative effects ofover-rationalisation[2]and proclivities towardsmind–body dualism.[34]
Analytical psychology considers modern modes of thought to rest upon the pre-modern and primordial structures of the psyche. Causal connections thus form the basis of modern worldviews, and connections which lack causal reasoning are seen as chance. This chance-based interpretation, however, is incongruent with the primordial mind, which instead interprets this category as intention.[14]The primordial framework in fact places emphasis on these connections, just as the modern framework emphasizes causal ones. In this regard, causality, like synchronicity, is a human interpretation imposed onto external phenomena.[14]Primordial modes of thought are, however, according to Jung, necessary constituents of the modern psyche that inevitably protrude into modern life—providing the basis for meaningful interpretation of the world by way of meaning-based connections.[14]Just as the principles of psychological causality provide meaningful understanding of causal connections, so too the principle of synchronicity attempts to provide meaningful understanding of acausal connections. Jung placed synchronicity as one of three main conceptual elements in understanding the psyche.[2]
Jung felt synchronicity to be a principle that hadexplanatorypower towards his concepts ofarchetypesand thecollective unconscious.[i]It described a governing dynamic which underlies the whole of human experience and history—social,emotional,psychological, andspiritual. The emergence of the synchronisticparadigmwas a significant move away fromCartesian dualismtowards an underlying philosophy ofdouble-aspect theory. Some argue this shift was essential in bringing theoretical coherence to Jung's earlier work.[35][ii]
Jung held that there was both a philosophical and scientific basis for synchronicity.[18]He identified the complementary nature of causality and acausality withEastern sciencesandprotoscientific disciplines, stating "the Eastbases much of its science on this irregularity and considers coincidences as the reliable basis of the world rather than causality. Synchronism is the prejudice of the East; causality is the modern prejudice ofthe West"[26](see also:universal causation). Contemporary scholar L. K. Kerr writes:
Jung also looked tomodern physicsto understand the nature of synchronicity, and attempted to adapt many ideas in this field to accommodate his conception of synchronicity, including the property ofnuminosity. He worked closely withNobel Prizewinning physicistWolfgang Pauliand also consulted withAlbert Einstein. The notion of synchronicity shares with modern physics the idea that under certain conditions, the laws governing the interactions of space and time can no longer be understood according to the principle of causality. In this regard, Jung joined modern physicists in reducing the conditions in which the laws ofclassical mechanicsapply.[26]
It is also pointed out that, since Jung took into consideration only the narrow definition of causality—only theefficient cause—his notion ofacausalityis also narrow and so is not applicable tofinalandformalcauses as understood inAristotelianorThomistsystems.[36]Either the final causality is inherent[37]in synchronicity, as it leads toindividuation; or synchronicity can be a kind of replacement for final causality. However, suchfinalismorteleologyis considered to be outside the domain ofmodern science.[citation needed]
Jung's theory, and the philosophical worldview it implies, draws not only on mainstream scientific thought but also on esoteric ideas and on ideas that run counter to the mainstream.[38][39]
Jung's use of the concept in arguing for the existence ofparanormal phenomenahas been widely consideredpseudoscientificby modernscientific scepticism.[18]Furthermore, his collaborator Wolfgang Pauli objected to his dubious experiments of the concept involvingastrology—which Jung believed to be supported by the laboratory experiments behind theuncertainty principle's formulation.[26]Jung similarly turned to the works of parapsychologistJoseph B. Rhineto support a connection between synchronicity and the paranormal.[26]In his bookSynchronicity: An Acausal Connecting Principle, Jung wrote:
How are we to recognize acausal combinations of events, since it is obviously impossible to examine all chance happenings for their causality? The answer to this is that acausal events may be expected most readily where, on closer reflection, a causal connection appears to be inconceivable.[42]It is impossible, with our present resources, to explain ESP [extrasensory perception], or the fact of meaningful coincidence, as a phenomenon of energy. This makes an end of the causal explanation as well, for "effect" cannot be understood as anything except a phenomenon of energy. Therefore it cannot be a question of cause and effect, but of a falling together in time, a kind of simultaneity. Because of this quality of simultaneity, I have picked on the term "synchronicity" to designate a hypothetical factor equal in rank to causality as a principle of explanation.[43]
Roderick Main, in the introduction to his 1997 bookJung on Synchronicity and the Paranormal, wrote:[44]
The culmination of Jung's lifelong engagement with the paranormal is his theory of synchronicity, the view that the structure of reality includes a principle of acausal connection which manifests itself most conspicuously in the form of meaningful coincidences. Difficult, flawed, prone to misrepresentation, this theory none the less remains one of the most suggestive attempts yet made to bring theparanormalwithin the bounds of intelligibility. It has been found relevant by psychotherapists, parapsychologists, researchers of spiritual experience and a growing number of non-specialists. Indeed, Jung's writings in this area form an excellent general introduction to the whole field of the paranormal.
For example, psychologists were significantly more likely than both counsellors and psychotherapists to agree that chance coincidence was an explanation for synchronicity, whereas, counsellors and psychotherapists were significantly more likely than psychologists to agree that a need for unconscious material to be expressed could be an explanation for synchronicity experiences in the clinical setting.[48]
Since their inception, Jung's theories of synchronicity have been highly controversial[18]and have never hadwidespread scientific approval.[26]Scientific scepticismregards them aspseudoscience.[18]Likewise, mainstream science does not support paranormal explanations of coincidences.[24]
Despite this, synchronicity experiences and the synchronicity principle continue to be studied withinphilosophy,cognitive science, andanalytical psychology.[14]Synchronicity is widely challenged by the sufficiency ofprobability theoryin explaining the occurrence of coincidences, the relationship between synchronicity experiences andcognitive biases, and doubts about the theory's psychiatric or scientific usefulness.
Psychologist Fritz Levi, a contemporary of Jung, criticised the theory in his 1952 review, published in the periodicalNeue Schweizer Rundschau(New Swiss Observations). Levi saw Jung's theory as vague in determinability of synchronistic events, saying that Jung never specifically explained his rejection of "magic causality" to which such an acausal principle as synchronicity would be related. He also questioned the theory's usefulness.[53]
In a 1981 paper, parapsychologistCharles Tartwrites:
[There is] a danger inherent in the concept of synchronicity. This danger is the temptation to mental laziness. If, in working with paranormal phenomena, I cannot get my experiments to replicate and cannot find any patterns in the results, then, as attached as I am to the idea of causality, it would be very tempting to say, "Well, it's synchronistic, it's forever beyond my understanding," and so (prematurely) give up trying to find a causal explanation. Sloppy use of the concept of synchronicity then becomes a way of being intellectually lazy and dodging our responsibilities.[54]
Robert Todd Carroll, author ofThe Skeptic's Dictionaryin 2003, argues that synchronicity experiences are better explained asapophenia—the tendency for humans to find significance or meaning where none exists. He states that over a person's lifetime one can be expected to encounter several seemingly-unpredictable coincidences and that there is no need for Jung's metaphysical explanation of these occurrences.[55]
In a 2014 interview, emeritus professor and statisticianDavid J. Handstates:
Synchronicity is an attempt to come up with an explanation for the occurrence of highly improbable coincidences between events where there is no causal link. It's based on the premise that existing physics and mathematics cannot explain such things. This is wrong, however—standard science can explain them. That's really the point of the improbability principle. What I have tried to do is pull out and make explicit how physics and mathematics, in the form ofprobability calculusdoes explain why such striking and apparently highly improbable events happen. There's no need to conjure up other forces or ideas, and there's no need to attribute mystical meaning or significance to their occurrence. In fact, we shouldexpectthem to happen, as they do, purely in the natural course of events.[56]
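Part of what Hand calls the improbability principle is the law of truly large numbers: given enough opportunities, even a very rare event should be expected to occur repeatedly. The toy calculation below uses assumed figures (a one-in-a-million chance per person per day, ten million people, one year) purely to illustrate the arithmetic; the numbers are not Hand's.

```python
def expected_rare_events(p_per_trial, trials):
    """Expected number of occurrences, and probability of at least one,
    for an event of probability p_per_trial over independent trials."""
    expected = p_per_trial * trials
    p_at_least_one = 1.0 - (1.0 - p_per_trial) ** trials
    return expected, p_at_least_one

# A "one in a million per person per day" coincidence, over one year,
# across an assumed population of ten million people.
expected, p_any = expected_rare_events(1e-6, 365 * 10_000_000)
print(round(expected), round(p_any, 6))   # ~3650 expected occurrences; at least one is near-certain
```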
In a 2015 paper, scholars M. K. Johansen and M. Osman state:
As theories, the main problem with both synchronicity and seriality is that they ignore the possibility that coincidences are a psychological phenomenon and focus instead on the premise that coincidences are examples of actual but hidden structures in the world.[24]
Jung tells the following story as an example of a synchronistic event in his 1960 bookSynchronicity:
By way of example, I shall mention an incident from my own observation. A young woman I was treating had, at a critical moment, a dream in which she was given a golden scarab. While she was telling me this dream I sat with my back to the closed window. Suddenly I heard a noise behind me, like a gentle tapping. I turned round and saw a flying insect knocking against the window pane from outside. I opened the window and caught the creature in the air as it flew in. It was the nearest analogy to a golden scarab that one finds in our latitudes, a scarabaeid beetle, the common rose-chafer (Cetonia aurata), which contrary to its usual habits had evidently felt an urge to get into a dark room at this particular moment.It was an extraordinarily difficult case to treat, and up to the time of the dream little or no progress had been made. I should explain that the main reason for this was my patient's animus, which was steeped in Cartesian philosophy and clung so rigidly to its own idea of reality that the efforts of three doctors—I was the third—had not been able to weaken it. Evidently something quite irrational was needed which was beyond my powers to produce. The dream alone was enough to disturb ever so slightly the rationalistic attitude of my patient. But when the "scarab" came flying in through the window in actual fact, her natural being could burst through the armor of her animus possession and the process of transformation could at last begin to move.[57]
After describing some examples, Jung wrote: "When coincidences pile up in this way, one cannot help being impressed by them—for the greater the number of terms in such a series, or the more unusual its character, the more improbable it becomes."[12]: 91
French writerÉmile Deschampsclaims in his memoirs that, in 1805, he was treated to someplum puddingby a stranger named Monsieur de Fontgibu. Ten years later, the writer encountered plum pudding on the menu of a Paris restaurant and wanted to order some, but the waiter told him that the last dish had already been served to another customer, who turned out to be de Fontgibu. Many years later, in 1832, Deschamps was at a dinner and once again ordered plum pudding. He recalled the earlier incident and told his friends that only de Fontgibu was missing to make the setting complete—and in the same instant, the now-senilede Fontgibu entered the room, having got the wrong address.[58]
In his bookThirty Years That Shook Physics: The Story of Quantum Theory(1966),George Gamowwrites aboutWolfgang Pauli, who was apparently considered a person particularly associated with synchronicity events. Gamow whimsically refers to the "Pauli effect", a mysteriousphenomenonwhich is not understood on a purelymaterialisticbasis, and probably never will be. The followinganecdoteis told:
It is well known that theoretical physicists cannot handle experimental equipment; it breaks whenever they touch it. Pauli was such a good theoretical physicist that something usually broke in the lab whenever he merely stepped across the threshold. A mysterious event that did not seem at first to be connected with Pauli's presence once occurred in Professor J. Franck's laboratory in Göttingen. Early one afternoon, without apparent cause, a complicated apparatus for the study of atomic phenomena collapsed. Franck wrote humorously about this to Pauli at his Zürich address and, after some delay, received an answer in an envelope with a Danish stamp. Pauli wrote that he had gone to visit Bohr and at the time of the mishap in Franck's laboratory his train was stopped for a few minutes at the Göttingen railroad station. You may believe this anecdote or not, but there are many other observations concerning the reality of the Pauli Effect![59]
Philip K. Dickmakes reference to "Pauli's synchronicity" in his 1963 science-fiction novel,The Game-Players of Titan, in reference topre-cognitivepsionicabilities being interfered with by other psionic abilities such aspsychokinesis: "an acausal connective event".[60]
In 1983The Policereleased an album titledSynchronicity,inspired byArthur Koestler's discussion of synchronicity in his bookThe Roots of Coincidence.[61]A song from the album, "Synchronicity II", simultaneously describes the story of a man experiencing a mental breakdown and a lurking monster emerging from a Scottish lake.
Björkwrote a song titled "Synchronicity" forSpike Jonze'sHot ChocolateDVD.[62]
Rising Appalachiareleased a song titled "Synchronicity" on their 2015 albumWider Circles.[63]
|
https://en.wikipedia.org/wiki/Synchronicity
|
Thetimeline of historic inventionsis a chronological list of particularly significant technologicalinventionsand theirinventors, where known.[a]This page lists nonincremental inventions that are widely recognized by reliable sources as having had a direct impact on the course of history that was profound, global, and enduring. The dates in this article make frequent use of theunits mya and kya, which refer to millions and thousands of years ago, respectively.
The dates listed in this section refer to the earliest evidence of an invention found and dated byarchaeologists(or in a few cases, suggested by indirect evidence). Dates are often approximate and change as more research is done, reported and seen. Older examples of any given technology are often found. The locations listed are for the site where the earliest solid evidence has been found, but especially for the earlier inventions, there is little certainty how close that may be to where the invention took place.
The Lower Paleolithic period lasted over 3 million years, during which many human-like species evolved, including, toward the end of this period, Homo sapiens. The original divergence between humans and chimpanzees occurred about 13 million years ago (Mya); however, interbreeding continued until as recently as 4 million years ago, with the first species clearly belonging to the human (and not chimpanzee) lineage being Australopithecus anamensis. Some species are controversial among paleoanthropologists, who disagree on whether they are distinct species or not. Here Homo ergaster is included under Homo erectus, while Homo rhodesiensis is included under Homo heidelbergensis.
During this period theQuaternary glaciationbegan (about 2.58 million years ago), and continues to today. It has been anice age, withcycles of 40–100,000 yearsalternating between long, cold, more glaciated periods, and shorter warmer periods –interglacialepisodes.
The evolution ofearly modern humansaround 300 kya coincides with the start of the Middle Paleolithic period. During this 250,000-year period, our relatedarchaic humanssuch asNeanderthalsandDenisovansbegan to spread out of Africa, joined later byHomo sapiens. Over the course of the period we see evidence of increasingly long-distance trade, religious rites, and other behavior associated withBehavioral modernity.
50 kya was long regarded as the beginning ofbehavioral modernity, which defined the Upper Paleolithic period. The Upper Paleolithic lasted nearly 40,000 years, while research continues to push the beginnings of behavioral modernity earlier into the Middle Paleolithic. Behavioral modernity is characterized by the widespread observation of religious rites, artistic expression and the appearance of tools made for purely intellectual or artistic pursuits.
The end of the Last Glacial Period ("ice age") and the beginning of the Holocene around 11.7 ka coincide with the Agricultural Revolution, marking the beginning of the agricultural era, which persisted until the Industrial Revolution.[94]
During the Neolithic period, lasting 8400 years, stone began to be used for construction, and remained a predominant hard material for toolmaking. Copper and arsenic bronze were developed towards the end of this period, and of course the use of many softer materials such as wood, bone, and fibers continued. Domestication spread both in the sense of how many species were domesticated, and how widespread the practice became.
The beginning of bronze-smelting coincides with the emergence of the first cities and of writing in the Ancient Near East and the Indus Valley. The Bronze Age started in Eurasia in the 4th millennium BC and ended there c. 1200 BC.
TheLate Bronze Age collapseoccurs around 1200 BC,[220]extinguishing most Bronze-Age Near Eastern cultures, and significantly weakening the rest. This is coincident with the complete collapse of theIndus Valley Civilisation. This event is followed by the beginning of the Iron Age. We define the Iron Age as ending in 510 BC for the purposes of this article, even though the typical definition is region-dependent (e.g. 510 BC in Greece, 322 BC in India, 200 BC in China), thus being an 800-year period.[e]
|
https://en.wikipedia.org/wiki/Timeline_of_historic_inventions
|
Transport Layer Security pre-shared key ciphersuites(TLS-PSK) is a set ofcryptographic protocolsthat providesecurecommunication based onpre-shared keys(PSKs). These pre-shared keys aresymmetric keysshared in advance among the communicating parties.
There are several cipher suites: The first set of ciphersuites use onlysymmetric keyoperations forauthentication. The second set use aDiffie–Hellmankey exchangeauthenticated with a pre-shared key. The third set combinepublic keyauthentication of the server with pre-shared key authentication of the client.
Usually, Transport Layer Security (TLS) uses public key certificates or Kerberos for authentication. TLS-PSK uses symmetric keys, shared in advance among the communicating parties, to establish a TLS connection. There are several reasons to use PSKs: depending on the ciphersuite, they can avoid the computational cost of public-key operations, which is useful in performance-constrained environments, and pre-shared keys can be more convenient to manage than a certificate infrastructure in closed deployments where connections are configured in advance.
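The first of those ciphersuite families authenticates both parties purely by proof of possession of the same symmetric key. The Python sketch below is a conceptual illustration of that idea only, not the actual TLS-PSK handshake or record protocol; the key value, the nonce size, and the prove_possession helper are invented for the example.

import hmac, hashlib, os

# Hypothetical pre-shared key, distributed out of band to both parties.
psk = bytes.fromhex("5361792068656c6c6f20746f20544c53")

def prove_possession(key: bytes, nonce: bytes) -> bytes:
    # HMAC over the peer's fresh nonce proves knowledge of the PSK
    # without ever transmitting the key itself.
    return hmac.new(key, nonce, hashlib.sha256).digest()

server_nonce = os.urandom(32)                       # server's challenge
client_proof = prove_possession(psk, server_nonce)  # client's response

# The server recomputes the proof and compares in constant time.
if hmac.compare_digest(client_proof, prove_possession(psk, server_nonce)):
    print("client authenticated via the pre-shared key")

In real TLS-PSK the same principle is embedded in the handshake itself, with the PSK (or a Diffie–Hellman secret authenticated by it) feeding the derivation of the session keys.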
|
https://en.wikipedia.org/wiki/TLS-PSK
|
Wi-Fi Protected Access(WPA),Wi-Fi Protected Access 2(WPA2), andWi-Fi Protected Access 3(WPA3) are the three security certification programs developed after 2000 by theWi-Fi Allianceto secure wireless computer networks. The Alliance defined these in response to serious weaknesses researchers had found in the previous system,Wired Equivalent Privacy(WEP).[1]
WPA (sometimes referred to as the TKIP standard) became available in 2003. The Wi-Fi Alliance intended it as an intermediate measure in anticipation of the availability of the more secure and complex WPA2, which became available in 2004 and is a common shorthand for the full IEEE 802.11i (orIEEE 802.11i-2004) standard.
In January 2018, the Wi-Fi Alliance announced the release of WPA3, which has several security improvements over WPA2.[2]
As of 2023, most computers that connect to a wireless network have support for using WPA, WPA2, or WPA3. All versions thereof, at least as implemented through May, 2021, are vulnerable to compromise.[3]
WEP (Wired Equivalent Privacy) is an early encryption protocol for wireless networks, designed to secure WLAN connections. It supports 64-bit and 128-bit keys, combining user-configurable and factory-set bits. WEP uses the RC4 algorithm for encrypting data, creating a unique key for each packet by combining a new Initialization Vector (IV) with a shared key (for 64-bit WEP, a 40-bit shared key combined with a 24-bit IV). Decryption involves reversing this process, using the IV and the shared key to generate a key stream and decrypt the payload. Despite its initial widespread use, WEP's significant vulnerabilities led to the adoption of more secure protocols.[4]
The Wi-Fi Alliance intended WPA as an intermediate measure to take the place ofWEPpending the availability of the fullIEEE 802.11istandard. WPA could be implemented throughfirmware upgradesonwireless network interface cardsdesigned for WEP that began shipping as far back as 1999. However, since the changes required in thewireless access points(APs) were more extensive than those needed on the network cards, most pre-2003 APs were not upgradable by vendor-provided methods to support WPA.
The WPA protocol implements theTemporal Key Integrity Protocol(TKIP). WEP uses a 64-bit or 128-bit encryption key that must be manually entered on wireless access points and devices and does not change. TKIP employs a per-packet key, meaning that it dynamically generates a new 128-bit key for each packet and thus prevents the types of attacks that compromise WEP.[5]
WPA also includes aMessage Integrity Check, which is designed to prevent an attacker from altering and resending data packets. This replaces thecyclic redundancy check(CRC) that was used by the WEP standard. CRC's main flaw is that it does not provide a sufficiently strongdata integrityguarantee for the packets it handles.[6]Well-testedmessage authentication codesexisted to solve these problems, but they require too much computation to be used on old network cards. WPA uses a message integrity check algorithm called Michael, part of TKIP, to verify the integrity of the packets. Michael is much stronger than a CRC, but not as strong as the algorithm used in WPA2. Researchers have since discovered a flaw in WPA that relied on older weaknesses in WEP and the limitations of Michael to retrieve the keystream from short packets to use for re-injection andspoofing.[7][8]
Ratified in 2004, WPA2 replaced WPA. WPA2, which requires testing and certification by the Wi-Fi Alliance, implements the mandatory elements of IEEE 802.11i. In particular, it includes support forCCMP, anAES-based encryption mode.[9][10][11]Certification began in September 2004. From March 13, 2006, to June 30, 2020, WPA2 certification was mandatory for all new devices to bear the Wi-Fi trademark.[12]In WPA2-protected WLANs, secure communication is established through a multi-step process. Initially, devices associate with the Access Point (AP) via an association request. This is followed by a 4-way handshake, a crucial step for ensuring that both the client and AP have the correctPre-Shared Key(PSK) without actually transmitting it. During this handshake, aPairwise Transient Key(PTK) is generated for secure data exchange.
WPA2 employs the Advanced Encryption Standard (AES) with a 128-bit key, enhancing security through the Counter Mode/CBC-MAC Protocol (CCMP). This protocol ensures robust encryption and data integrity, using different Initialization Vectors (IVs) for encryption and authentication purposes.[13]
The 4-way handshake involves four messages: the AP sends a random ANonce; the client derives the PTK and replies with its own SNonce and a message integrity code (MIC); the AP derives the same PTK, verifies the MIC, and sends the encrypted Group Temporal Key (GTK); and the client acknowledges, after which both sides install the keys.
Post-handshake, the established PTK is used for encrypting unicast traffic, and theGroup Temporal Key(GTK) is used for broadcast traffic. This comprehensive authentication and encryption mechanism is what makes WPA2 a robust security standard for wireless networks.[14]
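As a rough sketch of the key derivation this handshake performs, the Python snippet below expands a PMK into a 384-bit PTK using the 802.11i pseudo-random function built from HMAC-SHA1, taking the two MAC addresses and the ANonce/SNonce as inputs. The MAC addresses, nonces, and all-zero PMK are placeholder values, and real implementations add further details (the KCK/KEK/TK split of the PTK and, in newer cipher suites, SHA-256-based derivation).

import hmac, hashlib

def prf(key: bytes, label: bytes, data: bytes, length: int) -> bytes:
    # IEEE 802.11i PRF: concatenated HMAC-SHA1 blocks, truncated to `length` bytes.
    out, i = b"", 0
    while len(out) < length:
        out += hmac.new(key, label + b"\x00" + data + bytes([i]), hashlib.sha1).digest()
        i += 1
    return out[:length]

def derive_ptk(pmk: bytes, ap_mac: bytes, sta_mac: bytes,
               anonce: bytes, snonce: bytes) -> bytes:
    # 384-bit PTK for CCMP; the first 16 bytes form the key confirmation key (KCK).
    data = (min(ap_mac, sta_mac) + max(ap_mac, sta_mac)
            + min(anonce, snonce) + max(anonce, snonce))
    return prf(pmk, b"Pairwise key expansion", data, 48)

# Placeholder inputs, purely for illustration:
pmk = bytes(32)   # normally the 256-bit PSK or an EAP-derived key
ptk = derive_ptk(pmk, bytes.fromhex("001122334455"), bytes.fromhex("66778899aabb"),
                 bytes(32), bytes(32))
print(len(ptk), "byte PTK derived")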
In January 2018, the Wi-Fi Alliance announced WPA3 as a replacement to WPA2.[15][16]Certification began in June 2018,[17]and WPA3 support has been mandatory for devices which bear the "Wi-Fi CERTIFIED™" logo since July 2020.[18]
The new standard uses an equivalent 192-bit cryptographic strength in WPA3-Enterprise mode[19](AES-256inGCM modewithSHA-384asHMAC), and still mandates the use ofCCMP-128(AES-128inCCM mode) as the minimum encryption algorithm in WPA3-Personal mode.TKIPis not allowed in WPA3.
The WPA3 standard also replaces thepre-shared key(PSK) exchange withSimultaneous Authentication of Equals(SAE) exchange, a method originally introduced withIEEE 802.11s, resulting in a more secure initial key exchange in personal mode[20][21]andforward secrecy.[22]The Wi-Fi Alliance also says that WPA3 will mitigate security issues posed by weak passwords and simplify the process of setting up devices with no display interface.[2][23]WPA3 also supportsOpportunistic Wireless Encryption (OWE)for open Wi-Fi networks that do not have passwords.
Protection of management frames as specified in theIEEE 802.11wamendment is also enforced by the WPA3 specifications.
WPA has been designed specifically to work with wireless hardware produced prior to the introduction of WPA protocol,[24]which provides inadequate security throughWEP. Some of these devices support WPA only after applying firmware upgrades, which are not available for some legacy devices.[24]
Wi-Fi devices certified since 2006 support both the WPA and WPA2 security protocols. WPA3 is required since July 1, 2020.[18]
Different WPA versions and protection mechanisms can be distinguished based on the target end-user (such as WEP, WPA, WPA2, WPA3) and the method of authentication key distribution, as well as the encryption protocol used. As of July 2020, WPA3 is the latest iteration of the WPA standard, bringing enhanced security features and addressing vulnerabilities found in WPA2. WPA3 improves authentication methods and employs stronger encryption protocols, making it the recommended choice for securing Wi-Fi networks.[23]
Also referred to asWPA-PSK(pre-shared key) mode, this is designed for home, small office and basic uses and does not require an authentication server.[25]Each wireless network device encrypts the network traffic by deriving its 128-bit encryption key from a 256-bit sharedkey. This key may be entered either as a string of 64hexadecimaldigits, or as apassphraseof 8 to 63printable ASCII characters.[26]This pass-phrase-to-PSK mapping is nevertheless not binding, as Annex J is informative in the latest 802.11 standard.[27]If ASCII characters are used, the 256-bit key is calculated by applying thePBKDF2key derivation functionto the passphrase, using theSSIDas thesaltand 4096 iterations ofHMAC-SHA1.[28]WPA-Personal mode is available on all three WPA versions.
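The passphrase-to-PSK mapping just described can be reproduced directly with Python's standard library; a minimal sketch, with an illustrative SSID and passphrase:

import hashlib

ssid = b"ExampleNetwork"                      # the SSID doubles as the salt
passphrase = b"correct horse battery staple"  # 8 to 63 printable ASCII characters

# 256-bit PSK = PBKDF2-HMAC-SHA1(passphrase, SSID, 4096 iterations)
psk = hashlib.pbkdf2_hmac("sha1", passphrase, ssid, 4096, dklen=32)
print(psk.hex())  # the same 64-hex-digit key could be entered directly

Because the salt is simply the public SSID and the iteration count is low by modern standards, this function can be precomputed for common SSIDs and passphrases, which is what makes the lookup-table attacks discussed below practical.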
This enterprise mode uses an802.1Xserver for authentication, offering higher security control by replacing the vulnerable WEP with the more advanced TKIP encryption. TKIP ensures continuous renewal of encryption keys, reducing security risks. Authentication is conducted through aRADIUSserver, providing robust security, especially vital in corporate settings. This setup allows integration with Windows login processes and supports various authentication methods likeExtensible Authentication Protocol, which uses certificates for secure authentication, and PEAP, creating a protected environment for authentication without requiring client certificates.[29]
Originally, only EAP-TLS (Extensible Authentication Protocol-Transport Layer Security) was certified by the Wi-Fi alliance. In April 2010, theWi-Fi Allianceannounced the inclusion of additional EAP[31]types to its WPA- and WPA2-Enterprise certification programs.[32]This was to ensure that WPA-Enterprise certified products can interoperate with one another.
As of 2010, the certification program includes a number of EAP types.
802.1X clients and servers developed by specific firms may support other EAP types. This certification is an attempt for popular EAP types to interoperate; their failure to do so as of 2013 is one of the major issues preventing rollout of 802.1X on heterogeneous networks.
Commercial 802.1X servers include MicrosoftNetwork Policy ServerandJuniper NetworksSteelbelted RADIUS as well as Aradial Radius server.[34]FreeRADIUSis an open source 802.1X server.
WPA-Personal and WPA2-Personal remain vulnerable topassword crackingattacks if users rely on aweak password or passphrase. WPA passphrase hashes are seeded from the SSID name and its length;rainbow tablesexist for the top 1,000 network SSIDs and a multitude of common passwords, requiring only a quick lookup to speed up cracking WPA-PSK.[35]
Brute forcing of simple passwords can be attempted using theAircrack Suitestarting from the four-way authentication handshake exchanged during association or periodic re-authentication.[36][37][38][39][40]
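What makes such guessing practical offline is that a captured handshake contains everything needed to test a candidate passphrase: the SSID, both MAC addresses, both nonces, and a MIC computed with the key confirmation key. The sketch below shows the core test for one guess; the parameters are placeholders, the EAPOL frame is assumed to have its MIC field zeroed before hashing, and real tools such as aircrack-ng handle the full frame parsing as well as the WPA/TKIP variant that uses HMAC-MD5 instead.

import hashlib, hmac

def mic_for_guess(passphrase: bytes, ssid: bytes, ap_mac: bytes, sta_mac: bytes,
                  anonce: bytes, snonce: bytes, eapol_frame: bytes) -> bytes:
    # Candidate PMK from the guessed passphrase (same PBKDF2 mapping as above).
    pmk = hashlib.pbkdf2_hmac("sha1", passphrase, ssid, 4096, dklen=32)
    # Expand to a PTK exactly as in the handshake sketch above.
    data = (min(ap_mac, sta_mac) + max(ap_mac, sta_mac)
            + min(anonce, snonce) + max(anonce, snonce))
    ptk, i = b"", 0
    while len(ptk) < 48:
        ptk += hmac.new(pmk, b"Pairwise key expansion" + b"\x00" + data + bytes([i]),
                        hashlib.sha1).digest()
        i += 1
    kck = ptk[:16]
    # WPA2 handshake MIC: HMAC-SHA1 over the EAPOL frame, truncated to 128 bits.
    return hmac.new(kck, eapol_frame, hashlib.sha1).digest()[:16]

# for guess in wordlist:                      # illustrative outer loop
#     if mic_for_guess(guess, ssid, ap_mac, sta_mac, anonce, snonce, frame) == captured_mic:
#         print("passphrase recovered:", guess)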
WPA3 replaces cryptographic protocols susceptible to off-line analysis with protocols that require interaction with the infrastructure for each guessed password, supposedly placing temporal limits on the number of guesses.[15]However, design flaws in WPA3 enable attackers to plausibly launch brute-force attacks (see§ Dragonblood).
WPA and WPA2 do not provideforward secrecy, meaning that once an adversary discovers the pre-shared key, they can potentially decrypt all packets encrypted using that PSK, whether transmitted in the future or captured passively and silently in the past. This also means an attacker can silently capture and decrypt others' packets if a WPA-protected access point is provided free of charge in a public place, because its password is usually shared with anyone in that place. In other words, WPA only protects from attackers who do not have access to the password. Because of that, it is safer to useTransport Layer Security(TLS) or similar on top of it for the transfer of any sensitive data. However, starting with WPA3, this issue has been addressed.[22]
In 2013, Mathy Vanhoef and Frank Piessens[41]significantly improved upon theWPA-TKIPattacks of Erik Tews and Martin Beck.[42][43]They demonstrated how to inject an arbitrary number of packets, with each packet containing at most 112 bytes of payload. This was demonstrated by implementing aport scanner, which can be executed against any client usingWPA-TKIP. Additionally, they showed how to decrypt arbitrary packets sent to a client. They mentioned this can be used to hijack aTCP connection, allowing an attacker to inject maliciousJavaScriptwhen the victim visits a website.
In contrast, the Beck-Tews attack could only decrypt short packets with mostly known content, such asARPmessages, and only allowed injection of 3 to 7 packets of at most 28 bytes. The Beck-Tews attack also requiresquality of service(as defined in802.11e) to be enabled, while the Vanhoef-Piessens attack does not. Neither attack leads to recovery of the shared session key between the client andAccess Point. The authors say using a short rekeying interval can prevent some attacks but not all, and strongly recommend switching fromTKIPto AES-basedCCMP.
Halvorsen and others show how to modify the Beck-Tews attack to allow injection of 3 to 7 packets having a size of at most 596 bytes.[44]The downside is that their attack requires substantially more time to execute: approximately 18 minutes and 25 seconds. In other work Vanhoef and Piessens showed that, when WPA is used to encrypt broadcast packets, their original attack can also be executed.[45]This is an important extension, as substantially more networks use WPA to protectbroadcast packets, than to protectunicast packets. The execution time of this attack is on average around 7 minutes, compared to the 14 minutes of the original Vanhoef-Piessens and Beck-Tews attack.
The vulnerabilities of TKIP are significant because WPA-TKIP had previously been considered an extremely safe combination; indeed, WPA-TKIP is still a configuration option on a wide variety of wireless routing devices provided by many hardware vendors. A survey in 2013 showed that 71% still allowed usage of TKIP, and 19% exclusively supported TKIP.[41]
A more serious security flaw was revealed in December 2011 by Stefan Viehböck that affects wireless routers with theWi-Fi Protected Setup(WPS) feature, regardless of which encryption method they use. Most recent models have this feature and enable it by default. Many consumer Wi-Fi device manufacturers had taken steps to eliminate the potential of weak passphrase choices by promoting alternative methods of automatically generating and distributing strong keys when users add a new wireless adapter or appliance to a network. These methods include pushing buttons on the devices or entering an 8-digitPIN.
The Wi-Fi Alliance standardized these methods as Wi-Fi Protected Setup; however, the PIN feature as widely implemented introduced a major new security flaw. The flaw allows a remote attacker to recover the WPS PIN and, with it, the router's WPA/WPA2 password in a few hours.[46]Users have been urged to turn off the WPS feature,[47]although this may not be possible on some router models. Also, the PIN is written on a label on most Wi-Fi routers with WPS, which cannot be changed if compromised.
In 2018, the Wi-Fi Alliance introduced Wi-Fi Easy Connect[48]as a new alternative for the configuration of devices that lack sufficient user interface capabilities by allowing nearby devices to serve as an adequate UI for network provisioning purposes, thus mitigating the need for WPS.[49]
Several weaknesses have been found inMS-CHAPv2, some of which severely reduce the complexity of brute-force attacks, making them feasible with modern hardware. In 2012 the complexity of breaking MS-CHAPv2 was reduced to that of breaking a singleDESkey (work byMoxie Marlinspikeand Marsh Ray). Moxie advised: "Enterprises who are depending on the mutual authentication properties of MS-CHAPv2 for connection to their WPA2 Radius servers should immediately start migrating to something else."[50]
Tunneled EAP methods using TTLS or PEAP which encrypt the MSCHAPv2 exchange are widely deployed to protect against exploitation of this vulnerability. However, prevalent WPA2 client implementations during the early 2000s were prone to misconfiguration by end users, or in some cases (e.g.Android), lacked any user-accessible way to properly configure validation of AAA server certificate CNs. This extended the relevance of the original weakness in MSCHAPv2 withinMiTMattack scenarios.[51]Under stricter compliance tests for WPA2 announced alongside WPA3, certified client software will be required to conform to certain behaviors surrounding AAA certificate validation.[15]
Hole196 is a vulnerability in the WPA2 protocol that abuses the shared Group Temporal Key (GTK). It can be used to conduct man-in-the-middle anddenial-of-serviceattacks. However, it assumes that the attacker is already authenticated against Access Point and thus in possession of the GTK.[52][53]
In 2016 it was shown that the WPA and WPA2 standards contain an insecure expositoryrandom number generator(RNG). Researchers showed that, if vendors implement the proposed RNG, an attacker is able to predict the group key (GTK) that is supposed to be randomly generated by theaccess point(AP). Additionally, they showed that possession of the GTK enables the attacker to inject any traffic into the network, and allowed the attacker to decrypt unicast internet traffic transmitted over the wireless network. They demonstrated their attack against anAsusRT-AC51U router that uses theMediaTekout-of-tree drivers, which generate the GTK themselves, and showed the GTK can be recovered within two minutes or less. Similarly, they demonstrated the keys generated by Broadcom access daemons running on VxWorks 5 and later can be recovered in four minutes or less, which affects, for example, certain versions of Linksys WRT54G and certain Apple AirPort Extreme models. Vendors can defend against this attack by using a secure RNG. By doing so,Hostapdrunning on Linux kernels is not vulnerable against this attack and thus routers running typicalOpenWrtorLEDEinstallations do not exhibit this issue.[54]
In October 2017, details of theKRACK(Key Reinstallation Attack) attack on WPA2 were published.[55][56]The KRACK attack is believed to affect all variants of WPA and WPA2; however, the security implications vary between implementations, depending upon how individual developers interpreted a poorly specified part of the standard. Software patches can resolve the vulnerability but are not available for all devices.[57]KRACK exploits a weakness in the WPA2 4-Way Handshake, a critical process for generating encryption keys. Attackers can force multiple handshakes, manipulating key resets. By intercepting the handshake, they could decrypt network traffic without cracking encryption directly. This poses a risk, especially with sensitive data transmission.[58]
Manufacturers have released patches in response, but not all devices have received updates. Users are advised to keep their devices updated to mitigate such security risks. Regular updates are crucial for maintaining network security against evolving threats.[58]
The Dragonblood attacks exposed significant vulnerabilities in the Dragonfly handshake protocol used in WPA3 and EAP-pwd. These included side-channel attacks potentially revealing sensitive user information and implementation weaknesses in EAP-pwd and SAE. Concerns were also raised about the inadequate security in transitional modes supporting both WPA2 and WPA3. In response, security updates and protocol changes are being integrated into WPA3 and EAP-pwd to address these vulnerabilities and enhance overall Wi-Fi security.[59]
On May 11, 2021,FragAttacks, a set of new security vulnerabilities, were revealed, affecting Wi-Fi devices and enabling attackers within range to steal information or target devices. These include design flaws in the Wi-Fi standard, affecting most devices, and programming errors in Wi-Fi products, making almost all Wi-Fi products vulnerable. The vulnerabilities impact all Wi-Fi security protocols, including WPA3 and WEP. Exploiting these flaws is complex but programming errors in Wi-Fi products are easier to exploit. Despite improvements in Wi-Fi security, these findings highlight the need for continuous security analysis and updates. In response, security patches were developed, and users are advised to use HTTPS and install available updates for protection.
|
https://en.wikipedia.org/wiki/Wi-Fi_Protected_Access#WPA-Personal
|
TheFederal Information Processing Standard Publication 140-2, (FIPS PUB 140-2),[1][2]is aU.S.governmentcomputer securitystandardused to approvecryptographic modules. The title isSecurity Requirements for Cryptographic Modules. Initial publication was on May 25, 2001, and was last updated December 3, 2002.
Its successor,FIPS 140-3, was approved on March 22, 2019, and became effective on September 22, 2019.[3]FIPS 140-3 testing began on September 22, 2020, and the first FIPS 140-3 validation certificates were issued in December 2022.[4]FIPS 140-2 testing was still available until September 21, 2021 (later changed for applications already in progress to April 1, 2022[5]), creating an overlapping transition period of more than one year. FIPS 140-2 test reports that remain in the CMVP queue will still be granted validations after that date, but all FIPS 140-2 validations will be moved to the Historical List on September 21, 2026 regardless of their actual final validation date.[6]
TheNational Institute of Standards and Technology(NIST) issued theFIPS 140Publication Series to coordinate the requirements and standards for cryptography modules that include both hardware and software components. Protection of a cryptographic module within a security system is necessary to maintain the confidentiality and integrity of the information protected by the module. This standard specifies the security requirements that will be satisfied by a cryptographic module. The standard provides four increasing qualitative levels of security intended to cover a wide range of potential applications and environments. The security requirements cover areas related to the secure design and implementation of a cryptographic module. These areas include cryptographic module specification; cryptographic module ports and interfaces; roles, services, and authentication; finite state model; physical security; operational environment; cryptographic key management; electromagnetic interference/electromagnetic compatibility (EMI/EMC); self-tests; design assurance; and mitigation of other attacks.[7]
Federal agencies and departments can validate that the module in use is covered by an existingFIPS 140-1or FIPS 140-2 certificate that specifies the exact module name, hardware, software, firmware, and/or applet version numbers. The cryptographic modules are produced by theprivate sectororopen sourcecommunities for use by the U.S. government and other regulated industries (such as financial and health-care institutions) that collect, store, transfer, share and disseminatesensitive but unclassified(SBU) information. A commercial cryptographic module is also commonly referred to as ahardware security module(HSM).
FIPS 140-2 defines four levels of security, simply named "Level 1" to "Level 4". It does not specify in detail what level of security is required by any particular application.
Security Level 1 provides the lowest level of security. Basic security requirements are specified for a cryptographic module (e.g., at least one Approved algorithm or Approved security function shall be used). No specific physical security mechanisms are required in a Security Level 1 cryptographic module beyond the basic requirement for production-grade components. An example of a Security Level 1 cryptographic module is a personal computer (PC) encryption board.
Security Level 2 improves upon the physical security mechanisms of a Security Level 1 cryptographic module by requiring features that show evidence of tampering, including tamper-evident coatings or seals that must be broken to attain physical access to the plaintext cryptographic keys andcritical security parameters(CSPs) within the module, or pick-resistant locks on covers or doors to protect against unauthorized physical access.
In addition to the tamper-evident physical security mechanisms required at Security Level 2, Security Level 3 attempts to prevent the intruder from gaining access to CSPs held within the cryptographic module. Physical security mechanisms required at Security Level 3 are intended to have a high probability of detecting and responding to attempts at physical access, use or modification of the cryptographic module. The physical security mechanisms may include the use of strong enclosures and tamper-detection/response circuitry that zeroes all plaintext CSPs when the removable covers/doors of the cryptographic module are opened.
Security Level 4 provides the highest level of security. At this security level, the physical security mechanisms provide a complete envelope of protection around the cryptographic module with the intent of detecting and responding to all unauthorized attempts at physical access. Penetration of the cryptographic module enclosure from any direction has a very high probability of being detected, resulting in the immediate deletion of all plaintext CSPs.
Security Level 4 cryptographic modules are useful for operation in physically unprotected environments. Security Level 4 also protects a cryptographic module against a security compromise due to environmental conditions or fluctuations outside of the module's normal operating ranges for voltage and temperature. Intentional excursions beyond the normal operating ranges may be used by an attacker to thwart a cryptographic module's defenses. A cryptographic module is required to either include special environmental protection features designed to detect fluctuations and delete CSPs, or to undergo rigorous environmental failure testing to provide a reasonable assurance that the module will not be affected by fluctuations outside of the normal operating range in a manner that can compromise the security of the module.
For Levels 2 and higher, the operating platform upon which the validation is applicable is also listed. Vendors do not always maintain their baseline validations.
FIPS 140-2 establishes theCryptographic Module Validation Program(CMVP) as a joint effort by NIST and theCommunications Security Establishment(CSE) for the Government of Canada.
Security programs overseen by NIST and CSE focus on working with government and industry to establish more secure systems and networks by developing, managing and promoting security assessment tools, techniques, services, and supporting programs for testing, evaluation and validation; and addresses such areas as: development and maintenance of security metrics, security evaluation criteria and evaluation methodologies, tests and test methods; security-specific criteria for laboratory accreditation; guidance on the use of evaluated and tested products; research to address assurance methods and system-wide security and assessment methodologies; security protocol validation activities; and appropriate coordination with assessment-related activities of voluntary industry standards bodies and other assessment regimes.
The FIPS 140-2 standard is aninformation technologysecurity approval program for cryptographic modules produced by private sector vendors who seek to have their products certified for use in government departments and regulated industries (such as financial and health-care institutions) that collect, store, transfer, share and disseminatesensitive but unclassified(SBU) information.
Tamper evident FIPS 140-2 security labels are utilized to deter and detect tampering of modules.
All of the tests under the CMVP are handled by third-party laboratories that are accredited as Cryptographic Module Testing laboratories[8]by the National Voluntary Laboratory Accreditation Program (NVLAP).[9]Vendors interested in validation testing may select any of the twenty-one accredited labs.
NVLAP accredited Cryptographic Modules Testing laboratories perform validation testing of cryptographic modules.[10][11]Cryptographic modules are tested against requirements found in FIPS PUB 140–2, Security Requirements for Cryptographic Modules. Security requirements cover 11 areas related to the design and implementation of a cryptographic module. Within most areas, a cryptographic module receives a security level rating (1–4, from lowest to highest), depending on what requirements are met. For other areas that do not provide for different levels of security, a cryptographic module receives a rating that reflects fulfillment of all of the requirements for that area.
An overall rating is issued for the cryptographic module, which indicates:
On a vendor's validation certificate, individual ratings are listed, as well as the overall rating.
NIST maintains validation lists[12]for all of its cryptographic standards testing programs (past and present). All of these lists are updated as new modules/implementations receive validation certificates from NIST and CSE. Items on the FIPS 140-1 and FIPS 140-2 validation list reference validated algorithm implementations that appear on the algorithm validation lists.
In addition to using a valid cryptographic module, encryption solutions are required to use cipher suites with approved algorithms or security functions established by the FIPS 140-2 Annex A to be considered FIPS 140-2 compliant.
FIPS PUB 140-2 Annexes:
Steven Marquess has posted a criticism that FIPS 140-2 validation can lead to incentives to keep vulnerabilities and other defects hidden. CMVP can decertify software in which vulnerabilities are found, but it can take a year to re-certify software if defects are found, so companies can be left without a certified product to ship. As an example, Steven Marquess mentions a vulnerability that was found, publicised, and fixed in the FIPS-certified open-source derivative of OpenSSL, with the publication meaning that the OpenSSL derivative was decertified. This decertification hurt companies relying on the OpenSSL-derivative's FIPS certification. By contrast, companies that had renamed and certified a copy of the open-source OpenSSL derivative were not decertified, even though they were basically identical, and did not fix the vulnerability. Steven Marquess therefore argues that the FIPS process inadvertently encourages hiding software's origins, to de-associate it from defects since found in the original, while potentially leaving the certified copy vulnerable.[13]
In recent years, CMVP has taken steps to avoid the situation described by Marquess, moving validations to the Historical List based on the algorithms and functions contained in the module, rather than based on the provenance.[14]
|
https://en.wikipedia.org/wiki/FIPS_140-2
|
Hardware accelerationis the use ofcomputer hardwaredesigned to perform specific functions more efficiently when compared tosoftwarerunning on a general-purposecentral processing unit(CPU). Anytransformationofdatathat can be calculated in software running on a generic CPU can also be calculated in custom-made hardware, or in some mix of both.
To perform computing tasks more efficiently, generally one can invest time and money in improving the software, improving the hardware, or both. There are various approaches with advantages and disadvantages in terms of decreasedlatency, increasedthroughput, and reducedenergy consumption. Typical advantages of focusing on software may include greater versatility, more rapiddevelopment, lowernon-recurring engineeringcosts, heightenedportability, and ease ofupdating featuresorpatchingbugs, at the cost ofoverheadto compute general operations. Advantages of focusing on hardware may includespeedup, reducedpower consumption,[1]lower latency, increasedparallelism[2]andbandwidth, andbetter utilizationof area andfunctional componentsavailable on anintegrated circuit; at the cost of lower ability to update designs onceetched onto siliconand higher costs offunctional verification, times to market, and the need for more parts. In the hierarchy of digital computing systems ranging from general-purpose processors tofully customizedhardware, there is a tradeoff between flexibility and efficiency, with efficiency increasing byorders of magnitudewhen any given application is implemented higher up that hierarchy.[3]This hierarchy includes general-purpose processors such as CPUs,[4]more specialized processors such as programmableshadersin aGPU,[5]applications implemented onfield-programmable gate arrays(FPGAs),[6]and fixed-function implemented onapplication-specific integrated circuits(ASICs).[7]
Hardware acceleration is advantageous forperformance, and practical when the functions are fixed, so updates are not as needed as in software solutions. With the advent ofreprogrammablelogic devicessuch as FPGAs, the restriction of hardware acceleration to fully fixed algorithms has eased since 2010, allowing hardware acceleration to be applied to problem domains requiring modification to algorithms and processingcontrol flow.[8][9]The disadvantage, however, is that in many open source projects, it requires proprietary libraries that not all vendors are keen to distribute or expose, making it difficult to integrate in such projects.
Integrated circuitsare designed to handle various operations on both analog and digital signals. In computing, digital signals are the most common and are typically represented as binary numbers.Computer hardwareand software use thisbinary representationto perform computations. This is done by processingBoolean functionson the binary input, and then outputting the results for storage or further processing by other devices.
Because allTuring machinescan run anycomputable function, it is always possible to design custom hardware that performs the same function as a given piece of software. Conversely, software can always be used to emulate the function of a given piece of hardware. Custom hardware may offer higher performance per watt for the same functions that can be specified in software.Hardware description languages(HDLs) such asVerilogandVHDLcan model the samesemanticsas software andsynthesizethe design into anetlistthat can be programmed to an FPGA or composed into thelogic gatesof an ASIC.
The vast majority of software-based computing occurs on machines implementing thevon Neumann architecture, collectively known asstored-program computers.Computer programsare stored as data andexecutedbyprocessors. Such processors must fetch and decode instructions, as well asload data operandsfrommemory(as part of theinstruction cycle), to execute the instructions constituting the software program. Relying on a commoncachefor code and data leads to the "von Neumann bottleneck", a fundamental limitation on the throughput of software on processors implementing the von Neumann architecture. Even in themodified Harvard architecture, where instructions and data have separate caches in thememory hierarchy, there is overhead to decoding instructionopcodesandmultiplexingavailableexecution unitson amicroprocessorormicrocontroller, leading to low circuit utilization. Modern processors that providesimultaneous multithreadingexploit under-utilization of available processor functional units andinstruction level parallelismbetween different hardware threads.
Hardware execution units do not in general rely on the von Neumann or modified Harvard architectures and do not need to perform the instruction fetch and decode steps of aninstruction cycleand incur those stages' overhead. If needed calculations are specified in aregister transfer level(RTL) hardware design, the time and circuit area costs that would be incurred by instruction fetch and decoding stages can be reclaimed and put to other uses.
This reclamation saves time, power, and circuit area in computation. The reclaimed resources can be used for increased parallel computation, other functions, communication, or memory, as well as increasedinput/outputcapabilities. This comes at the cost of general-purpose utility.
Greater RTL customization of hardware designs allows emerging architectures such asin-memory computing,transport triggered architectures(TTA) andnetworks-on-chip(NoC) to further benefit from increasedlocalityof data to execution context, thereby reducing computing and communication latency between modules and functional units.
Custom hardware is limited in parallel processing capability only by the area andlogic blocksavailable on theintegrated circuit die.[10]Therefore, hardware is much more free to offermassive parallelismthan software on general-purpose processors, offering a possibility of implementing theparallel random-access machine(PRAM) model.
It is common to buildmulticoreandmanycoreprocessing units out ofmicroprocessor IP core schematicson a single FPGA or ASIC.[11][12][13][14][15]Similarly, specialized functional units can be composed in parallel, asin digital signal processing, without being embedded in a processorIP core. Therefore, hardware acceleration is often employed for repetitive, fixed tasks involving littleconditional branching, especially on large amounts of data. This is howNvidia'sCUDAline of GPUs are implemented.
As device mobility has increased, new metrics have been developed that measure the relative performance of specific acceleration protocols, considering characteristics such as physical hardware dimensions, power consumption, and operations throughput. These can be summarized into three categories: task efficiency, implementation efficiency, and flexibility. Appropriate metrics consider the area of the hardware along with both the corresponding operations throughput and energy consumed.[16]
Examples of hardware acceleration includebit blitacceleration functionality in graphics processing units (GPUs), use ofmemristorsfor acceleratingneural networks, andregular expressionhardware acceleration forspam controlin theserverindustry, intended to preventregular expression denial of service(ReDoS) attacks.[17]The hardware that performs the acceleration may be part of a general-purpose CPU, or a separate unit called a hardware accelerator, though they are usually referred to with a more specific term, such as 3D accelerator, orcryptographic accelerator.
Traditionally, processors were sequential (instructions are executed one by one), and were designed to run general purpose algorithms controlled byinstruction fetch(for example, moving temporary resultsto and fromaregister file). Hardware accelerators improve the execution of a specific algorithm by allowing greaterconcurrency, having specificdatapathsfor theirtemporary variables, and reducing the overhead of instruction control in the fetch-decode-execute cycle.
Modern processors aremulti-coreand often feature parallel "single-instruction; multiple data" (SIMD) units. Even so, hardware acceleration still yields benefits. Hardware acceleration is suitable for any computation-intensive algorithm which is executed frequently in a task or program. Depending upon the granularity, hardware acceleration can vary from a small functional unit, to a large functional block (likemotion estimationinMPEG-2).
|
https://en.wikipedia.org/wiki/Hardware_acceleration
|
ATrusted Platform Module(TPM) is asecure cryptoprocessorthat implements theISO/IEC 11889standard. Common uses are verifying that theboot processstarts from a trusted combination of hardware and software and storing disk encryption keys.
A TPM 2.0 implementation is part of theWindows 11system requirements.[1]
The first TPM version that was deployed was 1.1b in 2003.[2]
Trusted Platform Module (TPM) was conceived by acomputer industryconsortium calledTrusted Computing Group(TCG). It evolved intoTPM Main Specification Version 1.2which was standardized byInternational Organization for Standardization(ISO) andInternational Electrotechnical Commission(IEC) in 2009 as ISO/IEC 11889:2009.[3]TPM Main Specification Version 1.2was finalized on 3 March 2011 completing its revision.[4][5]
On April 9, 2014, theTrusted Computing Groupannounced a major upgrade to their specification entitledTPM Library Specification 2.0.[6]The group continues work on the standard incorporating errata, algorithmic additions and new commands, with its most recent edition published as 2.0 in November 2019.[7]This version became ISO/IEC 11889:2015.
When a new revision is released it is divided into multiple parts by the Trusted Computing Group. Each part consists of a document that makes up the whole of the new TPM specification.
While TPM 2.0 addresses many of the same use cases and has similar features, the details are different. TPM 2.0 is not backward compatible with TPM 1.2.[8][9][10]
The TPM 2.0 policy authorization includes the 1.2 HMAC, locality, physical presence, and PCR. It adds authorization based on an asymmetric digital signature, indirection to another authorization secret, counters and time limits, NVRAM values, a particular command or command parameters, and physical presence. It permits the ANDing and ORing of these authorization primitives to construct complex authorization policies.[23]
The Trusted Platform Module (TPM) provides:
Computer programs can use a TPM for theauthenticationof hardware devices, since each TPM chip has a unique and secret Endorsement Key (EK) burned in as it is produced. Security embedded in hardware provides more protection than a software-only solution.[31]Its use is restricted in some countries.[32]
The primary scope of TPM is to ensure theintegrityof a platform during boot time. In this context, "integrity" means "behaves as intended", and a "platform" is any computer device regardless of itsoperating system. This is to ensure that theboot processstarts from a trusted combination of hardware and software, and continues until the operating system has fully booted andapplicationsare running.
When TPM is used, the firmware and the operating system are responsible for ensuring integrity.
For example, theUnified Extensible Firmware Interface(UEFI) can use TPM to form aroot of trust: The TPM contains several Platform Configuration Registers (PCRs) that allow secure storage and reporting of security-relevant metrics. These metrics can be used to detect changes to previous configurations and decide how to proceed. Examples of such use can be found inLinux Unified Key Setup(LUKS),[33]BitLockerandPrivateCorevCage memory encryption. (See below.)
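The PCRs mentioned above cannot be written directly; instead, each new measurement is folded into the previous register value with a hash ("extend"), so the final value commits to the entire measured boot sequence. A minimal sketch of the extend semantics, assuming a SHA-256 PCR bank and an invented three-stage boot chain (in reality the operation is performed inside the TPM):

import hashlib

def pcr_extend(pcr: bytes, measurement: bytes) -> bytes:
    # new PCR = Hash(old PCR || Hash(measured data))
    return hashlib.sha256(pcr + hashlib.sha256(measurement).digest()).digest()

pcr = bytes(32)  # PCRs start at all zeros after reset
for stage in (b"firmware image", b"bootloader", b"kernel"):  # illustrative boot chain
    pcr = pcr_extend(pcr, stage)
print(pcr.hex())  # a change to any stage, or to their order, yields a different value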
Another example of platform integrity via TPM is in the use ofMicrosoft Office 365licensing and Outlook Exchange.[34]
Another example of TPM use for platform integrity is theTrusted Execution Technology(TXT), which creates a chain of trust. It could remotely attest that a computer is using the specified hardware and software.[35]
Full disk encryptionutilities, such asdm-crypt, can use this technology to protect the keys used to encrypt the computer's storage devices and provide integrityauthenticationfor a trusted boot pathway that includes firmware and theboot sector.[36]
In 2006 newlaptopsbegan being sold with a built-in TPM chip. In the future, this concept could be co-located on an existingmotherboardchip in computers, or any other device where the TPM facilities could be employed, such as acellphone. On a PC, either theLow Pin Count(LPC) bus or theSerial Peripheral Interface(SPI) bus is used to connect to the TPM chip.
TheTrusted Computing Group(TCG) has certified TPM chips manufactured byInfineon Technologies,Nuvoton, andSTMicroelectronics,[37]having assigned TPM vendorIDstoAdvanced Micro Devices,Atmel,Broadcom,IBM, Infineon,Intel,Lenovo,National Semiconductor, Nationz Technologies, Nuvoton,Qualcomm,Rockchip,Standard Microsystems Corporation, STMicroelectronics,Samsung, Sinosun,Texas Instruments, andWinbond.[38]
There are five different types of TPM 2.0 implementations (listed in order from most to least secure):[39][40]
The official TCG reference implementation of the TPM 2.0 Specification has been developed byMicrosoft. It is licensed underBSD Licenseand thesource codeis available onGitHub.[44]
In 2018Intelopen-sourced its Trusted Platform Module 2.0 (TPM2) software stack with support for Linux and Microsoft Windows.[45]The source code is hosted on GitHub and licensed underBSD License.[46][47]
Infineonfunded the development of an open source TPM middleware that complies with the Software Stack (TSS) Enhanced System API (ESAPI) specification of the TCG.[48]It was developed byFraunhofer Institutefor Secure Information Technology (SIT).[49]
IBM's Software TPM 2.0 is an implementation of the TCG TPM 2.0 specification. It is based on the TPM specification Parts 3 and 4 and source code donated by Microsoft. It contains additional files to complete the implementation. The source code is hosted onSourceForge[50]andGitHub[51]and licensed under BSD License.
In 2022,AMDannounced that under certain circumstances their fTPM implementation causes performance problems. A fix is available in the form of a BIOS update.[52][53]
TheTrusted Computing Group(TCG) has faced resistance to the deployment of this technology in some areas, where some authors see possible uses not specifically related toTrusted Computing, which may raise privacy concerns. The concerns include the abuse of remote validation of software (where the manufacturer, and not the user who owns the computer, decides what software is allowed to run) and possible ways to follow actions taken by the user that are recorded in a database, in a manner that is completely undetectable to the user.[54]
TheTrueCryptdisk encryption utility, as well as its derivativeVeraCrypt, do not support TPM. The original TrueCrypt developers were of the opinion that the exclusive purpose of the TPM is "to protect against attacks that require the attacker to have administrator privileges, or physical access to the computer". The attacker who has physical or administrative access to a computer can circumvent TPM, e.g., by installing a hardwarekeystroke logger, by resetting TPM, or by capturing memory contents and retrieving TPM-issued keys. The condemning text goes so far as to claim that TPM is entirely redundant.[55]The VeraCrypt publisher has reproduced the original allegation with no changes other than replacing "TrueCrypt" with "VeraCrypt".[56]The author is right that, after achieving either unrestricted physical access or administrative privileges, it is only a matter of time before other security measures in place are bypassed.[57][58]However, stopping an attacker in possession of administrative privileges has never been one of the goals of TPM (see§ Usesfor details), and TPM canstop some physical tampering.[33][35][59][60][61]
In 2015Richard Stallmansuggested replacing the term "Trusted Computing" with the term "Treacherous Computing" due to the danger that the computer can be made to systematically disobey its owner if the cryptographic keys are kept secret from them. He also considers that TPMs available for PCs in 2015 are not currently dangerous and that there is no reasonnotto include one in a computer or support it in software due to failed attempts from the industry to use that technology forDRM, but that the TPM2 released in 2022 is precisely the "treacherous computing" threat he had warned of.[62]
In August 2023,Linus Torvalds, who was frustrated with AMD fTPM's stuttering bugs, opined, "Let's just disable the stupid fTPMhwrndthing." He said the CPU-based random number generator,rdrand, was equally suitable, despite having had its share of bugs. Writing forNeowin, Sayan Sen quoted Torvalds' bitter comments and called him "a man with a strong opinion."[63]
In 2010Christopher Tarnovskypresented an attack against TPMs atBlack Hat Briefings, where he claimed to be able to extract secrets from a single TPM. He was able to do this after 6 months of work by inserting a probe and spying on aninternal busfor the Infineon SLE 66 CL PC.[64][65]
In case of physical access, computers with TPM 1.2 are vulnerable tocold boot attacksas long as the system is on or can be booted without a passphrase from shutdown,sleeporhibernation, which is the default setup for Windows computers with BitLocker full disk encryption.[66]A fix was proposed, which has been adopted in the specifications for TPM 2.0.
In 2009, the concept of shared authorisation data in TPM 1.2 was found to be flawed. An adversary given access to the data could spoof responses from the TPM.[67]A fix was proposed, which has been adopted in the specifications for TPM 2.0.
In 2015 as part of theSnowden revelations, it was revealed that in 2010 aUS CIAteam claimed at an internal conference to have carried out adifferential power analysisattack against TPMs that was able to extract secrets.[68][69]
MainTrusted Boot (tboot)distributions before November 2017 are affected by a dynamic root of trust for measurement (DRTM) attackCVE-2017-16837, which affects computers running onIntel's Trusted eXecution Technology (TXT)for the boot-up routine.[70]
In October 2017, it was reported that a code library developed byInfineon, which had been in widespread use in its TPMs, contained a vulnerability, known asROCA, which generated weakRSAkey pairs that allowed private keys to be inferred frompublic keys. As a result, all systems depending upon the privacy of such weak keys are vulnerable to compromise, such asidentity theftor spoofing.[71]Cryptosystems that store encryption keys directly in the TPM withoutblindingcould be at particular risk to these types of attacks, as passwords and other factors would be meaningless if the attacks can extract encryption secrets.[72]Infineon has released firmware updates for its TPMs to manufacturers who have used them.[73]
In 2018, a design flaw in the TPM 2.0 specification for the static root of trust for measurement (SRTM) was reported (CVE-2018-6622). It allows an adversary to reset and forge platform configuration registers which are designed to securely hold measurements of software that are used for bootstrapping a computer.[74]Fixing it requires hardware-specific firmware patches.[74]An attacker abuses power interrupts and TPM state restores to trick TPM into thinking that it is running on non-tampered components.[70]
In 2021, the Dolos Group showed an attack on a discrete TPM, where the TPM chip itself had some tamper resistance, but the other endpoints of its communication bus did not. They read a full-disk-encryption key as it was transmitted across the motherboard, and used it to decrypt the laptop's SSD.[75]
Currently, a TPM is provided by nearly all PC and notebook manufacturers in their products.
Vendors include:
There are also hybrid types; for example, TPM can be integrated into anEthernetcontroller, thus eliminating the need for a separate motherboard component.[83][84]
Field upgrade is the TCG term for updating the TPM firmware. The update can be between TPM 1.2 and TPM 2.0, or between firmware versions. Some vendors limit the number of transitions between 1.2 and 2.0, and some restrict rollback to previous versions. Platform OEMs such as HP[85] supply an upgrade tool.
Since July 28, 2016, all new Microsoft device models, lines, or series (or those whose hardware configuration is updated with a major change, such as a new CPU or graphics card) must implement and enable TPM 2.0 by default.
While TPM 1.2 parts are discrete silicon components, typically soldered on the motherboard, TPM 2.0 is available as a discrete (dTPM) silicon component in a single semiconductor package, as an integrated component incorporated into one or more semiconductor packages alongside other logic units, and as a firmware-based component (fTPM) running in a trusted execution environment (TEE) on a general-purpose system-on-a-chip (SoC).[86]
TPM endorsement keys (EKs) are asymmetric key pairs unique to each TPM. They use the RSA and ECC algorithms. The TPM manufacturer usually provisions endorsement key certificates in TPM non-volatile memory. The certificates assert that the TPM is authentic. Starting with TPM 2.0, the certificates are in X.509 DER format.
These manufacturers typically provide theircertificate authorityroot (and sometimes intermediate) certificates on their web sites.
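As an illustration of how such a certificate can be checked, the following sketch uses the Python "cryptography" package to verify that an EK certificate was signed by a manufacturer CA certificate. The function name is this example's own, an RSA-signing CA is assumed (ECC CAs would use ECDSA instead), and a real verification would also build the full chain and check validity periods.

```python
# Sketch: checking that a TPM endorsement key (EK) certificate chains to a
# manufacturer CA. The DER blobs are assumed to have been obtained elsewhere
# (the EK certificate from TPM NV memory, the CA certificate from the
# manufacturer's web site).
from cryptography import x509
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import padding

def ek_cert_is_vendor_signed(ek_cert_der: bytes, ca_cert_der: bytes) -> bool:
    ek_cert = x509.load_der_x509_certificate(ek_cert_der)
    ca_cert = x509.load_der_x509_certificate(ca_cert_der)
    print("EK subject:", ek_cert.subject.rfc4514_string())
    print("Issuer:    ", ek_cert.issuer.rfc4514_string())
    try:
        # Verify the CA's signature over the EK certificate body (RSA assumed).
        ca_cert.public_key().verify(
            ek_cert.signature,
            ek_cert.tbs_certificate_bytes,
            padding.PKCS1v15(),
            ek_cert.signature_hash_algorithm,
        )
        return True
    except InvalidSignature:
        return False
```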
To utilize a TPM, the user needs a software library that communicates with the TPM and provides a friendlier API than the raw TPM communication. Currently, there are several such open-source TPM 2.0 libraries. Some of them also support TPM 1.2, but TPM 1.2 chips are now largely deprecated and modern development is focused on TPM 2.0.
Typically, a TPM library provides an API with one-to-one mappings to TPM commands. The TCG specification calls this layer the System API (SAPI). This gives the user full control over TPM operations, but the complexity is high. To hide some of this complexity, most libraries also offer simpler ways to invoke complex TPM operations. The TCG specification calls these two higher layers the Enhanced System API (ESAPI) and the Feature API (FAPI).
There is currently only one stack that follows the TCG specification. All the other available open-source TPM libraries use their own form of richer API.
These TPM libraries are sometimes also called TPM stacks, because they provide the interface for the developer or user to interact with the TPM. As seen from the table, the TPM stacks abstract the operating system and transport layer, so that an application can be migrated between platforms. For example, by using the TPM stack API the user would interact with a TPM in the same way, regardless of whether the physical chip is connected over an SPI, I2C or LPC interface to the host system.
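The layering described above can be sketched with a few hypothetical Python classes. The names (TransportSPI, SystemAPI, FeatureAPI) and the command framing are invented for illustration and do not correspond to any real library's API; real stacks such as tpm2-tss define these layers as SAPI/ESAPI and FAPI.

```python
# Illustrative sketch only: a hypothetical TPM stack showing transport,
# command-level, and feature-level layers.
class TransportSPI:
    """Hypothetical transport: frames raw command bytes over SPI."""
    def exchange(self, command: bytes) -> bytes:
        # A real transport would talk to the TPM device (e.g. /dev/tpm0).
        raise NotImplementedError("would send the command to the TPM here")

class SystemAPI:
    """Layer with one-to-one mappings to TPM commands (cf. TCG SAPI)."""
    def __init__(self, transport):
        self.transport = transport
    def tpm2_get_random(self, num_bytes: int) -> bytes:
        # The caller controls every command parameter directly
        # (the byte layout here is simplified, not the real wire format).
        command = b"\x80\x01" + num_bytes.to_bytes(2, "big")
        return self.transport.exchange(command)

class FeatureAPI:
    """Higher-level layer hiding command details (cf. TCG FAPI)."""
    def __init__(self, sapi):
        self.sapi = sapi
    def random_bytes(self, n: int) -> bytes:
        # One simple call; the complexity lives in the lower layers.
        return self.sapi.tpm2_get_random(n)

# Swapping TransportSPI for an I2C or LPC transport would not change
# application code written against FeatureAPI.
```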
|
https://en.wikipedia.org/wiki/Trusted_Platform_Module
|
A trusted execution environment (TEE) is a secure area of a main processor. It ensures that code and data loaded inside it are protected with respect to confidentiality and integrity. Data confidentiality prevents unauthorized entities from outside the TEE from reading data, while code integrity prevents code in the TEE from being replaced or modified by unauthorized entities, which may include the computer's owner itself, as in certain DRM schemes described for Intel SGX.
This is done by implementing unique, immutable, and confidential architectural security, which offers hardware-based memory encryption that isolates specific application code and data in memory. This allows user-level code to allocate private regions of memory, called enclaves, which are designed to be protected from processes running at higher privilege levels.[1][2][3]A TEE as an isolated execution environment provides security features such as isolated execution, integrity of applications executing with the TEE, and confidentiality of their assets. In general terms, the TEE offers an execution space that provides a higher level of security for trusted applications running on the device than a rich operating system (OS) and more functionality than a 'secure element' (SE).
The Open Mobile Terminal Platform (OMTP) first defined TEE in its "Advanced Trusted Environment: OMTP TR1" standard, defining it as a "set of hardware and software components providing facilities necessary to support applications" that had to meet the requirements of one of two defined security levels. The first security level, Profile 1, was targeted against only software attacks, while Profile 2 was targeted against both software and hardware attacks.[4]
Commercial TEE solutions based on ARMTrustZonetechnology, conforming to the TR1 standard, were later launched, such as Trusted Foundations developed by Trusted Logic.[5]
Work on the OMTP standards ended in mid-2010 when the group transitioned into theWholesale Applications Community(WAC).[6]
The OMTP standards, including those defining a TEE, are hosted byGSMA.[7]
The TEE typically consists of a hardware isolation mechanism plus a secure operating system running on top of that isolation mechanism, although the term has been used more generally to mean a protected solution.[8][9][10][11] While a GlobalPlatform TEE requires hardware isolation, others, such as EMVCo, use the term TEE to refer to both hardware- and software-based solutions.[12] FIDO uses the concept of TEE in the restricted operating environment for TEEs based on hardware isolation.[13] Only trusted applications running in a TEE have access to the full power of a device's main processor, peripherals, and memory, while hardware isolation protects these from user-installed apps running in the main operating system. Software and cryptographic isolation inside the TEE protect the trusted applications contained within from each other.[14]
Service providers,mobile network operators(MNO), operating system developers,application developers, device manufacturers, platform providers, and silicon vendors are the main stakeholders contributing to the standardization efforts around the TEE.
To prevent the simulation of hardware with user-controlled software, a so-called "hardware root of trust" is used. This is a set of private keys embedded directly into the chip during manufacturing; one-time programmable memory such as eFuses is usually used on mobile devices. These keys cannot be changed, even after a device reset. Their public counterparts reside in a manufacturer database, together with a non-secret hash of a public key belonging to the trusted party (usually the chip vendor), which is used to sign trusted firmware alongside the circuits performing cryptographic operations and controlling access.
The hardware is designed in a way which prevents all software not signed by the trusted party's key from accessing the privileged features. The public key of the vendor is provided at runtime and hashed; this hash is then compared to the one embedded in the chip. If the hash matches, the public key is used to verify adigital signatureof trusted vendor-controlled firmware (such as a chain of bootloaders on Android devices or 'architectural enclaves' in SGX). The trusted firmware is then used to implement remote attestation.[15]
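A minimal sketch of that check is shown below, assuming SHA-256 and RSA signatures purely for illustration; real devices perform the equivalent steps in boot ROM and dedicated hardware, and the exact algorithms and key formats are vendor-specific.

```python
# Conceptual sketch: the chip holds only a hash of the vendor's public key;
# the key itself and the signed firmware are supplied at runtime.
import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding, rsa

def verify_boot(embedded_key_hash: bytes, vendor_pubkey_der: bytes,
                firmware: bytes, signature: bytes) -> bool:
    # 1. Hash the public key provided at runtime and compare it with the
    #    hash fused into the chip.
    if hashlib.sha256(vendor_pubkey_der).digest() != embedded_key_hash:
        return False
    # 2. Use that key to check the digital signature over the firmware image.
    pubkey = serialization.load_der_public_key(vendor_pubkey_der)
    try:
        pubkey.verify(signature, firmware, padding.PKCS1v15(), hashes.SHA256())
        return True
    except InvalidSignature:
        return False

# Self-contained demo with a freshly generated stand-in "vendor" key:
vendor_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
pub_der = vendor_key.public_key().public_bytes(
    serialization.Encoding.DER, serialization.PublicFormat.SubjectPublicKeyInfo)
firmware = b"trusted bootloader image"
sig = vendor_key.sign(firmware, padding.PKCS1v15(), hashes.SHA256())
print(verify_boot(hashlib.sha256(pub_der).digest(), pub_der, firmware, sig))  # True
```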
When an application is attested, its untrusted components load its trusted component into memory; the trusted application is protected from modification by untrusted components by the hardware. A nonce is requested by the untrusted party from the verifier's server and is used as part of a cryptographic authentication protocol that proves the integrity of the trusted application. The proof is passed to the verifier, which verifies it. A valid proof cannot be computed in simulated hardware (e.g. QEMU), because constructing it requires access to the keys baked into the hardware; only trusted firmware has access to these keys and/or the keys derived from them or obtained using them. Because only the platform owner is meant to have access to the data recorded in the foundry, the verifying party must interact with the service set up by the vendor. If the scheme is implemented improperly, the chip vendor can track which applications are used on which chip and selectively deny service by returning a message indicating that authentication has not passed.[16]
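A simplified model of this nonce flow is sketched below. An HMAC with a single device key stands in for the hardware-protected keys and the vendor's attestation service; deployed schemes use asymmetric signatures and certificate chains, so this is illustrative only.

```python
# Minimal sketch of nonce-based remote attestation.
import hashlib
import hmac
import os

DEVICE_KEY = os.urandom(32)          # stands in for a key fused into hardware
APP_MEASUREMENT = hashlib.sha256(b"trusted application code").digest()

def device_attest(nonce: bytes) -> bytes:
    # Only trusted firmware with access to DEVICE_KEY can produce this proof.
    return hmac.new(DEVICE_KEY, nonce + APP_MEASUREMENT, hashlib.sha256).digest()

def verifier_check(nonce: bytes, proof: bytes, expected_measurement: bytes) -> bool:
    # The verifier (here, the party knowing DEVICE_KEY) recomputes the proof.
    expected = hmac.new(DEVICE_KEY, nonce + expected_measurement,
                        hashlib.sha256).digest()
    return hmac.compare_digest(proof, expected)   # constant-time comparison

nonce = os.urandom(16)                            # freshly chosen by the verifier
proof = device_attest(nonce)
print(verifier_check(nonce, proof, APP_MEASUREMENT))            # True
print(verifier_check(os.urandom(16), proof, APP_MEASUREMENT))   # False: stale proof
```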
To simulate hardware in a way which enables it to pass remote authentication, an attacker would have to extract keys from the hardware, which is costly because of the equipment and technical skill required to execute it. For example, usingfocused ion beams,scanning electron microscopes,microprobing, and chipdecapsulation[17][18][19][20][21][22]is difficult, or even impossible, if the hardware is designed in such a way that reverse-engineering destroys the keys. In most cases, the keys are unique for each piece of hardware, so that a key extracted from one chip cannot be used by others (for examplephysically unclonable functions[23][24]).
Though deprivation of ownership is not an inherent property of TEEs (it is possible to design the system in a way that allows only the user who first obtained ownership of the device to control the system, by burning a hash of their own key into e-fuses), in practice all such systems in consumer electronics are intentionally designed to let chip manufacturers control access to attestation and its algorithms. This allows manufacturers to grant access to TEEs only to software developers who have a (usually commercial) business agreement with the manufacturer, monetizing the user base of the hardware; to enable such use cases as tivoization and DRM; and to allow certain hardware features to be used only with vendor-supplied software, forcing users to use it despite its antifeatures, such as ads, tracking, and use-case restriction for market segmentation.
There are a number of use cases for the TEE. Though not all of them exploit the deprivation of ownership, the TEE is most often used exactly for this.
Note: Much TEE literature covers this topic under the definition "premium content protection," which is the preferred nomenclature of many copyright holders. Premium content protection is a specific use case ofdigital rights management(DRM) and is controversial among some communities, such as theFree Software Foundation.[25]It is widely used by copyright holders to restrict the ways in which end users can consume content such as 4K high-definition films.
The TEE is a suitable environment for protecting digitally encoded information (for example, HD films or audio) on connected devices such as smartphones, tablets, and HD televisions. This suitability comes from the ability of the TEE to deprive the owner of the device of access to secrets stored on it, and from the fact that there is often a protected hardware path between the TEE and the display and/or subsystems on devices.
The TEE is used to protect the content once it is on the device. While the content is protected during transmission or streaming by the use of encryption, the TEE protects the content once it has been decrypted on the device by ensuring that decrypted content is not exposed to any environment not approved by the app developer or platform vendor.
Mobile commerce applications such as mobile wallets, peer-to-peer payments, contactless payments, or the use of a mobile device as a point of sale (POS) terminal often have well-defined security requirements. TEEs can be used, often in conjunction with near-field communication (NFC), SEs, and trusted backend systems, to provide the security required for such financial transactions to take place.
In some scenarios, interaction with the end user is required, and this may require the user to expose sensitive information such as a PIN, password, or biometric identifier to themobile OSas a means of authenticating the user. The TEE optionally offers a trusted user interface which can be used to construct user authentication on a mobile device.
With the rise of cryptocurrency, TEEs are increasingly used to implement crypto-wallets, as they offer the ability to store tokens more securely than regular operating systems, and can provide the necessary computation and authentication applications.[26]
The TEE is well-suited for supporting biometric identification methods (facial recognition, fingerprint sensor, and voice authorization), which may be easier to use and harder to steal than PINs and passwords. The authentication process is generally split into three main stages:
A TEE is a good area within a mobile device to house the matching engine and the associated processing required to authenticate the user. The environment is designed to protect the data and establish a buffer against the non-secure apps located inmobile OSes. This additional security may help to satisfy the security needs of service providers in addition to keeping the costs low for handset developers.
The TEE can be used by governments, enterprises, and cloud service providers to enable the secure handling of confidential information on mobile devices and on server infrastructure. The TEE offers a level of protection against software attacks generated in the mobile OS and assists in the control of access rights. It achieves this by housing sensitive, 'trusted' applications that need to be isolated and protected from the mobile OS and any malware that may be present. By utilizing the functionality and security levels offered by the TEE, governments and enterprises can be assured that employees using their own devices are doing so in a secure and trusted manner. Likewise, server-based TEEs help defend against internal and external attacks against backend infrastructure.
With the rise of software assets and reuse, modular programming has become the most productive way to design software architecture, decoupling functionality into small, independent modules. As each module contains everything necessary to execute its desired functionality, the TEE allows the complete system to be organized with a high level of reliability and security, while protecting each module from the vulnerabilities of the others.
In order for the modules to communicate and share data, the TEE provides means to securely send and receive payloads between the modules, using mechanisms such as object serialization in conjunction with proxies.
See Component-based software engineering.
The following hardware technologies can be used to support TEE implementations:
|
https://en.wikipedia.org/wiki/Secure_Enclave
|
The following is a list of products, services, and apps provided byGoogle. Active, soon-to-be discontinued, and discontinued products, services, tools, hardware, and other applications are broken out into designated sections.
Applications that are no longer in development and scheduled to be discontinued in the future:
Google has retired many offerings, either because of obsolescence, integration into other Google products, or lack of interest.[21]Google's discontinued offerings are colloquially referred to as Google Graveyard.[22][23]
|
https://en.wikipedia.org/wiki/Titan_M
|
TheApple–FBI encryption disputeconcerns whether and to what extent courts in theUnited Statescan compel manufacturers to assist in unlockingcell phoneswhosedataarecryptographically protected.[1]There is much debate over public access tostrong encryption.[2]
In 2015 and 2016,Apple Inc.received and objected to or challenged at least 11 orders issued byUnited States district courtsunder theAll Writs Actof 1789. Most of these seek to compel Apple "to use its existing capabilities to extract data like contacts, photos and calls from lockediPhonesrunning on operating systemsiOS 7and older" in order to assist in criminal investigations and prosecutions. A few requests, however, involve phones with more extensive security protections, which Apple has no current ability to break. These orders would compel Apple to write new software that would let the government bypass these devices' security and unlock the phones.[3]
The most well-known instance of the latter category was a February 2016 court case in theUnited States District Court for the Central District of California. TheFederal Bureau of Investigation(FBI) wanted Apple to create andelectronically signnew software that would enable the FBI to unlock a work-issuediPhone 5Cit recovered from one of the shooters who, ina December 2015 terrorist attackinSan Bernardino, California, killed 14 people and injured 22. The two attackers later died in a shootout with police, having first destroyed their personal phones. The work phone was recovered intact but was locked with a four-digit passcode and was set to eliminate all its data after ten failed password attempts (a common anti-theft measure on smartphones). Apple declined to create the software, and a hearing was scheduled for March 22. However, a day before the hearing was supposed to happen, the government obtained a delay, saying it had found a third party able to assist in unlocking the iPhone. On March 28, the government claimed that the FBI had unlocked the iPhone and withdrew its request. In March 2018, theLos Angeles Timesreported "the FBI eventually found that Farook's phone had information only about work and revealed nothing about the plot" but cited only government claims, not evidence.[4]
In another case in Brooklyn, a magistrate judge ruled that theAll Writs Actcould not be used to compel Apple to unlock an iPhone. The government appealed the ruling, but then dropped the case on April 22, 2016, saying it had been given the correct passcode.[5]
In 1993, theNational Security Agency(NSA) introduced theClipper chip, an encryption device with an acknowledgedbackdoorfor government access, that NSA proposed be used for phone encryption. The proposal touched off a public debate, known as theCrypto Wars, and the Clipper chip was never adopted.[6]
It was revealed as a part of the 2013 mass surveillance disclosures by Edward Snowden that the NSA and the British Government Communications Headquarters (GCHQ) had access to the user data on iPhones, BlackBerry, and Android phones and could read almost all smartphone information, including SMS, location, emails, and notes.[7] The leak also stated that Apple had been a part of the government's surveillance program since 2012; however, according to an Apple spokesman at the time, the company "had never heard of it".[8]
According toThe New York Times, Apple developed new encryption methods for itsiOSoperating system, versions 8 and later, "so deep that Apple could no longer comply with government warrants asking for customer information to be extracted from devices."[9]Throughout 2015, prosecutors advocated for the U.S. government to be able to compel decryption of iPhone contents.[10][11][12][13]
In September 2015, Apple released a white paper detailing the security measures in its then-newiOS 9operating system. iPhone models including theiPhone 5Ccan be protected by a four-digitPINcode. After more than ten incorrect attempts to unlock the phone with the wrong PIN, the contents of the phone will be rendered inaccessible by erasing theAESencryption keythat protects its stored data. According to the Apple white paper, iOS includes aDevice Firmware Upgrade(DFU) mode, and that "[r]estoring a device after it enters DFU mode returns it to a known good state with the certainty that only unmodified Apple-signed code is present."[14]
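A back-of-the-envelope calculation, based only on the figures above, shows why the auto-erase limit rather than the PIN space is the binding constraint on guessing at the device itself:

```python
# Rough numbers for the protections described above: a 4-digit PIN has only
# 10,000 possibilities, but the erase-after-10-attempts setting caps how
# many of them can ever be tried on the device.
pin_space = 10 ** 4                      # possible 4-digit PINs
attempts_before_erase = 10               # failed tries before the AES key is erased
print(pin_space)                         # 10000
print(attempts_before_erase / pin_space) # 0.001 chance of guessing before erasure
```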
The FBI recovered an AppleiPhone 5C—owned by theSan Bernardino County, Californiagovernment—that had been issued to its employee,Syed Rizwan Farook, one of the shooters involved in the December2015 San Bernardino attack.[15]The attack killed 14 people and seriously injured 22. The two attackers died four hours after the attack in a shootout with police, having previously destroyed their personal phones. Authorities were able to recover Farook's work phone, but could not unlock its four-digit passcode,[16][17]and the phone was programmed to automatically delete all its data after ten failed password attempts.
On February 9, 2016, the FBI announced that it was unable to unlock the county-owned phone it recovered, due to its advanced security features, including encryption of user data.[18][19] The FBI first asked the National Security Agency to break into the phone, but the NSA was unable to, since it only had knowledge of breaking into other devices that are commonly used by criminals, and not iPhones.[20] As a result, the FBI asked Apple Inc. to create a new version of the phone's iOS operating system that could be installed and run in the phone's random access memory to disable certain security features, a version Apple referred to as "GovtOS". Apple declined, due to its policy of never undermining the security features of its products. The FBI responded by successfully applying to a United States magistrate judge, Sheri Pym, to issue a court order mandating Apple to create and provide the requested software.[21] The order was not a subpoena, but rather was issued under the All Writs Act of 1789.[22][23] The court order, called In the Matter of the Search of an Apple iPhone Seized During the Execution of a Search Warrant on a Black Lexus IS300, California License Plate #5KGD203, was filed in the United States District Court for the Central District of California.[24][25][26]
The use of theAll Writs Actto compel Apple to write new software was unprecedented and, according to legal experts, it was likely to prompt "an epic fight pitting privacy against national security."[27]It was also pointed out that the implications of the legal precedent that would be established by the success of this action against Apple would go far beyond issues of privacy.[28]
The court order specified that Apple provide assistance to accomplish the following:
The order also specified that Apple's assistance may include providing software to the FBI that "will be coded by Apple with a unique identifier of the phone so that the [software] would only load and execute on the SUBJECT DEVICE".[25]
There has been much research and analysis of the technical issues presented in the case since the court order was made available to the public.[30]
The February 16, 2016 order issued by Magistrate Judge Pym gave Apple five days to apply for relief if Apple believed the order was "unreasonably burdensome". Apple announced its intent to oppose the order, citing the security risks that the creation of abackdoorwould pose towards customers.[31]It also stated that no government had ever asked for similar access.[32]The company was given until February 26 to fully respond to the court order.[33][34]
On the same day the order was issued,chief executive officerTim Cookreleased an online statement to Apple customers, explaining the company's motives for opposing the court order. He also stated that while they respect the FBI, the request they made threatens data security by establishing a precedent that the U.S. government could use to force any technology company to create software that could undermine the security of its products.[35]He said in part:
The United States government has demanded that Apple take an unprecedented step which threatens the security of our customers. We oppose this order, which has implications far beyond the legal case at hand. This moment calls for public discussion, and we want our customers and people around the country to understand what is at stake.[35]
In response to the opposition, on February 19, theU.S. Department of Justicefiled a new application urging a federal judge to compel Apple to comply with the order.[36]The new application stated that the company could install the software on the phone in its own premises, and after the FBI had hacked the phone via remote connection, Apple could remove and destroy the software.[37]Apple hired attorneysTed Olsonand Theodore J. Boutrous Jr. to fight the order on appeal.[27]
The same day, Apple revealed that in early January it had discussed with the FBI four methods to access data on the iPhone, but, as was revealed by a footnote in the February 19 application to the court, one of the more promising methods was ruled out by a mistake during the investigation of the attack. After the shooter's phone had been recovered, the FBI asked San Bernardino County, the owner of the phone, to reset the password to the shooter's iCloud account in order to acquire data from the iCloud backup. However, this rendered the phone unable to back up recent data to iCloud until the new iCloud password was entered, which in turn requires the phone to be unlocked.[38][39][40] This was confirmed by the U.S. Department of Justice, which then added that any backup would have been "insufficient" because they would not have been able to recover enough information from it.[41]
The government cited as precedentUnited States v. New York Telephone Co., in which theSupreme Courtruled in 1977 that the All Writs Act gave courts the power to demand reasonable technical assistance from the phone company in accessing phone calling records. Apple responded that New York Telephone was already collecting the data in question in the course of its business, something the Supreme Court took note of in its ruling. Apple also asserts that being compelled to write new software "amounts to compelled speech andviewpoint discriminationin violation of theFirst Amendment. ... What is to stop the government from demanding that Apple write code to turn on the microphone in aid of government surveillance, activate the video camera, surreptitiously record conversations, or turn on location services to track the phone's user?" Apple argued that the FBI had not made use of all of the government's tools, such as employing the resources of the NSA. A hearing on the case was scheduled for March 22, 2016.[42]
San Bernardino County District AttorneyMichael Ramosfiled a brief stating the iPhone may contain evidence of a "lying dormant cyber pathogen" that could have been introduced into the San Bernardino County computer network,[43][44][45]as well as identification of a possible third gunman who was alleged to have been seen at the scene of the attack by eyewitnesses.[46]The following day, Ramos told theAssociated Pressthat he did not know whether the shooters had compromised the county's infrastructure, but the only way to know for sure was by gaining access to the iPhone.[47][48]This statement has been criticized by cyber-security professionals as being improbable.[48][49][50][51]
In an interview for aTimemagazine cover story, Cook said that the issue is not "privacy versus security ... it's privacy and security or privacy and safety versus security." Cook also said, "[T]his is the golden age of surveillance that we live in. There is more information about all of us, so much more than ten years ago, or five years ago. It's everywhere. You are leaving digital footprints everywhere."[52]
In a March 21, 2016, Apple press conference, Cook talked about the ongoing conflict with the FBI, saying, "[W]e have a responsibility to protect your data and your privacy. We will not shrink from this responsibility."[53]
On March 21, 2016, the government requested and was granted a delay, saying a third party had demonstrated a possible way to unlock the iPhone in question and the FBI needed more time to determine whether it would work.[54][55][56] On March 28, 2016, the FBI said it had unlocked the iPhone with the third party's help, and an anonymous official said that the hack's applications were limited; the Department of Justice withdrew the case.[57][58] The lawyer for the FBI claimed that they were using the allegedly extracted information to further investigate the case.[59]
On April 7, 2016, FBI DirectorJames Comeysaid that the tool used could only unlock an iPhone 5C like that used by the San Bernardino shooter as well as older iPhone models lacking theTouch IDsensor. Comey also confirmed that the tool was purchased from a third party but would not reveal the source,[60]later indicating the tool cost more than $1.3 million and that they did not purchase the rights to technical details about how the tool functions.[61]Although the FBI claimed they were able to use other technological means to access the cellphone data from the San Bernardino shooter's iPhone 5C, without the aid of Apple, law enforcement still expresses concern over the encryption controversy.[62]
Some news outlets, citing anonymous sources, identified the third party as Israeli companyCellebrite. However,The Washington Postreported that, according to anonymous "people familiar with the matter", the FBI had instead paid "professional hackers" who used azero-dayvulnerability in the iPhone's software to bypass its ten-try limitation, and did not need Cellebrite's assistance.[63][64]In April 2021,The Washington Postreported that the Australian company Azimuth Security, awhite hathacking firm, had been the one to help the FBI, with work from security researchersMark DowdandDavid Wang.[65]In 2020, the New York Times reported that "new data reveals a twist to the encryption debate that undercuts both sides," with public records showing that at least 2,000 US law enforcement agencies had since acquired "tools to get into locked, encrypted phones and extract their data," mostly from Cellebrite andGrayshift.[66]
Apple had previously challenged the U.S. Department of Justice's authority to compel it to unlock an iPhone 5S in a drug case in theUnited States District Court for the Eastern District of New Yorkin Brooklyn (In re Order Requiring Apple Inc. to Assist in the Execution of a Search Warrant Issued by the Court, case number 1:15-mc-01902[68]), after themagistrate judgein the case, James Orenstein, requested Apple's position before issuing an order.[69][70][71]On February 29, 2016, Judge Orenstein denied the government's request, saying the All Writs Act cannot be used to force a company to modify its products: "The implications of the government's position are so far-reaching – both in terms of what it would allow today and what it implies about Congressional intent in 1789 – as to produce impermissibly absurd results."[72]Orenstein went on to criticize the government's stance, writing, "It would be absurd to posit that the authority the government sought was anything other than obnoxious to the law."[68][73][74]The Justice Department appealed the ruling to District Court Judge Margot Brodie.[75]Apple requested a delay while the FBI attempted to access the San Bernardino iPhone without Apple's help.[76]On April 8, after the FBI succeeded, the Justice Department told the Brooklyn court it intended to press forward with its demand for assistance there,[77]but on April 22, the government withdrew its request, telling the court "an individual" (the suspect, according to press reports) had provided the correct passcode.[78]
National reactions to Apple's opposition of the order were mixed.[79]ACBS Newspoll that sampled 1,022 Americans found that 50% of the respondents supported the FBI's stance, while 45% supported Apple's stance.[80]Also, 1,002 surveyed Americans who own smartphones were divided into two sides; 51% were against Apple's decision, while 38% supported their stance.[81]
TheReform Government Surveillancecoalition, which includes major tech firms likeGoogle,Microsoft,Facebook,Yahoo!,Twitter, andLinkedIn, has indicated its opposition to the order.[82][83][84]By March 3, the deadline, a large number ofamicus curiaebriefs were filed with the court, with numerous technology firms supporting Apple's position, including a joint brief fromAmazon.com,Box,Cisco Systems,Dropbox,Evernote, Facebook,Google,Lavabit, Microsoft,Mozilla,Nest Labs,Pinterest,Slack Technologies,Snapchat,WhatsApp, and Yahoo!. Briefs from theAmerican Civil Liberties Union, theElectronic Frontier Foundation, Access Now, and the Center for Democracy and Technology also supported Apple.[85][86][87]
Thethink tankNiskanen Centerhas suggested that the case is adoor-in-the-face techniquedesigned to gain eventual approval forencryptionbackdoors[88]and is viewed as a revival of theCrypto Wars.[89]
U.S. RepresentativeMike Honda, a Democrat who represented theSilicon Valleyregion, voiced his support for Apple.[90]
On February 23, 2016, a series of pro-Apple protests organized byFight for the Futurewere held outside of Apple's stores in over 40 locations.[91][92][93]
Zeid Ra'ad al-Hussein, theUnited Nations High Commissioner for Human Rights, warned the FBI of the potential for "extremely damaging implications" onhuman rightsand that they "risk unlocking aPandora's box" through their investigation.[94]
GeneralMichael Hayden, former director of the NSA and theCentral Intelligence Agency, in a March 7 interview withMaria Bartiromoon theFox Business Network, supported Apple's position, noting that the CIA considerscyber-attacksthe number one threat to U.S. security and saying that "this may be a case where we've got to give up some things in law enforcement and even counter terrorism in order to preserve this aspect, our cybersecurity."[95]
Salihin Kondoker, whose wife was shot in the attack but survived, filed a friend of the court brief siding with Apple; his brief said that he "understand[s] that this software the government wants them to use will be used against millions of other innocent people. I share their fear."[96]
Edward Snowdensaid that the FBI already has the technical means to unlock Apple's devices and said, "The global technological consensus is against the FBI."[97][98]
McAfeefounder andLibertarian Partypresidential primarycandidateJohn McAfeehad publicly volunteered to decrypt the iPhone used by the San Bernardino shooters, avoiding the need for Apple to build a backdoor.[99]He later indicated that the method he would employ, extracting the unique ID from inside the A7 processor chip, is difficult and risks permanently locking the phone, and that he was seeking publicity.[100]
Ron Wyden,Democraticsenator for Oregon and a noted privacy and encryption advocate, questioned the FBI's honesty concerning the contents of the phone. He said in a statement, "There are real questions about whether [the FBI] has been straight with the public on [the Apple case]."[101]
Some families of the victims and survivors of the attack indicated they would file a brief in support of the FBI.[102]
TheNational Sheriffs' Associationhas suggested that Apple's stance is "putting profit over safety" and "has nothing to do with privacy."[73]TheFederal Law Enforcement Officers Association, theAssociation of Prosecuting Attorneys, and the National Sheriffs' Association filed a brief supporting the FBI.[103]
"With Apple's privacy policy for the customers there is no way of getting into a phone without a person's master password. With this policy there will be no backdoor access on the phone for the law enforcement to access the person's private information. This has caused a great dispute between the FBI and Apple's encryption.[62]Apple has closed this backdoor for the law enforcement because they believe that by creating this backdoor it would make it easier for law enforcement, and also make it easier for criminal hackers to gain access to people's personal data on their phone." Former FBI directorJames Comeysays that "We are drifting to a place in this country where there will be zones that are beyond the reach of the law."[62]He believes that this backdoor access is crucial to investigations, and without it many criminals will not be convicted.[62]
SenatorDianne FeinsteinofCalifornia, a Democrat and vice chairman of theSenate Intelligence Committee, voiced her opposition to Apple.[90]Allcandidates for the Republican nominationfor the2016 U.S. presidential electionwho had not dropped out of the race before February 19, 2016, supported the FBI's position, though several expressed concerns about adding backdoors to mobile phones.[104]
On February 23, 2016, theFinancial Timesreported[105]thatBill Gates, founder of Microsoft, has sided with the FBI in the case. However, Gates later said in an interview withBloomberg News"that doesn't state my view on this."[106]He added that he thought the right balance and safeguards need to be found in the courts and in Congress, and that the debate provoked by this case is valuable.[107]
San Bernardino Police Chief Jarrod Burguan said in an interview:
I'll be honest with you, I think that there is a reasonably good chance that there is nothing of any value on the phone. What we are hoping might be on the phone would be potential contacts that we would obviously want to talk to. This is an effort to leave no stone unturned in the investigation. [To] allow this phone to sit there and not make an effort to get the information or the data that may be inside of that phone is simply not fair to the victims and their families.[108]
Manhattan District AttorneyCyrus Vance Jr., said that he wants Apple to unlock 175 iPhones that his office'sCyber-Crime Labhas been unable to access, adding, "Apple should be directed to be able to unlock its phones when there is a court order by an independent judge proving and demonstrating that there's relevant evidence on that phone necessary for an individual case."[109]
FBI Director Comey, testifying before theHouse Judiciary Committee, compared Apple's iPhone security to aguard dog, saying, "We're asking Apple to take the vicious guard dog away and let uspick the lock."[110]
Apple's iOS 8 and later have encryption mechanisms that make it difficult for the government to get through; Apple provided no backdoor that would permit surveillance without the company's discretion. However, Comey stated that he did not want a backdoor method of surveillance and that "We want to use the front door, with clarity and transparency, and with clear guidance provided by law." He believes that special access is required in order to stop criminals such as "terrorists and child molesters".[111]
Both2016 Democratic presidential candidates—formerSecretary of StateHillary Clintonand SenatorBernie Sanders—suggested some compromise should be found.[104][112]
U.S. Defense SecretaryAshton Cartercalled for Silicon Valley and the federal government to work together. "We are squarely behind strong data security and strong encryption, no question about it," he said. Carter also added that he is "not a believer in back doors."[113]
In an address to the 2016South by Southwestconference on March 11, PresidentBarack Obamastated that while he could not comment on the specific case, "You cannot take an absolutist view on [encryption]. If your view is strong encryption no matter what, and we can and should create black boxes, that does not strike the balance that we've lived with for 200 or 300 years. And it's fetishizing our phones above every other value. That can't be the right answer."[114]
On April 13, 2016, U.S. SenatorsRichard BurrandDianne Feinstein, the Republican Chair and senior Democrat on theSenate Intelligence Committee, respectively, released draft legislation that would authorize state and federal judges to order "any person who provides a product or method to facilitate a communication or the processing or storage of data" to provide data in intelligible form or technical assistance in unlocking encrypted data and that any such person who distributes software or devices must ensure they are capable of complying with such an order.[115][116]
In September 2016, theAssociated Press,Vice Media, andGannett(the owner ofUSA Today) filed aFreedom of Information Act(FOIA) lawsuit against the FBI, seeking to compel the agency to reveal who it hired to unlock Farook's iPhone, and how much was paid.[117][118]On September 30, 2017, a federal court ruled against the media organizations and grantedsummary judgmentin the government's favor.[118][119]The court ruled that the company that hacked the iPhone and the amount paid to it by the FBI were national security secrets and "intelligence sources or methods" that are exempt from disclosure under FOIA; the court additionally ruled that the amount paid "reflects a confidential law enforcement technique or procedure" that also falls under a FOIA exemption.[118]
On August 31, 2016, Amy Hess, the FBI's Executive Assistant Director, raised concerns with the Office of Inspector General, alleging there was a disagreement between units of the Operational Technology Division (OTD) about their capability to access Farook's iPhone, namely between the Cryptographic and Electronic Analysis Unit (CEAU) and the Remote Operations Unit (ROU). She also alleged that some OTD officials were indifferent to FBI leadership (herself included)[120] possibly giving misleading testimony to Congress and in court orders that they had no such capability.
Ultimately, the Inspector General's March 2018 report[121]found no evidence that the OTD had withheld knowledge of the ability to unlock Farook's iPhone at the time of Director Comey's congressional testimony of February 9 and March 1, 2016. However, the report also found that poor communication and coordination between the CEAU and ROU meant that "not all relevant personnel had been engaged at the outset".
The ROU Chief (named byViceto be Eric Chuang)[122]said he only became aware of the access problem after a February 11 meeting of the Digital Forensics and Analysis Section (DFAS) - of which the ROU is not a member. While the OTD directors were in frequent contact during the investigation, including discussions about Farook's iPhone, Asst. Dir. Stephen Richardson and the Chief of DFAS, John F. Bennett, believed at the time that a court order was their only alternative.
Chuang claimed the CEAU Chief did not ask for their help due to a "line in the sand" against using classified security tools in domestic criminal cases.[a] The CEAU Chief denied that such a line existed, saying that not using classified techniques was merely a preference. Nevertheless, the perception of this line resulted in the ROU not getting involved until after John Bennett's February 11 meeting asking "anyone" in the bureau to help.
Once Chuang "got the word out", he soon learned that a trusted vendor was "almost 90 percent of the way" to a solution after "many months" of work and asked they prioritize its completion. The unnamed vendor came forward with their solution on March 16, 2016, and successfully demonstrated it to FBI leadership on March 20. TheUS Attorneys Officewas informed the next day and they withdrew their court action against Apple on March 28.
When asked why the ROU was not involved earlier the Chief of Technical Surveillance Section (TSS), Eric Chuang's superior, initially said it was not in his "lane" and it was handled exclusively by the DFAS because "that is their mandate". He later claimed that Farook's phone was discussed from the outset but he did not instruct his unit chiefs to contact outside vendors until after February 11. In either event, neither he nor the ROU were asked to request help from their vendors until mid-February. By the time the Attorneys Office filed their February 16 court order, the ROU had only just begun contacting its vendors.
The CEAU Chief was unable to say with certainty whether the ROU had been consulted beforehand or whether the February 11 meeting was a final "mop-up" before a court action was filed. The CEAU's search for solutions within the FBI was undocumented and was handled informally by a senior engineer whom the CEAU Chief personally trusted to have checked with "everybody".
On the other hand, it is possible that Hess's questioning is what prompted the February 11 "mop-up" meeting. During the CEAU's search, Hess became concerned that she was not getting straight answers from the OTD and that unit chiefs did not know each other's capabilities. The Inspector General stated further:
... the CEAU Chief may not have been interested in researching all possible solutions and instead focused only on unclassified techniques that could readily be disclosed in court that OTD and its partner agencies already had in-hand.
Both Hess and Chuang stated the CEAU Chief seemed not to want to use classified techniques and appeared to have an agenda in pursuing a favorable ruling against Apple. Chuang described the CEAU Chief as "definitely not happy" that they undermined his legal case against Apple and had vented his frustration with him.
Hess said the CEAU Chief wanted to use the case as a "poster child" to resolve the larger problem with encrypted devices known as the "Going Dark challenge". The challenge is defined by the FBI as "changes in technology [that] hinder law enforcement's ability to exercise investigative tools and follow critical leads".[123] As the Los Angeles Times reported in March 2018, the FBI was unable to access data from 7,775 seized devices in its investigations. The unidentified method used to unlock Farook's phone, which cost more than $1 million to obtain, stopped working once Apple updated its operating system.[4]
The Inspector General's report found that statements in the FBI's testimony before Congress were accurate but relied on assumptions that the OTD units were coordinating effectively from the beginning. They also believe the miscommunication delayed finding a technical solution to accessing Farook's iPhone. The FBI disputed this, since the vendor had been working on the project independently "for some time". However, according to Chuang, who described himself as a "relationship holder" for the vendor, the vendor was not actively working to complete the solution, and it was moved to the "front burner" at his request; the TSS Chief agreed with this account.
In response to the Inspector General's report, the FBI intended to add a new OTD section to consolidate resources to address the Going Dark problem and to improve coordination between units.
|
https://en.wikipedia.org/wiki/FBI%E2%80%93Apple_encryption_dispute
|
In cryptography, security level is a measure of the strength that a cryptographic primitive (such as a cipher or hash function) achieves. Security level is usually expressed as a number of "bits of security" (also security strength),[1] where n-bit security means that the attacker would have to perform 2^n operations to break it,[2] but other methods have been proposed that more closely model the costs for an attacker.[3] This allows for convenient comparison between algorithms and is useful when combining multiple primitives in a hybrid cryptosystem, so there is no clear weakest link. For example, AES-128 (128-bit key size) is designed to offer a 128-bit security level, which is considered roughly equivalent to RSA with a 3072-bit key.
In this context,security claimortarget security levelis the security level that a primitive was initially designed to achieve, although "security level" is also sometimes used in those contexts. When attacks are found that have lower cost than the security claim, the primitive is consideredbroken.[4][5]
Symmetric algorithms usually have a strictly defined security claim. For symmetric ciphers, it is typically equal to the key size of the cipher, equivalent to the complexity of a brute-force attack.[5][6] Cryptographic hash functions with an output size of n bits usually have a collision resistance security level of n/2 and a preimage resistance level of n. This is because the general birthday attack can always find collisions in 2^(n/2) steps.[7] For example, SHA-256 offers 128-bit collision resistance and 256-bit preimage resistance.
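As a quick numeric illustration of these generic bounds (real functions can fall short of them, as the SHA-1 attack discussed later in this section shows):

```python
# Generic security levels for an ideal hash with an n-bit output:
# collisions in about 2^(n/2) steps, preimages in about 2^n steps.
def generic_hash_security(output_bits):
    return {"collision_bits": output_bits // 2,   # birthday bound
            "preimage_bits": output_bits}         # exhaustive search

for name, n in [("SHA-256", 256), ("SHA-1", 160)]:
    s = generic_hash_security(n)
    print(f"{name}: ~{s['collision_bits']}-bit collision, "
          f"~{s['preimage_bits']}-bit preimage resistance")
```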
However, there are some exceptions to this. Phelix and Helix are 256-bit ciphers offering a 128-bit security level.[5][8] The SHAKE variants of SHA-3 are also different: for a 256-bit output size, SHAKE-128 provides a 128-bit security level for both collision and preimage resistance.[9]
The design of most asymmetric algorithms (i.e.public-key cryptography) relies on neatmathematical problemsthat are efficient to compute in one direction, but inefficient to reverse by the attacker. However, attacks against current public-key systems are always faster thanbrute-force searchof the key space. Their security level isn't set at design time, but represents acomputational hardness assumption, which is adjusted to match the best currently known attack.[6]
Various recommendations have been published that estimate the security level of asymmetric algorithms, which differ slightly due to different methodologies.
The following table gives examples of typical security levels for types of algorithms, as found in §5.6.1.1 of the US NIST SP 800-57 Recommendation for Key Management.[16]: Table 2
Under NIST recommendation, a key of a given security level should only be transported under protection using an algorithm of equivalent or higher security level.[14]
The security level is given for the cost of breaking one target, not the amortized cost for a group of targets. It takes 2^128 operations to find an AES-128 key, yet the same number of amortized operations is required for any number m of keys. On the other hand, breaking m ECC keys using the rho method requires sqrt(m) times the base cost.[15][17]
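A rough illustration of that multi-target scaling, assuming a 256-bit curve with a 2^128 single-key cost as in the example above:

```python
# For m ECC keys, the rho method costs about sqrt(m) times the single-key
# cost in total, so the amortized cost per key drops as m grows.
import math

base_cost_bits = 128                       # single ECC-256 key: ~2^128 operations
for m in (1, 2**10, 2**20):
    total_bits = base_cost_bits + math.log2(m) / 2   # sqrt(m) x base cost
    per_key_bits = total_bits - math.log2(m)         # amortized per key
    print(f"m = {m}: total ~2^{total_bits:.0f}, per key ~2^{per_key_bits:.0f}")
```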
A cryptographic primitive is considered broken when an attack is found to have less than its advertised level of security. However, not all such attacks are practical: most currently demonstrated attacks take fewer than 2^40 operations, which translates to a few hours on an average PC. The costliest demonstrated attack on hash functions is the 2^61.2 attack on SHA-1, which took 2 months on 900 GTX 970 GPUs and cost US$75,000 (although the researchers estimate only $11,000 was needed to find a collision).[18]
Aumasson draws the line between practical and impractical attacks at 2^80 operations. He proposes a new terminology:[19]
|
https://en.wikipedia.org/wiki/Security_level
|
Time-based one-time password (TOTP) is a computer algorithm that generates a one-time password (OTP) using the current time as a source of uniqueness. As an extension of the HMAC-based one-time password algorithm (HOTP), it has been adopted as Internet Engineering Task Force (IETF) standard RFC 6238.[1]
TOTP is the cornerstone of Initiative for Open Authentication (OATH) and is used in a number of two-factor authentication[1](2FA) systems.
Through the collaboration of several OATH members, a TOTP draft was developed in order to create an industry-backed standard. It complements the event-based one-time standard HOTP, and it offers end user organizations and enterprises more choice in selecting technologies that best fit their application requirements andsecurityguidelines. In 2008, OATH submitted a draft version of the specification to the IETF. This version incorporates all the feedback and commentary that the authors received from the technical community based on the prior versions submitted to the IETF.[2]In May 2011, TOTP officially becameRFC6238.[1]
To establish TOTP authentication, the authenticatee and authenticator must pre-establish both the HOTP parameters and the following TOTP parameters: T0, the Unix time from which to start counting time steps (typically 0), and TX, the length of one time step (typically 30 seconds).
Both the authenticator and the authenticatee compute the TOTP value, then the authenticator checks whether the TOTP value supplied by the authenticatee matches the locally generated TOTP value. Some authenticators allow values that should have been generated before or after the current time in order to account for slight clock skews, network latency and user delays.
TOTP uses the HOTP algorithm, replacing the counter with a non-decreasing value based on the current time:
TOTP value(K) = HOTP value(K, C_T),
where the counter value is C_T = ⌊(T − T0) / TX⌋, with T the current Unix time, T0 the epoch from which time steps are counted (typically 0), and TX the length of one time step (typically 30 seconds).
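A minimal sketch of this construction, using only the Python standard library and the common defaults (HMAC-SHA-1, T0 = 0, TX = 30 seconds, 6 digits); the verify helper also illustrates the acceptance window for clock skew mentioned above. Deployments may use different parameters.

```python
import hashlib
import hmac
import struct
import time

def hotp(key, counter, digits=6):
    # HMAC-SHA-1 over the 8-byte big-endian counter, then dynamic truncation (RFC 4226).
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(key, t=None, t0=0, tx=30):
    t = time.time() if t is None else t
    counter = int((t - t0) // tx)        # C_T = floor((T - T0) / TX)
    return hotp(key, counter)

def verify(key, candidate, skew_steps=1, tx=30):
    # Accept codes from adjacent time steps to tolerate clock skew and delays.
    now = time.time()
    return any(hmac.compare_digest(candidate, totp(key, now + i * tx, tx=tx))
               for i in range(-skew_steps, skew_steps + 1))

secret = b"12345678901234567890"         # the RFC 6238 test-vector key
print(totp(secret))                      # current 6-digit code
print(verify(secret, totp(secret)))      # True
```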
Unlike passwords, TOTP codes are only valid for a limited time. However, users must enter TOTP codes into an authentication page, which creates the potential for phishing attacks. Due to the short window in which TOTP codes are valid, attackers must proxy the credentials in real time.[3]
TOTP credentials are also based on a shared secret known to both the client and the server, creating multiple locations from which a secret can be stolen. An attacker with access to this shared secret could generate new, valid TOTP codes at will. This can be a particular problem if the attacker breaches a large authentication database.[4]
|
https://en.wikipedia.org/wiki/Time-based_one-time_password
|
In mathematics and computer science, an algorithm (/ˈælɡərɪðəm/) is a finite sequence of mathematically rigorous instructions, typically used to solve a class of specific problems or to perform a computation.[1] Algorithms are used as specifications for performing calculations and data processing. More advanced algorithms can use conditionals to divert the code execution through various routes (referred to as automated decision-making) and deduce valid inferences (referred to as automated reasoning).
In contrast, a heuristic is an approach to solving problems without well-defined correct or optimal results.[2] For example, although social media recommender systems are commonly called "algorithms", they actually rely on heuristics as there is no truly "correct" recommendation.
As an effective method, an algorithm can be expressed within a finite amount of space and time[3] and in a well-defined formal language[4] for calculating a function.[5] Starting from an initial state and initial input (perhaps empty),[6] the instructions describe a computation that, when executed, proceeds through a finite[7] number of well-defined successive states, eventually producing "output"[8] and terminating at a final ending state. The transition from one state to the next is not necessarily deterministic; some algorithms, known as randomized algorithms, incorporate random input.[9]
Around 825 AD, Persian scientist and polymath Muḥammad ibn Mūsā al-Khwārizmī wrote kitāb al-ḥisāb al-hindī ("Book of Indian computation") and kitab al-jam' wa'l-tafriq al-ḥisāb al-hindī ("Addition and subtraction in Indian arithmetic"). In the early 12th century, Latin translations of these texts involving the Hindu–Arabic numeral system and arithmetic appeared, for example Liber Alghoarismi de practica arismetrice, attributed to John of Seville, and Liber Algorismi de numero Indorum, attributed to Adelard of Bath.[10] Here, alghoarismi or algorismi is the Latinization of Al-Khwarizmi's name;[1] the text starts with the phrase Dixit Algorismi, or "Thus spoke Al-Khwarizmi".[2]
The word algorism in English came to mean the use of place-value notation in calculations; it occurs in the Ancrene Wisse from circa 1225.[11] By the time Geoffrey Chaucer wrote The Canterbury Tales in the late 14th century, he used a variant of the same word in describing augrym stones, stones used for place-value calculation.[12][13] In the 15th century, under the influence of the Greek word ἀριθμός (arithmos, "number"; cf. "arithmetic"), the Latin word was altered to algorithmus.[14] By 1596, this form of the word was used in English, as algorithm, by Thomas Hood.[15]
One informal definition is "a set of rules that precisely defines a sequence of operations",[16] which would include all computer programs (including programs that do not perform numeric calculations), and any prescribed bureaucratic procedure[17] or cook-book recipe.[18] In general, a program is an algorithm only if it stops eventually[19]—even though infinite loops may sometimes prove desirable. Boolos and Jeffrey (1974, 1999) define an algorithm to be an explicit set of instructions for determining an output, that can be followed by a computing machine or a human who could only carry out specific elementary operations on symbols.[20]
Most algorithms are intended to be implemented as computer programs. However, algorithms are also implemented by other means, such as in a biological neural network (for example, the human brain performing arithmetic or an insect looking for food), in an electrical circuit, or in a mechanical device.
Step-by-step procedures for solving mathematical problems have been recorded since antiquity. This includes Babylonian mathematics (around 2500 BC),[21] Egyptian mathematics (around 1550 BC),[21] Indian mathematics (around 800 BC and later),[22][23] the Ifa Oracle (around 500 BC),[24] Greek mathematics (around 240 BC),[25] Chinese mathematics (around 200 BC and later),[26] and Arabic mathematics (around 800 AD).[27]
The earliest evidence of algorithms is found in ancient Mesopotamian mathematics. A Sumerian clay tablet found in Shuruppak near Baghdad and dated to c. 2500 BC describes the earliest division algorithm.[21] During the Hammurabi dynasty, c. 1800 – c. 1600 BC, Babylonian clay tablets described algorithms for computing formulas.[28] Algorithms were also used in Babylonian astronomy. Babylonian clay tablets describe and employ algorithmic procedures to compute the time and place of significant astronomical events.[29]
Algorithms for arithmetic are also found in ancient Egyptian mathematics, dating back to the Rhind Mathematical Papyrus, c. 1550 BC.[21] Algorithms were later used in ancient Hellenistic mathematics. Two examples are the Sieve of Eratosthenes, which was described in the Introduction to Arithmetic by Nicomachus,[30][25]: Ch 9.2 and the Euclidean algorithm, which was first described in Euclid's Elements (c. 300 BC).[25]: Ch 9.1 Examples of ancient Indian mathematics included the Shulba Sutras, the Kerala School, and the Brāhmasphuṭasiddhānta.[22]
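For illustration, the two Hellenistic algorithms mentioned above can be written compactly in Python:

```python
def gcd(a: int, b: int) -> int:
    """Euclidean algorithm (Euclid's Elements, c. 300 BC)."""
    while b:
        a, b = b, a % b
    return a

def eratosthenes(limit: int) -> list[int]:
    """Sieve of Eratosthenes: all primes up to `limit`."""
    is_prime = [True] * (limit + 1)
    is_prime[0:2] = [False, False]
    for p in range(2, int(limit ** 0.5) + 1):
        if is_prime[p]:
            for multiple in range(p * p, limit + 1, p):
                is_prime[multiple] = False
    return [n for n, prime in enumerate(is_prime) if prime]

print(gcd(252, 105))        # 21
print(eratosthenes(30))     # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```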
The first cryptographic algorithm for deciphering encrypted code was developed byAl-Kindi, a 9th-century Arab mathematician, inA Manuscript On Deciphering Cryptographic Messages. He gave the first description ofcryptanalysisbyfrequency analysis, the earliest codebreaking algorithm.[27]
Bolter credits the invention of the weight-driven clock as "the key invention [ofEurope in the Middle Ages]," specifically theverge escapementmechanism[31]producing the tick and tock of a mechanical clock. "The accurate automatic machine"[32]led immediately to "mechanicalautomata" in the 13th century and "computational machines"—thedifferenceandanalytical enginesofCharles BabbageandAda Lovelacein the mid-19th century.[33]Lovelace designed the first algorithm intended for processing on a computer, Babbage's analytical engine, which is the first device considered a realTuring-completecomputer instead of just acalculator. Although the full implementation of Babbage's second device was not realized for decades after her lifetime, Lovelace has been called "history's first programmer".
Bell and Newell (1971) write that theJacquard loom, a precursor toHollerith cards(punch cards), and "telephone switching technologies" led to the development of the first computers.[34]By the mid-19th century, thetelegraph, the precursor of the telephone, was in use throughout the world. By the late 19th century, theticker tape(c.1870s) was in use, as were Hollerith cards (c. 1890). Then came theteleprinter(c.1910) with its punched-paper use ofBaudot codeon tape.
The electromechanical relay was invented in 1835. Telephone-switching networks built from such relays led to the invention of the digital adding device by George Stibitz in 1937. While working at Bell Laboratories, he observed the "burdensome" use of mechanical calculators with gears. "He went home one evening in 1937 intending to test his idea... When the tinkering was over, Stibitz had constructed a binary adding device".[35][36]
In 1928, a partial formalization of the modern concept of algorithms began with attempts to solve theEntscheidungsproblem(decision problem) posed byDavid Hilbert. Later formalizations were framed as attempts to define "effective calculability"[37]or "effective method".[38]Those formalizations included theGödel–Herbrand–Kleenerecursive functions of 1930, 1934 and 1935,Alonzo Church'slambda calculusof 1936,Emil Post'sFormulation 1of 1936, andAlan Turing'sTuring machinesof 1936–37 and 1939.
Algorithms can be expressed in many kinds of notation, includingnatural languages,pseudocode,flowcharts,drakon-charts,programming languagesorcontrol tables(processed byinterpreters). Natural language expressions of algorithms tend to be verbose and ambiguous and are rarely used for complex or technical algorithms. Pseudocode, flowcharts, drakon-charts, and control tables are structured expressions of algorithms that avoid common ambiguities of natural language. Programming languages are primarily for expressing algorithms in a computer-executable form but are also used to define or document algorithms.
There are many possible representations andTuring machineprograms can be expressed as a sequence of machine tables (seefinite-state machine,state-transition table, andcontrol tablefor more), as flowcharts and drakon-charts (seestate diagramfor more), as a form of rudimentarymachine codeorassembly codecalled "sets of quadruples", and more. Algorithm representations can also be classified into three accepted levels of Turing machine description: high-level description, implementation description, and formal description.[39]A high-level description describes the qualities of the algorithm itself, ignoring how it is implemented on the Turing machine.[39]An implementation description describes the general manner in which the machine moves its head and stores data to carry out the algorithm, but does not give exact states.[39]In the most detail, a formal description gives the exact state table and list of transitions of the Turing machine.[39]
The graphical aid called aflowchartoffers a way to describe and document an algorithm (and a computer program corresponding to it). It has four primary symbols: arrows showing program flow, rectangles (SEQUENCE, GOTO), diamonds (IF-THEN-ELSE), and dots (OR-tie). Sub-structures can "nest" in rectangles, but only if a single exit occurs from the superstructure.
It is often important to know how much time, storage, or other cost an algorithm may require. Methods have been developed for the analysis of algorithms to obtain such quantitative answers (estimates); for example, an algorithm that adds up the elements of a list of n numbers would have a time requirement of O(n), using big O notation. The algorithm only needs to remember two values: the sum of all the elements so far, and its current position in the input list. If the space required to store the input numbers is not counted, it has a space requirement of O(1), otherwise O(n) is required.
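A minimal sketch of this summation example (the function name and C integer types are illustrative assumptions); it keeps only the running total and the loop index, so the extra space is constant while the running time grows linearly with n:

```c
/* Sums a list of n numbers: O(n) time, O(1) extra space
   (only the running total and the current position are kept). */
long sum_list(const int list[], int n)
{
    long total = 0;                  /* sum of all elements seen so far */
    for (int i = 0; i < n; i++)      /* i is the current position in the input list */
        total += list[i];
    return total;
}
```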
Different algorithms may complete the same task with a different set of instructions in less or more time, space, or 'effort' than others. For example, a binary search algorithm (with cost O(log n)) outperforms a sequential search (cost O(n)) when used for table lookups on sorted lists or arrays.
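As an illustration of the faster side of this comparison, here is a sketch of binary search over a sorted integer array (identifiers and the element type are assumptions for illustration):

```c
/* Binary search on a sorted array: each comparison halves the remaining
   range, giving O(log n) lookups versus O(n) for a sequential scan.
   Returns the index of key, or -1 if key is not present. */
int binary_search(const int sorted[], int n, int key)
{
    int lo = 0, hi = n - 1;
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2;   /* midpoint, written to avoid overflow */
        if (sorted[mid] == key)
            return mid;
        else if (sorted[mid] < key)
            lo = mid + 1;               /* discard the lower half */
        else
            hi = mid - 1;               /* discard the upper half */
    }
    return -1;
}
```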
Theanalysis, and study of algorithmsis a discipline ofcomputer science. Algorithms are often studied abstractly, without referencing any specificprogramming languageor implementation. Algorithm analysis resembles other mathematical disciplines as it focuses on the algorithm's properties, not implementation.Pseudocodeis typical for analysis as it is a simple and general representation. Most algorithms are implemented on particular hardware/software platforms and theiralgorithmic efficiencyis tested using real code. The efficiency of a particular algorithm may be insignificant for many "one-off" problems but it may be critical for algorithms designed for fast interactive, commercial, or long-life scientific usage. Scaling from small n to large n frequently exposes inefficient algorithms that are otherwise benign.
Empirical testing is useful for uncovering unexpected interactions that affect performance.Benchmarksmay be used to compare before/after potential improvements to an algorithm after program optimization.
Empirical tests cannot replace formal analysis, though, and are non-trivial to perform fairly.[40]
To illustrate the potential improvements possible even in well-established algorithms, a recent significant innovation, relating toFFTalgorithms (used heavily in the field of image processing), can decrease processing time up to 1,000 times for applications like medical imaging.[41]In general, speed improvements depend on special properties of the problem, which are very common in practical applications.[42]Speedups of this magnitude enable computing devices that make extensive use of image processing (like digital cameras and medical equipment) to consume less power.
Algorithm design is a method or mathematical process for problem-solving and engineering algorithms. The design of algorithms is part of many solution theories, such as divide-and-conquer or dynamic programming within operations research. Techniques for designing and implementing algorithm designs are also called algorithm design patterns,[43] with examples including the template method pattern and the decorator pattern. One of the most important aspects of algorithm design is resource (run-time, memory usage) efficiency; big O notation is used to describe, for example, an algorithm's run-time growth as the size of its input increases.[44]
Per theChurch–Turing thesis, any algorithm can be computed by anyTuring completemodel. Turing completeness only requires four instruction types—conditional GOTO, unconditional GOTO, assignment, HALT. However, Kemeny and Kurtz observe that, while "undisciplined" use of unconditional GOTOs and conditional IF-THEN GOTOs can result in "spaghetti code", a programmer can write structured programs using only these instructions; on the other hand "it is also possible, and not too hard, to write badly structured programs in a structured language".[45]Tausworthe augments the threeBöhm-Jacopini canonical structures:[46]SEQUENCE, IF-THEN-ELSE, and WHILE-DO, with two more: DO-WHILE and CASE.[47]An additional benefit of a structured program is that it lends itself toproofs of correctnessusingmathematical induction.[48]
By themselves, algorithms are not usually patentable. In the United States, a claim consisting solely of simple manipulations of abstract concepts, numbers, or signals does not constitute "processes" (USPTO 2006), so algorithms are not patentable (as inGottschalk v. Benson). However practical applications of algorithms are sometimes patentable. For example, inDiamond v. Diehr, the application of a simplefeedbackalgorithm to aid in the curing ofsynthetic rubberwas deemed patentable. Thepatenting of softwareis controversial,[49]and there are criticized patents involving algorithms, especiallydata compressionalgorithms, such asUnisys'sLZW patent. Additionally, some cryptographic algorithms have export restrictions (seeexport of cryptography).
Another way of classifying algorithms is by their design methodology orparadigm. Some common paradigms are:
Foroptimization problemsthere is a more specific classification of algorithms; an algorithm for such problems may fall into one or more of the general categories described above as well as into one of the following:
One of the simplest algorithms finds the largest number in a list of numbers of random order. Finding the solution requires looking at every number in the list. From this follows a simple algorithm, which can be described in plain English as:
High-level description:
(Quasi-)formal description:Written in prose but much closer to the high-level language of a computer program, the following is the more formal coding of the algorithm inpseudocodeorpidgin code:
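A minimal C sketch of the same "largest number in a list" procedure (identifiers are illustrative):

```c
#include <stdio.h>

/* Assume the first item is the largest so far, then scan the rest,
   updating the answer whenever a bigger item is found. */
int find_largest(const int list[], int n)    /* n > 0 is assumed */
{
    int largest = list[0];
    for (int i = 1; i < n; i++)
        if (list[i] > largest)
            largest = list[i];
    return largest;
}

int main(void)
{
    int numbers[] = {7, 1, 9, 3, 8, 2};
    printf("%d\n", find_largest(numbers, 6));   /* prints 9 */
    return 0;
}
```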
|
https://en.wikipedia.org/wiki/Algorithm
|
Acheck digitis a form of redundancy check used forerror detectionon identification numbers, such as bank account numbers, which are used in an application where they will at least sometimes be input manually. It is analogous to a binaryparity bitused to check for errors in computer-generated data. It consists of one or more digits (or letters) computed by an algorithm from the other digits (or letters) in the sequence input.[1]
With a check digit, one can detect simple errors in the input of a series of characters (usually digits) such as a single mistyped digit or some permutations of two successive digits.
Check digitalgorithmsare generally designed to capturehumantranscription errors. In order of complexity, these include the following:[2]
In choosing a system, a high probability of catching errors is traded off against implementation difficulty; simple check digit systems are easily understood and implemented by humans but do not catch as many errors as complex ones, which require sophisticated programs to implement.
A desirable feature is that left-padding with zeros should not change the check digit. This allows variable length numbers to be used and the length to be changed.
If there is a single check digit added to the original number, the system will not always capturemultipleerrors, such as two replacement errors (12 → 34) though, typically, double errors will be caught 90% of the time (both changes would need to change the output by offsetting amounts).
A very simple check digit method would be to take the sum of all digits (digital sum)modulo10. This would catch any single-digit error, as such an error would always change the sum, but does not catch any transposition errors (switching two digits) as re-ordering does not change the sum.
A slightly more complex method is to take theweighted sumof the digits, modulo 10, with different weights for each number position.
To illustrate this, for example if the weights for a four digit number were 5, 3, 2, 7 and the number to be coded was 4871, then one would take 5×4 + 3×8 + 2×7 + 7×1 = 65, i.e. 65 modulo 10, and the check digit would be 5, giving 48715.
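A small sketch of this weighted-sum scheme, reproducing the 4871 example (the function name and fixed four-digit interface are illustrative assumptions):

```c
#include <stdio.h>

/* Weighted-sum check digit with weights 5, 3, 2, 7, reduced modulo 10. */
int weighted_check_digit(const int digits[4])
{
    static const int weights[4] = {5, 3, 2, 7};
    int sum = 0;
    for (int i = 0; i < 4; i++)
        sum += weights[i] * digits[i];
    return sum % 10;                 /* 4871 gives 65, so the check digit is 5 */
}

int main(void)
{
    int number[4] = {4, 8, 7, 1};
    printf("%d\n", weighted_check_digit(number));   /* prints 5, giving 48715 */
    return 0;
}
```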
Systems with weights of 1, 3, 7, or 9, with the weights on neighboring numbers being different, are widely used: for example, 3 1 3 1 weights in UPC codes, 1 3 1 3 weights in EAN numbers (GS1 algorithm), and the 3 7 1 3 7 1 3 7 1 weights used in United States bank routing transit numbers. This system detects all single-digit errors and around 90%[citation needed] of transposition errors. 1, 3, 7, and 9 are used because they are coprime with 10, so changing any digit changes the check digit; using a coefficient that is divisible by 2 or 5 would lose information (because 5×0 = 5×2 = 5×4 = 5×6 = 5×8 = 0 modulo 10) and thus not catch some single-digit errors. Using different weights on neighboring numbers means that most transpositions change the check digit; however, because all weights differ by an even number, this does not catch transpositions of two digits whose values differ by 5 (0 and 5, 1 and 6, 2 and 7, 3 and 8, 4 and 9), since an even weight difference multiplied by a digit difference of 5 is a multiple of 10.
The ISBN-10 code instead uses modulo 11, which is prime, and all the number positions have different weights 1, 2, ... 10. This system thus detects all single-digit substitution and transposition errors (including jump transpositions), but at the cost of the check digit possibly being 10, represented by "X". (An alternative is simply to avoid using the serial numbers which result in an "X" check digit.) ISBN-13 instead uses the GS1 algorithm used in EAN numbers.
More complicated algorithms include theLuhn algorithm(1954), which captures 98% of single-digit transposition errors (it does not detect 90 ↔ 09) and the still more sophisticatedVerhoeff algorithm(1969), which catches all single-digit substitution and transposition errors, and many (but not all) more complex errors. Similar is anotherabstract algebra-based method, theDamm algorithm(2004), that too detects all single-digit errors and all adjacent transposition errors. These three methods use a single check digit and will therefore fail to capture around 10%[citation needed]of more complex errors. To reduce this failure rate, it is necessary to use more than one check digit (for example, the modulo 97 check referred to below, which uses two check digits—for the algorithm, seeInternational Bank Account Number) and/or to use a wider range of characters in the check digit, for example letters plus numbers.
The final digit of aUniversal Product Code,International Article Number,Global Location NumberorGlobal Trade Item Numberis a check digit computed as follows:[3][4]
A GS1 check digit calculator and detailed documentation is online at GS1's website.[5]Another official calculator page shows that the mechanism for GTIN-13 is the same forGlobal Location Number/GLN.[6]
For instance, the UPC-A barcode for a box of tissues is "036000241457". The last digit is the check digit "7", and if the other numbers are correct then the check digit calculation must produce 7.
Another example: to calculate the check digit for the following food item "01010101010x".
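A sketch of the standard GS1 weighting scheme (alternating weights of 3 and 1 from the rightmost data digit), reproducing both examples; identifiers are chosen for illustration only:

```c
#include <stdio.h>
#include <string.h>

/* GS1 check digit (UPC-A, EAN-13, GTIN): weight the data digits 3, 1, 3, 1, ...
   from the right, sum the products, and return the amount needed to reach
   the next multiple of 10. The argument excludes the check digit itself. */
int gs1_check_digit(const char *digits)
{
    size_t len = strlen(digits);
    int sum = 0;
    for (size_t i = 0; i < len; i++) {
        int d = digits[len - 1 - i] - '0';
        sum += (i % 2 == 0) ? 3 * d : d;   /* rightmost data digit gets weight 3 */
    }
    return (10 - sum % 10) % 10;
}

int main(void)
{
    printf("%d\n", gs1_check_digit("03600024145"));  /* prints 7, as on the tissue box */
    printf("%d\n", gs1_check_digit("01010101010"));  /* prints 5 for the food-item example */
    return 0;
}
```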
The final character of a ten-digitInternational Standard Book Numberis a check digit computed so that multiplying each digit by its position in the number (counting from the right) and taking the sum of these productsmodulo11 is 0. The digit the farthest to the right (which is multiplied by 1) is the check digit, chosen to make the sum correct. It may need to have the value 10, which is represented as the letter X. For example, take theISBN0-201-53082-1: The sum of products is 0×10 + 2×9 + 0×8 + 1×7 + 5×6 + 3×5 + 0×4 + 8×3 + 2×2 + 1×1 = 99 ≡ 0 (mod 11). So the ISBN is valid. Positions can also be counted from left, in which case the check digit is multiplied by 10, to check validity: 0×1 + 2×2 + 0×3 + 1×4 + 5×5 + 3×6 + 0×7 + 8×8 + 2×9 + 1×10 = 143 ≡ 0 (mod 11).
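A compact sketch of this ISBN-10 rule (the function name is illustrative), checked against the example above:

```c
#include <stdio.h>

/* ISBN-10 validation: multiply each of the 10 characters by its position
   counting from the right (10 down to 1) and require the sum to be
   divisible by 11; 'X' stands for the value 10. Hyphens must be removed. */
int isbn10_is_valid(const char *isbn)
{
    int sum = 0;
    for (int i = 0; i < 10; i++) {
        int value = (isbn[i] == 'X') ? 10 : isbn[i] - '0';
        sum += value * (10 - i);     /* leftmost digit weight 10, check digit weight 1 */
    }
    return sum % 11 == 0;
}

int main(void)
{
    printf("%d\n", isbn10_is_valid("0201530821"));   /* prints 1: 0-201-53082-1 is valid */
    return 0;
}
```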
ISBN 13 (in use January 2007) is equal to theEAN-13code found underneath a book's barcode. Its check digit is generated in a similar way to the UPC.[7]
The check digit is computed as follows:
For example, take theISBN978-0747532699, belonging toHarry Potter and the Philosopher's Stone.9 is the check digit here, so the calculations must yield 9 at the end.
The NOID Check Digit Algorithm (NCDA),[8]in use since 2004, is designed for application inpersistent identifiersand works with variable length strings of letters and digits, called extended digits. It is widely used with theARK identifierscheme and somewhat used with schemes, such as theHandle SystemandDOI. An extended digit is constrained tobetanumericcharacters, which are alphanumerics minus vowels and the letter 'l' (ell). This restriction helps when generating opaque strings that are unlikely to form words by accident and will not contain both O and 0, or l and 1. Having a prime radix of R=29, the betanumeric repertoire permits the algorithm to guarantee detection of single-character and transposition errors[9]for strings less than R=29 characters in length (beyond which it provides a slightly weaker check). The algorithm generalizes to any character repertoire with a prime radix R and strings less than R characters in length.
Notable algorithms include:
|
https://en.wikipedia.org/wiki/Check_digit
|
Inerror detection, theDamm algorithmis acheck digitalgorithmthat detects allsingle-digit errorsand alladjacent transposition errors. It was presented by H. Michael Damm in 2004,[1]as a part of his PhD dissertation entitledTotally Antisymmetric Quasigroups.
The Damm algorithm is similar to the Verhoeff algorithm. It too will detect all occurrences of the two most frequently appearing types of transcription errors, namely altering a single digit or transposing two adjacent digits (including the transposition of the trailing check digit and the preceding digit).[1][2] The Damm algorithm has the benefit that it does not require the specially constructed permutations and position-specific powers of the Verhoeff scheme. A table of inverses can also be dispensed with when all main diagonal entries of the operation table are zero.
The Damm algorithm generates only 10 possible values, avoiding the need for a non-digit character (such as theXin the10-digit ISBNcheck digitscheme).
Prepending leading zeros does not affect the check digit (a weakness for variable-length codes).[1]
There are totally anti-symmetric quasigroups that detect all phonetic errors associated with the English language (13 ↔ 30, 14 ↔ 40, ..., 19 ↔ 90). The table used in the illustrative example below is based on an instance of such a quasigroup.
For all checksum algorithms, including the Damm algorithm, prepending leading zeroes does not affect the check digit,[1]so 1, 01, 001, etc. produce the same check digit. Consequently variable-length codes should not be verified together.
Its essential part is aquasigroupoforder10 (i.e. having a10 × 10Latin squareas the body of itsoperation table) with the special feature of beingweakly totally anti-symmetric.[3][4][i][ii][iii]Damm revealed several methods to create totally anti-symmetric quasigroups of order 10 and gave some examples in his doctoral dissertation.[3][i]With this, Damm also disproved an old conjecture that totally anti-symmetric quasigroups of order 10 do not exist.[5]
A quasigroup (Q, ∗) is called totally anti-symmetric if for all c, x, y ∈ Q, the following implications hold:[4]
(c ∗ x) ∗ y = (c ∗ y) ∗ x ⇒ x = y, and x ∗ y = y ∗ x ⇒ x = y,
and it is called weak totally anti-symmetric if only the first implication holds. Damm proved that the existence of a totally anti-symmetric quasigroup of order n is equivalent to the existence of a weak totally anti-symmetric quasigroup of order n. For the Damm algorithm with the check equation (⋯((0 ∗ x_m) ∗ x_{m−1}) ∗ ⋯) ∗ x_0 = 0,
a weak totally anti-symmetric quasigroup with the property x ∗ x = 0 is needed. Such a quasigroup can be constructed from any totally anti-symmetric quasigroup by rearranging the columns in such a way that all zeros lie on the diagonal. And, on the other hand, from any weak totally anti-symmetric quasigroup a totally anti-symmetric quasigroup can be constructed by rearranging the columns in such a way that the first row is in natural order.[3]
The validity of a digit sequence containing a check digit is defined over a quasigroup. A quasigroup table ready for use can be taken from Damm's dissertation (pages 98, 106, 111).[3]It is useful if each main diagonal entry is0,[1]because it simplifies the check digit calculation.
Prerequisite:The main diagonal entries of the table are0.
The following operation table will be used.[1] It may be obtained from the totally anti-symmetric quasigroup x ∗ y on page 111 of Damm's doctoral dissertation[3] by rearranging the rows and changing the entries with the permutation φ = (1 2 9 5 4 8 6 7 3) and defining x ⋅ y = φ⁻¹(φ(x) ∗ y).
Suppose we choose the number (digit sequence)572.
The resulting interim digit is4. This is the calculated check digit. We append it to the number and obtain5724.
The resulting interim digit is0, hence the number isvalid.
This is the above example showing the detail of the algorithm generating the check digit (dashed blue arrow) and verifying the number572with the check digit.
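The sketch below uses a commonly published order-10 weakly totally anti-symmetric quasigroup with a zero diagonal; it reproduces the interim digits of the example above (4 for 572, 0 for 5724). Identifiers are illustrative:

```c
#include <stdio.h>

/* One commonly published Damm quasigroup of order 10 (all diagonal entries zero). */
static const int damm_table[10][10] = {
    {0, 3, 1, 7, 5, 9, 8, 6, 4, 2},
    {7, 0, 9, 2, 1, 5, 4, 8, 6, 3},
    {4, 2, 0, 6, 8, 7, 1, 3, 5, 9},
    {1, 7, 5, 0, 9, 8, 3, 4, 2, 6},
    {6, 1, 2, 3, 0, 4, 5, 9, 7, 8},
    {3, 6, 7, 4, 2, 0, 9, 5, 8, 1},
    {5, 8, 6, 9, 7, 2, 0, 1, 3, 4},
    {8, 9, 4, 5, 3, 6, 2, 0, 1, 7},
    {9, 4, 3, 8, 6, 1, 7, 2, 0, 5},
    {2, 5, 8, 1, 4, 3, 6, 7, 9, 0},
};

/* Returns the interim digit after processing the digit string: it is the
   check digit when computing, and 0 when verifying a valid number. */
int damm(const char *digits)
{
    int interim = 0;
    for (const char *p = digits; *p; ++p)
        interim = damm_table[interim][*p - '0'];
    return interim;
}

int main(void)
{
    printf("%d\n", damm("572"));    /* prints 4: the check digit, giving 5724 */
    printf("%d\n", damm("5724"));   /* prints 0: the number is valid */
    return 0;
}
```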
|
https://en.wikipedia.org/wiki/Damm_algorithm
|
Data degradation is the gradual corruption of computer data due to an accumulation of non-critical failures in a data storage device. It is also referred to as data decay, data rot or bit rot.[1] This results in a decline in data quality over time, even when the data is not being utilized. The concept of data degradation also covers the progressive minimization of data in interconnected processes, where data is used for multiple purposes at different levels of detail: at specific points in the process chain, data is irreversibly reduced to a level that remains sufficient for the successful completion of the following steps.[2]
Data degradation indynamic random-access memory(DRAM) can occur when theelectric chargeof abitin DRAM disperses, possibly altering program code or stored data. DRAM may be altered bycosmic rays[3]or other high-energy particles. Such data degradation is known as asoft error.[4]ECC memorycan be used to mitigate this type of data degradation.[5]
Data degradation results from the gradual decay ofstorage mediaover the course of years or longer. Causes vary by medium.
EPROMs, flash memory and other solid-state drives store data using electrical charges, which can slowly leak away due to imperfect insulation. Modern flash controller chips account for this leakage by trying several lower threshold voltages (until ECC passes), prolonging the age of data. Multi-level cells, with much smaller distances between voltage levels, cannot be considered stable without this functionality.[6]
The chip itself is not affected by this, so reprogramming it approximately once per decade prevents decay. An undamaged copy of the master data is required for the reprogramming. Achecksumcan be used to assure that the on-chip data is not yet damaged and ready for reprogramming.
The typical SD card, USB stick and M.2 NVMe all have a limited endurance. Power on can usually recover data[citation needed]but error rates will eventually degrade the media to illegibility. Writing zeros to a degraded NAND device can revive the storage to close to new condition for further use.[citation needed]Refresh cycles should be no longer than 6 months to be sure the device is legible.
Magnetic media, such as hard disk drives, floppy disks and magnetic tapes, may experience data decay as bits lose their magnetic orientation. Higher temperatures speed up the rate of magnetic loss. As with solid-state media, re-writing is useful as long as the medium itself is not damaged (see below).[7] Modern hard drives use giant magnetoresistance and have a higher magnetic lifespan on the order of decades. They also automatically correct any errors detected by ECC through rewriting. However, the drive's reliance on servo data written by a servowriter can complicate recovery if that servo data itself becomes unreadable.
Floppy disks and tapes are poorly protected against ambient air. In warm/humid conditions, they are prone to the physicaldecompositionof the storage medium.[8][7]
Optical mediasuch asCD-R,DVD-RandBD-R, may experience data decay from thebreakdownof the storage medium. This can be mitigated by storing discs in a dark, cool, low humidity location. "Archival quality" discs are available with an extended lifetime, but are still not permanent. However,data integrity scanningthat measures the rates of various types of errors is able to predict data decay on optical media well ahead of uncorrectable data loss occurring.[9]
Both the disc dye and the disc backing layer are potentially susceptible to breakdown. Early cyanine-based dyes used in CD-R were notorious for their lack of UV stability. Early CDs also suffered from CD bronzing, which is related to a combination of bad lacquer material and failure of the aluminum reflection layer.[10] Later discs use more stable dyes or forgo them for an inorganic mixture. The aluminum layer is also commonly swapped out for gold or silver alloy.
Paper media, such aspunched cardsandpunched tape, may literallyrot.Mylarpunched tape is another approach that does not rely on electromagnetic stability. Degradation ofbooksandprinting paperis primarily driven byacid hydrolysisofglycosidic bondswithin thecellulosemolecule as well as byoxidation;[11]degradation of paper is accelerated by highrelative humidity, high temperature, as well as by exposure to acids, oxygen, light, and various pollutants, including variousvolatile organic compoundsandnitrogen dioxide.[12]
Data degradation instreaming mediaacquisition modules, as addressed by the repair algorithms, reflects real-time data quality issues caused by device limitations. However, a more general form of data degradation refers to the gradual decay of storage media over extended periods, influenced by factors like physical wear, environmental conditions, or technological obsolescence. Causes of such degradation can vary depending on the medium, such as magnetic fields in hard drives, moisture or temperature for tape storage, or electronic failure over time.[13]
One manifestation of data degradation is when one or a few bits are randomly flipped over a long period of time.[14]This is illustrated by several digital images below, all consisting of 326,272 bits. The original photo is displayed first. In the next image, a single bit was changed from 0 to 1. In the next two images, two and three bits were flipped. OnLinuxsystems, the binary difference between files can be revealed using thecmpcommand (e.g.cmp -b bitrot-original.jpg bitrot-1bit-changed.jpg).
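As a purely illustrative sketch (not taken from the source), flipping a single bit of a buffer in memory mimics the kind of single-bit decay shown in those images:

```c
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

/* Flip one bit of a buffer, as a stand-in for one randomly decayed bit. */
void flip_bit(uint8_t *buffer, size_t bit_index)
{
    buffer[bit_index / 8] ^= (uint8_t)(1u << (bit_index % 8));
}

int main(void)
{
    uint8_t data[2] = {0x00, 0xFF};
    flip_bit(data, 3);                        /* flip bit 3 of the first byte */
    printf("%02X %02X\n", data[0], data[1]);  /* prints "08 FF" */
    return 0;
}
```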
This deterioration can be caused by a variety of factors that impact the reliability and integrity of digital information, including physical factors,software errors, security breaches,human error, obsolete technology, and unauthorized access incidents.[15][16][17][18]
Most disk,disk controllerand higher-level systems are subject to a slight chance of unrecoverable failure. With ever-growing disk capacities, file sizes, and increases in the amount of data stored on a disk, the likelihood of the occurrence of data decay and other forms of uncorrected and undetecteddata corruptionincreases.[19]
Low-level disk controllers typically employerror correction codes(ECC) to correct erroneous data.[20]
Higher-level software systems may be employed to mitigate the risk of such underlying failures by increasing redundancy and implementing integrity checking, error correction codes and self-repairing algorithms.[21]TheZFSfile systemwas designed to address many of these data corruption issues.[22]TheBtrfsfile system also includes data protection and recovery mechanisms,[23][better source needed]as doesReFS.[24]
There is no solution that completely eliminates the threat of data degradation,[25]but various measures exist that can stave it off. One of these is toreplicate the dataasbackups. Both the original and backed data are thenauditedfor any faults due to storage media errors bychecksummingthe data or comparing it with that of other copies. This is the only way to detectlatentfaults proactively,[26]which might otherwise go unnoticed until the data is actually accessed.[27]Current storage systems such as those based onRAIDalready employ such measures internally.[28]Ideally, and especially for data that must bepreserved digitally, the replicas should be distributed across multiple administrative sites that function autonomously and deploy various hardware and software, increasing resistance to failure, as well as human error and cyberattacks.[29]
|
https://en.wikipedia.org/wiki/Data_rot
|
File verificationis the process of using analgorithmfor verifying the integrity of acomputer file, usually bychecksum. This can be done bycomparing two filesbit-by-bit, but requires two copies of the same file, and may miss systematic corruptions which might occur to both files. A more popular approach is to generate ahashof the copied file and comparing that to the hash of the original file.
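A minimal sketch of the bit-by-bit (here, byte-by-byte) comparison approach; the function and parameter names are illustrative assumptions:

```c
#include <stdio.h>

/* Returns 1 if the two files have identical contents, 0 if they differ
   or cannot be opened. */
int files_identical(const char *path_a, const char *path_b)
{
    FILE *a = fopen(path_a, "rb");
    FILE *b = fopen(path_b, "rb");
    int same = (a != NULL && b != NULL);

    while (same) {
        int ca = fgetc(a);
        int cb = fgetc(b);
        if (ca != cb)
            same = 0;        /* mismatch, or one file ended before the other */
        else if (ca == EOF)
            break;           /* both ended together: contents are identical */
    }
    if (a) fclose(a);
    if (b) fclose(b);
    return same;
}
```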
File integrity can be compromised, usually referred to as the file becoming corrupted. A file can become corrupted in a variety of ways: faulty storage media, errors in transmission, write errors during copying or moving, software bugs, and so on.
Hash-based verification ensures that a file has not been corrupted by comparing the file's hash value to a previously calculated value. If these values match, the file is presumed to be unmodified. Due to the nature of hash functions,hash collisionsmay result infalse positives, but the likelihood of collisions is often negligible with random corruption.
It is often desirable to verify that a file hasn't been modified in transmission or storage by untrusted parties, for example, to include malicious code such as viruses or backdoors. To verify authenticity, a classical hash function is not enough, as such functions are not designed to be collision resistant; it is computationally trivial for an attacker to cause deliberate hash collisions, meaning that a malicious change in the file is not detected by a hash comparison. In cryptography, this kind of attack is called a preimage attack.
For this purpose,cryptographic hash functionsare employed often. As long as the hash sums cannot be tampered with — for example, if they are communicated over a secure channel — the files can be presumed to be intact. Alternatively,digital signaturescan be employed to assuretamper resistance.
Achecksum fileis a small file that contains the checksums of other files.
There are a few well-known checksum file formats.[1]
Several utilities, such asmd5deep, can use such checksum files to automatically verify an entire directory of files in one operation.
The particular hash algorithm used is often indicated by the file extension of the checksum file.
The ".sha1" file extension indicates a checksum file containing 160-bitSHA-1hashes insha1sumformat.
The ".md5" file extension, or a file named "MD5SUMS", indicates a checksum file containing 128-bitMD5hashes inmd5sumformat.
The ".sfv" file extension indicates a checksum file containing 32-bit CRC32 checksums insimple file verificationformat.
The "crc.list" file indicates a checksum file containing 32-bit CRC checksums in brik format.
As of 2012, the best-practice recommendation is to use SHA-2 or SHA-3 to generate new file integrity digests, and to accept MD5 and SHA-1 digests for backward compatibility if stronger digests are not available.
The theoretically weaker SHA-1, the weaker MD5, or much weaker CRC were previously commonly used for file integrity checks.[2][3][4][5][6][7][8][9][10]
CRC checksums cannot be used to verify the authenticity of files, as CRC32 is not a collision resistant hash function; even if the hash sum file is not tampered with, it is computationally trivial for an attacker to craft a modified file with the same CRC digest as the original, meaning that a malicious change in the file is not detected by a CRC comparison.[citation needed]
|
https://en.wikipedia.org/wiki/File_verification
|
TheFletcher checksumis analgorithmfor computing aposition-dependent checksumdevised by John G. Fletcher (1934–2012) atLawrence Livermore Labsin the late 1970s.[1]The objective of the Fletcher checksum was to provide error-detection properties approaching those of acyclic redundancy checkbut with the lower computational effort associated with summation techniques.
As with simpler checksum algorithms, the Fletcher checksum involves dividing thebinary dataword to be protected from errors into short "blocks" of bits and computing themodularsum of those blocks. (Note that the terminology used in this domain can be confusing. The data to be protected, in its entirety, is referred to as a "word", and the pieces into which it is divided are referred to as "blocks".)
As an example, the data may be a message to be transmitted consisting of 136 characters, each stored as an 8-bitbyte, making a data word of 1088 bits in total. A convenient block size would be 8 bits, although this is not required. Similarly, a convenient modulus would be 255, although, again, others could be chosen. So, the simple checksum is computed by adding together all the 8-bit bytes of the message, dividing by 255 and keeping only the remainder. (In practice, themodulo operationis performed during the summation to control the size of the result.) The checksum value is transmitted with the message, increasing its length to 137 bytes, or 1096 bits. The receiver of the message can re-compute the checksum and compare it to the value received to determine whether the message has been altered by the transmission process.
The first weakness of the simple checksum is that it is insensitive to the order of the blocks (bytes) in the data word (message). If the order is changed, the checksum value will be the same and the change will not be detected. The second weakness is that the universe of checksum values is small, being equal to the chosen modulus. In our example, there are only 255 possible checksum values, so it is easy to see that even random data has about a 0.4% probability of having the same checksum as our message.
Fletcher addresses both of these weaknesses by computing a second value along with the simple checksum. This is the modular sum of the values taken by the simple checksum as each block of the data word is added to it. The modulus used is the same. So, for each block of the data word, taken in sequence, the block's value is added to the first sum and the new value of the first sum is then added to the second sum. Both sums start with the value zero (or some other known value). At the end of the data word, the modulus operator is applied and the two values are combined to form the Fletcher checksum value.
Sensitivity to the order of blocks is introduced because once a block is added to the first sum, it is then repeatedly added to the second sum along with every block after it. If, for example, two adjacent blocks become exchanged, the one that was originally first will be added to the second sum one fewer times and the one that was originally second will be added to the second sum one more time. The final value of the first sum will be the same, but the second sum will be different, detecting the change to the message.
The universe of possible checksum values is now the square of the value for the simple checksum. In our example, the two sums, each with 255 possible values, result in 65025 possible values for the combined checksum.
While there are infinitely many possible parameter choices, the original paper studies only the case K = 8 (block length in bits) with moduli 255 and 256.
The 16-bit and 32-bit block versions (Fletcher-32 and Fletcher-64) were derived from the original case and studied in subsequent specifications and papers.
When the data word is divided into 8-bit blocks, as in the example above, two 8-bit sums result and are combined into a 16-bit Fletcher checksum. Usually, the second sum will be multiplied by 256 and added to the simple checksum, effectively stacking the sums side-by-side in a 16-bit word with the simple checksum at the least significant end. This algorithm is then called the Fletcher-16 checksum. The use of the modulus 2^8 − 1 = 255 is also generally implied.
When the data word is divided into 16-bit blocks, two 16-bit sums result and are combined into a 32-bit Fletcher checksum. Usually, the second sum will be multiplied by 2^16 and added to the simple checksum, effectively stacking the sums side-by-side in a 32-bit word with the simple checksum at the least significant end. This algorithm is then called the Fletcher-32 checksum. The use of the modulus 2^16 − 1 = 65,535 is also generally implied. The rationale for this choice is the same as for Fletcher-16.
When the data word is divided into 32-bit blocks, two 32-bit sums result and are combined into a 64-bit Fletcher checksum. Usually, the second sum will be multiplied by 2^32 and added to the simple checksum, effectively stacking the sums side-by-side in a 64-bit word with the simple checksum at the least significant end. This algorithm is then called the Fletcher-64 checksum. The use of the modulus 2^32 − 1 = 4,294,967,295 is also generally implied. The rationale for this choice is the same as for Fletcher-16 and Fletcher-32.
TheAdler-32checksum is a specialization of the Fletcher-32 checksum devised byMark Adler. The modulus selected (for both sums) is the prime number 65,521 (65,535 is divisible by 3, 5, 17 and 257). The first sum also begins with the value 1. The selection of a prime modulus results in improved "mixing" (error patterns are detected with more uniform probability, improving the probability that the least detectable patterns will be detected, which tends to dominate overall performance). However, the reduction in size of the universe of possible checksum values acts against this and reduces performance slightly. One study showed that Fletcher-32 outperforms Adler-32 in both performance and in its ability to detect errors. As modulo-65,535 addition is considerably simpler and faster to implement than modulo-65,521 addition, the Fletcher-32 checksum is generally a faster algorithm.[2]
A modulus of 255 is used above and in examples below for Fletcher-16, however some real-world implementations use 256. The TCP protocol's alternate checksum has Fletcher-16 with a 256 modulus,[3]as do the checksums of UBX-* messages from aU-bloxGPS.[4]Which modulus is used is dependent on the implementation.
As an example, a Fletcher-16 checksum shall be calculated and verified for a byte stream of 0x01 0x02.
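A worked calculation following the definition above (modulus 255, both sums starting at zero): processing 0x01 gives C0 = (0 + 0x01) mod 255 = 1 and C1 = (0 + 1) mod 255 = 1; processing 0x02 gives C0 = (1 + 0x02) mod 255 = 3 and C1 = (1 + 3) mod 255 = 4. Stacking the two sums gives C1 × 256 + C0.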
The checksum is therefore 0x0403. It could be transmitted with the byte stream and be verified as such on the receiving end.
Another option is to compute in a second step a pair of check bytes, which can be appended to the byte stream so that the resulting stream has a global Fletcher-16 checksum value of 0.
The values of the checkbytes are computed as follows:
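Per the standard Fletcher-16 construction,
CB0 = 255 − ((C0 + C1) mod 255)
CB1 = 255 − ((C0 + CB0) mod 255)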
where C0 and C1 are the result of the last step in the Fletcher-16 computation.
In our case the checksum bytes are CB0 = 0xF8 and CB1 = 0x04. The transmitted byte stream is 0x01 0x02 0xF8 0x04. The receiver runs the checksum on all four bytes and calculates a passing checksum of 0x00 0x00, as illustrated below:
The Fletcher checksum cannot distinguish between blocks of all 0 bits and blocks of all 1 bits. For example, if a 16-bit block in the data word changes from 0x0000 to 0xFFFF, the Fletcher-32 checksum remains the same. This also means a sequence of all 00 bytes has the same checksum as a sequence (of the same size) of all FF bytes.
These examples assumetwo's complement arithmetic, as Fletcher's algorithm will be incorrect onones' complementmachines.
The below is a treatment on how to calculate the checksum including the check bytes; i.e., the final result should equal 0, given properly-calculated check bytes. The code by itself, however, will not calculate the check bytes.
An inefficient but straightforward implementation of aC languagefunctionto compute the Fletcher-16 checksum of anarrayof 8-bit data elements follows:
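A sketch consistent with the discussion in the next paragraph is given below; it is laid out so that the line references there hold (counting from the function header), and it assumes <stdint.h> for the fixed-width integer types:

```c
uint16_t Fletcher16(const uint8_t *data, int count)
{
    uint16_t sum1 = 0;
    uint16_t sum2 = 0;
    int index;

    for (index = 0; index < count; ++index)
    {
        sum1 = (sum1 + data[index]) % 255;   /* simple checksum, reduced each step */
        sum2 = (sum2 + sum1) % 255;          /* sum of sums, reduced each step */
    }

    return (sum2 << 8) | sum1;               /* combine into a 16-bit result */
}
```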
On lines 3 and 4, the sums are 16-bitvariablesso that the additions on lines 9 and 10 will notoverflow. Themodulo operationis applied to the first sum on line 9 and to the second sum on line 10. Here, this is done after each addition, so that at the end of thefor loopthe sums are always reduced to 8 bits. At the end of the input data, the two sums are combined into the 16-bit Fletcher checksum value and returned by the function on line 13.
Each sum is computed modulo 255 and thus remains less than 0xFF at all times. This implementation will thus never produce the checksum results 0x??FF, 0xFF?? or 0xFFFF (i.e., 511 out of the total 65536 possible 16-bit values are never used). It can produce the checksum result 0x0000, which may not be desirable in some circumstances (e.g. when this value has been reserved to mean "no checksum has been computed").
Example source code for calculating the check bytes, using the above function, is as follows. The check bytes may be appended to the end of the data stream, with the c0 coming before the c1.
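A sketch consistent with that description and with the worked example above (it yields CB0 = 0xF8 and CB1 = 0x04 for the byte stream 0x01 0x02):

```c
/* Computes the two check bytes so that running Fletcher16 over the data
   followed by check[0], check[1] yields a checksum of zero. */
void Fletcher16CheckBytes(const uint8_t *data, int count, uint8_t *check)
{
    uint16_t csum = Fletcher16(data, count);
    uint16_t c0 = csum & 0xff;          /* simple sum (low byte) */
    uint16_t c1 = (csum >> 8) & 0xff;   /* sum of sums (high byte) */

    check[0] = 0xff - ((c0 + c1) % 0xff);        /* CB0 */
    check[1] = 0xff - ((c0 + check[0]) % 0xff);  /* CB1 */
}
```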
In a 1988 paper,Anastase Nakassisdiscussed and compared different ways to optimize the algorithm. The most important optimization consists in using larger accumulators and delaying the relatively costly modulo operation for as long as it can be proven that no overflow will occur. Further benefit can be derived from replacing the modulo operator with an equivalent function tailored to this specific case—for instance, a simple compare-and-subtract, since the quotient never exceeds 1.[5]
Here is aCimplementation that applies the first but not the second optimization:
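A sketch applying the first optimization only: accumulate in 32-bit variables and defer the modulo to the end of each inner block. The block length of 359 16-bit words used here is a conservative choice that provably keeps both accumulators below 2^32:

```c
#include <stdint.h>
#include <stddef.h>

uint32_t fletcher32(const uint16_t *data, size_t words)
{
    uint32_t c0 = 0, c1 = 0;

    while (words > 0) {
        size_t blocklen = (words > 359) ? 359 : words;
        words -= blocklen;
        do {
            c0 = c0 + *data++;     /* no modulo inside the inner loop */
            c1 = c1 + c0;
        } while (--blocklen);
        c0 = c0 % 65535;           /* reduce once per block */
        c1 = c1 % 65535;
    }
    return (c1 << 16) | c0;
}
```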
The second optimization is not used because the "never exceeds 1" assumption only applies when the modulo is calculated naively; applying the first optimization would break it. On the other hand, moduloMersenne numberslike 255 and 65535 is a quick operation on computers anyway, as tricks are available to do them without the costly division operation.[6]
8-bit implementation (16-bit checksum)
16-bit implementation (32-bit checksum), with 8-bitASCIIvalues of the input word assembled into 16-bit blocks inlittle-endianorder, the word padded with zeros as necessary to the next whole block, using modulus 65535 and with the result presented as the sum-of-sums shifted left by 16 bits (multiplied by 65536) plus the simple sum
32-bit implementation (64-bit checksum)
As with any calculation that divides a binary data word into short blocks and treats the blocks as numbers, any two systems expecting to get the same result should preserve the ordering of bits in the data word. In this respect, the Fletcher checksum is not different from other checksum and CRC algorithms and needs no special explanation.
An ordering problem that is easy to envision occurs when the data word is transferred byte-by-byte between abig-endiansystem and alittle-endiansystem and the Fletcher-32 checksum is computed. If blocks are extracted from the data word in memory by a simple read of a 16-bit unsigned integer, then the values of the blocks will be different in the two systems, due to the reversal of the byte order of 16-bit data elements in memory, and the checksum result will be different as a consequence. The implementation examples, above, do not address ordering issues so as not to obscure the checksum algorithm. Because the Fletcher-16 checksum uses 8-bit blocks, it is not affected by byteendianness.
|
https://en.wikipedia.org/wiki/Fletcher%27s_checksum
|
Aframe check sequence(FCS) is anerror-detecting codeadded to aframein acommunication protocol. Frames are used to sendpayload datafrom a source to a destination.
All frames and the bits, bytes, and fields contained within them, are susceptible to errors from a variety of sources. The FCS field contains a number that is calculated by the source node based on the data in the frame. This number is added to the end of a frame that is sent. When the destination node receives the frame the FCS number is recalculated and compared with the FCS number included in the frame. If the two numbers are different, an error is assumed and the frame is discarded.
The FCS provides error detection only. Error recovery must be performed through separate means.Ethernet, for example, specifies that a damaged frame should be discarded and does not specify any action to cause the frame to be retransmitted. Other protocols, notably theTransmission Control Protocol(TCP), can notice the data loss and initiate retransmission and error recovery.[2]
The FCS is often transmitted in such a way that the receiver can compute a running sum over the entire frame, together with the trailing FCS, expecting to see a fixed result (such as zero) when it is correct. ForEthernetand otherIEEE 802protocols, the standard states that data is sent least significant bit first, while the FCS is sent most significant bit (bit 31) first. An alternative approach is to generate the bit reversal of the FCS so that the reversed FCS can be also sent least significant bit (bit 0) first. Refer toEthernet frame § Frame check sequencefor more information.
By far the most popular FCS algorithm is acyclic redundancy check(CRC), used in Ethernet and other IEEE 802 protocols with 32 bits, inX.25with 16 or 32 bits, inHDLCwith 16 or 32 bits, inFrame Relaywith 16 bits,[3]inPoint-to-Point Protocol(PPP) with 16 or 32 bits, and in otherdata link layerprotocols.
Protocols of theInternet protocol suitetend to usechecksums.[4]
|
https://en.wikipedia.org/wiki/Frame_check_sequence
|
Parchive(aportmanteauofparity archive, and formally known asParity Volume Set Specification[1][2]) is anerasure codesystem that producesparfiles forchecksumverification ofdata integrity, with the capability to performdata recoveryoperations that can repair or regenerate corrupted or missing data.
Parchive was originally written to solve the problem of reliable file sharing onUsenet,[3]but it can be used for protecting any kind of data fromdata corruption,disc rot,bit rot, and accidental or malicious damage. Despite the name, Parchive uses more advanced techniques (specificallyerror correction codes) than simplisticparitymethods oferror detection.
As of 2014,PAR1is obsolete,PAR2is mature for widespread use, andPAR3is a discontinued experimental version developed by MultiPar author Yutaka Sawada.[4][5][6][7]The original SourceForge Parchive project has been inactive since April 30, 2015.[8]A new PAR3 specification has been worked on since April 28, 2019 by PAR2 specification author Michael Nahas. An alpha version of the PAR3 specification has been published on January 29, 2022[9]while the program itself is being developed.
Parchive was intended to increase the reliability of transferring files via Usenetnewsgroups. Usenet was originally designed for informal conversations, and the underlying protocol,NNTPwas not designed to transmit arbitrary binary data. Another limitation, which was acceptable for conversations but not for files, was that messages were normally fairly short in length and limited to 7-bitASCIItext.[10]
Various techniques were devised to send files over Usenet, such asuuencodingandBase64. Later Usenet software allowed 8 bitExtended ASCII, which permitted new techniques likeyEnc. Large files were broken up to reduce the effect of a corrupted download, but the unreliable nature of Usenet remained.
With the introduction of Parchive, parity files could be created that were then uploaded along with the original data files. If any of the data files were damaged or lost while being propagated between Usenet servers, users could download parity files and use them to reconstruct the damaged or missing files. Parchive included the construction of small index files (*.par in version 1 and *.par2 in version 2) that do not contain any recovery data. These indexes containfile hashesthat can be used to quickly identify the target files and verify their integrity.
Because the index files were so small, they minimized the amount of extra data that had to be downloaded from Usenet to verify that the data files were all present and undamaged, or to determine how many parity volumes were required to repair any damage or reconstruct any missing files. They were most useful in version 1 where the parity volumes were much larger than the short index files. These larger parity volumes contain the actual recovery data along with a duplicate copy of the information in the index files (which allows them to be used on their own to verify the integrity of the data files if there is no small index file available).
In July 2001, Tobias Rieper and Stefan Wehlus proposed the Parity Volume Set specification, and with the assistance of other project members, version 1.0 of the specification was published in October 2001.[11]Par1 usedReed–Solomon error correctionto create new recovery files. Any of the recovery files can be used to rebuild a missing file from an incompletedownload.
Version 1 became widely used on Usenet, but it did suffer some limitations:
In January 2002, Howard Fukada proposed that a new Par2 specification should be devised with the significant changes that data verification and repair should work on blocks of data rather than whole files, and that the algorithm should switch to using 16 bit numbers rather than the 8 bit numbers that PAR1 used. Michael Nahas and Peter Clements took up these ideas in July 2002, with additional input from Paul Nettle and Ryan Gallagher (who both wrote Par1 clients). Version 2.0 of the Parchive specification was published by Michael Nahas in September 2002.[14]
Peter Clements then went on to write the first two Par2 implementations,QuickParand par2cmdline. Abandoned since 2004, Paul Houle created phpar2 to supersede par2cmdline. Yutaka Sawada created MultiPar to supersede QuickPar. MultiPar uses par2j.exe (which is partially based on par2cmdline's optimization techniques) to use as MultiPar's backend engine.
Versions 1 and 2 of thefile formatare incompatible. (However, many clients support both.)
For Par1, given the files f1, f2, ..., fn, the Parchive consists of an index file (f.par), which is a CRC-type file with no recovery blocks, and a number of "parity volumes" (f.p01, f.p02, etc.). Given all of the original files except for one (for example, f2), it is possible to recreate the missing f2 from all of the other original files and any one of the parity volumes. Alternatively, it is possible to recreate two missing files from any two of the parity volumes and so forth.[15]
Par1 supports up to a total of 256 source and recovery files.
Par2 files generally use this naming/extension system:filename.vol000+01.PAR2,filename.vol001+02.PAR2,filename.vol003+04.PAR2,filename.vol007+06.PAR2, etc. The number after the"+"in the filename indicates how many blocks it contains, and the number after"vol"indicates the number of the first recovery block within the PAR2 file. If an index file of a download states that 4 blocks are missing, the easiest way to repair the files would be by downloadingfilename.vol003+04.PAR2. However, due to the redundancy,filename.vol007+06.PAR2is also acceptable. There is also an index filefilename.PAR2, it is identical in function to the small index file used in PAR1.
Par2 specification supports up to 32,768 source blocks and up to 65,535 recovery blocks. Input files are split into multiple equal-sized blocks so that recovery files do not need to be the size of the largest input file.
AlthoughUnicodeis mentioned in the PAR2 specification as an option, most PAR2 implementations do not support Unicode.
Directory support is included in the PAR2 specification, but most or all implementations do not support it.
The Par3 specification was originally planned to be published as an enhancement over the Par2 specification. However, to date,[when?]it has remained closed source by specification owner Yutaka Sawada.
A discussion on a new format started in the GitHub issue section of the maintained fork par2cmdline on January 29, 2019. The discussion led to a new format, also named Par3. The new Par3 format's specification is published on GitHub, but remains an alpha draft as of January 28, 2022. The specification is written by Michael Nahas, the author of the Par2 specification, with help from Yutaka Sawada, animetosho and malaire.
The new format claims to have multiple advantages over the Par2 format, including support for:
Software forPOSIXconforming operating systems:
|
https://en.wikipedia.org/wiki/Parchive
|
sumis a legacy utility available on someUnixandUnix-likeoperating systems. This utility outputs a 16-bitchecksumof each argumentfile, as well as the number ofblocksthey take on disk.[1]Two different checksum algorithms are in use. POSIX abandonedsumin favor ofcksum.
The sum program is now generally of only historical interest. It is not part of POSIX. Two algorithms are typically available: a BSD checksum and a SYSV checksum. Both are weaker than the already weak 32-bit CRC used by cksum.[2]
Thedefaultalgorithm on FreeBSD and GNU implementations is the BSD checksum. Switching between the two algorithms is done via command line options.[2][1]
The two commonly used algorithms are as follows.
The BSD sum, -r in GNU sum and -o1 in FreeBSD cksum:
The above algorithm appeared inSeventh Edition Unix.
The System V sum, -s in GNU sum and -o2 in FreeBSD cksum:
Thesumutility is invoked from thecommand lineaccording to the following syntax:
with the possible option parameters being:
When no file parameter is given, or when FILE is-, thestandard inputis used as input file.
Example of use:
Example of -s use in GNU sum:
Example of using standard input, -r and printf to avoid newline:
|
https://en.wikipedia.org/wiki/Sum_(Unix)
|
TheSYSV checksum algorithmwas a commonly used, legacychecksumalgorithm.
It has been implemented inUNIX System Vand is also available through thesumcommand line utility.
This algorithm is useless from a security perspective, and is weaker than the CRC-32 cksum for error detection.[1][2]
The main part of this algorithm is simply adding up all bytes in a 32-bit sum. As a result, this algorithm has the characteristics of a simple sum:[2]
As a result, many common changes to text data are not detected by this method.
The FreeBSD pseudocode for this algorithm is:
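An equivalent C sketch of the same computation (sum every byte into a 32-bit accumulator, then fold the result into 16 bits); identifiers are illustrative:

```c
#include <stdint.h>
#include <stddef.h>

uint16_t sysv_checksum(const uint8_t *data, size_t len)
{
    uint32_t s = 0;
    for (size_t i = 0; i < len; i++)
        s += data[i];                            /* 32-bit sum of all bytes */

    uint32_t r = (s & 0xffff) + (s >> 16);       /* fold the upper half into the lower half */
    return (uint16_t)((r & 0xffff) + (r >> 16)); /* fold once more to get 16 bits */
}
```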
The last part folds the value into 16 bits.
|
https://en.wikipedia.org/wiki/SYSV_checksum
|
TheBSD checksum algorithmwas a commonly used, legacychecksumalgorithm. It has been implemented in oldBSDand is also available through thesumcommand line utility.
This algorithm is useless from a security perspective, and is weaker than theCRC-32cksumfor error detection.[1][2]
Below is the relevant part of theGNUsum source code (GPLlicensed). It computes a 16-bit checksum by adding up all bytes (8-bit words) of the input data stream. In order to avoid many of the weaknesses of simply adding the data, the checksum accumulator is circular rotated to the right by one bit at each step before the new char is added.
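An equivalent C sketch of the rotate-and-add loop described (not the original GNU source):

```c
#include <stdint.h>
#include <stddef.h>

uint16_t bsd_checksum(const uint8_t *data, size_t len)
{
    uint16_t checksum = 0;
    for (size_t i = 0; i < len; i++) {
        checksum = (uint16_t)((checksum >> 1) | (checksum << 15)); /* circular rotate right by one bit */
        checksum = (uint16_t)(checksum + data[i]);                 /* add the next byte, kept to 16 bits */
    }
    return checksum;
}
```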
As mentioned above, this algorithm computes a checksum by segmenting the data and adding it to an accumulator that is circular right shifted between each summation. To keep the accumulator within return value bounds, bit-masking with 1's is done.
Example:Calculating a 4-bit checksum using 4-bit sized segments (big-endian)
Iteration 1:
a) Apply circular shift to the checksum:
b) Add checksum and segment together, apply bitmask onto the obtained result:
Iteration 2:
a) Apply circular shift to the checksum:
b) Add checksum and segment together, apply bitmask onto the obtained result:
Iteration 3:
a) Apply circular shift to the checksum:
b) Add checksum and segment together, apply bitmask onto the obtained result:
Final checksum:1000
|
https://en.wikipedia.org/wiki/BSD_checksum
|
This is a list ofhash functions, includingcyclic redundancy checks,checksumfunctions, andcryptographic hash functions.
Adler-32is often mistaken for a CRC, but it is not: it is achecksum.
|
https://en.wikipedia.org/wiki/XxHash
|
Poly1305is auniversal hash familydesigned byDaniel J. Bernsteinin 2002 for use incryptography.[1][2]
As with any universal hash family, Poly1305 can be used as a one-timemessage authentication codeto authenticate a single message using a secret key shared between sender and recipient,[3]similar to the way that aone-time padcan be used to conceal the content of a single message using a secret key shared between sender and recipient.
Originally Poly1305 was proposed as part of Poly1305-AES,[2]a Carter–Wegman authenticator[4][5][1]that combines the Poly1305 hash withAES-128to authenticate many messages using a single short key and distinct message numbers.
Poly1305 was later applied with a single-use key generated for each message usingXSalsa20in theNaClcrypto_secretbox_xsalsa20poly1305 authenticated cipher,[6]and then usingChaChain theChaCha20-Poly1305authenticated cipher[7][8][1]deployed inTLSon the internet.[9]
Poly1305 takes a 16-byte secret key $r$ and an $L$-byte message $m$ and returns a 16-byte hash $\operatorname{Poly1305}_r(m)$.
To do this, Poly1305 interprets $r$ as a little-endian integer, breaks the message into 16-byte chunks that become the coefficients of a polynomial in $r$, evaluates that polynomial modulo the prime $2^{130} - 5$, and reduces the result modulo $2^{128}$.[2][1]
The coefficients $c_i$ of the polynomial $c_1 r^q + c_2 r^{q-1} + \cdots + c_q r$, where $q = \lceil L/16 \rceil$, are:
$c_i = m[16i-16] + 2^8 m[16i-15] + 2^{16} m[16i-14] + \cdots + 2^{120} m[16i-1] + 2^{128},$
with the exception that, if $L \not\equiv 0 \pmod{16}$, then:
$c_q = m[16q-16] + 2^8 m[16q-15] + \cdots + 2^{8(L \bmod 16)-8} m[L-1] + 2^{8(L \bmod 16)}.$
The secret key $r = (r[0], r[1], r[2], \dotsc, r[15])$ is restricted to have the bytes $r[3], r[7], r[11], r[15] \in \{0, 1, 2, \dotsc, 15\}$, i.e., to have their top four bits clear, and to have the bytes $r[4], r[8], r[12] \in \{0, 4, 8, \dotsc, 252\}$, i.e., to have their bottom two bits clear.
Thus there are $2^{106}$ distinct possible values of $r$.
If $s$ is a secret 16-byte string interpreted as a little-endian integer, then
$a := \bigl(\operatorname{Poly1305}_r(m) + s\bigr) \bmod 2^{128}$
is called the authenticator for the message $m$.
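As a concrete reading of the definitions above, here is a minimal Python sketch of the one-time authenticator. It assumes the standard formulation in which the polynomial is evaluated modulo the prime $2^{130} - 5$ before the final reduction modulo $2^{128}$; it is written for clarity, is not constant-time, and is not a vetted implementation.

P = (1 << 130) - 5  # prime modulus used to evaluate the polynomial

def clamp(r: bytes) -> int:
    # Clear the top four bits of r[3], r[7], r[11], r[15] and the
    # bottom two bits of r[4], r[8], r[12], as described above.
    r = bytearray(r)
    for i in (3, 7, 11, 15):
        r[i] &= 0x0F
    for i in (4, 8, 12):
        r[i] &= 0xFC
    return int.from_bytes(r, "little")

def poly1305(r: bytes, m: bytes) -> int:
    rr = clamp(r)
    h = 0
    # Each 16-byte chunk (the final one may be shorter) becomes a coefficient
    # c_i: the chunk read little-endian with one extra high byte 0x01 appended,
    # matching the formulas above. Horner evaluation gives c_1 r^q + ... + c_q r.
    for i in range(0, len(m), 16):
        c = int.from_bytes(m[i:i + 16] + b"\x01", "little")
        h = ((h + c) * rr) % P
    return h % (1 << 128)

def authenticator(r: bytes, s: bytes, m: bytes) -> bytes:
    # a := (Poly1305_r(m) + s) mod 2^128, encoded as 16 little-endian bytes.
    a = (poly1305(r, m) + int.from_bytes(s, "little")) % (1 << 128)
    return a.to_bytes(16, "little")

# Example (arbitrary demo values, not test vectors):
r = bytes(range(16))
s = bytes(range(16, 32))
tag = authenticator(r, s, b"hello world")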
If a sender and recipient share the 32-byte secret key $(r, s)$ in advance, chosen uniformly at random, then the sender can transmit an authenticated message $(a, m)$.
When the recipient receives an alleged authenticated message $(a', m')$ (which may have been modified in transit by an adversary), they can verify its authenticity by testing whether
$a' \mathrel{\stackrel{?}{=}} \bigl(\operatorname{Poly1305}_r(m') + s\bigr) \bmod 2^{128}.$
Without knowledge of $(r, s)$, the adversary has probability $8\lceil L/16\rceil / 2^{106}$ of finding any $(a', m') \neq (a, m)$ that will pass verification.
However, the same key $(r, s)$ must not be reused for two messages.
If the adversary learns
$a_1 = \bigl(\operatorname{Poly1305}_r(m_1) + s\bigr) \bmod 2^{128}, \qquad a_2 = \bigl(\operatorname{Poly1305}_r(m_2) + s\bigr) \bmod 2^{128}$
for $m_1 \neq m_2$, they can subtract
$a_1 - a_2 \equiv \operatorname{Poly1305}_r(m_1) - \operatorname{Poly1305}_r(m_2) \pmod{2^{128}}$
and find a root of the resulting polynomial to recover a small list of candidates for the secret evaluation point $r$, and from that the secret pad $s$.
The adversary can then use this to forge additional messages with high probability.
The original Poly1305-AES proposal[2] uses the Carter–Wegman structure[4][5] to authenticate many messages by taking $a_i := H_r(m_i) + p_i$ to be the authenticator on the $i$th message $m_i$, where $H_r$ is a universal hash family and $p_i$ is an independent uniform random hash value that serves as a one-time pad to conceal it.
Poly1305-AES uses AES-128 to generate $p_i := \operatorname{AES}_k(i)$, where $i$ is encoded as a 16-byte little-endian integer.
Specifically, a Poly1305-AES key is a 32-byte pair $(r, k)$ of a 16-byte evaluation point $r$, as above, and a 16-byte AES key $k$.
The Poly1305-AES authenticator on a message $m_i$ is
$a_i := \bigl(\operatorname{Poly1305}_r(m_i) + \operatorname{AES}_k(i)\bigr) \bmod 2^{128},$
where 16-byte strings and integers are identified by little-endian encoding.
Note that $r$ is reused between messages.
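Under the same assumptions, the Poly1305-AES formula can be sketched by reusing the poly1305 function from the previous sketch and generating the pad with AES-128. The example below assumes the third-party PyCryptodome package for the single-block AES encryption and follows the little-endian conventions stated in this text; it is only an illustration of the formula, not Bernstein's reference code.

from Crypto.Cipher import AES  # assumed third-party dependency (PyCryptodome)

def poly1305_aes(r: bytes, k: bytes, i: int, m: bytes) -> bytes:
    # a_i := (Poly1305_r(m_i) + AES_k(i)) mod 2^128, with the message number i
    # encoded as a 16-byte little-endian block, as described above.
    # poly1305() is the function from the one-time-authenticator sketch above.
    block = i.to_bytes(16, "little")
    pad = int.from_bytes(AES.new(k, AES.MODE_ECB).encrypt(block), "little")
    a = (poly1305(r, m) + pad) % (1 << 128)
    return a.to_bytes(16, "little")

A verifier holding $(r, k)$ recomputes the same value for the received message number and message and compares it with the received tag.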
Without knowledge of $(r, k)$, the adversary has low probability of forging any authenticated messages that the recipient will accept as genuine.
Suppose the adversary sees $C$ authenticated messages and attempts $D$ forgeries, and can distinguish $\operatorname{AES}_k$ from a uniform random permutation with advantage at most $\delta$.
(Unless AES is broken, $\delta$ is very small.)
The adversary's chance of success at a single forgery is at most:
$\delta + \frac{(1 - C/2^{128})^{-(C+1)/2} \cdot 8 D \lceil L/16 \rceil}{2^{106}}.$
The message number $i$ must never be repeated with the same key $(r, k)$.
If it is, the adversary can recover a small list of candidates for $r$ and $\operatorname{AES}_k(i)$, as with the one-time authenticator, and use that to forge messages.
The NaCl crypto_secretbox_xsalsa20poly1305 authenticated cipher uses a message number $i$ with the XSalsa20 stream cipher to generate a per-message key stream, the first 32 bytes of which are taken as a one-time Poly1305 key $(r_i, s_i)$ and the rest of which is used for encrypting the message.
It then uses Poly1305 as a one-time authenticator for the ciphertext of the message.[6] ChaCha20-Poly1305 does the same but with ChaCha instead of XSalsa20.[8] XChaCha20-Poly1305, using XChaCha20 instead of XSalsa20, has also been described.[10]
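In application code, Poly1305 is usually reached through an AEAD construction such as ChaCha20-Poly1305 provided by a library rather than called directly. As a hedged usage sketch, assuming the third-party Python cryptography package is installed, it looks roughly like this; as with the one-time authenticator above, a nonce must never be reused under the same key.

import os
from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305  # assumed dependency

key = ChaCha20Poly1305.generate_key()   # 32-byte key
aead = ChaCha20Poly1305(key)
nonce = os.urandom(12)                  # 96-bit nonce; must be unique per message
ciphertext = aead.encrypt(nonce, b"attack at dawn", b"header")  # ciphertext plus 16-byte tag
plaintext = aead.decrypt(nonce, ciphertext, b"header")          # raises InvalidTag on forgery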
The security of Poly1305 and its derivatives against forgery follows from its bounded difference probability as a universal hash family:
If $m_1$ and $m_2$ are messages of up to $L$ bytes each, and $d$ is any 16-byte string interpreted as a little-endian integer, then
$\Pr[\operatorname{Poly1305}_r(m_1) - \operatorname{Poly1305}_r(m_2) \equiv d \pmod{2^{128}}] \leq \frac{8\lceil L/16 \rceil}{2^{106}},$
where $r$ is a uniform random Poly1305 key.[2]: Theorem 3.3, p. 8
This property is sometimes called $\epsilon$-almost-$\Delta$-universality over $\mathbb{Z}/2^{128}\mathbb{Z}$, or $\epsilon$-A$\Delta$U,[11] where $\epsilon = 8\lceil L/16 \rceil / 2^{106}$ in this case.
With a one-time authenticator $a = \bigl(\operatorname{Poly1305}_r(m) + s\bigr) \bmod 2^{128}$, the adversary's success probability for any forgery attempt $(a', m')$ on a message $m'$ of up to $L$ bytes is:
$\begin{aligned}\Pr[&a' = \operatorname{Poly1305}_r(m') + s \mid a = \operatorname{Poly1305}_r(m) + s]\\&= \Pr[a' = \operatorname{Poly1305}_r(m') + a - \operatorname{Poly1305}_r(m)]\\&= \Pr[\operatorname{Poly1305}_r(m') - \operatorname{Poly1305}_r(m) = a' - a]\\&\leq 8\lceil L/16 \rceil / 2^{106}.\end{aligned}$
Here arithmetic inside the $\Pr[\cdots]$ is taken to be in $\mathbb{Z}/2^{128}\mathbb{Z}$ for simplicity.
For NaCl crypto_secretbox_xsalsa20poly1305 and ChaCha20-Poly1305, the adversary's success probability at forgery is the same for each message independently as for a one-time authenticator, plus the adversary's distinguishing advantage $\delta$ against XSalsa20 or ChaCha as pseudorandom functions used to generate the per-message key.
In other words, the probability that the adversary succeeds at a single forgery after $D$ attempts on messages of up to $L$ bytes is at most:
$\delta + \frac{8 D \lceil L/16 \rceil}{2^{106}}.$
The security of Poly1305-AES against forgery follows from the Carter–Wegman–Shoup structure, which instantiates a Carter–Wegman authenticator with a permutation to generate the per-message pad.[12] If an adversary sees $C$ authenticated messages and attempts $D$ forgeries of messages of up to $L$ bytes, and if the adversary has distinguishing advantage at most $\delta$ against AES-128 as a pseudorandom permutation, then the probability that the adversary succeeds at any one of the $D$ forgeries is at most:[2]
$\delta + \frac{(1 - C/2^{128})^{-(C+1)/2} \cdot 8 D \lceil L/16 \rceil}{2^{106}}.$
For instance, assume that messages are packets of up to 1024 bytes; that the attacker sees $2^{64}$ messages authenticated under a Poly1305-AES key; that the attacker attempts a whopping $2^{75}$ forgeries; and that the attacker cannot break AES with probability above $\delta$. Then, with probability at least $0.999999 - \delta$, all of the $2^{75}$ forgeries are rejected.
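These figures can be checked against the bound above. A quick Python computation (a sketch using floating-point logarithms to keep the numbers manageable) gives roughly $3.9 \times 10^{-7}$ beyond $\delta$, consistent with the $0.999999 - \delta$ claim.

import math

C = 2**64   # messages seen by the attacker
D = 2**75   # forgery attempts
L = 1024    # bytes per message

# (1 - C/2^128)^(-(C+1)/2), computed via logarithms to stay in floating point
factor = math.exp(-((C + 1) / 2) * math.log1p(-C / 2**128))
bound = factor * 8 * D * math.ceil(L / 16) / 2**106
print(bound)  # about 3.9e-07, so all forgeries are rejected with probability > 0.999999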
Poly1305-AES can be computed at high speed on various CPUs: for an $n$-byte message, no more than $3.1n + 780$ Athlon cycles are needed,[2] for example.
The author has released optimized source code for Athlon, Pentium Pro/II/III/M, PowerPC, and UltraSPARC, in addition to non-optimized reference implementations in C and C++ as public domain software.[13]
A number of cryptography libraries support Poly1305.
|
https://en.wikipedia.org/wiki/Poly1305-AES
|
The history of theInternetoriginated in the efforts of scientists and engineers to build and interconnectcomputer networks. TheInternet Protocol Suite, the set of rules used to communicate between networks and devices on the Internet, arose from research and development in theUnited Statesand involved international collaboration, particularly with researchers in theUnited KingdomandFrance.[1][2][3]
Computer sciencewas an emerging discipline in the late 1950s that began to considertime-sharingbetween computer users, and later, the possibility of achieving this overwide area networks.J. C. R. Lickliderdeveloped the idea of a universal network at theInformation Processing Techniques Office(IPTO) of the United StatesDepartment of Defense(DoD)Advanced Research Projects Agency(ARPA). Independently,Paul Baranat theRAND Corporationproposed a distributed network based on data in message blocks in the early 1960s, andDonald Daviesconceived ofpacket switchingin 1965 at theNational Physical Laboratory(NPL), proposing a national commercial data network in the United Kingdom.
ARPA awarded contracts in 1969 for the development of theARPANETproject, directed byRobert Taylorand managed byLawrence Roberts. ARPANET adopted the packet switching technology proposed by Davies and Baran. The network ofInterface Message Processors(IMPs) was built by a team atBolt, Beranek, and Newman, with the design and specification led byBob Kahn. The host-to-host protocol was specified by a group of graduate students atUCLA, led bySteve Crocker, along withJon Posteland others. The ARPANET expanded rapidly across the United States with connections to the United Kingdom and Norway.
Severalearly packet-switched networksemerged in the 1970s which researched and provideddata networking.Louis PouzinandHubert Zimmermannpioneered a simplified end-to-end approach tointernetworkingat theIRIA.Peter Kirsteinput internetworking into practice atUniversity College Londonin 1973.Bob Metcalfedeveloped the theory behindEthernetand thePARC Universal Packet. ARPA initiatives and theInternational Network Working Groupdeveloped and refined ideas for internetworking, in which multiple separate networks could be joined into anetwork of networks.Vint Cerf, now atStanford University, and Bob Kahn, now at DARPA, published their research on internetworking in 1974. Through theInternet Experiment Noteseries and laterRFCsthis evolved into theTransmission Control Protocol(TCP) andInternet Protocol(IP), two protocols of theInternet protocol suite. The design included concepts pioneered in the FrenchCYCLADESproject directed by Louis Pouzin. The development of packet switching networks was underpinned by mathematical work in the 1970s byLeonard Kleinrockat UCLA.
In the late 1970s, national and internationalpublic data networksemerged based on theX.25protocol, designed byRémi Desprésand others. In the United States, theNational Science Foundation(NSF) funded nationalsupercomputingcenters at several universities in the United States, and provided interconnectivity in 1986 with theNSFNETproject, thus creating network access to these supercomputer sites for research and academic organizations in the United States. International connections to NSFNET, the emergence of architecture such as theDomain Name System, and theadoption of TCP/IPon existing networks in the United States and around the world marked the beginnings of theInternet.[4][5][6]CommercialInternet service providers(ISPs) emerged in 1989 in the United States and Australia.[7]Limited private connections to parts of the Internet by officially commercial entities emerged in several American cities by late 1989 and 1990.[8]The optical backbone of the NSFNET was decommissioned in 1995, removing the last restrictions on the use of the Internet to carry commercial traffic, as traffic transitioned to optical networks managed by Sprint, MCI and AT&T in the United States.
Research atCERNinSwitzerlandby the British computer scientistTim Berners-Leein 1989–90 resulted in theWorld Wide Web, linkinghypertextdocuments into an information system, accessible from anynodeon the network.[9]The dramatic expansion of the capacity of the Internet, enabled by the advent ofwave division multiplexing(WDM) and the rollout offiber optic cablesin the mid-1990s, had a revolutionary impact on culture, commerce, and technology. This made possible the rise of near-instant communication byelectronic mail,instant messaging,voice over Internet Protocol(VoIP) telephone calls,video chat, and the World Wide Web with itsdiscussion forums,blogs,social networking services, andonline shoppingsites. Increasing amounts of data are transmitted at higher and higher speeds overfiber-optic networksoperating at 1Gbit/s, 10 Gbit/s, and 800 Gbit/s by 2019.[10]The Internet's takeover of the global communication landscape was rapid in historical terms: it only communicated 1% of the information flowing through two-waytelecommunicationsnetworks in the year 1993, 51% by 2000, and more than 97% of the telecommunicated information by 2007.[11]The Internet continues to grow, driven by ever greater amounts of online information, commerce, entertainment, andsocial networking services. However, the future of the global network may be shaped by regional differences.[12]
J. C. R. Licklider, while working at BBN, proposed a computer network in his March 1960 paperMan-Computer Symbiosis:[18]
A network of such centers, connected to one another by wide-band communication lines [...] the functions of present-day libraries together with anticipated advances in information storage and retrieval and symbiotic functions suggested earlier in this paper
In August 1962, Licklider and Welden Clark published the paper "On-Line Man-Computer Communication"[19]which was one of the first descriptions of a networked future.
In October 1962, Licklider was hired byJack Ruinaas director of the newly establishedInformation Processing Techniques Office(IPTO) within ARPA, with a mandate to interconnect the United States Department of Defense's main computers atCheyenne Mountain, the Pentagon, and SAC HQ. There he formed an informal group within DARPA to further computer research. He began by writing memos in 1963 describing a distributed network to the IPTO staff, whom he called "Members and Affiliates of theIntergalactic Computer Network".[20]
Although he left the IPTO in 1964, five years before the ARPANET went live, it was his vision of universal networking that provided the impetus for one of his successors,Robert Taylor, to initiate the ARPANET development. Licklider later returned to lead the IPTO in 1973 for two years.[21]
The infrastructure for telephone systems at the time was based on circuit switching, which requires pre-allocation of a dedicated communication line for the duration of the call. Telegram services had developed store-and-forward telecommunication techniques. Western Union's Automatic Telegraph Switching System Plan 55-A was based on message switching. The U.S. military's AUTODIN network became operational in 1962. These systems, like SAGE and SABRE, still required rigid routing structures that were prone to single points of failure.[24]
The technology was considered vulnerable for strategic and military use because there were no alternative paths for the communication in case of a broken link. In the early 1960s,Paul Baranof theRAND Corporationproduced a study of survivable networks for the U.S. military in the event of nuclear war.[25][26]Information would be transmitted across a "distributed" network, divided into what he called "message blocks".[27][28][29][30]Baran's design was not implemented.[31]
In addition to being prone to a single point of failure, existing telegraphic techniques were inefficient and inflexible. Beginning in 1965Donald Davies, at theNational Physical Laboratoryin the United Kingdom, independently developed a more advanced proposal of the concept, designed for high-speedcomputer networking, which he calledpacket switching, the term that would ultimately be adopted.[32][33][34][35]
Packet switching is a technique for transmitting computer data by splitting it into very short, standardized chunks, attaching routing information to each of these chunks, and transmitting them independently through acomputer network. It provides better bandwidth utilization than traditional circuit-switching used for telephony, and enables the connection of computers with different transmission and receive rates. It is a distinct concept to message switching.[36]
Following discussions with J. C. R. Licklider in 1965, Donald Davies became interested in data communications for computer networks.[37][38] Later that year, at the National Physical Laboratory (NPL) in the United Kingdom, Davies designed and proposed a national commercial data network based on packet switching.[39] The following year, he described the use of "switching nodes" to act as routers in a digital communication network.[40][41] The proposal was not taken up nationally, but he produced a design for a local network to serve the needs of the NPL and prove the feasibility of packet switching using high-speed data transmission.[42][43] To deal with packet permutations (due to dynamically updated route preferences) and with datagram losses (unavoidable when fast sources send to slow destinations), he assumed that "all users of the network will provide themselves with some kind of error control",[44] thus inventing what came to be known as the end-to-end principle. In 1967, he and his team were the first to use the term 'protocol' in a modern data-communication context.[45]
In 1968,[46]Davies began building the Mark I packet-switched network to meet the needs of his multidisciplinary laboratory and prove the technology under operational conditions.[47][48]The network's development was described at a 1968 conference.[49][50]Elements of the network became operational in early 1969,[47][51]the first implementation of packet switching,[52][53]and the NPL network was the first to use high-speed links.[54]Many other packet switching networks built in the 1970s were similar "in nearly all respects" to Davies' original 1965 design.[37]The Mark II version which operated from 1973 used a layered protocol architecture.[54]In 1977, there were roughly 30 computers, 30 peripherals and 100 VDU terminals all able to interact through the NPL Network.[55]The NPL team carried outsimulationwork on wide-area packet networks, includingdatagramsandcongestion; and research intointernetworkingandsecure communications.[47][56][57]The network was replaced in 1986.[54]
Robert Taylor was promoted to the head of theInformation Processing Techniques Office(IPTO) atAdvanced Research Projects Agency(ARPA) in 1966. He intended to realizeLicklider's ideas of an interconnected networking system.[58]As part of the IPTO's role, three network terminals had been installed: one forSystem Development CorporationinSanta Monica, one forProject GenieatUniversity of California, Berkeley, and one for theCompatible Time-Sharing Systemproject atMassachusetts Institute of Technology(MIT).[59]Taylor's identified need for networking became obvious from the waste of resources apparent to him.
For each of these three terminals, I had three different sets of user commands. So if I was talking online with someone at S.D.C. and I wanted to talk to someone I knew at Berkeley or M.I.T. about this, I had to get up from the S.D.C. terminal, go over and log into the other terminal and get in touch with them....
I said, oh man, it's obvious what to do: If you have these three terminals, there ought to be one terminal that goes anywhere you want to go where you have interactive computing. That idea is the ARPAnet.[59]
Bringing inLarry Robertsfrom MIT in January 1967, he initiated a project to build such a network. Roberts and Thomas Merrill had been researching computertime-sharingoverwide area networks(WANs).[60]Wide area networks emerged during the late 1950s and became established during the 1960s. At the first ACMSymposium on Operating Systems Principlesin October 1967, Roberts presented a proposal for the "ARPA net", based onWesley Clark'sidea to useInterface Message Processors(IMP) to create amessage switchingnetwork.[61][62][63]At the conference,Roger ScantleburypresentedDonald Davies'work on a hierarchical digital communications network usingpacket switchingand referenced the work ofPaul BaranatRAND. Roberts incorporated the packet switching and routing concepts of Davies and Baran into the ARPANET design and upgraded the proposed communications speed from 2.4 kbit/s to 50 kbit/s.[64][65]
ARPA awarded the contract to build the network toBolt Beranek & Newman. The "IMP guys", led byFrank HeartandBob Kahn, developed the routing, flow control, software design and network control.[37][66]The first ARPANET link was established between the Network Measurement Center at theUniversity of California, Los Angeles(UCLA)Henry Samueli School of Engineering and Applied Sciencedirected byLeonard Kleinrock, and the NLS system atStanford Research Institute(SRI) directed byDouglas EngelbartinMenlo Park, California at 22:30 hours on October 29, 1969.[67][68]
"We set up a telephone connection between us and the guys at SRI ...", Kleinrock ... said in an interview: "We typed the L and we asked on the phone,
Yet a revolution had begun" ....[69][70]
By December 1969, a four-node network was connected by adding the Culler-Fried Interactive Mathematics Center at theUniversity of California, Santa Barbarafollowed by theUniversity of UtahGraphics Department.[71]In the same year, Taylor helped fundALOHAnet, a system designed by professorNorman Abramsonand others at theUniversity of Hawaiʻi at Mānoathat transmitted data by radio between seven computers on four islands onHawaii.[72]
Steve Crockerformed the "Network Working Group" in 1969 at UCLA. Working withJon Posteland others,[73]he initiated and managed theRequest for Comments(RFC) process, which is still used today for proposing and distributing contributions. RFC 1, entitled "Host Software", was written by Steve Crocker and published on April 7, 1969. The protocol for establishing links between network sites in the ARPANET, theNetwork Control Program(NCP), was completed in 1970. These early years were documented in the 1972 filmComputer Networks: The Heralds of Resource Sharing.
Roberts presented the idea of packet switching to the communications professionals and faced anger and hostility. Before the ARPANET was operating, they argued that the router buffers would quickly run out. After the ARPANET was operating, they argued that packet switching would never be economical without the government subsidy. Baran faced the same rejection and thus failed to convince the military to construct a packet switching network.[74][75]
Early international collaborations via the ARPANET were sparse. Connections were made in 1973 to the Norwegian Seismic Array (NORSAR),[76]via a satellite link at theTanumEarth Station in Sweden, and toPeter Kirstein's research group atUniversity College London, which provided a gateway toBritish academic networks, the first international heterogenousresource sharingnetwork.[77]Throughout the 1970s, Leonard Kleinrock developed the mathematical theory to model and measure the performance of packet-switching technology, building on his earlier work on the application ofqueueing theoryto message switching systems.[78]By 1981, the number of hosts had grown to 213.[79]The ARPANET became the technical core of what would become the Internet, and a primary tool in developing the technologies used.
TheMerit Network[80]was formed in 1966 as the Michigan Educational Research Information Triad to explore computer networking between three of Michigan's public universities as a means to help the state's educational and economic development.[81]With initial support from theState of Michiganand theNational Science Foundation(NSF), the packet-switched network was first demonstrated in December 1971 when an interactive host to host connection was made between theIBMmainframe computersystems at theUniversity of MichiganinAnn ArborandWayne State UniversityinDetroit.[82]In October 1972 connections to theCDCmainframe atMichigan State UniversityinEast Lansingcompleted the triad. Over the next several years in addition to host to host interactive connections the network was enhanced to support terminal to host connections, host to host batch connections (remote job submission, remote printing, batch file transfer), interactive file transfer, gateways to theTymnetandTelenetpublic data networks,X.25host attachments, gateways to X.25 data networks,Ethernetattached hosts, and eventuallyTCP/IPand additionalpublic universities in Michiganjoin the network.[82][83]All of this set the stage for Merit's role in theNSFNETproject starting in the mid-1980s.
TheCYCLADESpacket switching network was a French research network designed and directed byLouis Pouzin. In 1972, he began planning the network to explore alternatives to the early ARPANET design and to supportinternetworkingresearch. First demonstrated in 1973, it was the first network to implement theend-to-end principleconceived by Donald Davies and make the hosts responsible for reliable delivery of data, rather than the network itself, usingunreliable datagrams.[84][85]Concepts implemented in this network influencedTCP/IParchitecture.[86][87]
Based on international research initiatives, particularly the contributions ofRémi Després, packet switching network standards were developed by theInternational Telegraph and Telephone Consultative Committee(ITU-T) in the form ofX.25and related standards.[88][89]X.25 is built on the concept ofvirtual circuitsemulating traditional telephone connections. In 1974, X.25 formed the basis for the SERCnet network between British academic and research sites, which later becameJANET, the United Kingdom's high-speednational research and education network(NREN). The initial ITU Standard on X.25 was approved in March 1976.[90]Existing networks, such asTelenetin the United States adopted X.25 as well as newpublic data networks, such asDATAPACin Canada andTRANSPACin France.[88][89]X.25was supplemented by theX.75protocol which enabled internetworking between national PTT networks in Europe and commercial networks in North America.[91][92][93]
TheBritish Post Office,Western Union International, andTymnetcollaborated to create the first international packet-switched network, referred to as theInternational Packet Switched Service(IPSS), in 1978. This network grew from Europe and the US to cover Canada, Hong Kong, and Australia by 1981. By the 1990s it provided a worldwide networking infrastructure.[94]
Unlike ARPANET, X.25 was commonly available for business use.Telenetoffered its Telemail electronic mail service, which was also targeted to enterprise use rather than the general email system of the ARPANET.
The first public dial-in networks used asynchronousteleprinter(TTY) terminal protocols to reach a concentrator operated in the public network. Some networks, such asTelenetandCompuServe, used X.25 to multiplex the terminal sessions into their packet-switched backbones, while others, such asTymnet, used proprietary protocols. In 1979, CompuServe became the first service to offerelectronic mailcapabilities and technical support to personal computer users. The company broke new ground again in 1980 as the first to offerreal-time chatwith itsCB Simulator. Other major dial-in networks wereAmerica Online(AOL) andProdigythat also provided communications, content, and entertainment features.[95]Manybulletin board system(BBS) networks also provided on-line access, such asFidoNetwhich was popular amongst hobbyist computer users, many of themhackersandamateur radio operators.[citation needed]
In 1979, two students at Duke University, Tom Truscott and Jim Ellis, originated the idea of using Bourne shell scripts to transfer news and messages on a serial-line UUCP connection with the nearby University of North Carolina at Chapel Hill. Following public release of the software in 1980, the mesh of UUCP hosts forwarding Usenet news rapidly expanded. UUCPnet, as it would later be named, also created gateways and links between FidoNet and dial-up BBS hosts. UUCP networks spread quickly due to the lower costs involved, the ability to use existing leased lines, X.25 links or even ARPANET connections, and the lack of strict use policies compared to later networks like CSNET and BITNET. All connections were local. By 1981 the number of UUCP hosts had grown to 550, nearly doubling to 940 in 1984.[96]
Sublink Network, operating since 1987 and officially founded in Italy in 1989, based its interconnectivity upon UUCP to redistribute mail and news groups messages throughout its Italian nodes (about 100 at the time) owned both by private individuals and small companies. Sublink Network evolved into one of the first examples of Internet technology coming into use through popular diffusion.
With so many different networking methods seeking interconnection, a method was needed to unify them.Louis Pouzininitiated theCYCLADESproject in 1972,[97]building on the work ofDonald Daviesand the ARPANET.[98]AnInternational Network Working Groupformed in 1972; active members includedVint CerffromStanford University, Alex McKenzie fromBBN, Donald Davies andRoger ScantleburyfromNPL, and Louis Pouzin andHubert ZimmermannfromIRIA.[99][100][101]Pouzin coined the termcatenetfor concatenated network.Bob MetcalfeatXerox PARCoutlined the idea ofEthernetandPARC Universal Packet(PUP) forinternetworking.Bob Kahn, now atDARPA, recruited Vint Cerf to work with him on the problem. By 1973, these groups had worked out a fundamental reformulation, in which the differences between network protocols were hidden by using a commoninternetworkingprotocol. Instead of the network being responsible for reliability, as in the ARPANET, the hosts became responsible.[2][102]
Cerf and Kahn published their ideas in May 1974,[103]which incorporated concepts implemented by Louis Pouzin and Hubert Zimmermann in the CYCLADES network.[104][105]The specification of the resulting protocol, theTransmission Control Program, was published asRFC675by the Network Working Group in December 1974.[106]It contains the first attested use of the terminternet, as a shorthand for internetwork. This software was monolithic in design using twosimplex communicationchannels for each user session.
With the role of the network reduced to a core of functionality, it became possible to exchange traffic with other networks independently from their detailed characteristics, thereby solving the fundamental problems of internetworking. DARPA agreed to fund the development of prototype software. Testing began in 1975 through concurrent implementations at Stanford, BBN andUniversity College London(UCL).[3]After several years of work, the first demonstration of a gateway between thePacket Radio network(PRNET) in the SF Bay area and the ARPANET was conducted by theStanford Research Institute. On November 22, 1977, a three network demonstration was conducted including the ARPANET, the SRI'sPacket Radio Vanon the Packet Radio Network and theAtlantic Packet Satellite Network(SATNET) including a node at UCL.[107][108]
The software was redesigned as a modular protocol stack, using full-duplex channels; between 1976 and 1977,Yogen Dalaland Robert Metcalfe among others, proposed separating TCP'sroutingand transmission control functions into two discrete layers,[109][110]which led to the splitting of the Transmission Control Program into theTransmission Control Protocol(TCP) and theInternet Protocol(IP) in version 3 in 1978.[110][111]Version 4was described inIETFpublication RFC 791 (September 1981), 792 and 793. It was installed onSATNETin 1982 and the ARPANET in January 1983 after the DoD made it standard for all military computer networking.[112][113]This resulted in a networking model that became known informally as TCP/IP. It was also referred to as the Department of Defense (DoD) model or DARPA model.[114]Cerf credits his graduate students Yogen Dalal, Carl Sunshine,Judy Estrin, Richard A. Karp, andGérard Le Lannwith important work on the design and testing.[115]DARPA sponsored or encouraged thedevelopment of TCP/IP implementationsfor many operating systems.
After the ARPANET had been up and running for several years, ARPA looked for another agency to hand off the network to; ARPA's primary mission was funding cutting-edge research and development, not running a communications utility. In July 1975, the network was turned over to theDefense Communications Agency, also part of theDepartment of Defense. In 1983, theU.S. militaryportion of the ARPANET was broken off as a separate network, theMILNET. MILNET subsequently became the unclassified but military-onlyNIPRNET, in parallel with the SECRET-levelSIPRNETandJWICSfor TOP SECRET and above. NIPRNET does have controlled security gateways to the public Internet.
The networks based on the ARPANET were government funded and therefore restricted to noncommercial uses such as research; unrelated commercial use was strictly forbidden.[116]This initially restricted connections to military sites and universities. During the 1980s, the connections expanded to more educational institutions, and a growing number of companies such asDigital Equipment CorporationandHewlett-Packard, which were participating in research projects or providing services to those who were. Data transmission speeds depended upon the type of connection, the slowest being analog telephone lines and the fastest using optical networking technology.
Several other branches of theU.S. government, theNational Aeronautics and Space Administration(NASA), theNational Science Foundation(NSF), and theDepartment of Energy(DOE) became heavily involved in Internet research and started development of a successor to ARPANET. In the mid-1980s, all three of these branches developed the first Wide Area Networks based on TCP/IP. NASA developed theNASA Science Network, NSF developedCSNETand DOE evolved theEnergy Sciences Networkor ESNet.
NASA developed the TCP/IP based NASA Science Network (NSN) in the mid-1980s, connecting space scientists to data and information stored anywhere in the world. In 1989, theDECnet-based Space Physics Analysis Network (SPAN) and the TCP/IP-based NASA Science Network (NSN) were brought together at NASA Ames Research Center creating the first multiprotocol wide area network called the NASA Science Internet, or NSI. NSI was established to provide a totally integrated communications infrastructure to the NASA scientific community for the advancement of earth, space and life sciences. As a high-speed, multiprotocol, international network, NSI provided connectivity to over 20,000 scientists across all seven continents.
In 1981, NSF supported the development of theComputer Science Network(CSNET). CSNET connected with ARPANET using TCP/IP, and ran TCP/IP overX.25, but it also supported departments without sophisticated network connections, using automated dial-up mail exchange. CSNET played a central role in popularizing the Internet outside the ARPANET.[23]
In 1986, the NSF createdNSFNET, a 56 kbit/sbackboneto support the NSF-sponsoredsupercomputingcenters. The NSFNET also provided support for the creation of regional research and education networks in the United States, and for the connection of university and college campus networks to the regional networks.[117]The use of NSFNET and the regional networks was not limited to supercomputer users and the 56 kbit/s network quickly became overloaded. NSFNET was upgraded to 1.5 Mbit/s in 1988 under a cooperative agreement with theMerit Networkin partnership withIBM,MCI, and theState of Michigan. The existence of NSFNET and the creation ofFederal Internet Exchanges(FIXes) allowed the ARPANET to be decommissioned in 1990.
NSFNET was expanded and upgraded to dedicated fiber, optical lasers and optical amplifier systems capable of delivering T3 start up speeds or 45 Mbit/s in 1991. However, the T3 transition by MCI took longer than expected, allowing Sprint to establish a coast-to-coast long-distance commercial Internet service. When NSFNET was decommissioned in 1995, its optical networking backbones were handed off to several commercial Internet service providers, including MCI,PSI Netand Sprint.[118]As a result, when the handoff was complete, Sprint and its Washington DC Network Access Points began to carry Internet traffic, and by 1996, Sprint was the world's largest carrier of Internet traffic.[119]
The research and academic community continues to develop and use advanced networks such asInternet2in the United States andJANETin the United Kingdom.
The term "internet" was reflected in the first RFC published on the TCP protocol (RFC 675:[120]Internet Transmission Control Program, December 1974) as a short form ofinternetworking, when the two terms were used interchangeably. In general, an internet was a collection of networks linked by a common protocol. In the time period when the ARPANET was connected to the newly formedNSFNETproject in the late 1980s, the term was used as the name of the network, Internet, being the large and global TCP/IP network.[121]
Opening the Internet and the fiber optic backbone to corporate and consumers increased demand for network capacity. The expense and delay of laying new fiber led providers to test a fiber bandwidth expansion alternative that had been pioneered in the late 1970s byOptelecomusing "interactions between light and matter, such as lasers and optical devices used foroptical amplificationand wave mixing".[122]This technology became known aswave division multiplexing (WDM). Bell Labs deployed a 4-channel WDM system in 1995.[123]To develop a mass capacity (dense) WDM system,Optelecomand its former head of Light Systems Research,David R. Huberformed a new venture,Ciena Corp., that deployed the world's first dense WDM system on the Sprint fiber network in June 1996.[123]This was referred to as the real start of optical networking.[124]
As interest in networking grew by needs of collaboration, exchange of data, and access of remote computing resources, the Internet technologies spread throughout the rest of the world. The hardware-agnostic approach in TCP/IP supported the use of existing network infrastructure, such as theInternational Packet Switched Service(IPSS) X.25 network, to carry Internet traffic.
Many sites unable to link directly to the Internet created simple gateways for the transfer of electronic mail, the most important application of the time. Sites with only intermittent connections usedUUCPorFidoNetand relied on the gateways between these networks and the Internet. Some gateway services went beyond simple mail peering, such as allowing access toFile Transfer Protocol(FTP) sites via UUCP or mail.[125]
Finally, routing technologies were developed for the Internet to remove the remaining centralized routing aspects. TheExterior Gateway Protocol(EGP) was replaced by a new protocol, theBorder Gateway Protocol(BGP). This provided a meshed topology for the Internet and reduced the centric architecture which ARPANET had emphasized. In 1994,Classless Inter-Domain Routing(CIDR) was introduced to support better conservation of address space which allowed use ofroute aggregationto decrease the size ofrouting tables.[126]
TheMOS transistorunderpinned the rapid growth of telecommunication bandwidth over the second half of the 20th century.[127]To address the need for transmission capacity beyond that provided byradio,satelliteand analog copper telephone lines, engineers developedoptical communicationssystems based onfiber optic cablespowered bylasersandoptical amplifiertechniques.
The concept of lasing arose from a 1917 paper byAlbert Einstein, "On the Quantum Theory of Radiation". Einstein expanded upon a conversation withMax Planckon howatomsabsorb and emitlight, part of a thought process that, with input fromErwin Schrödinger,Werner Heisenbergand others, gave rise toquantum mechanics. Specifically, in his quantum theory, Einstein mathematically determined that light could be generated not only byspontaneous emission, such as the light emitted by anincandescent lightor the Sun, but also bystimulated emission.
Forty years later, on November 13, 1957,Columbia Universityphysics studentGordon Gouldfirst realized how to make light by stimulated emission through a process ofoptical amplification. He coined the term LASER for this technology—Light Amplification by Stimulated Emission of Radiation.[128]Using Gould's light amplification method (patented as "Optically Pumped Laser Amplifier"),[129]Theodore Maimanmade the first working laser on May 16, 1960.[130]
Gould co-foundedOptelecomin 1973 to commercialize his inventions in optical fiber telecommunications,[131]just asCorning Glasswas producing the first commercial fiber optic cable in small quantities. Optelecom configured its own fiber lasers and optical amplifiers into the first commercial optical communication systems which it delivered toChevronand the US Army Missile Defense.[132]Three years later,GTEdeployed the first optical telephone system in 1977 in Long Beach, California.[133]By the early 1980s, optical networks powered by lasers,LEDand optical amplifier equipment supplied byBell Labs,NTTandPerelli[clarification needed]were used by select universities and long-distance telephone providers.[citation needed]
In 1982, Norway (NORSAR/NDRE) andPeter Kirstein'sresearch group at University College London (UCL) left the ARPANET and reconnected using TCP/IP overSATNET.[102][134]There were 40British research groupsusing UCL's link to ARPANET in 1975;[77]by 1984 there was a user population of about 150 people on both sides of the Atlantic.[135]
Between 1984 and 1988,CERNbegan installation and operation of TCP/IP to interconnect its major internal computer systems, workstations, PCs, and an accelerator control system. CERN continued to operate a limited self-developed system (CERNET) internally and several incompatible (typically proprietary) network protocols externally. There was considerable resistance in Europe towards more widespread use of TCP/IP, and the CERN TCP/IP intranets remained isolated from the Internet until 1989, when a transatlantic connection to Cornell University was established.[136][137][138]
TheComputer Science Network(CSNET) began operation in 1981 to provide networking connections to institutions that could not connect directly to ARPANET. Its first international connection was to Israel in 1984. Soon after, connections were established to computer science departments in Canada, France, and Germany.[23]
In 1988, the first international connections toNSFNETwas established by France'sINRIA,[139][140]andPiet Beertemaat theCentrum Wiskunde & Informatica(CWI) in the Netherlands.[141]Daniel Karrenberg, from CWI, visitedBen Segal, CERN's TCP/IP coordinator, looking for advice about the transition ofEUnet, the European side of the UUCP Usenet network (much of which ran over X.25 links), over to TCP/IP. The previous year, Segal had met withLen Bosackfrom the then still small companyCiscoabout purchasing some TCP/IP routers for CERN, and Segal was able to give Karrenberg advice and forward him on to Cisco for the appropriate hardware. This expanded the European portion of the Internet across the existing UUCP networks. TheNORDUnetconnection to NSFNET was in place soon after, providing open access for university students in Denmark, Finland, Iceland, Norway, and Sweden.[142]
In January 1989, CERN opened its first external TCP/IP connections.[143]This coincided with the creation of Réseaux IP Européens (RIPE), initially a group of IP network administrators who met regularly to carry out coordination work together. Later, in 1992, RIPE was formally registered as acooperativein Amsterdam.
The United Kingdom'snational research and education network(NREN),JANET, began operation in 1984 using the UK'sColoured Book protocolsand connected to NSFNET in 1989. In 1991, JANET adopted Internet Protocol on the existing network.[144][145]The same year, Dai Davies introduced Internet technology into the pan-European NREN,EuropaNet, which was built on the X.25 protocol.[146][147]TheEuropean Academic and Research Network(EARN) andRAREadopted IP around the same time, and the European Internet backboneEBONEbecame operational in 1992.[136]
Nonetheless, for a period in the late 1980s and early 1990s, engineers, organizations and nations werepolarized over the issue of which standard, theOSI modelor the Internet protocol suite would result in the best and most robust computer networks.[100][148][149]
Japan, which had built the UUCP-based networkJUNETin 1984, connected to CSNET,[23]and later to NSFNET in 1989, marking the spread of the Internet to Asia.
South Korea set up a two-node domestic TCP/IP network in 1982, the System Development Network (SDN), adding a third node the following year. SDN was connected to the rest of the world in August 1983 using UUCP (Unix-to-Unix-Copy); connected to CSNET in December 1984;[23]and formally connected to the NSFNET in 1990.[150][151][152]
In Australia, ad hoc networking to ARPA and in-between Australian universities formed in the late 1980s, based on various technologies such as X.25,UUCPNet, and via a CSNET.[23]These were limited in their connection to the global networks, due to the cost of making individual international UUCP dial-up or X.25 connections. In 1989, Australian universities joined the push towards using IP protocols to unify their networking infrastructures.AARNetwas formed in 1989 by theAustralian Vice-Chancellors' Committeeand provided a dedicated IP based network for Australia.
New Zealand adopted the UK'sColoured Book protocolsas an interim standard and established its first international IP connection to the U.S. in 1989.[153]
While developed countries with technological infrastructures were joining the Internet,developing countriesbegan to experience adigital divideseparating them from the Internet. On an essentially continental basis, they built organizations for Internet resource administration and to share operational experience, which enabled more transmission facilities to be put into place.
At the beginning of the 1990s, African countries relied upon X.25IPSSand 2400 baud modem UUCP links for international and internetwork computer communications.
In August 1995, InfoMail Uganda, Ltd., a privately held firm in Kampala now known as InfoCom, and NSN Network Services of Avon, Colorado, sold in 1997 and now known as Clear Channel Satellite, established Africa's first native TCP/IP high-speed satellite Internet services. The data connection was originally carried by a C-Band RSCC Russian satellite which connected InfoMail's Kampala offices directly to NSN's MAE-West point of presence using a private network from NSN's leased ground station in New Jersey. InfoCom's first satellite connection was just 64 kbit/s, serving a Sun host computer and twelve US Robotics dial-up modems.
In 1996, aUSAIDfunded project, theLeland Initiative, started work on developing full Internet connectivity for the continent.Guinea, Mozambique,MadagascarandRwandagainedsatellite earth stationsin 1997, followed byIvory CoastandBeninin 1998.
Africa is building an Internet infrastructure.AFRINIC, headquartered inMauritius, manages IP address allocation for the continent. As with other Internet regions, there is an operational forum, the Internet Community of Operational Networking Specialists.[157]
There are many programs to provide high-performance transmission plant, and the western and southern coasts have undersea optical cable. High-speed cables join North Africa and the Horn of Africa to intercontinental cable systems. Undersea cable development is slower for East Africa; the original joint effort betweenNew Partnership for Africa's Development (NEPAD)and the East Africa Submarine System (Eassy) has broken off and may become two efforts.[158]
TheAsia Pacific Network Information Centre (APNIC), headquartered in Australia, manages IP address allocation for the continent. APNIC sponsors an operational forum, the Asia-Pacific Regional Internet Conference on Operational Technologies (APRICOT).[159]
In South Korea, VDSL, a last mile technology developed in the 1990s by NextLevel Communications, connected corporate and consumer copper-based telephone lines to the Internet.[160]
The People's Republic of China established its first TCP/IP college network,Tsinghua University's TUNET in 1991. The PRC went on to make its first global Internet connection in 1994, between the Beijing Electro-Spectrometer Collaboration andStanford University's Linear Accelerator Center. However, China went on to implement its own digital divide by implementing a country-widecontent filter.[161]
Japan hosted the annual meeting of theInternet Society, INET'92, inKobe. Singapore developedTECHNETin 1990, and Thailand gained a global Internet connection between Chulalongkorn University and UUNET in 1992.[162]
As with the other regions,the Latin American and Caribbean Internet Addresses Registry (LACNIC)manages the IP address space and other resources for its area. LACNIC, headquartered in Uruguay, operates DNS root, reverse DNS, and other key services.
Initially, as with its predecessor networks, the system that would evolve into the Internet was primarily for government and government body use. Although commercial use was forbidden, the exact definition of commercial use was unclear and subjective.UUCPNet and the X.25IPSShad no such restrictions, which would eventually see the official barring of UUCPNet use ofARPANETandNSFNETconnections.
As a result, during the late 1980s, the firstInternet service provider(ISP) companies were formed. Companies likePSINet,UUNET,Netcom, andPortal Softwarewere formed to provide service to the regional research networks and provide alternate network access, UUCP-based email andUsenet Newsto the public. In 1989,MCI Mailbecame the first commercial email provider to get an experimental gateway to the Internet.[164]The first commercial dialup ISP in the United States wasThe World, which opened in 1989.[165]
In 1992, the U.S. Congress passed the Scientific and Advanced-Technology Act,42 U.S.C.§ 1862(g), which allowed NSF to support access by the research and education communities to computer networks which were not used exclusively for research and education purposes, thus permitting NSFNET to interconnect with commercial networks.[166][167]This caused controversy within the research and education community, who were concerned commercial use of the network might lead to an Internet that was less responsive to their needs, and within the community of commercial network providers, who felt that government subsidies were giving an unfair advantage to some organizations.[168]
By 1990, ARPANET's goals had been fulfilled and new networking technologies exceeded the original scope and the project came to a close. New network service providers includingPSINet,Alternet, CERFNet, ANS CO+RE, and many others were offering network access to commercial customers.NSFNETwas no longer the de facto backbone and exchange point of the Internet. TheCommercial Internet eXchange(CIX),Metropolitan Area Exchanges(MAEs), and laterNetwork Access Points(NAPs) were becoming the primary interconnections between many networks. The final restrictions on carrying commercial traffic ended on April 30, 1995, when the National Science Foundation ended its sponsorship of the NSFNET Backbone Service.[169][170]NSF provided initial support for the NAPs and interim support to help the regional research and education networks transition to commercial ISPs. NSF also sponsored thevery high speed Backbone Network Service(vBNS) which continued to provide support for the supercomputing centers and research and education in the United States.[171]
An event held on 11 January 1994,The Superhighway SummitatUCLA's Royce Hall, was the "first public conference bringing together all of the major industry, government and academic leaders in the field [and] also began the national dialogue about theInformation Superhighwayand its implications".[172]
The invention of theWorld Wide WebbyTim Berners-LeeatCERN, as an application on the Internet,[173]brought many social and commercial uses to what was, at the time, a network of networks for academic and research institutions.[174][175]The Web opened to the public in 1991 and began to enter general use in 1993–4, whenwebsites for everyday usestarted to become available.[176]
During the first decade or so of the public Internet, the immense changes it would eventually enable in the 2000s were still nascent. In terms of providing context for this period,mobile cellular devices("smartphones" and other cellular devices) which today provide near-universal access, were used for business and not a routine household item owned by parents and children worldwide.Social mediain the modern sense had yet to come into existence, laptops were bulky and most households did not have computers. Data rates were slow and most people lacked means to video or digitize video; media storage was transitioning slowly fromanalog tapetodigitaloptical discs(DVDand to an extent still,floppy disctoCD). Enabling technologies used from the early 2000s such asPHP, modernJavaScriptandJava, technologies such asAJAX,HTML 4(and its emphasis onCSS), and varioussoftware frameworks, which enabled and simplified speed of web development, largely awaited invention and their eventual widespread adoption.
The Internet was widely used formailing lists,emails,creating and distributing mapswith tools likeMapQuest,e-commerceand early popularonline shopping(AmazonandeBayfor example),online forumsandbulletin boards, and personal websites andblogs, and use was growing rapidly, but by more modern standards, the systems used were static and lacked widespread social engagement. It awaited a number of events in the early 2000s to change from a communications technology to gradually develop into a key part of global society's infrastructure.
Typical design elements of these "Web 1.0" era websites included:[177] static pages instead of dynamic HTML;[178] content served from filesystems instead of relational databases; pages built using Server Side Includes or CGI instead of a web application written in a dynamic programming language; HTML 3.2-era structures such as frames and tables to create page layouts; online guestbooks; overuse of GIF buttons and similar small graphics promoting particular items;[179] and HTML forms sent via email. (Support for server-side scripting was rare on shared servers, so the usual feedback mechanism was via email, using mailto forms and the visitor's email program.[180])
During the period 1997 to 2001, the firstspeculative investmentbubblerelated to the Internet took place, in which"dot-com" companies(referring to the ".com"top level domainused by businesses) were propelled to exceedingly high valuations as investors rapidly stokedstock values, followed by amarket crash; the firstdot-com bubble. However this only temporarily slowed enthusiasm and growth, which quickly recovered and continued to grow.
Thehistory of the World Wide Webup to around 2004 was retrospectively named and described by some as "Web 1.0".[181]
In the final stage of IPv4 address exhaustion, the last IPv4 address block was assigned in January 2011 at the level of the regional Internet registries.[182] IPv4 uses 32-bit addresses, which limits the address space to 2^32 addresses, i.e. 4,294,967,296 addresses.[111] IPv4 is in the process of being replaced by IPv6, its successor, which uses 128-bit addresses, providing 2^128 addresses, i.e. 340,282,366,920,938,463,463,374,607,431,768,211,456,[183] a vastly increased address space. The shift to IPv6 is expected to take a long time to complete.[182]
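For a sense of the difference in scale, the two address spaces can be compared directly; the following short Python sketch (illustrative only, using an address from the documentation range) also shows an IPv4 address written in its IPv4-mapped IPv6 form with the standard ipaddress module:

    # Comparing the IPv4 and IPv6 address spaces described above.
    import ipaddress

    ipv4_space = 2 ** 32    # 4,294,967,296 addresses
    ipv6_space = 2 ** 128   # roughly 3.4 x 10**38 addresses
    print(f"IPv4 addresses: {ipv4_space:,}")
    print(f"IPv6 addresses: {ipv6_space:,}")

    # The same host address written in the IPv4-mapped IPv6 form.
    v4 = ipaddress.IPv4Address("203.0.113.7")          # documentation-range example
    v6 = ipaddress.IPv6Address("::ffff:203.0.113.7")   # IPv4-mapped IPv6 notation
    print(v4, "->", v6.exploded)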
The rapid technical advances that would propel the Internet into its place as a social system, which has completely transformed the way humans interact with each other, took place during a relatively short period from around 2005 to 2010, around the time in the late 2000s when the number of IoT devices surpassed the number of humans alive. They included:
The term "Web 2.0" describeswebsitesthat emphasizeuser-generated content(including user-to-user interaction),usability, andinteroperability. It first appeared in a January 1999 article called "Fragmented Future" written byDarcy DiNucci, a consultant onelectronic information design, where she wrote:[184][185][186][187]
The Web we know now, which loads into a browser window in essentially static screenfuls, is only an embryo of the Web to come. The first glimmerings of Web 2.0 are beginning to appear, and we are just starting to see how that embryo might develop. The Web will be understood not as screenfuls of text and graphics but as a transport mechanism, the ether through which interactivity happens. It will [...] appear on your computer screen, [...] on your TV set [...] your car dashboard [...] your cell phone [...] hand-held game machines [...] maybe even your microwave oven.
The term resurfaced during 2002–2004,[188][189][190][191]and gained prominence in late 2004 following presentations byTim O'Reillyand Dale Dougherty at the firstWeb 2.0 Conference. In their opening remarks,John Battelleand Tim O'Reilly outlined their definition of the "Web as Platform", where software applications are built upon the Web as opposed to upon the desktop. The unique aspect of this migration, they argued, is that "customers are building your business for you".[192][non-primary source needed]They argued that the activities of users generating content (in the form of ideas, text, videos, or pictures) could be "harnessed" to create value.
"Web 2.0" does not refer to an update to any technical specification, but rather to cumulative changes in the way Web pages are made and used. "Web 2.0" describes an approach, in which sites focus substantially upon allowing users to interact and collaborate with each other in asocial mediadialogue as creators ofuser-generated contentin avirtual community, in contrast to Web sites where people are limited to the passive viewing ofcontent. Examples of Web 2.0 includesocial networking services,blogs,wikis,folksonomies,video sharingsites,hosted services,Web applications, andmashups.[193]Terry Flew, in his 3rd edition ofNew Media, described what he believed to characterize the differences between Web 1.0 and Web 2.0:
[The] move from personal websites to blogs and blog site aggregation, from publishing to participation, from web content as the outcome of large up-front investment to an ongoing and interactive process, and from content management systems to links based on tagging (folksonomy).[194]
This era saw several household names gain prominence through their community-oriented operation –YouTube, Twitter, Facebook,Redditand Wikipedia being some examples.
Telephone systems have been slowly adoptingvoice over IPsince 2003. Early experiments proved that voice can be converted to digital packets and sent over the Internet. The packets are collected and converted back to analog voice.[195][196][197]
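As a toy illustration of that packetize-and-reassemble step (not any particular VoIP protocol such as RTP or SIP), digitized samples can be split into small sequence-numbered packets and rejoined in order at the receiver:

    # Toy sketch of the idea behind voice over IP: digitized audio is split into
    # small packets, each tagged with a sequence number, and reassembled in order
    # at the receiving end. Real systems use dedicated protocols; this only
    # illustrates the packetising/reassembly step mentioned above.
    def packetize(samples: bytes, payload_size: int = 160):
        return [(seq, samples[i:i + payload_size])
                for seq, i in enumerate(range(0, len(samples), payload_size))]

    def reassemble(packets):
        # Packets may arrive out of order; sort by sequence number before joining.
        return b"".join(payload for _, payload in sorted(packets))

    audio = bytes(range(256)) * 10          # stand-in for digitized voice samples
    packets = packetize(audio)
    assert reassemble(reversed(list(packets))) == audio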
The process of change that generally coincided with Web 2.0 was itself greatly accelerated and transformed only a short time later by the increasing growth in mobile devices. This mobile revolution meant that computers in the form of smartphones became something many people used, took with them everywhere, communicated with, used for photographs and videos they instantly shared or to shop or seek information "on the move" – and used socially, as opposed to items on a desk at home or just used for work.[citation needed]
Location-based services, services using location and other sensor information, and crowdsourcing (frequently but not always location based) became common, with posts tagged by location and websites and services becoming location aware. Mobile-targeted websites (such as "m.example.com") became common, designed especially for the new devices used. Netbooks, ultrabooks, widespread 4G and Wi-Fi, and mobile chips capable of running at nearly the power of desktops from not many years before on far lower power usage, became enablers of this stage of Internet development, and the term "App" (short for "Application program" or "Program") became popularized, as did the "App store".
This "mobile revolution" has allowed for people to have a nearly unlimited amount of information at all times. With the ability to access the internet from cell phones came a change in the way media was consumed. Media consumption statistics show that over half of media consumption between those aged 18 and 34 were using a smartphone.[198]
The first Internet link intolow Earth orbitwas established on January 22, 2010, when astronautT. J. Creamerposted the first unassisted update to his Twitter account from theInternational Space Station, marking the extension of the Internet into space.[199](Astronauts at the ISS had used email and Twitter before, but these messages had been relayed to the ground through a NASA data link before being posted by a human proxy.) This personal Web access, which NASA calls the Crew Support LAN, uses the space station's high-speedKu bandmicrowave link. To surf the Web, astronauts can use a station laptop computer to control a desktop computer on Earth, and they can talk to their families and friends on Earth usingVoice over IPequipment.[200]
Communication with spacecraft beyond Earth orbit has traditionally been over point-to-point links through the Deep Space Network. Each such data link must be manually scheduled and configured. In the late 1990s NASA and Google began working on a new network protocol, delay-tolerant networking (DTN), which automates this process, allows networking of spaceborne transmission nodes, and takes into account that spacecraft can temporarily lose contact because they move behind the Moon or planets, or because space weather disrupts the connection. Under such conditions, DTN retransmits data packets instead of dropping them, as the standard TCP/IP Internet Protocol does. NASA conducted the first field test of what it calls the "deep space internet" in November 2008.[201] Testing of DTN-based communications between the International Space Station and Earth (now termed disruption-tolerant networking) has been ongoing since March 2009, and was scheduled to continue until March 2014.[202][needs update]
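A toy model of that store-and-forward behaviour (an illustration only, not NASA's implementation) keeps bundles queued whenever the link is unavailable and sends them when a contact window opens, rather than dropping them:

    # Toy illustration of delay-tolerant forwarding: bundles wait in storage
    # while the link is down (e.g. a spacecraft behind a planet) and are sent
    # when contact is re-established, instead of being discarded.
    from collections import deque

    def forward_bundles(bundles, link_up_schedule):
        """link_up_schedule is a sequence of booleans, one per time step."""
        queue = deque(bundles)        # stored bundles awaiting a contact window
        delivered = []
        for step, link_up in enumerate(link_up_schedule):
            if link_up and queue:
                delivered.append((step, queue.popleft()))
        return delivered, list(queue)

    delivered, pending = forward_bundles(["b1", "b2", "b3"],
                                         [False, True, False, True, True])
    print(delivered)   # [(1, 'b1'), (3, 'b2'), (4, 'b3')]
    print(pending)     # []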
This network technology is supposed to ultimately enable missions that involve multiple spacecraft where reliable inter-vessel communication might take precedence over vessel-to-Earth downlinks. According to a February 2011 statement by Google'sVint Cerf, the so-called "bundle protocols" have been uploaded to NASA'sEPOXImission spacecraft (which is in orbit around the Sun) and communication with Earth has been tested at a distance of approximately 80 light seconds.[203]
As aglobally distributed networkof voluntarily interconnected autonomous networks, the Internet operates without a central governing body. Each constituent network chooses the technologies and protocols it deploys from the technical standards that are developed by theInternet Engineering Task Force(IETF).[204]However, successful interoperation of many networks requires certain parameters that must be common throughout the network. For managing such parameters, theInternet Assigned Numbers Authority(IANA) oversees the allocation and assignment of various technical identifiers.[205]In addition, theInternet Corporation for Assigned Names and Numbers(ICANN) provides oversight and coordination for the two principalname spacesin the Internet, theInternet Protocol address spaceand theDomain Name System.
The IANA function was originally performed by USC Information Sciences Institute (ISI), and it delegated portions of this responsibility with respect to numeric network and autonomous system identifiers to theNetwork Information Center(NIC) atStanford Research Institute(SRI International) inMenlo Park, California. ISI'sJonathan Postelmanaged the IANA, served as RFC Editor and performed other key roles until his death in 1998.[206]
As the early ARPANET grew, hosts were referred to by names, and a HOSTS.TXT file would be distributed fromSRI Internationalto each host on the network. As the network grew, this became cumbersome. A technical solution came in the form of theDomain Name System, created by ISI'sPaul Mockapetrisin 1983.[207]The Defense Data Network—Network Information Center (DDN-NIC) at SRI handled all registration services, including thetop-level domains(TLDs) of.mil,.gov,.edu,.org,.net,.comand.us,root nameserveradministration and Internet number assignments under aUnited States Department of Defensecontract.[205]In 1991, the Defense Information Systems Agency (DISA) awarded the administration and maintenance of DDN-NIC (managed by SRI up until this point) to Government Systems, Inc., who subcontracted it to the small private-sectorNetwork Solutions, Inc.[208][209]
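The difference between the two naming schemes can be sketched in a few lines: a HOSTS.TXT-style lookup consults a static table copied to every machine, while a DNS-style lookup delegates to the resolver. The table entry below is purely illustrative:

    # Sketch contrasting a static hosts-table lookup (the HOSTS.TXT model)
    # with a live DNS query through the system resolver.
    import socket

    hosts_table = {
        "example.com": "93.184.216.34",   # illustrative entry only
    }

    def lookup(name: str) -> str:
        if name in hosts_table:            # HOSTS.TXT-style: flat file shared by all hosts
            return hosts_table[name]
        return socket.gethostbyname(name)  # DNS-style: ask the resolver instead

    print(lookup("example.com"))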
The increasing cultural diversity of the Internet also posed administrative challenges for centralized management of the IP addresses. In October 1992, the Internet Engineering Task Force (IETF) published RFC 1366,[210]which described the "growth of the Internet and its increasing globalization" and set out the basis for an evolution of the IP registry process, based on a regionally distributed registry model. This document stressed the need for a single Internet number registry to exist in each geographical region of the world (which would be of "continental dimensions"). Registries would be "unbiased and widely recognized by network providers and subscribers" within their region.
The RIPE Network Coordination Centre (RIPE NCC) was established as the first RIR in May 1992. The second RIR, the Asia Pacific Network Information Centre (APNIC), was established in Tokyo in 1993, as a pilot project of the Asia Pacific Networking Group.[211]
Since at this point in history most of the growth on the Internet was coming from non-military sources, it was decided that theDepartment of Defensewould no longer fund registration services outside of the .mil TLD. In 1993 the U.S.National Science Foundation, after a competitive bidding process in 1992, created theInterNICto manage the allocations of addresses and management of the address databases, and awarded the contract to three organizations. Registration Services would be provided byNetwork Solutions; Directory and Database Services would be provided byAT&T; and Information Services would be provided byGeneral Atomics.[212]
Over time, after consultation with the IANA, the IETF, RIPE NCC, APNIC, and the Federal Networking Council (FNC), the decision was made to separate the management of domain names from the management of IP numbers.[211] Following the examples of RIPE NCC and APNIC, it was recommended that management of IP address space then administered by the InterNIC should be under the control of those that use it, specifically the ISPs, end-user organizations, corporate entities, universities, and individuals. As a result, the American Registry for Internet Numbers (ARIN) was established in December 1997 as an independent, not-for-profit corporation by direction of the National Science Foundation and became the third Regional Internet Registry.[213]
In 1998, both the IANA and remaining DNS-related InterNIC functions were reorganized under the control ofICANN, a Californianon-profit corporationcontracted by theUnited States Department of Commerceto manage a number of Internet-related tasks. As these tasks involved technical coordination for two principal Internet name spaces (DNS names and IP addresses) created by the IETF, ICANN also signed a memorandum of understanding with theIABto define the technical work to be carried out by the Internet Assigned Numbers Authority.[214]The management of Internet address space remained with the regional Internet registries, which collectively were defined as a supporting organization within the ICANN structure.[215]ICANN provides central coordination for the DNS system, including policy coordination for the split registry / registrar system, with competition among registry service providers to serve each top-level-domain and multiple competing registrars offering DNS services to end-users.
TheInternet Engineering Task Force(IETF) is the largest and most visible of several loosely related ad-hoc groups that provide technical direction for the Internet, including theInternet Architecture Board(IAB), theInternet Engineering Steering Group(IESG), and theInternet Research Task Force(IRTF).
The IETF is a loosely self-organized group of international volunteers who contribute to the engineering and evolution of Internet technologies. It is the principal body engaged in the development of new Internet standard specifications. Much of the work of the IETF is organized intoWorking Groups. Standardization efforts of the Working Groups are often adopted by the Internet community, but the IETF does not control or patrol the Internet.[216][217]
The IETF grew out of quarterly meetings with U.S. government-funded researchers, starting in January 1986. Non-government representatives were invited by the fourth IETF meeting in October 1986. The concept of Working Groups was introduced at the fifth meeting in February 1987. The seventh meeting in July 1987 was the first meeting with more than one hundred attendees. In 1992, theInternet Society, a professional membership society, was formed and IETF began to operate under it as an independent international standards body. The first IETF meeting outside of the United States was held in Amsterdam, the Netherlands, in July 1993. Today, the IETF meets three times per year and attendance has been as high as ca. 2,000 participants. Typically one in three IETF meetings are held in Europe or Asia. The number of non-US attendees is typically ca. 50%, even at meetings held in the United States.[216]
The IETF is not a legal entity, has no governing board, no members, and no dues. The closest status resembling membership is being on an IETF or Working Group mailing list. IETF volunteers come from all over the world and from many different parts of the Internet community. The IETF works closely with and under the supervision of theInternet Engineering Steering Group(IESG)[218]and theInternet Architecture Board(IAB).[219]TheInternet Research Task Force(IRTF) and theInternet Research Steering Group(IRSG), peer activities to the IETF and IESG under the general supervision of the IAB, focus on longer-term research issues.[216][220]
RFCsare the main documentation for the work of the IAB, IESG, IETF, and IRTF.[221]Originally intended as requests for comments, RFC 1, "Host Software", was written by Steve Crocker atUCLAin April 1969. These technical memos documented aspects of ARPANET development. They were edited byJon Postel, the firstRFC Editor.[216][222]
RFCs cover a wide range of information from proposed standards, draft standards, full standards, best practices, experimental protocols, history, and other informational topics.[223]RFCs can be written by individuals or informal groups of individuals, but many are the product of a more formal Working Group. Drafts are submitted to the IESG either by individuals or by the Working Group Chair. An RFC Editor, appointed by the IAB, separate from IANA, and working in conjunction with the IESG, receives drafts from the IESG and edits, formats, and publishes them. Once an RFC is published, it is never revised. If the standard it describes changes or its information becomes obsolete, the revised standard or updated information will be re-published as a new RFC that "obsoletes" the original.[216][222]
TheInternet Society(ISOC) is an international, nonprofit organization founded during 1992 "to assure the open development, evolution and use of the Internet for the benefit of all people throughout the world". With offices near Washington, DC, US, and in Geneva, Switzerland, ISOC has a membership base comprising more than 80 organizational and more than 50,000 individual members. Members also form "chapters" based on either common geographical location or special interests. There are currently more than 90 chapters around the world.[224]
ISOC provides financial and organizational support to and promotes the work of the standards settings bodies for which it is the organizational home: theInternet Engineering Task Force(IETF), theInternet Architecture Board(IAB), theInternet Engineering Steering Group(IESG), and theInternet Research Task Force(IRTF). ISOC also promotes understanding and appreciation of theInternet modelof open, transparent processes and consensus-based decision-making.[225]
Since the 1990s, theInternet's governanceand organization has been of global importance to governments, commerce, civil society, and individuals. The organizations which held control of certain technical aspects of the Internet were the successors of the old ARPANET oversight and the current decision-makers in the day-to-day technical aspects of the network. While recognized as the administrators of certain aspects of the Internet, their roles and their decision-making authority are limited and subject to increasing international scrutiny and increasing objections. These objections have led to the ICANN removing themselves from relationships with first theUniversity of Southern Californiain 2000,[226]and in September 2009 gaining autonomy from the US government by the ending of its longstanding agreements, although some contractual obligations with the U.S. Department of Commerce continued.[227][228][229]Finally, on October 1, 2016, ICANN ended its contract with the United States Department of Commerce National Telecommunications and Information Administration (NTIA), allowing oversight to pass to the global Internet community.[230]
The IETF, with financial and organizational support from the Internet Society, continues to serve as the Internet's ad-hoc standards body and issuesRequest for Comments.
In November 2005, theWorld Summit on the Information Society, held inTunis, called for anInternet Governance Forum(IGF) to be convened byUnited Nations Secretary General. The IGF opened an ongoing, non-binding conversation among stakeholders representing governments, the private sector, civil society, and the technical and academic communities about the future of Internet governance. The first IGF meeting was held in October/November 2006 with follow up meetings annually thereafter.[231]Since WSIS, the term "Internet governance" has been broadened beyond narrow technical concerns to include a wider range of Internet-related policy issues.[232][233]
Tim Berners-Lee, inventor of the web, was becoming concerned about threats to the web's future and in November 2009 at the IGF in Washington DC launched theWorld Wide Web Foundation(WWWF) to campaign to make the web a safe and empowering tool for the good of humanity with access to all.[234][235]In November 2019 at the IGF in Berlin, Berners-Lee and the WWWF went on to launch theContract for the Web, a campaign initiative to persuade governments, companies and citizens to commit to nine principles to stop "misuse" with the warning "If we don't act now - and act together - to prevent the web being misused by those who want to exploit, divide and undermine, we are at risk of squandering" (its potential for good).[236]
Due to its prominence and immediacy as an effective means of mass communication, the Internet has also become more politicized as it has grown. This has in turn led discourses and activities that would once have taken place in other ways to migrate to being mediated by the internet.
Examples include political activities such aspublic protestandcanvassingof support andvotes, but also:
On April 23, 2014, theFederal Communications Commission(FCC) was reported to be considering a new rule that would permitInternet service providersto offer content providers a faster track to send content, thus reversing their earliernet neutralityposition.[237][238][239]A possible solution to net neutrality concerns may bemunicipal broadband, according toProfessor Susan Crawford, a legal and technology expert atHarvard Law School.[240]On May 15, 2014, the FCC decided to consider two options regarding Internet services: first, permit fast and slow broadband lanes, thereby compromising net neutrality; and second, reclassify broadband as a telecommunication service, thereby preserving net neutrality.[241][242]On November 10, 2014,President Obamarecommended the FCC reclassify broadband Internet service as a telecommunications service in order to preservenet neutrality.[243][244][245]On January 16, 2015,Republicanspresented legislation, in the form of aU.S. CongressHRdiscussion draft bill, that makes concessions to net neutrality but prohibits the FCC from accomplishing the goal or enacting any further regulation affectingInternet service providers(ISPs).[246][247]On January 31, 2015,AP Newsreported that the FCC will present the notion of applying ("with some caveats")Title II (common carrier)of theCommunications Act of 1934to the internet in a vote expected on February 26, 2015.[248][249][250][251][252]Adoption of this notion would reclassify internet service from one of information to one oftelecommunications[253]and, according toTom Wheeler, chairman of the FCC, ensurenet neutrality.[254][255]The FCC is expected to enforce net neutrality in its vote, according toThe New York Times.[256][257]
On February 26, 2015, the FCC ruled in favor ofnet neutralityby applyingTitle II (common carrier)of theCommunications Act of 1934andSection 706of theTelecommunications act of 1996to the Internet.[258][259][260]The FCC chairman,Tom Wheeler, commented, "This is no more a plan to regulate the Internet than theFirst Amendmentis a plan to regulate free speech. They both stand for the same concept."[261]
On March 12, 2015, the FCC released the specific details of the net neutrality rules.[262][263][264]On April 13, 2015, the FCC published the final rule on its new "Net Neutrality" regulations.[265][266]
On December 14, 2017, the FCC repealed their March 12, 2015 decision by a 3–2 vote regarding net neutrality rules.[267]
Emailhas often been called thekiller applicationof the Internet. It predates the Internet, and was a crucial tool in creating it. Email started in 1965 as a way for multiple users of atime-sharingmainframe computerto communicate. Although the history is undocumented, among the first systems to have such a facility were theSystem Development Corporation(SDC)Q32and theCompatible Time-Sharing System(CTSS) at MIT.[268]
The ARPANET computer network made a large contribution to the evolution of electronic mail. Experimental inter-system mail transfers took place on the ARPANET shortly after its creation.[269] In 1971 Ray Tomlinson created what was to become the standard Internet electronic mail addressing format, using the @ sign to separate mailbox names from host names.[270]
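The addressing convention itself is simple enough to show in two lines of Python; the address below is a placeholder:

    # The @ sign separates the mailbox (local part) from the host name.
    address = "user@example.com"
    mailbox, host = address.rsplit("@", 1)
    print(mailbox, host)   # user example.com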
A number of protocols were developed to deliver messages among groups of time-sharing computers over alternative transmission systems, such asUUCPandIBM'sVNETemail system. Email could be passed this way between a number of networks, includingARPANET,BITNETandNSFNET, as well as to hosts connected directly to other sites via UUCP. See thehistory of SMTPprotocol.
In addition, UUCP allowed the publication of text files that could be read by many others. The News software developed by Steve Daniel andTom Truscottin 1979 was used to distribute news and bulletin board-like messages. This quickly grew into discussion groups, known asnewsgroups, on a wide range of topics. On ARPANET and NSFNET similar discussion groups would form viamailing lists, discussing both technical issues and more culturally focused topics (such as science fiction, discussed on the sflovers mailing list).
During the early years of the Internet, email and similar mechanisms were also fundamental in allowing people to access resources that were otherwise unavailable because of the absence of online connectivity. UUCP was often used to distribute files through the 'alt.binaries' groups. Also, FTP e-mail gateways allowed people who lived outside the US and Europe to download files using FTP commands written inside email messages. The file was encoded, broken into pieces and sent by email; the receiver had to reassemble and decode it later, and this was the only way for people living overseas to download items such as the earlier Linux versions using the slow dial-up connections available at the time. After the popularization of the Web and the HTTP protocol, such tools were slowly abandoned.
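A rough sketch of that encode, split and reassemble workflow follows; uuencoding was typical historically, but base64 is used here purely for brevity, and the payload is a stand-in:

    # Rough sketch of the FTP-by-email workflow described above: a binary file is
    # encoded as text, split into message-sized pieces, and later recombined and
    # decoded by the recipient.
    import base64

    def split_for_mail(data: bytes, chunk_size: int = 1000):
        text = base64.b64encode(data).decode("ascii")
        return [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]

    def recombine(chunks) -> bytes:
        return base64.b64decode("".join(chunks))

    payload = b"pretend this is a software archive" * 500
    parts = split_for_mail(payload)        # each part small enough for one message
    assert recombine(parts) == payload
    print(f"{len(parts)} message-sized pieces")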
Resource or file sharing has been an important activity on computer networks from well before the Internet was established and was supported in a variety of ways includingbulletin board systems(1978),Usenet(1980),Kermit(1981), and many others. TheFile Transfer Protocol(FTP) for use on the Internet was standardized in 1985 and is still in use today.[271]A variety of tools were developed to aid the use of FTP by helping users discover files they might want to transfer, including theWide Area Information Server(WAIS) in 1991,Gopherin 1991,Archiein 1991,Veronicain 1992,Jugheadin 1993,Internet Relay Chat(IRC) in 1988, and eventually theWorld Wide Web(WWW) in 1991 withWeb directoriesandWeb search engines.
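FTP itself remains simple to drive from standard libraries; a minimal session might look like the following sketch, where the server name and paths are placeholders rather than a real archive:

    # Minimal FTP session using Python's standard ftplib; host and paths are
    # placeholders, not a real service.
    from ftplib import FTP

    with FTP("ftp.example.org") as ftp:    # hypothetical server
        ftp.login()                        # anonymous login
        ftp.cwd("/pub")
        print(ftp.nlst())                  # list files available for transfer
        with open("README", "wb") as f:
            ftp.retrbinary("RETR README", f.write)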
In 1999,Napsterbecame the firstpeer-to-peer file sharingsystem.[272]Napster used a central server for indexing and peer discovery, but the storage and transfer of files was decentralized. A variety of peer-to-peer file sharing programs and services with different levels of decentralization andanonymityfollowed, including:Gnutella,eDonkey2000, andFreenetin 2000,FastTrack,Kazaa,Limewire, andBitTorrentin 2001, and Poisoned in 2003.[273]
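The hybrid design Napster popularised, a central index with peer-to-peer transfer, can be modelled in miniature; the peer addresses and file names here are invented for illustration:

    # Toy model of a central index for peer-to-peer sharing: the index records
    # which peers advertise which files; the files themselves would be fetched
    # directly from those peers.
    index = {}   # file name -> set of peer addresses advertising it

    def announce(peer: str, files):
        for name in files:
            index.setdefault(name, set()).add(peer)

    def find_peers(name: str):
        return sorted(index.get(name, set()))

    announce("10.0.0.5:6699", ["song_a.mp3", "song_b.mp3"])
    announce("10.0.0.9:6699", ["song_b.mp3"])
    print(find_peers("song_b.mp3"))   # ['10.0.0.5:6699', '10.0.0.9:6699']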
All of these tools are general purpose and can be used to share a wide variety of content, but sharing of music files, software, and later movies and videos are major uses.[274]And while some of this sharing is legal, large portions are not. Lawsuits and other legal actions caused Napster in 2001, eDonkey2000 in 2005,Kazaain 2006, and Limewire in 2010 to shut down or refocus their efforts.[275][276]The Pirate Bay, founded in Sweden in 2003, continues despite atrial and appeal in 2009 and 2010that resulted in jail terms and large fines for several of its founders.[277]File sharing remains contentious and controversial with charges of theft ofintellectual propertyon the one hand and charges ofcensorshipon the other.[278][279]
File hosting allowed people to expand beyond their computers' hard drives and "host" their files on a server. Most file hosting services offer free storage, as well as larger amounts of storage for a fee. These services have greatly expanded the internet for business and personal use.
Google Drive, launched on April 24, 2012, has become the most popular file hosting service. Google Drive allows users to store, edit, and share files with themselves and other users. Not only does this application allow for file editing, hosting, and sharing; it also gives access to Google's own free office programs, such as Google Docs, Google Slides, and Google Sheets. This application has served as a useful tool for university professors and students, as well as those in need of cloud storage.[280][281]
Dropbox, released in June 2007, is a similar file hosting service that allows users to keep all of their files in a folder on their computer, which is synced with Dropbox's servers. This differs from Google Drive in that it is not web-browser based. Dropbox now works to keep workers and files in sync and efficient.[282]
Mega, with over 200 million users, is an encrypted storage and communication system that offers users free and paid storage, with an emphasis on privacy.[283] As three of the largest file hosting services, Google Drive, Dropbox, and Mega represent the core ideas and values of such services.
The earliest form of online piracy began with a P2P (peer-to-peer) music sharing service named Napster, launched in 1999. Services like LimeWire, The Pirate Bay, and BitTorrent allowed anyone to engage in online piracy, sending ripples through the media industry and changing it as a whole.[284]
Total global mobile data traffic reached 588 exabytes during 2020,[285]a 150-fold increase from 3.86 exabytes/year in 2010.[286]Most recently, smartphones accounted for 95% of this mobile data traffic with video accounting for 66% by type of data.[285]Mobile traffic travels by radio frequency to the closest cell phone tower and its base station where the radio signal is converted into an optical signal that is transmitted over high-capacity optical networking systems that convey the information to data centers. The optical backbones enable much of this traffic as well as a host of emerging mobile services including the Internet of things, 3-D virtual reality, gaming and autonomous vehicles. The most popular mobile phone application is texting, of which 2.1 trillion messages were logged in 2020.[287]The texting phenomenon began on December 3, 1992, when Neil Papworth sent the first text message of "Merry Christmas" over a commercial cell phone network to the CEO of Vodafone.[288]
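The growth factor quoted above is easy to verify from the two figures:

    # Quick check of the roughly 150-fold growth in mobile data traffic.
    traffic_2010 = 3.86   # exabytes per year
    traffic_2020 = 588    # exabytes per year
    print(round(traffic_2020 / traffic_2010))   # about 152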
The first mobile phone with Internet connectivity was theNokia 9000 Communicator, launched in Finland in 1996. The viability of Internet services access on mobile phones was limited until prices came down from that model, and network providers started to develop systems and services conveniently accessible on phones.NTT DoCoMoin Japan launched the first mobile Internet service,i-mode, in 1999 and this is considered the birth of the mobile phone Internet services. In 2001, the mobile phone email system by Research in Motion (nowBlackBerry Limited) for theirBlackBerryproduct was launched in America. To make efficient use of the small screen andtiny keypadand one-handed operation typical of mobile phones, a specific document and networking model was created for mobile devices, theWireless Application Protocol(WAP). Most mobile device Internet services operate using WAP. The growth of mobile phone services was initially a primarily Asian phenomenon with Japan, South Korea and Taiwan all soon finding the majority of their Internet users accessing resources by phone rather than by PC.[289]Developing countries followed, with India, South Africa, Kenya, the Philippines, and Pakistan all reporting that the majority of their domestic users accessed the Internet from a mobile phone rather than a PC. The European and North American use of the Internet was influenced by a large installed base of personal computers, and the growth of mobile phone Internet access was more gradual, but had reached national penetration levels of 20–30% in most Western countries.[290]The cross-over occurred in 2008, when more Internet access devices were mobile phones than personal computers. In many parts of the developing world, the ratio is as much as 10 mobile phone users to one PC user.[291]
Global Internet traffic continues to grow at a rapid rate, rising 23% from 2020 to 2021[292] when the number of active Internet users reached 4.66 billion people, representing half of the global population. Demand for data, and the capacity to satisfy it, were forecast to reach 717 terabits per second in 2021.[293] This capacity stems from the optical amplification and WDM systems that are the common basis of virtually every metro, regional, national, international and submarine telecommunications network.[294] These optical networking systems have been installed throughout the 5 billion kilometers of fiber optic lines deployed around the world.[295] Continued growth in traffic is expected for the foreseeable future from a combination of new users, increased mobile phone adoption, machine-to-machine connections, connected homes, 5G devices and the burgeoning requirement for cloud and Internet services such as Amazon, Facebook, Apple Music and YouTube.
There are nearly insurmountable problems in supplying ahistoriographyof the Internet's development. The process of digitization represents a twofold challenge both for historiography in general and, in particular, for historical communication research.[296]A sense of the difficulty in documenting early developments that led to the internet can be gathered from the quote:
"The Arpanet period is somewhat well documented because the corporation in charge –BBN– left a physical record. Moving into theNSFNETera, it became an extraordinarily decentralized process. The record exists in people's basements, in closets. ... So much of what happened was done verbally and on the basis of individual trust."
Notable works on the subject were published byKatie Hafnerand Matthew Lyon,Where Wizards Stay Up Late: The Origins Of The Internet(1996),Roy Rosenzweig,Wizards, Bureaucrats, Warriors, and Hackers: Writing the History of the Internet(1998), andJanet Abbate,Inventing the Internet(2000).[298]
Most scholarship and literature on the Internet lists ARPANET as the prior network that was iterated on and studied to create it,[299]although other early computer networks and experiments existed alongside or before ARPANET.[300]
These histories of the Internet have since been criticized asteleologiesorWhig history; that is, they take the present to be the end point toward which history has been unfolding based on a single cause:
In the case of Internet history, the epoch-making event is usually said to be the demonstration of the 4-node ARPANET network in 1969. From that single happening the global Internet developed.
In addition to these characteristics, historians have cited methodological problems arising in their work:
"Internet history" ... tends to be too close to its sources. Many Internet pioneers are alive, active, and eager to shape the histories that describe their accomplishments. Many museums and historians are equally eager to interview the pioneers and to publicize their stories.
|
https://en.wikipedia.org/wiki/History_of_the_Internet
|
Mass mediainclude the diverse arrays ofmediathat reach a large audience viamass communication.
Broadcast media transmit information electronically via media such as films, radio, recorded music, or television. Digital media comprises both Internet and mobile mass communication. Internet media comprise such services as email, social media sites, websites, and Internet-based radio and television. Many other mass media outlets have an additional presence on the web, by such means as linking to or running TV ads online, or distributing QR codes in outdoor or print media to direct mobile users to a website. In this way, they can use the easy accessibility and outreach capabilities the Internet affords, and thereby easily broadcast information throughout many different regions of the world simultaneously and cost-efficiently. Outdoor media transmit information via such media as augmented reality (AR) advertising; billboards; blimps; flying billboards (signs in tow of airplanes); placards or kiosks placed inside and outside buses, commercial buildings, shops, sports stadiums, subway cars, or trains; signs; or skywriting.[1] Print media transmit information via physical objects, such as books, comics, magazines, newspapers, or pamphlets.[2] Event organising and public speaking can also be considered forms of mass media.[3]
Mass media organisationsormass media companiesthat control these technologies include movie studios, publishing companies, and radio and television stations (the latter are also sometimes known asmass media networks);[4][5]they often formmedia conglomerates.
In the late 20th century, mass media could be classified into eight mass media industries: books, the Internet, magazines, movies, newspapers, radio, recordings and television. The explosion of digital communication technology in the late 20th and early 21st centuries made prominent the question: what forms of media should be classified as "mass media"? For example, it is controversial whether to include mobile phones and video games in the definition. In the early 2000s, a classification called the "seven mass media" came into use.[6] In order of introduction, they are: print (books, pamphlets, newspapers, magazines and so on) from the late 15th century; recordings (gramophone records, magnetic tapes, cassettes, cartridges, CDs and DVDs) from the late 19th century; cinema from about 1900; radio from about 1910; television from about 1950; the Internet from about 1990; and mobile phones from about 2000.
Each mass medium has its own content types, creative artists, technicians and business models. For example, the Internet includesblogs,podcasts,websitesand various other technologies built atop the general distribution network. The sixth and seventh media, Internet and mobile phones, are often referred to collectively asdigital media; and the fourth and fifth, radio and TV, asbroadcast media. Some argue thatvideo gameshave developed into a distinct mass form of media.[7]
While a telephone is a two-way communication device, mass media communicates to a large group. In addition, the telephone has transformed into a cell phone which is equipped withInternetaccess. A question arises whether this makes cell phones a mass medium or simply a device used to access a mass medium (the Internet).
Video games may also be evolving into a mass medium. Video games (for example, massively multiplayer online role-playing games (MMORPGs), such as RuneScape) provide a common gaming experience to millions of users across the globe and convey the same messages and ideologies to all their users. Users sometimes share the experience with one another by playing online. Excluding the Internet, however, it is questionable whether players of video games are sharing a common experience when they play the game individually. It is possible to discuss in great detail the events of a video game with a friend one has never played with, because the experience is identical for each player. The question, then, is whether this is a form of mass communication.[citation needed]
Five characteristics of mass communication have been identified by sociologistJohn ThompsonofCambridge University:[8]
The term "mass media" is sometimes erroneously used as a synonym for "mainstream media". Mainstream media are distinguished fromalternative mediaby their content and point of view. Alternative media are also "mass media" outlets in the sense that they use technology capable of reaching many people, even if the audience is often smaller than the mainstream.
In common usage, the term "mass" denotes not that a given number of individuals receives the products, but rather that the products are available in principle to a plurality of recipients.[8]
The sequencing of content in a broadcast is called aschedule. With all technological endeavours a number of technical terms and slang have developed.[9]
Radioandtelevisionprograms are distributed over frequency bands which are highly regulated in the United States. Such regulation includes determination of the width of the bands, range, licensing, types of receivers and transmitters used, and acceptable content.
Cable televisionprograms are often broadcast simultaneously with radio and television programs, but have a more limited audience. By coding signals and requiring acable converter boxat individual recipients' locations, cable also enablessubscription-based channels andpay-per-viewservices.
A broadcastingorganisationmay broadcast several programs simultaneously, through several channels (frequencies), for exampleBBC OneandTwo. On the other hand, two or more organisations may share a channel and each use it during a fixed part of the day, such as theCartoon Network/Adult Swim.Digital radioanddigital televisionmay also transmitmultiplexedprogramming, with several channelscompressedinto oneensemble.
When broadcasting is done via the Internet the termwebcastingis often used. In 2004, a new phenomenon occurred when a number of technologies combined to producepodcasting. Podcasting is an asynchronous broadcast/narrowcast medium.Adam Curryand his associates, thePodshow, are principal proponents of podcasting.
The term 'film' encompasses motion pictures as individual projects, as well as the field in general. The name comes from thephotographic film(also calledfilm stock), historically the primarymediumfor recording and displaying motion pictures. Many other terms for film exist, such asmotion pictures(or justpicturesand "picture"),the silver screen,photoplays,the cinema,picture shows,flicksand, most commonly,movies.
Films are produced byrecordingpeople and objects withcameras, or by creating them usinganimationtechniques orspecial effects. Films comprise a series of individual frames, but when these images are shown in rapid succession, an illusion of motion is created. Flickering between frames is not seen because of an effect known aspersistence of vision, whereby the eye retains a visual image for a fraction of a second after the source has been removed. Also of relevance is what causes the perception of motion: a psychological effect identified asbeta movement.
Film has emerged as an important art form. Films entertain, educate, enlighten and inspire audiences. Any film can become a worldwide attraction, especially with the addition of dubbing or subtitles that translate the original language.[10]
Avideo gameis a computer-controlled game in which a video display, such as a monitor or television set, is the primary feedback device. The term "computer game" also includes games which display only text or which use other methods, such as sound or vibration, as their primary feedback device. There always must also be some sort ofinput device, usually in the form ofbutton/joystickcombinations (on arcade games), a keyboard and mouse/trackballcombination (computer games), acontroller(consolegames), or a combination of any of the above. Also, more esoteric devices have been used for input, e.g., the player's motion. Usually there are rules and goals, but in more open-ended games the player may be free to do whatever they like within the confines of the virtual universe.
In common usage, an "arcade game" refers to a game designed to be played in an establishment in which patrons pay to play on a per-use basis. A "computer game" or "PC game" refers to a game that is played on a personal computer. A "Console game" refers to one that is played on a device specifically designed for the use of such, while interfacing with a standard television set. A "video game" (or "videogame") has evolved into a catchall phrase that encompasses the aforementioned along with any game made for any other device, including, but not limited to, advanced calculators, mobile phones,PDAs, etc.
Sound recording and reproduction is the electrical or mechanical re-creation or amplification of sound, often as music. This involves the use of audio equipment such as microphones, recording devices and loudspeakers. From early beginnings with the invention of the phonograph using purely mechanical techniques, the field has advanced with the invention of electrical recording, the mass production of the 78 record, the magnetic wire recorder followed by the tape recorder, and the vinyl LP record. The invention of the compact cassette in the 1960s, followed by Sony's Walkman, gave a major boost to the mass distribution of music recordings, and the invention of digital recording and the compact disc in 1983 brought massive improvements in ruggedness and quality. The most recent developments have been in digital audio players.
An album is a collection of related audio recordings, released together to the public, usually commercially.
The term record album originated from the fact that 78 RPM phonograph disc records were kept together in a book resembling a photo album. The first collection of records to be called an "album" was Tchaikovsky's Nutcracker Suite, released in April 1909 as a four-disc set by Odeon Records.[11][12] It retailed for 16 shillings—about £15 in modern currency.
Amusic video(also promo) is ashort filmorvideothat accompanies a complete piece of music, most commonly asong. Modern music videos were primarily made and used as a marketing device intended to promote the sale of music recordings. Although the origins of music videos go back much further, they came into their own in the 1980s, whenMusic Television's format was based on them. In the 1980s, the term "rock video" was often used to describe this form of entertainment, although the term has fallen into disuse.
Music videos can accommodate all styles of filmmaking, includinganimation,live-actionfilms,documentaries, and non-narrative,abstract film.
TheInternet(also known simply as "the Net" or less precisely as "the Web") is a more interactive medium of mass media, and can be briefly described as "a network of networks". Specifically, it is the worldwide, publicly accessible network of interconnectedcomputer networksthat transmitdatabypacket switchingusing the standardInternet Protocol(IP). It consists of millions of smaller domestic, academic, business and governmental networks, which together carry variousinformationand services, such asemail,online chat,filetransfer, and the interlinkedweb pagesand other documents of theWorld Wide Web.
Contrary to some common usage, the Internet and the World Wide Web are not synonymous: the Internet is the system of interconnectedcomputer networks, linked bycopperwires,fibre-opticcables,wirelessconnections etc.; the Web is the contents, or the interconnecteddocuments, linked byhyperlinksandURLs. The World Wide Web is accessible through the Internet, along with many other services including e-mail,file sharingand others described below.
Toward the end of the 20th century, the advent of the World Wide Web marked the first era in which most individuals could have a means of exposure on a scale comparable to that of mass media. Anyone with a web site has the potential to address a global audience, although serving high levels of web traffic is still relatively expensive. It is possible that the rise of peer-to-peer technologies may have begun the process of making the cost of bandwidth manageable. Although a vast amount of information, imagery, and commentary (i.e. "content") has been made available, it is often difficult to determine the authenticity and reliability of information contained in web pages (in many cases, self-published). The invention of the Internet has also allowed breaking news stories to reach around the globe within minutes. This rapid growth of instantaneous, decentralised communication is often deemed likely to change mass media and its relationship to society.
"Cross-media" means the idea of distributing the same message through different media channels. A similar idea is expressed in the news industry as "convergence". Many authors understand cross-media publishing to be the ability to publish in bothprintand on the web without manual conversion effort. An increasing number ofwirelessdevices with mutually incompatible data and screen formats make it even more difficult to achieve the objective "create once, publish many".
The Internet is quickly becoming the center of mass media. Everything is becoming accessible via the internet. Rather than picking up a newspaper, or watching the 10 o'clock news, people can log onto the internet to get the news they want, when they want it. For example, many workers listen to the radio through the Internet while sitting at their desk.
Even theeducation systemrelies on the Internet. Teachers can contact the entire class by sending one e-mail. They may have web pages on which students can get another copy of the class outline or assignments. Some classes have class blogs in which students are required to post weekly, with students graded on their contributions.
Blogging, too, has become a pervasive form of media. A blog is a website, usually maintained by an individual, with regular entries of commentary, descriptions of events, or interactive media such as images or video. Entries are commonly displayed in reverse chronological order, with the most recent posts shown on top. Many blogs provide commentary or news on a particular subject; others function as more personal online diaries. A typical blog combines text, images and other graphics, and links to other blogs, web pages, and related media. The ability for readers to leave comments in an interactive format is an important part of many blogs. Most blogs are primarily textual, although some focus on art (artlog), photographs (photoblog), sketches (sketchblog), videos (vlog), music (MP3 blog) or audio (podcasting), and are part of a wider network of social media. Microblogging is another type of blogging which consists of blogs with very short posts.
RSSis a format for syndicating news and the content of news-like sites, including major news sites likeWired, news-oriented community sites likeSlashdot, and personal blogs. It is a family of Web feed formats used to publish frequently updated content such as blog entries, news headlines, and podcasts. An RSS document (which is called a "feed" or "web feed" or "channel") contains either a summary of content from an associated web site or the full text. RSS makes it possible for people to keep up with web sites in an automated manner that can be piped into special programs or filtered displays.
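The structure of such a feed is small enough to build by hand; the sketch below assembles a minimal RSS 2.0 document with Python's standard xml.etree library, using placeholder titles and links:

    # Minimal RSS 2.0 feed skeleton; titles and URLs are placeholders.
    import xml.etree.ElementTree as ET

    rss = ET.Element("rss", version="2.0")
    channel = ET.SubElement(rss, "channel")
    ET.SubElement(channel, "title").text = "Example blog"
    ET.SubElement(channel, "link").text = "https://example.com/"
    ET.SubElement(channel, "description").text = "Placeholder feed"

    item = ET.SubElement(channel, "item")
    ET.SubElement(item, "title").text = "First post"
    ET.SubElement(item, "link").text = "https://example.com/first-post"

    print(ET.tostring(rss, encoding="unicode"))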
Apodcastis a series of digital-media files which are distributed over the Internet using syndication feeds for playback on portable media players and computers. The term podcast, like broadcast, can refer either to the series of content itself or to the method by which it is syndicated; the latter is also called podcasting. The host or author of a podcast is often called a podcaster.
Mobile phoneswere introduced inJapanin 1979 but became a mass media only in 1998 when the first downloadable ringing tones were introduced in Finland. Soon most forms of media content were introduced on mobile phones,tabletsand other portable devices, and today the total value of media consumed on mobile vastly exceeds that of internet content, and was worth over $31 billion in 2007 (source Informa). The mobile media content includes over $8 billion worth of mobile music (ringing tones, ringback tones, truetones, MP3 files, karaoke, music videos, music streaming services, etc.); over $5 billion worth of mobile gaming; and various news, entertainment and advertising services. In Japan mobile phone books are so popular that five of the ten best-selling printed books were originally released as mobile phone books.
Similar to the internet, mobile is also aninteractive media, but has far wider reach, with 3.3 billion mobile phone users at the end of 2007 to 1.3 billion internet users (source ITU). Like email on the internet, the top application on mobile is also a personal messaging service, but SMS text messaging is used by over 2.4 billion people. Practically all internet services and applications exist or have similar cousins on mobile, from search to multiplayer games to virtual worlds to blogs. Mobile has several unique benefits which many mobile media pundits claim make mobile a more powerful media than either TV or the internet, starting with mobile being permanently carried and always connected. Mobile has the best audience accuracy and is the only mass media with a built-in payment channel available to every user without any credit cards or PayPal accounts or even an age limit. Mobile is often called the 7th Mass Medium and either the fourth screen (if counting cinema, TV and PC screens) or the third screen (counting only TV and PC).
Amagazineis a periodicalpublicationcontaining a variety of articles, generally financed byadvertisingor purchase by readers.
Magazines are typically publishedweekly,biweekly,monthly,bimonthlyorquarterly, with adate on the coverthat is in advance of the date it is actually published. They are often printed in colour oncoated paper, and are bound with asoft cover.
Magazines fall into two broad categories: consumer magazines and business magazines. In practice, magazines are a subset ofperiodicals, distinct from those periodicals produced by scientific, artistic, academic or special interest publishers which are subscription-only, more expensive, narrowly limited in circulation, and often have little or no advertising.
Magazines can be classified as:
A newspaper is a publication containing news, information and advertising, usually printed on low-cost paper called newsprint. It may be general or special interest, most often published daily or weekly. The most important function of newspapers is to inform the public of significant events.[13] Local newspapers inform local communities and include advertisements from local businesses and services, while national newspapers tend to focus on a theme, as exemplified by The Wall Street Journal, which offers news on finance and business-related topics.[13] The first printed newspaper was published in 1605, and the form has thrived even in the face of competition from technologies such as radio and television. Recent developments on the Internet are posing major threats to its business model, however. Paid circulation is declining in most countries, and advertising revenue, which makes up the bulk of a newspaper's income, is shifting from print to online; some commentators, nevertheless, point out that historically new media such as radio and television did not entirely supplant existing media.
The internet has challenged the press as an alternative source of information and opinion but has also provided a new platform for newspaper organisations to reach new audiences.[14]According to theWorld Trends Report, between 2012 and 2016, print newspaper circulation continued to fall in almost all regions, with the exception ofAsia and the Pacific, where the dramatic increase in sales in a few select countries has offset falls in historically strong Asian markets such asJapanand theRepublic of Korea. Most notably, between 2012 and 2016,India's print circulation grew by 89 per cent.[15]
Outdoor media is a form of mass media which comprises billboards, signs, placards placed inside and outside commercial buildings/objects like shops/buses, flying billboards (signs in tow of airplanes), blimps, skywriting, AR advertising. Many commercial advertisers use this form of mass media when advertising in sports stadiums. Tobacco and alcohol manufacturers used billboards and other outdoor media extensively. However, in 1998, the Master Settlement Agreement between the US and the tobacco industries prohibited the billboard advertising of cigarettes. In a 1994 Chicago-based study, Diana Hackbarth and her colleagues revealed how tobacco- and alcohol-based billboards were concentrated in poor neighbourhoods. In other urban centers, alcohol and tobacco billboards were much more concentrated in African-American neighbourhoods than in white neighbourhoods.[1]
Mass media encompasses much more than just news, although it is sometimes misunderstood in this way. It can be used for various purposes:
Journalismis the discipline of collecting, analyzing, verifying and presentinginformationregardingcurrent events,trends, issues and people. Those who practice journalism are known asjournalists.
News-oriented journalism is sometimes described as the "first rough draft of history" (attributed toPhil Graham), because journalists often record important events, producing news articles on short deadlines. While under pressure to be first with their stories,news mediaorganisations usuallyeditandproofreadtheir reports prior to publication, adhering to each organisation's standards of accuracy, quality and style. Many news organisation claim proud traditions of holding government officials and institutions accountable to the public, while media critics have raised questions about holding the press itself accountable to the standards of professional journalism.
Public relationsis the art and science of managing communication between an organisation and its key publics to build, manage and sustain its positive image. Examples include:
Publishingis the industry concerned with the production ofliteratureorinformation– the activity of making information available for public view. In some cases, authors may be their own publishers.
Traditionally, the term refers to the distribution of printed works such asbooksandnewspapers. With the advent of digital information systems and theInternet, the scope of publishing has expanded to includewebsites,blogsand the like.
As abusiness, publishing includes the development,marketing,production, anddistributionof newspapers, magazines, books,literary works,musical works,softwareand other works dealing with information.
Publication is also important as a legal concept: (1) as the process of giving formal notice to the world of a significant intention, for example, to marry or enter bankruptcy; and (2) as the essential precondition of being able to claim defamation; that is, the alleged libel must have been published.
A software publisher is a publishing company in the software industry that operates between the developer and the distributor. In some companies, two or all three of these roles may be combined (and indeed, may reside in a single person, especially in the case of shareware).
Software publishers often license software from developers with specific limitations, such as a time limit or geographical region. The terms of licensing vary enormously, and are typically secret.
Developers may use publishers to reach larger or foreign markets, or to avoid focussing on marketing. Or publishers may use developers to create software to meet a market need that the publisher has identified.
An internet celebrity is anyone who has gained fame on the Internet, whether by creating content on social media sites or by posting on blogging platforms, and who earns revenue through means such as sponsorships and advertisements. One such example is a YouTuber, a social media influencer who creates content on the social media platform YouTube.
The history of mass media can be traced back to the days when dramas were performed in various ancient cultures. This was the first time when a form of media was "broadcast" to a wider audience. The first dated printed book known is the "Diamond Sutra", printed in China in 868 AD, although it is clear that books were printed earlier. Movable clay type was invented in 1041 in China. However, due to the slow spread of literacy to the masses in China, and the relatively high cost of paper there, the earliest printed mass-medium was probably Europeanpopular printsfrom about 1400. Although these were produced in huge numbers, very few early examples survive, and even most known to be printed before about 1600 have not survived. The term "mass media" was coined with the creation of print media, which is notable for being the first example of mass media, as we use the term today. This form of media started in Europe in the Middle Ages.
Johannes Gutenberg's invention of the printing press allowed the mass production of books to spread rapidly. He printed a Latin Bible on a printing press with movable type in about 1453. The invention of the printing press gave rise to some of the first forms of mass communication, by enabling the publication of books and newspapers on a scale much larger than was previously possible.[16][17][18] The invention also transformed the way the world received printed materials, although books remained too expensive really to be called a mass medium for at least a century after that. Newspapers developed from about 1612, with the first example in English in 1620;[19] but they took until the 19th century to reach a mass audience directly. The first high-circulation newspapers arose in London in the early 1800s, such as The Times, and were made possible by the invention of high-speed rotary steam printing presses, and railroads which allowed large-scale distribution over wide geographical areas. The increase in circulation, however, led to a decline in feedback and interactivity from the readership, making newspapers a more one-way medium.[20][21][22][23]
The phrase "the media" began to be used in the 1920s.[24]The notion of "mass media" was generally restricted to print media up until the post-Second World War, when radio, television and video were introduced. The audio-visual facilities became very popular, because they provided both information and entertainment, because the colour and sound engaged the viewers/listeners and because it was easier for the general public to passively watch TV or listen to the radio than to actively read. In recent times, the Internet become the latest and most popular mass medium. Information has become readily available through websites, and easily accessible through search engines. One can do many activities at the same time, such as playing games, listening to music and social networking, irrespective of location. Whilst other forms of mass media are restricted in the type of information they can offer, the internet comprises a large percentage of thesum of human knowledgethrough such things asGoogle Books. Modern-day mass media includes the internet, mobile phones, blogs, podcasts and RSS feeds.[25]
During the 20th century, the growth of mass media was driven by technology, including that which allowed much duplication of material. Physical duplication technologies such as printing, record pressing and film duplication allowed the duplication of books, newspapers and movies at low prices to huge audiences. Radio and television allowed the electronic duplication of information for the first time. Mass media had the economics of linear replication: a single work could make money proportional to the number of copies sold, and as volumes went up, unit costs went down, increasing profit margins further. Vast fortunes were to be made in mass media. In a democratic society, the media can inform the electorate about issues regarding government and corporate entities (see Media influence). Some consider the concentration of media ownership to be a threat to democracy.[26]
Between 1985 and 2018, about 76,720 deals were announced in the media industry, amounting to an overall value of around US$5,634 billion.[27] There have been three major waves of M&A in the mass media sector (2000, 2007 and 2015), while the most active year in terms of numbers was 2007, with around 3,808 deals. The United States is the most prominent country in media M&A, with 41 of the top 50 deals having an acquirer from the United States.
The largest deal in history was the acquisition ofTime WarnerbyAOLInc. for US$164,746.86 million.
Limited-effects theorytheorizes that because people usually choose what media to interact with based on what they already believe, media exerts a negligible influence.
Class-dominant theoryargues that the media reflects and projects the view of a minority elite, which controls it.
Culturalist theorycombines the other two theories and claims that people interact with media to create their own meanings out of the images and messages they receive.
In 2012, an article asserted that 90 percent of all mass media—including radio broadcast networks and programming, video news, sports entertainment, and others—were owned by six major companies (GE, News-Corp, Disney, Viacom, Time Warner and CBS).[28] According to Morris Creative Group, these six companies made over $200 billion in revenue in 2010. More diversity is brewing among many companies, but they have recently merged to form an elite that has the power to control the narrative of stories and alter people's beliefs. In the new media-driven age we live in, marketing has more value than ever before because of the various ways it can be implemented. Advertisements can convince consumers to purchase or avoid a particular product. What a society accepts can be dictated by the amount and kind of attention the media gives it.
The documentary Super Size Me describes how companies like McDonald's have been sued in the past, the plaintiffs claiming that it was the fault of the companies' liminal and subliminal advertising that "forced" them to purchase the product. The Barbie and Ken dolls of the 1950s are sometimes cited as the main cause of the modern-day obsession with women being skinny and men being buff. After the attacks of 9/11, the media gave extensive coverage of the event and exposed Osama Bin Laden's guilt for the attack, information they were told by the authorities. This shaped public opinion to support the war on terrorism, and later, the war in Iraq. A main concern is that, because of this extreme power of the mass media, portraying inaccurate information could lead to immense public concern. In his book The Commercialization of American Culture, Matthew P. McAllister says that "a well-developed media system, informing and teaching its citizens, helps democracy move toward its ideal state".[1]
In 1997, J. R. Finnegan Jr. and K. Viswanath identified three main effects or functions of mass media:
Since the 1950s, when cinema, radio and TV began to be the primary or only source of information for most of the population, these media became the central instruments of mass control.[29][30]When a country reaches ahigh level of industrialisation, the country itself "belongs to the person who controls communications".[31]
Mass media play a significant role in shaping public perceptions on a variety of important issues, both through the information that is dispensed through them, and through the interpretations they place upon this information.[29] They also play a large role in shaping modern culture, by selecting and portraying a particular set of beliefs, values and traditions (an entire way of life) as reality. That is, by portraying a certain interpretation of reality, they shape reality to be more in line with that interpretation.[30] Mass media also play a crucial role in the spread of civil unrest activities such as anti-government demonstrations, riots and general strikes.[32] That is, the use of radio and television receivers has allowed unrest to spread between cities not only according to their geographic proximity, but also according to their proximity within mass media distribution networks.[32]
Media artistJoey Skaggshas demonstrated the ease with which mass media can be manipulated using fabricated press releases, staged events, and fictitious experts. His long-running series of media hoaxes reveal how news outlets can be drawn to sensational narratives, often publishing stories with minimal fact-checking. Skaggs' work has been cited as a critique of journalistic practices and a case study in the vulnerabilities of modern media systems.[33]
Mass media sources, through theories like framing and agenda-setting, can affect the scope of a story as particular facts and information are highlighted (media influence). This can directly correlate with how individuals may perceive certain groups of people, as the only media coverage a person receives can be very limited and may not reflect the whole story or situation; stories are often covered to reflect a particular perspective to target a specific demographic.[34]
According to Stephen Balkaran, an Instructor of Political Science and African American Studies at Central Connecticut State University, mass media has played a large role in the way white Americans perceive African Americans. The media's focus on African Americans in the contexts of crime, drug use, gang violence and other forms of anti-social behavior has resulted in a distorted and harmful public perception of African Americans.[35] In his 1999 article "Mass Media and Racism", Balkaran states: "The media has played a key role in perpetuating the effects of this historical oppression and in contributing to African Americans' continuing status as second-class citizens." This has resulted in an uncertainty among white Americans as to what the genuine nature of African Americans really is. Despite the resulting racial divide, the fact that these people are undeniably American has "raised doubts about the white man's value system". This means that there is a somewhat "troubling suspicion" among some Americans that their white America is tainted by the black influence.[35] Mass media, as well as propaganda, tend to reinforce or introduce stereotypes to the general public.
Lack of local or specific topic focus is a common criticism of mass media. A mass news media outlet often chooses to cover national and international news because it has to cater to, and remain relevant for, a wide demographic. As such, it can skip over many interesting or important local stories because they simply do not interest the large majority of its viewers.
The term "mass" suggests that the recipients of media products constitute a vast sea of passive, undifferentiated individuals. This is an image associated with some earlier critiques of "mass culture" andmass societywhich generally assumed that the development of mass communication has had a largely negative impact on modern social life, creating a kind of bland and homogeneous culture which entertains individuals without challenging them.[8]However, interactive digital media have also been seen to challenge the read-only paradigm of earlier broadcast media.[8]
Since the 1950s, in the countries that have reached ahigh level of industrialisation, the mass media of cinema, radio and TV have a key role in political power.[31]
Contemporary research demonstrates an increasing level ofconcentration of media ownership, with many media industries already highly concentrated and dominated by a small number of firms.[36]
When the study of mass media began, the media landscape consisted only of mass media, a very different media system from the social-media environment of the 21st century.[37] With this in mind, there are critiques that mass media no longer exists, or at least that it does not exist in the same form as it once did. This original form of mass media put filters on what the general public would be exposed to in regard to "news", something that is harder to do in a society of social media.[38]
Theorist Lance Bennett explains that, excluding a few major events in recent history, it is uncommon for a group big enough to be labeled a mass to be watching the same news via the same medium of mass production.[39] Bennett's critique of 21st-century mass media argues that today it is more common for a group of people to be receiving different news stories from completely different sources, and thus, mass media has been re-invented. As discussed above, filters would have been applied to the original mass media when journalists decided what would or would not be printed.
Social media is a large contributor to the change from mass media to a new paradigm because, through social media, the line between mass communication and interpersonal communication becomes blurred.[40] Interpersonal/niche communication is an exchange of information in a specific genre. In this form of communication, smaller groups of people are consuming news/information/opinions. In contrast, mass media in its original form is not restricted by genre and is consumed by the masses.
|
https://en.wikipedia.org/wiki/Mass_media#History
|
Crimewareis a class ofmalwaredesigned specifically to automatecybercrime.[1]
Crimeware (as distinct fromspywareandadware) is designed to perpetrateidentity theftthroughsocial engineeringor technical stealth in order to access a computer user's financial and retail accounts for the purpose of taking funds from those accounts or completing unauthorized transactions on behalf of the cyberthief.[citation needed]Alternatively, crimeware may stealconfidentialor sensitive corporate information. Crimeware represents a growing problem innetwork securityas many malicious code threats seek to pilfer valuable, confidential information.
The cybercrime landscape has shifted from individuals developing their own tools to a market where crimeware, tools and services for illegal online activities, can be easily acquired in online marketplaces. These crimeware markets are expected to expand, especially targeting mobile devices.[2]
The term crimeware was coined byDavid Jevansin February 2005 in an Anti-Phishing Working Group response to the FDIC article "Putting an End to Account-Hijacking Identity Theft".[3]
Criminals use a variety of techniques to steal confidential data through crimeware, including through the following methods:
Crimeware threats can be installed on victims' computers through multiple delivery vectors, including:
Crimeware can have a significant economic impact due to the loss of sensitive and proprietary information and the associated financial losses. One survey estimates that in 2005 organizations lost in excess of $30 million due to the theft of proprietary information.[9] The theft of financial or confidential information from corporate networks often places organizations in violation of government and industry-imposed regulatory requirements that attempt to ensure that financial, personal and confidential information is protected.
US laws and regulations include:
|
https://en.wikipedia.org/wiki/Crimeware
|
Data loss prevention(DLP)softwaredetects potentialdata breaches/data exfiltration transmissions and prevents them by monitoring,[1]detecting and blocking sensitive data whilein use(endpoint actions),in motion(network traffic), andat rest(data storage).[2]
The terms "data loss" and "data leak" are related and are often used interchangeably.[3]Data loss incidents turn into data leak incidents in cases where media containing sensitive information are lost and subsequently acquired by an unauthorized party. However, a data leak is possible without losing the data on the originating side. Other terms associated with data leakage prevention are information leak detection and prevention (ILDP), information leak prevention (ILP), content monitoring and filtering (CMF), information protection and control (IPC) and extrusion prevention system (EPS), as opposed tointrusion prevention system.
The technological means employed for dealing with data leakage incidents can be divided into categories: standard security measures, advanced/intelligent security measures, access control and encryption, and designated DLP systems, although only the latter category is currently thought of as DLP today.[4] A common DLP approach is automatic detection and response: spotting malicious or otherwise unwanted activity and responding to it without manual intervention. Most DLP systems rely on predefined rules to identify and categorize sensitive information, which in turn helps system administrators zero in on vulnerable spots. After that, some areas may have extra safeguards installed.
Standard security measures, such asfirewalls,intrusion detection systems(IDSs) andantivirus software, are commonly available products that guard computers against outsider and insider attacks.[5]The use of a firewall, for example, prevents the access of outsiders to the internal network and an intrusion detection system detects intrusion attempts by outsiders. Inside attacks can be averted through antivirus scans that detectTrojan horsesthat sendconfidential information, and by the use of thin clients that operate in aclient-server architecturewith no personal or sensitive data stored on a client device.
Advanced security measures employmachine learningand temporal reasoningalgorithmsto detect abnormal access to data (e.g., databases or information retrieval systems) or abnormal email exchange,honeypotsfor detecting authorized personnel with malicious intentions and activity-based verification (e.g., recognition of keystroke dynamics) anduser activity monitoringfor detecting abnormal data access.
Designated systems detect and prevent unauthorized attempts to copy or send sensitive data, intentionally or unintentionally, mainly by personnel who are authorized to access the sensitive information. In order to classify certain information as sensitive, these systems use mechanisms such as exact data matching, structured data fingerprinting, statistical methods, rule and regular expression matching, published lexicons, conceptual definitions, keywords and contextual information such as the source of the data.[6]
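To make the rule- and pattern-matching idea concrete, the following Python sketch is a minimal, illustrative content classifier rather than a description of any particular DLP product: it flags text that contains a credit-card-like digit run (checked with the Luhn algorithm to cut false positives) or a keyword from a small lexicon. The pattern, keywords and sample text are assumptions for demonstration only.

```python
import re

# Illustrative DLP-style rules (assumptions for this sketch, not any vendor's rule set).
CARD_PATTERN = re.compile(r"\b\d(?:[ -]?\d){12,15}\b")               # 13-16 digit runs with optional separators
KEYWORDS = {"confidential", "internal use only", "social security"}  # tiny example lexicon

def luhn_ok(number: str) -> bool:
    """Luhn checksum; used to reduce false positives on arbitrary digit runs."""
    digits = [int(d) for d in number if d.isdigit()]
    digits.reverse()
    total = 0
    for i, d in enumerate(digits):
        if i % 2 == 1:          # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def classify(text: str) -> list[str]:
    """Return reasons why the text looks sensitive (empty list if nothing matched)."""
    reasons = []
    for candidate in CARD_PATTERN.findall(text):
        if luhn_ok(candidate):
            reasons.append(f"possible payment card number: {candidate!r}")
    lowered = text.lower()
    reasons.extend(f"keyword hit: {kw!r}" for kw in KEYWORDS if kw in lowered)
    return reasons

if __name__ == "__main__":
    sample = "Internal use only: card 4111 1111 1111 1111 must not leave the network."
    print(classify(sample))
```

Real DLP systems layer many more detectors (exact data matching, fingerprinting, statistical models, contextual signals) and apply them at network egress points, endpoints and cloud storage rather than to a single string.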
Network (data in motion) technology is typically installed at network egress points near the perimeter. It analyzes network traffic to detect sensitive data that is being sent in violation ofinformation securitypolicies. Multiple security control points may report activity to be analyzed by a central management server.[3]Anext-generation firewall(NGFW) orintrusion detection system(IDS) are common examples of technology that can be leveraged to perform DLP capabilities on the network.[7][8]Network DLP capabilities can usually be undermined by a sophisticatedthreat actorthrough the use ofdata maskingtechniques such as encryption or compression.[9]
Endpoint (data in use) systems run on internal end-user workstations or servers. Like network-based systems, endpoint-based technology can address internal as well as external communications. It can therefore be used to control information flow between groups or types of users (e.g. 'Chinese walls'). They can also control email andInstant Messagingcommunications before they reach the corporate archive, such that a blocked communication (i.e., one that was never sent, and therefore not subject to retention rules) will not be identified in a subsequent legal discovery situation. Endpoint systems have the advantage that they can monitor and control access to physical devices (such as mobile devices with data storage capabilities) and in some cases can access information before it is encrypted. Endpoint systems also have access to the information needed to provide contextual classification; for example the source or author generating content. Some endpoint-based systems provide application controls to block attempted transmissions of confidential information and provide immediate user feedback. They must be installed on every workstation in the network (typically via aDLP Agent), cannot be used on mobile devices (e.g., cell phones and PDAs) or where they cannot be practically installed (for example on a workstation in anInternet café).[10]
The cloud now contains a lot of critical data as organizations transform to cloud-native technologies to accelerate virtual team collaboration. Data stored in the cloud needs to be protected as well, since it is susceptible to cyberattacks, accidental leakage and insider threats. Cloud DLP monitors and audits the data, while providing access and usage control of data using policies. It establishes greater end-to-end visibility for all the data stored in the cloud.[11]
DLP includes techniques for identifying confidential or sensitive information. Sometimes confused with discovery, data identification is a process by which organizations use a DLP technology to determine what to look for.
Data is classified as either structured or unstructured. Structured data resides in fixed fields within a file such as a spreadsheet, whileunstructured datarefers to free-form text or media in text documents, PDF files and video.[12]An estimated 80% of all data is unstructured and 20% structured.[13]
Sometimes a data distributor inadvertently or deliberately gives sensitive data to one or more third parties, or uses it themselves in an authorized fashion. Sometime later, some of the data is found in an unauthorized place (e.g., on the web or on a user's laptop). The distributor must then investigate the source of the loss.
"Data at rest" specifically refers to information that is not moving, i.e. that exists in a database or a file share. This information is of great concern to businesses and government institutions simply because the longer data is left unused in storage, the more likely it might be retrieved by unauthorized individuals. Protecting such data involves methods such as access control, data encryption anddata retentionpolicies.[3]
"Data in use" refers to data that the user is currently interacting with. DLP systems that protect data in-use may monitor and flag unauthorized activities.[3]These activities include screen-capture, copy/paste, print and fax operations involving sensitive data. It can be intentional or unintentional attempts to transmit sensitive data over communication channels.
"Data in motion" is data that is traversing through a network to an endpoint. Networks can be internal or external. DLP systems that protect data in-motion monitor sensitive data traveling across a network through various communication channels.[3]
|
https://en.wikipedia.org/wiki/Data_loss_prevention_software
|
Malware(aportmanteauofmalicious software)[1]is anysoftwareintentionally designed to cause disruption to acomputer,server,client, orcomputer network, leak private information, gain unauthorized access to information or systems, deprive access to information, or which unknowingly interferes with the user'scomputer securityandprivacy.[1][2][3][4][5]Researchers tend to classify malware into one or more sub-types (i.e.computer viruses,worms,Trojan horses,logic bombs,ransomware,spyware,adware,rogue software,wipersandkeyloggers).[1]
Malware poses serious problems to individuals and businesses on the Internet.[6][7] According to Symantec's 2018 Internet Security Threat Report (ISTR), the number of malware variants rose to 669,947,865 in 2017, twice as many as in 2016.[8] Cybercrime, which includes malware attacks as well as other crimes committed by computer, was predicted to cost the world economy US$6 trillion in 2021, and is increasing at a rate of 15% per year.[9] Since 2021, malware has been designed to target computer systems that run critical infrastructure such as the electricity distribution network.[10]
The defense strategies against malware differ according to the type of malware but most can be thwarted by installingantivirus software,firewalls, applying regularpatches,securing networksfrom intrusion, having regularbackupsandisolating infected systems. Malware can be designed to evade antivirus software detection algorithms.[8]
The notion of a self-reproducing computer program can be traced back to initial theories about the operation of complex automata.[11] John von Neumann showed that in theory a program could reproduce itself. This constituted a plausibility result in computability theory. Fred Cohen experimented with computer viruses, confirmed Neumann's postulate, and investigated other properties of malware such as detectability and self-obfuscation using rudimentary encryption. His 1987 doctoral dissertation was on the subject of computer viruses.[12] The use of cryptographic technology as part of a virus's payload, exploiting it for attack purposes, was first investigated from the mid-1990s, and includes initial ransomware and evasion ideas.[13]
BeforeInternetaccess became widespread, viruses spread on personal computers by infecting executable programs orboot sectorsof floppy disks. By inserting a copy of itself into themachine codeinstructions in these programs orboot sectors, a virus causes itself to be run whenever the program is run or the disk is booted. Early computer viruses were written for theApple IIandMac, but they became more widespread with the dominance of theIBM PCandMS-DOS. The first IBM PC virus in the wild was aboot sectorvirus dubbed(c)Brain, created in 1986 by the Farooq Alvi brothers in Pakistan.[14]Malware distributors would trick the user into booting or running from an infected device or medium. For example, a virus could make an infected computer add autorunnable code to any USB stick plugged into it. Anyone who then attached the stick to another computer set to autorun from USB would in turn become infected, and also pass on the infection in the same way.[15]
Older email software would automatically openHTML emailcontaining potentially maliciousJavaScriptcode. Users may also execute disguised malicious email attachments. The2018 Data Breach Investigations ReportbyVerizon, cited byCSO Online, states that emails are the primary method of malware delivery, accounting for 96% of malware delivery around the world.[16][17]
The first worms,network-borne infectious programs, originated not on personal computers, but on multitaskingUnixsystems. The first well-known worm was theMorris wormof 1988, which infectedSunOSandVAXBSDsystems. Unlike a virus, this worm did not insert itself into other programs. Instead, it exploited security holes (vulnerabilities) in networkserverprograms and started itself running as a separateprocess.[18]This same behavior is used by today's worms as well.[19]
With the rise of theMicrosoft Windowsplatform in the 1990s, and the flexiblemacrosof its applications, it became possible to write infectious code in the macro language ofMicrosoft Wordand similar programs. Thesemacro virusesinfect documents and templates rather than applications (executables), but rely on the fact that macros in a Word document are a form ofexecutablecode.[20]
Many early infectious programs, including theMorris Worm, the first internet worm, were written as experiments or pranks.[21]Today, malware is used by bothblack hat hackersand governments to steal personal, financial, or business information.[22][23]Today, any device that plugs into a USB port – even lights, fans, speakers, toys, or peripherals such as a digital microscope – can be used to spread malware. Devices can be infected during manufacturing or supply if quality control is inadequate.[15]
Since the rise of widespreadbroadbandInternetaccess, malicious software has more frequently been designed for profit. Since 2003, the majority of widespreadvirusesand worms have been designed to take control of users' computers for illicit purposes.[24]Infected "zombie computers" can be used to sendemail spam, to host contraband data such aschild pornography,[25]or to engage indistributed denial-of-serviceattacksas a form ofextortion.[26]Malware is used broadly against government or corporate websites to gather sensitive information,[27]or to disrupt their operation in general. Further, malware can be used against individuals to gain information such as personal identification numbers or details, bank or credit card numbers, and passwords.[28][29]
In addition to criminal money-making, malware can be used for sabotage, often for political motives.Stuxnet, for example, was designed to disrupt very specific industrial equipment. There have been politically motivated attacks which spread over and shut down large computer networks, including massive deletion of files and corruption ofmaster boot records, described as "computer killing." Such attacks were made on Sony Pictures Entertainment (25 November 2014, using malware known asShamoonor W32.Disttrack) and Saudi Aramco (August 2012).[30][31]
Malware can be classified in numerous ways, and certain malicious programs may fall into two or more categories simultaneously.[1] Broadly, software can be categorised into three types:[32] (i) goodware; (ii) grayware; and (iii) malware.
A computer virus is software usually hidden within another seemingly harmless program that can produce copies of itself and insert them into other programs or files, and that usually performs a harmful action (such as destroying data).[33]They have been likened tobiological viruses.[3]An example of this is a portable execution infection, a technique, usually used to spread malware, that inserts extra data orexecutable codeintoPE files.[34]A computer virus is software that embeds itself in some otherexecutablesoftware (including the operating system itself) on the target system without the user's knowledge and consent and when it is run, the virus is spread to other executable files.
Awormis a stand-alone malware software thatactivelytransmits itself over anetworkto infect other computers and can copy itself without infecting files. These definitions lead to the observation that a virus requires the user to run an infected software or operating system for the virus to spread, whereas a worm spreads itself.[35]
Once malicious software is installed on a system, it is essential that it stays concealed, to avoid detection. Software packages known asrootkitsallow this concealment, by modifying the host's operating system so that the malware is hidden from the user. Rootkits can prevent a harmfulprocessfrom being visible in the system's list ofprocesses, or keep its files from being read.[36]
Some types of harmful software contain routines to evade identification and/or removal attempts, not merely to hide themselves. An early example of this behavior is recorded in theJargon Filetale of a pair of programs infesting a XeroxCP-Vtime sharing system:
Each ghost-job would detect the fact that the other had been killed, and would start a new copy of the recently stopped program within a few milliseconds. The only way to kill both ghosts was to kill them simultaneously (very difficult) or to deliberately crash the system.[37]
Abackdooris a broad term for a computer program that allows an attacker persistent unauthorised remote access to a victim's machine often without their knowledge.[38]The attacker typically uses another attack (such as atrojan,wormorvirus) to bypass authentication mechanisms usually over an unsecured network such as the Internet to install the backdoor application. A backdoor can also be a side effect of asoftware bugin legitimate software that is exploited by an attacker to gain access to a victim's computer or network.
The idea has often been suggested that computer manufacturers preinstall backdoors on their systems to provide technical support for customers, but this has never been reliably verified. It was reported in 2014 that US government agencies had been diverting computers purchased by those considered "targets" to secret workshops where software or hardware permitting remote access by the agency was installed, considered to be among the most productive operations to obtain access to networks around the world.[39]Backdoors may be installed by Trojan horses,worms,implants, or other methods.[40][41]
A Trojan horse misrepresents itself to masquerade as a regular, benign program or utility in order to persuade a victim to install it. A Trojan horse usually carries a hidden destructive function that is activated when the application is started. The term is derived from theAncient Greekstory of theTrojan horseused to invade the city ofTroyby stealth.[42][43]
Trojan horses are generally spread by some form ofsocial engineering, for example, where a user is duped into executing an email attachment disguised to be unsuspicious, (e.g., a routine form to be filled in), or bydrive-by download. Although their payload can be anything, many modern forms act as a backdoor, contacting a controller (phoning home) which can then have unauthorized access to the affected computer, potentially installing additional software such as a keylogger to steal confidential information, cryptomining software or adware to generate revenue to the operator of the trojan.[44]While Trojan horses and backdoors are not easily detectable by themselves, computers may appear to run slower, emit more heat or fan noise due to heavy processor or network usage, as may occur when cryptomining software is installed. Cryptominers may limit resource usage and/or only run during idle times in an attempt to evade detection.
Unlike computer viruses and worms, Trojan horses generally do not attempt to inject themselves into other files or otherwise propagate themselves.[45]
In spring 2017, Mac users were hit by the new version of Proton Remote Access Trojan (RAT)[46]trained to extract password data from various sources, such as browser auto-fill data, the Mac-OS keychain, and password vaults.[47]
Droppersare a sub-type of Trojans that solely aim to deliver malware upon the system that they infect with the desire to subvert detection through stealth and a light payload.[48]It is important not to confuse a dropper with a loader or stager. A loader or stager will merely load an extension of the malware (for example a collection of malicious functions through reflective dynamic link library injection) into memory. The purpose is to keep the initial stage light and undetectable. A dropper merely downloads further malware to the system.
Ransomware prevents a user from accessing their files until a ransom is paid. There are two variations of ransomware: crypto ransomware and locker ransomware.[49] Locker ransomware just locks down a computer system without encrypting its contents, whereas crypto ransomware locks down a system and encrypts its contents. For example, programs such as CryptoLocker encrypt files securely, and only decrypt them on payment of a substantial sum of money.[50]
Lock-screens, or screen lockers, are a type of "cyber police" ransomware that blocks screens on Windows or Android devices with a false accusation of harvesting illegal content, trying to scare the victims into paying a fee.[51] Jisut and SLocker impact Android devices more than other lock-screens, with Jisut making up nearly 60 percent of all Android ransomware detections.[52]
Encryption-based ransomware, like the name suggests, is a type of ransomware that encrypts all files on an infected machine. These types of malware then display a pop-up informing the user that their files have been encrypted and that they must pay (usually in Bitcoin) to recover them. Some examples of encryption-based ransomware areCryptoLockerandWannaCry.[53]
Some malware is used to generate money byclick fraud, making it appear that the computer user has clicked an advertising link on a site, generating a payment from the advertiser. It was estimated in 2012 that about 60 to 70% of all active malware used some kind of click fraud, and 22% of all ad-clicks were fraudulent.[54]
Grayware is any unwanted application or file that can worsen the performance of computers and may cause security risks, but for which there is insufficient consensus or data to classify it as malware.[32] Types of grayware typically include spyware, adware, fraudulent dialers, joke programs ("jokeware") and remote access tools.[38] For example, at one point, Sony BMG compact discs silently installed a rootkit on purchasers' computers with the intention of preventing illicit copying.[55]
Potentially unwanted programs(PUPs) are applications that would be considered unwanted despite often being intentionally downloaded by the user.[56]PUPs include spyware, adware, and fraudulent dialers.
Many security products classify unauthorised key generators as PUPs, although they frequently carry true malware in addition to their ostensible purpose.[57] In fact, Kammerstetter et al. (2012)[57] estimated that as much as 55% of key generators could contain malware and that about 36% of malicious key generators were not detected by antivirus software.
Some types of adware turn off anti-malware and virus protection; technical remedies are available.[58]
Programs designed to monitor users' web browsing, displayunsolicited advertisements, or redirectaffiliate marketingrevenues are calledspyware. Spyware programs do not spread like viruses; instead they are generally installed by exploiting security holes. They can also be hidden and packaged together with unrelated user-installed software.[59]TheSony BMG rootkitwas intended to prevent illicit copying; but also reported on users' listening habits, and unintentionally created extra security vulnerabilities.[55]
Antivirus software typically uses two techniques to detect malware: (i) static analysis and (ii) dynamic/heuristic analysis.[60]Static analysis involves studying the software code of a potentially malicious program and producing a signature of that program. This information is then used to compare scanned files by an antivirus program. Because this approach is not useful for malware that has not yet been studied, antivirus software can use dynamic analysis to monitor how the program runs on a computer and block it if it performs unexpected activity.
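As a rough sketch of the static, signature-based side of this process, the Python snippet below hashes files and compares the digests against a set of known-bad values; the "signature database" shown is a placeholder, and production engines match byte patterns, code structure and behaviour rather than whole-file hashes alone.

```python
import hashlib
from pathlib import Path

# Placeholder "signature database": SHA-256 digests of files already analysed as malicious.
# (The single entry below is the digest of an empty file, used purely as a stand-in.)
KNOWN_BAD_SHA256 = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def sha256_of(path: Path) -> str:
    """Hash a file in chunks so large files do not need to fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as handle:
        for chunk in iter(lambda: handle.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def scan(paths: list[Path]) -> list[Path]:
    """Return the paths whose digest matches a known-bad signature."""
    return [p for p in paths if sha256_of(p) in KNOWN_BAD_SHA256]
```

Whole-file hashes are the crudest possible signature: changing a single byte of the sample defeats them, which is one reason engines also rely on byte-pattern signatures, heuristics and the dynamic analysis described above.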
The aim of any malware is to conceal itself from detection by users or antivirus software.[1]Detecting potential malware is difficult for two reasons. The first is that it is difficult to determine if software is malicious.[32]The second is that malware uses technical measures to make it more difficult to detect it.[60]An estimated 33% of malware is not detected by antivirus software.[57]
The most commonly employed anti-detection technique involves encrypting the malware payload in order to prevent antivirus software from recognizing the signature.[32] Tools such as crypters come with an encrypted blob of malicious code and a decryption stub. The stub decrypts the blob and loads it into memory. Because antivirus does not typically scan memory and only scans files on the drive, this allows the malware to evade detection. Advanced malware has the ability to transform itself into different variations, making it less likely to be detected due to the differences in its signatures. This is known as polymorphic malware. Other common techniques used to evade detection include, from common to uncommon:[61] (1) evasion of analysis and detection by fingerprinting the environment when executed;[62] (2) confusing automated tools' detection methods, which allows malware to avoid detection by technologies such as signature-based antivirus software by changing the server used by the malware;[61] (3) timing-based evasion, in which malware runs at certain times or following certain actions taken by the user, so that it executes during certain vulnerable periods, such as during the boot process, while remaining dormant the rest of the time; (4) obfuscating internal data so that automated tools do not detect the malware;[63] (5) information hiding techniques, namely stegomalware;[64] and (6) fileless malware, which runs within memory instead of using files and utilizes existing system tools to carry out malicious acts. The use of existing binaries to carry out malicious activities is a technique known as LotL, or Living off the Land.[65] This reduces the amount of forensic artifacts available to analyze. Recently these types of attacks have become more frequent, with a 432% increase in 2017, and they made up 35% of the attacks in 2018. Such attacks are not easy to perform but are becoming more prevalent with the help of exploit-kits.[66][67]
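One defensive heuristic against encrypted or packed payloads, added here as an illustrative aside rather than something taken from the text above, is to measure the Shannon entropy of a file's bytes: values close to 8 bits per byte suggest compression or encryption and can flag a sample for deeper (for example, dynamic) analysis. The threshold below is an assumption.

```python
import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Shannon entropy in bits per byte; values near 8.0 mean the bytes look uniformly random."""
    if not data:
        return 0.0
    counts = Counter(data)
    total = len(data)
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

def looks_packed(data: bytes, threshold: float = 7.5) -> bool:
    # High entropy is only a triage signal: legitimate archives, images and video
    # are also high-entropy, so a hit should trigger closer analysis, not a verdict.
    return shannon_entropy(data) >= threshold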
Avulnerabilityis a weakness,flawor software bug in anapplication, a complete computer, anoperating system, or acomputer networkthat is exploited by malware to bypass defences orgain privilegesit requires to run. For example,TestDisk 6.4or earlier contained a vulnerability that allowed attackers to inject code into Windows.[68]Malware can exploit security defects (security bugsorvulnerabilities) in the operating system, applications (such as browsers, e.g. older versions of Microsoft Internet Explorer supported by Windows XP[69]), or in vulnerable versions of browser plugins such asAdobe Flash Player,Adobe Acrobat or Reader, orJava SE.[70][71]For example, a common method is exploitation of abuffer overrunvulnerability, where software designed to store data in a specified region of memory does not prevent more data than the buffer can accommodate from being supplied. Malware may provide data that overflows the buffer, with maliciousexecutablecode or data after the end; when this payload is accessed it does what the attacker, not the legitimate software, determines.
Malware can exploit recently discovered vulnerabilities before developers have had time to release a suitablepatch.[6]Even when new patches addressing the vulnerability have been released, they may not necessarily be installed immediately, allowing malware to take advantage of systems lacking patches. Sometimes even applying patches or installing new versions does not automatically uninstall the old versions.
There are several ways users can stay informed about and protected from security vulnerabilities in software.
Software providers often announce updates that address security issues.[72]Common vulnerabilitiesare assigned unique identifiers (CVE IDs) and listed in public databases like theNational Vulnerability Database.
Tools like Secunia PSI,[73]free for personal use, can scan a computer for outdated software with known vulnerabilities and attempt to update them.Firewallsandintrusion prevention systemscan monitor the network traffic for suspicious activity that might indicate an attack.[74]
Users and programs can be assigned moreprivilegesthan they require, and malware can take advantage of this. For example, of 940 Android apps sampled, one third of them asked for more privileges than they required.[75]Apps targeting theAndroidplatform can be a major source of malware infection but one solution is to use third-party software to detect apps that have been assigned excessive privileges.[76]
Some systems allow all users to make changes to the core components or settings of the system, which is consideredover-privilegedaccess today. This was the standard operating procedure for early microcomputer and home computer systems, where there was no distinction between anadministratororroot, and a regular user of the system. In some systems,non-administratorusers are over-privileged by design, in the sense that they are allowed to modify internal structures of the system. In some environments, users are over-privileged because they have been inappropriately granted administrator or equivalent status.[77]This can be because users tend to demand more privileges than they need, so often end up being assigned unnecessary privileges.[78]
Some systems allow code executed by a user to access all rights of that user, which is known as over-privileged code. This was also standard operating procedure for early microcomputer and home computer systems. Malware, running as over-privileged code, can use this privilege to subvert the system. Almost all currently popular operating systems, and also many scripting applications, grant code too many privileges, usually in the sense that when a user executes code, the system allows that code all rights of that user.[citation needed]
A credential attack occurs when a user account with administrative privileges is cracked and that account is used to provide malware with appropriate privileges.[79]Typically, the attack succeeds because the weakest form of account security is used, which is typically a short password that can be cracked using adictionaryorbrute forceattack. Usingstrong passwordsand enablingtwo-factor authenticationcan reduce this risk. With the latter enabled, even if an attacker can crack the password, they cannot use the account without also having the token possessed by the legitimate user of that account.
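To illustrate why short passwords fall quickly to exhaustive guessing, the sketch below works through the keyspace arithmetic; the guess rate is an assumed figure for an offline attack against a fast hash, not a measured one.

```python
# Rough keyspace arithmetic for exhaustive password guessing.
# The guess rate is an assumed figure for an offline attack on a fast hash.
GUESSES_PER_SECOND = 1e10

def seconds_to_exhaust(length: int, alphabet_size: int) -> float:
    """Worst-case time to try every password of the given length over the given alphabet."""
    return (alphabet_size ** length) / GUESSES_PER_SECOND

print(seconds_to_exhaust(6, 26))            # 6 lowercase letters: roughly 0.03 seconds
print(seconds_to_exhaust(12, 95) / 3.15e7)  # 12 printable-ASCII characters: roughly 1.7 million years
```

The gap of many orders of magnitude is why dictionary and brute-force attacks succeed against short passwords, and why two-factor authentication adds protection that does not depend on password length at all.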
Homogeneity can be a vulnerability. For example, when all computers in anetworkrun the same operating system, upon exploiting one, onewormcan exploit them all:[80]In particular,Microsoft WindowsorMac OS Xhave such a large share of the market that an exploited vulnerability concentrating on either operating system could subvert a large number of systems. It is estimated that approximately 83% of malware infections between January and March 2020 were spread via systems runningWindows 10.[81]This risk is mitigated by segmenting the networks into differentsubnetworksand setting upfirewallsto block traffic between them.[82][83]
Anti-malware (sometimes also calledantivirus) programs block and remove some or all types of malware. For example,Microsoft Security Essentials(for Windows XP, Vista, and Windows 7) andWindows Defender(forWindows 8,10and11) provide real-time protection. TheWindows Malicious Software Removal Toolremoves malicious software from the system.[84]Additionally, several capable antivirus software programs are available for free download from the Internet (usually restricted to non-commercial use).[85]Tests found some free programs to be competitive with commercial ones.[85][86][87]
Typically, antivirus software can combat malware in the following ways:
A specific component of anti-malware software, commonly referred to as an on-access or real-time scanner, hooks deep into the operating system's core orkerneland functions in a manner similar to how certain malware itself would attempt to operate, though with the user's informed permission for protecting the system. Any time the operating system accesses a file, the on-access scanner checks if the file is infected or not. Typically, when an infected file is found, execution is stopped and the file isquarantinedto prevent further damage with the intention to prevent irreversible system damage. Most AVs allow users to override this behaviour. This can have a considerable performance impact on the operating system, though the degree of impact is dependent on how many pages it creates invirtual memory.[91]
Sandboxingis asecurity modelthat confines applications within a controlled environment, restricting their operations to authorized "safe" actions and isolating them from other applications on the host. It also limits access to system resources like memory and the file system to maintain isolation.[89]
Browser sandboxing is a security measure that isolates web browser processes and tabs from the operating system to prevent malicious code from exploiting vulnerabilities.
It helps protect against malware,zero-day exploits, and unintentional data leaks by trapping potentially harmful code within the sandbox.
It involves creating separate processes, limiting access to system resources, runningweb contentin isolated processes, monitoring system calls, and memory constraints.Inter-process communication(IPC) is used forsecure communicationbetween processes.
Escaping the sandbox involves targeting vulnerabilities in the sandbox mechanism or the operating system's sandboxing features.[90][92]
While sandboxing is not foolproof, it significantly reduces theattack surfaceof common threats.
Keeping browsers and operating systems updated is crucial to mitigate vulnerabilities.[90][92]
Website vulnerability scans check the website, detect malware, may note outdated software, and may report known security issues, in order to reduce the risk of the site being compromised.
Structuring a network as a set of smaller networks, and limiting the flow of traffic between them to that known to be legitimate, can hinder the ability of infectious malware to replicate itself across the wider network.Software-defined networkingprovides techniques to implement such controls.
As a last resort, computers can be protected from malware, and the risk of infected computers disseminating trusted information can be greatly reduced by imposing an"air gap"(i.e. completely disconnecting them from all other networks) and applying enhanced controls over the entry and exit of software and data from the outside world. However, malware can still cross the air gap in some situations, not least due to the need to introduce software into the air-gapped network and can damage the availability or integrity of assets thereon.Stuxnetis an example of malware that is introduced to the target environment via a USB drive, causing damage to processes supported on the environment without the need to exfiltrate data.
AirHopper,[93]BitWhisper,[94]GSMem[95]and Fansmitter[96]are four techniques introduced by researchers that can leak data from air-gapped computers using electromagnetic, thermal and acoustic emissions.
A bibliometric analysis of malware research trends from 2005 to 2015, considering criteria such as impact journals, highly cited articles, research areas, number of publications, keyword frequency, institutions, and authors, revealed an annual growth rate of 34.1%. North America led in research output, followed by Asia and Europe. China and India were identified as emerging contributors.[97]
|
https://en.wikipedia.org/wiki/Greynet
|
Theinternationalized domain name(IDN)homograph attack(sometimes written ashomoglyph attack) is a method used by malicious parties to deceive computer users about what remote system they are communicating with, by exploiting the fact that many different characters look alike (i.e., they rely onhomoglyphsto deceive visitors). For example, theCyrillic,GreekandLatinalphabets each have a letter⟨o⟩that has the same shape but represents different sounds or phonemes in their respective writing systems.[a]
This kind of spoofing attack is also known asscript spoofing.Unicodeincorporates numerous scripts (writing systems), and, for a number of reasons, similar-looking characters such asGreek Ο,Latin O, andCyrillic Оwere not assigned the same code. Their incorrect or malicious usage is a possibility for security attacks. Thus, for example, a regular user ofexаmple.commay be lured to click on it unquestioningly as an apparently familiar link, unaware that the third letter is not the Latin character "a" but rather the Cyrillic character "а" and is thus an entirely different domain from the intended one.
The registration of homographic domain names is akin totyposquatting, in that both forms of attacks use a similar-looking name to a more established domain to fool a user.[b]The major difference is that in typosquatting the perpetrator attracts victims by relying on natural typographical errors commonly made when manually entering a URL, while in homograph spoofing the perpetrator deceives the victims by presenting visually indistinguishable hyperlinks. Indeed, it would be a rare accident for a web user to type, for example, a Cyrillic letter within an otherwise English word, turning "bank" into "bаnk". There are cases in which a registration can be both typosquatting and homograph spoofing; the pairs ofl/I,i/j, and0/Oare all both close together on keyboards and, depending on thetypeface, may be difficult or impossible to distinguish visually.
An early nuisance of this kind, pre-dating the Internet and eventext terminals, was the confusion between "l" (lowercase letter "L") / "1" (the number "one") and "O" (capital letter for vowel "o") / "0" (the number "zero"). Sometypewritersin the pre-computer era evencombined the L and the one; users had to type a lowercase L when the number one was needed. The zero/o confusion gave rise to the tradition ofcrossing zeros, so that acomputer operatorwould type them correctly.[1]Unicode may contribute to this greatly with its combining characters, accents, several types ofhyphen, etc., often due to inadequaterenderingsupport, especially with smaller font sizes and the wide variety of fonts.[2]
Even earlier, handwriting provided rich opportunities for confusion. A notable example is the etymology of the word "zenith": in translating the Arabic "samt", a scribe misread "m" as "ni". This was common in medieval blackletter, which did not connect the vertical strokes of the letters i, m, n, or u, making them difficult to distinguish when several were in a row. The latter, as well as "rn"/"m"/"rri" ("RN"/"M"/"RRI") confusion, is still possible for the human eye even with modern advanced computer technology.
Intentional look-alike character substitution with different alphabets has also been known in various contexts. For example,Faux Cyrillichas been used as an amusement or attention-grabber and "Volapuk encoding", in which Cyrillic script is represented by similar Latin characters, was used in early days of theInternetas a way to overcome the lack of support for the Cyrillic alphabet. Another example is thatvehicle registration platescan have both Cyrillic (for domestic usage in Cyrillic script countries) and Latin (for international driving) with the same letters. Registration plates that areissued in Greeceare limited to using letters of theGreek alphabetthat have homoglyphs in the Latin alphabet, asEuropean Unionregulations require the use of Latin letters.
ASCII has several characters or pairs of characters that look alike and are known ashomographs(orhomoglyphs).Spoofing attacksbased on these similarities are known ashomograph spoofing attacks. For example, 0 (the number) and O (the letter), "l" lowercase "L", and "I" uppercase "i".
In a typical example of a hypothetical attack, someone could register adomain namethat appears almost identical to an existing domain but goes somewhere else. For example, the domain "rnicrosoft.com" begins with "r" and "n", not "m".
Other examples areG00GLE.COMwhich looks much likeGOOGLE.COMin some fonts.
Using a mix of uppercase and lowercase characters,googIe.com(capitali, not smallL) looks much likegoogle.comin some fonts.PayPalwas a target of a phishing scam exploiting this, using the domainPayPaI.com. In certain narrow-spaced fonts such asTahoma(the default in the address bar inWindows XP), placing acin front of aj,loriwill produce homoglyphs such ascl cj ci(d g a).
Inmultilingualcomputer systems, different logical characters may have identical appearances.
For example, Unicode character U+0430, Cyrillic small letter a ("а"), can look identical to Unicode character U+0061, Latin small letter a ("a"), which is the lowercase "a" used in English. Hence wikipediа.org (xn--wikipedi-86g.org; the Cyrillic version) instead of wikipedia.org (the Latin version).
The problem arises from the different treatment of the characters in the user's mind and the computer's programming. From the viewpoint of the user, a Cyrillic "а" within a Latin string is a Latin "a"; there is no difference in the glyphs for these characters in most fonts. However, the computer treats them differently when processing the character string as an identifier. Thus, the user's assumption of a one-to-one correspondence between the visual appearance of a name and the named entity breaks down.
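One way to surface this mismatch programmatically is to compare the Unicode script of each character in a domain label. The following Python sketch is illustrative only (the function names and the simple "mixed script" heuristic are assumptions, not a standard API):

```python
import unicodedata

def scripts_in(label: str) -> set:
    # The first word of a character's Unicode name is its script,
    # e.g. "LATIN SMALL LETTER A" vs. "CYRILLIC SMALL LETTER A".
    return {unicodedata.name(ch).split()[0] for ch in label if ch.isalpha()}

def is_mixed_script(domain: str) -> bool:
    # A label that mixes scripts is a common sign of a homograph spoof.
    return any(len(scripts_in(label)) > 1 for label in domain.split("."))

print(is_mixed_script("wikipedia.org"))        # False - all Latin
print(is_mixed_script("wikipedi\u0430.org"))   # True - Latin plus Cyrillic U+0430
```

Browsers apply more elaborate rules in the same spirit when deciding whether to display an internationalized name natively or in its ASCII form.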
Internationalized domain namesprovide a backward-compatible way for domain names to use the full Unicode character set, and this standard is already widely supported. However this system expanded the character repertoire from a few dozen characters in a single alphabet to many thousands of characters in many scripts; this greatly increased the scope for homograph attacks.
This opens a rich vein of opportunities forphishingand other varieties of fraud. An attacker could register a domain name thatlooksjust like that of a legitimate website, but in which some of the letters have been replaced by homographs in another alphabet. The attacker could then send e-mail messages purporting to come from the original site, but directing people to the bogus site. The spoof site could then record information such as passwords or account details, while passing traffic through to the real site. The victims may never notice the difference, until suspicious or criminal activity occurs with their accounts.
In December 2001Evgeniy GabrilovichandAlex Gontmakher, both fromTechnion,Israel, published a paper titled "The Homograph Attack",[1]which described an attack that used Unicode URLs to spoof a website URL. To prove the feasibility of this kind of attack, the researchers successfully registered a variant of the domain namemicrosoft.comwhich incorporated Cyrillic characters.
Problems of this kind were anticipated before IDN was introduced, and guidelines were issued to registries to try to avoid or reduce the problem. For example, it was advised that registries accept only characters from the Latin alphabet and the script of their own country, rather than all Unicode characters, but this advice was neglected by major TLDs.[citation needed]
On February 6, 2005,Cory Doctorowreported that this exploit was disclosed by 3ric Johanson at thehackerconferenceShmoocon.[3][4]Web browsers supporting IDNA appeared to direct the URL http://www.pаypal.com/, in which the firstacharacter is replaced by a Cyrillicа, to the site of the well known payment sitePayPal, but actually led to a spoofed web site with different content. Popular browsers continued to have problems properly displaying international domain names through April 2017.[5]
The following alphabets have characters that can be used for spoofing attacks (note that only the most obvious and common are listed; given artistic license and how much risk of getting caught the spoofer will accept, the possibilities are far more numerous than can be listed here):
Cyrillic is, by far, the most commonly used alphabet for homoglyphs, largely because it contains 11 lowercase glyphs that are identical or nearly identical to Latin counterparts.
TheCyrillic lettersа,с,е,о,р,хandуhave optical counterparts in the basic Latin alphabet and look close or identical toa,c,e,o,p,xandy. CyrillicЗ,Чandбresemble the numerals3,4and6.Italic typegenerates more homoglyphs:дтпиorдтпи(дтпиin standard type), resemblingdmnu(in some fontsдcan be used, since its italic form resembles a lowercaseg; however, in most mainstream fonts,дinstead resembles apartial differentialsign,∂).
If capital letters are counted,АВСЕНІЈКМОРЅТХcan substituteABCEHIJKMOPSTX, in addition to the capitals for the lowercase Cyrillic homoglyphs.
Cyrillic non-Russian problematic letters areіandi,јandj,ԛandq,ѕands,ԝandw,ҮandY, whileҒandF,ԌandGbear some resemblance to each other. Cyrillicӓёїӧcan also be used if an IDN itself is being spoofed, to fakeäëïö.
WhileKomi De(ԁ),shha(һ),palochka(Ӏ) andizhitsa(ѵ) bear strong resemblance to Latind,h,landv, these letters are either rare or archaic and are not widely supported in most standard fonts (they are not included in theWGL-4). Attempting to use them could cause aransom note effect.
From theGreek alphabet, onlyomicron(ο) and sometimesnu(ν) appear identical to a Latin alphabet letter in the lowercase used for URLs. Fonts that are initalic typewill feature Greek alpha (α) looking like a Latina.
This list increases if close matches are also allowed (such as Greekεικηρτυωχγforeiknptuwxy). Usingcapital letters, the list expands greatly. GreekΑΒΕΗΙΚΜΝΟΡΤΧΥΖlooks identical to LatinABEHIKMNOPTXYZ. GreekΑΓΒΕΗΚΜΟΠΡΤΦΧlooks similar to CyrillicАГВЕНКМОПРТФХ(as do CyrillicЛл(Лл) and GreekΛin certain geometric sans-serif fonts), Greek lettersκandοlook similar to Cyrillicкandо. Besides this Greekτ,φcan be similar to Cyrillicт,фin some fonts, Greekδlooks like Cyrillicб, and the Cyrillicаalso italicizes the same as its Latin counterpart, making it possible to substitute it for alpha or vice versa. The lunate form of sigma,Ϲϲ, resembles both LatinCcand CyrillicСс. Especially in contemporary typefaces, Cyrillicлis rendered with a glyph indistinguishable from Greekπ.
If an IDN itself is being spoofed, Greek betaβcan be a substitute for German eszettßin some fonts (and in fact,code page 437treats them as equivalent), as can Greek end-of-word-variant sigmaςforç; accented Greek substitutesόίάcan usually be used foróíáin many fonts, with the last of these (alpha) again only resemblingain italic type.
The Armenian alphabet can also contribute critical characters: several Armenian letters such as օ, ո, ս, as well as the capitals Տ and Լ, are often completely identical to Latin characters in modern fonts, and others are similar enough to pass, such as ցհոօզս, which look like ghnoqu, յ, which resembles a dotless j, and ք, which can resemble either p or f depending on the font; ա can resemble Cyrillic ш. However, the use of Armenian is less reliable for a spoofer: not all standard fonts include Armenian glyphs (whereas Greek and Cyrillic are widely supported); Windows versions prior to Windows 7 rendered Armenian in a distinct font, Sylfaen, so mixing Armenian with Latin would look obviously different unless the surrounding text also used Sylfaen or another Unicode typeface (this mismatch is known as a ransom note effect). The version of Tahoma used in Windows 7 supports Armenian (previous versions did not), and it differentiates Latin g from Armenian ց.
Two letters in Armenian (Ձշ) also can resemble the number 2, Յ resembles 3, while another (վ) sometimes resembles the number 4.
Hebrew spoofing is generally rare. Only three letters from that alphabet can reliably be used: samekh (ס), which sometimes resembles o, vav with diacritic (וֹ), which resembles an i, and heth (ח), which resembles the letter n. Less accurate approximants for some other alphanumerics can also be found, but these are usually only accurate enough to use for the purposes offoreign brandingand not for substitution. Furthermore, theHebrew alphabetis written from right to left and trying to mix it with left-to-right glyphs may cause problems.
Though theThai scripthas historically had a distinct look with numerous loops and small flourishes, modernThai typography, beginning withManopticain 1973 and continuing throughIBM Plexin the modern era, has increasingly adopted a simplified style in which Thai characters are represented withglyphsstrongly resembling Latin letters. ค (A), ท (n), น (u), บ (U), ป (J), พ (W), ร (S), and ล (a) are among the Thai glyphs that can closely resemble Latin.
TheChinese languagecan be problematic for homographs as many characters exist as bothtraditional(regular script) andsimplified Chinese characters. In the.orgdomain, registering one variant renders the other unavailable to anyone; in.biza single Chinese-language IDN registration delivers both variants as active domains (which must have the same domain name server and the same registrant)..hk(.香港) also adopts this policy.
Other Unicode scripts in which homographs can be found includeNumber Forms(Roman numerals),CJK CompatibilityandEnclosed CJK Letters and Months(certain abbreviations), Latin (certain digraphs),Currency Symbols,Mathematical Alphanumeric Symbols, andAlphabetic Presentation Forms(typographic ligatures).
Two names which differ only in an accent on one character may look very similar, particularly when the substitution involves thedotted letter i; the tittle (dot) on the i can be replaced with adiacritic(such as agrave accentoracute accent; both ì and í are included in most standard character sets and fonts) that can only be detected with close inspection. In most top-level domain registries, wíkipedia.tld (xn--wkipedia-c2a.tld) and wikipedia.tld are two different names which may be held by different registrants.[6]One exception is.ca, where reserving the plain-ASCIIversion of the domain prevents another registrant from claiming an accented version of the same name.[7]
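Whether two visually similar names are in fact distinct registrations can be checked by converting them to their ASCII-compatible (Punycode) form. A brief sketch using Python's built-in "idna" codec follows; the domain strings are merely examples, and production code would normally use a dedicated IDNA library:

```python
# Convert internationalized names to their ASCII-compatible (Punycode) form
# so that visually similar labels can be compared unambiguously.
for name in ("wikipedia.org", "w\u00edkipedia.org"):   # plain "i" vs. accented "í"
    print(name, "->", name.encode("idna").decode("ascii"))
```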
Unicodeincludes many characters which are not displayed by default, such as thezero-width space. In general,ICANNprohibits any domain with these characters from being registered, regardless of TLD.
In 2011, an unknown source (registering under the name "Completely Anonymous") registered a domain name homographic to television stationKBOI-TV's to create afake news website. The sole purpose of the site was to spread anApril Fool's Dayjoke regarding theGovernor of Idahoissuing a supposed ban on the sale of music byJustin Bieber.[8][9]
In September 2017, security researcher Ankit Anubhav discovered an IDN homograph attack where the attackers registered adoḅe.com to deliver the Betabottrojan.[10]
The simplest defense is for web browsers not to support IDNA or other similar mechanisms, or for users to turn off whatever support their browsers have. That could mean blocking access to IDNA sites, but generally browsers permit access and just display IDNs inPunycode. Either way, this amounts to abandoning non-ASCII domain names.
As an additional defense, Internet Explorer 7, Firefox 2.0 and above, and Opera 9.10 include phishing filters that attempt to alert users when they visit malicious websites.[17][18][19]As of April 2017, several browsers (including Chrome, Firefox, and Opera) were displaying IDNs consisting purely of Cyrillic characters normally (not as punycode), allowing spoofing attacks. Chrome tightened IDN restrictions in version 59 to prevent this attack.[20][21]
Browser extensionslike No Homo-Graphs are available forGoogle ChromeandFirefoxthat check whether the user is visiting a website which is a homograph of another domain from a user-defined list.[22]
These methods of defense only extend to within a browser. Homographic URLs that house malicious software can still be distributed, without being displayed as Punycode, throughe-mail,social networkingor other websites without being detected until the user actually clicks the link. While the fake link will show in Punycode when it is clicked, by this point the page has already begun loading into the browser.[citation needed]
The IDN homographs database is a Python library that allows developers to defend against this usingmachine learning-based character recognition.[23]
ICANN has implemented a policy prohibiting any potential internationalized TLD from choosing letters that could resemble an existing Latin TLD and thus be used for homograph attacks. Proposed IDN TLDs .бг (Bulgaria), .укр (Ukraine) and .ελ (Greece) were initially rejected or stalled because of their perceived resemblance to Latin letters; all three (and Serbian .срб and Mongolian .мон) were later accepted.[24] Three-letter TLDs are considered safer than two-letter TLDs, since they are harder to match to normal Latin ISO-3166 country domains; although the potential to match new generic domains remains, such generic domains are far more expensive than registering a second- or third-level domain address, making it cost-prohibitive to register a homoglyphic TLD for the sole purpose of making fraudulent domains (which itself would draw ICANN scrutiny).
The Russian registry operatorCoordination Center for TLD RUonly accepts Cyrillic names for the top-level domain.рф, forbidding a mix with Latin or Greek characters. However, the problem in.comand othergTLDsremains open.[25]
In their 2019 study, Suzuki et al. introduced ShamFinder,[26] a program for recognizing homograph IDNs, shedding light on their prevalence in real-world scenarios. Similarly, Chiba et al. (2019) designed DomainScouter,[27] a system adept at detecting diverse homograph IDNs; by analyzing an estimated 4.4 million registered IDNs across 570 top-level domains (TLDs), it successfully identified 8,284 IDN homographs, including many previously unidentified cases targeting brands in languages other than English.[28]
|
https://en.wikipedia.org/wiki/IDN_homograph_attack
|
Anetwork enclaveis a section of an internalnetworkthat is subdivided from the rest of thenetwork.[1][2]
The purpose of a network enclave is to limit internal access to a portion of a network. It is necessary when the set of resources differs from those of the general network surroundings.[3][4]Typically, network enclaves are not publicly accessible. Internal accessibility is restricted through the use of internalfirewalls,VLANs,network access controlandVPNs.[5]
Network enclaves consist of standalone assets that do not interact with other information systems or networks. A major difference between a DMZ (demilitarized zone) and a network enclave is that a DMZ allows inbound and outbound traffic access, where firewall boundaries are traversed, whereas in an enclave firewall boundaries are not traversed. Enclave protection tools can be used to provide protection within specific security domains. These mechanisms are installed as part of an intranet to connect networks that have similar security requirements.[6]
ADMZcan be established within an enclave to host publicly accessible systems. The ideal design is to build theDMZon a separate network interface of the enclave perimeter firewall. All DMZ traffic would be routed through the firewall for processing and the DMZ would still be kept separate from the rest of the protected network.
|
https://en.wikipedia.org/wiki/Network_enclave
|
Network Security Toolkit(NST) is aLinux-based LiveDVD/USB Flash Drivethat provides a set offree and open-sourcecomputer securityandnetworkingtools to perform routine security and networking diagnostic and monitoring tasks. The distribution can be used as a network security analysis, validation and monitoring tool on servers hostingvirtual machines. The majority of tools published in the article "Top 125 security tools" byInsecure.orgare available in the toolkit. NST has package management capabilities similar toFedoraand maintains its own repository of additional packages.
Many tasks that can be performed within NST are available through a web interface called NST WUI.[1] Among the tools that can be used through this interface are nmap with the visualization tool ZenMap, ntop, a Network Interface Bandwidth Monitor, a Network Segment ARP Scanner, a session manager for VNC, a minicom-based terminal server, serial port monitoring, and WPA PSK management.
Other features include visualization ofntopng,ntop,wireshark,traceroute,NetFlowandkismetdata bygeolocatingthe host addresses, IPv4 Address conversation,traceroutedata andwireless access pointsand displaying them viaGoogle Earthor aMercator World Mapbit image, a browser-based packet capture and protocol analysis system capable of monitoring up to fournetwork interfacesusingWireshark, as well as aSnort-basedintrusion detection systemwith a "collector" backend that stores incidents in aMySQLdatabase.[2]For web developers, there is also aJavaScriptconsole with a built-inobject librarywith functions that aid the development ofdynamic web pages.
The following examplentophost geolocation images were generated by NST.
The following image depicts theinteractivedynamicSVG/AJAXenabled Network Interface Bandwidth Monitor which is integrated into the NST WUI. Also shown is aRuler Measurementtool overlay to perform time and bandwidth rate analysis.
|
https://en.wikipedia.org/wiki/Network_Security_Toolkit
|
TCP Gender Changeris a method in computer networking for making an internalTCP/IPbasednetwork serveraccessible beyond its protectivefirewall.
It consists of two nodes: one resides inside the local area network, where it can access the desired server, and the other runs outside the local area network, where the client can access it. These nodes are respectively called CC (Connect-Connect) and LL (Listen-Listen).
The names reflect how the nodes behave: the Connect-Connect node initiates two connections, one to the Listen-Listen node and one to the actual server, while the Listen-Listen node passively listens on two TCP/IP ports, one to receive a connection from CC and the other for an incoming connection from the client.
The CC node, which runs inside the network, establishes a control connection to LL and waits for LL's signal to open a connection to the internal server. Upon receiving a client connection, LL signals the CC node to connect to the server; once done, CC reports the result to LL, and if successful LL keeps the client connection, so that the client and server can communicate while CC and LL relay the data back and forth.
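A minimal single-session version of this arrangement can be sketched with ordinary TCP sockets. Everything in the sketch below — host names, port numbers, and the one-byte "go" signal — is an illustrative assumption rather than part of the actual tool:

```python
import socket
import threading

def pipe(src, dst):
    # Copy bytes from src to dst until src closes, then close dst.
    while data := src.recv(4096):
        dst.sendall(data)
    dst.close()

def relay(a, b):
    # Shuttle traffic in both directions between two connected sockets.
    threading.Thread(target=pipe, args=(a, b), daemon=True).start()
    pipe(b, a)

def ll_node(cc_port=9000, client_port=9001):
    # LL: runs outside the firewall and listens on two ports.
    cc_srv = socket.create_server(("", cc_port))
    cl_srv = socket.create_server(("", client_port))
    cc_conn, _ = cc_srv.accept()   # persistent connection dialled out by CC
    client, _ = cl_srv.accept()    # the external client arrives
    cc_conn.sendall(b"G")          # signal CC to connect to the internal server
    relay(client, cc_conn)

def cc_node(ll_host, cc_port=9000, server=("intranet-host", 22)):
    # CC: runs inside the firewall and only makes outbound connections.
    ll = socket.create_connection((ll_host, cc_port))
    ll.recv(1)                     # wait for LL's "go" signal
    srv = socket.create_connection(server)
    relay(ll, srv)
```

In the tool as described above, the control connection persists across sessions and carries success or failure status back to LL, but the data path is the same: client ↔ LL ↔ CC ↔ server.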
One of the cases where it can be very useful is to connect to a desktop machine behind a firewall runningVNC, which would make the desktop remotely accessible over the network and beyond the firewall. Another useful scenario would be to create aVPNusingPPPoverSSH, or even simply using SSH to connect to an internalUnixbased server.
|
https://en.wikipedia.org/wiki/TCP_Gender_Changer
|
ATCP sequence prediction attackis an attempt to predict the sequence number used to identify thepacketsin aTCP connection, which can be used to counterfeit packets.[1]
The attacker hopes to correctly guess the sequence number to be used by thesending host. If they can do this, they will be able to send counterfeit packets to the receiving host which will seem to originate from the sending host, even though the counterfeit packets may in fact originate from some third host controlled by the attacker. One possible way for this to occur is for the attacker to listen to the conversation occurring between the trusted hosts, and then to issue packets using the same sourceIP address. By monitoring the traffic before an attack is mounted, the malicious host can figure out the correct sequence number. After the IP address and the correct sequence number are known, it is basically a race between the attacker and the trusted host to get the correct packet sent. One common way for the attacker to send it first is to launch another attack on the trusted host, such as adenial-of-service attack. Once the attacker has control over the connection, they are able to send counterfeit packets without getting a response.[2]
If an attacker can cause delivery of counterfeit packets of this sort, they may be able to cause various sorts of mischief, including the injection into an existing TCP connection of data of the attacker's choosing, and the premature closure of an existing TCP connection by the injection of counterfeit packets with the RST bit set, aTCP reset attack.
Theoretically, other information such as timing differences or information from lowerprotocol layerscould allow the receiving host to distinguish authentic TCP packets from the sending host and counterfeit TCP packets with the correct sequence number sent by the attacker. If such other information is available to the receiving host, if the attacker can also fake that other information, and if the receiving host gathers and uses the information correctly, then the receiving host may be fairly immune to TCP sequence prediction attacks. Usually, this is not the case, so the TCP sequence number is the primary means of protection of TCP traffic against these types of attack.
Another solution to this type of attack is to configure any router or firewall to reject packets that arrive from an external source but carry an internal IP address. Although this does not fix the underlying weakness, it prevents such spoofed packets from reaching their targets.[2]
|
https://en.wikipedia.org/wiki/TCP_sequence_prediction_attack
|
Thelist of security hacking incidentscovers important or noteworthy events in the history ofsecurity hackingandcracking.
Science Research Associates undertook to write a full APL system for theIBM 1500. They modeled their system afterAPL/360, which had by that time been developed and seen substantial use inside of IBM, using code borrowed from MAT/1500 where possible. In their documentation, they acknowledge their gratitude to "a number of high school students for their compulsion to bomb the system". This was an early example of a kind of sportive, but very effective, debugging that was often repeated in the evolution of APL systems.
technical experts, skilled, often young, computer programmers who almost whimsically probe the defenses of a computer system, searching out the limits and the possibilities of the machine. Despite their seemingly subversive role, hackers are a recognized asset in the computer industry, often highly prized.
|
https://en.wikipedia.org/wiki/List_of_security_hacking_incidents
|
Low Orbit Ion Cannon(LOIC) is anopen-sourcenetworkstress testinganddenial-of-service attackapplication written inC#. LOIC was initially developed by Praetox Technologies, however it was later released into thepublic domain[2]and is currently available on several open-source platforms.[3][4]
LOIC performs aDoS attack(or, when used by multiple individuals, aDDoS attack) on a target site by flooding the server withTCP,UDP, or HTTP packets with the intention of disrupting the service of a particular host. People have used LOIC to joinvoluntary botnets.[5]
The software inspired the creation of an independentJavaScriptversion calledJS LOIC, as well as a LOIC-derived web version calledLow Orbit Web Cannon. These enable a DoS from aweb browser.[6][7][8]
Security experts quoted by the BBC indicated that well-written firewall rules can filter out most traffic from DDoS attacks by LOIC, thus preventing the attacks from being fully effective.[9] In at least one instance, filtering out all UDP and ICMP traffic blocked a LOIC attack.[10] Firewall rules of this sort are more likely to be effective when implemented at a point upstream of an application server's Internet uplink, to keep the uplink from exceeding its capacity.[10]
LOIC attacks are easily identified in system logs, and the attack can be tracked down to the IP addresses used.[11]
LOIC was used byAnonymous(a group that spawned from the/b/ board of 4chan) duringProject Chanologyto attack websites from the Church ofScientology, once more to (successfully) attack theRecording Industry Association of America's website in October 2010,[12]and it was again used byAnonymousduring theirOperation Paybackin December 2010 to attack the websites of companies and organizations that opposedWikiLeaks.[13][14]
In retaliation for the shutdown of the file sharing serviceMegauploadand the arrest of four workers, members of Anonymous launched a DDoS attack upon the websites ofUniversal Music Group(the company responsible for the lawsuit against Megaupload), theUnited States Department of Justice, theUnited States Copyright Office, theFederal Bureau of Investigation, theMPAA,Warner Music Groupand theRIAA, as well as theHADOPI, all on the afternoon of January 19, 2012, through LOIC.[15]In general, the attack hoped to retaliate against those who Anonymous members believed harmed their digital freedoms.[16]
The LOIC application is named after theion cannon, a fictional weapon from many sci-fi works, video games,[17]and in particular after its namesake from theCommand & Conquerseries.[18]The artwork used in the application was a concept art forCommand & Conquer 3: Tiberium Wars.
The song "Low Orbit Ion Cannon" onEmperor X's 2017 albumOversleepers Internationaldirectly references the software.
|
https://en.wikipedia.org/wiki/Low_Orbit_Ion_Cannon
|
High Orbit Ion Cannon(HOIC) is anopen-sourcenetworkstress testinganddenial-of-service attackapplication designed to attack as many as 256URLsat the same time. It was designed to replace theLow Orbit Ion Cannonwhich was developed by Praetox Technologies and later released into thepublic domain. The security advisory for HOIC was released by Prolexic Technologies in February 2012.[1][2]
HOIC was developed during the conclusion of Operation Payback by the hacktivist collective Anonymous.[3] As Operation Payback concluded, there was massive pressure on the group from law enforcement agencies, which captured and prosecuted more than 13 individuals connected with the group.[4] This forced many members of the group to rethink their strategies, and part of the group subsequently launched Operation Leakspin.[5] A large part of Anonymous, however, remained focused on launching opt-in DDoS attacks, but the Low Orbit Ion Cannon was not powerful enough to launch attacks with such a limited number of users. HOIC was designed to remedy this, with the ability to cause an HTTP flood with as few as 50 user agents required to successfully launch an attack, and co-ordination between multiple users leading to an exponential increase in the damage.[6][7] HOIC was the first tool of its kind to support so-called "booster files", configurable VBScript modules that randomize the HTTP headers of attacking computers, allowing thousands upon thousands of highly randomized combinations of user agents.[8] Besides allowing user agents to implement some form of randomization as a countermeasure, the booster files can be and have been used to increase the magnitude of the attack.[9]
HOIC and its predecessor, theLOIC, are named after anion cannon, a fictionaldirected-energy weapondescribed as firing beams ofionsfrom a space-based platform onto Earth-based targets. Although ion cannons appear in many movies, television shows, and video games that have a science fiction-based setting, the ones depicted in theCommand & Conquerseries of video games are considered to be the inspiration for the graphics on the software's GUI and website.[10]
Simply described, HOIC is a program for sendingHTTP POSTandGETrequests at a computer under attack, that uses alulz-inspiredgraphical interface.[11]HOIC primarily performs adenial-of-service (DoS) attackand aDDoS attackwhen co-ordinated by multiple individuals. Thedenial-of-service (DoS) attackon the target URL is accomplished by sending excessive traffic in an attempt to overload the site and bring it down. This basic version of the attack can be customized by using the booster files which follow theVB 6mixed withVB .NETsyntax. In addition, HOIC can simultaneously attack up to 256 domains, making it one of the most versatile tools for hackers who are attempting to co-ordinate DDoS attacks as a group.[12]
The minimalist GUI of the tool makes it user friendly and easy to control. The basic routine of an attack is to input the URL of the website which is to be attacked, and set the power option on low, medium or high. The power option sets the request velocity with low at two requests per second, medium at four and high at eight requests per second. Then a booster file is added which uses .hoic extension to define dynamic request attributes, launch attacks on multiple pages within the same website and help evade some defense filters. The attack is then launched by pressing the red button in the GUI labelled as "Fire Teh Lazer".[13]
The basic limitation of HOIC is that it requires a coordinated group of users to ensure that the attacks are successful. Even though it has allowed attacks to be launched by far fewer users than the older Low Orbit Ion Cannon, HOIC still requires a minimum of 50 users to launch an effective attack and more are required to sustain it if the target website has protection.[8]Another limiting factor is the lack of anonymizing and randomizing capability. Even though HOIC should, in theory, offer anonymizing through the use of booster files, the actual protection provided is not enough. Furthermore, anonymizing networks such asTorare not capable of handling the bandwidth of attacks generated by HOIC. Any attempt to launch an attack using the Tor network will actually harm the network itself.[11]However, Anonymous members routinely use proxy servers based in Sweden to launch their attacks. It has been speculated that this is due to the notion that Sweden may have lessinternet privacylaws than the rest of the world.[11][14]
Primarily, HOIC has been designed as a stress testing tool and can be lawfully used as such to stress test local networks and servers provided the person initiating the test has authorization to test and as long as no other networks, servers, clients, networking equipment or URLs are disrupted.[15]
HOIC can also be used to perform distributed denial-of-service attacks, which are illegal under various statutes. ThePolice and Justice Act 2006ofthe United Kingdomamended theComputer Misuse Act 1990, and specifically outlawed denial-of-service attacks and set a maximum penalty of 10 years in prison.[16]In the United States, denial-of-service attacks may be considered a federal crime under theComputer Fraud and Abuse Actwith penalties that include up to ten years of imprisonment. In 2013criminal chargeswere brought against 13 members ofAnonymousfor participating in a DDoS attack against various websites of organizations including the Recording Industry Association of America, the Motion Picture Association of America, the United States Copyright Office of the Library of Congress, Visa, MasterCard, and Bank of America. They were charged with one count of "conspiracy to intentionally cause damage to a protected computer" for the events that occurred between September 16, 2010 and January 2, 2011.[17]DDoS attacks are federal offenses in the United States and are prosecuted by theDepartment of JusticeunderUSCTitle 18, Section 1030.[18]
In 2013,Anonymouspetitioned the United States government viaWe the People, demanding that DDoS attacks be recognized as a form of virtual protest similar toOccupy protests.[19]
DDoS mitigationusually works on the principle of distribution, which is basically intelligent routing of traffic to avoid congestion and prevent overload at a single URL. Other methods to counter DDoS include installation ofintrusion prevention system (IPS)andintrusion detection system (IDS)devices and application software.[20]
Anonymouswere the first group to utilize High Orbit Ion Cannon publicly on January 19, 2012. AfterMegaupload, a file-sharing website, was shut down following federal agents raiding their premises,Anonymouslaunched an attack against the website of theUS Department of Justice. As the DOJ website went offline Anonymous claimed success via twitter, saying "One thing is certain: EXPECT US! #Megaupload".[21]Over the course of the next few hours, several other websites were knocked offline and kept offline. These included websites belonging to theRecording Industry Association of America (RIAA), theMotion Picture Association of America (MPAA)and theBMI.[22]Finally, as the day drew to a close, the website belonging to theFBIwas hit repeatedly before it ultimately succumbed to attacks and acquired a “Tango Down” status. Anonymous claimed that it was "the single largest Internet attack in its history", while it was reported that as many as 27,000 user agents were taking part in the attack.[23][24]
|
https://en.wikipedia.org/wiki/High_Orbit_Ion_Cannon
|
Acommunication channelrefers either to a physicaltransmission mediumsuch as a wire, or to alogical connectionover amultiplexedmedium such as a radio channel intelecommunicationsandcomputer networking. A channel is used forinformation transferof, for example, a digitalbit stream, from one or severalsendersto one or severalreceivers. A channel has a certaincapacityfor transmitting information, often measured by itsbandwidthinHzor itsdata rateinbits per second.
Communicating an informationsignalacross distance requires some form of pathway or medium. These pathways, called communication channels, use two types of media:Transmission line-basedtelecommunications cable(e.g.twisted-pair,coaxial, andfiber-optic cable) andbroadcast(e.g.microwave,satellite,radio, andinfrared).
In information theory, a channel refers to a theoretical channel model with certain error characteristics. In this more general view, a storage device is also a communication channel, which can be sent to (written to) and received from (read from), and which allows communication of an information signal across time.
Examples of communications channels include physical transmission media such as wires, cables, and optical fibers; wireless channels such as radio and satellite links; and storage media.
All of these communication channels share the property that they transfer information. The information is carried through the channel by asignal.
Mathematical models of the channel can be made to describe how the input (the transmitted signal) is mapped to the output (the received signal). There exist many types and uses of channel models specific to the field of communication. In particular, separate models are formulated to describe each layer of a communication system.
A channel can be modeled physically by trying to calculate the physical processes which modify the transmitted signal. For example, in wireless communications, the channel can be modeled by calculating the reflection from every object in the environment. A sequence of random numbers might also be added to simulate external interference or electronic noise in the receiver.
Statistically, a communication channel is usually modeled as atupleconsisting of an input alphabet, an output alphabet, and for each pair(i, o)of input and output elements, a transition probabilityp(i, o). Semantically, the transition probability is the probability that thesymbolois received given thatiwas transmitted over the channel.
Statistical and physical modeling can be combined. For example, in wireless communications the channel is often modeled by a random attenuation (known asfading) of the transmitted signal, followed by additive noise. The attenuation term is a simplification of the underlying physical processes and captures the change in signal power over the course of the transmission. The noise in the model captures external interference or electronic noise in the receiver. If the attenuation term iscomplexit also describes the relative time a signal takes to get through the channel. The statistical properties of the attenuation in the model are determined by previous measurements or physical simulations.
Communication channels are also studied in discrete-alphabetmodulationschemes. The mathematical model consists of a transition probability that specifies an output distribution for each possible sequence of channel inputs. Ininformation theory, it is common to start with memoryless channels in which the output probability distribution only depends on the current channel input.
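As a concrete instance of such a transition-probability model, the memoryless binary symmetric channel flips each transmitted bit independently with some crossover probability. A small illustrative Python sketch follows (the crossover probability and the example bit sequence are arbitrary):

```python
import random

def binary_symmetric_channel(bits, p=0.1, rng=random.Random(42)):
    # Memoryless model: each output symbol depends only on the current input,
    # with transition probability p of flipping the bit.
    return [b ^ (rng.random() < p) for b in bits]

sent = [1, 0, 1, 1, 0, 0, 1, 0]
received = binary_symmetric_channel(sent)
print(sent)
print(received)
```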
A channel model may either be digital or analog.
In a digital channel model, the transmitted message is modeled as a digital signal at a certain protocol layer. Underlying protocol layers are replaced by a simplified model. The model may reflect channel performance measures such as bit rate, bit errors, delay, delay variation, etc. Examples of digital channel models include the binary symmetric channel and the binary erasure channel.
In an analog channel model, the transmitted message is modeled as an analog signal. The model can be a linear or non-linear, time-continuous or time-discrete (sampled), memoryless or dynamic (resulting in burst errors), time-invariant or time-variant (also resulting in burst errors), baseband, passband (RF signal model), real-valued or complex-valued signal model. The model may reflect channel impairments such as noise, interference, distortion, attenuation, and fading.
Commonly used channel capacity and performance measures include the channel capacity (maximum data rate) in bit/s, the spectral efficiency in (bit/s)/Hz, and the signal-to-noise ratio.
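For example, the Shannon–Hartley theorem gives the capacity of a band-limited channel with additive white Gaussian noise as C = B·log2(1 + S/N). A quick illustrative calculation in Python (the bandwidth and SNR figures are made-up example values):

```python
import math

bandwidth_hz = 3_000           # roughly a voice-grade telephone channel
snr_db = 30                    # signal-to-noise ratio in decibels
snr = 10 ** (snr_db / 10)      # convert to a linear power ratio

capacity = bandwidth_hz * math.log2(1 + snr)   # Shannon-Hartley capacity
print(f"capacity = {capacity:,.0f} bit/s")     # about 29,902 bit/s
```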
In networks, as opposed to point-to-point communication, the communication media can be shared between multiple communication endpoints (terminals). Depending on the type of communication, different terminals can cooperate or interfere with each other. In general, any complex multi-terminal network can be considered as a combination of simplified multi-terminal channels. The principal multi-terminal channels first introduced in the field of information theory are the broadcast channel, the multiple access channel, the relay channel, and the interference channel.[citation needed]
|
https://en.wikipedia.org/wiki/Communication_channel
|
Adata linkis a means ofconnecting one location to anotherfor the purpose of transmitting and receiving digital information (data communication). It can also refer to a set of electronics assemblies, consisting of a transmitter and a receiver (two pieces ofdata terminal equipment) and the interconnecting datatelecommunication circuit. These are governed by alink protocolenabling digital data to be transferred from a data source to adata sink.
There are at least three types of basic data-link configurations that can be conceived of and used: simplex, half-duplex, and full-duplex.
In civilaviation, a data-link system (known asController Pilot Data Link Communications) is used to send information between aircraft andair traffic controllersfor example when an aircraft is too far from the ATC to make voice radio communication andradarobservations possible. Such systems are used for aircraft crossing theAtlantic,PacificandIndianoceans. One such system, used byNav CanadaandNATSover the North Atlantic, uses a five-digit data link sequence number confirmed between air traffic control and the pilots of the aircraft before the aircraft proceeds to cross the ocean. This system uses the aircraft'sflight management computerto send location, speed and altitude information about the aircraft to the ATC. ATC can then send messages to the aircraft regarding any necessary change of course.
Inunmanned aircraft, land vehicles, boats, and spacecraft, a two-way (full-duplexorhalf-duplex) data-link is used to sendcontrol signals, and to receivetelemetry.
|
https://en.wikipedia.org/wiki/Data_link
|
Intelecommunications, apoint-to-pointconnection refers to a communications connection between twocommunication endpointsornodes. An example is atelephone call, in which one telephone is connected with one other, and what is said by one caller can only be heard by the other. This is contrasted with apoint-to-multipointorbroadcastconnection, in which many nodes can receive information transmitted by one node. Other examples of point-to-point communications links areleased linesandmicrowave radio relay.
The term is also used incomputer networkingandcomputer architectureto refer to a wire or other connection that links only two computers or circuits, as opposed to othernetwork topologiessuch asbusesorcrossbar switcheswhich can connect many communications devices.
Point-to-pointis sometimes abbreviated asP2P. This usage ofP2Pis distinct fromP2Pmeaningpeer-to-peerin the context offile sharingnetworks or other data-sharing protocols between peers.
A traditional point-to-point data link is a communications medium with exactly two endpoints and no data orpacketformatting. The host computers at either end take full responsibility for formatting the data transmitted between them. The connection between the computer and the communications medium was generally implemented through anRS-232or similar interface. Computers in close proximity may be connected by wires directly between their interface cards.
When connected at a distance, each endpoint would be fitted with amodemto convert analog telecommunications signals into a digital data stream. When the connection uses a telecommunications provider, the connection is called adedicated,leased, orprivate line. TheARPANETused leased lines to provide point-to-point data links between itspacket-switchingnodes, which were calledInterface Message Processors.
With the exception ofpassive optical networks, modernEthernetis exclusively point-to-point on thephysical layer– any cable only connects two devices. The term point-to-point telecommunications can also mean awirelessdata link between two fixed points. The wireless communication is typically bi-directional and eithertime-division multiple access(TDMA) orchannelized. This can be amicrowave relaylink consisting of a transmitter which transmits a narrow beam of microwaves with aparabolic dishantenna to a second parabolic dish at the receiver. It also includes technologies such aslaserswhich transmit data modulated on a light beam. These technologies require an unobstructedline of sightbetween the two points and thus are limited by the visual horizon to distances of about 40 miles (64 km).[a]
In alocal network,repeater hubsorswitchesprovide basic connectivity. A hub provides a point-to-multipoint (or simply multipoint) circuit in which all connected client nodes share the network bandwidth. A switch on the other hand provides a series of point-to-point circuits, via microsegmentation, which allows each client node to have a dedicated circuit and the added advantage of havingfull-duplexconnections.
From theOSI model's layer perspective, both switches and repeater hubs provide point-to-point connections on thephysical layer. However, on thedata link layer, a repeater hub provides point-to-multipoint connectivity – eachframeis forwarded to all nodes – while a switch provides virtual point-to-point connections – eachunicastframe is only forwarded to the destination node.
Within manyswitched telecommunications systems, it is possible to establish a permanent circuit. One example might be a telephone in the lobby of a public building, which is programmed to ring only the number of a telephone dispatcher. "Nailing down" a switched connection saves the cost of running a physical circuit between the two points. The resources in such a connection can be released when no longer needed, for example, a television circuit from a parade route back to the studio.
|
https://en.wikipedia.org/wiki/Point-to-point_(telecommunications)
|
Transmitis a file transfer client program formacOSdeveloped byPanic Inc.Transmit istrialware; after a seven-day trial period, the product can only be used for seven-minute sessions until it is purchased. Transmit was originally built as anFTP clientand now supports a number of protocols includingSFTPandWebDAVand cloud services includingGoogle DriveandDropbox.[1]
Many of the features of Transmit 4 take advantage of technologies Apple introduced in OS X 10.4, such asuploadingusing aDashboardwidgetor theDock, support for.MacandiDisk/WebDAV, FTP/WebDAV/S3 servers as disks in Finder (since v4.0),Spotlight, Droplets, Amazon S3 support andAutomatorplugins.
The app was called "Transit" at introduction in 1998,[2]but had to be changed due to a conflict with an existing product. Transmit was originally developed forClassic Mac OS, but that version has been discontinued and madefreeware.
Transmit for iOS was released in 2014 but removed and retired from the Apple app store in 2018.[3]
On February 16, 2005, Transmit 3 was released. The app was previewed to attendees of Macworld Expo the month prior in January 2005.
On April 27, 2010, Transmit 4 was released. The app was almost completely rewritten, had a brand new interface, over 45 new features, and was up to 25 times faster.[4]
On June 10, 2016, Panic beganbeta testingTransmit 5, touting improved performance and new features.[5]Transmit 5 was released the following year on July 18, 2017.[6]
Transmit is the recipient of a number of awards, including:
|
https://en.wikipedia.org/wiki/Transmit_(file_transfer_tool)
|
Inelectronicsandtelecommunications, aradio transmitteror justtransmitter(often abbreviated asXMTRorTXin technical documents) is anelectronic devicewhich producesradio waveswith anantennawith the purpose ofsignal transmissionto aradio receiver. The transmitter itself generates aradio frequencyalternating current, which is applied to theantenna. When excited by this alternating current, the antennaradiatesradio waves.
Transmitters are necessary component parts of all electronic devices that communicate byradio, such asradio(audio) andtelevision broadcastingstations,cell phones,walkie-talkies,wireless computer networks,Bluetoothenabled devices,garage door openers,two-way radiosin aircraft, ships, spacecraft,radarsets and navigational beacons. The termtransmitteris usually limited to equipment that generates radio waves forcommunicationpurposes; orradiolocation, such asradarand navigational transmitters. Generators of radio waves for heating or industrial purposes, such asmicrowave ovensordiathermyequipment, are not usually called transmitters, even though they often have similar circuits.
The term is popularly used more specifically to refer to abroadcast transmitter, a transmitter used inbroadcasting, as inFM radio transmitterortelevision transmitter. This usage typically includes both the transmitter proper, the antenna, and often the building it is housed in.
A transmitter can be a separate piece of electronic equipment, or anelectrical circuitwithin another electronic device. A transmitter and areceivercombined in one unit is called atransceiver. The purpose of most transmitters isradio communicationof information over a distance. The information is provided to the transmitter in the form of an electronic signal called the modulation signal, such as anaudio(sound) signal from a microphone, avideo(TV) signal from a video camera, or inwireless networkingdevices, adigital signalfrom a computer. The transmitter generates aradio frequencysignal which when applied to the antenna produces the radio waves, called thecarrier signal. It combines the carrier with the modulation signal, a process calledmodulation. The information can be added to the carrier in several different ways, in different types of transmitters. In anamplitude modulation(AM) transmitter, the information is added to the radio signal by varying itsamplitude. In afrequency modulation(FM) transmitter, it is added by varying the radio signal'sfrequencyslightly. Many other types of modulation are also used.
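The two classic analog schemes described above can be illustrated numerically. The sketch below uses NumPy; every frequency and the modulation index are arbitrary example values, not parameters of any particular transmitter:

```python
import numpy as np

fs = 48_000                          # sample rate, samples per second
t = np.arange(0, 0.01, 1 / fs)       # 10 ms of signal
carrier_hz, audio_hz = 10_000, 440   # carrier and modulating ("audio") tones

audio = np.sin(2 * np.pi * audio_hz * t)   # the modulation signal

# AM: the carrier's amplitude follows the audio (modulation index 0.5).
am = (1.0 + 0.5 * audio) * np.sin(2 * np.pi * carrier_hz * t)

# FM: the carrier's instantaneous frequency deviates by up to 100 Hz;
# integrating the audio (cumulative sum / fs) gives the extra phase.
phase = 2 * np.pi * np.cumsum(100 * audio) / fs
fm = np.sin(2 * np.pi * carrier_hz * t + phase)
```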
The radio signal from the transmitter is applied to theantenna, which radiates the energy as radio waves. The antenna may be enclosed inside the case or attached to the outside of the transmitter, as in portable devices such as cell phones, walkie-talkies, andgarage door openers. In more powerful transmitters, the antenna may be located on top of a building or on a separate tower, and connected to the transmitter by afeed line, that is atransmission line.
Electromagnetic wavesare radiated byelectric chargeswhen they areaccelerated.[1][2]Radio waves, electromagnetic waves of radiofrequency, are generated by time-varyingelectric currents, consisting ofelectronsflowing through a metal conductor called anantennawhich are changing their velocity and thus accelerating.[3][2]Analternating currentflowing back and forth in an antenna will create an oscillatingmagnetic fieldaround the conductor. The alternating voltage will also charge the ends of the conductor alternately positive and negative, creating an oscillatingelectric fieldaround the conductor. If thefrequencyof the oscillations is high enough, in theradio frequencyrange above about 20 kHz, the oscillating coupled electric and magnetic fields will radiate away from the antenna into space as an electromagnetic wave, a radio wave.
A radio transmitter is anelectronic circuitwhich transformselectric powerfrom a power source, a battery or mains power, into aradio frequencyalternating current to apply to the antenna, and the antenna radiates the energy from this current as radio waves.[4]The transmitter also encodes information such as anaudioorvideo signalinto the radio frequency current to be carried by the radio waves. When they strike the antenna of aradio receiver, the waves excite similar (but less powerful) radio frequency currents in it. The radio receiver extracts the information from the received waves.
A practical radio transmitter mainly consists of the following parts: a power supply, an electronic oscillator that generates the carrier wave, a modulator that impresses the information signal on the carrier, an RF power amplifier that increases the power of the signal, and an impedance matching (antenna tuner) circuit that couples the amplifier efficiently to the antenna.
In higher frequency transmitters, in theUHFandmicrowaverange, free running oscillators are unstable at the output frequency. Older designs used an oscillator at a lower frequency, which was multiplied byfrequency multipliersto get a signal at the desired frequency. Modern designs more commonly use an oscillator at the operating frequency which is stabilized by phase locking to a very stable lower frequency reference, usually a crystal oscillator.
Two radio transmitters in the same area that attempt to transmit on the same frequency will interfere with each other, causing garbled reception, so neither transmission may be received clearly.Interferencewith radio transmissions can not only have a large economic cost, it can be life-threatening (for example, in the case of interference with emergency communications orair traffic control).
For this reason, in most countries, use of transmitters is strictly controlled by law. Transmitters must be licensed by governments, under a variety of license classes depending on use such asbroadcast,marine radio,Airband,Amateurand are restricted to certain frequencies and power levels. A body called theInternational Telecommunication Union(ITU) allocates thefrequencybands in theradio spectrumto various classes of users. In some classes, each transmitter is given a uniquecall signconsisting of a string of letters and numbers which must be used as an identifier in transmissions. The operator of the transmitter usually must hold a government license, such as ageneral radiotelephone operator license, which is obtained by passing a test demonstrating adequate technical and legal knowledge of safe radio operation.
Exceptions to the above regulations allow the unlicensed use of low-power short-range transmitters in consumer products such ascell phones,cordless telephones,wireless microphones,walkie-talkies,Wi-FiandBluetoothdevices,garage door openers, andbaby monitors. In the US, these fall underPart 15of theFederal Communications Commission(FCC) regulations. Although they can be operated without a license, these devices still generally must betype-approvedbefore sale.
The first primitive radio transmitters (calledspark gap transmitters) were built by German physicistHeinrich Hertzin 1887 during his pioneering investigations of radio waves. These generated radio waves by a high voltagesparkbetween two conductors. Beginning in 1895,Guglielmo Marconideveloped the first practical radio communication systems using these transmitters, and radio began to be used commercially around 1900. Spark transmitters could not transmitaudio(sound) and instead transmitted information byradiotelegraphy: the operator tapped on atelegraph keywhich turned the transmitter on-and-off to produce radio wave pulses spelling out text messages in telegraphic code, usuallyMorse code. At the receiver, these pulses were sometimes directly recorded on paper tapes, but more common was audible reception. The pulses were audible as beeps in the receiver's earphones, which were translated back to text by an operator who knew Morse code. These spark-gap transmitters were used during the first three decades of radio (1887–1917), called thewireless telegraphyor "spark" era. Because they generateddamped waves, spark transmitters were electrically "noisy". Their energy was spread over a broad band offrequencies, creatingradio noisewhich interfered with other transmitters. Damped wave emissions were banned by international law in 1934.
Two short-lived competing transmitter technologies came into use after the turn of the century, which were the firstcontinuous wavetransmitters: thearc converter(Poulsen arc) in 1904 and theAlexanderson alternatoraround 1910, which were used into the 1920s.
All these early technologies were replaced byvacuum tubetransmitters in the 1920s, which used thefeedback oscillatorinvented byEdwin ArmstrongandAlexander Meissneraround 1912, based on theAudion(triode) vacuum tube invented byLee De Forestin 1906. Vacuum tube transmitters were inexpensive and producedcontinuous waves, and could be easilymodulatedto transmit audio (sound) usingamplitude modulation(AM). This made AMradio broadcastingpossible, which began in about 1920. Practicalfrequency modulation(FM) transmission was invented byEdwin Armstrongin 1933, who showed that it was less vulnerable to noise and static than AM. The first FM radio station was licensed in 1937. Experimentaltelevisiontransmission had been conducted by radio stations since the late 1920s, but practicaltelevision broadcastingdidn't begin until the late 1930s. The development ofradarduringWorld War IImotivated the evolution of high frequency transmitters in theUHFandmicrowaveranges, using new active devices such as themagnetron,klystron, andtraveling wave tube.
The invention of thetransistorallowed the development in the 1960s of small portable transmitters such aswireless microphones,garage door openersandwalkie-talkies. The development of theintegrated circuit(IC) in the 1970s made possible the current proliferation ofwireless devices, such ascell phonesandWi-Finetworks, in which integrated digital transmitters and receivers (wireless modems) in portable devices operate automatically, in the background, to exchange data withwireless networks.
The need to conserve bandwidth in the increasingly congestedradio spectrumis driving the development of new types of transmitters such asspread spectrum,trunked radio systemsandcognitive radio. A related trend has been an ongoing transition fromanalogtodigitalradio transmission methods.Digital modulationcan have greaterspectral efficiencythananalog modulation; that is it can often transmit more information (data rate) in a givenbandwidththan analog, usingdata compressionalgorithms. Other advantages of digital transmission are increasednoise immunity, and greater flexibility and processing power ofdigital signal processingintegrated circuits.
|
https://en.wikipedia.org/wiki/Transmitter
|
Transmissibilitymay have several meanings:
In most contexts, transmissibility is related topermeability.
In medicine, transmissibility is a synonym forbasic reproduction numberand refers totransmission.
|
https://en.wikipedia.org/wiki/Transmissibility_(disambiguation)
|
Electromagnetic radiation can be affected in several ways by the medium in which it propagates. It can be scattered, absorbed, and reflected and refracted at discontinuities in the medium. This page is an overview of the last three. The transmittance of a material and any surfaces is its effectiveness in transmitting radiant energy; the fraction of the initial (incident) radiation which propagates to a location of interest (often an observation location). This may be described by the transmission coefficient.
Hemispherical transmittance of a surface, denoted T, is defined as[2]
T = Φ_e^t / Φ_e^i,
where Φ_e^t is the radiant flux transmitted by that surface and Φ_e^i is the radiant flux received by that surface.
Hemispheric transmittance may be calculated as an integral over the directional transmittance described below.
Spectral hemispherical transmittance in frequency and spectral hemispherical transmittance in wavelength of a surface, denoted Tν and Tλ respectively, are defined as[2]
Tν = Φ_{e,ν}^t / Φ_{e,ν}^i and Tλ = Φ_{e,λ}^t / Φ_{e,λ}^i,
where Φ_{e,ν}^t and Φ_{e,λ}^t are the spectral radiant fluxes in frequency and in wavelength transmitted by that surface, and Φ_{e,ν}^i and Φ_{e,λ}^i are the corresponding spectral radiant fluxes received by that surface.
Directional transmittance of a surface, denoted TΩ, is defined as[2]
TΩ = L_{e,Ω}^t / L_{e,Ω}^i,
where L_{e,Ω}^t is the radiance transmitted by that surface and L_{e,Ω}^i is the radiance received by that surface.
Spectral directional transmittance in frequency and spectral directional transmittance in wavelength of a surface, denoted Tν,Ω and Tλ,Ω respectively, are defined as[2]
Tν,Ω = L_{e,Ω,ν}^t / L_{e,Ω,ν}^i and Tλ,Ω = L_{e,Ω,λ}^t / L_{e,Ω,λ}^i,
where L_{e,Ω,ν}^t and L_{e,Ω,λ}^t are the spectral radiances transmitted by that surface, and L_{e,Ω,ν}^i and L_{e,Ω,λ}^i are the corresponding spectral radiances received by that surface.
In the field of photometry (optics), the luminous transmittance of a filter is a measure of the amount of luminous flux or intensity transmitted by an optical filter. It is generally defined in terms of a standard illuminant (e.g. Illuminant A, Illuminant C, or Illuminant E). The luminous transmittance with respect to the standard illuminant is defined as
T_lum = ∫ I(λ) T(λ) V(λ) dλ / ∫ I(λ) V(λ) dλ,
where I(λ) is the spectral radiant flux or intensity of the standard illuminant, T(λ) is the spectral transmittance of the filter, V(λ) is the luminous efficiency function, and the integrals are taken over the visible spectrum.
The luminous transmittance is independent of the magnitude of the flux or intensity of the standard illuminant used to measure it, and is adimensionless quantity.
By definition, internal transmittance is related to optical depth τ and to absorbance A as
T = e^(−τ) = 10^(−A).
The Beer–Lambert law states that, for N attenuating species in the material sample,
T = exp(−Σ_{i=1}^{N} σ_i ∫_0^ℓ n_i(z) dz) = 10^(−Σ_{i=1}^{N} ε_i ∫_0^ℓ c_i(z) dz),
where σ_i is the attenuation cross section of species i, n_i(z) its number density, ε_i its molar attenuation coefficient, c_i(z) its amount concentration, and ℓ the path length of the beam through the sample.
Attenuation cross section and molar attenuation coefficient are related by σ_i = (ln 10 / N_A) ε_i, and number density and amount concentration by n_i = N_A c_i, where N_A is the Avogadro constant.
In case of uniform attenuation, these relations become[3]
T = e^(−ℓ Σ_i σ_i n_i) = 10^(−ℓ Σ_i ε_i c_i).
Cases ofnon-uniformattenuation occur inatmospheric scienceapplications andradiation shieldingtheory for instance.
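As a concrete illustration of the uniform-attenuation relations above, the following Python sketch computes the internal transmittance of a sample from a molar attenuation coefficient, an amount concentration and a path length, and checks that the decadic and Napierian forms agree. The numerical values are purely hypothetical.

```python
import math

def internal_transmittance(epsilon, c, path_length):
    """Internal transmittance under uniform attenuation (Beer-Lambert law).

    epsilon     -- molar attenuation coefficient  [L mol^-1 cm^-1]
    c           -- amount concentration           [mol L^-1]
    path_length -- path length through the sample [cm]
    """
    A = epsilon * c * path_length      # decadic absorbance A = epsilon * c * l
    return 10.0 ** (-A)

# Hypothetical numbers: a weakly absorbing solution in a 1 cm cuvette.
eps, conc, length = 120.0, 1.0e-3, 1.0
T = internal_transmittance(eps, conc, length)
tau = eps * conc * length * math.log(10)   # optical depth tau = A ln 10
print(f"T = {T:.4f}")                      # ~0.7586
print(math.isclose(T, math.exp(-tau)))     # True: T = e^(-tau) = 10^(-A)
```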
|
https://en.wikipedia.org/wiki/Transmittance
|
Transmissivitymay refer to:
|
https://en.wikipedia.org/wiki/Transmissivity_(disambiguation)
|
Vint Hill Farms Station(VHFS) was aUnited States ArmyandNational Security Agency(NSA)signals intelligenceandelectronic warfarefacility located inFauquier County,Virginia, nearWarrenton. VHFS was closed in 1997 and the land was sold off in 1999. Today the site hosts various engineering and technology companies, as well as twoFederal Aviation Administration(FAA) air traffic control facilities, and theCold War Museum.
Vint Hill Farms Station was established duringWorld War IIin 1942 by the Army'sSignal Intelligence Service(SIS). The 701-acre (284 ha) facility was built because the Army needed a secure location near SIS headquarters inArlington Hallto serve as acryptographyschool and as a refitting station for signal units returning from combat prior to redeployment overseas.[1][2][3]The unit on station had a World War IIMonitoring Station Designatorof MS-1.[4][5]VHFS was one of the United States' most important intelligence gathering stations during the war, playing a pivotal role in eavesdropping on enemy communications.[1][3][6]In 1943, the VHFS intercepted a message from theJapaneseambassador inBerlinto his superiors in Tokyo. It also provided a detailed description of Nazi fortifications along the French Coast, and GeneralDwight D. Eisenhowerlater said the information made a significant contribution to theD-Dayinvasion atNormandy.[7]
After the war, VHFS became the first field station of theArmy Security Agency, a subordinate to the NSA,[8]and the facility conductedsignals intelligenceoperations and served as a training center for radio-intercept operators, cryptanalysts, and radio-repair technicians.[3]During theCold War, VHFS intercepted keySovietdiplomatic and military communications sent overFISHteleprinters.[6]The Army Electronic Material Readiness Activity moved to VHFS in 1961 and managed signals intelligence and electronic warfare equipment and systems maintenance for the Army Security Agency and other signals intelligence and electronic warfare units worldwide.[9][10]
In 1973, the VHFS's mission changed to research, development and support of intelligence andelectronic warfarefor the Army,Department of Defenseand foreign allies of the United States.[3]In addition, the U.S.Environmental Protection Agencytook over operation of the facility's photographic interpretation center from theDefense Intelligence Agencyand the center was renamed the Environmental Photographic Interpretation Center.[9]In the late 1970s, VHFS was put on the military base closure list, and all maintenance and construction at the facility was halted. In 1981, the facility was removed from the closure list and funding for maintenance and construction was restored.[3]
A VHFS employee told aHouse of Representativessubcommittee in 1977 that the facility had a bank of machines designed to intercept foreign communications, including those of U.S. allies, such as communications betweenUnited Kingdom'sWashington embassyandLondon.[7][11][12]TheAssociated Pressreported in 1989 that VHFS served as a "giant ear" operated by the NSA, with its likely target being foreign embassies inWashington, D.C., as well as international communications coming into the United States.[2]
In 1987, control of the facility was transferred from the ArmyIntelligence and Security Command(the successor to the Army Security Agency) to theCommunications-Electronics Command, which was based, at the time, inFort Monmouth,New Jersey.[3]The base took on a support role, developing and testing signal equipment and supporting the operations of agencies such as theCentral Intelligence Agencyand theFederal Bureau of Investigation.[7]
The1993 Base Realignment and Closure Commissionrecommended the closure of VHFS, which would produce savings of $10.5 million annually. At the time there were 846 military personnel, 1,356 civilian personnel and 454 contractors based at the facility. Most of the personnel were reassigned to Fort Monmouth, while others went toFort Belvoir,Virginia. The intelligence equipment maintenance and repair personnel were relocated toTobyhanna Army Depot,Pennsylvania.[3]
VHFS was closed on September 30, 1997.[10]The Army and the Virginia State Vint Hill Farms Economic Development Authority settled on a purchase price of $925,000 for VHFS, and the transfer of the property was completed in 1999.[3][9]Today the site hosts various engineering and technology companies,[9]Potomac Consolidated TRACONfacility and, since 2011, the FAA's Air Traffic Control System Command Center.[13][14]
The Cold War Museum opened on the property in November 2011. At present it is open on weekends (and at other times by appointment), and it takes advantage of the historical aspects of the property. It occupies a two-story building used for supply purposes when the base was open.
There are also a dance school (Lyrique Dance), a gymnastics school (Bull Run Academy of Gymnastics), and an axe-throwing pub (HEROIC AXE) on the property on Kennedy Road.
The streets in the residential development which now occupies much of VHFS are named after people important in the history of computers.
Vint Hill Farms Station was defined as a census-designated place (CDP) at the 1970, 1980, and 1990 United States Censuses. Its population ranged from 1,018 in 1970 to 1,332 in 1990, before the facility was closed in 1997.[15]The area is now part of the New Baltimore CDP. The current VHFS site population is approximately 300.[9]
38°44′53″N 77°40′19″W (38.748, −77.672)
|
https://en.wikipedia.org/wiki/Vint_Hill_Farms_Station
|
In mathematics, aBeurling zeta functionis an analogue of theRiemann zeta functionwhere the ordinary primes are replaced by a set ofBeurling generalized primes: any sequence of real numbers greater than 1 that tend to infinity. These were introduced byBeurling(1937).
A Beurling generalized integer is a number that can be written as a product of Beurling generalized primes. Beurling generalized the usual prime number theorem to Beurling generalized primes. He showed that if the number N(x) of Beurling generalized integers less than x is of the form N(x) = Ax + O(x log−γ x) with γ > 3/2 then the number of Beurling generalized primes less than x is asymptotic to x/log x, just as for ordinary primes,
but ifγ= 3/2 then this conclusion need not hold.
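The counting functions in Beurling's theorem can be explored numerically. The sketch below (an illustration only, not part of Beurling's argument) takes a small, arbitrarily chosen set of generalized primes, enumerates the generalized integers they generate up to a bound x (counted with multiplicity, as products of prime powers), and reports N(x) together with the number of generalized primes up to x.

```python
def generalized_integers(primes, x):
    """All products of powers of the given generalized primes that are <= x.

    Products are counted with multiplicity (one entry per exponent vector),
    as in Beurling's counting function N(x); 1 is included as the empty product.
    """
    integers = [1.0]
    for p in primes:
        extended = []
        for n in integers:
            q = n
            while q <= x:
                extended.append(q)
                q *= p
        integers = extended
    return sorted(integers)

# Arbitrarily chosen generalized primes: any reals > 1 tending to infinity will do.
primes = [1.5, 2.0, 2.2, 3.1, 4.7, 5.3, 6.1, 7.9]
x = 50.0
ints = generalized_integers(primes, x)
print("N(x) =", len(ints))                           # generalized integers <= x
print("generalized primes <= x:", sum(p <= x for p in primes))
```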
|
https://en.wikipedia.org/wiki/Beurling_zeta_function
|
Inmathematics,singular integral operatorsof convolution typeare thesingular integral operatorsthat arise onRnandTnthrough convolution by distributions; equivalently they are the singular integral operators that commute with translations. The classical examples inharmonic analysisare theharmonic conjugation operatoron the circle, theHilbert transformon the circle and the real line, theBeurling transformin the complex plane and theRiesz transformsin Euclidean space. The continuity of these operators onL2is evident because theFourier transformconverts them intomultiplication operators. Continuity onLpspaces was first established byMarcel Riesz. The classical techniques include the use ofPoisson integrals,interpolation theoryand theHardy–Littlewood maximal function. For more general operators, fundamental new techniques, introduced byAlberto CalderónandAntoni Zygmundin 1952, were developed by a number of authors to give general criteria for continuity onLpspaces. This article explains the theory for the classical operators and sketches the subsequent general theory.
The theory forL2functions is particularly simple on the circle.[1][2]Iff∈L2(T), then it has a Fourier series expansionf(θ)=∑n∈Zaneinθ.{\displaystyle f(\theta )=\sum _{n\in \mathbf {Z} }a_{n}e^{in\theta }.}
Hardy spaceH2(T) consists of the functions for which the negative coefficients vanish,an= 0 forn< 0. These are precisely the square-integrable functions that arise as boundary values of holomorphic functions in the open unit disk. Indeed,fis the boundary value of the function
F(z)=∑n≥0anzn,{\displaystyle F(z)=\sum _{n\geq 0}a_{n}z^{n},}
in the sense that the functions
fr(θ)=F(reiθ),{\displaystyle f_{r}(\theta )=F(re^{i\theta }),}
defined by the restriction ofFto the concentric circles |z| =r, satisfy
‖fr−f‖2→0.{\displaystyle \|f_{r}-f\|_{2}\rightarrow 0.}
The orthogonal projectionPofL2(T) onto H2(T) is called theSzegő projection. It is a bounded operator onL2(T) withoperator norm1. ByCauchy's integral formula,
F(z)=12πi∫|ζ|=1f(ζ)ζ−zdζ=12π∫−ππf(θ)1−e−iθzdθ.{\displaystyle F(z)={1 \over 2\pi i}\int _{|\zeta |=1}{\frac {f(\zeta )}{\zeta -z}}\,d\zeta ={1 \over 2\pi }\int _{-\pi }^{\pi }{f(\theta ) \over 1-e^{-i\theta }z}\,d\theta .}
Thus
F(reiφ)=12π∫−ππf(φ−θ)1−reiθdθ.{\displaystyle F(re^{i\varphi })={1 \over 2\pi }\int _{-\pi }^{\pi }{f(\varphi -\theta ) \over 1-re^{i\theta }}\,d\theta .}
Whenr= 1, the integrand on the right-hand side has a singularity at θ = 0. Thetruncated Hilbert transformis defined by
Hεf(φ)=iπ∫ε≤|θ|≤πf(φ−θ)1−eiθdθ=1π∫|ζ−eiφ|≥δf(ζ)ζ−eiφdζ,{\displaystyle H_{\varepsilon }f(\varphi )={i \over \pi }\int _{\varepsilon \leq |\theta |\leq \pi }{f(\varphi -\theta ) \over 1-e^{i\theta }}\,d\theta ={1 \over \pi }\int _{|\zeta -e^{i\varphi }|\geq \delta }{f(\zeta ) \over \zeta -e^{i\varphi }}\,d\zeta ,}
where δ = |1 –eiε|. Since it is defined as convolution with a bounded function, it is a bounded operator onL2(T). Now
Hε1=iπ∫επ2ℜ(1−eiθ)−1dθ=iπ∫επ1dθ=i−iεπ.{\displaystyle H_{\varepsilon }{1}={i \over \pi }\int _{\varepsilon }^{\pi }2\Re (1-e^{i\theta })^{-1}\,d\theta ={i \over \pi }\int _{\varepsilon }^{\pi }1\,d\theta =i-{i\varepsilon \over \pi }.}
Iffis a polynomial inzthen
Hεf(z)−i(1−ε)πf(z)=1πi∫|ζ−z|≥δf(ζ)−f(z)ζ−zdζ.{\displaystyle H_{\varepsilon }f(z)-{i(1-\varepsilon ) \over \pi }f(z)={1 \over \pi i}\int _{|\zeta -z|\geq \delta }{f(\zeta )-f(z) \over \zeta -z}\,d\zeta .}
By Cauchy's theorem the right-hand side tends to 0 uniformly asε, and henceδ, tends to 0. So
Hεf→if{\displaystyle H_{\varepsilon }f\rightarrow if}
uniformly for polynomials. On the other hand, ifu(z) =zit is immediate that
Hεf¯=−u−1Hε(uf¯).{\displaystyle {\overline {H_{\varepsilon }f}}=-u^{-1}H_{\varepsilon }(u{\overline {f}}).}
Thus if f is a polynomial in z−1 without constant term, Hεf → −if uniformly.
Define theHilbert transformon the circle byH=i(2P−I).{\displaystyle H=i(2P-I).}
Thus if f is a trigonometric polynomial, Hεf → Hf uniformly.
It follows that if f is any L2 function, Hεf → Hf in the L2 norm.
This is an immediate consequence of the result for trigonometric polynomials once it is established that the operatorsHεare uniformly bounded inoperator norm. But on [–π,π]
(1−eiθ)−1=[(1−eiθ)−1−iθ−1]+iθ−1.{\displaystyle (1-e^{i\theta })^{-1}=[(1-e^{i\theta })^{-1}-i\theta ^{-1}]+i\theta ^{-1}.}
The first term is bounded on the whole of [–π,π], so it suffices to show that the convolution operatorsSεdefined by
Sεf(φ)=∫ε≤|θ|≤πf(φ−θ)θ−1dθ{\displaystyle S_{\varepsilon }f(\varphi )=\int _{\varepsilon \leq |\theta |\leq \pi }f(\varphi -\theta )\theta ^{-1}\,d\theta }
are uniformly bounded. With respect to the orthonormal basiseinθconvolution operators are diagonal and their operator norms are given by taking the supremum of the moduli of the Fourier coefficients. Direct computation shows that these all have the form
1π|∫absinttdt|{\displaystyle {\frac {1}{\pi }}\left|\int _{a}^{b}{\sin t \over t}\,dt\right|}
with 0 <a<b. These integrals are well known to be uniformly bounded.
It also follows that, for a continuous functionfon the circle,Hεfconverges uniformly toHf, so in particular pointwise. The pointwise limit is aCauchy principal value, written
Hf=P.V.1π∫f(ζ)ζ−eiφdζ.{\displaystyle Hf=\mathrm {P.V.} \,{1 \over \pi }\int {f(\zeta ) \over \zeta -e^{i\varphi }}\,d\zeta .}
Iffis just inL2thenHεfconverges toHfpointwise almost everywhere. In fact define thePoisson operatorsonL2functions by
Tr(∑aneinθ)=∑r|n|aneinθ,{\displaystyle T_{r}\left(\sum a_{n}e^{in\theta }\right)=\sum r^{|n|}a_{n}e^{in\theta },}
for r < 1. Since these operators are diagonal, it is easy to see that Trf tends to f in L2 as r increases to 1. Moreover, as Lebesgue proved, Trf also tends pointwise to f at each Lebesgue point of f. On the other hand, it is also known that TrHf − H1−rf tends to zero at each Lebesgue point of f. Hence H1−rf tends pointwise to Hf on the common Lebesgue points of f and Hf and therefore almost everywhere.[3][4][5]
Results of this kind on pointwise convergence are proved more generally below forLpfunctions using the Poisson operators and the Hardy–Littlewood maximal function off.
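Since H acts diagonally on Fourier coefficients, it can be imitated numerically with the discrete Fourier transform. The following Python sketch is a numerical illustration only (the grid size and test function are chosen arbitrarily); it applies the multiplier of H = i(2P − I) and checks that, under this sign convention, the Hilbert transform of cos θ is −sin θ.

```python
import numpy as np

def hilbert_circle(samples):
    """Hilbert transform H = i(2P - I) on the circle via the FFT.

    The Fourier coefficient a_n is multiplied by +i for n >= 0 and by -i
    for n < 0, matching the sign convention used above.
    """
    n = len(samples)
    coeffs = np.fft.fft(samples)
    freqs = np.fft.fftfreq(n, d=1.0 / n)              # integer frequencies n
    return np.fft.ifft(np.where(freqs >= 0, 1j, -1j) * coeffs)

theta = np.linspace(0.0, 2.0 * np.pi, 256, endpoint=False)
f = np.cos(theta)
Hf = hilbert_circle(f).real                           # imaginary part is ~0 here
print(np.allclose(Hf, -np.sin(theta)))                # True: H(cos) = -sin in this convention
```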
The Hilbert transform has a natural compatibility with orientation-preserving diffeomorphisms of the circle.[6]Thus ifHis a diffeomorphism of the circle with
H(eiθ)=eih(θ),h(θ+2π)=h(θ)+2π,{\displaystyle H(e^{i\theta })=e^{ih(\theta )},\,\,\,h(\theta +2\pi )=h(\theta )+2\pi ,}
then the operators
Hεhf(eiφ)=1π∫|eih(θ)−eih(φ)|≥εf(eiθ)eiθ−eiφeiθdθ,{\displaystyle H_{\varepsilon }^{h}f(e^{i\varphi })={\frac {1}{\pi }}\int _{|e^{ih(\theta )}-e^{ih(\varphi )}|\geq \varepsilon }{\frac {f(e^{i\theta })}{e^{i\theta }-e^{i\varphi }}}e^{i\theta }\,d\theta ,}
are uniformly bounded and tend in thestrong operator topologytoH. Moreover, ifVf(z) =f(H(z)), thenVHV−1−His an operator with smooth kernel, so aHilbert–Schmidt operator.
In fact ifGis the inverse ofHwith corresponding functiong(θ), then
(VHεhV−1−Hε)f(eiφ)=1π∫|eiθ−eiφ|≥ε[g′(θ)eig(θ)eig(θ)−eig(φ)−eiθeiθ−eiφ]f(eiθ)dθ.{\displaystyle (VH_{\varepsilon }^{h}V^{-1}-H_{\varepsilon })f(e^{i\varphi })={1 \over \pi }\int _{|e^{i\theta }-e^{i\varphi }|\geq \varepsilon }\left[{g^{\prime }(\theta )e^{ig(\theta )} \over e^{ig(\theta )}-e^{ig(\varphi )}}-{e^{i\theta } \over e^{i\theta }-e^{i\varphi }}\right]\,f(e^{i\theta })\,d\theta .}
Since the kernel on the right hand side is smooth onT×T, it follows that the operators on the right hand side are uniformly bounded and hence so too are the operatorsHεh. To see that they tend strongly toH, it suffices to check this on trigonometric polynomials. In that case
Hεhf(ζ)=1πi∫|H(z)−H(ζ)|≥εf(z)z−ζdz=1πi∫|H(z)−H(ζ)|≥εf(z)−f(ζ)z−ζdz+f(ζ)πi∫|H(z)−H(ζ)|≥εdzz−ζ.{\displaystyle H_{\varepsilon }^{h}f(\zeta )={1 \over \pi i}\int _{|H(z)-H(\zeta )|\geq \varepsilon }{\frac {f(z)}{z-\zeta }}dz={1 \over \pi i}\int _{|H(z)-H(\zeta )|\geq \varepsilon }{f(z)-f(\zeta ) \over z-\zeta }\,dz+{\frac {f(\zeta )}{\pi i}}\int _{|H(z)-H(\zeta )|\geq \varepsilon }{dz \over z-\zeta }.}
In the first integral the integrand is a trigonometric polynomial inzand ζ and so the integral is a trigonometric polynomial inζ. It tends inL2to the trigonometric polynomial1πi∫f(z)−f(ζ)z−ζdz.{\displaystyle {1 \over \pi i}\int {f(z)-f(\zeta ) \over z-\zeta }\,dz.}
The integral in the second term can be calculated by theargument principle. It tends inL2to the constant function 1, so that
limε→0Hεhf(ζ)=f(ζ)+1πi∫f(z)−f(ζ)z−ζdz,{\displaystyle \lim _{\varepsilon \to 0}H_{\varepsilon }^{h}f(\zeta )=f(\zeta )+{1 \over \pi i}\int {f(z)-f(\zeta ) \over z-\zeta }\,dz,}
where the limit is inL2. On the other hand, the right hand side is independent of the diffeomorphism. Since for the identity diffeomorphism, the left hand side equalsHf, it too equalsHf(this can also be checked directly iffis a trigonometric polynomial). Finally, letting ε → 0,
(VHV−1−H)f(eiφ)=1π∫[g′(θ)eig(θ)eig(θ)−eig(φ)−eiθeiθ−eiφ]f(eiθ)dθ.{\displaystyle (VHV^{-1}-H)f(e^{i\varphi })={\frac {1}{\pi }}\int \left[{g^{\prime }(\theta )e^{ig(\theta )} \over e^{ig(\theta )}-e^{ig(\varphi )}}-{e^{i\theta } \over e^{i\theta }-e^{i\varphi }}\right]\,f(e^{i\theta })\,d\theta .}
The direct method of evaluating Fourier coefficients to prove the uniform boundedness of the operatorHεdoes not generalize directly toLpspaces with 1 <p< ∞. Instead a direct comparison ofHεfwith thePoisson integralof the Hilbert transform is used classically to prove this. Iffhas Fourier series
f(eiθ)=∑n∈Zaneinθ,{\displaystyle f(e^{i\theta })=\sum _{n\in \mathbf {Z} }a_{n}e^{in\theta },}
its Poisson integral is defined by
Prf(eiθ)=∑n∈Zanr|n|einθ=12π∫02π(1−r2)f(eiθ)1−2rcosθ+r2dθ=Kr⋆f(eiθ),{\displaystyle P_{r}f(e^{i\theta })=\sum _{n\in \mathbf {Z} }a_{n}r^{|n|}e^{in\theta }={1 \over 2\pi }\int _{0}^{2\pi }{(1-r^{2})f(e^{i\theta }) \over 1-2r\cos \theta +r^{2}}\,d\theta =K_{r}\star f(e^{i\theta }),}
where thePoisson kernelKris given byKr(eiθ)=∑n∈Zr|n|einθ=1−r21−2rcosθ+r2.{\displaystyle K_{r}(e^{i\theta })=\sum _{n\in \mathbf {Z} }r^{|n|}e^{in\theta }={1-r^{2} \over 1-2r\cos \theta +r^{2}}.}
If f is in Lp(T) then the operators Pr satisfy ‖Prf − f‖p → 0.{\displaystyle \|P_{r}f-f\|_{p}\rightarrow 0.}
In fact theKrare positive so‖Kr‖1=12π∫02πKr(eiθ)dθ=1.{\displaystyle \|K_{r}\|_{1}={1 \over 2\pi }\int _{0}^{2\pi }K_{r}(e^{i\theta })\,d\theta =1.}
Thus the operatorsPrhave operator norm bounded by 1 onLp. The convergence statement above follows by continuity from the result for trigonometric polynomials, where it is an immediate consequence of the formula for the Fourier coefficients ofKr.
The uniform boundedness of the operator norm ofHεfollows becauseHPr−H1−ris given as convolution by the functionψr, where[7]ψr(eiθ)=1+1−r1+rcot(θ2)Kr(eiθ)≤1+1−r1+rcot(1−r2)Kr(eiθ){\displaystyle {\begin{aligned}\psi _{r}(e^{i\theta })&=1+{\frac {1-r}{1+r}}\cot \left({\tfrac {\theta }{2}}\right)K_{r}(e^{i\theta })\\&\leq 1+{\frac {1-r}{1+r}}\cot \left({\tfrac {1-r}{2}}\right)K_{r}(e^{i\theta })\end{aligned}}}for 1 −r≤ |θ| ≤π, and, for |θ| < 1 −r,ψr(eiθ)=1+2rsinθ1−2rcosθ+r2.{\displaystyle \psi _{r}(e^{i\theta })=1+{2r\sin \theta \over 1-2r\cos \theta +r^{2}}.}
These estimates show that theL1norms ∫ |ψr| are uniformly bounded. SinceHis a bounded operator, it follows that the operatorsHεare uniformly bounded in operator norm onL2(T). The same argument can be used onLp(T) once it is known that the Hilbert transformHis bounded in operator norm onLp(T).
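The Poisson operators are likewise diagonal, sending a_n to r^{|n|} a_n, so the convergence ‖P_r f − f‖ → 0 is easy to observe numerically. A minimal Python sketch, with an arbitrarily chosen trigonometric polynomial as test function:

```python
import numpy as np

def poisson_circle(samples, r):
    """Poisson integral P_r f on the circle: the coefficient a_n becomes r^|n| a_n."""
    n = len(samples)
    coeffs = np.fft.fft(samples)
    freqs = np.fft.fftfreq(n, d=1.0 / n)
    return np.fft.ifft(r ** np.abs(freqs) * coeffs).real

theta = np.linspace(0.0, 2.0 * np.pi, 512, endpoint=False)
# A smooth test function built from a few low harmonics.
f = np.cos(theta) + 0.5 * np.sin(3 * theta) - 0.2 * np.cos(7 * theta)
for r in (0.5, 0.9, 0.99, 0.999):
    err = np.max(np.abs(poisson_circle(f, r) - f))
    print(f"r = {r:6.3f}   sup|P_r f - f| = {err:.2e}")
# The error decreases to 0 as r -> 1, illustrating ||P_r f - f|| -> 0.
```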
As in the case of the circle, the theory forL2functions is particularly easy to develop. In fact, as observed by Rosenblum and Devinatz, the two Hilbert transforms can be related using theCayley transform.[8]
TheHilbert transformHRon L2(R) is defined byHRf^=(iχ[0,∞)−iχ(−∞,0])f^,{\displaystyle {\widehat {H_{\mathbf {R} }f}}=\left(i\chi _{[0,\infty )}-i\chi _{(-\infty ,0]}\right){\widehat {f}},}where theFourier transformis given byf^(t)=12π∫−∞∞f(x)e−itxdx.{\displaystyle {\widehat {f}}(t)={1 \over {\sqrt {2\pi }}}\int _{-\infty }^{\infty }f(x)e^{-itx}\,dx.}
Define the Hardy space H2(R) to be the closed subspace ofL2(R) consisting of functions for which the Fourier transform vanishes on the negative part of the real axis. Its orthogonal complement is given by functions for which the Fourier transform vanishes on the positive part of the real axis. It is the complex conjugate of H2(R). IfPRis the orthogonal projection onto H2(R), then
HR=i(2PR−I).{\displaystyle H_{\mathbf {R} }=i(2P_{\mathbf {R} }-I).}
The Cayley transformC(x)=x−ix+i{\displaystyle C(x)={x-i \over x+i}}carries the extended real line onto the circle, sending the point at ∞ to 1, and the upper halfplane onto the unit disk.
Define the unitary operator fromL2(T) ontoL2(R) byUf(x)=π−1/2(x+i)−1f(C(x)).{\displaystyle Uf(x)=\pi ^{-1/2}(x+i)^{-1}f(C(x)).}
This operator carries the Hardy space of the circle H2(T) onto H2(R). In fact for |w| < 1, the linear span of the functionsfw(z)=11−wz{\displaystyle f_{w}(z)={\frac {1}{1-wz}}}is dense in H2(T). Moreover,Ufw(x)=1π1(1−w)(x−z¯){\displaystyle Uf_{w}(x)={\frac {1}{\sqrt {\pi }}}{\frac {1}{(1-w)(x-{\overline {z}})}}}wherez=C−1(w¯).{\displaystyle z=C^{-1}({\overline {w}}).}
On the other hand, forz∈H, the linear span of the functionsgz(t)=eitzχ[0,∞)(t){\displaystyle g_{z}(t)=e^{itz}\chi _{[0,\infty )}(t)}is dense inL2((0,∞)). By theFourier inversion formula, they are the Fourier transforms ofhz(x)=gz^(−x)=i2π(x+z)−1,{\displaystyle h_{z}(x)={\widehat {g_{z}}}(-x)={i \over {\sqrt {2\pi }}}(x+z)^{-1},}so the linear span of these functions is dense in H2(R). SinceUcarries thefw's onto multiples of thehz's, it follows thatUcarries H2(T) onto H2(R). ThusUHTU∗=HR.{\displaystyle UH_{\mathbf {T} }U^{*}=H_{\mathbf {R} }.}
InNikolski (1986), part of the L2theory on the real line and the upper halfplane is developed by transferring the results from the circle and the unit disk. The natural replacements for concentric circles in the disk are lines parallel to the real axis inH. Under the Cayley transform, these correspond to circles in the disk that are tangent to the unit circle at the point one. The behaviour of functions in H2(T) on these circles is part of the theory ofCarleson measures. However, the theory of singular integrals can be developed more easily by working directly onR.
H2(R) consists exactly of the L2 functions f that arise as boundary values of holomorphic functions on H in the following sense:[9] f is in H2 provided that there is a holomorphic function F(z) on H such that the functions fy(x) = f(x+iy) for y > 0 are in L2 and fy tends to f in L2 as y → 0. In this case F is necessarily unique and given by Cauchy's integral formula:
F(z)=12πi∫−∞∞f(s)s−zds.{\displaystyle F(z)={1 \over 2\pi i}\int _{-\infty }^{\infty }{f(s) \over s-z}\,ds.}
In fact, identifying H2withL2(0,∞) via the Fourier transform, fory> 0 multiplication bye−ytonL2(0,∞) induces a contraction semigroupVyon H2. Hence forfin L2
12πi∫−∞∞f(s)s−zds=12π∫−∞∞f(s)gz^(s)ds=12π∫−∞∞f^(s)gz(s)ds=VyPf(x).{\displaystyle {1 \over 2\pi i}\int _{-\infty }^{\infty }{f(s) \over s-z}\,ds={1 \over {\sqrt {2\pi }}}\int _{-\infty }^{\infty }f(s){\widehat {g_{z}}}(s)\,ds={1 \over {\sqrt {2\pi }}}\int _{-\infty }^{\infty }{\widehat {f}}(s)g_{z}(s)\,ds=V_{y}Pf(x).}
Iffis in H2,F(z) is holomorphic for Imz> 0, since the family of L2functionsgzdepends holomorphically onz. Moreover,fy=Vyftends tofinH2since this is true for the Fourier transforms. Conversely if such anFexists, by Cauchy's integral theorem and the above identity applied tofy
fy+t=VtPfy{\displaystyle f_{y+t}=V_{t}Pf_{y}}
fort> 0. Lettingttend to0, it follows thatPfy=fy, so thatfylies in H2. But then so too does the limitf. SinceVtfy=fy+t=Vyft,{\displaystyle V_{t}f_{y}=f_{y+t}=V_{y}f_{t},}uniqueness ofFfollows fromft=limy→0fy+t=limy→0Vtfy=Vtf.{\displaystyle f_{t}=\lim _{y\to 0}f_{y+t}=\lim _{y\to 0}V_{t}f_{y}=V_{t}f.}
Forfin L2, thetruncated Hilbert transformsare defined byHε,Rf(x)=1π∫ε≤|y−x|≤Rf(y)x−ydy=1π∫ε≤|y|≤Rf(x−y)ydyHεf(x)=1π∫|y−x|≥εf(y)x−ydy=1π∫|y|≥εf(x−y)ydy.{\displaystyle {\begin{aligned}H_{\varepsilon ,R}f(x)&={1 \over \pi }\int _{\varepsilon \leq |y-x|\leq R}{f(y) \over x-y}\,dy={1 \over \pi }\int _{\varepsilon \leq |y|\leq R}{f(x-y) \over y}\,dy\\H_{\varepsilon }f(x)&={1 \over \pi }\int _{|y-x|\geq \varepsilon }{f(y) \over x-y}\,dy={1 \over \pi }\int _{|y|\geq \varepsilon }{f(x-y) \over y}\,dy.\end{aligned}}}
The operatorsHε,Rare convolutions by bounded functions of compact support, so their operator norms are given by the uniform norm of their Fourier transforms. As before the absolute values have the form
12π|∫ab2sinttdt|.{\displaystyle {1 \over {\sqrt {2\pi }}}\left|\int _{a}^{b}{2\sin t \over t}\,dt\right|.}
with 0 <a<b, so the operatorsHε,Rare uniformly bounded in operator norm. SinceHε,Rftends toHεfinL2forfwith compact support, and hence for arbitraryf, the operatorsHεare also uniformly bounded in operator norm.
To prove thatHεftends toHfasεtends to zero, it suffices to check this on a dense set of functions. On the other hand,
Hεf¯=−Hε(f¯),{\displaystyle {\overline {H_{\varepsilon }f}}=-H_{\varepsilon }({\overline {f}}),}
so it suffices to prove thatHεftends toiffor a dense set of functions in H2(R), for example the Fourier transforms of smooth functionsgwith compact support in (0,∞). But the Fourier transformfextends to an entire functionFonC, which is bounded on Im(z) ≥ 0. The same is true of the derivatives ofg. Up to a scalar these correspond to multiplyingF(z) by powers ofz. ThusFsatisfies aPaley-Wiener estimatefor Im(z) ≥ 0:[10]
|F(m)(z)|≤KN,m(1+|z|)−N{\displaystyle |F^{(m)}(z)|\leq K_{N,m}(1+|z|)^{-N}}
for any m, N ≥ 0. In particular, the integral defining Hεf(x) can be computed by taking a standard semicircle contour centered on x. It consists of a large semicircle of radius R and a small semicircle of radius ε with the two portions of the real axis between them. By Cauchy's theorem, the integral round the contour is zero. The integral round the large semicircle tends to zero by the Paley-Wiener estimate. The integral on the real axis is the limit sought. It is therefore given as minus the limit on the small semicircular contour. But this is the limit of
1π∫ΓF(z)z−xdz.{\displaystyle {1 \over \pi }\int _{\Gamma }{F(z) \over z-x}\,dz.}
where Γ is the small semicircular contour, oriented anticlockwise. By the usual techniques of contour integration, this limit equals i f(x).[11] In this case, it is easy to check that the convergence is dominated in L2 since
Hεf(x)=1π∫|y−x|≥εf(y)−f(x)y−xdy=1π∫|y−x|≥ε∫01f′(x+t(y−x))dtdy{\displaystyle H_{\varepsilon }f(x)={\frac {1}{\pi }}\int _{|y-x|\geq \varepsilon }{\frac {f(y)-f(x)}{y-x}}\,dy={\frac {1}{\pi }}\int _{|y-x|\geq \varepsilon }\int _{0}^{1}f^{\prime }(x+t(y-x))\,dt\,dy}
so that convergence is dominated byG(x)=12π∫01∫−∞∞|f′(x+ty)|dy{\displaystyle G(x)={\frac {1}{2\pi }}\int _{0}^{1}\int _{-\infty }^{\infty }|f^{\prime }(x+ty)|\,dy}which is inL2by the Paley-Wiener estimate.
It follows that forfonL2(R)Hεf→Hf.{\displaystyle H_{\varepsilon }f\rightarrow Hf.}
This can also be deduced directly because, after passing to Fourier transforms,HεandHbecome multiplication operators by uniformly bounded functions. The multipliers forHεtend pointwise almost everywhere to the multiplier forH, so the statement above follows from thedominated convergence theoremapplied to the Fourier transforms.
As for the Hilbert transform on the circle,Hεftends toHfpointwise almost everywhere iffis an L2function. In fact, define thePoisson operatorson L2functions by
Tyf(x)=∫−∞∞Py(x−t)f(t)dt,{\displaystyle T_{y}f(x)=\int _{-\infty }^{\infty }P_{y}(x-t)f(t)\,dt,}
where the Poisson kernel is given by
Py(x)=yπ(x2+y2).{\displaystyle P_{y}(x)={\frac {y}{\pi (x^{2}+y^{2})}}.}
fory> 0. Its Fourier transform isPy^(t)=e−y|t|,{\displaystyle {\widehat {P_{y}}}(t)=e^{-y|t|},}
from which it is easy to see that Tyf tends to f in L2 as y decreases to 0. Moreover, as Lebesgue proved, Tyf also tends pointwise to f at each Lebesgue point of f. On the other hand, it is also known that TyHf − Hyf tends to zero at each Lebesgue point of f. Hence Hεf tends pointwise to Hf on the common Lebesgue points of f and Hf and therefore almost everywhere.[12][13] The absolute values of the functions Tyf − f and TyHf − Hyf can be bounded pointwise by multiples of the maximal function of f.[14]
As for the Hilbert transform on the circle, the uniform boundedness of the operator norms ofHεfollows from that of theTεifHis known to be bounded, sinceHTε−Hεis the convolution operator by the function
gε(x)={xπ(x2+ε2)|x|≤εxπ(x2+ε2)−1πx|x|>ε{\displaystyle g_{\varepsilon }(x)={\begin{cases}{\frac {x}{\pi (x^{2}+\varepsilon ^{2})}}&|x|\leq \varepsilon \\{\frac {x}{\pi (x^{2}+\varepsilon ^{2})}}-{\frac {1}{\pi x}}&|x|>\varepsilon \end{cases}}}
TheL1norms of these functions are uniformly bounded.
The complex Riesz transformsRandR* in the complex plane are the unitary operators onL2(C) defined as multiplication byz/|z| and its conjugate on the Fourier transform of anL2functionf:
Rf^(z)=z¯|z|f^(z),R∗f^(z)=z|z|f^(z).{\displaystyle {\widehat {Rf}}(z)={{\overline {z}} \over |z|}{\widehat {f}}(z),\,\,\,{\widehat {R^{*}f}}(z)={z \over |z|}{\widehat {f}}(z).}
IdentifyingCwithR2,RandR* are given by
R=−iR1+R2,R∗=−iR1−R2,{\displaystyle R=-iR_{1}+R_{2},\,\,\,R^{*}=-iR_{1}-R_{2},}
whereR1andR2are the Riesz transforms onR2defined below.
OnL2(C), the operatorRand its integer powers are unitary. They can also be expressed as singular integral operators:[15]
Rkf(w)=limε→0∫|z−w|≥εMk(w−z)f(z)dxdy,{\displaystyle {R^{k}f(w)=\lim _{\varepsilon \to 0}\int _{|z-w|\geq \varepsilon }M_{k}(w-z)f(z)\,dx\,dy,}}
whereMk(z)=k2πikzk|z|k+2(k≥1),M−k(z)=Mk(z)¯.{\displaystyle M_{k}(z)={k \over 2\pi i^{k}}{z^{k} \over |z|^{k+2}}\,\,\,\,(k\geq 1),\,\,\,\,M_{-k}(z)={\overline {M_{k}(z)}}.}
Defining the truncated higher Riesz transforms asRε(k)f(w)=∫|z−w|≥εMk(w−z)f(z)dxdy,{\displaystyle {R_{\varepsilon }^{(k)}f(w)=\int _{|z-w|\geq \varepsilon }M_{k}(w-z)f(z)\,dx\,dy,}}these operators can be shown to be uniformly bounded in operator norm. For odd powers this can be deduced by the method of rotation of Calderón and Zygmund, described below.[16]If the operators are known to be bounded in operator norm it can also be deduced using the Poisson operators.[17]
The Poisson operatorsTsonR2are defined fors> 0 by
Tsf(x)=12π∫R2sf(t)(|x−t|2+s2)3/2dt.{\displaystyle {T_{s}f(x)={1 \over 2\pi }\int _{\mathbf {R} ^{2}}{s\,f(t) \over (|x-t|^{2}+s^{2})^{3/2}}\,dt.}}
They are given by convolution with the functions
Ps(x)=s2π(|x|2+s2)3/2.{\displaystyle {P_{s}(x)={s \over 2\pi (|x|^{2}+s^{2})^{3/2}}.}}
Psis the Fourier transform of the functione−s|x|, so under the Fourier transform they correspond to multiplication by these functions and form a contraction semigroup on L2(R2). SincePyis positive and integrable with integral 1, the operatorsTsalso define a contraction semigroup on each Lpspace with 1 <p< ∞.
The higher Riesz transforms of the Poisson kernel can be computed:
RkPs(z)=k2πikzk(|z|2+s2)k/2+1{\displaystyle {R^{k}P_{s}(z)={k \over 2\pi i^{k}}{z^{k} \over (|z|^{2}+s^{2})^{k/2+1}}}}
for k ≥ 1 and the complex conjugate for −k. Indeed, the right-hand side is a harmonic function F(x,y,s) of three variables, and for such functions[18]
Ts1F(x,y,s2)=F(x,y,s1+s2).{\displaystyle {T_{s_{1}}F(x,y,s_{2})=F(x,y,s_{1}+s_{2}).}}
As before the operators
TεRk−Rε(k){\displaystyle {T_{\varepsilon }R^{k}-R_{\varepsilon }^{(k)}}}
are given by convolution with integrable functions and have uniformly bounded operator norms. Since the Riesz transforms are unitary on L2(C), the uniform boundedness of the truncated Riesz transforms implies that they converge in the strong operator topology to the corresponding Riesz transforms.
The uniform boundedness of the difference between the transform and the truncated transform can also be seen for oddkusing the Calderón-Zygmund method of rotation.[19][20]The groupTacts by rotation on functions onCviaUθf(z)=f(eiθz).{\displaystyle {U_{\theta }f(z)=f(e^{i\theta }z).}}
This defines a unitary representation on L2(C) and the unitary operatorsRθcommute with the Fourier transform. IfAis a bounded operator on L2(R) then it defines a bounded operatorA(1)on
L2(C) simply by makingAact on the first coordinate. With the identification L2(R2) = L2(R) ⊗ L2(R),A(1)=A⊗I. If φ is a continuous function on the circle then a new operator can be defined byB=12π∫02πφ(θ)UθA(1)Uθ∗dθ.{\displaystyle {B={1 \over 2\pi }\int _{0}^{2\pi }\varphi (\theta )U_{\theta }A^{(1)}U_{\theta }^{*}\,d\theta .}}
This definition is understood in the sense that(Bf,g)=12π∫02πφ(θ)(UθA(1)Uθ∗f,g)dθ{\displaystyle {(Bf,g)={1 \over 2\pi }\int _{0}^{2\pi }\varphi (\theta )(U_{\theta }A^{(1)}U_{\theta }^{*}f,g)\,d\theta }}
for anyf,gin L2(C). It follows that‖B‖≤12π∫02π|φ(θ)|⋅‖A‖dθ.{\displaystyle {\|B\|\leq {1 \over 2\pi }\int _{0}^{2\pi }|\varphi (\theta )|\cdot \|A\|\,d\theta .}}
TakingAto be the Hilbert transformHonL2(R) or its truncationHε, it follows thatR=12π∫02πe−iθUθH(1)Uθ∗dθ,Rε=12π∫02πe−iθUθHε(1)Uθ∗dθ.{\displaystyle {\begin{aligned}R&={1 \over 2\pi }\int _{0}^{2\pi }e^{-i\theta }U_{\theta }H^{(1)}U_{\theta }^{*}\,d\theta ,\\R_{\varepsilon }&={1 \over 2\pi }\int _{0}^{2\pi }e^{-i\theta }U_{\theta }H_{\varepsilon }^{(1)}U_{\theta }^{*}\,d\theta .\end{aligned}}}
Taking adjoints gives a similar formula forR*and its truncation. This gives a second way to verify estimates of the norms ofR,R* and their truncations. It has the advantage of being applicable also forLpspaces.
The Poisson operators can also be used to show that the truncated higher Riesz transforms of a function tend to the higher Riesz transform at the common Lebesgue points of the function and its transform. Indeed, (RkTε−R(k)ε)f→ 0 at each Lebesgue point off; while (Rk−RkTε)f→ 0 at each Lebesgue point ofRkf.[21]
Since
z¯z=(z¯|z|)2,{\displaystyle {{\overline {z}} \over z}=\left({{\overline {z}} \over |z|}\right)^{2},}
the Beurling transformTonL2is the unitary operator equal toR2. This relation has been used classically inVekua (1962)andAhlfors (1966)to establish the continuity properties ofTonLpspaces. The results on the Riesz transform and its powers show thatTis the limit in the strong operator topology of the truncated operatorsTεf(w)=−1π∬|z−w|≥εf(z)(w−z)2dxdy.{\displaystyle T_{\varepsilon }f(w)=-{\frac {1}{\pi }}\iint _{|z-w|\geq \varepsilon }{\frac {f(z)}{(w-z)^{2}}}dxdy.}
Accordingly,Tfcan be written as a Cauchy principal value integral:
Tf(w)=−1πP.V.∬f(z)(w−z)2dxdy=−1πlimε→0∬|z−w|≥εf(z)(w−z)2dxdy.{\displaystyle Tf(w)=-{\frac {1}{\pi }}P.V.\iint {\frac {f(z)}{(w-z)^{2}}}dxdy=-{\frac {1}{\pi }}\lim _{\varepsilon \to 0}\iint _{|z-w|\geq \varepsilon }{\frac {f(z)}{(w-z)^{2}}}dx\,dy.}
From the description ofTandT* on Fourier transforms, it follows that iffis smooth of compact support
T(∂zf)=∂zT(f),T(∂z¯f)=∂z¯T(f).{\displaystyle {\begin{aligned}T(\partial _{z}f)&=\partial _{z}T(f),\\T(\partial _{\overline {z}}f)&=\partial _{\overline {z}}T(f).\end{aligned}}}
Like the Hilbert transform in one dimension, the Beurling transform has a compatibility with conformal changes of coordinate. Let Ω be a bounded region in C with smooth boundary ∂Ω and let φ be a univalent holomorphic map of the unit disk D onto Ω extending to a smooth diffeomorphism of the circle onto ∂Ω. If χΩ is the characteristic function of Ω, the operator χΩTχΩ defines an operator T(Ω) on L2(Ω). Through the conformal map φ, it induces an operator, also denoted T(Ω), on L2(D) which can be compared with T(D). The same is true of the truncations Tε(Ω) and Tε(D).
LetUεbe the disk |z−w| < ε andVεthe region |φ(z) − φ(w)| <ε. OnL2(D)Tε(Ω)f(w)=−1π∬D∖Vε[φ′(w)φ′(z)(φ(z)−φ(w))2f(z)]dxdy,Tε(D)f(w)=−1π∬D∖Uεf(z)(z−w)2dxdy,{\displaystyle {\begin{aligned}T_{\varepsilon }(\Omega )f(w)&=-{\frac {1}{\pi }}\iint _{D\backslash V_{\varepsilon }}\left[{\varphi ^{\prime }(w)\varphi ^{\prime }(z) \over (\varphi (z)-\varphi (w))^{2}}f(z)\right]dx\,dy,\\T_{\varepsilon }(D)f(w)&=-{1 \over \pi }\iint _{D\backslash U_{\varepsilon }}{f(z) \over (z-w)^{2}}\,dx\,dy,\end{aligned}}}
and the operator norms of these truncated operators are uniformly bounded. On the other hand, if
Tε′(D)f(w)=−1π∬D∖Vεf(z)(z−w)2dxdy,{\displaystyle T_{\varepsilon }^{\prime }(D)f(w)=-{1 \over \pi }\iint _{D\backslash V_{\varepsilon }}{\frac {f(z)}{(z-w)^{2}}}dx\,dy,}
then the difference between this operator andTε(Ω) is a truncated operator with smooth kernelK(w,z):
K(w,z)=−1π[φ′(w)φ′(z)(φ(z)−φ(w))2−1(z−w)2].{\displaystyle K(w,z)=-{1 \over \pi }\left[{\varphi '(w)\varphi '(z) \over (\varphi (z)-\varphi (w))^{2}}-{1 \over (z-w)^{2}}\right].}
So the operatorsT′ε(D) must also have uniformly bounded operator norms. To see that their difference tends to 0 in the strong operator topology, it is enough to check this forfsmooth of compact support inD. By Green's theorem[22]
(Tε(D)−Tε′(D))f(w)=1π∬Uε∂zf(z)z−wdxdy−1π∬Vε∂zf(z)z−wdxdy+12πi∫∂Uεf(z)z−wdz¯−12πi∫∂Vεf(z)z−wdz¯.{\displaystyle \left(T_{\varepsilon }(D)-T_{\varepsilon }^{\prime }(D)\right)f(w)={\frac {1}{\pi }}\iint _{U_{\varepsilon }}{\partial _{z}f(z) \over z-w}dx\,dy-{1 \over \pi }\iint _{V_{\varepsilon }}{\partial _{z}f(z) \over z-w}dx\,dy+{1 \over 2\pi i}\int _{\partial U_{\varepsilon }}{\frac {f(z)}{z-w}}d{\overline {z}}-{\frac {1}{2\pi i}}\int _{\partial V_{\varepsilon }}{f(z) \over z-w}\,d{\overline {z}}.}
All four terms on the right hand side tend to 0. Hence the differenceT(Ω) −T(D) is theHilbert–Schmidt operatorwith kernelK.
For pointwise convergence there is a simple argument due to Mateu & Verdera (2006) showing that the truncated integrals converge to Tf precisely at its Lebesgue points, that is, almost everywhere.[23] In fact T has the following symmetry property for f, g ∈ L2(C)
∬(Tf)g=−1πlim∫|z−w|≥εf(w)g(z)(w−z)2=∬f(Tg).{\displaystyle \iint (Tf)g=-{1 \over \pi }\lim \int _{|z-w|\geq \varepsilon }{\frac {f(w)g(z)}{(w-z)^{2}}}=\iint f(Tg).}
On the other hand, ifχis thecharacteristic functionof the diskD(z,ε) with centrezand radiusε, then
Tχ(w)=−ε21−χ(w)(w−z)2.{\displaystyle T\chi (w)=-\varepsilon ^{2}{\frac {1-\chi (w)}{(w-z)^{2}}}.}
HenceTε(f)(z)=1πε2∬f(Tχ)=1πε2∬(Tf)χ=AvD(z,ε)Tf.{\displaystyle T_{\varepsilon }(f)(z)={1 \over \pi \varepsilon ^{2}}\iint f(T\chi )={1 \over \pi \varepsilon ^{2}}\iint (Tf)\chi =\mathbf {Av} _{D(z,\varepsilon )}\,Tf.}
By theLebesgue differentiation theorem, the right-hand side converges toTfat the Lebesgue points ofTf.
Forfin the Schwartz space ofRn, thejthRiesz transformis defined by
Rjf(x)=cnlimε→0∫|y|≥εf(x−y)yj|y|n+1dy=cnn−1∫∂jf(x−y)1|y|n−1dy,{\displaystyle R_{j}f(x)=c_{n}\lim _{\varepsilon \to 0}\int _{|y|\geq \varepsilon }f(x-y){y_{j} \over |y|^{n+1}}dy={\frac {c_{n}}{n-1}}\int \partial _{j}f(x-y){1 \over |y|^{n-1}}dy,}
wherecn=Γ(n+12)π−n+12.{\displaystyle c_{n}=\Gamma \left({\tfrac {n+1}{2}}\right)\pi ^{-{\frac {n+1}{2}}}.}
Under the Fourier transform:
Rjf^(t)=itj|t|f^(t).{\displaystyle {\widehat {R_{j}f}}(t)={it_{j} \over |t|}{\widehat {f}}(t).}
ThusRjcorresponds to the operator ∂jΔ−1/2, where Δ = −∂12− ⋯ −∂n2denotes the Laplacian onRn. By definitionRjis a bounded and skew-adjoint operator for theL2norm and
R12+⋯+Rn2=−I.{\displaystyle R_{1}^{2}+\cdots +R_{n}^{2}=-I.}
The corresponding truncated operatorsRj,εf(x)=cn∫|y|≥εf(x−y)yj|y|n+1dy{\displaystyle R_{j,\varepsilon }f(x)=c_{n}\int _{|y|\geq \varepsilon }f(x-y){y_{j} \over |y|^{n+1}}dy}are uniformly bounded in the operator norm. This can either be proved directly or can be established by theCalderón−Zygmund method of rotationsfor the group SO(n).[24]This expresses the operatorsRjand their truncations in terms of the Hilbert transforms in one dimension and its truncations. In fact ifG= SO(n) with normalised Haar measure andH(1)is the Hilbert transform in the first coordinate, then
Rj=∫Gφ(g)gH(1)g−1dg,Rj,ε=∫Gφ(g)gHε(1)g−1dg,Rj,ε,R=∫Gφ(g)gHε,R(1)g−1dg.{\displaystyle {\begin{aligned}R_{j}&=\int _{G}\varphi (g)gH^{(1)}g^{-1}\,dg,\\R_{j,\varepsilon }&=\int _{G}\varphi (g)gH_{\varepsilon }^{(1)}g^{-1}\,dg,\\R_{j,\varepsilon ,R}&=\int _{G}\varphi (g)gH_{\varepsilon ,R}^{(1)}g^{-1}\,dg.\end{aligned}}}
whereφ(g) is the (1,j) matrix coefficient ofg.
In particular for f ∈ L2, Rj,εf → Rjf in L2. Moreover, Rj,εf tends to Rjf almost everywhere. This can be proved exactly as for the Hilbert transform by using the Poisson operators defined on L2(Rn) when Rn is regarded as the boundary of a halfspace in Rn+1. Alternatively it can be proved directly from the result for the Hilbert transform on R using the expression of Rj as an integral over G.[25][26]
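On a periodic grid the Riesz transforms can be imitated with the two-dimensional discrete Fourier transform, since they act by the multiplier itj/|t|. The sketch below is only a discrete analogue on the torus (not on Rn itself, and with an arbitrary test function); it checks the identity R1² + R2² = −I on mean-zero data.

```python
import numpy as np

def riesz(f, j):
    """Discrete analogue of the j-th Riesz transform: Fourier multiplier i*t_j/|t|."""
    n = f.shape[0]
    t = np.fft.fftfreq(n, d=1.0 / n)              # integer frequencies
    t1, t2 = np.meshgrid(t, t, indexing="ij")
    norm = np.hypot(t1, t2)
    norm[0, 0] = 1.0                              # avoid 0/0; the zero mode is annihilated
    m = 1j * (t1 if j == 1 else t2) / norm
    m[0, 0] = 0.0
    return np.fft.ifft2(m * np.fft.fft2(f))

n = 64
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
f = np.cos(X) * np.sin(2 * Y) + 0.3 * np.sin(3 * X + Y)   # mean-zero test function
lhs = riesz(riesz(f, 1), 1) + riesz(riesz(f, 2), 2)
print(np.allclose(lhs.real, -f))                  # True: R1^2 + R2^2 = -I on mean-zero data
```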
The Poisson operatorsTyonRnare defined fory> 0 by[27]
Tyf(x)=cn∫Rnyf(t)(|x−t|2+y2)(n+1)/2dt.{\displaystyle T_{y}f(x)=c_{n}\int _{\mathbf {R} ^{n}}{\frac {y\,f(t)}{\left(|x-t|^{2}+y^{2}\right)^{\frac {n+1}{2}}}}dt.}
They are given by convolution with the functionsPy(x)=cny(|x|2+y2)n+12.{\displaystyle P_{y}(x)=c_{n}{\frac {y}{\left(|x|^{2}+y^{2}\right)^{\frac {n+1}{2}}}}.}
Pyis the Fourier transform of the functione−y|x|, so under the Fourier transform they correspond to multiplication by these functions and form a contraction semigroup on L2(Rn). SincePyis positive and integrable with integral 1, the operatorsTyalso define a contraction semigroup on eachLpspace with 1 <p< ∞.
The Riesz transforms of the Poisson kernel can be computed
RjPε(x)=cnxj(|x|2+ε2)n+12.{\displaystyle R_{j}P_{\varepsilon }(x)=c_{n}{\frac {x_{j}}{\left(|x|^{2}+\varepsilon ^{2}\right)^{\frac {n+1}{2}}}}.}
The operatorRjTεis given by convolution with this function. It can be checked directly that the operatorsRjTε−Rj,εare given by convolution with functions uniformly bounded inL1norm. The operator norm of the difference is therefore uniformly bounded. We have (RjTε−Rj,ε)f→ 0 at each Lebesgue point off; while (Rj−RjTε)f→ 0 at each Lebesgue point ofRjf. SoRj,εf→Rjfon the common Lebesgue points offandRjf.
The theorem ofMarcel Rieszasserts that singular integral operators that are continuous for theL2norm are also continuous in theLpnorm for1 <p< ∞and that the operator norms vary continuously withp.
Once it is established that the operator norms of the Hilbert transform onLp(T)are bounded for even integers, it follows from theRiesz–Thorin interpolation theoremand duality that they are bounded for allpwith1 <p< ∞and that the norms vary continuously withp. Moreover, the arguments with the Poisson integral can be applied to show that the truncated Hilbert transformsHεare uniformly bounded in operator norm and converge in the strong operator topology toH.
It is enough to prove the bound for real trigonometric polynomials without constant term:
f(eiθ)=∑m=1Nameimθ+a−me−imθ,a−m=am¯.{\displaystyle f\left(e^{i\theta }\right)=\sum _{m=1}^{N}a_{m}e^{im\theta }+a_{-m}e^{-im\theta },\qquad a_{-m}={\overline {a_{m}}}.}
Sincef+iHfis a polynomial ineiθwithout constant term
12π∫02π(f+iHf)2ndθ=0.{\displaystyle {\frac {1}{2\pi }}\int _{0}^{2\pi }(f+iHf)^{2n}\,d\theta =0.}
Hence, taking the real part and usingHölder's inequality:
‖Hf‖2n2n≤∑k=0n−1(2n2k)|((Hf)2k,f2n−2k)|≤∑k=0n−1(2n2k)‖Hf‖2n2k⋅‖f‖2n2n−2k.{\displaystyle \|Hf\|_{2n}^{2n}\leq \sum _{k=0}^{n-1}{2n \choose 2k}\left|\left((Hf)^{2k},f^{2n-2k}\right)\right|\leq \sum _{k=0}^{n-1}{2n \choose 2k}\|Hf\|_{2n}^{2k}\cdot \|f\|_{2n}^{2n-2k}.}
So the M. Riesz theorem follows by induction forpan even integer and hence for allpwith1 <p< ∞.
Once it is established that the operator norms of the Hilbert transform onLp(R)are bounded whenpis a power of 2, it follows from theRiesz–Thorin interpolation theoremand duality that they are bounded for allpwith1 <p< ∞and that the norms vary continuously withp. Moreover, the arguments with the Poisson integral can be applied to show that the truncated Hilbert transformsHεare uniformly bounded in operator norm and converge in the strong operator topology toH.
It is enough to prove the bound whenfis a Schwartz function. In that case the following identity of Cotlar holds:
(Hf)2=f2+2H(fH(f)).{\displaystyle (Hf)^{2}=f^{2}+2H(fH(f)).}
In fact, writef=f++f−according to the±ieigenspaces ofH. Sincef±iHfextend to holomorphic functions in the upper and lower half plane, so too do their squares. Hence
f2−(Hf)2=(f++f−)2+(f+−f−)2=2(f+2+f−2)=−2iH(f+2−f−2)=−2H(f(Hf)).{\displaystyle f^{2}-(Hf)^{2}=\left(f_{+}+f_{-}\right)^{2}+\left(f_{+}-f_{-}\right)^{2}=2\left(f_{+}^{2}+f_{-}^{2}\right)=-2iH\left(f_{+}^{2}-f_{-}^{2}\right)=-2H(f(Hf)).}
(Cotlar's identity can also be verified directly by taking Fourier transforms.)
Hence, assuming the M. Riesz theorem forp= 2n,
‖Hf‖2n+12=‖(Hf)2‖2n≤‖f2‖2n+2‖H(fH(f))‖2n≤‖f‖2n+12+2‖H‖2n‖f‖2n+1‖Hf‖2n+1.{\displaystyle \|Hf\|_{2^{n+1}}^{2}=\left\|(Hf)^{2}\right\|_{2^{n}}\leq \left\|f^{2}\right\|_{2^{n}}+2\|H(fH(f))\|_{2^{n}}\leq \|f\|_{2^{n+1}}^{2}+2\|H\|_{2^{n}}\|f\|_{2^{n+1}}\|Hf\|_{2^{n+1}}.}
Since
R2>1+2‖H‖2nR{\displaystyle R^{2}>1+2\|H\|_{2^{n}}R}
forRsufficiently large, the M. Riesz theorem must also hold forp= 2n+1.
Exactly the same method works for the Hilbert transform on the circle.[30]The same identity of Cotlar is easily verified on trigonometric polynomialsfby writing them as the sum of the terms with non-negative and negative exponents, i.e. the±ieigenfunctions ofH. TheLpbounds can therefore be established whenpis a power of 2 and follow in general by interpolation and duality.
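Cotlar's identity can indeed be checked with Fourier transforms. The sketch below verifies (Hf)² = f² + 2H(f·Hf) numerically on the circle for an arbitrarily chosen mean-zero trigonometric polynomial, using the same FFT multiplier for H as in the earlier illustration.

```python
import numpy as np

def hilbert_circle(samples):
    """H = i(2P - I) on the circle: a_n is multiplied by +i for n >= 0, -i for n < 0."""
    n = len(samples)
    freqs = np.fft.fftfreq(n, d=1.0 / n)
    return np.fft.ifft(np.where(freqs >= 0, 1j, -1j) * np.fft.fft(samples)).real

theta = np.linspace(0.0, 2.0 * np.pi, 1024, endpoint=False)
# A mean-zero real trigonometric polynomial (mean zero keeps each transform real-valued).
f = 0.7 * np.cos(2 * theta) - 0.4 * np.sin(5 * theta) + 0.1 * np.cos(9 * theta)
Hf = hilbert_circle(f)
lhs = Hf ** 2
rhs = f ** 2 + 2.0 * hilbert_circle(f * Hf)
print(np.allclose(lhs, rhs))    # True: Cotlar's identity (Hf)^2 = f^2 + 2 H(f Hf)
```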
The method of rotation for Riesz transforms and their truncations applies equally well onLpspaces for1 <p< ∞. Thus these operators can be expressed in terms of the Hilbert transform onRand its truncations. The integration of the functionsΦfrom the groupTorSO(n)into the space of operators onLpis taken in the weak sense:
(∫GΦ(x)dxf,g)=∫G(Φ(x)f,g)dx{\displaystyle \left(\int _{G}\Phi (x)\,dx\,f,g\right)=\int _{G}(\Phi (x)f,g)\,dx}
whereflies inLpandglies in thedual spaceLqwith1/p+1/q= 1. It follows that Riesz transforms are bounded onLpand that the differences with their truncations are also uniformly bounded. The continuity of theLpnorms of a fixed Riesz transform is a consequence of theRiesz–Thorin interpolation theorem.
The proofs of pointwise convergence for Hilbert and Riesz transforms rely on theLebesgue differentiation theorem, which can be proved using theHardy-Littlewood maximal function.[31]The techniques for the simplest and best-known case, namely the Hilbert transform on the circle, are a prototype for all the other transforms. This case is explained in detail here.
Letfbe in Lp(T) forp> 1. The Lebesgue differentiation theorem states that
A(ε)=12ε∫x−εx+ε|f(t)−f(x)|dt→0{\displaystyle {A(\varepsilon )={1 \over 2\varepsilon }\int _{x-\varepsilon }^{x+\varepsilon }|f(t)-f(x)|\,dt\to 0}}
for almost allxinT.[32][33][34]The points at which this holds are called theLebesgue pointsoff. Using this theorem it follows that iffis an integrable function on the circle, the Poisson integralTrftends pointwise tofat eachLebesgue pointoff. In fact, forxfixed,A(ε) is a continuous function on[0,π]. Continuity at 0 follows becausexis a Lebesgue point and elsewhere because, ifhis an integrable function, the integral of |h| on intervals of decreasing length tends to 0 byHölder's inequality.
Lettingr= 1 −ε, the difference can be estimated by two integrals:
2π|Trf(x)−f(x)|=∫02π|(f(x−y)−f(x))Pr(y)|dy≤∫|y|≤ε+∫|y|≥ε.{\displaystyle 2\pi |T_{r}f(x)-f(x)|=\int _{0}^{2\pi }|(f(x-y)-f(x))P_{r}(y)|\,dy\leq \int _{|y|\leq \varepsilon }+\int _{|y|\geq \varepsilon }.}
The Poisson kernel has two important properties forεsmall
supy∈[−ε,ε]|P1−ε(y)|≤ε−1.supy∉(−ε,ε)|P1−ε(y)|→0.{\displaystyle {\begin{aligned}\sup _{y\in [-\varepsilon ,\varepsilon ]}|P_{1-\varepsilon }(y)|&\leq \varepsilon ^{-1}.\\\sup _{y\notin (-\varepsilon ,\varepsilon )}|P_{1-\varepsilon }(y)|&\to 0.\end{aligned}}}
The first integral is bounded byA(ε) by the first inequality so tends to zero asεgoes to 0; the second integral tends to 0 by the second inequality.
The same reasoning can be used to show that T1−εHf − Hεf tends to zero at each Lebesgue point of f.[35] In fact the operator T1−εH has kernel Qr + i, where the conjugate Poisson kernel Qr is defined by {\displaystyle {Q_{r}(\theta )={2r\sin \theta \over 1-2r\cos \theta +r^{2}}.}}
Hence2π|T1−εHf(x)−Hεf(x)|≤∫|y|≤ε|f(x−y)−f(x)|⋅|Qr(y)|dy+∫|y|≥ε|f(x−y)−f(x)|⋅|Q1(y)−Qr(y)|dy.{\displaystyle {2\pi |T_{1-\varepsilon }Hf(x)-H_{\varepsilon }f(x)|\leq \int _{|y|\leq \varepsilon }|f(x-y)-f(x)|\cdot |Q_{r}(y)|\,dy+\int _{|y|\geq \varepsilon }|f(x-y)-f(x)|\cdot |Q_{1}(y)-Q_{r}(y)|\,dy.}}
The conjugate Poisson kernel has two important properties for ε smallsupy∈[−ε,ε]|Q1−ε(y)|≤ε−1.supy∉(−ε,ε)|Q1(y)−Q1−ε(y)|→0.{\displaystyle {\begin{aligned}\sup _{y\in [-\varepsilon ,\varepsilon ]}|Q_{1-\varepsilon }(y)|&\leq \varepsilon ^{-1}.\\\sup _{y\notin (-\varepsilon ,\varepsilon )}|Q_{1}(y)-Q_{1-\varepsilon }(y)|&\to 0.\end{aligned}}}
Exactly the same reasoning as before shows that the two integrals tend to 0 as ε → 0.
Combining these two limit formulas it follows thatHεftends pointwise toHfon the common Lebesgue points offandHfand therefore almost everywhere.[36][37][38]
Much of theLptheory has been developed using maximal functions and maximal transforms. This approach has the advantage that it also extends to L1spaces in an appropriate "weak" sense and gives refined estimates inLpspaces forp> 1. These finer estimates form an important part of the techniques involved inLennart Carleson's solution in 1966 ofLusin's conjecturethat the Fourier series of L2functions converge almost everywhere.[39]In the more rudimentary forms of this approach, the L2theory is given less precedence: instead there is more emphasis on the L1theory, in particular its measure-theoretic and probabilistic aspects; results for otherLpspaces are deduced by a form ofinterpolationbetween L1and L∞spaces. The approach is described in numerous textbooks, including the classicsZygmund (1977)andKatznelson (1968). Katznelson's account is followed here for the particular case of the Hilbert transform of functions in L1(T), the case not covered by the development above.F. Riesz's proof of convexity, originally established byHardy, is established directly without resorting toRiesz−Thorin interpolation.[40][41]
Iffis an L1function on the circle its maximal function is defined by[42]
f∗(t)=sup0<h≤π12h∫t−ht+h|f(s)|ds.{\displaystyle {f^{*}(t)=\sup _{0<h\leq \pi }{1 \over 2h}\int _{t-h}^{t+h}|f(s)|\,ds.}}
f* is finite almost everywhere and is of weak L1type. In fact for λ > 0 if
Ef(λ)={x:|f(x)|>λ},fλ=χE(λ)f,{\displaystyle {E_{f}(\lambda )=\{x:\,|f(x)|>\lambda \},\,\,f_{\lambda }=\chi _{E(\lambda )}f,}}
then[43]
m(Ef∗(λ))≤8λ∫Ef(λ)|f|≤8‖f‖1λ,{\displaystyle m(E_{f^{*}}(\lambda ))\leq {8 \over \lambda }\int _{E_{f}(\lambda )}|f|\leq {8\|f\|_{1} \over \lambda },}
wheremdenotes Lebesgue measure.
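The maximal function and the weak-type bound lend themselves to numerical experiment. The sketch below computes a discretized f* for a sampled function on the circle by maximizing window averages over all half-widths that fit on the grid, and checks the weak (1,1) inequality with the constant 8 quoted above; the sample data and grid size are arbitrary.

```python
import numpy as np

def maximal_function(f, length=2 * np.pi):
    """Discretized Hardy-Littlewood maximal function of samples f on the circle.

    The supremum over symmetric windows is replaced by a maximum over all
    window half-widths (in samples) that fit on the sampled circle.
    """
    n = len(f)
    dx = length / n
    absf = np.abs(np.tile(f, 3))                        # periodic extension
    cums = np.concatenate(([0.0], np.cumsum(absf))) * dx
    fstar = np.zeros(n)
    for i in range(n):
        c = i + n                                       # index in the middle copy
        best = 0.0
        for k in range(1, n // 2 + 1):
            avg = (cums[c + k + 1] - cums[c - k]) / ((2 * k + 1) * dx)
            best = max(best, avg)
        fstar[i] = best
    return fstar

rng = np.random.default_rng(1)
n = 256
f = rng.standard_normal(n)                              # arbitrary sample data
fstar = maximal_function(f)
norm1 = np.sum(np.abs(f)) * (2 * np.pi / n)              # ||f||_1
for lam in (0.5, 1.0, 2.0):
    measure = np.sum(fstar > lam) * (2 * np.pi / n)      # m{ f* > lambda }
    print(f"lambda={lam}:  measure={measure:.3f}  bound={8 * norm1 / lam:.3f}")
```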
The Hardy−Littlewood inequality above leads to a proof that almost every pointxofTis aLebesgue pointof an integrable functionf, so that
limh→0∫x−hx+h|f(t)−f(x)|dt2h→0.{\displaystyle \lim _{h\to 0}{\frac {\int _{x-h}^{x+h}|f(t)-f(x)|\,dt}{2h}}\to 0.}
In fact, let
ω(f)(x)=lim suph→0∫x−hx+h|f(t)−f(x)|dt2h≤f∗(x)+|f(x)|.{\displaystyle \omega (f)(x)=\limsup _{h\to 0}{\frac {\int _{x-h}^{x+h}|f(t)-f(x)|\,dt}{2h}}\leq f^{*}(x)+|f(x)|.}
If g is continuous, then ω(g) = 0, so that ω(f − g) = ω(f). On the other hand, f can be approximated arbitrarily closely in L1 by continuous g. Then, using Chebychev's inequality,
m{x:ω(f)(x)>λ}=m{x:ω(f−g)(x)>λ}≤m{x:(f−g)∗(x)>λ}+m{x:|f(x)−g(x)|>λ}≤Cλ−1‖f−g‖1.{\displaystyle m\{x:\,\omega (f)(x)>\lambda \}=m\{x:\,\omega (f-g)(x)>\lambda \}\leq m\{x:\,(f-g)^{*}(x)>\lambda \}+m\{x:\,|f(x)-g(x)|>\lambda \}\leq C\lambda ^{-1}\|f-g\|_{1}.}
The right-hand side can be made arbitrarily small, so that ω(f) = 0 almost everywhere.
The Poisson integrals of an L1functionfsatisfy[44]
|Trf|≤f∗.{\displaystyle {|T_{r}f|\leq f^{*}.}}
It follows thatTrftends tofpointwise almost everywhere. In fact let
Ω(f)=lim supr→1|Trf−f|.{\displaystyle {\Omega (f)=\limsup _{r\to 1}|T_{r}f-f|.}}
Ifgis continuous, then the difference tends to zero everywhere, so Ω(f−g) = Ω(f). On the other hand,fcan be approximated arbitrarily closely in L1by continuousg. Then, usingChebychev's inequality,
m{x:Ω(f)(x)>λ}=m{x:Ω(f−g)(x)>λ}≤m{x:(f−g)∗(x)>λ}+m{x:|f(x)−g(x)|>λ}≤Cλ−1‖f−g‖1.{\displaystyle m\{x:\,\Omega (f)(x)>\lambda \}=m\{x:\,\Omega (f-g)(x)>\lambda \}\leq m\{x:\,(f-g)^{*}(x)>\lambda \}+m\{x:\,|f(x)-g(x)|>\lambda \}\leq C\lambda ^{-1}\|f-g\|_{1}.}
The right-hand side can be made arbitrarily small, so that Ω(f) = 0 almost everywhere. A more refined argument shows that convergence occurs at each Lebesgue point off.
Iffis integrable the conjugate Poisson integrals are defined and given by convolution by the kernelQr. This definesHfinside |z| < 1. To show thatHfhas a radial limit for almost all angles,[45]consider
F(z)=exp(−f(z)−iHf(z)),{\displaystyle {F(z)=\exp(-f(z)-iHf(z)),}}
where f(z) denotes the extension of f by Poisson integral. F is holomorphic in the unit disk with |F(z)| ≤ 1. The restriction of F to a countable family of concentric circles gives a sequence of functions in L∞(T) which has a weak* limit g in L∞(T) with Poisson integral F. By the L2 results, g is the radial limit for almost all angles of F. It follows that Hf(z) has a radial limit almost everywhere. This is taken as the definition of Hf on T, so that TrHf tends pointwise to Hf almost everywhere. The function Hf is of weak L1 type.[46]
The inequality used above to prove pointwise convergence for Lp functions with 1 < p < ∞ makes sense for L1 functions by invoking the maximal function. The inequality becomes
|Hεf−T1−εHf|≤4f∗.{\displaystyle {|H_{\varepsilon }f-T_{1-\varepsilon }Hf|\leq 4f^{*}.}}
Let
ω(f)=lim supε→0|Hεf−T1−εHf|.{\displaystyle {\omega (f)=\limsup _{\varepsilon \to 0}|H_{\varepsilon }f-T_{1-\varepsilon }Hf|.}}
Ifgis smooth, then the difference tends to zero everywhere, so ω(f−g) =ω(f). On the other hand,fcan be approximated arbitrarily closely inL1by smoothg. Then
m{x:ω(f)(x)>λ}=m{x:ω(f−g)(x)>λ}≤m{x:4(f−g)∗(x)>λ}≤Cλ−1‖f−g‖1.{\displaystyle m\{x:\,\omega (f)(x)>\lambda \}=m\{x:\,\omega (f-g)(x)>\lambda \}\leq m\{x:\,4(f-g)^{*}(x)>\lambda \}\leq C\lambda ^{-1}\|f-g\|_{1}.}
The right hand side can be made arbitrarily small, so thatω(f) = 0 almost everywhere. Thus the difference forftends to zero almost everywhere. A more refined argument can be given[47]to show that, as in case ofLp, the difference tends to zero at all Lebesgue points off. In combination with the result for the conjugate Poisson integral, it follows that, iffis in L1(T), thenHεfconverges toHfalmost everywhere, a theorem originally proved by Privalov in 1919.
Calderón & Zygmund (1952)introduced general techniques for studying singular integral operators of convolution type. In Fourier transform the operators are given by multiplication operators. These will yield bounded operators on L2if the corresponding multiplier function is bounded. To prove boundedness on Lpspaces, Calderón and Zygmund introduced a method of decomposing L1functions, generalising therising sun lemmaofF. Riesz. This method showed that the operator defined a continuous operator from L1to the space of functions of weak L1. TheMarcinkiewicz interpolation theoremand duality then implies that the singular integral operator is bounded on all Lpfor 1 <p< ∞. A simple version of this theory is described below for operators onR. Asde Leeuw (1965)showed, results onRcan be deduced from corresponding results forTby restricting the multiplier to the integers, or equivalently periodizing the kernel of the operator. Corresponding results for the circle were originally established by Marcinkiewicz in 1939. These results generalize toRnandTn. They provide an alternative method for showing that the Riesz transforms, the higher Riesz transforms and in particular the Beurling transform define bounded operators on Lpspaces.[48]
Let f be a non-negative integrable or continuous function on [a,b]. Let I = (a,b). For any open subinterval J of [a,b], let fJ denote the average of |f| over J. Let α be a positive constant greater than fI. Divide I into two equal intervals (omitting the midpoint). Each of these intervals J satisfies fJ < 2α, since fJ ≤ 2fI < 2α. Any interval with fJ ≥ α, so that automatically α ≤ fJ < 2α, is discarded; the halving process is repeated on each remaining interval, discarding intervals using the same criterion. This can be continued indefinitely. The discarded intervals are disjoint and their union is an open set Ω. For points x in the complement, they lie in a nested set of intervals with lengths decreasing to 0 and on each of which the average of f is bounded by α. If f is continuous these averages tend to |f(x)|. If f is only integrable this is only true almost everywhere, for it is true at the Lebesgue points of f by the Lebesgue differentiation theorem. Thus f satisfies |f(x)| ≤ α almost everywhere on Ωc, the complement of Ω. Let Jn be the set of discarded intervals and define the "good" function g by
g(x)=AvJn(f)(x∈Jn),g(x)=f(x)(x∈Ωc).{\displaystyle {g(x)=\mathbf {Av} _{J_{n}}(f)\,\,\,(x\in J_{n}),\,\,\,\,\,g(x)=f(x)\,\,\,(x\in \Omega ^{c}).}}
By construction |g(x)| ≤ 2αalmost everywhere and‖g‖1≤‖f‖1.{\displaystyle {\|g\|_{1}\leq \|f\|_{1}.}}
Combining these two inequalities gives‖g‖pp≤(2α)p−1‖f‖1.{\displaystyle {\|g\|_{p}^{p}\leq (2\alpha )^{p-1}\|f\|_{1}.}}
Define the "bad" functionbbyb=f−g. Thusbis 0 off Ω and equal tofminus its average onJn. So the average ofbonJnis zero and‖b‖1≤2‖f‖1.{\displaystyle {\|b\|_{1}\leq 2\|f\|_{1}.}}
Moreover, since |b| ≥αon Ωm(Ω)≤α−1‖f‖1.{\displaystyle {m(\Omega )\leq \alpha ^{-1}\|f\|_{1}.}}
The decompositionf(x)=g(x)+b(x){\displaystyle \displaystyle {f(x)=g(x)+b(x)}}
is called theCalderón–Zygmund decomposition.[49]
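The stopping-time construction translates directly into a short recursive program. The following sketch performs the decomposition for a non-negative function given by samples on dyadic subintervals; it assumes the number of samples is a power of two so that the halving always falls on sample boundaries, and the sample data and threshold are arbitrary.

```python
import numpy as np

def calderon_zygmund(f, alpha):
    """Calderón-Zygmund decomposition of a sampled non-negative function.

    f     -- samples of |f| on equal dyadic subintervals (len(f) a power of two)
    alpha -- threshold, assumed larger than the global average of f
    Returns (g, b, bad) with f = g + b, where `bad` lists the index ranges of
    the discarded intervals J_n.
    """
    f = np.asarray(f, dtype=float)
    g, b = f.copy(), np.zeros_like(f)
    bad = []

    def split(lo, hi):                    # called only when the average on [lo, hi) is < alpha
        if hi - lo == 1:
            return                        # finest resolution reached: keep f itself
        mid = (lo + hi) // 2
        for i, j in ((lo, mid), (mid, hi)):
            avg = f[i:j].mean()
            if avg >= alpha:              # then automatically alpha <= avg < 2*alpha
                bad.append((i, j))
                g[i:j] = avg              # g is the average of f on J_n
                b[i:j] = f[i:j] - avg     # b has mean zero on J_n
            else:
                split(i, j)

    split(0, len(f))
    return g, b, bad

rng = np.random.default_rng(2)
f = np.abs(rng.standard_normal(64))       # arbitrary non-negative sample data
alpha = 2.0 * f.mean()
g, b, bad = calderon_zygmund(f, alpha)
print(np.allclose(f, g + b), float(g.max()) < 2 * alpha, len(bad))
```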
LetK(x) be a kernel defined onR\{0} such that
W(f)=limε→0∫|x|≥εK(x)f(x)dx{\displaystyle W(f)=\lim _{\varepsilon \to 0}\int _{|x|\geq \varepsilon }K(x)f(x)\,dx}
exists as a tempered distribution for f a Schwartz function. Suppose that the Fourier transform of W is bounded, so that convolution by W defines a bounded operator T on L2(R). Then if K satisfies Hörmander's condition
A=supy≠0∫|x|≥2|y||K(x−y)−K(x)|dx<∞,{\displaystyle A=\sup _{y\neq 0}\int _{|x|\geq 2|y|}|K(x-y)-K(x)|\,dx<\infty ,}
thenTdefines a bounded operator on Lpfor 1 <p< ∞ and a continuous operator from L1into functions of weak type L1.[50]
In fact by the Marcinkiewicz interpolation argument and duality, it suffices to check that iffis smooth of compact support then
m{x:|Tf(x)|≥2λ}≤(2A+4‖T‖)⋅λ−1‖f‖1.{\displaystyle m\{x:\,|Tf(x)|\geq 2\lambda \}\leq (2A+4\|T\|)\cdot \lambda ^{-1}\|f\|_{1}.}
Take a Calderón−Zygmund decomposition offas abovef(x)=g(x)+b(x){\displaystyle f(x)=g(x)+b(x)}with intervalsJnand withα=λμ, whereμ> 0. Then
{\displaystyle m\{x:\,|Tf(x)|\geq 2\lambda \}\leq m\{x:\,|Tg(x)|\geq \lambda \}+m\{x:\,|Tb(x)|\geq \lambda \}.}
The term for g can be estimated using Chebychev's inequality:
{\displaystyle m\{x:\,|Tg(x)|\geq \lambda \}\leq \lambda ^{-2}\|Tg\|_{2}^{2}\leq \lambda ^{-2}\|T\|^{2}\|g\|_{2}^{2}\leq 2\lambda ^{-1}\mu \|T\|^{2}\|f\|_{1}.}
If J* is defined to be the interval with the same centre as J but twice the length, the term for b can be broken up into two parts:
{\displaystyle m\{x:\,|Tb(x)|\geq \lambda \}\leq m\{x:\,x\notin \cup J_{n}^{*},\,\,\,|Tb(x)|\geq \lambda \}+m(\cup J_{n}^{*}).}
The second term is easy to estimate:
{\displaystyle m(\cup J_{n}^{*})\leq \sum m(J_{n}^{*})=2\sum m(J_{n})\leq 2\lambda ^{-1}\mu ^{-1}\|f\|_{1}.}
To estimate the first term note that
{\displaystyle b=\sum b_{n},\qquad b_{n}=(f-\mathbf {Av} _{J_{n}}(f))\chi _{J_{n}}.}
Thus by Chebychev's inequality:
{\displaystyle m\{x:\,x\notin \cup J_{m}^{*},\,\,\,|Tb(x)|\geq \lambda \}\leq \lambda ^{-1}\int _{(\cup J_{m}^{*})^{c}}|Tb(x)|\,dx\leq \lambda ^{-1}\sum _{n}\int _{(J_{n}^{*})^{c}}|Tb_{n}(x)|\,dx.}
By construction the integral of bn over Jn is zero. Thus, if yn is the midpoint of Jn, then by Hörmander's condition:
{\displaystyle \int _{(J_{n}^{*})^{c}}|Tb_{n}(x)|\,dx=\int _{(J_{n}^{*})^{c}}\left|\int _{J_{n}}(K(x-y)-K(x-y_{n}))b_{n}(y)\,dy\right|\,dx\leq \int _{J_{n}}|b_{n}(y)|\int _{(J_{n}^{*})^{c}}|K(x-y)-K(x-y_{n})|\,dx\,dy\leq A\|b_{n}\|_{1}.}
Hence {\displaystyle m\left\{x:\,x\notin \cup J_{m}^{*},|Tb(x)|\geq \lambda \right\}\leq \lambda ^{-1}A\|b\|_{1}\leq 2A\lambda ^{-1}\|f\|_{1}.}
Combining the three estimates gives
{\displaystyle m\{x:\,|Tf(x)|\geq 2\lambda \}\leq \left(2\mu \|T\|^{2}+2\mu ^{-1}+2A\right)\lambda ^{-1}\|f\|_{1}.}
The constant is minimized by taking {\displaystyle \mu =\|T\|^{-1}.}
The Marcinkiewicz interpolation argument extends the bounds to any Lp with 1 < p < 2 as follows.[51] Given a > 0, write
{\displaystyle f=f_{a}+f^{a},}
where f_a = f if |f| < a and 0 otherwise, and f^a = f if |f| ≥ a and 0 otherwise. Then by Chebychev's inequality and the weak type L1 inequality above
{\displaystyle m\{x:\,|Tf(x)|>a\}\leq m\left\{x:\,|Tf_{a}(x)|>{\tfrac {a}{2}}\right\}+m\left\{x:\,|Tf^{a}(x)|>{\tfrac {a}{2}}\right\}\leq 4a^{-2}\|T\|^{2}\|f_{a}\|_{2}^{2}+Ca^{-1}\|f^{a}\|_{1}.}
Hence
{\displaystyle {\begin{aligned}\|Tf\|_{p}^{p}&=p\int _{0}^{\infty }a^{p-1}m\{x:\,|Tf(x)|>a\}\,da\\&\leq p\int _{0}^{\infty }a^{p-1}\left(4a^{-2}\|T\|^{2}\|f_{a}\|_{2}^{2}+Ca^{-1}\|f^{a}\|_{1}\right)da\\&=4\|T\|^{2}\iint _{|f(x)|<a}|f(x)|^{2}a^{p-3}\,dx\,da+2C\iint _{|f(x)|\geq a}|f(x)|a^{p-2}\,dx\,da\\&\leq \left(4\|T\|^{2}(2-p)^{-1}+C(p-1)^{-1}\right)\int |f|^{p}\\&=C_{p}\|f\|_{p}^{p}.\end{aligned}}}
By duality, the same bound holds for the conjugate exponent q (1/p + 1/q = 1):
{\displaystyle \|Tf\|_{q}\leq C_{p}\|f\|_{q}.}
Continuity of the norms can be shown by a more refined argument[52]or follows from theRiesz–Thorin interpolation theorem.
|
https://en.wikipedia.org/wiki/Beurling_transform
|
Cryptanalysis of theLorenz cipherwas the process that enabled the British to read high-level German army messages duringWorld War II. The BritishGovernment Code and Cypher School(GC&CS) atBletchley Parkdecrypted many communications between theOberkommando der Wehrmacht(OKW, German High Command) in Berlin and their army commands throughout occupied Europe, some of which were signed "Adolf Hitler, Führer".[3]These were intercepted non-Morseradio transmissions that had been enciphered by theLorenz SZteleprinterrotorstream cipherattachments. Decrypts of this traffic became an important source of "Ultra" intelligence, which contributed significantly to Allied victory.[4]
For its high-level secret messages, the German armed services enciphered eachcharacterusing various onlineGeheimschreiber(secret writer) stream cipher machines at both ends of atelegraphlink using the5-bitInternational Telegraphy Alphabet No. 2(ITA2). These machines were subsequently discovered to be the Lorenz SZ (SZ forSchlüssel-Zusatz, meaning "cipher attachment") for the army,[5]theSiemens and Halske T52for the air force and the Siemens T43, which was little used and never broken by the Allies.[6]
Bletchley Park decrypts of messages enciphered with theEnigma machinesrevealed that the Germans called one of their wireless teleprinter transmission systems"Sägefisch"(sawfish),[7]which led Britishcryptographersto refer to encrypted Germanradiotelegraphictraffic as "Fish".[5]"Tunny" (tunafish) was the name given to the first non-Morse link, and it was subsequently used for the cipher machines and their traffic.[8]
As with the entirely separatecryptanalysis of the Enigma, it was German operational shortcomings that allowed the initial diagnosis of the system, and a way into decryption.[9]Unlike Enigma, no physical machine reachedalliedhands until the very end of the war in Europe, long after wholesale decryption had been established.[10][11]The problems of decrypting Tunny messages led to the development of "Colossus", the world's first electronic, programmable digital computer, ten of which were in use by the end of the war,[12][13]by which time some 90% of selected Tunny messages were being decrypted at Bletchley Park.[14]
Albert W. Small, a cryptanalyst from theUS Army Signal Corpswho was seconded to Bletchley Park and worked on Tunny, said in his December 1944 report back toArlington Hallthat:
Daily solutions of Fish messages at GC&CS reflect a background of British mathematical genius, superb engineering ability, and solid common sense. Each of these has been a necessary factor. Each could have been overemphasised or underemphasised to the detriment of the solutions; a remarkable fact is that the fusion of the elements has been apparently in perfect proportion. The result is an outstanding contribution to cryptanalytic science.[15]
The Lorenz SZ cipher attachments implemented aVernamstream cipher(using theexclusive or (XOR)function) to encrypt theplaintextbits by combining them with thekeybits to produce theciphertextat the transmitting end. At the receiving end, an identically configured machine produced the same key stream which was combined with the ciphertext to produce the plaintext, i.e. the system implemented asymmetric-key algorithm.
The key stream was generated by a complex array of twelve wheels, ten of which delivered what should have been a cryptographically secure pseudorandom number stream. This stream was the product of XOR-ing the bits of the 5-bit character generated by the right hand five wheels (the chi (χ) wheels) with the bits from the left hand five (the psi (ψ) wheels). The chi wheels always moved on one position for every incoming plaintext or ciphertext character, but the psi wheels' movement was determined by the central two mu (μ) or "motor" wheels.[17][18]
The μ61 wheel moved on after each character and its cams determined the μ37 wheel's movement, whose cams in turn controlled the psi wheels' movement.[19] On all but the earliest machines, there was an additional factor that played into the moving on or not of the psi wheels. These were of four different types and were called "Limitations" at Bletchley Park. All involved some aspect of the previous positions of the machine's wheels.[20]
The numbers of cams on the twelve wheels of the SZ42 machines totalled 501 and were co-prime with each other, giving an extremely long period before the key sequence repeated. Each cam could either be in a raised position, in which case it generated x (as written at Bletchley Park), equivalent to a binary digit 1 in the logic of the system, or in the lowered position, in which case it generated •, equivalent to a binary digit 0.[10] The total possible number of patterns of raised cams was 2^501, which is an astronomically large number.[21] In practice about half of the cams on each wheel were in the raised position, as the Germans realized that otherwise there would be runs of x's and •'s, a cryptographic weakness.[22][23]
The process of working out the wheel cam patterns was called "wheel breaking" at Bletchley Park.[24]Deriving the start positions of the wheels for a particular transmission was termed "wheel setting" or simply "setting". The fact that thepsiwheels all moved together, but not with every input character, was a major weakness of the machines that contributed to British cryptanalytical success.
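As an illustration of the wheel movements just described, the following sketch simulates a highly simplified key-stream generator: the chi wheels step with every character, the mu61 wheel also steps with every character and its cams gate the mu37 wheel, whose cams in turn gate the five psi wheels. The wheel sizes are those given above, but the cam patterns are random stand-ins and the later "limitations" are ignored, so this is not a faithful model of an actual SZ42.

```python
# Toy model of SZ40/42 key-stream generation as described above.  Wheel sizes
# come from the article; the cam patterns are random stand-ins and the
# "limitations" are omitted, so this only illustrates the stepping logic.
import random

CHI_SIZES = [41, 31, 29, 26, 23]
PSI_SIZES = [43, 47, 51, 53, 59]

random.seed(0)
def wheel(n):                  # roughly half the cams raised, as the text notes
    return [random.randint(0, 1) for _ in range(n)]

chi = [wheel(n) for n in CHI_SIZES]
psi = [wheel(n) for n in PSI_SIZES]
mu61, mu37 = wheel(61), wheel(37)

chi_pos = [0] * 5
psi_pos = [0] * 5
mu_pos = [0, 0]                # current positions of mu61 and mu37

def next_key_char():
    """Return the next 5-bit key character as a list of bits."""
    key = [chi[i][chi_pos[i]] ^ psi[i][psi_pos[i]] for i in range(5)]
    # chi wheels and the mu61 wheel always move on one position
    for i in range(5):
        chi_pos[i] = (chi_pos[i] + 1) % CHI_SIZES[i]
    step_mu37 = mu61[mu_pos[0]] == 1          # mu61 cam drives mu37
    mu_pos[0] = (mu_pos[0] + 1) % 61
    if step_mu37:
        mu_pos[1] = (mu_pos[1] + 1) % 37
    if mu37[mu_pos[1]] == 1:                  # mu37 cam drives all psi wheels together
        for i in range(5):
            psi_pos[i] = (psi_pos[i] + 1) % PSI_SIZES[i]
    return key

stream = [next_key_char() for _ in range(10)]
print(stream[0], stream[1])
```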
Electro-mechanicaltelegraphy was developed in the 1830s and 1840s, well beforetelephony, and operated worldwide by the time of theSecond World War. An extensive system of cables linked sites within and between countries, with a standard voltage of −80 V indicating a "mark" and +80 V indicating a "space".[25]Where cable transmission became impracticable or inconvenient, such as for mobile German Army Units, radio transmission was used.
Teleprintersat each end of the circuit consisted of a keyboard and a printing mechanism, and very often a five-holeperforated paper-tapereading and punching mechanism. When usedonline, pressing an alphabet key at the transmitting end caused the relevant character to print at the receiving end. Commonly, however, thecommunication systeminvolved the transmitting operator preparing a set of messages offline by punching them onto paper tape, and then going online only for the transmission of the messages recorded on the tape. The system would typically send some ten characters per second, and so occupy the line or the radio channel for a shorter period of time than for online typing.
The characters of the message were represented by the codes of the International Telegraphy Alphabet No. 2 (ITA2). The transmission medium, either wire or radio, usedasynchronous serial communicationwith each character signaled by a start (space) impulse, 5 data impulses and 1½ stop (mark) impulses. At Bletchley Park mark impulses were signified byx("cross") and space impulses by•("dot").[26]For example, the letter "H" would be coded as••x•x.
The figure shift (FIGS) and letter shift (LETRS) characters determined how the receiving end interpreted the string of characters up to the next shift character. Because of the danger of a shift character being corrupted, some operators would type a pair of shift characters when changing from letters to numbers orvice versa. So they would type 55M88 to represent a full stop.[28]Such doubling of characters was very helpful for the statistical cryptanalysis used at Bletchley Park. After encipherment, shift characters had no special meaning.
The speed of transmission of a radio-telegraph message was three or four times that of Morse code, and a human listener could not interpret it. A standard teleprinter, however, would produce the text of the message. The Lorenz cipher attachment changed the plaintext of the message into ciphertext that was uninterpretable to those without an identical machine identically set up. This was the challenge faced by the Bletchley Park codebreakers.
Intercepting Tunny transmissions presented substantial problems. As the transmitters were directional, most of the signals were quite weak at receivers in Britain. Furthermore, there were some 25 differentfrequenciesused for these transmissions, and the frequency would sometimes be changed part way through. After the initial discovery of the non-Morse signals in 1940, a radio intercept station called the Foreign Office Research and Development Establishment was set up on a hill at Ivy Farm atKnockholtin Kent, specifically to intercept this traffic.[29][30]The centre was headed by Harold Kenworthy, had 30receiving setsand employed some 600 staff. It became fully operational early in 1943.
Because a single missed or corrupted character could make decryption impossible, the greatest accuracy was required.[31]The undulator technology used to record the impulses had originally been developed for high-speed Morse. It produced a visible record of the impulses on narrow paper tape. This was then read by people employed as "slip readers" who interpreted the peaks and troughs as the marks and spaces of ITA2 characters.[32]Perforated paper tape was then produced for telegraphic transmission to Bletchley Park where it was punched out.[33]
The Vernam cipher implemented by the Lorenz SZ machines enciphers the plaintext bitstream by combining it with a random or pseudorandom bitstream (the "keystream") to generate the ciphertext. This combination is done with the exclusive or (XOR) function, verbalised as "A or B but not both". It is represented by the following truth table, where x represents "true" and • represents "false": x ⊕ x = •, x ⊕ • = x, • ⊕ x = x, • ⊕ • = •.
Other names for this function are: exclusive disjunction, not equal (NEQ), andmodulo2 addition (without "carry") and subtraction (without "borrow"). Modulo 2 addition and subtraction are identical. Some descriptions of Tunny decryption refer to addition and some to differencing, i.e. subtraction, but they mean the same thing. The XOR operator is bothassociativeandcommutative.
Reciprocityis a desirable feature of a machine cipher so that the same machine with the same settings can be used either for enciphering or for deciphering. The Vernam cipher achieves this, as combining the stream of plaintext characters with the key stream produces the ciphertext, and combining the same key with the ciphertext regenerates the plaintext.[34]
Symbolically: Plaintext ⊕ Key = Ciphertext
and: Ciphertext ⊕ Key = Plaintext
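A two-line sketch of this reciprocity, using arbitrary 5-bit values rather than genuine ITA2 traffic:

```python
# Illustration of Vernam reciprocity: XOR-ing with the key stream enciphers,
# and XOR-ing the result with the same key stream deciphers.  Sample values
# are arbitrary 5-bit characters.
plain = [0b01010, 0b10100, 0b00111]
key   = [0b11001, 0b01110, 0b10101]
cipher = [p ^ k for p, k in zip(plain, key)]
assert [c ^ k for c, k in zip(cipher, key)] == plain   # the same operation recovers the plaintext
```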
Vernam's original idea was to use conventional telegraphy practice, with a paper tape of the plaintext combined with a paper tape of the key at the transmitting end, and an identical key tape combined with the ciphertext signal at the receiving end. Each pair of key tapes would have been unique (aone-time tape), but generating and distributing such tapes presented considerable practical difficulties. In the 1920s four men in different countries invented rotor Vernam cipher machines to produce a key stream to act instead of a key tape. The Lorenz SZ40/42 was one of these.[35]
Amonoalphabetic substitution ciphersuch as theCaesar ciphercan easily be broken, given a reasonable amount of ciphertext. This is achieved byfrequency analysisof the different letters of the ciphertext, and comparing the result with the knownletter frequencydistribution of the plaintext.[36]
With apolyalphabetic cipher, there is a different substitution alphabet for each successive character. So a frequency analysis shows an approximatelyuniform distribution, such as that obtained from a(pseudo) random number generator. However, because one set of Lorenz wheels turned with every character while the other did not, the machine did not disguise the pattern in the use of adjacent characters in the German plaintext.Alan Turingdiscovered this weakness and invented the differencing technique described below to exploit it.[37]
The pattern of which of the cams were in the raised position, and which in the lowered position, was changed daily on the motor wheels (μ37 and μ61). The chi wheel cam patterns were initially changed monthly. The psi wheel patterns were changed quarterly until October 1942, when the frequency was increased to monthly, and then to daily on 1 August 1944, when the frequency of changing the chi wheel patterns was also made daily.[38]
The number of start positions of the wheels was 43×47×51×53×59×37×61×41×31×29×26×23, which is approximately 1.6×10^19 (16 billion billion), far too large a number for cryptanalysts to try an exhaustive "brute-force attack". Sometimes the Lorenz operators disobeyed instructions and two messages were transmitted with the same start positions, a phenomenon termed a "depth". The method by which the transmitting operator told the receiving operator the wheel settings that he had chosen for the message he was about to transmit was termed the "indicator" at Bletchley Park.
In August 1942, the formulaic starts to the messages, which were useful to cryptanalysts, were replaced by some irrelevant text, which made identifying the true message somewhat harder. This new material was dubbedquatsch(German for "nonsense") at Bletchley Park.[39]
During the phase of the experimental transmissions, the indicator consisted of twelve German forenames, the initial letters of which indicated the position to which the operators turned the twelve wheels. As well as showing when two transmissions were fully in depth, it also allowed the identification of partial depths where two indicators differed only in one or two wheel positions. From October 1942 the indicator system changed to the sending operator transmitting the unenciphered letters QEP[40]followed by a two digit number. This number was taken serially from a code book that had been issued to both operators and gave, for each QEP number, the settings of the twelve wheels. The books were replaced when they had been used up, but between replacements, complete depths could be identified by the re-use of a QEP number on a particular Tunny link.[41]
The first step in breaking a new cipher is to diagnose the logic of the processes of encryption and decryption. In the case of a machine cipher such as Tunny, this entailed establishing the logical structure and hence functioning of the machine. This was achieved without the benefit of seeing a machine—which only happened in 1945, shortly before the allied victory in Europe.[45]The enciphering system was very good at ensuring that the ciphertextZcontained no statistical, periodic or linguistic characteristics to distinguish it from random. However this did not apply toK,χ,ψ'andD,which was the weakness that meant that Tunny keys could be solved.[46]
During the experimental period of Tunny transmissions when the twelve-letter indicator system was in use,John Tiltman, Bletchley Park's veteran and remarkably gifted cryptanalyst, studied the Tunny ciphertexts and identified that they used a Vernam cipher.
When two transmissions (a and b) use the same key, i.e. they are in depth, combining them eliminates the effect of the key.[47] Let us call the two ciphertexts Za and Zb, the key K and the two plaintexts Pa and Pb. We then have: Za ⊕ Zb = Pa ⊕ Pb.
If the two plaintexts can be worked out, the key can be recovered from either ciphertext–plaintext pair, e.g. K = Za ⊕ Pa or K = Zb ⊕ Pb.
On 31 August 1941, two long messages were received that had the same indicator HQIBPEXEZMUG. The first seven characters of these two ciphertexts were the same, but the second message was shorter. The first 15 characters of the two messages were as follows (in Bletchley Park interpretation):
John Tiltman tried various likely pieces of plaintext, i.e. "cribs", against the Za ⊕ Zb string and found that the first plaintext message started with the German word SPRUCHNUMMER (message number). In the second plaintext, the operator had used the common abbreviation NR for NUMMER. There were more abbreviations in the second message, and the punctuation sometimes differed. This allowed Tiltman to work out, over ten days, the plaintext of both messages, since a sequence of plaintext characters discovered in Pa could then be tried against Pb and vice versa.[48] In turn, this yielded almost 4000 characters of key.[49]
Members of the Research Section worked on this key to try to derive a mathematical description of the key generating process, but without success.Bill Tuttejoined the section in October 1941 and was given the task. He had read chemistry and mathematics atTrinity College, Cambridgebefore being recruited to Bletchley Park. At his training course, he had been taught theKasiski examinationtechnique of writing out a key on squared paper with a new row after a defined number of characters that was suspected of being the frequency of repetition of the key. If this number was correct, the columns of the matrix would show more repetitions of sequences of characters than chance alone.
Tutte thought that it was possible that, rather than using this technique on the whole letters of the key, which were likely to have a long frequency of repetition, it might be worth trying it on the sequence formed by taking only one impulse (bit) from each letter, on the grounds that "the part might be cryptographically simpler than the whole".[50]Given that the Tunny indicators used 25 letters (excluding J) for 11 of them, but only 23 letters for the twelfth, he tried Kasiski's technique on the first bit of the key characters using a repetition of 25 × 23 = 575. This did not produce a large number of repetitions in the columns, but Tutte did observe the phenomenon on a diagonal. He therefore tried again with 574, which showed up repeats in the columns. Recognising that theprime factorsof this number are 2, 7 and 41, he tried again with a period of 41 and "got a rectangle of dots and crosses that was replete with repetitions".[51]
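The column-repetition test that Tutte applied can be sketched as follows: write a bit stream in rows of a trial width and count how often a bit equals the bit directly below it. The key stream here is simulated with period 41, since no genuine Tunny key is available, and the scoring rule used is one simple choice among several.

```python
# Sketch of a Kasiski-style column test on a single impulse of the key.
# Writing the bits in rows of a trial width and counting vertical
# coincidences scores about 50% for a wrong width and noticeably more when
# the width matches a real period.  The stream is a simulated, noisy
# period-41 sequence, not genuine Tunny key.
import random

random.seed(1)
chi1 = [random.randint(0, 1) for _ in range(41)]                       # stand-in chi1 pattern
bits = [chi1[i % 41] ^ (random.random() < 0.15) for i in range(4000)]  # noisy period-41 stream

def coincidence_rate(bits, width):
    rows = [bits[i:i + width] for i in range(0, len(bits) - width, width)]
    hits = total = 0
    for upper, lower in zip(rows, rows[1:]):
        for a, b in zip(upper, lower):
            hits += (a == b)
            total += 1
    return hits / total

for width in (23, 31, 37, 41, 43):
    print(width, round(coincidence_rate(bits, width), 3))   # 41 stands out
```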
It was clear, however, that the sequence of first bits was more complicated than that produced by a single wheel of 41 positions. Tutte called this component of the key χ1 (chi). He figured that there was another component, which was XOR-ed with this, that did not always change with each new character, and that this was the product of a wheel that he called ψ1 (psi). The same applied for each of the five bits (indicated here by subscripts). So for a single character, the key K consisted of two components: K = χ ⊕ ψ.
The actual sequence of characters added by thepsiwheels, including those when they do not advance, was referred to as theextended psi,[43]and symbolised byψ′
Tutte's derivation of theψcomponent was made possible by the fact that dots were more likely than not to be followed by dots, and crosses more likely than not to be followed by crosses. This was a product of a weakness in the German key setting, which they later stopped. Once Tutte had made this breakthrough, the rest of the Research Section joined in to study the other bits, and it was established that the fiveψwheels all moved together under the control of twoμ(muor "motor") wheels.
Diagnosing the functioning of the Tunny machine in this way was a truly remarkable cryptanalytical achievement, and was described when Tutte was inducted as Officer of the Order of Canada in October 2001, as "one of the greatest intellectual feats of World War II".[52]
In July 1942Alan Turingspent a few weeks in the Research Section.[53]He had become interested in the problem of breaking Tunny from the keys that had been obtained from depths.[54]In July, he developed a method of deriving the cam settings ("wheel breaking") from a length of key. It became known as "Turingery"[55](playfully dubbed "Turingismus" by Peter Ericsson,Peter HiltonandDonald Michie[54]) and introduced the important method of "differencing" on which much of the rest of solving Tunny keys in the absence of depths, was based.[55]
The search was on for a process that would manipulate the ciphertext or key to produce a frequency distribution of characters that departed from the uniformity that the enciphering process aimed to achieve. Turing worked out that the XOR combination of the values of successive (adjacent) characters in a stream of ciphertext or key emphasised any departures from a uniform distribution.[55][56] The resultant stream was called the difference (symbolised by the Greek letter "delta" Δ)[57] because XOR is the same as modulo 2 subtraction. So, for a stream of characters S, the difference ΔS was obtained as follows, where underline indicates the succeeding character: ΔS = S ⊕ S̲.
The stream S may be ciphertext Z, plaintext P, key K or either of its two components χ and ψ. The relationship amongst these elements still applies when they are differenced. For example, as well as: K = χ ⊕ ψ′
It is the case that: ΔK = Δχ ⊕ Δψ′
Similarly for the ciphertext, plaintext and key components: Z = P ⊕ χ ⊕ ψ′
So: ΔZ = ΔP ⊕ Δχ ⊕ Δψ′
The reason that differencing provided a way into Tunny was that, although the frequency distribution of characters in the ciphertext could not be distinguished from a random stream, the same was not true for a version of the ciphertext from which the chi element of the key had been removed. This is because, where the plaintext contained a repeated character and the psi wheels did not move on, the differenced psi character (Δψ) would be the null character ('/' at Bletchley Park). When XOR-ed with any character, this character has no effect, so in these circumstances ΔK = Δχ. The ciphertext modified by the removal of the chi component of the key was called the de-chi, D, at Bletchley Park,[58] and the process of removing it was known as "de-chi-ing". Similarly the removal of the psi component was known as "de-psi-ing" (or "deep sighing" when it was particularly difficult).[59]
So the delta de-chi ΔD was: ΔD = ΔP ⊕ Δψ′
Repeated characters in the plaintext were more frequent both because of the characteristics of German (EE, TT, LL and SS are relatively common),[60]and because telegraphists frequently repeated the figures-shift and letters-shift characters[61]as their loss in an ordinary telegraph transmission could lead to gibberish.[62]
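A small sketch of the differencing operation, showing that repeated characters (such as doubled shift characters) difference to the null character; the sample values are arbitrary.

```python
# Turing's differencing: each 5-bit character is XOR-ed with its successor.
# Wherever the stream repeats a character, the differenced character is the
# null character, written '/' at Bletchley Park.
def delta(stream):
    return [a ^ b for a, b in zip(stream, stream[1:])]

sample = [0b10110, 0b10110, 0b00011, 0b00011, 0b00011, 0b01001]
print([format(c, '05b') for c in delta(sample)])
# ['00000', '10101', '00000', '00000', '01010'] - the nulls mark the repeats
```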
To quote the General Report on Tunny:
Turingery introduced the principle that the key differenced at one, now calledΔΚ, could yield information unobtainable from ordinary key. ThisΔprinciple was to be the fundamental basis of nearly all statistical methods of wheel-breaking and setting.[55]
Differencing was applied to each of the impulses of the ITA2 coded characters.[63] So, for the first impulse, which was enciphered by wheels χ1 and ψ1, differenced at one: ΔK1 = Δχ1 ⊕ Δψ′1
And for the second impulse: ΔK2 = Δχ2 ⊕ Δψ′2
And so on.
The periodicity of thechiandpsiwheels for each impulse (41 and 43 respectively for the first impulse) is also reflected in the pattern ofΔK. However, given that thepsiwheels did not advance for every input character, as did thechiwheels, it was not simply a repetition of the pattern every 41 × 43 = 1763 characters forΔK1, but a more complex sequence.
Turing's method of deriving the cam settings of the wheels from a length of key obtained from a depth, involved aniterativeprocess. Given that the deltapsicharacter was the null character '/' half of the time on average, an assumption thatΔK=Δχhad a 50% chance of being correct. The process started by treating a particularΔKcharacter as being theΔχfor that position. The resulting putative bit pattern ofxand•for eachchiwheel, was recorded on a sheet of paper that contained as many columns as there were characters in the key, and five rows representing the five impulses of theΔχ. Given the knowledge from Tutte's work, of the periodicity of each of the wheels, this allowed the propagation of these values at the appropriate positions in the rest of the key.
A set of five sheets, one for each of thechiwheels, was also prepared. These contained a set of columns corresponding in number to the cams for the appropriatechiwheel, and were referred to as a 'cage'. So theχ3cage had 29 such columns.[64]Successive 'guesses' ofΔχvalues then produced further putative cam state values. These might either agree or disagree with previous assumptions, and a count of agreements and disagreements was made on these sheets. Where disagreements substantially outweighed agreements, the assumption was made that theΔψcharacter was not the null character '/', so the relevant assumption was discounted. Progressively, all the cam settings of thechiwheels were deduced, and from them, thepsiand motor wheel cam settings.
As experience of the method developed, improvements were made that allowed it to be used with much shorter lengths of key than the original 500 or so characters.[55]
The Testery was the section at Bletchley Park that performed the bulk of the work involved in decrypting Tunny messages.[65]By July 1942, the volume of traffic was building up considerably. A new section was therefore set up, led byRalph Tester—hence the name. The staff consisted mainly of ex-members of the Research Section,[1]and included Peter Ericsson,Peter Hilton, Denis Oswald andJerry Roberts.[66]The Testery's methods were almost entirely manual, both before and after the introduction of automated methods in theNewmanryto supplement and speed up their work.[14][1]
The first phase of the work of the Testery ran from July to October, with the predominant method of decryption being based on depths and partial depths.[67]After ten days, however, the formulaic start of the messages was replaced by nonsensicalquatsch, making decryption more difficult. This period was productive nonetheless, even though each decryption took considerable time. Finally, in September, a depth was received that allowed Turing's method of wheel breaking, "Turingery", to be used, leading to the ability to start reading current traffic. Extensive data about the statistical characteristics of the language of the messages was compiled, and the collection of cribs extended.[55]
In late October 1942 the original, experimental Tunny link was closed and two new links (Codfish and Octopus) were opened. With these and subsequent links, the 12-letter indicator system of specifying the message key was replaced by the QEP system. This meant that only full depths could be recognised—from identical QEP numbers—which led to a considerable reduction in traffic decrypted.
Once the Newmanry became operational in June 1943, the nature of the work performed in the Testery changed, with decrypts, and wheel breaking no longer relying on depths.
The so-called "British Tunny Machine" was a device that exactly replicated the functions of the SZ40/42 machines. It was used to produce the German cleartext from a ciphertext tape, after the cam settings had been determined.[68]The functional design was produced at Bletchley Park where ten Testery Tunnies were in use by the end of the war. It was designed and built inTommy Flowers' laboratory at theGeneral Post Office Research Stationat Dollis Hill byGil Hayward,"Doc" Coombs, Bill Chandler and Sid Broadhurst.[69]It was mainly built from standard British telephone exchangeelectro-mechanicalequipment such asrelaysanduniselectors. Input and output was by means of a teleprinter with paper tape reading and punching.[70]These machines were used in both theTesteryand later theNewmanry.Dorothy Du Boissonwho was a machine operator and a member of theWomen's Royal Naval Service(Wren), described plugging up the settings as being like operating an old fashioned telephone exchange and that she received electric shocks in the process.[71]
When Flowers was invited by Hayward to try the first British Tunny machine at Dollis Hill by typing in the standard test phrase: "Now is the time for all good men to come to the aid of the party", he much appreciated that the rotor functions had been set up to provide the followingWordsworthianoutput:[72]
Additional features were added to the British Tunnies to simplify their operation. Further refinements were made for the versions used in the Newmanry, the third Tunny being equipped to produce de-chitapes.[73][74]
TheNewmanrywas a section set up underMax Newmanin December 1942 to look into the possibility of assisting the work of the Testery by automating parts of the processes of decrypting Tunny messages. Newman had been working with Gerry Morgan, head of the Research Section on ways of breaking Tunny when Bill Tutte approached them in November 1942 with the idea of what became known as the "1+2 break in".[75]This was recognised as being feasible, but only if automated.
Newman produced a functional specification of what was to become the "Heath Robinson" machine.[75]He recruited thePost Office Research Stationat Dollis Hill, andDr C.E. Wynn-Williamsat theTelecommunications Research Establishment(TRE) at Malvern to implement his idea. Work on the engineering design started in January 1943 and the first machine was delivered in June. The staff at that time consisted of Newman,Donald Michie,Jack Good, two engineers and 16 Wrens. By the end of the war the Newmanry contained three Robinson machines, ten Colossus Computers and a number of British Tunnies. The staff were 26 cryptographers, 28 engineers and 275 Wrens.[76]
The automation of these processes required the processing of large quantities of punched paper tape such as those on which the enciphered messages were received. Absolute accuracy of these tapes and their transcription was essential, as a single character in error could invalidate or corrupt a huge amount of work. Jack Good introduced the maxim "If it's not checked it's wrong".[77]
W. T. Tuttedeveloped a way of exploiting the non-uniformity ofbigrams(adjacent letters) in the German plaintext using the differenced cyphertext and key components. His method was called the "1+2 break in", or "double-delta attack".[78]The essence of this method was to find the initial settings of thechicomponent of the key by exhaustively trying all positions of its combination with the ciphertext, and looking for evidence of the non-uniformity that reflected the characteristics of the original plaintext.[79][80]The wheel breaking process had to have successfully produced the current cam settings to allow the relevant sequence of characters of thechiwheels to be generated. It was totally impracticable to generate the 22 million characters from all five of thechiwheels, so it was initially limited to 41 × 31 = 1271 from the first two.
Given that for each of the five impulses i: Zi = χi ⊕ ψi ⊕ Pi
and hence: Pi = Zi ⊕ χi ⊕ ψi,
for the first two impulses: P1 ⊕ P2 = Z1 ⊕ Z2 ⊕ χ1 ⊕ χ2 ⊕ ψ1 ⊕ ψ2
Calculating a putativeP1⊕ P2in this way for each starting point of theχ1⊕χ2sequence would yieldxs and•s with, in the long run, a greater proportion of•s when the correct starting point had been used. Tutte knew, however, that using thedifferenced(∆) values amplified this effect[81]because any repeated characters in the plaintext would always generate•, and similarly ∆ψ1⊕ ∆ψ2would generate•whenever thepsiwheels did not move on, and about half of the time when they did – some 70% overall.
Tutte analyzed a decrypted ciphertext with the differenced version of the above function: ∆Z1 ⊕ ∆Z2 ⊕ ∆χ1 ⊕ ∆χ2
and found that it generated•some 55% of the time.[82]Given the nature of the contribution of thepsiwheels, the alignment ofchi-stream with the ciphertext that gave the highest count of•s from(∆Z1⊕ ∆Z2⊕ ∆χ1⊕ ∆χ2)was the one that was most likely to be correct.[83]This technique could be applied to any pair of impulses and so provided the basis of an automated approach to obtaining the de-chi(D) of a ciphertext, from which thepsicomponent could be removed by manual methods.
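The shape of this attack can be sketched as follows. The message, wheel patterns and biases below are randomly generated stand-ins (the plaintext impulses are given artificial repeats, and the two psi streams stay or move together), so only the structure of the search over the 41 × 31 = 1271 start positions is faithful to the method described above.

```python
# Simplified simulation of Tutte's "1+2 break in": for every pair of chi1/chi2
# start positions, form delta(Z1) ^ delta(Z2) ^ delta(chi1) ^ delta(chi2) and
# count the dots (zeros).  The data are stand-ins, not genuine Tunny traffic.
import random

random.seed(2)
N = 2000
chi1 = [random.randint(0, 1) for _ in range(41)]
chi2 = [random.randint(0, 1) for _ in range(31)]

# Biased "plaintext" impulses: character repeats make the differenced plaintext favour dots.
p1, p2 = [0] * N, [0] * N
for i in range(1, N):
    p1[i] = p1[i - 1] if random.random() < 0.6 else random.randint(0, 1)
    p2[i] = p2[i - 1] if random.random() < 0.6 else random.randint(0, 1)

# Extended psi streams: the psi wheels stay or move on together.
psi1_pat = [random.randint(0, 1) for _ in range(43)]
psi2_pat = [random.randint(0, 1) for _ in range(47)]
pos1 = pos2 = 0
psi1, psi2 = [], []
for _ in range(N):
    psi1.append(psi1_pat[pos1])
    psi2.append(psi2_pat[pos2])
    if random.random() < 0.5:
        pos1, pos2 = (pos1 + 1) % 43, (pos2 + 1) % 47

true1, true2 = 17, 5                      # the start positions to be recovered
z1 = [p1[i] ^ psi1[i] ^ chi1[(true1 + i) % 41] for i in range(N)]
z2 = [p2[i] ^ psi2[i] ^ chi2[(true2 + i) % 31] for i in range(N)]

def delta(s):
    return [a ^ b for a, b in zip(s, s[1:])]

dz = [a ^ b for a, b in zip(delta(z1), delta(z2))]

def dot_count(s1, s2):
    # dots (zeros) of delta(Z1) ^ delta(Z2) ^ delta(chi1) ^ delta(chi2)
    return sum(
        1 - (dz[i]
             ^ chi1[(s1 + i) % 41] ^ chi1[(s1 + i + 1) % 41]
             ^ chi2[(s2 + i) % 31] ^ chi2[(s2 + i + 1) % 31])
        for i in range(len(dz)))

best = max(((s1, s2) for s1 in range(41) for s2 in range(31)),
           key=lambda t: dot_count(*t))
print(best)   # should recover (17, 5)
```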
Heath Robinsonwas the first machine produced to automate Tutte's 1+2 method. It was given the name by theWrenswho operated it, after cartoonistWilliam Heath Robinson, who drew immensely complicated mechanical devices for simple tasks, similar to the American cartoonistRube Goldberg.
The functional specification of the machine was produced by Max Newman. The main engineering design was the work of Frank Morrell[84]at thePost Office Research Stationat Dollis Hill in North London, with his colleague Tommy Flowers designing the "Combining Unit". DrC. E. Wynn-Williamsfrom theTelecommunications Research Establishmentat Malvern produced the high-speed electronic valve and relay counters.[85]Construction started in January 1943,[86]the prototype machine was in use at Bletchley Park in June.[87]
The main parts of the machine were:
The prototype machine was effective despite a number of serious shortcomings. Most of these were progressively overcome in the development of what became known as "Old Robinson".[88]
Tommy Flowers had reservations about Heath Robinson's two synchronised tape loops, and his previous, unique experience of thermionic valves (vacuum tubes) led him to realize that a better machine could be produced using electronics. Instead of the key stream being read from a second punched paper tape, an electronically generated key stream could allow much faster and more flexible processing. Flowers' suggestion that this could be achieved with a machine that was entirely electronic and would contain between one and two thousand valves was treated with incredulity at both the Telecommunications Research Establishment and at Bletchley Park, as it was thought that it would be "too unreliable to do useful work". He did, however, have the support of the Controller of Research at Dollis Hill, W. Gordon Radley,[89] and he implemented these ideas, producing Colossus, the world's first electronic, digital computing machine that was at all programmable, in the remarkably short time of ten months.[90] In this he was assisted by his colleagues at the Post Office Research Station, Dollis Hill: Sidney Broadhurst, William Chandler, Allen Coombs and Harry Fensom.
The prototype Mark 1 Colossus (Colossus I), with its 1500 valves, became operational at Dollis Hill in December 1943[2]and became fully operational at Bletchley Park on 5 February 1944.[91]This processed the message at 5000 characters per second using the impulse from reading the tape's sprocket holes to act as theclock signal. It quickly became evident that this was a huge leap forward in cryptanalysis of Tunny. Further Colossus machines were ordered and the orders for more Robinsons cancelled. An improved Mark 2 Colossus (Colossus II) contained 2400 valves and first worked at Bletchley Park on 1 June 1944, just in time for theD-day Normandy landings.
The main parts of this machine were:[92]
The five parallel processing units allowed Tutte's "1+2 break in" and other functions to be run at an effective speed of 25,000 characters per second by the use of circuitry invented by Flowers that would now be called ashift register. Donald Michie worked out a method of using Colossus to assist in wheel breaking as well as for wheel setting in early 1944.[93]This was then implemented in special hardware on later Colossi.
A total of ten Colossus computers were in use and an eleventh was being commissioned at the end of the war in Europe (VE-Day).[94] Of the ten, seven were used for "wheel setting" and three for "wheel breaking".[95]
As well as the commercially produced teleprinters and re-perforators, a number of other machines were built to assist in the preparation and checking of tapes in the Newmanry and Testery.[96][97]The approximate complement as of May 1945 was as follows.
Working out the start position of thechi(χ) wheels required first that their cam settings had been determined by "wheel breaking". Initially, this was achieved by two messages having been sent indepth.
The number of start positions for the first two wheels,χ1andχ2was 41×31 = 1271. The first step was to try all of these start positions against the message tape. This wasTutte's "1+2 break in"which involved computing(∆Z1⊕ ∆Z2⊕ ∆χ1⊕ ∆χ2)—which gives a putative (∆D1⊕ ∆D2)—and counting the number of times this gave•. Incorrect starting positions would, on average, give a dot count of 50% of the message length. On average, the dot count for a correct starting point would be 54%, but there was inevitably a considerable spread of values around these averages.[83]
Both Heath Robinson, which was developed into what became known as "Old Robinson", and Colossus were designed to automate this process. Statistical theory allowed the derivation of measures of how far any count was from the 50% expected with an incorrect starting point for thechiwheels. This measure of deviation from randomness was called sigma. Starting points that gave a count of less than 2.5 × sigma, named the "set total", were not printed out.[105]The ideal for a run to setχ1andχ2was that a single pair of trial values produced one outstanding value for sigma thus identifying the start positions of the first twochiwheels. An example of the output from such a run on a Mark 2 Colossus with its five counters: a, b, c, d and e, is given below.
With an average-sized message, this would take about eight minutes. However, by utilising the parallelism of the Mark 2 Colossus, the number of times the message had to be read could be reduced by a factor of five, from 1271 to 255.[107]Having identified possibleχ1,χ2start positions, the next step was to try to find the start positions for the otherchiwheels. In the example given above, there is a single setting ofχ1= 36 andχ2= 21 whose sigma value makes it stand out from the rest. This was not always the case, and Small enumerates 36 different further runs that might be tried according to the result of theχ1,χ2run.[108]At first the choices in this iterative process were made by the cryptanalyst sitting at the typewriter output, and calling out instructions to the Wren operators. Max Newman devised a decision tree and then set Jack Good and Donald Michie the task of devising others.[109]These were used by the Wrens without recourse to the cryptanalysts if certain criteria were met.[110]
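Assuming, as implied above, that the dot count for a wrong start position behaves like a binomial variable with success probability 1/2 over the length of the message, the sigma and "set total" figures can be sketched as follows; the message length is an arbitrary example and the 54% figure for a correct position is the one quoted above.

```python
# Sketch of the sigma and "set total" calculation implied by the text,
# assuming the dot count at a wrong start position is Binomial(length, 1/2).
# The message length is an illustrative assumption.
from math import sqrt

length = 10_000                     # characters examined in the run
sigma = 0.5 * sqrt(length)          # standard deviation of the dot count
set_total = length / 2 + 2.5 * sigma
expected_correct = 0.54 * length    # average count at the true chi settings
print(round(sigma), round(set_total), round(expected_correct))
```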
In one of Small's examples mentioned above, the next run was made with the first two chi wheels set to the start positions found, and with three separate parallel explorations of the remaining three chi wheels. Such a run was called a "short run" and took about two minutes.[107]
So the probable start positions for thechiwheels are:χ1= 36,χ2= 21,χ3= 01,χ4= 19,χ5= 04. These had to be verified before the de-chi(D) message was passed to the Testery. This involved Colossus performing a count of the frequency of the 32 characters inΔD. Small describes the check of the frequency count of theΔDcharacters as being the "acid test",[112]and that practically every cryptanalyst and Wren in the Newmanry and Testery knew the contents of the following table by heart.
If the derived start points of thechiwheels passed this test, the de-chi-ed message was passed to the Testery where manual methods were used to derive thepsiand motor settings. As Small remarked, the work in the Newmanry took a great amount of statistical science, whereas that in the Testery took much knowledge of language and was of great interest as an art. Cryptanalyst Jerry Roberts made the point that this Testery work was a greater load on staff than the automated processes in the Newmanry.[14]
|
https://en.wikipedia.org/wiki/Cryptanalysis_of_the_Lorenz_cipher
|
Irving John Good(9 December 1916 – 5 April 2009)[1][2]was a British mathematician who worked as acryptologistatBletchley ParkwithAlan Turing. After theSecond World War, Good continued to work with Turing on the design of computers andBayesian statisticsat theUniversity of Manchester. Good moved to the United States where he was a professor atVirginia Tech.
He was bornIsadore Jacob Gudakto aPolish Jewishfamily in London. He later anglicised his name to Irving John Good and signed his publications "I. J. Good."
An originator of the concept known as theintelligence explosion, Good served as consultant on supercomputers toStanley Kubrick, director of the 1968 film2001: A Space Odyssey.[3]
Good was born Isadore Jacob Gudak to PolishJewishparents in London. His father was a watchmaker, who later managed and owned a successful fashionable jewellery shop, and was also a notable Yiddish writer writing under thepen nameof Moshe Oved. Good was educated atthe Haberdashers' Aske's Boys' School, at the time inHampsteadin northwest London, where, according toDan van der Vat, Good effortlessly outpaced the mathematicscurriculum.[3]
Good studied mathematics atJesus College, Cambridge, graduating in 1938 and winning theSmith's Prizein 1940.[4]He did research underG. H. HardyandAbram Besicovitchbefore moving to Bletchley Park in 1941 on completing his doctorate.
On 27 May 1941, having just obtained his doctorate at Cambridge, Good walked intoHut 8, Bletchley's facility for breaking German naval ciphers, for his first shift. This was the day that Britain'sRoyal Navydestroyed theGerman battleshipBismarckafter it had sunk the Royal Navy'sHMSHood. Bletchley had contributed toBismarck's destruction by discovering, through wireless-traffic analysis, that the German flagship was sailing forBrest, France, rather thanWilhelmshaven, from which she had set out.[3]Hut 8 had not, however, been able to decrypt on a current basis the 22 German NavalEnigmamessages that had been sent toBismarck. The German Navy's Enigma cyphers were considerably more secure than those of the German Army or Air Force, which had been well penetrated by 1940. Naval messages were taking three to seven days to decrypt, which usually made them operationally useless for the British. This was about to change, however, with Good's help.[3]
Alan Turing... had caught Good sleeping on the floor while on duty during his first night shift. At first, Turing thought Good was ill, but he was cross when Good explained that he was just taking a short nap because he was tired. For days afterwards, Turing would not deign to speak to Good, and he left the room if Good walked in. The new recruit only won Turing's respect after he solved the bigram tables problem. During a subsequent night shift, when there was no more work to be done, it dawned on Good that there might be another chink in the German indicating system. The German telegraphists had to add dummy letters to the trigrams which they selected out of theKenngruppenbuch... Good wondered if their choice of dummy letters was random, or whether there was a bias towards particular letters. After inspecting some messages which had been broken, he discovered that there was a tendency to use some letters more than others. That being the case, all the codebreakers had to do, was to work back from the indicators given at the beginning of each message, and apply each bigram table in turn in the same way asJoan Clarkehad done before. The bigram table which produced one of the popular dummy letters was probably the correct one. When Good mentioned his discovery to Alan Turing, Turing was very embarrassed, and said, 'I could have sworn that I tried that.' It quickly became an important part of theBanburismusprocedure.
Jack Good's refusal to go on working when tired was vindicated by a subsequent incident. During another long night shift, he had been baffled by his failure to break a doubly encipheredOffiziermessage. This was one of the messages which was supposed to be enciphered initially with the Enigma set up in accordance with theOffiziersettings, and subsequently with the general Enigma settings in place. However, while he was sleeping before returning for another shift, he dreamed that the order had been reversed; the general settings had been applied before theOffiziersettings. Next day he found that the message had yet to be read, so he applied the theory which had come to him during the night. It worked; he had broken the code in his sleep.[5]
Good served with Turing for nearly two years.[3]Subsequently, he worked withDonald MichieinMax Newman's group on theFishciphers, leading to the development of theColossus computer.[6]
Good was a member of the Bletchley Chess Club which defeated theOxford University Chess Club8–4 in a twelve-board team match held on 2 December 1944. Good played fourth board for Bletchley Park, withConel Hugh O'Donel Alexander,Harry GolombekandJames Macrae Aitkenin the top three spots.[7]He won his game againstSir Robert Robinson.[8]
In 1947, Newman invited Good to join him and Turing atManchester University. There, for three years, Good lectured in mathematics and researched computers, including theManchester Mark 1.[3]
In 1948, Good was recruited back to the Government Communications Headquarters (GCHQ). He remained there until 1959, while also taking up a brief associate professorship atPrinceton Universityand a short consultancy withIBM.[3]
From 1959 until he moved to the US in 1967, Good held government-funded positions and from 1964 a senior research fellowship atTrinity College, Oxford, and theAtlas Computer Laboratory, where he continued his interests in computing, statistics and chess.[2]He later left Oxford, declaring it "a little stiff".
In 1967, Good moved to the United States, where he was appointed a research professor of statistics atVirginia Polytechnic Institute and State University. In 1969, he was appointed a University Distinguished Professor at Virginia Tech, and in 1994 Emeritus University Distinguished Professor.[9]In 1973, he was elected as aFellow of the American Statistical Association.[10]
He later said about his arrival in Virginia (from Britain) in 1967 to start teaching at VPI, where he taught from 1967 to 1994:
I arrived in Blacksburg in the seventh hour of the seventh day of the seventh month of the year seven in the seventh decade, and I was put in Apartment 7 of Block 7...all by chance.[11]
Good's published work ran to over three million words.[3]He was known for his work onBayesian statistics.KassandRaftery[12]credit Good (and in turn Turing) with coining the termBayes factor. Good published a number of books onprobability theory. In 1958, he published an early version of what later became known as thefast Fourier transform[13]but it did not become widely known. He playedchessto county standard and helped populariseGo, an Asian boardgame, through a 1965 article inNew Scientist(he had learned the rules from Alan Turing).[14]In 1965, he originated the concept now known as "intelligence explosion" or the "technological singularity", which anticipates the eventual advent ofsuperhuman intelligence:
Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an 'intelligence explosion,' and the intelligence of man would be left far behind... Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control. It is curious that this point is made so seldom outside of science fiction. It is sometimes worthwhile to take science fiction seriously.[15][16]
Good's authorship of treatises such as his 1965 "Speculations Concerning the First Ultraintelligent Machine"[17] and "Logic of Man and Machine"[18] made him the obvious person for Stanley Kubrick to consult when filming 2001: A Space Odyssey (1968), one of whose principal characters was the paranoid HAL 9000 supercomputer.[3] In 1995, Good was elected a member of the Academy of Motion Picture Arts and Sciences.[2] Graphcore's proposed $600m foundation-model computer, which uses Human-Centered Artificial Intelligence and will have the capacity to run programs with 500 trillion parameters, was named in honour of Good's intellectual heritage.[19][Notes 1][20] According to The Economist, Graphcore aims to take the "first step" towards creating I. J. Good's imagined "Ultraintelligent Machine".[19]
According to his assistant, Leslie Pendleton, in 1998 Good wrote in an unpublished autobiographical statement that he suspected an ultraintelligent machine would lead tothe extinction of man.[21]
Good published a paper under the names IJ Good and "K Caj Doog"—the latter, his own nickname spelled backwards. In a 1988 paper,[22]he introduced its subject by saying, "Many people have contributed to this topic but I shall mainly review the writings of I. J. Good because I have read them all carefully." InVirginiahe chose, as hisvanity licence plate, "007IJG," in subtle reference to hisSecond World Warintelligencework.[3]
Good never married.[23]After going through ten assistants in his first thirteen years at Virginia Tech, he hired Leslie Pendleton, who proved up to the task of managing his quirks. He wanted to marry her, but she refused. Although there was speculation, they were never more than friends, but she was his assistant, companion, and friend for the rest of his life.[24]
Good died on 5 April 2009 ofnatural causesinRadford, Virginia, aged 92.[25][26]
Good, I. J. "Explicativity, corroboration, and the relative odds of hypotheses." Synthese 30 (1975): 39–73.[30]
|
https://en.wikipedia.org/wiki/I._J._Good
|
Good–Turing frequency estimation is a statistical technique for estimating the probability of encountering an object of a hitherto unseen species, given a set of past observations of objects from different species. In drawing balls from an urn, the 'objects' would be balls and the 'species' would be the distinct colours of the balls (finite but unknown in number). After drawing Rred red balls, Rblack black balls and Rgreen green balls, we would ask what is the probability of drawing a red ball, a black ball, a green ball or one of a previously unseen colour.
Good–Turing frequency estimation was developed byAlan Turingand his assistantI. J. Goodas part of their methods used atBletchley Parkfor crackingGermanciphersfor theEnigma machineduringWorld War II. Turing at first modelled the frequencies as amultinomial distribution, but found it inaccurate. Good developedsmoothingalgorithms to improve the estimator's accuracy.
The discovery was recognised as significant when published by Good in 1953,[1]but the calculations were difficult so it was not used as widely as it might have been.[2]The method even gained some literary fame due to theRobert HarrisnovelEnigma.
In the 1990s,Geoffrey Sampsonworked with William A. Gale ofAT&Tto create and implement a simplified and easier-to-use variant of the Good–Turing method[3][4]described below. Various heuristic justifications[5]and a simple combinatorial derivation have been provided.[6]
The Good–Turing estimator is largely independent of the distribution of species frequencies.[7]
For example, N1 is the number of species for which only one individual was observed. Note that the total number of objects observed, N, can be found from N = Σr r·Nr.
The first step in the calculation is to estimate the probability that a future observed individual (or the next observed individual) is a member of a thus far unseen species. This estimate is:[8] p0 = N1 / N.
The next step is to estimate the probability that the next observed individual is from a species which has been seen r times. For a single species this estimate is: pr = (r + 1) S(Nr+1) / (N S(Nr)).
Here, the notation S(·) means the smoothed, or adjusted, value of the frequency shown in parentheses. An overview of how to perform this smoothing follows in the next section (see also empirical Bayes method).
To estimate the probability that the next observed individual is from any species from this group (i.e., the group of species seen r times) one can use the following formula: (r + 1) S(Nr+1) / N.
For smoothing the erratic values in Nr for large r, we would like to make a plot of log Nr versus log r, but this is problematic because for large r many Nr will be zero. Instead a revised quantity, log Zr, is plotted versus log r, where Zr is defined as Zr = Nr / (½(t − q)),
and where q, r, and t are three consecutive subscripts with non-zero counts Nq, Nr, Nt. For the special case when r is 1, take q to be 0. In the opposite special case, when r = rlast is the index of the last non-zero count, replace the divisor ½(t − q) with rlast − q, so Zrlast = Nrlast / (rlast − q).
Asimple linear regressionis then fitted to the log–log plot.
For small values of r it is reasonable to set S(Nr) = Nr, that is, no smoothing is performed.
For large values of r, values of S(Nr) are read off the regression line. An automatic procedure (not described here) can be used to specify at what point the switch from no smoothing to linear smoothing should take place.[9][full citation needed] Code for the method is available in the public domain.[10]
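A compact sketch of the procedure described above: it computes the frequencies of frequencies, smooths them by fitting log Zr against log r, and returns p0 together with adjusted probabilities for each observed count r. For simplicity the smoothed values are always used rather than switching between raw and smoothed values, so it follows the spirit rather than the letter of the Gale–Sampson variant.

```python
# Simple Good-Turing sketch: frequencies of frequencies N_r, Z_r smoothing,
# log-log regression, then p0 and the adjusted probabilities p_r.  The switch
# rule between raw and smoothed values is simplified (smoothed values are
# always used), and no renormalisation is applied.
from collections import Counter
import math

def simple_good_turing(counts):
    """counts: dict mapping each species to its observed count r >= 1."""
    N = sum(counts.values())
    Nr = Counter(counts.values())              # r -> number of species seen r times
    rs = sorted(Nr)

    # Z_r = N_r / (0.5 * (t - q)) with q, t the neighbouring non-zero counts.
    Z = {}
    for i, r in enumerate(rs):
        q = rs[i - 1] if i > 0 else 0
        t = rs[i + 1] if i + 1 < len(rs) else 2 * r - q
        Z[r] = Nr[r] / (0.5 * (t - q))

    # Fit log Z_r = a + b * log r by least squares.
    xs = [math.log(r) for r in rs]
    ys = [math.log(Z[r]) for r in rs]
    xbar, ybar = sum(xs) / len(xs), sum(ys) / len(ys)
    b = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
        sum((x - xbar) ** 2 for x in xs)
    a = ybar - b * xbar
    S = lambda r: math.exp(a + b * math.log(r))   # smoothed N_r

    p0 = Nr.get(1, 0) / N                          # total mass of unseen species
    p = {r: (r + 1) * S(r + 1) / (N * S(r)) for r in rs}
    return p0, p

obs = Counter("the quick brown fox jumps over the lazy dog the end".split())
p0, p = simple_good_turing(obs)
print(round(p0, 3), {r: round(v, 4) for r, v in p.items()})
```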
Many different derivations of the above formula for pr have been given.[1][6][11][12]
One of the simplest ways to motivate the formula is by assuming the next item will behave similarly to the previous item. The overall idea of the estimator is that currently we are seeing never-seen items at a certain frequency, seen-once items at a certain frequency, seen-twice items at a certain frequency, and so on. Our goal is to estimate just how likely each of these categories is, for the next item we will see. Put another way, we want to know the current rate at which seen-twice items are becoming seen-thrice items, and so on. Since we don't assume anything about the underlying probability distribution, it does sound a bit mysterious at first. But it is extremely easy to calculate these probabilities empirically for the previous item we saw, even assuming we don't remember exactly which item that was: take all the items we have seen so far (including multiplicities); the last item we saw was a random one of these, all equally likely. Specifically, the chance that we saw an item for the (r + 1)th time is simply the chance that it was one of the items that we have now seen r + 1 times, namely (r + 1)Nr+1/N. In other words, our chance of seeing an item that had been seen r times before was (r + 1)Nr+1/N. So now we simply assume that this chance will be about the same for the next item we see. This immediately gives us the formula above for p0, by setting r = 0. And for r > 0, to get the probability that a particular one of the Nr items is going to be the next one seen, we need to divide this probability (of seeing some item that has been seen r times) among the Nr possibilities for which particular item that could be. This gives us the formula pr = (r + 1)Nr+1/(N·Nr). Of course, your actual data will probably be a bit noisy, so you will want to smooth the values first to get a better estimate of how quickly the category counts are growing, and this gives the formula as shown above. This approach is in the same spirit as deriving the standard Bernoulli estimator by simply asking what the two probabilities were for the previous coin flip (after scrambling the trials seen so far), given only the current result counts, while assuming nothing about the underlying distribution.
|
https://en.wikipedia.org/wiki/Good%E2%80%93Turing_frequency_estimation
|
Tadeusz Walenty Pełczyński (codenames: Grzegorz, Adam, Wolf, Robak; Warsaw, 14 February 1892 – 3 January 1985, London) was a Polish Army major general (generał brygady), intelligence officer and chief of the General Staff's Section II (the military intelligence section).[1]
During World War II, he became chief of staff of the Home Army (ZWZ, Armia Krajowa; July 1941 – October 1944) and its deputy commander (July 1943 – October 1944).
Tadeusz Pełczyński was the son of Ksawery Pełczyński, a Sanniki sugar-mill technician, and Maria, née Liczbińska, a teacher, and was a great-grandson of Michał Pełczyński, a general in the Army of Congress Poland.
Pełczyński began school in Łęczyca. In 1905 he participated in a school strike connected with Polish efforts to win independence from the Russian Empire. He continued his schooling in Warsaw at the Gen. Paweł Chrzanowski Gymnasium. In 1911 he began medical studies at Kraków University. As a medical student he was a member of the patriotic-gymnastic Sokół organisation and of the "Zet" Polish Youth Association (Związek Młodzieży Polskiej "Zet").[2] He completed a military course conducted by Zygmunt Zieliński, a future Polish Army generał broni (lieutenant general).
In 1923 Pełczyński married Wanda Filipowska, with whom he had a daughter, Maria, and a son, Krzysztof (Christopher, born 1924, who died during the Warsaw Uprising on 17 August 1944, of wounds sustained on 1 August, the first day of the Uprising).
The outbreak of World War I in August 1914 found Pełczyński on vacation near Włocławek. After the area had been occupied by the Germans, he was mobilised by them to work as a medic at a Russian prisoner-of-war camp.
After his release from German service, in June 1915 he joined the Polish Legions.[2] He served as an officer in the 6th Legions Infantry Regiment (6 Pułk Piechoty Legionów) and commanded a platoon and a company. In July 1917, following the Oath Crisis, he was interned at a camp in Beniaminów.[2] In March 1918, after release from internment, he took up work at a social-services agency (Rada Główna Opiekuńcza) while continuing his involvement with "Zet".
In November 1918 Pełczyński was accepted into the Polish Army and placed in command of a company, then a battalion, of the 6th Legions' Infantry Regiment.[2] In March 1920 he was transferred to the Infantry Officer-Cadet School (Szkoła Podchorążych Piechoty) in Warsaw as a company commander, then a battalion commander. From September 1921 to September 1923 he attended the War College (Wyższa Szkoła Wojenna) in Warsaw.[2] After graduating with a General Staff officer's diploma, he returned to the Infantry Officer-Cadet School as a battalion commander.
In July 1924 he was posted to the Office of the Inner War Council (Ścisła Rada Wojenna). In May 1927 he began service in the Second Department of the Polish General Staff (the intelligence section) as chief of the Information Department (Wydział Ewidencyjny). In January 1929 he was appointed chief of Section II. From March 1932 to September 1935 he commanded the 5th Legions' Infantry Regiment (5 Pułk Piechoty Legionów) in Wilno (it was part of the elite 1st Legions Infantry Division), then returned to head Section II again.[2]
As chief of the Second Department of the Polish General Staff, Pełczyński, like his predecessor Colonel Tadeusz Schaetzel and like deputy chief Lt. Col. Józef Englicht, was very supportive of Marshal Józef Piłsudski's Promethean project, aimed at liberating the non-Russian peoples of the Soviet Union.[3]
Pełczyński was the longest-serving prewar chief of the Second Department (1929–32, and 1935 – January 1939). In January 1939 he was relieved of this post and placed in command of the 19th Infantry Division (19 Dywizja Piechoty), stationed in Wilno.[2] His tenure as chief of Section II had reportedly been ended by his wife Wanda's political activities against Marshal Edward Śmigły-Rydz and General Felicjan Sławoj-Składkowski.
Pełczyński may have made his greatest contribution to Allied victory in World War II well before the opening of hostilities, when he proposed giving Polish knowledge of the German Enigma machine to the French and British. According to Colonel Stefan Mayer, "From Gen. Pełczyński, now resident in Great Britain, I know that... he suggested [to the chief of the Polish General Staff, General Wacław Stachiewicz] that in case of [impending] war the Enigma secret... be used as our Polish contribution to the common... defence and divulged to our future allies. [Pełczyński] repeated [this] to Col. Józef Smoleński when in [the] first days of January 1939 [Smoleński] replaced [him] as... head of [Section II]. That was the basis of [Lt. Col. Langer]'s instructions... when he... represent[ed] the Polish side at the [Paris] conference... in January 1939 and then in Warsaw in July 1939."[4]
The Poles' gift to their British and French allies of Enigma decryption, made at Warsaw on 26 July 1939, just five weeks before the outbreak of the war, came not a moment too soon, as it laid the foundations for later British cryptographic breakthroughs that produced the Ultra intelligence that was a key factor during the war. Former Bletchley Park mathematician-cryptologist Gordon Welchman later wrote: "Ultra would never have gotten off the ground if we had not learned from the Poles, in the nick of time, the details both of the German military... Enigma machine, and of the operating procedures that were in use."[5]
After the outbreak of war, from 5 September 1939, Pełczyński commanded a force operating in the rear of the invading German Wehrmacht.
After the conclusion of the September Campaign, he went to Warsaw to take up underground work with the Service for Polish Victory (Służba Zwycięstwu Polski), then with the Union for Armed Resistance (Związek Walki Zbrojnej, or ZWZ) and the Home Army (Armia Krajowa, or AK).[2]
From July 1940 to April 1941 he commanded the Lublin ZWZ district. As the local Gestapo were closing in, he returned to Warsaw and accepted the post of chief of staff of the ZWZ (July 1941). From July 1943, he was also Home Army deputy commander. In November 1943, he was promoted to major general (generał brygady).[2]
He commanded sabotage operations carried out by Kedyw units against the German war machine (including disruption of several rail lines). He took part in the decision to begin the Warsaw Uprising.[2]
Five weeks into the Warsaw Uprising, on 4 September 1944, Pełczyński was gravely wounded when the PKO savings-bank building on Świętokrzyska Street was bombed, and as a result he could no longer carry on the duties of Home Army chief of staff.
After the suppression of the Uprising, Pełczyński was imprisoned by the Germans at the Langwasser camp, then at Colditz.[2]
Following his liberation by the Allies in 1945, he made his way to London in England.[2]
|
https://en.wikipedia.org/wiki/Tadeusz_Pe%C5%82czy%C5%84ski#Enigma
|
The Imitation Game is a 2014 American biographical thriller film directed by Morten Tyldum and written by Graham Moore, based on the 1983 biography Alan Turing: The Enigma by Andrew Hodges. The film's title quotes the name of the game cryptanalyst Alan Turing proposed for answering the question "Can machines think?", in his seminal 1950 paper "Computing Machinery and Intelligence". The film stars Benedict Cumberbatch as Turing, who decrypted German intelligence messages for the British government during World War II. Keira Knightley, Matthew Goode, Rory Kinnear, Charles Dance, and Mark Strong appear in supporting roles.
Following its premiere at the Telluride Film Festival on August 29, 2014, The Imitation Game was released theatrically in the United Kingdom on November 14 and in the United States on November 28. It grossed over $233 million worldwide on a $14 million production budget, making it the highest-grossing independent film of 2014. The film received critical acclaim but faced significant criticism for its historical inaccuracies, including its depiction of several events that never took place in real life.[6][7] It received eight nominations at the 87th Academy Awards (including Best Picture), winning for Best Adapted Screenplay. It also received five nominations at the Golden Globes, three at the SAG Awards and nine at the BAFTAs. Cumberbatch and Knightley's highly acclaimed performances were nominated for Best Actor and Best Supporting Actress respectively at each award.
In 1951, police investigate the mathematician Alan Turing after an apparent break-in at his home. During his interrogation, Turing talks of his work at Bletchley Park during WWII.
In 1928, the young Turing is constantly bullied at boarding school. He develops a friendship with Christopher Morcom, who sparks his interest in cryptography, and soon develops romantic feelings for him. However, Christopher dies of tuberculosis shortly afterwards.
When Britain declares war on Germany in 1939, Turing joins the cryptography team of Hugh Alexander, John Cairncross, Peter Hilton, Keith Furman, and Charles Richards in Bletchley Park, directed by Commander Alastair Denniston. They are to analyze the Enigma machine, which the Nazis use to send coded messages.
Difficult to work with, and believing his colleagues to be inferior, Turing works alone to design a machine to decipher Enigma messages. When Denniston refuses to fund the machine's £100,000 construction cost, Turing contacts Prime Minister Winston Churchill, who appoints him team leader and provides the necessary funds. Turing then fires Furman and Richards and places a difficult crossword in newspapers as a test to find replacements.
Cambridge graduate Joan Clarke passes Turing's test, but her family will not allow her to work with the male cryptographers. Turing arranges for her to live and work with the women who intercept the messages, and shares his plans with her. Clarke helps Turing warm to the others, who begin to respect him.
Turing's machine, which he names Christopher, is constructed but cannot determine the Enigma encryption settings quickly enough, as the Germans reset them each day. Denniston orders it destroyed and Turing fired, but the other cryptographers threaten to leave if Turing goes.
When Clarke plans to leave because of her parents, Turing proposes marriage, and she accepts. During their engagement party, Turing confirms his homosexuality to Cairncross, who advises him to keep it secret.
Overhearing a clerk talking about the messages she receives from the same German coder, Turing has an epiphany: he can program the machine to decode words he already knows exist in certain messages. Because one German coder always opens his first message with a standard plaintext phrase, enough of the day's Enigma code is revealed for Christopher to decode all of that day's messages quickly. After the machine is recalibrated, it quickly decodes a message, and the cryptographers celebrate.
Discovering a convoy is about to be attacked, Turing realizes that if they suddenly react to prevent it the Germans will know Enigma is compromised and change it. Therefore, the team cannot act on every decoded message, so they do not act to save the convoy although Peter begs them, as his brother is part of it. Turing creates a statistical model to choose the warnings to send to maximize destruction and minimize detection.
Discovering that Cairncross is a Soviet spy, Turing confronts him. Cairncross argues that the Soviets are allies working for the same goals, and threatens to retaliate by disclosing Turing's sexuality. When the top MI6 agent Stewart Menzies appears to threaten Clarke, Turing reveals that Cairncross is a spy. Menzies reveals that he already knew and has been using Cairncross to leak misinformation to the Soviets for British benefit.
Turing urges Clarke to leave Bletchley Park, telling her he is a homosexual. She always suspected but insists they would have been happy together anyway. Fearing for her safety, Turing says he never cared for her, and only used her for her cryptography skills.
Although heartbroken, Clarke stays on, knowing how important it is. She refuses to bow down to what Turing or her parents want her to do, or what they think of her decisions.
After the war, Menzies has the cryptographers destroy their evidence, as MI6 wants governments to believe they have unbreakable code machines; the team members are never to meet again or share what they have done.
In 1952, Turing is convicted of gross indecency and, so that he can continue his work, undergoes chemical castration instead of imprisonment. Clarke visits him, witnesses his physical and mental deterioration, and tries to comfort him.
The epilogue states that Turing committed suicide on June 7, 1954, after a year of government-mandated hormonal therapy; that in 2013 Queen Elizabeth II granted him a posthumous Royal Pardon; that historians estimate breaking Enigma shortened the war by over two years, saving over 14 million lives; and that Turing's work was an important step towards today's computers.
Before Cumberbatch joined the project,Warner Bros.bought the screenplay for a reported seven-figure sum because ofLeonardo DiCaprio's interest in playing Turing.[8][9][10][11][12]In the end, DiCaprio did not come on board and the rights of the script reverted to the screenwriter. Black Bear Pictures subsequently committed to finance the film for $14 million.[4][13][14]Various directors were attached during development includingRon HowardandDavid Yates.[15]In December 2012, it was announced thatHeadhuntersdirectorMorten Tyldumwould helm the project, making the film his English-language directorial debut.[16][17]
The bombe seen in the film is based on a replica of Turing's original machine, which is housed in the museum at Bletchley Park. However, production designer Maria Djurkovic admitted that her team made the machine more cinematic by making it larger and having more of its internal mechanisms visible.[18]
The costumes were designed by Sammy Sheldon Differ, using as many authentically 1940s pieces as possible. Benedict Cumberbatch had "frequent conversations" with Differ to develop his character's wardrobe and physical mannerisms. Where original costumes were not available, the team commissioned reproductions, using appropriate fabrics like linen and cotton. Differ explained, "those materials can be made to drape in a particular way, and the way they sit on the body resists duplication".[19]
Principal photographybegan on September 15, 2013, in Britain. Filming locations included Turing's former school,Sherborne, Bletchley Park, where Turing and his colleagues worked during the war, andCentral Saint Martinscampus onSouthampton Rowin London.[20]Other locations included towns in England such asNettlebed(Joyce GroveinOxfordshire) andChesham(Buckinghamshire). Scenes were also filmed atBicester Airfieldand outside theLaw Societybuilding inChancery Lane, and atWest London Film Studios. Principal photography finished on November 11, 2013.[21]
The Weinstein Companyacquired the film for $7 million in February 2014, the highest amount ever paid for US distribution rights at theEuropean Film Market.[22]The film was also a recipient ofTribeca Film Festival's Sloan Filmmaker Fund, which grants filmmakers funding and guidance with regard to innovative films that are concerned with science, mathematics, and technology.[23]
In June 2014, it was announced thatAlexandre Desplatwould provide the original score of the film.[25]It was recorded by theLondon Symphony OrchestraatAbbey Road Studiosin London. Desplat uses continuous piano arpeggios to represent both Turing's thinking mind and the workings of a mechanical machine.[24]He said of the complexity of the continuity and structure of the score:
[W]hen the camera at the end of the film has those beautiful shots of the young boy, the young Alan, and he's meeting with the professor who's telling him his friend Christopher is dead, and the camera is pushing in on him, I play Christopher's theme that we heard very early on in the film. There's a simple continuity there. It's the accumulation of these moments that I can slowly but surely play that makes it even stronger.[24]
The score received an Academy Award nomination forBest Original Score, losing to the score ofThe Grand Budapest Hotel, also composed by Desplat.
Following theRoyal Pardongranted by the British government to Turing on December 24, 2013, the filmmakers released the first official promotional photograph of Cumberbatch in character beside Turing's bombe.[26][27]In the week of the anniversary of Turing's death in June 2014,Entertainment Weeklyreleased two new stills which marked the first look at the characters played by Keira Knightley, Matthew Goode, Matthew Beard, and Allen Leech.[28]On what would have been Turing's 102nd birthday on June 23,Empirereleased two photographs featuring Mark Strong and Charles Dance in character. Promotional stills were taken by photographerJack English, who also photographed Cumberbatch forTinker Tailor Soldier Spy.[29]
Princeton University PressandVintage Booksboth released film tie-in editions of Andrew Hodges' biographyAlan Turing: The Enigmain September 2014.[30]The first UK and US trailers were released on July 22, 2014.[31]The international teaser poster was released on September 18, 2014, with the tagline "The true enigma was the man who cracked the code".[32]
In November 2014, the Weinstein Company co-hosted a private screening of the film withDigital Sky TechnologiesbillionaireYuri MilnerandFacebookCEOMark Zuckerberg. Attendees of the screening atLos Altos Hills, CaliforniaincludedSilicon Valley's top executives, such asFacebookCOOSheryl Sandberg,LinkedIn'sReid Hoffman,Googleco-founderSergey Brin,Airbnb's Nathan Blecharczyk, andTheranosfounderElizabeth Holmes. Director Tyldum, screenwriter Moore, and actress Knightley were also in attendance.[33]In addition, Cumberbatch and Zuckerberg presented the Mathematics Prizes at the Breakthrough Awards on November 10, 2014, in honour of Turing.[34]
The bombe re-created by the filmmakers has been on display in a specialThe Imitation Gameexhibition at Bletchley Park since November 10, 2014. The year-long exhibit features clothes worn by the actors and props used in the film.[35]
The official film website allowed visitors to unlock exclusive content by solving cryptic crossword puzzles supposedly conceived by Turing.[36] The website puzzle was a shorter version[37] of the Daily Telegraph puzzle of January 13, 1942 that was actually used in Bletchley Park recruitment during the war[38] (and the puzzle was not set by Turing, who was no good at them).[37] Google, which sponsored the New York premiere of the film, launched a competition called "The Code-Cracking Challenge" on November 23, 2014, a skill contest in which entrants must crack a code provided by Google, with prizes awarded to the entrants who crack the code and submit their entries the fastest.[39]
In November 2014, ahead of the film's US release,The New York Timesreprinted the 1942 puzzle fromThe Daily Telegraphused in recruiting codebreakers at Bletchley Park during the Second World War. Entrants who solved the puzzle could mail in their results for a chance to win a trip for two to London and a tour of Bletchley Park.[40]
TWClaunched a print and online campaign on January 2, 2015, featuring testimonials from leaders in the fields of technology, military, academia, and LGBTQ groups (all influenced by Turing's life and accomplishments) to promote the film and Turing's legacy. Yahoo! CEO Marissa Mayer,NetflixCEOReed Hastings,GoogleExecutive ChairmanEric Schmidt,TwitterCEODick Costolo,PayPalco-founderMax Levchin,YouTubeCEOSusan Wojcicki, andWikipedia'sJimmy Walesall gave tribute quotes. There were also testimonials from LGBT leaders includingHRCpresidentChad GriffinandGLAADCEOSarah Kate Ellisand from military leaders including the 22nd United States Defense Secretary Robert Gates.[41][42][43][44]
The film had its world premiere at the 41stTelluride Film Festivalin August 2014, and played at the39th Toronto International Film Festivalin September.[45]It had its European premiere as the opening film of the58th BFI London Film Festivalin October 2014.[46][47]It began a limited theatrical release on November 28, 2014, in the United States, two weeks after its premiere in the United Kingdom on November 14.[9]The US distributor TWC stated that the film would initially debut in four cinemas in Los Angeles and New York, expanding to six new markets on December 12, before being released nationwide on Christmas Day.[48]
The Imitation Gamewas released on March 31, 2015, in the United States in two formats: a one-disc standard DVD and a Blu-ray with a digital copy of the film.[49]
The Imitation Gamegrossed $91.1 million in the United States and Canada, and $142.4 million in other territories, for a worldwide total of $233.5 million, against a budget of $14 million.[5]It was the top-grossing independent film release of 2014.[50]
Debuting in four theaters in Los Angeles and New York on November 28, the film grossed $479,352 in its opening weekend with a $119,352 per-screen-average, the second highest per-screen-average of 2014 and the 7th highest of all time for a live-action film. Adjusted for inflation, it outperformedThe King's Speech($88,863 in 2010) andThe Artist($51,220 in 2011), which were also released on their respective Thanksgiving weekends. The film expanded into additional markets on December 12 and was released nationwide on Christmas Day.[51][52][53]
The film opened at number two at the UK box office behindInterstellar, earning $4.3 million from 459 screens. Its opening was 107% higher thanArgo, 81% higher thanPhilomenaand 26% higher thanThe Iron Lady.[54][55]
OnRotten Tomatoes,The Imitation Gameholds an approval rating of 90% based on 287 reviews, with an average rating of 7.7/10. The site's critical consensus reads: "With an outstanding starring performance from Benedict Cumberbatch illuminating its fact-based story,The Imitation Gameserves as an eminently well-made entry in the 'prestige biopic' genre."[56]OnMetacritic, the film has a weighted average score of 71 out of 100, based on 49 critics, indicating "generally favorable reviews".[57]The film received a rare average grade of "A+" from market-research firmCinemaScore, and a 90% "definite recommend" rating from its core audience, according toPostTrak. It was also included in both theNational Board of ReviewandAmerican Film Institute's "Top 10 Films of 2014".[58][59][60]
The New York Observer'sRex Reeddeclared that "one of the most important stories of the last century is one of the greatest movies of 2014".[61]Kaleem Aftab ofThe Independentgave the film a five-star review, hailing it the "Best British Film of the Year".[62][63]Empiredescribed it as a "superb thriller" andGlamourdeclared it "an instant classic".[64][65]Peter Debruge ofVarietyadded that the film is "beautifully written, elegantly mounted and poignantly performed".[66]Critic Scott Foundas stated that the "movie is undeniably strong in its sense of a bright light burned out too soon, and the often undignified fate of those who dare to chafe at society's established norms".[67]CriticLeonard Maltinasserted that the film has "an ideal ensemble cast with every role filled to perfection". Praise went to Knightley's supporting performance as Clarke,Goldenberg's editing, Desplat's score,Faura's cinematography and Djurkovic's production design.[68]The film was enthusiastically received at the Telluride Film Festival and won the "People's Choice Award for Best Film" at TIFF, the highest prize of the festival.
Cumberbatch's performance was met with widespread acclaim from critics.TIMEranked Cumberbatch's portrayal number one in its Top 10 film performances of 2014, with the magazine's chief film criticRichard Corlisscalling Cumberbatch's characterisation "the actor's oddest, fullest, mostCumberbatchiancharacter yet ... he doesn't play Turing so much as inhabit him, bravely and sympathetically but without mediation".[69][70]Kenneth Turanof theLos Angeles Timesdeclared Turing "the role of Cumberbatch's career", whileA.O. ScottofThe New York Timesstated that it is "one of the year's finest pieces of screen acting".[71][72]Peter TraversofRolling Stoneasserted that the actor "gives an explosive, emotionally complex" portrayal. Critic Clayton Davis stated that it is a "performance for the ages ... proving he's one of the best actors working today".[73][74]Foundas ofVarietywrote that Cumberbatch's acting is "masterful ... a marvel to watch",Manohla DargisofThe New York Timesdescribed it as "delicately nuanced, prickly and tragic" andOwen Gleibermanof theBBCproclaimed it an "emotionally tailored perfection".[75][76]It is "a storming performance from Cumberbatch: you'll be deciphering his work long after the credits roll" declared Dave Calhoun ofTime Out.[77]In addition, Claudia Puig ofUSA Todayconcluded in her review, "It's Cumberbatch's nuanced, haunted performance that leaves the most powerful impression".[78]The Hollywood Reporter'sTodd McCarthyreported that the undeniable highlight of the film was Cumberbatch, "whose charisma, tellingly modulated and naturalistic array of eccentricities, talent at indicating a mind never at rest and knack for simultaneously portraying physical oddness and attractiveness combine to create an entirely credible portrait of genius at work".[79][80]Gossip bloggerRoger Friedmanwrote at the end of his review, "Cumberbatch may be the closest thing we have to a real descendant of SirLaurence Olivier".[81]
While praising the performances of Cumberbatch and Knightley, Catherine Shoard ofThe Guardianstated that the film is "too formulaic, too efficient at simply whisking you through and making sure you've clocked the diversity message,"[82]going on to raise concerns about the film's alleged reluctance to show Turing "romantically or sexually involved with a man.[83]Tim Robey ofThe Telegraphdescribed it as "a film about a human calculator which feels ... a little too calculated".[84]British historianAlex von Tunzelmann, writing forThe Guardianin November 2014, pointed out many historical inaccuracies in the film, saying in conclusion: "Historically,The Imitation Gameis as much of a garbled mess as a heap of unbroken code".[85]JournalistChristian Carylalso found numerous historical inaccuracies, describing the film as constituting "a bizarre departure from the historical record" that changed Turing's rich life to be "multiplex-friendly".[86]L.V. Anderson ofSlatemagazine compared the film's account of Turing's life and work to the biography it was based on, writing, "I discovered thatThe Imitation Gametakes major liberties with its source material, injecting conflict where none existed, inventing entirely fictional characters, rearranging the chronology of events, and misrepresenting the very nature of Turing's work at Bletchley Park".[87]Andrew Grant ofScience Newswrote, "... like so many other Hollywood biopics, it takes some major artistic license – which is disappointing, because Turing's actual story is so compelling."[88]Computing historian Thomas Haigh, writing in the journalCommunications of the ACM, said that "the film is a bad guide to reality but a useful summary of everything that the popular imagination gets wrong about Bletchley Park", that it "combines the traditional focus of popular science writing on the lone genius who changes the world with the modern movie superhero narrative of a freak who must overcome his own flaws before he can save the world", and that, together with the likes ofA Beautiful MindandThe Theory of Everything, is part of a trend of "glossy scientific biopic[s]" that emphasize those famous scientists who were surrounded by tragedy rather than those who found contented lives, which in turn affects the way "[s]ome kinds of people, and work, have become famous and others have not."[89]The visual blogInformation Is Beautifuldeduced that, while taking creative licence into account, the film was 42.3% accurate when compared to real-life events, commenting that "to be fair, shoe-horning the incredible complexity of the Enigma machine and cryptography in general was never going to be easy. But this film just rips the historical record to shreds."[90]
Despite earlier reservations, Turing's niece Inagh Payne toldAllan BeswickofBBC Radio Manchesterthat the film "really did honour my uncle" after she watched the film at theLondon Film Festivalin October 2014. In the same interview, Turing's nephewDermot Turingstated that Cumberbatch is "perfect casting. I couldn't think of anyone better." James Turing, a great-nephew of the code-breaker, said Cumberbatch "knows things that I never knew before. The amount of knowledge he has about Alan is amazing."[91]
The Imitation Gamewas nominated for, and received, numerous awards, with Cumberbatch's portrayal of Turing particularly praised.[92][93][94]The film and its cast and crew were also honoured by Human Rights Campaign, the largest LGBT civil rights advocacy group and political lobbying organisation in the United States. "We are proud to honor the stars and filmmakers ofThe Imitation Gamefor bringing the captivating yet tragic story of Alan Turing to the big screen", HRC president Chad Griffin said in a statement.[95]
In January 2015, Cumberbatch, comedian-actorStephen Fry, producerHarvey Weinstein, and Turing's great-niece Rachel Barnes launched a campaign to pardon the 49,000 gay men convicted under the same law that led to Turing's chemical castration. An open letter published inThe Guardianurged the British government and the Royal family, particularlyQueen Elizabeth IIand theDukeandDuchess of Cambridge, to aid the campaign.[96]
The Human Rights Campaign'sChad Griffinalso offered his endorsement, saying that "Over 49,000 other gay men and women were persecuted in England under the same law. Turing was pardoned by Queen Elizabeth II in 2013. The others were not. Honor this movie. Honor this man. And honor the movement to bring justice to the other 49,000."[97]Aiding the cause were campaignerPeter Tatchell,Attitudemagazine, and other high-profile figures in the gay community.[98]
In February 2015, Matt Damon, Michael Douglas, Jessica Alba, Bryan Cranston, and Anna Wintour among others joined the petition at Pardon49k.org demanding pardons for victims of anti-gay laws.[99][100] Historians, including Justin Bengry of Birkbeck, University of London and Matt Houlbrook of the University of Birmingham, argued that such a pardon would be "bad history" despite its political appeal, because of the broad variety of cases in which the historical laws were applied (including cases of rape) and the distortion of history resulting from an attempt to clean up the wrongdoings of the past post facto. Bengry also cites the existing ability of those convicted under repealed anti-homosexuality laws to have their convictions declared spent.[101]
This petition eventually resulted in the Policing and Crime Act 2017, informally known as theAlan Turing law, which serves as an amnesty law to pardon men who were cautioned or convicted under historical legislation that outlawed homosexual acts, and which was implemented on January 31, 2017.[102]As the law and the disregard process applies only to England and Wales, groups inNorthern IrelandandScotlandhave campaigned for equivalent laws in their jurisdictions.[103][104]
During production, there was criticism regarding the film's purported downplaying of Turing's homosexuality,[105]particularly condemning the portrayal of his relationship with close friend and one-time fiancéeJoan Clarke. Hodges, author of the book upon which the film was based, described the script as having "built up the relationship with Joan much more than it actually was".[106]Turing's niece Payne thought that Knightley was inappropriately cast, as she described the real Clarke as "rather plain", and said: "I think they might be trying to romanticize it. It makes me a bit mad. You want the film to show it as it was, not a lot of nonsense."[107]
Speaking toEmpire, director Tyldum expressed his decision to take on the project: "It is such a complex story. It was the gay rights element, but also how his (Turing's) ideas were kept secret and how incredibly important his work was during the war, that he was never given credit for it".[29]In an interview forGQ UK, Matthew Goode, who plays fellow cryptographerHugh Alexanderin the film, stated that the script focuses on "Turing's life and how as a nation we celebrated him as being a hero by chemically castrating him because he was gay".[108]The producers of the film stated: "There is not – and never has been – a version of our script where Alan Turing is anything other than homosexual, nor have we included fictitious sex scenes."[109]
In a January 2015 interview withThe Huffington Post, its screenwriter Graham Moore said in response to complaints about the film's historical accuracy:
When you use the language of "fact-checking" to talk about a film, I think you're sort of fundamentally misunderstanding how art works. You don't fact check Monet'sWater Lilies. That's not what water lilies look like, that's what the sensation of experiencing water lilies feels like. That's the goal of the piece.[110]
In the same interview, Tyldum stated:
A lot of historical films sometimes feel like people reading a Wikipedia page to you onscreen, like just reciting "and then he did that, and then he did that, and then he did this other thing" – it's like a "Greatest Hits" compilation. We wanted the movie to be emotional and passionate. Our goal was to give you "What does Alan Turing feel like?" What does his story feel like? What'd it feel like to be Alan Turing? Can we create the experience of sort of "Alan Turing-ness" for an audience based on his life?[110]
For the most part, Hodges has not commented on the historical accuracy of the film, alluding to contractual obligations involving the film rights to his biography.[111]
Several events depicted in the film did not happen in real life. The visual blogInformation is Beautifuldeduced that, while taking creative license into account, the film was just 42.3% accurate when compared to real-life events, summarizing that "shoe-horning the incredible complexity of the Enigma machine and cryptography, in general, was never going to be easy. But this film just rips the historical records to shreds".[112]GCHQDepartmental HistorianTony Comer went even further in his criticism of the film's inaccuracies, saying that "The Imitation Game [only] gets two things absolutely right. There was a Second World War and Turing's first name was Alan".[113]
|
https://en.wikipedia.org/wiki/The_Imitation_Game
|
On 20 July 1944, Adolf Hitler and his top military associates entered the briefing hut of the Wolf's Lair military headquarters, a series of concrete bunkers and shelters located deep in the forest of East Prussia, not far from the site of the World War I Battle of Tannenberg. Soon after, an explosion killed three officers and a stenographer, injuring everyone else in the room. This assassination attempt was the work of Colonel Claus von Stauffenberg, an aristocrat who had been severely wounded while serving in the North African theater of war, losing his right hand, left eye, and two fingers of his left hand.[1]
The bomb plot was a carefully planned coup d'état attempt against the Nazi regime, orchestrated by a group of army officers. Their plan was to assassinate Hitler, seize power in Berlin, establish a new pro-Western government and save Germany from total defeat.[1]
Immediately after the arrest and execution of the plot leaders in Berlin by Friedrich Fromm, the Gestapo (the secret police force of Nazi Germany) began arresting people involved or suspected of being involved. The opportunity was also used to eliminate other, unrelated critics of the Nazi regime.[2] In total, an estimated 7,000 people were arrested, of whom approximately 4,980 were executed, some slowly strangled with piano wire on Hitler's insistence.[3] A month after the failed attempt on Hitler's life, the Gestapo initiated Aktion Gitter.
|
https://en.wikipedia.org/wiki/List_of_members_of_the_20_July_plot
|
The Lucy spy ring (German: Lucy-Spionagering) was an anti-Nazi World War II espionage operation headquartered in Switzerland and run by Rudolf Roessler, a German refugee. Its story was only published in 1966, and very little is clear about the ring, Roessler, or the effort's sources or motives.
At the outbreak of World War II, Roessler was a political refugee from Bavaria who had fled to Switzerland when Hitler came to power. He was the founder of a small publishing firm, Vita Nova Verlag, producing copies of anti-Nazi Exilliteratur and other literary works in the German language strictly banned under censorship in Nazi Germany, for smuggling across the border and black-market distribution to dissident intellectuals. He was recruited by Brigadier Masson, head of Swiss Military Intelligence, who employed him as an analyst with Bureau Ha, overtly a press-cuttings agency but in fact a covert department of Swiss intelligence. Roessler was approached by two German officers, Fritz Thiele and Rudolph von Gersdorff, who were part of a German resistance conspiracy to overthrow Hitler and who had been known to Roessler in the 1930s through the Herrenklub.
Thiele and Gersdorff wished him to act as a conduit for high-level military information, to be made available to him to use in the fight against Nazism. This they accomplished by equipping Roessler with a radio and an Enigma machine, and designating him as a German military station (call-signed RAHS). In this way they could openly transmit their information to him through normal channels. They were able to do this because Thiele and his superior, Erich Fellgiebel (who was also part of the conspiracy), were in charge of the German Defence Ministry's communication centre, the Bendlerblock. The scheme escaped notice because those employed to encode the information were unaware of where it was going, while those transmitting the messages had no idea what was in them.
At first Roessler passed the information to Swiss military intelligence, via a friend who was serving in Bureau Ha, which the Swiss used as a cut-out. Roger Masson, the head of Swiss military intelligence, also chose to pass some of this information to the British SIS. Later, seeking to aid the USSR in its fight against Nazism, Roessler was able to pass on information to it via another contact who was part of a Soviet (GRU) network run by Alexander Rado. Roessler was not a Communist, nor even a Communist sympathizer until much later, and wished to remain at arm's length from Rado's network, insisting on complete anonymity and communicating with Rado only through the courier, Christian Schneider. Rado agreed to this, recognizing the value of the information being received. Rado code-named the source "Lucy", simply because all he knew about it was that it was in Lucerne.
Roessler's first major contribution to Soviet intelligence came in May 1941 when he was able to deliver details ofOperation Barbarossa, Germany's impending invasion of the Soviet Union. Though his warning was initially ignored - as Soviet intelligence had received multiple false alarms about an impending German invasion - Roessler's dates eventually proved accurate. Following the invasion, in June 1941, Lucy was regarded as a VYRDO source,i.e.of the highest importance, and to be transmitted immediately. Over the next two years "Lucy" was able to supply the Soviets with high grade military intelligence. During the autumn of 1942, "Lucy" provided the Soviets with detailed information aboutCase Blue, the German operations againstStalingradand theCaucasus; during this period decisions taken in Berlin were arriving in Moscow on average within a ten-hour period; on one occasion in just six hours, not much longer than it took to reach German front line units. Roessler, and Rado's network, particularlyAllan Foote, Rado's main radio operator, were prepared to work flat out to maintain the speed and flow of the information. At the peak of its operation, Rado's network was enciphering and sending several hundred messages per month, many of these from "Lucy". Meanwhile, Roessler alone had to do all the receiving, decoding and evaluating of the "Lucy" messages before passing them on; for him during this period it became a full-time operation. In the summer of 1943, the culmination of "Lucy's" success came in transmitting the details of Germany's plans forOperation Citadel, a planned summer offensive against theKursk salient, which became a strategic defeat for the German army—theBattle of Kurskgave theRed Armythe initiative on the eastern front for the remainder of the war.
During the winter of 1942, the Germans became aware of the transmissions from the Rado network, and began to take steps against it through their counter-espionage bureau. After several attempts to penetrate the network they succeeded in pressuring the Swiss to close it down in October 1943, when its radio transmitters were closed down and a number of key operatives were arrested. Thereafter Roessler's only outlet for the "Lucy" information was through the Bureau Ha and Swiss Military Intelligence. Roessler was unaware his information was also going to theWestern Allies.
The Lucy spy ring came to an end in the summer of 1944 when the German members, who were also involved in other anti-Nazi activities, were arrested in the aftermath of the failed20 July plot.
In Switzerland the Lucy network consisted of the following members:
The record of messages transmitted shows that Roessler had four important sources, codenamed Werther, Teddy, Olga, and Anna.[1] While it was never discovered who they were,[1] the quartet was responsible for 42.5 percent of the intelligence sent from Switzerland to the Soviet Union.[1]
The search for the identity of those sources has created a very large body of work of varying quality, offering various conclusions.[2] Several theories can be dismissed immediately, including the suggestion, made by Foote and several other writers, that the code names reflected the sources' type of access rather than their identity (for example, that Werther stood for Wehrmacht, Olga for Oberkommando der Luftwaffe, and Anna for Auswärtiges Amt, the Foreign Office), as the evidence does not support it.[1] Alexander Radó made this claim in his memoirs, which were examined in a Der Spiegel article.[3] Three and a half years before his death, Roessler described the identity of the four sources to a confidant.[1] They were a German major who had been in charge of the Abwehr before Wilhelm Canaris, Hans Bernd Gisevius, Carl Goerdeler, and a General Boelitz, who was by then deceased.[1]
The most reliable study, by the CIA Historical Review Program,[1] concluded that of the four sources the most important was Werther. The study stated he was likely Wehrmacht General Hans Oster, other Abwehr officers working with Swiss intelligence, or Swiss intelligence on its own.[4][1] There was no evidence to link the other three codenames to known individuals.[1] The CIA believed that the German sources gave their reports to the Swiss General Staff, who in turn supplied Roessler with information that the Swiss wanted to pass to the Soviets.[5]
Roessler's story was first published in 1966 by the French journalists Pierre Accoce and Pierre Quet.[6] In 1981, Anthony Read and David Fisher alleged that Lucy was, at its heart, a British Secret Service operation intended to get Ultra information to the Soviets in a convincing way that was untraceable to British codebreaking operations against the Germans.[7] Stalin had shown considerable suspicion of any information from the British about German plans to invade Russia in 1941, so it would have made sense for the Allies to find a way to get helpful information to the Soviets in a form that would not be dismissed, or at least would not seem implausible. That the Soviets had, via their own espionage operations, learned of the British break into important German message traffic was not known to the British at the time. Various observations have suggested that Allan Foote was more than a mere radio operator: he was in a position to act as a radio interface between SIS and Roessler, and also between Roessler and Moscow; his return to the West in the 1950s was unusual in several ways; and his book was similarly troublesome. Read and Fisher also point out that not one of Roessler's claimed sources in Germany has been identified or has come forward. Hence their suspicion that, even more so than for most espionage operations, the Lucy ring was not what it seemed.
However, this is flatly denied by Harry Hinsley, the official historian for the British Secret Services in World War II, who stated that "there is no truth in the much-publicized claim that the British authorities made use of the 'Lucy' ring... to forward intelligence to Moscow".[8]
Phillip Knightley also dismisses the thesis that Ultra was the source of Lucy.[9] He indicates that the information was delivered very promptly (often within 24 hours) to Moscow, too fast to have come via GCHQ Bletchley Park. Further, Ultra intelligence on the Eastern Front was less than complete; many German messages were transmitted by landline, and wireless messages were often too garbled for timely decoding. Furthermore, the Enigma systems employed by German forces on the Eastern Front were only broken intermittently. Knightley suggests that the source was Karel Sedlacek, a Czech military intelligence officer. Sedlacek died in London in 1967 and indicated that he had received the information from one or more unidentified dissidents within the German High Command.[9] Another, less likely, possibility Knightley suggests is that the information came from the Swiss secret service.[9]
V. E. Tarrant echoes Knightley's objections, and in addition points out that Read and Fisher's scenario was unnecessary, as Britain was already passing Ultra information to the Soviet Union following the German invasion in June 1941. While not wishing to reveal Britain's penetration of Enigma, Churchill ordered selected Ultra information to be passed via the British Military Mission in Moscow, reported as coming from "a well-placed source in Berlin," or "a reliable source".[10]However, as the Soviets showed little interest in co-operation on intelligence matters, refusing to share Soviet intelligence that would be useful to Britain (such as information on German air forces in the Eastern Front) or agreeing to use the Soviet mission in London as a transmission route, the British cut back the flow of information in the spring of 1942, and by the summer it had dwindled to a trickle. This hypothesis, that Britain lost the motivation to share intelligence with Stalin after this time, is also at variance with Read and Fisher's theory.
|
https://en.wikipedia.org/wiki/Lucy_spy_ring
|
Mercury was a British cipher machine used by the Air Ministry from 1950 until at least the early 1960s. Mercury was an online rotor machine descended from Typex, but modified to achieve a longer cycle length using a so-called double-drum basket system.
Mercury was designed by Wing Commander E. W. Smith and F. Rudd, who were awarded £2,250 and £750 respectively in 1960 for their work in the design of the machine. E. W. Smith, one of the developers of Typex, had designed the double-drum basket system in 1943, on his own initiative, to fulfil the need for an on-line system.
Mercury prototypes were operational by 1948, and the machine was in use by 1950. Over 200 Mercury machines had been made by 1959, with over £250,000 spent on its production. Mercury links were installed between the UK and various overseas stations, including in Canada, Australia, Singapore, Cyprus, Germany, France, the Middle East, Washington, Nairobi and Colombo. The machine was used for UK diplomatic messaging for more or less a decade, but saw almost no military use.
In 1960, it was anticipated that the machine would remain in use until 1963, when it would be made obsolete by the arrival of BID 610 (Alvis) equipment.
A miniaturised version of Mercury was designed, named Ariel, but this machine appears not to have been adopted for operational use.
In the Mercury system, two series of rotors were used. The first series, dubbed the control maze, had four rotors and stepped cyclometrically, as in Typex. Five outputs from the control maze were used to determine the stepping of five rotors in the second series of rotors, the message maze, which was used to encrypt and decrypt the plaintext and ciphertext. A sixth rotor in the message maze was controlled independently and stepped in the opposite direction to the others. All ten rotors were interchangeable in any part of either maze. Using rotors to control the stepping of other rotors was a feature of an earlier cipher machine, the US ECM Mark II.
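The two-maze idea can be illustrated with a schematic sketch. Everything below (the random rotor wirings, the rule for reading five "step" signals off the control maze, and the helper names) is invented for illustration and is not a model of the real machine's wiring.

import random

random.seed(0)
ALPHA = 26

def make_rotor():
    # A rotor is modelled here as a random permutation of 26 contacts.
    wiring = list(range(ALPHA))
    random.shuffle(wiring)
    return wiring

control = [make_rotor() for _ in range(4)]   # control maze: steps like an odometer
control_pos = [0] * 4
message_pos = [0] * 6                        # message maze: 5 driven rotors + 1 independent

def step_control():
    # Cyclometric stepping: the first rotor steps every time; each further rotor
    # steps only when the one before it completes a revolution (as in Typex).
    for i in range(4):
        control_pos[i] = (control_pos[i] + 1) % ALPHA
        if control_pos[i] != 0:
            break

def control_signals():
    # Pass five live inputs through the control maze and read where they emerge;
    # "even contact means step" is an arbitrary rule chosen for this sketch.
    outs = []
    for inp in range(5):
        c = inp
        for rotor, pos in zip(control, control_pos):
            c = rotor[(c + pos) % ALPHA]
        outs.append(c)
    return [o % 2 == 0 for o in outs]

def advance():
    # Five message rotors step only when their control signal is live;
    # the sixth steps every time, in the opposite direction.
    for i, fire in enumerate(control_signals()):
        if fire:
            message_pos[i] = (message_pos[i] + 1) % ALPHA
    message_pos[5] = (message_pos[5] - 1) % ALPHA
    step_control()

for _ in range(3):
    advance()
print(message_pos)   # irregular stepping for the first five rotors, regular for the sixth

The point of the arrangement, which the sketch preserves, is that the message rotors advance irregularly under control of another rotor bank, giving a far longer cycle than simple odometer stepping.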
Mercury also used double-wired rotors, consisting of "Inside and Outside Scrambled Wheels", the Outer wheels being settable in a number of positions with respect to the Inner wheels.
It had been mathematically determined that Typex had a sufficiently large cycle to permit only 750 characters to be sent using a single arrangement of its rotors without fear of compromising security. A counter recorded the number of keystrokes and when these reached 750 a predetermined rotor was manually advanced one step, thus permitting TypeX to safely encrypt messages with more than 750 key strokes.
Mercury, with its longer cycle length, was judged to be safe even after 56,700 characters had passed on one setting of the rotors. This cycle length was sufficient for the machine to be used as an on-line cipher machine with traffic-flow security: the machine would transmit continuously, even if not sending a message.
|
https://en.wikipedia.org/wiki/Mercury_(cipher_machine)
|
SIGCUM, also known as Converter M-228, was a rotor cipher machine used to encrypt teleprinter traffic by the United States Army. Hastily designed by William Friedman and Frank Rowlett, the system was put into service in January 1943 before any rigorous analysis of its security had taken place. SIGCUM was subsequently discovered to be insecure by Rowlett, and was immediately withdrawn from service. The machine was redesigned to improve its security, reintroduced into service by April 1943, and remained in use until the 1960s.
In 1939, Friedman and Rowlett worked on the problem of creating a secure teleprinter encryption system. They decided against using a tape-based system, such as those proposed by Gilbert Vernam, and instead conceived of the idea of generating a stream of five-bit pulses by use of wired rotors. Because of lack of funds and interest, however, the proposal was not pursued any further at that time. This changed with the United States' entry into World War II in December 1941. Rowlett was assigned to develop a teleprinter encryption system for use between Army command centers in the United Kingdom and Australia (and later in North Africa).
Friedman described to Rowlett a concrete design for a teleprinter cipher machine that he had invented. However, Rowlett discovered some flaws in Friedman's proposed circuitry that showed the design to be flawed. Under pressure to report to a superior about the progress of the machine, Friedman responded angrily, accusing Rowlett of trying to destroy his reputation as a cryptanalyst. After Friedman calmed down, Rowlett proposed some designs for a replacement machine based on rotors. They settled on one, and agreed to write up a complete design and have it reviewed by another cryptanalyst by the following day.
The design agreed upon was a special attachment for a standard teleprinter. The attachment used a stack of five 26-contact rotors, the same as those used in the SIGABA, the highly secure US off-line cipher machine. Each time a key character was needed, thirteen inputs to the rotor stack were energized at the input endplate. Passing through the rotor stack, these thirteen inputs were to be scrambled at the output endplate. However, only five live contacts would be used. These five outputs would form five binary impulses, which would form the keystream for the cipher, to be combined with the message itself, encoded in the 5-bit Baudot code.
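In teleprinter stream ciphers of this kind, combining a five-bit keystream with five-bit character codes is a Vernam-style bitwise addition modulo 2 (XOR), and decryption repeats the same operation. The fragment below shows only that combining step, with a tiny invented code table and a hard-coded stand-in keystream rather than anything derived from the SIGCUM rotor maze.

# Toy illustration of XOR-combining a 5-bit keystream with 5-bit character codes.
# The code table is a small invented subset, not the real Baudot/ITA2 assignments.
CODE = {'A': 0b00011, 'B': 0b11001, 'D': 0b01001, 'E': 0b00001}
DECODE = {v: k for k, v in CODE.items()}

def encipher(plaintext, keystream):
    """XOR each character's 5-bit code with the next 5-bit key value."""
    return [CODE[ch] ^ k for ch, k in zip(plaintext, keystream)]

def decipher(ciphertext, keystream):
    """XOR again with the same keystream to recover the plaintext."""
    return ''.join(DECODE[c ^ k] for c, k in zip(ciphertext, keystream))

keystream = [0b10110, 0b01101, 0b11100]   # stand-in for rotor-generated key values
ct = encipher("BAD", keystream)
assert decipher(ct, keystream) == "BAD"
print([format(c, '05b') for c in ct])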
The rotors advanced odometrically; that is, after each encipherment, the "fast" rotor would advance one step. Once every revolution of the fast rotor, the "medium" rotor would step once. Similarly, once every revolution of the medium rotor, the "slow" rotor would step, and so on for the other two rotors. However, which rotor served as the "fast", "medium", "slow", etc., rotor was controlled by a set of five multi-switches. This gave a total of $5! = 120$ different rotor stepping patterns. The machine was equipped with a total of 10 rotors, each of which could be inserted "direct" or reversed, yielding $10 \times 9 \times 8 \times 7 \times 6 \times 2^5 = 967{,}680$ possible rotor orderings and alignments.
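The keyspace figures quoted in this paragraph can be checked with a couple of lines; this is only a restatement of the arithmetic, not part of any historical description (math.perm requires Python 3.8 or later).

from math import factorial, perm

stepping_patterns = factorial(5)            # assignment of fast/medium/slow/... roles
rotor_orderings = perm(10, 5) * 2 ** 5      # ordered choice of 5 of the 10 rotors,
                                            # each inserted direct or reversed
print(stepping_patterns, rotor_orderings)   # 120 967680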
The design for this machine, which was designated the Converter M-228, or SIGCUM, was given to theTeletype Corporation, who were also producingSIGABA. Rowlett recommended that the adoption of the machine be postponed until after a study of its cryptographic security, but SIGCUM was urgently needed by the Army, and the machine was put into production. Rowlett then proposed that the machine used in the Pentagon code room be monitored by connecting a page-printing "spy machine". The output could be then studied to establish whether the machine was resistant to attack. Rowlett's suggestion was implemented at the same time the first M-228 machines were installed at the Pentagon in January 1943, used for theWashington-Algierslink.
The machines worked as planned, and, initially, Rowlett's study of its security, joined by cryptanalyst Robert Ferner, uncovered no signs of cryptographic weakness. However, after a few days, a SIGCUM operator made a serious operating error, retransmitting the same message twice using the same machine settings, producing a depth.
From this, Rowlett was able to deduce the underlying plaintext and keystream used by the machine. By 2 a.m., an analysis of the keystream allowed him to deduce the wiring of the fast and medium rotors, and of the output wiring. SIGCUM was immediately withdrawn from service, and work on a replacement system, SIGTOT, a one-time tape machine designed by Leo Rosen, was given top priority.
Meanwhile, the M-228 was redesigned to improve its security. Only five inputs, rather than thirteen, were energized. Instead of five output contacts being used directly as the five output bits, each output bit was fed by three leads connected to three different output points. That meant that an output bit could be energized by any of three different outputs from the rotor maze, making analysis of the machine more complex. The reduced number of inputs ensured that the generated key would not be biased.
The rotor stepping was also made more complex. The slowest two rotors, which originally were unlikely to step during the course of an encipherment, were redesigned so that their stepping depended on the previous key output. One rotor, designated the "fast bump" rotor, would step if the fourth and fifth bits of the previous output were both true; similarly, the "slow bump" rotor would do the same for the first, second and third bits.
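The bump rule is easy to state in code. The bit-numbering convention below (bit 1 taken as the most significant of the five) is an assumption made for this sketch; the paragraph above does not specify it.

def bump_steps(prev_output):
    """Given the previous 5-bit key output, decide whether the two 'bump'
    rotors advance.  Bit 1 is taken as the most significant bit (assumed)."""
    bits = [(prev_output >> (4 - i)) & 1 for i in range(5)]      # bits[0] = bit 1
    fast_bump = bits[3] == 1 and bits[4] == 1                    # bits 4 and 5 both set
    slow_bump = bits[0] == 1 and bits[1] == 1 and bits[2] == 1   # bits 1-3 all set
    return fast_bump, slow_bump

print(bump_steps(0b00011))   # (True, False)
print(bump_steps(0b11100))   # (False, True)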
Certain of the rotor stepping arrangements were discovered to be weaker than others, and so these were ruled out for key lists.
This redesigned version of the M-228 was put into service by April 1943. However, the machine was judged to be only secure enough to handle traffic up to SECRET by landline, and CONFIDENTIAL by radio. The machine was also shared with the United Kingdom for joint communications.
A further modified version of the M-228, which could be used for the highest-level traffic, was designated M-228-M, or SIGHUAD.
From that point on, the Army monitored the communications of its high-level systems to ensure that good operational procedure was being followed, even for highly secure devices such as the SIGABA and SIGTOT devices. As a result, poor operator practices, such as transmitting messages in depth, were largely eliminated.
|
https://en.wikipedia.org/wiki/SIGCUM
|
The bomba, or bomba kryptologiczna (Polish for "bomb" or "cryptologic bomb"), was a special-purpose machine designed around October 1938 by Polish Cipher Bureau cryptologist Marian Rejewski to break German Enigma-machine ciphers.
How the machine came to be called a "bomb" has been an object of fascination and speculation. One theory, most likely apocryphal, originated with the Polish engineer and army officer Tadeusz Lisicki (who knew Rejewski and his colleague Henryk Zygalski in wartime Britain but was never associated with the Cipher Bureau). He claimed that Jerzy Różycki (the youngest of the three Enigma cryptologists, and who had died in a Mediterranean passenger-ship sinking in January 1942) named the "bomb" after an ice-cream dessert of that name. This story seems implausible, since Lisicki had not known Różycki.
Rejewski himself stated that the device had been dubbed a "bomb" "for lack of a better idea".[1]
Perhaps the most credible explanation is given by a Cipher Bureau technician, Czesław Betlewski: workers at B.S.-4, the Cipher Bureau's German section, christened the machine a "bomb" (also, alternatively, a "washing machine" or a "mangle") because of the characteristic muffled noise that it produced when operating.[2]
A top-secret U.S. Army report dated 15 June 1945 stated:
A machine called the "bombe" is used to expedite the solution. The firstmachinewas built by the Poles and was a hand operated multiple enigma machine. When a possible solution was reached a part would fall off the machine onto the floor with a loud noise. Hence the name "bombe".[3]
The U.S. Army's above description of the Polishbombais both vague and inaccurate, as is clear from the device's description at the end of the second paragraph of the "History" section, below: "Each bomb... essentially constituted anelectrically poweredaggregate ofsixEnigmas..." Determination of a solution involved no disassembly ("a part... fall[ing] off") of the device.
The German Enigma used a combination key to control the operation of the machine: which rotors to install and in what order, the ring setting for each rotor, the initial setting for each rotor, and the settings of the stecker plugboard. The initial rotor settings were trigrams (for example, "NJR") indicating how the operator was to set the machine. German Enigma operators were issued lists of these keys, one key for each day. For added security, however, each individual message was encrypted using an additional key modification. The operator randomly selected a trigram rotor setting for each message (for example, "PDN"). This message key would be typed twice ("PDNPDN") and encrypted using the daily key (all the rest of those settings). Each operator would then reset his machine to the message key, which would be used for the rest of the message. Because the configuration of the Enigma's rotor set changed with each depression of a key, the repetition would not be obvious in the ciphertext, since the same plaintext letters would encrypt to different ciphertext letters. (For example, "PDNPDN" might become "ZRSJVL.")
This procedure, which seemed reasonably secure to the Germans, was nonetheless a cryptographic malpractice, since the first insights into Enigma encryption could be inferred from seeing how the same character string was encrypted differently two times in a row.
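The weakness is easy to see in code: in every intercepted indicator, positions 1 and 4, 2 and 5, and 3 and 6 are encryptions of the same unknown letter at different rotor positions. The sketch below simply collects those pairings from a day's traffic; the function name and the sample indicators are hypothetical.

```python
from collections import defaultdict

def indicator_relations(indicators):
    """Letter relations exposed by the doubled message key.

    Each six-letter indicator is the encryption of the same three-letter
    message key typed twice, so positions 1&4, 2&5 and 3&6 encrypt the same
    (unknown) plaintext letter at different rotor positions.
    """
    relations = [defaultdict(set) for _ in range(3)]
    for ind in indicators:
        for i in range(3):
            relations[i][ind[i]].add(ind[i + 3])
    return relations

# With enough traffic each mapping fills out into a permutation of the
# alphabet, whose structure the Polish methods exploited.
day_traffic = ["ZRSJVL", "QWEASD"]   # hypothetical intercepted indicators
print(indicator_relations(day_traffic)[0])
```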
Using the knowledge that the first three letters of a message were the same as the second three, Polish mathematician–cryptologistMarian Rejewskiwas able to determine the internal wiring of the Enigma machine and thus to reconstruct the logical structure of the device. Only general traits of the machine were suspected, from the example of the commercial Enigma variant, which the Germans were known to have been using for diplomatic communications. The military versions were sufficiently different to present an entirely new problem. Having done that much, it was still necessary to check each of the potential daily keys to break an encrypted message (i.e., a "ciphertext"). With many thousands of such possible keys, and with the growing complexity of the Enigma machine and its keying procedures, this was becoming an increasingly daunting task.
In order to mechanize and speed up the process, Rejewski, a civilian mathematician working at the Polish General Staff's Cipher Bureau inWarsaw, invented the"bomba kryptologiczna"(cryptologic bomb), probably in October 1938. Each bomb (six were built in Warsaw for the Cipher Bureau before September 1939) essentially constituted an electrically powered aggregate of six Enigmas and took the place of some one hundred workers.[4]
The bomb method was based, like the Poles' earlier"grill" method, on the fact that the plug connections in the commutator ("plugboard") did not change all the letters. But while the grill method required unchangedpairsof letters, the bomb method required only unchanged letters. Hence it could be applied even though the number of plug connections in this period was between five and eight. In mid-November 1938, the bombs were ready, and the reconstructing of daily keys now took about two hours.[5]
Up to July 25, 1939, the Poles had been breaking Enigma messages for over six and a half years without telling theirFrenchandBritishallies. On December 15, 1938, two new rotors, IV and V, were introduced (three of the now five rotors being selected for use in the machine at a time). As Rejewski wrote in a 1979 critique of appendix 1, volume 1 (1979), of the official history ofBritish Intelligence in the Second World War, "we quickly found the [wirings] within the [new rotors], but [their] introduction [...] raised the number of possible sequences of drums from 6 to 60 [...] and hence also raised tenfold the work of finding the keys. Thus the change was not qualitative but quantitative. We would have had to markedly increase the personnel to operate the bombs, to produce theperforated sheets(60 series of 26 sheets each were now needed, whereas up to the meeting on July 25, 1939, we had only two such series ready) and to manipulate the sheets."[6]
Harry Hinsleysuggested inBritish Intelligence in the Second World Warthat the Poles decided to share their Enigma-breaking techniques and equipment with the French and British in July 1939 because they had encountered insuperable technical difficulties. Rejewski rejected this: "No, it was not [cryptologic] difficulties [...] that prompted us to work with the British and French, but only the deteriorating political situation. If we had had no difficulties at all we would still, or even the more so, have shared our achievements with our allies asour contribution to the struggle against Germany."[6]
|
https://en.wikipedia.org/wiki/Bomba_(cryptography)
|
Thebombe(UK:/bɒmb/) was anelectro-mechanicaldevice used by Britishcryptologiststo help decipher GermanEnigma-machine-encrypted secret messages duringWorld War II.[1]TheUS Navy[2]andUS Army[3]later produced their own machines to the same functional specification, albeit engineered differently both from each other and from Polish and British bombes.
The British bombe was developed from a device known as the "bomba" (Polish:bomba kryptologiczna), which had been designed in Poland at theBiuro Szyfrów(Cipher Bureau) by cryptologistMarian Rejewski, who had been breaking GermanEnigmamessages for the previous seven years, using it and earlier machines. The initial design of the British bombe was produced in 1939 at the UKGovernment Code and Cypher School(GC&CS) atBletchley ParkbyAlan Turing,[4]with an important refinement devised in 1940 byGordon Welchman.[5]The engineering design and construction was the work ofHarold Keenof theBritish Tabulating Machine Company. The first bombe, code-namedVictory, was installed in March 1940[6]while the second version,Agnus DeiorAgnes, incorporating Welchman's new design, was working by August 1940.[7]
The bombe was designed to discover some of the daily settings of the Enigma machines on the various German militarynetworks: specifically, the set ofrotorsin use and their positions in the machine; the rotor core start positions for the message—the messagekey—and one of the wirings of theplugboard.[8][9][10]
The Enigma is anelectro-mechanicalrotor machineused for theencryptionand decryption of secret messages. It was developed in Germany in the 1920s. The repeated changes of the electrical pathway from the keyboard to the lampboard implement apolyalphabetic substitutioncipher, which turnsplaintextintociphertextand back again. The Enigma's scrambler contains rotors with 26 electrical contacts on each side, whose wiring diverts the current to a different position on the two sides. When a key is pressed on the keyboard, an electric current flows through an entry drum at the right-hand end of the scrambler, then through the set of rotors to areflecting drum(or reflector) which turns it back through the rotors and entry drum, and out to illuminate one of the lamps on the lampboard.[11]
At each key depression, the right-hand or "fast" rotor advances one position, which causes the encipherment to change. In addition, once per rotation, the right-hand rotor causes the middle rotor to advance; the middle rotor similarly causes the left-hand (or "slow") rotor to advance. Each rotor's position is indicated by a letter of the alphabet showing through a window. The Enigma operator rotates the wheels by hand to set the start position for enciphering or deciphering a message. The three-letter sequence indicating the start position of the rotors is the "message key". There are 26³ = 17,576 different message keys and different positions of the set of three rotors. By opening the lid of the machine and releasing a compression bar, the set of three rotors on their spindle can be removed from the machine and their sequence (called the "wheel order" at Bletchley Park) altered. Multiplying 17,576 by the six possible wheel orders gives 105,456 different ways that the scrambler can be set up.[12]
Although 105,456 is a large number,[13]it does not guarantee security. A brute-force attack is possible: one could imagine using 100 code clerks who each tried to decode a message using 1000 distinct rotor settings. The Poles developed card catalogs so they could easily find rotor positions; Britain built "EINS" (the German word for one) catalogs. Less intensive methods were also possible. If all message traffic for a day used the same rotor starting position, then frequency analysis for each position could recover the polyalphabetic substitutions. If different rotor starting positions were used, then overlapping portions of a message could be found using the index of coincidence.[14]Many major powers (including the Germans) could break Enigma traffic if they knew the rotor wiring. The German military knew the Enigma was weak.[15]
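As a rough illustration of the index-of-coincidence idea, the coincidence rate between two aligned ciphertexts can be computed in a few lines of Python. The function name and the sample intercepts below are invented for the example; this is an illustration of the statistic, not a reconstruction of any historical tooling.

```python
def coincidence_rate(a, b):
    """Fraction of aligned positions at which two ciphertexts share a letter.

    Two unrelated Enigma ciphertexts agree at roughly 1/26 ≈ 0.038 of
    positions; two stretches enciphered at identical rotor positions (a
    "depth") agree at the much higher plain-language rate, so sliding one
    intercept against another and watching this statistic exposes overlaps.
    """
    pairs = list(zip(a, b))
    return sum(x == y for x, y in pairs) / len(pairs)

# Hypothetical intercepts: slide one against the other and report the offset
# at which the coincidence rate peaks.
c1 = "QMJIDOMZWZJFJSDZXWGGH"
c2 = "PFUPNHXKTWVOSDLKQNBRA"
best = max(range(len(c2) // 2), key=lambda k: coincidence_rate(c1, c2[k:]))
print(best, round(coincidence_rate(c1, c2[best:]), 3))
```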
In 1930, the German army introduced an additional security feature, a plugboard (Steckerbrett in German; each plug is a Stecker, and the British cryptologists also used the word) that further scrambled the letters, both before and after they passed through the rotor-reflector system. The Enigma encryption is a self-inverse function, meaning that it substitutes letters reciprocally: if A is transformed into R, then R is transformed into A. The plugboard transformation maintained the self-inverse quality, but the plugboard wiring, unlike the rotor positions, does not change during the encryption. This regularity was exploited by Welchman's "diagonal board" enhancement to the bombe, which vastly increased its efficiency.[16]With six plug leads in use (leaving 14 letters "unsteckered"), there were 100,391,791,500 possible ways of setting up the plugboard.[17]
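The quoted figure can be checked with the standard counting argument: choose which letters are steckered, then count the ways of pairing them. A short sketch follows (the function name is ours).

```python
from math import comb, factorial

def plugboard_ways(leads, alphabet=26):
    """Ways of setting up a plugboard with `leads` cables among `alphabet` letters.

    Choose the 2*leads steckered letters, then pair them up; a set of 2p
    letters can be paired in (2p)! / (p! * 2**p) ways.
    """
    p = leads
    return comb(alphabet, 2 * p) * factorial(2 * p) // (factorial(p) * 2 ** p)

print(plugboard_ways(6))    # 100391791500, the figure quoted above
print(plugboard_ways(10))   # the ten-lead case introduced in 1939
```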
An important feature of the machine from a cryptanalyst's point of view, and indeed Enigma'sAchilles' heel, was that the reflector in the scrambler prevented a letter from being enciphered as itself. Any putative solution that gave, for any location, the same letter in the proposed plaintext and the ciphertext could therefore be eliminated.[18]
In the lead-up to World War II, the Germans made successive improvements to their military Enigma machines. By January 1939, additional rotors had been introduced so that three rotors were chosen from a set of five (hence there were now 60 possible wheel orders) for the army and air force Enigmas, and three out of eight (making 336 possible wheel orders) for the navy machines. In addition, ten leads were used on the plugboard, leaving only six letters unsteckered. This meant that the air force and army Enigmas could be set up in 1.5×10¹⁹ ways. In 1941 the German navy introduced a version of Enigma with a rotatable reflector (the M4 or Four-rotor Enigma) for communicating with its U-boats. This could be set up in 1.8×10²⁰ different ways.[17]
By late 1941 a change in German Navy fortunes in theBattle of the Atlantic, combined with intelligence reports, convinced AdmiralKarl Dönitzthat the Allies were able to read the German Navy's coded communications, and a fourth rotor with unknown wiring was added to German Navy Enigmas used for U-boat communications, producing theTritonsystem,[dubious–discuss]known at Bletchley Park asShark.[19]This was coupled with a thinner reflector design to make room for the extra rotor. The Triton was designed in such a way that it remained compatible with three-rotor machines when necessary: one of the extra 'fourth' rotors, the 'beta', was designed so that when it was paired with the thin 'B' reflector, and the rotor and ring were set to 'A', the pair acted as a 'B' reflector coupled with three rotors. Fortunately for the Allies, in December 1941, before the machine went into official service, a submarine accidentally sent a message with the fourth rotor in the wrong position, and then retransmitted the message with the rotor in the correct position to emulate the three-rotor machine. In February 1942 the change in the number of rotors used became official, and the Allies' ability to read German submarines' messages ceased until a snatch from a captured U-boat revealed not only the four-rotor machine's ability to emulate a three-rotor machine, but also that the fourth rotor did not move during a message. This along with the aforementioned retransmission eventually allowed the code breakers to figure out the wiring of both the 'beta' and 'gamma' fourth rotors.[citation needed]
The first half of 1942 was the "Second Happy Time" for the German U-boats, with renewed success in attacking Allied shipping, as the US had just entered the war unprepared for the onslaught, lacking anti-submarine warfare (ASW) aircraft, ships, personnel, doctrine and organization. Also, the security of the new Enigma and the Germans' ability to read Allied convoy messages sent in Naval Cipher No. 3 contributed to their success. Between January and March 1942, German submarines sank 216 ships off the US east coast. In May 1942 the US began using the convoy system and requiring a blackout of coastal cities so that ships would not be silhouetted against their lights, but this yielded only slightly improved security for Allied shipping. The Allies' failure to change their cipher for three months, together with the fact that Allied messages never contained any raw Enigma decrypts (or even mentioned that they were decrypting messages), helped convince the Germans that their messages were secure. Conversely, the Allies learned that the Germans had broken the naval cipher almost immediately from Enigma decrypts, but lost many ships due to the delay in changing the cipher.[citation needed]
The following settings of the Enigma machine must be discovered to decipher German military Enigma messages. Once these are known, all the messages for that network for that day (or pair of days in the case of the German navy) could be decrypted.
Internal settings (that required the lid of the Enigma machine to be opened)
External settings (that could be changed without opening the Enigma machine)
The bombe identified possible initial positions of the rotor cores and thestecker partnerof a specified letter for a set of wheel orders. Manual techniques were then used to complete the decryption process.[23]In the words ofGordon Welchman, "... the task of the bombe was simply to reduce the assumptions of wheel order and scrambler positions that required 'further analysis' to a manageable number".[24]
The bombe was an electro-mechanical device that replicated the action of severalEnigma machineswired together. A standard German Enigma employed, at any one time, a set of threerotors, each of which could be set in any of 26 positions. The standard British bombe contained 36 Enigma equivalents, each with three drums wired to produce the same scrambling effect as the Enigma rotors. A bombe could run two or three jobs simultaneously.
Each job would have a 'menu' that had to be run against a number of different wheel orders. If the menu contained 12 or fewer letters, three different wheel orders could be run on one bombe; if more than 12 letters, only two.
In order to simulate Enigma rotors, each rotor drum of the bombe had two complete sets of contacts, one for input towards the reflector and the other for output from the reflector, so that the reflected signal could pass back through a separate set of contacts. Each drum had 104 wire brushes, which made contact with the plate onto which they were loaded. The brushes and the corresponding set of contacts on the plate were arranged in four concentric circles of 26. The outer pair of circles (input and output) were equivalent to the current in an Enigma passing in one direction through the scrambler, and the inner pair equivalent to the current flowing in the opposite direction.
The interconnections within the drums between the two sets of input and output contacts were both identical to those of the relevant Enigma rotor. There was permanent wiring between the inner two sets of contacts of the three input/output plates. From there, the circuit continued to a plugboard located on the left-hand end panel, which was wired to imitate an Enigma reflector and then back through the outer pair of contacts. At each end of the "double-ended Enigma", there were sockets on the back of the machine, into which 26-way cables could be plugged.
The bombe drums were arranged with the top one of the three simulating the left-hand rotor of the Enigma scrambler, the middle one the middle rotor, and the bottom one the right-hand rotor. The top drums were all driven in synchrony by an electric motor. For each full rotation of the top drums, the middle drums were incremented by one position, and likewise for the middle and bottom drums, giving the total of 26 × 26 × 26 = 17,576 positions of the 3-rotor Enigma scrambler.[25][26]
The drums were colour-coded according to which Enigma rotor they emulated: I red; II maroon; III green; IV yellow; V brown; VI cobalt (blue); VII jet (black); VIII silver.[27]
At each position of the rotors, an electric current would or would not flow in each of the 26 wires, and this would be tested in the bombe's comparator unit. For a large number of positions, the test would lead to alogical contradiction, ruling out that setting. If the test did not lead to a contradiction, the machine would stop.
The operator would then find the point at which the test had passed and record the candidate solution by reading the positions of the indicator drums and the indicator unit on the bombe's right-hand end panel. The operator then restarted the run. The candidate solutions, stops as they were called, were processed further to eliminate as many false stops as possible. Typically, there were many false bombe stops before the correct one was found.
The candidate solutions for the set of wheel orders were subject to extensive further cryptanalytical work. This progressively eliminated the false stops, built up the set of plugboard connections and established the positions of the rotor alphabet rings.[28]Eventually, the result would be tested on aTypexmachine that had been modified to replicate an Enigma, to see whether thatdecryptionproducedGerman language.[29]
A bombe run involved a cryptanalyst first obtaining a crib — a section of plaintext that was thought to correspond to the ciphertext. Finding cribs was not at all straightforward; it required considerable familiarity with German military jargon and the communication habits of the operators. However, the codebreakers were aided by the fact that the Enigma would never encrypt a letter to itself. This helped in testing a possible crib against the ciphertext, as it could rule out a number of cribs and positions, where the same letter occurred in the same position in both the plaintext and the ciphertext. This was termed a crash at Bletchley Park.
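That first filter is simple enough to show in code. The sketch below slides a crib along a hypothetical ciphertext and keeps only the crash-free alignments; it illustrates the principle rather than reconstructing Bletchley practice.

```python
def crash_free_positions(ciphertext, crib):
    """Alignments of a crib against a ciphertext with no "crash".

    Because the Enigma never enciphered a letter to itself, any alignment in
    which the crib and ciphertext share a letter at the same position can be
    rejected immediately.
    """
    hits = []
    for start in range(len(ciphertext) - len(crib) + 1):
        window = ciphertext[start:start + len(crib)]
        if all(p != c for p, c in zip(crib, window)):
            hits.append(start)
    return hits

# Hypothetical intercept; only crash-free placements of the crib survive.
print(crash_free_positions("QFZWRWIVTYRESXBFOGKUHQBAISE", "ATTACKATDAWN"))
```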
Once a suitable crib had been decided upon, the cryptanalyst would produce a menu for wiring up the bombe to test the crib against the ciphertext. The following is a simplified explanation of the process of constructing a menu. Suppose that the crib is ATTACKATDAWN, to be tested against a certain stretch of ciphertext, say, WSNPNLKLSTCS. The letters of the crib and the ciphertext were compared to establish pairings between the ciphertext and the crib plaintext. These were then graphed as in the diagram. It should be borne in mind that the relationships are reciprocal, so that A in the plaintext associated with W in the ciphertext is the same as W in the plaintext associated with A in the ciphertext. At position 1 of the plaintext-ciphertext comparison, the letter A is associated with W, but A is also associated with P at position 4, K at position 7 and T at position 10. Building up these relationships into such a diagram provided the menu from which the bombe connections and drum start positions would be set up.
In the illustration, there are three sequences of letters which form loops (or cycles or closures), ATLK, TNS and TAWCN. The more loops in the menu, the more candidate rotor settings the bombe could reject, and hence the fewer false stops.
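The pairing step itself is mechanical and can be sketched as follows, using the crib and ciphertext from the example above. The data structure and function name are chosen only for illustration.

```python
def build_menu(crib, ciphertext):
    """Letter pairings (the "menu") from a crib aligned with a ciphertext.

    Each position i gives a reciprocal link between the plaintext and
    ciphertext letters at that position; the resulting graph, and especially
    its loops, determined how the bombe's Enigma replicas were plugged up.
    """
    menu = {}
    for i, (p, c) in enumerate(zip(crib, ciphertext), start=1):
        menu.setdefault(p, set()).add((c, i))
        menu.setdefault(c, set()).add((p, i))   # links are reciprocal
    return menu

menu = build_menu("ATTACKATDAWN", "WSNPNLKLSTCS")
print(sorted(menu["A"]))   # A is linked to W, P, K and T (positions 1, 4, 7, 10)
```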
Alan Turing conducted a very substantial analysis (without any electronic aids) to estimate how many bombe stops would be expected according to the number of letters in the menu and the number of loops. Some of his results are given in the following table.[30]Recent bombe simulations have shown similar results.
The German military Enigma included a plugboard (Steckerbrett in German) which swapped letters (indicated here by P) before and after the main scrambler's change (indicated by S). The plugboard connections were known to the cryptanalysts as Stecker values. If there had been no plugboard, it would have been relatively straightforward to test a rotor setting; a Typex machine modified to replicate Enigma could be set up and the crib letter A encrypted on it, and compared with the ciphertext, W. If they matched, the next letter would be tried, checking that T encrypted to S and so on for the entire length of the crib. If at any point the letters failed to match, the initial rotor setting would be rejected; most incorrect settings would be ruled out after testing just two letters. This test could be readily mechanised and applied to all 17,576 settings of the rotors.
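Such a trial-and-reject sweep is easy to express in software. In the sketch below the scrambler is a deliberately artificial stand-in (not Enigma wiring, merely a position-dependent substitution that never maps a letter to itself), so the example is self-contained and shows only the early-rejection idea.

```python
import string

ALPHA = string.ascii_uppercase

def toy_scrambler(position, letter):
    # Purely illustrative stand-in, not real Enigma wiring: a shift that
    # depends on the rotor position and never maps a letter to itself.
    shift = position % 25 + 1
    return ALPHA[(ALPHA.index(letter) + shift) % 26]

def test_setting(scrambler, start, crib, ciphertext):
    """Trial-encrypt a crib at one candidate rotor setting, rejecting early.

    `scrambler(pos, letter)` stands in for a Typex/Enigma scrambler at rotor
    position `pos`; `start` is the candidate position at the first crib
    letter. Most wrong settings fail after only one or two letters.
    """
    for offset, (p, c) in enumerate(zip(crib, ciphertext)):
        if scrambler(start + offset, p) != c:
            return False      # mismatch: reject this setting
    return True               # the whole crib matches: a candidate setting

# Demo: encipher a crib with the toy scrambler at a secret setting, then sweep
# all 26^3 start positions, which is the mechanisable search described above.
secret = 4242
ct = "".join(toy_scrambler(secret + i, p) for i, p in enumerate("ATTACKATDAWN"))
hits = [s for s in range(26 ** 3) if test_setting(toy_scrambler, s, "ATTACKATDAWN", ct)]
print(secret in hits)   # True: the secret setting survives the crib test
```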
However, with the plugboard, it was much harder to perform trial encryptions because it was unknown what the crib and ciphertext letters were transformed to by the plugboard. For example, in the first position, P(A) and P(W) were unknown because the plugboard settings were unknown.
Turing's solution to working out the stecker values (plugboard connections) was to note that, even though the values for, say, P(A) or P(W), were unknown, the crib still provided known relationships amongst these values; that is, the values after the plugboard transformation. Using these relationships, a cryptanalyst could reason from one to another and, potentially, derive a logical contradiction, in which case the rotor setting under consideration could be ruled out.
A worked example of such reasoning might go as follows: a cryptanalyst might suppose that P(A) = Y. Looking at position 10 of the crib:ciphertext comparison, we observe that A encrypts to T, or, expressed as a formula: P(S10(P(A))) = T.
Due to the function P being its own inverse, we can apply it to both sides of the equation and obtain the following: S10(P(A)) = P(T).
This gives us a relationship between P(A) and P(T). If P(A) = Y, and for the rotor setting under consideration S10(Y) = Q (say), we can deduce that P(T) = Q.
While the crib does not allow us to determine what the values after the plugboard are, it does provide a constraint between them. In this case, it shows how P(T) is completely determined if P(A) is known.
Likewise, we can also observe that T encrypts to L at position 8. Using S8, we can deduce the steckered value for L as well using a similar argument, to get, say, P(L) = G.
Similarly, in position 6, K encrypts to L. As the Enigma machine is self-reciprocal, this means that at the same position L would also encrypt to K. Knowing this, we can apply the argument once more to deduce a value for P(K), which might be: P(K) = F.
And again, the same sort of reasoning applies at position 7 (where K and A are paired) to get a second value for the stecker partner of A, say: P(A) = E.
However, in this case we have derived a contradiction, since, by hypothesis, we assumed that P(A) = Y at the outset. This means that the initial assumption must have been incorrect, and so (for this rotor setting) P(A) ≠ Y (this type of argument is termed reductio ad absurdum or "proof by contradiction").
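In software terms, the test amounts to chasing each of the 26 possible hypotheses for a stecker value around one loop of the menu and keeping only those that return to themselves. The sketch below is a software analogue of that idea rather than a model of the bombe's electrical circuit, and its scramblers are arbitrary stand-ins, not real rotor stacks.

```python
import string

ALPHA = string.ascii_uppercase

def consistent_hypotheses(loop_scramblers):
    """Software analogue of the test on the menu loop A-T-L-K-A.

    `loop_scramblers` holds one function per crib position around the loop
    (here positions 10, 8, 6 and 7), each mapping a steckered letter through
    the rotor stack at that position. A hypothesis Y for P(A) is chased around
    the loop, giving P(T), P(L), P(K) and finally P(A) again, and survives
    only if it returns to itself. If nothing survives, the rotor setting under
    test can be rejected.
    """
    survivors = []
    for y in ALPHA:
        value = y
        for s in loop_scramblers:
            value = s(value)
        if value == y:
            survivors.append(y)
    return survivors

# Toy demonstration with arbitrary stand-in scramblers (not real rotor wirings):
shifts = [3, 7, 11, 4]
toy = [lambda c, k=k: ALPHA[(ALPHA.index(c) + k) % 26] for k in shifts]
print(consistent_hypotheses(toy))  # []: no hypothesis survives, so this toy setting is rejected
```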
The cryptanalyst hypothesised one plugboard interconnection for the bombe to test. The other stecker values and the ring settings were worked out by hand methods.
To automate these logical deductions, the bombe took the form of an electrical circuit. Current flowed around the circuit near-instantaneously, and represented all the possible logical deductions which could be made at that position. To form this circuit, the bombe used several sets of Enigma rotor stacks wired up together according to the instructions given on a menu, derived from a crib. Because each Enigma machine had 26 inputs and outputs, the replica Enigma stacks were connected to each other using 26-way cables. In addition, each Enigma stack's rotor setting was offset a number of places determined by its position in the crib; for example, an Enigma stack corresponding to the fifth letter in the crib would be four places further on than that corresponding to the first letter.
Practical bombes used several stacks of rotors spinning together to test multiple hypotheses about possible setups of the Enigma machine, such as the order of the rotors in the stack.
While Turing's bombe worked in theory, it required impractically long cribs to rule out sufficiently large numbers of settings.Gordon Welchmancame up with a way of using the symmetry of the Enigma stecker to increase the power of the bombe. His suggestion was an attachment called thediagonal boardthat further improved the bombe's effectiveness.[5]
The Polish cryptologicbomba(Polish:bomba kryptologiczna; pluralbomby) had been useful only as long as three conditions were met. First, the form of the indicator had to include the repetition of the message key; second, the number of rotors available had to be limited to three, giving six different "wheel orders" (the order of the three rotors within the machine); and third, the number of plug-board leads had to remain relatively small so that the majority of letters wereunsteckered.[dubious–discuss]Six machines were built, one for each possible rotor order. Thebombywere delivered in November 1938, but barely a month later the Germans introduced two additional rotors for loading into the Enigma scrambler, increasing the number of wheel orders by a factor of ten. Building another 54bombywas beyond the Poles' resources. Also, on 1 January 1939, the number of plug-board leads was increased to ten. The Poles therefore had to return to manual methods, theZygalski sheets.
Alan Turingdesigned the British bombe on a more general principle, the assumption of the presence of text, called acrib, that cryptanalysts could predict was likely to be present at a defined point in the message. This technique is termed aknown plaintext attackand had been used to a limited extent by the Poles, e.g., the Germans' use of "ANX" — "AN", German for "To", followed by "X" as a spacer.
A £100,000 budget for the construction of Turing's machine was acquired and the contract to build the bombes was awarded to the British Tabulating Machine Company (BTM) at Letchworth.[31]BTM placed the project under the direction of Harold 'Doc' Keen. Each machine was about 7 feet (2.1 m) wide, 6 feet 6 inches (1.98 m) tall, 2 feet (0.61 m) deep and weighed about a ton.[32]On the front of each bombe were 108 places where drums could be mounted. The drums were in three groups of 12 triplets. Each triplet, arranged vertically, corresponded to the three rotors of an Enigma scrambler. The bombe drums' input and output contacts went to cable connectors, allowing the bombe to be wired up according to the menu. The 'fast' drum rotated at a speed of 50.4 rpm in the first models[33]and 120 rpm in later ones,[34]when the time to set up and run through all 17,576 possible positions for one rotor order was about 20 minutes.[35]
The first bombe was named "Victory". It was installed in "Hut 1" at Bletchley Park on 18 March 1940. It was based on Turing's original design and so lacked a diagonal board.[36]On 26 April 1940,HMSGriffincaptured a German trawler (Schiff 26, thePolares) flying a Dutch flag; included in the capture were some Enigma keys for 23 to 26 April.[37]Bletchley retrospectively attacked some messages sent during this period using the captured material and an ingenious Bombe menu where the Enigma fast rotors were all in the same position.[38]In May and June 1940, Bletchley succeeded in breaking six days of naval traffic, 22–27 April 1940.[39]Those messages were the first breaks ofKriegsmarinemessages of the war, "[b]ut though this success expanded Naval Section's knowledge of the Kriegsmarines's signals organization, it neither affected naval operations nor made further naval Enigma solutions possible."[40]The second bombe, named "Agnus dei", later shortened to "Agnes", or "Aggie", was equipped with Welchman's diagonal board, and was installed on 8 August 1940; "Victory" was later returned to Letchworth to have a diagonal board fitted.[41]The bombes were later moved from "Hut 1" to "Hut 11". The bombe was referred to by Group CaptainWinterbothamas a "Bronze Goddess" because of its colour.[42]The devices were more prosaically described by operators as being "like great big metal bookcases".[43]
During 1940, 178 messages were broken on the two machines, nearly all successfully. Because of the danger of bombes at Bletchley Park being lost if there were to be a bombing raid, bombe outstations[44]were established, at Adstock, Gayhurst and Wavendon, all in Buckinghamshire.[45]In June–August 1941 there were 4 to 6 bombes at Bletchley Park, and when Wavendon was completed, Bletchley, Adstock and Wavendon had a total of 24 to 30 bombes. When Gayhurst became operational there were a total of 40 to 46 bombes, and it was expected that the total would increase to about 70 bombes run by some 700 Wrens (Women's Royal Naval Service). But in 1942, with the introduction of the naval four-rotor Enigma, "far more than seventy bombes" would be needed. New outstations were established at Stanmore and Eastcote, and the Wavendon and Adstock bombes were moved to them, though the Gayhurst site was retained. The few bombes left at Bletchley Park were used for demonstration and training purposes only.[46]
Production of bombes by BTM at Letchworth in wartime conditions was nowhere near as rapid as the Americans later achieved atNCRin Dayton, Ohio.
Sergeant Jones (later a Squadron Leader, and not to be confused with Eric Jones) was given overall responsibility for bombe maintenance by Edward Travis. He was one of the original bombe maintenance engineers and was experienced in BTM techniques. Welchman said that later in the war, when other people tried to maintain the bombes, they realised how lucky they had been to have him. About 15 million delicate wire brushes on the drums had to make reliable contact with the terminals on the template. There were 104 brushes per drum, 720 drums per bombe, and ultimately around 200 bombes.[52]
After World War II, some fifty bombes were retained atRAF Eastcote, while the rest were destroyed. The surviving bombes were put to work, possibly onEastern blocciphers. Smith cites the official history of the bombe as saying that "some of these machines were to be stored away but others were required to run new jobs and sixteen machines were kept comparatively busy on menus." and "It is interesting to note that most of the jobs came up and the operating, checking and other times maintained were faster than the best times during the war periods."[53]
A program was initiated by Bletchley Park to design much faster bombes that could decrypt the four-rotor system in a reasonable time. There were two streams of development. One, code-named Cobra, with an electronic sensing unit, was produced byCharles Wynn-Williamsof theTelecommunications Research Establishment(TRE) at Malvern andTommy Flowersof theGeneral Post Office(GPO).[54]The other, code-named Mammoth, was designed byHarold KeenatBTM, Letchworth. Initial delivery was scheduled for August or September 1942.[47]The dual development projects created considerable tension between the two teams, both of which cast doubts on the viability of the opposing team's machine. After considerable internal rivalry and dispute,Gordon Welchman(by then, Bletchley Park's Assistant Director for mechanisation) was forced to step in to resolve the situation. Ultimately, Cobra proved unreliable and Mammoth went into full-scale production.[55]
Unlike the situation at Bletchley Park, the United States armed services did not share a combined cryptanalytical service. Indeed, there was considerable rivalry between theUS Army'sfacility, theSignals Intelligence Service (SIS), and that of theUS Navyknown asOP-20-G.[56]Before the US joined the war, there was collaboration with Britain, albeit with a considerable amount of caution on Britain's side because of the extreme importance of Germany and her allies not learning that its codes were being broken. Despite some worthwhile collaboration amongst the cryptanalysts, their superiors took some time to achieve a trusting relationship in which both British and American bombes were used to mutual benefit.
In February 1941, Captain Abe Sinkov and Lieutenant Leo Rosen of the US Army, and US Naval Lieutenants Robert Weeks andPrescott Currier, arrived at Bletchley Park bringing, amongst other things, a replica of the"Purple" cipher machinefor Bletchley Park's Japanese section inHut 7.[57]The four returned to America after ten weeks, with a naval radiodirection-findingunit and many documents[58]including a "paper Enigma".[59]
Currier later wrote:
There was complete cooperation. We went everywhere, including Hut 6. We watched the entire operation and had all the techniques explained in great detail. We were thoroughly briefed on the latest techniques in the solution of Enigma and the operations of the bombes. We had ample opportunity to take as many notes as we wanted and to watch first hand all operations involved.[60]
The main response to the Four-rotor Enigma was the US Navy bombe, which was manufactured in much less constrained facilities than were available in wartime Britain.
ColonelJohn Tiltman, who later became Deputy Director at Bletchley Park, visited the US Navy cryptanalysis office (OP-20-G) in April 1942 and recognised America's vital interest in deciphering U-boat traffic. The urgent need, doubts about the British engineering workload and slow progress, prompted the US to start investigating designs for a Navy bombe, based on the fullblueprintsand wiring diagrams received by US Naval Lieutenants Robert Ely and Joseph Eachus at Bletchley Park in July 1942.[62][16][63]Funding for a full, $2 million, navy development effort was requested on 3 September 1942 and approved the following day.
Commander Edward Travis, Deputy Director, and Frank Birch, Head of the German Naval Section, travelled from Bletchley Park to Washington in September 1942. With Carl Frederick Holden, US Director of Naval Communications, they established, on 2 October 1942, a UK:US accord which may have "a stronger claim than BRUSA to being the forerunner of the UKUSA Agreement," being the first agreement "to establish the special Sigint relationship between the two countries," and "it set the pattern for UKUSA, in that the United States was very much the senior partner in the alliance."[65]It established a relationship of "full collaboration" between Bletchley Park and OP-20-G.[16]
An all electronic solution to the problem of a fast bombe was considered,[16]but rejected for pragmatic reasons, and a contract was let with theNational Cash Register Corporation(NCR) inDayton, Ohio. This established theUnited States Naval Computing Machine Laboratory.[3]Engineering development was led by NCR'sJoseph Desch.
Alan Turing, who had written a memorandum to OP-20-G (probably in 1941),[66]was seconded to the British Joint Staff Mission in Washington in December 1942, because of his exceptionally wide knowledge about the bombes and the methods of their use. He was asked to look at the bombes that were being built by NCR and at the security of certainspeech cipher equipmentunder development at Bell Labs.[67]He visited OP-20-G, and went to NCR in Dayton on 21 December. He was able to show that it was not necessary to build 336 Bombes, one for each possible rotor order, by utilising techniques such asBanburismus.[16]The initial order was scaled down to 96 machines.
The US Navy bombes used drums for the Enigma rotors in much the same way as the British bombes. They had eight Enigma-equivalents on the front and eight on the back. The fast drum rotated at 1,725 rpm, 34 times the speed of the early British bombes. 'Stops' were detected electronically using thermionic valves (vacuum tubes)—mostly thyratrons—for the high-speed circuits. When a 'stop' was found[68]the machine over-ran as it slowed, reversed to the position found and printed it out before restarting. The running time for a 4-rotor run was about 20 minutes, and for a 3-rotor run, about 50 seconds.[69]Each machine was 10 feet (3.0 m) wide, 7 feet (2.1 m) high, 2 feet (0.61 m) deep and weighed 2.5 tons.
The first machine was completed and tested on 3 May 1943. By 22 June, the first two machines, called 'Adam' and 'Eve' broke a particularly difficult German naval cipher, theOffiziersettings for 9 and 10 June.[70]A P Mahon, who had joined the Naval Section in Hut 8 in 1941, reported in his official 1945 "History of Hut Eight 1939-1945":
The American bombe was in its essentials the same as the English bombe though it functioned rather better as they were not handicapped by having to make it, as Keen was forced to do owing to production difficulties, on the framework of a 3 wheel machine. By late autumn [1943] new American machines were coming into action at the rate of about 2 a week, the ultimate total being in the region of 125.[71]
These bombes were faster, and soon more available, than the British bombes at Bletchley Park and its outstations. Consequently, they were put to use for Hut 6 as well as Hut 8 work.[72]In Alexander's "Cryptographic History of Work on German Naval Enigma", he wrote as follows.
When the Americans began to turn out bombes in large numbers there was a constant interchange of signal - cribs, keys, message texts, cryptographic chat and so on. This all went by cable being first encyphered on the combined Anglo-American cypher machine,C.C.M.Most of the cribs being of operational urgency rapid and efficient communication was essential and a high standard was reached on this; an emergency priority signal consisting of a long crib with crib and message text repeated as a safeguard against corruption would take under an hour from the time we began to write the signal out in Hut 8 to the completion of its decyphering in Op. 20 G. As a result of this we were able to use the Op. 20 G bombes almost as conveniently as if they had been at one of our outstations 20 or 30 miles away.[73]Ch. VIII para. 11
Production was stopped in September 1944 after 121 bombes had been made.[69]The last-manufactured US Navy bombe is on display at the USNational Cryptologic Museum. Jack Ingram, former Curator of the museum, describes being told of the existence of a second bombe and searching for it but not finding it whole. Whether it remains in storage in pieces, waiting to be discovered, or no longer exists, is unknown.
The US Army Bombe was physically very different from the British and US Navy bombes. The contract for its creation was signed with Bell Labs on 30 September 1942.[74]The machine was designed to analyse 3-rotor, not 4-rotor, traffic. It was known as "003" or "Madame X".[75][76]It did not use drums to represent the Enigma rotors, using instead telephone-type relays. It could, however, handle one problem that the bombes with drums could not.[69][72]The set of ten bombes consisted of a total of 144 Enigma-equivalents, each mounted on a rack approximately 7 feet (2.1 m) long, 8 feet (2.4 m) high and 6 inches (150 mm) wide. There were 12 control stations which could allocate any of the Enigma-equivalents into the desired configuration by means of plugboards. Rotor order changes did not require the mechanical process of changing drums, but were achieved in about half a minute by means of push buttons.[68]A 3-rotor run took about 10 minutes.[69]
In 1994 a group led byJohn Harperof the BCS Computer Conservation Society started a project to build a working replica of a bombe.[77]The project required detailed research, and took thirteen years of effort before the replica was completed, which was then put on display at the Bletchley Park museum. In March 2009 it won an Engineering Heritage Award.[78]The Bombe rebuild was relocated toThe National Museum of Computingon Bletchley Park in May 2018,[79]the new gallery officially re-opening on 23 June 2018.[80]
|
https://en.wikipedia.org/wiki/Bombe
|
Bletchley Parkis anEnglish country houseand estate inBletchley,Milton Keynes(Buckinghamshire), that became the principal centre ofAlliedcode-breaking during the Second World War. DuringWorld War II, the estate housed theGovernment Code and Cypher School(GC&CS), which regularly penetrated the secret communications of theAxis Powers– most importantly the GermanEnigmaandLorenzciphers. The GC&CS team of codebreakers includedJohn Tiltman,Dilwyn Knox,Alan Turing,Harry Golombek,Gordon Welchman,Hugh Alexander,Donald Michie,Bill TutteandStuart Milner-Barry.
The team at Bletchley Park devised automatic machinery to help with decryption, culminating in the development ofColossus, the world's first programmable digital electronic computer.[a]Codebreaking operations at Bletchley Park ended in 1946 and all information about the wartime operations was classified until the mid-1970s. After the war it had various uses and now houses theBletchley Park Museum.
A mansion was first built here in 1711, with the current house built in the 1870s.[1]In 1938, the mansion and much of the site was bought by a builder for a housing estate, but in May 1938 Admiral Sir Hugh Sinclair, head of the Secret Intelligence Service (SIS or MI6), bought the mansion and 58 acres (23 ha) of land for £6,000 (£484,000 today) for use by the Code and Cypher School and SIS in the event of war. He used his own money, as the Government said it did not have the budget to do so.[2]
A key advantage seen by Sinclair and his colleagues (inspecting the site under the cover of "Captain Ridley's shooting party")[3]was Bletchley's geographical centrality. It was almost immediately adjacent toBletchley railway station, where the "Varsity Line" betweenOxfordandCambridge– whose universities were expected to supply many of the code-breakers – met the mainWest Coast railway lineconnecting London,Birmingham,Manchester,Liverpool,GlasgowandEdinburgh.Watling Street, the main road linking London to the north-west (subsequently theA5) was close by, and high-volume communication links were available at the telegraph and telephone repeater station in nearbyFenny Stratford.[4]
Five weeks before the outbreak of war, Warsaw'sCipher Bureaurevealedits achievementsin breaking Enigma to astonished French and British personnel.[5]The British used the Poles' information and techniques, and theEnigma clonesent to them in August 1939, which greatly increased their (previously very limited) success in decrypting Enigma messages.[6]
The first personnel of theGovernment Code and Cypher School(GC&CS) moved to Bletchley Park on 15 August 1939. The Naval, Military, and Air Sections were on the ground floor of the mansion, together with a telephone exchange, teleprinter room, kitchen, and dining room; the top floor was allocated toMI6. Construction of the wooden huts began in late 1939, and Elmers School, a neighbouring boys' boarding school in a Victorian Gothic redbrick building by a church, was acquired for the Commercial and Diplomatic Sections.[8]
The only direct enemy damage to the site was done 20–21 November 1940 by three bombs probably intended forBletchley railway station; Hut 4, shifted two feet off its foundation, was winched back into place as work inside continued.[9]
During a morale-boosting visit on 9 September 1941,Winston Churchillreportedly remarked to Denniston or Menzies: "I told you to leave no stone unturned to get staff, but I had no idea you had taken me so literally."[10]Six weeks later, having failed to get sufficient typing and unskilled staff to achieve the productivity that was possible, Turing, Welchman, Alexander and Milner-Barry wrote directly to Churchill. His response was "Action this day make sure they have all they want on extreme priority and report to me that this has been done."[11]
After theUnited Statesjoined World War II, a number of Americancryptographerswere posted toHut 3, and from May 1943 onwards there was close co-operation between British and American intelligence[12]leading to the1943 BRUSA Agreementwhich was the forerunner of theFive Eyespartnership.[13]
In contrast, theSoviet Unionwas never officially told of Bletchley Park and its activities, a reflection of Churchill's distrust of the Soviets even during the US-UK-USSR alliance imposed by the Nazi threat.[14]However Bletchley Park was infiltrated by the Soviet moleJohn Cairncross, a member of theCambridge Spy Ring, who leaked Ultra material to Moscow.[15]
After the War, the secrecy imposed on Bletchley staff remained in force, so that most relatives never knew more than that a child, spouse, or parent had done some kind of secret war work.[16]Churchill referred to the Bletchley staff as "the geese who laid the golden eggs and never cackled".[17]That said, occasional mentions of the work performed at Bletchley Park slipped the censor's net and appeared in print.[18]
The site passed through a succession of hands and saw a number of uses, including as a teacher-training college and localGPOheadquarters. By 1991, the site was nearly empty and the buildings were at risk of demolition for redevelopment,[19]before the gradual development of theBletchley Park Museum.[20]
The Government Code & Cypher School became the Government Communications Headquarters (GCHQ), moving toEastcotein 1946 and toCheltenhamin the 1950s.[21]The site was used by various government agencies, including theGPOand theCivil Aviation Authority. One large building, block F, was demolished in 1987 by which time the site was being run down with tenants leaving.[22]
AdmiralHugh Sinclairwas the founder and head of GC&CS between 1919 and 1938 with CommanderAlastair Dennistonbeing operational head of the organization from 1919 to 1942, beginning with its formation from theAdmiralty'sRoom 40(NID25) and theWar Office'sMI1b.[23]Key GC&CScryptanalystswho moved from London to Bletchley Park includedJohn Tiltman,Dillwyn "Dilly" Knox,Josh Cooper,Oliver StracheyandNigel de Grey. These people had a variety of backgrounds – linguists and chess champions were common, and Knox's field waspapyrology. The British War Office recruited top solvers ofcryptic crosswordpuzzles, as these individuals had stronglateral thinkingskills.[24]
Onthe day Britain declared war on Germany, Denniston wrote to theForeign Officeabout recruiting "men of the professor type".[25]Personal networking drove early recruitments, particularly of men from the universities of Cambridge and Oxford. Trustworthy women were similarly recruited for administrative and clerical jobs.[26]In one 1941 recruiting stratagem,The Daily Telegraphwas asked to organise a crossword competition, after which promising contestants were discreetly approached about "a particular type of work as a contribution to the war effort".[27]
Denniston recognised, however, that the enemy's use of electromechanical cipher machines meant that formally trained mathematicians would also be needed;[28]Oxford'sPeter Twinnjoined GC&CS in February 1939;[29]Cambridge'sAlan Turing[30]andGordon Welchman[31]began training in 1938 and reported to Bletchley the day after war was declared, along withJohn Jeffreys. Later-recruited cryptanalysts included the mathematiciansDerek Taunt,[32]Jack Good,Bill Tutte,[33]andMax Newman; historianHarry Hinsley, and chess championsHugh AlexanderandStuart Milner-Barry.[34]Joan Clarkewas one of the few women employed at Bletchley as a full-fledged cryptanalyst.[35][36]
When seeking to recruit more suitably advanced linguists,John Tiltmanturned toPatrick Wilkinsonof the Italian section for advice, and he suggested askingLord Lindsay of Birker, ofBalliol College, Oxford, S. W. Grose, andMartin Charlesworth, ofSt John's College, Cambridge, to recommend classical scholars or applicants to their colleges.[37]
This eclectic staff of "BoffinsandDebs" (scientists and debutantes, young women of high society)[38]caused GC&CS to be whimsically dubbed the "Golf, Cheese and Chess Society".[39]Among those who worked there and later became famous in other fields were historianAsa Briggs, politicianRoy Jenkinsand novelistAngus Wilson.[40]
After initial training at the Inter-Service Special Intelligence School set up byJohn Tiltman(initially at an RAF depot in Buckingham and later inBedford– where it was known locally as "the Spy School")[41]staff worked a six-day week, rotating through three shifts: 4 p.m. to midnight, midnight to 8 a.m. (the most disliked shift), and 8 a.m. to 4 p.m., each with a half-hour meal break. At the end of the third week, a worker went off at 8 a.m. and came back at 4 p.m., thus putting in 16 hours on that last day. The irregular hours affected workers' health and social life, as well as the routines of the nearby homes at which most staff lodged. The work was tedious and demanded intense concentration; staff got one week's leave four times a year, but some "girls" collapsed and required extended rest.[42]Recruitment took place to combat a shortage of experts in Morse code and German.[43]
In January 1945, at the peak of codebreaking efforts, 8,995 personnel were working at Bletchley and its outstations.[44]About three-quarters of these were women.[45]Many of the women came from middle-class backgrounds and held degrees in the areas of mathematics, physics and engineering; they were given the chance due to the lack of men, who had been sent to war. They performed calculations and coding and hence were integral to the computing processes.[46]Among them wereEleanor Ireland, who worked on theColossus computers[47]andRuth Briggs, a German scholar, who worked within the Naval Section.[48][49]
The female staff in Dilwyn Knox's section were sometimes termed "Dilly's Fillies".[50]Knox's methods enabledMavis Lever(who married mathematician and fellow code-breakerKeith Batey) andMargaret Rockto solve a German code, theAbwehrcipher.[51][52]
Many of the women had backgrounds in languages, particularly French, German and Italian. Among them wereRozanne Colchester, a translator who worked mainly for the Italian air forces Section,[53]andCicely Mayhew, recruited straight from university, who worked in Hut 8, translating decoded German Navy signals,[54]as didJane Fawcett(née Hughes) who decrypted a vital message concerning theGerman battleshipBismarckand after the war became an opera singer and buildings conservationist.[40]
Alan Brooke (CIGS), in his secret wartime diary, frequently refers to "intercepts".[55]
Properly used, the German Enigma and Lorenz ciphers should have been virtually unbreakable, but flaws in German cryptographic procedures, and poor discipline among the personnel carrying them out, created vulnerabilities that made Bletchley's attacks just barely feasible. These vulnerabilities, however, could have been remedied by relatively simple improvements in enemy procedures,[5]and such changes would certainly have been implemented had Germany had any hint of Bletchley's success. Thus the intelligence Bletchley produced was considered wartime Britain's "Ultra secret" – higher even than the normally highest classification, Most Secret – and security was paramount.[56]
All staff signed theOfficial Secrets Act (1939)and a 1942 security warning emphasised the importance of discretion even within Bletchley itself: "Do not talk at meals. Do not talk in the transport. Do not talk travelling. Do not talk in the billet. Do not talk by your own fireside. Be careful even in your Hut ..."[57]
Nevertheless, there were security leaks.Jock Colville, the Assistant Private Secretary toWinston Churchill, recorded in his diary on 31 July 1941, that the newspaper proprietorLord Camrosehad discovered Ultra and that security leaks "increase in number and seriousness".[58]
Despite the high degree of secrecy surrounding Bletchley Park during the Second World War, unique and hitherto unknown amateur film footage of the outstation at nearbyWhaddon Hallcame to light in 2020, after being anonymously donated to the Bletchley Park Trust.[59][60]A spokesman for the Trust noted the film's existence was all the more incredible because it was "very, very rare even to have [still] photographs" of the park and its associated sites.[61]
Bletchley Park was known as "B.P." to those who worked there.[62]"Station X" (X =Roman numeralten), "London Signals Intelligence Centre", and "Government Communications Headquarters" were all cover names used during the war.[63]The formal posting of the many "Wrens" – members of theWomen's Royal Naval Service– working there, was toHMSPembroke V. Royal Air Force names of Bletchley Park and its outstations includedRAF Eastcote, RAF Lime Grove and RAF Church Green.[64]The postal address that staff had to use was "Room 47, Foreign Office".[65]
Initially, when only a very limited amount of Enigma traffic was being read,[67]deciphered non-Naval Enigma messages were sent fromHut 6toHut 3which handled their translation and onward transmission. Subsequently, underGroup Captain Eric Jones, Hut 3 expanded to become the heart of Bletchley Park's intelligence effort, with input from decrypts of "Tunny" (Lorenz SZ42) traffic and many other sources. Early in 1942 it moved into Block D, but its functions were still referred to as Hut 3.[68]
Hut 3 contained a number of sections: Air Section "3A", Military Section "3M", a small Naval Section "3N", a multi-service Research Section "3G" and a large liaison section "3L".[69]It also housed the Traffic Analysis Section, SIXTA.[70]An important function that allowed the synthesis of raw messages into valuableMilitary intelligencewas the indexing and cross-referencing of information in a number of different filing systems.[71]Intelligence reports were sent out to the Secret Intelligence Service, the intelligence chiefs in the relevant ministries, and later on to high-level commanders in the field.[72]
Naval Enigma deciphering was inHut 8, with translation inHut 4. Verbatim translations were sent to theNaval Intelligence Division(NID) of the Admiralty's Operational Intelligence Centre (OIC), supplemented by information from indexes as to the meaning of technical terms and cross-references from a knowledge store of German naval technology.[73]Where relevant to non-naval matters, they would also be passed to Hut 3. Hut 4 also decoded a manual system known as the dockyard cipher, which sometimes carried messages that were also sent on an Enigma network. Feeding these back to Hut 8 provided excellent "cribs" forKnown-plaintext attackson the daily naval Enigma key.[74]
Initially, awirelessroom was established at Bletchley Park.
It was set up in the mansion's water tower under the code name "Station X",[75]a term now sometimes applied to the codebreaking efforts at Bletchley as a whole. The "X" is theRoman numeral"ten", this being the Secret Intelligence Service's tenth such station. Due to the long radio aerials stretching from the wireless room, the radio station was moved from Bletchley Park to nearbyWhaddon Hallto avoid drawing attention to the site.[76][77]
Subsequently, other listening stations – theY-stations, such as the ones atChicksandsin Bedfordshire,Beaumanor Hall, Leicestershire (where the headquarters of the War Office "Y" Group was located) andBeeston Hill Y Stationin Norfolk – gathered raw signals for processing at Bletchley. Coded messages were taken down by hand and sent to Bletchley on paper by motorcycledespatch ridersor (later) by teleprinter.[78]
The wartime needs required the building of additional accommodation.[79]
Often a hut's number became so strongly associated with the work performed inside that even when the work was moved to another building it was still referred to by the original "Hut" designation.[80][81]
In addition to the wooden huts, there were a number of brick-built "blocks".
Most German messages decrypted at Bletchley were produced by one or another version of the Enigma cipher machine, but an important minority were produced by the even more complicated twelve-rotor Lorenz SZ42 on-line teleprinter cipher machine used for high command messages, known as Fish.[95]
The bombe was an electromechanical device whose function was to discover some of the daily settings of the Enigma machines on the various German military networks.[97][98][99] Its pioneering design was developed by Alan Turing (with an important contribution from Gordon Welchman) and the machine was engineered by Harold 'Doc' Keen of the British Tabulating Machine Company. Each machine was about 7 feet (2.1 m) high and wide, 2 feet (0.61 m) deep and weighed about a ton.[100]
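The bombe's role can be illustrated, in greatly simplified form, as an exhaustive search over machine settings that keeps only those consistent with a crib. The sketch below substitutes a toy single-rotor cipher for the real Enigma and makes no attempt to model the bombe's electrical logic; every name and value is illustrative.

```python
# A toy exhaustive search in the spirit of the bombe: try every setting of a
# deliberately simplified cipher and keep the settings consistent with a crib.
import string

ALPHABET = string.ascii_uppercase

def toy_encipher(plaintext: str, rotor_offset: int) -> str:
    """A stand-in 'machine': a single rotating (Caesar-like) substitution."""
    out = []
    for i, ch in enumerate(plaintext):
        shift = (rotor_offset + i) % 26        # the "rotor" steps once per letter
        out.append(ALPHABET[(ALPHABET.index(ch) + shift) % 26])
    return "".join(out)

def search_settings(ciphertext: str, crib: str):
    """Return every rotor start position that reproduces the crib."""
    hits = []
    for offset in range(26):
        if toy_encipher(crib, offset) == ciphertext[:len(crib)]:
            hits.append(offset)
    return hits

ct = toy_encipher("WETTERBERICHT", 7)
print(search_settings(ct, "WETTER"))   # prints [7]
```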
At its peak, GC&CS was reading approximately 4,000 messages per day.[101]As a hedge against enemy attack[102]most bombes were dispersed to installations atAdstockandWavendon(both later supplanted by installations atStanmoreandEastcote), andGayhurst.[103][104]
Luftwaffemessages were the first to be read in quantity. The German navy had much tighter procedures, and the capture of code books was needed before they could be broken. When, in February 1942, the German navy introduced the four-rotor Enigma for communications with its Atlantic U-boats, this traffic became unreadable for a period of ten months.[105]Britain produced modified bombes, but it was the success of theUS Navy Bombethat was the main source of reading messages from this version of Enigma for the rest of the war. Messages were sent to and fro across the Atlantic by enciphered teleprinter links.[78]
Bletchley's work was essential to defeating the U-boats in the Battle of the Atlantic, and to the British naval victories in the Battle of Cape Matapan and the Battle of North Cape. In 1941, Ultra exerted a powerful effect on the North African desert campaign against German forces under General Erwin Rommel. General Sir Claude Auchinleck wrote that were it not for Ultra, "Rommel would have certainly got through to Cairo". While not changing the events, "Ultra" decrypts featured prominently in the story of Operation SALAM, László Almásy's mission across the desert behind Allied lines in 1942.[106] Prior to the Normandy landings on D-Day in June 1944, the Allies knew the locations of all but two of Germany's fifty-eight Western-front divisions.[107]
The Lorenz messages were codenamed Tunny at Bletchley Park. They were only sent in quantity from mid-1942. The Tunny networks were used for high-level messages between German High Command and field commanders. With the help of German operator errors, the cryptanalysts in the Testery (named after Ralph Tester, its head) worked out the logical structure of the machine despite not knowing its physical form. They devised automatic machinery to help with decryption, which culminated in Colossus, the world's first programmable digital electronic computer. This was designed and built by Tommy Flowers and his team at the Post Office Research Station at Dollis Hill. The prototype first worked in December 1943, was delivered to Bletchley Park in January and first worked operationally on 5 February 1944. Enhancements were developed for the Mark 2 Colossus, the first of which was working at Bletchley Park on the morning of 1 June in time for D-Day. Flowers then produced one Colossus a month for the rest of the war, making a total of ten with an eleventh part-built. The machines were operated mainly by Wrens in a section named the Newmanry after its head Max Newman.[108]
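The statistical attack on Tunny exploited the additive structure of the Lorenz cipher, in which each 5-bit teleprinter character of ciphertext is the plaintext character XORed with a keystream character, together with the "delta" differencing of adjacent characters introduced by W. T. Tutte. A minimal sketch of those two operations, with made-up values, is given below; it is an illustration of the principle only, not a reconstruction of Colossus.

```python
# Sketch of the additive (Vernam-style) structure of Lorenz traffic and the
# "delta" differencing used in its statistical analysis. Values are invented.

def xor_stream(chars, key):
    """Combine 5-bit teleprinter values with a keystream by XOR."""
    return [c ^ k for c, k in zip(chars, key)]

def delta(chars):
    """Difference adjacent characters, as in Tutte's method."""
    return [a ^ b for a, b in zip(chars, chars[1:])]

plain = [0b10101, 0b00111, 0b11000, 0b01010, 0b10011]   # illustrative only
key   = [0b01100, 0b01100, 0b10001, 0b10001, 0b00110]
cipher = xor_stream(plain, key)

# Because XOR is its own inverse, delta(cipher) equals delta(plain) XOR
# delta(key), which is the regularity that counting runs exploited.
print(delta(cipher))
print([a ^ b for a, b in zip(delta(plain), delta(key))])  # identical lists
```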
Italian signals had been of interest since Italy's attack on Abyssinia in 1935.
During theSpanish Civil WartheItalian Navyused the K model of the commercial Enigma without a plugboard; this was solved by Knox in 1937.
When Italy entered the war in 1940 an improved version of the machine was used, though little traffic was sent by it and there were "wholesale changes" in Italian codes and cyphers.[109]
Knox was given a new section for work on Enigma variations, which he staffed with women ("Dilly's girls"), who included Margaret Rock, Jean Perrin, Clare Harding, Rachel Ronald, Elisabeth Granger; and Mavis Lever.[110] Mavis Lever solved the signals revealing the Italian Navy's operational plans before the Battle of Cape Matapan in 1941, leading to a British victory.[111]
Although most Bletchley staff did not know the results of their work, AdmiralCunninghamvisited Bletchley in person a few weeks later to congratulate them.[111]
On entering World War II in June 1940, the Italians were using book codes for most of their military messages. The exception was the Italian Navy, which after the Battle of Cape Matapan started using the C-38 version of the Boris Hagelin rotor-based cipher machine, particularly to route their navy and merchant marine convoys to the conflict in North Africa.[112] As a consequence, JRM Butler recruited his former student Bernard Willson to join a team with two others in Hut 4.[86][113] In June 1941, Willson became the first of the team to decode the Hagelin system, thus enabling military commanders to direct the Royal Navy and Royal Air Force to sink enemy ships carrying supplies from Europe to Rommel's Afrika Korps. This led to increased shipping losses and, from reading the intercepted traffic, the team learnt that between May and September 1941 the stock of fuel for the Luftwaffe in North Africa was reduced by 90 per cent.[114] After an intensive language course, in March 1944 Willson switched to Japanese language-based codes.[115]
A Middle East Intelligence Centre (MEIC) was set up in Cairo in 1939. When Italy entered the war in June 1940, delays in forwarding intercepts to Bletchley via congested radio links resulted in cryptanalysts being sent to Cairo. A Combined Bureau Middle East (CBME) was set up in November, though the Middle East authorities made "increasingly bitter complaints" that GC&CS was giving too little priority to work on Italian cyphers. However, the principle of concentrating high-grade cryptanalysis at Bletchley was maintained.[116] John Chadwick started cryptanalysis work in 1942 on Italian signals at the naval base 'HMS Nile' in Alexandria. Later, he was with GC&CS in the Heliopolis Museum, Cairo, and then in the Villa Laurens, Alexandria.[117]
Soviet signals had been studied since the 1920s. In 1939–40,John Tiltman(who had worked on Russian Army traffic from 1930) set up two Russian sections at Wavendon (a country house near Bletchley) and atSarafandin Palestine. Two Russian high-grade army and navy systems were broken early in 1940. Tiltman spent two weeks in Finland, where he obtained Russian traffic from Finland and Estonia in exchange for radio equipment. In June 1941, when the Soviet Union became an ally, Churchill ordered a halt to intelligence operations against it. In December 1941, the Russian section was closed down, but in late summer 1943 or late 1944, a small GC&CS Russian cypher section was set up in London overlooking Park Lane, then in Sloane Square.[118]
An outpost of the Government Code and Cypher School had been set up in Hong Kong in 1935, theFar East Combined Bureau(FECB). The FECB naval staff moved in 1940 to Singapore, thenColombo,Ceylon, thenKilindini,Mombasa, Kenya. They succeeded in deciphering Japanese codes with a mixture of skill and good fortune.[119]The Army and Air Force staff went from Singapore to theWireless Experimental CentreatDelhi, India.[120]
In early 1942, a six-month crash course in Japanese, for 20 undergraduates from Oxford and Cambridge, was started by the Inter-Services Special Intelligence School in Bedford, in a building across from the main Post Office. This course was repeated every six months until war's end. Most of those completing these courses worked on decoding Japanese naval messages inHut 7, underJohn Tiltman.[120]
By mid-1945, well over 100 personnel were involved with this operation, which co-operated closely with the FECB and the US Signal Intelligence Service at Arlington Hall, Virginia. In 1999, Michael Smith wrote that: "Only now are the British codebreakers (like John Tiltman, Hugh Foss, and Eric Nave) beginning to receive the recognition they deserve for breaking Japanese codes and cyphers".[121]
Until the mid-1970s, the thirty-year rule meant that there was no official mention of Bletchley Park. As a result, although codes broken at Bletchley Park had played an important role in many operations, that role could not appear in the published histories of those events.[122]
With the publication of F. W. Winterbotham's The Ultra Secret in 1974,[123][b] public discussion of Bletchley Park's work in the English-speaking world finally became accepted, although some former staff considered themselves bound to silence forever.[124] Winterbotham's book was written from memory; although it was officially allowed, he had no access to archives.[125]
Not until July 2009 did the British government fully acknowledge the contribution of the many people working at Bletchley Park.[126]Only then was a commemorative medal struck to be presented to those involved.[127]The gilded medal bears the inscriptionGC&CS 1939–1945 Bletchley Park and its Outstations.[128]
The Bletchley Park Museum operates on the current site,[129] with a learning centre and science centre. Other organisations share the campus, including The National Museum of Computing and the Radio Society of Great Britain's National Radio Centre. The construction of a National College of Cyber Security had previously been envisaged on the site.[130]
Bletchley Park is oppositeBletchley railway station. It is close to junctions 13 and 14 of theM1, about 50 miles (80 km) northwest ofLondon.[131]
|
https://en.wikipedia.org/wiki/Bletchley_Park
|
Ultra was the designation adopted by British military intelligence in June 1941 for wartime signals intelligence obtained by breaking high-level encrypted enemy radio and teleprinter communications at the Government Code and Cypher School at Bletchley Park.[1] Ultra eventually became the standard designation among the western Allies for all such intelligence. The name arose because the intelligence obtained was considered more important than that designated by the highest British security classification then used (Most Secret) and so was regarded as being Ultra Secret.[2] Several other cryptonyms had been used for such intelligence.
The code name "Boniface" was used as a cover name forUltra. In order to ensure that the successful code-breaking did not become apparent to the Germans, British intelligence created a fictionalMI6master spy, Boniface, who controlled a fictional series of agents throughout Germany. Information obtained through code-breaking was often attributed to thehuman intelligencefrom the Boniface network.[3][4]The U.S. used the codenameMagicfor its decrypts from Japanese sources, including the "Purple" cipher.[5]
Much of theGermancipher traffic was encrypted on theEnigma machine. Used properly, the German military Enigma would have been virtually unbreakable; in practice, shortcomings in operation allowed it to be broken. The term "Ultra" has often been used almost synonymously with "Enigma decrypts". However, Ultra also encompassed decrypts of the GermanLorenz SZ 40/42 machinesthat were used by the German High Command, and theHagelin machine.[a]
Many observers, at the time and later, regarded Ultra as immensely valuable to the Allies. Winston Churchill was reported to have told King George VI, when presenting to him Stewart Menzies (head of the Secret Intelligence Service and the person who controlled distribution of Ultra decrypts to the government): "It is thanks to the secret weapon of General Menzies, put into use on all the fronts, that we won the war!"[b] F. W. Winterbotham quoted the western Supreme Allied Commander, Dwight D. Eisenhower, at war's end describing Ultra as having been "decisive" to Allied victory.[7] Sir Harry Hinsley, Bletchley Park veteran and official historian of British Intelligence in World War II, made a similar assessment of Ultra, saying that while the Allies would have won the war without it,[8] "the war would have been something like two years longer, perhaps three years longer, possibly four years longer than it was."[9] However, Hinsley and others have emphasized the difficulties of counterfactual history in attempting such conclusions, and some historians, such as Keegan, have said the shortening might have been as little as the three months it took the United States to deploy the atomic bomb.[8][10][11]
Most Ultra intelligence was derived from reading radio messages that had been encrypted with cipher machines, complemented by material from radio communications usingtraffic analysisanddirection finding. In the early phases of the war, particularly during the eight-monthPhoney War, the Germans could transmit most of their messages usingland linesand so had no need to use radio. This meant that those at Bletchley Park had some time to build up experience of collecting and starting to decrypt messages on the variousradio networks. German Enigma messages were the main source, with those of theLuftwaffepredominating, as they used radio more and their operators were particularly ill-disciplined.
"Enigma" refers to a family of electro-mechanicalrotor cipher machines. These produced apolyalphabetic substitution cipherand were widely thought to be unbreakable in the 1920s, when a variant of the commercial Model D was first used by theReichswehr. TheGerman Army,Navy,Air Force,Nazi party,Gestapoand German diplomats used Enigma machines in several variants.Abwehr(German military intelligence) used a four-rotor machine without a plugboard and Naval Enigma used different key management from that of the army or air force, making its traffic far more difficult to cryptanalyse; each variant required different cryptanalytic treatment. The commercial versions were not as secure andDilly Knoxof GC&CS is said to have broken one before the war.
German military Enigma was first broken in December 1932 by Marian Rejewski and the Polish Cipher Bureau, using a combination of brilliant mathematics, the services of a spy in the German office responsible for administering encrypted communications, and good luck.[12][13] The Poles read Enigma to the outbreak of World War II and beyond, in France.[14] At the turn of 1939, the Germans made the systems ten times more complex, which required a tenfold increase in Polish decryption equipment, which they could not meet.[15] On 25 July 1939, the Polish Cipher Bureau handed reconstructed Enigma machines and their techniques for decrypting ciphers to the French and British.[16] Gordon Welchman wrote,
Ultra would never have got off the ground if we had not learned from the Poles, in the nick of time, the details both of the German military Enigma machine, and of the operating procedures that were in use.
At Bletchley Park, some of the key people responsible for success against Enigma included mathematiciansAlan TuringandHugh Alexanderand, at theBritish Tabulating Machine Company, chief engineerHarold Keen.[18]After the war, interrogation of German cryptographic personnel led to the conclusion that German cryptanalysts understood that cryptanalytic attacks against Enigma were possible but were thought to require impracticable amounts of effort and investment.[19]The Poles' early start at breaking Enigma and the continuity of their success gave the Allies an advantage when World War II began.[17]
In June 1941, the Germans started to introduce on-line stream cipher teleprinter systems for strategic point-to-point radio links, to which the British gave the code-name Fish.[20] Several systems were used, principally the Lorenz SZ 40/42 (Tunny) and Geheimfernschreiber (Sturgeon). These cipher systems were cryptanalysed, particularly Tunny, which the British thoroughly penetrated. It was eventually attacked using Colossus machines, which were the first digital programme-controlled electronic computers. In many respects the Tunny work was more difficult than for the Enigma, since the British codebreakers had no knowledge of the machine producing it and no head-start such as that the Poles had given them against Enigma.[18]
Although the volume of intelligence derived from this system was much smaller than that from Enigma, its importance was often far higher because it produced primarily high-level, strategic intelligence sent between the Wehrmacht High Command (OKW) and commanders in the field. The eventual bulk decryption of Lorenz-enciphered messages contributed significantly, and perhaps decisively, to the defeat of Nazi Germany.[21][22] Nevertheless, the Tunny story has become much less well known among the public than the Enigma one.[18] At Bletchley Park, some of the key people responsible for success in the Tunny effort included mathematicians W. T. "Bill" Tutte and Max Newman and electrical engineer Tommy Flowers.[18]
In June 1940, the Italians were using book codes for most of their military messages, except for the Italian Navy, which in early 1941 had started using a version of the Hagelinrotor-basedcipher machineC-38.[23]This was broken from June 1941 onwards by theItalian subsection of GC&CSatBletchley Park.[24]
In thePacifictheatre, a Japanese cipher machine, called "Purple" by the Americans, was used for highest-level Japanese diplomatic traffic. It produced a polyalphabetic substitution cipher, but unlike Enigma, was not a rotor machine, being built around electricalstepping switches. It was broken by the US ArmySignal Intelligence Serviceand disseminated asMagic. Detailed reports by the Japanese ambassador to Germany were encrypted on the Purple machine. His reports included reviews of German assessments of the military situation, reviews of strategy and intentions, reports on direct inspections by the ambassador (in one case, of Normandy beach defences), and reports of long interviews with Hitler.[23]The Japanese are said to have obtained an Enigma machine in 1937, although it is debated whether they were given it by the Germans or bought a commercial version, which, apart from the plugboard and internal wiring, was the GermanHeer/Luftwaffemachine. Having developed a similar machine, the Japanese did not use the Enigma machine for their most secret communications.
The chief fleet communications code system used by the Imperial Japanese Navy was calledJN-25by the Americans, and by early 1942 the US Navy had made considerable progress in decrypting Japanese naval messages. The US Army also made progress on theJapanese Army's codesin 1943, including codes used by supply ships, resulting in heavy losses to their shipping.
Army- and Air Force-related intelligence derived fromsignals intelligence(SIGINT) sources – mainly Enigma decrypts inHut 6– was compiled in summaries at GC&CS (Bletchley Park) Hut 3 and distributed initially under the codeword "BONIFACE",[26]implying that it was acquired from a well placed agent in Berlin. The volume of the intelligence reports going out to commanders in the field built up gradually.
Naval Enigma decrypted inHut 8was forwarded from Hut 4 to theAdmiralty's Operational Intelligence Centre (OIC),[27]which distributed it initially under the codeword "HYDRO".[26]
The codeword "ULTRA" was adopted in June 1941.[28]This codeword was reportedly suggested by Commander Geoffrey Colpoys, RN, who served in the Royal Navy's OIC.
The distribution of Ultra information to Allied commanders and units in the field involved considerable risk of discovery by the Germans, and great care was taken to control both the information and knowledge of how it was obtained. Liaison officers were appointed for each field command to manage and control dissemination.
Dissemination of Ultra intelligence to field commanders was carried out byMI6, which operatedSpecial Liaison Units(SLU) attached to major army and air force commands. The activity was organized and supervised on behalf of MI6 byGroup CaptainF. W. Winterbotham. Each SLU included intelligence, communications, and cryptographic elements. It was headed by a British Army or RAF officer, usually a major, known as "Special Liaison Officer". The main function of the liaison officer or his deputy was to pass Ultra intelligence bulletins to the commander of the command he was attached to, or to other indoctrinated staff officers. In order to safeguard Ultra, special precautions were taken. The standard procedure was for the liaison officer to present the intelligence summary to the recipient, stay with him while he studied it, then take it back and destroy it.
By the end of the war, there were about 40 SLUs serving commands around the world.[29]Fixed SLUs existed at the Admiralty, theWar Office, theAir Ministry,RAF Fighter Command, the US Strategic Air Forces in Europe (Wycombe Abbey) and other fixed headquarters in the UK. An SLU was operating at the War HQ in Valletta, Malta.[30]These units had permanent teleprinter links to Bletchley Park.
Mobile SLUs were attached to field army and air force headquarters and depended on radio communications to receive intelligence summaries. The first mobile SLUs appeared during the French campaign of 1940. An SLU supported theBritish Expeditionary Force(BEF) headed byGeneral Lord Gort. The first liaison officers were Robert Gore-Browne and Humphrey Plowden.[31]A second SLU of the 1940 period was attached to theRAF Advanced Air Striking ForceatMeauxcommanded by Air Vice-MarshalP H Lyon Playfair. This SLU was commanded by Squadron Leader F.W. "Tubby" Long.
In 1940, special arrangements were made within the British intelligence services for handling BONIFACE and later Ultra intelligence. TheSecurity Servicestarted "Special Research Unit B1(b)" underHerbert Hart. In theSISthis intelligence was handled by "Section V" based atSt Albans.[32]
The communications system was founded by Brigadier SirRichard Gambier-Parry, who from 1938 to 1946 was head of MI6 Section VIII, based atWhaddon HallinBuckinghamshire, UK.[33]Ultra summaries from Bletchley Park were sent over landline to the Section VIII radio transmitter at Windy Ridge. From there they were transmitted to the destination SLUs.
The communications element of each SLU was called a "Special Communications Unit" or SCU. Radio transmitters were constructed at Whaddon Hall workshops, while receivers were theNational HRO, made in the USA. The SCUs were highly mobile and the first such units used civilianPackardcars. The following SCUs are listed:[33]SCU1 (Whaddon Hall), SCU2 (France before 1940, India), SCU3 (RSS Hanslope Park), SCU5, SCU6 (possibly Algiers and Italy), SCU7 (training unit in the UK), SCU8 (Europe after D-day), SCU9 (Europe after D-day), SCU11 (Palestine and India), SCU12 (India), SCU13 and SCU14.[c]
The cryptographic element of each SLU was supplied by the RAF and was based on the TYPEX cryptographic machine and one-time pad systems.
RN Ultra messages from the OIC to ships at sea were necessarily transmitted over normal naval radio circuits and were protected by one-time pad encryption.[34]
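The one-time-pad protection mentioned above can be sketched as follows; this is a conceptual illustration of pad-based encryption in general, not the Royal Navy's actual procedure, and the message text is invented.

```python
# Minimal sketch of one-time-pad protection: each message is combined with a
# pad of random key material that is used exactly once and never reused.
import secrets

def otp_combine(message: bytes, pad: bytes) -> bytes:
    assert len(pad) >= len(message), "pad must be at least as long as the message"
    return bytes(m ^ p for m, p in zip(message, pad))

message = b"CONVOY DIVERT TO 54N 020W"      # illustrative text only
pad = secrets.token_bytes(len(message))      # one-time key material
ciphertext = otp_combine(message, pad)
recovered = otp_combine(ciphertext, pad)     # XOR with the same pad decrypts
assert recovered == message
```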
An intriguing question concerns the alleged use of Ultra information by the"Lucy" spy ring,[35]headquartered inSwitzerlandand apparently operated by one man,Rudolf Roessler. This was an extremely well informed, responsive ring that was able to get information "directly from German General Staff Headquarters" – often on specific request. It has been alleged that "Lucy" was in major part a conduit for the British to feed Ultra intelligence to the Soviets in a way that made it appear to have come from highly placed espionage rather than fromcryptanalysisof German radio traffic. The Soviets, however, through an agent at Bletchley,John Cairncross, knew that Britain had broken Enigma. The "Lucy" ring was initially treated with suspicion by the Soviets. The information it provided was accurate and timely, however, and Soviet agents in Switzerland (including their chief,Alexander Radó) eventually learned to take it seriously.[36]However, the theory that the Lucy ring was a cover for Britain to pass Enigma intelligence to the Soviets has not gained traction. Among others who have rejected the theory,Harry Hinsley, the official historian for the British Secret Services in World War II, stated that "there is no truth in the much-publicized claim that the British authorities made use of the ‘Lucy’ ring ... to forward intelligence to Moscow".[37]
Most deciphered messages, often about relative trivia, were insufficient as intelligence reports for military strategists or field commanders. The organisation, interpretation and distribution of decrypted Enigma message traffic and other sources into usable intelligence was a subtle task.
At Bletchley Park, extensive indices were kept of the information in the messages decrypted.[38] For each message the traffic analysis recorded the radio frequency, the date and time of intercept, and the preamble – which contained the network-identifying discriminant, the time of origin of the message, the callsign of the originating and receiving stations, and the indicator setting. This allowed cross-referencing of a new message with a previous one.[39] The indices included message preambles, every person, every ship, every unit, every weapon, every technical term, and repeated phrases such as forms of address and other German military jargon that might be usable as cribs.[40]
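The cross-referencing idea can be sketched as a simple index keyed on the recorded traffic-analysis fields, so that a new intercept can be compared with everything previously filed under the same callsign. The field names and record below are invented for the example and do not reproduce the actual Bletchley Park card indexes.

```python
# Illustrative sketch of cross-referencing: index each intercept by the fields
# the traffic analysts recorded, so later messages can be matched against
# earlier ones. Field names and data are invented for the example.
from collections import defaultdict

index_by_callsign = defaultdict(list)

def file_message(record: dict):
    """Store one intercept record under both the sending and receiving callsign."""
    for key in ("origin_callsign", "destination_callsign"):
        index_by_callsign[record[key]].append(record)

file_message({
    "frequency_khz": 4760,
    "intercept_time": "1941-03-27 02:15",
    "origin_callsign": "P7J",
    "destination_callsign": "OKH",
    "discriminant": "red",
})

# A new message from "P7J" can now be compared with everything previously
# filed under that callsign, e.g. to spot re-used phrasing usable as a crib.
print(len(index_by_callsign["P7J"]))
```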
The first decryption of a wartime Enigma message, albeit one that had been transmitted three months earlier, was achieved by the Poles atPC Brunoon 17 January 1940. Little had been achieved by the start of theAllied campaign in Norwayin April. At the start of theBattle of Franceon 10 May 1940, the Germans made a very significant change in the indicator procedures for Enigma messages. However, the Bletchley Park cryptanalysts had anticipated this, and were able – jointly with PC Bruno – to resume breaking messages from 22 May, although often with some delay. The intelligence that these messages yielded was of little operational use in the fast-moving situation of the German advance.
Decryption of Enigma traffic built up gradually during 1940, with the first two prototypebombesbeing delivered in March and August. The traffic was almost entirely limited toLuftwaffemessages. By the peak of theBattle of the Mediterraneanin 1941, however, Bletchley Park was deciphering daily 2,000 Italian Hagelin messages. By the second half of 1941 30,000 Enigma messages a month were being deciphered, rising to 90,000 a month of Enigma and Fish decrypts combined later in the war.[23]
Some of the contributions that Ultra intelligence made to the Allied successes are given below.
Rommel was appointed Inspector General of the West, and he inspected all the defences along the Normandy beaches and sent a very detailed message that I think was 70,000 characters and we decrypted it as a small pamphlet. It was a report of the whole Western defences. How wide the V shaped trenches were to stop tanks, and how much barbed wire. Oh, it was everything and we decrypted it before D-Day.[71]
The Allies were seriously concerned with the prospect of the Axis command finding out that they had broken into the Enigma traffic. The British were more disciplined about such measures than the Americans, and this difference was a source of friction between them.[77][78]
To disguise the source of the intelligence for the Allied attacks on Axis supply ships bound for North Africa, "spotter" submarines and aircraft were sent to search for Axis ships. These searchers or their radio transmissions were observed by the Axis forces, who concluded their ships were being found by conventional reconnaissance. They suspected that there were some 400 Allied submarines in the Mediterranean and a huge fleet of reconnaissance aircraft onMalta. In fact, there were only 25 submarines and at times as few as three aircraft.[23]
This procedure also helped conceal the intelligence source from Allied personnel, who might give away the secret by careless talk, or under interrogation if captured. Along with the search mission that would find the Axis ships, two or three additional search missions would be sent out to other areas, so that crews would not begin to wonder why a single mission found the Axis ships every time.
Other deceptive means were used. On one occasion, a convoy of five ships sailed fromNaplesto North Africa with essential supplies at a critical moment in the North African fighting. There was no time to have the ships properly spotted beforehand. The decision to attack solely on Ultra intelligence went directly to Churchill. The ships were all sunk by an attack "out of the blue", arousing German suspicions of a security breach. To distract the Germans from the idea of a signals breach (such as Ultra), the Allies sent a radio message to a fictitious spy in Naples, congratulating him for this success. According to some sources the Germans decrypted this message and believed it.[79]
In the Battle of the Atlantic, the precautions were taken to the extreme. In most cases where the Allies knew from intercepts the location of a U-boat in mid-Atlantic, the U-boat was not attacked immediately, until a "cover story" could be arranged. For example, a search plane might be "fortunate enough" to sight the U-boat, thus explaining the Allied attack.
Some Germans had suspicions that all was not right with Enigma. AdmiralKarl Dönitzreceived reports of "impossible" encounters between U-boats and enemy vessels which made him suspect some compromise of his communications. In one instance, three U-boats met at a tiny island in theCaribbean Sea, and a British destroyer promptly showed up. The U-boats escaped and reported what had happened. Dönitz immediately asked for a review of Enigma's security. The analysis suggested that the signals problem, if there was one, was not due to the Enigma itself. Dönitz had the settings book changed anyway, blacking out Bletchley Park for a period. However, the evidence was never enough to truly convince him that Naval Enigma was being read by the Allies. The more so, sinceB-Dienst, his own codebreaking group, had partially broken Royal Navy traffic (including its convoy codes early in the war),[80]and supplied enough information to support the idea that the Allies were unable to read Naval Enigma.[d]
By 1945, most German Enigma traffic could be decrypted within a day or two, yet the Germans remained confident of its security.[81]
After encryption systems were "broken", there was a large volume of cryptologic work needed to recover daily key settings and keep up with changes in enemy security procedures, plus the more mundane work of processing, translating, indexing, analyzing and distributing tens of thousands of intercepted messages daily.[82] The more successful the code breakers were, the more labor was required. Some 8,000 women worked at Bletchley Park, about three quarters of the work force.[83] Before the attack on Pearl Harbor, the US Navy sent letters to top women's colleges seeking introductions to their best seniors; the Army soon followed suit. By the end of the war, some 7,000 workers in the Army Signal Intelligence Service, out of a total of 10,500, were female. By contrast, the Germans and Japanese had strong ideological objections to women engaging in war work. The Nazis even created a Cross of Honour of the German Mother to encourage women to stay at home and have babies.[68]
The mystery surrounding the discovery of the sunkGerman submarineU-869off the coast ofNew Jerseyby diversRichie KohlerandJohn Chattertonwas unravelled in part through the analysis of Ultra intercepts, which demonstrated that, althoughU-869had been ordered by U-boat Command to change course and proceed to North Africa, near Rabat, the submarine had missed the messages changing her assignment and had continued to the eastern coast of the U.S., her original destination.
In 1953, the CIA'sProject ARTICHOKE, a series of experiments on human subjects to develop drugs for use in interrogations, was renamedProject MKUltra. MK was the CIA's designation for its Technical Services Division and Ultra was in reference to the Ultra project.[84][85]
Until the mid-1970s, the thirty-year rule meant that there was no official mention of Bletchley Park. This meant that although codes broken by Bletchley Park played an important role in many operations, this was not reflected in the histories of those events. Churchill's series The Second World War did mention Enigma, but not that it had been broken.[86]
While it is obvious why Britain and the U.S. went to considerable pains to keep Ultra a secret until the end of the war, it has been a matter of some conjecture why Ultra was kept officially secret for 29 years thereafter, until 1974. During that period, the important contributions to the war effort of a great many people remained unknown, and they were unable to share in the glory of what is now recognised as one of the chief reasons the Allies won the war – or, at least, as quickly as they did.
At least three explanations exist as to why Ultra was kept secret so long. Each has plausibility, and all may be true. First, asDavid Kahnpointed out in his 1974New York Timesreview of Winterbotham'sThe Ultra Secret, after the war, surplus Enigmas and Enigma-like machines were sold toThird Worldcountries, which remained convinced of the security of the remarkable cipher machines. Their traffic was not as secure as they believed, however, which is one reason the British made the machines available.[87][better source needed]
By the 1970s, newer computer-based ciphers were becoming popular as the world increasingly turned to computerised communications, and the usefulness of Enigma copies (and rotor machines generally) rapidly decreased. Switzerland developed its own version of Enigma, known asNEMA, and used it into the late 1970s, while the United StatesNational Security Agency(NSA) retired the last of its rotor-based encryption systems, theKL-7series, in the 1980s.
A second explanation relates to a misadventure of one of Churchill's predecessors,Stanley Baldwin, between the World Wars, when he publicly disclosed information from decrypted Soviet communications about theGeneral Strike. This had prompted the Soviets to change their ciphers, leading to a blackout.[88]
The third explanation is given by Winterbotham, who recounts that two weeks afterV-E Day, on 25 May 1945, Churchill requested former recipients of Ultra intelligence not to divulge the source or the information that they had received from it, in order that there be neither damage to the future operations of the Secret Service nor any cause for the Axis to blame Ultra for their defeat.[89]
In 1967, Polish military historian Władysław Kozaczuk, in his book Bitwa o tajemnice ("Battle for Secrets"), first revealed that Enigma had been broken by Polish cryptologists before World War II.
Also published in 1967,David Kahn's comprehensive chronicle of the history of cryptography,The Codebreakers, does not mention Bletchley Park, although it does make the claim that Soviet forces were reading Enigma messages by 1942.[86]He also described the 1944 capture of a naval Enigma machine fromU-505and gave the first published hint about the scale, mechanisation and operational importance of the Anglo-American Enigma-breaking operation:
The Allies now read U-boat operational traffic. For they had, more than a year before the theft, succeeded in solving the difficult U-boat systems, and – in one of the finest cryptanalytic achievements of the war – managed to read the intercepts on a current basis. For this, the cryptanalysts needed the help of a mass of machinery that filled two buildings.[90]
Ladislas Farago's 1971 best-sellerThe Game of the Foxesgave an early garbled version of the myth of the purloined Enigma. According to Farago, it was thanks to a "Polish-Swedish ring [that] the British obtained a working model of the 'Enigma' machine, which the Germans used to encipher their top-secret messages."[91]"It was to pick up one of these machines that Commander Denniston went clandestinely to a secluded Polish castle [!] on the eve of the war. Dilly Knox later solved its keying, exposing all Abwehr signals encoded by this system."[92]"In 1941 [t]he brilliant cryptologist Dillwyn Knox, working at the Government Code & Cypher School at the Bletchley centre of British code-cracking, solved the keying of the Abwehr's Enigma machine."[93]
The 1973 public disclosure of Enigma decryption in the bookEnigmaby French intelligence officerGustave Bertrand[94]– which dealt mainly with the Polish and then Franco-Polish efforts before theInvasion of Franceand before the Ultra program[95]– generated pressure to discuss the rest of the Enigma–Ultra story.[citation needed]
Since it was British and, later, American message-breaking which had been the most extensive, the importance of Enigma decrypts to the prosecution of the war remained unknown despite revelations by the Poles and the French of their early work on breaking the Enigma cipher. This work, which was carried out in the 1930s and continued into the early part of the war, was necessarily uninformed regarding further breakthroughs achieved by the Allies during the balance of the war.
The British ban was finally lifted in 1974, the year that a key participant on the distribution side of the Ultra project,F. W. Winterbotham, publishedThe Ultra Secret.[96]Winterbotham's book was written from memory and although officially allowed, there was no access to archives.[97]Public discussion of Bletchley Park's work in the English speaking world finally became accepted, although some former staff considered themselves bound to silence forever.[98]
Other books, such as Anthony Cave Brown's Bodyguard of Lies and William Stevenson's A Man Called Intrepid, were also being written at this time, and the military historian Harold C. Deutsch regards Winterbotham's revelations as having merely anticipated a wave of further disclosures.[99]
A succession of books by former participants and others followed. The official history of British intelligence in World War II was published in five volumes from 1979 to 1988, and included further details from official sources concerning the availability and employment of Ultra intelligence. It was chiefly edited byHarry Hinsley, with one volume byMichael Howard. There is also a one-volume collection of reminiscences by Ultra veterans,Codebreakers(1993), edited by Hinsley and Alan Stripp.
In 2012,Alan Turing's last two papers on Enigma decryption were released to Britain'sNational Archives.[100]The Departmental Historian atGCHQstated that the seven decades' delay had been due to their "continuing sensitivity... It wouldn't have been safe to release [them earlier]."[citation needed]
Historians andHolocaust researchershave tried to establish when the Allies realized the full extent of Nazi-era extermination of Jews, and specifically, the extermination-camp system. In 1999, the U.S. Government passed the Nazi War Crimes Disclosure Act (P.L.105-246), making it policy to declassify all Nazi war crime documents in their files; this was later amended to include the Japanese Imperial Government.[101]As a result, more than 600 decrypts and translations of intercepted messages were disclosed;NSAhistorian Robert Hanyok would conclude that Allied communications intelligence, "by itself, could not have provided an early warning to Allied leaders regarding the nature and scope of the Holocaust."[102]
FollowingOperation Barbarossa, decrypts in August 1941 alerted British authorities to the many massacres in occupied zones of theSoviet Union, including those of Jews, but specifics were not made public for security reasons.[103]Revelations about the concentration camps were gleaned from other sources, and were publicly reported by thePolish government-in-exile,Jan Karskiand theWJCoffices in Switzerland a year or more later.[104]A decrypted message referring to "Einsatz Reinhard" (theHöfle telegram), from 11 January 1943 may have outlined the system and listed the number of Jews and others gassed at four death camps the previous year, but codebreakers did not understand the meaning of the message.[105]In summer 1944,Arthur Schlesinger, anOSSanalyst, interpreted the intelligence as an "incremental increase in persecution rather than ... extermination".[106]
The existence of Ultra was kept secret for many years after the war. Since the Ultra story was widely disseminated by Winterbotham in 1974,[107][108]historians have altered thehistoriography of World War II. For example,Andrew Roberts, writing in the 21st century, states, "Because he had the invaluable advantage of being able to read Field MarshalErwin Rommel's Enigma communications, GeneralBernard Montgomeryknew how short the Germans were of men, ammunition, food and above all fuel. When he put Rommel's picture up in his caravan he wanted to be seen to be almost reading his opponent's mind. In fact he was reading his mail."[109]Over time, Ultra has become embedded in the public consciousness and Bletchley Park has become a significantvisitor attraction.[110]As stated by historian Thomas Haigh, "The British code-breaking effort of the Second World War, formerly secret, is now one of the most celebrated aspects of modern British history, an inspiring story in which a free society mobilized its intellectual resources against a terrible enemy."[18]
There has been controversy about the influence of Allied Enigma decryption on the course of World War II, with three broad views: that without Ultra the outcome of the war would have been different; that without Ultra the Allies would still have won, but the war would have lasted roughly two years longer; and that, while useful, Ultra decrypts were largely incidental to the fact and timing of the Allied victory.
An oft-repeated assessment is that decryption of German ciphers advanced theend of the European warby no less than two years.[111][112]Hinsley, who first made this claim, is typically cited as an authority for the two-year estimate.[113]
Winterbotham's quoting of Eisenhower's "decisive" verdict is part of a letter sent by Eisenhower to Menzies after the conclusion of the European war and later found among his papers at the Eisenhower Presidential Library.[114]It allows a contemporary, documentary view of a leader on Ultra's importance:
July 1945
Dear General Menzies:
I had hoped to be able to pay a visit to Bletchley Park in order to thank you, Sir Edward Travis, and the members of the staff personally for the magnificent service which has been rendered to the Allied cause.
I am very well aware of the immense amount of work and effort which has been involved in the production of the material with which you supplied us. I fully realize also the numerous setbacks and difficulties with which you have had to contend and how you have always, by your supreme efforts, overcome them.
The intelligence which has emanated from you before and during this campaign has been of priceless value to me. It has simplified my task as a commander enormously. It has saved thousands of British and American lives and, in no small way, contributed to the speed with which the enemy was routed and eventually forced to surrender.
I should be very grateful, therefore, if you would express to each and every one of those engaged in this work from me personally my heartfelt admiration and sincere thanks for their very decisive contribution to the Allied war effort.
Sincerely,
Dwight D. Eisenhower
There is wide disagreement about the importance of codebreaking in winning the crucialBattle of the Atlantic. To cite just one example, the historian Max Hastings states that "In 1941 alone, Ultra saved between 1.5 and two million tons of Allied ships from destruction." This would represent a 40 percent to 53 percent reduction, though it is not clear how this extrapolation was made.[115]
Another view is from a history based on the German naval archives written after the war for the British Admiralty by a former U-boat commander and son-in-law of his commander, Grand AdmiralKarl Dönitz. His book reports that several times during the war they undertook detailed investigations to see whether their operations were being compromised by broken Enigma ciphers. These investigations were spurred because the Germans had broken the British naval code and found the information useful. Their investigations were negative, and the conclusion was that their defeat "was due firstly to outstanding developments in enemy radar..."[116]The great advance wascentimetric radar, developed in a joint British-American venture, which became operational in the spring of 1943. Earlier radar was unable to distinguish U-boatconning towersfrom the surface of the sea, so it could not even locate U-boats attacking convoys on the surface on moonless nights; thus the surfaced U-boats were almost invisible, while having the additional advantage of being swifter than their prey. The new higher-frequency radar could spot conning towers, andperiscopescould even be detected from airplanes. Some idea of the relative effect of cipher-breaking and radar improvement can be obtained fromgraphsshowing the tonnage of merchantmen sunk and the number of U-boats sunk in each month of the Battle of the Atlantic. The graphs cannot be interpreted unambiguously, because it is challenging to factor in many variables such as improvements in cipher-breaking and the numerous other advances in equipment and techniques used to combat U-boats. Nonetheless, the data seem to favor the view of the former U-boat commander – that radar was crucial.
While Ultra certainly affected the course of theWestern Frontduring the war, two factors often argued against Ultra having shortened the overall war by a measure of years are the relatively small role it played in theEastern Front conflict between Germany and the Soviet Union, and the completely independent development of the U.S.-ledManhattan Projectto create theatomic bomb. AuthorJeffrey T. Richelsonmentions Hinsley's estimate of at least two years, and concludes that "It might be more accurate to say that Ultra helped shorten the war by three months – the interval between the actual end of the war in Europe and the time the United States would have been able to drop an atomic bomb on Hamburg or Berlin – and might have shortened the war by as much as two years had the U.S. atomic bomb program been unsuccessful."[11]Military historianGuy Hartcupanalyzes aspects of the question but then simply says, "It is impossible to calculate in terms of months or years how much Ultra shortened the war."[117]
F. W. Winterbotham, the first author to outline the influence of Enigma decryption on the course of World War II, likewise made the earliest contribution to an appreciation of Ultra's postwar influence, which now continues into the 21st century – and not only in the postwar establishment of Britain's GCHQ (Government Communications Headquarters) and the United States' NSA. "Let no one be fooled", Winterbotham admonishes in chapter 3, "by the spate of television films and propaganda which has made the war seem like some great triumphant epic. It was, in fact, a very narrow shave, and the reader may like to ponder [...] whether [...] we might have won [without] Ultra."[118]
Iain Standen, Chief Executive of the Bletchley Park Trust, says of the work done there: "It was crucial to the survival of Britain, and indeed of the West." The Departmental Historian atGCHQ(the Government Communications Headquarters), who identifies himself only as "Tony" but seems to speak authoritatively, says that Ultra was a "major force multiplier. It was the first time that quantities of real-time intelligence became available to the British military."[citation needed]
According to the official historian ofBritish Intelligence, Ultra intelligence shortened the war by two to four years, and without it the outcome of the war would have been uncertain.[9]
Phillip Knightleysuggests that Ultra may have contributed to the development of theCold War.[119]The Soviets received disguised Ultra information, but the existence of Ultra itself was not disclosed by the western Allies. The Soviets, who had clues to Ultra's existence, possibly throughKim Philby,John CairncrossandAnthony Blunt,[119]may thus have felt still more distrustful of theirwartime partners.
Debate continues on whether, had postwar political and military leaders been aware of Ultra's role in Allied victory in World War II, these leaders might have been less optimistic about post-World War II military involvements.Christopher Kasparekwrites: "Had the... postwar governments of major powers realized ... how Allied victory in World War II had hung by a slender thread first spun by three mathematicians [Rejewski, Różycki, Zygalski] working on Enigma decryption for the general staff of a seemingly negligible power [Poland], they might have been more cautious in picking their own wars."[120]A kindred point concerning postwar American triumphalism is made by British historianMax Hastings, author ofInferno: The World at War, 1939–1945.[121]
|
https://en.wikipedia.org/wiki/Ultra_(cryptography)
|
The Advanced Encryption Standard (AES), the symmetric block cipher ratified as a standard by the National Institute of Standards and Technology (NIST) of the United States, was chosen using a process lasting from 1997 to 2000 that was markedly more open and transparent than the one used for its predecessor, the Data Encryption Standard (DES). This process won praise from the open cryptographic community, and helped to increase confidence in the security of the winning algorithm from those who were suspicious of backdoors in the predecessor, DES.
A new standard was needed primarily because DES had a relatively small 56-bit key which was becoming vulnerable to brute-force attacks. In addition, DES was designed primarily for hardware and was relatively slow when implemented in software.[1] While Triple DES avoids the problem of a small key size, it is very slow even in hardware, it is unsuitable for limited-resource platforms, and it may be affected by potential security issues connected with the (today comparatively small) block size of 64 bits.
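The key-size concern is easiest to see as back-of-the-envelope arithmetic: a 56-bit key space holds about 7.2 × 10^16 keys, whereas the 128-bit minimum later required of AES candidates holds about 3.4 × 10^38. The sketch below assumes an arbitrary illustrative trial rate rather than any measured benchmark.

```python
# Back-of-the-envelope arithmetic behind the key-size concern. The trial rate
# is an assumed illustrative figure, not a benchmark of any real attacker.
des_keys = 2 ** 56
aes_keys = 2 ** 128

trials_per_second = 10 ** 12          # assumed attacker speed, for illustration
seconds_per_year = 60 * 60 * 24 * 365

print(des_keys / trials_per_second / 3600)                # hours to sweep a 56-bit space (~20)
print(aes_keys / trials_per_second / seconds_per_year)    # years to sweep a 128-bit space (~1e19)
```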
On January 2, 1997, NIST announced that they wished to choose a successor to DES to be known as AES. Like DES, this was to be "an unclassified, publicly disclosed encryption algorithm capable of protecting sensitive government information well into the next century."[2]However, rather than simply publishing a successor, NIST asked for input from interested parties on how the successor should be chosen. Interest from the open cryptographic community was immediately intense, and NIST received a great many submissions during the three-month comment period.
The result of this feedback was a call for new algorithms on September 12, 1997.[3]The algorithms were all to be block ciphers, supporting a block size of 128 bits and key sizes of 128, 192, and 256 bits. Such ciphers were rare at the time of the announcement; the best known was probablySquare.
In the nine months that followed, fifteen designs were created and submitted from several countries. They were, in alphabetical order: CAST-256, CRYPTON, DEAL, DFC, E2, FROG, HPC, LOKI97, MAGENTA, MARS, RC6, Rijndael, SAFER+, Serpent, and Twofish.
In the ensuing debate, many advantages and disadvantages of the candidates were investigated by cryptographers; they were assessed not only on security, but also on performance in a variety of settings (PCs of various architectures, smart cards, hardware implementations) and on their feasibility in limited environments (smart cards with very limited memory, low gate count implementations, FPGAs).
Some designs fell due to cryptanalysis that ranged from minor flaws to significant attacks, while others lost favour due to poor performance in various environments or through having little to offer over other candidates. NIST held two conferences to discuss the submissions (AES1, August 1998 and AES2, March 1999[4][5][6]), and in August 1999 they announced[7] that they were narrowing the field from fifteen to five: MARS, RC6, Rijndael, Serpent, and Twofish. All five algorithms, commonly referred to as "AES finalists", were designed by cryptographers considered well-known and respected in the community.
The AES2 conference votes were as follows:[8]
A further round of intense analysis and cryptanalysis followed, culminating in the AES3 conference in April 2000, at which a representative of each of the final five teams made a presentation arguing why their design should be chosen as the AES. The AES3 conference votes were as follows:[9]
On October 2, 2000, NIST announced[10] that Rijndael had been selected as the proposed AES and started the process of making it the official standard by publishing an announcement in the Federal Register[11] on February 28, 2001 for the draft FIPS to solicit comments. On November 26, 2001, NIST announced that AES was approved as FIPS PUB 197.
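In practice, the standardized cipher is normally used through a vetted library rather than implemented by hand. The sketch below shows AES with a 256-bit key in an authenticated mode (AES-GCM) via the third-party Python "cryptography" package; it assumes that package is installed and is a usage illustration, not part of the standard itself.

```python
# Usage sketch of AES-256 in GCM mode with the third-party "cryptography"
# package (pip install cryptography).
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)   # one of the three AES key sizes
aead = AESGCM(key)
nonce = os.urandom(12)                      # fresh nonce per message

ciphertext = aead.encrypt(nonce, b"sensitive government information", None)
plaintext = aead.decrypt(nonce, ciphertext, None)
assert plaintext == b"sensitive government information"
```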
NIST won praise from the cryptographic community for the openness and care with which they ran the standards process. Bruce Schneier, one of the authors of the losing Twofish algorithm, wrote after the competition was over that "I have nothing but good things to say about NIST and the AES process."[12]
|
https://en.wikipedia.org/wiki/Advanced_Encryption_Standard_process
|
Post-Quantum Cryptography Standardization[1] is a program and competition by NIST to update their standards to include post-quantum cryptography.[2] It was announced at PQCrypto 2016.[3] 23 signature schemes and 59 encryption/KEM schemes were submitted by the initial submission deadline at the end of 2017,[4] of which 69 total were deemed complete and proper and participated in the first round. Seven of these, of which 3 are signature schemes, have advanced to the third round, which was announced on July 22, 2020.[citation needed]
On August 13, 2024, NIST released final versions of the first three Post Quantum Crypto Standards: FIPS 203, FIPS 204, and FIPS 205.[5]
Academic research on the potential impact of quantum computing dates back to at least 2001.[6] A NIST-published report from April 2016 cites experts who acknowledge the possibility of quantum technology rendering the commonly used RSA algorithm insecure by 2030.[7] As a result, a need to standardize quantum-secure cryptographic primitives was pursued. Since most symmetric primitives are relatively easy to modify in a way that makes them quantum resistant, efforts have focused on public-key cryptography, namely digital signatures and key encapsulation mechanisms. In December 2016 NIST initiated a standardization process by announcing a call for proposals.[8]
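A key encapsulation mechanism (KEM) exposes three operations – key generation, encapsulation and decapsulation – that together establish a shared secret. The sketch below shows only that interface shape; the toy "algorithm" inside it offers no security whatsoever and bears no relation to the mathematics of the actual candidates.

```python
# Conceptual sketch of the KEM interface (keygen / encapsulate / decapsulate).
# The toy construction below is insecure and purely illustrative.
import secrets

def keygen():
    sk = secrets.token_bytes(32)
    pk = sk                      # in a real KEM the public key does not reveal the secret
    return pk, sk

def encapsulate(pk: bytes):
    shared = secrets.token_bytes(32)
    ct = bytes(s ^ p for s, p in zip(shared, pk))   # toy "encryption" of the secret
    return ct, shared

def decapsulate(sk: bytes, ct: bytes) -> bytes:
    return bytes(c ^ s for c, s in zip(ct, sk))

pk, sk = keygen()
ct, shared_sender = encapsulate(pk)
shared_receiver = decapsulate(sk, ct)
assert shared_sender == shared_receiver
```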
The competition is now in its third round out of an expected four, where in each round some algorithms are discarded and others are studied more closely. NIST hopes to publish the standardization documents by 2024, but may speed up the process if major breakthroughs in quantum computing are made.
It is currently undecided whether the future standards will be published asFIPSor as NIST Special Publication (SP).
Under consideration were:[9](strikethroughmeans it had been withdrawn)
Candidates moving on to the second round were announced on January 30, 2019. They are:[33]
On July 22, 2020, NIST announced seven finalists ("first track"), as well as eight alternate algorithms ("second track"). The first track contains the algorithms which appear to have the most promise, and will be considered for standardization at the end of the third round. Algorithms in the second track could still become part of the standard, after the third round ends.[53]NIST expects some of the alternate candidates to be considered in a fourth round. NIST also suggests it may re-open the signature category for new schemes proposals in the future.[54]
On June 7–9, 2021, NIST conducted the third PQC standardization conference, virtually.[55]The conference included candidates' updates and discussions on implementations, on performances, and on security issues of the candidates. A small amount of focus was spent on intellectual property concerns.
AfterNIST's announcement regarding the finalists and the alternate candidates, various intellectual property concerns were voiced, notably surroundinglattice-based schemessuch asKyberandNewHope. NIST holds signed statements from submitting groups clearing any legal claims, but there is still a concern that third parties could raise claims. NIST claims that they will take such considerations into account while picking the winning algorithms.[56]
During this round, some candidates were shown to be vulnerable to certain attack vectors, forcing those candidates to adapt accordingly:
On July 5, 2022, NIST announced the first group of winners from its six-year competition.[60][61]
On July 5, 2022, NIST announced four candidates for PQC Standardization Round 4.[62]
On August 13, 2024, NIST released final versions of its first three Post Quantum Crypto Standards.[5] According to the release announcement:
While there have been no substantive changes made to the standards since the draft versions, NIST has changed the algorithms’ names to specify the versions that appear in the three finalized standards, which are:
On March 11, 2025, NIST released HQC as the fifth algorithm for post-quantum asymmetric encryption as used for key encapsulation/exchange.[66] The new algorithm serves as a backup for ML-KEM, the main algorithm for general encryption. HQC is based on different mathematics than ML-KEM, providing a fallback should a weakness be discovered in the latter.[67] A draft standard incorporating the HQC algorithm is expected in early 2026, with the final version in 2027.
NIST received 50 submissions and deemed 40 to be complete and proper according to the submission requirements.[68] Under consideration are:[69] (strikethrough means it has been withdrawn)
NIST deemed 14 submissions to pass to the second round.[127]
|
https://en.wikipedia.org/wiki/Post-Quantum_Cryptography_Standardization
|
A best current practice, abbreviated as BCP,[1] is a de facto level of performance in engineering and information technology. It is more flexible than a standard, since techniques and tools are continually evolving. The Internet Engineering Task Force publishes Best Current Practice documents in a numbered document series. Each document in this series is paired with the currently valid Request for Comments (RFC) document. BCP was introduced in RFC 1818.[2]
BCPs document guidelines, processes, methods, and other matters not suitable for standardization. The Internet standards process itself is defined in a series of BCPs, as is the formal organizational structure of the IETF, Internet Engineering Steering Group, Internet Architecture Board, and other groups involved in that process. IETF's separate Standard Track (STD) document series defines the fully standardized network protocols of the Internet, such as the Internet Protocol, the Transmission Control Protocol, and the Domain Name System.
Each RFC number refers to a specific version of a Standard Track document, but the BCP number refers to the most recent revision of the document. Thus, citations often reference both the BCP number and the RFC number. Example citations for BCPs are: BCP 38, RFC 2827.
|
https://en.wikipedia.org/wiki/Best_current_practice
|
An Internet Experiment Note (IEN) is a sequentially numbered document in a series of technical publications issued by the participants of the early development work groups that created the precursors of the modern Internet.
After DARPA began the Internet program in earnest in 1977, the project members needed to communicate and document their work in order to realize the concepts laid out by Bob Kahn and Vint Cerf some years before. The Request for Comments (RFC) series was considered the province of the ARPANET project and the Network Working Group (NWG), which defined the network protocols used on it. Thus, the members of the Internet project decided to publish their own series of documents, Internet Experiment Notes, modeled after the RFCs.[1][2]
Jon Postel became the editor of the new series, in addition to his existing role of administering the long-standing RFC series. Between March 1977 and September 1982, 206 IENs were published. After that, with the plan to terminate support of the Network Control Protocol (NCP) on the ARPANET and switch to TCP/IP, the production of IENs was discontinued, and all further publication was conducted within the existing RFC system.[3][2]
The second, third and fourth versions of TCP, including the split into TCP/IP, were developed during the IEN work.[4][5][6] The "Final Report" of the "TCP Project" mentions some of the people involved, including groups from Stanford University, University College London, USC-ISI, MIT, BBN, and NDRE, among others.[7]
Key networking principles, such as the robustness principle, were defined during the IEN work.[8]
|
https://en.wikipedia.org/wiki/Internet_Experiment_Note
|
This is a partial list of RFCs (request for comments memoranda). A Request for Comments (RFC) is a publication in a series from the principal technical development and standards-setting bodies for the Internet, most prominently the Internet Engineering Task Force (IETF).
While there are over 9,151 RFCs as of February 2022, this list consists of RFCs that have related articles. A complete list is available from the IETF website.[1]
Obsolete RFCs are indicated with struck-through text.
|
https://en.wikipedia.org/wiki/List_of_RFCs
|
Data degradation is the gradual corruption of computer data due to an accumulation of non-critical failures in a data storage device. It is also referred to as data decay, data rot or bit rot.[1] This results in a decline in data quality over time, even when the data is not being utilized. The concept of data degradation also covers the progressive minimization of data in interconnected processes, where data is used for multiple purposes at different levels of detail: at specific points in the process chain, data is irreversibly reduced to a level that remains sufficient for the successful completion of the following steps.[2]
Data degradation in dynamic random-access memory (DRAM) can occur when the electric charge of a bit in DRAM disperses, possibly altering program code or stored data. DRAM may be altered by cosmic rays[3] or other high-energy particles. Such data degradation is known as a soft error.[4] ECC memory can be used to mitigate this type of data degradation.[5]
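The principle behind ECC memory can be sketched with a Hamming(7,4) code, which stores three parity bits alongside four data bits and can correct any single flipped bit. The Python below is an illustrative toy; real DRAM modules use wider SECDED codes, but the mechanism is the same.

    def hamming74_encode(d):
        """Encode 4 data bits as 7 code bits (positions 1..7, parity at positions 1, 2 and 4)."""
        d1, d2, d3, d4 = d
        p1 = d1 ^ d2 ^ d4
        p2 = d1 ^ d3 ^ d4
        p3 = d2 ^ d3 ^ d4
        return [p1, p2, d1, p3, d2, d3, d4]

    def hamming74_decode(c):
        """Correct up to one flipped bit and return the 4 data bits."""
        c = c[:]                              # work on a copy
        s1 = c[0] ^ c[2] ^ c[4] ^ c[6]        # parity over positions 1,3,5,7
        s2 = c[1] ^ c[2] ^ c[5] ^ c[6]        # parity over positions 2,3,6,7
        s3 = c[3] ^ c[4] ^ c[5] ^ c[6]        # parity over positions 4,5,6,7
        error_pos = s1 + 2 * s2 + 4 * s3      # 0 means no single-bit error detected
        if error_pos:
            c[error_pos - 1] ^= 1             # flip the corrupted bit back
        return [c[2], c[4], c[5], c[6]]

    word = [1, 0, 1, 1]
    stored = hamming74_encode(word)
    stored[5] ^= 1                            # a particle strike flips one stored bit
    assert hamming74_decode(stored) == word   # the code recovers the original data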
Data degradation results from the gradual decay of storage media over the course of years or longer. Causes vary by medium.
EPROMs, flash memory and other solid-state drives store data using electrical charges, which can slowly leak away due to imperfect insulation. Modern flash controller chips account for this leak by trying several lower threshold voltages (until ECC passes), prolonging the age of data. Multi-level cells with much lower distance between voltage levels cannot be considered stable without this functionality.[6]
The chip itself is not affected by this, so reprogramming it approximately once per decade prevents decay. An undamaged copy of the master data is required for the reprogramming. A checksum can be used to confirm that the on-chip data is not yet damaged and is ready for reprogramming.
The typical SD card, USB stick and M.2 NVMe drive all have limited endurance. Powering the device on can usually recover data,[citation needed] but error rates will eventually degrade the media to illegibility. Writing zeros to a degraded NAND device can restore the storage to close to new condition for further use.[citation needed] Refresh cycles should be no longer than six months to ensure the device remains legible.
Magnetic media, such as hard disk drives, floppy disks and magnetic tapes, may experience data decay as bits lose their magnetic orientation. Higher temperatures speed up the rate of magnetic loss. As with solid-state media, re-writing is useful as long as the medium itself is not damaged (see below).[7] Modern hard drives use giant magnetoresistance and have a higher magnetic lifespan, on the order of decades. They also automatically correct any errors detected by ECC through rewriting. However, the drive's reliance on servo data written by a servowriter can complicate data recovery if that data becomes unrecoverable.
Floppy disks and tapes are poorly protected against ambient air. In warm or humid conditions, they are prone to physical decomposition of the storage medium.[8][7]
Optical media such as CD-R, DVD-R and BD-R may experience data decay from the breakdown of the storage medium. This can be mitigated by storing discs in a dark, cool, low-humidity location. "Archival quality" discs are available with an extended lifetime, but are still not permanent. However, data integrity scanning that measures the rates of various types of errors is able to predict data decay on optical media well ahead of uncorrectable data loss occurring.[9]
Both the disc dye and the disc backing layer are potentially susceptible to breakdown. Early cyanine-based dyes used in CD-Rs were notorious for their lack of UV stability. Early CDs also suffered from CD bronzing, which is related to a combination of poor lacquer material and failure of the aluminum reflection layer.[10] Later discs use more stable dyes or forgo them for an inorganic mixture. The aluminum layer is also commonly swapped out for a gold or silver alloy.
Paper media, such as punched cards and punched tape, may literally rot. Mylar punched tape is another approach that does not rely on electromagnetic stability. Degradation of books and printing paper is primarily driven by acid hydrolysis of glycosidic bonds within the cellulose molecule as well as by oxidation;[11] degradation of paper is accelerated by high relative humidity, high temperature, as well as by exposure to acids, oxygen, light, and various pollutants, including various volatile organic compounds and nitrogen dioxide.[12]
Data degradation in streaming media acquisition modules, as addressed by the repair algorithms, reflects real-time data quality issues caused by device limitations. However, a more general form of data degradation refers to the gradual decay of storage media over extended periods, influenced by factors like physical wear, environmental conditions, or technological obsolescence. Causes of such degradation can vary depending on the medium, such as magnetic fields in hard drives, moisture or temperature for tape storage, or electronic failure over time.[13]
One manifestation of data degradation is when one or a few bits are randomly flipped over a long period of time.[14] This is illustrated by several digital images below, all consisting of 326,272 bits. The original photo is displayed first. In the next image, a single bit was changed from 0 to 1. In the next two images, two and three bits were flipped. On Linux systems, the binary difference between files can be revealed using the cmp command (e.g. cmp -b bitrot-original.jpg bitrot-1bit-changed.jpg).
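A corrupted copy like the ones described above can be reproduced deliberately. The Python sketch below copies a file while inverting exactly one bit; the file names match the example cmp invocation, while the chosen bit index is arbitrary.

    def flip_bit(src, dst, bit_index):
        """Copy src to dst with exactly one bit inverted."""
        data = bytearray(open(src, "rb").read())
        byte_index, offset = divmod(bit_index, 8)
        data[byte_index] ^= 1 << (7 - offset)   # invert one bit (MSB-first within the byte)
        with open(dst, "wb") as f:
            f.write(bytes(data))

    flip_bit("bitrot-original.jpg", "bitrot-1bit-changed.jpg", bit_index=123456)
    # `cmp -b bitrot-original.jpg bitrot-1bit-changed.jpg` then reports the single differing byte.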
This deterioration can be caused by a variety of factors that impact the reliability and integrity of digital information, including physical factors, software errors, security breaches, human error, obsolete technology, and unauthorized access incidents.[15][16][17][18]
Most disk, disk controller and higher-level systems are subject to a slight chance of unrecoverable failure. With ever-growing disk capacities, file sizes, and increases in the amount of data stored on a disk, the likelihood of the occurrence of data decay and other forms of uncorrected and undetected data corruption increases.[19]
Low-level disk controllers typically employ error correction codes (ECC) to correct erroneous data.[20]
Higher-level software systems may be employed to mitigate the risk of such underlying failures by increasing redundancy and implementing integrity checking, error correction codes and self-repairing algorithms.[21] The ZFS file system was designed to address many of these data corruption issues.[22] The Btrfs file system also includes data protection and recovery mechanisms,[23][better source needed] as does ReFS.[24]
There is no solution that completely eliminates the threat of data degradation,[25] but various measures exist that can stave it off. One of these is to replicate the data as backups. Both the original and the backed-up data are then audited for any faults due to storage media errors by checksumming the data or comparing it with that of other copies. This is the only way to detect latent faults proactively,[26] which might otherwise go unnoticed until the data is actually accessed.[27] Current storage systems such as those based on RAID already employ such measures internally.[28] Ideally, and especially for data that must be preserved digitally, the replicas should be distributed across multiple administrative sites that function autonomously and deploy various hardware and software, increasing resistance to failure, as well as to human error and cyberattacks.[29]
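A minimal sketch of such a checksum audit, assuming two replicas stored at illustrative paths: each copy is hashed and the digests compared, so that a latent fault in either replica is noticed before the data is actually needed.

    import hashlib

    def sha256_of(path, chunk_size=1 << 20):
        """Stream a file through SHA-256 and return its hex digest."""
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                h.update(chunk)
        return h.hexdigest()

    replicas = ["/backup/site-a/archive.tar", "/backup/site-b/archive.tar"]  # placeholder paths
    digests = {path: sha256_of(path) for path in replicas}
    if len(set(digests.values())) > 1:
        print("Latent corruption detected:", digests)   # at least one replica differs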
|
https://en.wikipedia.org/wiki/Data_degradation
|
Computer science is the study of computation, information, and automation.[1][2][3] Computer science spans theoretical disciplines (such as algorithms, theory of computation, and information theory) to applied disciplines (including the design and implementation of hardware and software).[4][5][6]
Algorithms and data structures are central to computer science.[7] The theory of computation concerns abstract models of computation and general classes of problems that can be solved using them. The fields of cryptography and computer security involve studying the means for secure communication and preventing security vulnerabilities. Computer graphics and computational geometry address the generation of images. Programming language theory considers different ways to describe computational processes, and database theory concerns the management of repositories of data. Human–computer interaction investigates the interfaces through which humans and computers interact, and software engineering focuses on the design and principles behind developing software. Areas such as operating systems, networks and embedded systems investigate the principles and design behind complex systems. Computer architecture describes the construction of computer components and computer-operated equipment. Artificial intelligence and machine learning aim to synthesize goal-orientated processes such as problem-solving, decision-making, environmental adaptation, planning and learning found in humans and animals. Within artificial intelligence, computer vision aims to understand and process image and video data, while natural language processing aims to understand and process textual and linguistic data.
The fundamental concern of computer science is determining what can and cannot be automated.[2][8][3][9][10] The Turing Award is generally recognized as the highest distinction in computer science.[11][12]
The earliest foundations of what would become computer science predate the invention of the modern digital computer. Machines for calculating fixed numerical tasks such as the abacus have existed since antiquity, aiding in computations such as multiplication and division. Algorithms for performing computations have existed since antiquity, even before the development of sophisticated computing equipment.[16]
Wilhelm Schickard designed and constructed the first working mechanical calculator in 1623.[17] In 1673, Gottfried Leibniz demonstrated a digital mechanical calculator, called the Stepped Reckoner.[18] Leibniz may be considered the first computer scientist and information theorist, because of various reasons, including the fact that he documented the binary number system. In 1820, Thomas de Colmar launched the mechanical calculator industry[note 1] when he invented his simplified arithmometer, the first calculating machine strong enough and reliable enough to be used daily in an office environment. Charles Babbage started the design of the first automatic mechanical calculator, his Difference Engine, in 1822, which eventually gave him the idea of the first programmable mechanical calculator, his Analytical Engine.[19] He started developing this machine in 1834, and "in less than two years, he had sketched out many of the salient features of the modern computer".[20] "A crucial step was the adoption of a punched card system derived from the Jacquard loom",[20] making it infinitely programmable.[note 2] In 1843, during the translation of a French article on the Analytical Engine, Ada Lovelace wrote, in one of the many notes she included, an algorithm to compute the Bernoulli numbers, which is considered to be the first published algorithm ever specifically tailored for implementation on a computer.[21] Around 1885, Herman Hollerith invented the tabulator, which used punched cards to process statistical information; eventually his company became part of IBM. Following Babbage, although unaware of his earlier work, Percy Ludgate in 1909 published[22] the second of the only two designs for mechanical analytical engines in history. In 1914, the Spanish engineer Leonardo Torres Quevedo published his Essays on Automatics,[23] and designed, inspired by Babbage, a theoretical electromechanical calculating machine which was to be controlled by a read-only program. The paper also introduced the idea of floating-point arithmetic.[24][25] In 1920, to celebrate the 100th anniversary of the invention of the arithmometer, Torres presented in Paris the Electromechanical Arithmometer, a prototype that demonstrated the feasibility of an electromechanical analytical engine,[26] on which commands could be typed and the results printed automatically.[27] In 1937, one hundred years after Babbage's impossible dream, Howard Aiken convinced IBM, which was making all kinds of punched card equipment and was also in the calculator business,[28] to develop his giant programmable calculator, the ASCC/Harvard Mark I, based on Babbage's Analytical Engine, which itself used cards and a central computing unit. When the machine was finished, some hailed it as "Babbage's dream come true".[29]
During the 1940s, with the development of new and more powerful computing machines such as the Atanasoff–Berry computer and ENIAC, the term computer came to refer to the machines rather than their human predecessors.[30] As it became clear that computers could be used for more than just mathematical calculations, the field of computer science broadened to study computation in general. In 1945, IBM founded the Watson Scientific Computing Laboratory at Columbia University in New York City. The renovated fraternity house on Manhattan's West Side was IBM's first laboratory devoted to pure science. The lab is the forerunner of IBM's Research Division, which today operates research facilities around the world.[31] Ultimately, the close relationship between IBM and Columbia University was instrumental in the emergence of a new scientific discipline, with Columbia offering one of the first academic-credit courses in computer science in 1946.[32] Computer science began to be established as a distinct academic discipline in the 1950s and early 1960s.[33][34] The world's first computer science degree program, the Cambridge Diploma in Computer Science, began at the University of Cambridge Computer Laboratory in 1953. The first computer science department in the United States was formed at Purdue University in 1962.[35] Since practical computers became available, many applications of computing have become distinct areas of study in their own rights.
Although first proposed in 1956,[36] the term "computer science" appears in a 1959 article in Communications of the ACM,[37] in which Louis Fein argues for the creation of a Graduate School in Computer Sciences analogous to the creation of Harvard Business School in 1921.[38] Louis justifies the name by arguing that, like management science, the subject is applied and interdisciplinary in nature, while having the characteristics typical of an academic discipline.[37] His efforts, and those of others such as numerical analyst George Forsythe, were rewarded: universities went on to create such departments, starting with Purdue in 1962.[39] Despite its name, a significant amount of computer science does not involve the study of computers themselves. Because of this, several alternative names have been proposed.[40] Certain departments of major universities prefer the term computing science, to emphasize precisely that difference. Danish scientist Peter Naur suggested the term datalogy,[41] to reflect the fact that the scientific discipline revolves around data and data treatment, while not necessarily involving computers. The first scientific institution to use the term was the Department of Datalogy at the University of Copenhagen, founded in 1969, with Peter Naur being the first professor in datalogy. The term is used mainly in the Scandinavian countries. An alternative term, also proposed by Naur, is data science; this is now used for a multi-disciplinary field of data analysis, including statistics and databases.
In the early days of computing, a number of terms for the practitioners of the field of computing were suggested (albeit facetiously) in the Communications of the ACM: turingineer, turologist, flow-charts-man, applied meta-mathematician, and applied epistemologist.[42] Three months later in the same journal, comptologist was suggested, followed next year by hypologist.[43] The term computics has also been suggested.[44] In Europe, terms derived from contracted translations of the expression "automatic information" (e.g. "informazione automatica" in Italian) or "information and mathematics" are often used, e.g. informatique (French), Informatik (German), informatica (Italian, Dutch), informática (Spanish, Portuguese), informatika (Slavic languages and Hungarian) or pliroforiki (πληροφορική, which means informatics) in Greek. Similar words have also been adopted in the UK (as in the School of Informatics, University of Edinburgh).[45] "In the U.S., however, informatics is linked with applied computing, or computing in the context of another domain."[46]
A folkloric quotation, often attributed to (but almost certainly not first formulated by) Edsger Dijkstra, states that "computer science is no more about computers than astronomy is about telescopes."[note 3] The design and deployment of computers and computer systems is generally considered the province of disciplines other than computer science. For example, the study of computer hardware is usually considered part of computer engineering, while the study of commercial computer systems and their deployment is often called information technology or information systems. However, there has been exchange of ideas between the various computer-related disciplines. Computer science research also often intersects other disciplines, such as cognitive science, linguistics, mathematics, physics, biology, Earth science, statistics, philosophy, and logic.
Computer science is considered by some to have a much closer relationship with mathematics than many scientific disciplines, with some observers saying that computing is a mathematical science.[33] Early computer science was strongly influenced by the work of mathematicians such as Kurt Gödel, Alan Turing, John von Neumann, Rózsa Péter and Alonzo Church, and there continues to be a useful interchange of ideas between the two fields in areas such as mathematical logic, category theory, domain theory, and algebra.[36]
The relationship between computer science and software engineering is a contentious issue, which is further muddied by disputes over what the term "software engineering" means, and how computer science is defined.[47] David Parnas, taking a cue from the relationship between other engineering and science disciplines, has claimed that the principal focus of computer science is studying the properties of computation in general, while the principal focus of software engineering is the design of specific computations to achieve practical goals, making the two separate but complementary disciplines.[48]
The academic, political, and funding aspects of computer science tend to depend on whether a department is formed with a mathematical emphasis or with an engineering emphasis. Computer science departments with a mathematics emphasis and with a numerical orientation consider alignment with computational science. Both types of departments tend to make efforts to bridge the field educationally if not across all research.
Despite the word science in its name, there is debate over whether or not computer science is a discipline of science,[49] mathematics,[50] or engineering.[51] Allen Newell and Herbert A. Simon argued in 1975,
Computer science is an empirical discipline. We would have called it an experimental science, but like astronomy, economics, and geology, some of its unique forms of observation and experience do not fit a narrow stereotype of the experimental method. Nonetheless, they are experiments. Each new machine that is built is an experiment. Actually constructing the machine poses a question to nature; and we listen for the answer by observing the machine in operation and analyzing it by all analytical and measurement means available.[51]
It has since been argued that computer science can be classified as an empirical science since it makes use of empirical testing to evaluate the correctness of programs, but a problem remains in defining the laws and theorems of computer science (if any exist) and defining the nature of experiments in computer science.[51] Proponents of classifying computer science as an engineering discipline argue that the reliability of computational systems is investigated in the same way as bridges in civil engineering and airplanes in aerospace engineering.[51] They also argue that while empirical sciences observe what presently exists, computer science observes what is possible to exist and while scientists discover laws from observation, no proper laws have been found in computer science and it is instead concerned with creating phenomena.[51]
Proponents of classifying computer science as a mathematical discipline argue that computer programs are physical realizations of mathematical entities and that programs can be deductively reasoned about through mathematical formal methods.[51] Computer scientists Edsger W. Dijkstra and Tony Hoare regard instructions for computer programs as mathematical sentences and interpret formal semantics for programming languages as mathematical axiomatic systems.[51]
A number of computer scientists have argued for the distinction of three separate paradigms in computer science. Peter Wegner argued that those paradigms are science, technology, and mathematics.[52] Peter Denning's working group argued that they are theory, abstraction (modeling), and design.[33] Amnon H. Eden described them as the "rationalist paradigm" (which treats computer science as a branch of mathematics, which is prevalent in theoretical computer science, and mainly employs deductive reasoning), the "technocratic paradigm" (which might be found in engineering approaches, most prominently in software engineering), and the "scientific paradigm" (which approaches computer-related artifacts from the empirical perspective of natural sciences,[53] identifiable in some branches of artificial intelligence).[54] Computer science focuses on methods involved in design, specification, programming, verification, implementation and testing of human-made computing systems.[55]
As a discipline, computer science spans a range of topics from theoretical studies of algorithms and the limits of computation to the practical issues of implementing computing systems in hardware and software.[56][57] CSAB, formerly called Computing Sciences Accreditation Board, which is made up of representatives of the Association for Computing Machinery (ACM) and the IEEE Computer Society (IEEE CS),[58] identifies four areas that it considers crucial to the discipline of computer science: theory of computation, algorithms and data structures, programming methodology and languages, and computer elements and architecture. In addition to these four areas, CSAB also identifies fields such as software engineering, artificial intelligence, computer networking and communication, database systems, parallel computation, distributed computation, human–computer interaction, computer graphics, operating systems, and numerical and symbolic computation as being important areas of computer science.[56]
Theoretical computer science is mathematical and abstract in spirit, but it derives its motivation from practical and everyday computation. It aims to understand the nature of computation and, as a consequence of this understanding, provide more efficient methodologies.
According to Peter Denning, the fundamental question underlying computer science is, "What can be automated?"[3] Theory of computation is focused on answering fundamental questions about what can be computed and what amount of resources are required to perform those computations. In an effort to answer the first question, computability theory examines which computational problems are solvable on various theoretical models of computation. The second question is addressed by computational complexity theory, which studies the time and space costs associated with different approaches to solving a multitude of computational problems.
The famous P = NP? problem, one of the Millennium Prize Problems,[59] is an open problem in the theory of computation.
Information theory, closely related to probability and statistics, is related to the quantification of information. This was developed by Claude Shannon to find fundamental limits on signal processing operations such as compressing data and on reliably storing and communicating data.[60] Coding theory is the study of the properties of codes (systems for converting information from one form to another) and their fitness for a specific application. Codes are used for data compression, cryptography, error detection and correction, and more recently also for network coding. Codes are studied for the purpose of designing efficient and reliable data transmission methods.[61]
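As a small illustration of the quantity Shannon's theory is built around, the following Python snippet computes the entropy of a string of symbols, i.e. the average number of bits per symbol that an optimal code would need.

    import math
    from collections import Counter

    def shannon_entropy(message):
        """Average information content of one symbol, in bits."""
        counts = Counter(message)
        total = len(message)
        return sum((n / total) * math.log2(total / n) for n in counts.values())

    print(shannon_entropy("aaaa"))        # 0.0: a constant message carries no information
    print(shannon_entropy("abcabcabc"))   # about 1.58 bits/symbol (three equally likely symbols)
    print(shannon_entropy("abcdefgh"))    # 3.0 bits/symbol (uniform over eight symbols)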
Data structures and algorithms are the studies of commonly used computational methods and their computational efficiency.
Programming language theory is a branch of computer science that deals with the design, implementation, analysis, characterization, and classification of programming languages and their individual features. It falls within the discipline of computer science, both depending on and affecting mathematics, software engineering, and linguistics. It is an active research area, with numerous dedicated academic journals.
Formal methods are a particular kind of mathematically based technique for the specification, development and verification of software and hardware systems.[62] The use of formal methods for software and hardware design is motivated by the expectation that, as in other engineering disciplines, performing appropriate mathematical analysis can contribute to the reliability and robustness of a design. They form an important theoretical underpinning for software engineering, especially where safety or security is involved. Formal methods are a useful adjunct to software testing since they help avoid errors and can also give a framework for testing. For industrial use, tool support is required. However, the high cost of using formal methods means that they are usually only used in the development of high-integrity and life-critical systems, where safety or security is of utmost importance. Formal methods are best described as the application of a fairly broad variety of theoretical computer science fundamentals, in particular logic calculi, formal languages, automata theory, and program semantics, but also type systems and algebraic data types, to problems in software and hardware specification and verification.
Computer graphics is the study of digital visual contents and involves the synthesis and manipulation of image data. The study is connected to many other fields in computer science, including computer vision, image processing, and computational geometry, and is heavily applied in the fields of special effects and video games.
Information can take the form of images, sound, video or other multimedia. Bits of information can be streamed via signals. Its processing is the central notion of informatics, the European view on computing, which studies information processing algorithms independently of the type of information carrier – whether it is electrical, mechanical or biological. This field plays an important role in information theory, telecommunications, and information engineering, and has applications in medical image computing and speech synthesis, among others. What is the lower bound on the complexity of fast Fourier transform algorithms? is one of the unsolved problems in theoretical computer science.
Scientific computing (or computational science) is the field of study concerned with constructing mathematical models and quantitative analysis techniques and using computers to analyze and solve scientific problems. A major usage of scientific computing is simulation of various processes, including computational fluid dynamics, physical, electrical, and electronic systems and circuits, as well as societies and social situations (notably war games) along with their habitats, among many others. Modern computers enable optimization of such designs as complete aircraft. Notable in electrical and electronic circuit design are SPICE,[63] as well as software for physical realization of new (or modified) designs. The latter includes essential design software for integrated circuits.[64]
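The structure of such simulations can be sketched in a few lines: a mathematical model of a system is stepped forward in time on a computer. The example below integrates a damped spring with the explicit Euler method; production codes such as circuit or fluid solvers use far more sophisticated models and integrators, but the overall pattern is the same.

    def simulate(steps=10_000, dt=1e-3, k=4.0, c=0.3):
        """Integrate a damped harmonic oscillator with the explicit Euler method."""
        x, v = 1.0, 0.0                      # initial displacement and velocity
        for _ in range(steps):
            a = -k * x - c * v               # model: spring force plus damping
            x, v = x + dt * v, v + dt * a    # Euler update of position and velocity
        return x, v

    print(simulate())                        # state after 10 seconds of simulated time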
Human–computer interaction (HCI) is the field of study and research concerned with the design and use of computer systems, mainly based on the analysis of the interaction between humans and computer interfaces. HCI has several subfields that focus on the relationship between emotions, social behavior and brain activity with computers.
Software engineering is the study of designing, implementing, and modifying software in order to ensure it is of high quality, affordable, maintainable, and fast to build. It is a systematic approach to software design, involving the application of engineering practices to software. Software engineering deals with the organizing and analyzing of software; it does not just deal with the creation or manufacture of new software, but also its internal arrangement and maintenance. Examples include software testing, systems engineering, technical debt and software development processes.
Artificial intelligence (AI) aims to or is required to synthesize goal-orientated processes such as problem-solving, decision-making, environmental adaptation, learning, and communication found in humans and animals. From its origins in cybernetics and in the Dartmouth Conference (1956), artificial intelligence research has been necessarily cross-disciplinary, drawing on areas of expertise such as applied mathematics, symbolic logic, semiotics, electrical engineering, philosophy of mind, neurophysiology, and social intelligence. AI is associated in the popular mind with robotic development, but the main field of practical application has been as an embedded component in areas of software development, which require computational understanding. The starting point in the late 1940s was Alan Turing's question "Can computers think?", and the question remains effectively unanswered, although the Turing test is still used to assess computer output on the scale of human intelligence. But the automation of evaluative and predictive tasks has been increasingly successful as a substitute for human monitoring and intervention in domains of computer application involving complex real-world data.
Computer architecture, or digital computer organization, is the conceptual design and fundamental operational structure of a computer system. It focuses largely on the way by which the central processing unit performs internally and accesses addresses in memory.[65] Computer engineers study computational logic and design of computer hardware, from individual processor components, microcontrollers and personal computers to supercomputers and embedded systems. The term "architecture" in computer literature can be traced to the work of Lyle R. Johnson and Frederick P. Brooks Jr., members of the Machine Organization department in IBM's main research center in 1959.
Concurrency is a property of systems in which several computations are executing simultaneously, and potentially interacting with each other.[66] A number of mathematical models have been developed for general concurrent computation including Petri nets, process calculi and the parallel random access machine model.[67] When multiple computers are connected in a network while using concurrency, this is known as a distributed system. Computers within that distributed system have their own private memory, and information can be exchanged to achieve common goals.[68]
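A minimal sketch of interacting concurrent computations, using Python threads that communicate through a shared queue; the result is the same no matter how the scheduler interleaves the workers.

    import threading
    import queue

    tasks, results = queue.Queue(), queue.Queue()

    def worker():
        while True:
            n = tasks.get()
            if n is None:                 # sentinel: no more work for this thread
                break
            results.put(n * n)            # the "computation" performed concurrently

    threads = [threading.Thread(target=worker) for _ in range(4)]
    for t in threads:
        t.start()
    for n in range(100):
        tasks.put(n)
    for _ in threads:
        tasks.put(None)                   # one sentinel per worker
    for t in threads:
        t.join()

    print(sum(results.get() for _ in range(100)))   # 328350, independent of thread scheduling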
This branch of computer science aims to manage networks between computers worldwide.
Computer security is a branch of computer technology with the objective of protecting information from unauthorized access, disruption, or modification while maintaining the accessibility and usability of the system for its intended users.
Historical cryptography is the art of writing and deciphering secret messages. Modern cryptography is the scientific study of problems relating to distributed computations that can be attacked.[69] Technologies studied in modern cryptography include symmetric and asymmetric encryption, digital signatures, cryptographic hash functions, key-agreement protocols, blockchain, zero-knowledge proofs, and garbled circuits.
A database is intended to organize, store, and retrieve large amounts of data easily. Digital databases are managed using database management systems to store, create, maintain, and search data, through database models and query languages. Data mining is a process of discovering patterns in large data sets.
The philosopher of computing Bill Rapaport noted three Great Insights of Computer Science:[70]
Programming languages can be used to accomplish different tasks in different ways. Common programming paradigms include:
Many languages offer support for multiple paradigms, making the distinction more a matter of style than of technical capabilities.[76]
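The same task written in two styles in Python, a language that supports several paradigms, illustrates that the choice is largely one of style:

    # Imperative style: an explicit loop mutating an accumulator.
    def even_squares_imperative(numbers):
        out = []
        for n in numbers:
            if n % 2 == 0:
                out.append(n * n)
        return out

    # Functional style: composition of filter and map, with no mutation.
    def even_squares_functional(numbers):
        return list(map(lambda n: n * n, filter(lambda n: n % 2 == 0, numbers)))

    assert even_squares_imperative(range(10)) == even_squares_functional(range(10))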
Conferences are important events for computer science research. During these conferences, researchers from the public and private sectors present their recent work and meet. Unlike in most other academic fields, in computer science, the prestige of conference papers is greater than that of journal publications.[77][78] One proposed explanation for this is that the quick development of this relatively new field requires rapid review and distribution of results, a task better handled by conferences than by journals.[79]
|
https://en.wikipedia.org/wiki/Computer_science
|
Data integrity is the maintenance of, and the assurance of, data accuracy and consistency over its entire life-cycle.[1] It is a critical aspect to the design, implementation, and usage of any system that stores, processes, or retrieves data. The term is broad in scope and may have widely different meanings depending on the specific context even under the same general umbrella of computing. It is at times used as a proxy term for data quality,[2] while data validation is a prerequisite for data integrity.[3]
Data integrity is the opposite of data corruption.[4] The overall intent of any data integrity technique is the same: ensure data is recorded exactly as intended (such as a database correctly rejecting mutually exclusive possibilities). Moreover, upon later retrieval, ensure the data is the same as when it was originally recorded. In short, data integrity aims to prevent unintentional changes to information. Data integrity is not to be confused with data security, the discipline of protecting data from unauthorized parties.
Any unintended change to data as the result of a storage, retrieval or processing operation, including malicious intent, unexpected hardware failure, and human error, is a failure of data integrity. If the changes are the result of unauthorized access, it may also be a failure of data security. Depending on the data involved, this could manifest itself in effects as benign as a single pixel in an image appearing a different color than was originally recorded, as serious as the loss of vacation pictures or a business-critical database, or even as catastrophic loss of human life in a life-critical system.
Physical integrity deals with challenges which are associated with correctly storing and fetching the data itself. Challenges with physical integrity may include electromechanical faults, design flaws, material fatigue, corrosion, power outages, natural disasters, and other special environmental hazards such as ionizing radiation, extreme temperatures, pressures and g-forces. Ensuring physical integrity includes methods such as redundant hardware, an uninterruptible power supply, certain types of RAID arrays, radiation hardened chips, error-correcting memory, use of a clustered file system, using file systems that employ block level checksums such as ZFS, storage arrays that compute parity calculations such as exclusive or or use a cryptographic hash function, and even having a watchdog timer on critical subsystems.
Physical integrity often makes extensive use of error detecting algorithms known as error-correcting codes. Human-induced data integrity errors are often detected through the use of simpler checks and algorithms, such as the Damm algorithm or Luhn algorithm. These are used to maintain data integrity after manual transcription from one computer system to another by a human intermediary (e.g. credit card or bank routing numbers). Computer-induced transcription errors can be detected through hash functions.
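For illustration, the Luhn check mentioned above fits in a few lines of Python; it catches any single mistyped digit and most transpositions of adjacent digits in a manually copied number.

    def luhn_valid(number: str) -> bool:
        """Return True if the digit string passes the Luhn check."""
        digits = [int(ch) for ch in number if ch.isdigit()]
        total = 0
        for i, d in enumerate(reversed(digits)):
            if i % 2 == 1:                # double every second digit from the right
                d *= 2
                if d > 9:
                    d -= 9                # equivalent to summing the two digits
            total += d
        return total % 10 == 0

    print(luhn_valid("79927398713"))      # True: the classic Luhn test number
    print(luhn_valid("79927398710"))      # False: last digit mistyped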
In production systems, these techniques are used together to ensure various degrees of data integrity. For example, a computer file system may be configured on a fault-tolerant RAID array, but might not provide block-level checksums to detect and prevent silent data corruption. As another example, a database management system might be compliant with the ACID properties, but the RAID controller or hard disk drive's internal write cache might not be.
This type of integrity is concerned with the correctness or rationality of a piece of data, given a particular context. This includes topics such as referential integrity and entity integrity in a relational database or correctly ignoring impossible sensor data in robotic systems. These concerns involve ensuring that the data "makes sense" given its environment. Challenges include software bugs, design flaws, and human errors. Common methods of ensuring logical integrity include things such as check constraints, foreign key constraints, program assertions, and other run-time sanity checks.
Physical and logical integrity often share many challenges such as human errors and design flaws, and both must appropriately deal with concurrent requests to record and retrieve data, the latter of which is entirely a subject on its own.
If a data sector only has a logical error, it can be reused by overwriting it with new data. In case of a physical error, the affected data sector is permanently unusable.
Data integrity contains guidelines for data retention, specifying or guaranteeing the length of time data can be retained in a particular database (typically a relational database). To achieve data integrity, these rules are consistently and routinely applied to all data entering the system, and any relaxation of enforcement could cause errors in the data. Implementing checks on the data as close as possible to the source of input (such as human data entry) causes less erroneous data to enter the system. Strict enforcement of data integrity rules results in lower error rates, and time saved troubleshooting and tracing erroneous data and the errors it causes to algorithms.
Data integrity also includes rules defining the relations a piece of data can have to other pieces of data, such as a Customer record being allowed to link to purchased Products, but not to unrelated data such as Corporate Assets. Data integrity often includes checks and correction for invalid data, based on a fixed schema or a predefined set of rules. An example is textual data entered where a date-time value is required. Rules for data derivation are also applicable, specifying how a data value is derived based on algorithm, contributors and conditions. They also specify the conditions on how the data value could be re-derived.
Data integrity is normally enforced in a database system by a series of integrity constraints or rules. Three types of integrity constraints are an inherent part of the relational data model: entity integrity, referential integrity and domain integrity.
If a database supports these features, it is the responsibility of the database to ensure data integrity as well as the consistency model for the data storage and retrieval. If a database does not support these features, it is the responsibility of the applications to ensure data integrity while the database supports the consistency model for the data storage and retrieval.
Having a single, well-controlled, and well-defined data-integrity system increases:
Modern databases support these features (see Comparison of relational database management systems), and it has become the de facto responsibility of the database to ensure data integrity. Companies, and indeed many database systems, offer products and services to migrate legacy systems to modern databases.
An example of a data-integrity mechanism is the parent-and-child relationship of related records. If a parent record owns one or more related child records, all of the referential integrity processes are handled by the database itself, which automatically ensures the accuracy and integrity of the data, so that no child record can exist without a parent (also called being orphaned) and no parent loses its child records. It also ensures that no parent record can be deleted while the parent record owns any child records. All of this is handled at the database level and does not require coding integrity checks into each application. A minimal sketch of this behaviour, using SQLite's foreign-key enforcement from Python with illustrative table and column names, follows.
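    import sqlite3

    con = sqlite3.connect(":memory:")
    con.execute("PRAGMA foreign_keys = ON")                    # SQLite enforces foreign keys only when asked
    con.execute("CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT)")
    con.execute("""CREATE TABLE purchase (
                       id INTEGER PRIMARY KEY,
                       customer_id INTEGER NOT NULL REFERENCES customer(id))""")
    con.execute("INSERT INTO customer VALUES (1, 'Alice')")
    con.execute("INSERT INTO purchase VALUES (10, 1)")         # child record with a valid parent

    try:
        con.execute("INSERT INTO purchase VALUES (11, 999)")   # would be an orphan: no such parent
    except sqlite3.IntegrityError as e:
        print("rejected:", e)

    try:
        con.execute("DELETE FROM customer WHERE id = 1")       # parent still owns a child record
    except sqlite3.IntegrityError as e:
        print("rejected:", e)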
Various research results show that neither widespread filesystems (including UFS, Ext, XFS, JFS and NTFS) nor hardware RAID solutions provide sufficient protection against data integrity problems.[5][6][7][8][9]
Some filesystems (including Btrfs and ZFS) provide internal data and metadata checksumming that is used for detecting silent data corruption and improving data integrity. If a corruption is detected that way and internal RAID mechanisms provided by those filesystems are also used, such filesystems can additionally reconstruct corrupted data in a transparent way.[10] This approach allows improved data integrity protection covering the entire data paths, which is usually known as end-to-end data protection.[11]
|
https://en.wikipedia.org/wiki/Database_integrity
|
Radiation hardening is the process of making electronic components and circuits resistant to damage or malfunction caused by high levels of ionizing radiation (particle radiation and high-energy electromagnetic radiation),[1] especially for environments in outer space (especially beyond low Earth orbit), around nuclear reactors and particle accelerators, or during nuclear accidents or nuclear warfare.
Most semiconductor electronic components are susceptible to radiation damage, and radiation-hardened (rad-hard) components are based on their non-hardened equivalents, with some design and manufacturing variations that reduce the susceptibility to radiation damage. Due to the low demand and the extensive development and testing required to produce a radiation-tolerant design of a microelectronic chip, the technology of radiation-hardened chips tends to lag behind the most recent developments.[2] They also typically cost more than their commercial counterparts.[2]
Radiation-hardened products are typically tested to one or more resultant-effects tests, including total ionizing dose (TID), enhanced low dose rate effects (ELDRS), neutron and proton displacement damage, and single event effects (SEEs).
Environments with high levels of ionizing radiation create special design challenges. A single charged particle can knock thousands of electrons loose, causing electronic noise and signal spikes. In the case of digital circuits, this can cause results which are inaccurate or unintelligible. This is a particularly serious problem in the design of satellites, spacecraft, future quantum computers,[3][4][5] military aircraft, nuclear power stations, and nuclear weapons. In order to ensure the proper operation of such systems, manufacturers of integrated circuits and sensors intended for the military or aerospace markets employ various methods of radiation hardening. The resulting systems are said to be rad(iation)-hardened, rad-hard, or (within context) hardened.
Typical sources of exposure of electronics to ionizing radiation are the Van Allen radiation belts for satellites, nuclear reactors in power plants for sensors and control circuits, particle accelerators for control electronics (particularly particle detector devices), residual radiation from isotopes in chip packaging materials, cosmic radiation for spacecraft and high-altitude aircraft, and nuclear explosions for potentially all military and civilian electronics.
Secondary particles result from interaction of other kinds of radiation with structures around the electronic devices.
Two fundamental damage mechanisms take place:
Lattice displacement is caused by neutrons, protons, alpha particles, heavy ions, and very high energy gamma photons. They change the arrangement of the atoms in the crystal lattice, creating lasting damage, and increasing the number of recombination centers, depleting the minority carriers and worsening the analog properties of the affected semiconductor junctions. Counterintuitively, higher doses over a short time cause partial annealing ("healing") of the damaged lattice, leading to a lower degree of damage than with the same doses delivered in low intensity over a long time (LDR or low dose rate). This type of problem is particularly significant in bipolar transistors, which are dependent on minority carriers in their base regions; increased losses caused by recombination cause loss of the transistor gain (see neutron effects). Components certified as ELDRS (Enhanced Low Dose Rate Sensitive)-free do not show damage with fluxes below 0.01 rad(Si)/s = 36 rad(Si)/h.
Ionization effects are caused by charged particles, including ones with energy too low to cause lattice effects. The ionization effects are usually transient, creating glitches and soft errors, but can lead to destruction of the device if they trigger other damage mechanisms (e.g., a latchup). Photocurrent caused by ultraviolet and X-ray radiation may belong to this category as well. Gradual accumulation of holes in the oxide layer in MOSFET transistors leads to worsening of their performance, up to device failure when the dose is high enough (see total ionizing dose effects).
The effects can vary wildly depending on all the parameters – type of radiation, total dose and radiation flux, combination of types of radiation, and even the kind of device load (operating frequency, operating voltage, actual state of the transistor during the instant it is struck by the particle) – which makes thorough testing difficult and time-consuming and requires many test samples.
The "end-user" effects can be characterized in several groups:
A neutron interacting with a semiconductor lattice will displace the atoms in the lattice. This leads to an increase in the count of recombination centers and deep-level defects, reducing the lifetime of minority carriers, thus affecting bipolar devices more than CMOS ones. Bipolar devices on silicon tend to show changes in electrical parameters at levels of 10¹⁰ to 10¹¹ neutrons/cm², while CMOS devices aren't affected until 10¹⁵ neutrons/cm². The sensitivity of devices may increase together with increasing level of integration and decreasing size of individual structures. There is also a risk of induced radioactivity caused by neutron activation, which is a major source of noise in high energy astrophysics instruments. Induced radiation, together with residual radiation from impurities in component materials, can cause all sorts of single-event problems during the device's lifetime. GaAs LEDs, common in optocouplers, are very sensitive to neutrons. The lattice damage influences the frequency of crystal oscillators. Kinetic energy effects (namely lattice displacement) of charged particles belong here too.
Total ionizing dose effects represent the cumulative damage of the semiconductor lattice (lattice displacement damage) caused by exposure to ionizing radiation over time. It is measured in rads and causes slow gradual degradation of the device's performance. A total dose greater than 5000 rads delivered to silicon-based devices in a timespan on the order of seconds to minutes will cause long-term degradation. In CMOS devices, the radiation creates electron–hole pairs in the gate insulation layers, which cause photocurrents during their recombination, and the holes trapped in the lattice defects in the insulator create a persistent gate biasing and influence the transistors' threshold voltage, making the N-type MOSFET transistors easier and the P-type ones more difficult to switch on. The accumulated charge can be high enough to keep the transistors permanently open (or closed), leading to device failure. Some self-healing takes place over time, but this effect is not too significant. This effect is the same as hot carrier degradation in high-integration high-speed electronics. Crystal oscillators are somewhat sensitive to radiation doses, which alter their frequency. The sensitivity can be greatly reduced by using swept quartz. Natural quartz crystals are especially sensitive. Radiation performance curves for TID testing may be generated for all resultant effects testing procedures. These curves show performance trends throughout the TID test process and are included in the radiation test report.
Transient dose effects result from a brief high-intensity pulse of radiation, typically occurring during a nuclear explosion. The high radiation flux creates photocurrents in the entire body of the semiconductor, causing transistors to randomly open, changing logical states of flip-flops and memory cells. Permanent damage may occur if the duration of the pulse is too long, or if the pulse causes junction damage or a latchup. Latchups are commonly caused by the X-rays and gamma radiation flash of a nuclear explosion. Crystal oscillators may stop oscillating for the duration of the flash due to prompt photoconductivity induced in quartz.
SGEMP effects are caused by the radiation flash traveling through the equipment and causing local ionization and electric currents in the material of the chips, circuit boards, electrical cables and cases.
Single-event effects (SEE) have been studied extensively since the 1970s.[9] When a high-energy particle travels through a semiconductor, it leaves an ionized track behind. This ionization may cause a highly localized effect similar to the transient dose one - a benign glitch in output, a less benign bit flip in memory or a register or, especially in high-power transistors, a destructive latchup and burnout. Single event effects have importance for electronics in satellites, aircraft, and other civilian and military aerospace applications. Sometimes, in circuits not involving latches, it is helpful to introduce RC time constant circuits that slow down the circuit's reaction time beyond the duration of an SEE.
An SET happens when the charge collected from an ionization event discharges in the form of a spurious signal traveling through the circuit. This is de facto the effect of an electrostatic discharge. It is considered a soft error, and is reversible.
Single-event upsets (SEU) or transient radiation effects in electronics are state changes of memory or register bits caused by a single ion interacting with the chip. They do not cause lasting damage to the device, but may cause lasting problems for a system which cannot recover from such an error. It is otherwise a reversible soft error. In very sensitive devices, a single ion can cause a multiple-bit upset (MBU) in several adjacent memory cells. SEUs can become single-event functional interrupts (SEFI) when they upset control circuits, such as state machines, placing the device into an undefined state, a test mode, or a halt, which would then need a reset or a power cycle to recover.
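One common system-level way to tolerate such upsets, standard in fault-tolerant design though not specific to any technique described above, is triple modular redundancy: three copies of the state are kept and every read takes a majority vote, so a single flipped bit in one copy is masked. A minimal Python sketch:

    def tmr_read(copies):
        """Bitwise majority vote over three redundant copies of a word."""
        a, b, c = copies
        return (a & b) | (a & c) | (b & c)

    stored = [0b10110101] * 3                # three copies of the same byte
    stored[1] ^= 0b00001000                  # a single-event upset flips one bit in one copy
    assert tmr_read(stored) == 0b10110101    # the vote masks the upset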
An SEL can occur in any chip with a parasitic PNPN structure. A heavy ion or a high-energy proton passing through one of the two inner-transistor junctions can turn on the thyristor-like structure, which then stays "shorted" (an effect known as latch-up) until the device is power-cycled. As the effect can happen between the power source and substrate, destructively high current can be involved and the part may fail. This is a hard error, and is irreversible. Bulk CMOS devices are most susceptible.
A single-event snapback is similar to an SEL but does not require the PNPN structure, and can be induced in N-channel MOS transistors switching large currents, when an ion hits near the drain junction and causes avalanche multiplication of the charge carriers. The transistor then opens and stays open, a hard error which is irreversible.
An SEB may occur in power MOSFETs when the substrate right under the source region gets forward-biased and the drain-source voltage is higher than the breakdown voltage of the parasitic structures. The resulting high current and local overheating then may destroy the device. This is a hard error, and is irreversible.
SEGR is observed in power MOSFETs when a heavy ion hits the gate region while a high voltage is applied to the gate. A local breakdown then happens in the insulating layer of silicon dioxide, causing local overheating and destruction (looking like a microscopic explosion) of the gate region. It can occur even in EEPROM cells during write or erase, when the cells are subjected to a comparatively high voltage. This is a hard error, and is irreversible.
While proton beams are widely used for SEE testing due to availability, at lower energies proton irradiation can often underestimate SEE susceptibility. Furthermore, proton beams expose devices to risk of total ionizing dose (TID) failure which can cloud proton testing results or result in premature device failure. White neutron beams—ostensibly the most representative SEE test method—are usually derived from solid target-based sources, resulting in flux non-uniformity and small beam areas. White neutron beams also have some measure of uncertainty in their energy spectrum, often with high thermal neutron content.
The disadvantages of both proton and spallation neutron sources can be avoided by using mono-energetic 14 MeV neutrons for SEE testing. A potential concern is that mono-energetic neutron-induced single event effects will not accurately represent the real-world effects of broad-spectrum atmospheric neutrons. However, recent studies have indicated that, to the contrary, mono-energetic neutrons—particularly 14 MeV neutrons—can be used to quite accurately understand SEE cross-sections in modern microelectronics.[10]
Hardened chips are often manufactured oninsulatingsubstratesinstead of the usualsemiconductorwafers. Silicon on insulator (SOI) and silicon onsapphire(SOS) are commonly used. While normal commercial-grade chips can withstand between 50 and 100gray(5 and 10 krad), space-grade SOI and SOS chips can survive doses between 1000 and 3000gray(100 and 300 krad).[11][12]At one time many4000 serieschips were available in radiation-hardened versions (RadHard).[13]While SOI eliminates latchup events, TID and SEE hardness are not guaranteed to be improved.[14]
Choosing a substrate with wideband gapgives it higher tolerance to deep-level defects; e.g.silicon carbideorgallium nitride.[citation needed]
Use of a special process node provides increased radiation resistance.[15] Due to the high development costs of new radiation-hardened processes, the smallest "true" rad-hard (RHBP, Rad-Hard By Process) process is 150 nm as of 2016; however, rad-hard 65 nm FPGAs were available that used some of the techniques used in "true" rad-hard processes (RHBD, Rad-Hard By Design).[16] As of 2019, 110 nm rad-hard processes are available.[17]
Bipolar integrated circuits generally have higher radiation tolerance than CMOS circuits. The low-power Schottky (LS)5400 seriescan withstand 1000 krad, and manyECL devicescan withstand 10,000 krad.[13]Usingedgeless CMOStransistors, which have an unconventional physical construction, together with an unconventional physical layout, can also be effective.[18]
MagnetoresistiveRAM, orMRAM, is considered a likely candidate to provide radiation hardened, rewritable, non-volatile conductor memory. Physical principles and early tests suggest that MRAM is not susceptible to ionization-induced data loss.[19]
Capacitor-based DRAM is often replaced by more rugged (but larger and more expensive) SRAM. Hardened SRAM cells use more transistors per cell than the usual four or six, which makes the cells more tolerant of SEUs at the cost of higher power consumption and size.[20][16]
Shielding the package against radioactivity reduces the exposure of the bare device and is straightforward to implement.[21]
To protect against neutron radiation and theneutron activationof materials, it is possible to shield the chips themselves by use ofdepleted boron(consisting only of isotope boron-11) in theborophosphosilicate glasspassivation layerprotecting the chips, as naturally prevalent boron-10 readilycaptures neutronsand undergoesalpha decay(seesoft error).
Error correcting code memory(ECC memory) uses redundant bits to check for and possibly correct corrupted data. Since radiation's effects damage the memory content even when the system is not accessing the RAM, a "scrubber" circuit must continuously sweep the RAM; reading out the data, checking the redundant bits for data errors, then writing back any corrections to the RAM.
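The sketch below models that scrubbing loop in miniature: it uses a classic Hamming(7,4) single-error-correcting code in place of the wider SECDED codes that real ECC memory typically uses, and the simulated memory, the injected bit flip, and the sweep are illustrative assumptions rather than a description of any actual scrubber hardware.

```python
# Minimal sketch: Hamming(7,4) single-error correction plus a scrubbing pass
# that rereads every word, corrects a single flipped bit, and writes it back.

def hamming74_encode(nibble: int) -> int:
    """Encode 4 data bits into a 7-bit codeword (bit i = position i+1)."""
    d = [(nibble >> i) & 1 for i in range(4)]            # d1..d4
    p1 = d[0] ^ d[1] ^ d[3]                              # covers positions 1,3,5,7
    p2 = d[0] ^ d[2] ^ d[3]                              # covers positions 2,3,6,7
    p3 = d[1] ^ d[2] ^ d[3]                              # covers positions 4,5,6,7
    bits = [p1, p2, d[0], p3, d[1], d[2], d[3]]          # positions 1..7
    return sum(b << i for i, b in enumerate(bits))

def hamming74_correct(word: int) -> int:
    """Return the codeword with any single-bit error corrected."""
    syndrome = 0
    for pos in range(1, 8):                              # XOR of set-bit positions
        if (word >> (pos - 1)) & 1:
            syndrome ^= pos
    if syndrome:                                         # nonzero syndrome = error position
        word ^= 1 << (syndrome - 1)
    return word

# Simulated memory of encoded words; flip one bit to mimic a radiation upset.
memory = [hamming74_encode(n) for n in range(16)]
memory[5] ^= 1 << 3                                      # single-event upset

# Scrubber: sweep every word, correct it, and write the correction back.
for addr, word in enumerate(memory):
    memory[addr] = hamming74_correct(word)

assert memory == [hamming74_encode(n) for n in range(16)]
```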
Redundantelements can be used at the system level. Three separatemicroprocessorboards may independently compute an answer to a calculation and compare their answers. Any system that produces a minority result will recalculate. Logic may be added such that if repeated errors occur from the same system, that board is shut down.
Redundant elements may be used at the circuit level.[22]A single bit may be replaced with three bits and separate "voting logic" for each bit to continuously determine its result (triple modular redundancy). This increases the area of a chip design by a factor of 5, so it must be reserved for smaller designs. But it has the secondary advantage of also being "fail-safe" in real time. In the event of a single-bit failure (which may be unrelated to radiation), the voting logic will continue to produce the correct result without resorting to a watchdog timer. System-level voting between three separate processor systems will generally need to use some circuit-level voting logic to perform the votes between the three processor systems.
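A toy sketch of the voting idea follows, with bitwise majority voting masking one faulty copy of a computation; the function being computed and the injected upset are made up for illustration.

```python
# Toy triple modular redundancy: three copies of the same computation, with
# bitwise majority voting masking a fault in any single copy.

def majority_vote(a: int, b: int, c: int) -> int:
    # Each output bit takes the value held by at least two of the three copies.
    return (a & b) | (a & c) | (b & c)

def compute(x: int) -> int:
    return (x * 3 + 1) & 0xFF        # stand-in for any combinational logic

x = 42
r1, r2, r3 = compute(x), compute(x), compute(x)
r2 ^= 0b0001_0000                    # simulate a single-event upset in one copy

assert majority_vote(r1, r2, r3) == compute(x)
```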
Hardened latches may be used.[23]
A watchdog timer will perform a hard reset of a system unless some sequence is performed that generally indicates the system is alive, such as a write operation from an onboard processor. During normal operation, software schedules a write to the watchdog timer at regular intervals to prevent the timer from running out. If radiation causes the processor to operate incorrectly, it is unlikely the software will work correctly enough to clear the watchdog timer. The watchdog eventually times out and forces a hard reset of the system. This is considered a last resort relative to other methods of radiation hardening.
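A minimal software model of that behaviour might look like the following; the timeout value and the "reset" action are placeholders, and a real watchdog is an independent hardware timer rather than a thread inside the supervised program.

```python
# Illustrative software model of a watchdog timer: a background countdown that
# forces a "reset" unless the main loop keeps kicking (restarting) it in time.
import threading
import time

class Watchdog:
    def __init__(self, timeout_s: float, on_timeout):
        self.timeout_s = timeout_s
        self.on_timeout = on_timeout
        self._timer = None
        self.kick()

    def kick(self):
        # Restart the countdown; healthy software does this at regular intervals.
        if self._timer:
            self._timer.cancel()
        self._timer = threading.Timer(self.timeout_s, self.on_timeout)
        self._timer.daemon = True
        self._timer.start()

def hard_reset():
    print("watchdog expired: forcing system reset")

wd = Watchdog(timeout_s=1.0, on_timeout=hard_reset)
for _ in range(3):
    time.sleep(0.5)      # normal work, shorter than the timeout
    wd.kick()
time.sleep(2.0)          # simulated hang: no more kicks, so the watchdog fires
```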
Radiation-hardened and radiation tolerant components are often used in military and aerospace applications, including point-of-load (POL) applications, satellite system power supplies, step downswitching regulators,microprocessors,FPGAs,[24]FPGA power sources, and high efficiency, low voltage subsystem power supplies.
However, not all military-grade components are radiation hardened. For example, the USMIL-STD-883features many radiation-related tests, but has no specification for single event latchup frequency. TheFobos-Gruntspace probe may have failed due to a similar assumption.[14]
The market size for radiation hardened electronics used in space applications was estimated to be $2.35 billion in 2021. A new study has estimated that this will reach approximately $4.76 billion by the year 2032.[25][26]
Intelecommunication, the termnuclear hardnesshas the following meanings:
1) an expression of the extent to which the performance of asystem, facility, or device is expected to degrade in a given nuclear environment, 2) the physical attributes of a system orelectronic componentthat will allow survival in an environment that includesnuclear radiationand electromagnetic pulses (EMP).
|
https://en.wikipedia.org/wiki/Radiation_hardening
|
Software rot(bit rot,code rot,software erosion,software decay, orsoftware entropy) is the degradation, deterioration, or loss of the use or performance ofsoftwareover time.
TheJargon File, a compendium of hacker lore, defines "bit rot" as a jocular explanation for the degradation of a softwareprogramover time even if "nothing has changed"; the idea behind this is almost as if the bits that make up the program were subject to radioactive decay.[1]
Several factors are responsible for software rot, including changes to the environment in which the software operates, degradation of compatibility between parts of the software itself, and the emergence ofbugsin unused or rarely used code.
When changes occur in the program's environment, particularly changes which the designer of the program did not anticipate, the software may no longer operate as originally intended. For example, many earlycomputer gamedesigners used theCPUclock speedas atimerin their games.[2]However, newer CPU clocks were faster, so the gameplay speed increased accordingly, making the games less usable over time.
Some changes in the environment are caused not by the program's designer but by its users. Initially, a user could bring the system into working order and have it working flawlessly for a certain amount of time. But when the system stops working correctly, or the users want to access the configuration controls, they cannot repeat that initial step because of the different context and the unavailable information (a lost password, missing instructions, or simply a hard-to-manage user interface that was first configured by trial and error). Information architect Jonas Söderström has named this concept onceability,[3] and defines it as "the quality in a technical system that prevents a user from restoring the system, once it has failed".
Infrequently used portions of code, such as document filters or interfaces designed to be used by other programs, may contain bugs that go unnoticed. With changes in user requirements and other external factors, this code may be executed later, thereby exposing the bugs and making the software appear less functional.
Normal maintenance of software and systems may also cause software rot. In particular, when a program contains multiple parts which function at arm's length from one another, failing to consider how changes to one part affect the others may introduce bugs.
In some cases, this may take the form of libraries that the software uses being changed in a way which adversely affects the software. If the old version of a library that previously worked with the software can no longer be used due to conflicts with other software or security flaws that were found in the old version, there may no longer be a viable version of a needed library for the program to use.
Modern commercial software often connects to an online server for license verification and accessing information. If the online service powering the software is shut down, it may stop working.[4][5]
Since the late 2010s most websites use secureHTTPSconnections. However this requires encryption keys calledroot certificateswhich have expiration dates. After the certificates expire the device loses connectivity to most websites unless the keys are continuously updated.[6]
Another issue is that in March 2021 old encryption standards TLS 1.0 and TLS 1.1 weredeprecated.[7]This means that operating systems, browsers and other online software that do notsupport at least TLS 1.2cannot connect to most websites, even to download patches or update the browser, if these are available. This is occasionally called the "TLS apocalypse".
Products that cannot connect to most websites include PowerMacs, old Unix boxes and Microsoft Windows versions older than Server 2008/Windows 7 (at least without the use of a third-party browser).
The Internet Explorer 8 browser in Server 2008/Windows 7 does support TLS 1.2 but it is disabled by default.[8]
Software rot is usually classified as being either "dormant rot" or "active rot".
Software that is not currently being used gradually becomes unusable as the remainder of the application changes. Changes in user requirements and the software environment also contribute to the deterioration.
Software that is being continuously modified may lose its integrity over time if proper mitigating processes are not consistently applied. However, much software requires continuous changes to meet new requirements and correct bugs, and re-engineering software each time a change is made is rarely practical. This creates what is essentially anevolutionprocess for the program, causing it to depart from the original engineered design. As a consequence of this and a changing environment, assumptions made by the original designers may be invalidated, thereby introducing bugs.
In practice, adding new features may be prioritized over updatingdocumentation; without documentation, however, it is possible for specific knowledge pertaining to parts of the program to be lost. To some extent, this can be mitigated by followingbest current practicesforcoding conventions.
Active software rot slows once an application is near the end of its commercial life and further development ceases. Users often learn to work around any remainingsoftware bugs, and the behaviour of the software becomes consistent as nothing is changing.
Many seminal programs from the early days ofAIresearch have suffered from irreparable software rot. For example, the originalSHRDLUprogram (an early natural language understanding program) cannot be run on any modern-day computer or computer simulator, as it was developed during the days when LISP and PLANNER were still in development stage and thus uses non-standard macros and software libraries which do not exist anymore.
Suppose an administrator creates a forum usingopen sourceforum software, and then heavily modifies it by adding new features and options. This process requires extensive modifications to existing code and deviation from the original functionality of that software.
From here, there are several ways in which software rot can affect such a system.
Suppose a webmaster installs the latest version ofMediaWiki, the software that powers wikis such as Wikipedia, then never applies any updates. Over time, the web host is likely to update their versions of theprogramming language(such asPHP) and thedatabase(such asMariaDB) without consulting the webmaster. After a long enough time, this will eventually break complex websites that have not been updated, because the latest versions of PHP and MariaDB will have breaking changes as they harddeprecatecertainbuilt-in functions, breakingbackwards compatibilityand causingfatal errors. Other problems that can arise with un-updated website software includesecurity vulnerabilitiesandspam.
Refactoring is a means of addressing the problem of software rot. It is described as the process of rewriting existing code to improve its structure without affecting its external behaviour.[9] This includes removing dead code and rewriting sections that have been modified extensively and no longer work efficiently. Care must be taken not to change the software's external behaviour, as this could introduce incompatibilities and thereby itself contribute to software rot. Design principles to consider when refactoring include maintaining the hierarchical structure of the code and implementing abstraction to simplify and generalize code structures.[10]
Software entropy describes a tendency for repairs and modifications to a software system to cause it to gradually lose structure or increase in complexity.[11]Manny Lehmanused the term entropy in 1974 to describe the complexity of a software system, and to draw an analogy to thesecond law of thermodynamics. Lehman'slaws of software evolutionstate that a complex software system will require continuous modifications to maintain its relevance to the environment around it, and that such modifications will increase the system's entropy unless specific work is done to reduce it.[12]
Ivar Jacobsonet al. in 1992 described software entropy similarly, and argued that this increase in disorder as a system is modified would always eventually make a software system uneconomical to maintain, although the time until that happens is greatly dependent on its initial design, and can be extended by refactoring.[13]
In 1999, Andrew Hunt and David Thomas used fixing broken windows as a metaphor for avoiding software entropy in software development.[14]
|
https://en.wikipedia.org/wiki/Software_rot
|
Data Integrity Field(DIF) is an approach to protectdata integrityincomputer data storagefromdata corruption. It was proposed in 2003 by theT10 subcommitteeof theInternational Committee for Information Technology Standards.[1]A similar approach for data integrity was added in 2016 to the NVMe 1.2.1 specification.[2]
Packet-based storage transport protocols haveCRCprotection on command and data payloads. Interconnect buses have parity protection. Memory systems have parity detection/correction schemes. I/O protocol controllers at the transport/interconnect boundaries have internal data path protection.
Data availability in storage systems is frequently measured simply in terms of the reliability of the hardware components and the effects of redundant hardware. But the reliability of the software, its ability to detect errors, and its ability to correctly report or apply corrective actions to a failure have a significant bearing on the overall storage system availability.
The data exchange usually takes place between the host CPU and storage disk. There may be a storage data controller in between these two. The controller could beRAIDcontroller or simple storage switches.
DIF included extending thedisk sectorfrom its traditional 512 bytes, to 520 bytes, by adding eight additional protection bytes.[1]This extended sector is defined forSmall Computer System Interface(SCSI) devices, which is in turn used in many enterprise storage technologies, such asFibre Channel.[3]Oracle Corporationincluded support for DIF in theLinux kernel.[4][5]
An evolution of this technology called T10 Protection Information was introduced in 2011.[6][7]
|
https://en.wikipedia.org/wiki/Data_Integrity_Field
|
Incomputing,data recoveryis a process of retrieving deleted, inaccessible, lost, corrupted, damaged, overwritten or formatted data fromsecondary storage,removable mediaorfiles, when the data stored in them cannot be accessed in a usual way.[1]The data is most often salvaged from storage media such as internal or externalhard disk drives(HDDs),solid-state drives(SSDs),USB flash drives,magnetic tapes,CDs,DVDs,RAIDsubsystems, and otherelectronic devices. Recovery may be required due to physical damage to the storage devices or logical damage to thefile systemthat prevents it from beingmountedby the hostoperating system(OS).[1]
Logical failures occur when the hard drive devices are functional but the user or automated-OS cannot retrieve or access data stored on them. Logical failures can occur due to corruption of the engineering chip, lost partitions, firmware failure, or failures during formatting/re-installation.[2][3]
Data recovery can range from a simple task to a significant technical challenge, which is why there are software companies that specialize in this field.[4]
The most common data recovery scenarios involve an operating system failure, malfunction of a storage device, logical failure of storage devices, accidental damage or deletion, etc. (typically, on a single-drive, single-partition, single-OS system), in which case the ultimate goal is simply to copy all important files from the damaged media to another new drive. This can be accomplished by booting directly from a Live CD, Live DVD, or USB drive instead of the corrupted drive in question. Many Live CDs or DVDs provide a means to mount the system drive and backup drives or removable media, and to move the files from the system drive to the backup media with a file manager or optical disc authoring software. Such cases can often be mitigated by disk partitioning and consistently storing valuable data files (or copies of them) on a different partition from the replaceable OS system files.
Another scenario involves a drive-level failure, such as a compromisedfile systemor drive partition, or ahard disk drive failure. In any of these cases, the data is not easily read from the media devices. Depending on the situation, solutions involve repairing the logical file system, partition table, ormaster boot record, or updating thefirmwareor drive recovery techniques ranging from software-based recovery of corrupted data, to hardware- and software-based recovery of damaged service areas (also known as the hard disk drive's "firmware"), to hardware replacement on a physically damaged drive which allows for the extraction of data to a new drive. If a drive recovery is necessary, the drive itself has typically failed permanently, and the focus is rather on a one-time recovery, salvaging whatever data can be read.
In a third scenario, files have been accidentally "deleted" from a storage medium by the users. Typically, the contents of deleted files are not removed immediately from the physical drive; instead, references to them in the directory structure are removed, and the space they occupy is made available for later overwriting. To end users, deleted files are not discoverable through a standard file manager, but the deleted data still technically exists on the physical drive. In the meantime, the original file contents remain, often in several disconnected fragments, and may be recoverable if not overwritten by other data files.
The term "data recovery" is also used in the context offorensicapplications orespionage, where data which have beenencrypted, hidden, or deleted, rather than damaged, are recovered. Sometimes data present in the computer gets encrypted or hidden due to reasons like virus attacks which can only be recovered by some computer forensic experts.
A wide variety of failures can cause physical damage to storage media, which may result from human errors and natural disasters.CD-ROMscan have their metallic substrate or dye layer scratched off; hard disks can suffer from a multitude of mechanical failures, such ashead crashes, PCB failure, and failed motors;tapescan simply break.
Physical damage to a hard drive, even in cases where a head crash has occurred, does not necessarily mean permanent data loss. However, in extreme cases, such as prolonged exposure tomoistureandcorrosion—like the lostBitcoin hard drive of James Howells, buried in the Newport landfillfor over a decade — recovery is usually impossible. In rare cases, forensic techniques likeMagnetic Force Microscopy(MFM) have been explored to detect residual magnetic traces when data holds exceptional value.[5]Other techniques employed by many professional data recovery companies can typically salvage most, if not all, of the data that had been lost when the failure occurred.
Of course, there are exceptions to this, such as cases where severe damage to the hard driveplattersmay have occurred. However, if the hard drive can be repaired and a full image or clone created, then the logical file structure can be rebuilt in most instances.
Most physical damage cannot be repaired by end users. For example, opening a hard disk drive in a normal environment can allow airborne dust to settle on the platter and become caught between the platter and theread/write head. During normal operation, read/write heads float 3 to 6nanometersabove the platter surface, and the average dust particles found in a normal environment are typically around 30,000nanometers in diameter.[6]When these dust particles get caught between the read/write heads and the platter, they can cause new head crashes that further damage the platter and thus compromise the recovery process. Furthermore, end users generally do not have the hardware or technical expertise required to make these repairs. Consequently, data recovery companies are often employed to salvage important data with the more reputable ones usingclass 100dust- and static-freecleanrooms.[7]
Recovering data from physically damaged hardware can involve multiple techniques. Some damage can be repaired by replacing parts in the hard disk. This alone may make the disk usable, but there may still be logical damage. A specialized disk-imaging procedure is used to recover every readable bit from the surface. Once this image is acquired and saved on a reliable medium, the image can be safely analyzed for logical damage and will possibly allow much of the original file system to be reconstructed.
A common misconception is that a damagedprinted circuit board(PCB) may be simply replaced during recovery procedures by an identical PCB from a healthy drive. While this may work in rare circumstances on hard disk drives manufactured before 2003, it will not work on newer drives. Electronics boards of modern drives usually contain drive-specificadaptation data(generally a map of bad sectors and tuning parameters) and other information required to properly access data on the drive. Replacement boards often need this information to effectively recover all of the data. The replacement board may need to be reprogrammed. Some manufacturers (Seagate, for example) store this information on a serialEEPROMchip, which can be removed and transferred to the replacement board.[8][9]
Each hard disk drive has what is called asystem areaorservice area; this portion of the drive, which is not directly accessible to the end user, usually contains drive's firmware and adaptive data that helps the drive operate within normal parameters.[10]One function of the system area is to log defective sectors within the drive; essentially telling the drive where it can and cannot write data.
The sector lists are also stored on various chips attached to the PCB, and they are unique to each hard disk drive. If the data on the PCB do not match what is stored on the platter, then the drive will not calibrate properly.[11]In most cases the drive heads will click because they are unable to find the data matching what is stored on the PCB.
The term "logical damage" refers to situations in which the error is not a problem in the hardware and requires software-level solutions.
In some cases, data on a hard disk drive can be unreadable due to damage to thepartition tableorfile system, or to (intermittent) media errors. In the majority of these cases, at least a portion of the original data can be recovered by repairing the damaged partition table or file system using specialized data recovery software such asTestDisk; software likeddrescuecan image media despite intermittent errors, and image raw data when there is partition table or file system damage. This type of data recovery can be performed by people without expertise in drive hardware as it requires no special physical equipment or access to platters.
Sometimes data can be recovered using relatively simple methods and tools;[12]more serious cases can require expert intervention, particularly if parts of files are irrecoverable.Data carvingis the recovery of parts of damaged files using knowledge of their structure.
After data has been physically overwritten on a hard disk drive, it is generally assumed that the previous data are no longer possible to recover. In 1996,Peter Gutmann, a computer scientist, presented a paper that suggested overwritten data could be recovered through the use ofmagnetic force microscopy.[13]In 2001, he presented another paper on a similar topic.[14]To guard against this type of data recovery, Gutmann and Colin Plumb designed a method of irreversibly scrubbing data, known as theGutmann methodand used by several disk-scrubbing software packages.
Substantial criticism has followed, primarily dealing with the lack of any concrete examples of significant amounts of overwritten data being recovered.[15] Gutmann's article contains a number of errors and inaccuracies, particularly regarding information about how data is encoded and processed on hard drives.[16] Although Gutmann's theory may be correct, there is no practical evidence that overwritten data can be recovered, and research supports the conclusion that overwritten data cannot be recovered.[specify][17][18][19]
Solid-state drives(SSD) overwrite data differently from hard disk drives (HDD) which makes at least some of their data easier to recover. Most SSDs useflash memoryto store data in pages and blocks, referenced bylogical block addresses(LBA) which are managed by theflash translation layer(FTL). When the FTL modifies a sector it writes the new data to another location and updates the map so the new data appear at the target LBA. This leaves the pre-modification data in place, with possibly many generations, and recoverable by data recovery software.
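A toy model of that mapping behaviour, with an append-only page list standing in for NAND flash, shows why a logical overwrite can leave the older copy physically intact; the class and method names here are invented for illustration.

```python
# Toy flash translation layer (FTL): a logical overwrite programs a new page
# and only updates the LBA map, leaving the pre-modification data in place.

class ToyFTL:
    def __init__(self):
        self.pages = []        # physical pages; NAND is effectively append-only
        self.lba_map = {}      # logical block address -> physical page index

    def write(self, lba: int, data: bytes):
        self.pages.append(data)                # program a fresh page
        self.lba_map[lba] = len(self.pages) - 1

    def read(self, lba: int) -> bytes:
        return self.pages[self.lba_map[lba]]

ftl = ToyFTL()
ftl.write(0, b"old secret")
ftl.write(0, b"new content")                   # logical overwrite of LBA 0
assert ftl.read(0) == b"new content"
assert b"old secret" in ftl.pages              # stale copy still physically present
```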
Sometimes, data present in physical drives (internal or external hard disks, pen drives, etc.) gets lost, deleted, or formatted due to circumstances such as a virus attack, accidental deletion, or accidental use of Shift+Delete. In these cases, data recovery software is used to recover/restore the data files.
Among the logical failures of hard disks, a logical bad sector is the most common fault that renders data unreadable. Sometimes it is possible to sidestep error detection even in software, and perhaps with repeated reading and statistical analysis recover at least some of the underlying stored data. Sometimes prior knowledge of the data stored and the error detection and correction codes can be used to recover even erroneous data. However, if the underlying physical drive is degraded badly enough, at least the hardware surrounding the data must be replaced, or it might even be necessary to apply laboratory techniques to the physical recording medium. Each of these approaches is progressively more expensive, and as such progressively more rarely sought.
Eventually, if the final, physical storage medium has indeed been disturbed badly enough, recovery will not be possible using any means; the information has irreversibly been lost.
Recovery experts do not always need to have physical access to the damaged hardware. When the lost data can be recovered by software techniques, they can often perform the recovery using remote access software over the Internet, LAN or other connection to the physical location of the damaged media. The process is essentially no different from what the end user could perform by themselves.[20]
Remote recovery requires a stable connection with an adequate bandwidth. However, it is not applicable where access to the hardware is required, as in cases of physical damage.
Usually, there are four phases when it comes to successful data recovery, though that can vary depending on the type of data corruption and recovery required.[21]
TheWindowsoperating system can be reinstalled on a computer that is already licensed for it. The reinstallation can be done by downloading the operating system or by using a "restore disk" provided by the computer manufacturer. Eric Lundgren was fined and sentenced to U.S. federal prison in April 2018 for producing 28,000 restore disks and intending to distribute them for about 25 cents each as a convenience to computer repair shops.[22]
Data recovery cannot always be done on a running system. As a result, a boot disk, live CD, live USB, or other type of live distro containing a minimal operating system is often used instead.
|
https://en.wikipedia.org/wiki/List_of_data_recovery_software
|
RAID(/reɪd/;redundant array of inexpensive disksorredundant array of independent disks)[1][2]is a datastorage virtualizationtechnology that combines multiple physicaldata storagecomponents into one or more logical units for the purposes ofdata redundancy, performance improvement, or both. This is in contrast to the previous concept of highly reliable mainframe disk drives known assingle large expensive disk(SLED).[3][1]
Data is distributed across the drives in one of several ways, referred to asRAID levels, depending on the required level ofredundancyand performance. The different schemes, or data distribution layouts, are named by the word "RAID" followed by a number, for example RAID 0 or RAID 1. Each scheme, or RAID level, provides a different balance among the key goals:reliability,availability,performance, andcapacity. RAID levels greater than RAID 0 provide protection against unrecoverablesectorread errors, as well as against failures of whole physical drives.
The term "RAID" was invented byDavid Patterson,Garth Gibson, andRandy Katzat theUniversity of California, Berkeleyin 1987. In their June 1988 paper "A Case for Redundant Arrays of Inexpensive Disks (RAID)", presented at theSIGMODConference, they argued that the top-performingmainframedisk drives of the time could be beaten on performance by an array of the inexpensive drives that had been developed for the growingpersonal computermarket. Although failures would rise in proportion to the number of drives, by configuring for redundancy, the reliability of an array could far exceed that of any large single drive.[4]
Although not yet using that terminology, the technologies of the five levels of RAID named in the June 1988 paper were used in various products prior to the paper's publication,[3]including the following:
Industry manufacturers later redefined the RAID acronym to stand for "redundant array ofindependentdisks".[2][11][12][13]
Many RAID levels employ an error protection scheme called "parity", a widely used method in information technology to providefault tolerancein a given set of data. Most use simpleXOR, but RAID 6 uses two separate parities based respectively on addition and multiplication in a particularGalois fieldorReed–Solomon error correction.[14]
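The simple XOR case can be illustrated with a short sketch: the parity block is the byte-wise XOR of the data blocks, and any single lost block is rebuilt by XORing the surviving blocks with the parity. The block contents are arbitrary examples.

```python
# XOR parity as used by single-parity RAID levels: compute the parity block,
# then reconstruct a missing data block from the survivors plus the parity.

def xor_blocks(blocks):
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

data = [b"AAAA", b"BBBB", b"CCCC"]             # stripes on three data drives
parity = xor_blocks(data)                      # stored on the parity drive

# Drive holding data[1] fails; rebuild its block from the rest and the parity.
rebuilt = xor_blocks([data[0], data[2], parity])
assert rebuilt == data[1]
```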
RAID can also provide data security withsolid-state drives(SSDs) without the expense of an all-SSD system. For example, a fast SSD can be mirrored with a mechanical drive. For this configuration to provide a significant speed advantage, an appropriate controller is needed that uses the fast SSD for all read operations.Adapteccalls this "hybrid RAID".[15]
Originally, there were five standard levels of RAID, but many variations have evolved, including severalnested levelsand manynon-standard levels(mostlyproprietary). RAID levels and their associated data formats are standardized by theStorage Networking Industry Association(SNIA) in the Common RAID Disk Drive Format (DDF) standard:[16][17]
In what was originally termedhybrid RAID,[25]many storage controllers allow RAID levels to be nested. The elements of aRAIDmay be either individual drives or arrays themselves. Arrays are rarely nested more than one level deep.
The final array is known as the top array. When the top array is RAID 0 (such as in RAID 1+0 and RAID 5+0), most vendors omit the "+" (yieldingRAID 10and RAID 50, respectively).
Many configurations other than the basic numbered RAID levels are possible, and many companies, organizations, and groups have created their own non-standard configurations, in many cases designed to meet the specialized needs of a small niche group. Such configurations include the following:
The distribution of data across multiple drives can be managed either by dedicatedcomputer hardwareor bysoftware. A software solution may be part of the operating system, part of the firmware and drivers supplied with a standard drive controller (so-called "hardware-assisted software RAID"), or it may reside entirely within the hardware RAID controller.
Hardware RAID controllers can be configured through cardBIOSorOption ROMbefore anoperating systemis booted, and after the operating system is booted,proprietaryconfiguration utilities are available from the manufacturer of each controller. Unlike thenetwork interface controllersforEthernet, which can usually be configured and serviced entirely through the common operating system paradigms likeifconfiginUnix, without a need for any third-party tools, each manufacturer of each RAID controller usually provides their own proprietary software tooling for each operating system that they deem to support, ensuring avendor lock-in, and contributing to reliability issues.[33]
For example, inFreeBSD, in order to access the configuration ofAdaptecRAID controllers, users are required to enableLinux compatibility layer, and use the Linux tooling from Adaptec,[34]potentially compromising the stability, reliability and security of their setup, especially when taking the long-term view.[33]
Some other operating systems have implemented their own generic frameworks for interfacing with any RAID controller, and provide tools for monitoring RAID volume status, as well as facilitation of drive identification through LED blinking, alarm management andhot spare diskdesignations from within the operating system without having to reboot into card BIOS. For example, this was the approach taken byOpenBSDin 2005 with its bio(4) pseudo-device and thebioctlutility, which provide volume status, and allow LED/alarm/hotspare control, as well as the sensors (including thedrive sensor) for health monitoring;[35]this approach has subsequently been adopted and extended byNetBSDin 2007 as well.[36]
Software RAID implementations are provided by many modernoperating systems. Software RAID can be implemented as:
Some advancedfile systemsare designed to organize data across multiple storage devices directly, without needing the help of a third-party logical volume manager:
Many operating systems provide RAID implementations, including the following:
If a boot drive fails, the system has to be sophisticated enough to be able to boot from the remaining drive or drives. For instance, consider a computer whose disk is configured as RAID 1 (mirrored drives); if the first drive in the array fails, then afirst-stage boot loadermight not be sophisticated enough to attempt loading thesecond-stage boot loaderfrom the second drive as a fallback. The second-stage boot loader for FreeBSD is capable of loading akernelfrom such an array.[64]
Software-implemented RAID is not always compatible with the system's boot process, and it is generally impractical for desktop versions of Windows. However, hardware RAID controllers are expensive and proprietary. To fill this gap, inexpensive "RAID controllers" were introduced that do not contain a dedicated RAID controller chip, but simply a standard drive controller chip, or the chipset built-in RAID function, with proprietary firmware and drivers. During early bootup, the RAID is implemented by the firmware and, once the operating system has been more completely loaded, the drivers take over control. Consequently, such controllers may not work when driver support is not available for the host operating system.[65]An example isIntel Rapid Storage Technology, implemented on many consumer-level motherboards.[66][67]
Because some minimal hardware support is involved, this implementation is also called "hardware-assisted software RAID",[68][69][70]"hybrid model" RAID,[70]or even "fake RAID".[71]If RAID 5 is supported, the hardware may provide a hardware XOR accelerator. An advantage of this model over the pure software RAID is that—if using a redundancy mode—the boot drive is protected from failure (due to the firmware) during the boot process even before the operating system's drivers take over.[70]
Data scrubbing(referred to in some environments aspatrol read) involves periodic reading and checking by the RAID controller of all the blocks in an array, including those not otherwise accessed. This detects bad blocks before use.[72]Data scrubbing checks for bad blocks on each storage device in an array, but also uses the redundancy of the array to recover bad blocks on a single drive and to reassign the recovered data to spare blocks elsewhere on the drive.[73]
Frequently, a RAID controller is configured to "drop" a component drive (that is, to assume a component drive has failed) if the drive has been unresponsive for eight seconds or so; this might cause the array controller to drop a good drive because that drive has not been given enough time to complete its internal error recovery procedure. Consequently, using consumer-marketed drives with RAID can be risky, and so-called "enterprise class" drives limit this error recovery time to reduce risk.[citation needed]Western Digital's desktop drives used to have a specific fix. A utility called WDTLER.exe limited a drive's error recovery time. The utility enabledTLER (time limited error recovery), which limits the error recovery time to seven seconds. Around September 2009, Western Digital disabled this feature in their desktop drives (such as the Caviar Black line), making such drives unsuitable for use in RAID configurations.[74]However, Western Digital enterprise class drives are shipped from the factory with TLER enabled. Similar technologies are used by Seagate, Samsung, and Hitachi. For non-RAID usage, an enterprise class drive with a short error recovery timeout that cannot be changed is therefore less suitable than a desktop drive.[74]In late 2010, theSmartmontoolsprogram began supporting the configuration of ATA Error Recovery Control, allowing the tool to configure many desktop class hard drives for use in RAID setups.[74]
While RAID may protect against physical drive failure, the data is still exposed to operator, software, hardware, and virus destruction. Many studies cite operator fault as a common source of malfunction,[75][76]such as a server operator replacing the incorrect drive in a faulty RAID, and disabling the system (even temporarily) in the process.[77]
An array can be overwhelmed by a catastrophic failure that exceeds its recovery capacity, and the entire array is at risk of physical damage by fire, natural disaster, and human forces; backups, however, can be stored off site. An array is also vulnerable to controller failure, because it is not always possible to migrate it to a new, different controller without data loss.[78]
In practice, the drives are often the same age (with similar wear) and subject to the same environment. Since many drive failures are due to mechanical issues (which are more likely on older drives), this violates the assumptions of independent, identical rate of failure amongst drives; failures are in fact statistically correlated.[11]In practice, the chances for a second failure before the first has been recovered (causing data loss) are higher than the chances for random failures. In a study of about 100,000 drives, the probability of two drives in the same cluster failing within one hour was four times larger than predicted by theexponential statistical distribution—which characterizes processes in which events occur continuously and independently at a constant average rate. The probability of two failures in the same 10-hour period was twice as large as predicted by an exponential distribution.[79]
Unrecoverable read errors (URE) present as sector read failures, also known as latent sector errors (LSE). The associated media assessment measure, unrecoverable bit error (UBE) rate, is typically guaranteed to be less than one bit in 10¹⁵[disputed–discuss] for enterprise-class drives (SCSI, FC, SAS or SATA), and less than one bit in 10¹⁴[disputed–discuss] for desktop-class drives (IDE/ATA/PATA or SATA). Increasing drive capacities and large RAID 5 instances have led to the maximum error rates being insufficient to guarantee a successful recovery, due to the high likelihood of such an error occurring on one or more remaining drives during a RAID set rebuild.[11][obsolete source][80] When rebuilding, parity-based schemes such as RAID 5 are particularly prone to the effects of UREs as they affect not only the sector where they occur, but also reconstructed blocks using that sector for parity computation.[81]
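A back-of-the-envelope calculation, treating the desktop-class error rate quoted above as an independent per-bit probability and assuming arbitrary drive sizes, illustrates why large RAID 5 rebuilds are exposed to UREs.

```python
# Illustrative (assumed figures): probability of hitting at least one URE while
# reading every remaining drive during a RAID 5 rebuild, with independent errors.
import math

ure_rate = 1e-14            # errors per bit read, typical desktop-class spec
drive_bytes = 4e12          # 4 TB drives (assumed)
surviving_drives = 3        # e.g. a four-drive RAID 5 after one failure

bits_read = surviving_drives * drive_bytes * 8
p_no_ure = math.exp(-bits_read * ure_rate)     # Poisson approximation
print(f"P(at least one URE during rebuild) ~ {1 - p_no_ure:.0%}")
# ~62% for these numbers, which is why rebuilding large RAID 5 sets is risky.
```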
Double-protection parity-based schemes, such as RAID 6, attempt to address this issue by providing redundancy that allows double-drive failures; as a downside, such schemes suffer from elevated write penalty—the number of times the storage medium must be accessed during a single write operation.[82]Schemes that duplicate (mirror) data in a drive-to-drive manner, such as RAID 1 and RAID 10, have a lower risk from UREs than those using parity computation or mirroring between striped sets.[24][83]Data scrubbing, as a background process, can be used to detect and recover from UREs, effectively reducing the risk of them happening during RAID rebuilds and causing double-drive failures. The recovery of UREs involves remapping of affected underlying disk sectors, utilizing the drive's sector remapping pool; in case of UREs detected during background scrubbing, data redundancy provided by a fully operational RAID set allows the missing data to be reconstructed and rewritten to a remapped sector.[84][85]
Drive capacity has grown at a much faster rate than transfer speed, and error rates have only fallen a little in comparison. Therefore, larger-capacity drives may take hours if not days to rebuild, during which time other drives may fail or yet undetected read errors may surface. The rebuild time is also limited if the entire array is still in operation at reduced capacity.[86]Given an array with only one redundant drive (which applies to RAID levels 3, 4 and 5, and to "classic" two-drive RAID 1), a second drive failure would cause complete failure of the array. Even though individual drives'mean time between failure(MTBF) have increased over time, this increase has not kept pace with the increased storage capacity of the drives. The time to rebuild the array after a single drive failure, as well as the chance of a second failure during a rebuild, have increased over time.[22]
Some commentators have declared that RAID 6 is only a "band aid" in this respect, because it only kicks the problem a little further down the road.[22]However, according to the 2006NetAppstudy of Berriman et al., the chance of failure decreases by a factor of about 3,800 (relative to RAID 5) for a proper implementation of RAID 6, even when using commodity drives.[87][citation not found]Nevertheless, if the currently observed technology trends remain unchanged, in 2019 a RAID 6 array will have the same chance of failure as its RAID 5 counterpart had in 2010.[87][unreliable source?]
Mirroring schemes such as RAID 10 have a bounded recovery time as they require the copy of a single failed drive, compared with parity schemes such as RAID 6, which require the copy of all blocks of the drives in an array set. Triple parity schemes, or triple mirroring, have been suggested as one approach to improve resilience to an additional drive failure during this large rebuild time.[87][unreliable source?]
A system crash or other interruption of a write operation can result in states where the parity is inconsistent with the data due to non-atomicity of the write process, such that the parity cannot be used for recovery in the case of a disk failure. This is commonly termed thewrite holewhich is a known data corruption issue in older and low-end RAIDs, caused by interrupted destaging of writes to disk.[88]The write hole can be addressed in a few ways:
The write hole is a little-understood and rarely mentioned failure mode for redundant storage systems that do not utilize transactional features. Database researcher Jim Gray wrote "Update in Place is a Poison Apple" during the early days of relational database commercialization.[96]
There are concerns about write-cache reliability, specifically regarding devices equipped with awrite-back cache, which is a caching system that reports the data as written as soon as it is written to cache, as opposed to when it is written to the non-volatile medium. If the system experiences a power loss or other major failure, the data may be irrevocably lost from the cache before reaching the non-volatile storage. For this reason good write-back cache implementations include mechanisms, such as redundant battery power, to preserve cache contents across system failures (including power failures) and to flush the cache at system restart time.[97]
|
https://en.wikipedia.org/wiki/RAID
|
Ininformation theoryandcoding theory,Reed–Solomon codesare a group oferror-correcting codesthat were introduced byIrving S. ReedandGustave Solomonin 1960.[1]They have many applications, including consumer technologies such asMiniDiscs,CDs,DVDs,Blu-raydiscs,QR codes,Data Matrix,data transmissiontechnologies such asDSLandWiMAX,broadcastsystems such as satellite communications,DVBandATSC, and storage systems such asRAID 6.
Reed–Solomon codes operate on a block of data treated as a set of finite-field elements called symbols. Reed–Solomon codes are able to detect and correct multiple symbol errors. By adding t = n − k check symbols to the data, a Reed–Solomon code can detect (but not correct) any combination of up to t erroneous symbols, or locate and correct up to ⌊t/2⌋ erroneous symbols at unknown locations. As an erasure code, it can correct up to t erasures at locations that are known and provided to the algorithm, or it can detect and correct combinations of errors and erasures. Reed–Solomon codes are also suitable as multiple-burst bit-error correcting codes, since a sequence of b + 1 consecutive bit errors can affect at most two symbols of size b. The choice of t is up to the designer of the code and may be selected within wide limits.
There are two basic types of Reed–Solomon codes – original view andBCHview – with BCH view being the most common, as BCH view decoders are faster and require less working storage than original view decoders.
Reed–Solomon codes were developed in 1960 byIrving S. ReedandGustave Solomon, who were then staff members ofMIT Lincoln Laboratory. Their seminal article was titled "Polynomial Codes over Certain Finite Fields".[1]The original encoding scheme described in the Reed and Solomon article used a variable polynomial based on the message to be encoded where only a fixed set of values (evaluation points) to be encoded are known to encoder and decoder. The original theoretical decoder generated potential polynomials based on subsets ofk(unencoded message length) out ofn(encoded message length) values of a received message, choosing the most popular polynomial as the correct one, which was impractical for all but the simplest of cases. This was initially resolved by changing the original scheme to aBCH-code-like scheme based on a fixed polynomial known to both encoder and decoder, but later, practical decoders based on the original scheme were developed, although slower than the BCH schemes. The result of this is that there are two main types of Reed–Solomon codes: ones that use the original encoding scheme and ones that use the BCH encoding scheme.
Also in 1960, a practical fixed polynomial decoder forBCH codesdeveloped byDaniel Gorensteinand Neal Zierler was described in anMIT Lincoln Laboratoryreport by Zierler in January 1960 and later in an article in June 1961.[2]The Gorenstein–Zierler decoder and the related work on BCH codes are described in a book "Error-Correcting Codes" byW. Wesley Peterson(1961).[3][page needed]By 1963 (or possibly earlier), J.J. Stone (and others)[who?]recognized that Reed–Solomon codes could use the BCH scheme of using a fixed generator polynomial, making such codes a special class of BCH codes,[4]but Reed–Solomon codes based on the original encoding scheme are not a class of BCH codes, and depending on the set of evaluation points, they are not evencyclic codes.
In 1969, an improved BCH scheme decoder was developed byElwyn BerlekampandJames Masseyand has since been known as theBerlekamp–Massey decoding algorithm.
In 1975, another improved BCH scheme decoder was developed by Yasuo Sugiyama, based on theextended Euclidean algorithm.[5]
In 1977, Reed–Solomon codes were implemented in theVoyager programin the form ofconcatenated error correction codes. The first commercial application in mass-produced consumer products appeared in 1982 with thecompact disc, where twointerleavedReed–Solomon codes are used. Today, Reed–Solomon codes are widely implemented indigital storagedevices anddigital communicationstandards, though they are being slowly replaced byBose–Chaudhuri–Hocquenghem (BCH) codes. For example, Reed–Solomon codes are used in theDigital Video Broadcasting(DVB) standardDVB-S, in conjunction with aconvolutionalinner code, but BCH codes are used withLDPCin its successor,DVB-S2.
In 1986, an original scheme decoder known as theBerlekamp–Welch algorithmwas developed.
In 1996, variations of original scheme decoders called list decoders or soft decoders were developed by Madhu Sudan and others, and work continues on these types of decoders (seeGuruswami–Sudan list decoding algorithm).
In 2002, another original scheme decoder was developed by Shuhong Gao, based on theextended Euclidean algorithm.[6]
Reed–Solomon coding is very widely used in mass storage systems to correct
the burst errors associated with media defects.
Reed–Solomon coding is a key component of thecompact disc. It was the first use of strong error correction coding in a mass-produced consumer product, andDATandDVDuse similar schemes. In the CD, two layers of Reed–Solomon coding separated by a 28-wayconvolutionalinterleaveryields a scheme called Cross-Interleaved Reed–Solomon Coding (CIRC). The first element of a CIRC decoder is a relatively weak inner (32,28) Reed–Solomon code, shortened from a (255,251) code with 8-bit symbols. This code can correct up to 2 byte errors per 32-byte block. More importantly, it flags as erasures any uncorrectable blocks, i.e., blocks with more than 2 byte errors. The decoded 28-byte blocks, with erasure indications, are then spread by the deinterleaver to different blocks of the (28,24) outer code. Thanks to the deinterleaving, an erased 28-byte block from the inner code becomes a single erased byte in each of 28 outer code blocks. The outer code easily corrects this, since it can handle up to 4 such erasures per block.
The result is a CIRC that can completely correct error bursts up to 4000 bits, or about 2.5 mm on the disc surface. This code is so strong that most CD playback errors are almost certainly caused by tracking errors that cause the laser to jump track, not by uncorrectable error bursts.[7]
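The effect of the interleaving can be seen in a deliberately simplified sketch: toy block sizes and a plain block interleaver stand in for the CD's 28-way convolutional interleaver, but the key point survives, namely that a burst erasing an entire inner block becomes a single erased byte in each outer block.

```python
# Simplified interleaving demo: a burst that wipes one whole inner block leaves
# only one erased byte per outer block after deinterleaving, easy to correct.

inner_blocks = [[f"b{i}_{j}" for j in range(8)] for i in range(8)]

# Deinterleave: byte i of inner block j lands in outer block i at position j.
outer_blocks = [[inner_blocks[j][i] for j in range(8)] for i in range(8)]

# A burst error erases inner block 3 in its entirety.
erased = set(inner_blocks[3])

# Each outer block has lost exactly one byte, well within its erasure budget.
losses = [sum(1 for byte in block if byte in erased) for block in outer_blocks]
assert losses == [1] * 8
```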
DVDs use a similar scheme, but with much larger blocks, a (208,192) inner code, and a (182,172) outer code.
Reed–Solomon error correction is also used inparchivefiles which are commonly posted accompanying multimedia files onUSENET. The distributed online storage serviceWuala(discontinued in 2015) also used Reed–Solomon when breaking up files.
Almost all two-dimensional bar codes such asPDF-417,MaxiCode,Datamatrix,QR Code,Aztec CodeandHan Xin codeuse Reed–Solomon error correction to allow correct reading even if a portion of the bar code is damaged. When the bar code scanner cannot recognize a bar code symbol, it will treat it as an erasure.
Reed–Solomon coding is less common in one-dimensional bar codes, but is used by thePostBarsymbology.
Specialized forms of Reed–Solomon codes, specifically Cauchy-RS and Vandermonde-RS, can be used to overcome the unreliable nature of data transmission over erasure channels. The encoding process assumes a code of RS(N, K) which results in N codewords of length N symbols, each storing K symbols of data, being generated, that are then sent over an erasure channel.
Any combination of K codewords received at the other end is enough to reconstruct all of the N codewords. The code rate is generally set to 1/2 unless the channel's erasure likelihood can be adequately modelled and is seen to be less. In conclusion, N is usually 2K, meaning that at least half of all the codewords sent must be received in order to reconstruct all of the codewords sent.
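A minimal sketch of that recovery property, using the original "polynomial evaluation" view over a small prime field rather than the binary extension fields used in practice (the field size, code parameters, and message values are arbitrary illustrations): any k surviving symbols determine the message polynomial by Lagrange interpolation.

```python
# Original-view Reed–Solomon erasure sketch over GF(p), p prime: encode k
# message symbols as values of a degree < k polynomial at n points; any k
# surviving (point, value) pairs recover the message by interpolation.

p = 257                                  # prime, so integers mod p form a field
k, n = 3, 6
message = [42, 7, 199]                   # coefficients m0 + m1*x + m2*x^2

def evaluate(coeffs, x):
    return sum(c * pow(x, i, p) for i, c in enumerate(coeffs)) % p

codeword = [(x, evaluate(message, x)) for x in range(n)]

def poly_mul_linear(poly, xj):
    # Multiply poly(x) by (x - xj); coefficients stored lowest degree first.
    out = [0] * (len(poly) + 1)
    for t, c in enumerate(poly):
        out[t] = (out[t] - xj * c) % p
        out[t + 1] = (out[t + 1] + c) % p
    return out

def interpolate(points):
    # Lagrange interpolation mod p, returning the polynomial's coefficients.
    total = [0] * len(points)
    for i, (xi, yi) in enumerate(points):
        basis, denom = [1], 1
        for j, (xj, _) in enumerate(points):
            if j != i:
                basis = poly_mul_linear(basis, xj)
                denom = denom * (xi - xj) % p
        scale = yi * pow(denom, -1, p) % p
        total = [(t + scale * b) % p for t, b in zip(total, basis)]
    return total

# Erasure channel: only 3 of the 6 transmitted symbols survive.
survivors = [codeword[1], codeword[4], codeword[5]]
assert interpolate(survivors) == message
```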
Reed–Solomon codes are also used inxDSLsystems andCCSDS'sSpace Communications Protocol Specificationsas a form offorward error correction.
One significant application of Reed–Solomon coding was to encode the digital pictures sent back by theVoyager program.
Voyager introduced Reed–Solomon codingconcatenatedwithconvolutional codes, a practice that has since become very widespread in deep space and satellite (e.g., direct digital broadcasting) communications.
Viterbi decoderstend to produce errors in short bursts. Correcting these burst errors is a job best done by short or simplified Reed–Solomon codes.
Modern versions of concatenated Reed–Solomon/Viterbi-decoded convolutional coding were and are used on theMars Pathfinder,Galileo,Mars Exploration RoverandCassinimissions, where they perform within about 1–1.5dBof the ultimate limit, theShannon capacity.
These concatenated codes are now being replaced by more powerfulturbo codes:
The Reed–Solomon code is actually a family of codes, where every code is characterised by three parameters: an alphabet size $q$, a block length $n$, and a message length $k$, with $k < n \leq q$. The set of alphabet symbols is interpreted as the finite field $F$ of order $q$, and thus $q$ must be a prime power. In the most useful parameterizations of the Reed–Solomon code, the block length is usually some constant multiple of the message length, that is, the rate $R = \frac{k}{n}$ is some constant, and furthermore, the block length is either equal to the alphabet size or one less than it, i.e., $n = q$ or $n = q - 1$.[citation needed]
There are different encoding procedures for the Reed–Solomon code, and thus, there are different ways to describe the set of all codewords.
In the original view of Reed and Solomon, every codeword of the Reed–Solomon code is a sequence of function values of a polynomial of degree less than $k$.[1] In order to obtain a codeword of the Reed–Solomon code, the message symbols (each within the $q$-sized alphabet) are treated as the coefficients of a polynomial $p$ of degree less than $k$, over the finite field $F$ with $q$ elements.

In turn, the polynomial $p$ is evaluated at $n \leq q$ distinct points $a_1, \dots, a_n$ of the field $F$, and the sequence of values is the corresponding codeword. Common choices for a set of evaluation points include $\{0, 1, 2, \dots, n-1\}$, $\{0, 1, \alpha, \alpha^2, \dots, \alpha^{n-2}\}$, or, for $n < q$, $\{1, \alpha, \alpha^2, \dots, \alpha^{n-1}\}$, where $\alpha$ is a primitive element of $F$.

Formally, the set $\mathbf{C}$ of codewords of the Reed–Solomon code is defined as follows:
$$\mathbf{C} = \Bigl\{ \bigl(p(a_1), p(a_2), \dots, p(a_n)\bigr) \;\Big|\; p \text{ is a polynomial over } F \text{ of degree } < k \Bigr\}.$$
Since any two distinct polynomials of degree less than $k$ agree in at most $k - 1$ points, any two codewords of the Reed–Solomon code disagree in at least $n - (k-1) = n - k + 1$ positions. Furthermore, there are two polynomials that do agree in $k - 1$ points but are not equal, and thus the distance of the Reed–Solomon code is exactly $d = n - k + 1$. The relative distance is then $\delta = d/n = 1 - k/n + 1/n = 1 - R + 1/n \sim 1 - R$, where $R = k/n$ is the rate. This trade-off between the relative distance and the rate is asymptotically optimal since, by the Singleton bound, every code satisfies $\delta + R \leq 1 + 1/n$.
Being a code that achieves this optimal trade-off, the Reed–Solomon code belongs to the class ofmaximum distance separable codes.
While the number of different polynomials of degree less thankand the number of different messages are both equal toqk{\displaystyle q^{k}}, and thus every message can be uniquely mapped to such a polynomial, there are different ways of doing this encoding. The original construction of Reed & Solomon interprets the messagexas thecoefficientsof the polynomialp, whereas subsequent constructions interpret the message as thevaluesof the polynomial at the firstkpointsa1,…,ak{\displaystyle a_{1},\dots ,a_{k}}and obtain the polynomialpby interpolating these values with a polynomial of degree less thank. The latter encoding procedure, while being slightly less efficient, has the advantage that it gives rise to asystematic code, that is, the original message is always contained as a subsequence of the codeword.[1]
In the original construction of Reed and Solomon, the messagem=(m0,…,mk−1)∈Fk{\displaystyle m=(m_{0},\dots ,m_{k-1})\in F^{k}}is mapped to the polynomialpm{\displaystyle p_{m}}withpm(a)=∑i=0k−1miai.{\displaystyle p_{m}(a)=\sum _{i=0}^{k-1}m_{i}a^{i}\,.}The codeword ofm{\displaystyle m}is obtained by evaluatingpm{\displaystyle p_{m}}atn{\displaystyle n}different pointsa0,…,an−1{\displaystyle a_{0},\dots ,a_{n-1}}of the fieldF{\displaystyle F}.[1]Thus the classical encoding functionC:Fk→Fn{\displaystyle C:F^{k}\to F^{n}}for the Reed–Solomon code is defined as follows:C(m)=[pm(a0)pm(a1)⋯pm(an−1)]{\displaystyle C(m)={\begin{bmatrix}p_{m}(a_{0})\\p_{m}(a_{1})\\\cdots \\p_{m}(a_{n-1})\end{bmatrix}}}This functionC{\displaystyle C}is alinear mapping, that is, it satisfiesC(m)=Am{\displaystyle C(m)=Am}for the followingn×k{\displaystyle n\times k}-matrixA{\displaystyle A}with elements fromF{\displaystyle F}:C(m)=Am=[1a0a02…a0k−11a1a12…a1k−1⋮⋮⋮⋱⋮1an−1an−12…an−1k−1][m0m1⋮mk−1]{\displaystyle C(m)=Am={\begin{bmatrix}1&a_{0}&a_{0}^{2}&\dots &a_{0}^{k-1}\\1&a_{1}&a_{1}^{2}&\dots &a_{1}^{k-1}\\\vdots &\vdots &\vdots &\ddots &\vdots \\1&a_{n-1}&a_{n-1}^{2}&\dots &a_{n-1}^{k-1}\end{bmatrix}}{\begin{bmatrix}m_{0}\\m_{1}\\\vdots \\m_{k-1}\end{bmatrix}}}
This matrix is aVandermonde matrixoverF{\displaystyle F}. In other words, the Reed–Solomon code is alinear code, and in the classical encoding procedure, itsgenerator matrixisA{\displaystyle A}.
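As a concrete illustration of this linear map, the following Python sketch builds the Vandermonde generator matrix over a toy prime field and checks that the matrix–vector product agrees with direct polynomial evaluation. The field GF(929), the message p(x) = 3x2 + 2x + 1 and the evaluation points 0, ..., 6 are the ones that appear in the Berlekamp–Welch example later in the article.

```python
# Classical (original-view) encoding as a Vandermonde matrix-vector product over GF(929).
P = 929
k, n = 3, 7
points = list(range(n))                      # evaluation points a_0..a_{n-1} = 0..6
m = [1, 2, 3]                                # message = coefficients of p(x) = 3x^2 + 2x + 1

# Vandermonde generator matrix A with rows (1, a_i, a_i^2, ..., a_i^{k-1})
A = [[pow(a, j, P) for j in range(k)] for a in points]

codeword = [sum(A[i][j] * m[j] for j in range(k)) % P for i in range(n)]
print(codeword)                              # [1, 6, 17, 34, 57, 86, 121]

# The same codeword is obtained by evaluating p(x) directly at each point.
def evaluate(coeffs, x):
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % P
    return acc

assert codeword == [evaluate(m, a) for a in points]
```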
There are alternative encoding procedures that produce asystematicReed–Solomon code. One method usesLagrange interpolationto compute polynomialpm{\displaystyle p_{m}}such thatpm(ai)=mifor alli∈{0,…,k−1}.{\displaystyle p_{m}(a_{i})=m_{i}{\text{ for all }}i\in \{0,\dots ,k-1\}.}Thenpm{\displaystyle p_{m}}is evaluated at the other pointsak,…,an−1{\displaystyle a_{k},\dots ,a_{n-1}}.
C(m)=[pm(a0)pm(a1)⋯pm(an−1)]{\displaystyle C(m)={\begin{bmatrix}p_{m}(a_{0})\\p_{m}(a_{1})\\\cdots \\p_{m}(a_{n-1})\end{bmatrix}}}
This function C{\displaystyle C} is a linear mapping. To generate the corresponding systematic encoding matrix G, take the transpose of the Vandermonde matrix A and multiply it on the left by the inverse of its left square (k × k) submatrix:
{\displaystyle G=((A^{\mathsf {T}}){\text{'s left square submatrix}})^{-1}\cdot A^{\mathsf {T}}={\begin{bmatrix}1&0&0&\dots &0&g_{1,k+1}&\dots &g_{1,n}\\0&1&0&\dots &0&g_{2,k+1}&\dots &g_{2,n}\\0&0&1&\dots &0&g_{3,k+1}&\dots &g_{3,n}\\\vdots &\vdots &\vdots &&\vdots &\vdots &&\vdots \\0&\dots &0&\dots &1&g_{k,k+1}&\dots &g_{k,n}\end{bmatrix}}}
Treating the message as a row vector, the codeword is C(m)=mG{\displaystyle C(m)=mG} for the following k×n{\displaystyle k\times n} matrix G{\displaystyle G} with elements from F{\displaystyle F}:{\displaystyle C(m)=mG={\begin{bmatrix}m_{0}&m_{1}&\ldots &m_{k-1}\end{bmatrix}}{\begin{bmatrix}1&0&0&\dots &0&g_{1,k+1}&\dots &g_{1,n}\\0&1&0&\dots &0&g_{2,k+1}&\dots &g_{2,n}\\0&0&1&\dots &0&g_{3,k+1}&\dots &g_{3,n}\\\vdots &\vdots &\vdots &&\vdots &\vdots &&\vdots \\0&\dots &0&\dots &1&g_{k,k+1}&\dots &g_{k,n}\end{bmatrix}}}
A discrete Fourier transform is essentially the same as the encoding procedure; it evaluates the message polynomial pm{\displaystyle p_{m}} at the set of evaluation points to produce the codeword values, as shown above:{\displaystyle C(m)={\begin{bmatrix}p_{m}(a_{0})\\p_{m}(a_{1})\\\cdots \\p_{m}(a_{n-1})\end{bmatrix}}}
The inverse Fourier transform could be used to convert an error-free set of n < q codeword values back into the encoding polynomial of k coefficients, with the constraint that in order for this to work, the set of evaluation points used to encode the message must be a set of increasing powers of α:{\displaystyle a_{i}=\alpha ^{i}}{\displaystyle a_{0},\dots ,a_{n-1}=\{1,\alpha ,\alpha ^{2},\dots ,\alpha ^{n-1}\}}
However, Lagrange interpolation performs the same conversion without the constraint on the set of evaluation points or the requirement of an error free set of message values and is used for systematic encoding, and in one of the steps of theGao decoder.
In this view, the message is interpreted as the coefficients of a polynomialp(x){\displaystyle p(x)}. The sender computes a related polynomials(x){\displaystyle s(x)}of degreen−1{\displaystyle n-1}wheren≤q−1{\displaystyle n\leq q-1}and sends the polynomials(x){\displaystyle s(x)}. The polynomials(x){\displaystyle s(x)}is constructed by multiplying the message polynomialp(x){\displaystyle p(x)}, which has degreek−1{\displaystyle k-1}, with agenerator polynomialg(x){\displaystyle g(x)}of degreen−k{\displaystyle n-k}that is known to both the sender and the receiver. The generator polynomialg(x){\displaystyle g(x)}is defined as the polynomial whose roots are sequential powers of the Galois field primitiveα{\displaystyle \alpha }g(x)=(x−αi)(x−αi+1)⋯(x−αi+n−k−1)=g0+g1x+⋯+gn−k−1xn−k−1+xn−k{\displaystyle g(x)=\left(x-\alpha ^{i}\right)\left(x-\alpha ^{i+1}\right)\cdots \left(x-\alpha ^{i+n-k-1}\right)=g_{0}+g_{1}x+\cdots +g_{n-k-1}x^{n-k-1}+x^{n-k}}
For a "narrow sense code",i=1{\displaystyle i=1}.
C={(s1,s2,…,sn)|s(a)=∑i=1nsiaiis a polynomial that has at least the rootsα1,α2,…,αn−k}.{\displaystyle \mathbf {C} =\left\{\left(s_{1},s_{2},\dots ,s_{n}\right)\;{\Big |}\;s(a)=\sum _{i=1}^{n}s_{i}a^{i}{\text{ is a polynomial that has at least the roots }}\alpha ^{1},\alpha ^{2},\dots ,\alpha ^{n-k}\right\}.}
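The generator polynomial can be computed by multiplying out its linear factors. The short Python sketch below does this for the parameters of the worked example later in the article (GF(929), α = 3, n − k = 4, narrow-sense i = 1) and reproduces the generator polynomial quoted there.

```python
# Generator polynomial for the article's GF(929), alpha = 3, n - k = 4 example.
P, ALPHA, NK = 929, 3, 4          # prime field, primitive element, number of check symbols

def poly_mul(a, b):               # polynomial product mod P, coefficients lowest order first
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % P
    return out

g = [1]
for j in range(1, NK + 1):        # roots alpha^1, ..., alpha^{n-k}
    g = poly_mul(g, [(-pow(ALPHA, j, P)) % P, 1])

print(g)   # [522, 568, 723, 809, 1]  ->  g(x) = x^4 + 809x^3 + 723x^2 + 568x + 522
```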
The encoding procedure for the BCH view of Reed–Solomon codes can be modified to yield asystematic encoding procedure, in which each codeword contains the message as a prefix, and simply appends error correcting symbols as a suffix. Here, instead of sendings(x)=p(x)g(x){\displaystyle s(x)=p(x)g(x)}, the encoder constructs the transmitted polynomials(x){\displaystyle s(x)}such that the coefficients of thek{\displaystyle k}largest monomials are equal to the corresponding coefficients ofp(x){\displaystyle p(x)}, and the lower-order coefficients ofs(x){\displaystyle s(x)}are chosen exactly in such a way thats(x){\displaystyle s(x)}becomes divisible byg(x){\displaystyle g(x)}. Then the coefficients ofp(x){\displaystyle p(x)}are a subsequence of the coefficients ofs(x){\displaystyle s(x)}. To get a code that is overall systematic, we construct the message polynomialp(x){\displaystyle p(x)}by interpreting the message as the sequence of its coefficients.
Formally, the construction is done by multiplyingp(x){\displaystyle p(x)}byxt{\displaystyle x^{t}}to make room for thet=n−k{\displaystyle t=n-k}check symbols, dividing that product byg(x){\displaystyle g(x)}to find the remainder, and then compensating for that remainder by subtracting it. Thet{\displaystyle t}check symbols are created by computing the remainder sr(x){\displaystyle s_{r}(x)}:sr(x)=p(x)⋅xtmodg(x).{\displaystyle s_{r}(x)=p(x)\cdot x^{t}\ {\bmod {\ }}g(x).}
The remainder has degree at mostt−1{\displaystyle t-1}, whereas the coefficients ofxt−1,xt−2,…,x1,x0{\displaystyle x^{t-1},x^{t-2},\dots ,x^{1},x^{0}}in the polynomialp(x)⋅xt{\displaystyle p(x)\cdot x^{t}}are zero. Therefore, the following definition of the codewords(x){\displaystyle s(x)}has the property that the firstk{\displaystyle k}coefficients are identical to the coefficients ofp(x){\displaystyle p(x)}:s(x)=p(x)⋅xt−sr(x).{\displaystyle s(x)=p(x)\cdot x^{t}-s_{r}(x)\,.}
As a result, the codewordss(x){\displaystyle s(x)}are indeed elements ofC{\displaystyle \mathbf {C} }, that is, they are divisible by the generator polynomialg(x){\displaystyle g(x)}:[10]s(x)≡p(x)⋅xt−sr(x)≡sr(x)−sr(x)≡0modg(x).{\displaystyle s(x)\equiv p(x)\cdot x^{t}-s_{r}(x)\equiv s_{r}(x)-s_{r}(x)\equiv 0\mod g(x)\,.}
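A minimal Python sketch of this systematic construction, reproducing the RS(7,3) example over GF(929) worked out later in the article (the generator polynomial used here is the one computed in the sketch above):

```python
# Systematic BCH-view encoding s(x) = p(x)*x^t - (p(x)*x^t mod g(x)) over GF(929).
P = 929
g = [522, 568, 723, 809, 1]          # g(x) = x^4 + 809x^3 + 723x^2 + 568x + 522, low order first
p = [1, 2, 3]                        # message polynomial p(x) = 3x^2 + 2x + 1
t = len(g) - 1                       # number of check symbols, n - k = 4

def poly_mod(a, m):
    """Remainder of a(x) divided by a monic m(x), coefficients mod P, lowest order first."""
    a = a[:]
    for i in range(len(a) - 1, len(m) - 2, -1):
        factor = a[i]
        if factor:
            for j in range(len(m)):
                a[i - j] = (a[i - j] - factor * m[len(m) - 1 - j]) % P
    return a[:len(m) - 1]

shifted = [0] * t + p                            # p(x) * x^t
s_r = poly_mod(shifted, g)                       # [455, 442, 738, 547] -> 547x^3 + 738x^2 + 442x + 455
s = [(c - rem) % P for c, rem in zip(shifted, s_r + [0] * len(p))]
print(s)   # [474, 487, 191, 382, 1, 2, 3] -> 3x^6 + 2x^5 + x^4 + 382x^3 + 191x^2 + 487x + 474
```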
This function s{\displaystyle s} is a linear mapping. To generate the corresponding systematic encoding matrix G, set G's left square submatrix to the identity matrix and fill in the remaining entries of each row with the check symbols obtained by systematically encoding the corresponding row of the identity (i.e., the corresponding unit message):
G=[100…0g1,k+1…g1,n010…0g2,k+1…g2,n001…0g3,k+1…g3,n⋮⋮⋮⋮⋮⋮0…0…1gk,k+1…gk,n]{\displaystyle G={\begin{bmatrix}1&0&0&\dots &0&g_{1,k+1}&\dots &g_{1,n}\\0&1&0&\dots &0&g_{2,k+1}&\dots &g_{2,n}\\0&0&1&\dots &0&g_{3,k+1}&\dots &g_{3,n}\\\vdots &\vdots &\vdots &&\vdots &\vdots &&\vdots \\0&\dots &0&\dots &1&g_{k,k+1}&\dots &g_{k,n}\end{bmatrix}}}Ignoring leading zeroes, the last row =g(x){\displaystyle g(x)}.
C(m)=mG{\displaystyle C(m)=mG}for the followingk×n{\displaystyle k\times n}-matrixG{\displaystyle G}with elements fromF{\displaystyle F}:C(m)=mG=[m0m1…mk−1][100…0g1,k+1…g1,n010…0g2,k+1…g2,n001…0g3,k+1…g3,n⋮⋮⋮⋮⋮⋮0…0…1gk,k+1…gk,n]{\displaystyle C(m)=mG={\begin{bmatrix}m_{0}&m_{1}&\ldots &m_{k-1}\end{bmatrix}}{\begin{bmatrix}1&0&0&\dots &0&g_{1,k+1}&\dots &g_{1,n}\\0&1&0&\dots &0&g_{2,k+1}&\dots &g_{2,n}\\0&0&1&\dots &0&g_{3,k+1}&\dots &g_{3,n}\\\vdots &\vdots &\vdots &&\vdots &\vdots &&\vdots \\0&\dots &0&\dots &1&g_{k,k+1}&\dots &g_{k,n}\end{bmatrix}}}
The Reed–Solomon code is a [n,k,n−k+ 1] code; in other words, it is alinear block codeof lengthn(overF) withdimensionkand minimumHamming distancedmin=n−k+1.{\textstyle d_{\min }=n-k+1.}The Reed–Solomon code is optimal in the sense that the minimum distance has the maximum value possible for a linear code of size (n,k); this is known as theSingleton bound. Such a code is also called amaximum distance separable (MDS) code.
The error-correcting ability of a Reed–Solomon code is determined by its minimum distance, or equivalently, byn−k{\displaystyle n-k}, the measure of redundancy in the block. If the locations of the error symbols are not known in advance, then a Reed–Solomon code can correct up to(n−k)/2{\displaystyle (n-k)/2}erroneous symbols, i.e., it can correct half as many errors as there are redundant symbols added to the block. Sometimes error locations are known in advance (e.g., "side information" indemodulatorsignal-to-noise ratios)—these are callederasures. A Reed–Solomon code (like anyMDS code) is able to correct twice as many erasures as errors, and any combination of errors and erasures can be corrected as long as the relation2E+S≤n−kis satisfied, whereE{\displaystyle E}is the number of errors andS{\displaystyle S}is the number of erasures in the block.
The theoretical error bound can be described via the following formula for theAWGNchannel forFSK:[11]Pb≈2m−12m−11n∑ℓ=t+1nℓ(nℓ)Psℓ(1−Ps)n−ℓ{\displaystyle P_{b}\approx {\frac {2^{m-1}}{2^{m}-1}}{\frac {1}{n}}\sum _{\ell =t+1}^{n}\ell {n \choose \ell }P_{s}^{\ell }(1-P_{s})^{n-\ell }}and for other modulation schemes:Pb≈1m1n∑ℓ=t+1nℓ(nℓ)Psℓ(1−Ps)n−ℓ{\displaystyle P_{b}\approx {\frac {1}{m}}{\frac {1}{n}}\sum _{\ell =t+1}^{n}\ell {n \choose \ell }P_{s}^{\ell }(1-P_{s})^{n-\ell }}wheret=12(dmin−1){\textstyle t={\frac {1}{2}}(d_{\min }-1)},Ps=1−(1−s)h{\displaystyle P_{s}=1-(1-s)^{h}},h=mlog2M{\displaystyle h={\frac {m}{\log _{2}M}}},s{\displaystyle s}is the symbol error rate in uncoded AWGN case andM{\displaystyle M}is the modulation order.
For practical uses of Reed–Solomon codes, it is common to use a finite fieldF{\displaystyle F}with2m{\displaystyle 2^{m}}elements. In this case, each symbol can be represented as anm{\displaystyle m}-bit value.
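Symbol arithmetic in GF(2^m) is carry-less: addition is XOR and multiplication is polynomial multiplication modulo a fixed irreducible polynomial of degree m. The Python sketch below shows 8-bit symbol multiplication; the reducing polynomial x^8 + x^4 + x^3 + x^2 + 1 (0x11D, common in practice, for example in QR codes) is an assumption, since the article does not fix a particular one.

```python
def gf256_mul(a, b, poly=0x11D):
    """Multiply two GF(2^8) symbols: shift-and-XOR, reducing whenever the degree reaches 8."""
    result = 0
    while b:
        if b & 1:
            result ^= a        # addition in GF(2^m) is XOR
        b >>= 1
        a <<= 1
        if a & 0x100:          # degree 8 reached: subtract (XOR) the reducing polynomial
            a ^= poly
    return result

assert gf256_mul(1, 0xAB) == 0xAB          # 1 is the multiplicative identity
assert gf256_mul(0x80, 0x02) == 0x1D       # x^7 * x = x^8 = x^4 + x^3 + x^2 + 1 (mod 0x11D)
```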
The sender sends the data points as encoded blocks, and the number of symbols in the encoded block isn=2m−1{\displaystyle n=2^{m}-1}. Thus a Reed–Solomon code operating on 8-bit symbols hasn=28−1=255{\displaystyle n=2^{8}-1=255}symbols per block. (This is a very popular value because of the prevalence ofbyte-orientedcomputer systems.) The numberk{\displaystyle k}, withk<n{\displaystyle k<n}, ofdatasymbols in the block is a design parameter. A commonly used code encodesk=223{\displaystyle k=223}eight-bit data symbols plus 32 eight-bit parity symbols in ann=255{\displaystyle n=255}-symbol block; this is denoted as a(n,k)=(255,223){\displaystyle (n,k)=(255,223)}code, and is capable of correcting up to 16 symbol errors per block.
The Reed–Solomon code properties discussed above make them especially well-suited to applications where errors occur inbursts. This is because it does not matter to the code how many bits in a symbol are in error — if multiple bits in a symbol are corrupted it only counts as a single error. Conversely, if a data stream is not characterized by error bursts or drop-outs but by random single bit errors, a Reed–Solomon code is usually a poor choice compared to a binary code.
The Reed–Solomon code, like theconvolutional code, is a transparent code. This means that if the channel symbols have beeninvertedsomewhere along the line, the decoders will still operate. The result will be the inversion of the original data. However, the Reed–Solomon code loses its transparency when the code is shortened (see 'Remarks' at the end of this section). The "missing" bits in a shortened code need to be filled by either zeros or ones, depending on whether the data is complemented or not. (To put it another way, if the symbols are inverted, then the zero-fill needs to be inverted to a one-fill.) For this reason it is mandatory that the sense of the data (i.e., true or complemented) be resolved before Reed–Solomon decoding.
Whether the Reed–Solomon code iscyclicor not depends on subtle details of the construction. In the original view of Reed and Solomon, where the codewords are the values of a polynomial, one can choose the sequence of evaluation points in such a way as to make the code cyclic. In particular, ifα{\displaystyle \alpha }is aprimitive rootof the fieldF{\displaystyle F}, then by definition all non-zero elements ofF{\displaystyle F}take the formαi{\displaystyle \alpha ^{i}}fori∈{1,…,q−1}{\displaystyle i\in \{1,\dots ,q-1\}}, whereq=|F|{\displaystyle q=|F|}. Each polynomialp{\displaystyle p}overF{\displaystyle F}gives rise to a codeword(p(α1),…,p(αq−1)){\displaystyle (p(\alpha ^{1}),\dots ,p(\alpha ^{q-1}))}. Since the functiona↦p(αa){\displaystyle a\mapsto p(\alpha a)}is also a polynomial of the same degree, this function gives rise to a codeword(p(α2),…,p(αq)){\displaystyle (p(\alpha ^{2}),\dots ,p(\alpha ^{q}))}; sinceαq=α1{\displaystyle \alpha ^{q}=\alpha ^{1}}holds, this codeword is thecyclic left-shiftof the original codeword derived fromp{\displaystyle p}. So choosing a sequence of primitive root powers as the evaluation points makes the original view Reed–Solomon codecyclic. Reed–Solomon codes in the BCH view are always cyclic becauseBCH codes are cyclic.
Designers are not required to use the "natural" sizes of Reed–Solomon code blocks. A technique known as "shortening" can produce a smaller code of any desired size from a larger code. For example, the widely used (255,223) code can be converted to a (160,128) code by padding the unused portion of the source block with 95 binary zeroes and not transmitting them. At the decoder, the same portion of the block is loaded locally with binary zeroes.
The QR code, Ver 3 (29×29) uses interleaved blocks. The message has 26 data bytes and is encoded using two Reed-Solomon code blocks. Each block is a (255,233) Reed Solomon code shortened to a (35,13) code.
The Delsarte–Goethals–Seidel[12]theorem illustrates an example of an application of shortened Reed–Solomon codes. In parallel to shortening, a technique known aspuncturingallows omitting some of the encoded parity symbols.
The decoders described in this section use the BCH view of a codeword as a sequence of coefficients. They use a fixed generator polynomial known to both encoder and decoder.
Daniel Gorenstein and Neal Zierler developed a decoder that was described in an MIT Lincoln Laboratory report by Zierler in January 1960 and later in a paper in June 1961.[13][14] The Gorenstein–Zierler decoder and the related work on BCH codes are described in the book Error Correcting Codes by W. Wesley Peterson (1961).[3][page needed]
The transmitted message,(c0,…,ci,…,cn−1){\displaystyle (c_{0},\ldots ,c_{i},\ldots ,c_{n-1})}, is viewed as the coefficients of a polynomials(x)=∑i=0n−1cixi.{\displaystyle s(x)=\sum _{i=0}^{n-1}c_{i}x^{i}.}
As a result of the Reed–Solomon encoding procedure,s(x) is divisible by the generator polynomialg(x)=∏j=1n−k(x−αj),{\displaystyle g(x)=\prod _{j=1}^{n-k}(x-\alpha ^{j}),}whereαis a primitive element.
Sinces(x) is a multiple of the generatorg(x), it follows that it "inherits" all its roots:s(x)mod(x−αj)=g(x)mod(x−αj)=0.{\displaystyle s(x){\bmod {(}}x-\alpha ^{j})=g(x){\bmod {(}}x-\alpha ^{j})=0.}Therefore,s(αj)=0,j=1,2,…,n−k.{\displaystyle s(\alpha ^{j})=0,\ j=1,2,\ldots ,n-k.}
The transmitted polynomial is corrupted in transit by an error polynomiale(x)=∑i=0n−1eixi{\displaystyle e(x)=\sum _{i=0}^{n-1}e_{i}x^{i}}to produce the received polynomialr(x)=s(x)+e(x).{\displaystyle r(x)=s(x)+e(x).}
Coefficienteiwill be zero if there is no error at that power ofx, and nonzero if there is an error. If there areνerrors at distinct powersikofx, thene(x)=∑k=1νeikxik.{\displaystyle e(x)=\sum _{k=1}^{\nu }e_{i_{k}}x^{i_{k}}.}
The goal of the decoder is to find the number of errors (ν), the positions of the errors (ik), and the error values at those positions (eik). From those, e(x) can be calculated and subtracted from r(x) to get the originally sent message s(x).
The decoder starts by evaluating the polynomial as received at pointsα1…αn−k{\displaystyle \alpha ^{1}\dots \alpha ^{n-k}}. We call the results of that evaluation the "syndromes"Sj. They are defined asSj=r(αj)=s(αj)+e(αj)=0+e(αj)=e(αj)=∑k=1νeik(αj)ik,j=1,2,…,n−k.{\displaystyle {\begin{aligned}S_{j}&=r(\alpha ^{j})=s(\alpha ^{j})+e(\alpha ^{j})=0+e(\alpha ^{j})\\&=e(\alpha ^{j})\\&=\sum _{k=1}^{\nu }e_{i_{k}}{(\alpha ^{j})}^{i_{k}},\quad j=1,2,\ldots ,n-k.\end{aligned}}}Note thats(αj)=0{\displaystyle s(\alpha ^{j})=0}becauses(x){\displaystyle s(x)}has roots atαj{\displaystyle \alpha ^{j}}, as shown in the previous section.
The advantage of looking at the syndromes is that the message polynomial drops out. In other words, the syndromes only relate to the error and are unaffected by the actual contents of the message being transmitted. If the syndromes are all zero, the algorithm stops here and reports that the message was not corrupted in transit.
For convenience, define theerror locatorsXkanderror valuesYkasXk=αik,Yk=eik.{\displaystyle X_{k}=\alpha ^{i_{k}},\quad Y_{k}=e_{i_{k}}.}
Then the syndromes can be written in terms of these error locators and error values asSj=∑k=1νYkXkj.{\displaystyle S_{j}=\sum _{k=1}^{\nu }Y_{k}X_{k}^{j}.}
This definition of the syndrome values is equivalent to the previous since(αj)ik=αj⋅ik=(αik)j=Xkj{\displaystyle {(\alpha ^{j})}^{i_{k}}=\alpha ^{j\cdot i_{k}}={(\alpha ^{i_{k}})}^{j}=X_{k}^{j}}.
The syndromes give a system ofn−k≥ 2νequations in 2νunknowns, but that system of equations is nonlinear in theXkand does not have an obvious solution. However, if theXkwere known (see below), then the syndrome equations provide a linear system of equations[X11X21⋯Xν1X12X22⋯Xν2⋮⋮⋱⋮X1n−kX2n−k⋯Xνn−k][Y1Y2⋮Yν]=[S1S2⋮Sn−k],{\displaystyle {\begin{bmatrix}X_{1}^{1}&X_{2}^{1}&\cdots &X_{\nu }^{1}\\X_{1}^{2}&X_{2}^{2}&\cdots &X_{\nu }^{2}\\\vdots &\vdots &\ddots &\vdots \\X_{1}^{n-k}&X_{2}^{n-k}&\cdots &X_{\nu }^{n-k}\\\end{bmatrix}}{\begin{bmatrix}Y_{1}\\Y_{2}\\\vdots \\Y_{\nu }\end{bmatrix}}={\begin{bmatrix}S_{1}\\S_{2}\\\vdots \\S_{n-k}\end{bmatrix}},}which can easily be solved for theYkerror values.
Consequently, the problem is finding the Xk, because then the leftmost matrix would be known, and both sides of the equation could be multiplied by its inverse, yielding the Yk.
In the variant of this algorithm where the locations of the errors are already known (when it is being used as anerasure code), this is the end. The error locations (Xk) are already known by some other method (for example, in an FM transmission, the sections where the bitstream was unclear or overcome with interference are probabilistically determinable from frequency analysis). In this scenario, up ton−k{\displaystyle n-k}errors can be corrected.
The rest of the algorithm serves to locate the errors and will require syndrome values up to2ν{\displaystyle 2\nu }, instead of just theν{\displaystyle \nu }used thus far. This is why twice as many error-correcting symbols need to be added as can be corrected without knowing their locations.
There is a linear recurrence relation that gives rise to a system of linear equations. Solving those equations identifies those error locationsXk.
Define theerror locator polynomialΛ(x)asΛ(x)=∏k=1ν(1−xXk)=1+Λ1x1+Λ2x2+⋯+Λνxν.{\displaystyle \Lambda (x)=\prod _{k=1}^{\nu }(1-xX_{k})=1+\Lambda _{1}x^{1}+\Lambda _{2}x^{2}+\cdots +\Lambda _{\nu }x^{\nu }.}
The zeros ofΛ(x)are the reciprocalsXk−1{\displaystyle X_{k}^{-1}}. This follows from the above product notation construction, since ifx=Xk−1{\displaystyle x=X_{k}^{-1}}, then one of the multiplied terms will be zero,(1−Xk−1⋅Xk)=1−1=0{\displaystyle (1-X_{k}^{-1}\cdot X_{k})=1-1=0}, making the whole polynomial evaluate to zero:Λ(Xk−1)=0.{\displaystyle \Lambda (X_{k}^{-1})=0.}
Letj{\displaystyle j}be any integer such that1≤j≤ν{\displaystyle 1\leq j\leq \nu }. Multiply both sides byYkXkj+ν{\displaystyle Y_{k}X_{k}^{j+\nu }}, and it will still be zero:YkXkj+νΛ(Xk−1)=0,YkXkj+ν(1+Λ1Xk−1+Λ2Xk−2+⋯+ΛνXk−ν)=0,YkXkj+ν+Λ1YkXkj+νXk−1+Λ2YkXkj+νXk−2+⋯+ΛνYkXkj+νXk−ν=0,YkXkj+ν+Λ1YkXkj+ν−1+Λ2YkXkj+ν−2+⋯+ΛνYkXkj=0.{\displaystyle {\begin{aligned}&Y_{k}X_{k}^{j+\nu }\Lambda (X_{k}^{-1})=0,\\&Y_{k}X_{k}^{j+\nu }(1+\Lambda _{1}X_{k}^{-1}+\Lambda _{2}X_{k}^{-2}+\cdots +\Lambda _{\nu }X_{k}^{-\nu })=0,\\&Y_{k}X_{k}^{j+\nu }+\Lambda _{1}Y_{k}X_{k}^{j+\nu }X_{k}^{-1}+\Lambda _{2}Y_{k}X_{k}^{j+\nu }X_{k}^{-2}+\cdots +\Lambda _{\nu }Y_{k}X_{k}^{j+\nu }X_{k}^{-\nu }=0,\\&Y_{k}X_{k}^{j+\nu }+\Lambda _{1}Y_{k}X_{k}^{j+\nu -1}+\Lambda _{2}Y_{k}X_{k}^{j+\nu -2}+\cdots +\Lambda _{\nu }Y_{k}X_{k}^{j}=0.\end{aligned}}}
Sum fork= 1 toν, and it will still be zero:∑k=1ν(YkXkj+ν+Λ1YkXkj+ν−1+Λ2YkXkj+ν−2+⋯+ΛνYkXkj)=0.{\displaystyle \sum _{k=1}^{\nu }(Y_{k}X_{k}^{j+\nu }+\Lambda _{1}Y_{k}X_{k}^{j+\nu -1}+\Lambda _{2}Y_{k}X_{k}^{j+\nu -2}+\cdots +\Lambda _{\nu }Y_{k}X_{k}^{j})=0.}
Collect each term into its own sum:(∑k=1νYkXkj+ν)+(∑k=1νΛ1YkXkj+ν−1)+(∑k=1νΛ2YkXkj+ν−2)+⋯+(∑k=1νΛνYkXkj)=0.{\displaystyle \left(\sum _{k=1}^{\nu }Y_{k}X_{k}^{j+\nu }\right)+\left(\sum _{k=1}^{\nu }\Lambda _{1}Y_{k}X_{k}^{j+\nu -1}\right)+\left(\sum _{k=1}^{\nu }\Lambda _{2}Y_{k}X_{k}^{j+\nu -2}\right)+\cdots +\left(\sum _{k=1}^{\nu }\Lambda _{\nu }Y_{k}X_{k}^{j}\right)=0.}
Extract the constant values ofΛ{\displaystyle \Lambda }that are unaffected by the summation:(∑k=1νYkXkj+ν)+Λ1(∑k=1νYkXkj+ν−1)+Λ2(∑k=1νYkXkj+ν−2)+⋯+Λν(∑k=1νYkXkj)=0.{\displaystyle \left(\sum _{k=1}^{\nu }Y_{k}X_{k}^{j+\nu }\right)+\Lambda _{1}\left(\sum _{k=1}^{\nu }Y_{k}X_{k}^{j+\nu -1}\right)+\Lambda _{2}\left(\sum _{k=1}^{\nu }Y_{k}X_{k}^{j+\nu -2}\right)+\cdots +\Lambda _{\nu }\left(\sum _{k=1}^{\nu }Y_{k}X_{k}^{j}\right)=0.}
These summations are now equivalent to the syndrome values, which we know and can substitute in. This therefore reduces toSj+ν+Λ1Sj+ν−1+⋯+Λν−1Sj+1+ΛνSj=0.{\displaystyle S_{j+\nu }+\Lambda _{1}S_{j+\nu -1}+\cdots +\Lambda _{\nu -1}S_{j+1}+\Lambda _{\nu }S_{j}=0.}
SubtractingSj+ν{\displaystyle S_{j+\nu }}from both sides yieldsSjΛν+Sj+1Λν−1+⋯+Sj+ν−1Λ1=−Sj+ν.{\displaystyle S_{j}\Lambda _{\nu }+S_{j+1}\Lambda _{\nu -1}+\cdots +S_{j+\nu -1}\Lambda _{1}=-S_{j+\nu }.}
Recall thatjwas chosen to be any integer between 1 andvinclusive, and this equivalence is true for all such values. Therefore, we havevlinear equations, not just one. This system of linear equations can therefore be solved for the coefficients Λiof the error-location polynomial:[S1S2⋯SνS2S3⋯Sν+1⋮⋮⋱⋮SνSν+1⋯S2ν−1][ΛνΛν−1⋮Λ1]=[−Sν+1−Sν+2⋮−Sν+ν].{\displaystyle {\begin{bmatrix}S_{1}&S_{2}&\cdots &S_{\nu }\\S_{2}&S_{3}&\cdots &S_{\nu +1}\\\vdots &\vdots &\ddots &\vdots \\S_{\nu }&S_{\nu +1}&\cdots &S_{2\nu -1}\end{bmatrix}}{\begin{bmatrix}\Lambda _{\nu }\\\Lambda _{\nu -1}\\\vdots \\\Lambda _{1}\end{bmatrix}}={\begin{bmatrix}-S_{\nu +1}\\-S_{\nu +2}\\\vdots \\-S_{\nu +\nu }\end{bmatrix}}.}The above assumes that the decoder knows the number of errorsν, but that number has not been determined yet. The PGZ decoder does not determineνdirectly but rather searches for it by trying successive values. The decoder first assumes the largest value for a trialνand sets up the linear system for that value. If the equations can be solved (i.e., the matrix determinant is nonzero), then that trial value is the number of errors. If the linear system cannot be solved, then the trialνis reduced by one and the next smaller system is examined.[15]
Use the coefficients Λifound in the last step to build the error location polynomial. The roots of the error location polynomial can be found by exhaustive search. The error locatorsXkare the reciprocals of those roots. The order of coefficients of the error location polynomial can be reversed, in which case the roots of that reversed polynomial are the error locatorsXk{\displaystyle X_{k}}(not their reciprocalsXk−1{\displaystyle X_{k}^{-1}}).Chien searchis an efficient implementation of this step.
Once the error locatorsXkare known, the error values can be determined. This can be done by direct solution forYkin theerror equationsmatrix given above, or using theForney algorithm.
Calculateikby taking the log baseα{\displaystyle \alpha }ofXk. This is generally done using a precomputed lookup table.
Finally, e(x) is generated from ik and eik and then is subtracted from r(x) to get the originally sent message s(x), with errors corrected.
Consider the Reed–Solomon code defined inGF(929)withα= 3andt= 4(this is used inPDF417barcodes) for a RS(7,3) code. The generator polynomial isg(x)=(x−3)(x−32)(x−33)(x−34)=x4+809x3+723x2+568x+522.{\displaystyle g(x)=(x-3)(x-3^{2})(x-3^{3})(x-3^{4})=x^{4}+809x^{3}+723x^{2}+568x+522.}If the message polynomial isp(x) = 3x2+ 2x+ 1, then a systematic codeword is encoded as follows:sr(x)=p(x)xtmodg(x)=547x3+738x2+442x+455,{\displaystyle s_{r}(x)=p(x)\,x^{t}{\bmod {g}}(x)=547x^{3}+738x^{2}+442x+455,}s(x)=p(x)xt−sr(x)=3x6+2x5+1x4+382x3+191x2+487x+474.{\displaystyle s(x)=p(x)\,x^{t}-s_{r}(x)=3x^{6}+2x^{5}+1x^{4}+382x^{3}+191x^{2}+487x+474.}Errors in transmission might cause this to be received instead:r(x)=s(x)+e(x)=3x6+2x5+123x4+456x3+191x2+487x+474.{\displaystyle r(x)=s(x)+e(x)=3x^{6}+2x^{5}+123x^{4}+456x^{3}+191x^{2}+487x+474.}The syndromes are calculated by evaluatingrat powers ofα:S1=r(31)=3⋅36+2⋅35+123⋅34+456⋅33+191⋅32+487⋅3+474=732,{\displaystyle S_{1}=r(3^{1})=3\cdot 3^{6}+2\cdot 3^{5}+123\cdot 3^{4}+456\cdot 3^{3}+191\cdot 3^{2}+487\cdot 3+474=732,}S2=r(32)=637,S3=r(33)=762,S4=r(34)=925,{\displaystyle S_{2}=r(3^{2})=637,\quad S_{3}=r(3^{3})=762,\quad S_{4}=r(3^{4})=925,}yielding the system[732637637762][Λ2Λ1]=[−762−925]=[167004].{\displaystyle {\begin{bmatrix}732&637\\637&762\end{bmatrix}}{\begin{bmatrix}\Lambda _{2}\\\Lambda _{1}\end{bmatrix}}={\begin{bmatrix}-762\\-925\end{bmatrix}}={\begin{bmatrix}167\\004\end{bmatrix}}.}
UsingGaussian elimination,[001000000001][Λ2Λ1]=[329821],{\displaystyle {\begin{bmatrix}001&000\\000&001\end{bmatrix}}{\begin{bmatrix}\Lambda _{2}\\\Lambda _{1}\end{bmatrix}}={\begin{bmatrix}329\\821\end{bmatrix}},}soΛ(x)=329x2+821x+001,{\displaystyle \Lambda (x)=329x^{2}+821x+001,}with rootsx1= 757 = 3−3andx2= 562 = 3−4.
The coefficients can be reversed:R(x)=001x2+821x+329,{\displaystyle R(x)=001x^{2}+821x+329,}to produce roots 27 = 33and 81 = 34with positive exponents, but typically this isn't used. The logarithm of the inverted roots corresponds to the error locations (right to left, location 0 is the last term in the codeword).
To calculate the error values, apply theForney algorithm:Ω(x)=S(x)Λ(x)modx4=546x+732,{\displaystyle \Omega (x)=S(x)\Lambda (x){\bmod {x}}^{4}=546x+732,}Λ′(x)=658x+821,{\displaystyle \Lambda '(x)=658x+821,}e1=−Ω(x1)/Λ′(x1)=074,{\displaystyle e_{1}=-\Omega (x_{1})/\Lambda '(x_{1})=074,}e2=−Ω(x2)/Λ′(x2)=122.{\displaystyle e_{2}=-\Omega (x_{2})/\Lambda '(x_{2})=122.}
Subtracting e1x3+e2x4=74x3+122x4{\displaystyle e_{1}x^{3}+e_{2}x^{4}=74x^{3}+122x^{4}} from the received polynomial r(x) reproduces the original codeword s(x).
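The whole decoding chain on this example can be reproduced with a short script. The following Python sketch follows the Peterson–Gorenstein–Zierler steps described above (syndromes, the ν × ν linear system for Λ, a brute-force root search in place of a Chien search, and a direct linear solve for the error values instead of the Forney algorithm), using plain Gaussian elimination modulo the prime 929.

```python
# PGZ-style decoding of the worked RS(7,3) example over GF(929) with alpha = 3.
P, ALPHA, N, K = 929, 3, 7, 3
T = N - K                                     # number of check symbols

def poly_eval(coeffs, x):                     # coefficients listed lowest order first
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % P
    return acc

def solve(A, b):
    """Solve A y = b over GF(P) by Gauss-Jordan elimination; return None if A is singular."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = next((row for row in range(col, n) if M[row][col]), None)
        if piv is None:
            return None
        M[col], M[piv] = M[piv], M[col]
        inv = pow(M[col][col], -1, P)
        M[col] = [v * inv % P for v in M[col]]
        for row in range(n):
            if row != col and M[row][col]:
                f = M[row][col]
                M[row] = [(v - f * w) % P for v, w in zip(M[row], M[col])]
    return [M[i][n] for i in range(n)]

# received word r(x) = 3x^6 + 2x^5 + 123x^4 + 456x^3 + 191x^2 + 487x + 474
r = [474, 487, 191, 456, 123, 2, 3]
S = [poly_eval(r, pow(ALPHA, j, P)) for j in range(1, T + 1)]     # [732, 637, 762, 925]

# error-locator coefficients: try nu = T//2 first, then smaller values
nu, sol = T // 2, None
while nu > 0:
    A = [[S[j + i] for i in range(nu)] for j in range(nu)]
    b = [-S[j + nu] % P for j in range(nu)]
    sol = solve(A, b)                          # sol = [Lambda_nu, ..., Lambda_1]
    if sol is not None:
        break
    nu -= 1                                    # (for this example nu = 2 succeeds)
lam = [1] + sol[::-1]                          # Lambda(x) coefficients, here [1, 821, 329]

# error locations: roots of Lambda are the inverses X_k^{-1}; brute-force search
locations = [i for i in range(N) if poly_eval(lam, pow(pow(ALPHA, i, P), -1, P)) == 0]   # [3, 4]

# error values: solve sum_k Y_k X_k^j = S_j for j = 1..nu
X = [pow(ALPHA, i, P) for i in locations]
Y = solve([[pow(Xk, j, P) for Xk in X] for j in range(1, nu + 1)], S[:nu])               # [74, 122]

for i, y in zip(locations, Y):
    r[i] = (r[i] - y) % P
print(r)     # [474, 487, 191, 382, 1, 2, 3]  ->  the original codeword s(x)
```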
TheBerlekamp–Massey algorithmis an alternate iterative procedure for finding the error locator polynomial. During each iteration, it calculates a discrepancy based on a current instance of Λ(x) with an assumed number of errorse:Δ=Si+Λ1Si−1+⋯+ΛeSi−e{\displaystyle \Delta =S_{i}+\Lambda _{1}\ S_{i-1}+\cdots +\Lambda _{e}\ S_{i-e}}and then adjusts Λ(x) andeso that a recalculated Δ would be zero. The articleBerlekamp–Massey algorithmhas a detailed description of the procedure. In the following example,C(x) is used to represent Λ(x).
Using the same data as the Peterson–Gorenstein–Zierler example above:
The final value ofCis the error locator polynomial, Λ(x).
Another iterative method for calculating both the error locator polynomial and the error value polynomial is based on Sugiyama's adaptation of theextended Euclidean algorithm.
DefineS(x), Λ(x), and Ω(x) fortsyndromes andeerrors:S(x)=Stxt−1+St−1xt−2+⋯+S2x+S1Λ(x)=Λexe+Λe−1xe−1+⋯+Λ1x+1Ω(x)=Ωexe+Ωe−1xe−1+⋯+Ω1x+Ω0{\displaystyle {\begin{aligned}S(x)&=S_{t}x^{t-1}+S_{t-1}x^{t-2}+\cdots +S_{2}x+S_{1}\\[1ex]\Lambda (x)&=\Lambda _{e}x^{e}+\Lambda _{e-1}x^{e-1}+\cdots +\Lambda _{1}x+1\\[1ex]\Omega (x)&=\Omega _{e}x^{e}+\Omega _{e-1}x^{e-1}+\cdots +\Omega _{1}x+\Omega _{0}\end{aligned}}}
The key equation is:Λ(x)S(x)=Q(x)xt+Ω(x){\displaystyle \Lambda (x)S(x)=Q(x)x^{t}+\Omega (x)}
Fort= 6 ande= 3:[Λ3S6x8Λ2S6+Λ3S5x7Λ1S6+Λ2S5+Λ3S4x6S6+Λ1S5+Λ2S4+Λ3S3x5S5+Λ1S4+Λ2S3+Λ3S2x4S4+Λ1S3+Λ2S2+Λ3S1x3S3+Λ1S2+Λ2S1x2S2+Λ1S1xS1]=[Q2x8Q1x7Q0x6000Ω2x2Ω1xΩ0]{\displaystyle {\begin{bmatrix}\Lambda _{3}S_{6}&x^{8}\\\Lambda _{2}S_{6}+\Lambda _{3}S_{5}&x^{7}\\\Lambda _{1}S_{6}+\Lambda _{2}S_{5}+\Lambda _{3}S_{4}&x^{6}\\S_{6}+\Lambda _{1}S_{5}+\Lambda _{2}S_{4}+\Lambda _{3}S_{3}&x^{5}\\S_{5}+\Lambda _{1}S_{4}+\Lambda _{2}S_{3}+\Lambda _{3}S_{2}&x^{4}\\S_{4}+\Lambda _{1}S_{3}+\Lambda _{2}S_{2}+\Lambda _{3}S_{1}&x^{3}\\S_{3}+\Lambda _{1}S_{2}+\Lambda _{2}S_{1}&x^{2}\\S_{2}+\Lambda _{1}S_{1}&x\\S_{1}\end{bmatrix}}={\begin{bmatrix}Q_{2}x^{8}\\Q_{1}x^{7}\\Q_{0}x^{6}\\0\\0\\0\\\Omega _{2}x^{2}\\\Omega _{1}x\\\Omega _{0}\end{bmatrix}}}
The middle terms are zero due to the relationship between Λ and syndromes.
The extended Euclidean algorithm can find a series of polynomials of the form{\displaystyle A_{i}(x)S(x)+B_{i}(x)x^{t}=R_{i}(x),}
where the degree of R decreases as i increases. Once the degree of Ri(x) < t/2, then{\displaystyle \Lambda (x)=A_{i}(x),\qquad \Omega (x)=R_{i}(x).}
B(x) andQ(x) don't need to be saved, so the algorithm becomes:
To set the low-order term of Λ(x) to 1, divide Λ(x) and Ω(x) by Ai(0):{\displaystyle \Lambda (x)={\frac {A_{i}(x)}{A_{i}(0)}},\qquad \Omega (x)={\frac {R_{i}(x)}{A_{i}(0)}}.}
Ai(0) is the constant (low-order) term of Ai.
Using the same data as the Peterson–Gorenstein–Zierler example above:
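A minimal Python sketch of that computation: it runs the extended Euclidean iteration on x^t and S(x) built from the syndromes 732, 637, 762, 925 of the example, stops when the degree of the remainder drops below t/2, and normalizes by Ai(0). The resulting Λ(x) and Ω(x) match the values found by the other decoders above.

```python
# Sugiyama / extended-Euclidean step over GF(929) for the worked example's syndromes.
P, t = 929, 4
S = [732, 637, 762, 925]                     # S(x) = 925x^3 + 762x^2 + 637x + 732, low order first

def deg(a):
    d = len(a) - 1
    while d > 0 and a[d] == 0:
        d -= 1
    return d if any(a) else -1

def poly_sub(a, b):
    out = [0] * max(len(a), len(b))
    for i, c in enumerate(a): out[i] = c % P
    for i, c in enumerate(b): out[i] = (out[i] - c) % P
    return out

def poly_mul(a, b):
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % P
    return out

def poly_divmod(a, b):                        # long division mod P; returns (quotient, remainder)
    a = [c % P for c in a]
    db = deg(b)
    inv = pow(b[db], -1, P)
    q = [0] * max(deg(a) - db + 1, 1)
    for i in range(deg(a), db - 1, -1):
        coef = a[i] * inv % P
        if coef:
            q[i - db] = coef
            for j in range(db + 1):
                a[i - db + j] = (a[i - db + j] - coef * b[j]) % P
    return q, a

# R_{-1}(x) = x^t, R_0(x) = S(x); A_{-1}(x) = 0, A_0(x) = 1
r_prev, r_cur = [0] * t + [1], S[:]
a_prev, a_cur = [0], [1]
while deg(r_cur) >= t // 2:
    q, rem = poly_divmod(r_prev, r_cur)
    a_prev, a_cur = a_cur, poly_sub(a_prev, poly_mul(q, a_cur))
    r_prev, r_cur = r_cur, rem

norm = pow(a_cur[0], -1, P)                   # divide by A_i(0) so that Lambda(0) = 1
Lam   = [c * norm % P for c in a_cur]
Omega = [c * norm % P for c in r_cur]
print(Lam[:3], Omega[:2])   # [1, 821, 329] and [732, 546]: Lambda(x) = 329x^2 + 821x + 1, Omega(x) = 546x + 732
```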
A discrete Fourier transform can be used for decoding.[16] To avoid conflict with syndrome names, let c(x) = s(x) denote the encoded codeword. r(x) and e(x) are the same as above. Define C(x), E(x), and R(x) as the discrete Fourier transforms of c(x), e(x), and r(x). Since r(x) = c(x) + e(x), and since a discrete Fourier transform is a linear operator, R(x) = C(x) + E(x).
Transformr(x) toR(x) using discrete Fourier transform. Since the calculation for a discrete Fourier transform is the same as the calculation for syndromes,tcoefficients ofR(x) andE(x) are the same as the syndromes:Rj=Ej=Sj=r(αj)for1≤j≤t{\displaystyle R_{j}=E_{j}=S_{j}=r(\alpha ^{j})\qquad {\text{for }}1\leq j\leq t}
UseR1{\displaystyle R_{1}}throughRt{\displaystyle R_{t}}as syndromes (they're the same) and generate the error locator polynomial using the methods from any of the above decoders.
Letv= number of errors. GenerateE(x) using the known coefficientsE1{\displaystyle E_{1}}toEt{\displaystyle E_{t}}, the error locator polynomial, and these formulasE0=−1Λv(Ev+Λ1Ev−1+⋯+Λv−1E1)Ej=−(Λ1Ej−1+Λ2Ej−2+⋯+ΛvEj−v)fort<j<n{\displaystyle {\begin{aligned}E_{0}&=-{\frac {1}{\Lambda _{v}}}(E_{v}+\Lambda _{1}E_{v-1}+\cdots +\Lambda _{v-1}E_{1})\\E_{j}&=-(\Lambda _{1}E_{j-1}+\Lambda _{2}E_{j-2}+\cdots +\Lambda _{v}E_{j-v})&{\text{for }}t<j<n\end{aligned}}}
Then calculateC(x) =R(x) −E(x) and take the inverse transform (polynomial interpolation) ofC(x) to producec(x).
The Singleton bound states that the minimum distance d of a linear block code of size (n, k) is upper-bounded by n − k + 1. The distance d was usually understood to limit the error-correction capability to ⌊(d − 1)/2⌋. The Reed–Solomon code achieves this bound with equality, and can thus correct up to ⌊(n − k)/2⌋ errors. However, this error-correction bound is not exact.
In 1999,Madhu SudanandVenkatesan Guruswamiat MIT published "Improved Decoding of Reed–Solomon and Algebraic-Geometry Codes" introducing an algorithm that allowed for the correction of errors beyond half the minimum distance of the code.[17]It applies to Reed–Solomon codes and more generally toalgebraic geometric codes. This algorithm produces a list of codewords (it is alist-decodingalgorithm) and is based on interpolation and factorization of polynomials overGF(2m)and its extensions.
In 2023, building on three earlier works,[18][19][20] coding theorists showed that Reed–Solomon codes defined over random evaluation points can actually achieve list decoding capacity (up to n − k errors) over linear size alphabets with high probability. However, this result is combinatorial rather than algorithmic.[citation needed]
The algebraic decoding methods described above are hard-decision methods, which means that for every symbol a hard decision is made about its value. In contrast, a soft-decision decoder can associate with each symbol an additional value corresponding to the channel demodulator's confidence in the correctness of the symbol. The advent of LDPC and turbo codes, which employ iterated soft-decision belief propagation decoding methods to achieve error-correction performance close to the theoretical limit, has spurred interest in applying soft-decision decoding to conventional algebraic codes. In 2003, Ralf Koetter and Alexander Vardy presented a polynomial-time soft-decision algebraic list-decoding algorithm for Reed–Solomon codes, which was based upon the work by Sudan and Guruswami.[21] In 2016, Steven J. Franke and Joseph H. Taylor published a novel soft-decision decoder.[22]
Here we present a simpleMATLABimplementation for an encoder.
Now the decoding part:
The decoders described in this section use the original Reed–Solomon view of a codeword as a sequence of polynomial values, where the polynomial is based on the message to be encoded. The same set of fixed values is used by the encoder and decoder, and the decoder recovers the encoding polynomial (and optionally an error-locating polynomial) from the received message.
Reed and Solomon described a theoretical decoder that corrected errors by finding the most popular message polynomial.[1] The decoder only knows the set of values a1{\displaystyle a_{1}} to an{\displaystyle a_{n}} and which encoding method was used to generate the codeword's sequence of values. The original message, the polynomial, and any errors are unknown. A decoding procedure could use a method like Lagrange interpolation on various subsets of n codeword values taken k at a time to repeatedly produce potential polynomials, until a sufficient number of matching polynomials are produced to reasonably eliminate any errors in the received codeword. Once a polynomial is determined, any errors in the codeword can be corrected by recalculating the corresponding codeword values. Unfortunately, in all but the simplest of cases, there are too many subsets, so the algorithm is impractical. The number of subsets is the binomial coefficient,(nk)=n!(n−k)!k!{\textstyle {\binom {n}{k}}={n! \over (n-k)!k!}}, which is infeasibly large for even modest codes. For a (255,249) code that can correct 3 errors, the naïve theoretical decoder would examine 359 billion subsets.[citation needed]
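The subset count quoted above can be checked directly:

```python
import math
print(math.comb(255, 249))   # 359895314625, i.e. roughly 359 billion subsets
```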
In 1986, a decoder known as the Berlekamp–Welch algorithm was developed; it recovers both the original message polynomial and an error "locator" polynomial that produces zeroes for the input values that correspond to errors, with time complexity O(n3), where n is the number of values in a message. The recovered polynomial is then used to recover (recalculate as needed) the original message.
Using RS(7,3), GF(929), and the set of evaluation pointsai=i− 1
If the message polynomial is{\displaystyle p(x)=3x^{2}+2x+1,}
then the codeword is{\displaystyle c=\{1,6,17,34,57,86,121\}.}
Errors in transmission might cause this to be received instead:{\displaystyle b=c+e=\{1,6,123,456,57,86,121\}.}
The key equation is:{\displaystyle b_{i}E(a_{i})=Q(a_{i}),}where E(x) is a monic polynomial of degree e that is zero at the error locations, and Q(x) = P(x)E(x) has degree at most e + k − 1.
Assume the maximum number of errors: e = 2. The key equation becomes{\displaystyle b_{i}(e_{0}+e_{1}a_{i}+a_{i}^{2})=q_{0}+q_{1}a_{i}+q_{2}a_{i}^{2}+q_{3}a_{i}^{3}+q_{4}a_{i}^{4}.}Moving the q terms to the left-hand side and the known term{\displaystyle b_{i}a_{i}^{2}}to the right-hand side gives the linear system:
[001000928000000000000006006928928928928928123246928927925921913456439928926920902848057228928925913865673086430928924904804304121726928923893713562][e0e1q0q1q2q3q4]=[000923437541017637289]{\displaystyle {\begin{bmatrix}001&000&928&000&000&000&000\\006&006&928&928&928&928&928\\123&246&928&927&925&921&913\\456&439&928&926&920&902&848\\057&228&928&925&913&865&673\\086&430&928&924&904&804&304\\121&726&928&923&893&713&562\end{bmatrix}}{\begin{bmatrix}e_{0}\\e_{1}\\q_{0}\\q_{1}\\q_{2}\\q_{3}\\q_{4}\end{bmatrix}}={\begin{bmatrix}000\\923\\437\\541\\017\\637\\289\end{bmatrix}}}
UsingGaussian elimination:
[001000000000000000000000001000000000000000000000001000000000000000000000001000000000000000000000001000000000000000000000001000000000000000000000001][e0e1q0q1q2q3q4]=[006924006007009916003]{\displaystyle {\begin{bmatrix}001&000&000&000&000&000&000\\000&001&000&000&000&000&000\\000&000&001&000&000&000&000\\000&000&000&001&000&000&000\\000&000&000&000&001&000&000\\000&000&000&000&000&001&000\\000&000&000&000&000&000&001\end{bmatrix}}{\begin{bmatrix}e_{0}\\e_{1}\\q_{0}\\q_{1}\\q_{2}\\q_{3}\\q_{4}\end{bmatrix}}={\begin{bmatrix}006\\924\\006\\007\\009\\916\\003\end{bmatrix}}}
Thus E(x) = x2 + 924x + 6 = (x − 2)(x − 3) and Q(x) = 3x4 + 916x3 + 9x2 + 7x + 6, so P(x) = Q(x)/E(x) = 3x2 + 2x + 1. Recalculate P(x) where E(x) = 0: {2, 3}, to correct b, resulting in the corrected codeword c = {1, 6, 17, 34, 57, 86, 121}.
In 2002, an improved decoder was developed by Shuhong Gao, based on the extended Euclid algorithm.[23]
To duplicate the polynomials generated by Berlekamp–Welch,
divide Q(x) and E(x) by the most significant coefficient of E(x) = 708.
Recalculate P(x) where E(x) = 0: {2, 3}, to correct b, resulting in the corrected codeword c = {1, 6, 17, 34, 57, 86, 121}.
|
https://en.wikipedia.org/wiki/Reed%E2%80%93Solomon_error_correction
|
Incryptography,deniable authenticationrefers tomessage authenticationbetween a set of participants where the participants themselves can be confident in the authenticity of the messages, but it cannot be proved to a third party after the event.[1][2][3]
In practice, deniable authentication between two parties can be achieved through the use ofmessage authentication codes(MACs) by making sure that if an attacker is able to decrypt the messages, they would also know the MAC key as part of the protocol, and would thus be able to forge authentic-looking messages.[4]For example, in theOff-the-Record Messaging(OTR) protocol, MAC keys are derived from the asymmetric decryption key through acryptographic hash function. In addition to that, the OTR protocol also reveals used MAC keys as part of the next message, after they have already been used to authenticate previously received messages, and will not be re-used.[5]
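A minimal Python sketch of the idea using the standard library's hmac module; the key and message values are illustrative. Because both parties hold the same MAC key, either of them could have produced any given tag, so a transcript containing valid tags does not prove authorship to a third party.

```python
import hmac, hashlib

shared_mac_key = b"derived-from-the-session-secret"   # both Alice and Bob hold this key
message = b"meet at noon"

tag_from_alice = hmac.new(shared_mac_key, message, hashlib.sha256).digest()
tag_from_bob   = hmac.new(shared_mac_key, message, hashlib.sha256).digest()

# The tags are identical, so possession of a valid tag does not identify its author.
assert hmac.compare_digest(tag_from_alice, tag_from_bob)
```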
|
https://en.wikipedia.org/wiki/Deniable_authentication
|
Theclosed-world assumption(CWA), in aformal system of logicused forknowledge representation, is the presumption that a statement that is true is also known to be true. Therefore, conversely, what is not currently known to be true, is false. The same name also refers to alogicalformalization of this assumption byRaymond Reiter.[1]The opposite of the closed-world assumption is theopen-world assumption(OWA), stating that lack of knowledge does not imply falsity. Decisions on CWA vs. OWA determine the understanding of the actual semantics of a conceptual expression with the same notations of concepts. A successfulformalization of natural language semanticsusually cannot avoid an explicit revelation of whether the implicit logical backgrounds are based on CWA or OWA.
Negation as failureis related to the closed-world assumption, as it amounts to believing false every predicate that cannot be proved to be true.
In the context ofknowledge management, the closed-world assumption is used in at least two situations: (1) when the knowledge base is known to be complete (e.g., a corporate database containing records for every employee), and (2) when the knowledge base is known to be incomplete but a "best" definite answer must be derived from incomplete information. For example, if adatabasecontains the following table reporting editors who have worked on a given article, a query on the people not having edited the article on Formal Logic is usually expected to return "Sarah Johnson".
In the closed-world assumption, the table is assumed to becomplete(it lists all editor–article relationships), and Sarah Johnson is the only editor who has not edited the article on Formal Logic. In contrast, with the open-world assumption the table is not assumed to contain all editor–article tuples, and the answer to who has not edited the Formal Logic article is unknown. There is an unknown number of editors not listed in the table, and an unknown number of articles edited by Sarah Johnson that are also not listed in the table.
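A minimal Python sketch of the two readings of such a query. The table contents below are hypothetical (the article only fixes that Sarah Johnson appears as an editor who has not edited the Formal Logic article); the other names are made up.

```python
# Hypothetical editor-article table; only "Sarah Johnson" is taken from the article text.
edits = {
    ("Sarah Johnson", "Model Theory"),
    ("John Smith", "Formal Logic"),
    ("Ana Lopez", "Formal Logic"),
}
editors = {editor for editor, _ in edits}

# Closed-world assumption: the table is complete, so a missing tuple is false.
cwa_answer = {e for e in editors if (e, "Formal Logic") not in edits}
print(cwa_answer)          # {'Sarah Johnson'}

# Open-world assumption: a missing tuple is merely unknown, so the query cannot be
# answered definitively from the table alone.
owa_answer = "unknown"
```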
The first formalization of the closed-world assumption in formal logic consists in adding to the knowledge base the negation of the literals that are not currently entailed by it. The result of this addition is always consistent if the knowledge base is in Horn form, but is not guaranteed to be consistent otherwise. For example, the knowledge base{\displaystyle \{English(Fred)\lor Irish(Fred)\}}
entails neitherEnglish(Fred){\displaystyle English(Fred)}norIrish(Fred){\displaystyle Irish(Fred)}.
Adding the negation of these two literals to the knowledge base leads to{\displaystyle \{English(Fred)\lor Irish(Fred),\;\neg English(Fred),\;\neg Irish(Fred)\},}
which is inconsistent. In other words, this formalization of the closed-world assumption sometimes turns a consistent knowledge base into an inconsistent one. The closed-world assumption does not introduce an inconsistency on a knowledge baseK{\displaystyle K}exactly when the intersection of allHerbrand modelsofK{\displaystyle K}is also a model ofK{\displaystyle K}; in the propositional case, this condition is equivalent toK{\displaystyle K}having a single minimal model, where a model is minimal if no other model has a subset of variables assigned to true.
Alternative formalizations not suffering from this problem have been proposed. In the following description, the considered knowledge base K{\displaystyle K} is assumed to be propositional. In all cases, the formalization of the closed-world assumption is based on adding to K{\displaystyle K} the negation of the formulae that are "free for negation" for K{\displaystyle K}, i.e., the formulae that can be assumed to be false. In other words, the closed-world assumption applied to a knowledge base K{\displaystyle K} generates the knowledge base{\displaystyle K\cup \{\neg f\mid f\in F\}.}
The setF{\displaystyle F}of formulae that are free for negation inK{\displaystyle K}can be defined in different ways, leading to different formalizations of the closed-world assumption. The following are the definitions off{\displaystyle f}being free for negation in the various formalizations.
The ECWA and the formalism ofcircumscriptioncoincide on propositional theories.[5][6]The complexity of query answering (checking whether a formula is entailed by another one under the closed-world assumption) is typically in the second level of thepolynomial hierarchyfor general formulae, and ranges fromPtocoNPforHorn formulae. Checking whether the original closed-world assumption introduces an inconsistency requires at most a logarithmic number of calls to anNP oracle; however, the exact complexity of this problem is not currently known.[7]
In situations where it is not possible to assume a closed world for all predicates, yet some of them are known to be closed, thepartial-closed world assumptioncan be used. This regime considers knowledge bases generally to be open, i.e., potentially incomplete, yet allows to use completeness assertions to specify parts of the knowledge base that are closed.[8]
The language of logic programs withstrong negationallows us to postulate the closed-world assumption for some statements and leave the other statements in the realm of the open-world assumption.[9]An intermediate ground between OWA and CWA is provided by thepartial-closed world assumption(PCWA). Under the PCWA, the knowledge base is generally treated under open-world semantics, yet it is possible to assert parts that should be treated under closed-world semantics, via completeness assertions. The PCWA is especially needed for situations where the CWA is not applicable due to an open domain, yet the OWA is too credulous in allowing anything to be possibly true.[10][11]
|
https://en.wikipedia.org/wiki/Closed_world_assumption
|
TheKimball lifecycleis a methodology for developingdata warehouses, and has been developed byRalph Kimballand a variety of colleagues. The methodology "covers a sequence of high level tasks for the effectivedesign,developmentanddeployment" of a data warehouse orbusiness intelligencesystem.[1]It is considered a "bottom-up" approach to data warehousing as pioneered by Ralph Kimball, in contrast to the older "top-down" approach pioneered byBill Inmon.[2]
According to Ralph Kimball et al., the planning phase is the start of the lifecycle. It is a planning phase in which a project is a single iteration of the lifecycle, while a program is the broader, ongoing coordination of resources. When launching a project or program, Kimball et al. suggest the following three focus areas:
This is an ongoing discipline in the project. The purpose is to keep the project/program on course, develop a communication plan and manage expectations.
This phase ormilestoneof the project is about making theproject teamunderstand thebusiness requirements. Its purpose is to establish a foundation for all the following activities in the lifecycle. Kimball et al. makes it clear that it is important for the project team to talk with the business users, and team members should be prepared to focus on listening and to document the user interviews. An output of this step is theenterprise bus matrix.
The top track holds two milestones:
Dimensional modelingis a process in which the business requirements are used to design dimensional models for the system.
Physical design is the phase where the database is designed. It involves the database environment as well as security.
Extract, transform, load(ETL) design and development is the design of some of the heavy procedures in the data warehouse and business intelligence system. Kimball et al. suggests four parts to this process, which are further divided into 34 subsystems:[3]
Business intelligence application designdeals with designing and selecting some applications to support the business requirements. Business intelligence application development use the design to develop and validate applications to support the business requirements.
When the three tracks are complete they all end up in the finaldeployment. This phase requires planning and should includepre-deployment testing,documentation, training and maintenance andsupport.
When the deployment has finished the system will need proper maintenance to stay alive. This includesdata reconciliation, execution and monitoring andperformance tuning.
As the project can be seen as part of the larger iterative program, it is likely that the system will want to expand. There will be projects to add new data as well as reaching new segments of the business areas. The lifecycle then starts over again.
|
https://en.wikipedia.org/wiki/The_Kimball_lifecycle
|
Dimensional modeling(DM) is part of theBusiness Dimensional Lifecyclemethodology developed byRalph Kimballwhich includes a set of methods, techniques and concepts for use indata warehousedesign.[1]: 1258–1260[2]The approach focuses on identifying the keybusiness processeswithin a business and modelling and implementing these first before adding additional business processes, as abottom-up approach.[1]: 1258–1260An alternative approach fromInmonadvocates a top down design of the model of all the enterprise data using tools such asentity-relationship modeling(ER).[1]: 1258–1260
Dimensional modeling always uses the concepts of facts (measures), and dimensions (context). Facts are typically (but not always) numeric values that can be aggregated, and dimensions are groups of hierarchies and descriptors that define the facts. For example, sales amount is a fact; timestamp, product, register#, store#, etc. are elements of dimensions. Dimensional models are built by business process area, e.g. store sales, inventory, claims, etc. Because the differentbusiness process areasshare some but not all dimensions, efficiency in design, operation, and consistency, is achieved usingconformed dimensions, i.e. using one copy of the shared dimension across subject areas.[citation needed]
Dimensional modeling does not necessarily involve a relational database. The same modeling approach, at the logical level, can be used for any physical form, such as multidimensional database or even flat files. It is oriented around understandability and performance.[citation needed]
The dimensional model is built on a star-like schema or snowflake schema, with dimensions surrounding the fact table.[3][4] To build the schema, the following design model is used:
The process of dimensional modeling builds on a 4-step design method that helps to ensure the usability of the dimensional model and the use of the data warehouse. The basics of the design build on the actual business process which the data warehouse should cover. Therefore, the first step in the model is to describe the business process which the model builds on. This could for instance be a sales situation in a retail store. The business process can be described in plain text or with basic Business Process Model and Notation (BPMN) or other design guides such as the Unified Modeling Language (UML).
After describing the business process, the next step in the design is to declare the grain of the model. The grain of the model is the exact description of what the dimensional model should be focusing on. This could for instance be “An individual line item on a customer slip from a retail store”. To clarify what the grain means, you should pick the central process and describe it with one sentence. Furthermore, the grain (sentence) is what you are going to build your dimensions and fact table from. You might find it necessary to go back to this step to alter the grain due to new information gained on what your model is supposed to be able to deliver.
The third step in the design process is to define the dimensions of the model. The dimensions must be defined within the grain from the second step of the 4-step process. Dimensions are the foundation of the fact table, and are where the data for the fact table is collected. Typically dimensions are nouns like date, store, inventory, etc. These dimensions are where all the data is stored. For example, the date dimension could contain data such as year, month and weekday.
After defining the dimensions, the next step in the process is to make keys for the fact table. This step identifies the numeric facts that will populate each fact table row. It is closely related to the business users of the system, since this is where they get access to data stored in the data warehouse. Therefore, most of the fact table rows are numerical, additive figures such as quantity or cost per unit, etc.
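As a minimal illustration of the four steps, the sketch below builds a tiny star schema in plain Python: a sales fact table at the grain "one line item per transaction", joined to date and product dimensions through surrogate keys. All table names, columns, and data are invented for the example.

```python
# Minimal star-schema sketch. Dimension rows carry descriptive attributes;
# fact rows carry surrogate keys plus additive numeric measures.
dim_date = {
    1: {"date": "2024-01-05", "year": 2024, "month": 1, "weekday": "Friday"},
    2: {"date": "2024-01-06", "year": 2024, "month": 1, "weekday": "Saturday"},
}
dim_product = {
    10: {"sku": "A-100", "name": "Espresso beans", "category": "Coffee"},
    11: {"sku": "B-200", "name": "Green tea", "category": "Tea"},
}
fact_sales = [  # one row per line item (the declared grain)
    {"date_key": 1, "product_key": 10, "quantity": 2, "sales_amount": 19.80},
    {"date_key": 1, "product_key": 11, "quantity": 1, "sales_amount": 4.50},
    {"date_key": 2, "product_key": 10, "quantity": 3, "sales_amount": 29.70},
]

# A typical query: join facts to the product dimension, then aggregate the
# additive measure by category.
totals = {}
for row in fact_sales:
    category = dim_product[row["product_key"]]["category"]
    totals[category] = totals.get(category, 0.0) + row["sales_amount"]
print(totals)  # {'Coffee': 49.5, 'Tea': 4.5}
```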
Dimensional normalization, or snowflaking, removes redundant attributes that would otherwise be kept in the normal, flattened, de-normalized dimensions. Dimensions are strictly joined together into sub-dimensions.
Snowflaking has an influence on the data structure that differs from many philosophies of data warehouses,[4] in which a single data (fact) table is surrounded by multiple descriptive (dimension) tables.
Developers often don't normalize dimensions due to several reasons:[5]
There are also arguments for why normalization can be useful.[4] It can be an advantage when part of a hierarchy is common to more than one dimension. For example, a geographic dimension may be reusable because both the customer and supplier dimensions use it.
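To make the trade-off concrete, the sketch below contrasts a flattened (de-normalized) product dimension with a snowflaked version in which the shared category hierarchy is moved into a sub-dimension. The attribute names and values are invented for the example.

```python
# Flattened (de-normalized) product dimension: the category hierarchy is
# repeated on every row.
dim_product_flat = {
    10: {"sku": "A-100", "name": "Espresso beans", "category": "Coffee", "department": "Beverages"},
    11: {"sku": "B-200", "name": "Green tea",      "category": "Tea",    "department": "Beverages"},
}

# Snowflaked version: the shared hierarchy is normalized into a sub-dimension
# referenced by key, removing the repeated attributes.
dim_category = {
    100: {"category": "Coffee", "department": "Beverages"},
    101: {"category": "Tea",    "department": "Beverages"},
}
dim_product_snowflaked = {
    10: {"sku": "A-100", "name": "Espresso beans", "category_key": 100},
    11: {"sku": "B-200", "name": "Green tea",      "category_key": 101},
}

# Reading the snowflaked form needs one extra join to reach the category attributes.
product = dim_product_snowflaked[10]
print(product["name"], dim_category[product["category_key"]]["category"])
```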
Benefits of the dimensional model are the following:[6]
The benefits of dimensional models largely carry over to Hadoop and similar big data frameworks; however, some features of Hadoop require the standard approach to dimensional modelling to be slightly adapted.[citation needed]
|
https://en.wikipedia.org/wiki/Dimensional_modeling
|
Ralph Kimball (born July 18, 1944[1]) is an author on the subject of data warehousing and business intelligence. He is one of the original architects of data warehousing and is known for long-term convictions that data warehouses must be designed to be understandable and fast.[2][3] His bottom-up methodology, also known as dimensional modeling or the Kimball methodology, is one of the two main data warehousing methodologies, alongside that of Bill Inmon.[2][3]
He is the principal author of the best-selling[4] books The Data Warehouse Toolkit (1996),[5] The Data Warehouse Lifecycle Toolkit (1998), The Data Warehouse ETL Toolkit (2004) and The Kimball Group Reader (2015), published by Wiley and Sons.
After receiving a Ph.D.[4] in 1973 from Stanford University in electrical engineering (specializing in man-machine systems), Ralph joined the Xerox Palo Alto Research Center (PARC). At PARC Ralph was a principal designer of the Xerox Star workstation, the first commercial product to use mice, icons and windows.
Kimball then became vice president of applications at Metaphor Computer Systems, a decision support software and services provider. He developed the Capsule Facility in 1982. The Capsule was a graphical programming technique which connected icons together in a logical flow, allowing a very visual style of programming for non-programmers. The Capsule was used to build reporting and analysis applications at Metaphor.
Kimball founded Red Brick Systems in 1986, serving as CEO until 1992. The company was acquired by Informix, which is now owned by IBM.[6] Red Brick was known for its relational database optimized for data warehousing. Its claim to fame was the use of bit-map indexes to achieve performance gains of almost 10 times that of other database vendors at the time.
Since 1992, Kimball has provided data warehouse consulting and education through various companies such as Ralph Kimball Associates and the Kimball Group.[7][4]
|
https://en.wikipedia.org/wiki/Ralph_Kimball
|
William H. Inmon (born 1945) is an American computer scientist, recognized by many as the father of the data warehouse.[1][2] Inmon wrote the first book, held the first conference (with Arnie Barnett), wrote the first column in a magazine and was the first to offer classes in data warehousing. Inmon created the accepted definition of what a data warehouse is: a subject-oriented, non-volatile, integrated, time-variant collection of data in support of management's decisions. Compared with the approach of the other pioneering architect of data warehousing, Ralph Kimball, Inmon's approach is often characterized as a top-down approach.
William H. Inmon was born July 20, 1945, in San Diego, California. He received his Bachelor of Science degree in mathematics from Yale University in 1967, and his Master of Science degree in computer science from New Mexico State University.
He worked for American Management Systems and Coopers & Lybrand before 1991, when he founded the company Prism Solutions, which he took public. In 1995 he founded Pine Cone Systems, which was later renamed Ambeo. In 1999, he created a corporate information factory web site for his consulting business.[3]
Inmon coined terms such as the government information factory, as well as data warehousing 2.0. Inmon promotes building, usage, and maintenance of data warehouses and related topics. His books include "Building the Data Warehouse" (1992, with later editions) and "DW 2.0: The Architecture for the Next Generation of Data Warehousing" (2008).
In July 2007, Inmon was named by Computerworld as one of the ten people that most influenced the first 40 years of the computer industry.[4]
Inmon's association with data warehousing stems from the fact that he wrote the first[5] book on data warehousing, held the first conference on data warehousing (with Arnie Barnett), wrote the first column in a magazine on data warehousing, has written over 1,000 articles on data warehousing in journals and newsletters, created the first fold-out wall chart for data warehousing, and conducted the first classes on data warehousing.
In 2012, Inmon developed and made public a technology known as "textual disambiguation". Textual disambiguation applies context to raw text and reformats the raw text and context into a standard database format. Once raw text is passed through textual disambiguation, it can easily and efficiently be accessed and analyzed by standard business intelligence technology. Textual disambiguation is accomplished through the execution of TextualETL.
Inmon owns and operates Forest Rim Technology, a company that applies and implements data warehousing solutions executed through textual disambiguation and TextualETL.[6]
Bill Inmon has published more than 60 books in nine languages and 2,000 articles on data warehousing and data management.
|
https://en.wikipedia.org/wiki/Bill_Inmon
|
The Checking Integrated Circuit (CIC) is a lockout chip designed by Nintendo for the Nintendo Entertainment System (NES) video game console in 1985; the chip is part of a system known as 10NES, in which a key (which is stored in the game) is used by the lock (stored in the console) to check that the game is authentic and that the game is for the same region as the console.
The chip was designed in response to the North American video game crash of 1983, which was partially the result of a lack of both publishing and quality control; the idea was that by forcing third-party developers to have their games go through an approval process, Nintendo could stop shovelware from entering the market. Improved designs of the CIC chip were also used in the later Super Nintendo Entertainment System and the Nintendo 64, although running an updated security program that performs additional checks.
The lockout chip was controversial, with several developers opting to release their games without Nintendo's approval by using workarounds; the most well-known of these was Tengen (a subsidiary of Atari Games), which copied the CIC chip, resulting in their games running without issue. In response, Nintendo sued Atari for copyright infringement.[1]
The 10NES system is a lock-out system[2] designed for the North American and European versions of the Nintendo Entertainment System (NES) video game console. The electronic chip serves as a digital lock which can be opened by a key in the games,[3][4] designed to restrict the software that could be operated on the system.[5]
The chip was not present in the original Family Computer in 1983, leading to a large number of unlicensed cartridges in the Asian market.[6] It was, however, added for international variants as a response to the 1983 video game crash in North America,[7] partially caused by an oversaturated market of console games due to lack of publishing control. Nintendo president Hiroshi Yamauchi said in 1986: "Atari collapsed because they gave too much freedom to third-party developers and the market was swamped with rubbish games."[8] By requiring the presence of the 10NES in a game cartridge, Nintendo prevented third-party developers from producing games without Nintendo's approval, and provided the company with licensing fees,[7] a practice it had already established earlier with Famicom games.
The system consists of two parts: a Sharp 4-bit SM590[9][10] microcontroller in the console (the "lock") that checks the inserted cartridge for authentication, and a matching chip in the game cartridge (the "key") that gives the code upon demand.[4] If the cartridge does not successfully provide the authentication, the CIC repeatedly resets the CPU at a frequency of 1 Hz.[3][5][11] This causes the television image and power LED to blink at the same 1 Hz rate and prevents the game from being playable.
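The sketch below only illustrates the general lock-and-key idea described above: lock and key each derive the same value stream from a shared starting state, and the lock holds the CPU in reset on any mismatch. The stream generator here is an arbitrary stand-in and is not the actual, proprietary 10NES program.

```python
import random

# Conceptual lock-and-key sketch (NOT the real 10NES algorithm): both sides
# generate the same 4-bit value stream from a seed; the lock compares the
# key's output to its own and resets the console CPU on any mismatch.
def stream(seed, length):
    rng = random.Random(seed)
    return [rng.randrange(16) for _ in range(length)]  # 4-bit values

def console_boot(lock_seed, cartridge_seed, rounds=8):
    lock_out = stream(lock_seed, rounds)       # computed inside the console CIC
    key_out = stream(cartridge_seed, rounds)   # computed inside the cartridge CIC
    for i in range(rounds):
        if lock_out[i] != key_out[i]:
            print("mismatch at round", i, "-> CIC holds CPU in reset (blinking LED)")
            return False
    print("authentication passed -> game boots")
    return True

console_boot(lock_seed=42, cartridge_seed=42)   # licensed cartridge: seeds agree
console_boot(lock_seed=42, cartridge_seed=7)    # unlicensed cartridge: mismatch
```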
The program used in the NES CIC is called 10NES and was patented under U.S. patent 4,799,635.[5] The source code is copyrighted; only Nintendo can produce the authorization chips. The patent covering the 10NES expired on January 24, 2006, although the copyright is still in effect for exact clones.
Some unlicensed companies created circuits that used a voltage spike to shut off the CIC before it could perform the authentication checks.[12][13]: 286
A few unlicensed games released in Europe and Australia (such as HES games) came in the form of a dongle that would be connected to a licensed cartridge, in order to use that cartridge's CIC lockout chip for authentication.[14] This method also worked on the SNES and was utilized by Super Noah's Ark 3D.[15]
Tengen (Atari Games's NES games subsidiary) took a different tactic: the corporation obtained a description of the code in the lockout chip from the United States Copyright Office by claiming that it was required to defend against present infringement claims in a legal case.[4][16] Tengen then used these documents to design their Rabbit chip, which duplicated the function of the 10NES.[4] Nintendo sued Tengen for these actions. The court found that Tengen did not violate the copyright for copying the portion of code necessary to defeat the protection with current NES consoles, but did violate the copyright for copying portions of the code not being used in the communication between the chip and console.[4] Tengen had copied this code in its entirety because future console releases could have been engineered to pick up the discrepancy. On the initial claim, the court sided with Nintendo on the issue of patent infringement, but noted that Nintendo's patent would likely be deemed obvious, as it was basically U.S. patent 4,736,419 with the addition of a reset pin, which was at the time already commonplace in the world of electronics.[4] An eight-person jury later found that Atari did infringe.[4] While Nintendo was the winner of the initial trial, before it could actually enforce the ruling it would need to have the patent hold up under scrutiny, as well as address Tengen's antitrust claims. Before this occurred, the sides settled.[4]
A small company called RetroZone, the first company to publish games on the NES in over a decade, uses a multi-region lockout chip for NTSC, PAL A, and PAL B called the Ciclone, which was created by reverse engineering Tengen's Rabbit chip. It allows games to be played in more than one region and is intended to make games playable on older hardware that uses the 10NES lockout chip in any of the three regions, although the top-loading NES does not use a lockout chip. The Ciclone chip is the first lockout chip to be developed after the patent for the 10NES had expired.[17] Since then, there have been a few other open source implementations that allow the general public to reproduce multi-region CICs on AVR microcontrollers.[18]
Because the 10NES in the model NES-001 occasionally fails to authenticate legal cartridges, a common modification is to disable the chip entirely by cutting pin 4 on the NES-001's internal 10NES lockout chip.[19]
Towards the end of the SNES lifespan, the CIC was cloned and used in pirate games. Alternatively, the aforementioned method of using a licensed game's CIC chip was possible, as it was used in the SNES version of Super Noah's Ark 3D.[15]
|
https://en.wikipedia.org/wiki/CIC_(Nintendo)
|
Fan translation (or user-generated translation) refers to the unofficial translation of various forms of written or multimedia products made by fans (fan labor), often into a language in which an official translated version is not yet available.[1] Generally, fans do not have formal training as translators[1] but they volunteer to participate in translation projects based on interest in a specific audiovisual genre, TV series, movie, etc.[2]
Notable areas of fan translation include:
Fan translation of audiovisual material, particularly fansubbing of anime, dates back to the 1980s.[1] O'Hagan (2009) argues that fansubbing emerged as a form of protest over "the official often over-edited versions of anime typically aired in dubbed form on television networks outside Japan"[1] and that fans sought more authentic translated versions[1][3] in a shorter time frame.[3]
Early fansubbing and fandubbing efforts involved manipulation of VHS tapes, which was time-consuming and expensive.[6] The first reported fansub produced in the United States was of Lupin III, produced in the mid-1980s, requiring an average of 100 hours per episode to subtitle.[3]
The development of the cultural industry, technological advances, and the expansion of online platforms have led to a dynamic rise in fan translation[citation needed]. This has been followed by an increase in voluntary translation communities as well as in the variety of the content.[7] The largest beneficiaries are the audiences, readers and game players who are also fellow fans of various popular culture products,[4] since they are given the chance to receive first-hand information from foreign cultures. The entertainment industry and other cultural industries also benefit because their products are given global exposure, with a consequence of cultural immersion and cultural assimilation. However, some also consider fan translation a potential threat to professional translation.[8] In fact, fan translation communities are built on a spirit of sharing, volunteering, a do-it-yourself attitude,[4] and, most importantly, passion and enthusiasm for a shared goal. Like many specialization-based and art-based professions, the translation industry demands rich experience and related knowledge.[8] Fan translation therefore need not be regarded as a threat. Instead, to some extent, it carries two significant meanings: for fan translators, it offers valuable experience and preparation, whether or not they wish to take their hobby to a professional level; for professional translators, it serves as a source of reference when they encounter similar situations. In addition, fan translation is no longer limited to movies, video games and fan fiction. Various forms, including educational courses, political speeches, and critical news reports, have appeared in recent years, extending the value of fan translation from its entertaining nature towards social significance.[4] As Henry Jenkins states: "popular culture may be preparing the way for a more meaningful public culture."[9] As a newly emerging phenomenon dependent on the progress of Internet-supported infrastructure, fan translation surpasses its original focus on personal interest and makes itself visible to society at large. As a result, fan translation can be seen, to some extent, as an inevitable trend.[4]
Fan translation often borders on copyright infringement, as fans translate films, video games, comics, etc., often without seeking proper permission from the copyright holders.[10][1] Studies of fan translators have shown that these fans do so because they are enthusiastic about the works they translate and want to help other fans access the material.[10][11] Copyright holders often condone fan translation because it can help expose their products to a wider audience.[1] However, rather than encouraging their works to be translated, many rights holders have threatened creators of fan translations. In 2007, a French teenager was arrested for producing and releasing a translated copy of Harry Potter and the Deathly Hallows in French.[12] In 2013, Swedish police took down a website which hosted fan-made subtitles for users to download.[13] Releasing subtitles without including the original copyrighted work is not generally considered copyright infringement, but works that involve direct release of the copyrighted material, like scanlation, do infringe copyright law.[14] Japanese copyright holders and publishers in particular often take down fan translations, viewing them as pirated versions of their works.[15]
|
https://en.wikipedia.org/wiki/Fan_translation
|
A modchip (short for modification chip) is a small electronic device used to alter or disable artificial restrictions of computers or entertainment devices. Modchips are mainly used in video game consoles, but also in some DVD or Blu-ray players. They introduce various modifications to the host system's function, including the circumvention of region coding, digital rights management, and copy protection checks for the purpose of using media intended for other markets, copied media, or unlicensed third-party (homebrew) software.
Modchips operate by replacing or overriding a system's protection hardware or software. They achieve this by either exploiting existing interfaces in an unintended or undocumented manner, or by actively manipulating the system's internal communication, sometimes to the point of re-routing it to substitute parts provided by the modchip.
Most modchips consist of one or more integrated circuits (microcontrollers, FPGAs, or CPLDs), often complemented with discrete parts, usually packaged on a small PCB to fit within the console system it is designed for. Although there are modchips that can be reprogrammed for different purposes, most modchips are designed to work within only one console system or even only one specific hardware version.
Modchips typically require some degree of technical skill to install since they must be connected to a console's circuitry, most commonly by soldering wires to select traces or chip legs on a system's circuit board. Some modchips allow for installation by directly soldering the modchip's contacts to the console's circuit ("quicksolder"), by the precise positioning of electrical contacts ("solderless"), or, in rare cases, by plugging them into a system's internal or external connector.
Memory cards or cartridges that offer functions similar to modchips work on a completely different concept, namely by exploiting flaws in the system's handling of media. Such devices are not referred to as modchips, even if they are frequently traded under this umbrella term.
The diversity of hardware that modchips operate on and the varying methods they use mean that while modchips are often used for the same goal, they may work in vastly different ways, even when intended for use on the same console. Some of the first modchips for the Wii, known as drive chips, modify the behaviour and communication of the optical drive to bypass security. On the Xbox 360, a common modchip took advantage of the fact that short periods of instability in the CPU could be used to fairly reliably lead it to incorrectly compare security signatures. The precision required in this attack meant that the modchip had to make use of a CPLD. Other modchips, such as the XenoGC and clones for the GameCube, invoke a debug mode where security measures are reduced or absent (in which case a stock Atmel AVR microcontroller was used). A more recent innovation is the optical disc drive emulator (ODDE), which replaces the optical disc drive and allows data to come from another source, bypassing the need to circumvent any security. These often make use of FPGAs to enable them to accurately emulate the timing and performance characteristics of the optical drives.
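The sketch below is a toy software simulation, not the actual Xbox 360 exploit, intended only to show why a precisely timed fault during a byte-by-byte signature comparison can let an invalid signature "pass"; real attacks require hardware-level timing, which is why such modchips use a CPLD.

```python
# Toy illustration of a glitched comparison (invented, simplified model):
# a fault injected at exactly one loop iteration masks a mismatching byte.
def check_signature(expected, provided, glitch_at=None):
    ok = True
    for i, (a, b) in enumerate(zip(expected, provided)):
        equal = (a == b)
        if i == glitch_at:        # the injected fault flips this one comparison
            equal = True
        ok = ok and equal
    return ok

expected = b"\x12\x34\x56\x78"
forged   = b"\x12\x34\x00\x78"   # wrong byte at index 2
print(check_signature(expected, forged))               # False: normally rejected
print(check_signature(expected, forged, glitch_at=2))  # True: glitch masks the mismatch
```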
Most cartridge-based console systems did not have modchips produced for them. They usually implemented copy protection and regional lockout with game cartridges, both on hardware and software level. Converters or passthrough devices have been used to circumvent the restrictions, while flash memory devices (game backup devices) were widely adopted in later years to copy game media. Early in the transition from solid-state to optical media, CD-based console systems did not have regional market segmentation or copy protection measures due to the rarity and high cost of user-writable media at the time.
Modchips started to surface with the PlayStation system, due to the increasing availability and affordability of CD writers and the increasing sophistication of DRM protocols. At the time, a modchip's sole purpose was to allow the use of imported and copied game media.
Today, modchips are available for practically every current console system, often in a great number of variations. In addition to circumventing regional lockout and copy protection mechanisms, modern modchips may introduce more sophisticated modifications to the system, such as allowing the use of user-created software (homebrew), expanding the hardware capabilities of the host system, or even installing an alternative operating system to completely re-purpose the host system (e.g. for use as a home theater PC).
Most modchips open the system to copied media; therefore, the availability of a modchip for a console system is undesirable for console manufacturers. They react by removing the intrusion points exploited by a modchip from subsequent hardware or software versions, changing the PCB layout the modchips are customized for, or by having the firmware or software detect an installed modchip and refuse operation as a consequence. Since modchips often hook into fundamental functions of the host system that cannot be removed or adjusted, these measures may not completely prevent a modchip from functioning but only prompt an adjustment of its installation process or programming, e.g. to include measures to make it undetectable ("stealth") to its host system.
With the advent of online services to be used by video game consoles, some manufacturers have exercised their rights under the service's license agreement to ban consoles equipped with modchips from using those services.[1]
In an effort to dissuade modchip creation, some console manufacturers included the option to run homebrew software or even an alternative operating system on their consoles, such as Linux for PlayStation 2. However, some of these features have been withdrawn at a later date.[2][3][4] An argument can be made that a console system remains largely untouched by modchips as long as its manufacturer provides an official way of running unlicensed third-party software.[5]
One of the most prominent functions of many modchips, the circumvention of copy protection mechanisms, is outlawed by many countries' copyright laws, such as the Digital Millennium Copyright Act in the United States, the European Copyright Directive and its various implementations by the EU member countries, and the Australian Copyright Act. Other laws may apply to the many diversified functions of a modchip; Australian law, for example, specifically allows the circumvention of region coding.
The ambiguity of applicable law, its nonuniform interpretation by the courts, and constant profound changes and amendments to copyright law do not allow for a definitive statement on the legality of modchips. A modchip's legality under a country's legislation may only be asserted individually in court.
Most of the very few cases that have been brought before a court ended with the conviction of the modchip merchant or manufacturer under the respective country's anti-circumvention laws. A small number of cases in the United Kingdom and Australia were dismissed under the argument that a system's copy protection mechanism would not be able to prevent the actual infringement of copyright (the actual process of copying game media) and therefore cannot be considered an effective technical protection measure protected by anti-circumvention laws.[6][7] In 2006, Australian copyright law was amended to effectively close this legal loophole.[8]
In a 2017 lawsuit against a retailer, a Canadian court ruled in favor of Nintendo under anti-circumvention provisions in Canadian copyright law, which prohibit any breaching of technical protection measures. Even though the retailer claimed the products could be used for homebrew, thus asserting exemptions for maintaining interoperability, the court ruled that because Nintendo offers development kits for its platforms, interoperability could be achieved without breaching TPMs, and thus the defence was invalid.[9]
In Japan, modchips were outlawed as part of new legislation in 2018 which made savegame editing and console modding illegal.[10]
An alternative to installing a modchip is the process of softmodding a device. A softmodded device does not need to permanently have any additional hardware pieces inside. Instead, the software of the device or of one of its internal parts is modified in order to change the device's behaviour.
|
https://en.wikipedia.org/wiki/Modchip
|
NTSC-C is a regional lockout created in 2003 by Sony Computer Entertainment for the official launch of its PlayStation 2 gaming system into the mainland Chinese market.[1]
The system's original model, then called PlayStation 2, was launched throughout 2000, 2001 and 2002 in Japan, North America, Europe, Oceania, Hong Kong, Taiwan and South Korea, but it was not introduced in mainland China because of rampant piracy. In November 2003, Sony China Chairman Hiroshi Soda explained the situation:
Sony was previously reluctant to introduce PlayStation 2 into the Chinese market due to the piracy problem. But we changed our minds as we think that the piracy situation cannot be controlled 100 percent, not only in China but also in many other countries and regions in the world. We have to be courageous, to face the reality.[2]
However, the situation changed in November 2003, as Sony China announced that the PlayStation 2 (SCPH-50009 "Satin Silver" type) was planned to be launched in mainland China for Christmas, with an official release date of December 20, 2003. Sales would at first be limited to five large industrialized cities (Beijing, Shanghai, Guangzhou, Shenzhen and Chengdu), after which distribution would start in the whole country. However, on the eve of Christmas, arguing an "unfavorable environment,"[3][4] Sony China delayed the mainland release to the next year, with the system's new "slimline" type PS2 and sales limited to Shanghai and Guangzhou.[5] Meanwhile, Kenichi Fukunaga, a Sony Japan spokesman in Tokyo, reportedly declared that "the company simply had not prepared in time for the China launch."[6]
The "NTSC/C" regional lockout for mainland China was specially created as the system is also a homeNTSCDVD playerwith its specific Zone 6regional codewhich is not compatible with the bordering countries (Japan is Zone 2; South Korea, Hong Kong, and Taiwan are all Zone 3, etc.)
The first batch of NTSC/C games was released in December 2005. Along with Sony Computer Entertainment Japan, third-party publishers included local branches of Bandai and Namco, among others. The model types of NTSC-C PS2 for mainland China were SCPH-70006 CB, SCPH-75006 CB, SCPH-77006 CB, and SCPH-90006 CB.
"C" stands forChina. HoweverHong Kong,MacauandTaiwanare part of theNTSC-Jregion which was initially created forJapan.
The term NTSC-C is used to distinguish regions in console video games, which are played on televisions using the NTSC or PAL display standards. NTSC-C is used as the name of the video gaming region of continental China, despite the country's historical use of PAL, rather than NTSC, as the official TV standard.
Games designated as part of this region will not run on hardware designated as part of the NTSC-J region (which includes the Traditional Chinese 中文版 versions for Hong Kong, Taiwan, Singapore and Malaysia, as opposed to Simplified Chinese for China), NTSC-U or PAL (or PAL-E, where "E" stands for Europe), mostly due to the regional differences of the PAL (SECAM was also used in the early 1990s) and NTSC TV standards, but there is also a concern of copyright protection through regional lockout built into the video game systems and games themselves, as the same product can be released by different publishers on different continents.
|
https://en.wikipedia.org/wiki/NTSC-C
|
NTSC-Jor "System J" is the informal designation for theanaloguetelevisionstandard used inJapan. The system is based on the USNTSC(NTSC-M) standard with minor differences.[1]While NTSC-M is an officialCCIR[2][3][4]andFCC[5][6][7]standard, NTSC-J or "System J" are a colloquial indicators.
The system was introduced by NHK and NTV, with regular color broadcasts starting on September 10, 1960.[8][9]
NTSC-J was replaced by digital broadcasts in 44 of the country's 47 prefectures on 24 July 2011. Analogue broadcasting ended on 31 March 2012 in the three prefectures devastated by the 2011 Tōhoku earthquake and tsunami (Iwate, Miyagi, Fukushima) and the subsequent Fukushima Daiichi nuclear disaster.
The term NTSC-J is also incorrectly and informally used to distinguish regions in console video games played on NTSC televisions (see the Marketing definition below).
Japan implemented the NTSC standard with slight differences. The black and blanking levels of the NTSC-J signal are identical to each other[10] (both at 0 IRE, similar to the PAL video standard), while in American NTSC the black level is slightly higher (7.5 IRE) than the blanking level; because of the way this appears in the waveform, the higher black level is also called pedestal. This small difference does not cause any incompatibility problems, but needs to be compensated by a slight change of the TV brightness setting in order to achieve proper images.
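As a quick numerical check, the sketch below converts IRE levels to millivolts, assuming the common convention that 140 IRE span a 1 V peak-to-peak composite signal; with that assumption, the 7.5 IRE NTSC-M setup comes out at roughly 54 mV, while NTSC-J black sits on the 0 IRE blanking level.

```python
# Convert video levels in IRE to millivolts for a 1 Vpp composite signal,
# where 140 IRE span the full 1 V (sync tip at -40 IRE, peak white at +100 IRE).
MV_PER_IRE = 1000.0 / 140.0   # ~7.14 mV per IRE unit

def ire_to_mv(ire):
    return ire * MV_PER_IRE

print(round(ire_to_mv(0.0), 1))    # 0.0   mV -> NTSC-J black (equal to blanking)
print(round(ire_to_mv(7.5), 1))    # 53.6  mV -> NTSC-M black, the ~54 mV "setup"
print(round(ire_to_mv(100.0), 1))  # 714.3 mV -> peak white above blanking
```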
YIQ color encoding in NTSC-J uses slightly different equations and ranges from regular NTSC. I has a range of 0 to ±334 (±309 on NTSC-M), and Q has a range of 0 to ±293 (±271 on NTSC-M).[11]
The YCbCr chroma equation for NTSC-J is C = (Cb − 512) × 0.545 × sin(ωt) + (Cr − 512) × 0.769 × cos(ωt), while on NTSC-M it is C = (Cb − 512) × 0.504 × sin(ωt) + (Cr − 512) × 0.711 × cos(ωt).[11]
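The snippet below simply evaluates the two chroma equations quoted above to show the effect of the different gain factors. The Cb/Cr sample values (digital values centered on 512, as in the equations) and the phase are arbitrary, chosen only for illustration.

```python
import math

# Evaluate the chroma equations above for NTSC-J and NTSC-M gain factors.
def chroma(cb, cr, sin_gain, cos_gain, omega_t):
    return (cb - 512) * sin_gain * math.sin(omega_t) + (cr - 512) * cos_gain * math.cos(omega_t)

cb, cr, omega_t = 700, 300, math.pi / 4          # arbitrary sample values
print(round(chroma(cb, cr, 0.545, 0.769, omega_t), 1))  # NTSC-J gains
print(round(chroma(cb, cr, 0.504, 0.711, omega_t), 1))  # NTSC-M gains
```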
NTSC-J also uses a white reference (color temperature) of 9300 K instead of the usual NTSC-U standard of 6500 K.[12][13][14]
The over-the-air RF frequencies used in Japan do not match those of the US NTSC standard. On VHF, the frequency spacing for each channel is 6 MHz, as in North America, South America, the Caribbean, South Korea, Taiwan, Burma (Myanmar) and the Philippines, except between channels 7 and 8 (which overlap). Channels 1 through 3 are reallocated for the expansion of the Japanese FM band. On UHF, the frequency spacing for each channel in Japan is the same, but the channel numbers are 1 lower than in the other areas mentioned; for example, channel 13 in Japan is on the same frequency as channel 14. For more information see Television channel frequencies. Channels 13–62 are used for analog and digital TV broadcasting.
The encoding of the stereo subcarrier also differs between NTSC-M/MTS and Japanese EIAJ MTS broadcasts.[15]
The term NTSC-J was informally used to distinguish regions in console video games. NTSC-J is used as the name of the video gaming region of Japan (hence the "J"), South East Asia (some countries only), Taiwan, Hong Kong, Macau, the Philippines and South Korea (now NTSC-K, formerly part of SE Asia with Hong Kong, Taiwan, Japan, etc.).[16][17]
Most games designated as part of this region will not run on hardware designated as part of the NTSC-U, PAL (or PAL-E, where "E" stands for Europe) or NTSC-C (for China) regions, mostly due to the regional differences of the PAL (SECAM was also used in the early 1990s) and NTSC standards.[18][19][20][17] Many older video game systems do not allow games from different regions to be played (accomplished by various forms of regional lockout); however, more modern consoles either leave protection to the discretion of publishers, such as Microsoft's Xbox 360, or discontinue its use entirely, like Sony's PlayStation 3 (with a few exceptions).
China received its own designation due to fears of an influx of illegal copies flooding out of China, which is notorious for its rampant copyright infringements. There is also a concern of copyright protection through regional lockout built into the video game systems and games themselves, as the same product can be edited by different publishers from one continent to another.
|
https://en.wikipedia.org/wiki/NTSC-J
|
NTSC (from National Television System Committee) is the first American standard for analog television, published and adopted in 1941.[1] In 1961, it was assigned the designation System M. It is also known as EIA standard 170.[2]
In 1953, a second NTSC standard was adopted,[3] which allowed for color television broadcast compatible with the existing stock of black-and-white receivers.[4][5][6] It is one of three major color formats for analog television, the others being PAL and SECAM. NTSC color is usually associated with System M; this combination is sometimes called NTSC II.[7][8] The only other broadcast television system to use NTSC color was System J. Brazil used System M with PAL color. Vietnam, Cambodia and Laos used System M with SECAM color; Vietnam later switched to PAL in the early 1990s.
The NTSC/System M standard was used in most of the Americas (except Argentina, Brazil, Paraguay, and Uruguay), Myanmar, South Korea, Taiwan, the Philippines, Japan, and some Pacific Islands nations and territories.
Since the introduction of digital sources (e.g. DVD), the term NTSC has been used to refer to digital formats with a number of active lines between 480 and 487 and a rate of 30 or 29.97 frames per second, serving as a digital shorthand for System M. The so-called NTSC-Film standard has a digital standard resolution of 720 × 480 pixels for DVD-Video, 480 × 480 pixels for Super Video CDs (SVCD, aspect ratio 4:3) and 352 × 240 pixels for Video CDs (VCD).[9] The digital video (DV) camcorder format that is equivalent to NTSC is 720 × 480 pixels.[10] The digital television (DTV) equivalent is 704 × 480 pixels.[10]
The National Television System Committee was established in 1940 by the United States Federal Communications Commission (FCC) to resolve the conflicts between companies over the introduction of a nationwide analog television system in the United States. In March 1941, the committee issued a technical standard for black-and-white television that built upon a 1936 recommendation made by the Radio Manufacturers Association (RMA). Technical advancements of the vestigial sideband technique allowed for the opportunity to increase the image resolution. The NTSC selected 525 scan lines as a compromise between RCA's 441-scan-line standard (already being used by RCA's NBC TV network) and Philco's and DuMont's desire to increase the number of scan lines to between 605 and 800.[11] The standard recommended a frame rate of 30 frames (images) per second, consisting of two interlaced fields per frame at 262.5 lines per field and 60 fields per second. Other standards in the final recommendation were an aspect ratio of 4:3, and frequency modulation (FM) for the sound signal (which was quite new at the time).
In January 1950, the committee was reconstituted to standardize color television. The FCC had briefly approved a 405-line field-sequential color television standard in October 1950, which was developed by CBS.[12] The CBS system was incompatible with existing black-and-white receivers. It used a rotating color wheel, reduced the number of scan lines from 525 to 405, and increased the field rate from 60 to 144, but had an effective frame rate of only 24 frames per second. Legal action by rival RCA kept commercial use of the system off the air until June 1951, and regular broadcasts only lasted a few months before manufacture of all color television sets was banned by the Office of Defense Mobilization in October, ostensibly due to the Korean War.[13][14][15][16] A variant of the CBS system was later used by NASA to broadcast pictures of astronauts from space.[citation needed] CBS rescinded its system in March 1953,[17] and the FCC replaced it on December 17, 1953, with the NTSC color standard, which was cooperatively developed by several companies, including RCA and Philco.[18]
In December 1953, the FCC unanimously approved what is now called the NTSC color television standard (later defined as RS-170a). The compatible color standard retained full backward compatibility with then-existing black-and-white television sets. Color information was added to the black-and-white image by introducing a color subcarrier of precisely 315/88 MHz (usually described as 3.579545 MHz ±10 Hz).[19] The precise frequency was chosen so that horizontal line-rate modulation components of the chrominance signal fall exactly in between the horizontal line-rate modulation components of the luminance signal, such that the chrominance signal could easily be filtered out of the luminance signal on new television sets, and that it would be minimally visible on existing televisions. Due to limitations of frequency divider circuits at the time the color standard was promulgated, the color subcarrier frequency was constructed as a composite frequency assembled from small integers, in this case 5×7×9/(8×11) MHz.[20] The horizontal line rate was reduced to approximately 15,734 lines per second (3.579545 × 2/455 MHz = 9/572 MHz) from 15,750 lines per second, and the frame rate was reduced to 30/1.001 ≈ 29.970 frames per second (the horizontal line rate divided by 525 lines/frame) from 30 frames per second. These changes amounted to 0.1 percent and were readily tolerated by then-existing television receivers.[21][22]
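The numeric relationships in that paragraph can be checked directly with exact rational arithmetic; the sketch below reproduces the subcarrier construction from small integers, the resulting line rate, and the 0.1 percent frame-rate reduction.

```python
from fractions import Fraction

# Check the relationships stated above using exact fractions.
subcarrier_mhz = Fraction(315, 88)                     # color subcarrier = 315/88 MHz
assert subcarrier_mhz == Fraction(5 * 7 * 9, 8 * 11)   # assembled from small integers
print(float(subcarrier_mhz))                           # 3.579545... MHz

line_rate_hz = subcarrier_mhz * 1_000_000 * Fraction(2, 455)  # = 9/572 MHz
print(float(line_rate_hz))                             # ~15734.27 lines per second

frame_rate = line_rate_hz / 525                        # 525 lines per frame
print(float(frame_rate))                               # ~29.970 frames per second
print(float(Fraction(30) / frame_rate))                # 1.001, the ~0.1% reduction
```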
The first publicly announced network television broadcast of a program using the NTSC "compatible color" system was an episode of NBC's Kukla, Fran and Ollie on August 30, 1953, although it was viewable in color only at the network's headquarters.[23] The first nationwide viewing of NTSC color came on the following January 1 with the coast-to-coast broadcast of the Tournament of Roses Parade, viewable on prototype color receivers at special presentations across the country. The first color NTSC television camera was the RCA TK-40, used for experimental broadcasts in 1953; an improved version, the TK-40A, introduced in March 1954, was the first commercially available color television camera. Later that year, the improved TK-41 became the standard camera used throughout much of the 1960s.
The NTSC standard has been adopted by other countries, including some in the Americas and Japan.
With the advent of digital television, analog broadcasts were largely phased out. Most US NTSC broadcasters were required by the FCC to shut down their analog transmitters by February 17, 2009; this deadline was later moved to June 12, 2009. Low-power stations, Class A stations and translators were required to shut down by 2015, although an FCC extension allowed some of those stations operating on Channel 6 to operate until July 13, 2021.[24] The remaining Canadian analog TV transmitters, in markets not subject to the mandatory transition in 2011, were scheduled to be shut down by January 14, 2022, under a schedule published by Innovation, Science and Economic Development Canada in 2017; however, the scheduled transition dates have already passed for several stations listed that continue to broadcast in analog (e.g. CFJC-TV Kamloops, which has not yet transitioned to digital, is listed as having been required to transition by November 20, 2020).[25]
Most countries using the NTSC standard, as well as those using other analog television standards, have switched to, or are in the process of switching to, newer digital television standards, with at least four different standards in use around the world. North America, parts of Central America, and South Korea are adopting or have adopted the ATSC standards, while other countries, such as Japan, are adopting or have adopted other standards instead of ATSC. After nearly 70 years, the majority of over-the-air NTSC transmissions in the United States ceased on June 12, 2009,[26] and by August 31, 2011,[27] in Canada and most other NTSC markets.[28] The majority of NTSC transmissions ended in Japan on July 24, 2011, with the Japanese prefectures of Iwate, Miyagi, and Fukushima ending the next year.[27] After a pilot program in 2013, most full-power analog stations in Mexico left the air on ten dates in 2015, with some 500 low-power and repeater stations allowed to remain in analog until the end of 2016. Digital broadcasting allows higher-resolution television, but digital standard-definition television continues to use the frame rate and number of lines of resolution established by the analog NTSC standard.
NTSC color encoding is used with the System M television signal, which consists of 30⁄1.001 (approximately 29.97) interlaced frames of video per second. Each frame is composed of two fields, each consisting of 262.5 scan lines, for a total of 525 scan lines. The visible raster is made up of 486 scan lines. The later digital standard, Rec. 601, only uses 480 of these lines for the visible raster. The remainder (the vertical blanking interval) allows for vertical synchronization and retrace. This blanking interval was originally designed simply to blank the electron beam of the receiver's CRT to allow for the simple analog circuits and slow vertical retrace of early TV receivers. However, some of these lines may now contain other data such as closed captioning and vertical interval timecode (VITC). In the complete raster (disregarding half lines due to interlacing) the even-numbered scan lines (every other line that would be even if counted in the video signal, e.g. {2, 4, 6, ..., 524}) are drawn in the first field, and the odd-numbered lines (e.g. {1, 3, 5, ..., 525}) are drawn in the second field, yielding a flicker-free image at the field refresh frequency of 60⁄1.001 Hz (approximately 59.94 Hz). For comparison, 625-line (576 visible) systems, usually used with PAL-B/G and SECAM color, have a higher vertical resolution but a lower temporal resolution of 25 frames or 50 fields per second.
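The sketch below simply enumerates the interlacing arithmetic described above (disregarding the half line in each field): even-numbered lines in the first field, odd-numbered lines in the second, refreshed at approximately 59.94 fields per second.

```python
# Interlaced scanning arithmetic for the 525-line System M raster.
TOTAL_LINES = 525
first_field  = [n for n in range(1, TOTAL_LINES + 1) if n % 2 == 0]  # even lines
second_field = [n for n in range(1, TOTAL_LINES + 1) if n % 2 == 1]  # odd lines
print(len(first_field), len(second_field))             # 262 263 (525 total)
print(first_field[:5], second_field[:5])               # [2, 4, 6, 8, 10] [1, 3, 5, 7, 9]

field_rate = 60 / 1.001
print(round(field_rate, 2), round(field_rate / 2, 3))  # 59.94 fields/s, 29.97 frames/s
```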
The NTSC field refresh frequency in the black-and-white system originally exactly matched the nominal 60 Hz frequency of alternating current power used in the United States. Matching the field refresh rate to the power source avoided intermodulation (also called beating), which produces rolling bars on the screen. Synchronization of the refresh rate to the power incidentally helped kinescope cameras record early live television broadcasts, as it was very simple to synchronize a film camera to capture one frame of video on each film frame by using the alternating current frequency to set the speed of the synchronous AC motor-drive camera. This, as mentioned, is how the NTSC field refresh frequency worked in the original black-and-white system; when color was added to the system, however, the refresh frequency was shifted slightly downward by 0.1%, to approximately 59.94 Hz, to eliminate stationary dot patterns in the difference frequency between the sound and color carriers (as explained below in § Color encoding). By the time the frame rate changed to accommodate color, it was nearly as easy to trigger the camera shutter from the video signal itself.
The actual figure of 525 lines was chosen as a consequence of the limitations of the vacuum-tube-based technologies of the day. In early TV systems, a master voltage-controlled oscillator was run at twice the horizontal line frequency, and this frequency was divided down by the number of lines used (in this case 525) to give the field frequency (60 Hz in this case). This frequency was then compared with the 60 Hz power-line frequency and any discrepancy corrected by adjusting the frequency of the master oscillator. For interlaced scanning, an odd number of lines per frame was required in order to make the vertical retrace distance identical for the odd and even fields,[clarification needed] which meant the master oscillator frequency had to be divided down by an odd number. At the time, the only practical method of frequency division was the use of a chain of vacuum tube multivibrators, the overall division ratio being the mathematical product of the division ratios of the chain. Since all the factors of an odd number also have to be odd numbers, it follows that all the dividers in the chain also had to divide by odd numbers, and these had to be relatively small due to the problems of thermal drift with vacuum tube devices. The closest practical sequence to 500 that meets these criteria was 3×5×5×7 = 525. (For the same reason, 625-line PAL-B/G and SECAM use 5×5×5×5, the old British 405-line system used 3×3×3×3×5, and the French 819-line system used 3×3×7×13, etc.)
Colorimetry refers to the specific colorimetric characteristics of the system and its components, including the specific primary colors used, the camera, the display, etc. Over its history, NTSC color had two distinctly defined colorimetries, shown on the accompanying chromaticity diagram as NTSC 1953 and SMPTE C. Manufacturers introduced a number of variations for technical, economic, marketing, and other reasons.[29]
Note: displayed colors are approximate and require a wide gamut display for faithful reproduction.
The original 1953 color NTSC specification, still part of the United States Code of Federal Regulations, defined the colorimetric values of the system as shown in the above table.[30]
Early color television receivers, such as the RCA CT-100, were faithful to this specification (which was based on prevailing motion picture standards), having a larger gamut than most of today's monitors. Their low-efficiency phosphors (notably in the red) were weak and long-persistent, leaving trails after moving objects. Starting in the late 1950s, picture tube phosphors would sacrifice saturation for increased brightness; this deviation from the standard at both the receiver and broadcaster was the source of considerable color variation.
To ensure more uniform color reproduction, some manufacturers incorporated color correction circuits into sets, converting the received signal (encoded for the colorimetric values listed above) to adjust for the actual phosphor characteristics used within the monitor. Since such color correction cannot be performed accurately on the nonlinear gamma-corrected signals transmitted, the adjustment can only be approximated, introducing both hue and luminance errors for highly saturated colors.
Similarly, at the broadcaster stage, in 1968–69 the Conrac Corp., working with RCA, defined a set of controlled phosphors for use in broadcast color picture video monitors.[31] This specification survives today as the SMPTE C phosphor specification:[32]
As with home receivers, it was further recommended[33]that studio monitors incorporate similar color correction circuits so that broadcasters would transmit pictures encoded for the original 1953 colorimetric values, in accordance with FCC standards.
In 1987, the Society of Motion Picture and Television Engineers (SMPTE) Committee on Television Technology, Working Group on Studio Monitor Colorimetry, adopted the SMPTE C (Conrac) phosphors for general use in Recommended Practice 145,[34] prompting many manufacturers to modify their camera designs to directly encode for SMPTE C colorimetry without color correction,[35] as approved in SMPTE standard 170M, "Composite Analog Video Signal – NTSC for Studio Applications" (1994). As a consequence, the ATSC digital television standard states that for 480i signals, SMPTE C colorimetry should be assumed unless colorimetric data is included in the transport stream.[36]
Japanese NTSC never changed primaries and white point to SMPTE C, continuing to use the 1953 NTSC primaries and white point.[33] Both the PAL and SECAM systems used the original 1953 NTSC colorimetry as well until 1970;[33] unlike NTSC, however, the European Broadcasting Union (EBU) rejected color correction in receivers and studio monitors that year and instead explicitly called for all equipment to directly encode signals for the "EBU" colorimetric values.[37]
In reference to the gamuts shown on the CIE chromaticity diagram (above), the variations between the different colorimetries can result in significant visual differences. Adjusting for proper viewing requires gamut mapping via LUTs or additional color grading. SMPTE Recommended Practice RP 167-1995 refers to such an automatic correction as an "NTSC corrective display matrix."[38] For instance, material prepared for 1953 NTSC may look desaturated when displayed on SMPTE C or ATSC/BT.709 displays, and may also exhibit noticeable hue shifts. On the other hand, SMPTE C materials may appear slightly more saturated on BT.709/sRGB displays, or significantly more saturated on P3 displays, if the appropriate gamut mapping is not performed.
NTSC uses a luminance-chrominance encoding system, incorporating concepts invented in 1938 by Georges Valensi. Using a separate luminance signal maintained backward compatibility with black-and-white television sets in use at the time; only color sets would recognize the chroma signal, which was essentially ignored by black-and-white sets.
The red, green, and blue primary color signals (R′, G′, B′) are weighted and summed into a single luma signal, designated Y′ (Y prime),[39] which takes the place of the original monochrome signal. The color difference information is encoded into the chrominance signal, which carries only the color information. This allows black-and-white receivers to display NTSC color signals by simply ignoring the chrominance signal. Some black-and-white TVs sold in the U.S. after the introduction of color broadcasting in 1953 were designed to filter chroma out, but the early B&W sets did not do this and chrominance could be seen as a crawling dot pattern in areas of the picture that held saturated colors.[40]
To derive the separate signals containing only color information, the difference is determined between each color primary and the summed luma. Thus the red difference signal is R′ − Y′ and the blue difference signal is B′ − Y′. These difference signals are then used to derive two new color signals known as I′ (in-phase) and Q′ (in quadrature) in a process called QAM. The I′Q′ color space is rotated relative to the difference-signal color space, such that orange-blue color information (which the human eye is most sensitive to) is transmitted on the I′ signal at 1.3 MHz bandwidth, while the Q′ signal encodes purple-green color information at 0.4 MHz bandwidth; this allows the chrominance signal to use less overall bandwidth without noticeable color degradation. The two signals each amplitude modulate[41] 3.58 MHz carriers which are 90 degrees out of phase with each other,[42] and the results are added together, with the carriers themselves being suppressed.[43][41] The result can be viewed as a single sine wave with varying phase relative to a reference carrier and with varying amplitude. The varying phase represents the instantaneous color hue captured by a TV camera, and the amplitude represents the instantaneous color saturation. The 3+51⁄88 MHz subcarrier is then added to the luminance to form the composite color signal,[41] which modulates the video signal carrier. 3.58 MHz is often stated as an abbreviation instead of 3.579545 MHz.[44]
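As a small illustration of the suppressed-carrier quadrature modulation described above, the sketch below sums two carriers 90 degrees apart, each scaled by a constant I or Q value; the amplitude of the resulting sine corresponds to saturation and its phase (relative to the colorburst reference) to hue. The I and Q values are arbitrary and the signals are treated as constants for simplicity.

```python
import math

F_SC = 315 / 88 * 1e6   # color subcarrier frequency in Hz (3.579545... MHz)

# Suppressed-carrier quadrature modulation: I and Q each modulate carriers
# that are 90 degrees out of phase, and the results are summed.
def chroma(i_val, q_val, t):
    return i_val * math.cos(2 * math.pi * F_SC * t) + q_val * math.sin(2 * math.pi * F_SC * t)

i_val, q_val = 0.3, 0.4
saturation = math.hypot(i_val, q_val)             # amplitude of the resulting sine
hue_deg = math.degrees(math.atan2(q_val, i_val))  # phase relative to the reference
print(round(saturation, 3), round(hue_deg, 1))    # 0.5  53.1

# One subcarrier cycle sampled at four times the subcarrier frequency.
samples = [round(chroma(i_val, q_val, n / (4 * F_SC)), 3) for n in range(4)]
print(samples)  # [0.3, 0.4, -0.3, -0.4]
```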
For a color TV to recover hue information from the color subcarrier, it must have a zero-phase reference to replace the previously suppressed carrier. The NTSC signal includes a short sample of this reference signal, known as the colorburst, located on the back porch of each horizontal synchronization pulse. The color burst consists of a minimum of eight cycles of the unmodulated (pure original) color subcarrier. The TV receiver has a local oscillator, which is synchronized with these color bursts to create a reference signal. Combining this reference phase signal with the chrominance signal allows the recovery of the I′ and Q′ signals, which, in conjunction with the Y′ signal, are reconstructed into the individual R′, G′, and B′ signals that are then sent to the CRT to form the image.
In CRT televisions, the NTSC signal is turned into three color signals: red, green, and blue, each controlling an electron gun that is designed to excite only the corresponding red, green, or blue phosphor dots. TV sets with digital circuitry use sampling techniques to process the signals but the result is the same. For both analog and digital sets processing an analog NTSC signal, the original three color signals are transmitted using three discrete signals (Y, I and Q) and then recovered as three separate colors (R, G, and B) and presented as a color image.
When a transmitter broadcasts an NTSC signal, it amplitude-modulates a radio-frequency carrier with the NTSC signal just described, while it frequency-modulates a carrier 4.5 MHz higher with the audio signal. If non-linear distortion happens to the broadcast signal, the 3+51⁄88 MHz color carrier may beat with the sound carrier to produce a dot pattern on the screen. To make the resulting pattern less noticeable, designers adjusted the original 15,750 Hz scanline rate down by a factor of 1.001 (100⁄1,001%) to match the audio carrier frequency divided by the factor 286, resulting in a field rate of approximately 59.94 Hz. This adjustment ensures that the difference between the sound carrier and the color subcarrier (the most problematic intermodulation product of the two carriers) is an odd multiple of half the line rate, which is the necessary condition for the dots on successive lines to be opposite in phase, making them least noticeable.
The 59.94 Hz rate is derived from the following calculations. Designers chose to make the chrominance subcarrier frequency an n + 0.5 multiple of the line frequency to minimize interference between the luminance signal and the chrominance signal. (Another way this is often stated is that the color subcarrier frequency is an odd multiple of half the line frequency.) They then chose to make the audio subcarrier frequency an integer multiple of the line frequency to minimize visible (intermodulation) interference between the audio signal and the chrominance signal. The original black-and-white standard, with its 15,750 Hz line frequency and 4.5 MHz audio subcarrier, does not meet these requirements, so designers had to either raise the audio subcarrier frequency or lower the line frequency. Raising the audio subcarrier frequency would prevent existing (black-and-white) receivers from properly tuning in the audio signal. Lowering the line frequency is comparatively innocuous, because the horizontal and vertical synchronization information in the NTSC signal allows a receiver to tolerate a substantial amount of variation in the line frequency. So the engineers chose to change the line frequency for the color standard. In the black-and-white standard, the ratio of audio subcarrier frequency to line frequency is 4.5 MHz / 15,750 Hz = 285 + 5⁄7. In the color standard, this ratio is rounded to the integer 286, which means the color standard's line rate is 4.5 MHz / 286 ≈ 15,734 + 266⁄1,001 Hz. Maintaining the same number of scan lines per field (and frame), the lower line rate must yield a lower field rate. Dividing 4,500,000⁄286 lines per second by 262.5 lines per field gives approximately 59.94 fields per second.
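The arithmetic in this paragraph can be checked directly; the short Python calculation below uses only the figures quoted above.

```python
# Reproducing the 59.94 Hz derivation from the quoted figures.
line_rate_bw = 15_750          # black-and-white line rate, Hz
audio_sc = 4.5e6               # audio subcarrier, Hz

print(audio_sc / line_rate_bw)          # 285.714... = 285 + 5/7
line_rate_color = audio_sc / 286        # ratio rounded to 286
print(line_rate_color)                  # 15734.265... Hz
print(line_rate_color / 262.5)          # 59.9400... fields per second
```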
An NTSC television channel as transmitted occupies a total bandwidth of 6 MHz. The actual video signal, which is amplitude-modulated, is transmitted between 500 kHz and 5.45 MHz above the lower bound of the channel. The video carrier is 1.25 MHz above the lower bound of the channel. Like most AM signals, the video carrier generates two sidebands, one above the carrier and one below. The sidebands are each 4.2 MHz wide. The entire upper sideband is transmitted, but only 1.25 MHz of the lower sideband, known as a vestigial sideband, is transmitted. The color subcarrier, as noted above, is 3.579545 MHz above the video carrier, and is quadrature-amplitude-modulated with a suppressed carrier. The audio signal is frequency modulated, like the audio signals broadcast by FM radio stations in the 88–108 MHz band, but with a 25 kHz maximum frequency deviation, as opposed to the 75 kHz used on the FM band, making analog television audio signals sound quieter than FM radio signals as received on a wideband receiver. The main audio carrier is 4.5 MHz above the video carrier, making it 250 kHz below the top of the channel. Sometimes a channel may contain an MTS signal, which offers more than one audio signal by adding one or two subcarriers on the audio signal, each synchronized to a multiple of the line frequency. This is normally the case when stereo audio and/or second audio program signals are used. The same extensions are used in ATSC, where the ATSC digital carrier is broadcast at 0.31 MHz above the lower bound of the channel.
"Setup" is a 54 mV (7.5 IRE) voltage offset between the "black" and "blanking" levels. It is unique to NTSC. CVBS stands for Color, Video, Blanking, and Sync.
The following table shows the values for the basic RGB colors, encoded in NTSC.[45]
There is a large difference in frame rate between film, which runs at 24 frames per second, and the NTSC standard, which runs at approximately 29.97 (10 MHz × 63⁄88 ÷ 455 ÷ 525) frames per second.
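The expression quoted in parentheses can be verified numerically (a quick check using the figures above):

```python
# 10 MHz x 63/88, divided by 455 and then by 525 lines per frame.
print(10e6 * 63 / 88 / 455 / 525)   # 29.9700... frames per second
print(30 / 1.001)                   # the same value written as 30/1.001
```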
In regions that use 25-fps television and video standards, this difference can be overcome by speed-up.
For 30-fps standards, a process called "3:2 pulldown" is used. One film frame is transmitted for three video fields (lasting 1+1⁄2 video frames), and the next frame is transmitted for two video fields (lasting 1 video frame). Two film frames are thus transmitted in five video fields, for an average of 2+1⁄2 video fields per film frame. The average frame rate is thus 60 ÷ 2.5 = 24 frames per second, so the average film speed is nominally exactly what it should be. (In reality, over the course of an hour of real time, 215,827.2 video fields are displayed, representing 86,330.88 frames of film, while in an hour of true 24-fps film projection, exactly 86,400 frames are shown: thus, 29.97-fps NTSC transmission of 24-fps film runs at 99.92% of the film's normal speed.) Still-framing on playback can display a video frame with fields from two different film frames, so any difference between the frames will appear as a rapid back-and-forth flicker. There can also be noticeable jitter/"stutter" during slow camera pans (telecine judder).
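The 3-2-3-2 cadence described above can be sketched in a few lines of Python; the frame labels are arbitrary and interlace parity is ignored, so this only shows how four film frames fill ten fields.

```python
# 3:2 pulldown sketch: alternate film frames are held for 3 and 2 fields.
def pulldown_32(film_frames):
    fields = []
    for idx, frame in enumerate(film_frames):
        hold = 3 if idx % 2 == 0 else 2
        fields.extend([frame] * hold)
    return fields

fields = pulldown_32(["A", "B", "C", "D"])
print(fields)        # ['A', 'A', 'A', 'B', 'B', 'C', 'C', 'C', 'D', 'D']
print(len(fields))   # 10 fields = 5 interlaced video frames
```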
Film shot specifically for NTSC television is usually taken at 30 (instead of 24) frames per second to avoid 3:2 pulldown.[46]
To show 25-fps material (such as European television series and some European movies) on NTSC equipment, every fifth frame is duplicated and then the resulting stream is interlaced.
Film shot for NTSC television at 24 frames per second has traditionally been accelerated by 1/24 (to about 104.17% of normal speed) for transmission in regions that use 25-fps television standards. This increase in picture speed has traditionally been accompanied by a similar increase in the pitch and tempo of the audio. More recently, frame-blending has been used to convert 24 FPS video to 25 FPS without altering its speed.
Film shot for television in regions that use 25-fps television standards can be handled in either of two ways:
Because both film speeds have been used in 25-fps regions, viewers can face confusion about the true speed of video and audio, and the pitch of voices, sound effects, and musical performances, in television films from those regions. For example, they may wonder whether the Jeremy Brett series of Sherlock Holmes television films, made in the 1980s and early 1990s, was shot at 24 fps and then transmitted at an artificially fast speed in 25-fps regions, or whether it was shot at 25 fps natively and then slowed to 24 fps for NTSC exhibition.
These discrepancies exist not only in television broadcasts over the air and through cable, but also in the home-video market, on both tape and disc, including LaserDisc and DVD.
In digital television and video, which are replacing their analog predecessors, single standards can accommodate a wider range of frame rates, yet they still show the limits of the analog regional standards. The initial version of the ATSC standard, for example, allowed frame rates of 23.976, 24, 29.97, 30, 59.94, 60, 119.88 and 120 frames per second, but not 25 and 50. Modern ATSC allows 25 and 50 fps.
Because satellite power is severely limited, analog video transmission through satellites differs from terrestrial TV transmission. AM is a linear modulation method, so a given demodulated signal-to-noise ratio (SNR) requires an equally high received RF SNR. The SNR of studio-quality video is over 50 dB, so AM would require prohibitively high powers and/or large antennas.
Wideband FM is used instead to trade RF bandwidth for reduced power. Increasing the channel bandwidth from 6 to 36 MHz allows an RF SNR of only 10 dB or less. The wider noise bandwidth reduces this 40 dB power saving by 36 MHz / 6 MHz ≈ 8 dB, for a substantial net reduction of about 32 dB.
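In decibel terms the trade-off reads as follows; the 40 dB FM improvement figure is taken from the text, and the noise-bandwidth penalty is simply 10·log10 of the bandwidth ratio (which rounds to the 8 dB quoted above).

```python
# Rough decibel bookkeeping for the satellite FM bandwidth/power trade.
import math

fm_gain_db = 40                                   # figure quoted in the text
noise_penalty_db = 10 * math.log10(36e6 / 6e6)    # wider noise bandwidth
print(round(noise_penalty_db, 1))                 # ~7.8 dB (quoted as 8 dB)
print(round(fm_gain_db - noise_penalty_db))       # ~32 dB net power saving
```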
Sound is on an FM subcarrier as in terrestrial transmission, but frequencies above 4.5 MHz are used to reduce aural/visual interference; 6.8, 5.8 and 6.2 MHz are commonly used. Stereo can be multiplex, discrete, or matrix, and unrelated audio and data signals may be placed on additional subcarriers.
A triangular 60 Hz energy dispersal waveform is added to the composite baseband signal (video plus audio and data subcarriers) before modulation. This limits the satellite downlink power spectral density in case the video signal is lost. Otherwise the satellite might transmit all of its power on a single frequency, interfering with terrestrial microwave links in the same frequency band.
In half transponder mode, the frequency deviation of the composite baseband signal is reduced to 18 MHz to allow another signal in the other half of the 36 MHz transponder. This reduces the FM benefit somewhat, and the recovered SNRs are further reduced because the combined signal power must be "backed off" to avoid intermodulation distortion in the satellite transponder. A single FM signal is constant amplitude, so it can saturate a transponder without distortion.
An NTSC frame consists of two fields, F1 (field one) and F2 (field two). The field dominance depends on a combination of factors, including decisions by various equipment manufacturers as well as historical conventions. As a result, most professional equipment has the option to switch between a dominant upper or dominant lower field. It is not advisable to use the terms even or odd when speaking of fields, due to substantial ambiguity: for instance, the line numbering for one system may start at zero while another system starts its line numbering at one, so the same field could be described as either even or odd.[26][47]
While an analog television set does not care about field dominance per se, field dominance is important when editing NTSC video. Incorrect interpretation of field order can cause a shuddering effect as moving objects jump forward and backward on successive fields.
This is of particular importance when interlaced NTSC is transcoded to a format with a different field dominance and vice versa. Field order is also important when transcoding progressive video to interlaced NTSC, as anywhere there is a cut between two scenes in the progressive video, there could be a flash field in the interlaced video if the field dominance is incorrect. The film telecine process, where a three-two pull down is utilized to convert 24 frames to 30, will also produce unacceptable results if the field order is incorrect.
Because each field is temporally unique for material captured with an interlaced camera, converting interlaced to a digital progressive-frame medium is difficult, as each progressive frame will have artifacts of motion on every alternating line. This can be observed in PC-based video-playing utilities and is frequently solved simply by transcoding the video at half resolution and only using one of the two available fields.
Unlike PAL and SECAM, with their many varied underlying broadcast television systems in use throughout the world, NTSC color encoding is almost invariably used with broadcast system M, giving NTSC-M.
NTSC-N was originally proposed in the 1960s to the CCIR as a 50 Hz broadcast method for System N countries Paraguay, Uruguay and Argentina before they chose PAL. In 1978, with the introduction of the Apple II Europlus, it was effectively reintroduced as "NTSC 50", a pseudo-system combining 625-line video with 3.58 MHz NTSC color. For example, an Atari ST running PAL software on an NTSC color display used this system, as the monitor could not decode PAL color. Most analog NTSC television sets and monitors with a V-Hold knob can display this system after adjusting the vertical hold.[48]
Only Japan's variant, "NTSC-J", is slightly different: in Japan, the black level and blanking level of the signal are identical (at 0 IRE), as they are in PAL, while in American NTSC the black level is slightly higher (7.5 IRE) than the blanking level. Since the difference is quite small, a slight turn of the brightness knob is all that is required to correctly show the "other" variant of NTSC on any set; most viewers might not even notice the difference in the first place. The channel encoding on NTSC-J differs slightly from NTSC-M. In particular, the Japanese VHF band runs from channels 1–12 (located on frequencies directly above the 76–90 MHz Japanese FM radio band), while the North American VHF TV band uses channels 2–13 (54–72 MHz, 76–88 MHz and 174–216 MHz) with 88–108 MHz allocated to FM radio broadcasting. Japan's UHF TV channels are therefore numbered from 13 up rather than 14 up, but otherwise use the same UHF broadcasting frequencies as those in North America.
NTSC 4.43 is a pseudo-system that transmits an NTSC color subcarrier of 4.43 MHz instead of 3.58 MHz.[49] The resulting output is only viewable by TVs that support the resulting pseudo-system (such as most PAL TVs).[50] Using a native NTSC TV to decode the signal yields no color, while using an incompatible PAL TV to decode the system yields erratic colors (observed to be lacking red and flickering randomly). The format was used by the USAF TV based in Germany during the Cold War and by Hong Kong Cable Television.[citation needed] It was also found as an optional output on some LaserDisc players sold in markets where the PAL system is used.
The NTSC 4.43 system, while not a broadcast format, appears most often as a playback function of PAL cassette format VCRs, beginning with the Sony 3/4" U-Matic format and then following onto Betamax and VHS format machines, commonly advertised as "NTSC playback on PAL TV".
Multi-standard video monitors were already in use in Europe to accommodate broadcast sources in PAL, SECAM, and NTSC video formats. The heterodyne color-under process of U-Matic, Betamax and VHS lent itself to minor modification of VCR players to accommodate NTSC-format cassettes. The color-under format of VHS uses a 629 kHz subcarrier while U-Matic and Betamax use a 688 kHz subcarrier to carry an amplitude modulated chroma signal for both NTSC and PAL formats. Since the VCR was ready to play the color portion of the NTSC recording using PAL color mode, the PAL scanner and capstan speeds had to be adjusted from PAL's 50 Hz field rate to NTSC's 59.94 Hz field rate, along with a faster linear tape speed.
The changes to the PAL VCR are minor thanks to the existing VCR recording formats. The output of the VCR when playing an NTSC cassette in NTSC 4.43 mode is 525 lines/29.97 frames per second with PAL compatible heterodyned color. The multi-standard receiver is already set to support the NTSC H & V frequencies; it just needs to do so while receiving PAL color.
The existence of those multi-standard receivers was probably part of the drive for region coding of DVDs. As the color signals are component on disc for all display formats, almost no changes would be required for PAL DVD players to play NTSC (525/29.97) discs as long as the display was frame-rate compatible.
In January 1960, (7 years prior to adoption of the modified SECAM version) the experimental TV studio in Moscow started broadcasting using the OSKM system. OSKM was the version of NTSC adapted to European D/K 625/50 standard. The OSKM abbreviation means "Simultaneous system with quadrature modulation" (In Russian: Одновременная Система с Квадратурной Модуляцией). It used the color coding scheme that was later used in PAL (U and V instead of I and Q).
The color subcarrier frequency was 4.4296875 MHz and the bandwidth of the U and V signals was near 1.5 MHz.[51] Only about 4,000 TV sets of four models (Raduga,[52] Temp-22, Izumrud-201 and Izumrud-203[53]) were produced for studying the real quality of TV reception. These TVs were not commercially available, despite being included in the goods catalog for the trade network of the USSR.
The broadcasting with this system lasted about 3 years and was ceased well before SECAM transmissions started in the USSR. None of the current multi-standard TV receivers can support this TV system.
Film content commonly shot at 24 frames/s can be converted to 30 frames/s through the telecine process, which duplicates frames as needed.
Mathematically for NTSC this is relatively simple, as it is only necessary to duplicate every fourth frame. Various techniques are employed. NTSC with an actual frame rate of 24⁄1.001 (approximately 23.976) frames/s is often defined as NTSC-film. A process known as pullup, also known as pulldown, generates the duplicated frames upon playback. This method is common for H.262/MPEG-2 Part 2 digital video, so the original content is preserved and played back on equipment that can display it, or can be converted for equipment that cannot.
For NTSC, and to a lesser extent PAL, reception problems can degrade the color accuracy of the picture: ghosting can dynamically change the phase of the color burst with picture content, thus altering the color balance of the signal. The only receiver compensation is in the professional TV receiver ghost-canceling circuits used by cable companies. The vacuum-tube electronics used in televisions through the 1960s led to various technical problems. Among other things, the color burst phase would often drift. In addition, the TV studios did not always transmit properly, leading to hue changes when channels were changed, which is why NTSC televisions were equipped with a tint control. PAL and SECAM televisions had less of a need for one. SECAM in particular was very robust, but PAL, while excellent in maintaining skin tones (to which viewers are particularly sensitive), would nevertheless distort other colors in the face of phase errors. With phase errors, only "Deluxe PAL" receivers would get rid of "Hanover bars" distortion. Hue controls are still found on NTSC TVs, but color drifting generally ceased to be a problem for more modern circuitry by the 1970s. When compared to PAL in particular, NTSC color accuracy and consistency were sometimes considered inferior, leading video professionals and television engineers to jokingly refer to NTSC as Never The Same Color, Never Twice the Same Color, or No True Skin Colors,[54] while for the more expensive PAL system it was necessary to Pay for Additional Luxury.[citation needed]
The use of NTSC-coded color in S-Video systems, as well as the use of closed-circuit composite NTSC, both eliminate the phase distortions, because there is no reception ghosting in a closed-circuit system to smear the color burst. For VHS videotape, with the horizontal-axis resolution and frame rate of whichever of the three color systems is used with this scheme, the use of S-Video gives the higher-resolution picture quality on monitors and TVs without a high-quality motion-compensated comb filtering section. (The NTSC resolution on the vertical axis is lower than the European standards: 525 lines against 625.) However, it uses too much bandwidth for over-the-air transmission. The Atari 800 and Commodore 64 home computers generate S-video, but only when used with specially designed monitors, as no TV at the time supported separate chroma and luma on standard RCA jacks. In 1987, a standardized four-pin mini-DIN socket was introduced for S-video input with the introduction of S-VHS players, which were the first devices produced to use the four-pin plugs. However, S-VHS never became very popular. Video game consoles in the 1990s began offering S-video output as well.
The standard NTSC video image contains some lines (lines 1–21 of each field) that are not visible (this is known as the vertical blanking interval, or VBI); all are beyond the edge of the viewable image, but only lines 1–9 are used for the vertical-sync and equalizing pulses. The remaining lines were deliberately blanked in the original NTSC specification to provide time for the electron beam in CRT screens to return to the top of the display.
VIR (vertical interval reference), widely adopted in the 1980s, attempts to correct some of the color problems with NTSC video by adding studio-inserted reference data for luminance and chrominance levels on line 19.[55] Suitably equipped television sets could then employ these data in order to adjust the display to a closer match of the original studio image. The actual VIR signal contains three sections, the first having 70 percent luminance and the same chrominance as the color burst signal, and the other two having 50 percent and 7.5 percent luminance respectively.[56]
A less-used successor to VIR, GCR, also added ghost (multipath interference) removal capabilities.
The remaining vertical blanking interval lines are typically used for datacasting or ancillary data such as video editing timestamps (vertical interval timecodes or SMPTE timecodes on lines 12–14[57][58]), test data on lines 17–18, a network source code on line 20, and closed captioning, XDS, and V-chip data on line 21. Early teletext applications also used vertical blanking interval lines 14–18 and 20, but teletext over NTSC was never widely adopted by viewers.[59]
Many stations transmit TV Guide On Screen (TVGOS) data for an electronic program guide on VBI lines. The primary station in a market will broadcast four lines of data, and backup stations will broadcast one line. In most markets the PBS station is the primary host. TVGOS data can occupy any line from 10–25, but in practice it is limited to lines 11–18, 20 and 22. Line 22 is only used by two broadcasters, DirecTV and CFPL-TV.
TiVo data is also transmitted on some commercials and program advertisements so that customers can autorecord the program being advertised, and is also used in weekly half-hour paid programs on Ion Television and the Discovery Channel which highlight TiVo promotions and advertisers.
Below are countries and territories that currently use or once used the NTSC system. Many of these have switched or are currently switching from NTSC to digital television standards such as ATSC (United States, Canada, Mexico, Suriname, Jamaica, South Korea, Saint Lucia, Bahamas, Barbados, Grenada, Antigua and Barbuda, Haiti), ISDB (Japan, Philippines, part of South America and Saint Kitts and Nevis), DVB-T (Taiwan, Panama, Colombia, Myanmar, and Trinidad and Tobago) or DTMB (Cuba).
The following countries and regions no longer use NTSC for terrestrial broadcasts.
|
https://en.wikipedia.org/wiki/NTSC
|
Phase Alternating Line (PAL) is a color encoding system for analog television. It was one of three major analogue colour television standards, the others being NTSC and SECAM. In most countries it was broadcast at 625 lines, 50 fields (25 frames) per second, and associated with CCIR analogue broadcast television systems B, D, G, H, I or K. The articles on analog broadcast television systems further describe frame rates, image resolution, and audio modulation.
PAL video is composite video because luminance (luma, the monochrome image) and chrominance (chroma, the colour applied to the monochrome image) are transmitted together as one signal. A later evolution of the standard, PALplus, added support for widescreen broadcasts with no loss of vertical image resolution, while retaining compatibility with existing sets. Almost all of the countries using PAL are currently in the process of conversion, or have already converted transmission standards to DVB, ISDB or DTMB. The PAL designation continues to be used in some non-broadcast contexts, especially regarding console video games.
PAL was adopted by most European countries, by several African countries including South Africa, by Argentina, Brazil, Paraguay and Uruguay, and by most of Asia Pacific (including the Middle East and South Asia).[1] Countries in those regions that did not adopt PAL were France,[2] Francophone Africa,[2] several ex-Soviet states,[2] Japan,[3] South Korea, Liberia, Myanmar, the Philippines,[3] and Taiwan.[3]
With the introduction of home video releases and later digital sources (e.g. DVD-Video), the name "PAL" might be used to refer to digital formats, even though they use completely different colour encoding systems. For instance, 576i (576 interlaced lines) digital video with colour encoded as YCbCr, intended to be backward compatible and easily displayed on legacy PAL devices, is usually referred to as "PAL" (e.g. "PAL DVD"). Likewise, video game consoles outputting a 50 Hz signal might be labeled as "PAL", as opposed to 60 Hz on NTSC machines. These designations should not be confused with the analog colour system itself.
In the 1950s, Western European countries began planning to introduce colour television and were faced with the fact that the NTSC standard demonstrated several weaknesses, including colour tone shifting under poor transmission conditions, which became a major issue considering Europe's geographical and weather-related particularities. To overcome NTSC's shortcomings, alternative standards were devised, resulting in the development of the PAL and SECAM standards. The goal was to provide a colour TV standard for the European picture frequency of 50 fields per second (50 hertz) while eliminating the problems seen with NTSC.
PAL was developed by Walter Bruch at Telefunken in Hanover, West Germany, with important input from Gerhard Mahler.[4] The format was patented by Telefunken in December 1962, citing Bruch as inventor,[5][6] and unveiled to members of the European Broadcasting Union (EBU) on 3 January 1963.[6] When asked why the system was named "PAL" and not "Bruch", the inventor answered that a "Bruch system" would probably not have sold very well ("Bruch" is the German word for "breakage"[7]).
The first broadcasts began in the United Kingdom in July 1967, followed by West Germany at the Berlin IFA on August 25.[6][8] The BBC channel initially using the broadcast standard was BBC2, which had been the first UK TV service to introduce "625 lines" during 1964. The Netherlands and Switzerland started PAL broadcasts by 1968, with Austria following the next year.[6]
The Telefunken PALcolour 708T[9] was the first commercial PAL TV set. It was followed by the Loewe-Farbfernseher S 920 and F 900.[10]
Telefunken was later bought by the French electronics manufacturer Thomson. Thomson also bought the Compagnie Générale de Télévision, where Henri de France developed SECAM, the first European standard for colour television. Thomson, now called Technicolor SA, also owns the RCA brand and licences it to other companies; Radio Corporation of America, the originator of that brand, created the NTSC colour TV standard before Thomson became involved.
In Italy, Indesit, in co-operation with SEIMART, at first tried to develop its own standard, ISA (Identificazione a Soppressione Alternata). While it presented very interesting technical and qualitative characteristics, it arrived too late, and its eventual adoption would have resulted in heavy political and economic consequences, so the system was abandoned in favor of PAL in 1975.[11][12]
The Soviet Union developed two further systems, mixing concepts from PAL and SECAM, known as TRIPAL and NIIR, that never went beyond tests.[6]
In 1993,[13] an evolution of PAL aimed at improving and enhancing the format by allowing 16:9 aspect ratio broadcasts, while remaining compatible with existing television receivers,[14] was introduced. Named PALplus, it was defined by ITU recommendation BT.1197-1. It was developed at the University of Dortmund in Germany, in cooperation with German terrestrial broadcasters and European and Japanese manufacturers. Adoption was limited to European countries.
With the introduction of digital broadcasts and signal sources (e.g. DVDs, game consoles), the term PAL was used imprecisely to refer to the 625-line/50 Hz television system in general, to differentiate it from the 525-line/60 Hz system generally used with NTSC. For example, DVDs were labelled as PAL or NTSC (referring to the line count and frame rate)[15] even though technically the discs carry neither a PAL nor an NTSC encoded signal. These devices would still have analog outputs (e.g. composite video output), and would convert the digital signals (576i or 480i) to the analog standards to ensure compatibility. CCIR 625/50 and EIA 525/60 are the proper names for these (line count and field rate) standards; PAL and NTSC, on the other hand, are methods of encoding colour information in the signal.
The "PAL-D", "PAL-N", "PAL-H" and "PAL-K" designations in this section describe PAL decoding methods and are unrelated to broadcast systems with similar names.[6]
The Telefunken licence covered any decoding method that relied on the alternating subcarrier phase to reduce phase errors, described as "PAL-D" for "delay", and "PAL-N" for "new" or "Chrominance Lock".[6]
This excluded very basic PAL decoders that relied on the human eye to average out the odd/even line phase errors, and in the early 1970s some Japanese set manufacturers developed basic decoding systems to avoid paying royalties to Telefunken. These variations are known as "PAL-S" (for "simple" or "Volks-PAL"),[16] operating without a delay line and suffering from the "Hanover bars" effect. An example of this solution is the Kuba Porta Color CK211P set.[6] Another solution was to use a 1H analogue delay line to allow decoding of only the odd or even lines. For example, the chrominance on odd lines would be switched directly through to the decoder and also be stored in the delay line. Then, on even lines, the stored odd line would be decoded again. This method (known as 'gated NTSC') was adopted by Sony on their 1970s Trinitron sets (KV-1300UB to KV-1330UB), and came in two versions: "PAL-H" and "PAL-K" (averaging over multiple lines).[6][16] It effectively treated PAL as NTSC, suffering from hue errors and other problems inherent in NTSC, and required the addition of a manual hue control.
Most PAL systems encode the colour information using a variant of the Y′UV colour space. Y′ comprises the monochrome luma signal, with the three RGB colour channels mixed down onto two, U and V.
Like NTSC, PAL uses a quadrature amplitude modulated subcarrier carrying the chrominance information added to the luma video signal to form a composite video baseband signal. The frequency of this subcarrier is 4.43361875 MHz for PAL 4.43, compared to 3.579545 MHz for NTSC 3.58. The SECAM system, on the other hand, uses a frequency modulation scheme on its two line-alternate colour subcarriers, 4.25000 and 4.40625 MHz.
The name "Phase Alternating Line" describes the way that the phase of part of the colour information on the video signal is reversed with each line, which automatically corrects phase errors in the transmission of the signal by cancelling them out, at the expense of vertical frame colour resolution. Lines where the colour phase is reversed compared to NTSC are often called PAL or phase-alternation lines, which justifies one of the expansions of the acronym, while the other lines are called NTSC lines. Early PAL receivers relied on the human eye to do that cancelling; however, this resulted in a comb-like effect known as Hanover bars on larger phase errors. Thus, most receivers now use a chrominance analogue delay line, which stores the received colour information on each line of display; an average of the colour information from the previous line and the current line is then used to drive the picture tube. The effect is that phase errors result in saturation changes, which are less objectionable than the equivalent hue changes of NTSC. A minor drawback is that the vertical colour resolution is poorer than the NTSC system's, but since the human eye also has a colour resolution that is much lower than its brightness resolution, this effect is not visible. In any case, NTSC, PAL, and SECAM all have chrominance bandwidth (horizontal colour detail) reduced greatly compared to the luma signal.
The 4.43361875 MHz frequency of the colour carrier is a result of 283.75 colour clock cycles per line plus a 25 Hz offset to avoid interference. Since the line frequency (number of lines per second) is 15625 Hz (625 lines × 50 Hz ÷ 2), the colour carrier frequency calculates as follows: 4.43361875 MHz = 283.75 × 15625 Hz + 25 Hz.
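This relationship is easy to verify with the values just given:

```python
# PAL-B/G colour subcarrier from cycles-per-line plus the 25 Hz offset.
line_rate = 625 * 50 / 2            # 15625 Hz
print(283.75 * line_rate + 25)      # 4433618.75 Hz = 4.43361875 MHz
```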
The 50 Hz figure is the field (refresh) rate of the display needed to create the illusion of motion, while 625 is the number of scan lines, i.e. the vertical resolution, that the PAL system supports.
The original colour carrier is required by the colour decoder to recreate the colour difference signals. Since the carrier is not transmitted with the video information, it has to be generated locally in the receiver. In order that the phase of this locally generated signal can match the transmitted information, a 10-cycle burst of colour subcarrier is added to the video signal shortly after the line sync pulse, but before the picture information, during the so-called back porch. This colour burst is not actually in phase with the original colour subcarrier, but leads it by 45 degrees on the odd lines and lags it by 45 degrees on the even lines. This swinging burst enables the colour decoder circuitry to distinguish the phase of the R − Y′ vector, which reverses every line.
For PAL-B/G the signal has these characteristics.
(Total horizontal sync time 12.05 μs)
After 0.9 μs, a 2.25 ± 0.23 μs colour burst of 10 ± 1 cycles is sent. Most rise/fall times are in the 250 ± 50 ns range. Amplitude is 100% for white level, 30% for black, and 0% for sync.[17]
The CVBS electrical amplitude is 1.0 V peak-to-peak with an impedance of 75 Ω.[19]
The vertical timings are:
(Total vertical sync time 1.6 ms)
As PAL is interlaced, every two fields are summed to make a complete picture frame.
PAL colorimetry, as defined by the ITU in Recommendation BT.470, is based on CIE 1931 x,y coordinates:[21]
The assumed display gamma is defined as 2.8.[21] The PAL-M system uses color primary and gamma values similar to NTSC.[22] Color is encoded using the YUV color space.
Luma (E′Y) is derived from the red, green, and blue (E′R, E′G, E′B) gamma pre-corrected (E′) primary signals:[18]
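The weighting itself does not survive in the text above; for reference, the usual BT.470/Rec. 601 luma weights (a standard-value aside, not quoted from this source) are:

```latex
E'_{\text{Y}} = 0.299\,E'_{\text{R}} + 0.587\,E'_{\text{G}} + 0.114\,E'_{\text{B}}
```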
E′U and E′V are used to transmit chrominance. Each has a typical bandwidth of 1.3 MHz.
Composite PAL signal = E′Y + E′U·sin(ωt) + E′V·cos(ωt) + timing,[18] where ω = 2πFSC.
The subcarrier frequency FSC is 4.43361875 MHz (±5 Hz) for PAL-B/D/G/H/I/N.
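A minimal Python sketch of the composite expression above, with the line-by-line sign alternation of the V component that gives PAL its name; the sample rate, line duration, and pixel values are illustrative assumptions, and sync/burst timing is omitted.

```python
# Composite PAL sketch: E'Y + E'U*sin(wt) +/- E'V*cos(wt), sign alternating per line.
import numpy as np

F_SC = 4.43361875e6
FS = 4 * F_SC
t = np.arange(0, 64e-6, 1 / FS)          # one nominal 64 microsecond line

def composite_line(ey, eu, ev, line_number):
    v_sign = 1 if line_number % 2 == 0 else -1      # the PAL switch
    w = 2 * np.pi * F_SC
    return ey + eu * np.sin(w * t) + v_sign * ev * np.cos(w * t)

line_a = composite_line(0.5, 0.1, 0.2, 0)
line_b = composite_line(0.5, 0.1, 0.2, 1)

# Summing two adjacent lines directly cancels the alternating V term; a real
# decoder re-inverts alternate lines so that transmission phase errors cancel.
print(np.allclose((line_a + line_b) / 2,
                  0.5 + 0.1 * np.sin(2 * np.pi * F_SC * t)))   # True
```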
The PAL colour system is usually used with a video format that has 625 lines per frame (576 visible lines, the rest being used for other information such as sync data and captioning) and a refresh rate of 50 interlaced fields per second (equivalent to 25 full frames per second), such systems being B, G, H, I, and N (see broadcast television systems for the technical details of each format).
This ensures video interoperability. However, as some of these standards (B/G/H, I and D/K) use different sound carriers (5.5 MHz, 6.0 MHz and 6.5 MHz respectively), it may result in a video image without audio when viewing a signal broadcast over the air or cable. Some countries in Eastern Europe which formerly used SECAM with systems D and K have switched to PAL while leaving other aspects of their video system the same, resulting in a different sound carrier. Other European countries, by contrast, have changed completely from SECAM-D/K to PAL-B/G.[23]
The PAL-N system has a different sound carrier, and also a different colour subcarrier, and decoding on incompatible PAL systems results in a black-and-white image without sound.
The PAL-M system has a different sound carrier and a different colour subcarrier, and does not use 625 lines or 50 frames/second. This would result in no video or audio at all when viewing a European signal.
The BBC tested their pre-war (but still broadcast until 1985) 405-line monochrome system (CCIR System A) with all three colour standards, including PAL, before the decision was made to abandon 405 lines and transmit colour on 625/System I only.
Many countries have turned off analogue transmissions, so the following no longer applies, except when using devices which output RF signals, such as video recorders.
The majority of countries using or having used PAL have television standards with 625 lines and 50 fields per second. Differences concern the audio carrier frequency and channel bandwidths. The variants are:
Systems B and G are similar. System B specifies 7 MHz channel bandwidth, while System G specifies 8 MHz channel bandwidth. Australia and China used Systems B and D respectively for VHF and UHF channels. Similarly, Systems D and K are similar except for the bands they use: System D is only used on VHF, while System K is only used on UHF. Although System I is used on both bands, it has only been used on UHF in the United Kingdom.
The PAL-L (Phase Alternating Line with CCIR System L broadcast system) standard uses the same video system as PAL-B/G/H (625 lines, 50 Hz field rate, 15.625 kHz line rate), but with a larger 6 MHz video bandwidth rather than 5.5 MHz, and with the audio subcarrier moved to 6.5 MHz. An 8 MHz channel spacing is used for PAL-L, to maintain compatibility with System L channel spacings.
The PAL-N standard was created in Argentina through Resolution No. 100 ME/76,[24] which determined the creation of a study commission for a national color standard. The commission recommended using PAL under CCIR System N, which Paraguay and Uruguay also used. It employs the 625-line/50-field-per-second waveform of PAL-B/G, D/K, H, and I, but on a 6 MHz channel with a chrominance subcarrier frequency of 3.582056 MHz (917⁄4 × H), similar to NTSC (910⁄4 × H).[21] At the studio production level, standard PAL cameras and equipment were used, with video signals then transcoded to PAL-N for broadcast.[25] This allows 625-line, 50-field-per-second video to be broadcast in a 6 MHz channel, at some cost in horizontal resolution.
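Assuming PAL-N applies the same +25 Hz offset as PAL-B/G (an assumption here, since the text quotes only the 917⁄4 multiple), the stated subcarrier value can be reproduced as follows.

```python
# PAL-N subcarrier from the 625/50 line rate; the +25 Hz offset mirrors
# PAL-B/G and is an assumption (917/4 x 15625 alone gives 3582031.25 Hz).
f_h = 15625
print(917 / 4 * f_h + 25)    # 3582056.25 Hz, i.e. the quoted 3.582056 MHz
```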
In Brazil, PAL is used in conjunction with the 525-line, 60-field/s CCIR System M, using (very nearly) the NTSC colour subcarrier frequency. The exact colour subcarrier frequency of PAL-M is 3.575611 MHz, or 227.25 times System M's horizontal scan frequency. Almost all other countries using System M use NTSC.
The PAL colour system (either baseband or with any RF system, with the normal 4.43 MHz subcarrier, unlike PAL-M) can also be applied to an NTSC-like 525-line picture to form what is often known as "PAL 60" (sometimes "PAL 60/525", "Quasi-PAL" or "Pseudo PAL"). PAL-M (a broadcast standard), however, should not be confused with "PAL 60" (a video playback system—see below).
PAL television receivers manufactured since the 1990s can typically decode all of the PAL variants except, in some cases, PAL-M and PAL-N. Many such receivers can also receive Eastern European and Middle Eastern SECAM, though rarely French-broadcast SECAM (because France used a quasi-unique positive video modulation, System L) unless they are manufactured for the French market. They will correctly display plain (non-broadcast) CVBS or S-video SECAM signals. Many can also accept baseband NTSC-M, such as from a VCR or game console, and RF-modulated NTSC with a PAL-standard audio subcarrier (i.e., from a modulator), though not usually broadcast NTSC (as its 4.5 MHz audio subcarrier is not supported). Many sets also support NTSC with a 4.43 MHz color subcarrier (see PAL 60 in the next section).
VHS tapes recorded from a PAL-N or a PAL-B/G, D/K, H, or I broadcast are indistinguishable because the down-converted subcarrier on the tape is the same. A VHS recorded off TV (or released) in Europe will play in colour on any PAL-N VCR and PAL-N TV in Argentina, Paraguay and Uruguay. Likewise, any tape recorded in Argentina, Paraguay or Uruguay off a PAL-N TV broadcast can be sent to anyone in European countries that use PAL (and Australia/New Zealand, etc.) and it will display in colour. This will also play back successfully in Russia and other SECAM countries, as the USSR mandated PAL compatibility in 1985—this has proved to be very convenient for video collectors.
People in Argentina, Paraguay and Uruguay usually own TV sets that also display NTSC-M, in addition to PAL-N. DirecTV also conveniently broadcasts in NTSC-M for North, Central, and South America. Most DVD players sold in Argentina, Paraguay and Uruguay also play PAL discs—however, this is usually output in the European variant (colour subcarrier frequency 4.433618 MHz), so people who own a TV set which only works in PAL-N (plus NTSC-M in most cases) will have to watch those PAL DVD imports in black and white (unless the TV supports RGB over SCART), as the colour subcarrier frequency expected by the TV set is the PAL-N variation, 3.582056 MHz.
In the case that a VHS or DVD player works in PAL (and not in PAL-N) and the TV set works in PAL-N (and not in PAL), there are two options:
Some DVD players (usually lesser-known brands) include an internal transcoder and can output the signal in NTSC-M, with some video quality loss due to the standards conversion from a 625/50 PAL DVD to the NTSC-M 525/60 output format. A few DVD players sold in Argentina, Paraguay and Uruguay also allow a signal output of NTSC-M, PAL, or PAL-N. In that case, a PAL disc (imported from Europe) can be played back on a PAL-N TV because there are no field/line conversions, and quality is generally excellent.
Some special VHS video recorders are available which can allow viewers the flexibility of enjoying PAL-N recordings using a standard PAL (625/50 Hz) colour TV, or even through multi-system TV sets. Video recorders like Panasonic NV-W1E (AG-W1 for the US), AG-W2, AG-W3, NV-J700AM, Aiwa HV-M110S, HV-M1U, Samsung SV-4000W and SV-7000W feature a digital TV system conversion circuitry.
Many 1990s-onwards videocassette recorders sold in Europe can play back NTSC tapes. When operating in this mode most of them do not output a true (625/50) PAL signal, but rather a hybrid consisting of the original NTSC line standard (525/60), with colour converted to PAL 4.43 MHz (instead of 3.58 MHz as with NTSC and PAL-M) — this is known as "PAL 60" (also "quasi-PAL" or "pseudo-PAL"), with "60" standing for 60 Hz (for 525/30) instead of 50 Hz (for 625/25).
Some video game consoles also output a signal in this mode. The Dreamcast pioneered PAL 60, with most of its games being able to run at full speed, as on NTSC, and without borders. The Xbox and GameCube also support PAL 60, unlike the PlayStation 2.[26] The PlayStation 2 did not offer a true PAL 60 mode; while many PlayStation 2 games did offer a "PAL 60" mode as an option, the console would in fact generate an NTSC signal during 60 Hz operation.
Most newer television sets can display a "PAL 60" signal correctly, but some will only do so (if at all) in black and white and/or with flickering/foldover at the bottom of the picture, or picture rolling (however, many old TV sets can display the picture properly by means of adjusting the V-Hold and V-Height knobs—assuming they have them). Some TV tuner cards or video capture cards will support this mode (although software/driver modification can be required and the manufacturers' specs may be unclear).
Some DVD players offer a choice of PAL vs NTSC output for NTSC discs.[27]
PAL usually has 576 visible lines compared with 480 lines with NTSC, meaning that PAL has a 20% higher vertical resolution; in fact it even exceeds the Enhanced Definition standard (852×480). Most TV output for PAL and NTSC uses interlaced frames, meaning that even lines update on one field and odd lines update on the next field. Interlacing gives smoother apparent motion while transmitting only half of each frame at a time. NTSC is used with a frame rate of 60i or 30p whereas PAL generally uses 50i or 25p; both use a high enough frame rate to give the illusion of fluid motion. This is because NTSC is generally used in countries with a utility frequency of 60 Hz and PAL in countries with 50 Hz, although there are many exceptions.
Both PAL and NTSC have a higher frame rate than film, which uses 24 frames per second. PAL has a frame rate closer to that of film, so most films are sped up about 4% to play on PAL systems, shortening the runtime of the film and, without adjustment, raising the pitch of the audio track. Film conversions for NTSC instead use 3:2 pulldown to spread the 24 frames of film across 60 interlaced fields. This maintains the runtime of the film and preserves the original audio, but may cause worse interlacing artefacts during fast motion, along with judder.
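The practical consequences of the 25/24 speed-up can be quantified in a couple of lines; the 120-minute runtime is just an example figure, not taken from the text.

```python
# Quantifying the PAL 25/24 film speed-up.
import math

speedup = 25 / 24
print(f"speed increase: {(speedup - 1) * 100:.2f}%")            # ~4.17%
print(f"a 120-minute film runs: {120 / speedup:.1f} min")       # 115.2 min
print(f"audio pitch shift: +{12 * math.log2(speedup):.2f} semitones")  # ~0.71
```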
NTSC receivers have a tint control to perform colour correction manually. If this is not adjusted correctly, the colours may be faulty. The PAL standard automatically cancels hue errors by phase reversal, so a tint control is unnecessary, though a saturation control can be more useful. Chrominance phase errors in the PAL system are cancelled out using a 1H delay line, resulting in lower saturation, which is much less noticeable to the eye than NTSC hue errors.
However, the alternation of colour information—Hanover bars—can lead to picture grain on pictures with extreme phase errors even in PAL systems, if decoder circuits are misaligned or use the simplified decoders of early designs (typically to overcome royalty restrictions). This effect will usually be observed when the transmission path is poor, typically in built up areas or where the terrain is unfavourable. The effect is more noticeable on UHF than VHF signals as VHF signals tend to be more robust. In most cases such extreme phase shifts do not occur.
PAL and NTSC have slightly divergent colour spaces, but the colour decoder differences here are ignored.
Outside of film and TV broadcasts, the differences between PAL and NTSC when used in the context of video games were quite dramatic. For comparison, the NTSC standard is 60 fields/30 frames per second while PAL is 50 fields/25 frames per second. To avoid timing problems or unfeasible code changes, games were slowed down by approximately 16.7%. This has led to games ported to PAL regions being historically known for their inferior speed and frame rates compared to their NTSC counterparts, especially when they were not properly optimized for PAL standards.
Full motion video rendered and encoded at 30 frames per second by Japanese/US (NTSC) developers was often down-sampled to 25 frames per second, or treated as 50-fields-per-second video, for PAL release—usually by means of 3:2 pull-down, resulting in motion judder. In addition, the increased resolution of PAL was often not utilised at all during conversion, creating a pseudo-letterbox effect with borders on the top and bottom of the screen, looking similar to 14:9 letterbox. This leaves the graphics with a slightly squashed look due to an incorrect aspect ratio caused by the borders.
These practices were prevalent in earlier generations, especially during the 8-bit and 16-bit eras of video games, when 2D graphics were the norm. The gameplay of many games with an emphasis on speed, such as the original Sonic the Hedgehog for the Sega Genesis (Mega Drive), suffered in their PAL incarnations.
Starting with the sixth generation of video games, game consoles started to offer true 60 Hz modes in games ported to PAL regions. The Dreamcast was the first to offer a true "PAL 60" mode, with many games made for the system in PAL regions being closely on par with their NTSC counterparts in terms of speed and frame rates when using "PAL 60" modes. The Xbox and GameCube also featured "PAL 60" modes in games made for the region. The lone exception was the PlayStation 2, where games ported to PAL regions often (but not always) ran in 50 Hz modes. PAL-region games supporting 60 Hz modes on the PlayStation 2 also required a display with NTSC support unless RGB or component connections were used, since these allow colour output without the need for NTSC or PAL colour encoding. Otherwise, the games would display in monochrome on PAL-only displays.
The problems usually associated with PAL-region video games are not necessarily encountered in Brazil with the PAL-M standard used in that region, since its video system uses an identical number of visible lines and the same refresh rate as NTSC, but with a slightly different colour encoding frequency based on PAL, modified for use with the CCIR System M broadcast television system.
The SECAM patents predate those of PAL by several years (1956 vs. 1962). Its creator, Henri de France, in search of a response to known NTSC hue problems, came up with ideas that were to become fundamental to both European systems, namely:
SECAM applies those principles by transmitting alternately only one of the U and V components on each TV line, and getting the other from the delay line. QAM is not required, andfrequency modulationof the subcarrier is used instead for additional robustness (sequential transmission of U and V was to be reused much later in Europe's last "analog" video systems: the MAC standards).
SECAM is free of both hue and saturation errors. It is not sensitive to phase shifts between the colour burst and the chrominance signal, and for this reason was sometimes used in early attempts at colour video recording, where tape speed fluctuations could get the other systems into trouble. In the receiver, it did not require a quartz crystal (which was an expensive component at the time) and generally could do with lower accuracy delay lines and components.
SECAM transmissions are more robust over longer distances than NTSC or PAL. However, owing to their FM nature, the colour signal remains present, although at reduced amplitude, even in monochrome portions of the image, thus being subject to stronger cross colour.
One serious drawback for studio work is that the addition of two SECAM signals does not yield valid colour information, due to its use of frequency modulation. It was necessary to demodulate the FM and handle it as AM for proper mixing, before finally remodulating as FM, at the cost of some added complexity and signal degradation. In its later years, this was no longer a problem, due to the wider use of component and digital equipment.
PAL can work without a delay line (PAL-S), but this configuration, sometimes referred to as "poor man's PAL", could not match SECAM in terms of picture quality. To compete with it at the same level, it had to make use of the main ideas outlined above, and as a consequence PAL had to pay licence fees to SECAM. Over the years, this contributed significantly to the estimated 500 million francs gathered by the SECAM patents (for an initial 100 million francs invested in research).[28]
Hence, PAL could be considered as a hybrid system, with its signal structure closer to NTSC, but its decoding borrowing much from SECAM.
There were initial specifications to use colour with the French 819-line format (System E). However, "SECAM E" only ever existed in development phases. Actual deployment used the 625-line format. This made for easy interchange and conversion between PAL and SECAM in Europe. Conversion was often not even needed, as more and more receivers and VCRs became compliant with both standards, helped in this by the common decoding steps and components. When the SCART plug became standard, it could take RGB as an input, effectively bypassing all the colour coding formats' peculiarities.
When it comes to home VCRs, all video standards use what is called "colour under" format. Colour is extracted from the high frequencies of the video spectrum, and moved to the lower part of the spectrum available from tape. Luma then uses what remains of it, above the colour frequency range. This is usually done by heterodyning for PAL (as well as NTSC). But the FM nature of colour in SECAM allows for a cheaper trick: division by 4 of the subcarrier frequency (and multiplication on replay). This became the standard for SECAM VHS recording in France. Most other countries kept using the same heterodyning process as for PAL or NTSC and this is known as MESECAM recording (as it was more convenient for some Middle East countries that used both PAL and SECAM broadcasts).
Another difference in colour management is related to the proximity of successive tracks on the tape, which is a cause for chroma crosstalk in PAL. A cyclic sequence of 90° chroma phase shifts from one line to the next is used to overcome this problem. This is not needed in SECAM, as FM provides sufficient protection.
Regarding early (analogue) videodiscs, the established Laserdisc standard supported only NTSC and PAL. However, a different optical disc format, the Thomson transmissive optical disc made a brief appearance on the market. At some point, it used a modified SECAM signal (single FM subcarrier at 3.6 MHz[29]). The media's flexible and transmissive material allowed for direct access to both sides without flipping the disc, a concept that reappeared in multi-layered DVDs about fifteen years later.
Below are lists of countries and territories that use or once used the PAL system. Many of these have converted or are converting from PAL to DVB-T (most countries), DVB-T2 (most countries), DTMB (China, Hong Kong and Macau) or ISDB-Tb (Sri Lanka, Maldives, Botswana, Brazil, Argentina, Paraguay and Uruguay).
A legacy list of PAL users in 1998 is available in Recommendation ITU-R BT.470-6, Conventional Television Systems, Appendix 1 to Annex 1.[30]
The following countries and territories no longer use PAL for terrestrial broadcasts, and are in the process of converting from PAL to DVB-T/T2, DTMB or ISDB-T.
|
https://en.wikipedia.org/wiki/PAL
|
A parallel import is a non-counterfeit product imported from another country without the permission of the intellectual property owner. Parallel imports are often referred to as grey products and are implicated in issues of international trade and intellectual property.[1]
Parallel importing is based on the concept of exhaustion of intellectual property rights; according to this concept, when the product is first launched on the market in a particular jurisdiction, parallel importation is authorized to all residents in the state in question.[2] Some countries allow it but others do not.[3]
Parallel importing of pharmaceuticals reduces the price of pharmaceuticals by introducing competition; the TRIPS Agreement states in Article 6 that this practice cannot be challenged under the WTO dispute settlement system, and so it is effectively a matter of national discretion.[4]
The practice of parallel importing is often advocated in the case of software, music, printed texts and electronic products, and occurs for several reasons:
Parallel importing is regulated differently in different jurisdictions; there is no consistency in laws dealing with parallel imports between countries. Neither the Berne Convention nor the Paris Convention explicitly prohibits parallel importation.
The Australian market is an example of a relatively small consumer market which does not benefit from the economies of scale and competition available in the larger global economies. Australia tends to have lower levels of competition in many industries, and oligopolies are common in industries like banking, supermarkets, and mobile telecommunications.
Private enterprise will use product segmentation strategies to legally maximise profit. This often includes varying service levels, pricing and product features to improve the so-called "fit" to the local marketplace. However, this segmentation may mean identical products at higher prices. This can be termed price discrimination.[7] With the advent of the Internet, Australian consumers can readily compare prices globally and have been able to identify products exhibiting price discrimination, also known as the "Australia Tax".
In 1991, the Australian Government resolved to remove parallel import restrictions from a range of products except cars. It followed this up with legislation making it legal to source music and software CDs from overseas and import them into Australia. An Australian Productivity Commission report recommended in July 2009 that legislation be extended to legalise the parallel importing of books, with three years' notice for publishers.[8]The commission also recommended abolishing restrictions on parallel importing of cars.[9]
A Federal Court of Australia decision has ruled that parallel-imported items with valid trademarks are subject to Section 123 of the Trade Mark Act.
Various Australian Parliament committees have investigated allegations of price discrimination.[10]
The European Union (and European Economic Area) require the doctrine of international exhaustion to apply between member states, but EU legislation for trademarks, design rights and copyright prohibits its application to goods put on the market outside the EU/EEA.
In Germany, the Bundesgerichtshof has held that the doctrine of international exhaustion governs parallel importation, subject to the EU rules above.
In Hong Kong, parallel importation was permitted under both the Trade Mark Ordinance and the (amended) Copyright Ordinance before the Copyright (Amendment) Ordinance 2007 came into force on 6 July 2007.[11]
Japan's intellectual property rights law prohibits audiovisual articles marketed for export from being sold domestically, and such sale of "re-imported" CDs are illegal.
In the United States, courts have established that parallel importation is legal.[12] In the case of Kirtsaeng v. John Wiley & Sons, Inc., the US Supreme Court held that the first-sale doctrine applies to copies of a copyrighted work lawfully made abroad, thus permitting importation and resale of many product categories.
Moreover, the Science, State, Justice, Commerce, and Related Agencies Appropriations Act of 2006 prohibits future free trade agreements from categorically disallowing the parallel import of patented products.[13]
The United States has unique automobile design legislation administered by the National Highway Traffic Safety Administration. Certain car makers find the required modifications too expensive. In the past, this created demand for grey import vehicles, where certain models are modified for individual customers to meet these requirements at a higher cost than if it had been done by the original manufacturer. This procedure interferes with the marketing scheme of the manufacturer, who might plan to import a less powerful car and force consumers to accept it. The Imported Vehicle Safety Compliance Act of 1988 basically ended the grey market by requiring manufacturer certification of U.S.-bound cars.[14]
Markets for parallel imports and locally made products sometimes exist alongside each other even though the parallel imports are markedly more expensive. This may be for various reasons, but is mostly observed in foodstuffs and toiletries.
Due to the nature of hotels, travellers often have little information on where to shop except in the immediate vicinity. Grocery shops opened to serve brand-name hotels often feature parallel-imported foodstuffs and toiletries so that travellers can easily recognise the products they use at home.
Foodstuffs and toiletries made in different plants may vary in quality, because different plants may use materials or reagents (such as the water used for washing, or food additives) from different sources, although they are usually subject to the same standards through internal quality control or public health authorities. A person may be allergic to the foodstuffs or toiletries made in some plants but not others.
To sum up, the major reasons for such a market are:
A manifestation of the philosophical divide between those who support intellectual property rights and those who are critical of them is the divide over the legitimacy of parallel importation. Some believe that it benefits consumers by lowering prices and widening the selection and consumption of products available in the market, while others believe that it discourages intellectual property owners from investing in new and innovative products. Some also believe that parallel imports tend to facilitate copyright infringement.
This tension essentially concerns the rights and duties of a protected monopoly. Intellectual property rights allow the holder to sell at a price that is higher than the price one would pay in a competitive market, but by doing so the holder relinquishes sales to those who would be prepared to buy at a price between the monopoly price and the competitive price. The presence of parallel imports in the marketplace prevents the holder from exploiting the monopoly further by market segmentation, i.e. by applying different prices to different consumers.
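To make the pricing logic concrete, the following is a minimal illustrative sketch; all prices and costs are hypothetical and are not taken from the article. It shows how a parallel importer profits from the gap a rights holder creates through market segmentation, and why arbitrage limits how far apart the segmented prices can stay.

```python
# Hypothetical figures for illustration only.
monopoly_price_high = 100.0   # price charged in the high-willingness-to-pay market
monopoly_price_low = 60.0     # price charged in the low-willingness-to-pay market
shipping_cost = 15.0          # per-unit cost of moving goods between markets

# A parallel importer buys in the cheap market and resells in the dear one.
arbitrage_margin = monopoly_price_high - (monopoly_price_low + shipping_cost)
print(f"Margin per unit for the parallel importer: {arbitrage_margin:.2f}")

# Arbitrage is profitable whenever the price gap exceeds the cost of moving goods,
# so the rights holder can only sustain a gap roughly up to that cost.
sustainable_gap = shipping_cost
print(f"Largest price gap the segmentation can sustain: {sustainable_gap:.2f}")
```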
Consumer organizations tend to support parallel importation as it offers consumers more choice and lower prices, provided that consumers retain legal protection equivalent to that for locally sourced products (e.g. in the form of warranties with international effect) and competition is not diminished.
However, such organisations also warn consumers of certain risks in using parallel-imported products. Although the products may have been made to comply with the laws and customs of their place of origin, the products or their use may not comply with those of the places where they are used, or some of their functions may be rendered unusable or meaningless (which may needlessly drive up prices). Electronic devices suffer less from this type of risk, however, because newer models support more than one user language.
Importation of computer games and computer game hardware from Asia is a common practice for some wholesale and/or retail stockists. Many consumers now take advantage of on-line stores in Hong Kong and the United States to purchase computer games at or near half the Australian RRP. Often the versions sold by the Asian retailers are manufactured in Australia to begin with. An example is Crysis, which was available from Hong Kong on-line stores for approximately A$50, but whose retail cost in Australia was close to $100. Crysis was sold in Asia using identical versions of the game box and disc, right down to including Australian censor ratings on the box.
Importation of Colgate toothpaste from Thailand into Hong Kong. The goods are bought in markets where the price is lower and sold in markets where the price of the same goods is, for a variety of reasons, higher. Electronic goods such as Apple's iPad are frequently imported into Hong Kong before their official release and resold to South-East Asian early adopters at a premium.
The practice exists of luxury car dealers in New Zealand buying Mercedes-Benz vehicles in Malaysia at a low price and importing the cars into New Zealand to sell at a price lower than that offered by Mercedes-Benz to New Zealand consumers.[citation needed] There are also many parallel import dealers of electronics hardware. Parallel importing is allowed in New Zealand and has resulted in a significant lowering of margins on many products.[citation needed]
There is a popular but scientifically unproven belief in Poland that "Western" washing powders clean more effectively than Polish ones, because chemical companies allegedly produce higher-quality products for Western Europe. Because of this, there are companies and online stores importing Western household chemicals into Poland (for example from Germany), even when similar brands are available locally.[15][16]
According to Anatoliy Semyonov, exhaustion of trademark rights in Russia became national in 2002, and, as of April 2013, an act was being prepared that could make original goods imported without the producer's permission officially "counterfeit" (by replacing the definition covering goods "on which a trademark is located illegally" with goods "on which an illegally used trademark is located"). He notes that, under the Criminal Code, illegal use of a trademark can be punished with up to 6 years of imprisonment, and a similar article in the Offences Code makes goods bearing an illegal copy of a trademark subject to confiscation.[17][18]
In 2022, following the exit of various Western firms from Russia as a result of the Russian invasion of Ukraine, a parallel import scheme was legalized to allow certain goods into Russia.[19] In September, the trade minister, Denis Manturov, stated that Russian consumers would be able to buy the newly announced iPhone 14, despite Apple halting all sales in the country. Apple products were already being re-exported and sold in Russia through the scheme, although at a higher price.[20]
Some Sony PSP video game consoles were imported into the European Economic Area from Japan up to twelve months prior to the European launch. The unusual aspect of this example is that some importers were selling the console for a higher price than the intended EU price, taking advantage of the relative monopoly they enjoyed. After the European release, the console was commonly imported from the USA, where it retailed at a much lower price.[citation needed]
Another example is smartphones, which were being imported from China, where an average device could be bought[when?] for about $100 while a similar device would retail for about €200 in the EU.[citation needed]
|
https://en.wikipedia.org/wiki/Parallel_import
|
Import gamers are a subset of the video game player community who take part in the practice of playing video games from another region, usually from Japan, where the majority of games for certain systems originate.
Some common reasons for importing include:
While many game consoles do not allow games from other countries to be played on them (mainly due to voltage, localization and licensing issues), some consoles (often handhelds, due to the universal nature of batteries) are not necessarily restricted to a certain locale. Some of these include:
Note: Pre-third-generation consoles are not listed because at the time there was little to no importing, and consequently there was little reason to introduce regional lock-out. Sometimes importing difficulties may still arise (e.g. Atari 2600 games from regions other than the console's may exhibit glitches such as missing colors).
Most handheld video game systems are region-free, because most of them have a built-in screen, run on batteries, and are much cheaper to produce if they do not have a region lock on the system or games.
The majority of disk-based home consoles released in more than one region feature regional lockout, the main exceptions being the 3DO Interactive Multiplayer and the Sony PlayStation 3.
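As a rough illustration of what regional lockout amounts to in software, the sketch below is hypothetical pseudologic and is not taken from any real console's firmware: the console compares a region code carried by the disc with its own and refuses to boot on a mismatch, which is exactly the check that region-free hardware omits. The region strings used here are invented for the example.

```python
# Hypothetical region codes; real consoles use their own encodings.
CONSOLE_REGION = "PAL"

def can_boot(disc_region: str, console_region: str = CONSOLE_REGION) -> bool:
    """Return True if the disc's region matches the console's (or the disc is region-free)."""
    return disc_region in (console_region, "FREE")

print(can_boot("NTSC-J"))  # False on a PAL console: the import is rejected
print(can_boot("PAL"))     # True: a local disc boots
print(can_boot("FREE"))    # True: a region-free release boots anywhere
```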
Modchips are a popular choice for many of these consoles as they are generally the easiest to use; however, a poorly installed chip could permanently break the console. Some modern consoles, such as the Xbox, cannot be used for online play if chipped. However, some Xbox modchips can be turned off by the user, allowing online play.
Boot disks are another common choice, as they are generally reliable and do not require risky installation methods. These disks are loaded as though they were local game disks, then prompt the user to swap them for an imported game, allowing it to run. A Wii "Freeloader" boot disk was launched by Codejunkies; however, it was rendered unusable with the release of Firmware 3.3 for the Wii. Most Wii users have since turned to "hacking" their Wii instead, using the "Twilight Hack", and when Nintendo patched the bug that allowed that exploit in Firmware 4.0, users soon discovered another method, aptly called the "BannerBomb Hack". This, when combined with the Homebrew Channel and a disk loader application, allows users to bypass region checks for Wii games. Aside from the Freeloader series, other boot disks include the Action Replay, the Utopia boot disk, Bleemcast!, and numerous other softmod disks.[8]
The Sega Saturn has a fairly unusual workaround; while a disk-based console, it has a cartridge slot generally used for backup memory, cheat cards, and other utilities. This same slot can also be used for cartridges that allow imported games to run. Some of these cartridges include regional bypass, extra memory, RAM expansion(s), and cheat devices all in one, while others feature only regional bypass and cannot play certain Japanese Saturn games that require RAM expansion cartridges.
The Xbox is not very restrictive because the console is capable of "softmods", which can make the console region-free, allow burned games to be used, and add homebrew and multimedia functionality.
All three major game console makers refuse to repair any system that has been modded or used with boot disks.
Some consoles are only released in one region, and therefore have no protection. These include:
The PC is a popular platform for import gaming as well. While some operating systems are unable to run games designed for other language versions of the same operating system,[citation needed] others, such as Windows XP and Windows Vista, can be set to run Japanese (and/or other non-local) games and other software. Another method of importing is using a region-free disk drive.
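A small, hedged way to see why a Japanese PC game may or may not run cleanly is to inspect the encodings the operating system reports. The snippet below is only an informational sketch using the Python standard library; the assumption that older Japanese titles expect a Shift-JIS (cp932) code page is a generalisation, not something stated in the article.

```python
import locale
import sys

# Many older Japanese PC games assume a Shift-JIS (cp932) code page for
# non-Unicode text and file names; modern titles are usually Unicode-clean.
print("Preferred encoding :", locale.getpreferredencoding())
print("Filesystem encoding:", sys.getfilesystemencoding())

# If neither reports cp932 on Windows, such a game may show garbled text unless
# the system locale for non-Unicode programs is changed or a locale emulator is used.
```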
|
https://en.wikipedia.org/wiki/Parallel_importing_in_video_games
|
In economics, vendor lock-in, also known as proprietary lock-in or customer lock-in, makes a customer dependent on a vendor for products, unable to use another vendor without substantial switching costs.
The use ofopen standardsand alternative options makes systems tolerant of change, so that decisions can be postponed until more information is available or unforeseen events are addressed. Vendor lock-in does the opposite: it makes it difficult to move from one solution to another.
Lock-in costs that create barriers to market entry may result in antitrust action against a monopoly.
This class of lock-in is potentially technologically hard to overcome if the monopoly is held up by barriers to market entry that are nontrivial to circumvent, such as patents, secrecy, cryptography or other technical hindrances.
This class of lock-in is potentially inescapable for rational individuals not otherwise motivated, because it creates a prisoner's dilemma: if the cost to resist is greater than the cost of joining, then the locally optimal choice is to join, a barrier that takes cooperation to overcome. The distributive property (the cost to resist the locally dominant choice) alone is not a network effect, for lack of any positive feedback; however, adding bistability per individual, such as through a switching cost, qualifies as a network effect by distributing this instability to the collective as a whole.
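The join-or-resist reasoning above can be made explicit with a small worked example; the payoff numbers below are invented purely for illustration and carry no empirical claim.

```python
# Hypothetical per-person costs (lower is better), chosen only to illustrate the argument.
cost_of_joining   = 10   # ongoing cost of adopting the locally dominant technology
cost_of_resisting = 25   # cost of isolation/incompatibility borne by a lone holdout
switching_cost    = 40   # one-off cost of leaving once joined (the bistability)

# The individually rational move is whichever is cheaper right now.
best_choice = "join" if cost_of_joining < cost_of_resisting else "resist"
print("Locally optimal choice:", best_choice)

# Once joined, leaving only pays if the alternative saves more than the switching cost,
# which is why escaping collective lock-in typically requires coordinated action.
savings_needed_to_leave = switching_cost
print("Per-person savings required before leaving is rational:", savings_needed_to_leave)
```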
As defined by The Independent, this is a non-monopoly (mere technology), collective (on a society level) kind of lock-in:[1]
Technological lock-in is the idea that the more a society adopts a certain technology, the more unlikely users are to switch.
Examples:
Technology lock-in, as defined, is strictly of the collective kind. However, the personal variant is also a possible permutation of the variations shown in the table, but with no monopoly and no collectivity, it would be expected to be the weakest lock-in. Equivalent personal examples:
There exist lock-in situations that are both monopolistic and collective. Having the worst of both worlds, these can be very hard to escape: in many examples, the cost to resist incurs some level of isolation from the (dominant technology in) society, which can be socially costly, yet direct competition with the dominant vendor is hindered by compatibility requirements.
As one blogger expressed:[3]
If I stopped using Skype, I'd lose contact with many people, because it's impossible to make them all change to [other] software.
While MP3 is patent-free as of 2017, in 2001 it was both patented and entrenched, as noted by Richard Stallman in that year (in justifying a lax license for Ogg Vorbis):[4]
there is […] the danger that people will settle on MP3 format even though it is patented, and we won't be *allowed* to write free encoders for the most popular format. […] Ordinarily, if someone decides not to use a copylefted program because the license doesn't please him, that's his loss not ours. But if he rejects the Ogg/Vorbis code because of the license, and uses MP3 instead, then the problem rebounds on us—because his continued use of MP3 may help MP3 to become and stay entrenched.
More examples:
The European Commission, in its March 24, 2004 decision on Microsoft's business practices,[5] quotes, in paragraph 463, Microsoft general manager for C++ development Aaron Contorer as stating in a February 21, 1997 internal Microsoft memo drafted for Bill Gates:
"TheWindows APIis so broad, so deep, and so functional that mostISVs[independent software vendors] would be crazy not to use it. And it is so deeply embedded in the source code of many Windows apps that there is a huge switching cost to using a different operating system instead. It is this switching cost that has given customers the patience to stick with Windows through all our mistakes, our buggy drivers, our highTCO[total cost of ownership], our lack of a sexy vision at times, and many other difficulties. […] Customers constantly evaluate other desktop platforms, [but] it would be so much work to move over that they hope we just improve Windows rather than force them to move. In short, without this exclusive franchise called the Windows API, we would have been dead a long time ago. The Windows franchise is fueled by application development which is focused on our core APIs."
Microsoft's application software also exhibits lock-in through the use of proprietary file formats. Microsoft Outlook uses a proprietary, publicly undocumented datastore format. More recent versions of Microsoft Word have introduced a new format, MS-OOXML; this may make it easier for competitors to write documents compatible with Microsoft Office in the future by reducing lock-in.[citation needed] Microsoft released full descriptions of the file formats for earlier versions of Word, Excel and PowerPoint in February 2008.[6]
Prior to March 2009, digital music files with digital rights management (DRM) were available for purchase from the iTunes Store, encoded in a proprietary derivative of the AAC format that used Apple's FairPlay DRM system. These files are compatible only with Apple's iTunes media player software on Macs and Windows, their iPod portable digital music players, iPhone smartphones, iPad tablet computers, and the Motorola ROKR E1 and SLVR mobile phones. As a result, that music was locked into this ecosystem and available for portable use only through the purchase of one of the above devices,[7] or by burning to CD and optionally re-ripping to a DRM-free format such as MP3 or WAV.
In January 2005, an iPod purchaser named Thomas Slattery filed a suit against Apple for the "unlawful bundling" of their iTunes Music Store and iPod device. He stated in his brief:
"Apple has turned an open and interactive standard into an artifice that prevents consumers from using the portable hard drive digital music player of their choice."
At the time, Apple was stated to have an 80% market share of digital music sales and a 90% share of sales of new music players, which he claimed allowed Apple to horizontally leverage its dominant positions in both markets to lock consumers into its complementary offerings.[8] In September 2005, U.S. District Judge James Ware approved Slattery v. Apple Computer Inc. to proceed with monopoly charges against Apple in violation of the Sherman Antitrust Act.[9]
On June 7, 2006, the Norwegian Consumer Council stated that Apple's iTunes Music Store violates Norwegian law. The contract conditions were vague and "clearly unbalanced to disfavor the customer".[10] The retroactive changes to the DRM conditions and the incompatibility with other music players are the major points of concern. In an earlier letter to Apple, consumer ombudsman Bjørn Erik Thon complained that iTunes' DRM mechanism was a lock-in to Apple's music players, and argued that this was a conflict with consumer rights that he doubted would be defendable by Norwegian copyright law.[11]
As of 29 May 2007, tracks on the EMI label became available in a DRM-free format called iTunes Plus. These files are unprotected and are encoded in the AAC format at 256 kilobits per second, twice the bitrate of standard tracks bought through the service. iTunes accounts can be set to display either standard or iTunes Plus formats for tracks where both formats exist.[12] These files can be used with any player that supports the AAC file format and are not locked to Apple hardware. They can also be converted to the MP3 format if desired.
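For readers who want the conversion step spelled out, the following is a minimal sketch, assuming ffmpeg built with the libmp3lame encoder is installed and that "song.m4a" is a DRM-free iTunes Plus file; the file names and the choice of tool are illustrative assumptions, not something prescribed by the article.

```python
import subprocess

# Re-encode a DRM-free AAC (.m4a) track to MP3 at a comparable bitrate.
# Note that this is a lossy-to-lossy conversion, so some quality is lost.
subprocess.run(
    ["ffmpeg", "-i", "song.m4a", "-codec:a", "libmp3lame", "-b:a", "256k", "song.mp3"],
    check=True,
)
```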
As of January 6, 2009, all four major record labels (Warner Bros., Sony BMG, Universal, and EMI) have signed up to remove the DRM from their tracks, at no extra cost. However, Apple charges consumers to have the DRM restrictions removed from previously purchased music.[13]
Although Google has stated its position in favor of interoperability,[14] the company has taken steps away from open protocols, replacing the open-standard Google Talk with the proprietary Google Hangouts protocol.[15][16] Also, Google's Data Liberation Front has been inactive on Twitter since 2013,[17] and its official website, www.dataliberation.org, now redirects to a page in Google's FAQs, leading users to believe the project has been closed.[18][19] Google's mobile operating system Android is open source; however, the operating system that comes with the phones most people actually purchase in a store is more often than not shipped with many of Google's proprietary applications that encourage users to use only Google services.
Because cloud computing is still relatively new, standards are still being developed.[20]Many cloud platforms and services are proprietary, meaning that they are built on the specific standards, tools and protocols developed by a particular vendor for its particular cloud offering.[20]This can make migrating off a proprietary cloud platform prohibitively complicated and expensive.[20]
Three types of vendor lock-in can occur with cloud computing:[21]
Heterogeneous cloud computing is described as a type of cloud environment that prevents vendor lock-in, and aligns with enterprise data centers that are operating hybrid cloud models.[22]The absence of vendor lock-in lets cloud administrators select their choice of hypervisors for specific tasks, or to deploy virtualized infrastructures to other enterprises without the need to consider the flavor of hypervisor in the other enterprise.[23]
A heterogeneous cloud is considered one that includes on-premises private clouds, public clouds and software-as-a-service clouds. Heterogeneous clouds can work with environments that are not virtualized, such as traditional data centers.[24]Heterogeneous clouds also allow for the use of piece parts, such as hypervisors, servers, and storage, from multiple vendors.[25]
Cloud piece parts, such as cloud storage systems, offer APIs, but these are often incompatible with each other.[26] The result is complicated migration between backends and difficulty integrating data spread across various locations.[26] This has been described as a problem of vendor lock-in.[26] The solution is for clouds to adopt common standards.[26]
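One common mitigation, sketched below with entirely hypothetical backend classes (no real cloud SDK calls are used), is to code the application against a thin storage interface so that a backend can be swapped without touching application logic; real projects often reach for an existing abstraction library instead of writing their own.

```python
from abc import ABC, abstractmethod

class BlobStore(ABC):
    """Minimal storage interface the application codes against."""
    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...
    @abstractmethod
    def get(self, key: str) -> bytes: ...

class InMemoryStore(BlobStore):
    # Stand-in backend; a real deployment would wrap a specific vendor's SDK here.
    def __init__(self):
        self._objects = {}
    def put(self, key, data):
        self._objects[key] = data
    def get(self, key):
        return self._objects[key]

def archive_report(store: BlobStore, name: str, body: bytes) -> None:
    # Application logic depends only on the interface, not on any vendor API,
    # so changing providers means writing one new BlobStore subclass.
    store.put(f"reports/{name}", body)

store = InMemoryStore()
archive_report(store, "q1.txt", b"quarterly numbers")
print(store.get("reports/q1.txt"))
```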
|
https://en.wikipedia.org/wiki/Vendor_lock-in
|