Moore's law is the observation that the number of transistors in an integrated circuit (IC) doubles about every two years. Moore's law is an observation and projection of a historical trend. Rather than a law of physics, it is an empirical relationship. It is an experience-curve law, a type of law quantifying efficiency gains from experience in production.

The observation is named after Gordon Moore, the co-founder of Fairchild Semiconductor and Intel and former CEO of the latter, who in 1965 noted that the number of components per integrated circuit had been doubling every year,[a] and projected this rate of growth would continue for at least another decade. In 1975, looking forward to the next decade, he revised the forecast to doubling every two years, a compound annual growth rate (CAGR) of 41%. Moore's empirical evidence did not directly imply that the historical trend would continue; nevertheless, his prediction has held since 1975 and has since become known as a "law".

Moore's prediction has been used in the semiconductor industry to guide long-term planning and to set targets for research and development (R&D). Advancements in digital electronics, such as the reduction in quality-adjusted prices of microprocessors, the increase in memory capacity (RAM and flash), the improvement of sensors, and even the number and size of pixels in digital cameras, are strongly linked to Moore's law. These ongoing changes in digital electronics have been a driving force of technological and social change, productivity, and economic growth.

Industry experts have not reached a consensus on exactly when Moore's law will cease to apply. Microprocessor architects report that semiconductor advancement has slowed industry-wide since around 2010, slightly below the pace predicted by Moore's law. In September 2022, Nvidia CEO Jensen Huang considered Moore's law dead,[2] while Intel CEO Pat Gelsinger was of the opposite view.[3]

In 1959, Douglas Engelbart studied the projected downscaling of integrated circuit (IC) size, publishing his results in the article "Microelectronics, and the Art of Similitude".[4][5][6] Engelbart presented his findings at the 1960 International Solid-State Circuits Conference, where Moore was present in the audience.[7]

In 1965, Gordon Moore, who at the time was working as the director of research and development at Fairchild Semiconductor, was asked to contribute to the thirty-fifth-anniversary issue of Electronics magazine with a prediction on the future of the semiconductor components industry over the next ten years.[8] His response was a brief article entitled "Cramming more components onto integrated circuits".[1][9][b] Within his editorial, he speculated that by 1975 it would be possible to contain as many as 65,000 components on a single quarter-square-inch (~1.6 cm²) semiconductor.

The complexity for minimum component costs has increased at a rate of roughly a factor of two per year. Certainly over the short term this rate can be expected to continue, if not to increase. Over the longer term, the rate of increase is a bit more uncertain, although there is no reason to believe it will not remain nearly constant for at least 10 years.[1]

Moore posited a log–linear relationship between device complexity (higher circuit density at reduced cost) and time.[12][13]
In a 2015 interview, Moore noted of the 1965 article: "... I just did a wild extrapolation saying it's going to continue to double every year for the next 10 years."[14] One historian of the law cites Stigler's law of eponymy to note that the regular doubling of components was known to many working in the field.[13]

In 1974, Robert H. Dennard at IBM recognized the rapid MOSFET scaling technology and formulated what became known as Dennard scaling, which states that as MOS transistors get smaller, their power density stays constant, so that power use remains in proportion with area.[15][16] Evidence from the semiconductor industry shows that this inverse relationship between power density and areal density broke down in the mid-2000s.[17]

At the 1975 IEEE International Electron Devices Meeting, Moore revised his forecast rate,[18][19] predicting semiconductor complexity would continue to double annually until about 1980, after which it would decrease to a rate of doubling approximately every two years.[19][20][21] He outlined several contributing factors for this exponential behavior.[12][13]

Shortly after 1975, Caltech professor Carver Mead popularized the term Moore's law.[22][23] Moore's law eventually came to be widely accepted as a goal for the semiconductor industry, and it was cited by competitive semiconductor manufacturers as they strove to increase processing power. Moore viewed his eponymous law as surprising and optimistic: "Moore's law is a violation of Murphy's law. Everything gets better and better."[24] The observation was even seen as a self-fulfilling prophecy.[25][26]

The doubling period is often misquoted as 18 months because of a separate prediction by Moore's colleague, Intel executive David House.[27] In 1975, House noted that Moore's revised law of doubling transistor count every 2 years in turn implied that computer chip performance would roughly double every 18 months,[28] with no increase in power consumption.[29] Mathematically, Moore's law predicted that transistor count would double every 2 years due to shrinking transistor dimensions and other improvements.[30] As a consequence of shrinking dimensions, Dennard scaling predicted that power consumption per unit area would remain constant. Combining these effects, David House deduced that computer chip performance would roughly double every 18 months. Also due to Dennard scaling, this increased performance would not be accompanied by increased power, i.e., the energy efficiency of silicon-based computer chips roughly doubles every 18 months. Dennard scaling ended in the 2000s.[17] Koomey later showed that a similar rate of efficiency improvement predated silicon chips and Moore's law, for technologies such as vacuum tubes.
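These doubling periods translate directly into annual growth rates via 2^(12/m) − 1 for an m-month doubling time. A minimal Python sketch of that arithmetic (the function name is illustrative, not from any source):

```python
# Annual growth rate implied by a doubling period of m months: 2**(12/m) - 1.
def annual_growth(doubling_months: float) -> float:
    return 2 ** (12 / doubling_months) - 1

# Moore's revised 1975 forecast: transistor count doubles every 24 months.
print(f"transistor count: {annual_growth(24):.0%} per year")   # ~41% CAGR
# David House's inference: performance doubles every 18 months.
print(f"chip performance: {annual_growth(18):.0%} per year")   # ~59% per year
```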
Microprocessor architects report that since around 2010, semiconductor advancement has slowed industry-wide below the pace predicted by Moore's law.[17] Brian Krzanich, the former CEO of Intel, cited Moore's 1975 revision as a precedent for the current deceleration, which results from technical challenges and is "a natural part of the history of Moore's law".[31][32][33] The rate of improvement in physical dimensions known as Dennard scaling also ended in the mid-2000s. As a result, much of the semiconductor industry has shifted its focus to the needs of major computing applications rather than semiconductor scaling.[25][34][17] Nevertheless, as of 2019, leading semiconductor manufacturers TSMC and Samsung Electronics claimed to keep pace with Moore's law[35][36][37][38][39][40] with 10, 7, and 5 nm nodes in mass production.[35][36][41][42][43]

As the cost of computer power to the consumer falls, the cost for producers to fulfill Moore's law follows an opposite trend: R&D, manufacturing, and test costs have increased steadily with each new generation of chips. The cost of the tools, principally extreme ultraviolet lithography (EUVL), used to manufacture chips doubles every 4 years.[44] Rising manufacturing costs are an important consideration for the sustaining of Moore's law.[45] This led to the formulation of Moore's second law, also called Rock's law (named after Arthur Rock), which holds that the capital cost of a semiconductor fabrication plant also increases exponentially over time.[46][47]

Numerous innovations by scientists and engineers have sustained Moore's law since the beginning of the IC era. Breakthroughs advancing integrated circuit and semiconductor device fabrication technology have allowed transistor counts to grow by more than seven orders of magnitude in less than five decades. Computer industry technology road maps predicted in 2001 that Moore's law would continue for several generations of semiconductor chips.[71]

One of the key technical challenges of engineering future nanoscale transistors is the design of gates. As device dimensions shrink, controlling the current flow in the thin channel becomes more difficult. Modern nanoscale transistors typically take the form of multi-gate MOSFETs, with the FinFET being the most common nanoscale transistor. The FinFET has gate dielectric on three sides of the channel. In comparison, the gate-all-around MOSFET (GAAFET) structure has even better gate control.

Microprocessor architects report that semiconductor advancement has slowed industry-wide since around 2010, below the pace predicted by Moore's law.[17] Brian Krzanich, the former CEO of Intel, announced, "Our cadence today is closer to two and a half years than two."[103] Intel stated in 2015 that improvements in MOSFET devices have slowed, starting at the 22 nm feature width around 2012, and continuing at 14 nm.[104] Pat Gelsinger, Intel CEO, stated at the end of 2023 that "we're no longer in the golden era of Moore's Law, it's much, much harder now, so we're probably doubling effectively closer to every three years now, so we've definitely seen a slowing."[105]

The physical limits to transistor scaling have been reached due to source-to-drain leakage, limited gate metals, and limited options for channel material. Other approaches are being investigated which do not rely on physical scaling. These include the spin state of electrons (spintronics), tunnel junctions, and advanced confinement of channel materials via nano-wire geometry.[106] Spin-based logic and memory options are being developed actively in labs.[107][108]

The vast majority of current transistors on ICs are composed principally of doped silicon and its alloys. As silicon is fabricated into single-nanometer transistors, short-channel effects adversely change the desired material properties of silicon as a functional transistor. Below are several non-silicon substitutes for the fabrication of small-nanometer transistors.
One proposed material is indium gallium arsenide, or InGaAs. Compared to their silicon and germanium counterparts, InGaAs transistors are more promising for future high-speed, low-power logic applications. Because of intrinsic characteristics of III–V compound semiconductors, quantum well and tunnel effect transistors based on InGaAs have been proposed as alternatives to more traditional MOSFET designs.

Biological computing research shows that biological material has superior information density and energy efficiency compared to silicon-based computing.[116]

Various forms of graphene are being studied for graphene electronics; e.g., graphene nanoribbon transistors have shown promise since their appearance in publications in 2008. (Bulk graphene has a band gap of zero and thus cannot be used in transistors because of its constant conductivity, an inability to turn off. The zigzag edges of the nanoribbons introduce localized energy states in the conduction and valence bands and thus a bandgap that enables switching when fabricated as a transistor. As an example, a typical GNR of width of 10 nm has a desirable bandgap energy of 0.4 eV.[117][118]) More research will need to be performed, however, on sub-50 nm graphene layers, as their resistivity increases and electron mobility thus decreases.[117]

In April 2005, Gordon Moore stated in an interview that the projection cannot be sustained indefinitely: "It can't continue forever. The nature of exponentials is that you push them out and eventually disaster happens." He also noted that transistors eventually would reach the limits of miniaturization at atomic levels:

In terms of size [of transistors] you can see that we're approaching the size of atoms which is a fundamental barrier, but it'll be two or three generations before we get that far—but that's as far out as we've ever been able to see. We have another 10 to 20 years before we reach a fundamental limit. By then they'll be able to make bigger chips and have transistor budgets in the billions.[119]

In 2016 the International Technology Roadmap for Semiconductors, after using Moore's law to drive the industry since 1998, produced its final roadmap. It no longer centered its research and development plan on Moore's law. Instead, it outlined what might be called the "More than Moore" strategy, in which the needs of applications drive chip development rather than a focus on semiconductor scaling. Application drivers range from smartphones to AI to data centers.[120]

IEEE began a road-mapping initiative in 2016, Rebooting Computing, named the International Roadmap for Devices and Systems (IRDS).[121]

Some forecasters, including Gordon Moore,[122] predict that Moore's law will end by around 2025.[123][120][124] Although Moore's law will reach a physical limit, some forecasters are optimistic about the continuation of technological progress in a variety of other areas, including new chip architectures, quantum computing, and AI and machine learning.[125][126] Nvidia CEO Jensen Huang declared Moore's law dead in 2022;[2] several days later, Intel CEO Pat Gelsinger countered with the opposite claim.[3]

Digital electronics have contributed to world economic growth in the late twentieth and early twenty-first centuries.[127] The primary driving force of economic growth is the growth of productivity,[128] which Moore's law factors into.
Moore (1995) expected that "the rate of technological progress is going to be controlled from financial realities".[129] The reverse could and did occur around the late 1990s, however, with economists reporting that "productivity growth is the key economic indicator of innovation."[130] Moore's law describes a driving force of technological and social change, productivity, and economic growth.[131][132][128]

An acceleration in the rate of semiconductor progress contributed to a surge in U.S. productivity growth,[133][134][135] which reached 3.4% per year in 1997–2004, outpacing the 1.6% per year during both 1972–1996 and 2005–2013.[136] As economist Richard G. Anderson notes, "Numerous studies have traced the cause of the productivity acceleration to technological innovations in the production of semiconductors that sharply reduced the prices of such components and of the products that contain them (as well as expanding the capabilities of such products)."[137]

The primary negative implication of Moore's law is that obsolescence pushes society up against the Limits to Growth. As technologies continue to rapidly improve, they render predecessor technologies obsolete. In situations in which security and survivability of hardware or data are paramount, or in which resources are limited, rapid obsolescence often poses obstacles to smooth or continued operations.[138]

Several measures of digital technology are improving at exponential rates related to Moore's law, including the size, cost, density, and speed of components. Moore wrote only about the density of components, "a component being a transistor, resistor, diode or capacitor",[129] at minimum cost.

Transistors per integrated circuit – The most popular formulation is of the doubling of the number of transistors on ICs every two years. At the end of the 1970s, Moore's law became known as the limit for the number of transistors on the most complex chips. The graph at the top of this article shows this trend holds true today. As of 2025, the commercially available processor possessing one of the highest numbers of transistors is a GB202 graphics processor with more than 92.2 billion transistors.[139]

Density at minimum cost per transistor – This is the formulation given in Moore's 1965 paper.[1] It is not just about the density of transistors that can be achieved, but about the density of transistors at which the cost per transistor is the lowest.[140] As more transistors are put on a chip, the cost to make each transistor decreases, but the chance that the chip will not work due to a defect increases. In 1965, Moore examined the density of transistors at which cost is minimized, and observed that, as transistors were made smaller through advances in photolithography, this number would increase at "a rate of roughly a factor of two per year".[1]

Dennard scaling – This posits that power usage would decrease in proportion to area (both voltage and current being proportional to length) of transistors. Combined with Moore's law, performance per watt would grow at roughly the same rate as transistor density, doubling every 1–2 years. According to Dennard scaling, transistor dimensions would be scaled by 30% (0.7×) every technology generation, thus reducing their area by 50%. This would reduce the delay by 30% (0.7×) and therefore increase operating frequency by about 40% (1.4×). Finally, to keep the electric field constant, voltage would be reduced by 30%, reducing energy by 65% and power (at 1.4× frequency) by 50%.[c] Therefore, in every technology generation, transistor density would double, the circuit becomes 40% faster, and power consumption (with twice the number of transistors) stays the same.[141] Dennard scaling ended in 2005–2010, due to leakage currents.[17]
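All of these factors follow from the single 0.7× linear shrink; a minimal Python sketch checking the arithmetic (the capacitance-scaling assumption C ∝ k is the textbook one, not taken from this article):

```python
# One Dennard generation: linear dimensions shrink by k = 0.7.
k = 0.7

density = 1 / k**2           # ~2.04x transistors per unit area
frequency = 1 / k            # ~1.43x clock (delay falls ~30%)
energy = k * k**2            # switching energy ~ C*V^2 with C ~ k, V ~ k: ~0.34x (-65%)
power = energy * frequency   # per-transistor power at 1.4x frequency: ~0.49x (-50%)
chip_power = 2 * power       # with twice the transistors: ~0.98x, roughly constant

print(f"density x{density:.2f}, frequency x{frequency:.2f}, "
      f"energy x{energy:.2f}, power x{power:.2f}, chip power x{chip_power:.2f}")
```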
The exponential processor transistor growth predicted by Moore does not always translate into exponentially greater practical CPU performance. Since around 2005–2007, Dennard scaling has ended, so even though Moore's law continued after that, it has not yielded proportional dividends in improved performance.[15][142] The primary reason cited for the breakdown is that at small sizes, current leakage poses greater challenges and also causes the chip to heat up, which creates a threat of thermal runaway and therefore further increases energy costs.[15][142][17]

The breakdown of Dennard scaling prompted a greater focus on multicore processors, but the gains offered by switching to more cores are lower than the gains that would be achieved had Dennard scaling continued.[143][144] In another departure from Dennard scaling, Intel microprocessors adopted a non-planar tri-gate FinFET at 22 nm in 2012 that is faster and consumes less power than a conventional planar transistor.[145] The rate of performance improvement for single-core microprocessors has slowed significantly.[146] Single-core performance was improving by 52% per year in 1986–2003 and 23% per year in 2003–2011, but slowed to just 7% per year in 2011–2018.[146]

Quality-adjusted price of IT equipment – The price of information technology (IT), computers and peripheral equipment, adjusted for quality and inflation, declined 16% per year on average over the five decades from 1959 to 2009.[147][148] The pace accelerated, however, to 23% per year in 1995–1999, triggered by faster IT innovation,[130] and later slowed to 2% per year in 2010–2013.[147][149]

While quality-adjusted microprocessor price improvement continues,[150] the rate of improvement likewise varies, and is not linear on a log scale. Microprocessor price improvement accelerated during the late 1990s, reaching 60% per year (halving every nine months) versus the typical 30% improvement rate (halving every two years) during the years earlier and later.[151][152] Laptop microprocessors in particular improved 25–35% per year in 2004–2010, and slowed to 15–25% per year in 2010–2013.[153]

The number of transistors per chip cannot explain quality-adjusted microprocessor prices fully.[151][154][155] Moore's 1995 paper does not limit Moore's law to strict linearity or to transistor count: "The definition of 'Moore's Law' has come to refer to almost anything related to the semiconductor industry that on a semi-log plot approximates a straight line. I hesitate to review its origins and by doing so restrict its definition."[129]

Hard disk drive areal density – A similar prediction (sometimes called Kryder's law) was made in 2005 for hard disk drive areal density.[156] The prediction was later viewed as over-optimistic.
Several decades of rapid progress in areal density slowed around 2010, from 30–100% per year to 10–15% per year, because of noise related to smaller grain size of the disk media, thermal stability, and writability using available magnetic fields.[157][158]

Fiber-optic capacity – The number of bits per second that can be sent down an optical fiber increases exponentially, faster than Moore's law; this trend is known as Keck's law, in honor of Donald Keck.[159]

Network capacity – According to Gerald Butters,[160][161] the former head of Lucent's Optical Networking Group at Bell Labs, there is another version, called Butters' law of photonics,[162] a formulation that deliberately parallels Moore's law. Butters' law says that the amount of data coming out of an optical fiber is doubling every nine months.[163] Thus, the cost of transmitting a bit over an optical network decreases by half every nine months. The availability of wavelength-division multiplexing (sometimes called WDM) increased the capacity that could be placed on a single fiber by as much as a factor of 100. Optical networking and dense wavelength-division multiplexing (DWDM) are rapidly bringing down the cost of networking, and further progress seems assured. As a result, the wholesale price of data traffic collapsed in the dot-com bubble. Nielsen's law says that the bandwidth available to users increases by 50% annually.[164]
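For comparison, a nine-month doubling is a far steeper curve than Moore's two-year doubling; a short Python check (illustrative only):

```python
# Annual growth implied by each doubling period: 2**(12/months) - 1.
for name, months in [("Moore's law (24-month doubling)", 24),
                     ("Butters' law (9-month doubling)", 9)]:
    print(f"{name}: {2 ** (12 / months) - 1:.0%} per year")
# ~41% vs ~152% per year; Nielsen's law states ~50% per year directly.
```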
Pixels per dollar – Similarly, Barry Hendy of Kodak Australia has plotted pixels per dollar as a basic measure of value for a digital camera, demonstrating the historical linearity (on a log scale) of this market and the opportunity to predict the future trend of digital camera price, LCD and LED screens, and resolution.[165][166][167][168]

The great Moore's law compensator (TGMLC), also known as Wirth's law – generally referred to as software bloat, this is the principle that successive generations of computer software increase in size and complexity, thereby offsetting the performance gains predicted by Moore's law. In a 2008 article in InfoWorld, Randall C. Kennedy,[169] formerly of Intel, introduced this term using successive versions of Microsoft Office between the year 2000 and 2007 as his premise. Despite the gains in computational performance during this time period according to Moore's law, Office 2007 performed the same task at half the speed on a prototypical year 2007 computer as compared to Office 2000 on a year 2000 computer.

Library expansion – was calculated in 1945 by Fremont Rider to double in capacity every 16 years, if sufficient space were made available.[170] He advocated replacing bulky, decaying printed works with miniaturized microform analog photographs, which could be duplicated on demand for library patrons or other institutions. He did not foresee the digital technology that would follow decades later to replace analog microform with digital imaging, storage, and transmission media. Automated, potentially lossless digital technologies allowed vast increases in the rapidity of information growth in an era that now sometimes is called the Information Age.

Carlson curve – is a term coined by The Economist[171] to describe the biotechnological equivalent of Moore's law, and is named after author Rob Carlson.[172] Carlson accurately predicted that the doubling time of DNA sequencing technologies (measured by cost and performance) would be at least as fast as Moore's law.[173] Carlson curves illustrate the rapid (in some cases hyperexponential) decreases in cost, and increases in performance, of a variety of technologies, including DNA sequencing, DNA synthesis, and a range of physical and computational tools used in protein expression and in determining protein structures.

Eroom's law – is a pharmaceutical drug development observation that was deliberately written as Moore's law spelled backward in order to contrast it with the exponential advancements of other forms of technology (such as transistors) over time. It states that the cost of developing a new drug roughly doubles every nine years.

Experience curve effects – each doubling of the cumulative production of virtually any product or service is accompanied by an approximately constant percentage reduction in the unit cost. The acknowledged first documented qualitative description of this dates from 1885.[174][175] A power curve was used to describe this phenomenon in a 1936 discussion of the cost of airplanes.[176]

Edholm's law – Phil Edholm observed that the bandwidth of telecommunication networks (including the Internet) is doubling every 18 months.[177] The bandwidths of online communication networks have risen from bits per second to terabits per second. The rapid rise in online bandwidth is largely due to the same MOSFET scaling that enabled Moore's law, as telecommunications networks are built from MOSFETs.[178]

Haitz's law – predicts that the brightness of LEDs increases as their manufacturing cost goes down.

Swanson's law – is the observation that the price of solar photovoltaic modules tends to drop 20 percent for every doubling of cumulative shipped volume. At present rates, costs go down 75% about every 10 years.
https://en.wikipedia.org/wiki/Moore%27s_law
In mathematics, in particular abstract algebra, a graded ring is a ring such that the underlying additive group is a direct sum of abelian groups $R_i$ such that $R_i R_j \subseteq R_{i+j}$. The index set is usually the set of nonnegative integers or the set of integers, but can be any monoid. The direct sum decomposition is usually referred to as gradation or grading.

A graded module is defined similarly (see below for the precise definition). It generalizes graded vector spaces. A graded module that is also a graded ring is called a graded algebra. A graded ring could also be viewed as a graded $\mathbb{Z}$-algebra.

The associativity is not important (in fact not used at all) in the definition of a graded ring; hence, the notion applies to non-associative algebras as well; e.g., one can consider a graded Lie algebra.

Generally, the index set of a graded ring is assumed to be the set of nonnegative integers, unless otherwise explicitly specified. This is the case in this article.

A graded ring is a ring that is decomposed into a direct sum of additive groups

$R = \bigoplus_{n=0}^{\infty} R_n = R_0 \oplus R_1 \oplus R_2 \oplus \cdots$

such that $R_m R_n \subseteq R_{m+n}$ for all nonnegative integers $m$ and $n$. A nonzero element of $R_n$ is said to be homogeneous of degree $n$. By definition of a direct sum, every nonzero element $a$ of $R$ can be uniquely written as a sum $a = a_0 + a_1 + \cdots + a_n$ where each $a_i$ is either 0 or homogeneous of degree $i$. The nonzero $a_i$ are the homogeneous components of $a$.

Some basic properties are that $R_0$ is a subring of $R$ (in particular, the multiplicative identity $1$ is homogeneous of degree zero) and that each $R_n$ is an $R_0$-module.
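A standard first example (classical, not specific to this article) is a polynomial ring graded by total degree: take $R = k[x, y]$ with

$R = \bigoplus_{n \ge 0} R_n, \qquad R_n = \{\text{homogeneous polynomials of total degree } n\},$

so $R_2$ has basis $x^2, xy, y^2$, and the product of $x \in R_1$ with $xy \in R_2$ is $x^2 y \in R_3 = R_{1+2}$, as the grading requires.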
An ideal $I \subseteq R$ is homogeneous if for every $a \in I$, the homogeneous components of $a$ also belong to $I$. (Equivalently, if it is a graded submodule of $R$; see § Graded module.) The intersection of a homogeneous ideal $I$ with $R_n$ is an $R_0$-submodule of $R_n$ called the homogeneous part of degree $n$ of $I$. A homogeneous ideal is the direct sum of its homogeneous parts.

If $I$ is a two-sided homogeneous ideal in $R$, then $R/I$ is also a graded ring, decomposed as

$R/I = \bigoplus_{n=0}^{\infty} R_n / I_n,$

where $I_n$ is the homogeneous part of degree $n$ of $I$.

The corresponding idea in module theory is that of a graded module, namely a left module $M$ over a graded ring $R$ such that

$M = \bigoplus_{i} M_i \quad \text{and} \quad R_i M_j \subseteq M_{i+j}$

for every $i$ and $j$.

A morphism $f : N \to M$ of graded modules, called a graded morphism or graded homomorphism, is a homomorphism of the underlying modules that respects grading; i.e., $f(N_i) \subseteq M_i$. A graded submodule is a submodule that is a graded module in its own right and such that the set-theoretic inclusion is a morphism of graded modules. Explicitly, a graded module $N$ is a graded submodule of $M$ if and only if it is a submodule of $M$ and satisfies $N_i = N \cap M_i$. The kernel and the image of a morphism of graded modules are graded submodules.

Remark: To give a graded morphism from a graded ring to another graded ring with the image lying in the center is the same as to give the structure of a graded algebra to the latter ring.

Given a graded module $M$, the $\ell$-twist of $M$ is a graded module defined by $M(\ell)_n = M_{n+\ell}$ (cf. Serre's twisting sheaf in algebraic geometry). Let $M$ and $N$ be graded modules. If $f \colon M \to N$ is a morphism of modules, then $f$ is said to have degree $d$ if $f(M_n) \subseteq N_{n+d}$. An exterior derivative of differential forms in differential geometry is an example of such a morphism having degree 1.

Given a graded module $M$ over a commutative graded ring $R$, one can associate the formal power series $P(M, t) \in \mathbb{Z}[[t]]$:

$P(M, t) = \sum_{n} \ell(M_n) t^n$

(assuming the $\ell(M_n)$ are finite). It is called the Hilbert–Poincaré series of $M$.

A graded module is said to be finitely generated if the underlying module is finitely generated. The generators may be taken to be homogeneous (by replacing the generators by their homogeneous parts).

Suppose $R$ is a polynomial ring $k[x_0, \dots, x_n]$, $k$ a field, and $M$ a finitely generated graded module over it. Then the function $n \mapsto \dim_k M_n$ is called the Hilbert function of $M$. The function coincides for large $n$ with an integer-valued polynomial called the Hilbert polynomial of $M$.
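For example, taking $M = R = k[x_0, \dots, x_n]$ with the standard grading, the degree-$d$ piece has dimension $\binom{n+d}{n}$ (a classical computation, not from this article), so

$P(R, t) = \sum_{d \ge 0} \binom{n+d}{n} t^d = \frac{1}{(1-t)^{n+1}},$

and the Hilbert function $d \mapsto \binom{n+d}{n}$ is itself a polynomial in $d$ of degree $n$.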
An associative algebra $A$ over a ring $R$ is a graded algebra if it is graded as a ring. In the usual case where the ring $R$ is not graded (in particular if $R$ is a field), it is given the trivial grading (every element of $R$ is of degree 0). Thus, $R \subseteq A_0$ and the graded pieces $A_i$ are $R$-modules.

In the case where the ring $R$ is also a graded ring, then one requires that

$R_i A_j \subseteq A_{i+j}.$

In other words, we require $A$ to be a graded left module over $R$. Examples of graded algebras are common in mathematics. Graded algebras are much used in commutative algebra and algebraic geometry, homological algebra, and algebraic topology. One example is the close relationship between homogeneous polynomials and projective varieties (cf. homogeneous coordinate ring).

The above definitions have been generalized to rings graded using any monoid $G$ as an index set. A $G$-graded ring $R$ is a ring with a direct sum decomposition

$R = \bigoplus_{i \in G} R_i$

such that $R_i R_j \subseteq R_{i \cdot j}$. Elements of $R$ that lie inside $R_i$ for some $i \in G$ are said to be homogeneous of grade $i$.

The previously defined notion of "graded ring" now becomes the same thing as an $\mathbb{N}$-graded ring, where $\mathbb{N}$ is the monoid of natural numbers under addition. The definitions for graded modules and algebras can also be extended this way, replacing the indexing set $\mathbb{N}$ with any monoid $G$.

Some graded rings (or algebras) are endowed with an anticommutative structure. This notion requires a homomorphism of the monoid of the gradation into the additive monoid of $\mathbb{Z}/2\mathbb{Z}$, the field with two elements. Specifically, a signed monoid consists of a pair $(\Gamma, \varepsilon)$ where $\Gamma$ is a monoid and $\varepsilon \colon \Gamma \to \mathbb{Z}/2\mathbb{Z}$ is a homomorphism of additive monoids. An anticommutative $\Gamma$-graded ring is a ring $A$ graded with respect to $\Gamma$ such that

$xy = (-1)^{\varepsilon(\deg x)\,\varepsilon(\deg y)} yx$

for all homogeneous elements $x$ and $y$.

Intuitively, a graded monoid is the subset of a graded ring, $\bigoplus_{n \in \mathbb{N}_0} R_n$, generated by the $R_n$'s, without using the additive part. That is, the set of elements of the graded monoid is $\bigcup_{n \in \mathbb{N}_0} R_n$.

Formally, a graded monoid[1] is a monoid $(M, \cdot)$ with a gradation function $\phi : M \to \mathbb{N}_0$ such that $\phi(m \cdot m') = \phi(m) + \phi(m')$. Note that the gradation of $1_M$ is necessarily 0. Some authors further require that $\phi(m) \neq 0$ when $m$ is not the identity.

Assuming the gradations of non-identity elements are non-zero, the number of elements of gradation $n$ is at most $g^n$, where $g$ is the cardinality of a generating set $G$ of the monoid. Therefore the number of elements of gradation $n$ or less is at most $n + 1$ (for $g = 1$) or $\frac{g^{n+1}-1}{g-1}$ otherwise. Indeed, each such element is the product of at most $n$ elements of $G$, and only $\frac{g^{n+1}-1}{g-1}$ such products exist. Similarly, the identity element cannot be written as the product of two non-identity elements. That is, there is no unit divisor in such a graded monoid.

These notions allow us to extend the notion of a power series ring. Instead of the indexing family being $\mathbb{N}$, the indexing family could be any graded monoid, assuming that the number of elements of degree $n$ is finite for each integer $n$. More formally, let $(K, +_K, \times_K)$ be an arbitrary semiring and $(R, \cdot, \phi)$ a graded monoid. Then $K\langle\langle R \rangle\rangle$ denotes the semiring of power series with coefficients in $K$ indexed by $R$. Its elements are functions from $R$ to $K$. The sum of two elements $s, s' \in K\langle\langle R \rangle\rangle$ is defined pointwise: it is the function sending $m \in R$ to $s(m) +_K s'(m)$. The product is the function sending $m \in R$ to the infinite sum $\sum_{p, q \in R,\; p \cdot q = m} s(p) \times_K s'(q)$. This sum is well defined (i.e., finite) because, for each $m$, there are only a finite number of pairs $(p, q)$ such that $pq = m$.

In formal language theory, given an alphabet $A$, the free monoid of words over $A$ can be considered as a graded monoid, where the gradation of a word is its length.
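The free monoid makes the definition concrete; a minimal Python sketch, with word concatenation as the monoid operation and length as the gradation (names are illustrative):

```python
# Free monoid of words: the operation is concatenation, the identity is the
# empty word, and the gradation phi is word length, so that
# phi(m * m') == phi(m) + phi(m').

def phi(word: str) -> int:
    """Gradation function: the grade of a word is its length."""
    return len(word)

m, m_prime = "ab", "bca"
assert phi(m + m_prime) == phi(m) + phi(m_prime)  # 5 == 2 + 3
assert phi("") == 0  # the identity element necessarily has grade 0
```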
https://en.wikipedia.org/wiki/Graded_algebra
Dreadlocks, also known as dreads or locs, are a hairstyle made of rope-like strands of matted hair. Dreadlocks can form naturally in very curly hair, or they can be created with techniques like twisting, backcombing, or crochet.[1][2][3][4]

The word dreadlocks is usually understood to come from Jamaican Creole dread, "member of the Rastafarian movement who wears his hair in dreadlocks" (compare Nazirite), referring to their dread or awe of God.[5] An older name for dreadlocks was elflocks, from the notion that elves had matted the locks in people's sleep. Other origins have been proposed. Some authors trace the term to the Mau Mau, who apparently took it from British colonialists' 1959 descriptions of their "dreadful" hair. In their 2014 book Hair Story: Untangling the Roots of Black Hair in America, Ayana Byrd and Lori Tharps claimed that the name dredlocs originated in the time of the slave trade: when transported Africans disembarked from the slave ships after spending months confined in unhygienic conditions, whites would report that their undressed and matted kinky hair was "dreadful". According to them, it is due to these circumstances that many people wearing the style today drop the "a" in dreadlock to avoid negative implications.[6]

The word dreadlocks refers to locks of entangled hair.[7] Several languages have their own names for these locks.

According to Sherrow in the Encyclopedia of Hair: A Cultural History, dreadlocks date back to ancient times in various cultures. In ancient Egypt, Egyptians wore locked hairstyles, and wigs appeared on bas-reliefs, statuary, and other artifacts.[14] Mummified remains of Egyptians with locked wigs have also been recovered from archaeological sites.[15] According to Maria Delongoria, braided hair was worn by people in the Sahara Desert as early as 3000 BCE.

Dreadlocks were also worn by followers of Abrahamic religions. For example, Ethiopian Coptic Bahatowie priests adopted dreadlocks as a hairstyle before the fifth century CE (400 or 500 CE). Locking hair was practiced by some ethnic groups in East, Central, West, and Southern Africa.[16][17][18]

Pre-Columbian Aztec priests were described in Aztec codices (including the Durán Codex, the Codex Tudela and the Codex Mendoza) as wearing their hair untouched, allowing it to grow long and matted.[19] Bernal Díaz del Castillo records:

There were priests with long robes of black cloth... The hair of these priests was very long and so matted that it could not be separated or disentangled, and most of them had their ears scarified, and their hair was clotted with blood.

The earliest known possible depictions of dreadlocks in Europe date back as far as 1600–1500 BCE in the Minoan Civilization, centered in Crete (now part of Greece).[21] Frescoes discovered on the Aegean island of Thera (modern Santorini, Greece) portray individuals with long braided hair or long dreadlocks.[20][23][24][25] Another source describes the hair of the boys in the Akrotiri Boxer Fresco as long tresses, not dreadlocks. Tresses of hair are defined by Collins Dictionary as braided hair, braided plaits, or long loose curls of hair.[26][27][28]

In Senegal, the Baye Fall, followers of the Mouride movement, a Sufi movement of Islam founded in 1887 CE by Shaykh Aamadu Bàmba Mbàkke, are famous for growing dreadlocks and wearing multi-colored gowns.[29] Cheikh Ibra Fall, founder of the Baye Fall school of the Mouride Brotherhood, popularized the style by adding a mystic touch to it.[30] This sect of Islam in Senegal, whose Muslims wear ndjan (dreadlocks), aimed to Africanize Islam.
Dreadlocks to this group of Islamic followers symbolize their religious orientation.[31][32] Jamaican Rastas also reside in Senegal and have settled in areas near Baye Fall communities. Baye Fall and Jamaican Rastas have similar cultural beliefs regarding dreadlocks. Both groups wear knitted caps to cover their locs and wear locs for religious and spiritual purposes.[33] Male members of the Baye Fall religion wear locs to detach from mainstream Western ideals.[34]

In the 1970s, Americans and Britons attended reggae concerts and were exposed to various aspects of Jamaican culture, including dreadlocks. Hippies related to the Rastafarian idea of rejecting capitalism and colonialism, symbolized by the name "Babylon". Rastafarians rejected Babylon in multiple ways, including by wearing their hair naturally in locs to defy Western standards of beauty. The 1960s was the height of the civil rights movement in the U.S., and some White Americans joined Black people in the fight against inequality and segregation and were inspired by Black culture. As a result, some White people joined the Rastafarian movement. Dreadlocks were not a common hairstyle in the United States, but by the 1970s, some White Americans were inspired by reggae music, the Rastafarian movement, and African-American hair culture and started wearing dreadlocks.[35][36] According to authors Bronner and Dell Clark, the clothing styles worn by hippies in the 1960s and 1970s were copied from African-American culture. The word hippie comes from the African-American slang word hip. African-American dress, hairstyles such as braids (often decorated with beads) and dreadlocks, and language were copied by hippies and developed into a new countercultural movement.[37][38]

In Europe in the 1970s, hundreds of Jamaicans and other Caribbean people immigrated to metropolitan centers such as London, Birmingham, Paris, and Amsterdam. Communities of Jamaicans, Caribbeans, and Rastas emerged in these areas. Thus Europeans in these metropolitan cities were introduced to Black cultures from the Caribbean and Rastafarian practices and were inspired by Caribbean culture, leading some of them to adopt Black hair culture, music, and religion. However, the strongest influence of the Rastafari religion is among Europe's Black population.[39][40]

When reggae music, which espoused Rastafarian ideals, gained popularity and mainstream acceptance in the 1970s, thanks to Bob Marley's music and cultural influence, dreadlocks (often called "dreads") became a notable fashion statement worldwide, and have been worn by prominent authors, actors, athletes, and rappers.[41][42] Rastafari influenced its members worldwide to embrace dreadlocks. Black Rastas loc their hair to embrace their African heritage and accept African features as beautiful, such as dark skin tones, Afro-textured hair, and African facial features.[43]

Hip hop and rap artists such as Lauryn Hill, Lil Wayne, T-Pain, Snoop Dogg, J. Cole, Wiz Khalifa, Chief Keef, Lil Jon, and others wear dreadlocks, which further popularized the hairstyle in the 1990s, the early 2000s, and the present day. Dreadlocks are a part of hip-hop fashion and reflect Black cultural music of liberation and identity.[44][45][46][47] Many rappers and Afrobeat artists in Uganda wear locs, such as Navio, Delivad Julio, Fik Fameica, Vyper Ranking, Byaxy, Liam Voice, and other artists.
From reggae music to hip hop, rap, and Afrobeat, Black artists in the African diaspora wear locs to display their Black identity and culture.[48][49][50]

Youth in Kenya who are fans of rap and hip hop music, as well as Kenyan rappers and musicians, wear locs to connect to the history of the Mau Mau freedom fighters, who wore locs as symbols of anti-colonialism, and to Bob Marley, who was a Rasta.[51] Hip hop and reggae fashion spread to Ghana and fused with traditional Ghanaian culture. Ghanaian musicians wear dreadlocks incorporating reggae symbols and hip hop clothes mixed with traditional Ghanaian textiles, such as wearing Ghanaian headwraps to hold their locs.[52][53] Ghanaian women wear locs as a symbol of African beauty. The beauty industry in Ghana believes locs are a traditional African hair practice and markets hair care products to promote natural African hairstyles such as afros and locs.[54] Previous generations of Black artists have inspired younger contemporary Black actresses to loc their hair, such as Chloe Bailey, Halle Bailey, and R&B and pop music singer Willow Smith. More Black actors in Hollywood are choosing to loc their hair to embrace their Black heritage.[55]

Although more Black women in Hollywood and the beauty and music industries are wearing locs, there has never been a Black Miss America winner with locs, because there is pushback in the fashion industry against Black women's natural hair. For example, model Adesuwa Aighewi locked her hair and was told she might not receive any casting calls because of her dreadlocks. Some Black women in modeling agencies are forced to straighten their hair. However, more Black women are resisting and choosing to wear Black hairstyles such as afros and dreadlocks in fashion shows and beauty pageants.[56][57] For example, in 2007, Miss Universe Jamaica and Rastafarian Zahra Redwood was the first Black woman to break the barrier on a world pageant stage when she wore locs, paving the way and influencing other Black women to wear locs in beauty pageants. In 2015, Miss Jamaica World Sanneta Myrie was the first contestant to wear locs to the Miss World pageant.[58] In 2018, Dee-Ann Kentish-Rogers of Britain was crowned Miss Universe wearing her locs and became the first Black British woman to win the competition with natural locs.[59][60]

Hollywood cinema often uses the dreadlock hairstyle as a prop in movies for villains and pirates. According to author Steinhoff, this appropriates dreadlocks and removes them from their original meaning of Black heritage to one of dread and otherness. In the movie Pirates of the Caribbean, the pirate Jack Sparrow wears dreadlocks. Dreadlocks are used in Hollywood to mystify a character and make them appear threatening or as living a life of danger. In the movie The Curse of the Black Pearl, pirates were dressed in dreadlocks to signify their cursed lives.[61]

Locks have been worn for various reasons in many cultures and ethnic groups around the world throughout history. Their use has also been raised in debates about cultural appropriation.[62][63][64][65][66][67]

The practice of wearing braids and dreadlocks in Africa dates back to 3000 BCE in the Sahara Desert. It has been commonly thought that other cultures influenced the dreadlock tradition in Africa. The Kikuyu and Somali wear braided and locked hairstyles.[68][69] Warriors among the Fulani, Wolof, and Serer in Mauritania, and the Mandinka in Mali, were known for centuries to have worn cornrows when young and dreadlocks when old. In West Africa, the water spirit Mami Wata is said to have long locked hair.
Mami Wata's spiritual powers of fertility and healing come from her dreadlocks.[70][71] West African spiritual priests called Dada wear dreadlocks in her honor, as spiritual consecrations venerating Mami Wata.[72] Some Ethiopian Christian monks and Bahatowie priests of the Ethiopian Coptic Church lock their hair for religious purposes.[73][74] In Yorubaland, Aladura church prophets called woolii mat their hair into locs, wear long blue, red, white, or purple garments with caps, and carry iron rods used as a staff.[75] Prophets lock their hair in accordance with the Nazarene vow in the Christian Bible. This is not to be confused with the Rastafari religion, which was started in the 1930s. The Aladura church was founded in 1925 and syncretizes indigenous Yoruba beliefs about dreadlocks with Christianity.[76] Moses Orimolade Tunolase was the founder of the first African Pentecostal movement, started in 1925 in Nigeria. Tunolase wore dreadlocks, and members of his church wear dreadlocks in his honor and for spiritual protection.[77]

The Yoruba word Dada is given to children in Nigeria born with dreadlocks.[78][79] Some Yoruba people believe children born with dreadlocks have innate spiritual powers, and that cutting their hair might cause serious illness. Only the child's mother can touch their hair. "Dada children are believed to be young gods, they are often offered at spiritual altars for chief priests to decide their fate. Some children end up becoming spiritual healers and serve at the shrine for the rest of their lives." If their hair is cut, it must be cut by a chief priest and placed in a pot of water with herbs, and the mixture is used to heal the child if they get sick. Among the Igbo, Dada children are said to be reincarnated Jujuists of great spiritual power because of their dreadlocks.[80][81] Children born with dreadlocks are viewed as special. However, adults with dreadlocks are viewed negatively. Yoruba Dada children's dreadlocks are shaved at a river, and their hair is grown back "tamed" into a hairstyle that conforms to societal standards. The child continues to be recognized as mysterious and special.[82] It is believed that the hair of Dada children was braided in heaven before they were born and will bring good fortune and wealth to their parents. When the child is older, the hair is cut during a special ritual.[83] In Yoruba mythology, the Orisha Yemoja gave birth to a Dada who is a deified king in Yoruba.[84][85] However, dreadlocks are viewed in a negative light in Nigeria due to their stereotypical association with gangs and criminal activity; men with dreadlocks face profiling from Nigerian police.[86][87]

In Ghana, among the Ashanti people, Okomfo priests are identified by their dreadlocks. They are not allowed to cut their hair and must allow it to mat and lock naturally. Locs are symbols of higher power reserved for priests.[88][89][90] Other spiritual people in Southern Africa who wear dreadlocks are Sangomas. Sangomas wear red and white beaded dreadlocks to connect to ancestral spirits. Two African men were interviewed, explaining why they chose to wear dreadlocks.
"One – Mr. Ngqula – said he wore his dreadlocks to obey his ancestors' call, given through dreams, to become a 'sangoma' in accordance with his Xhosa culture. Another – Mr. Kamlana – said he was instructed to wear his dreadlocks by his ancestors and did so to overcome 'intwasa', a condition understood in African culture as an injunction from the ancestors to become a traditional healer, from which he had suffered since childhood."[91][92] In Zimbabwe, there is a tradition of locking hair called mhotsi, worn by spirit mediums called svikiro. The Rastafarian religion spread to Zimbabwe and influenced some women in Harare to wear locs because they believe in the Rastafari's pro-Black teachings and rejection of colonialism.[93]

Maasai warriors in Kenya are known for their long, thin, red dreadlocks, dyed with red root extracts or red ochre (red earth clay).[94] The Himba women in Namibia are also known for their red-colored dreadlocks. Himba women use red earth clay mixed with butterfat and roll their hair with the mixture. They use natural moisturizers to maintain the health of their hair. Hamar women in Ethiopia wear red-colored locs made using red earth clay.[95] In Angola, Mwila women create thick dreadlocks covered in herbs, crushed tree bark, dried cow dung, butter, and oil. The thick dreadlocks are dyed using oncula, an ochre of red crushed rock.[96][97][98] In Southern, Eastern, and Northern Africa, Africans use red ochre as sunscreen and cover their dreadlocks and braids with ochre to hold their hair in styles, and as a hair moisturizer by mixing it with fats. Red ochre has a spiritual meaning of fertility, and in Maasai culture, the color red symbolizes bravery and is used in ceremonies and dreadlock hair traditions.[99][100]

Historians note that West and Central African people braid their hair to signify age, gender, rank, role in society, and ethnic affiliation. It is believed braided and locked hair provides spiritual protection, connects people to the spirit of the earth, bestows spiritual power, and enables people to communicate with the gods and spirits.[101][102][103] In the 15th and 16th centuries, the Atlantic slave trade saw Black Africans forcibly transported from Sub-Saharan Africa to North America and, upon their arrival in the New World, their heads would be shaved in an effort to erase their culture.[104][105][106][107] Enslaved Africans spent months in slave ships, and their hair matted into dreadlocks that European slave traders called "dreadful".[108][109]

In the African diaspora, people loc their hair to have a connection to the spirit world and receive messages from spirits. It is believed locs of hair are antennas making the wearer receptive to spiritual messages.[110] Other reasons people loc their hair are for fashion and to maintain the health of natural hair, also called kinky hair.[111] In the 1960s and 1970s in the United States, the Black Power movement, the Black is Beautiful movement, and the natural hair movement inspired many Black Americans to wear their hair natural in afros, braids, and locked hairstyles.[112][113] The Black is Beautiful cultural movement spread to Black communities in Britain. In the 1960s and 1970s, Black people in Britain were aware of the civil rights movement and other cultural movements in Black America and the social and political changes occurring at the time.
The Black is Beautiful movement and Rastafari culture in Europe influenced Afro-Britons to wear their hair in natural loc styles and afros as a way to fight against racism and Western standards of beauty, and to develop unity among Black people of diverse backgrounds.[114][115] From the twentieth century to the present day, dreadlocks have been symbols of Black liberation and are worn by revolutionaries, activists, womanists, and radical artists in the diaspora.[116][117] For example, Black American literary author Toni Morrison wore locs, and Alice Walker wears locs, to reconnect with their African heritage.[118]

Natural Black hairstyles worn by Black women are seen as not feminine and as unprofessional in some American businesses.[119] Wearing locs in the diaspora signifies a person's racial identity and defiance of European standards of beauty, such as straight blond hair.[120] Locs encourage Black people to embrace other aspects of their culture that are tied to Black hair, such as wearing African ornaments like cowrie shells, beads, and African headwraps that are sometimes worn with locs.[121][122] Some Black Canadian women wear locs to connect to the global Black culture. Dreadlocks unite Black people in the diaspora because wearing locs has the same meaning in areas of the world where there are Black people: opposing Eurocentric standards of beauty and sharing a Black and African diaspora identity.[123][124]

For many Black women in the diaspora, locs are a fashion statement to express individuality and the beauty and versatility of Black hair. Locs are also a protective hairstyle to maintain the health of their hair by wearing kinky hair in natural locs or faux locs. To protect their natural hair from the elements during the changing seasons, Black women wear certain hairstyles to protect and retain the moisture in their hair. Black women wear soft locs as a protective hairstyle because they enclose natural hair inside them, protecting the natural hair from environmental damage. This protective soft loc style is created by "wrapping hair around the natural hair or crocheting pre-made soft locs into cornrows."[125] In the diaspora, Black men and women wear different styles of dreadlocks. Each style requires a different method of care. Freeform locs are formed organically by not combing or otherwise manipulating the hair. There are also goddess locs, faux locs, sister locs, twisted locs, Rasta locs, crinkle locs, invisible locs, and other loc styles.[126][127][128]

Some Indigenous Australians of North West and North Central Australia, as well as the Gold Coast region of Eastern Australia, have historically worn their hair in a locked style, sometimes also having long beards that are fully or partially locked. Traditionally, some wear the dreadlocks loose, while others wrap the dreadlocks around their heads or bind them at the back of the head.[129] In North Central Australia, the tradition is for the dreadlocks to be greased with fat and coated with red ochre, which assists in their formation.[130] In 1931 in Warburton Range, Western Australia, a photograph was taken of an Aboriginal Australian man with dreadlocks.[131]

In the 1970s, hippies from Australia's southern region moved to Kuranda, where they introduced the Rastafari movement, as expressed in the reggae music of Peter Tosh and Bob Marley, to the Buluwai people. Aboriginal Australians found parallels between the struggles of Black people in the Americas and their own racial struggles in Australia.
Willie Brim, a Buluwai man born in the 1960s in Kuranda, identified with Tosh's and Marley's spiritually conscious music, and, inspired particularly by Peter Tosh's album Bush Doctor, in 1978 he founded a reggae band called Mantaka, after the area alongside the Barron River where he grew up. He combined his people's cultural traditions with the reggae guitar he had played since he was young, and his band's music reflects Buluwai culture and history. Now a leader of the Buluwai people and a cultural steward, Brim and his band send an "Aboriginal message" to the world. He and other Buluwai people wear dreadlocks as a native part of their culture and not as an influence from the Rastafari religion. Although Brim was inspired by reggae music, he is not a Rastafarian, as he and his people have their own spirituality.[132] Foreigners visiting Australia think the Buluwai people wearing dreadlocks were influenced by the Rastafarian movement, but the Buluwai say their ancestors wore dreadlocks before the movement began.[133] Some Indigenous Australians wear an Australian Aboriginal flag (a symbol of unity and Indigenous identity in Australia) tied around their head to hold their dreadlocks.[134]

Within Tibetan Buddhism and other more esoteric forms of Buddhism, locks have occasionally been substituted for the more traditional shaved head. The most recognizable of these groups are known as the Ngagpas of Tibet. For Buddhists of these particular sects and degrees of initiation, their locked hair is not only a symbol of their vows but an embodiment of the particular powers they are sworn to carry.[135] Hevajra Tantra 1.4.15 states that the practitioner of particular ceremonies "should arrange his piled up hair" as part of the ceremonial protocol.[136] Archaeologists found a statue of a male deity, Shiva, with dreadlocks in Stung Treng province in Cambodia.[137] In a sect of tantric Buddhism, some initiates wear dreadlocks.[138][139] This sect, called weikza and associated with Passayana or Vajrayana Buddhism, is practiced in Burma. The initiates spend years in the forest with this practice, and when they return to the temples, they do not shave their heads in order to reintegrate.[140]

The practice of wearing a jaṭā (dreadlocks) is observed in modern-day Hinduism,[142][143][144] most notably by sadhus who worship Shiva.[145][146] The Kapalikas, first commonly referenced in the 6th century CE, were known to wear the jaṭā[147] as a form of deity imitation of the deva Bhairava-Shiva.[148] Shiva is often depicted with dreadlocks. According to Ralph Trueb, "Shiva's dreadlocks represent the potent power of his mind that enables him to catch and bind the unruly and wild river goddess Ganga."[149]

In a village in Pune, some women, such as Savitha Uttam Thorat, hesitate to cut their long dreadlocks because it is believed that doing so will cause misfortune or bring down divine wrath. Dreadlocks worn by the women in this region of India are believed to be possessed by the goddess Yellamma.
Cutting off the hair is believed to bring misfortune onto the woman, because having dreadlocks is considered a gift from the goddess Yellamma (also known as Renuka).[150] Some of the women have long and heavy dreadlocks that put a great deal of weight on their necks, causing pain and limited mobility.[151][152] Some in local government and the police in the Maharashtra region demand that the women cut their hair, because the religious practice of Yellamma forbids women from washing and cutting their dreadlocks, causing health issues.[153] These locks of hair dedicated to Yellamma are called jade and are believed to be evidence of divine presence. However, in Southern India, people advocate for the end of the practice.[154] The goddess Angala Parameshvari in Indian mythology is said to have cataik-kari, matted hair (dreadlocks). Women healers in India are identified by their locs and are respected in spiritual rituals because they are believed to be connected to goddesses. A woman who has a jata is believed to derive her spiritual powers, or shakti, from her dreadlocks.[155]

In the Rastafari movement, dreadlocks are symbolic of the Lion of Judah and were inspired by the Nazarites of the Bible.[156] Jamaicans locked their hair after seeing images of Ethiopians with locs fighting Italian soldiers during the Second Italo-Ethiopian War. The afro is the preferred hairstyle worn by Ethiopians. During the Italian invasion, Ethiopians vowed not to cut their hair, following the Biblical example of Samson, who got his strength from his seven locks of hair, until Ethiopia was liberated and Emperor Ras Tafari Makonnen (Haile Selassie) returned from exile.[157] Scholars also cite another, indirect Ethiopian influence on Rastas locking their hair: the Bahatowie priests in Ethiopia, who have worn dreadlocks for religious reasons since the 5th century AD.[158] A further African influence was seeing photos of Mau Mau freedom fighters with locs in Kenya fighting against the British authorities in the 1950s. To the Mau Mau freedom fighters, dreadlocks were a symbol of anti-colonialism, and this symbolism inspired Rastas to loc their hair in opposition to racism and to promote an African identity.[159][160][161] The branch of Rastafari that was inspired to loc their hair after the Mau Mau freedom fighters was the Nyabinghi Order, previously called Young Black Faith, who were considered a radical group of younger Rastafari members. Eventually, other Rastafari groups started locking their hair.[162]

In Rastafarian belief, people wear locs for a spiritual connection to the universe and the spirit of the earth. It is believed that by shaking their locs, they will bring down the destruction of Babylon. Babylon, in Rastafarian belief, is systemic racism, colonialism, and any system of economic and social oppression of Black people.[163][164] Locs are also worn to defy European standards of beauty and to help develop a sense of Black pride and an acceptance of African features as beautiful.[165][166] In another branch of Rastafari, the Boboshanti Order, dreadlocks are worn to display a Black person's identity and as social protest against racism.[167] The Bobo Ashanti are one of the strictest Mansions of Rastafari.
They cover their locs with bright turbans and wear long robes, and can usually be distinguished from other Rastafari members because of this.[168] Other Rastas wear a Rastacap to tuck their locs under.[169] The Bobo Ashanti ("Bobo" meaning "black" in Iyaric;[170] and "Ashanti" in reference to the Ashanti people of Ghana, whom the Bobos claim as their ancestors)[171] were founded by Emmanuel Charles Edwards in 1959 during the period known as the "groundation", when many protests calling for the repatriation of African descendants and former slaves took place in Kingston. A Boboshanti branch spread to Ghana through repatriated Jamaicans and other Black Rastas moving there. Before Rastas lived in Ghana, Ghanaians and other West Africans already had their own beliefs about locked hair. In West Africa, children born with locked hair are believed to be endowed with spiritual power, and Dada children, that is, those born with dreadlocks, are believed to have been given to their parents by water deities. Rastas and Ghanaians hold similar beliefs about the spiritual significance of dreadlocks, such as not touching a person's or child's locs, keeping locs clean, the locs' connection to spirits, and the spiritual powers locs bestow on the wearer.[172]

Dreadlocks have become a popular hairstyle among professional athletes. However, some athletes have been discriminated against and forced to cut their dreadlocks. For example, in December 2018, a Black high school wrestler in New Jersey was forced to cut his dreadlocks 90 seconds before his match, sparking a civil rights case that led to the passage of the CROWN Act in 2019.[173] In professional American football, the number of players with dreadlocks has increased since Al Harris and Ricky Williams first wore the style during the 1990s. In 2012, about 180 National Football League players wore dreadlocks. A significant number of these players are defensive backs, who are less likely to be tackled than offensive players. According to the NFL's rulebook, a player's hair is considered part of their "uniform", meaning the locks are fair game for opponents attempting to bring a player down.[174][175] In the NBA, Brooklyn Nets guard Jeremy Lin, an Asian-American, garnered mild controversy over his choice of dreadlocks. Former NBA player Kenyon Martin accused Lin of appropriating African-American culture in a since-deleted social media post, after which Lin pointed out that Martin has multiple Chinese characters tattooed on his body.[176] David Diamante, an American boxing ring announcer of Italian American heritage, sports prominent dreadlocks.

Dreadlocks can be formed through several methods. Very curly hair forms single-strand knots that can naturally entangle into dreadlocks.[177] For other hair types, methods used to create dreadlocks include crochet hooks and backcombing. Dreadlocks should not be confused with matting, which results from the unintentional neglect and damage of any type of hair.[178]

On 3 July 2019, California became the first US state to prohibit discrimination over natural hair. Governor Gavin Newsom signed the CROWN Act into law, banning employers and schools from discriminating against hairstyles such as dreadlocks, braids, afros, and twists.[179] Likewise, later in 2019, Assembly Bill 07797 became law in New York state; it "prohibits race discrimination based on natural hair or hairstyles".[180][181] Scholars call discrimination based on hair "hairism".
Despite the passage of the CROWN Act, hairism continues, with some Black people being fired from work or not hired because of their dreadlocks.[182][183][184] According to the CROWN 2023 Workplace Research Study, sixty-six percent of Black women change their hairstyle for job interviews, and twenty-five percent of Black women said they were denied a job because of their hairstyle.[185] The CROWN Act was passed to challenge the idea that Black people must emulate other hairstyles to be accepted in public and educational spaces.[186] As of 2023, 24 states have passed the CROWN Act. July 3 is recognized as National CROWN Day, also called Black Hair Independence Day.[187][188][189]

The Perception Institute, "a consortium of researchers, advocates and strategists" that uses psychological and emotional tests to make participants aware of their racial biases, conducted a "Good Hair Study" using images of Black women wearing locs, afros, twists, and other natural Black hairstyles. A Black-owned hair supply company, Shea Moisture, partnered with the Perception Institute to conduct the study. The tests were done to reduce hair-based and racial discrimination in education, civil justice, and law enforcement settings. The study used an implicit-association test on 4,000 participants of all racial backgrounds and showed that most of the participants held negative views about natural Black hairstyles. The study also showed that Millennials were the most accepting of kinky hair texture on Black people. "Noliwe Rooks, a Cornell University professor who writes about the intersection of beauty and race, says for some reason, natural Black hair just frightens some White people."[190][191]

In September 2016, a federal appeals court decided a lawsuit filed by the Equal Employment Opportunity Commission against the company Catastrophe Management Solutions, located in Mobile, Alabama. The case ended with the decision that it was not a discriminatory practice for the company to refuse to hire an African American because they wore dreadlocks.[192] In some Texas public schools, dreadlocks are prohibited, especially for male students, because long braided hair is considered unmasculine according to Western standards of masculinity, which define masculinity as "short, tidy hair." Black and Native American boys are stereotyped and receive negative treatment and negative labeling for wearing dreadlocks, cornrows, and long braids. Non-white students are thus prohibited from practicing the traditional hairstyles that are part of their culture.[193][194] The policing of Black hairstyles also occurs in London, England. Black students in England are prohibited from wearing natural hairstyles such as dreadlocks, afros, braids, twists, and other African and Black hairstyles. Black students are suspended from school, are stereotyped, and receive negative treatment from teachers.[195] In Midrand, north of Johannesburg in South Africa, a Black girl was expelled from school for wearing her hair in a natural dreadlock style. Hair and dreadlock discrimination is experienced by people of color all over the world who do not conform to Western standards of beauty.[196][197] At Pretoria High School for Girls in Gauteng province in South Africa, Black girls are discriminated against for wearing African hairstyles and are forced to straighten their hair.[198] In 2017, the United States Army lifted its ban on dreadlocks.
In the army, Black women can now wear braids and locs on the condition that they are groomed, clean, and meet the length requirements.[199] From slavery into the present day, the policing of Black women's hair has continued to be exercised by some institutions and people. Even when Black women wear locs that are clean and well kept, some people do not consider locs feminine and professional because of the natural kinky texture of Black hair.[200][201] Four African countries have approved the wearing of dreadlocks through their courts: Kenya, Malawi, South Africa, and Zimbabwe. However, hairism continues despite the approval. Although locked hairstyles are a traditional practice on the African continent, some Africans disapprove of the hairstyle because of cultural taboos or pressure from Europeans in African schools and local African governments to conform to Eurocentric standards of beauty.[202][203] According to a 2011 article from The New Republic, Black men who wear locs are racially profiled and watched more closely by the police, and are more often believed to be "thugs" or involved in gangs and violent crimes than Black men who do not wear dreadlocks.[204]

On 10 December 2010, Guinness World Records rested its "longest dreadlocks" category after investigating its first and only female title holder, Asha Mandela, with this official statement: Following a review of our guidelines for the longest dreadlock, we have taken expert advice and made the decision to rest this category. The reason for this is that it is difficult, and in many cases impossible, to measure the authenticity of the locks due to expert methods employed in the attachment of hair extensions/re-attachment of broken-off dreadlocks. Effectively the dreadlock can become an extension and therefore impossible to adjudicate accurately. It is for this reason Guinness World Records has decided to rest the category and will no longer be monitoring the category for longest dreadlock.[205]
https://en.wikipedia.org/wiki/Dreadlock
A fab lab (fabrication laboratory) is a small-scale workshop offering (personal) digital fabrication.[1][2] A fab lab is typically equipped with an array of flexible computer-controlled tools that cover several different length scales and various materials, with the aim of making "almost anything".[3] This includes prototyping and producing technology-enabled products generally perceived as limited to mass production. While fab labs have yet to compete with mass production and its associated economies of scale in fabricating widely distributed products, they have already shown the potential to empower individuals to create smart devices for themselves. These devices can be tailored to local or personal needs in ways that are not practical or economical using mass production. The fab lab movement is closely aligned with the DIY movement, open-source hardware, maker culture, and the free and open-source movement, and shares philosophy as well as technology with them.

The fab lab program was initiated to broadly explore how the content of information relates to its physical representation and how an under-served community can be empowered by technology at the grassroots level.[4] The program began as a collaboration between the Grassroots Invention Group and the Center for Bits and Atoms at the MIT Media Lab in the Massachusetts Institute of Technology with a grant from the National Science Foundation (NSF, Washington, D.C.) in 2001.[5] Vigyan Ashram in India was the first fab lab to be set up outside MIT. It was established in 2002 and received capital equipment from the NSF (USA) and IIT Kanpur. While the Grassroots Invention Group is no longer in the MIT Media Lab, the Center for Bits and Atoms consortium is still actively involved in continuing research in areas related to description and fabrication, but it does not operate or maintain any of the labs worldwide (with the exception of the mobile fab lab). The fab lab concept also grew out of a popular class at MIT (MAS.863) named "How To Make (Almost) Anything". The class is still offered in the fall semesters.[6]

Flexible manufacturing equipment within a fab lab typically includes computer-controlled tools such as laser cutters, CNC milling machines, 3D printers, and vinyl cutters. One of the larger projects undertaken by fab labs is free community FabFi wireless networks (in Afghanistan, Kenya, and the US). The first city-scale FabFi network, set up in Afghanistan, remained in place and active for three years under community supervision and with no special maintenance. The network in Kenya, based at the University of Nairobi (UoN) and building on that experience, started to experiment with controlling service quality and providing added services for a fee to make the network cost-neutral.

Fab Academy leverages the fab lab network to teach hands-on digital fabrication skills.[7] Students convene at fab lab "supernodes" for the 19-week course to earn a diploma and build a portfolio. In some cases, the diploma is accredited or offers academic credit.[8] The curriculum is based on MIT's rapid prototyping course MAS 863: How to Make (Almost) Anything.[9] The course is estimated to cost US$5,000, but the cost varies with location and available scholarship opportunities. All course materials are publicly archived online. Fab City has been set up to explore innovative ways of creating the city of the future.[10] It focuses on transforming and shaping the way materials are sourced and used.
This transformation should lead to a shift in the urban model from 'PITO' to 'DIDO', that is, from 'product-in, trash-out' to 'data-in, data-out'.[11] This could eventually transform cities into self-sufficient entities by 2054, in line with the pledge that Barcelona has made.[12] Fab City links to the fab lab movement because the two draw on the same human capital: fab cities make use of the innovative spirit of fab lab users.[13] The Green Fab Lab Network, which started in Catalonia's Green Fablab,[14] promotes environmental awareness through entrepreneurship.[15] For example, it promotes distributed recycling, in which locals recycle their plastic waste, turning locally sourced shredded plastic into items of value with fused particle fabrication/fused granular fabrication (FPF/FGF) 3D printing, which is a good option both economically and environmentally.[16][17] A listing of all official fab labs is maintained by the community through the website fablabs.io.[18] As of November 2019, there were 1,830 fab labs in the world in total, located on every continent except Antarctica.
https://en.wikipedia.org/wiki/Fablab
File binders are utility software that allow a user to "bind" multiple files together, resulting in a single executable. They are commonly used by hackers to insert other programs, such as Trojan horses, into otherwise harmless files, making them more difficult to detect. Malware builders (such as those for keyloggers or stealers) often include a binder by default.[1] A polymorphic packer is a file binder with a polymorphic engine. It thus has the ability to make its payload mutate over time, so that it is more difficult to detect and remove.
https://en.wikipedia.org/wiki/File_binder
WirelessHART, within telecommunications and computing, is a wireless sensor networking technology. It is based on the Highway Addressable Remote Transducer Protocol (HART). Developed as a multi-vendor, interoperable wireless standard, WirelessHART was defined for the requirements of process field device networks. The protocol utilizes a time-synchronized, self-organizing, and self-healing mesh architecture, and supports operation in the 2.4 GHz ISM band using IEEE 802.15.4 standard radios. The underlying wireless technology is based on the work of Dust Networks' TSMP technology.[1]

The standard was initiated in early 2004 and developed by 37 HART Communications Foundation (HCF) companies that, amongst others, included ABB, Emerson, Endress+Hauser, Pepperl+Fuchs, Siemens, Freescale Semiconductor, Software Technologies Group (which developed the initial WirelessHART WiTECK stack), and AirSprite Technologies, which went on to form WiTECK, an open non-profit membership organization whose mission is to provide a reliable, cost-effective, high-quality portfolio of core enabling system software for industrial wireless sensing applications under a company- and platform-neutral umbrella. WirelessHART was approved by a vote of the 210-member general HCF membership, ratified by the HCF Board of Directors, and introduced to the market in September 2007.[2] On September 27, 2007, the Fieldbus Foundation, Profibus Nutzerorganisation, and HCF announced a wireless cooperation team to develop a specification for a common interface to a wireless gateway, further protecting users' investments in technology and work practices for leveraging these industry-pervasive networks. Following its completed work on the WirelessHART standard in September 2007, the HCF offered the International Society of Automation (ISA) an unrestricted, royalty-free copyright license, allowing the ISA100 committee access to the WirelessHART standard.[3]

Backward compatibility with the HART "user layer" allows transparent adaptation of HART-compatible control systems and configuration tools to integrate new wireless networks and their devices, as well as continued use of proven configuration and system-integration work practices. It is estimated that 25 million HART field devices are installed worldwide, and approximately 3 million new wired HART devices ship each year. In September 2008, Emerson became the first process automation supplier to begin production shipments of its WirelessHART-enabled products.[4] During the summer of 2009, NAMUR, an international user association in the chemical and pharmaceutical processing industries, conducted a field test of WirelessHART to verify alignment with the NAMUR requirements for wireless automation in process applications.[5] WirelessHART was approved by the International Electrotechnical Commission (IEC) in January 2009, with a revision released in April 2010. The latest edition, version 2, was released in 2016 as IEC/PAS 62591:2016.[6]
https://en.wikipedia.org/wiki/WirelessHART
COMMAND.COM is the default command-line interpreter for MS-DOS, Windows 95, Windows 98 and Windows Me. In the case of DOS, it is the default user interface as well. It has an additional role as the usual first program run after boot (init process). COMMAND.COM's successor on OS/2 and Windows NT systems is cmd.exe, although COMMAND.COM is available in virtual DOS machines on IA-32 versions of those operating systems as well. The COMMAND.COM filename was also used by Disk Control Program (DCP), an MS-DOS derivative by the former East German VEB Robotron.[2]

COMMAND.COM is a DOS program. Programs launched from COMMAND.COM are DOS programs that use the DOS API to communicate with the disk operating system. The compatible command processor under FreeDOS is sometimes also called FreeCom. As a shell, COMMAND.COM has two distinct modes of operation. The first is interactive mode, in which the user types commands which are then executed immediately. The second is batch mode, which executes a predefined sequence of commands stored as a text file with the .BAT extension. Internal commands are commands stored directly inside the COMMAND.COM binary; thus, they are always available but can only be executed directly from the command interpreter. All commands are executed after the ↵ Enter key is pressed at the end of the line. COMMAND.COM is not case-sensitive, meaning commands can be typed in any mixture of upper and lower case. Control structures are mostly used inside batch files, although they can also be used interactively.[4][3]

On exit, all external commands submit a return code (a value between 0 and 255) to the calling program. Most programs follow a certain convention for their return codes (for instance, 0 for a successful execution).[5][6][7][8] If a program was invoked by COMMAND.COM, the internal IF command with its ERRORLEVEL conditional can be used to test the error conditions of the last invoked external program; a sketch of this appears after the excerpts below. Under COMMAND.COM, internal commands do not establish a new ERRORLEVEL value. Batch files for COMMAND.COM can have four kinds of variables. Because DOS is a single-tasking operating system, piping is achieved by running commands sequentially, redirecting to and from a temporary file. COMMAND.COM makes no provision for redirecting the standard error channel. Generally, the command line length in interactive mode is limited to 126 characters;[11][12][13] in MS-DOS 6.22, it is limited to 127 characters.

[...] Multiple Commands: You can type several commands on the same command line, separated by a caret [^]. For example, if you know you want to copy all of your .TXT files to drive A: and then run CHKDSK to be sure that drive A's file structure is in good shape, you could enter the following command: C:\>COPY *.TXT A: ^ CHKDSK A: You may put as many commands on the command line as you wish, as long as the total length of the command line does not exceed 511 characters. You can use multiple commands in aliases and batch files as well as at the command line. If you don't like using the default command separator, you can pick another character using the SETDOS /C command or the CommandSep directive in 4DOS.INI.
[...] SETDOS /C: (Compound character) This option sets the character used for separating multiple commands on the same line. The default is the caret [^]. You cannot use any of the redirection characters [<>|], or the blank, tab, comma, or equal sign as the command separator. The command separator is saved by SETLOCAL and restored by ENDLOCAL. This example changes the separator to a tilde [~]: C:\>SETDOS /C~ (You can specify either the character itself, or its ASCII code as a decimal number, or a hexadecimal number preceded by 0x.) [...] CommandSep = c (^): This is the character used to separate multiple commands on the same line. [...] Special Character Compatibility: If you use two or more of our products, or if you want to share aliases and batch files with users of different products, you need to be aware of the differences in three important characters: the Command Separator [...], the Escape Character [...], and the Parameter Character [...]. The default values of each of these characters in each product are shown in the following chart: [...] Product, Separator, Escape, Parameter [...] 4DOS: ^, ↑, & [...] 4OS2, 4NT, Take Command: &, ^, $ [...] (The up-arrow [↑] represents the ASCII Ctrl-X character, numeric value 24.) [...]

[...] all MS-DOS versions prior to Windows 95 [...] used a COM-style COMMAND.COM file which has a special signature at the start of the file [...] queried by the MS-DOS BIOS before it loads the shell, but not by the DR-DOS BIOS [...] COMMAND.COM would [...] check that it is running on the "correct" DOS version, so if you would load their COMMAND.COM under DR-DOS, you would receive a "Bad version" error message and their COMMAND.COM would exit, so DR-DOS would [...] display an error message "Bad or missing command interpreter" (if DR-DOS was trying to load the SHELL= command processor after having finished CONFIG.SYS processing). In this case, you could enter the path to a valid DR-DOS COMMAND.COM (C:\DRDOS\COMMAND.COM) and everything was fine. Now, things have changed since MS-DOS 7.0 [...] COMMAND.COM has internally become an EXE-style file, so there is no magic [...] signature [...] to check [...] thus no way for DR-DOS to rule out an incompatible COMMAND.COM. Further, their COMMAND.COM no longer does any version checks, but [...] does not work under DR-DOS [...] just crashes [...] the PC DOS COMMAND.COM works fine under DR-DOS [...]
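The ERRORLEVEL mechanism described above can be illustrated with a short batch file. This is a minimal sketch, not from the source article; the paths are illustrative, and the use of XCOPY with its conventional return codes (0 for success, 1 for "no files found", 4 and above for errors) is an assumption for illustration. Note that IF ERRORLEVEL n is true when the return code is n or higher, so values must be tested from highest to lowest:

    @ECHO OFF
    REM Run an external command; external commands set ERRORLEVEL on exit.
    XCOPY C:\DATA\*.* A: > NUL
    REM Test highest values first: IF ERRORLEVEL n means "return code >= n".
    IF ERRORLEVEL 4 GOTO DISKERR
    IF ERRORLEVEL 1 GOTO NOFILES
    ECHO Copy succeeded.
    GOTO END
    :NOFILES
    ECHO No files were found to copy.
    GOTO END
    :DISKERR
    ECHO A disk or initialization error occurred.
    :END

Because internal commands do not establish a new ERRORLEVEL value under COMMAND.COM, the tests above can safely follow intervening ECHO or GOTO statements without losing the external program's return code.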
https://en.wikipedia.org/wiki/COMMAND.COM
Passing is the ability of a person to be regarded as a member of an identity group or category, such as racial identity, ethnicity, caste, social class, sexual orientation, gender, religion, age or disability status, that is often different from their own.[1][2][3][4] Passing may be used to increase social acceptance[1][2] and to cope with stigma by removing stigma from the presented self, and it can result in other social benefits as well. Thus, passing may serve as a form of self-preservation or self-protection when expressing one's true or prior identity may be dangerous.[4][5]

Passing may require acceptance into a community and may lead to temporary or permanent leave from another community to which an individual previously belonged. Thus, passing can result in separation from one's original self, family, friends, or previous living experiences.[6] Successful passing may contribute to economic security, safety, and stigma avoidance, but it may take an emotional toll as a result of the denial of one's previous identity and may lead to depression or self-loathing.[4] When an individual deliberately attempts to "pass" as a member of an identity group, they may actively engage in the performance of behaviors that they believe to be associated with membership of that group. Passing practices may also include information management, in which the passer attempts to control or conceal any stigmatizing information that may reveal disparity from their presumed identity.[7]

Etymologically, the term is simply the nominalisation of the verb pass in its phrasal use with for or as, as in a counterfeit passing for the genuine article or an impostor passing as another person. It has been in popular use since at least the late 1920s.[8][9][10][11]

Passing, as a sociological concept, was coined by Erving Goffman as a term for one response to possessing some kind of stigma that is often less visible.[12][13][7][14] Stigma, according to Goffman's framework in his work Stigma: Notes on the Management of Spoiled Identity (1963), "refer[s] to an attribute that is deeply discrediting" or "an undesired differentness from what [was] anticipated".[7] According to Goffman, "This discrepancy, when known about or apparent, spoils his social identity; it has the effect of cutting him off from society and from himself so that he stands a discredited person facing an unaccepting world".[7] Thus, inhabiting an identity associated with stigma may be particularly dangerous and harmful.[12] According to Link and Phelan, Roschelle and Kaufman, and Marvasti, it may lead to loss of opportunities due to status loss and discrimination, alienation and marginalization, harassment and embarrassment, and social rejection.[12][15][16][17] These can be a persistent source of psychological issues.[12] To resist, manage, and avoid stigma and its associated consequences, individuals might choose to pass as a non-stigmatized identity. According to Nathan Shippee, "Passing communicates a seemingly 'normal' self, one that does not apparently possess the stigma."[12] According to Patrick Kermit, "To be suspected of being 'not quite human' is the essence of stigmatisation, and passing is a desperate means to the end of appearing fully human in the sense of being like most other people."[14] When deciding whether to pass, there are many factors stigmatized actors may consider. First, there is the notion of visibility: how visible their stigma is may determine how much ease or difficulty they face in attempting to pass.
However, how visible their stigma is may also determine the intensity and frequency of adversity they face from others as a result of their stigma. Goffman explains, "Traditionally, the question of passing has raised the issue of the 'visibility' of a particular stigma, that is, how well or how badly the stigma is adapted to provide means of communicating that the individual possesses it."[7] Other scholars further emphasize the cruciality of visibility and conclude that "whether a stigma is evident to the audience can mark the difference between being discredited or merely discreditable".[12]

Other factors may include risk, context, and intimacy. Different contexts and situations may make passing easier or harder, safer or riskier. How well others know the passer may also constrain their ability to pass. One scholar explains, "Individuals may pass in some situations but not others, effectively creating different arenas of life (depending on whether the stigma is known or not). Goffman claimed that actors develop theories about which situations are risky for disclosure, but risk is only one factor: intimacy with the audience can lead actors to disclose, or to feel guilty for not doing so."[12] In addition to guilt, since passing can sometimes involve the fabrication of a false personal history to aid in the concealment of a stigma, passing can complicate personal relationships and cause feelings of shame at having to be dishonest about one's identity.[13][18] According to Goffman, "It can be assumed that the possession of a discreditable secret failing takes on a deeper meaning when the persons to whom the individual has not yet revealed himself are not strangers to him but friends. Discovery prejudices not only the current social situation, but established relationships as well; not only the current image others present have of him, but also the one they will have in the future; not only appearances, but also reputation."[7] Relating to this experience of passing, actors may have an ambivalent attachment to their stigma that can cause them to fluctuate between acceptance and rejection of their stigmatized identity. Thus, there may be times when the stigmatized individual feels more inclined to pass and others when they feel less inclined.[7]

Despite the potentially distressing and dangerous aspects of passing, some passers have expressed a habituation to it. In one study, Shippee recounts that "participants often portrayed it as a normal or mundane event."[12] For those whose stigma invites particularly hostile responses from most of society, passing may become a regular part of everyday life that is necessary to survive in that society. Regardless, the stigma that passers are subject to is not inherent. As Goffman explains, stigma exists not within the person but between an attribute and an audience. As a result, stigma is socially constructed and differs based on the cultural beliefs, social structures, and situational dynamics of various contexts. Thus, passing is also immersed in the socially structured meaning and behavior of daily life in different contexts, and passing implies familiarity with that knowledge.[12][7][17][16] Passing has been interpreted in sociology and cultural studies through different analytical lenses, such as information management (Goffman) and cultural performance (Bryant Keith Alexander).
Goffman defines passing as "the management of undisclosed discrediting information about self."[13][14][7][18][17] Similarly, other scholars add that "Passing is mostly associated with strategies of information management that the discreditable use to pass for normal [in everyday life]".[13] Whereas some individuals' stigma is immediately apparent, passers deal with a different problem in that their stigma is not always so obvious. Goffman elaborates: "The issue is not that of managing tension generated during social contacts, but rather that of managing information about his failing. To display or not to display; to tell or not to tell; to let on or not to let on; to lie or not to lie; and in each case, to whom, how, when, and where."[7][18]

In Goffman's understanding, individuals possess various symbols that convey social information about them. There are prestige symbols that convey creditable information, and there are stigma symbols that convey discrediting information. By managing the visibility and apparentness of their stigma symbols, passers prevent others from learning of their discredited and stigmatized status and remain merely discreditable. Passing may also include the adoption of certain prestige symbols and a personal history or biography of social information that helps to conceal, and draw attention away from, their actual stigmatized status.[7]

Goffman also briefly notes, "The concealment of creditable facts (reverse passing) of course occurs."[7] Reverse passing, related to terms like "blackfishing", has emerged as a topic of discourse as critics raise concerns over cultural appropriation and accuse non-stigmatized individuals, such as the prominent celebrities Kim Kardashian and Ariana Grande, of concealing creditable information about themselves for some social benefit.[19] Notions of cultural appropriation, racial fetishization, and reverse passing entered public debate particularly in 2015, after a former college instructor and president of the Spokane, Washington, NAACP, Rachel Dolezal, was discovered to be white with no black racial heritage after she had presented herself as black for several years.[19][20] As many point out, reverse passing crucially differs from passing in that individuals who reverse pass are not stigmatized and therefore are not subject to the harms of stigma that may force stigmatized individuals to pass.[19]

Bryant Keith Alexander, a professor of Communication, Performance and Cultural Studies at Loyola Marymount University, defines cultural performance as "a process of delineation using performative practices to mark membership and association." Using this definition, passing is reframed as a method of maintaining cultural performance and choosing, both consciously and unconsciously, to participate in other performances. Rather than through the management of symbols and the social information they convey, passers assume "the necessary and performative strategies that signal membership." Alexander reiterates, "Cultural membership is thus maintained primarily through recognizable performative practices." Hence, to successfully pass is to have one's cultural performance assessed and validated by others.[21] On that interpretation, avoiding stigma by passing necessitates an intimate understanding and awareness of the social constructions of meaning and the expected behaviors that signal membership. Shippee explains that "far from merely appraising situations to determine when concealment is required, passing encompasses active interpretations of several aspects of social life.
It requires an understanding of cultural conventions, namely: what is considered 'normal' and what is required to maintain it; customs of everyday interaction; and the symbolic character of the stigma itself.... Passing, then, embodies a creative mobilization of situational and cultural awareness, structural considerations, self-appraisals, and sense-making".[12] Alexander recognizes that and then asserts that "passing is a product (an assessed state), a process (an active engagement), performative (ritualized repetition of communicative acts), and a reflection of one's positionality (politicized location), knowing that its existential accomplishment always resides in liminality."[21]

Historically and genealogically, the term passing has referred to mixed-race or biracial Americans identifying as, or being perceived as, belonging to a different racial group. In Passing and the Fictions of Identity, Elaine Ginsberg cites an ad for the escaped slave Edmund Kenney as an example of racial passing; Kenney, a biracial person, was able to pass as white in the United States in the 1800s.[2] In the entry "Passing" for the GLBTQ Encyclopedia Project, Tina Gianoulis states that "for light-skinned African Americans during the times of slavery and the intense periods of racial resegregation that followed, passing for white was a survival tool that allowed them to gain education and employment that would have been denied them had they been recognized as 'colored' people." The term passing has since been expanded to include other ethnicities and identity categories. Discriminated groups in North America and Europe may modify their accents, word choices, manner of dress, grooming habits, and even names in an attempt to appear to be members of a majority group or of a privileged minority group.[22][23]

Nella Larsen's 1929 novella, Passing, helped to establish the term after several years of prior use. Both its author and its subject were of mixed African-American and Caucasian descent; its subject passes for white. The novella was written during the Harlem Renaissance, when passing was commonly found in both reality and fiction. Since the 1960s Civil Rights Movement, racial pride has decreased the weight given to passing as an important issue for black Americans. Still, it is possible and common for biracial people to pass based on appearance or by hiding or omitting their backgrounds.[24][25] In "Adjusting the Borders: Bisexual Passing and Queer Theory," Lingel discusses bell hooks' notion of racial passing in conjunction with a discussion of bisexual engagement in passing.[6] Romani people also have a history of passing, particularly in the United States, and often tell outsiders that they belong to other ethnicities such as Latino, Greek, Middle Eastern, or Native American.

Class passing, similar to racial and gender passing, is the concealment or misrepresentation of one's social class. In Class-Passing: Social Mobility in Film and Popular Culture, Gwendolyn Audrey Foster suggests that racial and gender passing are often stigmatized but that class passing is generally accepted as normative behavior.[26] Class passing is common in the United States and is linked to the notions of the American Dream and of upward class mobility.[24] English-language novels that feature class passing include The Talented Mr. Ripley, Anne of Green Gables, and Horatio Alger novels.
Films featuring class-passing characters include Catch Me If You Can, My Fair Lady, Pinky, ATL, and Andy Hardy Meets Debutante.[26] Class passing also figures into reality television programs such as Joe Millionaire, in which contestants are often immersed in displays of great material wealth or may have to conceal their class status.[26]

The perception of an individual's sexual orientation is often based on their visual identity. The term visual identity refers to the expression of personal, social, and cultural identities through dress and appearance. In Visible Lesbians and Invisible Bisexuals: Appearance and Visual Identities Among Bisexual Women,[27] it is proposed that, through the expression of a visual identity, others "read" a person's appearance and make assumptions about their wider identity. Visual identity is therefore a prominent tool of non-verbal communication. The concept of passing is showcased in research by Jennifer Taub in her Bisexual Women and Beauty Norms.[28] Some participants in the study stated that they attempted to dress as what they perceived as heterosexual when they partnered with a man, and others stated that they tried to dress more like a "lesbian." This exemplifies how visual identities can greatly alter people's immediate assumptions about sexuality; presenting oneself as "heterosexual" is effectively "passing."[28]

Passing by sexual orientation occurs when an individual's perceived sexual orientation or sexuality differs from the sexuality or sexual orientation with which they identify. In the entry "Passing" for the GLBTQ Encyclopedia Project, Tina Gianoulis notes "the presumption of heterosexuality in most modern cultures", which in some parts of the world, such as the United States, may be effectively compulsory; as a result, "most gay men and lesbians in fact spend a great deal of their lives passing as straight even when they do not do so intentionally."[4] The phrase "in the closet" may be used to describe individuals who hide or conceal their sexual orientation.[3][4] In Passing: Identity and Interpretation in Sexuality, Race, and Religion, Maria Sanchez and Linda Schlossberg state that "the dominant social order often implores gay people to stay in the closet (to pass)."[3] Individuals may choose to remain "in the closet" or to pass as heterosexual for a variety of reasons, including a desire to maintain positive relationships with family, and policies or requirements associated with employment such as "Don't ask, don't tell", a policy that required passing as heterosexual within the military or armed forces.[3][4] Bisexual erasure causes some bisexual individuals to feel the need to engage in passing within presumed predominantly-heterosexual circles, and even within LGBTQ circles, for fear of stigma.[6] In Adjusting the Borders: Bisexual Passing and Queer Theory, Jessica Lingel notes, "The ramifications of being denied a public sphere in which to practice a sexual identity that isn't labeled licentious or opportunistic leads some women to resort to manufacturing profiles of gayness or straightness to pledge membership within a community."[6]

Gender passing refers to individuals who are perceived as belonging to a gender identity group that differs from the gender they were assigned at birth.[2] In Passing and the Fictions of Identity, Elaine Ginsberg provides the story of Brandon Teena, who was assigned female at birth but lived as a man, as an example of gender passing in the United States.
In 1993, Brandon moved to Falls City, Nebraska, where he initially passed as a man. However, community members discovered that Brandon had been assigned female at birth, and two men in the community shot and murdered him.[2] As another example of gender passing, Ginsberg cites Billy Tipton, a jazz musician who was also assigned female at birth but lived and performed as a man until his death in 1989. Within the transgender community, passing refers to the perception or recognition of trans individuals as belonging to the gender identity to which they are transitioning rather than the sex or gender they were assigned at birth.[2][4]

Passing as a member of a different religion, or as not religious at all, is not uncommon among minority religious communities. In the entry "Passing" for the GLBTQ Encyclopedia Project, Tina Gianoulis states that "at times of rabid anti-Semitism in Europe and the Americas, many Jewish families also either converted to Christianity or passed as Christian" for the sake of survival.[4] Circumcised Jewish males in Germany during World War II attempted to restore their foreskins as part of passing as Gentile. The film Europa, Europa explores that theme. Shia Islam has the doctrine of taqiyya, under which one is lawfully allowed to disavow Islam and profess another faith, while secretly remaining a Muslim, if one's life is at risk. The concept has also been practised by various minority faiths in the Middle East such as the Alawites and Druze.[29][30][31]

Disability passing may refer to the intentional concealment of impairment to avoid the stigma of disability, but it may also describe the exaggeration of an ailment or impairment to receive some benefit, which may take the form of attention or care. In Disability and Passing: Blurring the Lines of Identity, Jeffrey Brune and Daniel Wilson define passing by ability or disability as "the ways that others impose, intentionally or not, a specific disability or non-disability identity on a person."[32] Similarly, in "Compulsory Able-Bodiedness and Queer/Disabled Existence," Robert McRuer argues that "the system of compulsory able-bodiedness...produces disability."[33] People with disabilities may exaggerate their disabilities when they are evaluated for medical care or accommodations, often for fear of being denied support. "There are too many agencies out there with the ostensible purpose of helping us that still believe that as long as we technically can do something, like crab-walking our way into a subway station, we should have to do it," writes Gabe Moses, a wheelchair user who has a limited ability to walk.[34] These pressures may result in disabled people exaggerating symptoms or tiring out their bodies before an evaluation so that they are seen on a "bad day" instead of a "good day."

In sports, some mobility-impaired individuals have been observed strategically exaggerating the extent of their disability to pass as more disabled than they are and be placed in divisions in which they may be more competitive. In quadriplegic rugby, or wheelchair rugby, some players are described as having "incomplete" quadriplegia, in which they may retain some sensation and function in their lower limbs that may allow them to stand and walk in limited capacities.
Under a rule from the United States Quad Rugby Association (USQRA) stating that players need only a combination of upper- and lower-extremity impairment that precludes them from playing able-bodied sports, these incomplete quads may play alongside other quadriplegics who have no sensation or function in their lower limbs. This is justified by classifications the USQRA has developed, in which certified physical therapists compare arm and muscle flexibility, trunk and torso movement, and ease of chair operation between players and rank them by injury level. However, inconsistencies between medical diagnoses of injury and those classifications allow players to perform higher levels of impairment for the classifiers and pass as more disabled than they are. As a result, their ranking may underestimate their capacity, and they may attain a competitive advantage over teams with players whose capacity is not equivalent. That policy has raised questions from some about the ethics and fairness of comparing disabilities, as well as about how competition, inclusion, and ability should be defined in the world of sports.[35]

Individuals with invisible disabilities, such as people with mental illness, intellectual or cognitive disabilities, or physical disabilities that are not immediately obvious to others, such as IBS, Crohn's disease, or ulcerative colitis, may choose whether or not to reveal their identity or to pass as "normal." Passing as non-disabled may protect against discrimination but may also result in a lack of support or in accusations of faking. Autistic people may employ strategies known as "masking" or "camouflaging" to appear non-autistic.[36] This can involve behavior like suppressing or redirecting repetitive movements (stimming), maintaining eye contact despite discomfort, mirroring the body language and tone of others, or scripting conversations.[36][37] Masking may be done to reduce the risk of ostracism or abuse.[38] Autistic masking is often exhausting and is linked to adverse mental health outcomes such as burnout, depression, and suicide.[39][40][41] However, that perspective has been challenged in a 2023 review of autistic masking by Valentina Petrolini, Ekaine Rodríguez-Armendariz, and Agustín Vicente, who question whether all autistic people see "being autistic" as a central aspect of their identity and whether all autistic people are capable of truly hiding their autistic status. Both conditions, they argue, would have to be fulfilled for the analogy to hold; they conclude that only a subgroup of autistic people experiences masking as passing.[36]

Individuals with visible physical impairments or disabilities, such as people with mobility impairments, including individuals who use wheelchairs or scooters, face greater challenges in concealing their disability.[32] In a study of individuals' experiences with prosthetics, the ability of users to pass as "like everybody else", based on the realistic or unrealistic appearance of the prosthetic, was one factor predicting whether patients would accept or reject prosthetic use. Cosmetic prosthetics that were, for example, skin-colored or had the added appearance of veins, hair, and nails were often harder to adapt to and use, but many individuals expressed a preference for them over more functional and more conspicuous prosthetics in order to maintain their personal conceptions of social and bodily identity.
One user of prosthetics characterized her device as one that could "maintain her humanness ('half way human'), which in turn prevented her, quite literally, from being seen to have an 'odd' body." Users also discussed wanting prosthetics that could help them maintain a walking gait that would attract no stares, and prosthetics that could be disguised or concealed under clothes in an effort to pass as non-disabled.[18]

Though passing may occur on the basis of a single subordinate identity such as race, people's intersectional locations often involve multiple marginalized identities. Intersectionality provides a framework for seeing the interconnected nature of oppressive systems and how multiple identities interact within them. Gay Asian men possess two key subordinated identities; in combination, these create unique challenges for them when passing. Sometimes those men must pass as straight to avoid stigma, but around other gay men they may attempt to pass as a non-racialized person, or as white, to avoid the disinterest or fetishization often encountered upon revealing their Asian identities.[42] Recognizing the hidden intersection of the gendered aspects of gay and Asian male stereotypes makes these two distinct experiences even more comprehensible. Gay men are often stereotyped as effeminate and thereby insufficiently masculine as men. Stereotypes characterizing Asian men as too sexual (overly masculine), too feminine (hypo-masculine), or even both also exhibit the gendered nature of racial stereotypes.[43] Thus, passing as the dominant racial or sexuality category also often means passing as gender-correct. When Black transgender men transition in the workplace from identifying as female to passing as cisgender men, gendered racial stereotypes characterizing Black men as overly masculine and violent[44] may affect how previously acceptable behaviors are interpreted. One such Black trans man discovered that he had gone from "being an obnoxious Black woman to a scary Black man" and therefore had to adapt his behavior to gendered scripts to pass.[45]
https://en.wikipedia.org/wiki/Passing_(sociology)
The relational calculus consists of two calculi, the tuple relational calculus and the domain relational calculus, which are part of the relational model for databases and provide a declarative way to specify database queries. The relational calculus is similar to the relational algebra, which is also part of the relational model: while the relational calculus is meant as a declarative language that prescribes no execution order on the subexpressions of a relational calculus expression, the relational algebra is meant as an imperative language, whose sub-expressions are meant to be executed from left to right and inside out, following their nesting. For example, a relational algebra expression prescribes explicit steps to retrieve the phone numbers and names of book stores that supply Some Sample Book, whereas a relational calculus expression formulates the same query in a descriptive or declarative manner; a sketch of both formulations is given below. The relational algebra and the domain-independent relational calculus are logically equivalent: for any algebraic expression, there is an equivalent expression in the calculus, and vice versa. This result is known as Codd's theorem. The raison d'être of the relational calculus is the formalization of query optimization, that is, finding more efficient manners to execute the same query in a database. Query optimization consists in determining from a query the most efficient manner (or manners) to execute it, and it can be formalized as translating a relational calculus expression delivering an answer A into efficient relational algebraic expressions delivering the same answer A.
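A minimal sketch of the two formulations, assuming hypothetical relations Book(title, store_id) and Store(store_id, name, phone) that are not part of the source article. The algebra version prescribes steps (join, then select, then project), while the tuple calculus version only describes the desired tuples:

    % Relational algebra: join Book and Store, restrict to the title,
    % then project the name and phone attributes.
    \pi_{\mathrm{name},\,\mathrm{phone}}\bigl(
        \sigma_{\mathrm{title} = \text{'Some Sample Book'}}
        (\mathrm{Book} \bowtie \mathrm{Store})\bigr)

    % Tuple relational calculus: describe the result declaratively,
    % leaving the execution order unspecified.
    \{\, t.\mathrm{name},\ t.\mathrm{phone} \mid
        \mathrm{Store}(t) \wedge \exists b\, (\mathrm{Book}(b)
        \wedge b.\mathrm{store\_id} = t.\mathrm{store\_id}
        \wedge b.\mathrm{title} = \text{'Some Sample Book'}) \,\}

A query optimizer is free to evaluate the calculus form in any order that delivers the same answer, which is exactly the latitude the algebraic form does not, by itself, express.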
https://en.wikipedia.org/wiki/Relational_calculus
In mathematics, derivators are a proposed framework[1][2] for homological algebra giving a foundation for both abelian and non-abelian homological algebra and various generalizations of it. They were introduced to address the deficiencies of derived categories (such as the non-functoriality of the cone construction) and at the same time to provide a language for homotopical algebra. Derivators were first introduced by Alexander Grothendieck in his long unpublished 1983 manuscript Pursuing Stacks. They were then further developed by him in the huge unpublished 1991 manuscript Les Dérivateurs of almost 2000 pages. Essentially the same concept was introduced (apparently independently) by Alex Heller.[3] The manuscript has been edited for on-line publication by Georges Maltsiniotis. The theory has been further developed by several other people, including Heller, Franke, Keller and Groth.

One of the motivating reasons for considering derivators is the lack of functoriality of the cone construction in triangulated categories. Derivators are able to solve this problem, and to handle general homotopy colimits, by keeping track of all possible diagrams in a category with weak equivalences and the relations between them. Heuristically, given the diagram $\bullet \to \bullet$, the category with two objects and one non-identity arrow, and a functor $F \colon (\bullet \to \bullet) \to A$ to a category $A$ with a class of weak equivalences $W$ (satisfying the right hypotheses), we should have an associated functor $C(F) \colon \bullet \to A[W^{-1}]$ whose target object is unique up to weak equivalence in $A[W^{-1}]$. Derivators are able to encode this kind of information and provide a diagram calculus to use in derived categories and homotopy theory.

Formally, a prederivator $\mathbb{D}$ is a 2-functor $\mathbb{D} \colon \mathrm{Ind}^{op} \to \mathrm{CAT}$ from a suitable 2-category of indices to the category of categories. Typically such 2-functors come from considering the categories $\underline{\mathrm{Hom}}(I^{op}, A)$, where $A$ is called the category of coefficients. For example, $\mathrm{Ind}$ could be the category of small categories which are filtered, whose objects can be thought of as the indexing sets for a filtered colimit. Then, given a morphism of diagrams $f \colon I \to J$, denote by $f^{*}$ the functor $f^{*} \colon \mathbb{D}(J) \to \mathbb{D}(I)$. This is called the inverse image functor. In the motivating example this is just precomposition, so given a functor $F_{J} \in \underline{\mathrm{Hom}}(J^{op}, A)$ there is an associated functor $F_{I} = F_{J} \circ f^{op}$ in $\underline{\mathrm{Hom}}(I^{op}, A)$. Note these 2-functors could be taken to be $\underline{\mathrm{Hom}}(-, A[W^{-1}])$, where $W$ is a suitable class of weak equivalences in a category $A$. There are a number of examples of indexing categories which can be used in this construction. Derivators are then the axiomatization of prederivators which come equipped with adjoint functors, where $f_{!}$ is left adjoint to $f^{*}$ and $f_{*}$ is right adjoint to it. Heuristically, $f_{*}$ should correspond to inverse limits and $f_{!}$ to colimits.
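One compact way to record this structure (a sketch using the notation above; the reading of $p_{!}$ and $p_{*}$ as homotopy colimit and limit for the projection to the terminal category is a standard interpretation, not spelled out in this article):

    % For every morphism of index categories f : I -> J, a derivator D
    % carries an adjoint triple
    f_{!} \;\dashv\; f^{*} \;\dashv\; f_{*}, \qquad
    f_{!}, f_{*} \colon \mathbb{D}(I) \longrightarrow \mathbb{D}(J), \qquad
    f^{*} \colon \mathbb{D}(J) \longrightarrow \mathbb{D}(I).
    % Taking p : I -> * to be the unique functor to the terminal category,
    % p_{!} plays the role of the homotopy colimit of an I-shaped diagram
    % and p_{*} that of the homotopy limit.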
https://en.wikipedia.org/wiki/Derivator
In computer science, algorithmic efficiency is a property of an algorithm which relates to the amount of computational resources used by the algorithm. Algorithmic efficiency can be thought of as analogous to engineering productivity for a repeating or continuous process. For maximum efficiency it is desirable to minimize resource usage. However, different resources such as time and space complexity cannot be compared directly, so which of two algorithms is considered to be more efficient often depends on which measure of efficiency is considered most important. For example, bubble sort and timsort are both algorithms to sort a list of items from smallest to largest. Bubble sort organizes the list in time proportional to the number of elements squared ($O(n^2)$, see Big O notation), but only requires a small amount of extra memory which is constant with respect to the length of the list ($O(1)$). Timsort sorts the list in time linearithmic (proportional to a quantity times its logarithm) in the list's length ($O(n \log n)$), but has a space requirement linear in the length of the list ($O(n)$). If large lists must be sorted at high speed for a given application, timsort is a better choice; however, if minimizing the memory footprint of the sorting is more important, bubble sort is a better choice. The importance of efficiency with respect to time was emphasized by Ada Lovelace in 1843 as applied to Charles Babbage's mechanical analytical engine: "In almost every computation a great variety of arrangements for the succession of the processes is possible, and various considerations must influence the selections amongst them for the purposes of a calculating engine. One essential object is to choose that arrangement which shall tend to reduce to a minimum the time necessary for completing the calculation"[1] Early electronic computers had both limited speed and limited random access memory. Therefore, a space–time trade-off occurred. A task could use a fast algorithm using a lot of memory, or it could use a slow algorithm using little memory. The engineering trade-off was therefore to use the fastest algorithm that could fit in the available memory. Modern computers are significantly faster than early computers and have a much larger amount of memory available (gigabytes instead of kilobytes). Nevertheless, Donald Knuth emphasized that efficiency is still an important consideration: "In established engineering disciplines a 12% improvement, easily obtained, is never considered marginal and I believe the same viewpoint should prevail in software engineering"[2] An algorithm is considered efficient if its resource consumption, also known as computational cost, is at or below some acceptable level. Roughly speaking, 'acceptable' means: it will run in a reasonable amount of time or space on an available computer, typically as a function of the size of the input. Since the 1950s computers have seen dramatic increases in both the available computational power and in the available amount of memory, so current acceptable levels would have been unacceptable even 10 years ago. In fact, thanks to the approximate doubling of computer power every 2 years, tasks that are acceptably efficient on modern smartphones and embedded systems may have been unacceptably inefficient for industrial servers 10 years ago. Computer manufacturers frequently bring out new models, often with higher performance.
Software costs can be quite high, so in some cases the simplest and cheapest way of getting higher performance might be to just buy a faster computer, provided it is compatible with an existing computer. There are many ways in which the resources used by an algorithm can be measured: the two most common measures are speed and memory usage; other measures could include transmission speed, temporary disk usage, long-term disk usage, power consumption, total cost of ownership, response time to external stimuli, etc. Many of these measures depend on the size of the input to the algorithm, i.e. the amount of data to be processed. They might also depend on the way in which the data is arranged; for example, some sorting algorithms perform poorly on data which is already sorted, or which is sorted in reverse order. In practice, there are other factors which can affect the efficiency of an algorithm, such as requirements for accuracy and/or reliability. As detailed below, the way in which an algorithm is implemented can also have a significant effect on actual efficiency, though many aspects of this relate to optimization issues. In the theoretical analysis of algorithms, the normal practice is to estimate their complexity in the asymptotic sense. The most commonly used notation to describe resource consumption or "complexity" is Donald Knuth's Big O notation, representing the complexity of an algorithm as a function of the size of the input $n$. Big O notation is an asymptotic measure of function complexity, where $f(n) = O(g(n))$ roughly means the time requirement for an algorithm is proportional to $g(n)$, omitting lower-order terms that contribute less than $g(n)$ to the growth of the function as $n$ grows arbitrarily large. This estimate may be misleading when $n$ is small, but is generally sufficiently accurate when $n$ is large as the notation is asymptotic. For example, bubble sort may be faster than merge sort when only a few items are to be sorted; however either implementation is likely to meet performance requirements for a small list. Typically, programmers are interested in algorithms that scale efficiently to large input sizes, and merge sort is preferred over bubble sort for lists of length encountered in most data-intensive programs. Examples of Big O notation applied to algorithms' asymptotic time complexity include constant $O(1)$ (indexing into an array), logarithmic $O(\log n)$ (binary search), linear $O(n)$ (a single scan), linearithmic $O(n \log n)$ (merge sort), and quadratic $O(n^2)$ (bubble sort). For new versions of software or to provide comparisons with competitive systems, benchmarks are sometimes used, which assist with gauging an algorithm's relative performance. If a new sort algorithm is produced, for example, it can be compared with its predecessors to ensure that it is at least as efficient as before with known data, taking into consideration any functional improvements. Benchmarks can be used by customers when comparing various products from alternative suppliers to estimate which product will best suit their specific requirements in terms of functionality and performance. For example, in the mainframe world certain proprietary sort products from independent software companies such as Syncsort compete with products from the major suppliers such as IBM for speed. Some benchmarks provide opportunities for producing an analysis comparing the relative speed of various compiled and interpreted languages, for example,[3][4] and The Computer Language Benchmarks Game compares the performance of implementations of typical programming problems in several programming languages.
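To make the asymptotic contrast concrete, here is a minimal, hedged timing sketch in Python comparing the built-in sorted() (a Timsort implementation) with a straightforward bubble sort; absolute numbers are machine-dependent.

    import random
    import timeit

    def bubble_sort(items):
        # O(n^2) comparisons, O(1) extra space beyond the working copy.
        a = list(items)
        n = len(a)
        for i in range(n):
            for j in range(n - 1 - i):
                if a[j] > a[j + 1]:
                    a[j], a[j + 1] = a[j + 1], a[j]
        return a

    data = [random.random() for _ in range(1000)]
    t_tim = timeit.timeit(lambda: sorted(data), number=5)  # O(n log n) time
    t_bub = timeit.timeit(lambda: bubble_sort(data), number=5)
    print(f"timsort: {t_tim:.4f}s   bubble sort: {t_bub:.4f}s")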
Even creating "do it yourself" benchmarks can demonstrate the relative performance of different programming languages, using a variety of user specified criteria. This is quite simple, as a "Nine language performance roundup" by Christopher W. Cowell-Shah demonstrates by example.[5] Implementation issues can also have an effect on efficiency, such as the choice of programming language, or the way in which the algorithm is actually coded,[6]or the choice of acompilerfor a particular language, or thecompilation optionsused, or even theoperating systembeing used. In many cases a language implemented by aninterpretermay be much slower than a language implemented by a compiler.[3]See the articles onjust-in-time compilationandinterpreted languages. There are other factors which may affect time or space issues, but which may be outside of a programmer's control; these includedata alignment,data granularity,cache locality,cache coherency,garbage collection,instruction-level parallelism,multi-threading(at either a hardware or software level),simultaneous multitasking, andsubroutinecalls.[7] Some processors have capabilities forvector processing, which allow asingle instruction to operate on multiple operands; it may or may not be easy for a programmer or compiler to use these capabilities. Algorithms designed for sequential processing may need to be completely redesigned to make use ofparallel processing, or they could be easily reconfigured. Asparallelanddistributed computinggrow in importance in the late 2010s, more investments are being made into efficienthigh-levelAPIsfor parallel and distributed computing systems such asCUDA,TensorFlow,Hadoop,OpenMPandMPI. Another problem which can arise in programming is that processors compatible with the sameinstruction set(such asx86-64orARM) may implement an instruction in different ways, so that instructions which are relatively fast on some models may be relatively slow on other models. This often presents challenges tooptimizing compilers, which must have extensive knowledge of the specificCPUand other hardware available on the compilation target to best optimize a program for performance. In the extreme case, a compiler may be forced toemulateinstructions not supported on a compilation target platform, forcing it togenerate codeorlinkan externallibrary callto produce a result that is otherwise incomputable on that platform, even if it is natively supported and more efficient in hardware on other platforms. This is often the case inembedded systemswith respect tofloating-point arithmetic, where small andlow-powermicrocontrollersoften lack hardware support for floating-point arithmetic and thus require computationally expensive software routines to produce floating point calculations. Measures are normally expressed as a function of the size of the inputn{\displaystyle \scriptstyle {n}}. The two most common measures are: For computers whose power is supplied by a battery (e.g.laptopsandsmartphones), or for very long/large calculations (e.g.supercomputers), other measures of interest are: As of 2018[update], power consumption is growing as an important metric for computational tasks of all types and at all scales ranging fromembeddedInternet of thingsdevices tosystem-on-chipdevices toserver farms. This trend is often referred to asgreen computing. 
Less common measures of computational efficiency may also be relevant in some cases. Analysis of algorithms, typically using concepts like time complexity, can be used to get an estimate of the running time as a function of the size of the input data. The result is normally expressed using Big O notation. This is useful for comparing algorithms, especially when a large amount of data is to be processed. More detailed estimates are needed to compare algorithm performance when the amount of data is small, although this is likely to be of less importance. Parallel algorithms may be more difficult to analyze. A benchmark can be used to assess the performance of an algorithm in practice. Many programming languages have an available function which provides CPU time usage. For long-running algorithms the elapsed time could also be of interest. Results should generally be averaged over several tests. Run-based profiling can be very sensitive to hardware configuration and the possibility of other programs or tasks running at the same time in a multi-processing and multi-programming environment. This sort of test also depends heavily on the selection of a particular programming language, compiler, and compiler options, so algorithms being compared must all be implemented under the same conditions. This section is concerned with the use of memory resources (registers, cache, RAM, virtual memory, secondary memory) while the algorithm is being executed. As for time analysis above, analyze the algorithm, typically using space complexity analysis to get an estimate of the run-time memory needed as a function of the size of the input data. The result is normally expressed using Big O notation. There are up to four aspects of memory usage to consider. Early electronic computers, and early home computers, had relatively small amounts of working memory. For example, the 1949 Electronic Delay Storage Automatic Calculator (EDSAC) had a maximum working memory of 1024 17-bit words, while the 1980 Sinclair ZX80 came initially with 1024 8-bit bytes of working memory. In the late 2010s, it became typical for personal computers to have between 4 and 32 GB of RAM, an increase of over 300 million times as much memory. Modern computers can have relatively large amounts of memory (possibly gigabytes), so having to squeeze an algorithm into a confined amount of memory is not the kind of problem it used to be. However, the different types of memory and their relative access speeds can be significant: an algorithm whose memory needs fit in cache memory will be much faster than an algorithm which fits in main memory, which in turn will be very much faster than an algorithm which has to resort to paging. Because of this, cache replacement policies are extremely important to high-performance computing, as are cache-aware programming and data alignment. To further complicate the issue, some systems have up to three levels of cache memory, with varying effective speeds. Different systems will have different amounts of these various types of memory, so the effect of algorithm memory needs can vary greatly from one system to another. In the early days of electronic computing, if an algorithm and its data would not fit in main memory then the algorithm could not be used. Nowadays the use of virtual memory appears to provide much more memory, but at the cost of performance. Much higher speed can be obtained if an algorithm and its data fit in cache memory; in this case minimizing space will also help minimize time.
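Picking up the measurement point above, here is a minimal sketch of averaging CPU time and wall-clock (elapsed) time over several runs, using only Python's standard library:

    import time

    def cpu_and_wall(fn, runs=5):
        """Average CPU time and wall-clock time of fn over several runs."""
        cpu = wall = 0.0
        for _ in range(runs):
            c0, w0 = time.process_time(), time.perf_counter()
            fn()
            cpu += time.process_time() - c0
            wall += time.perf_counter() - w0
        return cpu / runs, wall / runs

    cpu, wall = cpu_and_wall(lambda: sum(i * i for i in range(1_000_000)))
    print(f"avg CPU: {cpu:.3f}s   avg wall: {wall:.3f}s")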
This dependence of speed on memory-access patterns is called the principle of locality, and can be subdivided into locality of reference, spatial locality, and temporal locality. An algorithm which will not fit completely in cache memory but which exhibits locality of reference may perform reasonably well.
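A small sketch of spatial locality: traversing a 2-D array row by row touches memory roughly contiguously, while column-by-column traversal strides across rows. Python's boxed lists blunt the effect relative to C arrays, but the access-pattern difference is usually still measurable:

    import timeit

    N = 1000
    grid = [[1] * N for _ in range(N)]

    def row_major():
        s = 0
        for row in grid:          # contiguous access within each row
            for v in row:
                s += v
        return s

    def column_major():
        s = 0
        for j in range(N):        # strided access: one element per row
            for i in range(N):
                s += grid[i][j]
        return s

    print("row-major:   ", timeit.timeit(row_major, number=10))
    print("column-major:", timeit.timeit(column_major, number=10))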
https://en.wikipedia.org/wiki/Algorithmic_efficiency
Data degradation is the gradual corruption of computer data due to an accumulation of non-critical failures in a data storage device. It is also referred to as data decay, data rot or bit rot.[1] This results in a decline in data quality over time, even when the data is not being utilized. The concept of data degradation also covers progressively minimizing data in interconnected processes, where data is used for multiple purposes at different levels of detail: at specific points in the process chain, data is irreversibly reduced to a level that remains sufficient for the successful completion of the following steps.[2] Data degradation in dynamic random-access memory (DRAM) can occur when the electric charge of a bit in DRAM disperses, possibly altering program code or stored data. DRAM may be altered by cosmic rays[3] or other high-energy particles. Such data degradation is known as a soft error.[4] ECC memory can be used to mitigate this type of data degradation.[5] Data degradation also results from the gradual decay of storage media over the course of years or longer. Causes vary by medium. EPROMs, flash memory and other solid-state drives store data using electrical charges, which can slowly leak away due to imperfect insulation. Modern flash controller chips account for this leak by trying several lower threshold voltages (until ECC passes), prolonging the age of data. Multi-level cells, with much lower distance between voltage levels, cannot be considered stable without this functionality.[6] The chip itself is not affected by this, so reprogramming it approximately once per decade prevents decay. An undamaged copy of the master data is required for the reprogramming. A checksum can be used to assure that the on-chip data is not yet damaged and is ready for reprogramming. The typical SD card, USB stick and M.2 NVMe drive all have a limited endurance. Powering the device on can usually recover data,[citation needed] but error rates will eventually degrade the media to illegibility. Writing zeros to a degraded NAND device can revive the storage to close to new condition for further use.[citation needed] Refresh cycles should be no longer than 6 months to be sure the device remains legible. Magnetic media, such as hard disk drives, floppy disks and magnetic tapes, may experience data decay as bits lose their magnetic orientation. Higher temperatures speed up the rate of magnetic loss. As with solid-state media, re-writing is useful as long as the medium itself is not damaged (see below).[7] Modern hard drives use giant magnetoresistance and have a higher magnetic lifespan on the order of decades. They also automatically correct any errors detected by ECC through rewriting. The reliance on a servowriter can complicate data recovery if the servo data becomes unrecoverable, however. Floppy disks and tapes are poorly protected against ambient air. In warm or humid conditions, they are prone to physical decomposition of the storage medium.[8][7] Optical media such as CD-R, DVD-R and BD-R may experience data decay from the breakdown of the storage medium. This can be mitigated by storing discs in a dark, cool, low-humidity location. "Archival quality" discs are available with an extended lifetime, but are still not permanent. However, data integrity scanning that measures the rates of various types of errors is able to predict data decay on optical media well ahead of uncorrectable data loss occurring.[9] Both the disc dye and the disc backing layer are potentially susceptible to breakdown. Early cyanine-based dyes used in CD-R were notorious for their lack of UV stability.
Early CDs also suffered from CD bronzing, which is related to a combination of poor lacquer material and failure of the aluminium reflection layer.[10] Later discs use more stable dyes or forgo them for an inorganic mixture. The aluminium layer is also commonly swapped out for a gold or silver alloy. Paper media, such as punched cards and punched tape, may literally rot. Mylar punched tape is another approach that does not rely on electromagnetic stability. Degradation of books and printing paper is primarily driven by acid hydrolysis of glycosidic bonds within the cellulose molecule as well as by oxidation;[11] degradation of paper is accelerated by high relative humidity and high temperature, as well as by exposure to acids, oxygen, light, and various pollutants, including various volatile organic compounds and nitrogen dioxide.[12] Data degradation in streaming media acquisition modules, as addressed by repair algorithms, reflects real-time data quality issues caused by device limitations. However, a more general form of data degradation refers to the gradual decay of storage media over extended periods, influenced by factors like physical wear, environmental conditions, or technological obsolescence. Causes of such degradation can vary depending on the medium, such as magnetic fields for hard drives, moisture or temperature for tape storage, or electronic failure over time.[13] One manifestation of data degradation is when one or a few bits are randomly flipped over a long period of time.[14] This can be illustrated with a set of digital images, each consisting of 326,272 bits: starting from an original photo, successive copies have one, two, and three bits flipped. On Linux systems, the binary difference between files can be revealed using the cmp command (e.g. cmp -b bitrot-original.jpg bitrot-1bit-changed.jpg). This deterioration can be caused by a variety of factors that impact the reliability and integrity of digital information, including physical factors, software errors, security breaches, human error, obsolete technology, and unauthorized access incidents.[15][16][17][18] Most disk, disk controller and higher-level systems are subject to a slight chance of unrecoverable failure. With ever-growing disk capacities, file sizes, and increases in the amount of data stored on a disk, the likelihood of the occurrence of data decay and other forms of uncorrected and undetected data corruption increases.[19] Low-level disk controllers typically employ error correction codes (ECC) to correct erroneous data.[20] Higher-level software systems may be employed to mitigate the risk of such underlying failures by increasing redundancy and implementing integrity checking, error correction codes and self-repairing algorithms.[21] The ZFS file system was designed to address many of these data corruption issues.[22] The Btrfs file system also includes data protection and recovery mechanisms,[23][better source needed] as does ReFS.[24] There is no solution that completely eliminates the threat of data degradation,[25] but various measures exist that can stave it off. One of these is to replicate the data as backups. Both the original and the backed-up data are then audited for any faults due to storage media errors by checksumming the data or comparing it with that of other copies.
This is the only way to detect latent faults proactively,[26] which might otherwise go unnoticed until the data is actually accessed.[27] Current storage systems such as those based on RAID already employ such measures internally.[28] Ideally, and especially for data that must be preserved digitally, the replicas should be distributed across multiple administrative sites that function autonomously and deploy diverse hardware and software, increasing resistance to failure as well as to human error and cyberattacks.[29]
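A minimal sketch of the checksum-based auditing just described: a hash recorded at write time exposes a later single-bit flip. The data below is a stand-in, and SHA-256 is one of several hashes that would serve.

    import hashlib

    def sha256(data: bytes) -> str:
        return hashlib.sha256(data).hexdigest()

    original = b"All stored knowledge eventually needs auditing."
    # Simulate bit rot: flip a single bit somewhere in the data.
    corrupted = bytearray(original)
    corrupted[10] ^= 0b00000100
    corrupted = bytes(corrupted)

    # A checksum taken at write time exposes the silent change on audit.
    assert sha256(original) != sha256(corrupted)
    print("stored:", sha256(original))
    print("read  :", sha256(corrupted))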
https://en.wikipedia.org/wiki/Data_degradation
In mathematics, Church encoding is a means of representing data and operators in the lambda calculus. The Church numerals are a representation of the natural numbers using lambda notation. The method is named for Alonzo Church, who first encoded data in the lambda calculus this way. Terms that are usually considered primitive in other notations (such as integers, Booleans, pairs, lists, and tagged unions) are mapped to higher-order functions under Church encoding. The Church–Turing thesis asserts that any computable operator (and its operands) can be represented under Church encoding.[dubious–discuss] In the untyped lambda calculus the only primitive data type is the function. A straightforward implementation of Church encoding slows some access operations from $O(1)$ to $O(n)$, where $n$ is the size of the data structure, making Church encoding impractical.[1] Research has shown that this can be addressed by targeted optimizations, but most functional programming languages instead expand their intermediate representations to contain algebraic data types.[2] Nonetheless Church encoding is often used in theoretical arguments, as it is a natural representation for partial evaluation and theorem proving.[1] Operations can be typed using higher-ranked types,[3] and primitive recursion is easily accessible.[1] The assumption that functions are the only primitive data types streamlines many proofs. Church encoding is complete but only representationally. Additional functions are needed to translate the representation into common data types, for display to people. It is not possible in general to decide if two functions are extensionally equal due to the undecidability of equivalence from Church's theorem. The translation may apply the function in some way to retrieve the value it represents, or look up its value as a literal lambda term. Lambda calculus is usually interpreted as using intensional equality. There are potential problems with the interpretation of results because of the difference between the intensional and extensional definition of equality. Church numerals are the representations of natural numbers under Church encoding. The higher-order function that represents natural number $n$ is a function that maps any function $f$ to its $n$-fold composition. In simpler terms, the "value" of the numeral is equivalent to the number of times the function encapsulates its argument. All Church numerals are functions that take two parameters. Church numerals 0, 1, 2, ... are defined in the lambda calculus starting with 0 not applying the function at all, proceeding with 1 applying the function once, 2 applying the function twice, 3 applying the function three times, and so on. The Church numeral 3 represents the action of applying any given function three times to a value. The supplied function is first applied to a supplied parameter and then successively to its own result. The end result is not the numeral 3 (unless the supplied parameter happens to be 0 and the function is a successor function). The function itself, and not its end result, is the Church numeral 3. The Church numeral 3 means simply to do anything three times. It is an ostensive demonstration of what is meant by "three times". Arithmetic operations on numbers may be represented by functions on Church numerals. These functions may be defined in lambda calculus, or implemented in most functional programming languages (see converting lambda expressions to functions).
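Since the formal lambda definitions are not reproduced in this extract, here is a minimal sketch of Church numerals in Python; the helper names church and unchurch mirror the conversion functions the article later mentions in Haskell.

    # Church numerals as Python lambdas: n(f)(x) == f applied n times to x.
    zero  = lambda f: lambda x: x
    one   = lambda f: lambda x: f(x)
    two   = lambda f: lambda x: f(f(x))
    three = lambda f: lambda x: f(f(f(x)))

    def church(n):
        """Machine integer -> Church numeral."""
        return lambda f: lambda x: x if n == 0 else f(church(n - 1)(f)(x))

    def unchurch(c):
        """Church numeral -> machine integer, by counting applications."""
        return c(lambda k: k + 1)(0)

    assert unchurch(three) == 3
    assert unchurch(church(7)) == 7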
The addition function $\operatorname{plus}(m,n) = m + n$ uses the identity $f^{\circ(m+n)}(x) = f^{\circ m}(f^{\circ n}(x))$. The successor function $\operatorname{succ}(n) = n + 1$ is β-equivalent to $(\operatorname{plus}\ 1)$. The multiplication function $\operatorname{mult}(m,n) = m * n$ uses the identity $f^{\circ(m*n)}(x) = (f^{\circ n})^{\circ m}(x)$. The exponentiation function $\operatorname{exp}(b,n) = b^n$ is given by the definition of Church numerals, $n\ h\ x = h^n\ x$: substituting $h \to b$ and $x \to f$ gives $n\ b\ f = b^n\ f$, which yields the lambda expression $\operatorname{exp} = \lambda b.\lambda n.n\ b$. The $\operatorname{pred}(n)$ function is more difficult to understand. A Church numeral applies a function $n$ times. The predecessor function must return a function that applies its parameter $n - 1$ times. This is achieved by building a container around $f$ and $x$, which is initialized in a way that omits the application of the function the first time; see predecessor for a more detailed explanation. The subtraction function can be written based on the predecessor function. The predecessor function used in the Church encoding is $\operatorname{pred} = \lambda n.\lambda f.\lambda x.n\ (\lambda g.\lambda h.h\ (g\ f))\ (\lambda u.x)\ (\lambda u.u)$. We need a way of applying the function one fewer time to build the predecessor. A numeral $n$ applies the function $f$ $n$ times to $x$; the predecessor function must use the numeral $n$ to apply the function $n-1$ times. Before implementing the predecessor function, here is a scheme that wraps the value in a container function. New functions, called $\operatorname{inc}$ and $\operatorname{init}$, are used in place of $f$ and $x$; the container function is called $\operatorname{value}$. Tabulating a numeral $n$ applied to $\operatorname{inc}$ and $\operatorname{init}$ gives a general recurrence rule. If there is also a function to retrieve the value from the container (called $\operatorname{extract}$), then $\operatorname{extract}$ may be used to define a $\operatorname{samenum}$ function. The $\operatorname{samenum}$ function is not intrinsically useful. However, as $\operatorname{inc}$ delegates calling of $f$ to its container argument, we can arrange that on the first application $\operatorname{inc}$ receives a special container that ignores its argument, allowing the first application of $f$ to be skipped. Call this new initial container $\operatorname{const}$. Then, by replacing $\operatorname{init}$ with $\operatorname{const}$ in the expression for $\operatorname{samenum}$, we get the predecessor function. The value container applies a function to its value; $\operatorname{inc}$ takes a container holding $v$ and returns a new container holding $f\ v$; and the value may be extracted by applying the identity function. To implement $\operatorname{pred}$, the $\operatorname{init}$ function is replaced with the $\operatorname{const}$ that does not apply $f$, where $\operatorname{const}$ is chosen to satisfy the required equation. A much simpler presentation is enabled using combinator notation, from which the same result follows by eta-contraction and then by induction. Pred may also be defined using pairs: this is a simpler definition, but it leads to a more complex expression for pred.
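Continuing the Python sketch above (and assuming its church and unchurch helpers), the arithmetic operators translate directly; pred is the container-based formula just discussed:

    plus = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))
    succ = lambda n: lambda f: lambda x: f(n(f)(x))
    mult = lambda m: lambda n: lambda f: m(n(f))
    exp  = lambda b: lambda n: n(b)

    # Predecessor via the container trick: skip the first application of f.
    pred = (lambda n: lambda f: lambda x:
            n(lambda g: lambda h: h(g(f)))(lambda u: x)(lambda u: u))
    minus = lambda m: lambda n: n(pred)(m)

    assert unchurch(plus(church(2))(church(3))) == 5
    assert unchurch(mult(church(2))(church(3))) == 6
    assert unchurch(exp(church(2))(church(3))) == 8
    assert unchurch(pred(church(3))) == 2
    assert unchurch(minus(church(5))(church(2))) == 3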
The expansion of $\operatorname{pred}\ \operatorname{three}$ can be written out step by step to verify the construction. Division of natural numbers may be implemented recursively.[4] Calculating $n - m$ takes many beta reductions. Unless doing the reduction by hand, this doesn't matter that much, but it is preferable not to have to do this calculation twice. The simplest predicate for testing numbers is IsZero, so consider the condition $\operatorname{IsZero}\ (\operatorname{minus}\ n\ m)$. But this condition is equivalent to $n \leq m$, not $n < m$. If this expression is used, then the mathematical definition of division given above, translated into a function on Church numerals, has a single call to $\operatorname{minus}\ n\ m$; however, the result is that the formula gives the value of $(n-1)/m$. This problem may be corrected by adding 1 to $n$ before calling divide, which yields the definition of divide. Since divide1 is a recursive definition, the Y combinator may be used to implement the recursion, by creating a new function called divby. For example, 9/3 is represented by applying divide to the numerals for 9 and 3; using a lambda calculus calculator, the expression reduces to 3, using normal order. One simple approach for extending Church numerals to signed numbers is to use a Church pair, containing Church numerals representing a positive and a negative value.[5] The integer value is the difference between the two Church numerals. A natural number is converted to a signed number by pairing it with zero, and negation is performed by swapping the values. The integer value is more naturally represented if one of the pair is zero; the OneZero function achieves this condition, and its recursion may be implemented using the Y combinator. Addition is defined mathematically on the pair componentwise, and similarly for subtraction and multiplication; each definition is then translated into lambda calculus. A similar definition is given for division, except that in this definition one value in each pair must be zero (see OneZero above). The divZ function allows us to ignore the value that has a zero component; divZ is then used in the same formula as for multiplication, but with mult replaced by divZ. Rational and computable real numbers may also be encoded in lambda calculus. Rational numbers may be encoded as a pair of signed numbers. Computable real numbers may be encoded by a limiting process that guarantees that the difference from the real value differs by a number which may be made as small as we need.[6][7] The references given describe software that could, in theory, be translated into lambda calculus. Once real numbers are defined, complex numbers are naturally encoded as a pair of real numbers. The data types and functions described above demonstrate that any data type or calculation may be encoded in lambda calculus. This is the Church–Turing thesis. Most real-world languages have support for machine-native integers; the church and unchurch functions convert between nonnegative integers and their corresponding Church numerals. These functions can be given in Haskell, where \ corresponds to the λ of lambda calculus; implementations in other languages are similar. Church Booleans are the Church encoding of the Boolean values true and false. Some programming languages use these as an implementation model for Boolean arithmetic; examples are Smalltalk and Pico. Boolean logic may be considered as a choice.
The Church encodings of true and false are functions of two parameters: true chooses its first parameter, and false chooses its second. The two definitions are known as the Church Booleans: $\operatorname{true} = \lambda a.\lambda b.a$ and $\operatorname{false} = \lambda a.\lambda b.b$. This definition allows predicates (i.e. functions returning logical values) to directly act as if-clauses. A function returning a Boolean, which is then applied to two parameters, returns either the first or the second parameter: $(\text{predicate-}x\ \text{then-clause}\ \text{else-clause})$ evaluates to then-clause if predicate-x evaluates to true, and to else-clause if predicate-x evaluates to false. Because true and false choose the first or second parameter, they may be combined to provide logic operators; note that there are multiple possible implementations of not. A predicate is a function that returns a Boolean value. The most fundamental predicate is $\operatorname{IsZero}$, which returns $\operatorname{true}$ if its argument is the Church numeral $0$, and $\operatorname{false}$ if its argument is any other Church numeral. A further predicate tests whether the first argument is less-than-or-equal-to the second; because two numbers are equal exactly when each is less-than-or-equal-to the other, the test for equality may be implemented in terms of it. Church pairs are the Church encoding of the pair (two-tuple) type. The pair is represented as a function that takes a function argument; when given its argument, it applies the argument to the two components of the pair. An (immutable) list is constructed from list nodes; the basic operations on the list include constructing it (nil, cons), testing for emptiness, and taking it apart (head, tail). Four different representations of lists are considered. A nonempty list can be implemented by a Church pair; however, this does not give a representation of the empty list, because there is no "null" pointer. To represent null, the pair may be wrapped in another pair, giving three values. Using this idea the basic list operations can be defined accordingly;[8] in a nil node, second is never accessed, provided that head and tail are only applied to nonempty lists. Alternatively, the operations may be defined so that the nonempty case is a special case of a more general scheme.[9] As an alternative to the encoding using Church pairs, a list can be encoded by identifying it with its right fold function. For example, a list of three elements x, y and z can be encoded by a higher-order function that when applied to a combinator c and a value n returns c x (c y (c z n)). Equivalently, it is an application of the chain of functional compositions of partial applications, (c x ∘ c y ∘ c z) n. This list representation can be given type in System F. The evident correspondence to Church numerals is non-coincidental, as that can be seen as a unary encoding, with natural numbers represented by lists of unit (i.e. non-important) values, e.g. [() () ()], with the list's length serving as the representation of the natural number. Right folding over such lists uses functions which necessarily ignore the element's value, and is equivalent to the chained functional composition, i.e. (c () ∘ c () ∘ c ()) n = (f ∘ f ∘ f) n, as is used in Church numerals. An alternative representation is Scott encoding, which uses the idea of continuations and can lead to simpler code[10] (see also Mogensen–Scott encoding). In this approach, we use the fact that lists can be observed using pattern matching expressions.
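Before turning to the Scott-encoding example, here is a hedged Python rendering of the Booleans, a predicate, and pairs described above; the numerals come from the earlier sketch.

    # Church Booleans choose between their two arguments.
    true  = lambda a: lambda b: a
    false = lambda a: lambda b: b

    # Logic as choice; one of several possible implementations of NOT.
    and_ = lambda p: lambda q: p(q)(p)
    or_  = lambda p: lambda q: p(p)(q)
    not_ = lambda p: lambda a: lambda b: p(b)(a)

    # IsZero: apply (lambda _: false) n times to true; any application
    # at all yields false, so only the numeral 0 returns true.
    is_zero = lambda n: n(lambda _: false)(true)

    # Church pairs: a pair applies its argument to both components.
    pair   = lambda a: lambda b: lambda z: z(a)(b)
    first  = lambda p: p(true)
    second = lambda p: p(false)

    assert is_zero(church(0))(True)(False) is True
    assert is_zero(church(3))(True)(False) is False
    p = pair("head")("tail")
    assert first(p) == "head" and second(p) == "tail"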
To continue with the Scott encoding: using Scala notation, if list denotes a value of type List with empty list Nil and constructor Cons(h, t), we can inspect the list and compute nilCode in case the list is empty and consCode(h, t) when the list is not empty. The list is given by how it acts upon nilCode and consCode. We therefore define a list as a function that accepts such nilCode and consCode as arguments, so that instead of the above pattern match we may simply apply the list to them. Let us denote by n the parameter corresponding to nilCode and by c the parameter corresponding to consCode. The empty list is the one that returns the nil argument: $\operatorname{nil} = \lambda n.\lambda c.n$. The non-empty list with head h and tail t is given by $\operatorname{cons}\ h\ t = \lambda n.\lambda c.c\ h\ t$. More generally, an algebraic data type with $m$ alternatives becomes a function with $m$ parameters. When the $i$th constructor has $n_i$ arguments, the corresponding parameter of the encoding takes $n_i$ arguments as well. Scott encoding can be done in untyped lambda calculus, whereas its use with types requires a type system with recursion and type polymorphism. A list with element type E in this representation that is used to compute values of type C would have a recursive type definition in which '=>' denotes the function type. A list that can be used to compute arbitrary types would have a type that quantifies over C. A list generic[clarification needed] in E would also take E as the type argument.
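A small Python sketch of this Scott-style encoding under the same naming: a list is a function of nil_code and cons_code, and pattern matching becomes function application.

    # Scott encoding: a list is a function of (nil_code, cons_code).
    nil = lambda n: lambda c: n

    def cons(h, t):
        return lambda n: lambda c: c(h, t)

    def to_python_list(lst):
        """Fold a Scott-encoded list into an ordinary Python list."""
        return lst([])(lambda h, t: [h] + to_python_list(t))

    xs = cons(1, cons(2, cons(3, nil)))
    assert to_python_list(xs) == [1, 2, 3]
    assert to_python_list(nil) == []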
https://en.wikipedia.org/wiki/Church_encoding
In probability theory, statistics and related fields, a Poisson point process (also known as: Poisson random measure, Poisson random point field and Poisson point field) is a type of mathematical object that consists of points randomly located on a mathematical space with the essential feature that the points occur independently of one another.[1] The process's name derives from the fact that the number of points in any given finite region follows a Poisson distribution. The process and the distribution are named after the French mathematician Siméon Denis Poisson. The process itself was discovered independently and repeatedly in several settings, including experiments on radioactive decay, telephone call arrivals and actuarial science.[2][3] This point process is used as a mathematical model for seemingly random processes in numerous disciplines including astronomy,[4] biology,[5] ecology,[6] geology,[7] seismology,[8] physics,[9] economics,[10] image processing,[11][12] and telecommunications.[13][14] The Poisson point process is often defined on the real number line, where it can be considered a stochastic process. It is used, for example, in queueing theory[15] to model random events distributed in time, such as the arrival of customers at a store, phone calls at an exchange or occurrences of earthquakes. In the plane, the point process, also known as a spatial Poisson process,[16] can represent the locations of scattered objects such as transmitters in a wireless network,[13][17][18][19] particles colliding into a detector or trees in a forest.[20] The process is often used in mathematical models and in the related fields of spatial point processes,[21] stochastic geometry,[1] spatial statistics[21][22] and continuum percolation theory.[23] The point process depends on a single mathematical object, which, depending on the context, may be a constant, a locally integrable function or, in more general settings, a Radon measure.[24] In the first case, the constant, known as the rate or intensity, is the average density of the points in the Poisson process located in some region of space. The resulting point process is called a homogeneous or stationary Poisson point process.[25] In the second case, the point process is called an inhomogeneous or nonhomogeneous Poisson point process, and the average density of points depends on the location of the underlying space of the Poisson point process.[26] The word point is often omitted,[27] but there are other Poisson processes of objects, which, instead of points, consist of more complicated mathematical objects such as lines and polygons, and such processes can be based on the Poisson point process.[28] Both the homogeneous and nonhomogeneous Poisson point processes are particular cases of the generalized renewal process.
Depending on the setting, the process has several equivalent definitions[29] as well as definitions of varying generality owing to its many applications and characterizations.[30] The Poisson point process can be defined, studied and used in one dimension, for example, on the real line, where it can be interpreted as a counting process or part of a queueing model;[31][32] in higher dimensions such as the plane, where it plays a role in stochastic geometry[1] and spatial statistics;[33] or on more general mathematical spaces.[34] Consequently, the notation, terminology and level of mathematical rigour used to define and study the Poisson point process and point processes in general vary according to the context.[35] Despite all this, the Poisson point process has two key properties (the Poisson property and the independence property) that play an essential role in all settings where the Poisson point process is used.[24][36] The two properties are not logically independent; indeed, the Poisson distribution of point counts implies the independence property,[a] while in the converse direction the assumptions that: (i) the point process is simple, (ii) has no fixed atoms, and (iii) is a.s. boundedly finite are required.[37] A Poisson point process is characterized via the Poisson distribution. The Poisson distribution is the probability distribution of a random variable $N$ (called a Poisson random variable) such that the probability that $N$ equals $n$ is given by: $\Pr\{N = n\} = \frac{\Lambda^n}{n!} e^{-\Lambda}$, where $n!$ denotes factorial and the parameter $\Lambda$ determines the shape of the distribution. (In fact, $\Lambda$ equals the expected value of $N$.) By definition, a Poisson point process has the property that the number of points in a bounded region of the process's underlying space is a Poisson-distributed random variable.[36] Consider a collection of disjoint and bounded subregions of the underlying space. By definition, the number of points of a Poisson point process in each bounded subregion will be completely independent of all the others. This property is known under several names such as complete randomness, complete independence,[38] or independent scattering[39][40] and is common to all Poisson point processes. In other words, there is a lack of interaction between different regions and the points in general,[41] which motivates the Poisson process being sometimes called a purely or completely random process.[38] If a Poisson point process has a parameter of the form $\Lambda = \nu\lambda$, where $\nu$ is Lebesgue measure (that is, it assigns length, area, or volume to sets) and $\lambda$ is a constant, then the point process is called a homogeneous or stationary Poisson point process. The parameter, called rate or intensity, is related to the expected (or average) number of Poisson points existing in some bounded region,[42][43] where rate is usually used when the underlying space has one dimension.[42] The parameter $\lambda$ can be interpreted as the average number of points per some unit of extent such as length, area, volume, or time, depending on the underlying mathematical space, and it is also called the mean density or mean rate;[44] see Terminology.
The homogeneous Poisson point process, when considered on the positive half-line, can be defined as a counting process, a type of stochastic process, which can be denoted as $\{N(t), t \geq 0\}$.[29][32] A counting process represents the total number of occurrences or events that have happened up to and including time $t$. A counting process is a homogeneous Poisson counting process with rate $\lambda > 0$ if it has the following three properties:[29][32] $N(0) = 0$; it has independent increments; and the number of events in any interval of length $t$ is a Poisson random variable with mean $\lambda t$. The last property implies that the probability of the random variable $N(t)$ being equal to $n$ is given by: $\Pr\{N(t) = n\} = \frac{(\lambda t)^n}{n!} e^{-\lambda t}$. The Poisson counting process can also be defined by stating that the time differences between events of the counting process are exponential variables with mean $1/\lambda$.[45] The time differences between the events or arrivals are known as interarrival[46] or interoccurrence times.[45] Interpreted as a point process, a Poisson point process can be defined on the real line by considering the number of points of the process in the interval $(a,b]$. For the homogeneous Poisson point process on the real line with parameter $\lambda > 0$, the probability of this random number of points, written here as $N(a,b]$, being equal to some counting number $n$ is given by:[47] $\Pr\{N(a,b] = n\} = \frac{[\lambda(b-a)]^n}{n!} e^{-\lambda(b-a)}$. For some positive integer $k$, the homogeneous Poisson point process has the finite-dimensional distribution given by:[47] $\Pr\{N(a_i,b_i] = n_i,\ i = 1,\dots,k\} = \prod_{i=1}^{k} \frac{[\lambda(b_i-a_i)]^{n_i}}{n_i!} e^{-\lambda(b_i-a_i)}$, where the real numbers $a_i < b_i \leq a_{i+1}$. In other words, $N(a,b]$ is a Poisson random variable with mean $\lambda(b-a)$, where $a \leq b$. Furthermore, the numbers of points in any two disjoint intervals, say, $(a_1,b_1]$ and $(a_2,b_2]$, are independent of each other, and this extends to any finite number of disjoint intervals.[47] In the queueing theory context, one can consider a point existing (in an interval) as an event, but this is different to the word event in the probability theory sense.[b] It follows that $\lambda$ is the expected number of arrivals that occur per unit of time.[32] The previous definition has two important features shared by Poisson point processes in general:[47][24] the Poisson distribution of the point counts and the independence of the counts in disjoint regions. Furthermore, it has a third feature related to just the homogeneous Poisson point process:[48] stationarity. In other words, for any finite $t > 0$, the distribution of the random variable $N(a+t, b+t]$ is independent of $t$, so the process is also called a stationary Poisson process.[47] The quantity $\lambda(b_i - a_i)$ can be interpreted as the expected or average number of points occurring in the interval $(a_i, b_i]$, namely $\operatorname{E}[N(a_i,b_i]] = \lambda(b_i - a_i)$, where $\operatorname{E}$ denotes the expectation operator. In other words, the parameter $\lambda$ of the Poisson process coincides with the density of points. Furthermore, the homogeneous Poisson point process adheres to its own form of the (strong) law of large numbers.[49] More specifically, with probability one: $\lim_{t \to \infty} N(t)/t = \lambda$, where $\lim$ denotes the limit of a function, and $\lambda$ is the expected number of arrivals occurring per unit of time. The distance between two consecutive points of a point process on the real line will be an exponential random variable with parameter $\lambda$ (or equivalently, mean $1/\lambda$).
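A minimal sketch of this counting-process view in Python: interarrival gaps are drawn as independent exponentials with mean 1/λ, and their running sums are the arrival times, so the count over a window of length 10 has mean 10λ. Exact output varies per run.

    import random

    lam = 2.0        # rate: expected arrivals per unit time
    horizon = 10.0   # simulate on the interval (0, horizon]

    arrivals, t = [], 0.0
    while True:
        t += random.expovariate(lam)  # exponential gap with mean 1/lam
        if t > horizon:
            break
        arrivals.append(t)

    # N(horizon) is Poisson with mean lam * horizon (20 on average here).
    print(len(arrivals), arrivals[:3])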
The exponential interarrival distribution implies that the points have the memoryless property: the existence of one point in a finite interval does not affect the probability (distribution) of other points existing,[50][51] but this property has no natural equivalent when the Poisson process is defined on a space with higher dimensions.[52] A point process with stationary increments is sometimes said to be orderly[53] or regular if the probability of two or more points occurring in a small interval of length $\delta$ is $o(\delta)$, where little-o notation is being used.[54] A point process is called a simple point process when the probability of any two of its points coinciding in the same position, on the underlying space, is zero. For point processes in general on the real line, the property of orderliness implies that the process is simple,[55] which is the case for the homogeneous Poisson point process.[56] On the real line, the homogeneous Poisson point process has a connection to the theory of martingales via the following characterization: a point process is the homogeneous Poisson point process if and only if the compensated counting process $N(t) - \lambda t$ is a martingale.[57][58] On the real line, the Poisson process is a type of continuous-time Markov process known as a birth process, a special case of the birth–death process (with just births and zero deaths).[59][60] More complicated processes with the Markov property, such as Markov arrival processes, have been defined where the Poisson process is a special case.[45] If the homogeneous Poisson process is considered just on the half-line $[0,\infty)$, which can be the case when $t$ represents time,[29] then the resulting process is not truly invariant under translation.[52] In that case the Poisson process is no longer stationary, according to some definitions of stationarity.[25] There have been many applications of the homogeneous Poisson process on the real line in an attempt to model seemingly random and independent events occurring. It has a fundamental role in queueing theory, which is the probability field of developing suitable stochastic models to represent the random arrival and departure of certain phenomena.[15][45] For example, customers arriving and being served or phone calls arriving at a phone exchange can both be studied with techniques from queueing theory. The homogeneous Poisson process on the real line is considered one of the simplest stochastic processes for counting random numbers of points.[61][62] This process can be generalized in a number of ways. One possible generalization is to extend the distribution of interarrival times from the exponential distribution to other distributions, which introduces the stochastic process known as a renewal process. Another generalization is to define the Poisson point process on higher dimensional spaces such as the plane.[63] A spatial Poisson process is a Poisson point process defined in the plane $\mathbb{R}^2$.[57][64] For its mathematical definition, one first considers a bounded, open or closed (or more precisely, Borel measurable) region $B$ of the plane. The number of points of a point process $N$ existing in this region $B \subset \mathbb{R}^2$ is a random variable, denoted by $N(B)$. If the points belong to a homogeneous Poisson process with parameter $\lambda > 0$, then the probability of $n$ points existing in $B$ is given by: $\Pr\{N(B) = n\} = \frac{(\lambda |B|)^n}{n!} e^{-\lambda |B|}$, where $|B|$ denotes the area of $B$.
For some finite integerk≥1{\displaystyle \textstyle k\geq 1}, we can give the finite-dimensional distribution of the homogeneous Poisson point process by first considering a collection of disjoint, bounded Borel (measurable) setsB1,…,Bk{\displaystyle \textstyle B_{1},\dots ,B_{k}}. The number of points of the point processN{\displaystyle \textstyle N}existing inBi{\displaystyle \textstyle B_{i}}can be written asN(Bi){\displaystyle \textstyle N(B_{i})}. Then the homogeneous Poisson point process with parameterλ>0{\displaystyle \textstyle \lambda >0}has the finite-dimensional distribution:[65] The spatial Poisson point process features prominently inspatial statistics,[21][22]stochastic geometry, andcontinuum percolation theory.[23]This point process is applied in various physical sciences such as a model developed for alpha particles being detected. In recent years, it has been frequently used to model seemingly disordered spatial configurations of certain wireless communication networks.[17][18][19]For example, models for cellular or mobile phone networks have been developed where it is assumed the phone network transmitters, known as base stations, are positioned according to a homogeneous Poisson point process. The previous homogeneous Poisson point process immediately extends to higher dimensions by replacing the notion of area with (high dimensional) volume. For some bounded regionB{\displaystyle \textstyle B}of Euclidean spaceRd{\displaystyle \textstyle \mathbb {R} ^{d}}, if the points form a homogeneous Poisson process with parameterλ>0{\displaystyle \textstyle \lambda >0}, then the probability ofn{\displaystyle \textstyle n}points existing inB⊂Rd{\displaystyle \textstyle B\subset \mathbb {R} ^{d}}is given by: where|B|{\displaystyle \textstyle |B|}now denotes thed{\displaystyle \textstyle d}-dimensional volume ofB{\displaystyle \textstyle B}. Furthermore, for a collection of disjoint, bounded Borel setsB1,…,Bk⊂Rd{\displaystyle \textstyle B_{1},\dots ,B_{k}\subset \mathbb {R} ^{d}}, letN(Bi){\displaystyle \textstyle N(B_{i})}denote the number of points ofN{\displaystyle \textstyle N}existing inBi{\displaystyle \textstyle B_{i}}. Then the corresponding homogeneous Poisson point process with parameterλ>0{\displaystyle \textstyle \lambda >0}has the finite-dimensional distribution:[67] Homogeneous Poisson point processes do not depend on the position of the underlying space through its parameterλ{\displaystyle \textstyle \lambda }, which implies it is both a stationary process (invariant to translation) and an isotropic (invariant to rotation) stochastic process.[25]Similarly to the one-dimensional case, the homogeneous point process is restricted to some bounded subset ofRd{\textstyle \mathbb {R} ^{d}}, then depending on some definitions of stationarity, the process is no longer stationary.[25][52] If the homogeneous point process is defined on the real line as a mathematical model for occurrences of some phenomenon, then it has the characteristic that the positions of these occurrences or events on the real line (often interpreted as time) will be uniformly distributed. More specifically, if an event occurs (according to this process) in an interval(a,b]{\displaystyle \textstyle (a,b]}wherea≤b{\displaystyle \textstyle a\leq b}, then its location will be a uniform random variable defined on that interval.[65]Furthermore, the homogeneous point process is sometimes called theuniformPoisson point process (seeTerminology). 
This uniformity property extends to higher dimensions in the Cartesian coordinate, but not in, for example, polar coordinates.[68][69] TheinhomogeneousornonhomogeneousPoisson point process(seeTerminology) is a Poisson point process with a Poisson parameter set as some location-dependent function in the underlying space on which the Poisson process is defined. For Euclidean spaceRd{\displaystyle \textstyle \mathbb {R} ^{d}}, this is achieved by introducing a locally integrable positive functionλ:Rd→[0,∞){\displaystyle \lambda \colon \mathbb {R} ^{d}\to [0,\infty )}, such that for every bounded regionB{\displaystyle \textstyle B}the (d{\displaystyle \textstyle d}-dimensional) volume integral ofλ(x){\displaystyle \textstyle \lambda (x)}over regionB{\displaystyle \textstyle B}is finite. In other words, if this integral, denoted byΛ(B){\displaystyle \textstyle \Lambda (B)}, is:[43] wheredx{\displaystyle \textstyle {\mathrm {d} x}}is a (d{\displaystyle \textstyle d}-dimensional) volume element,[c]then for every collection of disjoint boundedBorel measurablesetsB1,…,Bk{\displaystyle \textstyle B_{1},\dots ,B_{k}}, an inhomogeneous Poisson process with (intensity) functionλ(x){\displaystyle \textstyle \lambda (x)}has the finite-dimensional distribution:[67] Furthermore,Λ(B){\displaystyle \textstyle \Lambda (B)}has the interpretation of being the expected number of points of the Poisson process located in the bounded regionB{\displaystyle \textstyle B}, namely On the real line, the inhomogeneous or non-homogeneous Poisson point process has mean measure given by a one-dimensional integral. For two real numbersa{\displaystyle \textstyle a}andb{\displaystyle \textstyle b}, wherea≤b{\displaystyle \textstyle a\leq b}, denote byN(a,b]{\displaystyle \textstyle N(a,b]}the number points of an inhomogeneous Poisson process with intensity functionλ(t){\displaystyle \textstyle \lambda (t)}occurring in the interval(a,b]{\displaystyle \textstyle (a,b]}. The probability ofn{\displaystyle \textstyle n}points existing in the above interval(a,b]{\displaystyle \textstyle (a,b]}is given by: where the mean or intensity measure is: which means that the random variableN(a,b]{\displaystyle \textstyle N(a,b]}is a Poisson random variable with meanE⁡[N(a,b]]=Λ(a,b){\displaystyle \textstyle \operatorname {E} [N(a,b]]=\Lambda (a,b)}. A feature of the one-dimension setting, is that an inhomogeneous Poisson process can be transformed into a homogeneous by amonotone transformationor mapping, which is achieved with the inverse ofΛ{\displaystyle \textstyle \Lambda }.[70][71] The inhomogeneous Poisson point process, when considered on the positive half-line, is also sometimes defined as a counting process. With this interpretation, the process, which is sometimes written as{N(t),t≥0}{\displaystyle \textstyle \{N(t),t\geq 0\}}, represents the total number of occurrences or events that have happened up to and including timet{\displaystyle \textstyle t}. A counting process is said to be an inhomogeneous Poisson counting process if it has the four properties:[32][72] whereo(h){\displaystyle \textstyle o(h)}is asymptotic orlittle-o notationforo(h)/h→0{\displaystyle \textstyle o(h)/h\rightarrow 0}ash→0{\displaystyle \textstyle h\rightarrow 0}. In the case of point processes with refractoriness (e.g., neural spike trains) a stronger version of property 4 applies:[73]Pr{N(t+h)−N(t)≥2}=o(h2){\displaystyle \Pr\{N(t+h)-N(t)\geq 2\}=o(h^{2})}. 
The four counting-process properties above imply that $N(t+h)-N(t)$ is a Poisson random variable with parameter (or mean)

$\operatorname{E}[N(t+h)-N(t)]=\int_t^{t+h}\lambda(s)\,\mathrm{d}s,$

which implies

$\Pr\{N(t+h)-N(t)=n\}=\frac{\left[\int_t^{t+h}\lambda(s)\,\mathrm{d}s\right]^{n}}{n!}e^{-\int_t^{t+h}\lambda(s)\,\mathrm{d}s}.$

An inhomogeneous Poisson process defined in the plane $\mathbb{R}^2$ is called a spatial Poisson process.[16] It is defined with an intensity function, and its intensity measure is obtained by performing a surface integral of its intensity function over some region.[20][74] For example, its intensity function can be written as a function of the Cartesian coordinates $x$ and $y$, namely $\lambda(x,y)$, so the corresponding intensity measure is given by the surface integral

$\Lambda(B)=\iint_B \lambda(x,y)\,\mathrm{d}x\,\mathrm{d}y,$

where $B$ is some bounded region in the plane $\mathbb{R}^2$. In the plane, $\Lambda(B)$ corresponds to a surface integral, while in $\mathbb{R}^d$ the integral becomes a ($d$-dimensional) volume integral.

When the real line is interpreted as time, the inhomogeneous process is used in the fields of counting processes and queueing theory.[72][75] Examples of phenomena which have been represented by, or appear as, an inhomogeneous Poisson point process include:

In the plane, the Poisson point process is important in the related disciplines of stochastic geometry[1][33] and spatial statistics.[21][22] The intensity measure of this point process depends on the location in the underlying space, which means it can be used to model phenomena with a density that varies over some region. In other words, the phenomena can be represented as points with a location-dependent density.[20] This process has been used in various disciplines; its uses include the study of salmon and sea lice in the oceans,[78] forestry,[6] and search problems.[79]

The Poisson intensity function $\lambda(x)$ has an interpretation, considered intuitive,[20] in terms of the infinitesimal volume element $\mathrm{d}x$: $\lambda(x)\,\mathrm{d}x$ is the infinitesimal probability of a point of the Poisson point process existing in a region of space with volume $\mathrm{d}x$ located at $x$.[20]

For example, given a homogeneous Poisson point process on the real line, the probability of finding a single point of the process in a small interval of width $\delta$ is approximately $\lambda\delta$. In fact, such intuition is how the Poisson point process is sometimes introduced and its distribution derived.[80][41][81]

If a Poisson point process has an intensity measure that is locally finite and diffuse (or non-atomic), then it is a simple point process. For a simple point process, the probability of a point existing at a single point or location in the underlying (state) space is either zero or one. This implies that, with probability one, no two (or more) points of a Poisson point process coincide in location in the underlying space.[82][18][83]

Simulating a Poisson point process on a computer is usually done in a bounded region of space, known as a simulation window, and requires two steps: appropriately creating a random number of points and then suitably placing the points in a random manner. Both steps depend on the specific Poisson point process being simulated.[84][85]

The number of points $N$ in the window, denoted here by $W$, needs to be simulated, which is done by using a (pseudo)-random number generating function capable of simulating Poisson random variables.
For the homogeneous case with constant $\lambda$, the mean of the Poisson random variable $N$ is set to $\lambda|W|$, where $|W|$ is the length, area or ($d$-dimensional) volume of $W$. For the inhomogeneous case, $\lambda|W|$ is replaced with the ($d$-dimensional) volume integral

$\Lambda(W)=\int_W \lambda(x)\,\mathrm{d}x.$

The second stage requires randomly placing the $N$ points in the window $W$. For the homogeneous case in one dimension, all points are uniformly and independently placed in the window or interval $W$. For higher dimensions in a Cartesian coordinate system, each coordinate is uniformly and independently placed in the window $W$. If the window is not a subspace of Cartesian space (for example, the inside of a unit sphere or the surface of a unit sphere), then the points will not be uniformly placed in $W$, and a suitable change of coordinates (from Cartesian) is needed.[84]

For the inhomogeneous case, a couple of different methods can be used depending on the nature of the intensity function $\lambda(x)$.[84] If the intensity function is sufficiently simple, then independent and random non-uniform (Cartesian or other) coordinates of the points can be generated. For example, simulating a Poisson point process on a circular window can be done for an isotropic intensity function (in polar coordinates $r$ and $\theta$), meaning it is rotationally invariant, that is, independent of $\theta$ but dependent on $r$, by a change of variable in $r$.[84]

For more complicated intensity functions, one can use an acceptance-rejection method, which consists of using (or 'accepting') only certain random points and not using (or 'rejecting') the other points, based on the ratio:[86]

$\frac{\lambda(x_i)}{\Lambda(W)},$

where $x_i$ is the point under consideration for acceptance or rejection. That is, a location is uniformly randomly selected for consideration; then, to determine whether to place a sample at that location, a uniformly randomly drawn number in $[0,1]$ is compared to the probability density $\frac{\lambda(x)}{\Lambda(W)}$, accepting if it is smaller, and repeating until the previously chosen number of samples has been drawn. (A sketch of a closely related thinning scheme is given below.)

In measure theory, the Poisson point process can be further generalized to what is sometimes known as the general Poisson point process[20][87] or general Poisson process[74] by using a Radon measure $\Lambda$, which is a locally finite measure. In general, this Radon measure $\Lambda$ can be atomic, which means multiple points of the Poisson point process can exist in the same location of the underlying space.
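Before turning to the atomic case, here is a sketch of the common thinning variant of acceptance-rejection: simulate a dominating homogeneous process at a rate $\lambda_{\max}$ bounding the intensity on the window, then keep each point $x$ with probability $\lambda(x)/\lambda_{\max}$. The intensity function and window are illustrative assumptions, and this scheme differs slightly from the normalized-density scheme described in the text:

```python
import numpy as np

rng = np.random.default_rng(2)

# Inhomogeneous Poisson process on W = [0,1]x[0,1] by thinning: simulate a
# homogeneous process at rate lam_max, then keep each point x independently
# with probability lam(x) / lam_max.
def lam(x, y):
    return 50.0 * np.exp(-3.0 * (x**2 + y**2))   # example intensity function

lam_max = 50.0                                   # upper bound of lam on W

n = rng.poisson(lam_max * 1.0)                   # count for the dominating process (|W| = 1)
xs, ys = rng.uniform(0, 1, n), rng.uniform(0, 1, n)
keep = rng.uniform(0, 1, n) < lam(xs, ys) / lam_max
points = np.column_stack([xs[keep], ys[keep]])

print(f"{len(points)} points kept of {n}")
```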
In this atomic situation, the number of points at a location $x$ is a Poisson random variable with mean $\Lambda(\{x\})$.[87] But sometimes the converse is assumed, so the Radon measure $\Lambda$ is diffuse or non-atomic.[20]

A point process $N$ is a general Poisson point process with intensity $\Lambda$ if it has the two following properties:[20] the number of points $N(B)$ in a bounded Borel set $B$ is a Poisson random variable with mean $\Lambda(B)$; and the numbers of points in disjoint Borel sets are independent random variables.

The Radon measure $\Lambda$ maintains its previous interpretation of being the expected number of points of $N$ located in the bounded region $B$, namely

$\Lambda(B)=\operatorname{E}[N(B)].$

Furthermore, if $\Lambda$ is absolutely continuous such that it has a density (which is the Radon–Nikodym density or derivative) with respect to the Lebesgue measure, then for all Borel sets $B$ it can be written as:

$\Lambda(B)=\int_B \lambda(x)\,\mathrm{d}x,$

where the density $\lambda(x)$ is known, among other terms, as the intensity function.

Despite its name, the Poisson point process was neither discovered nor studied by its namesake; it is cited as an example of Stigler's law of eponymy.[2][3] The name arises from the process's inherent relation to the Poisson distribution, derived by Poisson as a limiting case of the binomial distribution.[88] The binomial distribution describes the probability of the number of successes in $n$ Bernoulli trials, each with success probability $p$, often likened to the number of heads (or tails) after $n$ biased coin flips, where the probability of a head (or tail) is $p$. For some positive constant $\Lambda>0$, as $n$ increases towards infinity and $p$ decreases towards zero such that the product $np=\Lambda$ is fixed, the Poisson distribution more closely approximates that of the binomial.[89]

Poisson derived the Poisson distribution, published in 1841, by examining the binomial distribution in the limit of $p$ (to zero) and $n$ (to infinity). It appears only once in all of Poisson's work,[90] and the result was not well known during his time. Over the following years others used the distribution without citing Poisson, including Philipp Ludwig von Seidel and Ernst Abbe.[91][2] At the end of the 19th century, Ladislaus Bortkiewicz studied the distribution, citing Poisson, using real data on the number of deaths from horse kicks in the Prussian army.[88][92]

There are a number of claims for early uses or discoveries of the Poisson point process.[2][3] For example, John Michell in 1767, a decade before Poisson was born, was interested in the probability of a star being within a certain region of another star, under the erroneous assumption that the stars were "scattered by mere chance", and studied an example consisting of the six brightest stars in the Pleiades, without deriving the Poisson distribution.
This work inspired Simon Newcomb to study the problem and to calculate the Poisson distribution as an approximation for the binomial distribution in 1860.[3]

At the beginning of the 20th century the Poisson process (in one dimension) arose independently in different situations.[2][3] In Sweden in 1903, Filip Lundberg published a thesis containing work, now considered fundamental and pioneering, in which he proposed to model insurance claims with a homogeneous Poisson process.[93][94]

In Denmark, A.K. Erlang derived the Poisson distribution in 1909 when developing a mathematical model for the number of incoming phone calls in a finite time interval. Erlang was unaware of Poisson's earlier work and assumed that the numbers of phone calls arriving in each interval of time were independent of each other. He then found the limiting case, which effectively recasts the Poisson distribution as a limit of the binomial distribution.[2]

In 1910 Ernest Rutherford and Hans Geiger published experimental results on counting alpha particles. Their experimental work had mathematical contributions from Harry Bateman, who derived Poisson probabilities as a solution to a family of differential equations, though the solution had been derived earlier, resulting in the independent discovery of the Poisson process.[2]

The years after 1909 led to a number of studies and applications of the Poisson point process; however, its early history is complex, which has been explained by the various applications of the process in numerous fields by biologists, ecologists, engineers and others working in the physical sciences. The early results were published in different languages and in different settings, with no standard terminology and notation used.[2] For example, in 1922 the Swedish chemist and Nobel Laureate Theodor Svedberg proposed a model in which a spatial Poisson point process is the underlying process for studying how plants are distributed in plant communities.[95] A number of mathematicians started studying the process in the early 1930s, and important contributions were made by Andrey Kolmogorov, William Feller and Aleksandr Khinchin,[2] among others.[96] In the field of teletraffic engineering, mathematicians and statisticians studied and used Poisson and other point processes.[97]

The Swede Conny Palm, in his 1943 dissertation, studied the Poisson and other point processes in the one-dimensional setting by examining them in terms of the statistical or stochastic dependence between the points in time.[98][97] His work contains the first known recorded use of the term point processes, as Punktprozesse in German.[98][3]

It is believed[2] that William Feller was the first in print to refer to it as the Poisson process, in a 1940 paper.
Although the Swede Ove Lundberg used the term Poisson process in his 1940 PhD dissertation,[3] in which Feller was acknowledged as an influence,[99] it has been claimed that Feller coined the term before 1940.[89] It has been remarked that both Feller and Lundberg used the term as though it were well known, implying it was already in spoken use by then.[3] Feller worked from 1936 to 1939 alongside Harald Cramér at Stockholm University, where Lundberg was a PhD student under Cramér. Cramér did not use the term Poisson process in a book of his finished in 1936, but did in subsequent editions, which has led to the speculation that the term Poisson process was coined sometime between 1936 and 1939 at Stockholm University.[3]

The terminology of point process theory in general has been criticized for being too varied.[3] In addition to the word point often being omitted,[63][27] the homogeneous Poisson (point) process is also called a stationary Poisson (point) process,[47] as well as a uniform Poisson (point) process.[42] The inhomogeneous Poisson point process, as well as being called nonhomogeneous,[47] is also referred to as the non-stationary Poisson process.[72][100]

The term point process has been criticized, as the term process can suggest evolution over time or space, so random point field is also used,[101] resulting in the terms Poisson random point field or Poisson point field.[102] A point process is considered, and sometimes called, a random counting measure,[103] hence the Poisson point process is also referred to as a Poisson random measure,[104] a term used in the study of Lévy processes,[104][105] though some choose to use the two terms for Poisson point processes defined on two different underlying spaces.[106]

The underlying mathematical space of the Poisson point process is called a carrier space[107][108] or state space, though the latter term has a different meaning in the context of stochastic processes. In the context of point processes, the term "state space" can mean the space on which the point process is defined, such as the real line,[109][110] which corresponds to the index set[111] or parameter set[112] in stochastic process terminology.

The measure $\Lambda$ is called the intensity measure,[113] mean measure,[36] or parameter measure,[67] as there are no standard terms.[36] If $\Lambda$ has a derivative or density, denoted by $\lambda(x)$, it is called the intensity function of the Poisson point process.[20] For the homogeneous Poisson point process, the derivative of the intensity measure is simply a constant $\lambda>0$, which can be referred to as the rate (usually when the underlying space is the real line) or the intensity.[42] It is also called the mean rate or the mean density.[114][32] For $\lambda=1$, the corresponding process is sometimes referred to as the standard Poisson (point) process.[43][57][115]

The extent of the Poisson point process is sometimes called the exposure.[116][117]

The notation of the Poisson point process depends on its setting and the field in which it is being applied. For example, on the real line, the Poisson process, whether homogeneous or inhomogeneous, is sometimes interpreted as a counting process, and the notation $\{N(t),t\geq 0\}$ is used to represent it.[29][32] Another reason for varying notation is the theory of point processes, which has a couple of mathematical interpretations.
For example, a simple Poisson point process may be considered as a random set, which suggests the notation $x\in N$, implying that $x$ is a random point belonging to, or being an element of, the Poisson point process $N$. Another, more general, interpretation is to consider a Poisson or any other point process as a random counting measure, so one can write the number of points of a Poisson point process $N$ found or located in some (Borel measurable) region $B$ as $N(B)$, which is a random variable. These different interpretations result in notation being used from mathematical fields such as measure theory and set theory.[118]

For general point processes, sometimes a subscript on the point symbol, for example $x_i$, is included, so one writes (with set notation) $x_i\in N$ instead of $x\in N$, and $x$ can be used for the bound variable in integral expressions such as Campbell's theorem, instead of denoting random points.[18] Sometimes an uppercase letter denotes the point process, while a lowercase letter denotes a point from the process; for example, the point $x$ or $x_i$ belongs to, or is a point of, the point process $X$, which is written with set notation as $x\in X$ or $x_i\in X$.[110]

Furthermore, the set theory and integral or measure theory notation can be used interchangeably. For example, for a point process $N$ defined on the Euclidean state space $\mathbb{R}^d$ and a (measurable) function $f$ on $\mathbb{R}^d$, the expression

$\int_{\mathbb{R}^d} f(x)\,N(\mathrm{d}x)=\sum_{x_i\in N}f(x_i)$

demonstrates two different ways to write a summation over a point process (see also Campbell's theorem (probability)). More specifically, the integral notation on the left-hand side interprets the point process as a random counting measure, while the sum on the right-hand side suggests a random set interpretation.[118]

In probability theory, operations are applied to random variables for different purposes. Sometimes these operations are regular expectations that produce the average or variance of a random variable. Others, such as characteristic functions (or Laplace transforms) of a random variable, can be used to uniquely identify or characterize random variables and prove results like the central limit theorem.[119] In the theory of point processes there exist analogous mathematical tools, which usually exist in the form of measures and functionals instead of moments and functions, respectively.[120][121]

For a Poisson point process $N$ with intensity measure $\Lambda$ on some space $X$, the Laplace functional is given by:[18]

$L_N(f)=\operatorname{E}\!\left[e^{-\sum_{x\in N}f(x)}\right]=\exp\!\left(-\int_X \left(1-e^{-f(x)}\right)\Lambda(\mathrm{d}x)\right).$

One version of Campbell's theorem involves the Laplace functional of the Poisson point process.

The probability generating function of a non-negative integer-valued random variable leads to the probability generating functional being defined analogously with respect to any non-negative bounded function $v$ on $\mathbb{R}^d$ such that $0\leq v(x)\leq 1$.
For a point process $N$ the probability generating functional is defined as:[122]

$G(v)=\operatorname{E}\!\left[\prod_{x\in N}v(x)\right],$

where the product is performed over all the points in $N$. If the intensity measure $\Lambda$ of $N$ is locally finite, then $G$ is well defined for any such measurable function $v$ on $\mathbb{R}^d$. For a Poisson point process with intensity measure $\Lambda$ the generating functional is given by:

$G(v)=\exp\!\left(-\int_{\mathbb{R}^d}(1-v(x))\,\Lambda(\mathrm{d}x)\right),$

which in the homogeneous case reduces to

$G(v)=\exp\!\left(-\lambda\int_{\mathbb{R}^d}(1-v(x))\,\mathrm{d}x\right).$

For a general Poisson point process with intensity measure $\Lambda$, the first moment measure is its intensity measure:[18][19]

$\operatorname{E}[N(B)]=\Lambda(B),$

which for a homogeneous Poisson point process with constant intensity $\lambda$ means:

$\operatorname{E}[N(B)]=\lambda|B|,$

where $|B|$ is the length, area or volume (or more generally, the Lebesgue measure) of $B$.

The Mecke equation characterizes the Poisson point process. Let $\mathbb{N}_\sigma$ be the space of all $\sigma$-finite measures on some general space $\mathcal{Q}$. A point process $\eta$ with intensity $\lambda$ on $\mathcal{Q}$ is a Poisson point process if and only if for all measurable functions $f\colon\mathcal{Q}\times\mathbb{N}_\sigma\to\mathbb{R}_+$ the following holds:

$\operatorname{E}\!\left[\int f(x,\eta)\,\eta(\mathrm{d}x)\right]=\int \operatorname{E}\!\left[f(x,\eta+\delta_x)\right]\lambda(\mathrm{d}x).$

For further details, see [123].

For a general Poisson point process with intensity measure $\Lambda$, the $n$-th factorial moment measure is given by the expression:[124]

$M^{(n)}(B_1\times\cdots\times B_n)=\prod_{i=1}^{n}\Lambda(B_i),$

where $\Lambda$ is the intensity measure or first moment measure of $N$, which for some Borel set $B$ is given by

$\Lambda(B)=\operatorname{E}[N(B)].$

For a homogeneous Poisson point process the $n$-th factorial moment measure is simply:[18][19]

$M^{(n)}(B_1\times\cdots\times B_n)=\lambda^n\prod_{i=1}^{n}|B_i|,$

where $|B_i|$ is the length, area, or volume (or more generally, the Lebesgue measure) of $B_i$. Furthermore, the $n$-th factorial moment density is:[124]

$\mu^{(n)}(x_1,\dots,x_n)=\lambda(x_1)\cdots\lambda(x_n).$

The avoidance function[69] or void probability[118] $v$ of a point process $N$ is defined in relation to some set $B$, a subset of the underlying space $\mathbb{R}^d$, as the probability of no points of $N$ existing in $B$. More precisely,[125] for a test set $B$, the avoidance function is given by:

$v(B)=\Pr\{N(B)=0\}.$

For a general Poisson point process $N$ with intensity measure $\Lambda$, its avoidance function is given by:

$v(B)=e^{-\Lambda(B)}.$

Simple point processes are completely characterized by their void probabilities.[126] In other words, the complete information of a simple point process is captured entirely in its void probabilities, and two simple point processes have the same void probabilities if and only if they are the same point processes.
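The Poisson avoidance function is easy to verify empirically. A minimal Monte Carlo sketch (the rate and test set are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(3)

# Check the avoidance function of a homogeneous Poisson process on [0,1]^2:
# for a test set B, Pr{N(B) = 0} should equal exp(-lambda * |B|).
lam = 4.0
lo, hi = 0.2, 0.7            # B = [0.2, 0.7]^2, so |B| = 0.25
area_B = (hi - lo) ** 2

trials, empty = 20_000, 0
for _ in range(trials):
    n = rng.poisson(lam)                      # |W| = 1, so mean count is lam
    pts = rng.uniform(0, 1, (n, 2))
    in_B = (pts > lo).all(axis=1) & (pts < hi).all(axis=1)
    empty += not in_B.any()

print(empty / trials, np.exp(-lam * area_B))  # both approximately exp(-1) = 0.368
```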
For the Poisson process, this characterization by void probabilities is sometimes known as Rényi's theorem, named after Alfréd Rényi, who discovered the result for the case of a homogeneous point process in one dimension.[127]

In one form,[127] Rényi's theorem says that for a diffuse (or non-atomic) Radon measure $\Lambda$ on $\mathbb{R}^d$ and a set $A$ that is a finite union of rectangles (so not Borel[d]), if $N$ is a countable subset of $\mathbb{R}^d$ such that:

$\Pr\{N(A)=0\}=e^{-\Lambda(A)},$

then $N$ is a Poisson point process with intensity measure $\Lambda$.

Mathematical operations can be performed on point processes to obtain new point processes and develop new mathematical models for the locations of certain objects. One example of an operation is known as thinning, which entails deleting or removing the points of some point process according to a rule, creating a new process with the remaining points (the deleted points also form a point process).[129]

For the Poisson process, independent $p(x)$-thinning operations result in another Poisson point process. More specifically, a $p(x)$-thinning operation applied to a Poisson point process with intensity measure $\Lambda$, in which each point at location $x$ is removed independently with probability $p(x)$, gives a point process of removed points that is also a Poisson point process $N_p$ with intensity measure $\Lambda_p$, which for a bounded Borel set $B$ is given by:

$\Lambda_p(B)=\int_B p(x)\lambda(x)\,\mathrm{d}x.$

This thinning result of the Poisson point process is sometimes known as Prekopa's theorem.[130] Furthermore, after randomly thinning a Poisson point process, the kept or remaining points also form a Poisson point process, which has the intensity measure

$\Lambda_{1-p}(B)=\int_B (1-p(x))\lambda(x)\,\mathrm{d}x.$

The two separate Poisson point processes formed respectively from the removed and kept points are stochastically independent of each other.[129] In other words, if a region is known to contain $n$ kept points (from the original Poisson point process), then this will have no influence on the random number of removed points in the same region. This ability to randomly create two independent Poisson point processes from one is sometimes known as splitting[131][132] the Poisson point process.

If there is a countable collection of point processes $N_1,N_2,\dots$, then their superposition, or, in set theory language, their union,[133]

$N=\bigcup_{i=1}^{\infty}N_i,$

also forms a point process. In other words, any points located in any of the point processes $N_1,N_2,\dots$ will also be located in the superposition of these point processes $N$.

The superposition theorem of the Poisson point process says that the superposition of independent Poisson point processes $N_1,N_2,\dots$ with mean measures $\Lambda_1,\Lambda_2,\dots$ will also be a Poisson point process with mean measure[134][89]

$\Lambda=\sum_{i=1}^{\infty}\Lambda_i.$

In other words, the union of two (or countably more) Poisson processes is another Poisson process.
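Splitting and superposition can be checked together in simulation: thinning a rate-$\lambda$ process with constant removal probability $p$ produces independent Poisson processes with rates $\lambda p$ and $\lambda(1-p)$, and their union recovers the original rate. A minimal sketch, with the rate and $p$ chosen for illustration:

```python
import numpy as np

rng = np.random.default_rng(4)

# Split a homogeneous Poisson process on [0,1] by p-thinning, then compare the
# counts of removed, kept, and merged (superposed) points with theory.
lam, p, trials = 20.0, 0.3, 20_000
removed, kept, merged = [], [], []
for _ in range(trials):
    pts = rng.uniform(0, 1, rng.poisson(lam))
    drop = rng.uniform(0, 1, len(pts)) < p
    removed.append(drop.sum())
    kept.append((~drop).sum())
    merged.append(len(pts))                 # union of the two sub-processes

print(np.mean(removed), "≈", lam * p)       # ≈ 6
print(np.mean(kept), "≈", lam * (1 - p))    # ≈ 14
print(np.corrcoef(removed, kept)[0, 1])     # ≈ 0: the split counts are independent
```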
If a point $x$ is sampled from the union of $n$ Poisson processes, then the probability that the point $x$ belongs to the $j$-th Poisson process $N_j$ is given by:

$\Pr\{x\in N_j\}=\frac{\lambda_j(x)}{\sum_{i=1}^{n}\lambda_i(x)}.$

For two homogeneous Poisson processes with intensities $\lambda_1,\lambda_2$, the two previous expressions reduce to

$\lambda=\lambda_1+\lambda_2$

and

$\Pr\{x\in N_j\}=\frac{\lambda_j}{\lambda_1+\lambda_2}.$

The operation of clustering is performed when each point $x$ of some point process $N$ is replaced by another (possibly different) point process. If the original process $N$ is a Poisson point process, then the resulting process $N_c$ is called a Poisson cluster point process.

A mathematical model may require randomly moving points of a point process to other locations on the underlying mathematical space, which gives rise to a point process operation known as displacement[135] or translation.[136] The Poisson point process has been used to model, for example, the movement of plants between generations, owing to the displacement theorem,[135] which loosely says that the random independent displacement of points of a Poisson point process (on the same underlying space) forms another Poisson point process.

One version of the displacement theorem[135] involves a Poisson point process $N$ on $\mathbb{R}^d$ with intensity function $\lambda(x)$. It is then assumed that the points of $N$ are randomly displaced somewhere else in $\mathbb{R}^d$, so that each point's displacement is independent and the displacement of a point formerly at $x$ is a random vector with probability density $\rho(x,\cdot)$.[e] Then the new point process $N_D$ is also a Poisson point process with intensity function

$\lambda_D(y)=\int_{\mathbb{R}^d}\lambda(x)\rho(x,y)\,\mathrm{d}x.$

If the Poisson process is homogeneous with $\lambda(x)=\lambda>0$ and if $\rho(x,y)$ is a function of $y-x$, then

$\lambda_D(y)=\lambda.$

In other words, after each random and independent displacement of points, the original Poisson point process still exists.

The displacement theorem can be extended such that the Poisson points are randomly displaced from one Euclidean space $\mathbb{R}^d$ to another Euclidean space $\mathbb{R}^{d'}$, where $d'\geq 1$ is not necessarily equal to $d$.[18]

Another property that is considered useful is the ability to map a Poisson point process from one underlying space to another space.[137] If the mapping (or transformation) adheres to some conditions, then the resulting mapped (or transformed) collection of points also forms a Poisson point process, and this result is sometimes referred to as the mapping theorem.[137][138] The theorem involves some Poisson point process with mean measure $\Lambda$ on some underlying space. If the locations of the points are mapped (that is, the point process is transformed) according to some function to another underlying space, then the resulting point process is also a Poisson point process but with a different mean measure $\Lambda'$.
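The displacement theorem can also be seen numerically: Gaussian displacement of a homogeneous process leaves counts in an interior set Poisson with the same mean (so mean and variance agree). A minimal sketch, with window, rate, and displacement scale as illustrative assumptions; the test set is kept away from the window edge so boundary effects are negligible:

```python
import numpy as np

rng = np.random.default_rng(5)

# Each point of a homogeneous Poisson process on a 20x20 window is moved by an
# independent Gaussian vector. By the displacement theorem the displaced points
# again form a homogeneous Poisson process, so counts in a fixed interior set B
# stay Poisson with mean lam * |B|.
lam, size = 10.0, 20.0
counts = []
for _ in range(5_000):
    n = rng.poisson(lam * size * size)
    pts = rng.uniform(0, size, (n, 2))
    pts = pts + rng.normal(0, 0.5, pts.shape)      # independent displacements
    in_B = ((pts > 5) & (pts < 15)).all(axis=1)    # B = [5,15]^2, far from the edge
    counts.append(in_B.sum())

print(np.mean(counts), np.var(counts))  # both ≈ lam * |B| = 10 * 100 = 1000
```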
More specifically, the mapping theorem considers a (Borel measurable) function $f$ that maps a point process $N$ with intensity measure $\Lambda$ from one space $S$ to another space $T$, in such a manner that the new point process $N'$ has the intensity measure:

$\Lambda'(B)=\Lambda(f^{-1}(B))$

with no atoms, where $B$ is a Borel set and $f^{-1}$ denotes the inverse of the function $f$. If $N$ is a Poisson point process, then the new process $N'$ is also a Poisson point process with the intensity measure $\Lambda'$.

The tractability of the Poisson process means that sometimes it is convenient to approximate a non-Poisson point process with a Poisson one. The overall aim is to approximate both the number of points of some point process and the location of each point by a Poisson point process.[139] There are a number of methods that can be used to justify, informally or rigorously, approximating the occurrence of random events or phenomena with suitable Poisson point processes. The more rigorous methods involve deriving upper bounds on the probability metrics between the Poisson and non-Poisson point processes, while other methods can be justified by less formal heuristics.[140]

One method for approximating random events or phenomena with Poisson processes is called the clumping heuristic.[141] The general heuristic or principle involves using the Poisson point process (or Poisson distribution) to approximate events, considered rare or unlikely, of some stochastic process. In some cases these rare events are close to being independent, hence a Poisson point process can be used. When the events are not independent but tend to occur in clusters or clumps, then if these clumps are suitably defined such that they are approximately independent of each other, the number of clumps occurring will be close to a Poisson random variable[140] and the locations of the clumps will be close to a Poisson process.[141]

Stein's method is a mathematical technique originally developed for approximating random variables such as Gaussian and Poisson variables, which has also been applied to point processes. Stein's method can be used to derive upper bounds on probability metrics, which give a way to quantify how much two random mathematical objects differ stochastically.[139][142] Upper bounds on probability metrics such as total variation and Wasserstein distance have been derived.[139]

Researchers have applied Stein's method to Poisson point processes in a number of ways,[139] such as using Palm calculus.[108] Techniques based on Stein's method have been developed to factor into the upper bounds the effects of certain point process operations such as thinning and superposition.[143][144] Stein's method has also been used to derive upper bounds on metrics of Poisson and other processes such as the Cox point process, which is a Poisson process with a random intensity measure.[139]

In general, when an operation is applied to a general point process, the resulting process is usually not a Poisson point process. For example, if a point process other than a Poisson one has its points randomly and independently displaced, then the resulting process would not necessarily be a Poisson point process.
However, under certain mathematical conditions on both the original point process and the random displacement, it has been shown via limit theorems that if the points of a point process are repeatedly displaced in a random and independent manner, then the finite-dimensional distributions of the point process will converge (weakly) to those of a Poisson point process.[145]

Similar convergence results have been developed for thinning and superposition operations,[145] which show that such repeated operations on point processes can, under certain conditions, result in the process converging to a Poisson point process, provided a suitable rescaling of the intensity measure (otherwise the values of the intensity measure of the resulting point processes would approach zero or infinity). Such convergence work is directly related to the results known as the Palm–Khinchin[f] equations, which have their origins in the work of Conny Palm and Aleksandr Khinchin,[146] and help explain why the Poisson process can often be used as a mathematical model of various random phenomena.[145]

The Poisson point process can be generalized by, for example, changing its intensity measure or defining it on more general mathematical spaces. These generalizations can be studied mathematically as well as used to mathematically model or represent physical phenomena.

The Poisson-type random measures (PT) are a family of three random counting measures which are closed under restriction to a subspace, i.e. closed under thinning. These random measures are examples of the mixed binomial process and share the distributional self-similarity property of the Poisson random measure. They are the only members of the canonical non-negative power series family of distributions to possess this property and include the Poisson distribution, negative binomial distribution, and binomial distribution. The Poisson random measure is independent on disjoint subspaces, whereas the other PT random measures (negative binomial and binomial) have positive and negative covariances, respectively. The PT random measures, which include the Poisson random measure, negative binomial random measure, and binomial random measure, are discussed in [147].

For mathematical models the Poisson point process is often defined in Euclidean space,[1][36] but it has been generalized to more abstract spaces and plays a fundamental role in the study of random measures,[148][149] which requires an understanding of mathematical fields such as probability theory, measure theory and topology.[150]

In general, the concept of distance is of practical interest for applications, while topological structure is needed for Palm distributions, meaning that point processes are usually defined on mathematical spaces with metrics.[151] Furthermore, a realization of a point process can be considered as a counting measure, so point processes are types of random measures known as random counting measures.[115] In this context, the Poisson and other point processes have been studied on locally compact second countable Hausdorff spaces.[152]

A Cox point process, Cox process or doubly stochastic Poisson process is a generalization of the Poisson point process obtained by letting its intensity measure $\Lambda$ also be random and independent of the underlying Poisson process.
The process is named after David Cox, who introduced it in 1955, though other Poisson processes with random intensities had been independently introduced earlier by Lucien Le Cam and Maurice Quenouille.[3] The intensity measure may be a realization of a random variable or a random field. For example, if the logarithm of the intensity measure is a Gaussian random field, then the resulting process is known as a log Gaussian Cox process.[153] More generally, the intensity measure is a realization of a non-negative locally finite random measure. Cox point processes exhibit a clustering of points, which can be shown mathematically to be greater than that of Poisson point processes. The generality and tractability of Cox processes has resulted in their use as models in fields such as spatial statistics[154] and wireless networks.[19]

For a given point process, each random point can have a random mathematical object, known as a mark, randomly assigned to it. These marks can be as diverse as integers, real numbers, lines, geometrical objects or other point processes.[155][156] The pair consisting of a point of the point process and its corresponding mark is called a marked point, and all the marked points form a marked point process.[157] It is often assumed that the random marks are independent of each other and identically distributed, yet the mark of a point can still depend on the location of its corresponding point in the underlying (state) space.[158] If the underlying point process is a Poisson point process, then the resulting point process is a marked Poisson point process.[159]

If a general point process is defined on some mathematical space and the random marks are defined on another mathematical space, then the marked point process is defined on the Cartesian product of these two spaces. For a marked Poisson point process with independent and identically distributed marks, the marking theorem[158][160] states that this marked point process is also a (non-marked) Poisson point process defined on the aforementioned Cartesian product of the two mathematical spaces, which is not true for general point processes.

The compound Poisson point process or compound Poisson process is formed by adding random values or weights to each point of a Poisson point process defined on some underlying space, so the process is constructed from a marked Poisson point process, where the marks form a collection of independent and identically distributed non-negative random variables. In other words, for each point of the original Poisson process there is an independent and identically distributed non-negative random variable, and the compound Poisson process is formed from the sum of all the random variables corresponding to points of the Poisson process located in some region of the underlying mathematical space.[161]

If there is a marked Poisson point process formed from a Poisson point process $N$ (defined on, for example, $\mathbb{R}^d$) and a collection of independent and identically distributed non-negative marks $\{M_i\}$, such that for each point $x_i$ of the Poisson process $N$ there is a non-negative random variable $M_i$, the resulting compound Poisson process is then:[162]

$C(B)=\sum_{i:x_i\in B}M_i,$

where $B\subset\mathbb{R}^d$ is a Borel measurable set.
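A compound Poisson sum is straightforward to simulate: draw the Poisson count of points in $B$, draw an i.i.d. mark per point, and sum. A minimal sketch (the rate and exponential mark distribution are illustrative assumptions); the mean $\operatorname{E}[C(B)]=\lambda|B|\operatorname{E}[M]$ follows from Wald's identity:

```python
import numpy as np

rng = np.random.default_rng(6)

# Compound Poisson sketch on B = [0, 1]: Poisson points at rate lam, each
# carrying an i.i.d. non-negative mark; C(B) is the sum of marks of points in B.
lam, mark_mean = 8.0, 2.0
samples = []
for _ in range(50_000):
    n = rng.poisson(lam)                    # number of points of N falling in B
    marks = rng.exponential(mark_mean, n)   # i.i.d. non-negative marks M_i
    samples.append(marks.sum())             # C(B)

print(np.mean(samples), "≈", lam * mark_mean)   # ≈ 16
```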
If, more generally, the random variables $\{M_i\}$ take values in, for example, $d$-dimensional Euclidean space $\mathbb{R}^d$, the resulting compound Poisson process is an example of a Lévy process, provided that it is formed from a homogeneous Poisson point process $N$ defined on the non-negative numbers $[0,\infty)$.[163]

The failure process with the exponential smoothing of intensity functions (FP-ESI) is an extension of the nonhomogeneous Poisson process. The intensity function of an FP-ESI is an exponential smoothing function of the intensity functions at the last time points of event occurrences. On eight real-world failure datasets, the FP-ESI outperformed nine other stochastic processes when the models were used to fit the datasets,[164] with model performance measured in terms of AIC (Akaike information criterion) and BIC (Bayesian information criterion).
https://en.wikipedia.org/wiki/Poisson_point_process
In mathematics, Young's inequality for products is a mathematical inequality about the product of two numbers.[1] The inequality is named after William Henry Young and should not be confused with Young's convolution inequality. Young's inequality for products can be used to prove Hölder's inequality. It is also widely used to estimate the norm of nonlinear terms in PDE theory, since it allows one to estimate a product of two terms by a sum of the same terms raised to a power and scaled.

The standard form of the inequality is the following, which can be used to prove Hölder's inequality.

Theorem. If $a\geq 0$ and $b\geq 0$ are nonnegative real numbers and if $p>1$ and $q>1$ are real numbers such that $\frac{1}{p}+\frac{1}{q}=1$, then

$ab\leq\frac{a^p}{p}+\frac{b^q}{q}.$

Equality holds if and only if $a^p=b^q$.

Since $\frac{1}{p}+\frac{1}{q}=1$, we have $p-1=\frac{1}{q-1}$. A graph $y=x^{p-1}$ on the $xy$-plane is thus also the graph $x=y^{q-1}$. From sketching a visual representation of the integrals of the area between this curve and the axes, and the area of the rectangle bounded by the lines $x=0$, $x=a$, $y=0$, $y=b$, and using the fact that $y$ is always increasing for increasing $x$ and vice versa, we can see that $\int_0^a x^{p-1}\,\mathrm{d}x$ upper-bounds the area of the part of the rectangle below the curve (with equality when $b\geq a^{p-1}$) and $\int_0^b y^{q-1}\,\mathrm{d}y$ upper-bounds the area of the part of the rectangle above the curve (with equality when $b\leq a^{p-1}$). Thus,

$\int_0^a x^{p-1}\,\mathrm{d}x+\int_0^b y^{q-1}\,\mathrm{d}y\geq ab,$

with equality when $b=a^{p-1}$ (or equivalently, $a^p=b^q$). Young's inequality follows from evaluating the integrals. (See below for a generalization.)

A second proof is via Jensen's inequality. The claim is certainly true if $a=0$ or $b=0$, so henceforth assume that $a>0$ and $b>0$. Put $t=1/p$ and $1-t=1/q$. Because the logarithm function is concave,

$\ln\!\left(ta^p+(1-t)b^q\right)\geq t\ln\!\left(a^p\right)+(1-t)\ln\!\left(b^q\right)=\ln(a)+\ln(b)=\ln(ab),$

with equality holding if and only if $a^p=b^q$. Young's inequality follows by exponentiating.

Yet another proof is to first prove it with $b=1$ and then apply the resulting inequality to $\frac{a}{b^q}$. The proof below also illustrates why the Hölder conjugate exponent is the only possible parameter that makes Young's inequality hold for all non-negative values. The details follow: let $0<\alpha<1$ and $\alpha+\beta=1$. The inequality

$x\leq\alpha x^p+\beta\quad\text{for all }x\geq 0$

holds if and only if $\alpha=\frac{1}{p}$ (and hence $\beta=\frac{1}{q}$). This can be shown by convexity arguments or by simply minimizing the single-variable function.
To prove the full Young's inequality, we may clearly assume that $a>0$ and $b>0$. Now apply the inequality above to $x=\frac{a}{b^s}$ to obtain:

$\frac{a}{b^s}\leq\frac{1}{p}\frac{a^p}{b^{sp}}+\frac{1}{q}.$

It is easy to see that choosing $s=q-1$ and multiplying both sides by $b^q$ yields Young's inequality.

Young's inequality may equivalently be written as

$a^\alpha b^\beta\leq\alpha a+\beta b,\qquad 0\leq\alpha,\beta\leq 1,\quad\alpha+\beta=1.$

This is just the concavity of the logarithm function. Equality holds if and only if $a=b$ or $\{\alpha,\beta\}=\{0,1\}$. This also follows from the weighted AM-GM inequality.

Theorem.[4] Suppose $a>0$ and $b>0$. If $1<p<\infty$ and $q$ are such that $\frac{1}{p}+\frac{1}{q}=1$, then

$ab=\min_{0<t<\infty}\left(\frac{t^p a^p}{p}+\frac{t^{-q}b^q}{q}\right).$

Using $t:=1$ and replacing $a$ with $a^{1/p}$ and $b$ with $b^{1/q}$ results in the inequality:

$a^{1/p}\,b^{1/q}\leq\frac{a}{p}+\frac{b}{q},$

which is useful for proving Hölder's inequality.

Proof: define a real-valued function $f$ on the positive real numbers by

$f(t)=\frac{t^p a^p}{p}+\frac{t^{-q}b^q}{q}$

for every $t>0$ and then calculate its minimum.

Theorem. If $0\leq p_i\leq 1$ with $\sum_i p_i=1$, then

$\prod_i a_i^{p_i}\leq\sum_i p_i a_i.$

Equality holds if and only if all the $a_i$ with non-zero $p_i$ are equal.

An elementary case of Young's inequality is the inequality with exponent $2$,

$ab\leq\frac{a^2}{2}+\frac{b^2}{2},$

which also gives rise to the so-called Young's inequality with $\varepsilon$ (valid for every $\varepsilon>0$), sometimes called the Peter–Paul inequality.[5] This name refers to the fact that tighter control of the second term is achieved at the cost of losing some control of the first term; one must "rob Peter to pay Paul":

$ab\leq\frac{a^2}{2\varepsilon}+\frac{\varepsilon b^2}{2}.$

Proof: Young's inequality with exponent $2$ is the special case $p=q=2$. However, it has a more elementary proof. Start by observing that the square of every real number is zero or positive.
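Before finishing that elementary argument, the minimization behind the theorem above can be made explicit; a worked sketch in LaTeX, using only $\frac{1}{p}+\frac{1}{q}=1$ (equivalently $p+q=pq$):

```latex
% Minimizing f(t) = t^p a^p / p + t^{-q} b^q / q over t > 0:
\[
f'(t) = t^{p-1}a^p - t^{-q-1}b^q = 0
\quad\Longrightarrow\quad
t_*^{\,p+q} = \frac{b^q}{a^p}.
\]
% Since 1/p + 1/q = 1 gives p + q = pq, we have pq/(p+q) = 1, so
\[
t_*^{\,p}a^p = \left(\frac{b^q}{a^p}\right)^{\!p/(p+q)} a^p
            = (ab)^{pq/(p+q)} = ab,
\qquad
t_*^{-q}b^q = ab .
\]
% Substituting back:
\[
f(t_*) = \frac{ab}{p} + \frac{ab}{q} = ab .
\]
```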
Returning to the elementary proof: for every pair of real numbers $a$ and $b$ we can write

$0\leq(a-b)^2.$

Work out the square on the right-hand side:

$0\leq a^2-2ab+b^2.$

Add $2ab$ to both sides:

$2ab\leq a^2+b^2.$

Divide both sides by 2 and we have Young's inequality with exponent $2$:

$ab\leq\frac{a^2}{2}+\frac{b^2}{2}.$

Young's inequality with $\varepsilon$ follows by substituting $a'$ and $b'$, as below, into Young's inequality with exponent $2$:

$a'=a/\sqrt{\varepsilon},\qquad b'=\sqrt{\varepsilon}\,b.$

T. Ando proved a generalization of Young's inequality for complex matrices ordered by Loewner ordering.[6] It states that for any pair $A,B$ of complex matrices of order $n$ there exists a unitary matrix $U$ such that

$U^*|AB^*|U\preceq\frac{1}{p}|A|^p+\frac{1}{q}|B|^q,$

where $*$ denotes the conjugate transpose of the matrix and $|A|=\sqrt{A^*A}$.

For the standard version[7][8] of the inequality, let $f$ denote a real-valued, continuous and strictly increasing function on $[0,c]$ with $c>0$ and $f(0)=0$. Let $f^{-1}$ denote the inverse function of $f$. Then, for all $a\in[0,c]$ and $b\in[0,f(c)]$,

$ab\leq\int_0^a f(x)\,\mathrm{d}x+\int_0^b f^{-1}(x)\,\mathrm{d}x,$

with equality if and only if $b=f(a)$. With $f(x)=x^{p-1}$ and $f^{-1}(y)=y^{q-1}$, this reduces to the standard version for conjugate Hölder exponents. For details and generalizations we refer to the paper of Mitroi & Niculescu.[9]

By denoting the convex conjugate of a real function $f$ by $g$, we obtain

$ab\leq f(a)+g(b).$

This follows immediately from the definition of the convex conjugate. For a convex function $f$ it also follows from the Legendre transformation. More generally, if $f$ is defined on a real vector space $X$ and its convex conjugate is denoted by $f^\star$ (and is defined on the dual space $X^\star$), then

$\langle u,v\rangle\leq f^\star(u)+f(v),$

where $\langle\cdot,\cdot\rangle\colon X^\star\times X\to\mathbb{R}$ is the dual pairing.

The convex conjugate of $f(a)=a^p/p$ is $g(b)=b^q/q$ with $q$ such that $\frac{1}{p}+\frac{1}{q}=1$, and thus Young's inequality for conjugate Hölder exponents mentioned above is a special case. The Legendre transform of $f(a)=e^a-1$ is $g(b)=1-b+b\ln b$, hence

$ab\leq e^a-b+b\ln b$

for all non-negative $a$ and $b$. This estimate is useful in large deviations theory under exponential moment conditions, because $b\ln b$ appears in the definition of relative entropy, which is the rate function in Sanov's theorem.
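The standard form of the inequality and its equality condition are easy to sanity-check numerically; a minimal sketch, with the exponent $p=3$ and the sample ranges chosen for illustration:

```python
import numpy as np

rng = np.random.default_rng(7)

# Check Young's inequality a*b <= a**p/p + b**q/q for conjugate exponents
# 1/p + 1/q = 1, and the equality case a**p == b**q.
p = 3.0
q = p / (p - 1)          # conjugate exponent

a = rng.uniform(0, 10, 100_000)
b = rng.uniform(0, 10, 100_000)
assert np.all(a * b <= a**p / p + b**q / q + 1e-9)  # small tolerance for rounding

# Equality case: choose b so that b**q = a**p, i.e. b = a**(p-1).
a0 = 1.7
b0 = a0**(p - 1)
print(a0 * b0, a0**p / p + b0**q / q)   # both sides agree
```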
https://en.wikipedia.org/wiki/Young%27s_inequality_for_products
Network, networking and networked may refer to:
https://en.wikipedia.org/wiki/Networking
The Unified Medical Language System (UMLS) is a compendium of many controlled vocabularies in the biomedical sciences (created 1986).[1] It provides a mapping structure among these vocabularies and thus allows one to translate among the various terminology systems; it may also be viewed as a comprehensive thesaurus and ontology of biomedical concepts. UMLS further provides facilities for natural language processing. It is intended to be used mainly by developers of systems in medical informatics. UMLS consists of Knowledge Sources (databases) and a set of software tools.

The UMLS was designed and is maintained by the US National Library of Medicine, is updated quarterly, and may be used for free. The project was initiated in 1986 by Donald A.B. Lindberg, M.D., then Director of the Library of Medicine, and directed by Betsy Humphreys.[2]

The number of biomedical resources available to researchers is enormous. Often this is a problem due to the large volume of documents retrieved when the medical literature is searched. The purpose of the UMLS is to enhance access to this literature by facilitating the development of computer systems that understand biomedical language. This is achieved by overcoming two significant barriers: "the variety of ways the same concepts are expressed in different machine-readable sources & by different people" and "the distribution of useful information among many disparate databases & systems".[citation needed]

Users of the system are required to sign a "UMLS agreement" and file brief annual usage reports. Academic users may use the UMLS free of charge for research purposes. Commercial or production use requires copyright licenses for some of the incorporated source vocabularies.

The Metathesaurus forms the base of the UMLS and comprises over 1 million biomedical concepts and 5 million concept names, all of which stem from the over 100 incorporated controlled vocabularies and classification systems. Some examples of the incorporated controlled vocabularies are CPT, ICD-10, MeSH, SNOMED CT, DSM-IV, LOINC, WHO Adverse Drug Reaction Terminology, UK Clinical Terms, RxNorm, Gene Ontology, and OMIM (see full list).

The Metathesaurus is organized by concept, and each concept has specific attributes defining its meaning and is linked to the corresponding concept names in the various source vocabularies. Numerous relationships between the concepts are represented, for instance hierarchical ones such as "isa" for subclasses and "is part of" for subunits, and associative ones such as "is caused by" or "in the literature often occurs close to" (the latter being derived from Medline).

The scope of the Metathesaurus is determined by the scope of the source vocabularies. If different vocabularies use different names for the same concept, or if they use the same name for different concepts, then this will be faithfully represented in the Metathesaurus. All hierarchical information from the source vocabularies is retained in the Metathesaurus. Metathesaurus concepts can also link to resources outside of the database, for instance gene sequence databases.

Each concept in the Metathesaurus is assigned one or more semantic types (categories), which are linked with one another through semantic relationships.[3] The semantic network is a catalog of these semantic types and relationships. This is a rather broad classification; there are 127 semantic types and 54 relationships in total. The major semantic types are organisms, anatomical structures, biologic function, chemicals, events, physical objects, and concepts or ideas.
The links among semantic types define the structure of the network and show important relationships between the groupings and concepts. The primary link between semantic types is the "isa" link, establishing a hierarchy of types. The network also has 5 major categories of non-hierarchical (or associative) relationships, which constitute the remaining 53 relationship types. These are "physically related to", "spatially related to", "temporally related to", "functionally related to" and "conceptually related to".[3]

The information about a semantic type includes an identifier, definition, examples, hierarchical information about the encompassing semantic type(s), and associative relationships. Associative relationships within the Semantic Network are very weak. They capture at most some-some relationships, i.e. they capture the fact that some instance of the first type may be connected by the salient relationship to some instance of the second type. Phrased differently, they capture the fact that a corresponding relational assertion is meaningful (though it need not be true in all cases). An example of an associative relationship is "may-cause"; applied to the terms (smoking, lung cancer) it would yield: smoking "may-cause" lung cancer.

The SPECIALIST Lexicon contains information about common English vocabulary, biomedical terms, terms found in MEDLINE and terms found in the UMLS Metathesaurus. Each entry contains syntactic (how words are put together to create meaning), morphological (form and structure) and orthographic (spelling) information. A set of Java programs uses the lexicon to work through the variations in biomedical texts by relating words by their parts of speech, which can be helpful in web searches or searches through an electronic medical record. Entries may be one-word or multiple-word terms. Records contain four parts: the base form (i.e. "run" for "running"); the parts of speech (of which SPECIALIST recognizes eleven); a unique identifier; and any available spelling variants. For example, a query for "anesthetic" would return the following:[4]

The SPECIALIST Lexicon is available in two formats. The "unit record" format can be seen above and comprises slots and fillers. A slot is the element (i.e. "base=" or "spelling variant=") and the fillers are the values attributable to that slot for that entry. The "relational table" format is not yet normalized and contains a great deal of redundant data in the files.

Given the size and complexity of the UMLS and its permissive policy on integrating terms, errors are inevitable.[5] Errors include ambiguity and redundancy, hierarchical relationship cycles (a concept is both an ancestor and a descendant of another), missing ancestors (the semantic types of parent and child concepts are unrelated), and semantic inversion (the child/parent relationship between the semantic types is not consistent with that of the concepts).[6]

These errors are discovered and resolved by auditing the UMLS. Manual audits can be very time-consuming and costly. Researchers have attempted to address the issue in a number of ways. Automated tools can be used to search for these errors. For structural inconsistencies (such as loops), a trivial solution based on ordering would work. However, the same wouldn't apply when the inconsistency is at the term or concept level (the context-specific meaning of a term).[7] This requires an informed search strategy to be used (knowledge representation).

In addition to the knowledge sources, the National Library of Medicine also provides supporting tools.
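To make the slot/filler idea concrete, here is a toy parser for a unit-record-style entry. The record text and slot names below are illustrative assumptions for demonstration only, not the official SPECIALIST file format:

```python
# A toy parser for a slot/filler "unit record" layout like the one described
# above. The record text and slot names here are hypothetical examples, not
# the actual SPECIALIST Lexicon format.
from collections import defaultdict

record_text = """\
{base=anesthetic
spelling_variant=anaesthetic
entry=E0000001
cat=noun
}"""

def parse_unit_record(text: str) -> dict:
    """Collect slot=filler lines into a dict mapping slot -> list of fillers."""
    slots = defaultdict(list)
    for line in text.splitlines():
        line = line.strip().strip("{}")
        if "=" in line:
            slot, filler = line.split("=", 1)
            slots[slot].append(filler)
    return dict(slots)

print(parse_unit_record(record_text))
# {'base': ['anesthetic'], 'spelling_variant': ['anaesthetic'], 'entry': ['E0000001'], 'cat': ['noun']}
```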
https://en.wikipedia.org/wiki/Unified_Medical_Language_System
In number theory the Agoh–Giuga conjecture on the Bernoulli numbers $B_k$ postulates that $p$ is a prime number if and only if
$p B_{p-1} \equiv -1 \pmod{p}.$
It is named after Takashi Agoh and Giuseppe Giuga.

The conjecture as stated above is due to Takashi Agoh (1990); an equivalent formulation is due to Giuseppe Giuga, from 1950, to the effect that $p$ is prime if and only if
$\sum_{i=1}^{p-1} i^{p-1} \equiv -1 \pmod{p},$
which may also be written as
$\sum_{i=1}^{p-1} i^{p-1} \equiv p-1 \pmod{p}.$
It is trivial to show that $p$ being prime is sufficient for the second equivalence to hold, since if $p$ is prime, Fermat's little theorem states that
$a^{p-1} \equiv 1 \pmod{p}$
for $a = 1, 2, \dots, p-1$, and the equivalence follows, since $p-1 \equiv -1 \pmod{p}$.

The statement is still a conjecture since it has not yet been proven that if a number $n$ is not prime (that is, $n$ is composite), then the formula does not hold. It has been shown that a composite number $n$ satisfies the formula if and only if it is both a Carmichael number and a Giuga number, and that if such a number exists, it has at least 13,800 digits (Borwein, Borwein, Borwein, Girgensohn 1996). Laerte Sorini, in a work of 2001, showed that a possible counterexample should be a number $n$ greater than $10^{36067}$, which represents the limit suggested by Bedocchi for the demonstration technique specified by Giuga for his own conjecture.

The Agoh–Giuga conjecture bears a similarity to Wilson's theorem, which has been proven to be true. Wilson's theorem states that a number $p$ is prime if and only if
$(p-1)! \equiv -1 \pmod{p},$
which may also be written as
$(p-1)! \equiv p-1 \pmod{p}.$
For an odd prime $p$ we have
$\prod_{i=1}^{p-1} i^{p-1} \equiv (-1)^{p-1} \equiv 1 \pmod{p},$
and for $p = 2$ we have
$\prod_{i=1}^{p-1} i^{p-1} \equiv (-1)^{p-1} \equiv 1 \pmod{2}.$
So, the truth of the Agoh–Giuga conjecture combined with Wilson's theorem would give: a number $p$ is prime if and only if
$\sum_{i=1}^{p-1} i^{p-1} \equiv -1 \pmod{p}$
and
$\prod_{i=1}^{p-1} i^{p-1} \equiv (-1)^{p-1} \pmod{p}.$
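Giuga's sum formulation is straightforward to test numerically; the sketch below checks, for all n up to a small bound, that the sum congruence holds exactly when n is prime (any counterexample is known to be astronomically large, so none can appear in this range).

# Check Giuga's criterion: sum_{i=1}^{n-1} i^(n-1) = -1 (mod n) iff n is prime.
def giuga_sum_mod(n):
    return sum(pow(i, n - 1, n) for i in range(1, n)) % n

def is_prime(n):
    return n >= 2 and all(n % d for d in range(2, int(n ** 0.5) + 1))

for n in range(2, 500):
    assert (giuga_sum_mod(n) == n - 1) == is_prime(n)
print("Giuga's criterion matches primality for all n < 500")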
https://en.wikipedia.org/wiki/Agoh%E2%80%93Giuga_conjecture
Reputation management refers to the influencing, controlling, enhancing, or concealing of an individual's or group's reputation. It is a marketing technique used to modify a person's or a company's reputation in a positive way.[1] The growth of the internet and social media led to the growth of reputation management companies, with search results as a core part of a client's reputation.[2] Online reputation management (ORM) involves overseeing and influencing the search engine results related to products and services.[3]

Ethical grey areas include mug shot removal sites, astroturfing customer review sites, censoring complaints, and using search engine optimization tactics to influence results. In other cases, the ethical lines are clear; some reputation management companies are closely connected to websites that publish unverified and libelous statements about people.[4] Such unethical companies charge thousands of dollars to remove these posts – temporarily – from their websites.[4]

The field of public relations has evolved with the rise of the internet and social media. Reputation management is now broadly categorized into two areas: online reputation management and offline reputation management. Online reputation management focuses on the management of product and service search results within the digital space. A variety of electronic markets and online communities like eBay, Amazon and Alibaba have ORM systems built in, and using effective control nodes can minimize the threat and protect systems from possible misuses and abuses by malicious nodes in decentralized overlay networks.[5] Big Data has the potential to be employed in overseeing and enhancing the reputation of organizations.[6] Offline reputation management shapes public perception of a given entity outside the digital sphere.[7] Popular controls for offline reputation management include social responsibility, media visibility, press releases in print media and sponsorship, amongst related tools.[8]

Reputation is a social construct based on the opinion other people hold about a person or thing. Before the internet was developed, consumers wanting to learn about a company had fewer options. They had access to resources such as the Yellow Pages, but mostly relied on word-of-mouth. A company's reputation depended on personal experience.[citation needed] As a company grew and expanded, it was subject to the market's perception of the brand. Public relations were developed to manage the image and reputation of a company or individual.[citation needed] The concept was initially created to broaden public relations outside of media relations.[9] Academic studies have identified it as a driving force behind Fortune 500 corporate public relations since the beginning of the 21st century.[10]

As of 1988, reputation management was acknowledged as a valuable intangible asset and corporate necessity, which can be one of the most important sources of competitive edge in a fiercely competitive market,[11] and with firms under scrutiny from the business community, regulators,[vague] and corporate governance watchdogs, good reputation management practices would help firms cope with this scrutiny.[12] As of 2006, reputation management practices reinforce and aid a corporation's branding objectives.
Good reputation management practices also help an entity maintain staff confidence: public perceptions, if undermined or ignored, can be costly and may in the long run cripple employee morale, one of the most important drivers of company performance.[13]

Originally, public relations included printed media, events and networking campaigns. At the end of the 1990s, search engines became widely used. The popularity of the internet introduced new marketing and branding opportunities. Where once journalists were the main source of media content, blogs, review sites and social media gave a voice to consumers regardless of qualification. Public relations became part of online reputation management (ORM). ORM includes traditional reputation strategies of public relations but also focuses on building a long-term reputation strategy that is consistent across all web-based channels and platforms. ORM includes search engine reputation management, which is designed to counter negative search results and elevate positive content.[14][15]

Reputation management (sometimes referred to as rep management or ORM) is the practice of attempting to shape public perception of a person or organization by influencing information about that entity, primarily online.[16] This shaping of perceptions is made necessary by the role consumers play in any organization and by an awareness of how much these perceptions, if ignored, may harm a company's performance at any time of the year, a risk no entrepreneur or company executive can afford.[17]

Specifically, reputation management involves monitoring the reputation of an individual or a brand on the internet, primarily focusing on the various social media platforms such as Facebook, Instagram, YouTube, etc.; addressing content which is potentially damaging to it; and using customer feedback to try to solve problems before they damage the individual's or brand's reputation.[18] A major part of reputation management involves suppressing negative search results, while highlighting positive ones.[19] For businesses, reputation management usually involves an attempt to bridge the gap between how a company perceives itself and how others view it.[20]

In 2012, an article titled "Social Media Research in Advertising, Communication, Marketing and Public Relations", written by Hyoungkoo Khang et al., was published.[21] Its references to Kaplan and Haenlein's theory of social presence highlight the "concept of self-presentation."[22] Khang highlights that "companies must monitor individual's comments regarding service 24/7."[23] This implies that the reputation of a company essentially relies on the consumer, as consumers are the ones who can make or break it. A 2015 study commissioned by the American Association of Advertising Agencies concluded that 4 percent of consumers believed advertisers and marketers practice integrity.[24]

According to Susan Crawford, a cyberlaw specialist from Cardozo Law School, most websites will remove negative content when contacted to avoid litigation. The Wall Street Journal noted that in some cases, writing a letter to a detractor can have unintended consequences, though the company makes an effort to avoid writing to certain website operators that are likely to respond negatively. The company says it respects the First Amendment and does not try to remove "genuinely newsworthy speech."
It generally cannot remove major government-related news stories from established publications or court records.[25][26]

In 2015, Jon Ronson, author of "So You've Been Publicly Shamed", said that reputation management helped some people who became agoraphobic due to public humiliation from online shaming, but that it was an expensive service that many could not afford.[27][28]

In 2011, controversy around the Taco Bell restaurant chain arose when public accusations were made that their "seasoned beef" product was made up of only 35% real beef. A class action lawsuit was filed by the law firm Beasley Allen against Taco Bell. The suit was voluntarily withdrawn, with Beasley Allen citing that "From the inception of this case, we stated that if Taco Bell would make certain changes regarding disclosure and marketing of its 'seasoned beef' product, the case could be dismissed."[29][30] Taco Bell responded to the case being withdrawn by launching a reputation management campaign titled "Would it kill you to say you're sorry?" that ran advertisements in various news outlets in print and online, which attempted to draw attention to the voluntary withdrawal of the case.[31]

In 2015, Volkswagen, a German automobile manufacturer, faced a massive €30 billion controversy. A scandal erupted when it was revealed that 11 million of its vehicles globally had been fitted with devices designed to mask the true levels of harmful emissions. The reaction from the company's investors was swift as Volkswagen's stock value started to fall rapidly.[32] The brand released a two-minute video in which the CEO and other representatives apologized after pleading guilty. However, this wasn't enough to change the public perception. The automotive giant had to bring in four PR firms led by Hering Schuppener, a German crisis communications and reputation management agency.[33] To rebuild its reputation, Volkswagen launched an initiative to transition to electric motors on an unprecedented scale. The company released print media and published pieces in top publications to show its commitment to developing electric and hybrid vehicle models worldwide, which helped improve its CSR image.[33]

Starbucks, the coffeehouse chain, also faced reputation damage in response to the arrests of two African-American men at its Philadelphia branch. In response to a request to use the bathroom, the branch's manager denied the two men access since they hadn't bought anything, calling the police when they refused to leave.[34] The incident sparked massive public outrage and boycotts across the country.[35] SYPartners, a business reputation consultancy, was engaged to provide Starbucks leadership with advice after the incident. Starbucks issued an apology, which was circulated across top media publications.[36] The company also initiated an anti-bias training for its 175,000 employees across 8,000 locations.[37] Starbucks also changed its policy, allowing people to sit without making a purchase. Both men also reached a settlement with Starbucks and the city.[34]

In 2024, a London restaurant was review bombed by a cybercrime group to extort £10,000. The negative reviews brought the eatery's Google rating down to 2.3 stars, from 4.9 stars before the attack.[38] Maximatic Media, an online reputation management firm, was hired to identify the origin of the malicious reviews and found that they were being generated by a botnet.
The agency worked with Google for the removal of these fake reviews to restore the restaurant's online reputation to a 4.8-star rating.[39]

Organisations attempt to manage their reputations on websites that many people visit, such as eBay,[40] Wikipedia, and Google. Some of the tactics used by reputation management firms include:[41]

The practice of reputation management raises many ethical questions.[44][49] There is wide disagreement about where the line for disclosure, astroturfing, and censorship should be drawn. Firms have been known to hire staff to pose as bloggers on third-party sites without disclosing they were paid, and some have been criticized for asking websites to remove negative posts.[14][42] The exposure of unethical reputation management may itself be risky to the reputation of a firm that attempts it, if it becomes known.[50]

In 2007 Google declared there to be nothing inherently wrong with reputation management,[43] and even introduced a toolset in 2011 for users to monitor their online identity and request the removal of unwanted content.[51] Many firms are selective about the clients they accept. For example, they may avoid individuals who committed violent crimes who are looking to push information about their crimes lower on search results.[44]

In 2010, a study showed that Naymz, one of the first Web 2.0 services to provide utilities for online reputation management (ORM), had developed a method to assess the online reputation of its members (RepScore) that was rather easy to deceive. The study found that the highest level of online reputation was easily achieved by engaging a small social group of nine persons who connect with each other and provide reciprocal positive feedback and endorsements.[52] As of December 2017, Naymz was shut down.

In 2015, the online retailer Amazon.com sued 1,114 people who were paid to publish fake five-star reviews for products. These reviews were created using a website for macrotasking, Fiverr.com.[53][54][55] Several other companies offer fake Yelp and Facebook reviews, and one journalist amassed five-star reviews for a business that doesn't exist, from social media accounts that have also given overwhelmingly positive reviews to "a chiropractor in Arizona, a hair salon in London, a limo company in North Carolina, a realtor in Texas, and a locksmith in Florida, among other far-flung businesses".[56] In 2007, a study by the University of California, Berkeley found that some sellers on eBay were undertaking reputation management by selling products at a discount in exchange for positive feedback to game the system.[57]

In 2016, the Washington Post detailed 25 court cases, at least 15 of which had false addresses for the defendant. The court cases had similar language, and the defendant agreed to the injunction by the plaintiff, which allowed the reputation management company to issue takedown notices to Google, Yelp, Leagle, Ripoff Report, various news sites, and other websites.[58]
https://en.wikipedia.org/wiki/Reputation_management
This article lists the companies worldwide engaged in the development of quantum computing, quantum communication and quantum sensing. Quantum computing and communication are two sub-fields of quantum information science, which describes and theorizes information science in terms of quantum physics. While the fundamental unit of classical information is the bit, the basic unit of quantum information is the qubit. Quantum sensing is the third main sub-field of quantum technologies; it consists in taking advantage of the sensitivity of quantum states to the surrounding environment to perform measurements at the atomic scale.
https://en.wikipedia.org/wiki/List_of_companies_involved_in_quantum_computing_or_communication
In mathematics, a real-valued function is called convex if the line segment between any two distinct points on the graph of the function lies above or on the graph between the two points. Equivalently, a function is convex if its epigraph (the set of points on or above the graph of the function) is a convex set. In simple terms, a convex function graph is shaped like a cup $\cup$ (or a straight line like a linear function), while a concave function's graph is shaped like a cap $\cap$.

A twice-differentiable function of a single variable is convex if and only if its second derivative is nonnegative on its entire domain.[1] Well-known examples of convex functions of a single variable include a linear function $f(x) = cx$ (where $c$ is a real number), a quadratic function $cx^2$ ($c$ as a nonnegative real number) and an exponential function $ce^x$ ($c$ as a nonnegative real number).

Convex functions play an important role in many areas of mathematics. They are especially important in the study of optimization problems, where they are distinguished by a number of convenient properties. For instance, a strictly convex function on an open set has no more than one minimum. Even in infinite-dimensional spaces, under suitable additional hypotheses, convex functions continue to satisfy such properties and, as a result, they are the most well-understood functionals in the calculus of variations. In probability theory, a convex function applied to the expected value of a random variable is always bounded above by the expected value of the convex function of the random variable. This result, known as Jensen's inequality, can be used to deduce inequalities such as the arithmetic–geometric mean inequality and Hölder's inequality.

Let $X$ be a convex subset of a real vector space and let $f : X \to \mathbb{R}$ be a function. Then $f$ is called convex if and only if any of the following equivalent conditions hold:

(1) For all $0 \leq t \leq 1$ and all $x_1, x_2 \in X$: $f(t x_1 + (1-t) x_2) \leq t f(x_1) + (1-t) f(x_2)$.

(2) For all $0 < t < 1$ and all $x_1, x_2 \in X$ such that $x_1 \neq x_2$: $f(t x_1 + (1-t) x_2) \leq t f(x_1) + (1-t) f(x_2)$.

The second statement characterizing convex functions that are valued in the real line $\mathbb{R}$ is also the statement used to define convex functions that are valued in the extended real number line $[-\infty, \infty] = \mathbb{R} \cup \{\pm\infty\}$, where such a function $f$ is allowed to take $\pm\infty$ as a value. The first statement is not used because it permits $t$ to take $0$ or $1$ as a value, in which case, if $f(x_1) = \pm\infty$ or $f(x_2) = \pm\infty$, respectively, then $t f(x_1) + (1-t) f(x_2)$ would be undefined (because the multiplications $0 \cdot \infty$ and $0 \cdot (-\infty)$ are undefined). The sum $-\infty + \infty$ is also undefined, so a convex extended real-valued function is typically only allowed to take exactly one of $-\infty$ and $+\infty$ as a value.
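The first defining condition can be probed numerically for a given function; the sketch below samples random chords, and since it only samples, a pass is merely evidence of convexity while a single violation is a definitive refutation.

import random

# Sample the inequality f(t*x1 + (1-t)*x2) <= t*f(x1) + (1-t)*f(x2).
def looks_convex(f, lo, hi, trials=10000, tol=1e-9):
    for _ in range(trials):
        x1, x2 = random.uniform(lo, hi), random.uniform(lo, hi)
        t = random.uniform(0.0, 1.0)
        if f(t * x1 + (1 - t) * x2) > t * f(x1) + (1 - t) * f(x2) + tol:
            return False  # found a chord below the graph: not convex
    return True  # no violation found (evidence, not a proof)

print(looks_convex(lambda x: x * x, -10, 10))   # True: x^2 is convex
print(looks_convex(lambda x: x ** 3, -10, 10))  # False: x^3 is not convex on R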
The second statement can also be modified to get the definition of strict convexity, where the latter is obtained by replacing $\leq$ with the strict inequality $<$. Explicitly, the map $f$ is called strictly convex if and only if for all real $0 < t < 1$ and all $x_1, x_2 \in X$ such that $x_1 \neq x_2$:
$f(t x_1 + (1-t) x_2) < t f(x_1) + (1-t) f(x_2).$

A strictly convex function $f$ is a function such that the straight line between any pair of points on the graph of $f$ lies above the graph, except at the two points themselves. An example of a function which is convex but not strictly convex is $f(x, y) = x^2 + y$. This function is not strictly convex because the graph contains the straight line segment between any two points sharing an $x$-coordinate, whereas the chord between any two points not sharing an $x$-coordinate lies strictly above the graph between them.

The function $f$ is said to be concave (resp. strictly concave) if $-f$ ($f$ multiplied by −1) is convex (resp. strictly convex).

The term convex is often referred to as convex down or concave upward, and the term concave is often referred as concave down or convex upward.[3][4][5] If the term "convex" is used without an "up" or "down" keyword, then it refers strictly to a cup-shaped graph $\cup$. As an example, Jensen's inequality refers to an inequality involving a convex or convex-(down) function.[6]

Many properties of convex functions have the same simple formulation for functions of many variables as for functions of one variable. See below the properties for the case of many variables, as some of them are not listed for functions of one variable.

Since $f$ is convex, by using one of the convex function definitions above and letting $x_2 = 0$, it follows that for all real $0 \leq t \leq 1$,
$f(t x_1) = f(t x_1 + (1-t) \cdot 0) \leq t f(x_1) + (1-t) f(0) \leq t f(x_1),$
where the last step uses the assumption $f(0) \leq 0$. From $f(t x_1) \leq t f(x_1)$, it follows that
$f(a) + f(b) = f\left((a+b)\tfrac{a}{a+b}\right) + f\left((a+b)\tfrac{b}{a+b}\right) \leq \tfrac{a}{a+b} f(a+b) + \tfrac{b}{a+b} f(a+b) = f(a+b).$
Namely, $f(a) + f(b) \leq f(a+b)$.

The concept of strong convexity extends and parametrizes the notion of strict convexity. Intuitively, a strongly-convex function is a function that grows as fast as a quadratic function.[11] A strongly convex function is also strictly convex, but not vice versa. If a one-dimensional function $f$ is twice continuously differentiable and the domain is the real line, then we can characterize it as follows:

For example, let $f$ be strictly convex, and suppose there is a sequence of points $(x_n)$ such that $f''(x_n) = \tfrac{1}{n}$. Even though $f''(x_n) > 0$, the function is not strongly convex because $f''(x)$ will become arbitrarily small.
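A concrete instance of this phenomenon is $f(x) = x^4$: it is strictly convex on $\mathbb{R}$, yet $f''(x) = 12x^2$ takes arbitrarily small positive values near the origin, so no $m > 0$ bounds it from below. A small sketch (the chord test skips nearly coincident points to avoid floating-point ties):

import random

def f(x):
    return x ** 4

# Strict convexity: every chord lies strictly above the graph.
for _ in range(10000):
    x1, x2 = random.uniform(-5, 5), random.uniform(-5, 5)
    if abs(x1 - x2) > 1e-3:
        assert f(0.5 * (x1 + x2)) < 0.5 * f(x1) + 0.5 * f(x2)

# But inf f'' = 0, so no strong convexity parameter m > 0 exists.
print(min(12.0 * (1.0 / n) ** 2 for n in (1, 10, 100, 1000)))  # 1.2e-05 and shrinking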
More generally, a differentiable function $f$ is called strongly convex with parameter $m > 0$ if the following inequality holds for all points $x, y$ in its domain:[12]
$(\nabla f(x) - \nabla f(y))^T (x - y) \geq m \|x - y\|_2^2$
or, more generally,
$\langle \nabla f(x) - \nabla f(y), x - y \rangle \geq m \|x - y\|^2$
where $\langle \cdot, \cdot \rangle$ is any inner product, and $\|\cdot\|$ is the corresponding norm. Some authors, such as [13], refer to functions satisfying this inequality as elliptic functions.

An equivalent condition is the following:[14]
$f(y) \geq f(x) + \nabla f(x)^T (y - x) + \frac{m}{2} \|y - x\|_2^2$

It is not necessary for a function to be differentiable in order to be strongly convex. A third definition[14] for a strongly convex function, with parameter $m$, is that, for all $x, y$ in the domain and $t \in [0, 1]$,
$f(t x + (1-t) y) \leq t f(x) + (1-t) f(y) - \frac{1}{2} m t (1-t) \|x - y\|_2^2$

Notice that this definition approaches the definition for strict convexity as $m \to 0$, and is identical to the definition of a convex function when $m = 0$. Despite this, functions exist that are strictly convex but are not strongly convex for any $m > 0$ (see example below).

If the function $f$ is twice continuously differentiable, then it is strongly convex with parameter $m$ if and only if $\nabla^2 f(x) \succeq m I$ for all $x$ in the domain, where $I$ is the identity and $\nabla^2 f$ is the Hessian matrix, and the inequality $\succeq$ means that $\nabla^2 f(x) - m I$ is positive semi-definite. This is equivalent to requiring that the minimum eigenvalue of $\nabla^2 f(x)$ be at least $m$ for all $x$. If the domain is just the real line, then $\nabla^2 f(x)$ is just the second derivative $f''(x)$, so the condition becomes $f''(x) \geq m$. If $m = 0$ then this means the Hessian is positive semidefinite (or if the domain is the real line, it means that $f''(x) \geq 0$), which implies the function is convex, and perhaps strictly convex, but not strongly convex.

Assuming still that the function is twice continuously differentiable, one can show that the lower bound on $\nabla^2 f(x)$ implies that it is strongly convex. Using Taylor's theorem there exists
$z \in \{t x + (1-t) y : t \in [0, 1]\}$
such that
$f(y) = f(x) + \nabla f(x)^T (y - x) + \frac{1}{2} (y - x)^T \nabla^2 f(z) (y - x).$
Then
$(y - x)^T \nabla^2 f(z) (y - x) \geq m (y - x)^T (y - x)$
by the assumption about the eigenvalues, and hence we recover the second strong convexity equation above.

A function $f$ is strongly convex with parameter $m$ if and only if the function
$x \mapsto f(x) - \frac{m}{2} \|x\|^2$
is convex.
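For a quadratic $f(x) = \tfrac{1}{2} x^T A x$ with symmetric $A$, the Hessian is $A$ everywhere, so by the criterion above the best strong-convexity parameter is exactly the smallest eigenvalue of $A$; the sketch below computes it and spot-checks the gradient inequality.

import numpy as np

# f(x) = 0.5 * x^T A x has Hessian A everywhere; the largest valid m is
# the smallest eigenvalue of A.
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
m = np.linalg.eigvalsh(A).min()
print("strong convexity parameter m =", m)  # (7 - sqrt(5)) / 2, about 2.382

# Spot-check (grad f(x) - grad f(y))^T (x - y) >= m * ||x - y||^2.
rng = np.random.default_rng(0)
for _ in range(1000):
    x, y = rng.normal(size=2), rng.normal(size=2)
    lhs = (A @ x - A @ y) @ (x - y)  # grad f(x) = A x
    assert lhs >= m * np.dot(x - y, x - y) - 1e-9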
A twice continuously differentiable function $f$ on a compact domain $X$ that satisfies $f''(x) > 0$ for all $x \in X$ is strongly convex. The proof of this statement follows from the extreme value theorem, which states that a continuous function on a compact set has a maximum and minimum.

Strongly convex functions are in general easier to work with than convex or strictly convex functions, since they are a smaller class. Like strictly convex functions, strongly convex functions have unique minima on compact sets. If $f$ is a strongly-convex function with parameter $m$, then:[15]: Prop.6.1.4

A uniformly convex function,[16][17] with modulus $\phi$, is a function $f$ that, for all $x, y$ in the domain and $t \in [0, 1]$, satisfies
$f(t x + (1-t) y) \leq t f(x) + (1-t) f(y) - t(1-t) \phi(\|x - y\|)$
where $\phi$ is a function that is non-negative and vanishes only at 0. This is a generalization of the concept of strongly convex function; by taking $\phi(\alpha) = \tfrac{m}{2} \alpha^2$ we recover the definition of strong convexity. It is worth noting that some authors require the modulus $\phi$ to be an increasing function,[17] but this condition is not required by all authors.[16]
https://en.wikipedia.org/wiki/Convex_function
The Privacy Act of 1974 (Pub. L. 93–579, 88 Stat. 1896, enacted December 31, 1974, 5 U.S.C. § 552a), a United States federal law, establishes a Code of Fair Information Practice that governs the collection, maintenance, use, and dissemination of personally identifiable information about individuals that is maintained in systems of records by federal agencies.[1] At its creation, it was meant to be an "American Bill of Rights on data."[2]

A system of records is a group of records under the control of an agency from which information is retrieved by the name of the individual or by some identifier assigned to the individual. The Privacy Act requires that agencies give the public notice of their systems of records by publication in the Federal Register. The Privacy Act prohibits the disclosure of information from a system of records absent the written consent of the subject individual, unless the disclosure is pursuant to one of twelve statutory exceptions. The Act also provides individuals with a means by which to seek access to and amendment of their records and sets forth various agency record-keeping requirements. Additionally, people granted the right to review what was documented with their name are also able to find out if the records have been disclosed, and are given the right to make corrections.[1]

An idea enshrining a right to privacy became relevant as the Social Security number became a de facto identifier for people across the federal government and computers were installed across federal agencies in the late 1950s. In the early 1960s, there was widespread interest in a "federal data center," with Congress commissioning various reports looking into the idea.[2] However, with McCarthyism, the 1965–1966 Congressional wiretapping hearings, and cultural milestones like George Orwell's book Nineteen Eighty-Four, the public became concerned about the idea of the government knowing everything about an individual. The idea of a "federal data bank" was debated in a series of Congressional hearings starting in 1966, one of them featuring author Vance Packard. He testified, "Big Brother, if he ever comes to the United States, may turn out to be...a relentless bureaucrat obsessed with efficiency."[2] By 1971, the Congressional hearings on privacy had solidified a policy demand for an "American Bill of Rights on data," notably with a 1973 report called Records, Computers, and Rights of Citizens.[2]

Passing a bill about the right to privacy became a priority in the light of Watergate and COINTELPRO, two scandals in which people and political parties considered "subversive" were subject to investigation and illegal surveillance by the government.[3] President Nixon publicly supported the personal right to privacy in 1974, in an attempt to win back public trust in the government after Watergate.[4] Senator Sam Ervin was the bill's principal sponsor, especially as the House and Senate versions were combined. The law went into effect on September 27, 1975.[3]

Although the Privacy Act was groundbreaking when it was passed, in subsequent years it has been criticized as lacking an enforcement mechanism. The United States is the only nation in the Organisation for Economic Co-operation and Development without a data protection agency to enforce privacy laws.[5]

The Privacy Act states in part:

There are specific exceptions to the Act that allow the use of personal records.
Examples of these exceptions are:[7]

The Privacy Act mandates that each United States Government agency have in place an administrative and physical security system to prevent the unauthorized release of personal records. To protect the privacy and liberty rights of individuals, federal agencies must state "the authority (whether granted by statute, or by Executive order of the President) which authorizes the solicitation of the information and whether disclosure of such information is mandatory or voluntary" when requesting information. (5 U.S.C. § 552e) This notice is common on almost all federal government forms which seek to gather information from individuals, many of which seek personal and confidential details.[8]

Subsection "U" requires that each agency have a Data Integrity Board. Each agency's Data Integrity Board is supposed to make an annual report to OMB, available to the public, that includes all complaints that the Act was violated, such as use of records for unauthorized reasons or the holding of First Amendment records, and report on "...(v) any violations of matching agreements that have been alleged or identified and any corrective action taken". Former Attorney General Dick Thornburgh appointed a Data Integrity Board, but since then the USDOJ has not published any Privacy Act reports.[9]

The Computer Matching and Privacy Protection Act of 1988, P.L. 100–503, amended the Privacy Act of 1974 by adding certain protections for the subjects of Privacy Act records whose records are used in automated matching programs. These protections have been mandated to ensure: The Computer Matching Act is codified as part of the Privacy Act.[11]

The Privacy Act also states: The Privacy Act applies to the records of every "individual," defined as "a citizen of the United States or an alien lawfully admitted for permanent residence",[12] but the Privacy Act only applies to records held by an "agency".[13] Therefore, records held by courts, executive components, or non-agency government entities are not subject to the provisions in the Privacy Act and there is no right to these records.[14]

On January 25, 2017, President Trump signed an executive order that eliminates Privacy Act protections for foreigners. Section 14 of Trump's "Enhancing Public Safety" executive order directs federal agencies to "ensure that their privacy policies exclude persons who are not United States citizens or lawful permanent residents from the protections of the Privacy Act regarding personally identifiable information" to the extent consistent with applicable law.[15]

Broad exemptions of the Act include "routine [agency] use" of data, which can be claimed under a general "compatible" purpose. However, this can result in "mission creep" of an agency's database extending beyond its original stated goals.[16] The Act also forbade agencies from collecting information about people's First Amendment activities.[17]

Following the controversial Passenger Name Record (PNR) agreement signed with the European Union (EU) in 2007, the Bush administration provided an exemption for the Department of Homeland Security and the Arrival and Departure Information System (ADIS) from the U.S.
Privacy Act.[18] ADIS is intended to authorize people to travel only after PNR and API (Advance Passenger Information) data has been checked and cleared through a US agency watchlist.[18] The Automated Targeting System is also to be exempted.[18] The Privacy Act does not protect non-US persons without lawful permanent residency in the US, which is problematic for the exchange of Passenger Name Record information between the US and the European Union.

The passing of the Privacy Act was a rushed bipartisan effort, with compromises made when combining the House and Senate versions of the bill.[4][19][17] Weaknesses in the Privacy Act included general exceptions for "routine use" and for intelligence and law enforcement agencies, as well as the lack of an enforcement mechanism and the requirement that violations be "intentional and willful."[2][19][20] Even as early as 1976, law reviews acknowledged that there were many limitations on the Privacy Act, namely because it was "practically unenforceable."[19] The 1977 report from the Privacy Protection Study Commission (created by the Act) also concluded that the Act did not deliver its intended benefits, because definitions and disclosure of data use were unclear and the public was not aware of the Act's provisions.[16]

The Act only covers record systems that "retrieve" information by name or individual identifier, which is easily circumvented. A database could contain identifying information (such as name or SSN) without being indexed by it, and would therefore be exempted from the Privacy Act.[16] In addition, the Act was undercut in federal courts using tort law theory. Federal courts established that there had to be "actual damages" for claims to be levied under the Act, not just "reputation loss" or "emotional distress."[2] (Refer to Doe v. Chao and Federal Aviation Administration v. Cooper.)

Under Trump's second administration, the Privacy Act has been cited in up to fourteen lawsuits pertaining to DOGE access to data that could contain sensitive personal information.[21][22] Congressional leader Gerry Connolly stated, "I am concerned that DOGE is moving personal information across agencies without the notification required under the Privacy Act or related laws, such that the American people are wholly unaware their data is being manipulated in this way."[22] Due to the recent lawsuits, Congressional leader Lori Trahan announced an effort to modernize and update the Act to address growing concerns about government surveillance, unvetted access, and misuse.[21]

This article uses material from the public domain source:
https://en.wikipedia.org/wiki/Privacy_Act_of_1974
In computer science and graph theory, Karger's algorithm is a randomized algorithm to compute a minimum cut of a connected graph. It was invented by David Karger and first published in 1993.[1]

The idea of the algorithm is based on the concept of contraction of an edge $(u, v)$ in an undirected graph $G = (V, E)$. Informally speaking, the contraction of an edge merges the nodes $u$ and $v$ into one, reducing the total number of nodes of the graph by one. All other edges connecting either $u$ or $v$ are "reattached" to the merged node, effectively producing a multigraph. Karger's basic algorithm iteratively contracts randomly chosen edges until only two nodes remain; those nodes represent a cut in the original graph. By iterating this basic algorithm a sufficient number of times, a minimum cut can be found with high probability.

A cut $(S, T)$ in an undirected graph $G = (V, E)$ is a partition of the vertices $V$ into two non-empty, disjoint sets $S \cup T = V$. The cutset of a cut consists of the edges $\{\, uv \in E : u \in S, v \in T \,\}$ between the two parts. The size (or weight) of a cut in an unweighted graph is the cardinality of the cutset, i.e., the number of edges between the two parts. There are $2^{|V|}$ ways of choosing for each vertex whether it belongs to $S$ or to $T$, but two of these choices make $S$ or $T$ empty and do not give rise to cuts. Among the remaining choices, swapping the roles of $S$ and $T$ does not change the cut, so each cut is counted twice; therefore, there are $2^{|V|-1} - 1$ distinct cuts. The minimum cut problem is to find a cut of smallest size among these cuts. For weighted graphs with positive edge weights $w : E \to \mathbf{R}^{+}$, the weight of the cut is the sum of the weights of edges between vertices in each part, which agrees with the unweighted definition for $w = 1$.

A cut is sometimes called a "global cut" to distinguish it from an "$s$-$t$ cut" for a given pair of vertices, which has the additional requirement that $s \in S$ and $t \in T$. Every global cut is an $s$-$t$ cut for some $s, t \in V$. Thus, the minimum cut problem can be solved in polynomial time by iterating over all choices of $s, t \in V$ and solving the resulting minimum $s$-$t$ cut problem using the max-flow min-cut theorem and a polynomial time algorithm for maximum flow, such as the push-relabel algorithm, though this approach is not optimal. Better deterministic algorithms for the global minimum cut problem include the Stoer–Wagner algorithm, which has a running time of $O(mn + n^2 \log n)$.[2]

The fundamental operation of Karger's algorithm is a form of edge contraction. The result of contracting the edge $e = \{u, v\}$ is a new node $uv$. Every edge $\{w, u\}$ or $\{w, v\}$ for $w \notin \{u, v\}$ to the endpoints of the contracted edge is replaced by an edge $\{w, uv\}$ to the new node. Finally, the contracted nodes $u$ and $v$ with all their incident edges are removed.
In particular, the resulting graph contains no self-loops. The result of contracting edge $e$ is denoted $G/e$.

The contraction algorithm repeatedly contracts random edges in the graph, until only two nodes remain, at which point there is only a single cut. The key idea of the algorithm is that it is far more likely for non-min-cut edges than min-cut edges to be randomly selected and lost to contraction, since min-cut edges are usually vastly outnumbered by non-min-cut edges. Subsequently, it is plausible that the min-cut edges will survive all the edge contractions, and the algorithm will correctly identify the minimum cut.

When the graph is represented using adjacency lists or an adjacency matrix, a single edge contraction operation can be implemented with a linear number of updates to the data structure, for a total running time of $O(|V|^2)$. Alternatively, the procedure can be viewed as an execution of Kruskal's algorithm for constructing the minimum spanning tree in a graph where the edges have weights $w(e_i) = \pi(i)$ according to a random permutation $\pi$. Removing the heaviest edge of this tree results in two components that describe a cut. In this way, the contraction procedure can be implemented like Kruskal's algorithm in time $O(|E| \log |E|)$. The best known implementations use $O(|E|)$ time and space, or $O(|E| \log |E|)$ time and $O(|V|)$ space, respectively.[1]

In a graph $G = (V, E)$ with $n = |V|$ vertices, the contraction algorithm returns a minimum cut with polynomially small probability $\binom{n}{2}^{-1}$. Recall that every graph has $2^{n-1} - 1$ cuts (by the discussion in the previous section), among which at most $\binom{n}{2}$ can be minimum cuts. Therefore, the success probability for this algorithm is much better than the probability of picking a cut at random, which is at most $\binom{n}{2} / (2^{n-1} - 1)$. For instance, the cycle graph on $n$ vertices has exactly $\binom{n}{2}$ minimum cuts, given by every choice of 2 edges. The contraction procedure finds each of these with equal probability.

To further establish the lower bound on the success probability, let $C$ denote the edges of a specific minimum cut of size $k$. The contraction algorithm returns $C$ if none of the random edges deleted by the algorithm belongs to the cutset $C$. In particular, the first edge contraction avoids $C$, which happens with probability $1 - k/|E|$. The minimum degree of $G$ is at least $k$ (otherwise a minimum degree vertex would induce a smaller cut where one of the two partitions contains only the minimum degree vertex), so $|E| \geq nk/2$.
Thus, the probability that the contraction algorithm picks an edge from $C$ is
$\frac{k}{|E|} \leq \frac{k}{nk/2} = \frac{2}{n}.$

The probability $p_n$ that the contraction algorithm on an $n$-vertex graph avoids $C$ satisfies the recurrence $p_n \geq \left(1 - \frac{2}{n}\right) p_{n-1}$, with $p_2 = 1$, which can be expanded as
$p_n \geq \prod_{i=0}^{n-3} \left(1 - \frac{2}{n-i}\right) = \frac{n-2}{n} \cdot \frac{n-3}{n-1} \cdots \frac{2}{4} \cdot \frac{1}{3} = \binom{n}{2}^{-1}.$

By repeating the contraction algorithm $T = \binom{n}{2} \ln n$ times with independent random choices and returning the smallest cut, the probability of not finding a minimum cut is
$\left(1 - \binom{n}{2}^{-1}\right)^{T} \leq e^{-\ln n} = \frac{1}{n}.$

The total running time for $T$ repetitions for a graph with $n$ vertices and $m$ edges is $O(Tm) = O(n^2 m \log n)$.

An extension of Karger's algorithm due to David Karger and Clifford Stein achieves an order of magnitude improvement.[3] The basic idea is to perform the contraction procedure until the graph reaches $t$ vertices. The probability $p_{n,t}$ that this contraction procedure avoids a specific cut $C$ in an $n$-vertex graph is
$p_{n,t} \geq \prod_{i=0}^{n-t-1} \left(1 - \frac{2}{n-i}\right) = \binom{t}{2} \Bigg/ \binom{n}{2}.$

This expression is approximately $t^2/n^2$ and becomes less than $\frac{1}{2}$ around $t = n/\sqrt{2}$. In particular, the probability that an edge from $C$ is contracted grows towards the end. This motivates the idea of switching to a slower algorithm after a certain number of contraction steps.

The contraction parameter $t$ is chosen so that each call to contract has probability at least 1/2 of success (that is, of avoiding the contraction of an edge from a specific cutset $C$). This allows the successful part of the recursion tree to be modeled as a random binary tree generated by a critical Galton–Watson process, and to be analyzed accordingly.[3]

The probability $P(n)$ that this random tree of successful calls contains a long-enough path to reach the base of the recursion and find $C$ is given by the recurrence relation
$P(n) \geq 1 - \left(1 - \frac{1}{2} P\!\left(\left\lceil 1 + \frac{n}{\sqrt{2}} \right\rceil\right)\right)^{2}$
with solution $P(n) = \Omega\left(\frac{1}{\log n}\right)$. The running time of fastmincut satisfies
$T(n) = 2\, T\!\left(\left\lceil 1 + \frac{n}{\sqrt{2}} \right\rceil\right) + O(n^2)$
with solution $T(n) = O(n^2 \log n)$. To achieve error probability $O(1/n)$, the algorithm can be repeated $O(\log n / P(n))$ times, for an overall running time of $T(n) \cdot \frac{\log n}{P(n)} = O(n^2 \log^3 n)$. This is an order of magnitude improvement over Karger's original algorithm.[3]

To determine a min-cut, one has to touch every edge in the graph at least once, which is $\Theta(n^2)$ time in a dense graph. The Karger–Stein min-cut algorithm takes a running time of $O(n^2 \ln^{O(1)} n)$, which is very close to that.
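A compact sketch of both procedures on an edge-list multigraph (vertices labeled 0..n−1, graph assumed connected): contract is the shared primitive, karger repeats it the prescribed number of times, and fastmincut recurses as described above. This is illustrative only and does not attempt the asymptotically efficient implementations cited in the text.

import math
import random

def contract(n, edges, t):
    """Contract random edges until t super-nodes remain; return the
    contracted multigraph as (node count, relabeled edge list)."""
    parent = list(range(n))  # union-find over super-nodes

    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]  # path halving
            v = parent[v]
        return v

    remaining = n
    while remaining > t:
        u, v = random.choice(edges)        # rejection-sample a non-loop edge
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            remaining -= 1

    label, out = {}, []
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru != rv:                       # drop self-loops, keep parallel edges
            out.append((label.setdefault(ru, len(label)),
                        label.setdefault(rv, len(label))))
    return len(label), out

def karger(n, edges):
    """Basic algorithm: contract to 2 nodes, repeated C(n,2) ln n times."""
    trials = int(math.comb(n, 2) * math.log(n)) + 1
    return min(len(contract(n, edges, 2)[1]) for _ in range(trials))

def fastmincut(n, edges):
    """Karger-Stein: contract to ceil(1 + n/sqrt(2)) nodes, recurse twice."""
    if n <= 6:                             # small base case: repeat plain contraction
        return min(len(contract(n, edges, 2)[1]) for _ in range(25))
    t = math.ceil(1 + n / math.sqrt(2))
    return min(fastmincut(*contract(n, edges, t)) for _ in range(2))

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]  # 4-cycle plus a chord
print(karger(4, edges), fastmincut(4, edges))     # both 2 with high probability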
https://en.wikipedia.org/wiki/Karger%27s_algorithm
In mathematics, especially in algebraic geometry and the theory of complex manifolds, coherent sheaves are a class of sheaves closely linked to the geometric properties of the underlying space. The definition of coherent sheaves is made with reference to a sheaf of rings that codifies this geometric information. Coherent sheaves can be seen as a generalization of vector bundles. Unlike vector bundles, they form an abelian category, and so they are closed under operations such as taking kernels, images, and cokernels. The quasi-coherent sheaves are a generalization of coherent sheaves and include the locally free sheaves of infinite rank. Coherent sheaf cohomology is a powerful technique, in particular for studying the sections of a given coherent sheaf.

A quasi-coherent sheaf on a ringed space $(X, \mathcal{O}_X)$ is a sheaf $\mathcal{F}$ of $\mathcal{O}_X$-modules that has a local presentation, that is, every point in $X$ has an open neighborhood $U$ in which there is an exact sequence
$\mathcal{O}_X^{\oplus I}|_U \to \mathcal{O}_X^{\oplus J}|_U \to \mathcal{F}|_U \to 0$
for some (possibly infinite) sets $I$ and $J$.

A coherent sheaf on a ringed space $(X, \mathcal{O}_X)$ is a sheaf $\mathcal{F}$ of $\mathcal{O}_X$-modules satisfying the following two properties: (1) $\mathcal{F}$ is of finite type over $\mathcal{O}_X$, that is, every point in $X$ has an open neighborhood $U$ such that there is a surjective morphism $\mathcal{O}_X^n|_U \to \mathcal{F}|_U$ for some natural number $n$; and (2) for each open set $U \subseteq X$, each natural number $n$, and each morphism $\varphi : \mathcal{O}_X^n|_U \to \mathcal{F}|_U$ of $\mathcal{O}_X$-modules, the kernel of $\varphi$ is of finite type. Morphisms between (quasi-)coherent sheaves are the same as morphisms of sheaves of $\mathcal{O}_X$-modules.

When $X$ is a scheme, the general definitions above are equivalent to more explicit ones. A sheaf $\mathcal{F}$ of $\mathcal{O}_X$-modules is quasi-coherent if and only if over each open affine subscheme $U = \operatorname{Spec} A$ the restriction $\mathcal{F}|_U$ is isomorphic to the sheaf $\tilde{M}$ associated to the module $M = \Gamma(U, \mathcal{F})$ over $A$. When $X$ is a locally Noetherian scheme, $\mathcal{F}$ is coherent if and only if it is quasi-coherent and the modules $M$ above can be taken to be finitely generated.

On an affine scheme $U = \operatorname{Spec} A$, there is an equivalence of categories from $A$-modules to quasi-coherent sheaves, taking a module $M$ to the associated sheaf $\tilde{M}$. The inverse equivalence takes a quasi-coherent sheaf $\mathcal{F}$ on $U$ to the $A$-module $\mathcal{F}(U)$ of global sections of $\mathcal{F}$.

Here are several further characterizations of quasi-coherent sheaves on a scheme.[1]

Theorem — Let $X$ be a scheme and $\mathcal{F}$ an $\mathcal{O}_X$-module on it. Then the following are equivalent.

On an arbitrary ringed space, quasi-coherent sheaves do not necessarily form an abelian category. On the other hand, the quasi-coherent sheaves on any scheme form an abelian category, and they are extremely useful in that context.[2]

On any ringed space $X$, the coherent sheaves form an abelian category, a full subcategory of the category of $\mathcal{O}_X$-modules.[3] (Analogously, the category of coherent modules over any ring $A$ is a full abelian subcategory of the category of all $A$-modules.) So the kernel, image, and cokernel of any map of coherent sheaves are coherent.
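To make the affine equivalence concrete, here is a standard worked example (not drawn from the text above): on $\operatorname{Spec} \mathbb{Z}$, a finitely generated module gives a coherent sheaf whose stalks are its localizations.

$X = \operatorname{Spec} \mathbb{Z}, \qquad M = \mathbb{Z}/6, \qquad \mathcal{F} = \tilde{M}.$

The stalk at a prime $(p)$ is the localization $M_{(p)}$:

$\mathcal{F}_{(2)} \cong \mathbb{Z}/2, \qquad \mathcal{F}_{(3)} \cong \mathbb{Z}/3, \qquad \mathcal{F}_{(p)} = 0 \text{ for } p \neq 2, 3,$

since inverting everything outside $(p)$ kills the part of $\mathbb{Z}/6$ of order prime to $p$. So $\tilde{M}$ is a coherent sheaf supported on the closed set $\{(2), (3)\}$.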
The direct sum of two coherent sheaves is coherent; more generally, an $\mathcal{O}_X$-module that is an extension of two coherent sheaves is coherent.[4] A submodule of a coherent sheaf is coherent if it is of finite type. A coherent sheaf is always an $\mathcal{O}_X$-module of finite presentation, meaning that each point $x$ in $X$ has an open neighborhood $U$ such that the restriction $\mathcal{F}|_U$ of $\mathcal{F}$ to $U$ is isomorphic to the cokernel of a morphism $\mathcal{O}_X^n|_U \to \mathcal{O}_X^m|_U$ for some natural numbers $n$ and $m$. If $\mathcal{O}_X$ is coherent, then, conversely, every sheaf of finite presentation over $\mathcal{O}_X$ is coherent.

The sheaf of rings $\mathcal{O}_X$ is called coherent if it is coherent considered as a sheaf of modules over itself. In particular, the Oka coherence theorem states that the sheaf of holomorphic functions on a complex analytic space $X$ is a coherent sheaf of rings. The main part of the proof is the case $X = \mathbf{C}^n$. Likewise, on a locally Noetherian scheme $X$, the structure sheaf $\mathcal{O}_X$ is a coherent sheaf of rings.[5]

Let $f : X \to Y$ be a morphism of ringed spaces (for example, a morphism of schemes). If $\mathcal{F}$ is a quasi-coherent sheaf on $Y$, then the inverse image $\mathcal{O}_X$-module (or pullback) $f^*\mathcal{F}$ is quasi-coherent on $X$.[10] For a morphism of schemes $f : X \to Y$ and a coherent sheaf $\mathcal{F}$ on $Y$, the pullback $f^*\mathcal{F}$ is not coherent in full generality (for example, $f^*\mathcal{O}_Y = \mathcal{O}_X$, which might not be coherent), but pullbacks of coherent sheaves are coherent if $X$ is locally Noetherian. An important special case is the pullback of a vector bundle, which is a vector bundle.

If $f : X \to Y$ is a quasi-compact quasi-separated morphism of schemes and $\mathcal{F}$ is a quasi-coherent sheaf on $X$, then the direct image sheaf (or pushforward) $f_*\mathcal{F}$ is quasi-coherent on $Y$.[2]

The direct image of a coherent sheaf is often not coherent. For example, for a field $k$, let $X$ be the affine line over $k$, and consider the morphism $f : X \to \operatorname{Spec}(k)$; then the direct image $f_*\mathcal{O}_X$ is the sheaf on $\operatorname{Spec}(k)$ associated to the polynomial ring $k[x]$, which is not coherent because $k[x]$ has infinite dimension as a $k$-vector space. On the other hand, the direct image of a coherent sheaf under a proper morphism is coherent, by results of Grauert and Grothendieck.

An important feature of coherent sheaves $\mathcal{F}$ is that the properties of $\mathcal{F}$ at a point $x$ control the behavior of $\mathcal{F}$ in a neighborhood of $x$, more than would be true for an arbitrary sheaf.
For example, Nakayama's lemma says (in geometric language) that if $\mathcal{F}$ is a coherent sheaf on a scheme $X$, then the fiber $\mathcal{F}_x \otimes_{\mathcal{O}_{X,x}} k(x)$ of $\mathcal{F}$ at a point $x$ (a vector space over the residue field $k(x)$) is zero if and only if the sheaf $\mathcal{F}$ is zero on some open neighborhood of $x$. A related fact is that the dimension of the fibers of a coherent sheaf is upper-semicontinuous.[11] Thus a coherent sheaf has constant rank on an open set, while the rank can jump up on a lower-dimensional closed subset.

In the same spirit: a coherent sheaf $\mathcal{F}$ on a scheme $X$ is a vector bundle if and only if its stalk $\mathcal{F}_x$ is a free module over the local ring $\mathcal{O}_{X,x}$ for every point $x$ in $X$.[12] On a general scheme, one cannot determine whether a coherent sheaf is a vector bundle just from its fibers (as opposed to its stalks). On a reduced locally Noetherian scheme, however, a coherent sheaf is a vector bundle if and only if its rank is locally constant.[13]

For a morphism of schemes $X \to Y$, let $\Delta : X \to X \times_Y X$ be the diagonal morphism, which is a closed immersion if $X$ is separated over $Y$. Let $\mathcal{I}$ be the ideal sheaf of $X$ in $X \times_Y X$. Then the sheaf of differentials $\Omega^1_{X/Y}$ can be defined as the pullback $\Delta^* \mathcal{I}$ of $\mathcal{I}$ to $X$. Sections of this sheaf are called 1-forms on $X$ over $Y$, and they can be written locally on $X$ as finite sums $\sum f_j \, dg_j$ for regular functions $f_j$ and $g_j$. If $X$ is locally of finite type over a field $k$, then $\Omega^1_{X/k}$ is a coherent sheaf on $X$.

If $X$ is smooth over $k$, then $\Omega^1$ (meaning $\Omega^1_{X/k}$) is a vector bundle over $X$, called the cotangent bundle of $X$. Then the tangent bundle $TX$ is defined to be the dual bundle $(\Omega^1)^*$. For $X$ smooth over $k$ of dimension $n$ everywhere, the tangent bundle has rank $n$.

If $Y$ is a smooth closed subscheme of a smooth scheme $X$ over $k$, then there is a short exact sequence of vector bundles on $Y$:
$0 \to TY \to TX|_Y \to N_{Y/X} \to 0,$
which can be used as a definition of the normal bundle $N_{Y/X}$ to $Y$ in $X$.

For a smooth scheme $X$ over a field $k$ and a natural number $i$, the vector bundle $\Omega^i$ of $i$-forms on $X$ is defined as the $i$-th exterior power of the cotangent bundle, $\Omega^i = \Lambda^i \Omega^1$. For a smooth variety $X$ of dimension $n$ over $k$, the canonical bundle $K_X$ means the line bundle $\Omega^n$.
Thus sections of the canonical bundle are algebro-geometric analogs of volume forms on $X$. For example, a section of the canonical bundle of affine space $\mathbb{A}^n$ over $k$ can be written as
$f(x_1, \ldots, x_n) \, dx_1 \wedge \cdots \wedge dx_n,$
where $f$ is a polynomial with coefficients in $k$.

Let $R$ be a commutative ring and $n$ a natural number. For each integer $j$, there is an important example of a line bundle on projective space $\mathbb{P}^n$ over $R$, called $\mathcal{O}(j)$. To define this, consider the morphism of $R$-schemes
$\pi : \mathbb{A}^{n+1} - 0 \to \mathbb{P}^n$
given in coordinates by $(x_0, \ldots, x_n) \mapsto [x_0, \ldots, x_n]$. (That is, thinking of projective space as the space of 1-dimensional linear subspaces of affine space, send a nonzero point in affine space to the line that it spans.) Then a section of $\mathcal{O}(j)$ over an open subset $U$ of $\mathbb{P}^n$ is defined to be a regular function $f$ on $\pi^{-1}(U)$ that is homogeneous of degree $j$, meaning that
$f(\lambda x) = \lambda^j f(x)$
as regular functions on $(\mathbb{A}^1 - 0) \times \pi^{-1}(U)$. For all integers $i$ and $j$, there is an isomorphism $\mathcal{O}(i) \otimes \mathcal{O}(j) \cong \mathcal{O}(i+j)$ of line bundles on $\mathbb{P}^n$. In particular, every homogeneous polynomial in $x_0, \ldots, x_n$ of degree $j$ over $R$ can be viewed as a global section of $\mathcal{O}(j)$ over $\mathbb{P}^n$.

Note that every closed subscheme of projective space can be defined as the zero set of some collection of homogeneous polynomials, hence as the zero set of some sections of the line bundles $\mathcal{O}(j)$.[14] This contrasts with the simpler case of affine space, where a closed subscheme is simply the zero set of some collection of regular functions. The regular functions on projective space $\mathbb{P}^n$ over $R$ are just the "constants" (the ring $R$), and so it is essential to work with the line bundles $\mathcal{O}(j)$.

Serre gave an algebraic description of all coherent sheaves on projective space, more subtle than what happens for affine space. Namely, let $R$ be a Noetherian ring (for example, a field), and consider the polynomial ring $S = R[x_0, \ldots, x_n]$ as a graded ring with each $x_i$ having degree 1. Then every finitely generated graded $S$-module $M$ has an associated coherent sheaf $\tilde{M}$ on $\mathbb{P}^n$ over $R$. Every coherent sheaf on $\mathbb{P}^n$ arises in this way from a finitely generated graded $S$-module $M$. (For example, the line bundle $\mathcal{O}(j)$ is the sheaf associated to the $S$-module $S$ with its grading lowered by $j$.)
But the S-module M that yields a given coherent sheaf on P^n is not unique; it is only unique up to changing M by graded modules that are nonzero in only finitely many degrees. More precisely, the abelian category of coherent sheaves on P^n is the quotient of the category of finitely generated graded S-modules by the Serre subcategory of modules that are nonzero in only finitely many degrees.[15]

The tangent bundle of projective space P^n over a field k can be described in terms of the line bundle O(1). Namely, there is a short exact sequence, the Euler sequence:

0 → O → O(1)^{⊕(n+1)} → TP^n → 0.

It follows that the canonical bundle K_{P^n} (the dual of the determinant line bundle of the tangent bundle) is isomorphic to O(−n−1). This is a fundamental calculation for algebraic geometry. For example, the fact that the canonical bundle is a negative multiple of the ample line bundle O(1) means that projective space is a Fano variety. Over the complex numbers, this means that projective space has a Kähler metric with positive Ricci curvature.

Consider a smooth degree-d hypersurface X ⊆ P^n defined by the homogeneous polynomial f of degree d. Then there is an exact sequence

0 → O_X(−d) → Ω^1_{P^n}|_X → Ω^1_X → 0,

where the second map is the pullback of differential forms, and the first map sends a local section g to g df. Note that this sequence tells us that O(−d) is the conormal sheaf of X in P^n. Dualizing it yields the exact sequence

0 → TX → TP^n|_X → O(d) → 0;

hence O(d) is the normal bundle of X in P^n. If we use the fact that, given an exact sequence

0 → E_1 → E_2 → E_3 → 0

of vector bundles with ranks r_1, r_2, r_3, there is an isomorphism of line bundles Λ^{r_2} E_2 ≅ Λ^{r_1} E_1 ⊗ Λ^{r_3} E_3, then applying this to the sequence above shows that K_X ≅ O_X(d − n − 1); the computation is written out after the next paragraph.

One useful technique for constructing rank 2 vector bundles is the Serre construction,[16][17]pg 3 which establishes a correspondence between rank 2 vector bundles E on a smooth projective variety X and codimension-2 subvarieties Y using a certain Ext^1-group calculated on X. This is given by a cohomological condition on the line bundle ∧^2 E (see below). The correspondence in one direction is given as follows: to a section s ∈ Γ(X, E) we can associate the vanishing locus V(s) ⊆ X. If V(s) is a codimension-2 subvariety, then it is the subvariety corresponding to E under this correspondence. In the other direction,[18] for a codimension-2 subvariety Y ⊆ X and a line bundle L → X satisfying the cohomological condition mentioned above, there is a canonical isomorphism

Hom((ω_X ⊗ L)|_Y, ω_Y) ≅ Ext^1(I_Y ⊗ L, O_X),

which is functorial with respect to inclusion of codimension-2 subvarieties. Moreover, any isomorphism given on the left corresponds to a locally free sheaf in the middle of the extension on the right.
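Returning to the hypersurface computation above: written out, the determinant argument is the following short derivation, using det TP^n ≅ O(n+1) from the Euler sequence (a routine check under the stated conventions, not from the source text):

\[
\det\big(T\mathbb{P}^n|_X\big) \cong \det(TX)\otimes\mathcal{O}_X(d),
\qquad
\det T\mathbb{P}^n \cong \mathcal{O}(n+1),
\]
so
\[
\mathcal{O}_X(n+1) \cong K_X^{-1}\otimes\mathcal{O}_X(d),
\qquad\text{hence}\qquad
K_X \cong \mathcal{O}_X(d-n-1).
\]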
That is, for s ∈ Hom((ω_X ⊗ L)|_Y, ω_Y) that is an isomorphism, there is a corresponding locally free sheaf E of rank 2 that fits into a short exact sequence

0 → O_X → E → I_Y ⊗ L → 0.

This vector bundle can then be further studied using cohomological invariants to determine whether it is stable. This forms the basis for studying moduli of stable vector bundles in many specific cases, such as on principally polarized abelian varieties[17] and K3 surfaces.[19]

A vector bundle E on a smooth variety X over a field has Chern classes in the Chow ring of X, c_i(E) in CH^i(X) for i ≥ 0.[20] These satisfy the same formal properties as Chern classes in topology. For example, for any short exact sequence

0 → A → B → C → 0

of vector bundles on X, the Chern classes of B are given by

c_i(B) = ∑_{j=0}^{i} c_j(A) c_{i−j}(C).

It follows that the Chern classes of a vector bundle E depend only on the class of E in the Grothendieck group K_0(X). By definition, for a scheme X, K_0(X) is the quotient of the free abelian group on the set of isomorphism classes of vector bundles on X by the relation that [B] = [A] + [C] for any short exact sequence as above. Although K_0(X) is hard to compute in general, algebraic K-theory provides many tools for studying it, including a sequence of related groups K_i(X) for integers i > 0. A variant is the group G_0(X) (or K_0′(X)), the Grothendieck group of coherent sheaves on X. (In topological terms, G-theory has the formal properties of a Borel–Moore homology theory for schemes, while K-theory is the corresponding cohomology theory.) The natural homomorphism K_0(X) → G_0(X) is an isomorphism if X is a regular separated Noetherian scheme, using that every coherent sheaf has a finite resolution by vector bundles in that case.[21] For example, that gives a definition of the Chern classes of a coherent sheaf on a smooth variety over a field.

More generally, a Noetherian scheme X is said to have the resolution property if every coherent sheaf on X has a surjection from some vector bundle on X. For example, every quasi-projective scheme over a Noetherian ring has the resolution property. Since the resolution property implies that a coherent sheaf E on such a scheme is quasi-isomorphic in the derived category to a complex of vector bundles E_k → ⋯ → E_1 → E_0, we can compute the total Chern class of E with

c(E) = c(E_0) c(E_1)^{−1} c(E_2) ⋯ c(E_k)^{(−1)^k}.

For example, this formula is useful for finding the Chern classes of the sheaf representing a subscheme of X. If we take the projective scheme Z associated to the ideal (xy, xz) ⊆ C[x, y, z, w], then there is the resolution

0 → O(−3) → O(−2) ⊕ O(−2) → O → O_Z → 0

over CP^3, where the first map is given by the syzygy (z, −y) and the second by the pair (xy, xz).
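As a check of the formula just used for Z ⊆ P^3, the total Chern class can be expanded in CH^*(P^3) = Z[h]/(h^4), with h = c_1(O(1)) (a short computation under these conventions, not from the source text):

\[
c(\mathcal{O}_Z)
= \frac{c(\mathcal{O})\,c(\mathcal{O}(-3))}{c(\mathcal{O}(-2))^{2}}
= \frac{1-3h}{(1-2h)^{2}}
= (1-3h)\bigl(1+4h+12h^{2}+32h^{3}\bigr)
= 1 + h - 4h^{3}.
\]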
When vector bundles and locally free sheaves of finite constant rank are used interchangeably, care must be given to distinguish between bundle homomorphisms and sheaf homomorphisms. Specifically, given vector bundles p: E → X and q: F → X, by definition a bundle homomorphism φ: E → F is a scheme morphism over X (i.e., p = q∘φ) such that, for each geometric point x in X, φ_x: p^{−1}(x) → q^{−1}(x) is a linear map of rank independent of x. Thus, it induces the sheaf homomorphism φ~: E → F of constant rank between the corresponding locally free O_X-modules (sheaves of dual sections). But there may be an O_X-module homomorphism that does not arise this way, namely one that does not have constant rank. In particular, a subbundle E ⊆ F is a subsheaf (i.e., E is a subsheaf of F). But the converse can fail; for example, for an effective Cartier divisor D on X, O_X(−D) ⊆ O_X is a subsheaf but typically not a subbundle (since any line bundle has only two subbundles).

The quasi-coherent sheaves on any fixed scheme form an abelian category. Gabber showed that, in fact, the quasi-coherent sheaves on any scheme form a particularly well-behaved abelian category, a Grothendieck category.[22] A quasi-compact quasi-separated scheme X (such as an algebraic variety over a field) is determined up to isomorphism by the abelian category of quasi-coherent sheaves on X, by Rosenberg, generalizing a result of Gabriel.[23]

The fundamental technical tool in algebraic geometry is the cohomology theory of coherent sheaves. Although it was introduced only in the 1950s, many earlier techniques of algebraic geometry are clarified by the language of sheaf cohomology applied to coherent sheaves. Broadly speaking, coherent sheaf cohomology can be viewed as a tool for producing functions with specified properties; sections of line bundles or of more general sheaves can be viewed as generalized functions. In complex analytic geometry, coherent sheaf cohomology also plays a foundational role. Among the core results of coherent sheaf cohomology are results on finite-dimensionality of cohomology, results on the vanishing of cohomology in various cases, duality theorems such as Serre duality, relations between topology and algebraic geometry such as Hodge theory, and formulas for Euler characteristics of coherent sheaves such as the Riemann–Roch theorem.
https://en.wikipedia.org/wiki/Coherent_sheaf
A smart mob is a group whose coordination and communication abilities have been empowered by digital communication technologies.[1] Smart mobs are particularly known for their ability to mobilize quickly.[1]

The concept was introduced by Howard Rheingold in his 2002 book Smart Mobs: The Next Social Revolution.[2] Rheingold defined the smart mob as follows: "Smart mobs consist of people who are able to act in concert even if they don't know each other... because they carry devices that possess both communication and computing capabilities".[3] In December of that year, the "smart mob" concept was highlighted in the New York Times "Year in Ideas".[4]

The technologies that empower smart mobs include the Internet, computer-mediated communication such as Internet Relay Chat, and wireless devices like mobile phones and personal digital assistants. Methodologies like peer-to-peer networks and ubiquitous computing are also changing the ways in which people organize and share information.[citation needed]

Flash mobs are a specific form of smart mob, originally describing a group of people who assemble suddenly in a public place, do something unusual and pointless for a brief period of time, then quickly disperse. The difference between flash and smart mobs lies primarily in their duration: flash mobs disappear quickly, but smart mobs can have a more enduring presence.[2] The term flash mob is claimed to have been inspired by "smart mob".[5]

Smart mobs have begun to have an impact on current events, as mobile phones and text messages have empowered everyone from revolutionaries in Malaysia to individuals protesting the second Iraq War. Individuals who have divergent worldviews and methods have been able to coordinate short-term.[citation needed] A 2009 entry in the Encyclopedia of Computer Science and Technology noted that the term may be "fading from public use".[2]

A forerunner to the idea can be found in the work of the anarchist thinker Kropotkin, for whom "fishermen, hunters, travelling merchants, builders, or settled craftsmen came together for a common pursuit."[6]

According to CNN, the first smart mobs were teenage "thumb tribes" in Tokyo and Helsinki who used text messaging on cell phones to organize impromptu raves or to stalk celebrities. For instance, in Tokyo, crowds of teenage fans would assemble seemingly spontaneously at subway stops where a rock musician was rumored to be headed.[7]

However, an even earlier example is the Dîner en blanc phenomenon, which has taken place annually in Paris, France, since 1988, for one night around the end of June. The invited guests wear only white clothes and gather at a chosen spot, knowledge of which they have only a short time beforehand. They bring along food, drink, chairs and a table, and the whole group then gathers to have a meal, after which they disperse. The event has been held each year in different places in the centre of Paris. It is not a normal cultural event because it is not advertised and only those who have received an invite attend; information on the chosen location is transferred by text message or, more recently, Twitter. The number of people attending had grown, by 2011, to over 10,000.[8] Dîner en blanc would be considered a smart mob rather than a flash mob, because the event lasts for several hours.[citation needed]

The Professional Contractors Group organised the first smart mob in the UK in 2000, when 700 contractors turned up at the House of Commons to lobby their MP following an email sent out a few days before.[9]

In the days after the U.S.
presidential election of 2000, online activist Zack Exley anonymously created a website that allowed people to suggest locations for gatherings to protest for a full recount of the votes in Florida. On the first Saturday after the election, more than 100 significant protests took place, many with thousands of participants, without any traditional organizing effort. Exley wrote in December 2000 that the self-organized protests "demonstrated that a fundamental change is taking place in our national political life. It's not the Internet per se, but the emerging potential for any individual to communicate—for free and anonymously if necessary—with any other individual."[10]

In the Philippines in 2001, a group of protesters organized via text messaging gathered at the EDSA Shrine, the site of the 1986 revolution that overthrew Ferdinand Marcos, to protest the corruption of President Joseph Estrada. The protest grew quickly, and Estrada was soon removed from office.[11]

The Critical Mass bicycling events, dating back to 1992, are also sometimes compared to smart mobs, due to their self-organizing manner of assembly.[12][13]

Essentially, the smart mob is a practical implementation of collective intelligence. According to Rheingold, examples of smart mobs are the street protests organized by the anti-globalization movement. The Free State Project has been described in Foreign Policy as an example of potential "smartmob rule".[14] Smart mobs have also appeared in fiction.

The comic book Global Frequency, written by Warren Ellis, describes a covert, non-governmental intelligence organization built around a smart mob of people that are called on to provide individual expertise in solving extraordinary crises.[citation needed] David Brin's speculative science fiction novel, Existence (ISBN 978-0-765-30361-5), similarly posits the use of on-the-fly smart mobs by credible journalists as sources of information and expertise.
https://en.wikipedia.org/wiki/Smart_mob
The RSA (Rivest–Shamir–Adleman) cryptosystem is a public-key cryptosystem, one of the oldest widely used for secure data transmission. The initialism "RSA" comes from the surnames of Ron Rivest, Adi Shamir and Leonard Adleman, who publicly described the algorithm in 1977. An equivalent system was developed secretly in 1973 at Government Communications Headquarters (GCHQ), the British signals intelligence agency, by the English mathematician Clifford Cocks. That system was declassified in 1997.[2]

In a public-key cryptosystem, the encryption key is public and distinct from the decryption key, which is kept secret (private). An RSA user creates and publishes a public key based on two large prime numbers, along with an auxiliary value. The prime numbers are kept secret. Messages can be encrypted by anyone, via the public key, but can only be decrypted by someone who knows the private key.[1]

The security of RSA relies on the practical difficulty of factoring the product of two large prime numbers, the "factoring problem". Breaking RSA encryption is known as the RSA problem. Whether it is as difficult as the factoring problem is an open question.[3] There are no published methods to defeat the system if a large enough key is used. RSA is a relatively slow algorithm. Because of this, it is not commonly used to directly encrypt user data. More often, RSA is used to transmit shared keys for symmetric-key cryptography, which are then used for bulk encryption and decryption.

The idea of an asymmetric public–private key cryptosystem is attributed to Whitfield Diffie and Martin Hellman, who published this concept in 1976. They also introduced digital signatures and attempted to apply number theory. Their formulation used a shared secret key created from exponentiation of some number, modulo a prime number. However, they left open the problem of realizing a one-way function, possibly because the difficulty of factoring was not well studied at the time.[4] Moreover, like Diffie–Hellman, RSA is based on modular exponentiation.

Ron Rivest, Adi Shamir, and Leonard Adleman at the Massachusetts Institute of Technology made several attempts over the course of a year to create a function that was hard to invert. Rivest and Shamir, as computer scientists, proposed many potential functions, while Adleman, as a mathematician, was responsible for finding their weaknesses. They tried many approaches, including "knapsack-based" and "permutation polynomials". For a time, they thought what they wanted to achieve was impossible due to contradictory requirements.[5] In April 1977, they spent Passover at the house of a student and drank a good deal of wine before returning to their homes at around midnight.[6] Rivest, unable to sleep, lay on the couch with a math textbook and started thinking about their one-way function. He spent the rest of the night formalizing his idea, and he had much of the paper ready by daybreak. The algorithm is now known as RSA, the initials of their surnames in the same order as their paper.[7]

Clifford Cocks, an English mathematician working for the British intelligence agency Government Communications Headquarters (GCHQ), described a similar system in an internal document in 1973.[8] However, given the relatively expensive computers needed to implement it at the time, it was considered to be mostly a curiosity and, as far as is publicly known, was never deployed. His ideas and concepts were not revealed until 1997 due to their top-secret classification.
Kid-RSA (KRSA) is a simplified, insecure public-key cipher published in 1997, designed for educational purposes. Some people feel that learning Kid-RSA gives insight into RSA and other public-key ciphers, analogous to simplified DES.[9][10][11][12][13]

A patent describing the RSA algorithm was granted to MIT on 20 September 1983: U.S. patent 4,405,829, "Cryptographic communications system and method". From DWPI's abstract of the patent: The system includes a communications channel coupled to at least one terminal having an encoding device and to at least one terminal having a decoding device. A message-to-be-transferred is enciphered to ciphertext at the encoding terminal by encoding the message as a number M in a predetermined set. That number is then raised to a first predetermined power (associated with the intended receiver) and finally computed. The remainder or residue, C, is... computed when the exponentiated number is divided by the product of two predetermined prime numbers (associated with the intended receiver).

A detailed description of the algorithm was published in August 1977, in Scientific American's Mathematical Games column.[7] This preceded the patent's filing date of December 1977. Consequently, the patent had no legal standing outside the United States. Had Cocks's work been publicly known, a patent in the United States would not have been legal either. When the patent was issued, terms of patent were 17 years. The patent was about to expire on 21 September 2000, but RSA Security released the algorithm to the public domain on 6 September 2000.[14]

The RSA algorithm involves four steps: key generation, key distribution, encryption, and decryption. A basic principle behind RSA is the observation that it is practical to find three very large positive integers e, d, and n, such that for all integers m (0 ≤ m < n), both (m^e)^d and m have the same remainder when divided by n (they are congruent modulo n):

(m^e)^d ≡ m (mod n).

However, when given only e and n, it is extremely difficult to find d. The integers n and e comprise the public key, d represents the private key, and m represents the message. The modular exponentiation to e and d corresponds to encryption and decryption, respectively. In addition, because the two exponents can be swapped, the private and public key can also be swapped, allowing for message signing and verification using the same algorithm.

The keys for the RSA algorithm are generated in the following way: choose two large prime numbers p and q, which are kept secret; compute n = pq, which is used as the modulus for both keys and whose bit length is the key length; compute λ(n) = lcm(p − 1, q − 1), where λ is the Carmichael totient function; choose an integer e such that 1 < e < λ(n) and gcd(e, λ(n)) = 1; and determine d ≡ e^{−1} (mod λ(n)), the modular multiplicative inverse of e.

The public key consists of the modulus n and the public (or encryption) exponent e. The private key consists of the private (or decryption) exponent d, which must be kept secret. p, q, and λ(n) must also be kept secret because they can be used to calculate d. In fact, they can all be discarded after d has been computed.[16]

In the original RSA paper,[1] the Euler totient function φ(n) = (p − 1)(q − 1) is used instead of λ(n) for calculating the private exponent d. Since φ(n) is always divisible by λ(n), the algorithm works as well. The possibility of using the Euler totient function also results from Lagrange's theorem applied to the multiplicative group of integers modulo pq. Thus any d satisfying d⋅e ≡ 1 (mod φ(n)) also satisfies d⋅e ≡ 1 (mod λ(n)). However, computing d modulo φ(n) will sometimes yield a result that is larger than necessary (i.e. d > λ(n)).
Most of the implementations of RSA will accept exponents generated using either method (if they use the private exponent d at all, rather than using the optimized decryption method based on the Chinese remainder theorem described below), but some standards such as FIPS 186-4 (Section B.3.1) may require that d < λ(n). Any "oversized" private exponents not meeting this criterion may always be reduced modulo λ(n) to obtain a smaller equivalent exponent. Since any common factors of (p − 1) and (q − 1) are present in the factorisation of n − 1 = pq − 1 = (p − 1)(q − 1) + (p − 1) + (q − 1),[17][self-published source?] it is recommended that (p − 1) and (q − 1) have only very small common factors, if any, besides the necessary 2.[1][18][19][failed verification][20][failed verification]

Note: The authors of the original RSA paper carry out the key generation by choosing d and then computing e as the modular multiplicative inverse of d modulo φ(n), whereas most current implementations of RSA, such as those following PKCS#1, do the reverse (choose e and compute d). Since the chosen key can be small, whereas the computed key normally is not, the RSA paper's algorithm optimizes decryption compared to encryption, while the modern algorithm optimizes encryption instead.[1][21]

Suppose that Bob wants to send information to Alice. If they decide to use RSA, Bob must know Alice's public key to encrypt the message, and Alice must use her private key to decrypt the message. To enable Bob to send his encrypted messages, Alice transmits her public key (n, e) to Bob via a reliable, but not necessarily secret, route. Alice's private key (d) is never distributed. After Bob obtains Alice's public key, he can send a message M to Alice. To do it, he first turns M (strictly speaking, the un-padded plaintext) into an integer m (strictly speaking, the padded plaintext), such that 0 ≤ m < n, by using an agreed-upon reversible protocol known as a padding scheme. He then computes the ciphertext c, using Alice's public key e, corresponding to

c ≡ m^e (mod n).

This can be done reasonably quickly, even for very large numbers, using modular exponentiation. Bob then transmits c to Alice. Note that at least nine values of m will yield a ciphertext c equal to m,[a] but this is very unlikely to occur in practice. Alice can recover m from c by using her private key exponent d, computing

c^d ≡ (m^e)^d ≡ m (mod n).

Given m, she can recover the original message M by reversing the padding scheme.

Here is an example of RSA encryption and decryption:[b] choose the primes p = 61 and q = 53, and compute n = pq = 3233 and λ(n) = lcm(60, 52) = 780; choose e = 17 (coprime to 780) and compute d = 17^{−1} mod 780 = 413. The public key is (n = 3233, e = 17). For a padded plaintext message m, the encryption function is

c(m) = m^e mod n = m^17 mod 3233.

The private key is (n = 3233, d = 413). For an encrypted ciphertext c, the decryption function is

m(c) = c^d mod n = c^413 mod 3233.

For instance, in order to encrypt m = 65, one calculates

c = 65^17 mod 3233 = 2790.

To decrypt c = 2790, one calculates

m = 2790^413 mod 3233 = 65.

Both of these calculations can be computed efficiently using the square-and-multiply algorithm for modular exponentiation.
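The toy example above can be reproduced in a few lines of Python. This is a minimal sketch with textbook-sized numbers and no padding; real deployments should use a vetted cryptography library rather than anything like this:

```python
from math import gcd

# Toy parameters from the example above (never use sizes like this in practice).
p, q = 61, 53
n = p * q                                        # 3233, the modulus
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)     # lambda(n) = lcm(p-1, q-1) = 780

e = 17
assert gcd(e, lam) == 1       # e must be invertible modulo lambda(n)
d = pow(e, -1, lam)           # modular inverse (Python 3.8+); here d = 413

m = 65                        # padded plaintext as an integer, 0 <= m < n
c = pow(m, e, n)              # encryption: c = m^e mod n = 2790
assert pow(c, d, n) == m      # decryption recovers m
print(n, d, c)                # 3233 413 2790
```

The built-in three-argument pow performs square-and-multiply modular exponentiation, so both directions are fast even for realistically sized moduli.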
In real-life situations the primes selected would be much larger; in our example it would be trivial to factor n = 3233 (obtained from the freely available public key) back to the primes p and q. e, also from the public key, is then inverted to get d, thus acquiring the private key. Practical implementations use the Chinese remainder theorem to speed up the calculation using the moduli of the factors (mod pq using mod p and mod q). The values d_p, d_q and q_inv, which are part of the private key, are computed as follows:

d_p = d mod (p − 1) = 413 mod (61 − 1) = 53,
d_q = d mod (q − 1) = 413 mod (53 − 1) = 49,
q_inv = q^{−1} mod p = 53^{−1} mod 61 = 38
  (so that (q_inv × q) mod p = (38 × 53) mod 61 = 1).

Here is how d_p, d_q and q_inv are used for efficient decryption (encryption is efficient by choice of a suitable d and e pair):

m_1 = c^{d_p} mod p = 2790^{53} mod 61 = 4,
m_2 = c^{d_q} mod q = 2790^{49} mod 53 = 12,
h = (q_inv × (m_1 − m_2)) mod p = (38 × −8) mod 61 = 1,
m = m_2 + h × q = 12 + 1 × 53 = 65.

Suppose Alice uses Bob's public key to send him an encrypted message. In the message, she can claim to be Alice, but Bob has no way of verifying that the message was from Alice, since anyone can use Bob's public key to send him encrypted messages. In order to verify the origin of a message, RSA can also be used to sign a message.

Suppose Alice wishes to send a signed message to Bob. She can use her own private key to do so. She produces a hash value of the message, raises it to the power of d (modulo n) (as she does when decrypting a message), and attaches it as a "signature" to the message. When Bob receives the signed message, he uses the same hash algorithm in conjunction with Alice's public key. He raises the signature to the power of e (modulo n) (as he does when encrypting a message), and compares the resulting hash value with the message's hash value. If the two agree, he knows that the author of the message was in possession of Alice's private key and that the message has not been tampered with since being sent. This works because of exponentiation rules:

h = hash(m),
(h^e)^d = h^{ed} = h^{de} = (h^d)^e ≡ h (mod n).

Thus the keys may be swapped without loss of generality; that is, a private key of a key pair may be used either to sign a message (which anyone can verify with the public key) or to decrypt a message (which anyone could have encrypted with the public key).

The proof of the correctness of RSA is based on Fermat's little theorem, stating that a^{p−1} ≡ 1 (mod p) for any integer a and prime p not dividing a.[note 1] We want to show that (m^e)^d ≡ m (mod pq) for every integer m when p and q are distinct prime numbers and e and d are positive integers satisfying ed ≡ 1 (mod λ(pq)).
Since λ(pq) = lcm(p − 1, q − 1) is, by construction, divisible by both p − 1 and q − 1, we can write ed − 1 = h(p − 1) = k(q − 1) for some nonnegative integers h and k.[note 2] To check whether two numbers, such as m^{ed} and m, are congruent mod pq, it suffices (and in fact is equivalent) to check that they are congruent mod p and mod q separately.[note 3]

To show m^{ed} ≡ m (mod p), we consider two cases. If m ≡ 0 (mod p), then m^{ed} ≡ 0 ≡ m (mod p). Otherwise, Fermat's little theorem gives m^{ed} = m^{ed−1} m = (m^{p−1})^h m ≡ 1^h m ≡ m (mod p). The verification that m^{ed} ≡ m (mod q) proceeds in a completely analogous way. This completes the proof that, for any integer m, and integers e, d such that ed ≡ 1 (mod λ(pq)),

(m^e)^d ≡ m (mod pq).

Although the original paper of Rivest, Shamir, and Adleman used Fermat's little theorem to explain why RSA works, it is common to find proofs that rely instead on Euler's theorem. We want to show that m^{ed} ≡ m (mod n), where n = pq is a product of two different prime numbers, and e and d are positive integers satisfying ed ≡ 1 (mod φ(n)). Since e and d are positive, we can write ed = 1 + hφ(n) for some non-negative integer h. Assuming that m is relatively prime to n, we have

m^{ed} = m^{1+hφ(n)} = m(m^{φ(n)})^h ≡ m(1)^h ≡ m (mod n),

where the second-last congruence follows from Euler's theorem. More generally, for any e and d satisfying ed ≡ 1 (mod λ(n)), the same conclusion follows from Carmichael's generalization of Euler's theorem, which states that m^{λ(n)} ≡ 1 (mod n) for all m relatively prime to n. When m is not relatively prime to n, the argument just given is invalid. This is highly improbable (only a proportion of 1/p + 1/q − 1/(pq) numbers have this property), but even in this case the desired congruence is still true: either m ≡ 0 (mod p) or m ≡ 0 (mod q), and these cases can be treated using the previous proof.

There are a number of attacks against plain RSA as described below. To avoid these problems, practical RSA implementations typically embed some form of structured, randomized padding into the value m before encrypting it. This padding ensures that m does not fall into the range of insecure plaintexts, and that a given message, once padded, will encrypt to one of a large number of different possible ciphertexts. Standards such as PKCS#1 have been carefully designed to securely pad messages prior to RSA encryption. Because these schemes pad the plaintext m with some number of additional bits, the size of the un-padded message M must be somewhat smaller. RSA padding schemes must be carefully designed so as to prevent sophisticated attacks that may be facilitated by a predictable message structure. Early versions of the PKCS#1 standard (up to version 1.5) used a construction that appears to make RSA semantically secure. However, at Crypto 1998, Bleichenbacher showed that this version is vulnerable to a practical adaptive chosen-ciphertext attack. Furthermore, at Eurocrypt 2000, Coron et al.[25] showed that for some types of messages, this padding does not provide a high enough level of security. Later versions of the standard include Optimal Asymmetric Encryption Padding (OAEP), which prevents these attacks. As such, OAEP should be used in any new application, and PKCS#1 v1.5 padding should be replaced wherever possible. The PKCS#1 standard also incorporates processing schemes designed to provide additional security for RSA signatures, e.g. the Probabilistic Signature Scheme for RSA (RSA-PSS). Secure padding schemes such as RSA-PSS are as essential for the security of message signing as they are for message encryption. Two USA patents on PSS were granted
(U.S. patent 6,266,771 and U.S. patent 7,036,014); however, these patents expired on 24 July 2009 and 25 April 2010 respectively. Use of PSS no longer seems to be encumbered by patents.[original research?] Note that using different RSA key pairs for encryption and signing is potentially more secure.[26]

For efficiency, many popular crypto libraries (such as OpenSSL, Java and .NET) use the following optimization for decryption and signing, based on the Chinese remainder theorem.[27][citation needed] The following values are precomputed and stored as part of the private key: d_P = d mod (p − 1), d_Q = d mod (q − 1), and q_inv = q^{−1} mod p. These values allow the recipient to compute the exponentiation m = c^d (mod pq) more efficiently as follows:

m_1 = c^{d_P} mod p,
m_2 = c^{d_Q} mod q,
h = q_inv(m_1 − m_2) mod p,[c]
m = m_2 + hq.

This is more efficient than computing exponentiation by squaring directly, even though two modular exponentiations have to be computed. The reason is that these two modular exponentiations both use a smaller exponent and a smaller modulus.

The security of the RSA cryptosystem is based on two mathematical problems: the problem of factoring large numbers and the RSA problem. Full decryption of an RSA ciphertext is thought to be infeasible on the assumption that both of these problems are hard, i.e., no efficient algorithm exists for solving them. Providing security against partial decryption may require the addition of a secure padding scheme.[28]

The RSA problem is defined as the task of taking eth roots modulo a composite n: recovering a value m such that c ≡ m^e (mod n), where (n, e) is an RSA public key, and c is an RSA ciphertext. Currently the most promising approach to solving the RSA problem is to factor the modulus n. With the ability to recover prime factors, an attacker can compute the secret exponent d from a public key (n, e), then decrypt c using the standard procedure. To accomplish this, an attacker factors n into p and q, and computes lcm(p − 1, q − 1), which allows the determination of d from e. No polynomial-time method for factoring large integers on a classical computer has yet been found, but it has not been proven that none exists; see integer factorization for a discussion of this problem.

The first RSA-512 factorization in 1999 used hundreds of computers and required the equivalent of 8,400 MIPS years, over an elapsed time of about seven months.[29] By 2009, Benjamin Moody could factor a 512-bit RSA key in 73 days using only public software (GGNFS) and his desktop computer (a dual-core Athlon64 with a 1,900 MHz CPU). Just less than 5 gigabytes of disk storage was required, and about 2.5 gigabytes of RAM for the sieving process. Rivest, Shamir, and Adleman noted[1] that Miller has shown that, assuming the truth of the extended Riemann hypothesis, finding d from n and e is as hard as factoring n into p and q (up to a polynomial time difference).[30] However, Rivest, Shamir, and Adleman noted, in section IX/D of their paper, that they had not found a proof that inverting RSA is as hard as factoring.

As of 2020[update], the largest publicly known factored RSA number had 829 bits (250 decimal digits, RSA-250).[31] Its factorization, by a state-of-the-art distributed implementation, took about 2,700 CPU-years. In practice, RSA keys are typically 1024 to 4096 bits long.
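Returning to the CRT optimization described above, here is a minimal Python sketch of the precomputation and recombination, reusing the toy key from the earlier example (illustrative only; constant-time arithmetic and padding are omitted):

```python
# CRT-based RSA decryption with the toy key from the worked example.
p, q, d = 61, 53, 413
d_P = d % (p - 1)          # 53
d_Q = d % (q - 1)          # 49
q_inv = pow(q, -1, p)      # 38, the inverse of q modulo p

def decrypt_crt(c: int) -> int:
    m1 = pow(c, d_P, p)    # half-size exponentiation mod p
    m2 = pow(c, d_Q, q)    # half-size exponentiation mod q
    h = (q_inv * (m1 - m2)) % p
    return m2 + h * q      # recombine the two residues into m mod pq

assert decrypt_crt(2790) == 65   # matches the numbers worked out above
```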
In 2003, RSA Security estimated that 1024-bit keys were likely to become crackable by 2010.[32] As of 2020, it is not known whether such keys can be cracked, but minimum recommendations have moved to at least 2048 bits.[33] It is generally presumed that RSA is secure if n is sufficiently large, outside of quantum computing. If n is 300 bits or shorter, it can be factored in a few hours on a personal computer, using software already freely available. Keys of 512 bits were shown to be practically breakable in 1999, when RSA-155 was factored by using several hundred computers; such keys can now be factored in a few weeks using common hardware. Exploits using 512-bit code-signing certificates that may have been factored were reported in 2011.[34] A theoretical hardware device named TWIRL, described by Shamir and Tromer in 2003, called into question the security of 1024-bit keys.[32]

In 1994, Peter Shor showed that a quantum computer, if one could ever be practically created for the purpose, would be able to factor in polynomial time, breaking RSA; see Shor's algorithm.

Finding the large primes p and q is usually done by testing random numbers of the correct size with probabilistic primality tests that quickly eliminate virtually all of the nonprimes. The numbers p and q should not be "too close", lest the Fermat factorization for n be successful. If p − q is less than 2n^{1/4} (which even for "small" 1024-bit values of n = p⋅q is about 3×10^77), solving for p and q is trivial. Furthermore, if either p − 1 or q − 1 has only small prime factors, n can be factored quickly by Pollard's p − 1 algorithm, and hence such values of p or q should be discarded.

It is important that the private exponent d be large enough. Michael J. Wiener showed that if p is between q and 2q (which is quite typical) and d < n^{1/4}/3, then d can be computed efficiently from n and e.[35] There is no known attack against small public exponents such as e = 3, provided that the proper padding is used. Coppersmith's attack has many applications in attacking RSA, specifically if the public exponent e is small and if the encrypted message is short and not padded. 65537 is a commonly used value for e; this value can be regarded as a compromise between avoiding potential small-exponent attacks and still allowing efficient encryptions (or signature verification). The NIST Special Publication on Computer Security (SP 800-78 Rev. 1 of August 2007) does not allow public exponents e smaller than 65537, but does not state a reason for this restriction.

In October 2017, a team of researchers from Masaryk University announced the ROCA vulnerability, which affects RSA keys generated by an algorithm embodied in a library from Infineon known as RSALib. A large number of smart cards and trusted platform modules (TPM) were shown to be affected. Vulnerable RSA keys are easily identified using a test program the team released.[36]

A cryptographically strong random number generator, which has been properly seeded with adequate entropy, must be used to generate the primes p and q. An analysis comparing millions of public keys gathered from the Internet was carried out in early 2012 by Arjen K. Lenstra, James P. Hughes, Maxime Augier, Joppe W. Bos, Thorsten Kleinjung and Christophe Wachter. They were able to factor 0.2% of the keys using only Euclid's algorithm.[37][38][self-published source?] They exploited a weakness unique to cryptosystems based on integer factorization. If n = pq is one public key, and n′ = p′q′ is another, then if by chance p = p′ (but q is not equal to q′), then a simple computation of gcd(n, n′) = p factors both n and n′, totally compromising both keys.
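The shared-prime failure just described is easy to demonstrate. In this Python sketch the primes are hypothetical toy values deliberately chosen to share a factor; with real keys the same one-line gcd computation applies:

```python
from math import gcd

# Two toy moduli that accidentally reuse the prime p (hypothetical values).
p, q1, q2 = 61, 53, 59
n1, n2 = p * q1, p * q2

shared = gcd(n1, n2)          # recovers p = 61 with no factoring effort at all
assert shared == p
print(n1 // shared, n2 // shared)   # the cofactors q1 and q2; both keys are broken
```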
Lenstra et al. note that this problem can be minimized by using a strong random seed of bit length twice the intended security level, or by employing a deterministic function to choose q given p, instead of choosing p and q independently.

Nadia Heninger was part of a group that did a similar experiment. They used an idea of Daniel J. Bernstein to compute the GCD of each RSA key n against the product of all the other keys n′ they had found (a 729-million-digit number), instead of computing each gcd(n, n′) separately, thereby achieving a very significant speedup, since after one large division, the GCD problem is of normal size. Heninger says in her blog that the bad keys occurred almost entirely in embedded applications, including "firewalls, routers, VPN devices, remote server administration devices, printers, projectors, and VOIP phones" from more than 30 manufacturers. Heninger explains that the one-shared-prime problem uncovered by the two groups results from situations where the pseudorandom number generator is poorly seeded initially and then reseeded between the generation of the first and second primes. Using seeds of sufficiently high entropy obtained from key stroke timings or electronic diode noise or atmospheric noise from a radio receiver tuned between stations should solve the problem.[39]

Strong random number generation is important throughout every phase of public-key cryptography. For instance, if a weak generator is used for the symmetric keys that are being distributed by RSA, then an eavesdropper could bypass RSA and guess the symmetric keys directly.

Kocher described a new attack on RSA in 1995: if the attacker Eve knows Alice's hardware in sufficient detail and is able to measure the decryption times for several known ciphertexts, Eve can deduce the decryption key d quickly. This attack can also be applied against the RSA signature scheme. In 2003, Boneh and Brumley demonstrated a more practical attack capable of recovering RSA factorizations over a network connection (e.g., from a Secure Sockets Layer (SSL)-enabled webserver).[40] This attack takes advantage of information leaked by the Chinese remainder theorem optimization used by many RSA implementations.

One way to thwart these attacks is to ensure that the decryption operation takes a constant amount of time for every ciphertext. However, this approach can significantly reduce performance. Instead, most RSA implementations use an alternate technique known as cryptographic blinding. RSA blinding makes use of the multiplicative property of RSA. Instead of computing c^d (mod n), Alice first chooses a secret random value r and computes (r^e c)^d (mod n). The result of this computation, after applying Euler's theorem, is r c^d (mod n), and so the effect of r can be removed by multiplying by its inverse. A new value of r is chosen for each ciphertext. With blinding applied, the decryption time is no longer correlated to the value of the input ciphertext, and so the timing attack fails.

In 1998, Daniel Bleichenbacher described the first practical adaptive chosen-ciphertext attack against RSA-encrypted messages using the PKCS #1 v1 padding scheme (a padding scheme randomizes and adds structure to an RSA-encrypted message, so it is possible to determine whether a decrypted message is valid). Due to flaws with the PKCS #1 scheme, Bleichenbacher was able to mount a practical attack against RSA implementations of the Secure Sockets Layer protocol and to recover session keys.
As a result of this work, cryptographers now recommend the use of provably secure padding schemes such as Optimal Asymmetric Encryption Padding, and RSA Laboratories has released new versions of PKCS #1 that are not vulnerable to these attacks. A variant of this attack, dubbed "BERserk", came back in 2014.[41][42] It impacted the Mozilla NSS Crypto Library, which was used notably by Firefox and Chrome.

A side-channel attack using branch-prediction analysis (BPA) has been described. Many processors use a branch predictor to determine whether a conditional branch in the instruction flow of a program is likely to be taken or not. Often these processors also implement simultaneous multithreading (SMT). Branch-prediction analysis attacks use a spy process to discover (statistically) the private key when processed with these processors. Simple Branch Prediction Analysis (SBPA) claims to improve BPA in a non-statistical way. In their paper, "On the Power of Simple Branch Prediction Analysis",[43] the authors of SBPA (Onur Aciicmez and Cetin Kaya Koc) claim to have discovered 508 out of 512 bits of an RSA key in 10 iterations.

A power-fault attack on RSA implementations was described in 2010.[44] The author recovered the key by varying the CPU power voltage outside limits; this caused multiple power faults on the server.

There are many details to keep in mind in order to implement RSA securely (strong PRNG, acceptable public exponent, etc.). This makes the implementation challenging, to the point that the book Practical Cryptography With Go suggests avoiding RSA if possible.[45] A number of cryptography libraries provide support for RSA.
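As an illustration of the blinding countermeasure described earlier, here is a minimal Python sketch of the algebra. It shows only why unblinding recovers the plaintext; it is not a constant-time or otherwise side-channel-hardened implementation:

```python
import random
from math import gcd

def blinded_decrypt(c: int, d: int, e: int, n: int) -> int:
    # Pick a fresh random blinding value r invertible modulo n.
    while True:
        r = random.randrange(2, n)
        if gcd(r, n) == 1:
            break
    c_blind = (pow(r, e, n) * c) % n   # blind: (r^e * c) mod n
    m_blind = pow(c_blind, d, n)       # (r^e * c)^d = r * m  (mod n)
    return (m_blind * pow(r, -1, n)) % n   # unblind by multiplying with r^{-1}

# With the toy key from the worked example:
assert blinded_decrypt(2790, 413, 17, 3233) == 65
```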
https://en.wikipedia.org/wiki/RSA_algorithm
A standard normal deviate is a normally distributed deviate. It is a realization of a standard normal random variable, defined as a random variable with expected value 0 and variance 1.[1] Where collections of such random variables are used, there is often an associated (possibly unstated) assumption that members of such collections are statistically independent. Standard normal variables play a major role in theoretical statistics in the description of many types of models, particularly in regression analysis, the analysis of variance and time series analysis.

When the term "deviate" is used, rather than "variable", there is a connotation that the value concerned is treated as the no-longer-random outcome of a standard normal random variable. The terminology here is the same as that for random variable and random variate. Standard normal deviates arise in practical statistics in two ways.
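In computational practice, standard normal deviates are commonly produced by transforming uniform deviates. The following Python sketch uses the Box–Muller transform (the function name and the sanity checks are illustrative, not from the source); library routines such as Python's random.gauss(0, 1) serve the same purpose:

```python
import math
import random

def box_muller() -> float:
    # Transform two independent uniform deviates into one standard normal deviate.
    u1 = 1.0 - random.random()   # in (0, 1], avoiding log(0)
    u2 = random.random()
    return math.sqrt(-2.0 * math.log(u1)) * math.cos(2.0 * math.pi * u2)

sample = [box_muller() for _ in range(10_000)]
mean = sum(sample) / len(sample)                            # close to 0
var = sum((x - mean) ** 2 for x in sample) / len(sample)    # close to 1
print(mean, var)
```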
https://en.wikipedia.org/wiki/Standard_normal_deviate
Bluebugging is a form of Bluetooth attack often caused by a lack of awareness. It was developed after the onset of bluejacking and bluesnarfing. Similar to bluesnarfing, bluebugging accesses and uses all phone features,[1] but is limited by the transmitting power of class 2 Bluetooth radios, normally capping its range at 10–15 meters. However, the operational range can be increased with the use of a directional antenna.[2][3]

Bluebugging was developed by the German researcher Martin Herfurt in 2004, one year after the advent of bluejacking.[2] Initially a threat against laptops with Bluetooth capability,[4] it later targeted mobile phones[5] and PDAs. Bluebugging manipulates a target phone into compromising its security, creating a backdoor, before returning control of the phone to its owner. Once control of a phone has been established, it is used to call back the hacker, who is then able to listen in to conversations, hence the name "bugging".[5] The Bluebug program also has the capability to create a call-forwarding application whereby the hacker receives calls intended for the target phone.[1]

A further development of bluebugging has allowed for the control of target phones through Bluetooth phone headsets. It achieves this by pretending to be the headset and thereby "tricking" the phone into obeying call commands. Not only can a hacker receive calls intended for the target phone, they can also send messages, read phonebooks, and examine calendars.
https://en.wikipedia.org/wiki/Bluebugging
Face validity is the extent to which a test is subjectively viewed as covering the concept it purports to measure. It refers to the transparency or relevance of a test as it appears to test participants.[1][2] In other words, a test can be said to have face validity if it "looks like" it is going to measure what it is supposed to measure.[3] For instance, if a test is prepared to measure whether students can perform multiplication, and the people to whom it is shown all agree that it looks like a good test of multiplication ability, this demonstrates face validity of the test. Face validity is often contrasted with content validity and construct validity.

Some people use the term face validity to refer only to the validity of a test to observers who are not expert in testing methodologies. For instance, if a test is designed to measure whether children are good spellers, and parents are asked whether the test is a good test, this measures the face validity of the test. If an expert is asked instead, some people would argue that this does not measure face validity.[4] This distinction seems too careful for most applications.[citation needed] Generally, face validity means that the test "looks like" it will work, as opposed to "has been shown to work".

In simulation, the first goal of the system designer is to construct a system which can support a task to be accomplished, and to record the learner's task performance for any particular trial. The task(s), and therefore the task performance, on the simulator should be representative of the real world that they model. Face validity is a subjective measure of the extent to which this selection appears reasonable "on the face of it", that is, subjectively to an expert after only a superficial examination of the content. Some assume that it is representative of the realism of the system, according to users and others who are knowledgeable about the real system being simulated.[5] Those would say that if these experts feel the model is adequate, then it has face validity. However, in fact face validity refers to the test, not the system.
https://en.wikipedia.org/wiki/Face_validity
The ElGamal signature scheme is a digital signature scheme which is based on the difficulty of computing discrete logarithms. It was described by Taher Elgamal in 1985.[1]

The ElGamal signature algorithm is rarely used in practice. A variant developed at the NSA and known as the Digital Signature Algorithm is much more widely used. There are several other variants.[2] The ElGamal signature scheme must not be confused with ElGamal encryption, which was also invented by Taher Elgamal.

The ElGamal signature scheme is a digital signature scheme based on the algebraic properties of modular exponentiation, together with the discrete logarithm problem. The algorithm uses a key pair consisting of a public key and a private key. The private key is used to generate a digital signature for a message, and such a signature can be verified by using the signer's corresponding public key. The digital signature provides message authentication (the receiver can verify the origin of the message), integrity (the receiver can verify that the message has not been modified since it was signed) and non-repudiation (the sender cannot falsely claim that they have not signed the message). The ElGamal signature scheme was described by Taher Elgamal in 1985.[1] It is based on the Diffie–Hellman problem.

The scheme involves four operations: key generation (which creates the key pair), key distribution, signing and signature verification. Key generation has two phases. The first phase is a choice of algorithm parameters which may be shared between different users of the system, while the second phase computes a single key pair for one user. The algorithm parameters are (p, g), where p is a large prime and g is a generator of the multiplicative group of integers modulo p. These parameters may be shared between users of the system. Given a set of parameters, the second phase computes the key pair for a single user: choose an integer x randomly from {1, ..., p − 2} and compute y = g^x mod p. Then x is the private key and y is the public key. The signer should send the public key y to the receiver via a reliable, but not necessarily secret, mechanism. The signer should keep the private key x secret.

A message m is signed as follows: choose an integer k randomly from {2, ..., p − 2} with k relatively prime to p − 1; compute r = g^k mod p and s = k^{−1}(H(m) − xr) mod (p − 1), where H is a collision-resistant hash function; if s = 0, start again with a different k. The signature is (r, s). One can verify that a signature (r, s) is a valid signature for a message m as follows: check that 0 < r < p and 0 < s < p − 1, and accept the signature if and only if g^{H(m)} ≡ y^r r^s (mod p).

The algorithm is correct in the sense that a signature generated with the signing algorithm will always be accepted by the verifier. The computation of s during signature generation implies H(m) ≡ xr + sk (mod p − 1). Since g is relatively prime to p, Fermat's little theorem lets exponents of g be reduced modulo p − 1, so

g^{H(m)} ≡ g^{xr + sk} ≡ (g^x)^r (g^k)^s ≡ y^r r^s (mod p),

which is exactly the verification condition.

A third party can forge signatures either by finding the signer's secret key x or by finding collisions in the hash function, H(m) ≡ H(M) (mod p − 1). Both problems are believed to be difficult. However, as of 2011 no tight reduction to a computational hardness assumption is known. The signer must be careful to choose a different k uniformly at random for each signature and to be certain that k, or even partial information about k, is not leaked. Otherwise, an attacker may be able to deduce the secret key x with reduced difficulty, perhaps enough to allow a practical attack. In particular, if two messages are sent using the same value of k and the same key, then an attacker can compute x directly.[1]

The original paper[1] did not include a hash function as a system parameter. The message m was used directly in the algorithm instead of H(m). This enables an attack called existential forgery, as described in section IV of the paper.
Pointcheval and Stern generalized that case and described two levels of forgeries.[3]
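As a concrete illustration of the scheme, here is a toy Python sketch of key generation, signing, and verification. The parameters p = 467 and g = 2 are illustrative toy values, far too small for real use, and the hash wrapper H is an assumption (SHA-256 reduced modulo p − 1):

```python
import hashlib
import random
from math import gcd

p, g = 467, 2                    # toy parameters: p prime, g a generator mod p
x = random.randrange(1, p - 1)   # private key
y = pow(g, x, p)                 # public key

def H(message: bytes) -> int:
    # Hash the message and reduce into the exponent range.
    return int.from_bytes(hashlib.sha256(message).digest(), "big") % (p - 1)

def sign(message: bytes) -> tuple[int, int]:
    while True:
        k = random.randrange(2, p - 1)
        if gcd(k, p - 1) == 1:   # k must be invertible modulo p - 1
            break
    r = pow(g, k, p)
    s = ((H(message) - x * r) * pow(k, -1, p - 1)) % (p - 1)
    if s == 0:
        return sign(message)     # retry in the degenerate case
    return r, s

def verify(message: bytes, r: int, s: int) -> bool:
    if not (0 < r < p and 0 < s < p - 1):
        return False
    return pow(g, H(message), p) == (pow(y, r, p) * pow(r, s, p)) % p

r, s = sign(b"example")
assert verify(b"example", r, s)
```

Note that sign draws a fresh random k every call, matching the warning above: reusing k across two messages exposes the private key x.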
https://en.wikipedia.org/wiki/ElGamal_signature_scheme
A shell script is a computer program designed to be run by a Unix shell, a command-line interpreter.[1] The various dialects of shell scripts are considered to be command languages. Typical operations performed by shell scripts include file manipulation, program execution, and printing text. A script which sets up the environment, runs the program, and does any necessary cleanup or logging is called a wrapper. The term is also used more generally for the automated mode of running an operating system shell; each operating system uses a particular name for these functions, including batch files (MS-DOS–Windows 95 stream, OS/2), command procedures (VMS), and shell scripts (Windows NT stream and third-party derivatives like 4NT; see the article on cmd.exe), and mainframe operating systems are associated with a number of terms. Shells commonly present in Unix and Unix-like systems include the Korn shell, the Bourne shell, and GNU Bash. While a Unix operating system may have a different default shell, such as Zsh on macOS, these shells are typically present for backwards compatibility.

Comments are ignored by the shell. They typically begin with the hash symbol (#) and continue until the end of the line.[2] The shebang, or hash-bang, is a special kind of comment which the system uses to determine what interpreter to use to execute the file. The shebang must be the first line of the file, and start with "#!".[2] In Unix-like operating systems, the characters following the "#!" prefix are interpreted as a path to an executable program that will interpret the script.[3]

A shell script can provide a convenient variation of a system command where special environment settings, command options, or post-processing apply automatically, but in a way that allows the new script to still act as a fully normal Unix command. One example would be to create a version of ls, the command to list files, giving it the shorter command name of l, which would normally be saved in a user's bin directory as /home/username/bin/l, with a default set of command options pre-supplied (see the sketch below). Here, the first line uses a shebang to indicate which interpreter should execute the rest of the script, and the second line makes a listing with options for file format indicators, columns, all files (none omitted), and a size in blocks. The LC_COLLATE=C sets the default collation order to not fold upper and lower case together, not intermix dotfiles with normal filenames as a side effect of ignoring punctuation in the names (dotfiles are usually only shown if an option like -a is used), and the "$@" causes any parameters given to l to pass through as parameters to ls, so that all of the normal options and other syntax known to ls can still be used. The user could then simply use l for the most commonly used short listing.

Another example of a shell script that could be used as a shortcut would be to print a list of all the files and directories within a given directory. In this case, the shell script would start with its normal starting line of #!/bin/sh. Following this, the script executes the command clear, which clears the terminal of all text, before going to the next line. The following line provides the main function of the script: the ls -al command lists the files and directories that are in the directory from which the script is being run. The ls command attributes could be changed to reflect the needs of the user.
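The scripts themselves were lost in extraction; plausible reconstructions matching the two descriptions above follow (the exact ls option spelling in the first is an assumption: -F for format indicators, -C for columns, -a for all files, -s for sizes in blocks):

```sh
#!/bin/sh
# l: a short listing with a default set of ls options pre-supplied.
LC_COLLATE=C ls -F -C -a -s "$@"
```

```sh
#!/bin/sh
# Clear the terminal, then list all files and directories in the current directory.
clear
ls -al
```

Saved as /home/username/bin/l (and marked executable with chmod +x), the first script behaves like a normal Unix command.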
Shell scripts allow several commands that would be entered manually at a command-line interface to be executed automatically, and without having to wait for a user to trigger each stage of the sequence. For example, in a directory with three C source code files, rather than manually running the four commands required to build the final program from them, one could instead create a script for POSIX-compliant shells, here named build and kept in the directory with them, which would compile them automatically (see the sketch below). The script would allow a user to save the file being edited, pause the editor, and then just run ./build to create the updated program, test it, and then return to the editor. Since the 1980s or so, however, scripts of this type have been replaced with utilities like make, which are specialized for building programs.

Simple batch jobs are not unusual for isolated tasks, but using shell loops, tests, and variables provides much more flexibility to users. A POSIX sh script to convert JPEG images to PNG images, where the image names are provided on the command line (possibly via wildcards) instead of each being listed within the script, can be created with a file typically saved as /home/username/bin/jpg2png (see below). The jpg2png command can then be run on an entire directory full of JPEG images with just /home/username/bin/jpg2png *.jpg.

Many modern shells also supply various features usually found only in more sophisticated general-purpose programming languages, such as control-flow constructs, variables, comments, arrays, subroutines and so on. With these sorts of features available, it is possible to write reasonably sophisticated applications as shell scripts. However, they are still limited by the fact that most shell languages have little or no support for data typing systems, classes, threading, complex math, and other common full language features, and are also generally much slower than compiled code or interpreted languages written with speed as a performance goal. The standard Unix tools sed and awk provide extra capabilities for shell programming; Perl can also be embedded in shell scripts, as can other scripting languages like Tcl. Perl and Tcl come with graphics toolkits as well.

Several scripting languages are commonly found on UNIX, Linux, and POSIX-compliant operating system installations. The C and Tcl shells have syntax quite similar to that of said programming languages, and the Korn shells and Bash are developments of the Bourne shell, which is based on the ALGOL language with elements of a number of others added as well.[4] On the other hand, the various shells plus tools like awk, sed, grep, and BASIC, Lisp, C and so forth contributed to the Perl programming language.[5] Numerous other shells may also be available on a machine or for download and/or purchase. Related programs such as shells based on Python, Ruby, C, Java, Perl, Pascal, Rexx etc. in various forms are also widely available. Another somewhat common shell is Old shell (osh), whose manual page states it "is an enhanced, backward-compatible port of the standard command interpreter from Sixth Edition UNIX."[6] So-called remote shells, such as rsh and ssh, are really just tools to run a more complex shell on a remote system and have no 'shell'-like characteristics themselves.
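Minimal sketches consistent with the two scripts described above follow. The C file names in build are hypothetical stand-ins for the three sources, and jpg2png assumes ImageMagick's convert utility is installed; the original may have included error handling omitted here:

```sh
#!/bin/sh
# build: compile the three C source files and link the final program.
# (foo.c, bar.c, qux.c and the output name myprog are hypothetical stand-ins.)
cc -c foo.c
cc -c bar.c
cc -c qux.c
cc -o myprog foo.o bar.o qux.o
```

```sh
#!/bin/sh
# jpg2png: convert each JPEG named on the command line to a PNG.
for jpg in "$@"; do
    png="${jpg%.jpg}.png"   # swap the .jpg extension for .png
    convert "$jpg" "$png"   # ImageMagick does the actual conversion
done
```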
Many powerful scripting languages have been introduced for tasks that are too large or complex to be comfortably handled with ordinary shell scripts, but for which the advantages of a script are desirable and the development overhead of a full-blown, compiled programming language would be disadvantageous. The specifics of what separates scripting languages fromhigh-level programming languagesare a frequent source of debate, but, generally speaking, a scripting language is one which requires an interpreter. Shell scripts often serve as an initial stage in software development, and are often subject to conversion later to a different underlying implementation, most commonly being converted toPerl,Python, orC. Theinterpreter directiveallows the implementation detail to be fully hidden inside the script, rather than being exposed as a filename extension, and provides for seamless reimplementation in different languages with no impact on end users. While files with the ".sh"file extensionare usually a shell script of some kind, most shell scripts do not have any filename extension.[7][8][9][10]

Perhaps the biggest advantage of writing a shell script is that the commands and syntax are exactly the same as those directly entered at the command-line. The programmer does not have to switch to a totally different syntax, as they would if the script were written in a different language, or if a compiled language were used. Often, writing a shell script is much quicker than writing the equivalent code in other programming languages. The many advantages include easy program or file selection, quick start, and interactive debugging. A shell script can be used to provide a sequencing and decision-making linkage around existing programs, and for moderately sized scripts the absence of a compilation step is an advantage. Interpretive running makes it easy to write debugging code into a script and re-run it to detect and fix bugs. Non-expert users can use scripting to tailor the behavior of programs, and shell scripting provides some limited scope for multiprocessing.

On the other hand, shell scripting is prone to costly errors. Inadvertent typing errors such asrm -rf * /(instead of the intendedrm -rf */) are folklore in the Unix community; a single extra space converts the command from one that deletes all subdirectories contained in the current directory, to one which deletes everything from the file system'sroot directory. Similar problems can transformcpandmvinto dangerous weapons, and misuse of the>redirect can delete the contents of a file. This is made more problematic by the fact that many UNIX commands differ in name by only one letter:cp,cd,dd,df, etc.

Another significant disadvantage is the slow execution speed and the need to launch a new process for almost every shell command executed. When a script's job can be accomplished by setting up apipelinein which efficientfiltercommands perform most of the work, the slowdown is mitigated (a sketch appears below), but a complex script is typically several orders of magnitude slower than a conventional compiled program that performs an equivalent task.
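To illustrate the pipeline point above, the following sketch prints the ten most frequent words in a file; each stage is an efficient compiled filter, and the shell merely connects them:

    #!/bin/sh
    # Usage: topwords FILE -- print the ten most frequent words in FILE.
    tr -cs '[:alpha:]' '\n' < "$1" |    # split into one word per line
    tr '[:upper:]' '[:lower:]' |        # normalize case
    sort | uniq -c |                    # count duplicate words
    sort -rn | head -n 10               # show the ten most frequent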
There are also compatibility problems between different platforms.Larry Wall, creator ofPerl, famously wrote that "It's easier to port a shell than a shell script."[11]Similarly, more complex scripts can run into the limitations of the shell scripting language itself; the limits make it difficult to write quality code, and extensions by various shells to ameliorate problems with the original shell language can make problems worse.[12]Many disadvantages of using some script languages are caused by design flaws within thelanguage syntaxor implementation, and are not necessarily imposed by the use of a text-based command-line; there are a number of shells which use other shell programming languages or even full-fledged languages likeScsh(which usesScheme).

Different scripting languages may share many common elements, largely due to being POSIX based, and some shells offer modes to emulate different shells. This allows a shell script written in one scripting language to be adapted into another. One example of this is Bash, which offers the same grammar and syntax as the Bourne shell, and which also provides a POSIX-compliant mode.[13]As such, most shell scripts written for the Bourne shell can be run in Bash, but the reverse may not be true, since Bash has extensions which are not present in the Bourne shell. Such features are known asbashisms.[14](A small example appears after this section.)

Interoperability software such asCygwin, theMKS Toolkit,Interix(which is available in the Microsoft Windows Services for UNIX),Hamilton C shell,UWIN(AT&T Unix for Windows) and others allow Unix shell programs to be run on machines running Windows NT and its successors, with some loss of functionality on theMS-DOS-Windows 95branch, as well as earlier MKS Toolkit versions for OS/2. At least three DCL implementations for Windows-type operating systems—in addition toXLNT, a multiple-use scripting language package which is used with the command shell,Windows Script HostandCGIprogramming—are available for these systems as well. Mac OS X and its successors are Unix-like as well.[15]In addition to the aforementioned tools, somePOSIXand OS/2 functionality can be used with the corresponding environmental subsystems of the Windows NT operating system series up to Windows 2000 as well. A third,16-bitsubsystem often called the MS-DOS subsystem uses the Command.com provided with these operating systems to run the aforementioned MS-DOS batch files.[16]

The console alternatives4DOS,4OS2,FreeDOS,Peter Norton'sNDOSand4NT / Take Command, which add functionality to the Windows NT-style cmd.exe, MS-DOS/Windows 95 batch files (run by Command.com), OS/2's cmd.exe, and 4NT respectively, are similar to the shells that they enhance and are more integrated with the Windows Script Host, which comes with three pre-installed engines, VBScript,JScript, andVBA, and to which numerous third-party engines can be added, with Rexx, Perl, Python, Ruby, and Tcl having pre-defined functions in 4NT and related programs.PC DOSis quite similar to MS-DOS, whilstDR DOSis more different. Earlier versions of Windows NT are able to run contemporary versions of 4OS2 by the OS/2 subsystem. Scripting languages are, by definition, able to be extended; for example, MS-DOS/Windows 95/98 and Windows NT–type systems allow shell/batch programs to call tools likeKiXtart,QBasic, variousBASIC,Rexx,Perl, andPythonimplementations, theWindows Script Hostand its installed engines.
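Returning to the bashisms mentioned above, a small illustrative sketch (not from the original article) contrasts a Bash extension with a portable Bourne/POSIX equivalent:

    #!/bin/bash
    # [[ ... ]] is a bashism: it works in bash but not in a strict POSIX sh.
    name="world"
    if [[ $name == w* ]]; then          # pattern matching inside [[ ]]
        echo "hello, $name"
    fi

    # Portable equivalent, accepted by any Bourne-compatible shell:
    case $name in
        w*) echo "hello, $name" ;;
    esac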
On Unix and otherPOSIX-compliant systems,awkandsedare used to extend the string and numeric processing ability of shell scripts.Tcl, Perl, Rexx, and Python have graphics toolkits and can be used to code functions and procedures for shell scripts that pose a speed bottleneck (C, Fortran, assembly language, etc. are much faster still). They can also add functionality not available in the shell language, such as sockets and other connectivity functions, heavy-duty text processing, working with numbers if the calling script lacks those abilities, self-writing and self-modifying code, and techniques likerecursion, direct memory access, and various types ofsorting, which are difficult or impossible in the main script.Visual Basic for ApplicationsandVBScriptcan be used to control and communicate with such things as spreadsheets, databases, scriptable programs of all types, telecommunications software, development tools, graphics tools and other software which can be accessed through theComponent Object Model.
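As a sketch of how awk can be embedded in a script to supply arithmetic that plain sh lacks (the directory argument and output format are illustrative assumptions):

    #!/bin/sh
    # Sum the sizes (field 5 of 'ls -l' output) of the files in a
    # directory, delegating the arithmetic to awk. NR > 1 skips the
    # leading "total" line that ls -l prints for a directory.
    ls -l "${1:-.}" | awk 'NR > 1 { total += $5 }
                           END   { printf "%.1f KiB\n", total / 1024 }'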
https://en.wikipedia.org/wiki/Shell_script
Data and information visualization(data viz/vis or info viz/vis)[2]is the practice ofdesigningand creatinggraphicor visualrepresentationsof a large amount[3]of complex quantitative and qualitativedataandinformationwith the help of static, dynamic or interactive visual items. Typically based on data and information collected from a certaindomain of expertise, these visualizations are intended for a broader audience to help them visually explore and discover, quickly understand, interpret and gain important insights into otherwise difficult-to-identify structures, relationships, correlations, local and global patterns, trends, variations, constancy, clusters, outliers and unusual groupings within data (exploratory visualization).[4][5][6]When intended for the general public (mass communication) to convey a concise version of known, specific information in a clear and engaging manner (presentationalorexplanatory visualization),[4]it is typically calledinformation graphics.

Data visualizationis concerned with presenting sets of primarily quantitative raw data in a schematic form, using imagery. The visual formats used in data visualization include charts and graphs (e.g.pie charts,bar charts,line charts,area charts,cone charts,pyramid charts,donut charts,histograms,spectrograms,cohort charts,waterfall charts,funnel charts,bullet graphs, etc.),diagrams,plots(e.g.scatter plots,distribution plots,box-and-whisker plots), geospatialmaps(such asproportional symbol maps,choropleth maps,isopleth mapsandheat maps), figures,correlation matrices, percentagegauges, etc., which sometimes can be combined in adashboard.

Information visualization, on the other hand, deals with multiple, large-scale and complicated datasets which contain quantitative (numerical) data as well as qualitative (non-numerical, i.e. verbal or graphical) and primarily abstract information, and its goal is to add value to raw data, improve the viewers' comprehension, reinforce their cognition and help them derive insights and make decisions as they navigate and interact with the computer-supported graphical display. Visual tools used in information visualization includemapsfor location-based data;hierarchical[7]organisations of data such astree maps, radial trees, and othertree structures; displays that prioritiserelationships(Heer et al. 2010) such asSankey diagrams,network diagrams,Venn diagrams,mind maps,semantic networks,entity-relationship diagrams;flow charts,timelines, etc.

Emerging technologieslikevirtual,augmentedandmixed realityhave the potential to make information visualization more immersive, intuitive, interactive and easily manipulable and thus enhance the user'svisual perceptionandcognition.[8]In data and information visualization, the goal is to graphically present and explore abstract, non-physical and non-spatial data collected fromdatabases,information systems,file systems,documents,business data, etc. (presentational and exploratory visualization), which is different from the field ofscientific visualization, where the goal is to render realistic images based on physical andspatialscientific datato confirm or rejecthypotheses(confirmatory visualization).[9]

Effective data visualization is properly sourced, contextualized, simple and uncluttered. The underlying data is accurate and up-to-date to make sure that insights are reliable.
Graphical items are well-chosen for the given datasets and aesthetically appealing, with shapes, colors and other visual elements used deliberately in a meaningful and non-distracting manner. The visuals are accompanied by supporting texts (labels and titles). These verbal and graphical components complement each other to ensure clear, quick and memorable understanding. Effective information visualization is aware of the needs and concerns and the level of expertise of the target audience, deliberately guiding them to the intended conclusion.[10][3]Such effective visualization can be used not only for conveying specialized, complex, big data-driven ideas to a wider non-technical audience in a visually appealing, engaging and accessible manner, but also to domain experts and executives for making decisions, monitoring performance, generating new ideas and stimulating research.[10][4]In addition, data scientists, data analysts and data mining specialists use data visualization to check the quality of data, find errors, unusual gaps and missing values in data, clean data, explore the structures and features of data and assess outputs of data-driven models.[4]

Inbusiness, data and information visualization can constitute a part ofdata storytelling, where they are paired with a coherentnarrativestructure orstorylineto contextualize the analyzed data and communicate the insights gained from analyzing the data clearly and memorably, with the goal of persuading the audience to make a decision or take an action in order to createbusiness value.[3][11]This can be contrasted with the field ofstatistical graphics, where complex statistical data are communicated graphically in an accurate and precise manner among researchers and analysts with statistical expertise, to help them performexploratory data analysisor to convey the results of such analyses, and where visual appeal, capturing attention to a certain issue and storytelling are not as important.[12]

The field of data and information visualization is of an interdisciplinary nature as it incorporates principles found in the disciplines ofdescriptive statistics(as early as the 18th century),[13]visual communication,graphic design,cognitive scienceand, more recently,interactive computer graphicsandhuman-computer interaction.[14]Since effective visualization requires design skills, statistical skills and computing skills, it is argued by authors such as Gershon and Page that it is both an art and a science.[15]The neighboring field ofvisual analyticsmarries statistical data analysis, data and information visualization and human analytical reasoning through interactive visual interfaces to help human users reach conclusions, gain actionable insights and make informed decisions which are otherwise difficult for computers to do.
Research into how people read and misread various types of visualizations is helping to determine what types and features of visualizations are most understandable and effective in conveying information.[16][17]On the other hand, unintentionally poor or intentionally misleading and deceptive visualizations (misinformative visualization) can function as powerful tools which disseminatemisinformation, manipulate public perception and divertpublic opiniontoward a certain agenda.[18]Thus data visualization literacy has become an important component ofdataandinformation literacyin theinformation age, akin to the roles played bytextual,mathematicalandvisual literacyin the past.[19]

The field of data and information visualization has emerged "from research inhuman–computer interaction,computer science,graphics,visual design,psychology,photographyandbusiness methods. It is increasingly applied as a critical component in scientific research,digital libraries,data mining, financial data analysis, market studies, manufacturingproduction control, anddrug discovery".[20]Data and information visualization presumes that "visual representations and interaction techniques take advantage of the human eye's broad bandwidth pathway into the mind to allow users to see, explore, and understand large amounts of information at once. Information visualization focused on the creation of approaches for conveying abstract information in intuitive ways."[21]

Data analysis is an indispensable part of all applied research and problem solving in industry. The most fundamental data analysis approaches are visualization (histograms, scatter plots, surface plots, tree maps, parallel coordinate plots, etc.),statistics(hypothesis test,regression,PCA, etc.),data mining(association mining, etc.), andmachine learningmethods (clustering,classification,decision trees, etc.). Among these approaches, information visualization, or visual data analysis, is the most reliant on the cognitive skills of human analysts, and allows the discovery of unstructured actionable insights that are limited only by human imagination and creativity. The analyst does not have to learn any sophisticated methods to be able to interpret the visualizations of the data. Information visualization is also a hypothesis generation scheme, which can be, and typically is, followed by more analytical or formal analysis, such as statistical hypothesis testing.

To communicate information clearly and efficiently, data visualization usesstatistical graphics,plots,information graphicsand other tools. Numerical data may be encoded using dots, lines, or bars, to visually communicate a quantitative message.[22]Effective visualization helps users analyze and reason about data and evidence.[23]It makes complex data more accessible, understandable, and usable, but can also be reductive.[24]Users may have particular analytical tasks, such as making comparisons or understandingcausality, and the design principle of the graphic (i.e., showing comparisons or showing causality) follows the task. Tables are generally used where users will look up a specific measurement, while charts of various types are used to show patterns or relationships in the data for one or more variables.

Data visualization refers to the techniques used to communicate data or information by encoding it as visual objects (e.g., points, lines, or bars) contained in graphics. The goal is to communicate information clearly and efficiently to users. It is one of the steps indata analysisordata science.
According to Vitaly Friedman (2008) the "main goal of data visualization is to communicate information clearly and effectively through graphical means. It doesn't mean that data visualization needs to look boring to be functional or extremely sophisticated to look beautiful. To convey ideas effectively, both aesthetic form and functionality need to go hand in hand, providing insights into a rather sparse and complex data set by communicating its key aspects in a more intuitive way. Yet designers often fail to achieve a balance between form and function, creating gorgeous data visualizations which fail to serve their main purpose — to communicate information".[25]Indeed,Fernanda ViegasandMartin M. Wattenbergsuggested that an ideal visualization should not only communicate clearly, but stimulate viewer engagement and attention.[26]

Data visualization is closely related toinformation graphics,information visualization,scientific visualization,exploratory data analysisandstatistical graphics. In the new millennium, data visualization has become an active area of research, teaching and development. According to Post et al. (2002), it has united scientific and information visualization.[27]In the commercial environment data visualization is often referred to asdashboards.Infographicsare another very common form of data visualization.

As John Tukey put it, the greatest value of a picture is when it forces us to notice what we never expected to see.

Edward Tuftehas explained that users of information displays are executing particularanalytical taskssuch as making comparisons. Thedesign principleof the information graphic should support the analytical task.[29]As William Cleveland and Robert McGill show, different graphical elements accomplish this more or less effectively. For example, dot plots and bar charts outperform pie charts.[30]In his 1983 bookThe Visual Display of Quantitative Information,[31]Edward Tuftedefines 'graphical displays' and principles for effective graphical display in the following passage: "Excellence in statistical graphics consists of complex ideas communicated with clarity, precision, and efficiency. [...] Graphicsrevealdata. Indeed, graphics can be more precise and revealing than conventional statistical computations."[32]

For example, the Minard diagram shows the losses suffered by Napoleon's army in the 1812–1813 period. Six variables are plotted: the size of the army, its location on a two-dimensional surface (x and y), time, the direction of movement, and temperature. The line width illustrates a comparison (size of the army at points in time), while the temperature axis suggests a cause of the change in army size. This multivariate display on a two-dimensional surface tells a story that can be grasped immediately while identifying the source data to build credibility. Tufte wrote in 1983 that: "It may well be the best statistical graphic ever drawn."[32]

Not applying these principles may result inmisleading graphs, distorting the message, or supporting an erroneous conclusion. According to Tufte,chartjunkrefers to the extraneous interior decoration of the graphic that does not enhance the message, or gratuitous three-dimensional or perspective effects. Needlessly separating the explanatory key from the image itself, requiring the eye to travel back and forth from the image to the key, is a form of "administrative debris."
The ratio of "data to ink" should be maximized, erasing non-data ink where feasible.[32]TheCongressional Budget Officesummarized several best practices for graphical displays in a June 2014 presentation. These included: a) knowing your audience; b) designing graphics that can stand alone outside the report's context; and c) designing graphics that communicate the key messages in the report.[33]

Useful criteria for a data or information visualization include being based on (non-visual) data, producing an image, and being readable and recognisable.[34]Readability means that it is possible for a viewer to understand the underlying data, such as by making comparisons between proportionally sized visual elements to compare their respective data values; or using a legend to decode a map, like identifying coloured regions on a climate map to read temperature at that location. For greatest efficiency and simplicity of design and user experience, this readability is enhanced through the use of bijective mapping in the design of the image elements, where the mapping of representational element to data variable is unique.[35]Kosara (2007)[34]also identifies the need for a visualisation to be "recognisable as a visualisation and not appear to be something else". He also states that recognisability and readability may not always be required in all types of visualisation, e.g. "informative art" (which would still meet all three above criteria but might not look like a visualisation) or "artistic visualisation" (which similarly is still based on non-visual data to create an image, but may not be readable or recognisable).

AuthorStephen Fewdescribed eight types of quantitative messages that users may attempt to understand or communicate from a set of data, along with the associated graphs used to help communicate the message. Analysts reviewing a set of data may consider whether some or all of the messages and graphic types above are applicable to their task and audience. The process of trial and error to identify meaningful relationships and messages in the data is part ofexploratory data analysis.

A human can distinguish differences in line length, shape, orientation, distances, and color (hue) readily without significant processing effort; these are referred to as "pre-attentive attributes". For example, it may require significant time and effort ("attentive processing") to identify the number of times the digit "5" appears in a series of numbers; but if that digit is different in size, orientation, or color, instances of the digit can be noted quickly through pre-attentive processing.[38]Compelling graphics take advantage of pre-attentive processing and attributes and the relative strength of these attributes. For example, since humans can more easily process differences in line length than surface area, it may be more effective to use a bar chart (which takes advantage of line length to show comparison) rather than pie charts (which use surface area to show comparison).[38]

Almost all data visualizations are created for human consumption. Knowledge of human perception and cognition is necessary when designing intuitive visualizations.[39]Cognition refers to processes in human beings like perception, attention, learning, memory, thought, concept formation, reading, and problem solving.[40]Human visual processing is efficient in detecting changes and making comparisons between quantities, sizes, shapes and variations in lightness. When properties of symbolic data are mapped to visual properties, humans can browse through large amounts of data efficiently.
It is estimated that 2/3 of the brain's neurons can be involved in visual processing. Proper visualization provides a different approach to show potential connections, relationships, etc. which are not as obvious in non-visualized quantitative data. Visualization can become a means ofdata exploration. Studies have shown that individuals used on average 19% less cognitive resources, and were 4.5% better able to recall details, when comparing data visualization with text.[41]

The modern study of visualization started withcomputer graphics, which "has from its beginning been used to study scientific problems. However, in its early days the lack of graphics power often limited its usefulness. The recent emphasis on visualization started in 1987 with the special issue of Computer Graphics on Visualization inScientific Computing. Since then there have been several conferences and workshops, co-sponsored by theIEEE Computer SocietyandACM SIGGRAPH".[42]They have been devoted to the general topics ofdata visualization, information visualization andscientific visualization, and more specific areas such asvolume visualization. In 1786,William Playfairpublished the first presentation graphics.

There is no comprehensive 'history' of data visualization. There are no accounts that span the entire development of visual thinking and the visual representation of data, and which collate the contributions of disparate disciplines.[43]Michael Friendly and Daniel J Denis ofYork Universityare engaged in a project that attempts to provide a comprehensive history of visualization. Contrary to general belief, data visualization is not a modern development. Since prehistory, stellar data, such as the locations of stars, have been visualized on the walls of caves (such as those found inLascaux Cavein Southern France) dating back to thePleistoceneera.[44]Physical artefacts such as Mesopotamianclay tokens(5500 BC), Incaquipus(2600 BC) and Marshall Islandsstick charts(n.d.) can also be considered as visualizing quantitative information.[45][46]

The first documented data visualization can be traced back to 1160 B.C. with theTurin Papyrus Map, which accurately illustrates the distribution of geological resources and provides information about quarrying of those resources.[47]Such maps can be categorized asthematic cartography, which is a type of data visualization that presents and communicates specific data and information through a geographical illustration designed to show a particular theme connected with a specific geographic area. The earliest documented forms of data visualization were various thematic maps from different cultures, and ideograms and hieroglyphs that provided and allowed interpretation of the information illustrated. For example,Linear Btablets ofMycenaeprovided a visualization of information regarding Late Bronze Age era trades in the Mediterranean. The idea of coordinates was used by ancient Egyptian surveyors in laying out towns, earthly and heavenly positions were located by something akin to latitude and longitude at least by 200 BC, and the map projection of a spherical Earth into latitude and longitude byClaudius Ptolemy[c.85–c.165] in Alexandria would serve as reference standards until the 14th century.[47]The invention of paper and parchment allowed further development of visualizations throughout history.
One surviving graph, from the 10th or possibly 11th century, is intended to be an illustration of planetary movement, and was used in an appendix of a textbook in monastery schools.[48]The graph apparently was meant to represent a plot of the inclinations of the planetary orbits as a function of time. For this purpose, the zone of the zodiac was represented on a plane with a horizontal line divided into thirty parts as the time or longitudinal axis. The vertical axis designates the width of the zodiac. The horizontal scale appears to have been chosen for each planet individually, as the periods cannot be reconciled. The accompanying text refers only to the amplitudes. The curves are apparently not related in time.

By the 16th century, techniques and instruments for precise observation and measurement of physical quantities, and geographic and celestial position, were well-developed (for example, a "wall quadrant" constructed byTycho Brahe[1546–1601], covering an entire wall in his observatory). Particularly important were the development of triangulation and other methods to determine mapping locations accurately.[43]Very early on, the measure of time led scholars to develop innovative ways of visualizing the data (e.g. Lorenz Codomann in 1596, Johannes Temporarius in 1596[49]).

The French philosopher and mathematicianRené Descartes, together withPierre de Fermat, developed analytic geometry and the two-dimensional coordinate system, which heavily influenced the practical methods of displaying and calculating values. Fermat andBlaise Pascal's work on statistics and probability theory laid the groundwork for what we now conceptualize as data.[43]According to the Interaction Design Foundation, these developments allowed and helpedWilliam Playfair, who saw potential for graphical communication of quantitative data, to generate and develop graphical methods of statistics.[39]In the second half of the 20th century,Jacques Bertinused quantitative graphs to represent information "intuitively, clearly, accurately, and efficiently".[39]John Tukey and Edward Tufte pushed the bounds of data visualization; Tukey with his new statistical approach of exploratory data analysis, and Tufte with his book "The Visual Display of Quantitative Information", paved the way for refining data visualization techniques for more than statisticians. With the progression of technology came the progression of data visualization, starting with hand-drawn visualizations and evolving into more technical applications – including interactive designs leading to software visualization.[50]

Programs likeSAS,SOFA,R,Minitab, Cornerstone and more allow for data visualization in the field of statistics. Other data visualization applications, more focused and unique to individuals, and programming languages such asD3,Python(through matplotlib and seaborn),JavaScript, and Java (through JavaFX), help to make the visualization of quantitative data a possibility. Private schools have also developed programs to meet the demand for learning data visualization and associated programming libraries, including free programs likeThe Data Incubatoror paid programs likeGeneral Assembly.[51]

Beginning with the symposium "Data to Discovery" in 2013, ArtCenter College of Design, Caltech and JPL in Pasadena have run an annual program on interactive data visualization.[52]The program asks: How can interactive data visualization help scientists and engineers explore their data more effectively? How can computing, design, and design thinking help maximize research results?
What methodologies are most effective for leveraging knowledge from these fields? By encoding relational information with appropriate visual and interactive characteristics to help interrogate, and ultimately gain new insight into, data, the program develops new interdisciplinary approaches to complex science problems, combining design thinking and the latest methods from computing, user-centered design, interaction design and 3D graphics.

Data visualization involves specific terminology, some of which is derived from statistics. For example, authorStephen Fewdefines two types of data, quantitative and categorical, which are used in combination to support a meaningful analysis or visualization. The distinction between quantitative and categorical variables is important because the two types require different methods of visualization. Two primary types ofinformation displaysare tables and graphs. Eppler and Lengler have developed the "Periodic Table of Visualization Methods," an interactive chart displaying various data visualization methods. It includes six types of data visualization methods: data, information, concept, strategy, metaphor and compound.[55]In "Visualization Analysis and Design"Tamara Munznerwrites "Computer-based visualization systems provide visual representations of datasets designed to help people carry out tasks more effectively." Munzner argues that visualization "is suitable when there is a need to augment human capabilities rather than replace people with computational decision-making methods."[56]Examples of specialized chart forms include the variable-width ("variwide") bar chart and the orthogonal (orthogonal composite) bar chart.

Interactive data visualizationenables direct actions on a graphicalplotto change elements and link between multiple plots.[59]Interactive data visualization has been a pursuit ofstatisticianssince the late 1960s. Examples of the developments can be found on theAmerican Statistical Associationvideo lending library.[60]Common interactions include direct manipulation of plot elements and linking between multiple plots, as described above.

There are different approaches on the scope of data visualization. One common focus is on information presentation, such as Friedman (2008). Friendly (2008) presumes two main parts of data visualization:statistical graphics, andthematic cartography.[61]In this line, the "Data Visualization: Modern Approaches" (2007) article gives an overview of seven subjects of data visualization.[62]All these subjects are closely related tographic designand information representation. On the other hand, from acomputer scienceperspective, Frits H. Post in 2002 categorized the field into sub-fields.[27][63]

In theHarvard Business Review, Scott Berinato developed a framework to approach data visualisation.[64]To start thinking visually, users must consider two questions: 1) what you have, and 2) what you are doing. The first step is identifying what data you want visualised. It may be data-driven, like profit over the past ten years, or a conceptual idea, like how a specific organisation is structured. Once this question is answered one can then focus on whether they are trying to communicate information (declarative visualisation) or trying to figure something out (exploratory visualisation).
Scott Berinato combines these questions to give four types of visual communication, each with its own goals.[64]Data and information visualization insights are being applied in areas such as scientific research, digital libraries, data mining, financial data analysis, market studies, manufacturing production control, and drug discovery.[20]A number of notable academic and industry laboratories work in the field, and several conferences, ranked by significance in data visualization research, serve it.[66]For further examples, seeCategory:Computer graphics organizations.

Data presentation architecture(DPA) is a skill-set that seeks to identify, locate, manipulate, format and present data in such a way as to optimally communicate meaning and proper knowledge. Historically, the termdata presentation architectureis attributed to Kelly Lautt:[a]"Data Presentation Architecture (DPA) is a rarely applied skill set critical for the success and value ofBusiness Intelligence. Data presentation architecture weds the science of numbers, data and statistics indiscovering valuable informationfrom data and making it usable, relevant and actionable with the arts of data visualization, communications,organizational psychologyandchange managementin order to provide business intelligence solutions with the data scope, delivery timing, format and visualizations that will most effectively support and drive operational, tactical and strategic behaviour toward understood business (or organizational) goals. DPA is neither an IT nor a business skill set but exists as a separate field of expertise. Often confused with data visualization, data presentation architecture is a much broader skill set that includes determining what data on what schedule and in what exact format is to be presented, not just the best way to present data that has already been chosen. Data visualization skills are one element of DPA."

With its objectives in mind, the actual work of data presentation architecture consists of identifying, locating, manipulating, formatting and presenting data as described above. DPA work shares commonalities with several other fields, including data visualization and business intelligence.
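As a concrete, minimal illustration of the programmatic charting supported by the libraries mentioned earlier (Python's matplotlib), here is a sketch with invented sample data:

    # A minimal bar chart with matplotlib; the data are invented
    # purely for illustration.
    import matplotlib.pyplot as plt

    years = [2019, 2020, 2021, 2022, 2023]
    sales = [120, 95, 140, 160, 185]

    fig, ax = plt.subplots()
    ax.bar(years, sales)                     # encode values as bar lengths
    ax.set_xlabel("Year")                    # label axes and title the chart,
    ax.set_ylabel("Units sold (thousands)")  # following the guidance above on
    ax.set_title("Annual sales")             # supporting text
    plt.show()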
https://en.wikipedia.org/wiki/Data_and_information_visualization
Anillegal numberis a number that represents information which is illegal to possess, utter, propagate, or otherwise transmit in somelegal jurisdiction. Any piece of digital information is representable as a number; consequently, if communicating a specific set of information is illegal in some way, then the number may be illegal as well.[1][2][3]

A number may represent some type ofclassified informationortrade secret, legal to possess only by certain authorized persons. AnAACS encryption key(09 F9 11 02 9D 74 E3 5B D8 41 56 C5 63 56 88 C0) that came to prominence in May 2007 is an example of a number claimed to be a secret, and whose publication or inappropriate possession is claimed to be illegal in the United States. It assists in the decryption of anyHD DVDorBlu-ray Discreleased before that date. The issuers of a series of cease-and-desist letters claim that the key itself is therefore a copyright circumvention device,[4]and that publishing the key violates Title 1 of the USDigital Millennium Copyright Act. In part of theDeCSScourt order[5]and in the AACS legal notices, the claimed protection for these numbers is based on their mere possession and the value or potential use of the numbers. This makes their status and the legal issues surrounding their distribution quite distinct from those ofcopyright infringement.[5]

Any image file or an executable program[6]can be regarded as simply a very largebinary number. In certain jurisdictions, there are images that are illegal to possess,[7]due toobscenityor secrecy/classified status, so the corresponding numbers could be illegal.[1][8]

In 2011, Sony suedGeorge Hotzand members of fail0verflow forjailbreakingthePlayStation 3.[9]Part of the lawsuit complaint was that they had published PS3 keys. Sony also threatened to sue anyone who distributed the keys.[10]Sony later accidentally retweeted an olderdonglekey through its fictionalKevin Butlercharacter.[11]

In protest of the DeCSS case, many people createdsteganographicversions of the illegal information (i.e., hiding them in some form in flags etc.). Dave Touretzky of Carnegie Mellon University created the Gallery of DeCSS Descramblers. In theAACS encryption key controversy, aFree Speech Flagwas created. Some illegal numbers are so short that a simple flag could be created by using triples of thecomponentsas describingred-green-bluecolors. The argument is that if short numbers can be made illegal, then any representation of those numbers also becomes illegal, like simple patterns of colors. In theSony Computer Entertainment v. Hotzcase, many bloggers (including one atYale Law School) made a "new free speech flag" in homage to the AACS Free Speech Flag. Most of these were based on the "dongle key" rather than the keys Hotz actually released.[12]Several users of other websites posted similar flags.[13]

Anillegal primeis an illegal number which is alsoprime. One of the earliest illegal prime numbers was generated in March 2001 byPhil Carmody. Itsbinaryrepresentation corresponds to acompressedversion of theCsource codeof acomputer programimplementing theDeCSSdecryption algorithm, which can be used by a computer to circumvent a DVD'scopy protection.[14]

Protests against the indictment of DeCSS authorJon Lech Johansenand legislation prohibiting publication of DeCSS code took many forms.[15]One of them was the representation of the illegal code in a form that had anintrinsically archivablequality.
Since the bits making up a computer program also represent a number, the plan was for the number to have some special property that would make it archivable and publishable (one method was to print it on a T-shirt). Theprimalityof a number is a fundamental property ofnumber theoryand is therefore not dependent on legal definitions of any particular jurisdiction. The large prime database of thePrimePageswebsite records the top 20 primes of various special forms; one of these forms is primes whose primality has been proved using theelliptic curve primality proving(ECPP) algorithm. Thus, if the number were large enough and proved prime using ECPP, it would be published. There are other contexts in which smaller numbers have run afoul of laws or regulations, or drawn the attention of authorities.
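The premise that any file is just a very large binary number can be made concrete with a short sketch (the file name here is hypothetical):

    # Any digital file can be read as one (very large) integer, which is
    # the premise behind "illegal numbers".
    with open("program.bin", "rb") as f:
        data = f.read()

    n = int.from_bytes(data, byteorder="big")
    print(f"{len(data)} bytes correspond to a {n.bit_length()}-bit number")

    # The mapping is reversible: the number alone reproduces the file
    # (up to leading zero bytes, which the integer cannot preserve).
    restored = n.to_bytes((n.bit_length() + 7) // 8, byteorder="big")
    assert restored == data.lstrip(b"\x00")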
https://en.wikipedia.org/wiki/Illegal_number
1Grefers to the first generation ofmobile telecommunicationsstandards, introduced in the 1980s. This generation was characterized by the use ofanalogaudio transmissions, a major distinction from the subsequent2Gnetworks, which were fullydigital. The term "1G" itself was not used at the time, but has since been retroactively applied to describe the early era ofcellular networks.

During the 1G era, various regional standards were developed and deployed in different countries, rather than a single global system. Among the most prominent were theNordic Mobile Telephone(NMT) system and theAdvanced Mobile Phone System(AMPS), which were widely adopted in their respective regions.[1]The lack of a unified global standard resulted in a fragmented landscape, with different countries and regions utilizing different technologies for mobile communication. As digital technology advanced, the inherent advantages of digital systems over analog led to the eventual replacement of 1G by 2G networks. While many 1G networks were phased out by the early 2000s, some continued to operate into the 2010s, particularly in less developed regions.

The antecedent to 1G technology is themobile radio telephone(i.e. "0G"), where portable phones would connect to a centralised operator. 1G refers to the very first generation of cellular networks.[2]Cellular technologies employ a network of cells throughout a geographical area using low-power radio transmitters.[1]

The first commercialcellular networkwas launched in Japan byNippon Telegraph and Telephone(NTT) in 1979, initially in the metropolitan area of Tokyo. The first phone that used this network was the TZ-801, built byPanasonic.[3]Within five years, the NTT network had been expanded to cover the whole population of Japan and became the first nationwide 1G/cellular network. Before the network in Japan,Bell Laboratoriesbuilt the first cellular network aroundChicagoin 1977 and trialled it in 1978.[4]

As in the pre-cellular era, theNordic countrieswere among the pioneers in wireless technologies. These countries together designed theNMTstandard, which first launched in Sweden in 1981.[5]NMT was the first mobile phone network to feature internationalroaming. In 1983, the first 1G cellular network launched in the United States, operated by Chicago-basedAmeritechusing theMotorola DynaTACmobile phone. In the early to mid 1990s, 1G was superseded by newer 2G (second generation) cellular technologies such asGSMandcdmaOne. Although 1G also used digital signaling to connect the radio towers (which listen to the handsets) to the rest of the telephone system, the voice itself during a call is encoded to digital signals in 2G, whereas 1G uses analog FM modulation for the voice transmission, much like a 2-wayland mobile radio.

Most 1G networks had been discontinued by the early 2000s. Some regions, especially in Eastern Europe, continued running these networks for much longer. The last operating 1G network was closed down in Russia in 2017. After Japan, the earliest commercial cellular networks launched in 1981 in Sweden, Norway and Saudi Arabia, followed by Denmark, Finland and Spain in 1982, the U.S. in 1983 and Hong Kong, South Korea, Austria and Canada in 1984.
By 1986, networks had also launched in Tunisia, Malaysia, Oman, Ireland, Italy, Luxembourg, Netherlands, United Kingdom, West Germany, France, South Africa, Israel, Thailand, Indonesia, Iceland, Turkey, the Virgin Islands and Australia.[6]Generally, African countries were slower to take up 1G networks, while Eastern European countries were among the last due to the political situation.[7]

In Europe, the United Kingdom had the largest number of cellular subscribers as of 1990, numbering 1.1 million, while the second largest market was Sweden with 482 thousand.[7]Although Japan was the first country with a nationwide cellular network, the number of users was significantly lower than in other developed economies, with a penetration rate of only 0.15 percent in 1989.[5]As of January 1991, the highest penetration rates were in Sweden and Finland, with both countries above 50 percent, closely followed by Norway and Iceland. The United States had a rate of 21.2 percent. In most other European countries it was below 10 percent.[8]A number of analog cellular technologies were used, includingNMTandAMPS.[6]
https://en.wikipedia.org/wiki/1G
TheInstitute of Electrical and Electronics Engineers(IEEE)[a]is an American501(c)(3)public charityprofessional organization forelectrical engineering,electronics engineering, and other related disciplines. The IEEE has a corporate office inNew York Cityand an operations center inPiscataway, New Jersey. The IEEE was formed in 1963 as an amalgamation of theAmerican Institute of Electrical Engineersand theInstitute of Radio Engineers.[5]

The IEEE traces its founding to 1884 and theAmerican Institute of Electrical Engineers. In 1912, the rivalInstitute of Radio Engineerswas formed.[6]Although the AIEE was initially larger, the IRE attracted more students and was larger by the mid-1950s. The AIEE and IRE merged in 1963.[7]The IEEE is headquartered inNew York City, but most business is done at the IEEE Operations Center[8]inPiscataway, New Jersey, opened in 1975.[9]The Australian Section of the IEEE existed between 1972 and 1985, after which it split intostate- and territory-basedsections.[10]As of 2023[update], IEEE has over 460,000 members in 190 countries, with more than 66 percent from outside the United States.[11]

IEEE claims to produce over 30% of the world's literature in the electrical, electronics, andcomputer engineeringfields, publishing approximately 200peer-reviewed journals[12]and magazines. IEEE publishes more than 1,700 conference proceedings every year.[13]The published content in these journals, as well as the content from several hundred annualconferencessponsored by the IEEE, is available in the IEEE Electronic Library (IEL)[14]through theIEEEXplore[15]platform, for subscription-based access and individual publication purchases.[16]In addition to journals and conference proceedings, the IEEE also publishes tutorials and standards that are produced by its standardization committees. The organization also has its own IEEE paper format.[17]IEEE provides theIEEE Editorial Style Manual for Authors, astyle guidefor article authors, along with basic templates inMicrosoft WordandLaTeXfile formats.[18][19]It is based onThe Chicago Manual of Styleand does not itself cover grammar and usage, which are deferred to the Chicago guidelines.[20][21]In April 2024, IEEE banned the use of theLennatest image and stated that it would decline papers containing it.[22][23]

IEEE has 39 technical societies, each focused on a certain knowledge area, which provide specialized publications, conferences,business networkingand other services.[24]In September 2008, theIEEE History Committeefounded theIEEE Global History Network,[25][26][27]which now redirects to theEngineering and Technology History Wiki.[28][25]

The IEEE Foundation is a charitable foundation established in 1973[29]to support and promote technology education, innovation, and excellence.[30]It is incorporated separately from the IEEE, although it has a close relationship to it. Members of the Board of Directors of the foundation are required to be active members of IEEE, and one third of them must be current or former members of the IEEE Board of Directors. Initially, the role of the IEEE Foundation was to accept and administer donations for the IEEE Awards program, but donations increased beyond what was necessary for this purpose, and the scope was broadened.
In addition to soliciting and administering unrestricted funds, the foundation also administers donor-designated funds supporting particular educational, humanitarian, historical preservation, and peer recognition programs of the IEEE.[30]As of the end of 2014, the foundation's total assets were nearly $45 million, split equally between unrestricted and donor-designated funds.[31]

In May 2019, IEEE restrictedHuaweiemployees from peer reviewing papers or handling papers as editors due to the "severe legal implications" of U.S. government sanctions against Huawei.[32]As members of its standard-setting body, Huawei employees could continue to exercise their voting rights, attend standards development meetings, submit proposals and comment in public discussions on new standards.[33][34]The ban sparked outrage among Chinese scientists on social media. Some professors in China decided to cancel their memberships.[35][36]On June 3, 2019, IEEE lifted restrictions on Huawei's editorial and peer review activities after receiving clearance from the United States government.[37][38][39]

On February 26, 2022, the chair of the IEEE Ukraine Section, Ievgen Pichkalov, publicly appealed to the IEEE members to "freeze [IEEE] activities and membership in Russia" and requested "public reaction and strict disapproval of Russia's aggression" from the IEEE and IEEE Region 8.[40]On March 17, 2022, an article in the form of a Q&A interview with IEEE Russia (Siberia) senior member Roman Gorbunov, titled "A Russian Perspective on the War in Ukraine", was published inIEEE Spectrumto demonstrate "the plurality of views among IEEE members" and the "views that are at odds with international reporting on the war in Ukraine".[41]On March 30, 2022, activist Anna Rohrbach created an open letter to the IEEE in an attempt to have them directly address the article, stating that the article used "common narratives in Russian propaganda" on the2022 Russian invasion of Ukraineand requesting thatIEEE Spectrumacknowledge "that they have unwittingly published a piece furthering misinformation and Russian propaganda."[42]A few days later, a note from the editors was added on April 6[43]with an apology "for not providing adequate context at the time of publication", though the editors did not revise the original article.[44]
https://en.wikipedia.org/wiki/IEEE
TheMobile Broadband Allianceis aconsortiumof companies that have aligned to promote hardware with built-inHSPAbroadband. The companies include the mobile operatorsVodafone,Orange, Telefónica Europe,T-Mobile,3Group,Telecom ItaliaandTeliaSonera, and the hardware manufacturersDell,Asus,Toshiba,Lenovo,Qualcomm,Ericsson,Gemalto, andECS.
https://en.wikipedia.org/wiki/Mobile_Broadband_Alliance
AWalrasian auction, introduced byLéon Walras, is a type of simultaneousauctionwhere eachagentcalculates its demand for the good at every possible price and submits this to an auctioneer. The price is then set so that the total demand across all agents equals the total amount of the good. Thus, a Walrasian auction perfectly matches the supply and the demand. Walras suggested thatequilibriumwould always be achieved through a process oftâtonnement(French for "groping"), a form ofhill climbing.[1]In the 1970s, however, theSonnenschein–Mantel–Debreu theoremproved that such a process would not necessarily reach a unique and stable equilibrium, even if the market is populated with perfectlyrational agents.[2]

TheWalrasian auctioneeris the presumed auctioneer that matchessupply and demandin a market ofperfect competition. The auctioneer provides for the features of perfect competition:perfect informationand notransaction costs. The process is calledtâtonnement, orgroping, relating to finding the market-clearing price for all commodities and giving rise togeneral equilibrium. The device is an attempt to avoid one of the deepest conceptual problems of perfect competition, which may, essentially, be defined by the stipulation that no agent can affect prices. But if no one can affect prices, no one can change them, so prices cannot change. However, involving as it does an artificial solution, the device is less than entirely satisfactory.

Until Walker and van Daal's 2014 translation (retitledElements of Theoretical Economics), William Jaffé'sElements of Pure Economics(1954) was for many years the only English translation of Walras'sÉléments d'économie politique pure. Walker and van Daal argue that the idea of the Walrasian auction and Walrasian auctioneer resulted from Jaffé's mistranslation of the French wordcrieurs(criers) intoauctioneers. Walker and van Daal call this "a momentous error that has misled generations of readers into thinking that the markets in Walras's model are auction markets and that he assigned the function of changing prices in his model to an auctioneer."[3]
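A toy sketch of the tâtonnement process described above: the auctioneer raises the price when demand exceeds supply and lowers it otherwise. The linear demand and supply curves and the step size are invented for illustration; as the Sonnenschein–Mantel–Debreu result warns, real markets need not converge this way.

    def demand(p):            # quantity demanded falls with price
        return 100 - 2 * p

    def supply(p):            # quantity supplied rises with price
        return 10 + 4 * p

    p, step = 1.0, 0.05
    for _ in range(1000):
        excess = demand(p) - supply(p)
        if abs(excess) < 1e-9:
            break
        p += step * excess    # adjust price in proportion to excess demand

    print(f"clearing price = {p:.4f}, quantity = {demand(p):.4f}")  # p -> 15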
https://en.wikipedia.org/wiki/Walrasian_auction
Unix-like operating systems identify a user by a value called auser identifier, often abbreviated touser IDorUID. The UID, along with the group identifier (GID) and other access control criteria, is used to determine which system resources a user can access. Thepassword filemaps textual user names to UIDs. UIDs are stored in theinodesof the Unixfile system, running processes, tar archives, and the now-obsoleteNetwork Information Service. InPOSIX-compliant environments, the shell commandidgives the current user's UID, as well as more information such as the user name, primary user group and group identifier (GID). The POSIX standard introduced three different UID fields into the process descriptor table, to allow privileged processes to take on different roles dynamically: the effective, real, and saved UIDs.

The effective UID (euid) of a process is used for most access checks. It is also used as the owner for files created by that process. The effective GID (egid) of a process also affects access control and may also affect file creation, depending on the semantics of the specific kernel implementation in use and possibly themountoptions used. According toBSD Unixsemantics, the group ownership given to a newly created file is unconditionally inherited from the group ownership of the directory in which it is created. According toAT&TUNIX System Vsemantics (also adopted byLinuxvariants), a newly created file is normally given the group ownership specified by theegidof the process that creates the file. Most filesystems implement a method to select whether BSD or AT&T semantics should be used regarding group ownership of a newly created file; BSD semantics are selected for specific directories when the S_ISGID (s-gid) permission is set.[1]

Linux also has a file system user ID (fsuid) which is used explicitly for access control to the file system. It matches theeuidunless explicitly set otherwise. It may beroot's user ID only ifruid,suid, oreuidis root. Whenever theeuidis changed, the change is propagated to thefsuid. The intent offsuidis to permit programs (e.g., theNFSserver) to limit themselves to the file system rights of some givenuidwithout giving thatuidpermission to send them signals. Since kernel 2.0, the existence offsuidis no longer necessary because Linux adheres toSUSv3rules for sending signals, butfsuidremains for compatibility reasons.[2]

The saved user ID is used when a program running with elevated privileges needs to do some unprivileged work temporarily; changingeuidfrom a privileged value (typically0) to some unprivileged value (anything other than the privileged value) causes the privileged value to be stored insuid. Later, a program'seuidcan be set back to the value stored insuid, so that elevated privileges can be restored; an unprivileged process may set itseuidto one of only three values: the value ofruid, the value ofsuid, or the value ofeuid.

The real UID (ruid) and real GID (rgid) identify the real owner of the process and affect the permissions for sending signals. A process without superuser privileges may signal another process only if the sender'sruidoreuidmatches the receiver'sruidorsuid. Because achild processinherits its credentials from its parent, a child and parent may signal each other. POSIX requires the UID to be anintegertype. Most Unix-like operating systems represent the UID as an unsigned integer.
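A small Python sketch showing the three UIDs of the current process; os.getresuid() is available on platforms that support getresuid(2), such as Linux:

    # Inspect the real, effective, and saved UIDs of the current process.
    import os

    ruid, euid, suid = os.getresuid()
    print(f"real={ruid} effective={euid} saved={suid}")

    # A set-user-ID program can drop privileges temporarily and restore
    # them later, because the old euid remains in the saved UID:
    # os.seteuid(ruid)   # drop: euid becomes the real (unprivileged) UID
    # os.seteuid(suid)   # restore: allowed, since suid holds the old euid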
The size of UID values varies amongst different systems; some UNIX OS's[which?]used 15-bit values, allowing values up to 32767[citation needed], while others such as Linux (before version 2.4) supported16-bitUIDs, making 65536 unique IDs possible. The majority of modern Unix-like systems (e.g., Solaris 2.0 in 1990, Linux 2.4 in 2001) have switched to32-bitUIDs, allowing 4,294,967,296 (2^32) unique IDs.

TheLinux Standard BaseCore Specification specifies that UID values in the range 0 to 99 should be statically allocated by the system, and shall not be created by applications, while UIDs from 100 to 499 should be reserved for dynamic allocation by system administrators and post-install scripts.[3]Debian Linuxnot only reserves the range 100–999 for dynamically allocated system users and groups, but also centrally and statically allocates users and groups in the range 60000–64999 and further reserves the range 65000–65533.[4]Systemddefines a number of special UID ranges, including dedicated ranges for containers and other purposes.[5]OnFreeBSD, porters who need a UID for their package can pick a free one from the range 50 to 999 and then register the static allocation.[6][7]

Some POSIX systems allocate UIDs for new users starting from 500 (macOS,Red Hat Enterprise Linuxtill version 6), others start at 1000 (Red Hat Enterprise Linux since version 7,[8]openSUSE,Debian[4]). On many Linux systems, these ranges are specified in/etc/login.defs, foruseraddand similar tools. Central UID allocations in enterprise networks (e.g., viaLDAPandNFSservers) may limit themselves to using only UID numbers well above 1000, and outside the range 60000–65535, to avoid potential conflicts with UIDs locally allocated on client computers. When new users are created locally, the local system is supposed to check for and avoid conflicts with UIDs already existing in the Name Service Switch (NSS) databases.[9]

OS-level virtualizationcan remap user identifiers, e.g. usingLinux namespaces, and therefore needs to allocate ranges into which remapped UIDs and GIDs are mapped. The systemd authors recommend thatOS-level virtualizationsystems should allocate 65536 (2^16) UIDs per container, and map them by adding an integer multiple of 2^16.[5]

NFSv4was intended to help avoid numeric identifier collisions by identifying users (and groups) in protocol packets using textual "user@domain" names rather than integer numbers. However, as long as operating-system kernels and local file systems continue to use integer user identifiers, this comes at the expense of additional translation steps (using idmap daemon processes), which can introduce additional failure points if local UID mapping mechanisms or databases get configured incorrectly, lost, or out of sync. The "@domain" part of the user name could be used to indicate which authority allocated a particular name, for example in the form of a DNS domain name. But in practice many existing implementations only allow setting the NFSv4 domain to a fixed value, thereby rendering it useless.
https://en.wikipedia.org/wiki/User_identifier
In linear algebra, a Hilbert matrix, introduced by Hilbert (1894), is a square matrix with entries being the unit fractions $H_{ij}=\frac{1}{i+j-1}$. For example, the 5 × 5 Hilbert matrix has first row $(1,\tfrac12,\tfrac13,\tfrac14,\tfrac15)$, second row $(\tfrac12,\tfrac13,\tfrac14,\tfrac15,\tfrac16)$, and so on. The entries can also be defined by the integral $H_{ij}=\int_0^1 x^{i+j-2}\,dx$, that is, as a Gramian matrix for powers of x. It arises in the least squares approximation of arbitrary functions by polynomials.

The Hilbert matrices are canonical examples of ill-conditioned matrices, being notoriously difficult to use in numerical computation. For example, the 2-norm condition number of the matrix above is about 4.8×10^5.

Hilbert (1894) introduced the Hilbert matrix to study the following question in approximation theory: "Assume that I = [a, b] is a real interval. Is it then possible to find a non-zero polynomial P with integer coefficients, such that the integral $\int_a^b P(x)^2\,dx$ is smaller than any given bound ε > 0, taken arbitrarily small?" To answer this question, Hilbert derives an exact formula for the determinant of the Hilbert matrices and investigates their asymptotics. He concludes that the answer to his question is positive if the length b − a of the interval is smaller than 4.

The Hilbert matrix is symmetric and positive definite. The Hilbert matrix is also totally positive (meaning that the determinant of every submatrix is positive). The Hilbert matrix is an example of a Hankel matrix. It is also a specific example of a Cauchy matrix.

The determinant can be expressed in closed form, as a special case of the Cauchy determinant. The determinant of the n × n Hilbert matrix is $\det(H_n)=\frac{c_n^{\,4}}{c_{2n}}$, where $c_n=\prod_{i=1}^{n-1} i^{\,n-i}=\prod_{i=1}^{n-1} i!$. Hilbert already mentioned the curious fact that the determinant of the Hilbert matrix is the reciprocal of an integer (see sequence A005249 in the OEIS), which also follows from the identity $\frac{1}{\det(H_n)}=\prod_{k=1}^{n-1}(2k+1)\binom{2k}{k}^{2}$. Using Stirling's approximation of the factorial, one can establish the asymptotic result $\det(H_n)=a_n\,n^{-1/4}(2\pi)^n\,4^{-n^2}$, where $a_n$ converges to the constant $e^{1/4}\,2^{1/12}\,A^{-3}\approx 0.6450$ as $n\to\infty$, where A is the Glaisher–Kinkelin constant.

The inverse of the Hilbert matrix can be expressed in closed form using binomial coefficients; its entries are $(H^{-1})_{ij}=(-1)^{i+j}(i+j-1)\binom{n+i-1}{n-j}\binom{n+j-1}{n-i}\binom{i+j-2}{i-1}^{2}$, where n is the order of the matrix.[1] It follows that the entries of the inverse matrix are all integers, and that the signs form a checkerboard pattern, being positive on the principal diagonal. For example, the inverse of the 3 × 3 Hilbert matrix has rows (9, −36, 30), (−36, 192, −180), and (30, −180, 180).

The condition number of the n × n Hilbert matrix grows as $O\left(\left(1+\sqrt{2}\right)^{4n}/\sqrt{n}\right)$.

The method of moments applied to polynomial distributions results in a Hankel matrix, which in the special case of approximating a probability distribution on the interval [0, 1] results in a Hilbert matrix. This matrix needs to be inverted to obtain the weight parameters of the polynomial distribution approximation.[2]
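A short self-check of the closed-form inverse above, using exact rational arithmetic from Python's standard library (a sketch, not tied to any particular linear-algebra package):

from fractions import Fraction
from math import comb

def hilbert(n):
    # H[i][j] = 1/(i + j - 1), with 1-based i, j
    return [[Fraction(1, i + j - 1) for j in range(1, n + 1)]
            for i in range(1, n + 1)]

def hilbert_inverse(n):
    # integer entries from the binomial-coefficient formula in the text
    return [[(-1) ** (i + j) * (i + j - 1)
             * comb(n + i - 1, n - j) * comb(n + j - 1, n - i)
             * comb(i + j - 2, i - 1) ** 2
             for j in range(1, n + 1)]
            for i in range(1, n + 1)]

n = 5
H, Hinv = hilbert(n), hilbert_inverse(n)
for i in range(n):           # verify H * Hinv = I, exactly
    for j in range(n):
        s = sum(H[i][k] * Hinv[k][j] for k in range(n))
        assert s == (1 if i == j else 0)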
https://en.wikipedia.org/wiki/Hilbert_matrix
The activation function of a node in an artificial neural network is a function that calculates the output of the node based on its individual inputs and their weights. Nontrivial problems can be solved using only a few nodes if the activation function is nonlinear.[1]

Modern activation functions include the logistic (sigmoid) function used in the 2012 speech recognition model developed by Hinton et al.;[2] the ReLU used in the 2012 AlexNet computer vision model[3][4] and in the 2015 ResNet model; and the smooth version of the ReLU, the GELU, which was used in the 2018 BERT model.[5]

Aside from their empirical performance, activation functions also have different mathematical properties. These properties do not decisively influence performance, nor are they the only mathematical properties that may be useful. For instance, the strictly positive range of the softplus makes it suitable for predicting variances in variational autoencoders.

The most common activation functions can be divided into three categories: ridge functions, radial functions and fold functions. An activation function $f$ is saturating if $\lim_{|v|\to\infty}|\nabla f(v)|=0$; it is nonsaturating if $\lim_{|v|\to\infty}|\nabla f(v)|\neq 0$. Non-saturating activation functions, such as ReLU, may be better than saturating activation functions, because they are less likely to suffer from the vanishing gradient problem.[8]

Ridge functions are multivariate functions acting on a linear combination of the input variables. Often used examples include the linear, ReLU, Heaviside, and logistic activations. In biologically inspired neural networks, the activation function is usually an abstraction representing the rate of action potential firing in the cell.[9] In its simplest form, this function is binary, that is, either the neuron is firing or not. Neurons also cannot fire faster than a certain rate, motivating sigmoid activation functions whose range is a finite interval. The function looks like $\phi(\mathbf v)=U(a+\mathbf v'\mathbf b)$, where $U$ is the Heaviside step function. If a line has a positive slope, on the other hand, it may reflect the increase in firing rate that occurs as input current increases. Such a function would be of the form $\phi(\mathbf v)=a+\mathbf v'\mathbf b$.

A special class of activation functions known as radial basis functions (RBFs) are used in RBF networks. These activation functions can take many forms, but they are usually found as Gaussian or related functions of the distance to a center, where $\mathbf c$ is the vector representing the function center and $a$ and $\sigma$ are parameters affecting the spread of the radius.

Periodic functions can serve as activation functions. Usually the sinusoid is used, as any periodic function is decomposable into sinusoids by the Fourier transform.[10] Quadratic activation maps $x\mapsto x^2$.[11][12]

Folding activation functions are extensively used in the pooling layers in convolutional neural networks, and in output layers of multiclass classification networks. These activations perform aggregation over the inputs, such as taking the mean, minimum or maximum. In multiclass classification the softmax activation is often used.
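For concreteness, here is a minimal sketch of the activation functions named above (logistic, ReLU, GELU) plus a Gaussian radial basis function, in plain Python:

import math

def logistic(x):                 # saturating sigmoid
    return 1.0 / (1.0 + math.exp(-x))

def relu(x):                     # nonsaturating as x grows
    return max(0.0, x)

def gelu(x):                     # smooth ReLU variant, x * Phi(x)
    return x * 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def gaussian_rbf(x, c, sigma):   # radial function centered at c
    return math.exp(-((x - c) ** 2) / (2.0 * sigma ** 2))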
The following table compares the properties of several activation functions that are functions of one fold x from the previous layer or layers, where $g_{\lambda,\sigma,\mu,\beta}(x)=\frac{(x-\lambda)\,\mathbf 1_{\{x\geqslant\lambda\}}}{1+e^{-\operatorname{sgn}(x-\mu)\left(\frac{|x-\mu|}{\sigma}\right)^{\beta}}}$.[19] A further table lists activation functions that are not functions of a single fold x from the previous layer or layers.

In quantum neural networks programmed on gate-model quantum computers, based on quantum perceptrons instead of variational quantum circuits, the non-linearity of the activation function can be implemented with no need of measuring the output of each perceptron at each layer. The quantum properties loaded within the circuit such as superposition can be preserved by creating the Taylor series of the argument computed by the perceptron itself, with suitable quantum circuits computing the powers up to a wanted approximation degree. Because of the flexibility of such quantum circuits, they can be designed in order to approximate any arbitrary classical activation function.[25]
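As a concrete instance of a fold activation, here is a numerically stable softmax of the kind used in multiclass output layers (an illustrative sketch):

import math

def softmax(xs):
    # aggregates the whole input vector; subtracting the max avoids overflow
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

print(softmax([1.0, 2.0, 3.0]))  # three probabilities summing to 1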
https://en.wikipedia.org/wiki/Activation_function
Avicious circle(orcycle) is a complexchain of eventsthat reinforces itself through afeedback loop, with detrimental results.[1]It is a system with no tendency towardequilibrium(social,economic,ecological, etc.), at least in the short run. Each iteration of the cycle reinforces the previous one, in an example ofpositive feedback. A vicious circle will continue in the direction of its momentum until an external factor intervenes to break the cycle. A well-known example of a vicious circle in economics ishyperinflation. When the results are not detrimental but beneficial, the termvirtuous cycleis used instead. The contemporarysubprime mortgage crisisis a complex group of vicious circles, both in its genesis and in its manifold outcomes, most notably thelate 2000s recession. A specific example is the circle related to housing. As housing prices decline, more homeowners go "underwater", when the market value of a home drops below that of the mortgage on it. This provides an incentive to walk away from the home, increasing defaults and foreclosures. This, in turn, lowers housing values further from over-supply, reinforcing the cycle.[2] The foreclosures reduce the cash flowing into banks and the value of mortgage-backed securities (MBS) widely held by banks. Banks incur losses and require additional funds, also called "recapitalization". If banks are not capitalized sufficiently to lend, economic activity slows andunemploymentincreases, which further increase the number of foreclosures. EconomistNouriel Roubinidiscussed vicious circles in the housing and financial markets in interviews withCharlie Rosein September and October 2008.[3][4][5] By involving all stakeholders in managing ecological areas, a virtuous circle can be created where improved ecology encourages the actions that maintain and improve the area.[6] Other examples include thepoverty cycle,sharecropping, and the intensification ofdrought. In 2021, Austrian ChancellorAlexander Schallenbergdescribed the recurring need for lockdowns in theCOVID-19 pandemicas a vicious circle that could only be broken by a legally-required vaccination program.[7]
https://en.wikipedia.org/wiki/Virtuous_circle_and_vicious_circle
Web server software allows computers to act as web servers. The first web servers supported only static files, such as HTML (and images), but now they commonly allow embedding of server-side applications. Some web application frameworks include simple HTTP servers. For example, the Django framework provides runserver, and PHP has a built-in server. These are generally intended only for use during initial development; a production server requires a more robust HTTP front-end such as one of the servers listed here. Some features may be intentionally left out of a web server to avoid featuritis.
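Python ships a comparable development-only server in its standard library; a one-line sketch (the port and bind address are arbitrary choices here):

from http.server import HTTPServer, SimpleHTTPRequestHandler

# serve static files from the current directory; development use only,
# like the framework servers mentioned above
HTTPServer(("127.0.0.1", 8000), SimpleHTTPRequestHandler).serve_forever()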
https://en.wikipedia.org/wiki/Comparison_of_web_server_software
OpenBTS (Open Base Transceiver Station) is a software-based GSM access point, allowing standard GSM-compatible mobile phones to be used as SIP endpoints in Voice over IP (VoIP) networks. OpenBTS is open-source software developed and maintained by Range Networks. The public release of OpenBTS is notable for being the first free-software implementation of the lower three layers of the industry-standard GSM protocol stack. It is written in C++ and released as free software under the terms of version 3 of the GNU Affero General Public License.

OpenBTS replaces the conventional GSM operator core network infrastructure from layer 3 upwards. Instead of relying on external base station controllers for radio resource management, OpenBTS units perform this function internally. Instead of forwarding call traffic through to an operator's mobile switching center, OpenBTS delivers calls via SIP to a VoIP soft switch (such as FreeSWITCH or Yate) or PBX (such as Asterisk). This VoIP switch or PBX software can be installed on the same computer used to run OpenBTS itself, forming a self-contained cellular network in a single computer system. Multiple OpenBTS units can also share a common VoIP switch or PBX to form larger networks.[2]

The OpenBTS Um air interface uses a software-defined radio transceiver with no specialized GSM hardware. The original implementation used a Universal Software Radio Peripheral from Ettus Research, but has since been expanded to support several digital radios in implementations ranging from full-scale base stations to embedded femtocells.

The project was started by Harvind Samra and David A. Burgess,[3] with the aim of drastically reducing the cost of GSM service provision in rural areas, the developing world, and hard-to-reach locations such as oil rigs.[4] The project was initially conducted through Kestrel Signal Processing, the founders' consulting firm. On September 14, 2010, at the Fall 2010 DEMO conference, the original authors launched Range Networks as a start-up company to commercialize OpenBTS-based products.[5] In September 2013, Burgess left Range Networks, started a new venture called Legba,[6] and began a close collaboration with Null Team SRL, the developers of Yate. In February 2014, Legba and Null announced the release of YateBTS, a fork of the OpenBTS project that uses Yate for its control layers and network interfaces.

A large number of experimental installations have shown that OpenBTS can run on extremely low-overhead platforms, including some CDMA handsets, making a GSM gateway to a CDMA network. Computer security researcher Chris Paget reported[7] that a handheld device, such as an Android phone, could act as a gateway base station to which handsets can connect; the Android device then connects calls using an on-board Asterisk server and routes them to the PSTN via SIP over an existing 3G network.

At the 2010 DEF CON conference, it was demonstrated with OpenBTS that GSM calls can be intercepted because in GSM the handset does not authenticate the base station prior to accessing the network.[8] OpenBTS has been used by the security research community to mount attacks on cellular phone baseband processors.[9][10] Previously, investigating and conducting such attacks was considered impractical due to the high cost of traditional cellular base station equipment.

Large-scale live tests of OpenBTS have been conducted in the United States in Nevada and northern California using temporary radio licenses applied for through Kestrel Signal Processing and Range Networks, Inc.
During the Burning Man festival in August 2008, a week-long live field test was run under a special temporary authorization license.[11][12] Although this test had not been intended to be open to Burning Man attendees in general, a number of individuals in the vicinity succeeded in making outgoing calls after a mis-configured Asterisk PBX installation allowed through test calls prefixed with an international code.[13] The test connected about 120 phone calls to 95 numbers in area codes across North America. At the 2009 Burning Man festival, a larger test setup was run using a 3-sector system.[14] For the 2010 festival, an even larger 2-sector, 3-carrier system was tested. At the 2011 festival, the OpenBTS project set up a 3-site network with a VSAT gateway and worked in conjunction with the Voice over IP services company Voxeo to provide much of the off-site call routing.[15][16]

RELIEF is a series of disaster response exercises managed by the Naval Postgraduate School in California, USA.[17] Range Networks operated OpenBTS test networks at the RELIEF exercises in November 2011[18] and February 2012.[19]

In 2010, an OpenBTS system was installed on the island of Niue and became the first installation to be connected and tested by a telecommunications company. Niue is a very small island country with a population of about 1,700, too small to attract mobile telecommunications providers. The cost structure of OpenBTS suited Niue, which required a mobile phone service but did not have the volume of potential customers to justify buying and supporting a conventional GSM base station system.[20] The success of this installation and the demonstrated demand for service helped bootstrap later commercial services. The OpenBTS installation was decommissioned around February 2011 by Niue Telecom; a commercial-grade GSM 900 network with EDGE support was launched a few months later (three sites, at Kaimiti O2, Sekena S2/2/2 and Avatele S2/2/2), providing full coverage around the island and around the reef. The new installation included a pre-pay system, USSD, international SMS and a new international gateway.

From July 26 to July 29, 2012, the Ninja Networks team set up a "NinjaTel Van" in the vendor area[21] of DEF CON 20 (at the Rio Hotel/Casino in Las Vegas). It used OpenBTS and served a small network of 650 GSM phones with custom SIM cards.[22]
https://en.wikipedia.org/wiki/OpenBTS
Diffie–Hellman(DH)key exchange[nb 1]is a mathematicalmethodof securely generating a symmetriccryptographic keyover a public channel and was one of the firstpublic-key protocolsas conceived byRalph Merkleand named afterWhitfield DiffieandMartin Hellman.[1][2]DH is one of the earliest practical examples of public key exchange implemented within the field of cryptography. Published in 1976 by Diffie and Hellman, this is the earliest publicly known work that proposed the idea of a private key and a corresponding public key. Traditionally, secure encrypted communication between two parties required that they first exchange keys by some secure physical means, such as paper key lists transported by a trustedcourier. The Diffie–Hellman key exchange method allows two parties that have no prior knowledge of each other to jointly establish ashared secretkey over aninsecure channel. This key can then be used to encrypt subsequent communications using asymmetric-keycipher. Diffie–Hellman is used to secure a variety ofInternetservices. However, research published in October 2015 suggests that the parameters in use for many DH Internet applications at that time are not strong enough to prevent compromise by very well-funded attackers, such as the security services of some countries.[3] The scheme was published by Whitfield Diffie and Martin Hellman in 1976,[2]but in 1997 it was revealed thatJames H. Ellis,[4]Clifford Cocks, andMalcolm J. WilliamsonofGCHQ, the British signals intelligence agency, had previously shown in 1969[5]how public-key cryptography could be achieved.[6] Although Diffie–Hellman key exchange itself is a non-authenticatedkey-agreement protocol, it provides the basis for a variety of authenticated protocols, and is used to provideforward secrecyinTransport Layer Security'sephemeralmodes (referred to as EDH or DHE depending on thecipher suite). The method was followed shortly afterwards byRSA, an implementation of public-key cryptography using asymmetric algorithms. Expired US patent 4200770[7]from 1977 describes the nowpublic-domainalgorithm. It credits Hellman, Diffie, and Merkle as inventors. In 2006, Hellman suggested the algorithm be calledDiffie–Hellman–Merkle key exchangein recognition ofRalph Merkle's contribution to the invention ofpublic-key cryptography(Hellman, 2006), writing: The system ... has since become known as Diffie–Hellman key exchange. While that system was first described in a paper by Diffie and me, it is a public key distribution system, a concept developed by Merkle, and hence should be called 'Diffie–Hellman–Merkle key exchange' if names are to be associated with it. I hope this small pulpit might help in that endeavor to recognize Merkle's equal contribution to the invention of public key cryptography.[8] Diffie–Hellman key exchange establishes a shared secret between two parties that can be used for secret communication for exchanging data over a public network. An analogy illustrates the concept of public key exchange by using colors instead of very large numbers: The process begins by having the two parties,Alice and Bob, publicly agree on an arbitrary starting color that does not need to be kept secret. In this example, the color is yellow. Each person also selects a secret color that they keep to themselves – in this case, red and cyan. 
The crucial part of the process is that Alice and Bob each mix their own secret color together with their mutually shared color, resulting in orange-tan and light-blue mixtures respectively, and then publicly exchange the two mixed colors. Finally, each of them mixes the color they received from the partner with their own private color. The result is a final color mixture (yellow-brown in this case) that is identical to their partner's final color mixture.

If a third party listened to the exchange, they would only know the common color (yellow) and the first mixed colors (orange-tan and light-blue), but it would be very hard for them to find out the final secret color (yellow-brown). Bringing the analogy back to a real-life exchange using large numbers rather than colors, this determination is computationally expensive. It is impossible to compute in a practical amount of time even for modern supercomputers.

The simplest and original implementation[2] of the protocol, later formalized as Finite Field Diffie–Hellman in RFC 7919,[9] uses the multiplicative group of integers modulo p, where p is prime and g is a primitive root modulo p. To guard against potential vulnerabilities, it is recommended to use prime numbers of at least 2048 bits in length. This increases the difficulty for an adversary attempting to compute the discrete logarithm and compromise the shared secret. These two values are chosen in this way to ensure that the resulting shared secret can take on any value from 1 to p − 1. Here is an example of the protocol, with non-secret values in blue, and secret values in red.

Both Alice and Bob arrive at the same value because, mod p, $(g^a)^b \equiv (g^b)^a \pmod p$. Only a and b are kept secret. All the other values, that is p, g, $g^a \bmod p$, and $g^b \bmod p$, are sent in the clear. The strength of the scheme comes from the fact that $g^{ab} \bmod p = g^{ba} \bmod p$ takes an extremely long time to compute by any known algorithm just from the knowledge of p, g, $g^a \bmod p$, and $g^b \bmod p$. Such a function that is easy to compute but hard to invert is called a one-way function. Once Alice and Bob compute the shared secret they can use it as an encryption key, known only to them, for sending messages across the same open communications channel.

Of course, much larger values of a, b, and p would be needed to make this example secure, since there are only 23 possible results of n mod 23. However, if p is a prime of at least 600 digits, then even the fastest modern computers using the fastest known algorithm cannot find a given only g, p and $g^a \bmod p$. Such a problem is called the discrete logarithm problem.[3] The computation of $g^a \bmod p$ is known as modular exponentiation and can be done efficiently even for large numbers. Note that g need not be large at all, and in practice is usually a small integer (like 2, 3, ...).

The chart below depicts who knows what, again with non-secret values in blue, and secret values in red. Here Eve is an eavesdropper: she watches what is sent between Alice and Bob, but she does not alter the contents of their communications. Now s is the shared secret key and it is known to both Alice and Bob, but not to Eve. Note that it is not helpful for Eve to compute AB, which equals $g^{a+b} \bmod p$.

Note: It should be difficult for Alice to solve for Bob's private key or for Bob to solve for Alice's private key.
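A toy run of the finite-field protocol with the small modulus implied above (p = 23); the secret exponents a = 4 and b = 3 are arbitrary illustrative picks:

# toy parameters only; real deployments need a prime of at least 2048 bits
p, g = 23, 5
a, b = 4, 3                  # Alice's and Bob's secret exponents
A = pow(g, a, p)             # Alice sends g^a mod p = 4
B = pow(g, b, p)             # Bob sends g^b mod p = 10
assert pow(B, a, p) == pow(A, b, p) == 18   # shared secret g^(ab) mod p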
If it is not difficult for Alice to solve for Bob's private key (or vice versa), then an eavesdropper, Eve, may simply substitute her own private/public key pair, plug Bob's public key into her private key, produce a fake shared secret key, and solve for Bob's private key (and use that to solve for the shared secret key). Eve may attempt to choose a public/private key pair that will make it easy for her to solve for Bob's private key.

Here is a more general description of the protocol:[10] Both Alice and Bob are now in possession of the group element $g^{ab}=g^{ba}$, which can serve as the shared secret key. The group G satisfies the requisite condition for secure communication as long as there is no efficient algorithm for determining $g^{ab}$ given $g$, $g^a$, and $g^b$. For example, the elliptic curve Diffie–Hellman protocol is a variant that represents an element of G as a point on an elliptic curve instead of as an integer modulo n. Variants using hyperelliptic curves have also been proposed. The supersingular isogeny key exchange is a Diffie–Hellman variant that was designed to be secure against quantum computers, but it was broken in July 2022.[11]

The keys used can be ephemeral or static (long-term), or even mixed (so-called semi-static DH). These variants have different properties and hence different use cases. An overview of many variants, with some discussion, can for example be found in NIST SP 800-56A.[12]

It is possible to use ephemeral and static keys in one key agreement to provide more security, as for example shown in NIST SP 800-56A, but it is also possible to combine those in a single DH key exchange, which is then called triple DH (3-DH). A kind of triple DH was proposed by Simon Blake-Wilson, Don Johnson, and Alfred Menezes in 1997,[13] which was improved by C. Kudla and K. G. Paterson in 2005[14] and shown to be secure. The long-term secret keys of Alice and Bob are denoted by a and b respectively, with public keys A and B, as well as the ephemeral key pairs (x, X) and (y, Y).

The long-term public keys need to be transferred somehow. That can be done beforehand in a separate, trusted channel, or the public keys can be encrypted using some partial key agreement to preserve anonymity. For more of such details, as well as other improvements like side channel protection or explicit key confirmation, as well as early messages and additional password authentication, see e.g. US patent "Advanced modular handshake for key agreement and optional authentication".[15]

X3DH was initially proposed as part of the Double Ratchet Algorithm used in the Signal Protocol. The protocol offers forward secrecy and cryptographic deniability. It operates on an elliptic curve.[16] The protocol uses five public keys. Alice has an identity key IK_A and an ephemeral key EK_A. Bob has an identity key IK_B, a signed prekey SPK_B, and a one-time prekey OPK_B.[16] Bob first publishes his three keys to a server, from which Alice downloads them and verifies the signature. Alice then initiates the exchange with Bob.[16] The OPK is optional.[16]

Diffie–Hellman key agreement is not limited to negotiating a key shared by only two participants. Any number of users can take part in an agreement by performing iterations of the agreement protocol and exchanging intermediate data (which does not itself need to be kept secret).
For example, Alice, Bob, and Carol could participate in a Diffie–Hellman agreement as follows, with all operations taken to be modulo p (a toy sketch follows this passage). An eavesdropper has been able to see $g^a \bmod p$, $g^b \bmod p$, $g^c \bmod p$, $g^{ab} \bmod p$, $g^{ac} \bmod p$, and $g^{bc} \bmod p$, but cannot use any combination of these to efficiently reproduce $g^{abc} \bmod p$.

To extend this mechanism to larger groups, two basic principles must be followed. These principles leave open various options for choosing in which order participants contribute to keys. The simplest and most obvious solution is to arrange the N participants in a circle and have N keys rotate around the circle, until eventually every key has been contributed to by all N participants (ending with its owner) and each participant has contributed to N keys (ending with their own). However, this requires that every participant perform N modular exponentiations. By choosing a more desirable order, and relying on the fact that keys can be duplicated, it is possible to reduce the number of modular exponentiations performed by each participant to $\log_2(N)+1$ using a divide-and-conquer-style approach. Once this operation has been completed all participants will possess the secret $g^{abcdefgh}$ (in the eight-participant case), but each participant will have performed only four modular exponentiations, rather than the eight implied by a simple circular arrangement.

The protocol is considered secure against eavesdroppers if G and g are chosen properly. In particular, the order of the group G must be large, particularly if the same group is used for large amounts of traffic. The eavesdropper has to solve the Diffie–Hellman problem to obtain $g^{ab}$. This is currently considered difficult for groups whose order is large enough. An efficient algorithm to solve the discrete logarithm problem would make it easy to compute a or b and solve the Diffie–Hellman problem, making this and many other public key cryptosystems insecure. Fields of small characteristic may be less secure.[17]

The order of G should have a large prime factor to prevent use of the Pohlig–Hellman algorithm to obtain a or b. For this reason, a Sophie Germain prime q is sometimes used to calculate p = 2q + 1, called a safe prime, since the order of G is then only divisible by 2 and q. Sometimes g is chosen to generate the order-q subgroup of G, rather than G, so that the Legendre symbol of $g^a$ never reveals the low-order bit of a. A protocol using such a choice is for example IKEv2.[18]

The generator g is often a small integer such as 2. Because of the random self-reducibility of the discrete logarithm problem a small g is equally secure as any other generator of the same group. If Alice and Bob use random number generators whose outputs are not completely random and can be predicted to some extent, then it is much easier to eavesdrop.

In the original description, the Diffie–Hellman exchange by itself does not provide authentication of the communicating parties and can be vulnerable to a man-in-the-middle attack. Mallory (an active attacker executing the man-in-the-middle attack) may establish two distinct key exchanges, one with Alice and the other with Bob, effectively masquerading as Alice to Bob, and vice versa, allowing her to decrypt, then re-encrypt, the messages passed between them. Note that Mallory must be in the middle from the beginning and remain so, actively decrypting and re-encrypting messages every time Alice and Bob communicate. If she arrives after the keys have been generated and the encrypted conversation between Alice and Bob has already begun, the attack cannot succeed.
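Returning to the Alice, Bob, and Carol example above, a toy sketch with the same small modulus as before (all secret exponents are arbitrary illustrative picks):

p, g = 23, 5
a, b, c = 4, 3, 6                 # Alice's, Bob's and Carol's secrets
gab = pow(pow(g, a, p), b, p)     # Alice -> Bob: raise g^a to b
gbc = pow(pow(g, b, p), c, p)     # Bob -> Carol
gca = pow(pow(g, c, p), a, p)     # Carol -> Alice
# each party finishes the key that is missing only their own exponent
assert pow(gbc, a, p) == pow(gca, b, p) == pow(gab, c, p)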
If she is ever absent, her previous presence is then revealed to Alice and Bob. They will know that all of their private conversations had been intercepted and decoded by someone in the channel. In most cases it will not help them get Mallory's private key, even if she used the same key for both exchanges.

A method to authenticate the communicating parties to each other is generally needed to prevent this type of attack. Variants of Diffie–Hellman, such as the STS protocol, may be used instead to avoid these types of attacks.

A CVE released in 2021 (CVE-2002-20001) disclosed a denial-of-service attack (DoS) against the protocol variants that use ephemeral keys, called the D(HE)at attack.[19] The attack exploits the fact that the Diffie–Hellman key exchange allows attackers to send arbitrary numbers that are actually not public keys, triggering expensive modular exponentiation calculations on the victim's side. Further CVEs disclosed that Diffie–Hellman key exchange implementations may use long private exponents (CVE-2022-40735), which arguably makes modular exponentiation calculations unnecessarily expensive,[20] or may unnecessarily check the peer's public key (CVE-2024-41996), which has a resource requirement similar to key calculation using a long exponent.[21] An attacker can exploit both vulnerabilities together.

The number field sieve algorithm, which is generally the most effective in solving the discrete logarithm problem, consists of four computational steps. The first three steps only depend on the order of the group G, not on the specific number whose finite log is desired.[22] It turns out that much Internet traffic uses one of a handful of groups that are of order 1024 bits or less.[3] By precomputing the first three steps of the number field sieve for the most common groups, an attacker need only carry out the last step, which is much less computationally expensive than the first three steps, to obtain a specific logarithm. The Logjam attack used this vulnerability to compromise a variety of Internet services that allowed the use of groups whose order was a 512-bit prime number, so-called export grade. The authors needed several thousand CPU cores for a week to precompute data for a single 512-bit prime. Once that was done, individual logarithms could be solved in about a minute using two 18-core Intel Xeon CPUs.[3]

As estimated by the authors behind the Logjam attack, the much more difficult precomputation needed to solve the discrete log problem for a 1024-bit prime would cost on the order of $100 million, well within the budget of a large national intelligence agency such as the U.S. National Security Agency (NSA). The Logjam authors speculate that precomputation against widely reused 1024-bit DH primes is behind claims in leaked NSA documents that NSA is able to break much of current cryptography.[3]

To avoid these vulnerabilities, the Logjam authors recommend use of elliptic curve cryptography, for which no similar attack is known. Failing that, they recommend that the order, p, of the Diffie–Hellman group should be at least 2048 bits. They estimate that the pre-computation required for a 2048-bit prime is 10^9 times more difficult than for 1024-bit primes.[3]

Quantum computers can break public-key cryptographic schemes, such as RSA, finite-field DH and elliptic-curve DH key-exchange protocols, using Shor's algorithm for solving the factoring problem, the discrete logarithm problem, and the period-finding problem.
A post-quantum variant of the Diffie–Hellman algorithm was proposed in 2023; it relies on a combination of the quantum-resistant CRYSTALS-Kyber protocol and the older elliptic-curve X25519 protocol. A quantum Diffie–Hellman key-exchange protocol that relies on a quantum one-way function, and whose security rests on fundamental principles of quantum mechanics, has also been proposed in the literature.[23]

Public key encryption schemes based on the Diffie–Hellman key exchange have been proposed. The first such scheme is the ElGamal encryption. A more modern variant is the Integrated Encryption Scheme.

Protocols that achieve forward secrecy generate new key pairs for each session and discard them at the end of the session. The Diffie–Hellman key exchange is a frequent choice for such protocols, because of its fast key generation.

When Alice and Bob share a password, they may use a password-authenticated key agreement (PK) form of Diffie–Hellman to prevent man-in-the-middle attacks. One simple scheme is to compare the hash of s concatenated with the password, calculated independently on both ends of the channel. A feature of these schemes is that an attacker can only test one specific password on each iteration with the other party, and so the system provides good security with relatively weak passwords. This approach is described in ITU-T Recommendation X.1035, which is used by the G.hn home networking standard. An example of such a protocol is the Secure Remote Password protocol.

It is also possible to use Diffie–Hellman as part of a public key infrastructure, allowing Bob to encrypt a message so that only Alice will be able to decrypt it, with no prior communication between them other than Bob having trusted knowledge of Alice's public key. Alice's public key is $(g^a \bmod p,\ g,\ p)$. To send her a message, Bob chooses a random b and then sends Alice $g^b \bmod p$ (unencrypted) together with the message encrypted with the symmetric key $(g^a)^b \bmod p$. Only Alice can determine the symmetric key and hence decrypt the message, because only she has a (the private key). A pre-shared public key also prevents man-in-the-middle attacks.

In practice, Diffie–Hellman is not used in this way, with RSA being the dominant public key algorithm. This is largely for historical and commercial reasons, namely that RSA Security created a certificate authority for key signing that became Verisign. Diffie–Hellman, as elaborated above, cannot directly be used to sign certificates. However, the ElGamal and DSA signature algorithms are mathematically related to it, as well as MQV, STS and the IKE component of the IPsec protocol suite for securing Internet Protocol communications.
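A toy sketch of the encrypt-to-Alice usage just described; deriving the symmetric key by hashing the shared group element is an illustrative choice here, not part of the original scheme, and all numbers are toy values:

import hashlib

p, g = 23, 5                 # toy group; see the size recommendations above
a = 6                        # Alice's private key
A = pow(g, a, p)             # Alice's public key triple is (A, g, p)

b = 15                       # Bob's ephemeral secret
B = pow(g, b, p)             # sent to Alice in the clear
key_bob = hashlib.sha256(str(pow(A, b, p)).encode()).digest()

# Alice recovers the same symmetric key from B and her private key a
key_alice = hashlib.sha256(str(pow(B, a, p)).encode()).digest()
assert key_alice == key_bob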
https://en.wikipedia.org/wiki/Diffie%E2%80%93Hellman_key_exchange
Pattern search (also known as direct search, derivative-free search, or black-box search) is a family of numerical optimization methods that do not require a gradient. As a result, these methods can be used on functions that are not continuous or differentiable. One such pattern search method is "convergence" (see below), which is based on the theory of positive bases. Optimization attempts to find the best match (the solution that has the lowest error value) in a multidimensional analysis space of possibilities.

The name "pattern search" was coined by Hooke and Jeeves.[1] An early and simple variant is attributed to Fermi and Metropolis from their time at the Los Alamos National Laboratory. It is described by Davidon[2] as follows: they varied one theoretical parameter at a time by steps of the same magnitude, and when no such increase or decrease in any one parameter further improved the fit to the experimental data, they halved the step size and repeated the process until the steps were deemed sufficiently small.

"Convergence" is a pattern search method proposed by Yu, who proved that it converges using the theory of positive bases.[3] Later, Torczon, Lagarias and co-authors[4][5] used positive-basis techniques to prove the convergence of another pattern-search method on specific classes of functions. Outside of such classes, pattern search is a heuristic that can provide useful approximate solutions for some problems but can fail on others; it is then not an iterative method that converges to a solution, and pattern-search methods can converge to non-stationary points on some relatively tame problems.[6][7]
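A minimal sketch of the Fermi–Metropolis procedure described above, varying one parameter at a time and halving the step when nothing improves (the objective, starting point, and tolerance are illustrative):

def pattern_search(f, x, step=1.0, tol=1e-6):
    # try +/- step along each coordinate; halve the step when stuck
    fx = f(x)
    while step > tol:
        improved = False
        for i in range(len(x)):
            for d in (step, -step):
                y = list(x)
                y[i] += d
                fy = f(y)
                if fy < fx:
                    x, fx, improved = y, fy, True
        if not improved:
            step /= 2.0
    return x, fx

# example: a smooth bowl with minimum at (1, 2)
print(pattern_search(lambda v: (v[0] - 1) ** 2 + (v[1] - 2) ** 2, [0.0, 0.0]))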
https://en.wikipedia.org/wiki/Pattern_search_(optimization)
This article lists concurrent andparallel programming languages, categorizing them by a definingparadigm. Concurrent and parallel programming languages involve multiple timelines. Such languages providesynchronizationconstructs whose behavior is defined by a parallelexecution model. Aconcurrent programming languageis defined as one which uses the concept of simultaneously executing processes or threads of execution as a means of structuring a program. A parallel language is able to express programs that are executable on more than one processor. Both types are listed, as concurrency is a useful tool in expressing parallelism, but it is not necessary. In both cases, the features must be part of the language syntax and not an extension such as a library (libraries such as the posix-thread library implement a parallelexecution modelbut lack the syntax and grammar required to be a programming language). The following categories aim to capture the main, defining feature of the languages contained, but they are not necessarily orthogonal. These application programming interfaces support parallelism in host languages.
https://en.wikipedia.org/wiki/List_of_concurrent_and_parallel_programming_languages
Within software engineering, the mining software repositories (MSR) field[1][2] analyzes the rich data available in software repositories, such as version control repositories, mailing list archives, bug tracking systems, issue tracking systems, etc., to uncover interesting and actionable information about software systems, projects and software engineering.

Herzig and Zeller define "mining software archives" as a process to "obtain lots of initial evidence" by extracting data from software repositories. Further, they define "data sources" as product-based artifacts like source code, requirement artifacts or version archives, and claim that these sources are unbiased, but noisy and incomplete.[3]

The idea in coupled change analysis is that developers change code entities (e.g. files) together frequently for fixing defects or introducing new features. These couplings between the entities are often not made explicit in the code or other documents. In particular, developers new to a project do not know which entities need to be changed together. Coupled change analysis aims to extract the couplings from the version control system of a project. From the commits and the timing of changes, it may be possible to identify which entities frequently change together. This information could then be presented to developers who are about to change one of the entities, to support them in their further changes.[4]

There are many different kinds of commits in version control systems, e.g. bug fix commits, new feature commits, documentation commits, etc. To take data-driven decisions based on past commits, one needs to select subsets of commits that meet a given criterion. That can be done based on the commit message, as sketched below.[5]

It is possible to generate useful documentation by mining software repositories. For instance, Jadeite computes usage statistics and helps newcomers to quickly identify commonly used classes.[6]

The primary mining data comes from version control systems. Early mining experiments were done on CVS repositories.[7] Then, researchers extensively analyzed SVN repositories.[8] Now, Git repositories are dominant.[9] Depending on the nature of the data required (size, domain, processing), one can download data from any of these sources. However, data governance and data collection for the sake of building large language models have come to change the rules of the game, by integrating the use of web crawlers to obtain data from multiple sources and domains.
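A minimal sketch of commit selection by message, as mentioned above; it shells out to git on a local repository, and the keyword pattern is an illustrative assumption rather than an established classifier:

import re
import subprocess

BUGFIX = re.compile(r"\b(fix(es|ed)?|bug|defect|patch)\b", re.IGNORECASE)

# one "hash<TAB>subject" pair per line from the local repository's history
log = subprocess.run(["git", "log", "--pretty=format:%H\t%s"],
                     capture_output=True, text=True, check=True).stdout

fixes = []
for line in log.splitlines():
    parts = line.split("\t", 1)
    if len(parts) == 2 and BUGFIX.search(parts[1]):
        fixes.append(parts[0])
print(len(fixes), "probable bug-fix commits")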
https://en.wikipedia.org/wiki/Mining_Software_Repositories
This article compares the syntax of many notable programming languages.

Programming language expressions can be broadly classified into four syntax structures. A language that supports the statement construct typically has rules for one or more aspects of statement syntax. Some languages define a special character as a terminator while some, called line-oriented, rely on the newline. Typically, a line-oriented language includes a line continuation feature, whereas other languages have no need for line continuation since the newline is treated like other whitespace. Some line-oriented languages provide a separator for use between statements on one line. Listed below are notable line-oriented languages that provide for line continuation. Unless otherwise noted, the continuation marker must be the last text of the line. The C compiler concatenates adjacent string literals even if on separate lines, but this is not line continuation syntax, as it works the same regardless of the kind of whitespace between the literals.

Languages support a variety of ways to reference and consume other software in the syntax of the language. In some cases this is importing the exported functionality of a library, package or module, but some mechanisms are simpler text-file include operations. Import can be classified by level (module, package, class, procedure, ...) and by syntax (directive name, attributes, ...). The above statements can also be classified by whether they are a syntactic convenience (allowing things to be referred to by a shorter name, but they can still be referred to by some fully qualified name without import), or whether they are actually required to access the code (without which it is impossible to access the code, even with fully qualified names).

A block is a grouping of code that is treated collectively. Many block syntaxes can consist of any number of items (statements, expressions or other units of code), including one or zero. Languages delimit a block in a variety of ways: some via marking text and others by relative formatting such as levels of indentation.

With respect to a language definition, the syntax of comments can be classified many ways. Other ways to categorize comments lie outside the language definition. In the examples below, ~ represents the comment content, and the text around it are the delimiters. Whitespace (including newline) is not considered a delimiter.

Indentation of lines in Fortran 66/77 is significant. The actual statement is in columns 7 through 72 of a line. Any non-space character in column 6 indicates that this line is a continuation of the prior line. A 'C' in column 1 indicates that this entire line is a comment. Columns 1 through 5 may contain a number which serves as a label. Columns 73 through 80 are ignored and may be used for comments; in the days of punched cards, these columns often contained a sequence number so that the deck of cards could be sorted into the correct order if someone accidentally dropped the cards. Fortran 90 removed the need for the indentation rule and added line comments, using the ! character as the comment delimiter.

In fixed-format COBOL code, line indentation is significant. Columns 1–6 and columns from 73 onwards are ignored. If a * or / is in column 7, then that line is a comment. Until COBOL 2002, if a D or d was in column 7, it would define a "debugging line" which would be ignored unless the compiler was instructed to compile it.

Cobra supports block comments with "/#...#/", which is like the "/*...*/" often found in C-based languages, but with two differences.
The # character is reused from the single-line comment form "#...", and the block comments can be nested, which is convenient for commenting out large blocks of code.

Curl supports block comments with user-defined tags, as in |foo# ... #foo|.

Lua supports block comments delimited by square-bracket tags. Like raw strings, there can be any number of equals signs between the square brackets, provided both the opening and closing tags have a matching number of equals signs; this allows nesting as long as nested block comments/raw strings use a different number of equals signs than their enclosing comment: --[[comment --[=[ nested comment ]=] ]]. Lua discards the first newline (if present) that directly follows the opening tag.

Block comments in Perl are considered part of the documentation, and are given the name Plain Old Documentation (POD). Technically, Perl does not have a convention for including block comments in source code, but POD is routinely used as a workaround.

PHP supports standard C/C++ style comments, but supports Perl style as well.

In Python, the use of triple-quotes to comment out lines of source does not actually form a comment.[19] The enclosed text becomes a string literal, which Python usually ignores (except when it is the first statement in the body of a module, class or function; see docstring). A short demonstration appears after the comment-style list below.

The same trick also works in Elixir, but the compiler will throw a warning if it spots it. To suppress the warning, one would need to prepend the sigil ~S (which prevents string interpolation) to the triple-quoted string, leading to the final construct ~S""" ... """. In addition, Elixir supports a limited form of block comments as an official language feature, but as in Perl, this construct is entirely intended for writing documentation. Unlike in Perl, it cannot be used as a workaround, being limited to certain parts of the code and throwing errors or even suppressing functions if used elsewhere.[20]

Raku uses #`(...) to denote block comments.[21] Raku actually allows the use of any "right" and "left" paired brackets after #` (i.e. #`(...), #`[...], #`{...}, #`<...>, and even the more complicated #`{{...}} are all valid block comments). Brackets are also allowed to be nested inside comments (i.e. #`{ a { b } c } goes to the last closing brace).

Block comments in Ruby open at an =begin line and close at an =end line.

The region of lines enclosed by the #<tag> and #</tag> delimiters is ignored by the interpreter. The tag name can be any sequence of alphanumeric characters that may be used to indicate how the enclosed block is to be deciphered. For example, #<latex> could indicate the start of a block of LaTeX formatted documentation.

The next complete syntactic component (s-expression) can be commented out with #;.

ABAP supports two different kinds of comments. If the first character of a line, including indentation, is an asterisk (*), the whole line is considered a comment, while a single double quote (") begins an in-line comment which acts until the end of the line. ABAP comments are not possible between the statements EXEC SQL and ENDEXEC because Native SQL has other usages for these characters. In most SQL dialects, the double dash (--) can be used instead.

Many esoteric programming languages follow the convention that any text not executed by the instruction pointer (e.g., Befunge) or otherwise assigned a meaning (e.g., Brainfuck) is considered a "comment".

There is a wide variety of syntax styles for declaring comments in source code. BlockComment in italics is used here to indicate block comment style. LineComment in italics is used here to indicate line comment style.
comment BlockComment comment
co BlockComment co
# BlockComment #
£ BlockComment £
* LineComment (not all dialects)
! LineComment (not all dialects)
REM LineComment
|foo# BlockComment #foo|
/+ BlockComment +/ (nestable)
/++ Documentation BlockComment +/ (nestable, ddoc comments)
( before -- after ) stack comment convention
/** BlockComment */ (Javadoc documentation comment)
__END__ Comments after end of code
(Documentation string when first line of module, class, method, or function)
=comment This comment paragraph goes until the next POD directive or the first blank line.[23][24]
/// LineComment ("Outer" rustdoc comment)
//! LineComment ("Inner" rustdoc comment)
/** BlockComment */ ("Outer" rustdoc comment)
/*! BlockComment */ ("Inner" rustdoc comment)
@comment LineComment
''' LineComment (XML documentation comment)
Rem LineComment
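A short demonstration of the Python behaviour described above; the triple-quoted string survives parsing as a string literal, but only the docstring is retained by the interpreter:

def example():
    """This first string is a docstring; the interpreter keeps it."""
    x = 1
    """
    This triple-quoted text is not a comment: it is a string literal
    expression that is evaluated and then immediately discarded.
    """
    return x

print(example.__doc__)   # the docstring is kept; the second string is not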
https://en.wikipedia.org/wiki/Comparison_of_programming_languages_(syntax)
BlackArch is a penetration testing distribution based on Arch Linux that provides a large number of security tools. It is an open-source distro created specially for penetration testers and security researchers. The repository contains more than 2800 tools that can be installed individually or in groups. BlackArch Linux is compatible with existing Arch Linux installations.[1][2]

BlackArch is similar in usage to both Parrot OS and Kali Linux when fully installed, with a major difference being that BlackArch is based on Arch Linux instead of Debian. BlackArch only provides the Xfce desktop environment in the "Slim ISO" but provides multiple preconfigured window managers in the "Full ISO". Similar to Kali Linux and Parrot OS, BlackArch can be burned to an ISO image and run as a live system.[1] BlackArch can also be installed as an unofficial user repository on any current Arch Linux installation.[3]

BlackArch currently contains 2817 packages and tools, along with their dependencies.[4] BlackArch is developed by a small number of cyber security specialists and researchers who add the packages as well as the dependencies needed to run these tools. The distribution groups its tools into categories (as counted on 15 April 2024).[4]
https://en.wikipedia.org/wiki/BlackArch
In mathematics, the Dixon elliptic functions sm and cm are two elliptic functions (doubly periodic meromorphic functions on the complex plane) that map from each regular hexagon in a hexagonal tiling to the whole complex plane. Because these functions satisfy the identity $\operatorname{cm}^3 z+\operatorname{sm}^3 z=1$, as real functions they parametrize the cubic Fermat curve $x^3+y^3=1$, just as the trigonometric functions sine and cosine parametrize the unit circle $x^2+y^2=1$. They were named sm and cm by Alfred Dixon in 1890, by analogy to the trigonometric functions sine and cosine and the Jacobi elliptic functions sn and cn; Göran Dillner described them earlier in 1873.[1]

The functions sm and cm can be defined as the solutions to the initial value problem[2] $\operatorname{cm}'z=-\operatorname{sm}^2 z$, $\operatorname{sm}'z=\operatorname{cm}^2 z$, with $\operatorname{cm}0=1$, $\operatorname{sm}0=0$; or as the inverse of the Schwarz–Christoffel mapping from the complex unit disk to an equilateral triangle, the Abelian integral[3] $z=\int_0^{\operatorname{sm}z}\frac{dw}{(1-w^3)^{2/3}}$, which can also be expressed using the hypergeometric function.[4]

Both sm and cm have a period along the real axis of $\pi_3=\mathrm B\bigl(\tfrac13,\tfrac13\bigr)=\tfrac{\sqrt3}{2\pi}\Gamma^3\bigl(\tfrac13\bigr)\approx 5.29991625$, with $\mathrm B$ the beta function and $\Gamma$ the gamma function.[5] They satisfy the identity $\operatorname{cm}^3 z+\operatorname{sm}^3 z=1$.

The parametric function $t\mapsto(\operatorname{cm}t,\operatorname{sm}t)$, $t\in\bigl[-\tfrac13\pi_3,\tfrac23\pi_3\bigr]$, parametrizes the cubic Fermat curve $x^3+y^3=1$, with $\tfrac12 t$ representing the signed area lying between the segment from the origin to $(1,0)$, the segment from the origin to $(\operatorname{cm}t,\operatorname{sm}t)$, and the Fermat curve, analogous to the relationship between the argument of the trigonometric functions and the area of a sector of the unit circle.[6] To see why, apply Green's theorem. Notice that the area between $x+y=0$ and $x^3+y^3=1$ can be broken into three pieces, each of area $\tfrac16\pi_3$.

The function $\operatorname{sm}z$ has zeros at the complex-valued points $z=\tfrac{1}{\sqrt3}\pi_3 i(a+b\omega)$ for any integers $a$ and $b$, where $\omega$ is a cube root of unity, $\omega=\exp\tfrac23 i\pi=-\tfrac12+\tfrac{\sqrt3}{2}i$ (that is, $a+b\omega$ is an Eisenstein integer). The function $\operatorname{cm}z$ has zeros at the complex-valued points $z=\tfrac13\pi_3+\tfrac{1}{\sqrt3}\pi_3 i(a+b\omega)$. Both functions have poles at the complex-valued points $z=-\tfrac13\pi_3+\tfrac{1}{\sqrt3}\pi_3 i(a+b\omega)$.

On the real line, $\operatorname{sm}x=0\leftrightarrow x\in\pi_3\mathbb Z$, which is analogous to $\sin x=0\leftrightarrow x\in\pi\mathbb Z$.
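A quick numerical check of the defining initial value problem above, integrating with a hand-rolled classical Runge–Kutta step (the step size and end point are arbitrary choices):

# integrate sm' = cm^2, cm' = -sm^2 with sm(0)=0, cm(0)=1,
# checking sm^3 + cm^3 = 1 at the end
def rk4_step(s, c, h):
    f = lambda s, c: (c * c, -s * s)
    k1 = f(s, c)
    k2 = f(s + h / 2 * k1[0], c + h / 2 * k1[1])
    k3 = f(s + h / 2 * k2[0], c + h / 2 * k2[1])
    k4 = f(s + h * k3[0], c + h * k3[1])
    return (s + h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]),
            c + h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]))

s, c, h = 0.0, 1.0, 1e-4
for _ in range(17666):          # z ~ 1.7666 ~ pi_3 / 3
    s, c = rk4_step(s, c, h)
print(s, c, s**3 + c**3)        # expect sm ~ 1, cm ~ 0, identity ~ 1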
Both cm and sm commute with complex conjugation: $\operatorname{cm}\bar z=\overline{\operatorname{cm}z}$ and $\operatorname{sm}\bar z=\overline{\operatorname{sm}z}$. Analogous to the parity of trigonometric functions (cosine an even function and sine an odd function), the Dixon function cm is invariant under $\tfrac13$ turn rotations of the complex plane, and $\tfrac13$ turn rotations of the domain of sm cause $\tfrac13$ turn rotations of the codomain: $\operatorname{cm}\omega z=\operatorname{cm}z$ and $\operatorname{sm}\omega z=\omega\operatorname{sm}z$.

Each Dixon elliptic function is invariant under translations by the Eisenstein integers $a+b\omega$ scaled by $\pi_3$: $\operatorname{sm}(z+\pi_3(a+b\omega))=\operatorname{sm}z$ and $\operatorname{cm}(z+\pi_3(a+b\omega))=\operatorname{cm}z$.

Negation of each of cm and sm is equivalent to a $\tfrac13\pi_3$ translation of the other: $\operatorname{sm}(-z)=\operatorname{cm}(z+\tfrac13\pi_3)$ and $\operatorname{cm}(-z)=\operatorname{sm}(z+\tfrac13\pi_3)$. For $n\in\{0,1,2\}$, translations by $\tfrac13\pi_3\omega^n$ give $\operatorname{sm}(z+\tfrac13\pi_3\omega^n)=\omega^n\operatorname{cm}(-z)$ and $\operatorname{cm}(z+\tfrac13\pi_3\omega^n)=\omega^{-n}\operatorname{sm}(-z)$.

The Dixon elliptic functions satisfy the argument sum and difference identities:[8]
$\operatorname{sm}(u+v)=\dfrac{\operatorname{sm}^2 u\,\operatorname{cm}v-\operatorname{cm}u\,\operatorname{sm}^2 v}{\operatorname{sm}u\,\operatorname{cm}^2 v-\operatorname{cm}^2 u\,\operatorname{sm}v},\qquad \operatorname{cm}(u+v)=\dfrac{\operatorname{sm}u\,\operatorname{cm}u-\operatorname{sm}v\,\operatorname{cm}v}{\operatorname{sm}u\,\operatorname{cm}^2 v-\operatorname{cm}^2 u\,\operatorname{sm}v}.$
These formulas can be used to compute the complex-valued functions in real components. Argument duplication and triplication identities can be derived from the sum identity;[9] for example,
$\operatorname{sm}2z=\dfrac{\operatorname{sm}z\,(1+\operatorname{cm}^3 z)}{\operatorname{cm}z\,(1+\operatorname{sm}^3 z)},\qquad \operatorname{cm}2z=\dfrac{\operatorname{cm}^3 z-\operatorname{sm}^3 z}{\operatorname{cm}z\,(1+\operatorname{sm}^3 z)}.$

The cm function satisfies the identities $\operatorname{cm}\tfrac29\pi_3=-\operatorname{cm}\tfrac19\pi_3\,\operatorname{cm}\tfrac49\pi_3$ and $\operatorname{cm}\tfrac14\pi_3=\operatorname{cl}\tfrac13\varpi$, where $\operatorname{cl}$ is the lemniscate cosine and $\varpi$ is the lemniscate constant.

The cm and sm functions can be approximated for $|z|<\tfrac13\pi_3$ by the Taylor series $\operatorname{cm}z=1-\tfrac13 z^3+\tfrac1{18}z^6-\cdots$ and $\operatorname{sm}z=z-\tfrac16 z^4+\tfrac2{63}z^7-\cdots$, whose coefficients satisfy a recurrence with $c_0=s_0=1$.[10][11]

The equianharmonic Weierstrass elliptic function $\wp(z)=\wp(z;0,\tfrac1{27})$, with lattice $\Lambda=\pi_3\mathbb Z\oplus\pi_3\omega\mathbb Z$ a scaling of the Eisenstein integers, solves the differential equation[12] $\wp'(z)^2=4\wp(z)^3-\tfrac1{27}$. We can also write it as the inverse of the integral $z=\int_{\wp(z)}^{\infty}\frac{dw}{\sqrt{4w^3-1/27}}$.

In terms of $\wp(z)$, the Dixon elliptic functions can be written[13] $\operatorname{sm}z=\dfrac{6\wp(z)}{1-3\wp'(z)}$ and $\operatorname{cm}z=\dfrac{3\wp'(z)+1}{3\wp'(z)-1}$. Likewise, the Weierstrass elliptic function $\wp(z)=\wp(z;0,\tfrac1{27})$ can be written in terms of the Dixon elliptic functions: $\wp(z)=\dfrac{\operatorname{sm}z}{3(1-\operatorname{cm}z)}$ and $\wp'(z)=-\dfrac{1+\operatorname{cm}z}{3(1-\operatorname{cm}z)}$.

The Dixon elliptic functions can also be expressed using Jacobi elliptic functions, which was first observed by Cayley.[14] Let $k=e^{5i\pi/6}$, $\theta=3^{1/4}e^{5i\pi/12}$, $s=\operatorname{sn}(u,k)$, $c=\operatorname{cn}(u,k)$, and $d=\operatorname{dn}(u,k)$.
The Dixon elliptic functions can then be written as rational expressions in $s$, $c$, and $d$.

Several definitions of generalized trigonometric functions include the usual trigonometric sine and cosine as an $n=2$ case, and the functions sm and cm as an $n=3$ case.[15] For example, defining $\pi_n=\mathrm B\bigl(\tfrac1n,\tfrac1n\bigr)$ and $\sin_n z$, $\cos_n z$ as the inverses of integrals generalizing the Abelian integral above (for $\sin_n$, $z=\int_0^{\sin_n z}\frac{dw}{(1-w^n)^{(n-1)/n}}$), the area in the positive quadrant under the curve $x^n+y^n=1$ is $\tfrac{1}{2n}\pi_n$. The quartic $n=4$ case results in a square lattice in the complex plane, related to the lemniscate elliptic functions.

The Dixon elliptic functions are conformal maps from an equilateral triangle to a disk, and are therefore helpful for constructing polyhedral conformal map projections involving equilateral triangles, for example projecting the sphere onto a triangle, hexagon, tetrahedron, octahedron, or icosahedron.[16]
https://en.wikipedia.org/wiki/Dixon_elliptic_functions
A peripheral device, or simply peripheral, is an auxiliary hardware device that a computer uses to transfer information externally.[1] A peripheral is a hardware component that is accessible to and controlled by a computer but is not a core component of the computer. A peripheral can be categorized based on the direction in which information flows relative to the computer: Many modern electronic devices, such as Internet-enabled digital watches, video game consoles, smartphones, and tablet computers, have interfaces for use as a peripheral.
https://en.wikipedia.org/wiki/Peripheral
Network-attached storage (NAS) is a file-level computer data storage server connected to a computer network providing data access to a heterogeneous group of clients. In this context, the term "NAS" can refer to both the technology and systems involved, or to a specialized computer appliance built for such functionality – a NAS appliance or NAS box. NAS contrasts with block-level storage area networks (SAN). A NAS device is optimised for serving files either by its hardware, software, or configuration. It is often manufactured as a computer appliance – a purpose-built specialized computer. NAS systems are networked appliances that contain one or more storage drives, often arranged into logical, redundant storage containers or RAID. Network-attached storage typically provides access to files using network file sharing protocols such as NFS, SMB, or AFP. From the mid-1990s, NAS devices began gaining popularity as a convenient method of sharing files among multiple computers, as well as a way to remove the responsibility of file serving from other servers on the network; by doing so, a NAS can provide faster data access, easier administration, and simpler configuration than a general-purpose server used to serve files.[1] Accompanying a NAS are purpose-built hard disk drives, which are functionally similar to non-NAS drives but may have different firmware, vibration tolerance, or power dissipation to make them more suitable for use in RAID arrays, a technology often used in NAS implementations.[2] For example, some NAS versions of drives support a command extension that allows extended error recovery to be disabled. In a non-RAID application, it may be important for a disk drive to go to great lengths to successfully read a problematic storage block, even if it takes several seconds. In an appropriately configured RAID array, a single bad block on a single drive can be recovered completely via the redundancy encoded across the RAID set. If a drive spends several seconds executing extensive retries, it might cause the RAID controller to flag the drive as "down"; if it instead promptly reports that the block of data has a checksum error, the RAID controller can use the redundant data on the other drives to correct the error and continue without any problem. A NAS unit is a computer connected to a network that provides only file-based data storage services to other devices on the network. Although it may technically be possible to run other software on a NAS unit, it is usually not designed to be a general-purpose server. For example, NAS units usually do not have a keyboard or display, and are controlled and configured over the network, often using a browser.[3] A full-featured operating system is not needed on a NAS device, so often a stripped-down operating system is used. NAS systems contain one or more hard disk drives, often arranged into logical, redundant storage containers or RAID. NAS uses file-based protocols such as NFS (popular on UNIX systems), SMB (Server Message Block, used with Microsoft Windows systems), AFP (used with Apple Macintosh computers), or NCP (used with OES and Novell NetWare). NAS units rarely limit clients to a single protocol. The key difference between direct-attached storage (DAS) and NAS is that DAS is simply an extension to an existing server and is not necessarily networked. As the name suggests, DAS is typically connected directly to the host, for example via a USB or Thunderbolt cable. NAS is designed as an easy and self-contained solution for sharing files over the network.
Both DAS and NAS can potentially increase availability of data by using RAID or clustering. Both NAS and DAS can have various amounts of cache memory, which greatly affects performance. When comparing use of NAS with use of local (non-networked) DAS, the performance of NAS depends mainly on the speed of, and congestion on, the network. Most NAS solutions include the option to install a wide array of software applications to allow better configuration of the system or to add capabilities beyond storage (such as video surveillance, virtualization, and media serving). DAS is typically focused solely on data storage, but additional capabilities may be available depending on the vendor. NAS provides both storage and a file system. This is often contrasted with SAN (storage area network), which provides only block-based storage and leaves file system concerns on the "client" side. SAN protocols include Fibre Channel, iSCSI, ATA over Ethernet (AoE) and HyperSCSI. One way to loosely conceptualize the difference between a NAS and a SAN is that NAS appears to the client OS (operating system) as a file server (the client can map network drives to shares on that server), whereas a disk available through a SAN still appears to the client OS as a disk, visible in disk and volume management utilities (along with the client's local disks), and available to be formatted with a file system and mounted. Despite their differences, SAN and NAS are not mutually exclusive and may be combined as a SAN-NAS hybrid, offering both file-level protocols (NAS) and block-level protocols (SAN) from the same system[citation needed]. A shared disk file system can also be run on top of a SAN to provide filesystem services. In the early 1980s, the "Newcastle Connection" by Brian Randell and his colleagues at Newcastle University demonstrated and developed remote file access across a set of UNIX machines.[4][5] Novell's NetWare server operating system and NCP protocol were released in 1983. Following the Newcastle Connection, Sun Microsystems' 1984 release of NFS allowed network servers to share their storage space with networked clients. 3Com and Microsoft would develop the LAN Manager software and protocol to further this new market. 3Com's 3Server and 3+Share software was the first purpose-built server (including proprietary hardware, software, and multiple disks) for open systems servers. Inspired by the success of file servers from Novell, IBM, and Sun, several firms developed dedicated file servers. While 3Com was among the first firms to build a dedicated NAS for desktop operating systems, Auspex Systems was one of the first to develop a dedicated NFS server for use in the UNIX market. A group of Auspex engineers split away in the early 1990s to create the integrated NetApp FAS, which supported both the Windows SMB and the UNIX NFS protocols and had superior scalability and ease of deployment. This started the market for proprietary NAS devices now led by NetApp and EMC Celerra. Starting in the early 2000s, a series of startups emerged offering alternative solutions to single filer solutions in the form of clustered NAS – Spinnaker Networks (acquired by NetApp in February 2004), Exanet (acquired by Dell in February 2010), Gluster (acquired by Red Hat in 2011), ONStor (acquired by LSI in 2009), IBRIX (acquired by HP), Isilon (acquired by EMC in November 2010), PolyServe (acquired by HP in 2007), and Panasas, to name a few.
In 2009, NAS vendors (notably CTERA Networks[6][7] and Netgear) began to introduce online backup solutions integrated in their NAS appliances, for online disaster recovery.[8][9] By 2021, three major types of NAS solutions were offered (all with hybrid cloud models where data can be stored both on-premises on the NAS and off-site, either on a separate NAS or through a public cloud service provider). The first type of NAS is focused on consumer needs, with lower-cost options that typically support 1–5 hot-plug hard drives. The second is focused on small-to-medium-sized businesses – these NAS solutions range from 2–24+ hard drives and are typically offered in tower or rackmount form factors. Pricing can vary greatly depending on the processor, components, and overall features supported. The last type is geared toward enterprises or large businesses and is offered with more advanced software capabilities. NAS solutions are typically sold without hard drives installed to allow the buyer (or IT department) to select the hard drive cost, size, and quality. The way manufacturers make NAS devices can be classified into three types: NAS is useful for more than just general centralized storage provided to client computers in environments with large amounts of data. NAS can enable simpler and lower-cost systems such as load-balancing and fault-tolerant email and web server systems by providing storage services. The potential emerging market for NAS is the consumer market, where there is a large amount of multimedia data. Such consumer market appliances are now commonly available. Unlike their rackmounted counterparts, they are generally packaged in smaller form factors. The price of NAS appliances has fallen sharply in recent years, offering flexible network-based storage to the home consumer market for little more than the cost of a regular USB or FireWire external hard disk. Many of these home consumer devices are built around ARM, x86 or MIPS processors running an embedded Linux operating system. A purpose-built backup appliance (PBBA) is a kind of NAS intended for storing backup data. PBBAs typically include data deduplication, compression, RAID 6 or other redundant hardware components, and automated maintenance.[10][11][12][13] A PBBA may also be called a backup and disaster recovery appliance or simply a backup appliance. Open-source NAS-oriented distributions of Linux and FreeBSD are available. These are designed to be easy to set up on commodity PC hardware, and are typically configured using a web browser. They can run from a virtual machine, Live CD, bootable USB flash drive (Live USB), or from one of the mounted hard drives. They run Samba (an SMB daemon), an NFS daemon, and FTP daemons, which are freely available for those operating systems. Network-attached secure disks (NASD) is a 1997–2001 research project of Carnegie Mellon University with the goal of providing cost-effective scalable storage bandwidth.[14] NASD reduces the overhead on the file server (file manager) by allowing storage devices to transfer data directly to clients. Most of the file manager's work is offloaded to the storage disks without integrating the file system policy into the disks. Most client operations like read/write go directly to the disks; less frequent operations like authentication go to the file manager. Disks transfer variable-length objects instead of fixed-size blocks to clients. The file manager provides a time-limited cacheable capability for clients to access the storage objects.
A file access from the client to the disks has the following sequence: A clustered NAS is a NAS that uses a distributed file system running simultaneously on multiple servers. The key difference between a clustered and a traditional NAS is the ability to distribute[citation needed] (e.g. stripe) data and metadata across the cluster nodes or storage devices. Clustered NAS, like a traditional one, still provides unified access to the files from any of the cluster nodes, regardless of the actual location of the data.
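As a toy illustration of the two ideas above – striping data across cluster nodes and the RAID-style parity recovery discussed earlier – the following Python sketch distributes fixed-size chunks over hypothetical storage nodes, keeps one XOR parity chunk per stripe, and rebuilds a chunk whose node reports a checksum error. The node names and the 4-byte chunk size are invented for the example; real systems use far larger stripe units and more sophisticated placement.

```python
CHUNK = 4                                    # bytes per stripe unit (toy value)
DATA_NODES = ["node-a", "node-b", "node-c"]  # hypothetical cluster nodes
store = {n: [] for n in DATA_NODES + ["parity-node"]}

def xor(blocks):
    """XOR equal-length blocks together; the basis of single-failure parity."""
    out = bytearray(CHUNK)
    for block in blocks:
        for i, byte in enumerate(block):
            out[i] ^= byte
    return bytes(out)

def write_stripe(chunks):
    """Place one chunk per data node, plus their XOR on the parity node."""
    for node, chunk in zip(DATA_NODES, chunks):
        store[node].append(chunk)
    store["parity-node"].append(xor(chunks))

stripe = [b"\x01\x02\x03\x04", b"\x10\x20\x30\x40", b"\xaa\xbb\xcc\xdd"]
write_stripe(stripe)

# node-b reports a checksum error for stripe 0: rebuild from the survivors,
# just as a RAID controller uses redundancy instead of waiting on retries.
survivors = [store["node-a"][0], store["node-c"][0], store["parity-node"][0]]
assert xor(survivors) == stripe[1]
```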
https://en.wikipedia.org/wiki/Clustered_NAS
Within quality management systems (QMS) and information technology (IT) systems, change control is a process – either formal or informal[1] – used to ensure that changes to a product or system are introduced in a controlled and coordinated manner. It reduces the possibility that unnecessary changes will be introduced to a system without forethought, introducing faults into the system or undoing changes made by other users of software. The goals of a change control procedure usually include minimal disruption to services, reduction in back-out activities, and cost-effective utilization of resources involved in implementing change. According to the Project Management Institute, change control is a "process whereby modifications to documents, deliverables, or baselines associated with the project are identified, documented, approved, or rejected."[2] Change control is used in various industries, including IT,[3] software development,[1] the pharmaceutical industry,[4] the medical device industry,[5] and other engineering/manufacturing industries.[6] For the IT and software industries, change control is a major aspect of the broader discipline of change management. Typical examples from the computer and network environments are patches to software products, installation of new operating systems, upgrades to network routing tables, or changes to the electrical power systems supporting such infrastructure.[1][3] Certain portions of ITIL cover change control.[7] There is considerable overlap and confusion between change management, configuration management and change control. The definition below is not yet integrated with definitions of the others. Change control can be described as a set of six steps: Consider the primary and ancillary detail of the proposed change. This should include aspects such as identifying the change, its owner(s), how it will be communicated and executed,[8] how success will be verified, the change's estimate of importance, its added value, its conformity to business and industry standards, and its target date for completion.[3][9][10] Impact and risk assessment is the next vital step. When executed, will the proposed plan cause something to go wrong? Will related systems be impacted by the proposed change? Even minor details should be considered during this phase. Afterwards, a risk category should ideally be assigned to the proposed change: high-, moderate-, or low-risk. A high-risk change requires many additional steps such as management approval and stakeholder notification, whereas a low-risk change may only require project manager approval and minimal documentation.[3][9][10] If not addressed in the plan/scope, the desire for a backout plan should be expressed, particularly for high-risk changes that have significant worst-case scenarios.[3] Whether it involves a change controller, change control board, steering committee, or project manager, a review and approval process is typically required.[11] The plan/scope and impact/risk assessments are considered in the context of business goals, requirements, and resources. If, for example, the change request is deemed to address a low-severity, low-impact issue that requires significant resources to correct, the request may be made low priority or shelved altogether.
In cases where a high-impact change is requested without a strong plan, the review/approval entity may request a full business case for further analysis.[1][3][9][10] If the change control request is approved to move forward, the delivery team will execute the solution through a small-scale development process in test or development environments. This allows the delivery team an opportunity to design and make incremental changes, with unit and/or regression testing.[1][3][9] Little in the way of testing and validation may occur for low-risk changes, though major changes will require significant testing before implementation.[9] The team will then seek approval and request a time and date to carry out the implementation phase. In rare cases where the solution cannot be tested, special consideration should be given to the change/implementation window.[3] In most cases, a special implementation team with the technical expertise to quickly move a change along is used to implement the change. The team should implement the change not only according to the approved plan but also according to organizational standards, industry standards, and quality management standards.[9] The implementation process may also require additional staff responsibilities outside the implementation team, including stakeholders[11] who may be asked to assist with troubleshooting.[3] Following implementation, the team may also carry out a post-implementation review, which would take place at another stakeholder meeting or during project closing procedures.[1][9] The closing process can be one of the more difficult and important phases of change control.[12] Three primary tasks at this end phase include determining that the project is actually complete, evaluating "the project plan in the context of project completion," and providing tangible proof of project success.[12] If, despite best efforts, something went wrong during the change control process, a post-mortem on what happened will need to be run, with the intent of applying lessons learned to future changes.[3] In a good manufacturing practice regulated industry, the topic is frequently encountered by its users. Various industry guidances and commentaries are available for people to comprehend this concept.[13][14][15] As a common practice, the activity is usually directed by one or more SOPs.[16] From the information technology perspective for clinical trials, it has been guided by another U.S. Food and Drug Administration document.[17]
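A minimal sketch of how the risk categories and approval routing described above might be represented in code; the risk-to-approver mapping is invented for illustration, since each organization defines its own.

```python
from dataclasses import dataclass, field
from enum import Enum

class Risk(Enum):
    LOW = "low"
    MODERATE = "moderate"
    HIGH = "high"

# Hypothetical routing: low-risk needs only the project manager, while
# high-risk adds board and management sign-off, mirroring the text above.
REQUIRED_APPROVERS = {
    Risk.LOW: {"project_manager"},
    Risk.MODERATE: {"project_manager", "change_control_board"},
    Risk.HIGH: {"project_manager", "change_control_board", "management"},
}

@dataclass
class ChangeRequest:
    title: str
    owner: str
    risk: Risk
    backout_plan: str = ""          # expected for high-risk changes
    approvals: set = field(default_factory=set)

    def is_approved(self) -> bool:
        return REQUIRED_APPROVERS[self.risk] <= self.approvals

cr = ChangeRequest("Upgrade core router firmware", "network team", Risk.HIGH,
                   backout_plan="reflash previous firmware image")
cr.approvals |= {"project_manager", "change_control_board", "management"}
assert cr.is_approved()
```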
https://en.wikipedia.org/wiki/Change_control
Print Gallery (Dutch: Prentententoonstelling) is a lithograph printed in 1956 by the Dutch artist M. C. Escher. It depicts a man in a gallery viewing a print of a seaport, and among the buildings in the seaport is the very gallery in which he is standing, making use of the Droste effect with visual recursion.[1] The lithograph has attracted discussion in both mathematical and artistic contexts. Escher considered Print Gallery to be among the best of his works.[2] Bruno Ernst cites M. C. Escher as stating that he began Print Gallery "from the idea that it must be possible to make an annular bulge, a cyclic expansion ... without beginning or end."[3] Escher attempted to do this with straight lines, but intuitively switched to using curved lines, which make the grid expand greatly as it rotates.[3][4] In his book Gödel, Escher, Bach, Douglas Hofstadter explains the seeming paradox embodied in Print Gallery as a strange loop showing three kinds of "in-ness": the gallery is physically in the town ("inclusion"); the town is artistically in the picture ("depiction"); the picture is mentally in the person ("representation").[5] Escher's signature is on a circular void in the center of the work. In 2003, two Dutch mathematicians, Bart de Smit and Hendrik Lenstra, reported a way of filling in the void by treating the work as drawn on an elliptic curve over the field of complex numbers. They deem an idealized version of Print Gallery to contain a copy of itself (the Droste effect), rotated clockwise by about 157.63 degrees and shrunk by a factor of about 22.58.[4] Their website further explores the mathematical structure of the picture.[6] Print Gallery has been discussed in relation to post-modernism by a number of writers, including Silvio Gaggi,[7] Barbara Freedman,[8] Stephen Bretzius,[9] and Marie-Laure Ryan.[10]
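The de Smit–Lenstra transformation can be summarized as multiplication by a single complex number whose argument gives the rotation and whose modulus gives the shrink factor. A quick sketch using the approximate published values (the exact constants are derived in their paper, not here):

```python
import cmath, math

# Approximate values reported by de Smit and Lenstra for the ideal print:
# a clockwise rotation of ~157.63 degrees combined with shrinking by ~22.58.
shrink = 22.58
theta = math.radians(157.63)
droste = cmath.exp(-1j * theta) / shrink    # one complex multiplier does both

z = 1 + 0.5j                 # a point, relative to the picture's center
w = z * droste               # its image in the embedded copy of the picture
print(abs(z) / abs(w))       # ~22.58: the copy is smaller by that factor
print(math.degrees(cmath.phase(z) - cmath.phase(w)) % 360)   # ~157.63
```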
https://en.wikipedia.org/wiki/Print_Gallery_(M._C._Escher)
The EZ-Link card is a rechargeable contactless smart card and electronic money system that is primarily used as a payment method for public transport such as bus and rail lines in Singapore. A standard EZ-Link card is a credit-card-sized stored-value contactless smart card that comes in a variety of colours, as well as limited edition designs. It is sold by SimplyGo Pte Ltd, a merged entity of TransitLink and EZ-Link since 2024 and a subsidiary of the Land Transport Authority (LTA), and can be used on travel modes across Singapore, including the Mass Rapid Transit (MRT), the Light Rail Transit (LRT), public buses operated by SBS Transit, SMRT Buses, Tower Transit Singapore and Go-Ahead Singapore, as well as the Sentosa Express. Established in 2001, the first generation of the card was based on the Sony FeliCa smart card technology and was promoted as the means for speedier boarding times on the city-state's bus and rail services. It had a monopoly on public transportation fare payments in Singapore until September 2009, when the NETS FlashPay card, which had a monopoly over Electronic Road Pricing (ERP) toll payments, entered the market for transportation payments (and vice versa). EZ-Link cards are distributed and managed by EZ-Link Pte Ltd, also a subsidiary of Singapore's Land Transport Authority. In September 2009, CEPAS EZ-Link cards replaced the original EZ-Link card, expanding the card's usage to taxis, ERP gantries (with the dual-mode in-vehicle unit), car parks (which have been upgraded to accept CEPAS-compliant cards), convenience stores, supermarkets and fast food restaurants. Compared to NETS FlashPay, however, EZ-Link has less acceptance at retail shops. EZ-Link can also be used as a payment card at vending machines throughout the country. The account-based CEPAS EZ-Link card was launched in January 2021.[1] In March 2023, the Land Transport Authority announced plans to merge its subsidiaries TransitLink and EZ-Link into a single entity, SimplyGo.[2] The Land Transport Authority introduced its pilot testing of the card to 100,000 volunteers on 26 February 2000. Initially for commuters who made at least five trips on the MRT/LRT per week, the card was branded as the "Super Rider". As an incentive, volunteers were given a 10% rebate off their regular fare during the one-month period.[3] Two further tests were made, with the scheme extending to frequent bus users on selected routes on an invitation basis.[4] The card is commonly used in Singapore as a smartcard for paying transportation fees in the city-state's Mass Rapid Transit (MRT), Light Rail Transit (LRT) and public bus services. The EZ-Link function is also used in concession cards for students in nationally recognised educational institutes, full-time national service personnel serving in the Singapore Armed Forces, Singapore Civil Defence Force and Singapore Police Force, and senior citizens who are over 60 years old.[5] The system is similar to the Pasmo and ICOCA cards, and the card's use has since been expanded to retail, private transport, government services, community services, educational institutes and vending machines.
On 17 October 2007, local telco StarHub and EZ-Link Pte Ltd announced the start of a six-month trial of phones carrying a Subscriber Identity Module (SIM) EZ-Link card.[6] Since 2009, Singapore motorists can use EZ-Link cards in the new-generation In-Vehicle Unit to pay Electronic Road Pricing (ERP) and Electronic Parking System (EPS) charges.[7][8] In August 2016, EZ-Link introduced a post-paid ERP payment service called EZ-Pay.[9] In March 2016, EZ-Link concluded a trial with the Land Transport Authority and the Infocomm Development Authority of Singapore on the use of compatible mobile phones with Near-Field Communication (NFC) technology to make public transport payments.[10] In February 2018, EZ-Link and NTUC Social Enterprises launched a partnership to promote cashless payments. This allowed EZ-Link card holders with a linked NTUC Plus card to earn LinkPoints on EZ-Link purchases; spare change could also be used to top up EZ-Link cards when customers made cash payments at Cheers convenience stores; and EZ-Link acceptance was extended to NTUC FairPrice supermarkets and NTUC Unity pharmacies.[11] However, EZ-Link payments at FairPrice and Unity stores ceased on 3 May 2023 until further notice.[12] On 12 June 2024, EZ-Link acceptance was re-enabled at FairPrice, with a slow rollout over a small number of outlets initially.[13] In April 2018, the card also gained acceptance on NETS terminals in hawker centres across Singapore.[14] In September 2018, the EZ-Link card became part of a unified cashless payment system rolled out at 500 hawker stalls across Singapore.[15] In April 2019, EZ-Link announced it was working with Touch 'n Go to create a dual-currency cross-border card for public transport.[16] The card was launched on 17 August 2020.[17] In 2007, the Land Transport Authority (LTA) and the Singapore Tourism Board launched the Singapore Tourist Pass, produced by EZ-Link, to offer tourists unlimited rides on Singapore's public transport system.[18][19] In 2015, EZ-Link introduced 'EZ-Charms', trinkets with full EZ-Link functionality, such as the Hello Kitty EZ-Charms,[20] which received an overwhelming response.[21] In 2017, EZ-Link launched EZ-Link Wearables, wearable devices with full EZ-Link functionality, such as fitness trackers.[22] A trial to test the system was held from 29 August to 28 October 2008. The trial, which involved some 5,000 commuters, generated 1.7 million transactions and confirmed that the system was ready for revenue service. Developed in-house by the LTA, SeP is built on the Singapore Standard for Contactless ePurse Application (CEPAS), which allows any smart card that complies with the standard to be used with the system and in a wide variety of payment applications. With SeP, commuters were able to use cards issued by any card issuer for transit purposes as long as the card complied with the CEPAS standard and included the transit application. Commuters could eventually use CEPAS-compliant cards for Electronic Road Pricing (ERP) payments in vehicles fitted with the new-generation In-vehicle Unit (IU), at Electronic Parking System (EPS) carparks, and with other electronic payment systems that supported the CEPAS standard. During the free one-for-one exchange exercise in 2009, most commuters replaced their cards directly; others kept using their old cards until the stored value ran out, after which the old cards became collectors' items.
The new EZ-Link cards also have a higher storage capacity of S$500.00 instead of the previous S$100.00 limit, but most passengers keep to the $100 limit in case the card is lost.[23] The EZ-Link App is a mobile application developed by EZ-Link that is available on the Google Play Store and App Store. It was first released as an Android-exclusive app in 2013 under the name 'My EZ-Link Mobile App',[24] and is used for: On 9 March 2020, EZ-Link launched the EZ-Link Wallet, an e-wallet for mobile phones. Compared to the EZ-Link card, which is based on NFC, the EZ-Link Wallet is based on QR codes, bypassing the need for payment terminals and relying instead on smartphones and a printed QR code. It is compliant with the SGQR code system. An email address and local mobile number are required to register for an EZ-Link account. Users have to top up the e-wallet with a debit/credit card, and make payments by scanning the QR code at a retail shop and entering the payment amount. Payment can be authorised with either a 6-digit PIN or the phone's fingerprint scanner. Up to 6 debit/credit cards can be saved in the EZ-Link app.[29] Users can earn EZ-Link Rewards points for each digital wallet transaction, which can be used to redeem vouchers. The EZ-Link Wallet can also be used overseas at Alipay Connect-enabled merchants in Japan. The following payment networks are supported by the EZ-Link Wallet: SimplyGo was launched in March 2019 for Mastercard users as a separate account-based ticketing system allowing commuters to pay their public transport fares using bank cards.[31] SimplyGo expanded to Visa on 6 June[32] and NETS on 16 November.[33] When the system launched, Senior Parliamentary Secretary for Transport Baey Yam Keng said that SimplyGo was not intended to replace other payment methods such as EZ-Link.[34] In September 2020, a pilot program to expand the use of SimplyGo with EZ-Link adult cards was launched.[35] This was followed on 28 January 2021 by the rollout of account-based EZ-Link cards for adults. Commuters could also update their existing EZ-Link cards to the new system.[36][37] Concession cards were included in SimplyGo on 19 October 2022, with the option to upgrade student concession cards only available in 2023.[38] In March 2023, the Land Transport Authority (LTA) announced that it would merge the TransitLink SimplyGo and EZ-Link mobile apps into a single "SimplyGo" app.[2][39] On 15 June, EZ-Link Pte Ltd's (EZ-Link) and Transit Link Pte Ltd's (TransitLink) transit and travel card-related services were consolidated under the "SimplyGo" branding.[40] On 9 January 2024, LTA announced that EZ-Link cards that had not yet been upgraded to SimplyGo, as well as NETS FlashPay cards, would be deprecated on 1 June 2024.[41][42] By then, a majority of commuters were already using SimplyGo, and the existing card-based system was near the end of its operational lifespan. As it would also be costly to run both ticketing systems, the LTA decided to proceed with SimplyGo.[43] Many commuters expressed dissatisfaction with the change,[44] particularly the inability to see the fares charged at the transaction points on buses and the MRT after their cards were upgraded to SimplyGo.[43] When the issue was raised in 2023, the LTA explained that, as most of the SimplyGo features involve back-end processing, riders could not view their stored-value card balance and deductions at MRT fare gates and bus readers.
The fare transactions could only be viewed on the SimplyGo app.[45] The LTA said that while it would be possible to implement the feature for SimplyGo users, it would take "a few more seconds" for the information from the backend to be displayed at the fare gates, and hence would slow down commuters who were entering or exiting.[46] During the week after LTA's announcement, several commuters attempted to upgrade their EZ-Link cards to the SimplyGo platform. The high transaction volume caused the SimplyGo system to become less stable and responsive, resulting in longer processing times and failed upgrades that led to commuters' cards being invalidated.[47] On 19 January 2024, the SimplyGo upgrade feature on ticketing machines at MRT stations was restricted to "TUK with Supervision".[48] On 22 January, transport minister Chee Hong Tat announced that the LTA had reversed its decision and would extend the use of the card-based system. Those who had converted their cards to the new SimplyGo system during the January period could revert to the old system at no additional cost if they preferred.[49] Chee also acknowledged that the issues encountered during the transition could have been avoided "with better preparation". An additional S$40 million (US$28.99 million) would be invested to maintain both systems.[50] The EZ-Link card operates on a radio frequency (RF) interface of 13.56 MHz at 212 kbit/s, with the potential for communication speeds in excess of 847 kbit/s. It employs the Manchester bit coding scheme for noise tolerance against distance fluctuation between the card and the contactless reader, and implements the Triple DES algorithm for security. An adult EZ-Link card costs S$12, inclusive of a S$5 non-refundable card cost and $7 of card value.[51][52] There was a problem with commuters attempting to evade paying the full fare under the prior magnetic fare card system. Under the EZ-Link system, when users tap their card on the entry card reader, the system deducts the maximum fare payable from their bus stop to the end of the bus route. If they tap their card on the exit reader when they disembark, the system returns an amount based on the remaining bus stages to the end of the bus route. If they fail to tap the card on the exit reader when they disembark, the entry card reader will already have deducted the maximum fare payable to the end of the bus route.[53] EZ-Link card holders can top up their cards at the following places: A Refund Service Charge of $1 will be charged per month for EZ-Link cards that have been expired for two years or more, until the value is refunded or fully depleted. This applies to the remaining card balance, not the initial deposit or cost of the card, which is non-refundable. A refund may be requested at ticketing offices. In addition, commuters may replace expiring EZ-Link cards before 31 December 2024 at a subsidised cost of $3.[54] On 10 January 2024, LTA announced that EZ-Link adult cards which had not yet been upgraded to SimplyGo would no longer be accepted for public transport fare payment from 1 June 2024, due to the phasing out of the legacy card-based ticketing system.
Commuters with EZ-Link adult cards may upgrade to the SimplyGo system at any ticketing machine and retain their current cards.[55][56] The decision was reversed by the authorities on 22 January 2024 following significant backlash, and existing EZ-Link cards can continue to be used after 1 June 2024.[50]

• EZ-Link cards • Concession cards • EZ-Link Motoring cards (card-based offline debit)
✓ Can be used for retail and public transport payments, without remote management functionality.
✓ Commuters can see their fare cost and card balance at the gantry.
✓ The card-based offline debit EZ-Link cards and EZ-Link Motoring cards are compatible with dual-mode in-vehicle units for ERP and carpark payments.

The card-based offline debit EZ-Link cards were temporarily suspended from March 2022 to June 2024, to encourage adoption of the SimplyGo account-based system.[57][58] EZ-Link Motoring cards (with a non-account-based card profile and similar functionality) are still sold at 7-Eleven/Cheers convenience stores, selected Caltex petrol stations, Vicom centres and STA inspection centres. EZ-Link Motoring cards cannot be converted for use on the SimplyGo system.[59] (* a service fee is chargeable)

• SimplyGo EZ-Link cards • SimplyGo Concession cards (account-based online debit)
As the card information is stored on a central server, the card balance can be topped up without the physical card being present. Concession cards are only available for: children under 7 years old, students, full-time National Servicemen, senior citizens aged 60 years and above, persons with disabilities, and Workfare Income Supplement recipients.
✓ Compatible with the SimplyGo system for remote management of public transport cards.
✗ Fare cost and card balance are not displayed at the gantry. Commuters have to create an account and sign in to the SimplyGo website or app to view their travel history and related fares.
✗ These account-based online debit cards are not compatible with ERP and carpark payments.
• A locally issued Visa/Mastercard card is required to make top-ups.
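A simplified sketch of the entry/exit deduction rule described earlier for buses: tapping in deducts the maximum fare to the end of the route, and tapping out refunds the untravelled portion, so a missed tap-out forfeits the refund. The per-stage fare and route length below are invented for illustration and are not actual Singapore fares.

```python
FARE_PER_STAGE = 0.10   # hypothetical S$ per bus stage
LAST_STAGE = 20         # hypothetical end of the route

def tap_in(balance, boarding_stage):
    """Deduct the maximum fare payable from boarding to the route's end."""
    max_fare = (LAST_STAGE - boarding_stage) * FARE_PER_STAGE
    return balance - max_fare, max_fare

def tap_out(balance, max_fare, boarding_stage, alighting_stage):
    """Refund the stages not travelled; skipping this forfeits the refund."""
    actual_fare = (alighting_stage - boarding_stage) * FARE_PER_STAGE
    return balance + (max_fare - actual_fare)

balance, held = tap_in(10.00, boarding_stage=5)          # S$1.50 held
balance = tap_out(balance, held, 5, alighting_stage=10)  # S$1.00 refunded
print(f"S${balance:.2f}")   # S$9.50, i.e. a net fare of S$0.50
```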
https://en.wikipedia.org/wiki/EZ-link
In modular arithmetic, the integers coprime (relatively prime) to n from the set $\{0, 1, \dots, n-1\}$ of n non-negative integers form a group under multiplication modulo n, called the multiplicative group of integers modulo n. Equivalently, the elements of this group can be thought of as the congruence classes, also known as residues modulo n, that are coprime to n. Hence another name is the group of primitive residue classes modulo n. In the theory of rings, a branch of abstract algebra, it is described as the group of units of the ring of integers modulo n. Here units refers to elements with a multiplicative inverse, which, in this ring, are exactly those coprime to n. This group, usually denoted $(\mathbb{Z}/n\mathbb{Z})^\times$, is fundamental in number theory. It is used in cryptography, integer factorization, and primality testing. It is an abelian, finite group whose order is given by Euler's totient function: $|(\mathbb{Z}/n\mathbb{Z})^\times| = \varphi(n)$. For prime n the group is cyclic, and in general the structure is easy to describe, but no simple general formula for finding generators is known. It is a straightforward exercise to show that, under multiplication, the set of congruence classes modulo n that are coprime to n satisfies the axioms for an abelian group. Indeed, a is coprime to n if and only if gcd(a, n) = 1. Integers in the same congruence class a ≡ b (mod n) satisfy gcd(a, n) = gcd(b, n); hence one is coprime to n if and only if the other is. Thus the notion of congruence classes modulo n that are coprime to n is well-defined. Since gcd(a, n) = 1 and gcd(b, n) = 1 implies gcd(ab, n) = 1, the set of classes coprime to n is closed under multiplication. Integer multiplication respects the congruence classes; that is, a ≡ a' and b ≡ b' (mod n) implies ab ≡ a'b' (mod n). This implies that the multiplication is associative, commutative, and that the class of 1 is the unique multiplicative identity. Finally, given a, the multiplicative inverse of a modulo n is an integer x satisfying ax ≡ 1 (mod n). It exists precisely when a is coprime to n, because in that case gcd(a, n) = 1 and by Bézout's lemma there are integers x and y satisfying ax + ny = 1. Notice that the equation ax + ny = 1 implies that x is coprime to n, so the multiplicative inverse belongs to the group. The set of (congruence classes of) integers modulo n with the operations of addition and multiplication is a ring. It is denoted $\mathbb{Z}/n\mathbb{Z}$ or $\mathbb{Z}/(n)$ (the notation refers to taking the quotient of integers modulo the ideal $n\mathbb{Z}$ or $(n)$ consisting of the multiples of n). Outside of number theory the simpler notation $\mathbb{Z}_n$ is often used, though it can be confused with the p-adic integers when n is a prime number. The multiplicative group of integers modulo n, which is the group of units in this ring, may be written as (depending on the author) $(\mathbb{Z}/n\mathbb{Z})^\times$, $(\mathbb{Z}/n\mathbb{Z})^*$, $\mathrm{U}(\mathbb{Z}/n\mathbb{Z})$, $\mathrm{E}(\mathbb{Z}/n\mathbb{Z})$ (for German Einheit, which translates as unit), $\mathbb{Z}_n^*$, or similar notations. This article uses $(\mathbb{Z}/n\mathbb{Z})^\times$. The notation $\mathrm{C}_n$ refers to the cyclic group of order n. It is isomorphic to the group of integers modulo n under addition.
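The Bézout argument above is constructive: the extended Euclidean algorithm produces the inverse. A short Python sketch, enumerating the group for a small modulus and inverting an element (Python 3.8+ also exposes the same inverse directly as pow(a, -1, n)):

```python
from math import gcd

def extended_gcd(a, b):
    """Return (g, x, y) with a*x + b*y = g = gcd(a, b) (Bezout's lemma)."""
    if b == 0:
        return a, 1, 0
    g, x, y = extended_gcd(b, a % b)
    return g, y, x - (a // b) * y

def inverse_mod(a, n):
    """Multiplicative inverse of a modulo n; exists iff gcd(a, n) = 1."""
    g, x, _ = extended_gcd(a % n, n)
    if g != 1:
        raise ValueError(f"{a} is not a unit modulo {n}")
    return x % n

units = [a for a in range(1, 9) if gcd(a, 9) == 1]
print(units)                 # [1, 2, 4, 5, 7, 8], i.e. (Z/9Z)^x
print(inverse_mod(2, 9))     # 5, since 2*5 = 10 = 1 (mod 9)
print(pow(2, -1, 9))         # same result via the built-in
```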
Note that $\mathbb{Z}/n\mathbb{Z}$ or $\mathbb{Z}_n$ may also refer to the group under addition. For example, the multiplicative group $(\mathbb{Z}/p\mathbb{Z})^\times$ for a prime p is cyclic and hence isomorphic to the additive group $\mathbb{Z}/(p-1)\mathbb{Z}$, but the isomorphism is not obvious. The order of the multiplicative group of integers modulo n is the number of integers in $\{0, 1, \dots, n-1\}$ coprime to n. It is given by Euler's totient function: $|(\mathbb{Z}/n\mathbb{Z})^\times| = \varphi(n)$ (sequence A000010 in the OEIS). For prime p, $\varphi(p) = p - 1$. The group $(\mathbb{Z}/n\mathbb{Z})^\times$ is cyclic if and only if n is 1, 2, 4, $p^k$ or $2p^k$, where p is an odd prime and k > 0. For all other values of n the group is not cyclic.[1][2][3] This was first proved by Gauss.[4] This means that for these n: By definition, the group is cyclic if and only if it has a generator g; that is, the powers $g^0, g^1, g^2, \dots$ give all possible residues modulo n coprime to n (the first $\varphi(n)$ powers $g^0, \dots, g^{\varphi(n)-1}$ give each exactly once). A generator of $(\mathbb{Z}/n\mathbb{Z})^\times$ is called a primitive root modulo n.[5] If there is any generator, then there are $\varphi(\varphi(n))$ of them. Modulo 1 any two integers are congruent, i.e., there is only one congruence class, [0], coprime to 1. Therefore, $(\mathbb{Z}/1\mathbb{Z})^\times \cong \mathrm{C}_1$ is the trivial group with φ(1) = 1 element. Because of its trivial nature, the case of congruences modulo 1 is generally ignored and some authors choose not to include the case of n = 1 in theorem statements. Modulo 2 there is only one coprime congruence class, [1], so $(\mathbb{Z}/2\mathbb{Z})^\times \cong \mathrm{C}_1$ is the trivial group. Modulo 4 there are two coprime congruence classes, [1] and [3], so $(\mathbb{Z}/4\mathbb{Z})^\times \cong \mathrm{C}_2$, the cyclic group with two elements. Modulo 8 there are four coprime congruence classes, [1], [3], [5] and [7]. The square of each of these is 1, so $(\mathbb{Z}/8\mathbb{Z})^\times \cong \mathrm{C}_2 \times \mathrm{C}_2$, the Klein four-group. Modulo 16 there are eight coprime congruence classes, [1], [3], [5], [7], [9], [11], [13] and [15]. $\{\pm 1, \pm 7\} \cong \mathrm{C}_2 \times \mathrm{C}_2$ is the 2-torsion subgroup (i.e., the square of each element is 1), so $(\mathbb{Z}/16\mathbb{Z})^\times$ is not cyclic.
The powers of 3, $\{1, 3, 9, 11\}$, are a subgroup of order 4, as are the powers of 5, $\{1, 5, 9, 13\}$. Thus $(\mathbb{Z}/16\mathbb{Z})^\times \cong \mathrm{C}_2 \times \mathrm{C}_4$. The pattern shown by 8 and 16 holds[6] for higher powers $2^k$, k > 2: $\{\pm 1, 2^{k-1} \pm 1\} \cong \mathrm{C}_2 \times \mathrm{C}_2$ is the 2-torsion subgroup, so $(\mathbb{Z}/2^k\mathbb{Z})^\times$ cannot be cyclic, and the powers of 3 are a cyclic subgroup of order $2^{k-2}$, so $(\mathbb{Z}/2^k\mathbb{Z})^\times \cong \mathrm{C}_2 \times \mathrm{C}_{2^{k-2}}$. By the fundamental theorem of finite abelian groups, the group $(\mathbb{Z}/n\mathbb{Z})^\times$ is isomorphic to a direct product of cyclic groups of prime power orders. More specifically, the Chinese remainder theorem[7] says that if $n = p_1^{k_1} p_2^{k_2} p_3^{k_3} \dots$, then the ring $\mathbb{Z}/n\mathbb{Z}$ is the direct product of the rings corresponding to each of its prime power factors. Similarly, the group of units $(\mathbb{Z}/n\mathbb{Z})^\times$ is the direct product of the groups corresponding to each of the prime power factors. For each odd prime power $p^k$ the corresponding factor $(\mathbb{Z}/p^k\mathbb{Z})^\times$ is the cyclic group of order $\varphi(p^k) = p^k - p^{k-1}$, which may further factor into cyclic groups of prime-power orders. For powers of 2 the factor $(\mathbb{Z}/2^k\mathbb{Z})^\times$ is not cyclic unless k = 0, 1, 2, but factors into cyclic groups as described above. The order of the group, $\varphi(n)$, is the product of the orders of the cyclic groups in the direct product. The exponent of the group, that is, the least common multiple of the orders in the cyclic groups, is given by the Carmichael function $\lambda(n)$ (sequence A002322 in the OEIS). In other words, $\lambda(n)$ is the smallest number such that for each a coprime to n, $a^{\lambda(n)} \equiv 1 \pmod{n}$ holds. It divides $\varphi(n)$ and is equal to it if and only if the group is cyclic. If n is composite, there exists a possibly proper subgroup of $\mathbb{Z}_n^\times$, called the "group of false witnesses", comprising the solutions of the equation $x^{n-1} = 1$, the elements which, raised to the power n − 1, are congruent to 1 modulo n.[8] Fermat's little theorem states that for n = p a prime, this group consists of all $x \in \mathbb{Z}_p^\times$; thus for n composite, such residues x are "false positives" or "false witnesses" for the primality of n. The number x = 2 is most often used in this basic primality check, and n = 341 = 11 × 31 is notable since $2^{341-1} \equiv 1 \pmod{341}$, and n = 341 is the smallest composite number for which x = 2 is a false witness to primality. In fact, the false witnesses subgroup for 341 contains 100 elements, and is of index 3 inside the 300-element group $\mathbb{Z}_{341}^\times$. The smallest example with a nontrivial subgroup of false witnesses is 9 = 3 × 3. There are 6 residues coprime to 9: 1, 2, 4, 5, 7, 8.
Since 8 is congruent to −1 modulo 9, it follows that $8^8$ is congruent to 1 modulo 9. So 1 and 8 are false positives for the "primality" of 9 (since 9 is not actually prime). These are in fact the only ones, so the subgroup {1, 8} is the subgroup of false witnesses. The same argument shows that n − 1 is a "false witness" for any odd composite n. For n = 91 (= 7 × 13), there are $\varphi(91) = 72$ residues coprime to 91, half of them (i.e., 36 of them) false witnesses of 91, namely 1, 3, 4, 9, 10, 12, 16, 17, 22, 23, 25, 27, 29, 30, 36, 38, 40, 43, 48, 51, 53, 55, 61, 62, 64, 66, 68, 69, 74, 75, 79, 81, 82, 87, 88, and 90, since for these values of x, $x^{90}$ is congruent to 1 mod 91. n = 561 (= 3 × 11 × 17) is a Carmichael number, thus $s^{560}$ is congruent to 1 modulo 561 for any integer s coprime to 561. The subgroup of false witnesses is, in this case, not proper; it is the entire group of multiplicative units modulo 561, which consists of 320 residues. This table shows the cyclic decomposition of $(\mathbb{Z}/n\mathbb{Z})^\times$ and a generating set for n ≤ 128. The decomposition and generating sets are not unique; for example, $(\mathbb{Z}/35\mathbb{Z})^\times \cong (\mathbb{Z}/5\mathbb{Z})^\times \times (\mathbb{Z}/7\mathbb{Z})^\times \cong \mathrm{C}_4 \times \mathrm{C}_6 \cong \mathrm{C}_4 \times \mathrm{C}_2 \times \mathrm{C}_3 \cong \mathrm{C}_2 \times \mathrm{C}_{12} \cong (\mathbb{Z}/4\mathbb{Z})^\times \times (\mathbb{Z}/13\mathbb{Z})^\times \cong (\mathbb{Z}/52\mathbb{Z})^\times$ (but $\not\cong \mathrm{C}_{24} \cong \mathrm{C}_8 \times \mathrm{C}_3$). The table below lists the shortest decomposition (among those, the lexicographically first is chosen – this guarantees isomorphic groups are listed with the same decompositions). The generating set is also chosen to be as short as possible, and for n with a primitive root, the smallest primitive root modulo n is listed. For example, take $(\mathbb{Z}/20\mathbb{Z})^\times$. Then $\varphi(20) = 8$ means that the order of the group is 8 (i.e., there are 8 numbers less than 20 and coprime to it); $\lambda(20) = 4$ means the order of each element divides 4; that is, the fourth power of any number coprime to 20 is congruent to 1 (mod 20). The set {3, 19} generates the group, which means that every element of $(\mathbb{Z}/20\mathbb{Z})^\times$ is of the form $3^a \times 19^b$ (where a is 0, 1, 2, or 3, because the element 3 has order 4, and similarly b is 0 or 1, because the element 19 has order 2). The smallest primitive root mod n (0 if no root exists) and the number of elements in a minimal generating set mod n are also tabulated. The Disquisitiones Arithmeticae has been translated from Gauss's Ciceronian Latin into English and German. The German edition includes all of his papers on number theory: all the proofs of quadratic reciprocity, the determination of the sign of the Gauss sum, the investigations into biquadratic reciprocity, and unpublished notes.
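The quantities discussed above – element orders, the Carmichael exponent λ(n), smallest primitive roots (with the same 0-if-none convention), and false witnesses – can all be checked by brute force for small n. A sketch (fine for small moduli, not how these are computed at cryptographic sizes):

```python
from math import gcd, lcm

def units(n):
    return [a for a in range(1, n) if gcd(a, n) == 1]

def order(a, n):
    """Multiplicative order of a modulo n, for a coprime to n."""
    k, x = 1, a % n
    while x != 1:
        x = x * a % n
        k += 1
    return k

def carmichael(n):
    """lambda(n): the exponent of (Z/nZ)^x, the lcm of element orders."""
    return lcm(*(order(a, n) for a in units(n)))

def primitive_root(n):
    """Smallest generator of (Z/nZ)^x, or 0 if the group is not cyclic."""
    phi = len(units(n))
    return next((g for g in units(n) if order(g, n) == phi), 0)

def false_witnesses(n):
    """Units x with x^(n-1) = 1 (mod n), although n is composite."""
    return [x for x in units(n) if pow(x, n - 1, n) == 1]

print(len(units(20)), carmichael(20), primitive_root(20))  # 8 4 0
print(len(false_witnesses(91)))    # 36, half of phi(91) = 72
print(len(false_witnesses(341)))   # 100, index 3 in the 300-element group
print(len(false_witnesses(561)))   # 320: a Carmichael number, every unit
```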
https://en.wikipedia.org/wiki/Multiplicative_group_of_integers_modulo_n
A biological network is a method of representing systems as complex sets of binary interactions or relations between various biological entities.[1] In general, networks or graphs are used to capture relationships between entities or objects.[1] A typical graph representation consists of a set of nodes connected by edges. As early as 1736, Leonhard Euler analyzed a real-world issue known as the Seven Bridges of Königsberg, which established the foundation of graph theory. From the 1930s to the 1950s, the study of random graphs was developed. During the mid-1990s, it was discovered that many different types of "real" networks have structural properties quite different from random networks.[2] In the late 2000s, scale-free and small-world networks began shaping the emergence of systems biology, network biology, and network medicine.[3] In 2014, graph-theoretical methods were used by Frank Emmert-Streib to analyze biological networks.[4] In the 1980s, researchers started viewing DNA or genomes as the dynamic storage of a language system with precise computable finite states represented as a finite-state machine.[5] Recent complex systems research has also suggested some far-reaching commonality in the organization of information in problems from biology, computer science, and physics. Protein-protein interaction networks (PINs) represent the physical relationships among proteins present in a cell, where proteins are nodes and their interactions are undirected edges.[6] Due to their undirected nature, it is difficult to identify all the proteins involved in an interaction. Protein–protein interactions (PPIs) are essential to cellular processes and are also the most intensely analyzed networks in biology. PPIs can be discovered by various experimental techniques, among which the yeast two-hybrid system is a commonly used technique for the study of binary interactions.[7] Recently, high-throughput studies using mass spectrometry have identified large sets of protein interactions.[8] Many international efforts have resulted in databases that catalog experimentally determined protein-protein interactions. Some of them are the Human Protein Reference Database, the Database of Interacting Proteins, the Molecular Interaction Database (MINT),[9] IntAct,[10] and BioGRID.[11] At the same time, multiple computational approaches have been proposed to predict interactions.[12] FunCoup and STRING are examples of such databases, where protein-protein interactions inferred from multiple lines of evidence are gathered and made available for public use.[citation needed] Recent studies have indicated the conservation of molecular networks through deep evolutionary time.[13] Moreover, it has been discovered that proteins with high degrees of connectedness are more likely to be essential for survival than proteins with lesser degrees.[14] This observation suggests that the overall composition of the network (not simply interactions between protein pairs) is vital for an organism's overall functioning. The genome encodes thousands of genes whose products (mRNAs, proteins) are crucial to the various processes of life, such as cell differentiation, cell survival, and metabolism. Genes produce such products through a process called transcription, which is regulated by a class of proteins called transcription factors. For instance, the human genome encodes almost 1,500 DNA-binding transcription factors that regulate the expression of more than 20,000 human genes.[15] The complete set of gene products and the interactions among them constitutes gene regulatory networks (GRN).
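Such a network is naturally modeled as a directed graph. A minimal sketch using the networkx library, with a made-up three-gene circuit in which edge attributes record activation versus inhibition (the gene names and interactions are invented for illustration):

```python
import networkx as nx

# Toy gene regulatory network: nodes are genes/transcription factors,
# directed edges carry the sign of the regulation.
grn = nx.DiGraph()
grn.add_edge("TF_A", "gene_B", effect="activates")
grn.add_edge("TF_A", "gene_C", effect="inhibits")
grn.add_edge("gene_B", "TF_A", effect="inhibits")   # a feedback loop

for src, dst, data in grn.edges(data=True):
    print(f"{src} {data['effect']} {dst}")

# Regulators of a given gene are its in-neighbors.
print(list(grn.predecessors("gene_B")))    # ['TF_A']
```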
GRNs regulate the levels of gene products within the cell and, in turn, the cellular processes. GRNs are represented with genes and transcription factors as nodes and the relationships between them as edges. These edges are directional, representing the regulatory relationship between the two ends of the edge. For example, a directed edge from gene A to gene B indicates that A regulates the expression of B. Directional edges can thus represent not only the promotion of gene expression but also its inhibition. GRNs are usually constructed using the gene regulation knowledge available from databases such as Reactome and KEGG. High-throughput measurement technologies, such as microarray, RNA-Seq, ChIP-chip, and ChIP-seq, have enabled the accumulation of large-scale transcriptomics data, which can help in understanding complex gene regulation patterns.[16][17] Gene co-expression networks can be perceived as association networks between variables that measure transcript abundances. These networks have been used to provide a systems-biologic analysis of DNA microarray data, RNA-seq data, miRNA data, etc. Weighted gene co-expression network analysis is extensively used to identify co-expression modules and intramodular hub genes.[18] Co-expression modules may correspond to cell types or pathways, while highly connected intramodular hubs can be interpreted as representatives of their respective modules. Cells break down food and nutrients into small molecules necessary for cellular processing through a series of biochemical reactions. These biochemical reactions are catalyzed by enzymes. The complete set of all these biochemical reactions in all the pathways represents the metabolic network. Within the metabolic network, the small molecules take the roles of nodes, and they can be carbohydrates, lipids, or amino acids. The reactions which convert these small molecules from one form to another are represented as edges. It is possible to use network analyses to infer how selection acts on metabolic pathways.[19] Signals are transduced within cells or between cells and thus form complex signaling networks, which play a key role in tissue structure. For instance, the MAPK/ERK pathway is transduced from the cell surface to the cell nucleus by a series of protein-protein interactions, phosphorylation reactions, and other events.[20] Signaling networks typically integrate protein–protein interaction networks, gene regulatory networks, and metabolic networks.[21][22] Single-cell sequencing technologies allow the extraction of inter-cellular signaling; an example is NicheNet, which allows modeling of intercellular communication by linking ligands to target genes.[23] The complex interactions in the brain make it a perfect candidate to apply network theory. Neurons in the brain are deeply connected with one another, and this results in complex networks being present in the structural and functional aspects of the brain.[24] For instance, small-world network properties have been demonstrated in connections between cortical regions of the primate brain[25] or during swallowing in humans.[26] This suggests that cortical areas of the brain are not directly interacting with each other, but most areas can be reached from all others through only a few interactions. All organisms are connected through feeding interactions. If a species eats or is eaten by another species, they are connected in an intricate food web of predator and prey interactions.
The stability of these interactions has been a long-standing question in ecology.[27] That is to say, if certain individuals are removed, what happens to the network (i.e., does it collapse or adapt)? Network analysis can be used to explore food web stability and determine whether certain network properties result in more stable networks. Moreover, network analysis can be used to determine how selective removals of species will influence the food web as a whole.[28] This is especially important considering the potential species loss due to global climate change. In biology, pairwise interactions have historically been the focus of intense study. With the recent advances in network science, it has become possible to scale up pairwise interactions to include individuals of many species involved in many sets of interactions, in order to understand the structure and function of larger ecological networks.[29] The use of network analysis can allow for both the discovery and understanding of how these complex interactions link together within the system's network, a property that has previously been overlooked. This powerful tool allows for the study of various types of interactions (from competitive to cooperative) using the same general framework.[30] For example, plant-pollinator interactions are mutually beneficial and often involve many different species of pollinators as well as many different species of plants. These interactions are critical to plant reproduction and thus the accumulation of resources at the base of the food chain for primary consumers, yet these interaction networks are threatened by anthropogenic change. The use of network analysis can illuminate how pollination networks work and may, in turn, inform conservation efforts.[31] Within pollination networks, nestedness (i.e., specialists interact with a subset of the species that generalists interact with), redundancy (i.e., most plants are pollinated by many pollinators), and modularity play a large role in network stability.[31][32] These network properties may actually work to slow the spread of disturbance effects through the system and potentially buffer the pollination network from anthropogenic changes somewhat.[32] More generally, the structure of species interactions within an ecological network can tell us something about the diversity, richness, and robustness of the network.[33] Researchers can even compare current constructions of species interaction networks with historical reconstructions of ancient networks to determine how networks have changed over time.[34] Much research into these complex species interaction networks is highly concerned with understanding what factors (e.g., species richness, connectance, nature of the physical environment) lead to network stability.[35] Network analysis provides the ability to quantify associations between individuals, which makes it possible to infer details about the network as a whole at the species and/or population level.[36] One of the most attractive features of the network paradigm is that it provides a single conceptual framework in which the social organization of animals at all levels (individual, dyad, group, population) and for all types of interaction (aggressive, cooperative, sexual, etc.) can be studied.[37] Researchers interested in ethology across many taxa, from insects to primates, are starting to incorporate network analysis into their research.
Researchers interested in social insects (e.g., ants and bees) have used network analyses to better understand the division of labor, task allocation, and foraging optimization within colonies.[38][39][40] Other researchers are interested in how specific network properties at the group and/or population level can explain individual-level behaviors. Studies have demonstrated how animal social network structure can be influenced by factors ranging from characteristics of the environment to characteristics of the individual, such as developmental experience and personality. At the level of the individual, the patterning of social connections can be an important determinant of fitness, predicting both survival and reproductive success. At the population level, network structure can influence the patterning of ecological and evolutionary processes, such as frequency-dependent selection and disease and information transmission.[41] For instance, a study on wire-tailed manakins (a small passerine bird) found that a male's degree in the network largely predicted the ability of the male to rise in the social hierarchy (i.e., eventually obtain a territory and matings).[42] In bottlenose dolphin groups, an individual's degree and betweenness centrality values may predict whether or not that individual will exhibit certain behaviors, like the use of side flopping and upside-down lobtailing to lead group traveling efforts; individuals with high betweenness values are more connected and can obtain more information, and thus are better suited to lead group travel and therefore tend to exhibit these signaling behaviors more than other group members.[43]

Social network analysis can also be used to describe the social organization within a species more generally, which frequently reveals important proximate mechanisms promoting the use of certain behavioral strategies. These descriptions are frequently linked to ecological properties (e.g., resource distribution). For example, network analyses revealed subtle differences in the group dynamics of two related equid fission-fusion species, Grevy's zebra and onagers, living in variable environments; Grevy's zebras show distinct preferences in their association choices when they fission into smaller groups, whereas onagers do not.[44] Similarly, researchers interested in primates have also utilized network analyses to compare social organizations across the diverse primate order, suggesting that using network measures (such as centrality, assortativity, modularity, and betweenness) may be useful in explaining the types of social behaviors seen within certain groups and not others.[45]

Finally, social network analysis can also reveal important fluctuations in animal behaviors across changing environments. For example, network analyses in female chacma baboons (Papio hamadryas ursinus) revealed important dynamic changes across seasons that were previously unknown; instead of creating stable, long-lasting social bonds with friends, baboons were found to exhibit more variable relationships which were dependent on short-term contingencies related to group-level dynamics as well as environmental variability.[46] Changes in an individual's social network environment can also influence characteristics such as 'personality': for example, social spiders that huddle with bolder neighbors tend to increase in boldness themselves.[47] This is a very small set of broad examples of how researchers can use network analysis to study animal behavior.
Research in this area is currently expanding very rapidly, especially since the broader development of animal-borne tags and computer vision can be used to automate the collection of social associations.[48] Social network analysis is a valuable tool for studying animal behavior across all animal species and has the potential to uncover new information about animal behavior and social ecology that was previously poorly understood.

Within a nucleus, DNA is constantly in motion. Perpetual actions such as genome folding and cohesin extrusion morph the shape of a genome in real time. The spatial location of strands of chromatin relative to each other plays an important role in the activation or suppression of certain genes. DNA-DNA chromatin networks help biologists to understand these interactions by analyzing commonalities among different loci. The size of a network can vary significantly, from a few genes to several thousand, and thus network analysis can provide vital support in understanding relationships among different areas of the genome. As an example, analysis of spatially similar loci within the organization of a nucleus with Genome Architecture Mapping (GAM) can be used to construct a network of loci with edges representing highly linked genomic regions.

The first graphic showcases the Hist1 region of the mm9 mouse genome, with each node representing genomic loci. Two nodes are connected by an edge if their linkage disequilibrium is greater than the average across all 81 genomic windows. The locations of the nodes within the graphic are randomly selected, and the methodology of choosing edges yields a simple but rudimentary graphical representation of the relationships in the dataset. The second visual presents the same information; however, the network starts with every locus placed sequentially in a ring configuration. It then pulls nodes together using linear interpolation by their linkage as a percentage. The figure illustrates strong connections between the central genomic windows as well as the edge loci at the beginning and end of the Hist1 region.

To draw useful information from a biological network, an understanding of the statistical and mathematical techniques for identifying relationships within a network is vital. Procedures to identify association, communities, and centrality within nodes in a biological network can provide insight into the relationships of whatever the nodes represent, whether genes, species, etc. Formulation of these methods transcends disciplines and relies heavily on graph theory, computer science, and bioinformatics.

There are many different ways to measure the relationships of nodes when analyzing a network. In many cases, the measure used to find nodes that share similarity within a network is specific to the application. One of the measures that biologists utilize is correlation, which centers on the linear relationship between two variables.[49] As an example, weighted gene co-expression network analysis uses Pearson correlation to analyze linked gene expression and understand genetics at a systems level.[50] Another measure of correlation is linkage disequilibrium.
Linkage disequilibrium describes the non-random association of genetic sequences among loci on a given chromosome.[51] An example of its use is in detecting relationships in GAM data across genomic intervals based upon detection frequencies of certain loci.[52]

The concept of centrality can be extremely useful when analyzing biological network structures. There are many different methods to measure centrality, such as betweenness, degree, eigenvector, and Katz centrality. Every type of centrality technique can provide different insights on nodes in a particular network; however, they all share the commonality that they measure the prominence of a node in a network.[53] In 2005, researchers at Harvard Medical School utilized centrality measures with the yeast protein interaction network. They found that proteins exhibiting high betweenness centrality were more essential and corresponded closely to a given protein's evolutionary age.[54]

Studying the community structure of a network by subdividing groups of nodes into like-regions can be an integral tool for bioinformatics when exploring data as a network.[55] A food web of the Secaucus High School Marsh exemplifies the benefits of grouping, as the relationships between nodes are far easier to analyze with well-made communities. While the first graphic is hard to visualize, the second provides a better view of the pockets of highly connected feeding relationships that would be expected in a food web.

Community detection remains an active problem. Scientists and graph theorists continuously discover new ways of subsectioning networks, and thus a plethora of different algorithms exist for creating these relationships.[56] Like many other tools that biologists utilize to understand data with network models, every algorithm can provide its own unique insight and may vary widely on aspects such as accuracy or the time complexity of calculation. In 2002, a food web of marine mammals in the Chesapeake Bay was divided into communities by biologists using a community detection algorithm based on neighbors of nodes with high degree centrality. The resulting communities displayed a sizable split between pelagic and benthic organisms.[57] Two very common community detection algorithms for biological networks are the Louvain method and the Leiden algorithm.

The Louvain method is a greedy algorithm that attempts to maximize modularity, which favors heavy edges within communities and sparse edges between them, within a set of nodes. The algorithm starts with each node in its own community; each node is then iteratively moved to whichever neighboring community yields the highest modularity gain.[58][59] Once no modularity increase can occur by joining nodes to a community, a new weighted network is constructed with communities as nodes, edges representing between-community edges, and loops representing edges within a community. The process continues until no increase in modularity occurs.[60] While the Louvain method provides good community detection, it is limited in a few ways. By focusing mainly on maximizing a given measure of modularity, it may be led to craft badly connected communities, degrading a model for the sake of maximizing a modularity metric; however, the Louvain method performs fairly well and is easy to understand compared to many other community detection algorithms.[59]

The Leiden algorithm expands on the Louvain method by providing a number of improvements. When joining nodes to a community, only neighborhoods that have been recently changed are considered.
This greatly improves the speed of merging nodes. Another optimization is in the refinement phase, in which the algorithm randomly chooses a community, from a set of candidate communities, for a node to merge with. This allows for greater depth in choosing communities, whereas the Louvain method focuses solely on maximizing the chosen modularity objective. The Leiden algorithm, while more complex than the Louvain method, performs faster, with better community detection, and can be a valuable tool for identifying groups.[59]

Network motifs, statistically significant recurring interaction patterns within a network, are a commonly used tool for understanding biological networks. A major use case of network motifs is in neurophysiology, where motif analysis is commonly used to understand interconnected neuronal functions at varying scales.[61] As an example, in 2017, researchers at Beijing Normal University analyzed highly represented 2- and 3-node network motifs in directed functional brain networks constructed from resting-state fMRI data to study the basic mechanisms of information flow in the brain.[62]
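To make the community-detection discussion above concrete, here is a minimal sketch using the Louvain implementation that ships with recent releases of NetworkX (networkx.community.louvain_communities); the toy food-web edges are invented for illustration, and betweenness centrality is included as one of the centrality measures discussed earlier:

```python
import networkx as nx

# Toy undirected network standing in for a small food web (edges invented).
G = nx.Graph()
G.add_edges_from([
    ("algae", "zooplankton"), ("zooplankton", "minnow"),
    ("minnow", "bass"), ("bass", "heron"),
    ("algae", "snail"), ("snail", "minnow"),
    ("detritus", "worm"), ("worm", "sandpiper"), ("sandpiper", "heron"),
])

# Louvain: greedy modularity maximization, as described above.
communities = nx.community.louvain_communities(G, seed=42)
for i, c in enumerate(communities):
    print(f"community {i}: {sorted(c)}")

# Betweenness centrality, one measure of node prominence in the network.
print(nx.betweenness_centrality(G))
```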
https://en.wikipedia.org/wiki/Biological_network
Henry Robinson Palmer (1795–1844) was a British civil engineer who designed the world's second monorail and the first elevated railway. He is also credited as the inventor of corrugated metal roofing, which is one of the world's major building materials.

A son of Samuel Palmer and his wife Elizabeth Walker, Henry Robinson Palmer was born in Hackney, east London. He served a five-year apprenticeship with the mechanical engineer Bryan Donkin from 1811, where he also became a skilled draughtsman. He was then taken on by Thomas Telford, for whom he worked for some seven years, rising to become his chief assistant. He carried out numerous surveys for Telford, including the Knaresborough Canal and Railway, Burnham Marshes, Archway Road London, Portishead Harbour, and the Isles of Scilly. He may have acted as the resident engineer for the Loose Hill and Valley Road improvement scheme, which he surveyed in 1820.[1]

In 1821 he obtained a patent for a monorail system, which was deemed at the time to be somewhat impractical, but his work on resistance to tractive effort was much more important. In 1825 he visited the Stockton and Darlington Railway and the Hetton colliery railway. On the latter he carried out a series of tests on behalf of Telford, to measure the amount of resistance that horses and locomotives had to overcome to move their loads. Further tests were carried out with boats on a number of canals, including the Ellesmere Canal, the Mersey and Irwell Navigation and the Grand Junction Canal. The results of these experiments were quoted in Parliament, for instance when navigation interests opposed the bill to authorise the Liverpool and Manchester Railway.[1]

By 1825, Palmer was keen to set up on his own. The Kentish Railway had employed Telford as its engineer when the scheme was launched in 1824, but after he withdrew, Palmer took over, surveying a route that ran from Dover to Woolwich via Strood and Erith, with a section north of the River Thames beyond Woolwich. Reservations about a bridge over the River Medway, and about what to do at Woolwich, meant that funding did not materialise, and the scheme folded before a full survey was completed. He was approached again in the 1830s, when there were plans to revive the scheme. Also in 1825, he was asked to survey the Norfolk and Suffolk Railway scheme, but when he submitted an invoice for £1,573 in August 1826, there were not enough subscribers to meet such an expense. He received about half of the money, and the scheme was abandoned.[2]

He had more success with the scheme to extend the Eastern Dock in London. This had been designed by William Chapman, and Palmer took over as resident engineer when the original incumbent, J W Hemingway, died in 1825. He had overall responsibility for the works after the works supervisor Daniel Asher Alexander retired in 1828. The project involved construction of the dock, warehousing, an entrance lock and basins at Shadwell, and some swing bridges. The works were substantially finished by 1833, but by the time he left the company in 1835, there were issues with the entrance lock walls, which were resolved by George Rennie and John Smeaton. There were suggestions that he was too busy with other work to provide adequate supervision.[3]

During his career, he seems to have carried out a great many surveys, but saw few of the schemes through to completion. Notable exceptions were improvements to Penzance harbour from 1836 to 1839, work on Ipswich Docks from 1837 to 1842, and two Welsh schemes in 1840, at Port Talbot Harbour and Swansea Bridge.
He was again involved in a railway route to Dover, giving evidence to the Engineer in 1836 and organising surveys conducted by others, but he was in poor health, and William Cubitt was appointed engineer when the Act of Parliament was obtained. The Port Talbot harbour scheme was hampered by inadequate capital, and Palmer diverted flood water to scour out the channel, as he could not employ sufficient labourers. The Ipswich Docks scheme was his design, which he supervised until 1842, when he retired, leaving his resident engineer G Horwood to complete the work.[4]

Although Palmer was a prolific surveyor, he left few major schemes by which his work can be assessed. However, his main enduring legacy is the Institution of Civil Engineers (ICE), which he founded in 1818. He was keen on self-improvement, and set up a mechanics institute when he was working in Bermondsey, between 1813 and 1814. This led him to plan a grander scheme, in which young engineers could discuss engineering issues and learn from one another. With several other young engineers, he held the inaugural meeting of the ICE on 2 January 1818, and the aims and objectives which he laid out have stood the test of time, with only the upper age limit being relaxed. This allowed Telford to become president, with Palmer as vice-president, and the institution prospered under their joint leadership.[5][6] In his capacity as vice-president, he represented the Institution at the laying of the first stone of Ipswich Docks.[7]

He died in 1844, just two years after retiring. All of his papers, including more than 400 drawings, were given to the ICE by his widow, but were subsequently lost.[6]

Palmer made a patent application in 1821 for an elevated single rail supported on a series of pillars, ordinarily spaced ten feet apart and inserted into conical apertures in the ground, with carriages suspended on both sides, hanging on two wheels placed one before the other. A horse was connected to the carriage with a towing rope, proceeding on one side of the rail on a towing path. There was an earlier monorail in Russia, of which Palmer was unaware. By 1823 George Smart had set up a trial version of Palmer's monorail.[8]

Palmer wrote in the study presenting his system: "the charge of carrying the raw material to the manufacturing district, and the manufactured article to the market, forming no small proportion of its price to the consumer. [...] The leading problem in our present subject is, to convey any given quantity of weight between two points at the least possible expense. [...] In order to retain a perfectly smooth and hard surface, unencumbered with extraneous obstacles to which the rails near the ground are exposed, it appeared desirable to elevate the surface from the reach of those obstacles and at the same time be released from the impediments occasioned by snows in the winter season."[9]

The first horse-powered elevated monorail started operating at Cheshunt on 25 June 1825. Although designed to transport materials, it was here used to carry passengers.[10] In 1826 the German railway pioneer Friedrich Harkort had a demonstration track of Palmer's system built by his steel factory in Elberfeld, one of the main towns in the early industrialised region of the Wupper Valley. Palmer's monorail can be regarded as the precursor of the Lartigue Monorail and of the Schwebebahn Wuppertal.
In his study, Palmer gives one of the earliest descriptions of the principle of containerisation: "The arrangement also enables us to continue a conveyance by other means with very little interruption, as it is evident that the receptacles may be received from the one, and lodged on to another kind of carriage or vessel separately from the wheels and frame work, without displacing the goods".
https://en.wikipedia.org/wiki/Henry_Robinson_Palmer
A path (or filepath, file path, pathname, or similar) is a text string that uniquely specifies an item in a hierarchical file system. Generally, a path is composed of directory names, special directory specifiers and, optionally, a filename, separated by delimiting text. The delimiter varies by operating system and in theory can be anything, but popular, modern systems use the slash (/), the backslash (\), or the colon (:).

A path can be either relative or absolute. A relative path includes information that is relative to a particular directory, whereas an absolute path indicates a location relative to the system root directory and therefore does not depend on context the way a relative path does. Often, a relative path is relative to the working directory. For example, in the command ls f, f is a relative path to the file with that name in the working directory.

Paths are used extensively in computer science to represent the directory/file relationships common in modern operating systems and are essential in the construction of uniform resource locators (URLs).

Multics first introduced a hierarchical file system with directories (separated by ">") in the mid-1960s.[1]

Around 1970, Unix introduced the slash character ("/") as its directory separator. Originally, MS-DOS did not support directories, but when the feature was added, using the Unix standard of slash was not a good option, since many existing commands used the slash as the switch prefix, for example dir /w. (In contrast, Unix uses the dash - as the switch prefix.) MS-DOS version 2.0 therefore used the backslash \ as the path delimiter, since it is similar to the slash but did not conflict with existing commands. This convention continued into Windows and its shell, Command Prompt. Eventually, PowerShell, which is slash-agnostic, was introduced to Windows, allowing the use of either slash in a path.[2][3]

Path syntax varies by operating system and shell.

Japanese and Korean versions of Windows often display the '¥' character or the '₩' character instead of the directory separator; in such cases the code for a backslash is drawn as these characters. Very early versions of MS-DOS replaced the backslash with these glyphs on the display to make it possible to display them by programs that only understood 7-bit ASCII (other characters such as the square brackets were replaced as well; see ISO 646, Windows Codepage 932 (Japanese Shift JIS), and Codepage 949 (Korean)). Although even the first version of Windows supported the 8-bit ISO-8859-1 character set, which has the yen sign at U+00A5, and modern versions of Windows support Unicode, which has the won sign at U+20A9, much software will continue to display backslashes found in ASCII files this way to preserve backward compatibility.[8]

macOS, as a derivative of UNIX, uses UNIX paths internally.
However, to preserve compatibility for software and familiarity for users, many portions of the GUI switch "/" typed by the user to ":" internally, and switch them back when displaying filenames (a ":" entered by the user is also changed into "/", but the inverse translation does not happen).

Programming languages also use paths, for example when a file is opened. Most programming languages use the path representation of the underlying operating system. This direct access to the operating system paths can hinder the portability of programs. To support portable programs, Java uses File.separator to distinguish between / and \ separated paths. Seed7 takes a different approach to path representation: in Seed7 all paths use the Unix path convention, independent of the operating system. Under Windows a mapping takes place (e.g., the path /c/users is mapped to c:\users).

The Microsoft universal naming convention (UNC), a.k.a. uniform naming convention, a.k.a. network path, specifies a syntax to describe the location of a network resource, such as a shared file, directory, or printer. A UNC path has the general form \\ComputerName\SharedFolder\Resource. Some Windows interfaces allow or require UNC syntax for WebDAV share access, rather than a URL. The UNC syntax is extended[9] with optional components to denote use of SSL and a TCP/IP port number: a WebDAV URL of http[s]://HostName[:Port]/SharedFolder/Resource becomes \\HostName[@SSL][@Port]\SharedFolder\Resource. When viewed remotely, the "SharedFolder" may have a name different from what a program on the server sees when opening "\SharedFolder"; instead, the SharedFolder name consists of an arbitrary name assigned to the folder when defining its "sharing". Some Windows interfaces also accept the "Long UNC" form \\?\UNC\ComputerName\SharedFolder\Resource.

Windows uses several types of paths, including drive-letter paths, UNC paths, and device paths. In versions of Windows prior to Windows XP, only the APIs that accept "long" device paths could accept more than 260 characters. The shell in Windows XP and Windows Vista, explorer.exe, allows path names up to 248 characters long.[citation needed]

Since UNCs start with two backslashes, and the backslash is also used for string escaping and in regular expressions, this can result in extreme cases of leaning toothpick syndrome: an escaped string for a regular expression matching a UNC begins with 8 backslashes (\\\\\\\\), because the string and the regular expression both require escaping. This can be simplified by using raw strings, as in C#'s @"\\\\" or Python's r'\\\\', or regular expression literals, as in Perl's qr{\\\\}.

Most Unix-like systems use a similar syntax.[13] POSIX allows treating a path beginning with two slashes in an implementation-defined manner,[14] though in other cases systems must treat multiple slashes as single slashes.[15] Many applications on Unix-like systems (for example, scp, rcp, and rsync) use resource definitions such as hostname:/directorypath/resource, or URI schemes with the service name (here 'smb'), such as smb://hostname/directorypath/resource.

The following examples are for typical, Unix-based file systems. Given that the working directory is /home/mark/ and that it contains the subdirectory bobapples, relative paths to the subdirectory include ./bobapples and bobapples, and the absolute path is /home/mark/bobapples. A command to change the working directory to the subdirectory is cd bobapples. If the working directory were /home/jo, then the relative path ../mark/bobapples would specify the subdirectory: the double dots .. indicate a move up the directory hierarchy one level to /home, and the rest indicates moving down to mark and then bobapples.

The Windows API accepts the slash as a path delimiter. Unlike Unix, which always has a single root directory, a Windows file system has a root for each storage drive.
An absolute path includes a drive letter or uses the UNC format. A path starting with \\?\ does not support slashes.[4]

A:\Temp\File.txt is an absolute path that specifies a file named File.txt in the directory Temp, which is in the root of drive A:.

C:..\File.txt is a relative path that specifies the file File.txt located in the parent of the working directory on drive C:.

Folder\SubFolder\File.txt is a relative path that specifies the file File.txt in the directory SubFolder, which is in the directory Folder, which is in the working directory of the current drive.

File.txt is a relative path that specifies File.txt in the working directory.

\\.\COM1 specifies the first serial port, COM1.

A path with forward slashes may need to be surrounded by double quotes to disambiguate it from command-line switches. For example, dir /windows is invalid, but dir "/windows" is valid; cd is more lenient, allowing cd /windows.
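The path distinctions described above can be illustrated with Python's standard pathlib and re modules; a short sketch (file and server names invented):

```python
import re
from pathlib import PureWindowsPath, PurePosixPath

# Relative vs. absolute paths (names invented for illustration).
posix_rel = PurePosixPath("bobapples")
posix_abs = PurePosixPath("/home/mark/bobapples")
print(posix_rel.is_absolute(), posix_abs.is_absolute())  # False True

# Windows paths: a root per drive; pathlib accepts / or \ as the delimiter.
win = PureWindowsPath(r"C:\Temp\File.txt")
print(win.drive, win.parts)                       # C: ('C:\\', 'Temp', 'File.txt')
print(PureWindowsPath("C:/Temp/File.txt") == win)  # True

# Leaning toothpick syndrome: matching a leading UNC "\\" in a regex needs
# 8 backslashes in an ordinary string literal, but only 4 in a raw string.
unc = r"\\server\share\file.txt"
print(re.match("\\\\\\\\", unc) is not None)  # True
print(re.match(r"\\\\", unc) is not None)     # True
```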
https://en.wikipedia.org/wiki/Path_(computing)
In cryptographic protocol design, cryptographic agility or crypto-agility is the ability to switch between multiple cryptographic primitives. A cryptographically agile system implementing a particular standard can choose which combination of primitives to use. The primary goal of cryptographic agility is to enable rapid adaptation of new cryptographic primitives and algorithms without making disruptive changes to the system's infrastructure.

Cryptographic agility acts as a safety measure or an incident-response mechanism for when a cryptographic primitive of a system is discovered to be vulnerable.[1] A security system is considered crypto-agile if its cryptographic algorithms or parameters can be replaced with ease and the process is at least partly automated.[2][3] The impending arrival of a quantum computer that can break existing asymmetric cryptography is raising awareness of the importance of cryptographic agility.[4][5]

The X.509 public key certificate illustrates crypto-agility. A public key certificate has cryptographic parameters including key type, key length, and a hash algorithm. NIST found that X.509 v3 certificates using the RSA key type with a 1024-bit key length and the SHA-1 hash algorithm had a key length that made them vulnerable to attacks, prompting the transition to SHA-2.[6]

With the rise of secure transport-layer communication at the end of the 1990s, cryptographic primitives and algorithms have become increasingly popular; by 2019, more than 80% of all websites employed some form of security measures.[7] Furthermore, cryptographic techniques are widely incorporated to protect applications and business transactions. However, as cryptographic algorithms are deployed, research into their security intensifies, and new attacks against cryptographic primitives (old and new alike) are discovered. Crypto-agility tries to tackle the implied threat to information security by allowing swift deprecation of vulnerable primitives and their replacement with new ones.

This threat is not merely theoretical; many algorithms that were once considered secure (DES, 512-bit RSA, RC4) are now known to be vulnerable, some even to amateur attackers. On the other hand, new algorithms (AES, elliptic curve cryptography) are often both more secure and faster than old ones. Systems designed to meet crypto-agility criteria are expected to be less affected should current primitives be found vulnerable, and may enjoy better latency or battery usage by using new and improved primitives.

For example, quantum computing, if feasible, is expected to be able to defeat existing public-key cryptography algorithms. The overwhelming majority of existing public-key infrastructure relies on the computational hardness of problems such as integer factorization and discrete logarithms (which includes elliptic-curve cryptography as a special case). Quantum computers running Shor's algorithm can solve these problems exponentially faster than the best-known algorithms for conventional computers.[8] Post-quantum cryptography is the subfield of cryptography that aims to replace quantum-vulnerable algorithms with new ones that are believed to be hard to break even for a quantum computer. The main families of post-quantum alternatives to factoring and discrete logarithms include lattice-based cryptography, multivariate cryptography, hash-based cryptography, and code-based cryptography.

System evolution and crypto-agility are not the same. System evolution progresses on the basis of emerging business and technical requirements.
Crypto-agility is related instead to computing infrastructure and requires consideration by security experts, system designers, and application developers.[9] Best practices for dealing with crypto-agility have been published.[10]

Cryptographic agility typically increases the complexity of the applications that rely on it. Developers need to build support for each of the optional cryptographic primitives, introducing more code and increasing the chance of implementation flaws as well as increasing maintenance and support costs.[14] Users of such systems need to select which primitives they wish to use; for example, OpenSSL users can select from dozens of ciphersuites when using TLS.[15] Further, when two parties negotiate the cryptographic primitives for their message exchange, this creates the opportunity for downgrade attacks by intermediaries (such as POODLE), or for the selection of insecure primitives.[16]

One alternative approach is to dramatically limit the choices available to both developers and users, so that there is less scope for implementation or configuration flaws.[17] In this approach, the designers of the library or system choose the primitives and do not offer a choice of cryptographic primitives (or, if they do, offer only a very constrained set of choices). Such opinionated encryption is visible in tools like Libsodium, where high-level APIs explicitly aim to discourage developers from picking primitives, and in WireGuard, where single primitives are picked to intentionally eliminate crypto-agility.[18]

If opinionated encryption is used and a vulnerability is discovered in one of the primitives in a protocol, there is no way to substitute better primitives. Instead, the solution is to use versioned protocols: a new version of the protocol includes the fixed primitive. As a consequence, two parties running different versions of the protocol will not be able to communicate.
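As an illustrative sketch of an agile design (not drawn from the cited references), the snippet below keeps a versioned registry of hash primitives from Python's standard hashlib module, so a weakened algorithm can be deprecated by bumping the preferred version while callers remain unchanged:

```python
import hashlib

# Versioned registry of hash primitives; all are real hashlib algorithms.
HASH_REGISTRY = {
    1: hashlib.sha1,      # legacy, kept only to verify old data
    2: hashlib.sha256,    # current default
    3: hashlib.sha3_256,  # candidate replacement
}
PREFERRED_VERSION = 2  # bump this to migrate; calling code stays unchanged

def digest(data: bytes, version: int = PREFERRED_VERSION) -> tuple[int, str]:
    """Hash `data`, tagging the output with the primitive's version."""
    return version, HASH_REGISTRY[version](data).hexdigest()

def verify(data: bytes, tagged: tuple[int, str]) -> bool:
    """Verify against whichever primitive produced the stored digest."""
    version, expected = tagged
    return HASH_REGISTRY[version](data).hexdigest() == expected

tag = digest(b"hello")
print(tag, verify(b"hello", tag))  # still verifies after PREFERRED_VERSION changes
```

Tagging each digest with a version number is the same idea as the versioned protocols described above: old data remains verifiable while new data uses the current primitive.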
https://en.wikipedia.org/wiki/Cryptographic_agility
Segmented regression, also known as piecewise regression or broken-stick regression, is a method in regression analysis in which the independent variable is partitioned into intervals and a separate line segment is fit to each interval. Segmented regression analysis can also be performed on multivariate data by partitioning the various independent variables. Segmented regression is useful when the independent variables, clustered into different groups, exhibit different relationships between the variables in these regions. The boundaries between the segments are breakpoints.

Segmented linear regression is segmented regression whereby the relations in the intervals are obtained by linear regression. Segmented linear regression with two segments separated by a breakpoint can be useful to quantify an abrupt change in the response function (Yr) of a varying influential factor (x). The breakpoint can be interpreted as a critical, safe, or threshold value beyond or below which (un)desired effects occur. The breakpoint can be important in decision making.[1] The figures illustrate some of the results and regression types obtainable.

A segmented regression analysis is based on the presence of a set of (y, x) data, in which y is the dependent variable and x the independent variable. The least squares method, applied separately to each segment so that the two regression lines fit the data set as closely as possible while minimizing the sum of squares of the differences (SSD) between observed (y) and calculated (Yr) values of the dependent variable, yields one regression equation per segment. The data may show many types of trends,[2] see the figures. The method also yields two correlation coefficients (R), one per segment.

In the determination of the most suitable trend, statistical tests must be performed to ensure that the trend is reliable (significant). When no significant breakpoint can be detected, one must fall back on a regression without a breakpoint.

For the blue figure at the right, which gives the relation between yield of mustard (Yr = Ym, t/ha) and soil salinity (x = Ss, expressed as the electric conductivity of the soil solution EC in dS/m), it is found that:[3]

BP = 4.93, A1 = 0, K1 = 1.74, A2 = −0.129, K2 = 2.38, R1² = 0.0035 (insignificant), R2² = 0.395 (significant),

i.e. Ym = 1.74 for Ss < 4.93 and Ym = −0.129 Ss + 2.38 for Ss > 4.93, indicating that soil salinities < 4.93 dS/m are safe and soil salinities > 4.93 dS/m reduce the yield at 0.129 t/ha per unit increase of soil salinity. The figure also shows confidence intervals and uncertainty, as elaborated hereunder.

Statistical tests are used to determine the type of trend. In addition, use is made of the correlation coefficient of all data (Ra), the coefficient of determination or coefficient of explanation, confidence intervals of the regression functions, and ANOVA analysis.[5]

The coefficient of determination for all data (Cd), which is to be maximized under the conditions set by the significance tests, is found from Cd = 1 − Σ(y − Yr)² / Σ(y − Ya)², where Yr is the expected (predicted) value of y according to the former regression equations and Ya is the average of all y values. The Cd coefficient ranges from 0 (no explanation at all) to 1 (full explanation, perfect match). In a pure, unsegmented linear regression, the values of Cd and Ra² are equal. In a segmented regression, Cd needs to be significantly larger than Ra² to justify the segmentation. The optimal value of the breakpoint may be found such that the Cd coefficient is maximized.
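The least-squares breakpoint search described above can be sketched in a few lines of NumPy (an illustration on synthetic data, not code from the cited references): each candidate breakpoint splits the data, a line is fitted to each segment, and the breakpoint minimizing the total SSD is kept.

```python
import numpy as np

def two_segment_fit(x, y, candidates):
    """Grid search for the breakpoint BP minimizing the summed SSD of
    two independent least-squares lines (one per segment)."""
    best = None
    for bp in candidates:
        left, right = x < bp, x >= bp
        if left.sum() < 2 or right.sum() < 2:
            continue
        ssd, fits = 0.0, []
        for m in (left, right):
            k, a = np.polyfit(x[m], y[m], 1)  # slope, intercept
            ssd += np.sum((y[m] - (k * x[m] + a)) ** 2)
            fits.append((k, a))
        if best is None or ssd < best[0]:
            best = (ssd, bp, fits)
    return best  # (SSD, breakpoint, [(slope, intercept) per segment])

# Synthetic yield-vs-salinity-like data: flat response, then linear decline.
rng = np.random.default_rng(1)
x = np.linspace(0, 10, 60)
y = np.where(x < 5, 1.7, 1.7 - 0.13 * (x - 5)) + rng.normal(0, 0.05, x.size)
ssd, bp, fits = two_segment_fit(x, y, candidates=np.linspace(1, 9, 81))
print(f"breakpoint ~ {bp:.2f}, segment slopes: {[round(f[0], 3) for f in fits]}")
```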
Segmented regression is often used to detect the range over which an explanatory variable (X) has no effect on the dependent variable (Y), while beyond that range there is a clear response, be it positive or negative. The range of no effect may be found at the initial part of the X domain or, conversely, at its last part. For the "no effect" analysis, application of the least squares method for segmented regression analysis[6] may not be the most appropriate technique, because the aim is rather to find the longest stretch over which the Y-X relation can be considered to possess zero slope, while beyond that stretch the slope is significantly different from zero; knowledge about the best value of this slope is not material. The method to find the no-effect range is progressive partial regression[7] over the range, extending the range in small steps until the regression coefficient becomes significantly different from zero.

In the next figure the breakpoint is found at X = 7.9, while for the same data (see the blue figure above for mustard yield) the least squares method yields a breakpoint only at X = 4.9. The latter value is lower, but the fit of the data beyond the breakpoint is better. Hence, it will depend on the purpose of the analysis which method needs to be employed.
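A minimal sketch of the progressive partial regression just described, using scipy.stats.linregress on synthetic data (the step size and significance level are illustrative choices):

```python
import numpy as np
from scipy.stats import linregress

def no_effect_range(x, y, alpha=0.05, min_points=5):
    """Extend the fitted range from the left in small steps until the
    regression slope becomes significantly different from zero."""
    order = np.argsort(x)
    x, y = x[order], y[order]
    last_ok = None
    for end in range(min_points, len(x) + 1):
        res = linregress(x[:end], y[:end])
        if res.pvalue < alpha:   # slope now significant: stop extending
            break
        last_ok = x[end - 1]     # longest stretch with ~zero slope so far
    return last_ok

# Synthetic data: no effect up to x = 6, then a negative response.
rng = np.random.default_rng(2)
x = np.linspace(0, 10, 80)
y = np.where(x < 6, 0.0, -0.3 * (x - 6)) + rng.normal(0, 0.1, x.size)
print("no-effect range extends to about x =", no_effect_range(x, y))
```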
https://en.wikipedia.org/wiki/Segmented_regression
Autoassociative memory, also known as auto-association memory or an autoassociation network, is any type of memory that is able to retrieve a piece of data from only a tiny sample of itself. Such memories are very effective in de-noising or removing interference from the input and can be used to determine whether the given input is "known" or "unknown". In artificial neural networks, examples include the variational autoencoder, the denoising autoencoder, and the Hopfield network. In reference to computer memory, the idea of associative memory is also referred to as content-addressable memory (CAM). The net is said to recognize a "known" vector if the net produces a pattern of activation on the output units which is the same as one of the vectors stored in it.

Standard memories (data storage) are organized by being indexed by positional memory addresses, which are also used for data retrieval. Autoassociative memories are organized in such a way that data is stored in a graph-like system, with connection weights based on the number of inherent associative connections between two memories; this makes it possible to query the memory using a memory already contained in it as the query key, retrieving that memory and closely connected memories at the same time. Hopfield networks[1] have been shown[2] to act as autoassociative memory since they are capable of remembering data by observing a portion of that data.

In some cases, an auto-associative net does not reproduce a stored pattern the first time around, but if the result of the first showing is input to the net again, the stored pattern is reproduced.[3] Such nets are of three further kinds: the recurrent linear auto-associator,[4] the Brain-State-in-a-Box net,[5] and the discrete Hopfield net. The Hopfield network is the most well-known example of an autoassociative memory. Hopfield networks serve as content-addressable ("associative") memory systems with binary threshold nodes, and they have been shown to act as autoassociative memories since they are capable of remembering data by observing a portion of that data.[2]

Heteroassociative memories, on the other hand, can recall an associated piece of data from one category upon presentation of data from another category; for example, the associative recall may be a transformation from the pattern "banana" to the different pattern "monkey".[6] Bidirectional associative memories (BAM)[7] are artificial neural networks that have long been used for performing heteroassociative recall. As an illustration, short sentence fragments can be sufficient for most English-speaking adults to recall the missing information, demonstrating the capability of autoassociative networks to recall the whole by using some of its parts.
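A minimal Hopfield-style sketch in NumPy illustrates autoassociative recall: two bipolar patterns are stored with the Hebbian outer-product rule, and one is recovered from a corrupted probe (a toy illustration, not drawn from the cited references):

```python
import numpy as np

def train_hopfield(patterns):
    """Hebbian outer-product rule; zero diagonal (no self-connections)."""
    n = patterns.shape[1]
    W = sum(np.outer(p, p) for p in patterns) / n
    np.fill_diagonal(W, 0.0)
    return W

def recall(W, probe, steps=10):
    """Iterate s <- sgn(W s): repeated presentation, as described above."""
    s = probe.copy()
    for _ in range(steps):
        s = np.where(W @ s >= 0, 1.0, -1.0)
    return s

# Two stored bipolar patterns of length 8.
patterns = np.array([[1, -1, 1, -1, 1, -1, 1, -1],
                     [1, 1, 1, 1, -1, -1, -1, -1]], dtype=float)
W = train_hopfield(patterns)

probe = patterns[0].copy()
probe[:2] *= -1  # corrupt two entries ("a tiny sample of itself")
print(np.array_equal(recall(W, probe), patterns[0]))  # True: pattern recovered
```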
https://en.wikipedia.org/wiki/Autoassociative_memory
Diffeomorphometry is the metric study of imagery, shape and form in the discipline of computational anatomy (CA) in medical imaging. The study of images in computational anatomy relies on high-dimensional diffeomorphism groups $\varphi \in \operatorname{Diff}_V$, which generate orbits of the form $\mathcal{I} \doteq \{\varphi \cdot I \mid \varphi \in \operatorname{Diff}_V\}$, in which the images $I \in \mathcal{I}$ can be dense scalar magnetic resonance or computed axial tomography images. For deformable shapes these are the collections of manifolds $\mathcal{M} \doteq \{\varphi \cdot M \mid \varphi \in \operatorname{Diff}_V\}$: points, curves and surfaces. The diffeomorphisms move the images and shapes through the orbit according to $(\varphi, I) \mapsto \varphi \cdot I$, which is defined as the group action of computational anatomy.

The orbit of shapes and forms is made into a metric space by inducing a metric on the group of diffeomorphisms. The study of metrics on groups of diffeomorphisms and of metrics between manifolds and surfaces has been an area of significant investigation.[1][2][3][4][5][6][7][8][9] In computational anatomy, the diffeomorphometry metric measures how close to or far from each other two shapes or images are. Informally, the metric is constructed by defining a flow of diffeomorphisms $\phi_t$, $t \in [0,1]$, $\phi_t \in \operatorname{Diff}_V$, which connects one group element to another, so that for $\varphi, \psi \in \operatorname{Diff}_V$ one has $\phi_0 = \varphi$, $\phi_1 = \psi$. The metric between two coordinate systems or diffeomorphisms is then the length of the shortest, or geodesic, flow connecting them. The metric on the space associated to the geodesics is given by

$$\rho(\varphi, \psi) = \inf_{\phi :\, \phi_0 = \varphi,\, \phi_1 = \psi} \int_0^1 \|\dot{\phi}_t\|_{\phi_t}\, dt.$$

The metrics on the orbits $\mathcal{I}, \mathcal{M}$ are inherited from the metric induced on the diffeomorphism group. The group $\operatorname{Diff}_V$ is thus made into a smooth Riemannian manifold with Riemannian metric $\|\cdot\|_\varphi$ associated to the tangent spaces at all $\varphi \in \operatorname{Diff}_V$. The Riemannian metric satisfies, at every point $\phi \in \operatorname{Diff}_V$ of the manifold, that there is an inner product inducing the norm $\|\dot{\phi}_t\|_{\phi_t}$ on the tangent space, varying smoothly across $\operatorname{Diff}_V$.

Oftentimes, the familiar Euclidean metric is not directly applicable because the patterns of shapes and images do not form a vector space. In the Riemannian orbit model of computational anatomy, the diffeomorphisms acting on the forms, $\varphi \cdot I \in \mathcal{I}$, $\varphi \cdot M \in \mathcal{M}$, $\varphi \in \operatorname{Diff}_V$, do not act linearly. There are many ways to define metrics; for the sets associated to shapes, the Hausdorff metric is another. The method used here to induce the Riemannian metric is to induce the metric on the orbit of shapes by defining it in terms of the metric length between diffeomorphic coordinate-system transformations of the flows.
Measuring the lengths of the geodesic flow between coordinate systems in the orbit of shapes is called diffeomorphometry. The diffeomorphisms in computational anatomy are generated to satisfy the Lagrangian and Eulerian specification of the flow fields $\varphi_t$, $t \in [0,1]$, generated via the ordinary differential equation

$$\dot{\varphi}_t = v_t \circ \varphi_t, \quad \varphi_0 = \operatorname{id},$$

with the Eulerian vector fields $v \doteq (v_1, v_2, v_3)$ in $\mathbb{R}^3$, $v_t = \dot{\varphi}_t \circ \varphi_t^{-1}$, $t \in [0,1]$. The inverse of the flow is given by $\frac{d}{dt}\varphi_t^{-1} = -(D\varphi_t^{-1}) v_t$, $\varphi_0^{-1} = \operatorname{id}$, and the $3 \times 3$ Jacobian matrix for flows in $\mathbb{R}^3$ is given as $D\varphi \doteq \left(\frac{\partial \varphi_i}{\partial x_j}\right)$.

To ensure smooth flows of diffeomorphisms with inverse, the vector fields on $\mathbb{R}^3$ must be at least once continuously differentiable in space,[10][11] and are modelled as elements of the Hilbert space $(V, \|\cdot\|_V)$ using the Sobolev embedding theorems, so that each component $v_i \in H_0^3$, $i = 1, 2, 3$, has 3 square-integrable derivatives; this implies that $(V, \|\cdot\|_V)$ embeds smoothly in the once continuously differentiable functions.[10][11] The diffeomorphism group consists of flows whose vector fields are absolutely integrable in the Sobolev norm, $\int_0^1 \|v_t\|_V \, dt < \infty$.

Shapes in computational anatomy (CA) are studied via the use of diffeomorphic mapping for establishing correspondences between anatomical coordinate systems. In this setting, 3-dimensional medical images are modelled as diffeomorphic transformations of some exemplar, termed the template $I_{\mathrm{temp}}$, so that the observed images are elements of the random orbit model of CA. For images these are defined as $I \in \mathcal{I} \doteq \{I = I_{\mathrm{temp}} \circ \varphi,\ \varphi \in \operatorname{Diff}_V\}$, with charts representing sub-manifolds denoted as $\mathcal{M} \doteq \{\varphi \cdot M_{\mathrm{temp}} : \varphi \in \operatorname{Diff}_V\}$.

The orbit of shapes and forms in computational anatomy is generated by the group action, $\mathcal{I} \doteq \{\varphi \cdot I : \varphi \in \operatorname{Diff}_V\}$, $\mathcal{M} \doteq \{\varphi \cdot M : \varphi \in \operatorname{Diff}_V\}$. These are made into Riemannian orbits by introducing a metric associated to each point and its tangent space; for this, a metric is defined on the group, which induces the metric on the orbit. Take as the metric for computational anatomy, at each element $\varphi \in \operatorname{Diff}_V$ of the group of diffeomorphisms, the norm of the tangent vector fields, modelled to be in a Hilbert space with norm $\|\cdot\|_V$. We model $V$ as a reproducing kernel Hilbert space (RKHS) defined by a one-to-one differential operator $A : V \to V^*$, where $V^*$ is the dual space.
In general, $\sigma \doteq Av \in V^*$ is a generalized function or distribution; the linear form associated to the inner product and norm for generalized functions is interpreted by integration by parts: for $v, w \in V$,

$$\langle v, w \rangle_V \doteq \int Av \cdot w \, dx.$$

When $Av \doteq \mu \, dx$, a vector density,

$$\int Av \cdot v \, dx \doteq \int \mu \cdot v \, dx = \int \sum_{i=1}^{3} \mu_i v_i \, dx.$$

The differential operator is selected so that the Green's kernel associated to the inverse is sufficiently smooth that the vector fields support one continuous derivative. The Sobolev embedding theorem arguments were made in demonstrating that one continuous derivative is required for smooth flows. The Green's operator generated from the Green's function (scalar case) associated to the differential operator smooths. For a proper choice of $A$, $(V, \|\cdot\|_V)$ is an RKHS with the operator $K = A^{-1} : V^* \to V$. The Green's kernels associated to the differential operator smooth, since for controlling enough derivatives in the square-integrable sense the kernel $k(\cdot,\cdot)$ is continuously differentiable in both variables.

The metric on the group of diffeomorphisms is defined by the distance between pairs of elements of the group, the geodesic distance $\rho(\varphi, \psi)$ given above. This distance provides a right-invariant metric of diffeomorphometry,[12][13][14] invariant to reparameterization of space, since for all $\phi \in \operatorname{Diff}_V$,

$$\rho(\varphi, \psi) = \rho(\varphi \circ \phi, \psi \circ \phi).$$

The distance on images,[15] $d_{\mathcal{I}} : \mathcal{I} \times \mathcal{I} \to \mathbb{R}^+$, and the distance on shapes and forms,[16] $d_{\mathcal{M}} : \mathcal{M} \times \mathcal{M} \to \mathbb{R}^+$, are inherited from this group distance.

For calculating the metric, the geodesics are a dynamical system: the flow of coordinates $t \mapsto \phi_t \in \operatorname{Diff}_V$ and the control, the vector field $t \mapsto v_t \in V$, are related via $\dot{\phi}_t = v_t \cdot \phi_t$, $\phi_0 = \operatorname{id}$. The Hamiltonian view[17][18][19][20][21] reparameterizes the momentum distribution $Av \in V^*$ in terms of the Hamiltonian momentum, a Lagrange multiplier $p : \dot{\phi} \mapsto (p \mid \dot{\phi})$ constraining the Lagrangian velocity $\dot{\phi}_t = v_t \circ \phi_t$. The Pontryagin maximum principle[17] gives the Hamiltonian $H(\phi_t, p_t) \doteq \max_v H(\phi_t, p_t, v)$. The optimizing vector field is $v_t \doteq \operatorname{argmax}_v H(\phi_t, p_t, v)$, with dynamics

$$\dot{\phi}_t = \frac{\partial H(\phi_t, p_t)}{\partial p}, \qquad \dot{p}_t = -\frac{\partial H(\phi_t, p_t)}{\partial \phi}.$$

Along the geodesic the Hamiltonian is constant:[22] $H(\phi_t, p_t) = H(\operatorname{id}, p_0) = \frac{1}{2} \int_X p_0 \cdot v_0 \, dx$.
The metric distance between coordinate systems connected via the geodesic is determined by the induced distance between the identity and the group element. For landmarks, $x_i$, $i = 1, \dots, n$, the Hamiltonian momentum is supported on the landmark points, with Hamiltonian dynamics of the corresponding form (see the sketch below). The metric between landmarks is

$$d^2 = \sum_i p_0(i) \cdot \sum_j K(x_i, x_j)\, p_0(j).$$

The dynamics associated to these geodesics is shown in the accompanying figure.

For surfaces, the Hamiltonian momentum is defined across the surface, with an associated Hamiltonian and dynamics; for volumes there is an analogous Hamiltonian with dynamics.

A number of software suites contain implementations of a variety of diffeomorphic mapping algorithms.
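The landmark equations referred to above were lost in extraction; the following is their standard form in the large-deformation (LDDMM) literature, given here as a sketch under the notation above, with $q(i)$ the landmark trajectories, $p(i)$ their momenta, and $K$ the Green's kernel of the operator $A$:

```latex
% Standard landmark Hamiltonian of the LDDMM literature (a sketch; not
% verbatim from this article). q(i): landmark positions, p(i): momenta,
% K: Green's kernel of the operator A.
H(q,p) \;=\; \frac{1}{2} \sum_{i=1}^{n} \sum_{j=1}^{n}
             p(i)^{\top} K\big(q(i), q(j)\big)\, p(j),
\qquad
\dot{q}(i) \;=\; \sum_{j=1}^{n} K\big(q(i), q(j)\big)\, p(j),
\qquad
\dot{p}(i) \;=\; -\,\nabla_{q(i)} H(q,p).
```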
https://en.wikipedia.org/wiki/Diffeomorphometry
L1-norm principal component analysis (L1-PCA) is a general method for multivariate data analysis.[1] L1-PCA is often preferred over standard L2-norm principal component analysis (PCA) when the analyzed data may contain outliers (faulty values or corruptions), as it is believed to be robust.[2][3][4]

Both L1-PCA and standard PCA seek a collection of orthogonal directions (principal components) that define a subspace wherein data representation is maximized according to the selected criterion.[5][6][7] Standard PCA quantifies data representation as the aggregate of the L2-norm of the data-point projections into the subspace, or equivalently the aggregate Euclidean distance of the original points from their subspace-projected representations. L1-PCA uses instead the aggregate of the L1-norm of the data-point projections into the subspace.[8] In PCA and L1-PCA, the number of principal components (PCs) is lower than the rank of the analyzed matrix, which coincides with the dimensionality of the space defined by the original data points. Therefore, PCA and L1-PCA are commonly employed for dimensionality reduction for the purpose of data denoising or compression. Among the advantages of standard PCA that contributed to its high popularity are low-cost computational implementation by means of singular-value decomposition (SVD)[9] and statistical optimality when the data set is generated by a true multivariate normal data source. However, in modern big data sets, data often include corrupted, faulty points, commonly referred to as outliers.[10] Standard PCA is known to be sensitive to outliers, even when they appear as a small fraction of the processed data.[11] The reason is that the L2-norm formulation of L2-PCA places squared emphasis on the magnitude of each coordinate of each data point, ultimately overemphasizing peripheral points, such as outliers. On the other hand, following an L1-norm formulation, L1-PCA places linear emphasis on the coordinates of each data point, effectively restraining outliers.[12]

Consider any matrix $\mathbf{X} = [\mathbf{x}_1, \mathbf{x}_2, \ldots, \mathbf{x}_N] \in \mathbb{R}^{D \times N}$ consisting of $N$ $D$-dimensional data points. Define $r = \operatorname{rank}(\mathbf{X})$. For integer $K$ such that $1 \leq K < r$, L1-PCA is formulated as:[1]

$$\max_{\mathbf{Q} = [\mathbf{q}_1, \ldots, \mathbf{q}_K] \in \mathbb{R}^{D \times K}} \|\mathbf{X}^\top \mathbf{Q}\|_1 \quad \text{subject to} \quad \mathbf{Q}^\top \mathbf{Q} = \mathbf{I}_K. \qquad (1)$$

For $K = 1$, (1) simplifies to finding the L1-norm principal component (L1-PC) of $\mathbf{X}$ by

$$\max_{\mathbf{q} \in \mathbb{R}^{D}} \|\mathbf{X}^\top \mathbf{q}\|_1 \quad \text{subject to} \quad \|\mathbf{q}\|_2 = 1. \qquad (2)$$

In (1)-(2), the L1-norm $\|\cdot\|_1$ returns the sum of the absolute entries of its argument and the L2-norm $\|\cdot\|_2$ returns the square root of the sum of the squared entries of its argument.
If one substitutes $\|\cdot\|_1$ in (1) by the Frobenius/L2-norm $\|\cdot\|_F$, then the problem becomes standard PCA and is solved by the matrix $\mathbf{Q}$ that contains the $K$ dominant singular vectors of $\mathbf{X}$ (i.e., the singular vectors that correspond to the $K$ highest singular values).

The maximization metric in (1) can be expanded as

$$\|\mathbf{X}^\top \mathbf{Q}\|_1 = \sum_{k=1}^{K} \sum_{n=1}^{N} |\mathbf{x}_n^\top \mathbf{q}_k|. \qquad (3)$$

For any matrix $\mathbf{A} \in \mathbb{R}^{m \times n}$ with $m \geq n$, define $\Phi(\mathbf{A})$ as the nearest (in the L2-norm sense) matrix to $\mathbf{A}$ that has orthonormal columns; that is, define

$$\Phi(\mathbf{A}) = \operatorname{argmin}_{\mathbf{Q} \in \mathbb{R}^{m \times n}} \|\mathbf{A} - \mathbf{Q}\|_F \quad \text{subject to} \quad \mathbf{Q}^\top \mathbf{Q} = \mathbf{I}_n. \qquad (4)$$

The Procrustes theorem[13][14] states that if $\mathbf{A}$ has SVD $\mathbf{U}_{m \times n} \boldsymbol{\Sigma}_{n \times n} \mathbf{V}_{n \times n}^\top$, then $\Phi(\mathbf{A}) = \mathbf{U} \mathbf{V}^\top$.

Markopoulos, Karystinos, and Pados[1] showed that if $\mathbf{B}_{\text{BNM}}$ is the exact solution to the binary nuclear-norm maximization (BNM) problem

$$\max_{\mathbf{B} \in \{\pm 1\}^{N \times K}} \|\mathbf{X} \mathbf{B}\|_*^2, \qquad (5)$$

then

$$\mathbf{Q}_{\text{L1}} = \Phi(\mathbf{X} \mathbf{B}_{\text{BNM}}) \qquad (6)$$

is the exact solution to L1-PCA in (1). The nuclear norm $\|\cdot\|_*$ in (5) returns the summation of the singular values of its matrix argument and can be calculated by means of standard SVD. Moreover, it holds that, given the solution to L1-PCA, $\mathbf{Q}_{\text{L1}}$, the solution to BNM can be obtained as

$$\mathbf{B}_{\text{BNM}} = \operatorname{sgn}(\mathbf{X}^\top \mathbf{Q}_{\text{L1}}), \qquad (7)$$

where $\operatorname{sgn}(\cdot)$ returns the $\{\pm 1\}$-sign matrix of its matrix argument (with no loss of generality, we can consider $\operatorname{sgn}(0) = 1$). In addition, it follows that $\|\mathbf{X}^\top \mathbf{Q}_{\text{L1}}\|_1 = \|\mathbf{X} \mathbf{B}_{\text{BNM}}\|_*$. BNM in (5) is a combinatorial problem over antipodal binary variables. Therefore, its exact solution can be found through exhaustive evaluation of all $2^{NK}$ elements of its feasibility set, with asymptotic cost $\mathcal{O}(2^{NK})$. Therefore, L1-PCA can also be solved, through BNM, with cost $\mathcal{O}(2^{NK})$ (exponential in the product of the number of data points with the number of sought-after components).
It turns out that L1-PCA can be solved optimally (exactly) with polynomial complexity in $N$ for fixed data dimension $D$, with cost $\mathcal{O}(N^{rK-K+1})$.[1]

For the special case of $K=1$ (single L1-PC of $\mathbf{X}$), BNM takes the binary-quadratic-maximization (BQM) form

\[ \max_{\mathbf{b}\in\{\pm1\}^{N}} \mathbf{b}^\top\mathbf{X}^\top\mathbf{X}\mathbf{b}. \tag{8} \]

The transition from (5) to (8) for $K=1$ holds true, since the unique singular value of $\mathbf{X}\mathbf{b}$ is equal to $\|\mathbf{X}\mathbf{b}\|_2=\sqrt{\mathbf{b}^\top\mathbf{X}^\top\mathbf{X}\mathbf{b}}$, for every $\mathbf{b}$. Then, if $\mathbf{b}_{\text{BNM}}$ is the solution to BQM in (8), it holds that

\[ \mathbf{q}_{\text{L1}}=\Phi(\mathbf{X}\mathbf{b}_{\text{BNM}})=\frac{\mathbf{X}\mathbf{b}_{\text{BNM}}}{\|\mathbf{X}\mathbf{b}_{\text{BNM}}\|_2} \tag{9} \]

is the exact L1-PC of $\mathbf{X}$, as defined in (2). In addition, it holds that $\mathbf{b}_{\text{BNM}}=\operatorname{sgn}(\mathbf{X}^\top\mathbf{q}_{\text{L1}})$ and $\|\mathbf{X}^\top\mathbf{q}_{\text{L1}}\|_1=\|\mathbf{X}\mathbf{b}_{\text{BNM}}\|_2$.

As shown above, the exact solution to L1-PCA can be obtained by the following two-step process: (i) solve the BNM problem in (5) to obtain $\mathbf{B}_{\text{BNM}}$; (ii) return $\mathbf{Q}_{\text{L1}}=\Phi(\mathbf{X}\mathbf{B}_{\text{BNM}})$ by means of the SVD of $\mathbf{X}\mathbf{B}_{\text{BNM}}$. BNM in (5) can be solved by exhaustive search over the domain of $\mathbf{B}$ with cost $\mathcal{O}(2^{NK})$. Also, L1-PCA can be solved optimally with cost $\mathcal{O}(N^{rK-K+1})$ when $r=\operatorname{rank}(\mathbf{X})$ is constant with respect to $N$ (always true for finite data dimension $D$).[1][15]

In 2008, Kwak[12] proposed an iterative algorithm for the approximate solution of L1-PCA for $K=1$. This iterative method was later generalized for $K>1$ components.[16] Another approximate efficient solver was proposed by McCoy and Tropp[17] by means of semidefinite programming (SDP). Most recently, L1-PCA (and BNM in (5)) were solved efficiently by means of bit-flipping iterations (the L1-BF algorithm).[8] The computational cost of L1-BF is $\mathcal{O}(ND\min\{N,D\}+N^2K^2(K^2+r))$.[8]

L1-PCA has also been generalized to process complex data. For complex L1-PCA, two efficient algorithms were proposed in 2018.[18] L1-PCA has also been extended for the analysis of tensor data, in the form of L1-Tucker, the L1-norm robust analogue of the standard Tucker decomposition.[19] Two algorithms for the solution of L1-Tucker are L1-HOSVD and L1-HOOI.[19][20][21]

MATLAB code for L1-PCA is available at MathWorks.[22]
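For concreteness, the exact two-step solution for K = 1 can be sketched in a few lines. The sketch below is illustrative rather than reference code: l1_pc_exhaustive performs the exhaustive BQM search of (8) and the mapping (9), while l1_pc_fixed_point sketches a fixed-point iteration in the spirit of Kwak's approximate method; the function names and toy data are assumptions of the sketch.

```python
import numpy as np
from itertools import product

def l1_pc_exhaustive(X):
    """Exact L1-PC (K = 1): solve the BQM in (8) by exhaustive search
    over b in {+-1}^N, then map to the unit component via (9).
    Cost is O(2^N), so this is only viable for small N."""
    D, N = X.shape
    G = X.T @ X
    best_val, best_b = -np.inf, None
    for signs in product((-1.0, 1.0), repeat=N):
        b = np.array(signs)
        val = b @ G @ b                       # b^T X^T X b
        if val > best_val:
            best_val, best_b = val, b
    v = X @ best_b
    return v / np.linalg.norm(v)              # q_L1 = X b / ||X b||_2

def l1_pc_fixed_point(X, n_iter=100, seed=0):
    """Approximate L1-PC via the fixed point q <- Phi(X sgn(X^T q)),
    a sketch in the spirit of Kwak's iterative method."""
    rng = np.random.default_rng(seed)
    q = rng.standard_normal(X.shape[0])
    q /= np.linalg.norm(q)
    for _ in range(n_iter):
        b = np.sign(X.T @ q)
        b[b == 0] = 1.0                       # convention sgn(0) = 1
        v = X @ b
        q = v / np.linalg.norm(v)
    return q

# Toy comparison of the L1 objective ||X^T q||_1 attained by each solver.
X = np.random.default_rng(1).standard_normal((3, 8))
for q in (l1_pc_exhaustive(X), l1_pc_fixed_point(X)):
    print(np.abs(X.T @ q).sum())
```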
https://en.wikipedia.org/wiki/L1-norm_principal_component_analysis
In statistics, resampling is the creation of new samples based on one observed sample. Resampling methods include permutation tests, bootstrapping, cross-validation, subsampling, and the jackknife; each is discussed below.

Permutation tests rely on resampling the original data assuming the null hypothesis. From the resampled data it can be concluded how likely the original data would be to occur under the null hypothesis.

Bootstrapping is a statistical method for estimating the sampling distribution of an estimator by sampling with replacement from the original sample, most often with the purpose of deriving robust estimates of standard errors and confidence intervals of a population parameter like a mean, median, proportion, odds ratio, correlation coefficient or regression coefficient. It has been called the plug-in principle,[1] as it is the method of estimation of functionals of a population distribution by evaluating the same functionals at the empirical distribution based on a sample. For example,[1] when estimating the population mean, this method uses the sample mean; to estimate the population median, it uses the sample median; to estimate the population regression line, it uses the sample regression line. It may also be used for constructing hypothesis tests. It is often used as a robust alternative to inference based on parametric assumptions when those assumptions are in doubt, or where parametric inference is impossible or requires very complicated formulas for the calculation of standard errors. Bootstrapping techniques are also used in the updating-selection transitions of particle filters, genetic type algorithms and related resample/reconfiguration Monte Carlo methods used in computational physics.[2][3] In this context, the bootstrap is used to replace sequentially empirical weighted probability measures by empirical measures. The bootstrap allows one to replace the samples with low weights by copies of the samples with high weights.

Cross-validation is a statistical method for validating a predictive model. Subsets of the data are held out for use as validating sets; a model is fit to the remaining data (a training set) and used to predict for the validation set. Averaging the quality of the predictions across the validation sets yields an overall measure of prediction accuracy. Cross-validation is employed repeatedly in building decision trees. One form of cross-validation leaves out a single observation at a time; this is similar to the jackknife. Another, K-fold cross-validation, splits the data into K subsets; each is held out in turn as the validation set. This avoids "self-influence". For comparison, in regression analysis methods such as linear regression, each y value draws the regression line toward itself, making the prediction of that value appear more accurate than it really is. Cross-validation applied to linear regression predicts the y value for each observation without using that observation. This is often used for deciding how many predictor variables to use in regression. Without cross-validation, adding predictors always reduces the residual sum of squares (or possibly leaves it unchanged). In contrast, the cross-validated mean-square error will tend to decrease if valuable predictors are added, but increase if worthless predictors are added.[4]

Subsampling is an alternative method for approximating the sampling distribution of an estimator. The two key differences to the bootstrap are that the resample size is smaller than the sample size and that resampling is done without replacement. The advantage of subsampling is that it is valid under much weaker conditions compared to the bootstrap.
In particular, a set of sufficient conditions is that the rate of convergence of the estimator is known and that the limiting distribution is continuous. In addition, the resample (or subsample) size must tend to infinity together with the sample size, but at a smaller rate, so that their ratio converges to zero. While subsampling was originally proposed for the case of independent and identically distributed (iid) data only, the methodology has been extended to cover time series data as well; in this case, one resamples blocks of subsequent data rather than individual data points. There are many cases of applied interest where subsampling leads to valid inference whereas bootstrapping does not; for example, such cases include examples where the rate of convergence of the estimator is not the square root of the sample size or when the limiting distribution is non-normal. When both subsampling and the bootstrap are consistent, the bootstrap is typically more accurate. RANSAC is a popular algorithm using subsampling.

Jackknifing (jackknife cross-validation) is used in statistical inference to estimate the bias and standard error (variance) of a statistic, when a random sample of observations is used to calculate it. Historically, this method preceded the invention of the bootstrap, with Quenouille inventing it in 1949 and Tukey extending it in 1958.[5][6] The method was foreshadowed by Mahalanobis, who in 1946 suggested repeated estimates of the statistic of interest with half the sample chosen at random.[7] He coined the name "interpenetrating samples" for this method.

Quenouille invented this method with the intention of reducing the bias of the sample estimate. Tukey extended the method by assuming that if the replicates could be considered identically and independently distributed, then an estimate of the variance of the sample parameter could be made, and that it would be approximately distributed as a t variate with n−1 degrees of freedom (n being the sample size).

The basic idea behind the jackknife variance estimator lies in systematically recomputing the statistic estimate, leaving out one or more observations at a time from the sample set. From this new set of replicates of the statistic, an estimate for the bias and an estimate for the variance of the statistic can be calculated. The jackknife is equivalent to random (subsampling) leave-one-out cross-validation; it differs only in its goal.[8]

For many statistical parameters the jackknife estimate of variance tends asymptotically to the true value almost surely. In technical terms, one says that the jackknife estimate is consistent. The jackknife is consistent for the sample means, sample variances, central and non-central t-statistics (with possibly non-normal populations), sample coefficient of variation, maximum likelihood estimators, least squares estimators, correlation coefficients and regression coefficients. It is not consistent for the sample median. In the case of a unimodal variate, the ratio of the jackknife variance to the sample variance tends to be distributed as one half the square of a chi-squared distribution with two degrees of freedom. Rather than using the jackknife to estimate the variance directly, it may instead be applied to the log of the variance. This transformation may result in better estimates, particularly when the distribution of the variance itself may be non-normal. The jackknife, like the original bootstrap, is dependent on the independence of the data.
Extensions of the jackknife to allow for dependence in the data have been proposed. One such extension is the delete-a-group method used in association with Poisson sampling.

Both methods, the bootstrap and the jackknife, estimate the variability of a statistic from the variability of that statistic between subsamples, rather than from parametric assumptions. The bootstrap can be seen as a random approximation of the more general delete-m-observations jackknife. Both yield similar numerical results, which is why each can be seen as an approximation of the other. Although there are huge theoretical differences in their mathematical insights, the main practical difference for statistics users is that the bootstrap gives different results when repeated on the same data, whereas the jackknife gives exactly the same result each time. Because of this, the jackknife is popular when the estimates need to be verified several times before publishing (e.g., by official statistics agencies). On the other hand, when this verification feature is not crucial and it is of interest not to have a number but just an idea of its distribution, the bootstrap is preferred (e.g., in studies in physics, economics, and the biological sciences).

Whether to use the bootstrap or the jackknife may depend more on operational aspects than on statistical concerns of a survey. The jackknife, originally used for bias reduction, is more of a specialized method and only estimates the variance of the point estimator. This can be enough for basic statistical inference (e.g., hypothesis testing, confidence intervals). The bootstrap, on the other hand, first estimates the whole distribution (of the point estimator) and then computes the variance from that. While powerful and easy, this can become highly computationally intensive. "The bootstrap can be applied to both variance and distribution estimation problems. However, the bootstrap variance estimator is not as good as the jackknife or the balanced repeated replication (BRR) variance estimator in terms of the empirical results. Furthermore, the bootstrap variance estimator usually requires more computations than the jackknife or the BRR. Thus, the bootstrap is mainly recommended for distribution estimation."[attribution needed][9]

There is a special consideration with the jackknife, particularly with the delete-1 observation jackknife. It should only be used with smooth, differentiable statistics (e.g., totals, means, proportions, ratios, odds ratios, regression coefficients, etc.; not with medians or quantiles). This could become a practical disadvantage, and this disadvantage is usually the argument favoring bootstrapping over jackknifing. More general jackknives than the delete-1, such as the delete-m jackknife or the delete-all-but-2 Hodges–Lehmann estimator, overcome this problem for the medians and quantiles by relaxing the smoothness requirements for consistent variance estimation.

Usually the jackknife is easier to apply to complex sampling schemes than the bootstrap. Complex sampling schemes may involve stratification, multiple stages (clustering), varying sampling weights (non-response adjustments, calibration, post-stratification) and unequal-probability sampling designs.
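A minimal sketch of the delete-1 jackknife described above, using the standard bias and variance formulas; the function name and toy data are illustrative:

```python
import numpy as np

def jackknife(sample, statistic):
    """Delete-1 jackknife estimates of bias and standard error:
    recompute the statistic with each observation left out in turn."""
    n = len(sample)
    theta_hat = statistic(sample)
    # Leave-one-out replicates of the statistic.
    replicates = np.array(
        [statistic(np.delete(sample, i)) for i in range(n)]
    )
    bias = (n - 1) * (replicates.mean() - theta_hat)
    # Jackknife variance: (n-1)/n * sum of squared deviations of replicates.
    se = np.sqrt((n - 1) / n * ((replicates - replicates.mean()) ** 2).sum())
    return bias, se

rng = np.random.default_rng(1)
x = rng.standard_normal(30)
print(jackknife(x, np.mean))   # SE close to x.std(ddof=1) / sqrt(30)
```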
Theoretical aspects of both the bootstrap and the jackknife can be found in Shao and Tu (1995),[10] whereas a basic introduction is given in Wolter (2007).[11] The bootstrap estimate of model prediction bias is more precise than jackknife estimates with linear models such as the linear discriminant function or multiple regression.[12]
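For comparison with the jackknife sketch above, a minimal bootstrap sketch that resamples with replacement and, as described earlier, first approximates the whole sampling distribution of the statistic; the names and parameter values are illustrative:

```python
import numpy as np

def bootstrap_se(sample, statistic, n_resamples=2000, seed=0):
    """Bootstrap standard error: resample with replacement from the
    observed sample and take the spread of the statistic's replicates.
    Unlike the jackknife, this also handles non-smooth statistics
    such as the median."""
    rng = np.random.default_rng(seed)
    n = len(sample)
    reps = np.array([
        statistic(rng.choice(sample, size=n, replace=True))
        for _ in range(n_resamples)
    ])
    return reps.std(ddof=1), reps   # SE estimate and the replicate set

rng = np.random.default_rng(2)
x = rng.standard_normal(50)
se, reps = bootstrap_se(x, np.median)
# A simple 95% percentile confidence interval from the replicates:
print(se, np.percentile(reps, [2.5, 97.5]))
```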
https://en.wikipedia.org/wiki/Randomization_test
In the field of machine learning and specifically the problem of statistical classification, a confusion matrix, also known as an error matrix,[1] is a specific table layout that allows visualization of the performance of an algorithm, typically a supervised learning one; in unsupervised learning it is usually called a matching matrix. Each row of the matrix represents the instances in an actual class while each column represents the instances in a predicted class, or vice versa – both variants are found in the literature.[2] The diagonal of the matrix therefore represents all instances that are correctly predicted.[3] The name stems from the fact that it makes it easy to see whether the system is confusing two classes (i.e. commonly mislabeling one as another). It is a special kind of contingency table, with two dimensions ("actual" and "predicted"), and identical sets of "classes" in both dimensions (each combination of dimension and class is a variable in the contingency table).

Given a sample of 12 individuals, 8 that have been diagnosed with cancer and 4 that are cancer-free, where individuals with cancer belong to class 1 (positive) and non-cancer individuals belong to class 0 (negative), we can display that data as follows:

Sample:    1 2 3 4 5 6 7 8 9 10 11 12
Actual:    1 1 1 1 1 1 1 1 0  0  0  0

Assume that we have a classifier that distinguishes between individuals with and without cancer in some way; we can take the 12 individuals and run them through the classifier. The classifier then makes 9 accurate predictions and misses 3: 2 individuals with cancer wrongly predicted as being cancer-free (samples 1 and 2), and 1 person without cancer who is wrongly predicted to have cancer (sample 9):

Sample:    1 2 3 4 5 6 7 8 9 10 11 12
Actual:    1 1 1 1 1 1 1 1 0  0  0  0
Predicted: 0 0 1 1 1 1 1 1 1  0  0  0

Notice that if we compare the actual classification set to the predicted classification set, there are 4 different outcomes that could result in any particular column. First, if the actual classification is positive and the predicted classification is positive (1,1), this is called a true positive result because the positive sample was correctly identified by the classifier. Second, if the actual classification is positive and the predicted classification is negative (1,0), this is called a false negative result because the positive sample is incorrectly identified by the classifier as being negative. Third, if the actual classification is negative and the predicted classification is positive (0,1), this is called a false positive result because the negative sample is incorrectly identified by the classifier as being positive. Fourth, if the actual classification is negative and the predicted classification is negative (0,0), this is called a true negative result because the negative sample gets correctly identified by the classifier.

We can then perform the comparison between actual and predicted classifications and add this information to the table, making correct results appear in green so they are more easily identifiable. The template for any binary confusion matrix uses the four kinds of results discussed above (true positives, false negatives, false positives, and true negatives) along with the positive and negative classifications. The four outcomes can be formulated in a 2×2 confusion matrix, as follows:

              Predicted positive   Predicted negative
Actual positive   true positive (TP)    false negative (FN)
Actual negative   false positive (FP)   true negative (TN)

The color convention of the three data tables above was picked to match this confusion matrix, in order to easily differentiate the data.
Now, we can simply total up each type of result, substitute into the template, and create a confusion matrix that will concisely summarize the results of testing the classifier:

              Predicted 1   Predicted 0   Total
Actual 1      TP = 6        FN = 2        8
Actual 0      FP = 1        TN = 3        4
Total         7             5             8 + 4 = 12

In this confusion matrix, of the 8 samples with cancer, the system judged that 2 were cancer-free, and of the 4 samples without cancer, it predicted that 1 did have cancer. All correct predictions are located in the diagonal of the table (highlighted in green), so it is easy to visually inspect the table for prediction errors, as values outside the diagonal will represent them. By summing up the 2 rows of the confusion matrix, one can also deduce the total number of positive (P) and negative (N) samples in the original dataset, i.e. P = TP + FN and N = FP + TN.

In predictive analytics, a table of confusion (sometimes also called a confusion matrix) is a table with two rows and two columns that reports the number of true positives, false negatives, false positives, and true negatives. This allows more detailed analysis than simply observing the proportion of correct classifications (accuracy). Accuracy will yield misleading results if the data set is unbalanced; that is, when the numbers of observations in different classes vary greatly. For example, if there were 95 cancer samples and only 5 non-cancer samples in the data, a particular classifier might classify all the observations as having cancer. The overall accuracy would be 95%, but in more detail the classifier would have a 100% recognition rate (sensitivity) for the cancer class but a 0% recognition rate for the non-cancer class. The F1 score is even more unreliable in such cases, and here would yield over 97.4%, whereas informedness removes such bias and yields 0 as the probability of an informed decision for any form of guessing (here always guessing cancer). According to Davide Chicco and Giuseppe Jurman, the most informative metric to evaluate a confusion matrix is the Matthews correlation coefficient (MCC).[11]

Other metrics can be included in a confusion matrix, each having its own significance and use. The confusion matrix is not limited to binary classification and can be used in multi-class classifiers as well. The confusion matrices discussed above have only two conditions: positive and negative. For example, the table below summarizes communication of a whistled language between two speakers, with zero values omitted for clarity.[20]
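The tallies above are straightforward to reproduce programmatically; the following is a minimal sketch using the 12-sample example (illustrative code, not from the cited sources):

```python
import numpy as np

# Actual and predicted labels for the 12-sample cancer example above
# (samples 1-8 have cancer; the classifier misses samples 1, 2 and 9).
actual    = np.array([1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0])
predicted = np.array([0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0])

# Tally the four outcomes of the 2x2 confusion matrix.
tp = np.sum((actual == 1) & (predicted == 1))   # 6
fn = np.sum((actual == 1) & (predicted == 0))   # 2
fp = np.sum((actual == 0) & (predicted == 1))   # 1
tn = np.sum((actual == 0) & (predicted == 0))   # 3

accuracy    = (tp + tn) / len(actual)           # 9/12 = 0.75
sensitivity = tp / (tp + fn)                    # recognition of the cancer class
specificity = tn / (tn + fp)                    # recognition of the non-cancer class
print(tp, fn, fp, tn, accuracy, sensitivity, specificity)
```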
https://en.wikipedia.org/wiki/Confusion_matrix
The attention economy refers to the incentives of advertising-driven companies, in particular, to maximize the time and attention their users give to their product.[1][2] Attention economics is an approach to the management of information that treats human attention as a scarce commodity and applies economic theory to solve various information management problems.

According to Matthew Crawford, "Attention is a resource—a person has only so much of it."[3] Thomas H. Davenport and John C. Beck[4] add to that definition: Attention is focused mental engagement on a particular item of information. Items come into our awareness, we attend to a particular item, and then we decide whether to act.[5]

A driving factor of this effect is that the mental capability of humans is limited, as is their receptiveness to information. Attention allows information to be filtered such that the most important information can be extracted from the environment while irrelevant details can be left out.[6]

Software applications either explicitly or implicitly take the attention economy into consideration in their user interface design, based on the realization that if it takes the user too long to locate something, they will find it through another application. This is done, for instance, by creating filters to make sure viewers are presented with information that is most relevant, of interest, and personalized based on past web search history.[7]

The economic value of time can be quantified and compared to monetary expenditures. Erik Brynjolfsson, Seon Tae Kim and Joo Hee Oh show that this makes it possible to formally analyze the attention economy and to put values on free goods.[8]

Research from a wide range of disciplines including psychology,[9] cognitive science,[10] neuroscience,[11] and economics[12] suggests that humans have limited cognitive resources that can be used at any given time; when resources are allocated to one task, the resources available for other tasks will be limited. Given that attention is a cognitive process that involves the selective concentration of resources on a given item of information, to the exclusion of other perceivable information, attention can be considered in terms of limited processing resources.[13]

The concept of attention economics was first theorized by psychologist and economist Herbert A. Simon[14] when he wrote about the scarcity of attention in an information-rich world in 1971:

[I]n an information-rich world, the wealth of information means a dearth of something else: a scarcity of whatever it is that information consumes. What information consumes is rather obvious: it consumes the attention of its recipients. Hence a wealth of information creates a poverty of attention and a need to allocate that attention efficiently among the overabundance of information sources that might consume it.[15]

He noted that many designers of information systems incorrectly represented their design problem as information scarcity rather than attention scarcity, and as a result, they built systems that excelled at providing more and more information to people, when what was really needed were systems that excelled at filtering out unimportant or irrelevant information.[16]

Simon's characterization of the problem of information overload as an economic one has become increasingly popular in analyzing information consumption since the mid-1990s, when writers such as Thomas H.
Davenport and Michael Goldhaber[17] adopted terms like "attention economy" and "economics of attention".[18]

Some writers have speculated that transactions based on attention will replace financial transactions as the focus of the economic system. For example, Goldhaber wrote in 1997: "...transactions in which money is involved may be growing in total number, but the total number of global attention transactions is growing even faster."[19] In a 1999 essay, Georg Franck argued that "income in attention ranks above financial success" for advertising-based media like magazines and television.[20] Information systems researchers have also adopted the idea, and are beginning to investigate mechanism designs which build on the idea of creating property rights in attention (see Applications).

In 2022, Rice University professor Adrian Lenardic and two co-authors wrote for Big Think that attention economics adversely affected scientific research: "The attention a scientist's work gains from the public now plays into its perceived value. Scientists list media exposure counts on résumés, and many PhD theses now include the number of times a candidate's work has appeared in the popular science press. Science has succumbed to the attention economy."[21] They add that study results are publicized without proper peer input or reproducibility.[21]

In economic theory, market exchanges may have unintended consequences, called externalities, that are not reflected in the price consumers pay upfront. When these consequences have a negative effect on an uninvolved third party, they are called negative externalities, with pollution being a common example.[22] The attention economy generates negative externalities for society that impact both individuals and communities.[23]

One negative externality of the attention economy is social media addiction. Given the monetization of human attention, social media platforms are designed to maximize user engagement, namely by influencing the brain's reward system. When users receive positive feedback on social media or view novel content, their brain releases dopamine, leading them to stay on the platform for extended periods of time and come back to it repeatedly. Social media addiction has been linked to negative mental health outcomes such as depression, anxiety, and low self-esteem.[24]

The Netflix documentary The Social Dilemma illustrates how algorithms from search engines and social media platforms negatively affect users while maximizing online engagement.[25][26]

During the 2010s, social media in conjunction with online advertising technologies inspired significant growth in the business model of the attention economy.[27][28] A study conducted by researchers at Hanken School of Economics found that when the attention economy is paired with online advertising, the resulting financial arrangement can lead to the circulation of fake news and the amplification of disinformation for profit.[28]

Another negative externality of the attention economy is the rise of surveillance capitalism, which describes the practice of companies collecting personal data to buy and sell for profit. To capture user attention, companies collect data — such as demographics and behavioral patterns — and use it to create personalized user experiences that align with users' interests based on the obtained data.
Companies also sell this data to third parties, often without the user's informed consent.[29] These practices raise ethical concerns about privacy, misuse of data, and misrepresentation of communities.[30]

Within the attention economy, engagement metrics influence the visibility of content and narratives. Algorithms in the attention economy are designed to maximize engagement, often prioritizing content that resonates with dominant cultural identities. As a result, marginalized groups may face challenges in having their perspectives and concerns represented. For example, Black creators on platforms such as TikTok have reported that their content had significant reductions in engagement after posting about the Black Lives Matter movement, suggesting that they were shadow banned.[31] Furthermore, limiting the visibility of marginalized creators reduces the amount of attention they receive. This, in turn, hinders their ability to engage in activism and spread awareness about issues affecting their community to the broader public.[32][33]

According to digital culture expert Kevin Kelly, by 2008, the attention economy was increasingly one where the consumer product costs virtually nothing to reproduce and the problem facing the supplier of the product lies in adding valuable intangibles that cannot be reproduced at any cost. He identifies these intangibles as:[34]

Attention economics is also relevant to the social sphere. Specifically, long-term attention can be considered according to the attention that people dedicate to managing their interactions with others. Dedicating too much attention to these interactions can lead to "social interaction overload",[35] i.e. when people are overwhelmed in managing their relationships with others, for instance in the context of social network services in which people are the subject of a high level of social solicitations. Digital media and the internet facilitate participation in this economy by creating new channels for distributing attention. Ordinary people are now empowered to reach a wide audience by publishing their own content and commenting on the content of others.[36]

Social attention can also be associated with collective attention, i.e. how "attention to novel items propagates and eventually fades among large populations".[37]

"Attention economics" treats a potential consumer's attention as a resource.[38] Traditional media advertisers followed a model that suggested consumers went through a linear process they called AIDA (attention, interest, desire and action).[39] Attention is therefore the first and a major stage in the process of converting non-consumers. Since the cost to transmit advertising to consumers has become sufficiently low, more ads can be transmitted to a consumer (e.g. via online advertising) than the consumer can process, so the consumer's attention becomes the scarce resource to be allocated. As such, a superfluity of information may hinder an individual's decision-making, as the individual keeps searching and comparing products as long as doing so promises to provide more than it uses up.[40]

Advertisers that produce attention-grabbing content that is presented to unconsenting consumers without compensation have been criticized for perpetrating attention theft.[41][42]

One application treats various forms of information (e.g.
spam, advertising) as a form of pollution or "detrimental externality".[43] In economics, an externality is a by-product of a production process that imposes burdens (or supplies benefits) on parties other than the intended consumer of a commodity.[44] For example, air and water pollution are "negative" externalities that impose burdens on society and the environment.

A market-based approach to controlling externalities was outlined in Ronald Coase's The Problem of Social Cost (1960).[45] This evolved from an article on the Federal Communications Commission (1959),[46] in which Coase claimed that radio-frequency interference is a negative externality that could be controlled by the creation of property rights. Coase's approach to the management of externalities requires the careful specification of property rights and a set of rules for the initial allocation of the rights.[47] Once these rights are specified and allocated, a market mechanism can theoretically manage the externality problem.[48]

Sending huge numbers of e-mail messages costs spammers very little, since the costs of e-mail messages are spread out over the internet service providers that distribute them (and the recipients who must spend attention dealing with them).[49] Thus, sending out as much spam as possible is a rational strategy: even if only 0.001% of recipients (1 in 100,000) is converted into a sale, a spam campaign can be profitable. Of course, it is very difficult to understand where all the revenue comes from, since these businesses are run through proxy servers. However, if they were not profitable, it is reasonable to conclude that they would not be sending spam.[50] Spammers are demanding valuable attention from potential customers, but avoid paying a fair price for this attention due to the current architecture of e-mail systems.[51]

One way this might be mitigated is through the implementation of a "sender bond", whereby senders are required to post a financial bond that is forfeited if enough recipients report an email as spam.[52]

Closely related is the idea of selling "interrupt rights", or small fees for the right to demand one's attention.[53] The cost of these rights could vary according to the person who is interrupted: interrupt rights for the CEO of a Fortune 500 company would presumably be extraordinarily expensive, while those of a high school student might be lower. Costs could also vary for an individual depending on context, perhaps rising during the busy holiday season and falling during the dog days of summer.
Those who are interrupted could decline to collect their fees from friends, family, and other welcome interrupters.[54]

Another idea in this vein is the creation of "attention bonds", small warranties that some information will not be a waste of the recipient's time, placed into escrow at the time of sending.[55] Like the granters of interrupt rights, receivers could cash in their bonds to signal to the sender that a given communication was a waste of their time, or elect not to cash them in to signal that more communication would be welcome.[56]

As search engines have become a primary means for finding and accessing information on the web, high rankings in the results for certain queries have become valuable commodities, due to the ability of search engines to focus searchers' attention.[57] Like other information systems, web search is vulnerable to pollution: "Because the Web environment contains profit seeking ventures, attention getting strategies evolve in response to search engine algorithms".[58]

Since most major search engines now rely on some form of PageRank (recursive counting of hyperlinks to a site) to determine search result rankings, a gray market in the creation and trading of hyperlinks has emerged.[59][60] Participants in this market engage in a variety of practices known as link spamming, link farming, and reciprocal linking.[61]

Another issue, similar to the issue discussed above of whether or not to consider political e-mail campaigns as spam, is what to do about politically motivated link campaigns or Google bombs.[62] Currently, the major search engines do not treat these as web spam, but this is a decision made unilaterally by private companies.

The paid inclusion model, as well as more pervasive advertising networks like the Yahoo! Publisher Network and Google's AdSense, works by treating consumer attention as the property of the search engine (in the case of paid inclusion) or the publisher (in the case of advertising networks).[63][64] This is somewhat different from the anti-spam uses of property rights in attention, which treat an individual's attention as his or her own property. These advertising models significantly influence consumer behavior, often leveraging personal data to target ads more effectively. While this can enhance user experience by aligning advertisements with user interests, it raises privacy concerns and can lead to consumer manipulation. The phenomenon of "ad fatigue", where excessive exposure to ads leads to reduced attention and engagement with advertisements, is also noteworthy.[65]

Advancements in artificial intelligence and machine learning have transformed paid inclusion and advertising networks. These technologies allow for more sophisticated targeting and personalization of ads, improving effectiveness but also increasing concerns about surveillance and data privacy.[66]

The regulation of paid inclusion and advertising networks is complex, involving multiple stakeholders with diverse interests. There is an ongoing debate about the balance between encouraging innovation and protecting consumer privacy. Ethical considerations also include the transparency of these models and their impact on the informational ecosystem, potentially leading to biased or manipulated content.[67]
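To make the spam economics discussed earlier concrete, a toy break-even calculation can help; every number below is a hypothetical assumption for illustration, not data from the cited sources:

```python
# Toy break-even sketch for the spam economics discussed above.
# All figures are hypothetical assumptions for illustration only.
cost_per_message = 0.00001   # assumed sender-side cost per e-mail, in dollars
revenue_per_sale = 20.0      # assumed revenue from one converted recipient
conversion_rate  = 1e-5      # 0.001% of recipients, as quoted in the text

messages = 10_000_000
cost    = messages * cost_per_message                     # $100
revenue = messages * conversion_rate * revenue_per_sale   # $2,000
print(f"cost=${cost:,.0f} revenue=${revenue:,.0f} profit=${revenue - cost:,.0f}")

# Break-even conversion rate = cost per message / revenue per sale.
print(cost_per_message / revenue_per_sale)   # 5e-07, far below 0.001%
```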
https://en.wikipedia.org/wiki/Attention_economy
Bluetooth is a short-range wireless technology standard that is used for exchanging data between fixed and mobile devices over short distances and building personal area networks (PANs). In the most widely used mode, transmission power is limited to 2.5 milliwatts, giving it a very short range of up to 10 metres (33 ft). It employs UHF radio waves in the ISM bands, from 2.402 GHz to 2.48 GHz.[3] It is mainly used as an alternative to wired connections to exchange files between nearby portable devices and connect cell phones and music players with wireless headphones, wireless speakers, hi-fi systems, car audio, and wireless transmission between TVs and soundbars.

Bluetooth is managed by the Bluetooth Special Interest Group (SIG), which has more than 35,000 member companies in the areas of telecommunication, computing, networking, and consumer electronics. The IEEE standardized Bluetooth as IEEE 802.15.1 but no longer maintains the standard. The Bluetooth SIG oversees the development of the specification, manages the qualification program, and protects the trademarks.[4] A manufacturer must meet Bluetooth SIG standards to market a device as a Bluetooth device.[5] A network of patents applies to the technology, which is licensed to individual qualifying devices. As of 2021, 4.7 billion Bluetooth integrated circuit chips are shipped annually.[6] Bluetooth was first demonstrated in space in 2024, an early test envisioned to enhance IoT capabilities.[7]

The name "Bluetooth" was proposed in 1997 by Jim Kardach of Intel, one of the founders of the Bluetooth SIG. The name was inspired by a conversation with Sven Mattisson, who related Scandinavian history through tales from Frans G. Bengtsson's The Long Ships, a historical novel about Vikings and the 10th-century Danish king Harald Bluetooth. Upon discovering a picture of the runestone of Harald Bluetooth[8] in the book A History of the Vikings by Gwyn Jones, Kardach proposed Bluetooth as the codename for the short-range wireless program which is now called Bluetooth.[9][10][11]

According to Bluetooth's official website, Bluetooth was only intended as a placeholder until marketing could come up with something really cool. Later, when it came time to select a serious name, Bluetooth was to be replaced with either RadioWire or PAN (Personal Area Networking). PAN was the front runner, but an exhaustive search discovered it already had tens of thousands of hits throughout the internet. A full trademark search on RadioWire couldn't be completed in time for launch, making Bluetooth the only choice. The name caught on fast and before it could be changed, it spread throughout the industry, becoming synonymous with short-range wireless technology.[12]

Bluetooth is the Anglicised version of the Scandinavian Blåtand/Blåtann (or in Old Norse blátǫnn). It was the epithet of King Harald Bluetooth, who united the disparate Danish tribes into a single kingdom; Kardach chose the name to imply that Bluetooth similarly unites communication protocols.[13] The Bluetooth logo is a bind rune merging the Younger Futhark runes ᚼ (Hagall) and ᛒ (Bjarkan), Harald's initials.[14][15]

The development of the "short-link" radio technology, later named Bluetooth, was initiated in 1989 by Nils Rydbeck, CTO at Ericsson Mobile in Lund, Sweden. The purpose was to develop wireless headsets, according to two inventions by Johan Ullman, SE 8902098-6, issued 12 June 1989, and SE 9202239, issued 24 July 1992.
Nils Rydbeck tasked Tord Wingren with specifying and Dutchman Jaap Haartsen and Sven Mattisson with developing.[16] Both were working for Ericsson in Lund.[17] Principal design and development began in 1994, and by 1997 the team had a workable solution.[18] From 1997 Örjan Johansson became the project leader and propelled the technology and standardization.[19][20][21][22]

In 1997, Adalio Sanchez, then head of IBM ThinkPad product R&D, approached Nils Rydbeck about collaborating on integrating a mobile phone into a ThinkPad notebook. The two assigned engineers from Ericsson and IBM studied the idea. The conclusion was that power consumption on cellphone technology at that time was too high to allow viable integration into a notebook and still achieve adequate battery life. Instead, the two companies agreed to integrate Ericsson's short-link technology on both a ThinkPad notebook and an Ericsson phone to accomplish the goal. Since neither IBM ThinkPad notebooks nor Ericsson phones were the market share leaders in their respective markets at that time, Adalio Sanchez and Nils Rydbeck agreed to make the short-link technology an open industry standard to permit each player maximum market access. Ericsson contributed the short-link radio technology, and IBM contributed patents around the logical layer. Adalio Sanchez of IBM then recruited Stephen Nachtsheim of Intel to join, and then Intel also recruited Toshiba and Nokia. In May 1998, the Bluetooth SIG was launched with IBM and Ericsson as the founding signatories and a total of five members: Ericsson, Intel, Nokia, Toshiba, and IBM.

The first Bluetooth device was revealed in 1999. It was a hands-free mobile headset that earned the "Best of Show Technology Award" at COMDEX. The first Bluetooth mobile phone was the unreleased prototype Ericsson T36, though it was the revised Ericsson model T39 that actually made it to store shelves in June 2001. However, Ericsson released the R520m in the first quarter of 2001,[23] making the R520m the first commercially available Bluetooth phone. In parallel, IBM introduced the IBM ThinkPad A30 in October 2001, which was the first notebook with integrated Bluetooth.

Bluetooth's early incorporation into consumer electronics products continued at Vosi Technologies in Costa Mesa, California, initially overseen by founding members Bejan Amini and Tom Davidson. Vosi Technologies had been created by real estate developer Ivano Stegmenga, with United States Patent 608507, for communication between a cellular phone and a vehicle's audio system. At the time, Sony/Ericsson had only a minor market share in the cellular phone market, which was dominated in the US by Nokia and Motorola. Due to ongoing negotiations for an intended licensing agreement with Motorola beginning in the late 1990s, Vosi could not publicly disclose the intention, integration, and initial development of other enabled devices which were to be the first "Smart Home" internet-connected devices.

Vosi needed a means for the system to communicate without a wired connection from the vehicle to the other devices in the network. Bluetooth was chosen, since Wi-Fi was not yet readily available or supported in the public market. Vosi had begun to develop the Vosi Cello integrated vehicular system and some other internet-connected devices, one of which was intended to be a table-top device named the Vosi Symphony, networked with Bluetooth. Through the negotiations with Motorola, Vosi introduced and disclosed its intent to integrate Bluetooth in its devices.
In the early 2000s a legal battle[24] ensued between Vosi and Motorola, which indefinitely suspended release of the devices. Later, Motorola implemented it in their devices, which initiated the significant propagation of Bluetooth in the public market due to its large market share at the time. In 2012, Jaap Haartsen was nominated by the European Patent Office for the European Inventor Award.[18]

Bluetooth operates at frequencies between 2.402 and 2.480 GHz, or 2.400 and 2.4835 GHz including guard bands 2 MHz wide at the bottom end and 3.5 MHz wide at the top.[25] This is in the globally unlicensed (but not unregulated) industrial, scientific and medical (ISM) 2.4 GHz short-range radio frequency band. Bluetooth uses a radio technology called frequency-hopping spread spectrum. Bluetooth divides transmitted data into packets, and transmits each packet on one of 79 designated Bluetooth channels. Each channel has a bandwidth of 1 MHz. It usually performs 1600 hops per second, with adaptive frequency-hopping (AFH) enabled.[25] Bluetooth Low Energy uses 2 MHz spacing, which accommodates 40 channels.[26]

Originally, Gaussian frequency-shift keying (GFSK) modulation was the only modulation scheme available. Since the introduction of Bluetooth 2.0+EDR, π/4-DQPSK (differential quadrature phase-shift keying) and 8-DPSK modulation may also be used between compatible devices. Devices functioning with GFSK are said to be operating in basic rate (BR) mode, where an instantaneous bit rate of 1 Mbit/s is possible. The term Enhanced Data Rate (EDR) is used to describe the π/4-DQPSK (EDR2) and 8-DPSK (EDR3) schemes, transferring 2 and 3 Mbit/s respectively. In 2019, Apple published an extension called HDR which supports data rates of 4 (HDR4) and 8 (HDR8) Mbit/s using π/4-DQPSK modulation on 4 MHz channels with forward error correction (FEC).[27]

Bluetooth is a packet-based protocol with a master/slave architecture. One master may communicate with up to seven slaves in a piconet. All devices within a given piconet use the clock provided by the master as the base for packet exchange. The master clock ticks with a period of 312.5 μs; two clock ticks then make up a slot of 625 μs, and two slots make up a slot pair of 1250 μs. In the simple case of single-slot packets, the master transmits in even slots and receives in odd slots. The slave, conversely, receives in even slots and transmits in odd slots. Packets may be 1, 3, or 5 slots long, but in all cases, the master's transmission begins in even slots and the slave's in odd slots. The above excludes Bluetooth Low Energy, introduced in the 4.0 specification,[28] which uses the same spectrum but somewhat differently.

A master BR/EDR Bluetooth device can communicate with a maximum of seven devices in a piconet (an ad hoc computer network using Bluetooth technology), though not all devices reach this maximum. The devices can switch roles, by agreement, and the slave can become the master (for example, a headset initiating a connection to a phone necessarily begins as master—as an initiator of the connection—but may subsequently operate as the slave). The Bluetooth Core Specification provides for the connection of two or more piconets to form a scatternet, in which certain devices simultaneously play the master/leader role in one piconet and the slave role in another. At any given time, data can be transferred between the master and one other device (except for the little-used broadcast mode).
The master chooses which slave device to address; typically, it switches rapidly from one device to another in a round-robin fashion. Since it is the master that chooses which slave to address, whereas a slave is (in theory) supposed to listen in each receive slot, being a master is a lighter burden than being a slave. Being a master of seven slaves is possible; being a slave of more than one master is possible. The specification is vague as to required behavior in scatternets.[29]

Bluetooth is a standard wire-replacement communications protocol primarily designed for low power consumption, with a short range based on low-cost transceiver microchips in each device.[30] Because the devices use a radio (broadcast) communications system, they do not have to be in visual line of sight of each other; however, a quasi-optical wireless path must be viable.[31]

Historically, the Bluetooth range was defined by the radio class, with a lower class (and higher output power) having a larger range.[2] The actual range of a given link depends on several qualities of both communicating devices and the air and obstacles in between. The primary attributes affecting range are the data rate, the protocol (Bluetooth Classic or Bluetooth Low Energy), the transmission power and receiver sensitivity, and the relative orientations and gains of both antennas.[32]

The effective range varies depending on propagation conditions, material coverage, production sample variations, antenna configurations and battery conditions. Most Bluetooth applications are for indoor conditions, where attenuation of walls and signal fading due to signal reflections make the range far lower than the specified line-of-sight ranges of the Bluetooth products. Most Bluetooth applications are battery-powered Class 2 devices, with little difference in range whether the other end of the link is a Class 1 or Class 2 device, as the lower-powered device tends to set the range limit. In some cases the effective range of the data link can be extended when a Class 2 device is connecting to a Class 1 transceiver with both higher sensitivity and transmission power than a typical Class 2 device.[33] In general, however, Class 1 devices have sensitivities similar to those of Class 2 devices. Connecting two Class 1 devices with both high sensitivity and high power can allow ranges far in excess of the typical 100 m, depending on the throughput required by the application. Some such devices allow open-field ranges of up to 1 km and beyond between two similar devices without exceeding legal emission limits.[34][35][36]

To use Bluetooth wireless technology, a device must be able to interpret certain Bluetooth profiles. Profiles are definitions of possible applications and specify general behaviors that Bluetooth-enabled devices use to communicate with other Bluetooth devices. These profiles include settings to parameterize and to control the communication from the start. Adherence to profiles saves the time of transmitting the parameters anew before the bi-directional link becomes effective. There is a wide range of Bluetooth profiles that describe many different types of applications or use cases for devices.[37]

Bluetooth exists in numerous products such as telephones, speakers, tablets, media players, robotics systems, laptops, and game console equipment, as well as some high-definition headsets, modems, hearing aids[53] and even watches.[54] Bluetooth is useful when transferring information between two or more devices that are near each other in low-bandwidth situations.
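A minimal sketch of the BR/EDR slot schedule described earlier (625 μs slots, with the master transmitting in even slots and the slave in odd slots); this is an illustration of the timing arithmetic, not a protocol implementation:

```python
# Illustration of the BR/EDR slot timing described above:
# 312.5 us clock ticks, 625 us slots, master TX in even slots,
# slave TX in odd slots.
SLOT_US = 625

def transmitter(slot_index: int) -> str:
    """Return which side may start transmitting in a given slot."""
    return "master" if slot_index % 2 == 0 else "slave"

for slot in range(6):
    start_us = slot * SLOT_US
    print(f"slot {slot} @ {start_us:>5} us: {transmitter(slot)} transmits")

# 1,000,000 / 625 = 1600 slots per second, matching the 1600 hops
# per second quoted for basic-rate Bluetooth.
print(1_000_000 // SLOT_US)
```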
Bluetooth is commonly used to transfer sound data with telephones (i.e., with a Bluetooth headset) or byte data with hand-held computers (transferring files). Bluetooth protocols simplify the discovery and setup of services between devices.[55] Bluetooth devices can advertise all of the services they provide.[56] This makes using services easier, because more of the security, network address and permission configuration can be automated than with many other network types.[55]

A personal computer that does not have embedded Bluetooth can use a Bluetooth adapter that enables the PC to communicate with Bluetooth devices. While some desktop computers and most recent laptops come with a built-in Bluetooth radio, others require an external adapter, typically in the form of a small USB "dongle". Unlike its predecessor, IrDA, which requires a separate adapter for each device, Bluetooth lets multiple devices communicate with a computer over a single adapter.[57]

For Microsoft platforms, the Windows XP Service Pack 2 and SP3 releases work natively with Bluetooth v1.1, v2.0 and v2.0+EDR.[58] Previous versions required users to install their Bluetooth adapter's own drivers, which were not directly supported by Microsoft.[59] Microsoft's own Bluetooth dongles (packaged with their Bluetooth computer devices) have no external drivers and thus require at least Windows XP Service Pack 2. Windows Vista RTM/SP1 with the Feature Pack for Wireless or Windows Vista SP2 work with Bluetooth v2.1+EDR.[58] Windows 7 works with Bluetooth v2.1+EDR and Extended Inquiry Response (EIR).[58] The Windows XP and Windows Vista/Windows 7 Bluetooth stacks support the following Bluetooth profiles natively: PAN, SPP, DUN, HID, HCRP. The Windows XP stack can be replaced by a third-party stack that supports more profiles or newer Bluetooth versions. The Windows Vista/Windows 7 Bluetooth stack supports vendor-supplied additional profiles without requiring that the Microsoft stack be replaced.[58] Windows 8 and later support Bluetooth Low Energy (BLE). It is generally recommended to install the latest vendor driver and its associated stack to be able to use the Bluetooth device at its fullest extent.

Apple products have worked with Bluetooth since Mac OS X v10.2, which was released in 2002.[60]

Linux has two popular Bluetooth stacks, BlueZ and Fluoride. The BlueZ stack is included with most Linux kernels and was originally developed by Qualcomm.[61] Fluoride, earlier known as Bluedroid, is included in Android OS and was originally developed by Broadcom.[62] There is also the Affix stack, developed by Nokia. It was once popular, but has not been updated since 2005.[63]

FreeBSD has included Bluetooth since its v5.0 release, implemented through netgraph.[64][65]

NetBSD has included Bluetooth since its v4.0 release.[66][67] Its Bluetooth stack was ported to OpenBSD as well; however, OpenBSD later removed it as unmaintained.[68][69]

DragonFly BSD has had NetBSD's Bluetooth implementation since 1.11 (2008).[70][71] A netgraph-based implementation from FreeBSD has also been available in the tree, possibly disabled until 2014-11-15, and may require more work.[72][73]

The specifications were formalized by the Bluetooth Special Interest Group (SIG) and formally announced on 20 May 1998.[74] In 2014 it had a membership of over 30,000 companies worldwide.[75] It was established by Ericsson, IBM, Intel, Nokia and Toshiba, and later joined by many other companies.
All versions of the Bluetooth standards are backward-compatible with all earlier versions.[76] The Bluetooth Core Specification Working Group (CSWG) produces mainly four kinds of specifications. The major enhancements of each core specification version are described below.

This version of the Bluetooth Core Specification was released before 2005. The main difference is the introduction of an Enhanced Data Rate (EDR) for faster data transfer. The data rate of EDR is 3 Mbit/s, although the maximum data transfer rate (allowing for inter-packet time and acknowledgements) is 2.1 Mbit/s.[79] EDR uses a combination of GFSK and phase-shift keying (PSK) modulation with two variants, π/4-DQPSK and 8-DPSK.[81] EDR can provide lower power consumption through a reduced duty cycle.

The specification is published as Bluetooth v2.0 + EDR, which implies that EDR is an optional feature. Aside from EDR, the v2.0 specification contains other minor improvements, and products may claim compliance to "Bluetooth v2.0" without supporting the higher data rate. At least one commercial device states "Bluetooth v2.0 without EDR" on its data sheet.[82]

Bluetooth Core Specification version 2.1 + EDR was adopted by the Bluetooth SIG on 26 July 2007.[81] The headline feature of v2.1 is secure simple pairing (SSP): this improves the pairing experience for Bluetooth devices, while increasing the use and strength of security.[83] Version 2.1 allows various other improvements, including extended inquiry response (EIR), which provides more information during the inquiry procedure to allow better filtering of devices before connection, and sniff subrating, which reduces the power consumption in low-power mode.

Version 3.0 + HS of the Bluetooth Core Specification[81] was adopted by the Bluetooth SIG on 21 April 2009. Bluetooth v3.0 + HS provides theoretical data transfer speeds of up to 24 Mbit/s, though not over the Bluetooth link itself. Instead, the Bluetooth link is used for negotiation and establishment, and the high-data-rate traffic is carried over a colocated 802.11 link. The main new feature is AMP (Alternative MAC/PHY), the addition of 802.11 as a high-speed transport. The high-speed part of the specification is not mandatory, and hence only devices that display the "+HS" logo actually support Bluetooth over 802.11 high-speed data transfer. A Bluetooth v3.0 device without the "+HS" suffix is only required to support features introduced in Core Specification version 3.0[84] or the earlier Core Specification Addendum 1.[85]

The high-speed (AMP) feature of Bluetooth v3.0 was originally intended for UWB, but the WiMedia Alliance, the body responsible for the flavor of UWB intended for Bluetooth, announced in March 2009 that it was disbanding, and ultimately UWB was omitted from the Core v3.0 specification.[86]

On 16 March 2009, the WiMedia Alliance announced it was entering into technology transfer agreements for the WiMedia Ultra-wideband (UWB) specifications. WiMedia transferred all current and future specifications, including work on future high-speed and power-optimized implementations, to the Bluetooth Special Interest Group (SIG), the Wireless USB Promoter Group and the USB Implementers Forum. After successful completion of the technology transfer, marketing, and related administrative items, the WiMedia Alliance ceased operations.[87][88][89][90][91]

In October 2009, the Bluetooth Special Interest Group suspended development of UWB as part of the alternative MAC/PHY, Bluetooth v3.0 + HS solution.
A small but significant number of former WiMedia members had not and would not sign up to the necessary agreements for the IP transfer. As of 2009, the Bluetooth SIG was in the process of evaluating other options for its longer-term roadmap.[92][93][94]

The Bluetooth SIG completed the Bluetooth Core Specification version 4.0 (called Bluetooth Smart), which was adopted as of 30 June 2010. It includes Classic Bluetooth, Bluetooth high speed and Bluetooth Low Energy (BLE) protocols. Bluetooth high speed is based on Wi-Fi, and Classic Bluetooth consists of legacy Bluetooth protocols.

Bluetooth Low Energy, previously known as Wibree,[95] is a subset of Bluetooth v4.0 with an entirely new protocol stack for rapid build-up of simple links. As an alternative to the Bluetooth standard protocols that were introduced in Bluetooth v1.0 to v3.0, it is aimed at very low power applications powered by a coin cell. Chip designs allow for two types of implementation, dual-mode and single-mode, as well as enhanced versions of earlier implementations.[96] The provisional names Wibree and Bluetooth ULP (Ultra Low Power) were abandoned and the BLE name was used for a while. In late 2011, new logos "Bluetooth Smart Ready" for hosts and "Bluetooth Smart" for sensors were introduced as the general-public face of BLE.[97]

Compared to Classic Bluetooth, Bluetooth Low Energy is intended to provide considerably reduced power consumption and cost while maintaining a similar communication range. In terms of lengthening the battery life of Bluetooth devices, BLE represents a significant progression. Cost-reduced single-mode chips, which enable highly integrated and compact devices, feature a lightweight link layer providing ultra-low-power idle mode operation, simple device discovery, and reliable point-to-multipoint data transfer with advanced power-save and secure encrypted connections at the lowest possible cost.

General improvements in version 4.0 include the changes necessary to facilitate BLE modes, as well as the Generic Attribute Profile (GATT) and Security Manager (SM) services with AES encryption.

Core Specification Addendum 2 was unveiled in December 2011; it contains improvements to the audio Host Controller Interface and to the High Speed (802.11) Protocol Adaptation Layer. Core Specification Addendum 3 revision 2 has an adoption date of 24 July 2012. Core Specification Addendum 4 has an adoption date of 12 February 2013.

The Bluetooth SIG announced formal adoption of the Bluetooth v4.1 specification on 4 December 2013. This specification is an incremental software update to Bluetooth Specification v4.0, and not a hardware update. The update incorporates Bluetooth Core Specification Addenda (CSA 1, 2, 3 & 4) and adds new features that improve consumer usability. These include increased co-existence support for LTE and bulk data exchange rates, and they aid developer innovation by allowing devices to support multiple roles simultaneously.[106] Some of the new features were already available in a Core Specification Addendum (CSA) before the release of v4.1.

Bluetooth v4.2 was released on 2 December 2014;[108] it introduces features for the Internet of Things. Older Bluetooth hardware may receive 4.2 features such as Data Packet Length Extension and improved privacy via firmware updates.[109][110]

The Bluetooth SIG released Bluetooth 5 on 6 December 2016.[111] Its new features are mainly focused on new Internet of Things technology.
Sony was the first to announce Bluetooth 5.0 support with its Xperia XZ Premium in February 2017 during the Mobile World Congress 2017.[112] The Samsung Galaxy S8 launched with Bluetooth 5 support in April 2017. In September 2017, the iPhone 8, 8 Plus and iPhone X launched with Bluetooth 5 support as well. Apple also integrated Bluetooth 5 in its new HomePod offering released on 9 February 2018.[113] Marketing drops the point number, so that it is just "Bluetooth 5" (unlike Bluetooth 4.0);[114] the change is for the sake of "Simplifying our marketing, communicating user benefits more effectively and making it easier to signal significant technology updates to the market." Bluetooth 5 provides, for BLE, options that can double the data rate (2 Mbit/s burst) at the expense of range, or provide up to four times the range at the expense of data rate. The increase in transmissions could be important for Internet of Things devices, where many nodes connect throughout a whole house. Bluetooth 5 increases the capacity of connectionless services, such as location-relevant navigation,[115] of low-energy Bluetooth connections.[116][117][118] The Bluetooth SIG presented Bluetooth 5.1 on 21 January 2019.[120] On 31 December 2019, the Bluetooth SIG published the Bluetooth Core Specification version 5.2, which adds several new features.[121] The Bluetooth SIG published the Bluetooth Core Specification version 5.3, with further feature enhancements, on 13 July 2021.[128] The Bluetooth SIG released the Bluetooth Core Specification version 5.4 on 7 February 2023.[129] The Bluetooth SIG released the Bluetooth Core Specification version 6.0 on 27 August 2024,[130] adding further features.[131] The Bluetooth SIG released the Bluetooth Core Specification version 6.1 on 7 May 2025.[132] Seeking to extend the compatibility of Bluetooth devices, devices adhering to the standard use an interface called HCI (Host Controller Interface) between the host and the controller. High-level protocols such as SDP (the protocol used to find other Bluetooth devices within the communication range, also responsible for detecting the functions of devices in range), RFCOMM (the protocol used to emulate serial port connections) and TCS (the telephony control protocol) interact with the baseband controller through the L2CAP (Logical Link Control and Adaptation Protocol). The L2CAP protocol is responsible for the segmentation and reassembly of the packets. The hardware that makes up the Bluetooth device is logically made up of two parts, which may or may not be physically separate: a radio device, responsible for modulating and transmitting the signal, and a digital controller. The digital controller is likely a CPU, one of whose functions is to run a Link Controller and to interface with the host device, although some functions may be delegated to hardware. The Link Controller is responsible for the processing of the baseband and the management of ARQ and physical layer FEC protocols.
In addition, it handles the transfer functions (both asynchronous and synchronous), audio coding (e.g. the SBC codec) and data encryption. The CPU of the device is responsible for handling the Bluetooth-related instructions of the host device, in order to simplify its operation. To do this, the CPU runs software called the Link Manager, whose function is to communicate with other devices through the LMP protocol. A Bluetooth device is a short-range wireless device. Bluetooth devices are fabricated on RF CMOS integrated circuit (RF circuit) chips.[133][134] Bluetooth is defined as a layer protocol architecture consisting of core protocols, cable replacement protocols, telephony control protocols, and adopted protocols.[135] Mandatory protocols for all Bluetooth stacks are LMP, L2CAP and SDP. In addition, devices that communicate with Bluetooth almost universally can use the HCI and RFCOMM protocols.[citation needed] The Link Manager (LM) is the system that manages establishing the connection between devices. It is responsible for the establishment, authentication and configuration of the link. The Link Manager locates other managers and communicates with them via the LMP link management protocol. To perform its function as a service provider, the LM uses the services included in the Link Controller (LC). The Link Manager Protocol basically consists of several PDUs (Protocol Data Units) that are sent from one device to another. The Host Controller Interface provides a command interface between the controller and the host. The Logical Link Control and Adaptation Protocol (L2CAP) is used to multiplex multiple logical connections between two devices using different higher level protocols, and it provides segmentation and reassembly of on-air packets. In Basic mode, L2CAP provides packets with a payload configurable up to 64 kB, with 672 bytes as the default MTU, and 48 bytes as the minimum mandatory supported MTU. In Retransmission and Flow Control modes, L2CAP can be configured either for isochronous data or reliable data per channel by performing retransmissions and CRC checks. Bluetooth Core Specification Addendum 1 adds two additional L2CAP modes to the core specification, Enhanced Retransmission Mode (ERTM) and Streaming Mode (SM); these modes effectively deprecate the original Retransmission and Flow Control modes. Reliability in any of these modes is optionally and additionally guaranteed by the lower-layer Bluetooth BR/EDR air interface by configuring the number of retransmissions and the flush timeout (the time after which the radio flushes packets). In-order sequencing is guaranteed by the lower layer. Only L2CAP channels configured in ERTM or SM may be operated over AMP logical links. The Service Discovery Protocol (SDP) allows a device to discover services offered by other devices, and their associated parameters. For example, when you use a mobile phone with a Bluetooth headset, the phone uses SDP to determine which Bluetooth profiles the headset can use (Headset Profile, Hands Free Profile (HFP), Advanced Audio Distribution Profile (A2DP), etc.) and the protocol multiplexer settings needed for the phone to connect to the headset using each of them. Each service is identified by a Universally Unique Identifier (UUID), with official services (Bluetooth profiles) assigned a short-form UUID (16 bits rather than the full 128). Radio Frequency Communications (RFCOMM) is a cable replacement protocol used for generating a virtual serial data stream.
RFCOMM provides for binary data transport and emulates EIA-232 (formerly RS-232) control signals over the Bluetooth baseband layer, i.e., it is a serial port emulation. RFCOMM provides a simple, reliable data stream to the user, similar to TCP. It is used directly by many telephony-related profiles as a carrier for AT commands, as well as being a transport layer for OBEX over Bluetooth. Many Bluetooth applications use RFCOMM because of its widespread support and publicly available API on most operating systems; a minimal socket-level sketch is given after this section. Additionally, applications that used a serial port to communicate can be quickly ported to use RFCOMM. The Bluetooth Network Encapsulation Protocol (BNEP) is used for transferring another protocol stack's data via an L2CAP channel. Its main purpose is the transmission of IP packets in the Personal Area Networking Profile. BNEP performs a similar function to SNAP in Wireless LAN. The Audio/Video Control Transport Protocol (AVCTP) is used by the remote control profile to transfer AV/C commands over an L2CAP channel. The music control buttons on a stereo headset use this protocol to control the music player. The Audio/Video Distribution Transport Protocol (AVDTP) is used by the advanced audio distribution (A2DP) profile to stream music to stereo headsets over an L2CAP channel; it is also intended for the video distribution profile in Bluetooth transmission. The Telephony Control Protocol – Binary (TCS BIN) is the bit-oriented protocol that defines the call control signaling for the establishment of voice and data calls between Bluetooth devices. Additionally, "TCS BIN defines mobility management procedures for handling groups of Bluetooth TCS devices." TCS-BIN is only used by the cordless telephony profile, which failed to attract implementers. As such it is only of historical interest. Adopted protocols are defined by other standards-making organizations and incorporated into Bluetooth's protocol stack, allowing Bluetooth to code protocols only when necessary. Depending on packet type, individual packets may be protected by error correction, either 1/3 rate forward error correction (FEC) or 2/3 rate. In addition, packets with CRC will be retransmitted until acknowledged by automatic repeat request (ARQ). Any Bluetooth device in discoverable mode transmits, on demand, its device name, device class, list of services, and technical information (for example, device features, manufacturer, Bluetooth specification used, clock offset). Any device may perform an inquiry to find other devices to connect to, and any device can be configured to respond to such inquiries. However, if the device trying to connect knows the address of the device, it always responds to direct connection requests and transmits the information shown above if requested. Use of a device's services may require pairing or acceptance by its owner, but the connection itself can be initiated by any device and held until it goes out of range. Some devices can be connected to only one device at a time, and connecting to them prevents them from connecting to other devices and appearing in inquiries until they disconnect from the other device. Every device has a unique 48-bit address. However, these addresses are generally not shown in inquiries. Instead, friendly Bluetooth names are used, which can be set by the user. This name appears when another user scans for devices and in lists of paired devices. Most cellular phones have the Bluetooth name set to the manufacturer and model of the phone by default. Most cellular phones and laptops show only the Bluetooth names, and special programs are required to get additional information about remote devices.
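To make RFCOMM's serial-stream role concrete, here is a minimal editorial sketch using CPython's built-in Bluetooth socket support (available on Linux since Python 3.3); the peer address and channel number are placeholder assumptions, not values from this article.

```python
import socket

# Placeholder peer address and RFCOMM server channel (assumptions for illustration).
PEER_ADDR = "00:11:22:33:44:55"
CHANNEL = 1

# AF_BLUETOOTH with BTPROTO_RFCOMM gives a stream socket, used much like TCP.
s = socket.socket(socket.AF_BLUETOOTH, socket.SOCK_STREAM, socket.BTPROTO_RFCOMM)
s.connect((PEER_ADDR, CHANNEL))
s.sendall(b"AT\r")        # e.g. an AT command, as carried by telephony-related profiles
reply = s.recv(1024)      # read the peer's response from the reliable byte stream
s.close()
```

The point of the sketch is the shape of the API: once connected, the application sees an ordinary reliable byte stream, which is why serial-port applications port easily to RFCOMM.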
Friendly names can be confusing as, for example, there could be several cellular phones in range named T610 (see Bluejacking). Many services offered over Bluetooth can expose private data or let a connecting party control the Bluetooth device. Security reasons make it necessary to recognize specific devices, and thus enable control over which devices can connect to a given Bluetooth device. At the same time, it is useful for Bluetooth devices to be able to establish a connection without user intervention (for example, as soon as they are in range). To resolve this conflict, Bluetooth uses a process called bonding, and a bond is generated through a process called pairing. The pairing process is triggered either by a specific request from a user to generate a bond (for example, the user explicitly requests to "Add a Bluetooth device"), or it is triggered automatically when connecting to a service where (for the first time) the identity of a device is required for security purposes. These two cases are referred to as dedicated bonding and general bonding respectively. Pairing often involves some level of user interaction. This user interaction confirms the identity of the devices. When pairing completes, a bond forms between the two devices, enabling those two devices to connect in the future without repeating the pairing process to confirm device identities. When desired, the user can remove the bonding relationship. During pairing, the two devices establish a relationship by creating a shared secret known as a link key. If both devices store the same link key, they are said to be paired or bonded. A device that wants to communicate only with a bonded device can cryptographically authenticate the identity of the other device, ensuring it is the same device it previously paired with. Once a link key is generated, an authenticated ACL link between the devices may be encrypted to protect exchanged data against eavesdropping. Users can delete link keys from either device, which removes the bond between the devices, so it is possible for one device to have a stored link key for a device it is no longer paired with. Bluetooth services generally require either encryption or authentication and as such require pairing before they let a remote device connect. Some services, such as the Object Push Profile, elect not to explicitly require authentication or encryption so that pairing does not interfere with the user experience associated with the service use-cases. Pairing mechanisms changed significantly with the introduction of Secure Simple Pairing (SSP) in Bluetooth v2.1; SSP is considered simpler for the user than the earlier PIN-based mechanisms. Prior to Bluetooth v2.1, encryption is not required and can be turned off at any time. Moreover, the encryption key is only good for approximately 23.5 hours; using a single encryption key longer than this time allows simple XOR attacks to retrieve the encryption key. Bluetooth v2.1 addresses these weaknesses. Link keys may be stored on the device file system, not on the Bluetooth chip itself. Many Bluetooth chip manufacturers let link keys be stored on the device; however, if the device is removable, this means that the link key moves with the device. Bluetooth implements confidentiality, authentication and key derivation with custom algorithms based on the SAFER+ block cipher. Bluetooth key generation is generally based on a Bluetooth PIN, which must be entered into both devices.
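Returning briefly to the roughly 23.5-hour figure mentioned above: as an editorial note (not from the cited sources), it follows from the Bluetooth piconet clock, assuming the standard 28-bit clock whose least-significant bit ticks every 312.5 µs. The clock wraps after $2^{28} \times 312.5\,\mu\mathrm{s} \approx 8.39 \times 10^{4}\,\mathrm{s} \approx 23.3$ hours, so an encryption key reused past one full clock period begins repeating keystream state and becomes vulnerable.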
The key-generation procedure might be modified if one of the devices has a fixed PIN (e.g., for headsets or similar devices with a restricted user interface). During pairing, an initialization key or master key is generated, using the E22 algorithm.[136] The E0 stream cipher is used for encrypting packets, granting confidentiality, and is based on a shared cryptographic secret, namely a previously generated link key or master key. Those keys, used for subsequent encryption of data sent via the air interface, rely on the Bluetooth PIN, which has been entered into one or both devices. An overview of Bluetooth vulnerabilities and exploits was published in 2007 by Andreas Becker.[137] In September 2008, the National Institute of Standards and Technology (NIST) published a Guide to Bluetooth Security as a reference for organizations. It describes Bluetooth security capabilities and how to secure Bluetooth technologies effectively. While Bluetooth has its benefits, it is susceptible to denial-of-service attacks, eavesdropping, man-in-the-middle attacks, message modification, and resource misappropriation. Users and organizations must evaluate their acceptable level of risk and incorporate security into the lifecycle of Bluetooth devices. To help mitigate risks, the NIST document includes security checklists with guidelines and recommendations for creating and maintaining secure Bluetooth piconets, headsets, and smart card readers.[138] Bluetooth v2.1 – finalized in 2007 with consumer devices first appearing in 2009 – makes significant changes to Bluetooth's security, including pairing. See the pairing mechanisms section for more about these changes. Bluejacking is the sending of either a picture or a message from one user to an unsuspecting user through Bluetooth wireless technology. Common applications include short messages, e.g., "You've just been bluejacked!"[139] Bluejacking does not involve the removal or alteration of any data from the device.[140] Some form of DoS is also possible, even in modern devices, by sending unsolicited pairing requests in rapid succession; this becomes disruptive because most systems display a full-screen notification for every connection request, interrupting every other activity, especially on less powerful devices. In 2001, Jakobsson and Wetzel from Bell Laboratories discovered flaws in the Bluetooth pairing protocol and also pointed to vulnerabilities in the encryption scheme.[141] In 2003, Ben and Adam Laurie from A.L. Digital Ltd. discovered that serious flaws in some poor implementations of Bluetooth security may lead to disclosure of personal data.[142] In a subsequent experiment, Martin Herfurt from the trifinite.group was able to do a field trial at the CeBIT fairgrounds, showing the importance of the problem to the world. A new attack called BlueBug was used for this experiment.[143] In 2004 the first purported virus using Bluetooth to spread itself among mobile phones appeared on the Symbian OS.[144] The virus was first described by Kaspersky Lab and requires users to confirm the installation of unknown software before it can propagate. The virus was written as a proof-of-concept by a group of virus writers known as "29A" and sent to anti-virus groups. Thus, it should be regarded as a potential (but not real) security threat to Bluetooth technology or Symbian OS, since the virus has never spread outside of this system.
In August 2004, a world-record-setting experiment (see also Bluetooth sniping) showed that the range of Class 2 Bluetooth radios could be extended to 1.78 km (1.11 mi) with directional antennas and signal amplifiers.[145] This poses a potential security threat because it enables attackers to access vulnerable Bluetooth devices from a distance beyond expectation. The attacker must also be able to receive information from the victim to set up a connection. No attack can be made against a Bluetooth device unless the attacker knows its Bluetooth address and which channels to transmit on, although these can be deduced within a few minutes if the device is in use.[146] In January 2005, a mobile malware worm known as Lasco surfaced. The worm began targeting mobile phones using Symbian OS (Series 60 platform) using Bluetooth-enabled devices to replicate itself and spread to other devices. The worm is self-installing and begins once the mobile user approves the transfer of the file (Velasco.sis) from another device. Once installed, the worm begins looking for other Bluetooth-enabled devices to infect. Additionally, the worm infects other .SIS files on the device, allowing replication to another device through the use of removable media (Secure Digital, CompactFlash, etc.). The worm can render the mobile device unstable.[147] In April 2005, University of Cambridge security researchers published results of their actual implementation of passive attacks against the PIN-based pairing between commercial Bluetooth devices. They confirmed that such attacks are practical and fast, and that the Bluetooth symmetric key establishment method is vulnerable. To rectify this vulnerability, they designed an implementation that showed that stronger, asymmetric key establishment is feasible for certain classes of devices, such as mobile phones.[148] In June 2005, Yaniv Shaked[149] and Avishai Wool[150] published a paper describing both passive and active methods for obtaining the PIN for a Bluetooth link. The passive attack allows a suitably equipped attacker to eavesdrop on communications and spoof if the attacker was present at the time of initial pairing. The active method makes use of a specially constructed message that must be inserted at a specific point in the protocol, to make the master and slave repeat the pairing process. After that, the first method can be used to crack the PIN. This attack's major weakness is that it requires the user of the devices under attack to re-enter the PIN during the attack when the device prompts them to. Also, this active attack probably requires custom hardware, since most commercially available Bluetooth devices are not capable of the necessary timing.[151] In August 2005, police in Cambridgeshire, England, issued warnings about thieves using Bluetooth-enabled phones to track other devices left in cars. Police advised users to ensure that any mobile networking connections are deactivated if laptops and other devices are left in this way.[152] In April 2006, researchers from Secure Network and F-Secure published a report that warned of the large number of devices left in a visible state, and issued statistics on the spread of various Bluetooth services and the ease of spread of an eventual Bluetooth worm.[153] In October 2006, at the Luxembourgish Hack.lu Security Conference, Kevin Finistere and Thierry Zoller demonstrated and released a remote root shell via Bluetooth on Mac OS X v10.3.9 and v10.4.
They also demonstrated the first Bluetooth PIN and link-key cracker, which is based on the research of Wool and Shaked.[154] In April 2017, security researchers at Armis discovered multiple exploits in the Bluetooth software in various platforms, including Microsoft Windows, Linux, Apple iOS, and Google Android. These vulnerabilities are collectively called "BlueBorne". The exploits allow an attacker to connect to devices or systems without authentication and can give them "virtually full control over the device". Armis contacted Google, Microsoft, Apple, Samsung and Linux developers, allowing them to patch their software before the coordinated announcement of the vulnerabilities on 12 September 2017.[155] In July 2018, Lior Neumann and Eli Biham, researchers at the Technion – Israel Institute of Technology, identified a security vulnerability in the latest Bluetooth pairing procedures: Secure Simple Pairing and LE Secure Connections.[156][157] Also, in October 2018, Karim Lounis, a network security researcher at Queen's University, identified a security vulnerability, called CDV (Connection Dumping Vulnerability), on various Bluetooth devices that allows an attacker to tear down an existing Bluetooth connection and cause the deauthentication and disconnection of the involved devices. The researcher demonstrated the attack on various devices of different categories and from different manufacturers.[158] In August 2019, security researchers at the Singapore University of Technology and Design, Helmholtz Center for Information Security, and University of Oxford discovered a vulnerability, called KNOB (Key Negotiation of Bluetooth), in the key negotiation that would "brute force the negotiated encryption keys, decrypt the eavesdropped ciphertext, and inject valid encrypted messages (in real-time)".[159][160] Google released an Android security patch on 5 August 2019, which removed this vulnerability.[161] In November 2023, researchers from Eurecom revealed a new class of attacks known as BLUFFS (Bluetooth Low Energy Forward and Future Secrecy Attacks). These six new attacks expand on and work in conjunction with the previously known KNOB and BIAS (Bluetooth Impersonation AttackS) attacks. While the previous KNOB and BIAS attacks allowed an attacker to decrypt and spoof Bluetooth packets within a session, BLUFFS extends this capability to all sessions generated by a device (including past, present, and future). All devices running Bluetooth versions 4.2 up to and including 5.4 are affected.[162][163] Bluetooth uses the radio frequency spectrum in the 2.402 GHz to 2.480 GHz range,[164] which is non-ionizing radiation, of similar bandwidth to that used by wireless and mobile phones. No specific harm has been demonstrated, even though wireless transmission has been included by IARC in the possible carcinogen list. Maximum power output from a Bluetooth radio is 100 mW for Class 1, 2.5 mW for Class 2, and 1 mW for Class 3 devices. Even the maximum power output of Class 1 is a lower level than the lowest-powered mobile phones.[165] UMTS and W-CDMA output 250 mW, GSM1800/1900 outputs 1000 mW, and GSM850/900 outputs 2000 mW. The Bluetooth Innovation World Cup, a marketing initiative of the Bluetooth Special Interest Group (SIG), was an international competition that encouraged the development of innovations for applications leveraging Bluetooth technology in sports, fitness and health care products. The competition aimed to stimulate new markets.[166] The Bluetooth Innovation World Cup morphed into the Bluetooth Breakthrough Awards in 2013.
Bluetooth SIG subsequently launched the Imagine Blue Award in 2016 at Bluetooth World.[167] The Bluetooth Breakthrough Awards program highlights the most innovative products and applications available today, prototypes coming soon, and student-led projects in the making.[168]
https://en.wikipedia.org/wiki/Bluetooth
In mathematics, a topological space is, roughly speaking, a geometrical space in which closeness is defined but cannot necessarily be measured by a numeric distance. More specifically, a topological space is a set whose elements are called points, along with an additional structure called a topology, which can be defined as a set of neighbourhoods for each point that satisfy some axioms formalizing the concept of closeness. There are several equivalent definitions of a topology, the most commonly used of which is the definition through open sets, which is easier than the others to manipulate. A topological space is the most general type of mathematical space that allows for the definition of limits, continuity, and connectedness.[1][2] Common types of topological spaces include Euclidean spaces, metric spaces and manifolds. Although very general, the concept of topological spaces is fundamental, and used in virtually every branch of modern mathematics. The study of topological spaces in their own right is called general topology (or point-set topology). Around 1735, Leonhard Euler discovered the formula $V - E + F = 2$ relating the number of vertices (V), edges (E) and faces (F) of a convex polyhedron, and hence of a planar graph. The study and generalization of this formula, specifically by Cauchy (1789–1857) and L'Huilier (1750–1840), boosted the study of topology. In 1827, Carl Friedrich Gauss published General investigations of curved surfaces, which in section 3 defines the curved surface in a similar manner to the modern topological understanding: "A curved surface is said to possess continuous curvature at one of its points A, if the direction of all the straight lines drawn from A to points of the surface at an infinitesimal distance from A are deflected infinitesimally from one and the same plane passing through A."[3][non-primary source needed] Yet, "until Riemann's work in the early 1850s, surfaces were always dealt with from a local point of view (as parametric surfaces) and topological issues were never considered".[4] "Möbius and Jordan seem to be the first to realize that the main problem about the topology of (compact) surfaces is to find invariants (preferably numerical) to decide the equivalence of surfaces, that is, to decide whether two surfaces are homeomorphic or not."[4] The subject is clearly defined by Felix Klein in his "Erlangen Program" (1872): the study of the invariants of arbitrary continuous transformations, a kind of geometry. The term "topology" was introduced by Johann Benedict Listing in 1847, although he had used the term in correspondence some years earlier instead of the previously used "Analysis situs". The foundation of this science, for a space of any dimension, was created by Henri Poincaré. His first article on this topic appeared in 1894.[5] In the 1930s, James Waddell Alexander II and Hassler Whitney first expressed the idea that a surface is a topological space that is locally like a Euclidean plane. Topological spaces were first defined by Felix Hausdorff in 1914 in his seminal "Principles of Set Theory". Metric spaces had been defined earlier in 1906 by Maurice Fréchet, though it was Hausdorff who popularised the term "metric space" (German: metrischer Raum).[6][7][better source needed] The utility of the concept of a topology is shown by the fact that there are several equivalent definitions of this mathematical structure. Thus one chooses the axiomatization suited for the application.
The most commonly used is that in terms of open sets, but perhaps more intuitive is that in terms of neighbourhoods and so this is given first. This axiomatization is due to Felix Hausdorff. Let $X$ be a (possibly empty) set. The elements of $X$ are usually called points, though they can be any mathematical object. Let $\mathcal{N}$ be a function assigning to each $x$ (point) in $X$ a non-empty collection $\mathcal{N}(x)$ of subsets of $X$. The elements of $\mathcal{N}(x)$ will be called neighbourhoods of $x$ with respect to $\mathcal{N}$ (or, simply, neighbourhoods of $x$). The function $\mathcal{N}$ is called a neighbourhood topology if the axioms below[8] are satisfied; and then $X$ with $\mathcal{N}$ is called a topological space.

1. If $N$ is a neighbourhood of $x$ (i.e., $N \in \mathcal{N}(x)$), then $x \in N$.
2. If $N$ is a subset of $X$ and includes a neighbourhood of $x$, then $N$ is a neighbourhood of $x$.
3. The intersection of two neighbourhoods of $x$ is a neighbourhood of $x$.
4. Any neighbourhood $N$ of $x$ includes a neighbourhood $M$ of $x$ such that $N$ is a neighbourhood of each point of $M$.

The first three axioms for neighbourhoods have a clear meaning. The fourth axiom has a very important use in the structure of the theory, that of linking together the neighbourhoods of different points of $X$. A standard example of such a system of neighbourhoods is for the real line $\mathbb{R}$, where a subset $N$ of $\mathbb{R}$ is defined to be a neighbourhood of a real number $x$ if it includes an open interval containing $x$. Given such a structure, a subset $U$ of $X$ is defined to be open if $U$ is a neighbourhood of all points in $U$. The open sets then satisfy the axioms given below in the next definition of a topological space. Conversely, when given the open sets of a topological space, the neighbourhoods satisfying the above axioms can be recovered by defining $N$ to be a neighbourhood of $x$ if $N$ includes an open set $U$ such that $x \in U$.[9] A topology on a set $X$ may be defined as a collection $\tau$ of subsets of $X$, called open sets and satisfying the following axioms:[10]

1. The empty set and $X$ itself belong to $\tau$.
2. Any arbitrary (finite or infinite) union of members of $\tau$ belongs to $\tau$.
3. The intersection of any finite number of members of $\tau$ belongs to $\tau$.

As this definition of a topology is the most commonly used, the set $\tau$ of the open sets is commonly called a topology on $X$. A subset $C \subseteq X$ is said to be closed in $(X, \tau)$ if its complement $X \setminus C$ is an open set. Using de Morgan's laws, the above axioms defining open sets become axioms defining closed sets:

1. The empty set and $X$ are closed.
2. The intersection of any collection of closed sets is also closed.
3. The union of any finite number of closed sets is also closed.

Using these axioms, another way to define a topological space is as a set $X$ together with a collection $\tau$ of closed subsets of $X$. Thus the sets in the topology $\tau$ are the closed sets, and their complements in $X$ are the open sets. There are many other equivalent ways to define a topological space: in other words the concepts of neighbourhood, or that of open or closed sets can be reconstructed from other starting points and satisfy the correct axioms. Another way to define a topological space is by using the Kuratowski closure axioms, which define the closed sets as the fixed points of an operator on the power set of $X$. A net is a generalisation of the concept of sequence. A topology is completely determined if for every net in $X$ the set of its accumulation points is specified. Many topologies can be defined on a set to form a topological space; a small computational check of the open-set axioms on a finite set is sketched below.
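As a concrete illustration (an editorial sketch, not part of the article), the following Python snippet checks the three open-set axioms for a candidate topology on a small finite set; the example collections are arbitrary choices.

```python
from itertools import combinations

def is_topology(X, tau):
    """Check the open-set axioms for a candidate topology tau on a finite set X."""
    X, tau = frozenset(X), {frozenset(U) for U in tau}
    # Axiom 1: the empty set and X itself are open.
    if frozenset() not in tau or X not in tau:
        return False
    # Axioms 2 and 3: closure under unions and (finite) intersections.
    # On a finite set it suffices to check all pairs, since any union or
    # intersection of finitely many sets can be built up two at a time.
    for U, V in combinations(tau, 2):
        if U | V not in tau or U & V not in tau:
            return False
    return True

X = {1, 2, 3}
tau_good = [set(), {1}, {1, 2}, {1, 2, 3}]   # a nested chain: always a topology
tau_bad  = [set(), {1}, {2}, {1, 2, 3}]      # missing {1} ∪ {2} = {1, 2}
print(is_topology(X, tau_good))  # True
print(is_topology(X, tau_bad))   # False
```

The failing example shows how the union axiom can break: the collection contains {1} and {2} but not their union.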
When every open set of a topology $\tau_1$ is also open for a topology $\tau_2$, one says that $\tau_2$ is finer than $\tau_1$, and $\tau_1$ is coarser than $\tau_2$. A proof that relies only on the existence of certain open sets will also hold for any finer topology, and similarly a proof that relies only on certain sets not being open applies to any coarser topology. The terms larger and smaller are sometimes used in place of finer and coarser, respectively. The terms stronger and weaker are also used in the literature, but with little agreement on the meaning, so one should always be sure of an author's convention when reading. The collection of all topologies on a given fixed set $X$ forms a complete lattice: if $F = \{\tau_\alpha : \alpha \in A\}$ is a collection of topologies on $X$, then the meet of $F$ is the intersection of $F$, and the join of $F$ is the meet of the collection of all topologies on $X$ that contain every member of $F$. A function $f : X \to Y$ between topological spaces is called continuous if for every $x \in X$ and every neighbourhood $N$ of $f(x)$ there is a neighbourhood $M$ of $x$ such that $f(M) \subseteq N$. This relates easily to the usual definition in analysis. Equivalently, $f$ is continuous if the inverse image of every open set is open.[11] This is an attempt to capture the intuition that there are no "jumps" or "separations" in the function. A homeomorphism is a bijection that is continuous and whose inverse is also continuous. Two spaces are called homeomorphic if there exists a homeomorphism between them. From the standpoint of topology, homeomorphic spaces are essentially identical.[12] In category theory, one of the fundamental categories is Top, which denotes the category of topological spaces whose objects are topological spaces and whose morphisms are continuous functions. The attempt to classify the objects of this category (up to homeomorphism) by invariants has motivated areas of research, such as homotopy theory, homology theory, and K-theory. A given set may have many different topologies. If a set is given a different topology, it is viewed as a different topological space. Any set can be given the discrete topology in which every subset is open. The only convergent sequences or nets in this topology are those that are eventually constant. Also, any set can be given the trivial topology (also called the indiscrete topology), in which only the empty set and the whole space are open. Every sequence and net in this topology converges to every point of the space. This example shows that in general topological spaces, limits of sequences need not be unique. However, often topological spaces must be Hausdorff spaces where limit points are unique. There exist numerous topologies on any given finite set. Such spaces are called finite topological spaces. Finite spaces are sometimes used to provide examples or counterexamples to conjectures about topological spaces in general. Any set can be given the cofinite topology in which the open sets are the empty set and the sets whose complement is finite.
This is the smallest T1 topology on any infinite set.[13] Any set can be given the cocountable topology, in which a set is defined as open if it is either empty or its complement is countable. When the set is uncountable, this topology serves as a counterexample in many situations. The real line can also be given the lower limit topology. Here, the basic open sets are the half-open intervals $[a, b)$. This topology on $\mathbb{R}$ is strictly finer than the Euclidean topology defined above; a sequence converges to a point in this topology if and only if it converges from above in the Euclidean topology. This example shows that a set may have many distinct topologies defined on it. If $\gamma$ is an ordinal number, then the set $\gamma = [0, \gamma)$ may be endowed with the order topology generated by the intervals $(\alpha, \beta)$, $[0, \beta)$, and $(\alpha, \gamma)$ where $\alpha$ and $\beta$ are elements of $\gamma$. Every manifold has a natural topology since it is locally Euclidean. Similarly, every simplex and every simplicial complex inherits a natural topology from $\mathbb{R}^n$. The Sierpiński space is the simplest non-discrete topological space. It has important relations to the theory of computation and semantics. Every subset of a topological space can be given the subspace topology in which the open sets are the intersections of the open sets of the larger space with the subset. For any indexed family of topological spaces, the product can be given the product topology, which is generated by the inverse images of open sets of the factors under the projection mappings. For example, in finite products, a basis for the product topology consists of all products of open sets. For infinite products, there is the additional requirement that in a basic open set, all but finitely many of its projections are the entire space. This construction is a special case of an initial topology. A quotient space is defined as follows: if $X$ is a topological space and $Y$ is a set, and if $f : X \to Y$ is a surjective function, then the quotient topology on $Y$ is the collection of subsets of $Y$ that have open inverse images under $f$. In other words, the quotient topology is the finest topology on $Y$ for which $f$ is continuous. A common example of a quotient topology is when an equivalence relation is defined on the topological space $X$. The map $f$ is then the natural projection onto the set of equivalence classes. This construction is a special case of a final topology. The Vietoris topology on the set of all non-empty subsets of a topological space $X$, named for Leopold Vietoris, is generated by the following basis: for every $n$-tuple $U_1, \ldots, U_n$ of open sets in $X$, we construct a basis set consisting of all subsets of the union of the $U_i$ that have non-empty intersections with each $U_i$. The Fell topology on the set of all non-empty closed subsets of a locally compact Polish space $X$ is a variant of the Vietoris topology, and is named after mathematician James Fell.
It is generated by the following basis: for every $n$-tuple $U_1, \ldots, U_n$ of open sets in $X$ and for every compact set $K$, the set of all subsets of $X$ that are disjoint from $K$ and have nonempty intersections with each $U_i$ is a member of the basis. Metric spaces embody a metric, a precise notion of distance between points. Every metric space can be given a metric topology, in which the basic open sets are open balls defined by the metric. This is the standard topology on any normed vector space. On a finite-dimensional vector space this topology is the same for all norms. There are many ways of defining a topology on $\mathbb{R}$, the set of real numbers. The standard topology on $\mathbb{R}$ is generated by the open intervals. The set of all open intervals forms a base or basis for the topology, meaning that every open set is a union of some collection of sets from the base. In particular, this means that a set is open if there exists an open interval of non-zero radius about every point in the set. More generally, the Euclidean spaces $\mathbb{R}^n$ can be given a topology. In the usual topology on $\mathbb{R}^n$ the basic open sets are the open balls. Similarly, $\mathbb{C}$, the set of complex numbers, and $\mathbb{C}^n$ have a standard topology in which the basic open sets are open balls. For any algebraic objects we can introduce the discrete topology, under which the algebraic operations are continuous functions. For any such structure that is not finite, we often have a natural topology compatible with the algebraic operations, in the sense that the algebraic operations are still continuous. This leads to concepts such as topological groups, topological rings, topological fields and topological vector spaces over the latter. Local fields are topological fields important in number theory. The Zariski topology is defined algebraically on the spectrum of a ring or an algebraic variety. On $\mathbb{R}^n$ or $\mathbb{C}^n$, the closed sets of the Zariski topology are the solution sets of systems of polynomial equations. If $\Gamma$ is a filter on a set $X$ then $\{\varnothing\} \cup \Gamma$ is a topology on $X$. Many sets of linear operators in functional analysis are endowed with topologies that are defined by specifying when a particular sequence of functions converges to the zero function. A linear graph has a natural topology that generalizes many of the geometric aspects of graphs with vertices and edges. Outer space of a free group $F_n$ consists of the so-called "marked metric graph structures" of volume 1 on $F_n$.[14] Topological spaces can be broadly classified, up to homeomorphism, by their topological properties. A topological property is a property of spaces that is invariant under homeomorphisms. To prove that two spaces are not homeomorphic it is sufficient to find a topological property not shared by them. Examples of such properties include connectedness, compactness, and various separation axioms. For algebraic invariants see algebraic topology.
https://en.wikipedia.org/wiki/Topological_space
In mathematics, a Carleman matrix is a matrix used to convert function composition into matrix multiplication. It is often used in iteration theory to find the continuous iteration of functions which cannot be iterated by pattern recognition alone. Other uses of Carleman matrices occur in the theory of probability generating functions, and Markov chains. The Carleman matrix of an infinitely differentiable function $f(x)$ is defined as:

$$M[f]_{jk} = \frac{1}{k!} \left[ \frac{d^k}{dx^k} \, (f(x))^j \right]_{x=0},$$

so as to satisfy the (Taylor series) equation:

$$(f(x))^j = \sum_{k=0}^{\infty} M[f]_{jk} \, x^k.$$

For instance, the computation of $f(x)$ by $f(x) = \sum_{k=0}^{\infty} M[f]_{1,k} \, x^k$ simply amounts to the dot-product of row 1 of $M[f]$ with a column vector $\left[1, x, x^2, x^3, \ldots\right]^{\tau}$. The entries of $M[f]$ in the next row give the 2nd power of $f(x)$: $(f(x))^2 = \sum_{k=0}^{\infty} M[f]_{2,k} \, x^k$; and also, in order to have the zeroth power of $f(x)$ in $M[f]$, we adopt the row 0 containing zeros everywhere except the first position, such that $(f(x))^0 = 1 = \sum_{k=0}^{\infty} M[f]_{0,k} \, x^k$. Thus, the dot product of $M[f]$ with the column vector $[1, x, x^2, \ldots]^T$ yields the column vector $[1, f(x), f(x)^2, \ldots]^T$, i.e.,

$$M[f] \, [1, x, x^2, \ldots]^T = [1, f(x), f(x)^2, \ldots]^T.$$

A generalization of the Carleman matrix of a function can be defined around any point $x_0$, as $M[f]_{x_0} = M[g]$ where $g(x) = f(x + x_0) - x_0$. This allows the matrix power to be related as $(M[f]_{x_0})^n = M[f^{\circ n}]_{x_0}$, where $f^{\circ n}$ denotes the $n$-fold composition of $f$ with itself. More generally, for a family of basis functions $\psi_n$ one can expand $h(x) = \sum_n c_n(h) \cdot \psi_n(x)$ and define a generalized matrix $G[f]_{mn} = c_n(\psi_m \circ f)$. If we set $\psi_n(x) = x^n$ we have the Carleman matrix. Because $h(x) = \sum_n c_n(h) \cdot \psi_n(x) = \sum_n c_n(h) \cdot x^n$, we know that the $n$-th coefficient $c_n(h)$ must be the $n$-th coefficient of the Taylor series of $h$, that is, $c_n(h) = \frac{1}{n!} h^{(n)}(0)$. Therefore

$$G[f]_{mn} = c_n(\psi_m \circ f) = c_n(f(x)^m) = \frac{1}{n!} \left[ \frac{d^n}{dx^n} \, (f(x))^m \right]_{x=0},$$

which is the Carleman matrix given above. (It is important to note that this is not an orthonormal basis.) If $\{e_n(x)\}_n$ is an orthonormal basis for a Hilbert space with a defined inner product $\langle f, g \rangle$, we can set $\psi_n = e_n$ and $c_n(h)$ will be $\langle h, e_n \rangle$. Then $G[f]_{mn} = c_n(e_m \circ f) = \langle e_m \circ f, e_n \rangle$. If $e_n(x) = e^{inx}$ we have the analogue for Fourier series. Let $\hat{c}_n$ and $\hat{G}$ represent the Carleman coefficient and matrix in the Fourier basis. Because the basis is orthogonal, we have

$$\hat{c}_n(h) = \frac{1}{2\pi} \int_{-\pi}^{\pi} h(x) \, e^{-inx} \, dx.$$

Then, therefore,

$$\hat{G}[f]_{mn} = \hat{c}_n(e_m \circ f) = \langle e_m \circ f, e_n \rangle = \frac{1}{2\pi} \int_{-\pi}^{\pi} e^{imf(x)} \, e^{-inx} \, dx,$$

which is the Fourier-basis analogue of the Carleman matrix. Carleman matrices satisfy the fundamental relationship

$$M[f \circ g] = M[f] \, M[g],$$

which makes the Carleman matrix $M$ a (direct) representation of $f(x)$. Here the term $f \circ g$ denotes the composition of functions $f(g(x))$.
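As an editorial illustration of this fundamental relationship, here is a small Python sketch (using sympy, with an assumed truncation order N; the functions f and g are arbitrary example choices). The truncated matrices satisfy the identity exactly here because g(0) = 0, which makes all neglected entries of the infinite product vanish.

```python
import sympy as sp

x = sp.symbols('x')
N = 6  # truncation order: indices j, k = 0 .. N-1

def carleman(f, N):
    # M[f][j, k] = coefficient of x**k in the Taylor expansion of f(x)**j at 0
    M = sp.zeros(N, N)
    for j in range(N):
        coeffs = sp.Poly(sp.series(f**j, x, 0, N).removeO(), x)
        for k in range(N):
            M[j, k] = coeffs.coeff_monomial(x**k)
    return M

f = 2*x + x**3
g = x + x**2              # g(0) = 0, so the truncated identity below is exact
Mf, Mg = carleman(f, N), carleman(g, N)
M_comp = carleman(sp.expand(f.subs(x, g)), N)   # Carleman matrix of f∘g

assert Mf * Mg == M_comp  # M[f∘g] = M[f] M[g] on the truncated block
```

The key design point is that row j of the matrix is just the list of Taylor coefficients of $f(x)^j$, so matrix multiplication reproduces substitution of one power series into another.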
Other properties include the power rule $M[f^{\circ n}] = M[f]^n$ for iterated functions and, where the inverse function is defined, $M[f^{-1}] = M[f]^{-1}$. The Carleman matrix of a constant $f(x) = a$ has rows given by the expansion of $a^j$: $M[a]_{jk} = a^j$ if $k = 0$ and $0$ otherwise. The Carleman matrix of the identity function is the identity matrix: $M[x]_{jk} = \delta_{jk}$. The Carleman matrix of a constant addition $f(x) = a + x$ is $M[a + x]_{jk} = \binom{j}{k} a^{j-k}$. The Carleman matrix of the successor function $f(x) = 1 + x$ is equivalent to the binomial coefficient: $M[1 + x]_{jk} = \binom{j}{k}$. The Carleman matrix of the logarithm $\ln(1 + x)$ is related to the (signed) Stirling numbers of the first kind scaled by factorials, and the Carleman matrix of $-\ln(1 - x)$ is related to the (unsigned) Stirling numbers of the first kind scaled by factorials. The Carleman matrix of the exponential function $e^x - 1$ is related to the Stirling numbers of the second kind scaled by factorials. The Carleman matrix of exponential functions $f(x) = e^{ax}$ is $M[e^{ax}]_{jk} = \frac{(ja)^k}{k!}$. The Carleman matrix of a constant multiple $f(x) = ax$ is the diagonal matrix $M[ax]_{jk} = a^j \delta_{jk}$. The Carleman matrix of a linear function $f(x) = a + bx$ is $M[a + bx]_{jk} = \binom{j}{k} a^{j-k} b^k$. For a general power series, whether of the form $f(x) = \sum_{k=1}^{\infty} f_k x^k$ or $f(x) = \sum_{k=0}^{\infty} f_k x^k$, row 1 of the Carleman matrix consists of the coefficients $f_k$ themselves, and in general $M[f]_{jk}$ is the coefficient of $x^k$ in $(f(x))^j$. The Bell matrix or the Jabotinsky matrix of a function $f(x)$ is defined as[1][2][3]

$$B[f]_{jk} = \frac{1}{j!} \left[ \frac{d^j}{dx^j} \, (f(x))^k \right]_{x=0},$$

so as to satisfy the equation

$$(f(x))^k = \sum_{j=0}^{\infty} B[f]_{jk} \, x^j.$$

These matrices were developed in 1947 by Eri Jabotinsky to represent convolutions of polynomials.[4] The Bell matrix is the transpose of the Carleman matrix and satisfies $B[f \circ g] = B[g] \, B[f]$, which makes the Bell matrix $B$ an anti-representation of $f(x)$.
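Using the hypothetical `carleman` helper from the sketch above, the binomial pattern for the successor function in the list can be displayed directly:

```python
print(carleman(1 + x, 4))
# Matrix([[1, 0, 0, 0],
#         [1, 1, 0, 0],
#         [1, 2, 1, 0],
#         [1, 3, 3, 1]])
```

Row j is the binomial expansion of $(1+x)^j$, i.e., Pascal's triangle written as a lower-triangular matrix.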
https://en.wikipedia.org/wiki/Bell_matrix
A cold case is a crime, or a suspected crime, that has not yet been fully resolved and is not the subject of a current criminal investigation, but for which new information could emerge from new witness testimony, re-examined archives, new or retained material evidence, or fresh activities of a suspect. New technological methods developed after the crime was committed can be used on the surviving evidence for analysis, often with conclusive results. Typically, cold cases are violent and other major felony crimes, such as murder and rape, which—unlike unsolved minor crimes—are generally not subject to a statute of limitations. Sometimes disappearances can also be considered cold cases if the victim has not been seen or heard from for some time, such as the case of Natalee Holloway or the Beaumont children. The rate at which cold cases are solved is slowly declining; soon fewer than 30% will be solved per year, and about 35% of those cases are not cold cases at all. Some cases become instantly cold when a seemingly closed (solved) case is re-opened due to the discovery of new evidence pointing away from the original suspect(s). Other cases are cold when the crime is discovered well after the fact—for example, by the discovery of human remains.[1] Some cases become classified cold cases when a case that had been originally ruled an accident or suicide is re-designated as murder when new evidence emerges. The John Christie murders are a notable example: Timothy Evans was wrongly executed for the alleged murders of his wife and child, and many other bodies were later found in the house where they lived with Christie, who was then executed for the crimes. The case helped a campaign against capital punishment in Britain. A case is considered unsolved until a suspect has been identified, charged, and tried for the crime. A case that goes to trial and does not result in a conviction can also be kept on the books pending new evidence. In some cases, a suspect, often called a "person of interest" or "subject", is identified early on, but no evidence definitively linking the subject to the crime is found at that time, and more often than not the subject is not forthcoming with a confession. This often happens in cases where the subject has an alibi, alibi witnesses, or there is a lack of forensic evidence. Eventually, the alibi may be disproved, the witnesses may recant their statements, or advances in forensics may help bring the subject to justice. Sometimes a case is not solved, but forensic evidence helps to determine that the crimes are serial crimes. The BTK case and Original Night Stalker cases are such examples.[2] The Texas Rangers have established a website[3] in the hopes that it shall elicit new information and investigative leads.[4] Sometimes, a viable suspect has been overlooked or simply ignored due to then-flimsy circumstantial evidence, the presence of a likelier suspect (who is later proven to be innocent), or a tendency of investigators to zero in on someone else to the exclusion of other possibilities (which goes back to the likelier suspect angle)—known as "tunnel vision". With the advent of and improvements to DNA testing/DNA profiling and other forensics technology, many cold cases are being re-opened and prosecuted.[5] Police departments are opening cold case units whose job is to re-examine cold case files. DNA evidence helps in such cases, but as with fingerprints, it is of no value unless there is evidence on file to compare it to.
However, to combat that issue, the FBI is switching from the Integrated Automated Fingerprint Identification System (IAFIS) to a newer technology called Next Generation Identification (NGI). Improvements have also been made in other forensic fields. The identity of Jack the Ripper is a notorious example of an outstanding cold case, with numerous suggestions as to the identity of the serial killer. Similarly, the Zodiac Killer has been studied extensively for almost 50 years, with numerous suspects discussed and debated. The perpetrators of the Wall Street bombing of 1920 have never been positively identified, though the Galleanists, a group of Italian anarchists, are widely believed to have planned the explosion. The burning of the Reichstag building in 1933 remains controversial, and although Marinus van der Lubbe was tried, convicted and executed for arson, it is possible that the Reichstag fire was perpetrated by the Nazis to enhance their power and destroy democracy in Germany. The phrase "Cold Case" is also found in a number of story and book titles.
https://en.wikipedia.org/wiki/Cold_case
In probability theory, the multinomial distribution is a generalization of the binomial distribution. For example, it models the probability of counts for each side of a k-sided die rolled n times. For n independent trials each of which leads to a success for exactly one of k categories, with each category having a given fixed success probability, the multinomial distribution gives the probability of any particular combination of numbers of successes for the various categories. When k is 2 and n is 1, the multinomial distribution is the Bernoulli distribution. When k is 2 and n is bigger than 1, it is the binomial distribution. When k is bigger than 2 and n is 1, it is the categorical distribution. The term "multinoulli" is sometimes used for the categorical distribution to emphasize this four-way relationship (so n determines the suffix, and k the prefix). The Bernoulli distribution models the outcome of a single Bernoulli trial. In other words, it models whether flipping a (possibly biased) coin one time will result in either a success (obtaining a head) or failure (obtaining a tail). The binomial distribution generalizes this to the number of heads from performing n independent flips (Bernoulli trials) of the same coin. The multinomial distribution models the outcome of n experiments, where the outcome of each trial has a categorical distribution, such as rolling a (possibly biased) k-sided die n times. Let k be a fixed finite number. Mathematically, we have k possible mutually exclusive outcomes, with corresponding probabilities $p_1, \ldots, p_k$, and $n$ independent trials. Since the $k$ outcomes are mutually exclusive and one must occur, we have $p_i \geq 0$ for $i = 1, \ldots, k$ and $\sum_{i=1}^{k} p_i = 1$. Then if the random variables $X_i$ indicate the number of times outcome number $i$ is observed over the $n$ trials, the vector $X = (X_1, \ldots, X_k)$ follows a multinomial distribution with parameters $n \in \{0, 1, 2, \ldots\}$, the number of trials, and $p = (p_1, \ldots, p_k)$, where the integer $k > 0$ is the number of mutually exclusive events. While the trials are independent, their outcomes $X_i$ are dependent because they must sum to $n$. Suppose one does an experiment of extracting $n$ balls of $k$ different colors from a bag, replacing the extracted balls after each draw. Balls of the same color are equivalent. Denote the variable which is the number of extracted balls of color $i$ ($i = 1, \ldots, k$) as $X_i$, and denote as $p_i$ the probability that a given extraction will be in color $i$. The probability mass function of this multinomial distribution is:

$$f(x_1, \ldots, x_k; n, p_1, \ldots, p_k) = \Pr(X_1 = x_1 \text{ and } \ldots \text{ and } X_k = x_k) = \begin{cases} \dfrac{n!}{x_1! \cdots x_k!} \, p_1^{x_1} \cdots p_k^{x_k}, & \text{when } \sum_{i=1}^{k} x_i = n, \\ 0, & \text{otherwise,} \end{cases}$$

for non-negative integers $x_1, \ldots, x_k$. The probability mass function can be expressed using the gamma function as:

$$f(x_1, \ldots, x_k; p_1, \ldots, p_k) = \frac{\Gamma\!\left(\sum_i x_i + 1\right)}{\prod_i \Gamma(x_i + 1)} \prod_{i=1}^{k} p_i^{x_i}.$$

This form shows its resemblance to the Dirichlet distribution, which is its conjugate prior. Suppose that in a three-way election for a large country, candidate A received 20% of the votes, candidate B received 30% of the votes, and candidate C received 50% of the votes. If six voters are selected randomly, what is the probability that there will be exactly one supporter for candidate A, two supporters for candidate B and three supporters for candidate C in the sample? (A worked computation is given after the note below.) Note: Since we are assuming that the voting population is large, it is reasonable and permissible to think of the probabilities as unchanging once a voter is selected for the sample. Technically speaking this is sampling without replacement, so the correct distribution is the multivariate hypergeometric distribution, but the distributions converge as the population grows large in comparison to a fixed sample size.[1]
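Plugging the example's numbers into the probability mass function above gives

$$\Pr(X_A = 1, X_B = 2, X_C = 3) = \frac{6!}{1! \, 2! \, 3!} \, (0.2)^1 (0.3)^2 (0.5)^3 = 60 \times 0.2 \times 0.09 \times 0.125 = 0.135.$$

The same number can be reproduced with a short Python check (the scipy call is an editorial cross-check, not part of the article):

```python
from math import factorial

p = [0.2, 0.3, 0.5]   # vote shares for candidates A, B, C
x = [1, 2, 3]         # desired counts in the sample
n = sum(x)            # 6 sampled voters

coef = factorial(n) / (factorial(x[0]) * factorial(x[1]) * factorial(x[2]))
prob = coef * p[0]**x[0] * p[1]**x[1] * p[2]**x[2]
print(prob)  # 0.135

# Optional cross-check with scipy, if available:
# from scipy.stats import multinomial
# print(multinomial.pmf(x, n=6, p=p))  # 0.135
```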
The multinomial distribution is normalized according to:

$$\sum_{\substack{x_1, \ldots, x_k \geq 0 \\ x_1 + \cdots + x_k = n}} \frac{n!}{x_1! \cdots x_k!} \, p_1^{x_1} \cdots p_k^{x_k} = 1,$$

where the sum is over all combinations of non-negative integers $x_j$ such that $\sum_{j=1}^{k} x_j = n$. The expected number of times the outcome $i$ was observed over $n$ trials is

$$\operatorname{E}(X_i) = n p_i.$$

The covariance matrix is as follows. Each diagonal entry is the variance of a binomially distributed random variable, and is therefore

$$\operatorname{Var}(X_i) = n p_i (1 - p_i).$$

The off-diagonal entries are the covariances:

$$\operatorname{Cov}(X_i, X_j) = -n p_i p_j$$

for $i, j$ distinct. All covariances are negative because for fixed $n$, an increase in one component of a multinomial vector requires a decrease in another component. When these expressions are combined into a matrix with $i, j$ element $\operatorname{cov}(X_i, X_j)$, the result is a $k \times k$ positive-semidefinite covariance matrix of rank $k - 1$. In the special case where $k = n$ and where the $p_i$ are all equal, the covariance matrix is the centering matrix. The entries of the corresponding correlation matrix are

$$\rho(X_i, X_i) = 1, \qquad \rho(X_i, X_j) = \frac{\operatorname{Cov}(X_i, X_j)}{\sqrt{\operatorname{Var}(X_i)\operatorname{Var}(X_j)}} = -\sqrt{\frac{p_i p_j}{(1 - p_i)(1 - p_j)}}.$$

Note that the number of trials $n$ drops out of this expression. Each of the $k$ components separately has a binomial distribution with parameters $n$ and $p_i$, for the appropriate value of the subscript $i$. The support of the multinomial distribution is the set

$$\left\{ (x_1, \ldots, x_k) \in \mathbb{N}^k : x_1 + \cdots + x_k = n \right\}.$$

Its number of elements is

$$\binom{n + k - 1}{k - 1}.$$

In matrix notation,

$$\operatorname{E}(X) = n p \qquad \text{and} \qquad \operatorname{Var}(X) = n \left( \operatorname{diag}(p) - p p^{\mathrm{T}} \right),$$

with $p^{\mathrm{T}}$ = the row vector transpose of the column vector $p$. (A small numerical check of these moment formulas is sketched after this section.) Just like one can interpret the binomial distribution as (normalized) one-dimensional (1D) slices of Pascal's triangle, so too can one interpret the multinomial distribution as 2D (triangular) slices of Pascal's pyramid, or 3D/4D/+ (pyramid-shaped) slices of higher-dimensional analogs of Pascal's triangle. This reveals an interpretation of the range of the distribution: discretized equilateral "pyramids" in arbitrary dimension—i.e. a simplex with a grid.[citation needed] Similarly, just like one can interpret the binomial distribution as the polynomial coefficients of $(p + q)^n$ when expanded, one can interpret the multinomial distribution as the coefficients of $(p_1 + p_2 + p_3 + \cdots + p_k)^n$ when expanded, noting that just the coefficients must sum up to 1. By Stirling's formula, in the limit of $n, x_1, \ldots, x_k \to \infty$, we have

$$\ln \binom{n}{x_1, \cdots, x_k} + \sum_{i=1}^{k} x_i \ln p_i = -n D_{KL}(\hat{p} \| p) - \frac{k-1}{2} \ln(2\pi n) - \frac{1}{2} \sum_{i=1}^{k} \ln(\hat{p}_i) + o(1),$$

where the relative frequencies $\hat{p}_i = x_i / n$ in the data can be interpreted as probabilities from the empirical distribution $\hat{p}$, and $D_{KL}$ is the Kullback–Leibler divergence. This formula can be interpreted as follows. Consider $\Delta_k$, the space of all possible distributions over the categories $\{1, 2, \ldots, k\}$. It is a simplex. After $n$ independent samples from the categorical distribution $p$ (which is how we construct the multinomial distribution), we obtain an empirical distribution $\hat{p}$. By the asymptotic formula, the probability that the empirical distribution $\hat{p}$ deviates from the actual distribution $p$ decays exponentially, at a rate $n D_{KL}(\hat{p} \| p)$. The more experiments and the more different $\hat{p}$ is from $p$, the less likely it is to see such an empirical distribution.
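Before continuing the large-deviations argument, here is the promised editorial check of the moment formulas above by simulation, using numpy's multinomial sampler (the sample size and probabilities are arbitrary example choices):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 6, np.array([0.2, 0.3, 0.5])

draws = rng.multinomial(n, p, size=200_000)   # each row is one vector (X_1, X_2, X_3)

print(draws.mean(axis=0))                  # ≈ n * p = [1.2, 1.8, 3.0]
print(np.cov(draws, rowvar=False))         # ≈ n * (diag(p) - p p^T); off-diagonals negative
print(n * (np.diag(p) - np.outer(p, p)))   # the theoretical covariance matrix
```

The empirical covariance's negative off-diagonal entries illustrate the point made above: with n fixed, a surplus in one category forces a deficit elsewhere.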
IfA{\displaystyle A}is a closed subset ofΔk{\displaystyle \Delta _{k}}, then by dividing upA{\displaystyle A}into pieces, and reasoning about the growth rate ofPr(p^∈Aϵ){\displaystyle Pr({\hat {p}}\in A_{\epsilon })}on each pieceAϵ{\displaystyle A_{\epsilon }}, we obtainSanov's theorem, which states thatlimn→∞1nln⁡Pr(p^∈A)=−infp^∈ADKL(p^‖p){\displaystyle \lim _{n\to \infty }{\frac {1}{n}}\ln Pr({\hat {p}}\in A)=-\inf _{{\hat {p}}\in A}D_{KL}({\hat {p}}\|p)} Due to the exponential decay, at largen{\displaystyle n}, almost all the probability mass is concentrated in a small neighborhood ofp{\displaystyle p}. In this small neighborhood, we can take the first nonzero term in the Taylor expansion ofDKL{\displaystyle D_{KL}}, to obtainln⁡(nx1,⋯,xk)p1x1⋯pkxk≈−n2∑i=1k(p^i−pi)2pi=−12∑i=1k(xi−npi)2npi{\displaystyle \ln {\binom {n}{x_{1},\cdots ,x_{k}}}p_{1}^{x_{1}}\cdots p_{k}^{x_{k}}\approx -{\frac {n}{2}}\sum _{i=1}^{k}{\frac {({\hat {p}}_{i}-p_{i})^{2}}{p_{i}}}=-{\frac {1}{2}}\sum _{i=1}^{k}{\frac {(x_{i}-np_{i})^{2}}{np_{i}}}}This resembles the gaussian distribution, which suggests the following theorem: Theorem.At then→∞{\displaystyle n\to \infty }limit,n∑i=1k(p^i−pi)2pi=∑i=1k(xi−npi)2npi{\displaystyle n\sum _{i=1}^{k}{\frac {({\hat {p}}_{i}-p_{i})^{2}}{p_{i}}}=\sum _{i=1}^{k}{\frac {(x_{i}-np_{i})^{2}}{np_{i}}}}converges in distributionto thechi-squared distributionχ2(k−1){\displaystyle \chi ^{2}(k-1)}. The space of all distributions over categories{1,2,…,k}{\displaystyle \{1,2,\ldots ,k\}}is asimplex:Δk={(y1,…,yk):y1,…,yk≥0,∑iyi=1}{\displaystyle \Delta _{k}=\left\{(y_{1},\ldots ,y_{k})\colon y_{1},\ldots ,y_{k}\geq 0,\sum _{i}y_{i}=1\right\}}, and the set of all possible empirical distributions aftern{\displaystyle n}experiments is a subset of the simplex:Δk,n={(x1/n,…,xk/n):x1,…,xk∈N,∑ixi=n}{\displaystyle \Delta _{k,n}=\left\{(x_{1}/n,\ldots ,x_{k}/n)\colon x_{1},\ldots ,x_{k}\in \mathbb {N} ,\sum _{i}x_{i}=n\right\}}. That is, it is the intersection betweenΔk{\displaystyle \Delta _{k}}and the lattice(Zk)/n{\displaystyle (\mathbb {Z} ^{k})/n}. Asn{\displaystyle n}increases, most of the probability mass is concentrated in a subset ofΔk,n{\displaystyle \Delta _{k,n}}nearp{\displaystyle p}, and the probability distribution nearp{\displaystyle p}becomes well-approximated by(nx1,⋯,xk)p1x1⋯pkxk≈e−n2∑i(p^i−pi)2pi{\displaystyle {\binom {n}{x_{1},\cdots ,x_{k}}}p_{1}^{x_{1}}\cdots p_{k}^{x_{k}}\approx e^{-{\frac {n}{2}}\sum _{i}{\frac {({\hat {p}}_{i}-p_{i})^{2}}{p_{i}}}}}From this, we see that the subset upon which the mass is concentrated has radius on the order of1/n{\displaystyle 1/{\sqrt {n}}}, but the points in the subset are separated by distance on the order of1/n{\displaystyle 1/n}, so at largen{\displaystyle n}, the points merge into a continuum. To convert this from a discrete probability distribution to a continuous probability density, we need to multiply by the volume occupied by each point ofΔk,n{\displaystyle \Delta _{k,n}}inΔk{\displaystyle \Delta _{k}}. However, by symmetry, every point occupies exactly the same volume (except a negligible set on the boundary), so we obtain a probability densityρ(p^)=Ce−n2∑i(p^i−pi)2pi{\displaystyle \rho ({\hat {p}})=Ce^{-{\frac {n}{2}}\sum _{i}{\frac {({\hat {p}}_{i}-p_{i})^{2}}{p_{i}}}}}, whereC{\displaystyle C}is a constant. Finally, since the simplexΔk{\displaystyle \Delta _{k}}is not all ofRk{\displaystyle \mathbb {R} ^{k}}, but only within a(k−1){\displaystyle (k-1)}-dimensional plane, we obtain the desired result. 
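A small simulation sketch of the chi-squared limit theorem above, assuming NumPy and SciPy are available: it draws multinomial samples, computes the Pearson statistic, and compares its empirical quantiles with those of χ²(k−1).

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    n, p = 500, np.array([0.1, 0.2, 0.3, 0.4])           # k = 4 categories
    x = rng.multinomial(n, p, size=100_000)

    # Pearson statistic sum_i (x_i - n p_i)^2 / (n p_i) for each sample
    q = ((x - n * p) ** 2 / (n * p)).sum(axis=1)

    # Empirical quantiles should track the chi-squared(k - 1) quantiles
    for level in (0.5, 0.9, 0.99):
        print(level, np.quantile(q, level), stats.chi2.ppf(level, df=len(p) - 1))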
The above concentration phenomenon can be easily generalized to the case where we condition upon linear constraints. This is the theoretical justification for Pearson's chi-squared test. Theorem. Given frequencies xi∈N{\displaystyle x_{i}\in \mathbb {N} }observed in a dataset with n{\displaystyle n}points, we impose ℓ+1{\displaystyle \ell +1}independent linear constraints {∑ip^i=1,∑ia1ip^i=b1,∑ia2ip^i=b2,⋯,∑iaℓip^i=bℓ{\displaystyle {\begin{cases}\sum _{i}{\hat {p}}_{i}=1,\\\sum _{i}a_{1i}{\hat {p}}_{i}=b_{1},\\\sum _{i}a_{2i}{\hat {p}}_{i}=b_{2},\\\cdots ,\\\sum _{i}a_{\ell i}{\hat {p}}_{i}=b_{\ell }\end{cases}}}(notice that the first constraint is simply the requirement that the empirical distributions sum to one), such that the empirical p^i=xi/n{\displaystyle {\hat {p}}_{i}=x_{i}/n}satisfy all these constraints simultaneously. Let q{\displaystyle q}denote the I{\displaystyle I}-projection of the prior distribution p{\displaystyle p}on the sub-region of the simplex allowed by the linear constraints. At the n→∞{\displaystyle n\to \infty }limit, sampled counts np^i{\displaystyle n{\hat {p}}_{i}}from the multinomial distribution conditional on the linear constraints are governed by 2nDKL(p^||q)≈n∑i(p^i−qi)2qi{\displaystyle 2nD_{KL}({\hat {p}}\vert \vert q)\approx n\sum _{i}{\frac {({\hat {p}}_{i}-q_{i})^{2}}{q_{i}}}}which converges in distribution to the chi-squared distribution χ2(k−1−ℓ){\displaystyle \chi ^{2}(k-1-\ell )}. An analogous proof applies in this Diophantine problem of coupled linear equations in count variables np^i{\displaystyle n{\hat {p}}_{i}},[2]but this time Δk,n{\displaystyle \Delta _{k,n}}is the intersection of (Zk)/n{\displaystyle (\mathbb {Z} ^{k})/n}with Δk{\displaystyle \Delta _{k}}and ℓ{\displaystyle \ell }hyperplanes, all linearly independent, so the probability density ρ(p^){\displaystyle \rho ({\hat {p}})}is restricted to a (k−ℓ−1){\displaystyle (k-\ell -1)}-dimensional plane. In particular, expanding the KL divergence DKL(p^||p){\displaystyle D_{KL}({\hat {p}}\vert \vert p)}around its minimum q{\displaystyle q}(the I{\displaystyle I}-projection of p{\displaystyle p}on Δk,n{\displaystyle \Delta _{k,n}}) in the constrained problem ensures, by the Pythagorean theorem for I{\displaystyle I}-divergence, that any constant and linear term in the counts np^i{\displaystyle n{\hat {p}}_{i}}vanishes from the conditional probability to multinomially sample those counts. Notice that by definition, every one of p^1,p^2,...,p^k{\displaystyle {\hat {p}}_{1},{\hat {p}}_{2},...,{\hat {p}}_{k}}must be a rational number, whereas p1,p2,...,pk{\displaystyle p_{1},p_{2},...,p_{k}}may be chosen from any real number in [0,1]{\displaystyle [0,1]}and need not satisfy the Diophantine system of equations. Only asymptotically as n→∞{\displaystyle n\rightarrow \infty }can the p^i{\displaystyle {\hat {p}}_{i}}'s be regarded as probabilities over [0,1]{\displaystyle [0,1]}. Away from empirically observed constraints b1,…,bℓ{\displaystyle b_{1},\ldots ,b_{\ell }}(such as moments or prevalences) the theorem can be generalized; in the case that all the p^i{\displaystyle {\hat {p}}_{i}}are equal, the theorem reduces to the concentration of entropies around the maximum entropy.[3][4] In some fields such as natural language processing, categorical and multinomial distributions are synonymous and it is common to speak of a multinomial distribution when a categorical distribution is actually meant.
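Returning to the χ²(k−1−ℓ) limit above, which underlies Pearson's chi-squared test, a hedged SciPy sketch; the counts are made-up illustration values, and ddof encodes the ℓ extra constraints or fitted parameters.

    import numpy as np
    from scipy import stats

    observed = np.array([18, 35, 47])          # hypothetical counts, n = 100
    expected = np.array([0.2, 0.3, 0.5]) * observed.sum()

    # ddof increases the default df = k - 1 reduction by the number of
    # additional independent linear constraints (the ell above); ddof=0 here.
    stat, pval = stats.chisquare(f_obs=observed, f_exp=expected, ddof=0)
    print(stat, pval)                          # statistic ~ chi2(k - 1) under H0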
The synonymous usage stems from the fact that it is sometimes convenient to express the outcome of a categorical distribution as a "1-of-k" vector (a vector with one element containing a 1 and all other elements containing a 0) rather than as an integer in the range 1…k{\displaystyle 1\dots k}; in this form, a categorical distribution is equivalent to a multinomial distribution over a single trial. The goal of equivalence testing is to establish the agreement between a theoretical multinomial distribution and observed counting frequencies. The theoretical distribution may be a fully specified multinomial distribution or a parametric family of multinomial distributions. Let q{\displaystyle q}denote a theoretical multinomial distribution and let p{\displaystyle p}be a true underlying distribution. The distributions p{\displaystyle p}and q{\displaystyle q}are considered equivalent if d(p,q)<ε{\displaystyle d(p,q)<\varepsilon }for a distance d{\displaystyle d}and a tolerance parameter ε>0{\displaystyle \varepsilon >0}. The equivalence test problem is H0={d(p,q)≥ε}{\displaystyle H_{0}=\{d(p,q)\geq \varepsilon \}}versus H1={d(p,q)<ε}{\displaystyle H_{1}=\{d(p,q)<\varepsilon \}}. The true underlying distribution p{\displaystyle p}is unknown. Instead, the counting frequencies pn{\displaystyle p_{n}}are observed, where n{\displaystyle n}is a sample size. An equivalence test uses pn{\displaystyle p_{n}}to reject H0{\displaystyle H_{0}}. If H0{\displaystyle H_{0}}can be rejected then the equivalence between p{\displaystyle p}and q{\displaystyle q}is shown at a given significance level. The equivalence test for the Euclidean distance can be found in the textbook by Wellek (2010).[5] The equivalence test for the total variation distance was developed in Ostrovski (2017).[6] The exact equivalence test for a specific cumulative distance was proposed in Frey (2009).[7] The distance between the true underlying distribution p{\displaystyle p}and a family of the multinomial distributions M{\displaystyle {\mathcal {M}}}is defined by d(p,M)=minh∈Md(p,h){\displaystyle d(p,{\mathcal {M}})=\min _{h\in {\mathcal {M}}}d(p,h)}. Then the equivalence test problem is given by H0={d(p,M)≥ε}{\displaystyle H_{0}=\{d(p,{\mathcal {M}})\geq \varepsilon \}}and H1={d(p,M)<ε}{\displaystyle H_{1}=\{d(p,{\mathcal {M}})<\varepsilon \}}. The distance d(p,M){\displaystyle d(p,{\mathcal {M}})}is usually computed using numerical optimization. The tests for this case were developed recently in Ostrovski (2018).[8] In the setting of a multinomial distribution, constructing confidence intervals for the difference between the proportions of observations from two events, pi−pj{\displaystyle p_{i}-p_{j}}, requires the incorporation of the negative covariance between the sample estimators p^i=Xin{\displaystyle {\hat {p}}_{i}={\frac {X_{i}}{n}}}and p^j=Xjn{\displaystyle {\hat {p}}_{j}={\frac {X_{j}}{n}}}. Some of the literature on the subject focuses on the use-case of matched-pairs binary data, which requires careful attention when translating the formulas to the general case of pi−pj{\displaystyle p_{i}-p_{j}}for any multinomial distribution. Formulas in the current section are generalized, while formulas in the next section focus on the matched-pairs binary data use-case.
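For illustration, a simplified bootstrap-style sketch of an equivalence test for the total variation distance, in the spirit of the tests cited above; the bootstrap scheme, tolerance ε, significance handling, and counts are all illustrative assumptions, not the procedures of Wellek, Ostrovski, or Frey.

    import numpy as np

    def tv_distance(p, q):
        """Total variation distance between two discrete distributions."""
        return 0.5 * np.abs(p - q).sum()

    def equivalence_test(counts, q, eps, alpha=0.05, n_boot=10_000, seed=0):
        """Reject H0: d(p, q) >= eps when a bootstrap upper confidence bound
        for d(p, q) falls below eps. Simplified illustration only."""
        rng = np.random.default_rng(seed)
        n = counts.sum()
        p_hat = counts / n
        boot = rng.multinomial(n, p_hat, size=n_boot) / n
        d_boot = 0.5 * np.abs(boot - q).sum(axis=1)
        upper = np.quantile(d_boot, 1 - alpha)           # one-sided upper bound
        return upper < eps, tv_distance(p_hat, q), upper

    counts = np.array([210, 290, 500])                   # hypothetical observed frequencies
    q = np.array([0.2, 0.3, 0.5])                        # theoretical distribution
    print(equivalence_test(counts, q, eps=0.05))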
Wald's standard error (SE) of the difference of proportion can be estimated using:[9]: 378[10] SE⁡(p^i−p^j)^=(p^i+p^j)−(p^i−p^j)2n{\displaystyle {\widehat {\operatorname {SE} ({\hat {p}}_{i}-{\hat {p}}_{j})}}={\sqrt {\frac {({\hat {p}}_{i}+{\hat {p}}_{j})-({\hat {p}}_{i}-{\hat {p}}_{j})^{2}}{n}}}} For a100(1−α)%{\displaystyle 100(1-\alpha )\%}approximate confidence interval, themargin of errormay incorporate the appropriate quantile from thestandard normal distribution, as follows: (p^i−p^j)±zα/2⋅SE⁡(p^i−p^j)^{\displaystyle ({\hat {p}}_{i}-{\hat {p}}_{j})\pm z_{\alpha /2}\cdot {\widehat {\operatorname {SE} ({\hat {p}}_{i}-{\hat {p}}_{j})}}} As the sample size (n{\displaystyle n}) increases, the sample proportions will approximately follow amultivariate normal distribution, thanks to themultidimensional central limit theorem(and it could also be shown using theCramér–Wold theorem). Therefore, their difference will also be approximately normal. Also, these estimators areweakly consistentand plugging them into the SE estimator makes it also weakly consistent. Hence, thanks toSlutsky's theorem, thepivotal quantity(p^i−p^j)−(pi−pj)SE⁡(p^i−p^j)^{\displaystyle {\frac {({\hat {p}}_{i}-{\hat {p}}_{j})-(p_{i}-p_{j})}{\widehat {\operatorname {SE} ({\hat {p}}_{i}-{\hat {p}}_{j})}}}}approximately follows thestandard normal distribution. And from that, the aboveapproximate confidence intervalis directly derived. The SE can be constructed using the calculus ofthe variance of the difference of two random variables:SE⁡(p^i−p^j)^=p^i(1−p^i)n+p^j(1−p^j)n−2(−p^ip^jn)=1n(p^i+p^j−p^i2−p^j2+2p^ip^j)=(p^i+p^j)−(p^i−p^j)2n{\displaystyle {\begin{aligned}{\widehat {\operatorname {SE} ({\hat {p}}_{i}-{\hat {p}}_{j})}}&={\sqrt {{\frac {{\hat {p}}_{i}(1-{\hat {p}}_{i})}{n}}+{\frac {{\hat {p}}_{j}(1-{\hat {p}}_{j})}{n}}-2\left(-{\frac {{\hat {p}}_{i}{\hat {p}}_{j}}{n}}\right)}}\\&={\sqrt {{\frac {1}{n}}\left({\hat {p}}_{i}+{\hat {p}}_{j}-{\hat {p}}_{i}^{2}-{\hat {p}}_{j}^{2}+2{\hat {p}}_{i}{\hat {p}}_{j}\right)}}\\&={\sqrt {\frac {({\hat {p}}_{i}+{\hat {p}}_{j})-({\hat {p}}_{i}-{\hat {p}}_{j})^{2}}{n}}}\end{aligned}}} A modification which includes acontinuity correctionadds1n{\displaystyle {\frac {1}{n}}}to the margin of error as follows:[11]: 102–3 (p^i−p^j)±(zα/2⋅SE⁡(p^i−p^j)^+1n){\displaystyle ({\hat {p}}_{i}-{\hat {p}}_{j})\pm \left(z_{\alpha /2}\cdot {\widehat {\operatorname {SE} ({\hat {p}}_{i}-{\hat {p}}_{j})}}+{\frac {1}{n}}\right)} Another alternative is to rely on a Bayesian estimator usingJeffreys priorwhich leads to using adirichlet distribution, with all parameters being equal to 0.5, as a prior. The posterior will be the calculations from above, but after adding 1/2 to each of thekelements, leading to an overall increase of the sample size byk2{\displaystyle {\frac {k}{2}}}. 
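As a concrete illustration of the interval above, a minimal Python sketch assuming NumPy and SciPy are available; the function name wald_ci_diff and the counts are illustrative choices.

    import numpy as np
    from scipy import stats

    def wald_ci_diff(counts, i, j, alpha=0.05):
        """Wald CI for p_i - p_j from one multinomial sample of counts."""
        n = counts.sum()
        pi_hat, pj_hat = counts[i] / n, counts[j] / n
        diff = pi_hat - pj_hat
        se = np.sqrt(((pi_hat + pj_hat) - diff**2) / n)  # SE formula above
        z = stats.norm.ppf(1 - alpha / 2)                # standard normal quantile
        return diff - z * se, diff + z * se

    counts = np.array([18, 35, 47])
    print(wald_ci_diff(counts, i=0, j=1))                # CI for p_1 - p_2

The Jeffreys-prior adjustment described in the preceding paragraph (adding 1/2 to each of the k cells) is the basis of the wald+2 variant discussed next.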
This was originally developed for a multinomial distribution with four events, and is known aswald+2, for analyzing matched pairs data (see the next section for more details).[12] This leads to the following SE: SE⁡(p^i−p^j)^wald+k2=(p^i+p^j+1n)nn+k2−(p^i−p^j)2(nn+k2)2n+k2{\displaystyle {\widehat {\operatorname {SE} ({\hat {p}}_{i}-{\hat {p}}_{j})}}_{wald+{\frac {k}{2}}}={\sqrt {\frac {\left({\hat {p}}_{i}+{\hat {p}}_{j}+{\frac {1}{n}}\right){\frac {n}{n+{\frac {k}{2}}}}-\left({\hat {p}}_{i}-{\hat {p}}_{j}\right)^{2}\left({\frac {n}{n+{\frac {k}{2}}}}\right)^{2}}{n+{\frac {k}{2}}}}}} SE⁡(p^i−p^j)^wald+k2=(xi+1/2n+k2+xj+1/2n+k2)−(xi+1/2n+k2−xj+1/2n+k2)2n+k2=(xin+xjn+1n)nn+k2−(xin−xjn)2(nn+k2)2n+k2=(p^i+p^j+1n)nn+k2−(p^i−p^j)2(nn+k2)2n+k2{\displaystyle {\begin{aligned}{\widehat {\operatorname {SE} ({\hat {p}}_{i}-{\hat {p}}_{j})}}_{wald+{\frac {k}{2}}}&={\sqrt {\frac {\left({\frac {x_{i}+1/2}{n+{\frac {k}{2}}}}+{\frac {x_{j}+1/2}{n+{\frac {k}{2}}}}\right)-\left({\frac {x_{i}+1/2}{n+{\frac {k}{2}}}}-{\frac {x_{j}+1/2}{n+{\frac {k}{2}}}}\right)^{2}}{n+{\frac {k}{2}}}}}\\&={\sqrt {\frac {\left({\frac {x_{i}}{n}}+{\frac {x_{j}}{n}}+{\frac {1}{n}}\right){\frac {n}{n+{\frac {k}{2}}}}-\left({\frac {x_{i}}{n}}-{\frac {x_{j}}{n}}\right)^{2}\left({\frac {n}{n+{\frac {k}{2}}}}\right)^{2}}{n+{\frac {k}{2}}}}}\\&={\sqrt {\frac {\left({\hat {p}}_{i}+{\hat {p}}_{j}+{\frac {1}{n}}\right){\frac {n}{n+{\frac {k}{2}}}}-\left({\hat {p}}_{i}-{\hat {p}}_{j}\right)^{2}\left({\frac {n}{n+{\frac {k}{2}}}}\right)^{2}}{n+{\frac {k}{2}}}}}\end{aligned}}} Which can just be plugged into the original Wald formula as follows: (pi−pj)nn+k2±zα/2⋅SE⁡(p^i−p^j)^wald+k2{\displaystyle (p_{i}-p_{j}){\frac {n}{n+{\frac {k}{2}}}}\pm z_{\alpha /2}\cdot {\widehat {\operatorname {SE} ({\hat {p}}_{i}-{\hat {p}}_{j})}}_{wald+{\frac {k}{2}}}} For the case of matched-pairs binary data, a common task is to build the confidence interval of the difference of the proportion of the matched events. For example, we might have a test for some disease, and we may want to check the results of it for some population at two points in time (1 and 2), to check if there was a change in the proportion of the positives for the disease during that time. Such scenarios can be represented using a two-by-twocontingency tablewith the number of elements that had each of the combination of events. We can use smallffor sampling frequencies:f11,f10,f01,f00{\displaystyle f_{11},f_{10},f_{01},f_{00}}, and capitalFfor population frequencies:F11,F10,F01,F00{\displaystyle F_{11},F_{10},F_{01},F_{00}}. These four combinations could be modeled as coming from a multinomial distribution (with four potential outcomes). The sizes of the sample and population can benandNrespectively. And in such a case, there is an interest in building a confidence interval for the difference of proportions from the marginals of the following (sampled) contingency table: In this case, checking the difference in marginal proportions means we are interested in using the following definitions:p1∗=F1∗N=F11+F10N{\displaystyle p_{1*}={\frac {F_{1*}}{N}}={\frac {F_{11}+F_{10}}{N}}},p∗1=F∗1N=F11+F01N{\displaystyle p_{*1}={\frac {F_{*1}}{N}}={\frac {F_{11}+F_{01}}{N}}}. 
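The adjusted interval can be sketched the same way; the form below is algebraically equivalent to the SE formula above, rewritten in terms of the shrunken estimates (xi + 1/2)/(n + k/2) (again an illustration, not a library routine).

    import numpy as np
    from scipy import stats

    def wald_k2_ci_diff(counts, i, j, alpha=0.05):
        """'Wald + k/2' CI: add 1/2 to each of the k cells (Jeffreys-style prior),
        then apply the Wald formula to the shrunken estimates."""
        k = len(counts)
        n = counts.sum()
        n_adj = n + k / 2
        pi_t = (counts[i] + 0.5) / n_adj                 # equals p_i-hat * n/(n + k/2) shifted
        pj_t = (counts[j] + 0.5) / n_adj
        diff_t = pi_t - pj_t
        se = np.sqrt(((pi_t + pj_t) - diff_t**2) / n_adj)
        z = stats.norm.ppf(1 - alpha / 2)
        return diff_t - z * se, diff_t + z * se

    print(wald_k2_ci_diff(np.array([18, 35, 47]), i=0, j=1))

Returning to the matched-pairs table just set up, the difference of the marginal proportions is derived next.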
The difference we want to build confidence intervals for is: p∗1−p1∗=F11+F01N−F11+F10N=F01N−F10N=p01−p10{\displaystyle p_{*1}-p_{1*}={\frac {F_{11}+F_{01}}{N}}-{\frac {F_{11}+F_{10}}{N}}={\frac {F_{01}}{N}}-{\frac {F_{10}}{N}}=p_{01}-p_{10}} Hence, a confidence interval for the difference of the marginal positive proportions (p∗1−p1∗{\displaystyle p_{*1}-p_{1*}}) is the same as a confidence interval for the difference of the proportions from the secondary diagonal of the two-by-two contingency table (p01−p10{\displaystyle p_{01}-p_{10}}). Calculating a p-value for such a difference is known as McNemar's test. A confidence interval around it can be constructed using the methods described above for confidence intervals for the difference of two proportions. The Wald confidence intervals from the previous section can be applied to this setting, and appear in the literature using alternative notations. Specifically, the SE often presented is based on the contingency table frequencies instead of the sample proportions. For example, the Wald confidence intervals, provided above, can be written as:[11]: 102–3 SE⁡(p∗1−p1∗)^=SE⁡(p01−p10)^=n(f10+f01)−(f10−f01)2nn{\displaystyle {\widehat {\operatorname {SE} (p_{*1}-p_{1*})}}={\widehat {\operatorname {SE} (p_{01}-p_{10})}}={\frac {\sqrt {n(f_{10}+f_{01})-(f_{10}-f_{01})^{2}}}{n{\sqrt {n}}}}} Further research in the literature has identified several shortcomings in both the Wald and the Wald with continuity correction methods, and other methods have been proposed for practical application.[11] One such modification includes Agresti and Min's Wald+2 (similar to some of their other works[13]), in which each cell frequency has an extra 12{\displaystyle {\frac {1}{2}}}added to it.[12] This leads to the Wald+2 confidence intervals. In a Bayesian interpretation, this is like building the estimators taking as prior a Dirichlet distribution with all parameters being equal to 0.5 (which is, in fact, the Jeffreys prior). The +2 in the name wald+2 can now be taken to mean that, in the context of a two-by-two contingency table, which is a multinomial distribution with four possible events, adding 1/2 an observation to each of them translates to an overall addition of 2 observations (due to the prior). This leads to the following modified SE for the case of matched pairs data: SE⁡(p∗1−p1∗)^=(n+2)(f10+f01+1)−(f10−f01)2(n+2)(n+2){\displaystyle {\widehat {\operatorname {SE} (p_{*1}-p_{1*})}}={\frac {\sqrt {(n+2)(f_{10}+f_{01}+1)-(f_{10}-f_{01})^{2}}}{(n+2){\sqrt {(n+2)}}}}} which can be plugged into the original Wald formula as follows: (p∗1−p1∗)nn+2±zα/2⋅SE⁡(p^i−p^j)^wald+2{\displaystyle (p_{*1}-p_{1*}){\frac {n}{n+2}}\pm z_{\alpha /2}\cdot {\widehat {\operatorname {SE} ({\hat {p}}_{i}-{\hat {p}}_{j})}}_{wald+2}} Other modifications include Bonett and Price's Adjusted Wald, and Newcombe's Score. To sample from a multinomial distribution, first reorder the parameters p1,…,pk{\displaystyle p_{1},\ldots ,p_{k}}such that they are sorted in descending order (this is only to speed up computation and not strictly necessary). Now, for each trial, draw an auxiliary variable X from a uniform (0, 1) distribution. The resulting outcome is the component j=min{j′∈{1,…,k}:(∑i=1j′pi)−X≥0}.{\displaystyle j=\min \left\{j'\in \{1,\dots ,k\}\colon \left(\sum _{i=1}^{j'}p_{i}\right)-X\geq 0\right\}.} Then {Xj = 1, Xk = 0 for k ≠ j} is one observation from the multinomial distribution with p1,…,pk{\displaystyle p_{1},\ldots ,p_{k}}and n = 1. A sum of independent repetitions of this experiment is an observation from a multinomial distribution with n equal to the number of such repetitions.
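A minimal standard-library sketch of the single-trial sampling method just described (function names are illustrative):

    import random

    def sample_one(ps, rng):
        """One categorical draw via the auxiliary uniform X described above:
        return the first index j whose cumulative probability reaches X.
        Sorting ps in descending order first would merely speed this up."""
        x = rng.random()
        cum = 0.0
        for j, p in enumerate(ps):
            cum += p
            if cum >= x:
                return j
        return len(ps) - 1                 # guard against floating-point round-off

    def sample_multinomial(n, ps, seed=None):
        """Sum of n single-trial draws = one Multinomial(n, ps) observation."""
        rng = random.Random(seed)
        counts = [0] * len(ps)
        for _ in range(n):
            counts[sample_one(ps, rng)] += 1
        return counts

    print(sample_multinomial(10, [0.5, 0.3, 0.2], seed=7))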
Given the parametersp1,p2,…,pk{\displaystyle p_{1},p_{2},\ldots ,p_{k}}and a total for the samplen{\displaystyle n}such that∑i=1kXi=n{\displaystyle \sum _{i=1}^{k}X_{i}=n}, it is possible to sample sequentially for the number in an arbitrary stateXi{\displaystyle X_{i}}, by partitioning the state space intoi{\displaystyle i}and not-i{\displaystyle i}, conditioned on any prior samples already taken, repeatedly. Heuristically, each application of the binomial sample reduces the available number to sample from and the conditional probabilities are likewise updated to ensure logical consistency.[14]
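A hedged sketch of this sequential scheme, drawing each count from a conditional binomial; the naive Bernoulli-sum binomial sampler is an illustrative stand-in for a library routine.

    import random

    def binomial(n, p, rng):
        """Naive Binomial(n, p) draw as a sum of n Bernoulli trials."""
        return sum(rng.random() < p for _ in range(n))

    def sample_multinomial_sequential(n, ps, seed=None):
        """Partition the state space into i and not-i repeatedly:
        X_i | X_1..X_{i-1} ~ Binomial(n - sum(previous), p_i / remaining mass)."""
        rng = random.Random(seed)
        counts = []
        remaining_n, remaining_mass = n, 1.0
        for p in ps[:-1]:
            prob = p / remaining_mass if remaining_mass > 0 else 0.0
            x = binomial(remaining_n, prob, rng)
            counts.append(x)
            remaining_n -= x               # fewer trials left to allocate
            remaining_mass -= p            # renormalize the remaining categories
        counts.append(remaining_n)         # the last state receives what is left
        return counts

    print(sample_multinomial_sequential(10, [0.2, 0.3, 0.5], seed=42))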
https://en.wikipedia.org/wiki/Multinomial_distribution
Ambiguityoccurs when a single word or phrase may be interpreted in two or more ways. Aslawfrequently involves lengthy, complex texts, ambiguity is common. Thus, courts have evolved various doctrines for dealing with cases in which legal texts are ambiguous. In criminal law, therule of lenityholds that where a criminal statute is ambiguous, the meaning most favorable to the defendant—i.e., the one that imposes the lowest penalties—should be adopted.[1]In the US context, JusticeJohn Marshallstated the rule thus inUnited States v. Wiltberger: The rule that penal laws are to be construed strictly, is perhaps not much less old than construction itself. It is founded on the tenderness of the law for the rights of individuals; and on the plain principle that the power of punishment is vested in the legislative, not in the judicial department. It is the legislature, not the Court, which is to define a crime, and ordain its punishment.[2] Incontractlaw, thecontra proferentemrule holds that, depending on the circumstances, ambiguous terms in a contract may be construed in favor of the party with less bargaining power.[3] In Canada, courts have developed rules of construction to interpret ambiguities in treaties betweenIndigenous peoplesand theCrown.[4]In 1983, the Supreme Court of Canada held that "treaties and statutes relating to Indians should be liberally construed and doubtful expressions resolved in favour of the Indians."[5] Inproperty law, a distinction is drawn between patent ambiguity and latent ambiguity. The two forms of ambiguity differ in two respects: (1) what led to the existence of the ambiguity; and (2) the type of evidentiary basis that might be allowed in resolving it. Patent ambiguity is that ambiguity which isapparent on the faceof an instrument to any one perusing it, even if unacquainted with the circumstances of theparties.[6]In the case of a patent ambiguity,parol evidenceisadmissibleto explain only what has been written, not what the writer intended to write. For example, inSaunderson v Piper(1839),[7]where abill of exchangewas drawn in figures for £245 and in words for two hundred pounds, evidence that "and forty-five" had been omitted by mistake was rejected. But where it appears from the general context of the instrument what the parties really meant, the instrument will be construed as if there was no ambiguity, as inSayeandSele's case (1795),[8]where the name of the grantor had been omitted in the operative part of a grant, but, as it was clear from another part of the grant who he was, thedeedwas held to be valid.[9] Latent ambiguity is where the wording of an instrument is on the face of it clear and intelligible, but may, at the same time, apply equally to two different things or subject matters, as where a legacy is given "to my nephew, John," and thetestatoris shown to have two nephews of that name. A latent ambiguity may be explained by parol evidence: the ambiguity has been brought about by circumstances extraneous to the instrument, so the explanation must necessarily be sought in such circumstances.[9]
https://en.wikipedia.org/wiki/Ambiguity_(law)
RDF/XML is a syntax,[1] defined by the W3C, to express (i.e. serialize) an RDF graph as an XML document. RDF/XML is sometimes misleadingly called simply RDF because it was introduced among the other W3C specifications defining RDF and it was historically the first W3C standard RDF serialization format. RDF/XML is the primary exchange syntax for OWL 2, and must be supported by all OWL 2 tools.[2]
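For illustration, a small Python sketch using the third-party rdflib package (an assumption; rdflib is not mentioned in the text) to serialize a one-triple RDF graph in the RDF/XML syntax; recent rdflib versions return the serialization as a string.

    from rdflib import Graph, Literal, Namespace
    from rdflib.namespace import FOAF

    g = Graph()
    ex = Namespace("http://example.org/")
    g.bind("foaf", FOAF)

    # One triple: <http://example.org/alice> foaf:name "Alice" .
    g.add((ex.alice, FOAF.name, Literal("Alice")))

    # Serialize the graph as an RDF/XML document
    print(g.serialize(format="xml"))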
https://en.wikipedia.org/wiki/RDF/XML
In telecommunications, long-term evolution (LTE) is a standard for wireless broadband communication for cellular mobile devices and data terminals. It is considered to be a "transitional" 4G technology,[1] and is therefore also referred to as 3.95G as a step above 3G.[2] LTE is based on the 2G GSM/EDGE and 3G UMTS/HSPA standards. It improves on those standards' capacity and speed by using a different radio interface and core network improvements.[3][4] LTE is the upgrade path for carriers with both GSM/UMTS networks and CDMA2000 networks. LTE has been succeeded by LTE Advanced, which is officially defined as a "true" 4G technology[5] and is also named "LTE+". The standard is developed by the 3GPP (3rd Generation Partnership Project) and is specified in its Release 8 document series, with minor enhancements described in Release 9. LTE is also called 3.95G and has been marketed as 4G LTE and Advanced 4G;[citation needed] but the original version did not meet the technical criteria of a 4G wireless service, as specified in the 3GPP Release 8 and 9 document series for LTE Advanced. The requirements were set forth by the ITU-R organisation in the IMT Advanced specification; but, because of market pressure and the significant advances that WiMAX, Evolved High Speed Packet Access, and LTE bring to the original 3G technologies, ITU-R later decided that LTE and the aforementioned technologies can be called 4G technologies.[6] The LTE Advanced standard formally satisfies the ITU-R requirements for being considered IMT-Advanced.[7] To differentiate LTE Advanced and WiMAX-Advanced from current[when?] 4G technologies, ITU has defined the latter as "True 4G".[8][5] LTE stands for Long-Term Evolution[9] and is a registered trademark owned by ETSI (European Telecommunications Standards Institute) for the wireless data communications technology and development of the GSM/UMTS standards. However, other nations and companies do play an active role in the LTE project. The goal of LTE was to increase the capacity and speed of wireless data networks using new DSP (digital signal processing) techniques and modulations that were developed around the turn of the millennium. A further goal was the redesign and simplification of the network architecture to an IP-based system with significantly reduced transfer latency compared with the 3G architecture. The LTE wireless interface is incompatible with 2G and 3G networks, so it must be operated on a separate radio spectrum. The idea of LTE was first proposed in 1998, with the use of the COFDM radio access technique to replace CDMA and a study of its terrestrial use in the L band at 1428 MHz; it was then proposed as a standard in 2004 by Japan's NTT Docomo, with studies on the standard officially commencing in 2005.[10] In May 2007, the LTE/SAE Trial Initiative (LSTI) alliance was founded as a global collaboration between vendors and operators with the goal of verifying and promoting the new standard in order to ensure the global introduction of the technology as quickly as possible.[11][12] The LTE standard was finalized in December 2008, and the first publicly available LTE service was launched by TeliaSonera in Oslo and Stockholm on December 14, 2009, as a data connection with a USB modem.
The LTE services were launched by major North American carriers as well, with the Samsung SCH-r900 being the world's first LTE mobile phone starting on September 21, 2010,[13][14] and the Samsung Galaxy Indulge being the world's first LTE smartphone starting on February 10, 2011,[15][16] both offered by MetroPCS, and the HTC ThunderBolt offered by Verizon starting on March 17 being the second LTE smartphone to be sold commercially.[17][18] In Canada, Rogers Wireless was the first to launch an LTE network, on July 7, 2011, offering the Sierra Wireless AirCard 313U USB mobile broadband modem, known as the "LTE Rocket stick", followed closely by mobile devices from both HTC and Samsung.[19] Initially, CDMA operators planned to upgrade to rival standards called UMB and WiMAX, but major CDMA operators (such as Verizon, Sprint and MetroPCS in the United States, Bell and Telus in Canada, au by KDDI in Japan, SK Telecom in South Korea and China Telecom/China Unicom in China) announced instead that they intend to migrate to LTE. The next version of LTE is LTE Advanced, which was standardized in March 2011.[20] Services commenced in 2013.[21] An additional evolution, known as LTE Advanced Pro, was approved in 2015.[22] The LTE specification provides downlink peak rates of 300 Mbit/s, uplink peak rates of 75 Mbit/s and QoS provisions permitting a transfer latency of less than 5 ms in the radio access network. LTE has the ability to manage fast-moving mobiles and supports multi-cast and broadcast streams. LTE supports scalable carrier bandwidths, from 1.4 MHz to 20 MHz, and supports both frequency division duplexing (FDD) and time-division duplexing (TDD). The IP-based network architecture, called the Evolved Packet Core (EPC) and designed to replace the GPRS Core Network, supports seamless handovers for both voice and data to cell towers with older network technology such as GSM, UMTS and CDMA2000.[23] The simpler architecture results in lower operating costs (for example, each E-UTRA cell will support up to four times the data and voice capacity supported by HSPA[24]). Because LTE frequencies and bands differ from country to country, only multi-band phones can use LTE in all countries where it is supported. Most carriers supporting GSM or HSUPA networks can be expected to upgrade their networks to LTE at some stage. A complete list of commercial contracts can be found at:[61] The following is a list of the top 10 countries/territories by 4G LTE coverage as measured by OpenSignal.com in February/March 2019.[72][73] For the complete list of all the countries/territories, see list of countries by 4G LTE penetration. Long-Term Evolution Time-Division Duplex (LTE-TDD), also referred to as TDD LTE, is a 4G telecommunications technology and standard co-developed by an international coalition of companies, including China Mobile, Datang Telecom, Huawei, ZTE, Nokia Solutions and Networks, Qualcomm, Samsung, and ST-Ericsson. It is one of the two mobile data transmission technologies of the Long-Term Evolution (LTE) technology standard, the other being Long-Term Evolution Frequency-Division Duplex (LTE-FDD). While some companies refer to LTE-TDD as "TD-LTE" for familiarity with TD-SCDMA, there is no reference to that abbreviation anywhere in the 3GPP specifications.[74][75][76] There are two major differences between LTE-TDD and LTE-FDD: how data is uploaded and downloaded, and what frequency spectra the networks are deployed in.
While LTE-FDD uses paired frequencies to upload and download data,[77] LTE-TDD uses a single frequency, alternating between uploading and downloading data through time.[78][79] The ratio between uploads and downloads on an LTE-TDD network can be changed dynamically, depending on whether more data needs to be sent or received.[80] LTE-TDD and LTE-FDD also operate on different frequency bands,[81] with LTE-TDD working better at higher frequencies, and LTE-FDD working better at lower frequencies.[82] Frequencies used for LTE-TDD range from 1850 MHz to 3800 MHz, with several different bands being used.[83] The LTE-TDD spectrum is generally cheaper to access, and has less traffic.[81] Further, the bands for LTE-TDD overlap with those used for WiMAX, which can easily be upgraded to support LTE-TDD.[81] Despite the differences in how the two types of LTE handle data transmission, LTE-TDD and LTE-FDD share 90 percent of their core technology, making it possible for the same chipsets and networks to use both versions of LTE.[81][84] A number of companies produce dual-mode chips or mobile devices, including Samsung and Qualcomm,[85][86] while operators CMHK and Hi3G Access have developed dual-mode networks in Hong Kong and Sweden, respectively.[87] The creation of LTE-TDD involved a coalition of international companies that worked to develop and test the technology.[88] China Mobile was an early proponent of LTE-TDD,[81][89] along with other companies like Datang Telecom[88] and Huawei, which worked to deploy LTE-TDD networks, and later developed technology allowing LTE-TDD equipment to operate in white spaces—frequency spectra between broadcast TV stations.[75][90] Intel also participated in the development, setting up an LTE-TDD interoperability lab with Huawei in China,[91] as well as ST-Ericsson,[81] Nokia,[81] and Nokia Siemens (now Nokia Solutions and Networks),[75] which developed LTE-TDD base stations that increased capacity by 80 percent and coverage by 40 percent.[92] Qualcomm also participated, developing the world's first multi-mode chip, combining both LTE-TDD and LTE-FDD, along with HSPA and EV-DO.[86] Accelleran, a Belgian company, has also worked to build small cells for LTE-TDD networks.[93] Trials of LTE-TDD technology began as early as 2010, with Reliance Industries and Ericsson India conducting field tests of LTE-TDD in India, achieving 80 megabit-per-second download speeds and 20 megabit-per-second upload speeds.[94] By 2011, China Mobile began trials of the technology in six cities.[75] Although initially seen as a technology utilized by only a few countries, including China and India,[95] by 2011 international interest in LTE-TDD had expanded, especially in Asia, in part due to LTE-TDD's lower cost of deployment compared to LTE-FDD.[75] By the middle of that year, 26 networks around the world were conducting trials of the technology.[76] The Global TD-LTE Initiative (GTI) was also started in 2011, with founding partners China Mobile, Bharti Airtel, SoftBank Mobile, Vodafone, Clearwire, Aero2 and E-Plus.[96] In September 2011, Huawei announced it would partner with Polish mobile provider Aero2 to develop a combined LTE-TDD and LTE-FDD network in Poland,[97] and by April 2012, ZTE Corporation had worked to deploy trial or commercial LTE-TDD networks for 33 operators in 19 countries.[87] In late 2012, Qualcomm worked extensively to deploy a commercial LTE-TDD network in India, and partnered with Bharti Airtel and Huawei to develop the first multi-mode LTE-TDD smartphone for India.[86] In Japan, SoftBank Mobile launched LTE-TDD services in
February 2012 under the nameAdvanced eXtended Global Platform(AXGP), and marketed as SoftBank 4G (ja). The AXGP band was previously used forWillcom'sPHSservice, and after PHS was discontinued in 2010 the PHS band was re-purposed for AXGP service.[98][99] In the U.S., Clearwire planned to implement LTE-TDD, with chip-maker Qualcomm agreeing to support Clearwire's frequencies on its multi-mode LTE chipsets.[100]WithSprint'sacquisition of Clearwire in 2013,[77][101]the carrier began using these frequencies for LTE service on networks built bySamsung,Alcatel-Lucent, andNokia.[102][103] As of March 2013, 156 commercial 4G LTE networks existed, including 142 LTE-FDD networks and 14 LTE-TDD networks.[88]As of November 2013, the South Korean government planned to allow a fourth wireless carrier in 2014, which would provide LTE-TDD services,[79]and in December 2013, LTE-TDD licenses were granted to China's three mobile operators, allowing commercial deployment of 4G LTE services.[104] In January 2014, Nokia Solutions and Networks indicated that it had completed a series of tests ofvoice over LTE( VoLTE)calls on China Mobile's TD-LTE network.[105]The next month, Nokia Solutions and Networks and Sprint announced that they had demonstrated throughput speeds of 2.6 gigabits per second using a LTE-TDD network, surpassing the previous record of 1.6 gigabits per second.[106] Much of the LTE standard addresses the upgrading of 3G UMTS to what will eventually be4Gmobile communications technology. A large amount of the work is aimed at simplifying the architecture of the system, as it transitions from the existing UMTScircuit+packet switchingcombined network to an all-IP flat architecture system.E-UTRAis the air interface of LTE. Its main features are: The LTE standard supports onlypacket switchingwith its all-IP network. Voice calls in GSM, UMTS, and CDMA2000 arecircuit switched, so with the adoption of LTE, carriers will have to re-engineer their voice call network.[108]Four different approaches sprang up: One additional approach that is not initiated by operators is the usage ofover-the-top content(OTT) services, using applications likeSkypeandGoogle Talkto provide LTE voice service.[109] Most major backers of LTE preferred and promoted VoLTE from the beginning. The lack of software support in initial LTE devices, as well as core network devices, however, led to a number of carriers promotingVoLGA(Voice over LTE Generic Access) as an interim solution.[110]The idea was to use the same principles asGAN(Generic Access Network, also known as UMA or Unlicensed Mobile Access), which defines the protocols through which a mobile handset can perform voice calls over a customer's private Internet connection, usually over wireless LAN. VoLGA however never gained much support, because VoLTE (IMS) promises much more flexible services, albeit at the cost of having to upgrade the entire voice call infrastructure. VoLTE may require Single Radio Voice Call Continuity (SRVCC) to be able to smoothly perform a handover to a 2G or 3G network in case of poor LTE signal quality.[111] While the industry has standardized on VoLTE, early LTE deployments required carriers to introduce circuit-switched fallback as a stopgap measure. When placing or receiving a voice call on a non-VoLTE-enabled network or device, LTE handsets will fall back to old 2G or 3G networks for the duration of the call. 
To ensure compatibility, 3GPP demands at least AMR-NB codec (narrow band), but the recommended speech codec for VoLTE isAdaptive Multi-Rate Wideband, also known asHD Voice. This codec is mandated in 3GPP networks that support 16 kHz sampling.[112] Fraunhofer IIShas proposed and demonstrated "Full-HD Voice", an implementation of theAAC-ELD(Advanced Audio Coding – Enhanced Low Delay) codec for LTE handsets.[113]Where previous cell phone voice codecs only supported frequencies up to 3.5 kHz and upcomingwideband audioservices branded asHD Voiceup to 7 kHz, Full-HD Voice supports the entire bandwidth range from 20 Hz to 20 kHz. For end-to-end Full-HD Voice calls to succeed, however, both the caller's and recipient's handsets, as well as networks, have to support the feature.[114] The LTE standard covers a range of many different bands, each of which is designated by both a frequency and a band number: As a result, phones from one country may not work in other countries. Users will need a multi-band capable phone for roaming internationally. According to theEuropean Telecommunications Standards Institute's (ETSI)intellectual propertyrights (IPR) database, about 50 companies have declared, as of March 2012, holdingessential patentscovering the LTE standard.[121]The ETSI has made no investigation on the correctness of the declarations however,[121]so that "any analysis of essential LTE patents should take into account more than ETSI declarations."[122]Independent studies have found that about 3.3 to 5 percent of all revenues from handset manufacturers are spent on standard-essential patents. This is less than the combined published rates, due to reduced-rate licensing agreements, such as cross-licensing.[123][124][125]
https://en.wikipedia.org/wiki/LTE_(telecommunication)
Aninformation infrastructureis defined by Ole Hanseth (2002) as "a shared, evolving, open, standardized, and heterogeneous installed base"[1]and by Pironti (2006) as all of the people, processes, procedures, tools, facilities, and technology which support the creation, use, transport, storage, and destruction of information.[2] The notion of information infrastructures, introduced in the 1990s and refined during the following decade, has proven quite fruitful to theinformation systems(IS) field. It changed the perspective from organizations to networks and from systems to infrastructure, allowing for a global and emergent perspective on information systems. Information infrastructure is a technical structure of an organizational form, an analytical perspective or asemantic network. The concept of information infrastructure (II) was introduced in the early 1990s, first as a political initiative (Gore, 1993 & Bangemann, 1994), later as a more specific concept in IS research. For the IS research community, an important inspiration was Hughes' (1983) accounts of large technical systems, analyzed as socio-technical power structures (Bygstad, 2008).[3]Information infrastructure are typically different from the previous generations of "large technological system" because these digital sociotechnical systems are considered generative, meaning they allow new users to connect with or even appropriate the system.[4] Information infrastructure, as a theory, has been used to frame a number of extensive case studies (Star and Ruhleder 1996; Ciborra 2000; Hanseth and Ciborra 2007), and in particular to develop an alternative approach to IS design: "Infrastructures should rather be built by establishing working local solutions supporting local practices which subsequently are linked together rather than by defining universal standards and subsequently implementing them" (Ciborra and Hanseth 1998). It has later been developed into a full design theory, focusing on the growth of an installed base (Hanseth and Lyytinen 2008). Information infrastructures include the Internet, health systems and corporate systems.[5]It is also consistent to include innovations such asFacebook,LinkedInandMySpaceas excellent examples (Bygstad, 2008). Bowker has described several key terms and concepts that are enormously helpful for analyzing information infrastructure: imbrication, bootstrapping, figure/ground, and a short discussion of infrastructural inversion. "Imbrication" is an analytic concept that helps to ask questions about historical data. "Bootstrapping" is the idea that infrastructure must already exist in order to exist (2011). "Technological and non-technological elements that are linked" (Hanseth and Monteiro 1996). "Information infrastructures can, as formative contexts, shape not only the work routines, but also the ways people look at practices, consider them 'natural' and give them their overarching character of necessity. Infrastructure becomes an essential factor shaping the taken-for-grantedness of organizational practices" (Ciborra and Hanseth 1998). "The technological and human components, networks, systems, and processes that contribute to the functioning of the health information system" (Braa et al. 2007). "The set of organizational practices, technical infrastructure and social norms that collectively provide for the smooth operation of scientific work at a distance (Edwards et al. 2007). 
"A shared, evolving, heterogeneous installed base of IT capabilities developed on open and standardized interfaces" (Hanseth and Lyytinen 2008). According to Star and Ruhleder, there are 8 dimensions of information infrastructures. Presidential Chair and Professor of Information Studies at the University of California, Los Angeles, Christine L. Borgman argues that information infrastructures, like all infrastructures, are "subject to public policy".[7] In the United States, public policy defines information infrastructures as the "physical and cyber-based systems essential to the minimum operations of the economy and government" and connected by information technologies.[7] Borgman says governments, businesses, communities, and individuals can work together to create a global information infrastructure which links "the world's telecommunication and computer networks together" and would enable the transmission of "every conceivable information and communication application."[7] Currently, the Internet is the default global information infrastructure.[8] In the Asia-Pacific region: the Asia-Pacific Economic Cooperation Telecommunications and Information Working Group (TEL) program for information and communications infrastructure;[9] the Association of South East Asian Nations e-ASEAN Framework Agreement of 2000.[9] In the United States: the National Information Infrastructure Act of 1993 and the National Information Infrastructure (NII). In Canada, the National Research Council established CA*net in 1989 and the network connecting "all provincial nodes" was operational in 1990.[10] The Canadian Network for the Advancement of Research, Industry and Education (CANARIE) was established in 1992, and CA*net was upgraded to a T1 connection in 1993 and T3 in 1995.[10] By 2000, "the commercial basis for Canada's information infrastructure" was established, and the government ended its role in the project.[10] In 1994, the European Union proposed the European Information Infrastructure.[7] The European Information Infrastructure has evolved further thanks to the Martin Bangemann report and the projects eEurope 2003+, eEurope 2005 and the i2010 initiative.[11] In 1995, American Vice President Al Gore asked USAID to help improve Africa's connection to the global information infrastructure.[12] The USAID Leland Initiative (LI) was designed from June to September 1995, and implemented on 29 September 1995.[12] The Initiative was "a five-year $15 million US Government effort to support sustainable development" by bringing "full Internet connectivity" to approximately 20 African nations.[13] The initiative had three strategic objectives:
https://en.wikipedia.org/wiki/Information_infrastructure
Attention is a machine learning method that determines the importance of each component in a sequence relative to the other components in that sequence. In natural language processing, importance is represented by "soft" weights assigned to each word in a sentence. More generally, attention encodes vectors called token embeddings across a fixed-width sequence that can range from tens to millions of tokens in size. Unlike "hard" weights, which are computed during the backwards training pass, "soft" weights exist only in the forward pass and therefore change with every step of the input. Earlier designs implemented the attention mechanism in a serial recurrent neural network (RNN) language translation system, but a more recent design, namely the transformer, removed the slower sequential RNN and relied more heavily on the faster parallel attention scheme. Inspired by ideas about attention in humans, the attention mechanism was developed to address the weaknesses of leveraging information from the hidden layers of recurrent neural networks. Recurrent neural networks favor more recent information contained in words at the end of a sentence, while information earlier in the sentence tends to be attenuated. Attention allows a token equal access to any part of a sentence directly, rather than only through the previous state. Academic reviews of the history of the attention mechanism are provided in Niu et al.[1] and Soydaner.[2] [Figure 1: seq2seq with RNN + attention.[13] An attention mechanism was added onto the RNN encoder-decoder architecture to improve language translation of long sentences. See the Overview section.] The modern era of machine attention was revitalized by grafting an attention mechanism (Fig. 1, orange) to an encoder-decoder. Figure 2 shows the internal step-by-step operation of the attention block (A) in Fig. 1. This attention scheme has been compared to the Query-Key analogy of relational databases. That comparison suggests an asymmetric role for the Query and Key vectors, where one item of interest (the Query vector "that") is matched against all possible items (the Key vectors of each word in the sentence). However, both self and cross attention's parallel calculations match all tokens of the K matrix with all tokens of the Q matrix; therefore the roles of these vectors are symmetric. Possibly because the simplistic database analogy is flawed, much effort has gone into understanding attention mechanisms further by studying their roles in focused settings, such as in-context learning,[20] masked language tasks,[21] stripped-down transformers,[22] bigram statistics,[23] N-gram statistics,[24] pairwise convolutions,[25] and arithmetic factoring.[26] In translating between languages, alignment is the process of matching words from the source sentence to words of the translated sentence. Networks that perform verbatim translation without regard to word order would show the highest scores along the (dominant) diagonal of the matrix. The off-diagonal dominance shows that the attention mechanism is more nuanced. Consider an example of translating I love you to French. On the first pass through the decoder, 94% of the attention weight is on the first English word I, so the network offers the word je. On the second pass of the decoder, 88% of the attention weight is on the third English word you, so it offers t'. On the last pass, 95% of the attention weight is on the second English word love, so it offers aime. In the I love you example, the second word love is aligned with the third word aime.
Stacking soft row vectors together forje,t', andaimeyields analignment matrix: Sometimes, alignment can be multiple-to-multiple. For example, the English phraselook it upcorresponds tocherchez-le. Thus, "soft" attention weights work better than "hard" attention weights (setting one attention weight to 1, and the others to 0), as we would like the model to make a context vector consisting of a weighted sum of the hidden vectors, rather than "the best one", as there may not be a best hidden vector. Many variants of attention implement soft weights, such as Forconvolutional neural networks, attention mechanisms can be distinguished by the dimension on which they operate, namely: spatial attention,[30]channel attention,[31]or combinations.[32][33] These variants recombine the encoder-side inputs to redistribute those effects to each target output. Often, a correlation-style matrix of dot products provides the re-weighting coefficients. In the figures below, W is the matrix of context attention weights, similar to the formula in Core Calculations section above. The size of the attention matrix is proportional to the square of the number of input tokens. Therefore, when the input is long, calculating the attention matrix requires a lot ofGPUmemory. Flash attention is an implementation that reduces the memory needs and increases efficiency without sacrificing accuracy. It achieves this by partitioning the attention computation into smaller blocks that fit into the GPU's faster on-chip memory, reducing the need to store large intermediate matrices and thus lowering memory usage while increasing computational efficiency.[38] Flex Attention[39]is an attention kernel developed by Meta that allows users to modify attention scores prior tosoftmaxand dynamically chooses the optimal attention algorithm. The major breakthrough came with self-attention, where each element in the input sequence attends to all others, enabling the model to capture global dependencies. This idea was central to the Transformer architecture, which replaced recurrence entirely with attention mechanisms. As a result, Transformers became the foundation for models like BERT, GPT, and T5 (Vaswani et al., 2017). Attention is widely used in natural language processing, computer vision, and speech recognition. In NLP, it improves context understanding in tasks like question answering and summarization. In vision, visual attention helps models focus on relevant image regions, enhancing object detection and image captioning. For matrices:Q∈Rm×dk,K∈Rn×dk{\displaystyle \mathbf {Q} \in \mathbb {R} ^{m\times d_{k}},\mathbf {K} \in \mathbb {R} ^{n\times d_{k}}}andV∈Rn×dv{\displaystyle \mathbf {V} \in \mathbb {R} ^{n\times d_{v}}}, the scaled dot-product, orQKV attentionis defined as:Attention(Q,K,V)=softmax(QKTdk)V∈Rm×dv{\displaystyle {\text{Attention}}(\mathbf {Q} ,\mathbf {K} ,\mathbf {V} )={\text{softmax}}\left({\frac {\mathbf {Q} \mathbf {K} ^{T}}{\sqrt {d_{k}}}}\right)\mathbf {V} \in \mathbb {R} ^{m\times d_{v}}}whereT{\displaystyle {}^{T}}denotestransposeand thesoftmax functionis applied independently to every row of its argument. The matrixQ{\displaystyle \mathbf {Q} }containsm{\displaystyle m}queries, while matricesK,V{\displaystyle \mathbf {K} ,\mathbf {V} }jointly contain anunorderedset ofn{\displaystyle n}key-value pairs. 
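A minimal NumPy sketch of the scaled dot-product (QKV) attention just defined; shapes follow the definition (Q is m×d_k, K is n×d_k, V is n×d_v), and the row-wise softmax is written out explicitly. The dimensions are arbitrary illustration values.

    import numpy as np

    def softmax(z):
        """Row-wise softmax, numerically stabilized."""
        z = z - z.max(axis=-1, keepdims=True)
        e = np.exp(z)
        return e / e.sum(axis=-1, keepdims=True)

    def attention(Q, K, V):
        """Scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V."""
        d_k = Q.shape[-1]
        scores = Q @ K.T / np.sqrt(d_k)      # (m, n) similarity matrix
        return softmax(scores) @ V           # (m, d_v) output

    rng = np.random.default_rng(0)
    m, n, d_k, d_v = 4, 6, 8, 5
    Q = rng.normal(size=(m, d_k))
    K = rng.normal(size=(n, d_k))
    V = rng.normal(size=(n, d_v))
    print(attention(Q, K, V).shape)          # (4, 5)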
Value vectors in matrixV{\displaystyle \mathbf {V} }are weighted using the weights resulting from the softmax operation, so that the rows of them{\displaystyle m}-by-dv{\displaystyle d_{v}}output matrix are confined to theconvex hullof the points inRdv{\displaystyle \mathbb {R} ^{d_{v}}}given by the rows ofV{\displaystyle \mathbf {V} }. To understand thepermutation invarianceandpermutation equivarianceproperties of QKV attention,[40]letA∈Rm×m{\displaystyle \mathbf {A} \in \mathbb {R} ^{m\times m}}andB∈Rn×n{\displaystyle \mathbf {B} \in \mathbb {R} ^{n\times n}}bepermutation matrices; andD∈Rm×n{\displaystyle \mathbf {D} \in \mathbb {R} ^{m\times n}}an arbitrary matrix. The softmax function ispermutation equivariantin the sense that: By noting that the transpose of a permutation matrix is also its inverse, it follows that: which shows that QKV attention isequivariantwith respect to re-ordering the queries (rows ofQ{\displaystyle \mathbf {Q} }); andinvariantto re-ordering of the key-value pairs inK,V{\displaystyle \mathbf {K} ,\mathbf {V} }. These properties are inherited when applying linear transforms to the inputs and outputs of QKV attention blocks. For example, a simpleself-attentionfunction defined as: is permutation equivariant with respect to re-ordering the rows of the input matrixX{\displaystyle X}in a non-trivial way, because every row of the output is a function of all the rows of the input. Similar properties hold formulti-head attention, which is defined below. When QKV attention is used as a building block for an autoregressive decoder, and when at training time all input and output matrices haven{\displaystyle n}rows, amasked attentionvariant is used:Attention(Q,K,V)=softmax(QKTdk+M)V{\displaystyle {\text{Attention}}(\mathbf {Q} ,\mathbf {K} ,\mathbf {V} )={\text{softmax}}\left({\frac {\mathbf {Q} \mathbf {K} ^{T}}{\sqrt {d_{k}}}}+\mathbf {M} \right)\mathbf {V} }where the mask,M∈Rn×n{\displaystyle \mathbf {M} \in \mathbb {R} ^{n\times n}}is astrictly upper triangular matrix, with zeros on and below the diagonal and−∞{\displaystyle -\infty }in every element above the diagonal. The softmax output, also inRn×n{\displaystyle \mathbb {R} ^{n\times n}}is thenlower triangular, with zeros in all elements above the diagonal. The masking ensures that for all1≤i<j≤n{\displaystyle 1\leq i<j\leq n}, rowi{\displaystyle i}of the attention output is independent of rowj{\displaystyle j}of any of the three input matrices. The permutation invariance and equivariance properties of standard QKV attention do not hold for the masked variant. Multi-head attentionMultiHead(Q,K,V)=Concat(head1,...,headh)WO{\displaystyle {\text{MultiHead}}(\mathbf {Q} ,\mathbf {K} ,\mathbf {V} )={\text{Concat}}({\text{head}}_{1},...,{\text{head}}_{h})\mathbf {W} ^{O}}where each head is computed with QKV attention as:headi=Attention(QWiQ,KWiK,VWiV){\displaystyle {\text{head}}_{i}={\text{Attention}}(\mathbf {Q} \mathbf {W} _{i}^{Q},\mathbf {K} \mathbf {W} _{i}^{K},\mathbf {V} \mathbf {W} _{i}^{V})}andWiQ,WiK,WiV{\displaystyle \mathbf {W} _{i}^{Q},\mathbf {W} _{i}^{K},\mathbf {W} _{i}^{V}}, andWO{\displaystyle \mathbf {W} ^{O}}are parameter matrices. The permutation properties of (standard, unmasked) QKV attention apply here also. For permutation matrices,A,B{\displaystyle \mathbf {A} ,\mathbf {B} }: from which we also see thatmulti-head self-attention: is equivariant with respect to re-ordering of the rows of input matrixX{\displaystyle X}. 
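Continuing the sketch above, a hedged illustration of the masked variant (the strictly upper-triangular −∞ mask M) and of multi-head attention with per-head projection matrices; all sizes are illustrative.

    import numpy as np

    def softmax(z):
        z = z - z.max(axis=-1, keepdims=True)
        e = np.exp(z)
        return e / e.sum(axis=-1, keepdims=True)

    def attention(Q, K, V, masked=False):
        """QKV attention; if masked, add the strictly upper-triangular -inf
        mask M so row i ignores key-value pairs j > i (requires m == n)."""
        d_k = Q.shape[-1]
        scores = Q @ K.T / np.sqrt(d_k)
        if masked:
            scores = scores + np.triu(np.full(scores.shape, -np.inf), k=1)
        return softmax(scores) @ V

    def multi_head(Q, K, V, WQ, WK, WV, WO):
        """MultiHead(Q,K,V) = Concat(head_1, ..., head_h) W^O with
        head_i = Attention(Q W_i^Q, K W_i^K, V W_i^V)."""
        heads = [attention(Q @ wq, K @ wk, V @ wv)
                 for wq, wk, wv in zip(WQ, WK, WV)]
        return np.concatenate(heads, axis=-1) @ WO

    rng = np.random.default_rng(1)
    n, d_model, h, d_head = 5, 16, 4, 8
    X = rng.normal(size=(n, d_model))
    WQ, WK, WV = (rng.normal(size=(h, d_model, d_head)) for _ in range(3))
    WO = rng.normal(size=(h * d_head, d_model))
    print(multi_head(X, X, X, WQ, WK, WV, WO).shape)     # (5, 16): multi-head self-attention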
Attention(Q,K,V)=softmax(tanh⁡(WQQ+WKK)V){\displaystyle {\text{Attention}}(\mathbf {Q} ,\mathbf {K} ,\mathbf {V} )={\text{softmax}}(\tanh(\mathbf {W} _{Q}\mathbf {Q} +\mathbf {W} _{K}\mathbf {K} )\mathbf {V} )}where WQ{\displaystyle \mathbf {W} _{Q}}and WK{\displaystyle \mathbf {W} _{K}}are learnable weight matrices.[13] Attention(Q,K,V)=softmax(QWKT)V{\displaystyle {\text{Attention}}(\mathbf {Q} ,\mathbf {K} ,\mathbf {V} )={\text{softmax}}(\mathbf {Q} \mathbf {W} \mathbf {K} ^{T})\mathbf {V} }where W{\displaystyle \mathbf {W} }is a learnable weight matrix.[27] Self-attention is essentially the same as cross-attention, except that query, key, and value vectors all come from the same model. Both the encoder and the decoder can use self-attention, but with subtle differences. For encoder self-attention, we can start with a simple encoder without self-attention, such as an "embedding layer", which simply converts each input word into a vector by a fixed lookup table. This gives a sequence of hidden vectors h0,h1,…{\displaystyle h_{0},h_{1},\dots }. These can then be applied to a dot-product attention mechanism, to obtain h0′=Attention(h0WQ,HWK,HWV)h1′=Attention(h1WQ,HWK,HWV)⋯{\displaystyle {\begin{aligned}h_{0}'&=\mathrm {Attention} (h_{0}W^{Q},HW^{K},HW^{V})\\h_{1}'&=\mathrm {Attention} (h_{1}W^{Q},HW^{K},HW^{V})\\&\cdots \end{aligned}}}or more succinctly, H′=Attention(HWQ,HWK,HWV){\displaystyle H'=\mathrm {Attention} (HW^{Q},HW^{K},HW^{V})}. This can be applied repeatedly, to obtain a multilayered encoder. This is the "encoder self-attention", sometimes called the "all-to-all attention", as the vector at every position can attend to every other. For decoder self-attention, all-to-all attention is inappropriate, because during the autoregressive decoding process, the decoder cannot attend to future outputs that have yet to be decoded. This can be solved by forcing the attention weights wij=0{\displaystyle w_{ij}=0}for all i<j{\displaystyle i<j}, called "causal masking". This attention mechanism is the "causally masked self-attention".
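A short NumPy sketch of one self-attention layer as just described, computing H' = Attention(H W^Q, H W^K, H W^V) over a sequence of hidden vectors, with optional causal masking for the decoder case; weights and sizes are random illustration values.

    import numpy as np

    def softmax(z):
        z = z - z.max(axis=-1, keepdims=True)
        e = np.exp(z)
        return e / e.sum(axis=-1, keepdims=True)

    def self_attention_layer(H, WQ, WK, WV, causal=False):
        """One self-attention layer: H' = Attention(H WQ, H WK, H WV)."""
        d_k = WQ.shape[-1]
        scores = (H @ WQ) @ (H @ WK).T / np.sqrt(d_k)
        if causal:                           # force w_ij = 0 for j > i ("causal masking")
            scores = scores + np.triu(np.full(scores.shape, -np.inf), k=1)
        return softmax(scores) @ (H @ WV)

    rng = np.random.default_rng(3)
    n_tokens, d = 7, 12
    H = rng.normal(size=(n_tokens, d))       # e.g. output of a fixed embedding lookup
    WQ, WK, WV = (rng.normal(size=(d, d)) for _ in range(3))

    H_enc = self_attention_layer(H, WQ, WK, WV)               # encoder: all-to-all
    H_dec = self_attention_layer(H, WQ, WK, WV, causal=True)  # decoder-style, masked
    print(H_enc.shape, H_dec.shape)          # (7, 12) (7, 12)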
https://en.wikipedia.org/wiki/Attention_(machine_learning)
Soramimi (空耳, "thought to have heard", or "pretending to have not heard"[1][2]) is a Japanese word that, in the context of contemporary Japanese internet meme culture and its related slang, is commonly used to refer to humorous homophonic reinterpretation: deliberately interpreting words as other similar-sounding words for comedy (similar to a mondegreen, but done deliberately). The word is more commonly used for its original, literal meaning.

The slang usage is derived from the long-running "Soramimi Hour" segment on Japanese comedian Tamori's TV program Tamori Club. Tamori is one of the "big three" television comedians in Japan and is very influential.[3] The segment, in which he and his co-host watch mini-skits based on submissions from fans, began in 1992.[4]

In modern Japanese internet culture, soramimi also includes videos subtitled with humorous misinterpretations, and text transcripts that do the same. Unlike homophonic translation, soramimi can be contained within a single language. An example of soramimi humor confined to Japanese can be seen in the song Kaidoku Funō by the rock band Jinn, in which the lyrics "tōkankaku, hito no naka de" ("feeling of distance, amongst people"), which are considered hard to make out by Japanese listeners, are intentionally misinterpreted as "gōkan da, futon no naka de" ("it's rape, in a futon") for comedic effect.[5][6]

Soramimi applies to dialogue as well as song lyrics. For example, in the 2004 film Downfall, when Adolf Hitler says "und betrogen worden", it is misrepresented as "oppai purun purun" ("titty boing boing").[7][8][9]

Soramimi humor was a staple of Japanese message board Flash animation culture from the late 1990s to the mid-2000s. It later became very popular on Niconico, a Japanese video-sharing website in which comments are overlaid directly onto the video, synced to specific playback times, allowing soramimi subtitles to be easily added to any video.[10] One such example is the Moldovan band O-Zone's song "Dragostea Din Tei". The refrain of the original song (in Romanian) is: A soramimi version, from the Japanese Flash animation Maiyahi, translates these words as:[11][12][a]
https://en.wikipedia.org/wiki/Soramimi
CONFIG.SYS is the primary configuration file for the DOS and OS/2 operating systems. It is a special ASCII text file that contains user-accessible setup or configuration directives evaluated by the operating system's DOS BIOS (typically residing in IBMBIO.COM or IO.SYS) during boot. CONFIG.SYS was introduced with DOS 2.0.[nb 1]

The directives in this file configure DOS for use with devices and applications in the system. The CONFIG.SYS directives also set up the memory managers in the system. After processing the CONFIG.SYS file, DOS proceeds to load and execute the command shell specified in the SHELL line of CONFIG.SYS, or COMMAND.COM if there is no such line. The command shell in turn is responsible for processing the AUTOEXEC.BAT file.

CONFIG.SYS is composed mostly of name=value directives, which look like variable assignments. In fact, these either define some tunable parameters, often resulting in reservation of memory, or load files, mostly device drivers and terminate-and-stay-resident programs (TSRs), into memory.

In DOS, CONFIG.SYS is located in the root directory of the drive from which the system was booted. The filename is also used by Disk Control Program (DCP), an MS-DOS derivative by the former East German VEB Robotron.[1]

Some versions of DOS probe for alternative filenames that take precedence over the default CONFIG.SYS filename if they exist. While older versions of Concurrent DOS 3.2 to 4.1 did not support CONFIG.SYS files at all, later versions of Concurrent DOS 386 and Concurrent DOS XM, as well as Multiuser DOS, System Manager, and REAL/32, will probe for CCONFIG.SYS (with "C" derived from "Concurrent") instead of CONFIG.SYS. Some versions of Multiuser DOS use a filename of CCONFIG.INI instead,[2][3] whereas REAL/32 is known to look for MCONFIG.SYS. These operating systems support many additional and different configuration settings (like INIT_INSTALL) not known under MS-DOS/PC DOS, but those are stored in the binary repository named CCONFIG.BIN rather than in CCONFIG.INI.[2][3] Both files are typically modified only through a configuration utility named CONFIG.EXE.[2][3]

Under DR DOS 3.31, PalmDOS 1.0, Novell DOS 7, OpenDOS 7.01, and DR-DOS 7.02 and higher, a file named DCONFIG.SYS (with "D" derived from "DR DOS"), if present, will take precedence over CONFIG.SYS.[4][5][6][7] Since DR DOS 6.0 this was used in conjunction with disk compression software, where the original boot drive C: would become drive D: after loading the compression driver (and the "D" in the file name came in handy as well), but it is commonly used to help maintain multiple configuration files in multi-boot scenarios.
In addition to this, OpenDOS 7.01 and DR-OpenDOS 7.02 will look for a file named ODCONFIG.SYS,[8][9][6] whereas some issues of DR-DOS 7.02 and higher will instead also look for DRCONFIG.SYS.[6] Further, under DR DOS 6.0 and higher, the SYS /DR:ext command can be used to change the default file extensions.[8][10][7] For example, with SYS /L /DR:703 the written Volume Boot Record would look for a renamed and modified IBMBIO.703 system file (instead of the default IBMBIO.COM), and the IBMBIO.703 would look for IBMDOS.703 and [D]CONFIG.703 (instead of IBMDOS.COM and [D]CONFIG.SYS), so that multiple parallel sets of files can coexist in the same root directory and be selected via a boot-loader like LOADER, supplied with Multiuser DOS and DR-DOS 7.02/7.03.[4] The SHELL directive is enhanced to provide means to specify alternative AUTOEXEC.BAT files via /P[:filename.ext], and in this specific scenario COMMAND.COM will accept file extensions other than ".BAT" as well (both features are also supported by 4DOS).[11] Under DR DOS 6.0 and higher, the CONFIG.SYS directive CHAIN=filespec can be used to continue processing in the named file, which does not necessarily need to reside in the root directory of the boot drive.[4][6] DR-DOS 7.02 and higher optionally support an additional parameter, as in CHAIN=filespec,label, to jump to a specific :label in the given file.[8][9][6] DR-DOS 7.03 and higher support a new SYS /A parameter in order to copy the corresponding CONFIG.SYS and AUTOEXEC.BAT files along with the system files.[7]

FreeDOS implements a similar feature with its FDCONFIG.SYS configuration file. RxDOS 7.24 and higher use RXCONFIG.SYS instead.[12] PTS-DOS uses CONFIG.PTS.

Both CONFIG.SYS and AUTOEXEC.BAT can be found in the root folder of Windows 95 and Windows 98 boot drives, as these systems are based on DOS. Typically, these files are left empty. Windows Me does not even parse the CONFIG.SYS file during the Windows boot process,[13] loading environment variables from the Windows Registry instead.

Under FlexOS, CONFIG.SYS is a binary file defining the resource managers and device drivers loaded.

A representative CONFIG.SYS for MS-DOS 5 is sketched below. The system can still boot if this file is missing or corrupted. However, this file, along with AUTOEXEC.BAT, is essential for the complete bootup process to occur with the DOS operating system. These files contain information that is used to customize the operating system for personal use. They also contain the requirements of different software application packages. A DOS system would require troubleshooting if either of these files became damaged or corrupted.

If CONFIG.SYS does not contain a SHELL directive (or the file is corrupt or missing), DOS typically searches for COMMAND.COM in the root directory of the boot drive.[19] If this is not found, versions of DOS before 6.0 will not start up. MS-DOS 6.0/PC DOS 6.1 and Novell DOS 7 and higher will instead display a prompt to enter the path and filename of a command processor. This recovery prompt is also displayed when the primary command processor is aborted due to faults or if it is exited deliberately.[4] (In the case of COMMAND.COM, the internal EXIT command is disabled only when the shell was started with /P.) This also provides limited means to replace the shell at runtime without having to reboot the system.
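As a concrete illustration, a representative CONFIG.SYS for MS-DOS 5 might look like the following sketch. The directives shown (DEVICE, DOS, FILES, BUFFERS, STACKS, SHELL) are standard, but the specific paths, values, and driver choices are illustrative assumptions rather than required settings:

```
DEVICE=C:\DOS\HIMEM.SYS
DEVICE=C:\DOS\EMM386.EXE NOEMS
DOS=HIGH,UMB
FILES=30
BUFFERS=20
STACKS=9,256
SHELL=C:\DOS\COMMAND.COM C:\DOS\ /P
```

Here HIMEM.SYS and EMM386.EXE set up the memory managers (HIMEM.SYS must be loaded first), DOS=HIGH,UMB loads DOS into the high memory area and enables upper memory blocks, and the SHELL line names the command processor that will later run AUTOEXEC.BAT.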
Since the MS-DOS 7.0 and higher COMMAND.COM executable is incompatible with DR-DOS,[21] but typically resides in the root of drive C: in dual-boot scenarios with DR-DOS, DR-DOS 7.02 and higher no longer allow SHELL directives to be bypassed in the (Ctrl+)F5/F7/F8 "skip"/"trace"/"step" modes.[8][19][21] (Some later issues added (Ctrl+)F6 to reinvoke the former F5 "skip" behaviour in order to allow recovery from problems with invalid SHELL arguments as well.[19]) Also, if no SHELL directive could be found when skipping CONFIG.SYS processing via (Ctrl+)F5 (and also with (Ctrl+)F7/F8, when the default file extension has been changed with SYS /DR:ext),[7] the user is prompted to enter a valid shell file name before trying to load COMMAND.COM from the root.[8][21] Pressing ↵ Enter without specifying a file will assume the former default.[8]

Depending on the version, the size of the CONFIG.SYS file is limited to a few kilobytes under MS-DOS/PC DOS (up to 64 KB in the most recent versions), whereas the file's size is unlimited under DR-DOS.[4][19] This is because the former operating systems (since DOS 3.0[22]) compile the file into a tokenized in-memory representation[22] before they sort and regroup the directives to be processed in a specific order (with device drivers always being loaded before TSRs), whereas DR-DOS interprets the file and executes most directives line by line. This gives full control over the load order of drivers and TSRs via DEVICE and INSTALL (for example, to solve load-order conflicts or to load a program debugger before a device driver to be debugged)[8][19] and allows the user interaction and the flow through the file to be adapted based on conditions such as the processor type installed, keys pressed, load or input errors occurring, or return codes given by loaded software.[4][8] This becomes particularly useful since INSTALL can also be used to run non-resident software under DR-DOS, so that temporary external programs can be integrated into the CONFIG.SYS control flow.[4][11][8]

In MS-DOS/PC DOS 2.0 through 4.01, the length of the SHELL line was limited to 31 characters, whereas up to 128 characters are possible in later versions.[4][11] DR-DOS even accepts up to 255 characters.[4][11] CONFIG.SYS directives do not accept long filenames.

When installing Windows 95 over a preexisting DOS/Windows install, CONFIG.SYS and AUTOEXEC.BAT are renamed to CONFIG.DOS and AUTOEXEC.DOS. This is intended to ease dual booting between Windows 9x and DOS. When booting into DOS, they are temporarily renamed CONFIG.SYS and AUTOEXEC.BAT. Backups of the Windows 95 versions are made as CONFIG.W40 and AUTOEXEC.W40 files. When Caldera DR-DOS 7.02/7.03 is installed on a system already containing Windows 95, Windows' CONFIG.SYS and AUTOEXEC.BAT retain those names. DR-DOS' startup files are installed as DCONFIG.SYS (a name already used in earlier versions of DR DOS) and AUTODOS7.BAT.[5]

OS/2 uses the CONFIG.SYS file extensively for setting up its configuration, drivers, and environment before the graphical part of the system loads. In the OS/2 subsystem of Windows NT, what appeared as CONFIG.SYS to OS/2 programs was actually stored in the registry. There are many undocumented or poorly documented CONFIG.SYS directives used by OS/2.[23] CONFIG.SYS continues to be used by the OS/2 derivatives eComStation[24] and ArcaOS.[25]
https://en.wikipedia.org/wiki/IOPL_(CONFIG.SYS_directive)
Superconducting logic refers to a class of logic circuits or logic gates that use the unique properties of superconductors, including zero-resistance wires, ultrafast Josephson junction switches, and quantization of magnetic flux (fluxoid). As of 2023, superconducting computing is a form of cryogenic computing, as superconductive electronic circuits require cooling to cryogenic temperatures for operation, typically below 10 kelvin. Superconducting computing is often applied to quantum computing, with an important application known as superconducting quantum computing.

Superconducting digital logic circuits use single flux quanta (SFQ), also known as magnetic flux quanta, to encode, process, and transport data. SFQ circuits are made up of active Josephson junctions and passive elements such as inductors, resistors, transformers, and transmission lines. Whereas voltages and capacitors are important in semiconductor logic circuits such as CMOS, currents and inductors are most important in SFQ logic circuits. Power can be supplied by either direct current or alternating current, depending on the SFQ logic family.

The primary advantage of superconducting computing is improved power efficiency over conventional CMOS technology. Much of the power consumed, and heat dissipated, by conventional processors comes from moving information between logic elements rather than from the actual logic operations. Because superconductors have zero electrical resistance, little energy is required to move bits within the processor. This is expected to result in power consumption savings of a factor of 500 for an exascale computer.[1] For comparison, in 2014 it was estimated that a 1 exaFLOPS computer built in CMOS logic would consume some 500 megawatts of electrical power.[2] Superconducting logic can be an attractive option for ultrafast CPUs, where switching times are measured in picoseconds and operating frequencies approach 770 GHz.[3][4] However, since transferring information between the processor and the outside world still dissipates energy, superconducting computing was seen as well suited for computation-intensive tasks where the data largely stays in the cryogenic environment, rather than for big data applications where large amounts of information are streamed from outside the processor.[1]

As superconducting logic supports standard digital machine architectures and algorithms, the existing knowledge base for CMOS computing will still be useful in constructing superconducting computers. Given the reduced heat dissipation, superconducting logic may enable innovations such as three-dimensional stacking of components. However, because superconducting circuits require inductors, it is harder to reduce their size. As of 2014, devices using niobium as the superconducting material operating at 4 K were considered state of the art. Important challenges for the field were reliable cryogenic memory, as well as moving from research on individual components to large-scale integration.[1]

Josephson junction count is a measure of superconducting circuit or device complexity, similar to the transistor count used for semiconductor integrated circuits.

Superconducting computing research has been pursued by the U.S. National Security Agency since the mid-1950s. However, progress could not keep up with the increasing performance of standard CMOS technology.
As of 2016 there are no commercial superconducting computers, although research and development continues.[5]

Research in the mid-1950s to early 1960s focused on the cryotron invented by Dudley Allen Buck, but the liquid-helium temperatures and the slow switching time between superconducting and resistive states caused this research to be abandoned. In 1962 Brian Josephson established the theory behind the Josephson effect, and within a few years IBM had fabricated the first Josephson junction. IBM invested heavily in this technology from the mid-1960s to 1983.[6] By the mid-1970s IBM had constructed a superconducting quantum interference device using these junctions, mainly working with lead-based junctions and later switching to lead/niobium junctions. In 1980 IBM announced the Josephson computer revolution on the cover of the May issue of Scientific American. One of the reasons justifying such a large-scale investment was the expectation that Moore's law, enunciated in 1965, would soon slow down and reach a plateau. Instead, Moore's law kept its validity, while the costs of improving superconducting devices were borne essentially by IBM alone; IBM, however big, could not compete with the whole semiconductor industry, which commanded nearly limitless resources.[7] Thus, the program was shut down in 1983 because the technology was not considered competitive with standard semiconductor technology. Founded by researchers from this IBM program, HYPRES developed and commercialized superconductor integrated circuits from its commercial superconductor foundry in Elmsford, New York.[8] The Japanese Ministry of International Trade and Industry funded a superconducting research effort from 1981 to 1989 that produced the ETL-JC1, a 4-bit machine with 1,000 bits of RAM.[5]

In 1983, Bell Labs created niobium/aluminum oxide Josephson junctions that were more reliable and easier to fabricate. In 1985, the rapid single flux quantum (RSFQ) logic scheme, which had improved speed and energy efficiency, was developed by researchers at Moscow State University. These advances led to the United States' Hybrid Technology Multi-Threaded project, started in 1997, which sought to beat conventional semiconductors to the petaflop computing scale. The project was abandoned in 2000, however, and the first conventional petaflop computer was constructed in 2008. After 2000, attention turned to superconducting quantum computing. The 2011 introduction of reciprocal quantum logic by Quentin Herr of Northrop Grumman, as well as energy-efficient rapid single flux quantum by HYPRES, were seen as major advances.[5]

The push for exascale computing beginning in the mid-2010s, as codified in the National Strategic Computing Initiative, was seen as an opening for superconducting computing research, since exascale computers based on CMOS technology would be expected to require impractical amounts of electrical power. The Intelligence Advanced Research Projects Activity, formed in 2006, currently coordinates the U.S. Intelligence Community's research and development efforts in superconducting computing.[5]

Despite the names of many of these techniques containing the word "quantum", they are not necessarily platforms for quantum computing.[citation needed]

Rapid single flux quantum (RSFQ) superconducting logic was developed in the Soviet Union in the 1980s.[9] Information is carried by the presence or absence of a single flux quantum (SFQ).
The Josephson junctions are critically damped, typically by the addition of an appropriately sized shunt resistor, so that they switch without hysteresis. Clocking signals are provided to logic gates by separately distributed SFQ voltage pulses. Power is provided by bias currents distributed using resistors, which can consume more than 10 times as much static power as the dynamic power used for computation. The simplicity of using resistors to distribute currents can be an advantage in small circuits, and RSFQ continues to be used for many applications where energy efficiency is not of critical importance.

RSFQ has been used to build specialized circuits for high-throughput and numerically intensive applications, such as communications receivers and digital signal processing. Josephson junctions in RSFQ circuits are biased in parallel, so the total bias current grows linearly with the Josephson junction count. This currently presents the major limitation on the integration scale of RSFQ circuits, which does not exceed a few tens of thousands of Josephson junctions per circuit.

Reducing the resistor (R) used to distribute currents in traditional RSFQ circuits and adding an inductor (L) in series can reduce the static power dissipation and improve energy efficiency.[10][11] Reducing the bias voltage in traditional RSFQ circuits can likewise reduce the static power dissipation and improve energy efficiency.[12][13]

Efficient rapid single flux quantum (ERSFQ) logic was developed to eliminate the static power losses of RSFQ by replacing bias resistors with sets of inductors and current-limiting Josephson junctions.[14][15] Efficient single flux quantum (eSFQ) logic is also powered by direct current, but differs from ERSFQ in the size of the bias-current-limiting inductor and in how the limiting Josephson junctions are regulated.[16]

Reciprocal quantum logic (RQL) was developed to fix some of the problems of RSFQ logic. RQL uses reciprocal pairs of SFQ pulses to encode a logical '1'. Both power and clock are provided by multi-phase alternating current signals. RQL gates do not use resistors to distribute power and thus dissipate negligible static power.[17] Major RQL gates include AndOr, AnotB, and Set/Reset (with nondestructive readout), which together form a universal logic set and provide memory capabilities.[18]

Adiabatic quantum flux parametron (AQFP) logic was developed for energy-efficient operation and is powered by alternating current.[19][20] On January 13, 2021, it was announced that a 2.5 GHz prototype AQFP-based processor called MANA (Monolithic Adiabatic iNtegration Architecture) had achieved an energy efficiency 80 times that of traditional semiconductor processors, even accounting for the cooling.[21]

Superconducting quantum computing is a promising implementation of quantum information technology that involves nanofabricated superconducting electrodes coupled through Josephson junctions. As in a superconducting electrode, the phase and the charge are conjugate variables. There exist three families of superconducting qubits, depending on whether the charge, the phase, or neither of the two is a good quantum number. These are respectively termed charge qubits, flux qubits, and hybrid qubits.
https://en.wikipedia.org/wiki/Superconducting_computing
In term logic (a branch of philosophical logic), the square of opposition is a diagram representing the relations between the four basic categorical propositions. The origin of the square can be traced back to Aristotle's tractate On Interpretation and its distinction between two oppositions: contradiction and contrariety. However, Aristotle did not draw any diagram; this was done several centuries later by Boethius.

In traditional logic, a proposition (Latin: propositio) is a spoken assertion (oratio enunciativa), not the meaning of an assertion, as in modern philosophy of language and logic. A categorical proposition is a simple proposition containing two terms, subject (S) and predicate (P), in which the predicate is either asserted or denied of the subject. Every categorical proposition can be reduced to one of four logical forms, named A, E, I, and O based on the Latin affirmo (I affirm), for the affirmative propositions A and I, and nego (I deny), for the negative propositions E and O. These are:

A (universal affirmative): "Every S is P."; modern form $\forall x\,(Sx \to Px)$.
E (universal negative): "No S is P."; modern form $\forall x\,(Sx \to \neg Px)$.
I (particular affirmative): "Some S is P."; modern form $\exists x\,(Sx \land Px)$.
O (particular negative): "Some S is not P."; modern form $\exists x\,(Sx \land \neg Px)$.

*Proposition A may be stated as "All S is P." However, proposition E, when stated correspondingly as "All S is not P.", is ambiguous[2] because it can be either an E or an O proposition, thus requiring context to determine the form; the standard form "No S is P" is unambiguous, so it is preferred. Proposition O also takes the forms "Sometimes S is not P." and "A certain S is not P." (literally the Latin 'Quoddam S nōn est P.')

**$Sx$ in the modern forms means that a statement $S$ applies to an object $x$. It may simply be interpreted as "$x$ is $S$" in many cases. $Sx$ can also be written as $S(x)$.

Aristotle states (in chapters six and seven of the Peri hermēneias (Περὶ Ἑρμηνείας, Latin De Interpretatione, English 'On Interpretation')) that there are certain logical relationships between these four kinds of proposition. He says that to every affirmation there corresponds exactly one negation, and that every affirmation and its negation are 'opposed' such that always one of them must be true and the other false. A pair consisting of an affirmative statement and its negation is what he calls a 'contradiction' (in medieval Latin, contradictio). Examples of contradictories are 'every man is white' and 'not every man is white' (also read as 'some men are not white'), and 'no man is white' and 'some man is white'.

The relations below (contrary, subcontrary, subalternation, and superalternation) hold under the traditional-logic assumption that things stated as S (or things satisfying a statement S, in modern logic) exist. If this assumption is dropped, these relations do not hold.

'Contrary' (medieval: contrariae) statements are such that both cannot be true at the same time. Examples of these are the universal affirmative 'every man is white' and the universal negative 'no man is white'. These cannot be true at the same time. However, they are not contradictories, because both of them may be false. For example, it is false that every man is white, since some men are not white. Yet it is also false that no man is white, since there are some white men. Since every statement has a contradictory opposite (its negation), and since a contradicting statement is true when its opposite is false, it follows that the opposites of contraries (which the medievals called subcontraries, subcontrariae) can both be true, but they cannot both be false. Since subcontraries are negations of universal statements, they were called 'particular' statements by the medieval logicians.
Another logical relation implied by this, though not mentioned explicitly by Aristotle, is 'alternation' (alternatio), consisting of 'subalternation' and 'superalternation'. Subalternation is a relation between the particular statement and the universal statement of the same quality (affirmative or negative) such that the particular is implied by the universal, while superalternation is a relation between them such that the falsity of the universal (equivalently, the negation of the universal) is implied by the falsity of the particular (equivalently, the negation of the particular).[3] (Superalternation is the contrapositive of subalternation.) In these relations, the particular is the subaltern of the universal, which is the particular's superaltern. For example, if 'every man is white' is true, its contrary 'no man is white' is false. Therefore, the contradictory 'some man is white' is true. Similarly, the universal 'no man is white' implies the particular 'not every man is white'.[4][5]

In summary:

A and O, and likewise E and I, are contradictories: they can be neither both true nor both false.
A and E are contraries: they cannot both be true, but they can both be false.
I and O are subcontraries: they cannot both be false, but they can both be true.
I is the subaltern of A, and O is the subaltern of E: each universal implies its particular.

These relationships became the basis of a diagram originating with Boethius and used by medieval logicians to classify the logical relationships. The propositions are placed in the four corners of a square, and the relations are represented as lines drawn between them, whence the name 'the square of opposition'. Therefore, the following cases can be made:[6]

If A is true, then E is false, I is true, and O is false.
If E is true, then A is false, I is false, and O is true.
If I is true, then E is false, while A and O are undetermined.
If O is true, then A is false, while E and I are undetermined.
If A is false, then O is true, while E and I are undetermined.
If E is false, then I is true, while A and O are undetermined.
If I is false, then A is false, E is true, and O is true.
If O is false, then A is true, E is false, and I is true.

To memorize them, the medievals invented a Latin rhyme.[7] It affirms that A and E are neither both true nor both false in each of the above cases. The same applies to I and O. While the first two are universal statements, the couple I/O refers to particular ones.

The square of opposition was used for the categorical inferences described by the Greek philosopher Aristotle: conversion, obversion, and contraposition. Each of those three types of categorical inference was applied to the four Boethian logical forms: A, E, I, and O.

Subcontraries (I and O), which medieval logicians represented in the form 'quoddam A est B' (some particular A is B) and 'quoddam A non est B' (some particular A is not B), cannot both be false, since their universal contradictory statements (no A is B / every A is B) cannot both be true. This leads to a difficulty first identified by Peter Abelard (1079 – 21 April 1142). 'Some A is B' seems to imply 'something is A'; in other words, there exists something that is A. For example, 'some man is white' seems to imply that at least one existing thing is a man, namely the man who has to be white if 'some man is white' is true. But 'some man is not white' also implies that something exists that is a man, namely the man who is not white, if the statement 'some man is not white' is true. But Aristotelian logic requires that, necessarily, one of these statements (more generally, 'some particular A is B' and 'some particular A is not B') is true; i.e., they cannot both be false. Therefore, since both statements imply the presence of at least one thing that is a man, the existence of a man or men follows. But, as Abelard points out in the Dialectica, surely men might not exist?[8]

Abelard also points out that subcontraries containing subject terms denoting nothing, such as 'a man who is a stone', are both false.

Terence Parsons (born 1939) argues that ancient philosophers did not experience the problem of existential import, as only the A (universal affirmative) and I (particular affirmative) forms had existential import.
(If a statement includes a term such that the statement is false if the term has no instances, i.e., if no thing associated with the term exists, then the statement is said to have existential import with respect to that term.) He goes on to cite the medieval philosopher William of Ockham (c. 1287 – 1347), and points to Boethius' translation of Aristotle's work as giving rise to the mistaken notion that the O form has existential import.

In the 19th century, George Boole (2 November 1815 – 8 December 1864) argued for requiring existential import on both terms in particular claims (I and O), but allowing all terms of universal claims (A and E) to lack existential import. This decision made Venn diagrams particularly easy to use for term logic. The square of opposition, under this Boolean set of assumptions, is often called the modern square of opposition. In the modern square of opposition, A and O claims are contradictories, as are E and I, but all other forms of opposition cease to hold: there are no contraries, subcontraries, subalternations, or superalternations. Thus, from a modern point of view, it often makes sense to talk about 'the' opposition of a claim, rather than insisting, as older logicians did, that a claim has several different opposites, which stand in different kinds of opposition to the claim.

Gottlob Frege (8 November 1848 – 26 July 1925)'s Begriffsschrift also presents a square of oppositions, organised in an almost identical manner to the classical square, showing the contradictories, subalternates, and contraries between four formulae constructed from universal quantification, negation, and implication. Algirdas Julien Greimas (9 March 1917 – 27 February 1992)'s semiotic square was derived from Aristotle's work.

The traditional square of opposition is now often compared with squares based on inner and outer negation.[14]

The square of opposition has been extended to a logical hexagon, which includes the relationships of six statements. It was discovered independently by both Augustin Sesmat (April 7, 1885 – December 12, 1957) and Robert Blanché (1898–1975).[15] It has been proven that both the square and the hexagon, followed by a "logical cube", belong to a regular series of n-dimensional objects called "logical bi-simplexes of dimension n". The pattern also goes even beyond this.[16]

The logical square, also called the square of opposition or the square of Apuleius, has its origin in the four marked sentences to be employed in syllogistic reasoning: "Every man is bad," the universal affirmative; its negation "Not every man is bad" (or "Some men are not bad"), the particular negative; "Some men are bad," the particular affirmative; and finally the negation of the particular affirmative, "No man is bad," the universal negative. Robert Blanché published his Structures intellectuelles with Vrin in 1966, and since then many scholars have held that the logical square, representing four values, should be replaced by the logical hexagon, which, by representing six values, is a more potent figure because it has the power to explain more things about logic and natural language.

In modern mathematical logic, statements containing the words "all", "some", and "no" can be stated in terms of set theory if we assume a set-like domain of discourse. If the set of all A's is labeled as $s(A)$ and the set of all B's as $s(B)$, then:

"All A is B" (A form) corresponds to $s(A) \subseteq s(B)$.
"No A is B" (E form) corresponds to $s(A) \cap s(B) = \emptyset$.
"Some A is B" (I form) corresponds to $s(A) \cap s(B) \neq \emptyset$.
"Some A is not B" (O form) corresponds to $s(A) \not\subseteq s(B)$.

By definition, the empty set $\emptyset$ is a subset of all sets.
From this fact it follows that, according to this mathematical convention, if there are no A's, then the statements "All A is B" and "No A is B" are always true, whereas the statements "Some A is B" and "Some A is not B" are always false. This also implies that AaB does not entail AiB, and some of the syllogisms mentioned above are not valid when there are no A's ($s(A) = \emptyset$).
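The set-theoretic reading above can be checked mechanically. The following minimal Python sketch (the function names are illustrative) encodes the four forms as set predicates and exhibits the vacuous truth of the universal forms when there are no A's:

```python
def form_A(A, B): return A <= B                  # All A is B: s(A) is a subset of s(B)
def form_E(A, B): return A.isdisjoint(B)         # No A is B: s(A) and s(B) are disjoint
def form_I(A, B): return not A.isdisjoint(B)     # Some A is B
def form_O(A, B): return not (A <= B)            # Some A is not B

empty, B = set(), {"b"}
assert form_A(empty, B) and form_E(empty, B)          # both universal forms vacuously true
assert not form_I(empty, B) and not form_O(empty, B)  # both particular forms false

# with a non-empty subject term, the traditional subalternation A implies I reappears
men, white = {"socrates"}, {"socrates", "snow"}
assert form_A(men, white) and form_I(men, white)
```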
https://en.wikipedia.org/wiki/Contrary_(logic)
DNS Flood is a type of denial-of-service attack in which the traffic to a network resource or machine is disrupted for some time. The offender sends a great number of requests to the resource or machine so that it may become unavailable to those who try to reach it. During a DNS flood, the host that connects to the Internet is disrupted by an overload of traffic: the attack halts the work of the resource or machine by preventing legitimate traffic from landing on it. This attack is mainly carried out by hackers[citation needed] to benefit from the attacked resource or machine. DDoS attacks have been perpetrated for many reasons, including blackmailing website owners and knocking out websites, including high-profile sites such as large bank websites.[1]

Many methods can be, and are being, adopted to prevent these types of attacks, including dropping malformed packets, using filters to avoid receiving packets from sources with the potential to attack, and timing out half-open connections more aggressively. One can also set SYN, ICMP, and UDP thresholds at lower levels to prevent such DDoS attacks from harming one's network.[2][3]
https://en.wikipedia.org/wiki/DNS_Flood
In theoretical physics, a gravitational anomaly is an example of a gauge anomaly: it is an effect of quantum mechanics (usually a one-loop diagram) that invalidates the general covariance of a theory of general relativity combined with some other fields.[citation needed] The adjective "gravitational" is derived from the symmetry of a gravitational theory, namely from general covariance. A gravitational anomaly is generally synonymous with diffeomorphism anomaly, since general covariance is symmetry under coordinate reparametrization, i.e., diffeomorphism.

General covariance is the basis of general relativity, the classical theory of gravitation. Moreover, it is necessary for the consistency of any theory of quantum gravity, since it is required in order to cancel unphysical degrees of freedom with a negative norm, namely gravitons polarized along the time direction. Therefore, all gravitational anomalies must cancel out.

The anomaly usually appears as a Feynman diagram with a chiral fermion running in the loop (a polygon) with $n$ external gravitons attached to the loop, where $n = 1 + D/2$ and $D$ is the spacetime dimension.

Consider a classical gravitational field represented by the vielbein $e_{\ \mu}^{a}$ and a quantized Fermi field $\psi$. The generating functional for this quantum field is

$$ Z[e_{\ \mu}^{a}] = e^{-W[e_{\ \mu}^{a}]} = \int d\bar{\psi}\, d\psi\; e^{-\int d^{4}x\, e\, \mathcal{L}_{\psi}}, $$

where $W$ is the quantum action and the factor $e$ before the Lagrangian is the vielbein determinant. The variation of the quantum action renders

$$ \delta W[e_{\ \mu}^{a}] = \int d^{4}x\; e\, \langle T_{\ a}^{\mu} \rangle\, \delta e_{\ \mu}^{a}, $$

in which we denote the mean value with respect to the path integral by the bracket $\langle \cdot \rangle$. Let us label the Lorentz, Einstein, and Weyl transformations respectively by their parameters $\alpha$, $\xi$, $\sigma$; they give rise to the following anomalies:

The Lorentz anomaly,

$$ \delta_{\alpha} W = \int d^{4}x\, e\, \alpha_{ab} \langle T^{ab} \rangle, $$

which readily indicates that the energy-momentum tensor has an anti-symmetric part;

the Einstein anomaly,

$$ \delta_{\xi} W = -\int d^{4}x\, e\, \xi^{\nu} \left( \nabla_{\mu} \langle T_{\ \nu}^{\mu} \rangle - \omega_{ab\nu} \langle T^{ab} \rangle \right), $$

which is related to the non-conservation of the energy-momentum tensor, i.e., $\nabla_{\mu} \langle T^{\mu\nu} \rangle \neq 0$;

and the Weyl anomaly,

$$ \delta_{\sigma} W = \int d^{4}x\, e\, \sigma \langle T_{\ \mu}^{\mu} \rangle, $$

which indicates that the trace is non-zero.
https://en.wikipedia.org/wiki/Diffeo_anomaly
A race condition or race hazard is the condition of an electronics, software, or other system where the system's substantive behavior is dependent on the sequence or timing of other uncontrollable events, leading to unexpected or inconsistent results. It becomes a bug when one or more of the possible behaviors is undesirable.

The term race condition was already in use by 1954, for example in David A. Huffman's doctoral thesis "The synthesis of sequential switching circuits".[1]

Race conditions can occur especially in logic circuits or in multithreaded or distributed software programs. Using mutual exclusion can prevent race conditions in distributed software systems.

A typical example of a race condition may occur when a logic gate combines signals that have traveled along different paths from the same source. The inputs to the gate can change at slightly different times in response to a change in the source signal. The output may, for a brief period, change to an unwanted state before settling back to the designed state. Certain systems can tolerate such glitches, but if this output functions as a clock signal for further systems that contain memory, for example, the system can rapidly depart from its designed behaviour (in effect, the temporary glitch becomes a permanent glitch).

Consider, for example, a two-input AND gate fed with the following logic:

$$ \text{output} = A \wedge \overline{A} $$

A logic signal $A$ on one input and its negation, $\neg A$ (the ¬ is a Boolean negation), on the other input should in theory never produce a true output: $A \wedge \overline{A} \neq 1$. If, however, changes in the value of $A$ take longer to propagate to the second input than to the first, then when $A$ changes from false to true, a brief period will ensue during which both inputs are true, and so the gate's output will also be true.[2]

A practical example of a race condition can occur when logic circuitry is used to detect certain outputs of a counter. If all the bits of the counter do not change exactly simultaneously, there will be intermediate patterns that can trigger false matches.

A critical race condition occurs when the order in which internal variables are changed determines the eventual state that the state machine will end up in. A non-critical race condition occurs when the order in which internal variables are changed does not determine that eventual state.

A static race condition occurs when a signal and its complement are combined. A dynamic race condition occurs when it results in multiple transitions when only one is intended. Dynamic races are due to interaction between gates and can be eliminated by using no more than two levels of gating. An essential race condition occurs when an input has two transitions in less than the total feedback propagation time. Essential races are sometimes cured by using inductive delay-line elements to effectively increase the time duration of an input signal.

Design techniques such as Karnaugh maps encourage designers to recognize and eliminate race conditions before they cause problems. Often logic redundancy can be added to eliminate some kinds of races. As well as these problems, some logic elements can enter metastable states, which create further problems for circuit designers.
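The AND-gate glitch described above can be mimicked with a toy discrete-time simulation. This is a sketch under simplifying assumptions (unit propagation delay on the inverted input only, one sample per time step), not a hardware model:

```python
def and_gate_with_skew(A_trace, delay=1):
    # output[t] = A[t] AND NOT A[t - delay]; ideally A AND NOT A is always false,
    # but the delayed negation lets both inputs be true for `delay` steps
    out = []
    for t, a in enumerate(A_trace):
        a_delayed = A_trace[max(t - delay, 0)]
        out.append(a and not a_delayed)
    return out

A = [False, False, True, True, True]
print(and_gate_with_skew(A))  # [False, False, True, False, False]: a one-step glitch
```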
A race condition can arise in software when a computer program has multiple code paths that are executing at the same time. If the multiple code paths take a different amount of time than expected, they can finish in a different order than expected, which can cause software bugs due to unanticipated behavior. A race can also occur between two programs, resulting in security issues.

Critical race conditions cause invalid execution and software bugs. Critical race conditions often happen when the processes or threads depend on some shared state. Operations upon shared states are done in critical sections that must be mutually exclusive. Failure to obey this rule can corrupt the shared state.

A data race is a type of race condition. Data races are important parts of various formal memory models. The memory model defined in the C11 and C++11 standards specifies that a C or C++ program containing a data race has undefined behavior.[3][4]

A race condition can be difficult to reproduce and debug because the end result is nondeterministic and depends on the relative timing between interfering threads. Problems of this nature can therefore disappear when running in debug mode, adding extra logging, or attaching a debugger. A bug that disappears like this during debugging attempts is often referred to as a "Heisenbug". It is therefore better to avoid race conditions by careful software design.

Assume that two threads each increment the value of a global integer variable by 1. Ideally, the following sequence of operations would take place:

Thread 1 reads the value (0) from memory.
Thread 1 increments the value to 1 and writes it back.
Thread 2 reads the value (1) from memory.
Thread 2 increments the value to 2 and writes it back.

In the case shown above, the final value is 2, as expected. However, if the two threads run simultaneously without locking or synchronization (via semaphores), the outcome of the operation could be wrong. The alternative sequence of operations below demonstrates this scenario:

Thread 1 reads the value (0) from memory.
Thread 2 reads the value (0) from memory.
Thread 1 increments its copy to 1 and writes it back.
Thread 2 also increments its copy to 1 and writes it back.

In this case, the final value is 1 instead of the expected result of 2. This occurs because here the increment operations are not mutually exclusive. Mutually exclusive operations are those that cannot be interrupted while accessing some resource such as a memory location. (A runnable sketch of this scenario appears below.)

Not everyone regards data races as a subset of race conditions.[5] The precise definition of a data race is specific to the formal concurrency model being used, but typically it refers to a situation where a memory operation in one thread could potentially attempt to access a memory location at the same time that a memory operation in another thread is writing to that memory location, in a context where this is dangerous. This implies that a data race is different from a race condition, as it is possible to have nondeterminism due to timing even in a program without data races, for example in a program in which all memory accesses use only atomic operations.

This can be dangerous because on many platforms, if two threads write to a memory location at the same time, it may be possible for the memory location to end up holding a value that is some arbitrary and meaningless combination of the bits representing the values that each thread was attempting to write; this could result in memory corruption if the resulting value is one that neither thread attempted to write (sometimes this is called a 'torn write'). Similarly, if one thread reads from a location while another thread is writing to it, it may be possible for the read to return a value that is some arbitrary and meaningless combination of the bits representing the value that the memory location held before the write and of the bits representing the value being written.
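The lost-update interleaving above is easy to reproduce. The following minimal Python sketch (names such as increment and run_experiment are illustrative) runs the unsynchronized and the lock-protected versions side by side; because a race depends on timing, the unsynchronized total often, but not always, falls short of the expected value:

```python
import threading

counter = 0
lock = threading.Lock()

def increment(n, use_lock):
    global counter
    for _ in range(n):
        if use_lock:
            with lock:          # the read-modify-write becomes a critical section
                counter += 1
        else:
            counter += 1        # racy: the read and the write can interleave

def run_experiment(use_lock, n=200_000, n_threads=2):
    global counter
    counter = 0
    threads = [threading.Thread(target=increment, args=(n, use_lock))
               for _ in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return counter

print(run_experiment(use_lock=False))  # frequently below 400000 on CPython
print(run_experiment(use_lock=True))   # always exactly 400000
```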
On many platforms, special memory operations are provided for simultaneous access; in such cases, simultaneous access using these special operations is typically safe, but simultaneous access using other memory operations is dangerous. Sometimes such special operations (which are safe for simultaneous access) are called atomic or synchronization operations, whereas the ordinary operations (which are unsafe for simultaneous access) are called data operations. This is probably why the term is data race: on many platforms, where there is a race condition involving only synchronization operations, such a race may be nondeterministic but otherwise safe; but a data race could lead to memory corruption or undefined behavior.

The precise definition of a data race differs across formal concurrency models. This matters because concurrent behavior is often non-intuitive, and so formal reasoning is sometimes applied. The C++ standard, in draft N4296 (2014-11-19), defines data race as follows in section 1.10.23 (page 14):[6]

Two actions are potentially concurrent if [conditions omitted]. The execution of a program contains a data race if it contains two potentially concurrent conflicting actions, at least one of which is not atomic, and neither happens before the other, except for the special case for signal handlers described below [omitted]. Any such data race results in undefined behavior.

The parts of this definition relating to signal handlers are idiosyncratic to C++ and are not typical of definitions of data race.

The paper Detecting Data Races on Weak Memory Systems[7] provides a different definition: "two memory operations conflict if they access the same location and at least one of them is a write operation ... Two memory operations, x and y, in a sequentially consistent execution form a race 〈x,y〉, iff x and y conflict, and they are not ordered by the hb1 relation of the execution. The race 〈x,y〉 is a data race iff at least one of x or y is a data operation."

Here we have two memory operations accessing the same location, one of which is a write. The hb1 relation is defined elsewhere in the paper and is an example of a typical "happens-before" relation; intuitively, if we can prove that we are in a situation where one memory operation X is guaranteed to be executed to completion before another memory operation Y begins, then we say that "X happens-before Y". If neither "X happens-before Y" nor "Y happens-before X", then we say that X and Y are "not ordered by the hb1 relation". So the clause "... and they are not ordered by the hb1 relation of the execution" can be intuitively translated as "... and X and Y are potentially concurrent". The paper considers dangerous only those situations in which at least one of the memory operations is a "data operation"; in other parts of the paper, it also defines a class of "synchronization operations" which are safe for potentially simultaneous use, in contrast to "data operations".

The Java Language Specification[8] provides a different definition: two accesses to (reads of or writes to) the same variable are said to be conflicting if at least one of the accesses is a write ... When a program contains two conflicting accesses (§17.4.1) that are not ordered by a happens-before relationship, it is said to contain a data race ... a data race cannot cause incorrect behavior such as returning the wrong length for an array.
A critical difference between the C++ approach and the Java approach is that in C++, a data race is undefined behavior, whereas in Java, a data race merely affects "inter-thread actions".[8] This means that in C++, an attempt to execute a program containing a data race could (while still adhering to the spec) crash or exhibit insecure or bizarre behavior, whereas in Java, an attempt to execute a program containing a data race may produce undesired concurrency behavior but is otherwise (assuming that the implementation adheres to the spec) safe.

An important facet of data races is that in some contexts, a program that is free of data races is guaranteed to execute in a sequentially consistent manner, greatly easing reasoning about the concurrent behavior of the program. Formal memory models that provide such a guarantee are said to exhibit an "SC for DRF" (Sequential Consistency for Data Race Freedom) property. This approach has been said to have achieved recent consensus (presumably compared to approaches which guarantee sequential consistency in all cases, or approaches which do not guarantee it at all).[9]

For example, in Java, this guarantee is directly specified:[8]

A program is correctly synchronized if and only if all sequentially consistent executions are free of data races. If a program is correctly synchronized, then all executions of the program will appear to be sequentially consistent (§17.4.3). This is an extremely strong guarantee for programmers. Programmers do not need to reason about reorderings to determine that their code contains data races. Therefore they do not need to reason about reorderings when determining whether their code is correctly synchronized. Once the determination that the code is correctly synchronized is made, the programmer does not need to worry that reorderings will affect his or her code. A program must be correctly synchronized to avoid the kinds of counterintuitive behaviors that can be observed when code is reordered. The use of correct synchronization does not ensure that the overall behavior of a program is correct. However, its use does allow a programmer to reason about the possible behaviors of a program in a simple way; the behavior of a correctly synchronized program is much less dependent on possible reorderings. Without correct synchronization, very strange, confusing and counterintuitive behaviors are possible.

By contrast, a draft C++ specification does not directly require an SC for DRF property, but merely observes that there exists a theorem providing it:

[Note: It can be shown that programs that correctly use mutexes and memory_order_seq_cst operations to prevent all data races and use no other synchronization operations behave as if the operations executed by their constituent threads were simply interleaved, with each value computation of an object being taken from the last side effect on that object in that interleaving. This is normally referred to as "sequential consistency". However, this applies only to data-race-free programs, and data-race-free programs cannot observe most program transformations that do not change single-threaded program semantics.
In fact, most single-threaded program transformations continue to be allowed, since any program that behaves differently as a result must perform an undefined operation. — end note]

Note that the C++ draft specification admits the possibility of programs that are valid but use synchronization operations with a memory_order other than memory_order_seq_cst, in which case the result may be a program that is correct but for which no guarantee of sequential consistency is provided. In other words, in C++, some correct programs are not sequentially consistent. This approach is thought to give C++ programmers the freedom to choose faster program execution at the cost of giving up ease of reasoning about their program.[9]

There are various theorems, often provided in the form of memory models, that provide SC for DRF guarantees given various contexts. The premises of these theorems typically place constraints upon both the memory model (and therefore upon the implementation) and upon the programmer; that is to say, there are typically programs which do not meet the premises of the theorem and which could not be guaranteed to execute in a sequentially consistent manner.

The DRF1 memory model[10] provides SC for DRF and allows the optimizations of the WO (weak ordering), RCsc (release consistency with sequentially consistent special operations), VAX memory model, and data-race-free-0 memory models. The PLpc memory model[11] provides SC for DRF and allows the optimizations of the TSO (total store order), PSO, PC (processor consistency), and RCpc (release consistency with processor consistency special operations) models. DRFrlx[12] provides a sketch of an SC for DRF theorem in the presence of relaxed atomics.

Many software race conditions have associated computer security implications. A race condition allows an attacker with access to a shared resource to cause other actors that utilize that resource to malfunction, resulting in effects including denial of service[13] and privilege escalation.[14][15]

A specific kind of race condition involves checking for a predicate (e.g. for authentication), then acting on the predicate, while the state can change between the time-of-check and the time-of-use. When this kind of bug exists in security-sensitive code, a security vulnerability called a time-of-check-to-time-of-use (TOCTTOU) bug is created.

Race conditions are also intentionally used to create hardware random number generators and physically unclonable functions.[16][citation needed] PUFs can be created by designing circuit topologies with identical paths to a node and relying on manufacturing variations to randomly determine which paths will complete first. By measuring each manufactured circuit's specific set of race condition outcomes, a profile can be collected for each circuit and kept secret in order to later verify a circuit's identity.

Two or more programs may collide in their attempts to modify or access a file system, which can result in data corruption or privilege escalation.[14] File locking provides a commonly used solution (a minimal sketch is given below). A more cumbersome remedy involves organizing the system in such a way that one unique process (running a daemon or the like) has exclusive access to the file, and all other processes that need to access the data in that file do so only via interprocess communication with that one process. This requires synchronization at the process level.
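As a minimal illustration of the file-locking remedy, the following POSIX-only Python sketch serializes appends to a shared file with an advisory lock; the file path and function name are illustrative assumptions:

```python
import fcntl  # POSIX advisory locking; not available on Windows

def append_record(path, line):
    with open(path, "a") as f:
        fcntl.flock(f, fcntl.LOCK_EX)   # block until we hold the exclusive lock
        try:
            f.write(line + "\n")        # critical section: one process writes at a time
            f.flush()
        finally:
            fcntl.flock(f, fcntl.LOCK_UN)
```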
A different form of race condition exists in file systems where unrelated programs may affect each other by suddenly using up available resources such as disk space, memory space, or processor cycles. Software not carefully designed to anticipate and handle this race situation may then become unpredictable. Such a risk may be overlooked for a long time in a system that seems very reliable. But eventually enough data may accumulate, or enough other software may be added, to critically destabilize many parts of a system. An example of this occurred with the near loss of the Mars rover "Spirit" not long after landing, which was caused by deleted file entries leading the file system library to consume all available memory space.[17] A solution is for software to request and reserve all the resources it will need before beginning a task; if this request fails, then the task is postponed, avoiding the many points where failure could have occurred. Alternatively, each of those points can be equipped with error handling, or the success of the entire task can be verified afterwards before continuing. A more common approach is to simply verify that enough system resources are available before starting a task; however, this may not be adequate, because in complex systems the actions of other running programs can be unpredictable.

In networking, consider a distributed chat network like IRC, where a user who starts a channel automatically acquires channel-operator privileges. If two users on different servers, on different ends of the same network, try to start the same-named channel at the same time, each user's respective server will grant channel-operator privileges to each user, since neither server will yet have received the other server's signal that it has allocated that channel. (This problem has been largely solved by various IRC server implementations.)

In this case of a race condition, the concept of the "shared resource" covers the state of the network (what channels exist, as well as what users started them and therefore have what privileges), which each server can freely change as long as it signals the other servers on the network about the changes so that they can update their conception of the state of the network. However, the latency across the network makes possible the kind of race condition described. In this case, heading off race conditions by imposing a form of control over access to the shared resource (say, appointing one server to control who holds what privileges) would mean turning the distributed network into a centralized one (at least for that one part of the network operation).

Race conditions can also exist when a computer program is written with non-blocking sockets, in which case the performance of the program can be dependent on the speed of the network link.

Software flaws in life-critical systems can be disastrous. Race conditions were among the flaws in the Therac-25 radiation therapy machine, which led to the death of at least three patients and injuries to several more.[18]

Another example is the energy management system provided by GE Energy and used by Ohio-based FirstEnergy Corp (among other power facilities). A race condition existed in the alarm subsystem; when three sagging power lines were tripped simultaneously, the condition prevented alerts from being raised to the monitoring technicians, delaying their awareness of the problem.
This software flaw eventually led to the North American Blackout of 2003.[19] GE Energy later developed a software patch to correct the previously undiscovered error.

Many software tools exist to help detect race conditions in software. They can be largely categorized into two groups: static analysis tools and dynamic analysis tools.

Thread Safety Analysis is a static analysis tool for annotation-based intra-procedural static analysis, originally implemented as a branch of gcc, and now reimplemented in Clang, supporting PThreads.[20][non-primary source needed]

Dynamic analysis tools, which detect races by monitoring the events of a particular program execution, include tools such as ThreadSanitizer and Helgrind. There are also several benchmarks designed to evaluate the effectiveness of data race detection tools.

Race conditions are a common concern in human-computer interaction design and software usability. An intuitively designed human-machine interface gives the user feedback that aligns with their expectations, but system-generated actions can interrupt a user's current action or workflow in unexpected ways, such as inadvertently answering or rejecting an incoming call on a smartphone while performing a different task.[citation needed]

In UK railway signalling, a race condition would arise in the carrying out of Rule 55. According to this rule, if a train was stopped on a running line by a signal, the locomotive fireman would walk to the signal box in order to remind the signalman that the train was present. In at least one case, at Winwick in 1934, an accident occurred because the signalman accepted another train before the fireman arrived. Modern signalling practice removes the race condition by making it possible for the driver to instantaneously contact the signal box by radio.

Race conditions are not confined to digital systems; neuroscience research, for example, has demonstrated that race conditions can occur in mammal brains as well.[25][26]
https://en.wikipedia.org/wiki/Race_hazard
In computer science, a compiler-compiler or compiler generator is a programming tool that creates a parser, interpreter, or compiler from some form of formal description of a programming language and machine.

The most common type of compiler-compiler is called a parser generator.[1] It handles only syntactic analysis. A formal description of a language is usually a grammar used as an input to a parser generator. It often resembles Backus–Naur form (BNF) or extended Backus–Naur form (EBNF), or has its own syntax. Grammar files describe the syntax of a generated compiler's target programming language and the actions that should be taken against its specific constructs.

Source code for a parser of the programming language is returned as the parser generator's output. This source code can then be compiled into a parser, which may be either standalone or embedded. The compiled parser then accepts the source code of the target programming language as an input and performs an action or outputs an abstract syntax tree (AST). Parser generators do not handle the semantics of the AST, or the generation of machine code for the target machine.[2]

A metacompiler is a software development tool used mainly in the construction of compilers, translators, and interpreters for other programming languages.[3] The input to a metacompiler is a computer program written in a specialized programming metalanguage designed mainly for the purpose of constructing compilers.[3][4] The language of the compiler produced is called the object language. The minimal input producing a compiler is a metaprogram specifying the object language grammar and semantic transformations into an object program.[4][5]

A typical parser generator associates executable code with each of the rules of the grammar that should be executed when these rules are applied by the parser. These pieces of code are sometimes referred to as semantic action routines since they define the semantics of the syntactic structure that is analyzed by the parser. Depending upon the type of parser that should be generated, these routines may construct a parse tree (or abstract syntax tree), or generate executable code directly.

One of the earliest, and surprisingly powerful, versions of compiler-compilers is META II (1964), which accepted an analytical grammar with output facilities that produce stack machine code, and was able to compile its own source code and other languages.

Among the earliest programs built for the original Unix versions at Bell Labs was the two-part lex and yacc system, which was normally used to output C programming language code, but had a flexible output system that could be used for everything from programming languages to text file conversion. Their modern GNU versions are flex and bison.

Some experimental compiler-compilers take as input a formal description of programming language semantics, typically using denotational semantics. This approach is often called 'semantics-based compiling', and was pioneered by Peter Mosses' Semantic Implementation System (SIS) in 1978.[6] However, both the generated compiler and the code it produced were inefficient in time and space. No production compilers are currently built in this way, but research continues.

The Production Quality Compiler-Compiler (PQCC) project at Carnegie Mellon University does not formalize semantics, but does have a semi-formal framework for machine description.
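To make concrete what a parser generator automates, the following hand-written C++ sketch plays the role of the generated parser for a toy grammar; the grammar and all names are illustrative, not the output of any particular tool:

// Recursive-descent recognizer for a toy grammar:
//   expr ::= term { ('+' | '-') term }
//   term ::= digit { digit }
// A parser generator derives code of roughly this shape from the grammar.
#include <cctype>
#include <string>

struct Parser {
    std::string src;
    std::size_t pos = 0;

    bool term() {                       // term ::= one or more digits
        if (pos >= src.size() || !std::isdigit((unsigned char)src[pos])) return false;
        while (pos < src.size() && std::isdigit((unsigned char)src[pos])) ++pos;
        return true;
    }
    bool expr() {                       // expr ::= term (('+'|'-') term)*
        if (!term()) return false;
        while (pos < src.size() && (src[pos] == '+' || src[pos] == '-')) {
            ++pos;
            if (!term()) return false;  // an operator must be followed by a term
        }
        return pos == src.size();       // succeed only if all input is consumed
    }
};

int main() {
    Parser p{"12+34-5"};
    return p.expr() ? 0 : 1;            // exit status 0 on a successful parse
}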
Compiler-compilers exist in many flavors, including bottom-up rewrite machine generators (see JBurg) used to tile syntax trees according to a rewrite grammar for code generation, and attribute grammar parser generators (e.g. ANTLR can be used for simultaneous type checking, constant propagation, and more during the parsing stage).

Metacompilers reduce the task of writing compilers by automating the aspects that are the same regardless of the object language. This makes possible the design of domain-specific languages which are appropriate to the specification of a particular problem. A metacompiler reduces the cost of producing translators for such domain-specific object languages to a point where it becomes economically feasible to include a domain-specific language design in the solution of a problem.[4]

Because a metacompiler's metalanguage will usually be a powerful string and symbol processing language, metacompilers often have strong general-purpose applications as well, including the generation of a wide range of other software engineering and analysis tools.[4][7]

Besides being useful for domain-specific language development, a metacompiler is itself a prime example of a domain-specific language, designed for the domain of compiler writing.

A metacompiler is a metaprogram usually written in its own metalanguage or in an existing computer programming language. The process of a metacompiler, written in its own metalanguage, compiling itself is equivalent to a self-hosting compiler. Most common compilers written today are self-hosting compilers. Self-hosting is a powerful tool of many metacompilers, allowing the easy extension of their own metaprogramming metalanguage. The feature that sets a metacompiler apart from other compiler-compilers is that it takes as input a specialized metaprogramming language that describes all aspects of the compiler's operation. A metaprogram produced by a metacompiler is as complete a program as one written in C++, BASIC, or any other general programming language. The metaprogramming metalanguage is a powerful attribute allowing easier development of computer programming languages and other computer tools. Command-line processors and text-string transformation and analysis tools are easily coded using the metaprogramming metalanguages of metacompilers.

A full-featured development package includes a linker and a run-time support library. Usually, a machine-oriented system programming language, such as C or C++, is needed to write the support library. A library consisting of support functions needed for the compiling process usually completes the full metacompiler package.

In computer science, the prefix meta is commonly used to mean about (its own category). For example, metadata are data that describe other data. A language that is used to describe other languages is a metalanguage. Meta may also mean on a higher level of abstraction. A metalanguage operates on a higher level of abstraction in order to describe properties of a language. Backus–Naur form (BNF) is a formal metalanguage originally used to define ALGOL 60. BNF is a weak metalanguage, for it describes only the syntax and says nothing about the semantics or meaning.

Metaprogramming is the writing of computer programs with the ability to treat programs as their data. A metacompiler takes as input a metaprogram written in a specialized metalanguage (a higher-level abstraction) specifically designed for the purpose of metaprogramming.[4][5] The output is an executable object program.
An analogy can be drawn: just as a C++ compiler takes as input a C++ programming language program, a metacompiler takes as input a metaprogramming metalanguage program.

Many advocates of the language Forth call the process of creating a new implementation of Forth meta-compilation, and hold that it constitutes a metacompiler. This Forth use of the term metacompiler is disputed in mainstream computer science; see Forth (programming language) and History of compiler construction. The actual Forth process of compiling itself is a combination of Forth being a self-hosting, extensible programming language and sometimes cross compilation, long-established terminology in computer science. Metacompilers are a general compiler writing system, whereas the Forth metacompiler concept is indistinguishable from a self-hosting, extensible language. The actual Forth process acts at a lower level, defining a minimum subset of Forth words that can be used to define additional Forth words; a full Forth implementation can then be defined from the base set. This sounds like a bootstrap process.

The problem is that almost every general-purpose language compiler also fits the Forth metacompiler description: substitute any common language, such as C, C++, Java, Pascal, COBOL, Fortran, Ada, or Modula-2, and that language's self-hosted compiler would be a metacompiler according to the Forth usage of the term. A metacompiler operates at an abstraction level above the compiler it compiles; it only operates at the same level (as a self-hosting compiler) when compiling itself. This is the problem with the Forth definition of metacompiler: it can be applied to almost any language. However, on examining the concept of programming in Forth, adding new words to the dictionary, and thereby extending the language, is metaprogramming. It is this metaprogramming in Forth that makes it a metacompiler: programming in Forth is adding new words to the language, and changing the language in this way is metaprogramming. Forth is a metacompiler because Forth is a language specifically designed for metaprogramming; programming in Forth extends Forth, and adding words to the Forth vocabulary creates a new Forth dialect. Forth is a specialized metacompiler for Forth language dialects.

Design of the original compiler-compiler was started by Tony Brooker and Derrick Morris in 1959, with initial testing beginning in March 1962.[8] The Brooker Morris Compiler Compiler (BMCC) was used to create compilers for the new Atlas computer at the University of Manchester, for several languages: Mercury Autocode, Extended Mercury Autocode, Atlas Autocode, ALGOL 60, and ASA Fortran.

At roughly the same time, related work was being done by E. T. (Ned) Irons at Princeton, and by Alick Glennie at the Atomic Weapons Research Establishment at Aldermaston, whose "Syntax Machine" paper (declassified in 1977) inspired the META series of translator writing systems mentioned below.

The early history of metacompilers is closely tied to the history of SIG/PLAN Working Group 1 on Syntax Driven Compilers. The group was started primarily through the effort of Howard Metcalfe in the Los Angeles area.[9] In the fall of 1962, Howard Metcalfe designed two compiler-writing interpreters. One used a bottom-to-top analysis technique based on a method described by Ledley and Wilson.[10] The other used a top-to-bottom approach based on work by Glennie to generate random English sentences from a context-free grammar.[11]

At the same time, Val Schorre described two "meta machines", one generative and one analytic.
The generative machine was implemented and produced random algebraic expressions. Meta I, the first metacompiler, was implemented by Schorre on an IBM 1401 at UCLA in January 1963. His original interpreters and metamachines were written directly in a pseudo-machine language. META II, however, was written in a higher-level metalanguage able to describe its own compilation into the pseudo-machine language.[12][13][14]

Lee Schmidt at Bolt, Beranek, and Newman wrote a metacompiler in March 1963 that utilized a CRT display on the time-sharing PDP-1.[15] This compiler produced actual machine code rather than interpretive code and was partially bootstrapped from Meta I.[citation needed]

Schorre bootstrapped Meta II from Meta I during the spring of 1963. The paper on the refined metacompiler system presented at the 1964 Philadelphia ACM conference is the first paper on a metacompiler available as a general reference. The syntax and implementation technique of Schorre's system laid the foundation for most of the systems that followed. The system was implemented on a small 1401, and was used to implement a small ALGOL-like language.[citation needed]

Many similar systems immediately followed.[citation needed]

Roger Rutman of AC Delco developed and implemented LOGIK, a language for logical design simulation, on the IBM 7090 in January 1964.[16] This compiler used an algorithm that produced efficient code for Boolean expressions.[citation needed]

Another paper in the 1964 ACM proceedings describes Meta III, developed by Schneider and Johnson at UCLA for the IBM 7090.[17] Meta III represents an attempt to produce efficient machine code for a large class of languages. Meta III was implemented completely in assembly language. Two compilers were written in Meta III: CODOL, a compiler-writing demonstration compiler, and PUREGOL, a dialect of ALGOL 60. (It was pure gall to call it ALGOL.)

Late in 1964, Lee Schmidt bootstrapped the metacompiler EQGEN from the PDP-1 to the Beckman 420. EQGEN was a logic equation generating language.

In 1964, System Development Corporation began a major effort in the development of metacompilers. This effort included the powerful metacompilers Book1 and Book2, written in Lisp, which have extensive tree-searching and backup ability. An outgrowth of one of the Q-32 systems at SDC is Meta 5.[18] The Meta 5 system incorporates backup of the input stream and enough other facilities to parse any context-sensitive language. This system was successfully released to a wide number of users and had many string-manipulation applications other than compiling. It has many elaborate push-down stacks, attribute setting and testing facilities, and output mechanisms. That Meta 5 successfully translates JOVIAL programs to PL/I programs demonstrates its power and flexibility.

Robert McClure at Texas Instruments invented a compiler-compiler called TMG (presented in 1965). TMG was used to create early compilers for programming languages like B, PL/I, and ALTRAN. Together with the metacompiler of Val Schorre, it was an early inspiration for the last chapter of Donald Knuth's The Art of Computer Programming.[19]

The LOT system was developed during 1966 at Stanford Research Institute and was modeled very closely after Meta II.[20] It had new special-purpose constructs allowing it to generate a compiler which could, in turn, compile a subset of PL/I. This system had extensive statistic-gathering facilities and was used to study the characteristics of top-down analysis.
SIMPLE is a specialized translator system designed to aid the writing of pre-processors for PL/I. SIMPLE, written in PL/I, is composed of three components: an executive, a syntax analyzer, and a semantic constructor.[21]

The TREE-META compiler was developed at Stanford Research Institute in Menlo Park, California, in April 1968. The early metacompiler history is well documented in the TREE-META manual. TREE-META paralleled some of the SDC developments. Unlike earlier metacompilers, it separated the semantics processing from the syntax processing. The syntax rules contained tree building operations that combined recognized language elements with tree nodes. The tree structure representation of the input was then processed by a simple form of unparse rules. The unparse rules used node recognition and attribute testing; when a rule matched, the associated action was performed. In addition, an unparse rule could test whether tree elements were alike. Unparse rules were also recursive, able to call other unparse rules, passing elements of the tree, before the action of the unparse rule was performed.

The concept of the metamachine originally put forth by Glennie is so simple that three hardware versions have been designed and one actually implemented, the latter at Washington University in St. Louis. This machine was built from macro-modular components and has for instructions the codes described by Schorre.

CWIC (Compiler for Writing and Implementing Compilers) is the last known Schorre metacompiler. It was developed at Systems Development Corporation by Erwin Book, Dewey Val Schorre, and Steven J. Sherman. With the full power of LISP 2, a list processing language, optimizing algorithms could operate on syntax-generated lists and trees before code generation. CWIC also had a symbol table built into the language.

With the resurgence of domain-specific languages and the need for parser generators which are easy to use, easy to understand, and easy to maintain, metacompilers are becoming a valuable tool for advanced software engineering projects.

Other examples of parser generators in the yacc vein are ANTLR, Coco/R,[22] CUP,[citation needed] GNU Bison, Eli,[23] FSL,[citation needed] SableCC, SID (Syntax Improving Device),[24] and JavaCC. While useful, pure parser generators only address the parsing part of the problem of building a compiler. Tools with broader scope, such as PQCC, Coco/R, and the DMS Software Reengineering Toolkit, provide considerable support for more difficult post-parsing activities such as semantic analysis, code optimization, and generation.

The earliest Schorre metacompilers, META I and META II, were developed by D. Val Schorre at UCLA. Other Schorre-based metacompilers followed, each adding improvements to language analysis and/or code generation. In programming it is common to use the programming language name to refer to both the compiler and the programming language, the context distinguishing the meaning: a C++ program is compiled using a C++ compiler. That convention also applies in the following; for example, META II is both the compiler and the language.

The metalanguages in the Schorre line of metacompilers are functional programming languages that use top-down grammar-analyzing syntax equations with embedded output transformation constructs. A syntax equation

<name> = <body>;

is a compiled test function returning success or failure. <name> is the function name. <body> is a form of logical expression consisting of tests that may be grouped, have alternates, and output productions.
A test is like a bool in other languages, success being true and failure being false. Defining a programming language analytically, top-down, is natural. For example, a program could be defined as

program = $declaration;

defining a program as a sequence of zero or more declaration(s).

In the Schorre META "X" languages there is a driving rule; the program rule above is an example. The program rule is a test function that calls declaration, a test rule that returns success or failure. The $ loop operator repeatedly calls declaration until failure is returned. The $ operator is always successful, even when there are zero declarations, so the program rule above would always return success. (In CWIC a long fail can bypass declaration. A long-fail is part of the backtracking system of CWIC.)

The character sets of these early compilers were limited. The character / was used for the alternant (or) operator: "A or B" is written as A / B. Parentheses ( ) are used for grouping. A (B / C) describes a construct of A followed by B or C; as a boolean expression it would be A and (B or C). A sequence X Y has an implied X and Y meaning. ( ) are grouping and / the or operator. The order of evaluation is always left to right, as an input character sequence is being specified by the ordering of the tests.

Special operator words whose first character is a "." are used for clarity. .EMPTY is used as the last alternate when no previous alternant need be present; for example, X (A / B / .EMPTY) indicates that X is optionally followed by A or B. This is a specific characteristic of these metalanguages being programming languages: backtracking is avoided by the above. Other compiler constructor systems may have declared the three possible sequences and left it up to the parser to figure it out.

The characteristics of the metaprogramming metalanguages above are common to all Schorre metacompilers and those derived from them.

META I was a hand-compiled metacompiler used to compile META II. Little else is known of META I except that the initial compilation of META II produced nearly identical code to that of the hand-coded META I compiler. Each rule consists optionally of tests, operators, and output productions. A rule attempts to match some part of the input program source character stream, returning success or failure. On success the input is advanced over matched characters; on failure the input is not advanced. Output productions produced a form of assembly code directly from a syntax rule.

TREE-META introduced the tree building operators :<node_name> and [<number>], moving the output production transforms to unparse rules. The tree building operators were used in the grammar rules, directly transforming the input into an abstract syntax tree. Unparse rules are also test functions that match tree patterns. Unparse rules are called from a grammar rule when an abstract syntax tree is to be transformed into output code. The building of an abstract syntax tree and unparse rules allowed local optimizations to be performed by analyzing the parse tree. Moving output productions to the unparse rules made a clear separation of grammar analysis and code production, which made the programming easier to read and understand.
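The test-function discipline described above can be loosely modeled in a modern language. In this C++ sketch (an illustration under assumed conventions, not a rendering of any actual META II output), each rule returns success or failure, the input position advances only on success, and the $ loop corresponds to a while loop that always succeeds:

// Sketch of Schorre-style test functions for:  program = $declaration;
#include <string>

struct Input {
    std::string text;
    std::size_t pos = 0;
};

// declaration = 'a' ;  -- a stand-in for a real declaration rule
bool declaration(Input& in) {
    if (in.pos < in.text.size() && in.text[in.pos] == 'a') { ++in.pos; return true; }
    return false;                       // failure: the input is not advanced
}

// program = $declaration ;  -- zero or more declarations, always succeeds
bool program(Input& in) {
    while (declaration(in)) {}          // '$' loops until declaration fails
    return true;                        // success even with zero declarations
}

int main() {
    Input in{"aaa"};
    return (program(in) && in.pos == in.text.size()) ? 0 : 1;
}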
In 1968–1970, Erwin Book, Dewey Val Schorre, and Steven J. Sherman developed CWIC[4] (Compiler for Writing and Implementing Compilers) at System Development Corporation (papers archived at the Charles Babbage Institute Center for the History of Information Technology, Box 12, folder 21). CWIC is a compiler development system composed of three special-purpose, domain-specific languages, each intended to permit the description of certain aspects of translation in a straightforward manner. The syntax language is used to describe the recognition of source text and the construction from it of an intermediate tree structure. The generator language is used to describe the transformation of the tree into the appropriate object language.

The syntax language follows Dewey Val Schorre's previous line of metacompilers. It most resembles TREE-META, having tree building operators in the syntax language. The unparse rules of TREE-META are extended to work with the object-based generator language, which is based on LISP 2.

The generator language has semantics similar to Lisp. The parse tree is thought of as a recursive list. The general form of a generator language function is a function name followed by a series of transforms, each pairing an unparse rule with the production code generator to run when that rule's pattern matches the tree. The code to process a given tree includes the features of a general-purpose programming language, plus the form <stuff>, which emits (stuff) onto the output file.

A generator call may be used in an unparse rule. The generator is passed the element of the unparse rule pattern in which it is placed, and its return values are listed in ( ). For example, if the parse tree looks like (ADD[<something1>,<something2>]), expr_gen(x) would be called with <something1> and return x. A variable in the unparse rule is a local variable that can be used in the production code generator. expr_gen(y) is called with <something2> and returns y. A generator call in an unparse rule is passed the element in the position it occupies. Hopefully, in the above, x and y will be registers on return. The last transform is intended to load an atomic into a register and return the register. The first production would be used to generate the 360 "AR" (Add Register) instruction with the appropriate values in general registers.

The above example is only a part of a generator. Every generator expression evaluates to a value that can then be further processed. The last transform could just as well have been written as a call of the form load(getreg(), ...): in that case load returns its first parameter, the register returned by getreg(). The functions load and getreg are other CWIC generators.

From the authors of CWIC: "A metacompiler assists the task of compiler-building by automating its non creative aspects, those aspects that are the same regardless of the language which the produced compiler is to translate. This makes possible the design of languages which are appropriate to the specification of a particular problem. It reduces the cost of producing processors for such languages to a point where it becomes economically feasible to begin the solution of a problem with language design."[4]
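The generator just described can be modeled loosely in C++. In the following sketch only the expr_gen/ADD/"AR" behavior is taken from the description above; the tree representation, the register allocator, and the load mnemonic are assumptions made for illustration:

// Loose model of a CWIC-style generator: expr_gen matches an ADD[left,right]
// node, generates both operands into registers, emits an IBM 360 "AR"
// (Add Register) instruction, and returns the register holding the result.
#include <iostream>
#include <memory>
#include <string>

struct Node {
    std::string op;                     // "ADD" or "ATOM"
    std::string value;                  // atom name when op == "ATOM"
    std::shared_ptr<Node> left, right;
};

static int next_reg = 2;
int getreg() { return next_reg++; }     // trivial stand-in register allocator

int expr_gen(const std::shared_ptr<Node>& t) {
    if (t->op == "ADD") {
        int x = expr_gen(t->left);      // like expr_gen(x) on <something1>
        int y = expr_gen(t->right);     // like expr_gen(y) on <something2>
        std::cout << "AR " << x << "," << y << "\n";  // add y into x
        return x;                       // the result stays in register x
    }
    int r = getreg();                   // "load an atomic into a register"
    std::cout << "L " << r << "," << t->value << "\n";
    return r;
}

int main() {
    auto a = std::make_shared<Node>(Node{"ATOM", "A", nullptr, nullptr});
    auto b = std::make_shared<Node>(Node{"ATOM", "B", nullptr, nullptr});
    auto add = std::make_shared<Node>(Node{"ADD", "", a, b});
    expr_gen(add);                      // prints: L 2,A  L 3,B  AR 2,3
    return 0;
}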
https://en.wikipedia.org/wiki/Compiler-compiler
Unix time[a] is a date and time representation widely used in computing. It measures time by the number of non-leap seconds that have elapsed since 00:00:00 UTC on 1 January 1970, the Unix epoch. For example, at midnight on 1 January 2010, Unix time was 1262304000.

Unix time originated as the system time of Unix operating systems. It has come to be widely used in other computer operating systems, file systems, programming languages, and databases. In modern computing, values are sometimes stored with higher granularity, such as microseconds or nanoseconds.

Unix time is currently defined as the number of non-leap seconds which have passed since 00:00:00 UTC on Thursday, 1 January 1970, which is referred to as the Unix epoch.[3] Unix time is typically encoded as a signed integer. The Unix time 0 is exactly midnight UTC on 1 January 1970, with Unix time incrementing by 1 for every non-leap second after this. For example, 00:00:00 UTC on 1 January 1971 is represented in Unix time as 31536000. Negative values, on systems that support them, indicate times before the Unix epoch, with the value decreasing by 1 for every non-leap second before the epoch. For example, 00:00:00 UTC on 1 January 1969 is represented in Unix time as −31536000. Every day in Unix time consists of exactly 86400 seconds.

Unix time is sometimes referred to as Epoch time. This can be misleading since Unix time is not the only time system based on an epoch and the Unix epoch is not the only epoch used by other time systems.[5]

Unix time differs from both Coordinated Universal Time (UTC) and International Atomic Time (TAI) in its handling of leap seconds. UTC includes leap seconds that adjust for the discrepancy between precise time, as measured by atomic clocks, and solar time, relating to the position of the Earth in relation to the Sun. International Atomic Time (TAI), in which every day is precisely 86400 seconds long, ignores solar time and gradually loses synchronization with the Earth's rotation at a rate of roughly one second per year. In Unix time, every day contains exactly 86400 seconds. Each leap second uses the timestamp of a second that immediately precedes or follows it.[3]

On a normal UTC day, which has a duration of 86400 seconds, the Unix time number changes in a continuous manner across midnight. For example, at the end of the day used in the example above (1 January 2010), the time representations progress as follows:

2010-01-01T23:59:58 UTC — 1262390398
2010-01-01T23:59:59 UTC — 1262390399
2010-01-02T00:00:00 UTC — 1262390400

When a leap second occurs, the UTC day is not exactly 86400 seconds long and the Unix time number (which always increases by exactly 86400 each day) experiences a discontinuity. Leap seconds may be positive or negative. No negative leap second has ever been declared, but if one were to be, then at the end of a day with a negative leap second, the Unix time number would jump up by 1 to the start of the next day. During a positive leap second at the end of a day, which occurs about every year and a half on average, the Unix time number increases continuously into the next day during the leap second and then at the end of the leap second jumps back by 1 (returning to the start of the next day). For example, this is what happened on strictly conforming POSIX.1 systems at the end of 1998:

1998-12-31T23:59:59 UTC — 915148799
1998-12-31T23:59:60 UTC (leap second) — 915148800
1999-01-01T00:00:00 UTC — 915148800 (repeated)
1999-01-01T00:00:01 UTC — 915148801

Unix time numbers are repeated in the second immediately following a positive leap second. The Unix time number 1483228800 is thus ambiguous: it can refer either to the start of the leap second (2016-12-31 23:59:60) or to the end of it, one second later (2017-01-01 00:00:00).
In the theoretical case when a negative leap second occurs, no ambiguity is caused, but instead there is a range of Unix time numbers that do not refer to any point in UTC time at all.

A Unix clock is often implemented with a different type of positive leap second handling associated with the Network Time Protocol (NTP). This yields a system that does not conform to the POSIX standard. See the section below concerning NTP for details.

When dealing with periods that do not encompass a UTC leap second, the difference between two Unix time numbers is equal to the duration in seconds of the period between the corresponding points in time. This is a common computational technique. However, where leap seconds occur, such calculations give the wrong answer. In applications where this level of accuracy is required, it is necessary to consult a table of leap seconds when dealing with Unix times, and it is often preferable to use a different time encoding that does not suffer from this problem.

A Unix time number is easily converted back into a UTC time by taking the quotient and modulus of the Unix time number, modulo 86400. The quotient is the number of days since the epoch, and the modulus is the number of seconds since midnight UTC on that day. If given a Unix time number that is ambiguous due to a positive leap second, this algorithm interprets it as the time just after midnight. It never generates a time that is during a leap second. If given a Unix time number that is invalid due to a negative leap second, it generates an equally invalid UTC time. If these conditions are significant, it is necessary to consult a table of leap seconds to detect them.

Commonly, a Mills-style Unix clock is implemented with leap second handling that is not synchronous with the change of the Unix time number. The time number initially decreases where a leap should have occurred, and then it leaps to the correct time 1 second after the leap. This makes implementation easier, and is described by Mills' paper.[6] The readings across a positive leap second can be decoded properly by paying attention to the leap second state variable, which unambiguously indicates whether the leap has been performed yet. The state variable change is synchronous with the leap.

A similar situation arises with a negative leap second, where the second that is skipped is slightly too late. Very briefly, the system shows a nominally impossible time number, but this can be detected by the TIME_DEL state and corrected.

In this type of system the Unix time number violates POSIX around both types of leap second. Collecting the leap second state variable along with the time number allows for unambiguous decoding, so the correct POSIX time number can be generated if desired, or the full UTC time can be stored in a more suitable format.

The decoding logic required to cope with this style of Unix clock would also correctly decode a hypothetical POSIX-conforming clock using the same interface. This would be achieved by indicating the TIME_INS state during the entirety of an inserted leap second, then indicating TIME_WAIT during the entirety of the following second while repeating the seconds count. This requires synchronous leap second handling. This is probably the best way to express UTC time in Unix clock form, via a Unix interface, when the underlying clock is fundamentally untroubled by leap seconds.
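A minimal sketch of the quotient-and-modulus decoding described above, using floored division so that negative (pre-1970) time numbers also split correctly:

// Split a Unix time number into days since the epoch and seconds since
// midnight UTC on that day.
#include <cinttypes>
#include <cstdio>

int main() {
    std::int64_t t = 1262390399;               // 2010-01-01T23:59:59Z
    std::int64_t days = t / 86400;
    std::int64_t secs = t % 86400;
    if (secs < 0) { secs += 86400; --days; }   // floor correction for t < 0
    std::printf("day %" PRId64 ", second-of-day %" PRId64 "\n", days, secs);
    // prints: day 14610, second-of-day 86399
    return 0;
}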
Another, much rarer, non-conforming variant of Unix time keeping involves incrementing the value for all seconds, including leap seconds;[7]some Linux systems are configured this way.[8]Time kept in this fashion is sometimes referred to as "TAI" (although timestamps can be converted to UTC if the value corresponds to a time when the difference between TAI and UTC is known), as opposed to "UTC" (although not all UTC time values have a unique reference in systems that do not count leap seconds).[8] Because TAI has no leap seconds, and every TAI day is exactly 86400 seconds long, this encoding is actually a pure linear count of seconds elapsed since 1970-01-01T00:00:10TAI. This makes time interval arithmetic much easier. Time values from these systems do not suffer the ambiguity that strictly conforming POSIX systems or NTP-driven systems have. In these systems it is necessary to consult a table of leap seconds to correctly convert between UTC and the pseudo-Unix-time representation. This resembles the manner in which time zone tables must be consulted to convert to and fromcivil time; theIANA time zone databaseincludes leap second information, and the sample code available from the same source uses that information to convert between TAI-based timestamps and local time. Conversion also runs into definitional problems prior to the 1972 commencement of the current form of UTC (see sectionUTC basisbelow). This system, despite its superficial resemblance, is not Unix time. It encodes times with values that differ by several seconds from the POSIX time values. A version of this system, in which the epoch was 1970-01-01T00:00:00TAI rather than 1970-01-01T00:00:10TAI, was proposed for inclusion in ISO C'stime.h, but only the UTC part was accepted in 2011.[9]Atai_clockdoes, however, exist in C++20. A Unix time number can be represented in any form capable of representing numbers. In some applications the number is simply represented textually as a string of decimal digits, raising only trivial additional problems. However, certain binary representations of Unix times are particularly significant. The Unixtime_tdata type that represents a point in time is, on many platforms, asigned integer, traditionally of 32bits(butsee below), directly encoding the Unix time number as described in the preceding section. A signed 32-bit value covers about 68 years before and after the 1970-01-01 epoch. The minimum representable date is Friday 1901-12-13, and the maximum representable date is Tuesday 2038-01-19. One second after 2038-01-19T03:14:07Z this representation willoverflowin what is known as theyear 2038 problem. UUIDv7 encodes the Unix epoch timestamp (in milliseconds) in an unsigned 48-bit field. This representation is valid until the year 10889 AD.[10] In some newer operating systems,time_thas been widened to 64 bits. This expands the times representable to about292.3 billion yearsin both directions, which is over twenty times the presentage of the universe. There was originally some controversy over whether the Unixtime_tshould be signed or unsigned. If unsigned, its range in the future would be doubled, postponing the 32-bit overflow (by 68 years). However, it would then be incapable of representing times prior to the epoch. The consensus is fortime_tto be signed, and this is the usual practice. The software development platform for version 6 of theQNXoperating system has an unsigned 32-bittime_t, though older releases used a signed type. 
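The 32-bit limits quoted above are easy to reproduce; this sketch asks the C library to decode the largest signed 32-bit value (assuming only that the host's time_t can hold it, which any 32-bit-or-wider time_t can):

// Decode the maximum signed 32-bit Unix time: the "year 2038" limit.
#include <cstdint>
#include <cstdio>
#include <ctime>

int main() {
    std::time_t limit = INT32_MAX;             // 2147483647 seconds
    std::tm* utc = std::gmtime(&limit);        // convert to broken-down UTC
    char buf[32];
    std::strftime(buf, sizeof buf, "%Y-%m-%dT%H:%M:%SZ", utc);
    std::puts(buf);                            // prints 2038-01-19T03:14:07Z
    return 0;
}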
ThePOSIXandOpen GroupUnix specifications include theC standard library, which includes the time types and functions defined in the<time.h>header file. The ISO C standard states thattime_tmust be an arithmetic type, but does not mandate any specific type or encoding for it. POSIX requirestime_tto be an integer type, but does not mandate that it be signed or unsigned. Unix has no tradition of directly representing non-integer Unix time numbers as binary fractions. Instead, times with sub-second precision are represented usingcomposite data typesthat consist of two integers, the first being atime_t(the integral part of the Unix time), and the second being the fractional part of the time number in millionths (instruct timeval) or billionths (instruct timespec).[11][12]These structures provide adecimal-basedfixed-pointdata format, which is useful for some applications, and trivial to convert for others. The present form of UTC, with leap seconds, is defined only starting from 1 January 1972. Prior to that, since 1 January 1961 there was an older form of UTC in which not only were there occasional time steps, which were by non-integer numbers of seconds, but also the UTC second was slightly longer than the SI second, and periodically changed to continuously approximate the Earth's rotation. Prior to 1961 there was no UTC, and prior to 1958 there was no widespreadatomic timekeeping; in these eras, some approximation ofGMT(based directly on the Earth's rotation) was used instead of an atomic timescale.[citation needed] The precise definition of Unix time as an encoding of UTC is only uncontroversial when applied to the present form of UTC. The Unix epoch predating the start of this form of UTC does not affect its use in this era: the number of days from 1 January 1970 (the Unix epoch) to 1 January 1972 (the start of UTC) is not in question, and the number of days is all that is significant to Unix time. The meaning of Unix time values below+63072000(i.e., prior to 1 January 1972) is not precisely defined. The basis of such Unix times is best understood to be an unspecified approximation of UTC. Computers of that era rarely had clocks set sufficiently accurately to provide meaningful sub-second timestamps in any case. Unix time is not a suitable way to represent times prior to 1972 in applications requiring sub-second precision; such applications must, at least, define which form of UT or GMT they use. As of 2009[update], the possibility of ending the use of leap seconds in civil time is being considered.[13]A likely means to execute this change is to define a new time scale, calledInternational Time[citation needed], that initially matches UTC but thereafter has no leap seconds, thus remaining at a constant offset from TAI. If this happens, it is likely that Unix time will be prospectively defined in terms of this new time scale, instead of UTC. Uncertainty about whether this will occur makes prospective Unix time no less predictable than it already is: if UTC were simply to have no further leap seconds the result would be the same. The earliest versions of Unix time had a 32-bit integer incrementing at a rate of 60Hz, which was the rate of the system clock on the hardware of the early Unix systems. Timestamps stored this way could only represent a range of a little over two and a quarter years. The epoch being counted from was changed with Unix releases to prevent overflow, with midnight on 1 January 1971 and 1 January 1972 both being used as epochs during Unix's early development. 
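The two-integer composite representation described above (struct timespec) can be read through the POSIX clock_gettime interface; a minimal sketch, assuming a POSIX system:

// Read the current Unix time with nanosecond resolution: tv_sec carries the
// time_t part, tv_nsec the fractional part in billionths.
#include <cstdio>
#include <ctime>

int main() {
    timespec ts{};
    if (clock_gettime(CLOCK_REALTIME, &ts) != 0) return 1;
    std::printf("%lld.%09ld seconds since the epoch\n",
                static_cast<long long>(ts.tv_sec), ts.tv_nsec);
    return 0;
}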
Early definitions of Unix time also lacked timezones.[14][15] The current epoch of 1 January 1970 00:00:00 UTC was selected arbitrarily by Unix engineers because it was considered a convenient date to work with. The precision was changed to count in seconds in order to avoid short-term overflow.[1] WhenPOSIX.1was written, the question arose of how to precisely definetime_tin the face of leap seconds. The POSIX committee considered whether Unix time should remain, as intended, a linear count of seconds since the epoch, at the expense of complexity in conversions with civil time or a representation of civil time, at the expense of inconsistency around leap seconds. Computer clocks of the era were not sufficiently precisely set to form a precedent one way or the other. The POSIX committee was swayed by arguments against complexity in the library functions,[citation needed]and firmly defined the Unix time in a simple manner in terms of the elements of UTC time. This definition was so simple that it did not even encompass the entireleap yearrule of the Gregorian calendar, and would make 2100 a leap year. The 2001 edition of POSIX.1 rectified the faulty leap year rule in the definition of Unix time, but retained the essential definition of Unix time as an encoding of UTC rather than a linear time scale. Since the mid-1990s, computer clocks have been routinely set with sufficient precision for this to matter, and they have most commonly been set using the UTC-based definition of Unix time. This has resulted in considerable complexity in Unix implementations, and in theNetwork Time Protocol, to execute steps in the Unix time number whenever leap seconds occur.[citation needed] Unix time is widely adopted in computing beyond its original application as the system time forUnix. Unix time is available in almost all system programmingAPIs, including those provided by both Unix-based and non-Unixoperating systems. Almost all modernprogramming languagesprovide APIs for working with Unix time or converting them to another data structure. Unix time is also used as a mechanism for storing timestamps in a number offile systems,file formats, anddatabases. TheC standard libraryuses Unix time for all date and time functions, and Unix time is sometimes referred to as time_t, the name of thedata typeused for timestamps inCandC++. 
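The UTC-based definition referred to above is an explicit formula over the broken-down time fields. The following sketch transcribes the POSIX.1-2001 "Seconds Since the Epoch" expression; the last two terms are the century corrections that fixed the faulty leap-year rule:

// POSIX.1-2001 seconds-since-the-epoch formula over struct tm fields
// (tm_year counts years from 1900, tm_yday counts days from 0).
#include <cstdio>

long long posix_epoch_seconds(long long tm_sec, long long tm_min,
                              long long tm_hour, long long tm_yday,
                              long long tm_year) {
    return tm_sec + tm_min * 60 + tm_hour * 3600
         + tm_yday * 86400 + (tm_year - 70) * 31536000
         + ((tm_year - 69) / 4) * 86400
         - ((tm_year - 1) / 100) * 86400      // correction added in 2001
         + ((tm_year + 299) / 400) * 86400;   // correction added in 2001
}

int main() {
    // 2010-01-01T00:00:00Z: tm_year = 110, tm_yday = 0.
    std::printf("%lld\n", posix_epoch_seconds(0, 0, 0, 0, 110));  // 1262304000
    return 0;
}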
C's Unix time functions are defined as the system time API in the POSIX specification.[16] The C standard library is used extensively in all modern desktop operating systems, including Microsoft Windows and Unix-like systems such as macOS and Linux, where it is a standard programming interface.[17][18][19]

iOS provides a Swift API which defaults to using an epoch of 1 January 2001 but can also be used with Unix timestamps.[20] Android uses Unix time alongside a timezone for its system time API.[21]

Windows does not use Unix time for storing time internally but does use it in system APIs, which are provided in C++ and implement the C standard library specification.[17] Unix time is used in the PE format for Windows executables.[22]

Unix time is typically available in major programming languages and is widely used in desktop, mobile, and web application programming. Java provides an Instant object which holds a Unix timestamp in both seconds and nanoseconds.[23] Python provides a time library which uses Unix time.[24] JavaScript provides a Date library which stores timestamps in milliseconds since the Unix epoch and is implemented in all modern desktop and mobile web browsers as well as in JavaScript server environments like Node.js.[25]

Free Pascal implements Unix time with the GetTickCount (deprecated, unsigned 32-bit) and GetTickCount64 (unsigned 64-bit) functions, to a resolution of 1 ms on Unix-like platforms.

Filesystems designed for use with Unix-based operating systems tend to use Unix time. APFS, the file system used by default across all Apple devices, and ext4, which is widely used on Linux and Android devices, both use Unix time in nanoseconds for file timestamps.[26][27] Several archive file formats can store timestamps in Unix time, including RAR and tar.[28][29] Unix time is also commonly used to store timestamps in databases, including in MySQL and PostgreSQL.[30][31]

Unix time was designed to encode calendar dates and times in a compact manner intended for use by computers internally. It is not intended to be easily read by humans or to store timezone-dependent values. It is also limited by default to representing time in seconds, making it unsuited for use when a more precise measurement of time is needed, such as when measuring the execution time of programs.[32]

Unix time by design does not require a specific size for the storage, but most common implementations of Unix time use a signed integer with the same size as the word size of the underlying hardware. As the majority of modern computers are 32-bit or 64-bit, and a large number of programs are still written in 32-bit compatibility mode, this means that many programs using Unix time are using signed 32-bit integer fields. The maximum value of a signed 32-bit integer is 2³¹ − 1, and the minimum value is −2³¹, making it impossible to represent dates before 13 December 1901 (at 20:45:52 UTC) or after 19 January 2038 (at 03:14:07 UTC). The early cutoff can have an impact on databases that are storing historical information; in some databases where 32-bit Unix time is used for timestamps, it may be necessary to store time in a different form of field, such as a string, to represent dates before 1901.
The late cutoff is known as the Year 2038 problem and has the potential to cause issues as the date approaches, as dates beyond the 2038 cutoff would wrap back around to the start of the representable range in 1901.[32]: 60

Date range cutoffs are not an issue with 64-bit representations of Unix time, as the effective range of dates representable with Unix time stored in a signed 64-bit integer is over 584 billion years, or 292 billion years in either direction of the 1970 epoch.[32]: 60-61[33]

Unix time is not the only standard for time that counts away from an epoch. On Windows, the FILETIME type stores time as a count of 100-nanosecond intervals that have elapsed since 0:00 GMT on 1 January 1601.[34] Windows epoch time is used to store timestamps for files[35] and in protocols such as the Active Directory Time Service[36] and Server Message Block. The Network Time Protocol used to coordinate time between computers uses an epoch of 1 January 1900, counted in an unsigned 32-bit integer for seconds and another unsigned 32-bit integer for fractional seconds, which rolls over every 2³² seconds (about once every 136 years).[37]

Many applications and programming languages provide methods for storing time with an explicit timezone.[38] There are also a number of time format standards which exist to be readable by both humans and computers, such as ISO 8601.

Unix enthusiasts have a history of holding "time_t parties" (pronounced "time tea parties") to celebrate significant values of the Unix time number.[39][40] These are directly analogous to the new year celebrations that occur at the change of year in many calendars. As the use of Unix time has spread, so has the practice of celebrating its milestones. Usually it is time values that are round numbers in decimal that are celebrated, following the Unix convention of viewing time_t values in decimal. Among some groups round binary numbers are also celebrated,[citation needed] such as +2³⁰, which occurred at 13:37:04 UTC on Saturday, 10 January 2004. The events that these celebrate are typically described as "N seconds since the Unix epoch", but this is inaccurate; as discussed above, due to the handling of leap seconds in Unix time, the number of seconds elapsed since the Unix epoch is slightly greater than the Unix time number for times later than the epoch.

Vernor Vinge's novel A Deepness in the Sky describes a spacefaring trading civilization thousands of years in the future that still uses the Unix epoch. The "programmer-archaeologist" responsible for finding and maintaining usable code in mature computer systems first believes that the epoch refers to the time when man first walked on the Moon, but then realizes that it is "the 0-second of one of humankind's first computer operating systems".[48]
https://en.wikipedia.org/wiki/Unix_time
The 3B series computers[1][2] are a line of minicomputers[3] made between the late 1970s and 1993 by AT&T Computer Systems' Western Electric subsidiary, for use with the company's UNIX operating system. The line primarily consists of the models 3B20, 3B5, 3B15, 3B2, and 3B4000. The series is notable for controlling a series of electronic switching systems for telecommunications, for general computing purposes, and for serving as the historical software porting base for commercial UNIX.

The first 3B20D was installed in Fresno, California, at Pacific Bell in 1981.[4] Within two years, several hundred were in place throughout the Bell System. Some of the units came with "small, slow hard disks".[5]

The general purpose family of 3B computer systems includes the 3B2, 3B5, 3B15, 3B20S, and 3B4000. They run the AT&T UNIX operating system and were named after the successful 3B20D High Availability processor. In 1984, after regulatory constraints were lifted, AT&T introduced the 3B20D, 3B20S, 3B5, and 3B2 to the general computer market,[1][6] a move that some commentators saw as an attempt to compete with IBM.[7] In Europe, the 3B computers were distributed by the Italian firm Olivetti, in which AT&T had a minority shareholding.[6][7] After AT&T bought NCR Corporation, effective January 1992, the computers were marketed through NCR sales channels.[8]

Having produced 70,000 units, the AT&T Oklahoma City plant stopped manufacturing 3B machines at the end of 1993, the 3B20D being the last units manufactured.[8]

The original series of 3B computers includes the models 3B20C, the 3B20D superminicomputer,[1] 3B21D, and 3B21E. These systems are 32-bit microprogrammed duplex (redundant) high availability processor units running a real-time operating system. They were first produced in the late 1970s at the Western Electric factory in Lisle, Illinois, for telecommunications applications including the 4ESS and 5ESS systems. They use the Duplex Multi Environment Real Time (DMERT) operating system, which was renamed UNIX-RTR (Real Time Reliable) in 1982.

The Data Manipulation Unit (DMU) provides arithmetic and logic operations on 32-bit words using eight AMD 2901 4-bit-slice ALUs.[9] The first 3B20D is called the Model 1. Each processor's control unit consists of two frames of circuit packs. The whole duplex system requires seven-foot frames of circuit packs plus at least one tape drive frame (most telephone companies at that time wrote billing data on magnetic tapes), and many washing-machine-sized disk drives. For training and lab purposes, a 3B20D can be divided into two "half-duplex" systems. A 3B20S consists of most of the same hardware as a half-duplex but uses a completely different operating system.

The 3B20C was briefly available as a high-availability fault-tolerant multiprocessing general-purpose computer in the commercial market in 1984. The 3B20E was created to provide a cost-reduced 3B20D for small offices that did not expect such high availability. It consists of a virtual "emulated" 3B20D environment running on a stand-alone general purpose computer; the system was ported to many computers, but primarily runs on the Sun Microsystems Solaris environment.

There were improvements to the 3B20D UNIX-RTR system in both software and hardware in the 1980s, 1990s, 2000s, and 2010s.
Innovations included disk independent operation (DIOP: the ability to continue essential software processing such as telecommunications after duplex failure of redundant essential disks); off-line boot (the ability to split in half and boot the out-of-service half, typically on a new software release) and switch forward (switch processing to the previously out-of-service half); upgrading the disks to solid-state drives (SSD); and upgrading the tape unit to CompactFlash.

The processor was re-engineered and renamed in 1992 as the 3B21D. It is still in use as of 2023 as a component of Nokia products such as the 2STP signal transfer point and the 4ESS and 5ESS switches, which Nokia inherited from AT&T spin-off Lucent Technologies.

The 3B20S (simplex) was developed at Bell Labs and produced by Western Electric in 1982 for general purpose internal Bell System use. The 3B20S[1] has hardware similar to the 3B20D, but one unit instead of two. The machine is approximately the size of a large refrigerator, requiring a minimum of 170 square feet of floor space.[10] It was in use at the 1984 Summer Olympics, where around twelve 3B20S machines served the email requirements of the Electronic Messaging System, which was built to replace the man-based messaging system of earlier Olympiads. The system connected around 1800 user terminals and 200 printers.[11] The 3B20A is an enhanced version of the 3B20S, adding a second processing unit working in parallel as a multiprocessor unit.

The 3B5 is built with the older Western Electric WE 32000 32-bit microprocessor. The initial versions have discrete memory management unit hardware using gate arrays, and support segment-based memory translation. I/O is programmed using memory-mapped techniques. The machine is approximately the size of a dishwasher, though adding the reel-to-reel tape drive increases its size. These computers use SMD hard drives. The 3B15, introduced in 1985,[12] uses the WE 32100 and is the faster follow-on to the 3B5 with a similar large form factor.

The 3B4000 is a high availability server introduced in 1987[13] and based on a 'snugly-coupled' architecture using the WE series 32x00 32-bit processor. Known internally as 'Apache', the 3B4000 is a follow-on to the 3B15, and initial revisions use a 3B15 as a master processor. Developed in the mid-1980s at the Lisle Indian Hill West facility by the High Performance Computer Development Lab, the system consists of multiple high performance (at the time) processor boards: adjunct processing elements (APEs) and adjunct communication elements (ACEs). These adjunct processors run a customized UNIX kernel with drivers for SCSI (APEs) and serial boards (ACEs). The processing boards are interconnected by a redundant low-latency parallel bus (ABUS) running at 20 MB/s.

The UNIX kernels running on the adjunct processors are modified to allow the fork/exec of processes across processing units. The system calls and peripheral drivers are also extended to allow processes to access remote resources across the ABUS. Since the ABUS is hot-swappable, processors can be added or replaced without shutting down the system. If one of the adjunct processors fails during operation, the system can detect and restart programs that were running on the failed element. The 3B4000 is capable of significant expansion; one test system (including storage) occupies 17 mid-height cabinets.
Generally, the performance of the system increases linearly with additional processing elements; however, the lack of a true shared memory capability requires rewriting applications that rely heavily on this feature to avoid a severe performance penalty.

The 3B2 was introduced in 1984 using the WE 32000 32-bit microprocessor at 8 MHz with memory management chips that support demand paging. Uses include the Switching Control Center System. The 3B2 Model 300, which can support up to 18 users,[1] is approximately 4 inches (100 mm) high, and the 3B2 Model 400 is approximately 8 inches (200 mm) high. The 300 was soon supplanted by the 3B2/310 running at 10 MHz, which features the WE 32100 CPU, as do later models. The Model 400, introduced in 1985,[12] allows more peripheral slots and more memory, and has a built-in 23 MB QIC tape drive managed by a floppy disk controller (nicknamed the "floppy tape"). These three models use standard MFM 5+1⁄4" hard disk drives. There are also Model 100 and Model 200 3B2 systems.[1]

The 3B2/600,[3] running at 18 MHz, offers an improvement in performance and capacity: it features a SCSI controller for the 60 MB QIC tape and two internal full-height disk drives. The 600 is approximately twice as tall as a 400, and is oriented with the tape and floppy disk drives opposite the backplane (instead of at a right angle to it as on the 3xx, 4xx, and later 500 models). Early models use an internal Emulex card to interface the SCSI controller with ESDI disks, with later models using SCSI drives directly.

The 3B2/500 was the next model to appear, essentially a 3B2/600 with enough components removed to fit into a 400 case; one internal disk drive and several backplane slots are sacrificed in this conversion. Unlike the 600, which because of its two large fans is loud, the 500 is tolerable in an office environment, like the 400.[citation needed]

The 3B2/700[14] is an uprated version of the 600 featuring a slightly faster processor (WE 32200 at 22 MHz), and the 3B2/1000[2] is an additional step in this direction (WE 32200 at 24 MHz).

In 1985, AT&T introduced a desktop computer, officially named the AT&T UNIX PC,[15] that is often dubbed the 3B1. However, this workstation is unrelated in hardware to the 3B line, and is based on the Motorola 68010 microprocessor. It runs a derivative of Unix System V Release 2 by Convergent Technologies. The system, which is also known as the PC-7300, is tailored for use as a productivity tool in office environments and as an electronic communication center.[15]
https://en.wikipedia.org/wiki/3B20C
In computing, aninterfaceis a shared boundary across which two or more separate components of acomputer systemexchange information. The exchange can be betweensoftware,computer hardware,peripheral devices,humans, and combinations of these.[1]Some computer hardware devices, such as atouchscreen, can both send and receive data through the interface, while others such as a mouse or microphone may only provide an interface to send data to a given system.[2] Hardware interfaces exist in many components, such as the variousbuses,storage devices, otherI/Odevices, etc. A hardware interface is described by the mechanical, electrical, and logical signals at the interface and the protocol for sequencing them (sometimes called signaling).[3]A standard interface, such asSCSI, decouples the design and introduction of computing hardware, such asI/Odevices, from the design and introduction of other components of a computing system, thereby allowing users and manufacturers great flexibility in the implementation of computing systems.[3]Hardware interfaces can beparallelwith several electrical connections carrying parts of the data simultaneously orserialwhere data are sent onebitat a time.[4] A software interface may refer to a wide range of different types of interfaces at different "levels". For example, an operating system may interface with pieces of hardware.Applicationsorprogramsrunning on the operating system may need to interact via datastreams, filters, and pipelines.[5]Inobject oriented programs, objects within an application may need to interact viamethods.[6] A key principle of design is to prohibit access to all resources by default, allowing access only through well-defined entry points, i.e., interfaces.[7]Software interfaces provide access to computer resources (such as memory, CPU, storage, etc.) of the underlying computer system; direct access (i.e., not through well-designed interfaces) to such resources by software can have major ramifications—sometimes disastrous ones—for functionality and stability.[citation needed] Interfaces between software components can provideconstants,data types, types ofprocedures,exceptionspecifications, andmethod signatures. Sometimes, publicvariablesare also defined as part of an interface.[8] The interface of a software moduleAis deliberately defined separately from theimplementationof that module. The latter contains the actual code of the procedures and methods described in the interface, as well as other "private" variables, procedures, etc. Another software moduleB, for example theclienttoA, that interacts withAis forced to do so only through the published interface. One practical advantage of this arrangement is that replacing the implementation ofAwith another implementation of the same interface should not causeBto fail—howAinternally meets the requirements of the interface is not relevant toB, whichis only concernedwith the specifications of the interface. (See alsoLiskov substitution principle.)[citation needed] In someobject-orientedlanguages, especially those without fullmultiple inheritance, the terminterfaceis used to define anabstract typethat acts as anabstractionof aclass. It contains no data, but defines behaviours asmethodsignatures. 
A class having code and data for all the methods corresponding to that interface, and declaring so, is said to implement that interface.[9] Furthermore, even in single-inheritance languages, one can implement multiple interfaces, and hence can be of different types at the same time.[10]

An interface is thus a type definition; anywhere an object can be exchanged (for example, in a function or method call), the type of the object to be exchanged can be defined in terms of one of its implemented interfaces or base classes rather than specifying the specific class. This approach means that any class that implements that interface can be used.[citation needed] For example, a dummy implementation may be used to allow development to progress before the final implementation is available. In another case, a fake or mock implementation may be substituted during testing. Such stub implementations are replaced by real code later in the development process.

Usually, a method defined in an interface contains no code and thus cannot itself be called; it must be implemented by non-abstract code to be run when it is invoked.[citation needed] An interface called "Stack" might define two methods: push() and pop(). It can be implemented in different ways, for example, FastStack and GenericStack: the first being fast, working with a data structure of fixed size, and the second using a data structure that can be resized, but at the cost of somewhat lower speed (a sketch of this example appears at the end of this section).

Though interfaces can contain many methods, they may contain only one or even none at all. For example, the Java language defines the interface Readable that has the single read() method; various implementations are used for different purposes, including BufferedReader, FileReader, InputStreamReader, PipedReader, and StringReader. Marker interfaces like Serializable contain no methods at all and serve to provide run-time information to generic processing using reflection.[11]

The use of interfaces allows for a programming style called programming to the interface. The idea behind this approach is to base programming logic on the interfaces of the objects used, rather than on internal implementation details. Programming to the interface reduces dependency on implementation specifics and makes code more reusable.[12] Pushing this idea to the extreme, inversion of control leaves the context to inject the code with the specific implementations of the interface that will be used to perform the work.

A user interface is a point of interaction between a computer and humans; it includes any number of modalities of interaction (such as graphics, sound, position, movement, etc.) where data is transferred between the user and the computer system.
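To make the Stack example above concrete, here is a minimal Python sketch, assuming nothing beyond the standard library. The class names follow the text, but the method bodies and the fixed-capacity design are illustrative choices, not a prescribed implementation.

```python
from abc import ABC, abstractmethod

class Stack(ABC):
    """The 'Stack' interface from the text: two methods, no implementation."""

    @abstractmethod
    def push(self, item): ...

    @abstractmethod
    def pop(self): ...

class FastStack(Stack):
    """Fixed-capacity implementation backed by a preallocated list."""

    def __init__(self, capacity: int):
        self._items = [None] * capacity
        self._top = 0

    def push(self, item):
        if self._top == len(self._items):
            raise OverflowError("stack is full")
        self._items[self._top] = item
        self._top += 1

    def pop(self):
        if self._top == 0:
            raise IndexError("stack is empty")
        self._top -= 1
        return self._items[self._top]

class GenericStack(Stack):
    """Resizable implementation: somewhat slower, but never fills up."""

    def __init__(self):
        self._items = []

    def push(self, item):
        self._items.append(item)

    def pop(self):
        return self._items.pop()

def drain(stack: Stack):
    """Client code programmed to the interface, not to a concrete class."""
    stack.push(1)
    stack.push(2)
    return stack.pop(), stack.pop()

print(drain(FastStack(capacity=4)))  # (2, 1)
print(drain(GenericStack()))         # (2, 1)
```

Because drain() depends only on the Stack interface, either implementation (or a mock used in testing) can be substituted without changing the client.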
https://en.wikipedia.org/wiki/Interface_(computing)
Linear logic is a substructural logic proposed by French logician Jean-Yves Girard as a refinement of classical and intuitionistic logic, joining the dualities of the former with many of the constructive properties of the latter.[1] Although the logic has also been studied for its own sake, more broadly, ideas from linear logic have been influential in fields such as programming languages, game semantics, and quantum physics (because linear logic can be seen as the logic of quantum information theory),[2] as well as linguistics,[3] particularly because of its emphasis on resource-boundedness, duality, and interaction.

Linear logic lends itself to many different presentations, explanations, and intuitions. Proof-theoretically, it derives from an analysis of classical sequent calculus in which uses of the structural rules contraction and weakening are carefully controlled. Operationally, this means that logical deduction is no longer merely about an ever-expanding collection of persistent "truths", but also a way of manipulating resources that cannot always be duplicated or thrown away at will. In terms of simple denotational models, linear logic may be seen as refining the interpretation of intuitionistic logic by replacing cartesian (closed) categories by symmetric monoidal (closed) categories, or the interpretation of classical logic by replacing Boolean algebras by C*-algebras.[citation needed]

The language of classical linear logic (CLL) is defined inductively by the BNF notation

A ::= p | p⊥ | A ⊗ A | A ⅋ A | A & A | A ⊕ A | 1 | ⊥ | ⊤ | 0 | !A | ?A

Here p and p⊥ range over logical atoms. For reasons to be explained below, the connectives ⊗, ⅋, 1, and ⊥ are called multiplicatives, the connectives &, ⊕, ⊤, and 0 are called additives, and the connectives ! and ? are called exponentials. Binary connectives ⊗, ⊕, & and ⅋ are associative and commutative; 1 is the unit for ⊗, 0 is the unit for ⊕, ⊥ is the unit for ⅋, and ⊤ is the unit for &.

Every proposition A in CLL has a dual A⊥, defined as follows:

(p)⊥ = p⊥   (p⊥)⊥ = p
(A ⊗ B)⊥ = A⊥ ⅋ B⊥   (A ⅋ B)⊥ = A⊥ ⊗ B⊥
(A ⊕ B)⊥ = A⊥ & B⊥   (A & B)⊥ = A⊥ ⊕ B⊥
1⊥ = ⊥   ⊥⊥ = 1
0⊥ = ⊤   ⊤⊥ = 0
(!A)⊥ = ?(A⊥)   (?A)⊥ = !(A⊥)

Observe that (−)⊥ is an involution, i.e., A⊥⊥ = A for all propositions. A⊥ is also called the linear negation of A. This duality suggests another way of classifying the connectives of linear logic, termed polarity: the connectives ⊗, ⊕, 1, 0, and ! are called positive, while their duals ⅋, &, ⊥, ⊤, and ? are called negative.

Linear implication is not included in the grammar of connectives, but is definable in CLL using linear negation and multiplicative disjunction, by A ⊸ B := A⊥ ⅋ B. The connective ⊸ is sometimes pronounced "lollipop", owing to its shape.

One way of defining linear logic is as a sequent calculus. We use the letters Γ and Δ to range over lists of propositions A1, ..., An, also called contexts. A sequent places a context to the left and the right of the turnstile, written Γ ⊢ Δ. Intuitively, the sequent asserts that the conjunction of Γ entails the disjunction of Δ (though we mean the "multiplicative" conjunction and disjunction, as explained below). Girard describes classical linear logic using only one-sided sequents (where the left-hand context is empty), and we follow that more economical presentation here. This is possible because any premises to the left of a turnstile can always be moved to the other side and dualised.
We now give inference rules describing how to build proofs of sequents[4] (the rules themselves are collected in the reconstruction at the end of this section).

First, to formalize the fact that we do not care about the order of propositions inside a context, we add the structural rule of exchange, which allows the propositions of a context to be permuted arbitrarily. Note that we do not add the structural rules of weakening and contraction, because we do care about the absence of propositions in a sequent, and about the number of copies present.

Next we add initial sequents and cuts. The cut rule can be seen as a way of composing proofs, and initial sequents serve as the units for composition. In a certain sense these rules are redundant: as we introduce additional rules for building proofs below, we will maintain the property that arbitrary initial sequents can be derived from atomic initial sequents, and that whenever a sequent is provable it can be given a cut-free proof. Ultimately, this canonical form property (which can be divided into the completeness of atomic initial sequents and the cut-elimination theorem, inducing a notion of analytic proof) lies behind the applications of linear logic in computer science, since it allows the logic to be used in proof search and as a resource-aware lambda calculus.

Now, we explain the connectives by giving logical rules. Typically in sequent calculus one gives both "right-rules" and "left-rules" for each connective, essentially describing two modes of reasoning about propositions involving that connective (e.g., verification and falsification). In a one-sided presentation, one instead makes use of negation: the right-rules for a connective (say ⅋) effectively play the role of left-rules for its dual (⊗). So, we should expect a certain "harmony" between the rule(s) for a connective and the rule(s) for its dual.

The rules for multiplicative conjunction (⊗) and disjunction (⅋) and their units, like the rules for additive conjunction (&) and disjunction (⊕) and their units, are admissible for plain conjunction and disjunction under a classical interpretation (i.e., they are admissible rules in LK). But now we can explain the basis for the multiplicative/additive distinction in the rules for the two different versions of conjunction: for the multiplicative connective (⊗), the context of the conclusion (Γ, Δ) is split up between the premises, whereas for the additive connective (&) the context of the conclusion (Γ) is carried whole into both premises.

The exponentials are used to give controlled access to weakening and contraction. Specifically, we add structural rules of weakening and contraction for ?'d propositions,[5] and use logical rules in which ?Γ stands for a list of propositions each prefixed with ?. One might observe that the rules for the exponentials follow a different pattern from the rules for the other connectives, resembling the inference rules governing modalities in sequent calculus formalisations of the normal modal logic S4, and that there is no longer such a clear symmetry between the duals ! and ?. This situation is remedied in alternative presentations of CLL (e.g., the LU presentation).

In addition to the De Morgan dualities described above, some important equivalences in linear logic include the distributivity laws

A ⊗ (B ⊕ C) ≣ (A ⊗ B) ⊕ (A ⊗ C)   A ⅋ (B & C) ≣ (A ⅋ B) & (A ⅋ C)

By definition of A ⊸ B as A⊥ ⅋ B, these distributivity laws also give:

A ⊸ (B & C) ≣ (A ⊸ B) & (A ⊸ C)   (A ⊕ B) ⊸ C ≣ (A ⊸ C) & (B ⊸ C)

(Here A ≣ B is (A ⊸ B) & (B ⊸ A).)
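The displayed rule figures of this section are not reproduced in the source; the following LaTeX block is a reconstruction of the standard one-sided sequent rules for CLL, offered as a reference rather than a verbatim restoration of the original figures (it assumes the cmll package for \parr and \with).

```latex
% Identity and cut:
\frac{}{\vdash A, A^{\bot}}\ (\mathrm{id})
\qquad
\frac{\vdash \Gamma, A \quad \vdash A^{\bot}, \Delta}{\vdash \Gamma, \Delta}\ (\mathrm{cut})

% Multiplicatives and their units:
\frac{\vdash \Gamma, A \quad \vdash \Delta, B}{\vdash \Gamma, \Delta, A \otimes B}\ (\otimes)
\qquad
\frac{\vdash \Gamma, A, B}{\vdash \Gamma, A \parr B}\ (\parr)
\qquad
\frac{}{\vdash 1}\ (1)
\qquad
\frac{\vdash \Gamma}{\vdash \Gamma, \bot}\ (\bot)

% Additives and their units (no rule for 0):
\frac{\vdash \Gamma, A \quad \vdash \Gamma, B}{\vdash \Gamma, A \with B}\ (\with)
\qquad
\frac{\vdash \Gamma, A}{\vdash \Gamma, A \oplus B}\ (\oplus_{1})
\qquad
\frac{\vdash \Gamma, B}{\vdash \Gamma, A \oplus B}\ (\oplus_{2})
\qquad
\frac{}{\vdash \Gamma, \top}\ (\top)

% Exponentials: weakening, contraction, dereliction, promotion:
\frac{\vdash \Gamma}{\vdash \Gamma, {?}A}\ ({?}w)
\qquad
\frac{\vdash \Gamma, {?}A, {?}A}{\vdash \Gamma, {?}A}\ ({?}c)
\qquad
\frac{\vdash \Gamma, A}{\vdash \Gamma, {?}A}\ ({?}d)
\qquad
\frac{\vdash {?}\Gamma, A}{\vdash {?}\Gamma, {!}A}\ ({!})
```

Note how the ⊗ rule splits the context between its premises while the & rule duplicates it, exactly the multiplicative/additive distinction described above.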
A map that is not an isomorphism yet plays a crucial role in linear logic is the linear distribution A ⊗ (B ⅋ C) ⊸ (A ⊗ B) ⅋ C. Linear distributions are fundamental in the proof theory of linear logic. The consequences of this map were first investigated in Cockett & Seely (1997), where it was called a "weak distribution".[6] In subsequent work it was renamed "linear distribution" to reflect the fundamental connection to linear logic. Certain other distributivity formulas hold in general only as implications, not as equivalences.

Both intuitionistic and classical implication can be recovered from linear implication by inserting exponentials: intuitionistic implication is encoded as !A ⊸ B, while classical implication can be encoded as !?A ⊸ ?B or !A ⊸ ?!B (or a variety of alternative possible translations).[7] The idea is that exponentials allow us to use a formula as many times as we need, which is always possible in classical and intuitionistic logic. Formally, there exists a translation of formulas of intuitionistic logic to formulas of linear logic that guarantees that the original formula is provable in intuitionistic logic if and only if the translated formula is provable in linear logic. Using the Gödel–Gentzen negative translation, we can thus embed classical first-order logic into linear first-order logic.

Lafont (1993) first showed how intuitionistic linear logic can be explained as a logic of resources, providing the logical language with access to formalisms that can be used for reasoning about resources within the logic itself, rather than, as in classical logic, by means of non-logical predicates and relations. Tony Hoare (1985)'s classic example of the vending machine can be used to illustrate this idea.

Suppose we represent having a candy bar by the atomic proposition candy, and having a dollar by $1. To state the fact that a dollar will buy you one candy bar, we might write the implication $1 ⇒ candy. But in ordinary (classical or intuitionistic) logic, from A and A ⇒ B one can conclude A ∧ B. So, ordinary logic leads us to believe that we can buy the candy bar and keep our dollar! Of course, we can avoid this problem by using more sophisticated encodings,[clarification needed] although typically such encodings suffer from the frame problem. However, the rejection of weakening and contraction allows linear logic to avoid this kind of spurious reasoning even with the "naive" rule. Rather than $1 ⇒ candy, we express the property of the vending machine as a linear implication $1 ⊸ candy. From $1 and this fact, we can conclude candy, but not $1 ⊗ candy. In general, we can use the linear logic proposition A ⊸ B to express the validity of transforming resource A into resource B.

Running with the example of the vending machine, consider the "resource interpretations" of the other multiplicative and additive connectives. (The exponentials provide the means to combine this resource interpretation with the usual notion of persistent logical truth.)

Multiplicative conjunction (A ⊗ B) denotes simultaneous occurrence of resources, to be used as the consumer directs. For example, if you buy a stick of gum and a bottle of soft drink, then you are requesting gum ⊗ drink. The constant 1 denotes the absence of any resource, and so functions as the unit of ⊗.

Additive conjunction (A & B) represents alternative occurrence of resources, the choice of which the consumer controls. If in the vending machine there is a packet of chips, a candy bar, and a can of soft drink, each costing one dollar, then for that price you can buy exactly one of these products. Thus we write $1 ⊸ (candy & chips & drink).
We do not write $1 ⊸ (candy ⊗ chips ⊗ drink), which would imply that one dollar suffices for buying all three products together. However, from ($1 ⊸ (candy & chips & drink)) ⊗ ($1 ⊸ (candy & chips & drink)) ⊗ ($1 ⊸ (candy & chips & drink)), we can correctly deduce $3 ⊸ (candy ⊗ chips ⊗ drink), where $3 := $1 ⊗ $1 ⊗ $1.

The unit ⊤ of additive conjunction can be seen as a wastebasket for unneeded resources. For example, we can write $3 ⊸ (candy ⊗ ⊤) to express that with three dollars you can get a candy bar and some other stuff, without being more specific (for example, chips and a drink, or $2, or $1 and chips, etc.).

Additive disjunction (A ⊕ B) represents alternative occurrence of resources, the choice of which the machine controls. For example, suppose the vending machine permits gambling: insert a dollar and the machine may dispense a candy bar, a packet of chips, or a soft drink. We can express this situation as $1 ⊸ (candy ⊕ chips ⊕ drink). The constant 0 represents a product that cannot be made, and thus serves as the unit of ⊕ (a machine that might produce A or 0 is as good as a machine that always produces A, because it will never succeed in producing a 0). So unlike above, we cannot deduce $3 ⊸ (candy ⊗ chips ⊗ drink) from this.

Introduced by Jean-Yves Girard, proof nets were created to avoid bureaucracy, that is, all the things that make two derivations different from a logical point of view but not from a "moral" point of view. The goal of proof nets is to identify such "morally" equivalent derivations by giving them a single graphical representation.

The entailment relation in full CLL is undecidable.[8] When considering fragments of CLL, the complexity of the decision problem varies.

Many variations of linear logic arise by further tinkering with the structural rules. Different intuitionistic variants of linear logic have been considered. When based on a single-conclusion sequent calculus presentation, as in ILL (Intuitionistic Linear Logic), the connectives ⅋, ⊥, and ? are absent, and linear implication is treated as a primitive connective. In FILL (Full Intuitionistic Linear Logic), the connectives ⅋, ⊥, and ? are present, linear implication is a primitive connective, and, similarly to what happens in intuitionistic logic, all connectives (except linear negation) are independent. There are also first- and higher-order extensions of linear logic, whose formal development is somewhat standard (see first-order logic and higher-order logic).
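The resource reading of the vending machine can be caricatured in a few lines of Python. This is a toy sketch, not a linear-logic proof system: the multiset modelling and the names spend, wallet, and buy_candy are our own illustrative choices.

```python
from collections import Counter

def spend(resources: Counter, cost: Counter, produced: Counter) -> Counter:
    """Apply a linear implication: consume `cost`, then add `produced`.

    Unlike a classical implication, the antecedent is used up."""
    if any(resources[r] < n for r, n in cost.items()):
        raise ValueError("insufficient resources")
    result = resources - cost      # Counter subtraction models consumption
    result.update(produced)
    return result

wallet = Counter({"$1": 3})
buy_candy = (Counter({"$1": 1}), Counter({"candy": 1}))

wallet = spend(wallet, *buy_candy)
print(wallet)   # Counter({'$1': 2, 'candy': 1}) -- the spent dollar is gone
```

Applying buy_candy three times yields candy ⊗ candy ⊗ candy and an empty wallet, mirroring the $3 ⊸ (candy ⊗ chips ⊗ drink) style of deduction above; a fourth application raises an error, since the resource cannot be duplicated.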
https://en.wikipedia.org/wiki/Linear_logic
In mathematics and mathematical logic, Boolean algebra is a branch of algebra. It differs from elementary algebra in two ways. First, the values of the variables are the truth values true and false, usually denoted by 1 and 0, whereas in elementary algebra the values of the variables are numbers. Second, Boolean algebra uses logical operators such as conjunction (and), denoted ∧, disjunction (or), denoted ∨, and negation (not), denoted ¬. Elementary algebra, on the other hand, uses arithmetic operators such as addition, multiplication, subtraction, and division. Boolean algebra is therefore a formal way of describing logical operations in the same way that elementary algebra describes numerical operations.

Boolean algebra was introduced by George Boole in his first book The Mathematical Analysis of Logic (1847),[1] and set forth more fully in his An Investigation of the Laws of Thought (1854).[2] According to Huntington, the term Boolean algebra was first suggested by Henry M. Sheffer in 1913,[3] although Charles Sanders Peirce gave the title "A Boolian [sic] Algebra with One Constant" to the first chapter of his "The Simplest Mathematics" in 1880.[4] Boolean algebra has been fundamental in the development of digital electronics, and is provided for in all modern programming languages. It is also used in set theory and statistics.[5]

A precursor of Boolean algebra was Gottfried Wilhelm Leibniz's algebra of concepts. The usage of binary in relation to the I Ching was central to Leibniz's characteristica universalis. It eventually created the foundations of the algebra of concepts.[6] Leibniz's algebra of concepts is deductively equivalent to the Boolean algebra of sets.[7]

Boole's algebra predated the modern developments in abstract algebra and mathematical logic; it is, however, seen as connected to the origins of both fields.[8] In an abstract setting, Boolean algebra was perfected in the late 19th century by Jevons, Schröder, Huntington, and others, until it reached the modern conception of an (abstract) mathematical structure.[8] For example, the empirical observation that one can manipulate expressions in the algebra of sets by translating them into expressions in Boole's algebra is explained in modern terms by saying that the algebra of sets is a Boolean algebra (note the indefinite article). In fact, M. H. Stone proved in 1936 that every Boolean algebra is isomorphic to a field of sets.[9][10]

In the 1930s, while studying switching circuits, Claude Shannon observed that one could also apply the rules of Boole's algebra in this setting,[11] and he introduced switching algebra as a way to analyze and design circuits by algebraic means in terms of logic gates. Shannon already had at his disposal the abstract mathematical apparatus, thus he cast his switching algebra as the two-element Boolean algebra. In modern circuit engineering settings, there is little need to consider other Boolean algebras, thus "switching algebra" and "Boolean algebra" are often used interchangeably.[12][13][14]

Efficient implementation of Boolean functions is a fundamental problem in the design of combinational logic circuits. Modern electronic design automation tools for very-large-scale integration (VLSI) circuits often rely on an efficient representation of Boolean functions known as (reduced ordered) binary decision diagrams (BDD) for logic synthesis and formal verification.[15]

Logic sentences that can be expressed in classical propositional calculus have an equivalent expression in Boolean algebra.
Thus, Boolean logic is sometimes used to denote propositional calculus performed in this way.[16][17][18] Boolean algebra is not sufficient to capture logic formulas using quantifiers, like those from first-order logic.

Although the development of mathematical logic did not follow Boole's program, the connection between his algebra and logic was later put on firm ground in the setting of algebraic logic, which also studies the algebraic systems of many other logics.[8] The problem of determining whether the variables of a given Boolean (propositional) formula can be assigned in such a way as to make the formula evaluate to true is called the Boolean satisfiability problem (SAT), and is of importance to theoretical computer science, being the first problem shown to be NP-complete. The closely related model of computation known as a Boolean circuit relates time complexity (of an algorithm) to circuit complexity.

Whereas expressions denote mainly numbers in elementary algebra, in Boolean algebra they denote the truth values false and true. These values are represented with the bits 0 and 1. They do not behave like the integers 0 and 1, for which 1 + 1 = 2, but may be identified with the elements of the two-element field GF(2), that is, integer arithmetic modulo 2, for which 1 + 1 = 0. Addition and multiplication then play the Boolean roles of XOR (exclusive-or) and AND (conjunction), respectively, with disjunction x ∨ y (inclusive-or) definable as x + y − xy and negation ¬x as 1 − x. In GF(2), − may be replaced by +, since they denote the same operation; however, this way of writing Boolean operations allows applying the usual arithmetic operations of integers (this may be useful when using a programming language in which GF(2) is not implemented).

Boolean algebra also deals with functions which have their values in the set {0, 1}. A sequence of bits is a commonly used example of such a function. Another common example is the totality of subsets of a set E: to a subset F of E, one can define the indicator function that takes the value 1 on F, and 0 outside F. The most general example is the set of elements of a Boolean algebra, with all of the foregoing being instances thereof. As with elementary algebra, the purely equational part of the theory may be developed without considering explicit values for the variables.[19]

While elementary algebra has four operations (addition, subtraction, multiplication, and division), Boolean algebra has only three basic operations: conjunction, disjunction, and negation, expressed with the corresponding binary operators AND (∧) and OR (∨) and the unary operator NOT (¬), collectively referred to as Boolean operators.[20] Variables in Boolean algebra that store the logical values 0 and 1 are called Boolean variables. They are used to store either true or false values.[21] The basic operations on Boolean variables x and y are defined as follows:

x ∧ y = 1 if x = 1 and y = 1, and 0 otherwise;
x ∨ y = 0 if x = 0 and y = 0, and 1 otherwise;
¬x = 1 if x = 0, and 0 if x = 1.

Alternatively, the values of x ∧ y, x ∨ y, and ¬x can be expressed by tabulating their values with truth tables as follows:[22]

x  y | x ∧ y | x ∨ y
0  0 |   0   |   0
0  1 |   0   |   1
1  0 |   0   |   1
1  1 |   1   |   1

x | ¬x
0 |  1
1 |  0

When used in expressions, the operators are applied according to the precedence rules.
As with elementary algebra, expressions in parentheses are evaluated first, following the precedence rules.[23]

If the truth values 0 and 1 are interpreted as integers, these operations may be expressed with the ordinary operations of arithmetic (where x + y uses addition and xy uses multiplication), or by the minimum/maximum functions:

x ∧ y = xy = min(x, y)
x ∨ y = x + y − xy = max(x, y)
¬x = 1 − x

One might consider that only negation and one of the two other operations are basic, because of the following identities that allow one to define conjunction in terms of negation and disjunction, and vice versa (De Morgan's laws):[24]

x ∧ y = ¬(¬x ∨ ¬y)
x ∨ y = ¬(¬x ∧ ¬y)

Operations composed from the basic operations include, among others, material implication (x → y = ¬x ∨ y), exclusive or (x ⊕ y = (x ∨ y) ∧ ¬(x ∧ y)), and equivalence (x ≡ y = ¬(x ⊕ y)). These definitions give rise to the following truth tables giving the values of these operations for all four possible inputs:

x  y | x → y | x ⊕ y | x ≡ y
0  0 |   1   |   0   |   1
0  1 |   1   |   1   |   0
1  0 |   0   |   1   |   0
1  1 |   1   |   0   |   1

A law of Boolean algebra is an identity such as x ∨ (y ∨ z) = (x ∨ y) ∨ z between two Boolean terms, where a Boolean term is defined as an expression built up from variables and the constants 0 and 1 using the operations ∧, ∨, and ¬. The concept can be extended to terms involving other Boolean operations such as ⊕, →, and ≡, but such extensions are unnecessary for the purposes to which the laws are put. Such purposes include the definition of a Boolean algebra as any model of the Boolean laws, and as a means for deriving new laws from old, as in the derivation of x ∨ (y ∧ z) = x ∨ (z ∧ y) from y ∧ z = z ∧ y (as treated in § Axiomatizing Boolean algebra).

Boolean algebra satisfies many of the same laws as ordinary algebra when one matches up ∨ with addition and ∧ with multiplication. In particular the following laws are common to both kinds of algebra:[25][26]

x ∨ (y ∨ z) = (x ∨ y) ∨ z and x ∧ (y ∧ z) = (x ∧ y) ∧ z (associativity)
x ∨ y = y ∨ x and x ∧ y = y ∧ x (commutativity)
x ∧ (y ∨ z) = (x ∧ y) ∨ (x ∧ z) (distributivity of ∧ over ∨)
x ∨ 0 = x and x ∧ 1 = x (identity)
x ∧ 0 = 0 (annihilator for ∧)

The following laws hold in Boolean algebra, but not in ordinary algebra:

x ∨ (y ∧ z) = (x ∨ y) ∧ (x ∨ z) (distributivity of ∨ over ∧)
x ∨ 1 = 1 (annihilator for ∨)
x ∧ x = x (idempotence of ∧)
x ∨ x = x (idempotence of ∨)
x ∧ (x ∨ y) = x (absorption 1)
x ∨ (x ∧ y) = x (absorption 2)

Taking x = 2 in the third law above shows that it is not an ordinary algebra law, since 2 × 2 = 4. The remaining five laws can be falsified in ordinary algebra by taking all variables to be 1. For example, in absorption law 1, the left hand side would be 1(1 + 1) = 2, while the right hand side would be 1 (and so on).

All of the laws treated so far have been for conjunction and disjunction. These operations have the property that changing either argument either leaves the output unchanged, or the output changes in the same way as the input. Equivalently, changing any variable from 0 to 1 never results in the output changing from 1 to 0. Operations with this property are said to be monotone. Thus the axioms so far have all been for monotonic Boolean logic. Nonmonotonicity enters via complement ¬ as follows.[5]

The complement operation is defined by the following two laws:

x ∧ ¬x = 0 (complementation 1)
x ∨ ¬x = 1 (complementation 2)

All properties of negation, including the laws below, follow from these two laws alone.[5] In both ordinary and Boolean algebra, negation works by exchanging pairs of elements, hence in both algebras it satisfies the double negation law (also called the involution law): ¬(¬x) = x. But whereas ordinary algebra satisfies the two laws (−x)(−y) = xy and −(x + y) = (−x) + (−y), Boolean algebra satisfies De Morgan's laws:

¬x ∧ ¬y = ¬(x ∨ y)
¬x ∨ ¬y = ¬(x ∧ y)
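Since every law above quantifies over only finitely many 0/1 assignments, a short script can machine-check them. The following Python sketch uses the arithmetic encodings from the text to verify De Morgan's laws, absorption, and idempotence exhaustively.

```python
from itertools import product

# The arithmetic encodings from the text:
NOT = lambda x: 1 - x
AND = lambda x, y: x * y              # equivalently min(x, y)
OR  = lambda x, y: x + y - x * y      # equivalently max(x, y)

for x, y in product((0, 1), repeat=2):
    # De Morgan's laws:
    assert NOT(AND(x, y)) == OR(NOT(x), NOT(y))
    assert NOT(OR(x, y)) == AND(NOT(x), NOT(y))
    # Absorption, which fails in ordinary algebra:
    assert AND(x, OR(x, y)) == x
    assert OR(x, AND(x, y)) == x

# Idempotence, likewise Boolean-only:
assert all(AND(x, x) == x and OR(x, x) == x for x in (0, 1))
print("all checked laws hold over {0, 1}")
```

Replacing the (0, 1) domain with real numbers makes the absorption and idempotence assertions fail, exactly as the x = 2 counterexample above illustrates.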
The laws listed above define Boolean algebra, in the sense that they entail the rest of the subject. The laws complementation 1 and 2, together with the monotone laws, suffice for this purpose and can therefore be taken as one possible complete set of laws or axiomatization of Boolean algebra. Every law of Boolean algebra follows logically from these axioms. Furthermore, Boolean algebras can then be defined as the models of these axioms, as treated in § Boolean algebras. Writing down further laws of Boolean algebra cannot give rise to any new consequences of these axioms, nor can it rule out any model of them. In contrast, in a list of some but not all of the same laws, there could have been Boolean laws that did not follow from those on the list, and moreover there would have been models of the listed laws that were not Boolean algebras.

This axiomatization is by no means the only one, or even necessarily the most natural, given that attention was not paid to whether some of the axioms followed from others; there was simply a choice to stop when enough laws had been noticed, treated further in § Axiomatizing Boolean algebra. Or the intermediate notion of axiom can be sidestepped altogether by defining a Boolean law directly as any tautology, understood as an equation that holds for all values of its variables over 0 and 1.[27][28] All these definitions of Boolean algebra can be shown to be equivalent.

Principle: if {X, R} is a partially ordered set, then {X, R(inverse)} is also a partially ordered set.

There is nothing special about the choice of symbols for the values of Boolean algebra. 0 and 1 could be renamed to α and β, and as long as it was done consistently throughout, it would still be Boolean algebra, albeit with some obvious cosmetic differences.

But suppose 0 and 1 were renamed 1 and 0 respectively. Then it would still be Boolean algebra, and moreover operating on the same values. However, it would not be identical to our original Boolean algebra, because now ∨ behaves the way ∧ used to do and vice versa. So there are still some cosmetic differences to show that the notation has been changed, despite the fact that 0s and 1s are still being used.

But if in addition to interchanging the names of the values, the names of the two binary operations are also interchanged, now there is no trace of what was done. The end product is completely indistinguishable from what was started with. The columns for x ∧ y and x ∨ y in the truth tables have changed places, but that switch is immaterial.

When values and operations can be paired up in a way that leaves everything important unchanged when all pairs are switched simultaneously, the members of each pair are called dual to each other. Thus 0 and 1 are dual, and ∧ and ∨ are dual. The duality principle, also called De Morgan duality, asserts that Boolean algebra is unchanged when all dual pairs are interchanged.

One change that did not need to be made as part of this interchange was complement. Complement is a self-dual operation. The identity or do-nothing operation x (copy the input to the output) is also self-dual. A more complicated example of a self-dual operation is (x ∧ y) ∨ (y ∧ z) ∨ (z ∧ x). There is no self-dual binary operation that depends on both its arguments. A composition of self-dual operations is a self-dual operation. For example, if f(x, y, z) = (x ∧ y) ∨ (y ∧ z) ∨ (z ∧ x), then f(f(x, y, z), x, t) is a self-dual operation of four arguments x, y, z, t.

The principle of duality can be explained from a group theory perspective by the fact that there are exactly four functions that are one-to-one mappings (automorphisms) of the set of Boolean polynomials back to itself: the identity function, the complement function, the dual function, and the contradual function (complemented dual). These four functions form a group under function composition, isomorphic to the Klein four-group, acting on the set of Boolean polynomials. Walter Gottschalk remarked that consequently a more appropriate name for the phenomenon would be the principle (or square) of quaternality.[5]: 21–22
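Duality is also easy to demonstrate mechanically. In this Python sketch a two-variable term is written against abstract operations, so that dualizing amounts to handing it the swapped operations and constants; the helper names primal and dualize are our own.

```python
from itertools import product

AND = lambda x, y: x & y
OR  = lambda x, y: x | y

def primal(term):
    """Evaluate `term` with the usual operations and constants."""
    return lambda x, y: term(x, y, AND, OR, 0, 1)

def dualize(term):
    """Evaluate `term` with every dual pair swapped: AND<->OR, 0<->1."""
    return lambda x, y: term(x, y, OR, AND, 1, 0)

# Absorption law 1 as an abstract term: x AND (x OR y).
law = lambda x, y, and_, or_, zero, one: and_(x, or_(x, y))

for x, y in product((0, 1), repeat=2):
    assert primal(law)(x, y) == x    # x AND (x OR y) == x
    assert dualize(law)(x, y) == x   # its dual, x OR (x AND y) == x
print("absorption law and its dual both hold")
```

The unused zero/one parameters matter only for terms that mention the constants; the duality principle guarantees that whenever the primal form of a law holds, so does its dualized form.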
A Venn diagram[29] can be used as a representation of a Boolean operation using shaded overlapping regions. There is one region for each variable, all circular in the examples here. The interior and exterior of region x correspond respectively to the values 1 (true) and 0 (false) for variable x. The shading indicates the value of the operation for each combination of regions, with dark denoting 1 and light 0 (some authors use the opposite convention).

The three Venn diagrams in the figure below represent respectively conjunction x ∧ y, disjunction x ∨ y, and complement ¬x. For conjunction, the region inside both circles is shaded to indicate that x ∧ y is 1 when both variables are 1. The other regions are left unshaded to indicate that x ∧ y is 0 for the other three combinations. The second diagram represents disjunction x ∨ y by shading those regions that lie inside either or both circles. The third diagram represents complement ¬x by shading the region not inside the circle.

While we have not shown the Venn diagrams for the constants 0 and 1, they are trivial, being respectively a white box and a dark box, neither one containing a circle. However, we could put a circle for x in those boxes, in which case each would denote a function of one argument, x, which returns the same value independently of x, called a constant function. As far as their outputs are concerned, constants and constant functions are indistinguishable; the difference is that a constant takes no arguments, called a zeroary or nullary operation, while a constant function takes one argument, which it ignores, and is a unary operation.

Venn diagrams are helpful in visualizing laws. The commutativity laws for ∧ and ∨ can be seen from the symmetry of the diagrams: a binary operation that was not commutative would not have a symmetric diagram, because interchanging x and y would have the effect of reflecting the diagram horizontally, and any failure of commutativity would then appear as a failure of symmetry.

Idempotence of ∧ and ∨ can be visualized by sliding the two circles together and noting that the shaded area then becomes the whole circle, for both ∧ and ∨.

To see the first absorption law, x ∧ (x ∨ y) = x, start with the diagram in the middle for x ∨ y and note that the portion of the shaded area in common with the x circle is the whole of the x circle. For the second absorption law, x ∨ (x ∧ y) = x, start with the left diagram for x ∧ y and note that shading the whole of the x circle results in just the x circle being shaded, since the previous shading was inside the x circle.

The double negation law can be seen by complementing the shading in the third diagram for ¬x, which shades the x circle.

To visualize the first De Morgan's law, (¬x) ∧ (¬y) = ¬(x ∨ y), start with the middle diagram for x ∨ y and complement its shading so that only the region outside both circles is shaded, which is what the right hand side of the law describes. The result is the same as if we shaded that region which is both outside the x circle and outside the y circle, i.e. the conjunction of their exteriors, which is what the left hand side of the law describes. The second De Morgan's law, (¬x) ∨ (¬y) = ¬(x ∧ y), works the same way with the two diagrams interchanged.

The first complement law, x ∧ ¬x = 0, says that the interior and exterior of the x circle have no overlap. The second complement law, x ∨ ¬x = 1, says that everything is either inside or outside the x circle.

Digital logic is the application of the Boolean algebra of 0 and 1 to electronic hardware consisting of logic gates connected to form a circuit diagram. Each gate implements a Boolean operation, and is depicted schematically by a shape indicating the operation.
The shapes associated with the gates for conjunction (AND-gates), disjunction (OR-gates), and complement (inverters) are as follows:[30]

The lines on the left of each gate represent input wires or ports. The value of the input is represented by a voltage on the lead. For so-called "active-high" logic, 0 is represented by a voltage close to zero or "ground", while 1 is represented by a voltage close to the supply voltage; active-low reverses this. The line on the right of each gate represents the output port, which normally follows the same voltage conventions as the input ports.

Complement is implemented with an inverter gate. The triangle denotes the operation that simply copies the input to the output; the small circle on the output denotes the actual inversion complementing the input. The convention of putting such a circle on any port means that the signal passing through this port is complemented on the way through, whether it is an input or output port.

The duality principle, or De Morgan's laws, can be understood as asserting that complementing all three ports of an AND gate converts it to an OR gate and vice versa. Complementing both ports of an inverter, however, leaves the operation unchanged.

More generally, one may complement any of the eight subsets of the three ports of either an AND or OR gate. The resulting sixteen possibilities give rise to only eight Boolean operations, namely those with an odd number of 1s in their truth table. There are eight such because the "odd-bit-out" can be either 0 or 1 and can go in any of four positions in the truth table. There being sixteen binary Boolean operations, this must leave eight operations with an even number of 1s in their truth tables. Two of these are the constants 0 and 1 (as binary operations that ignore both their inputs); four are the operations that depend nontrivially on exactly one of their two inputs, namely x, y, ¬x, and ¬y; and the remaining two are x ⊕ y (XOR) and its complement x ≡ y.

The term "algebra" denotes both a subject, namely the subject of algebra, and an object, namely an algebraic structure. Whereas the foregoing has addressed the subject of Boolean algebra, this section deals with mathematical objects called Boolean algebras, defined in full generality as any model of the Boolean laws. We begin with a special case of the notion definable without reference to the laws, namely concrete Boolean algebras, and then give the formal definition of the general notion.

A concrete Boolean algebra or field of sets is any nonempty set of subsets of a given set X closed under the set operations of union, intersection, and complement relative to X.[5]

(Historically, X itself was required to be nonempty as well, to exclude the degenerate or one-element Boolean algebra, which is the one exception to the rule that all Boolean algebras satisfy the same equations, since the degenerate algebra satisfies every equation. However, this exclusion conflicts with the preferred purely equational definition of "Boolean algebra", there being no way to rule out the one-element algebra using only equations: 0 ≠ 1 does not count, being a negated equation. Hence modern authors allow the degenerate Boolean algebra and let X be empty.)

Example 1. The power set 2^X of X, consisting of all subsets of X. Here X may be any set: empty, finite, infinite, or even uncountable.

Example 2. The empty set and X. This two-element algebra shows that a concrete Boolean algebra can be finite even when it consists of subsets of an infinite set.
It can be seen that every field of subsets of X must contain the empty set and X. Hence no smaller example is possible, other than the degenerate algebra obtained by taking X to be empty so as to make the empty set and X coincide.

Example 3. The set of finite and cofinite sets of integers, where a cofinite set is one omitting only finitely many integers. This is clearly closed under complement, and is closed under union because the union of a cofinite set with any set is cofinite, while the union of two finite sets is finite. Intersection behaves like union with "finite" and "cofinite" interchanged. This example is countably infinite because there are only countably many finite sets of integers.

Example 4. For a less trivial example of the point made by example 2, consider a Venn diagram formed by n closed curves partitioning the diagram into 2^n regions, and let X be the (infinite) set of all points in the plane not on any curve but somewhere within the diagram. The interior of each region is thus an infinite subset of X, and every point in X is in exactly one region. Then the set of all 2^(2^n) possible unions of regions (including the empty set obtained as the union of the empty set of regions, and X obtained as the union of all 2^n regions) is closed under union, intersection, and complement relative to X and therefore forms a concrete Boolean algebra. Again, there are finitely many subsets of an infinite set forming a concrete Boolean algebra, with example 2 arising as the case n = 0 of no curves.

A subset Y of X can be identified with an indexed family of bits with index set X, with the bit indexed by x ∈ X being 1 or 0 according to whether or not x ∈ Y. (This is the so-called characteristic function notion of a subset.) For example, a 32-bit computer word consists of 32 bits indexed by the set {0, 1, 2, ..., 31}, with 0 and 31 indexing the low and high order bits respectively. For a smaller example, if X = {a, b, c}, where a, b, c are viewed as bit positions in that order from left to right, the eight subsets {}, {c}, {b}, {b, c}, {a}, {a, c}, {a, b}, and {a, b, c} of X can be identified with the respective bit vectors 000, 001, 010, 011, 100, 101, 110, and 111. Bit vectors indexed by the set of natural numbers are infinite sequences of bits, while those indexed by the reals in the unit interval [0, 1] are packed too densely to be able to write conventionally but nonetheless form well-defined indexed families (imagine coloring every point of the interval [0, 1] either black or white independently; the black points then form an arbitrary subset of [0, 1]).

From this bit vector viewpoint, a concrete Boolean algebra can be defined equivalently as a nonempty set of bit vectors all of the same length (more generally, indexed by the same set) and closed under the bit vector operations of bitwise ∧, ∨, and ¬, as in 1010 ∧ 0110 = 0010, 1010 ∨ 0110 = 1110, and ¬1010 = 0101, the bit vector realizations of intersection, union, and complement respectively. The set {0, 1} and its Boolean operations as treated above can be understood as the special case of bit vectors of length one, which by the identification of bit vectors with subsets can also be understood as the two subsets of a one-element set.
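In most programming languages the bit-vector view comes for free, since machine integers already support bitwise operations. The following Python sketch (the set names are ours) replays the examples from the text.

```python
# Subsets of X = {a, b, c} encoded as 3-bit vectors; bitwise operators
# realize intersection, union, and complement relative to X.
X = 0b111          # the whole set {a, b, c}
A = 0b110          # {a, b}
B = 0b011          # {b, c}

print(bin(A & B))   # 0b10  -> {b},       intersection
print(bin(A | B))   # 0b111 -> {a, b, c}, union
print(bin(X & ~A))  # 0b1   -> {c},       complement relative to X

# The worked examples from the text, on 4-bit vectors:
assert 0b1010 & 0b0110 == 0b0010
assert 0b1010 | 0b0110 == 0b1110
assert 0b1111 & ~0b1010 == 0b0101   # mask to 4 bits after Python's ~
```

The masking in the last line reflects the "relative to X" in the definition: Python integers have no fixed width, so the complement must be cut down to the index set at hand.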
The two-element algebra {0, 1} just described is called the prototypical Boolean algebra, justified by the following observation: the laws satisfied by all nondegenerate concrete Boolean algebras coincide with those satisfied by the prototypical Boolean algebra.

This observation is proved as follows. Certainly any law satisfied by all concrete Boolean algebras is satisfied by the prototypical one, since it is concrete. Conversely, any law that fails for some concrete Boolean algebra must have failed at a particular bit position, in which case that position by itself furnishes a one-bit counterexample to that law. Nondegeneracy ensures the existence of at least one bit position, because there is only one empty bit vector.

The final goal of the next section can be understood as eliminating "concrete" from the above observation. That goal is reached via the stronger observation that, up to isomorphism, all Boolean algebras are concrete.

The Boolean algebras so far have all been concrete, consisting of bit vectors or equivalently of subsets of some set. Such a Boolean algebra consists of a set and operations on that set which can be shown to satisfy the laws of Boolean algebra. Instead of showing that the Boolean laws are satisfied, we can instead postulate a set X, two binary operations on X, and one unary operation, and require that those operations satisfy the laws of Boolean algebra. The elements of X need not be bit vectors or subsets, but can be anything at all. This leads to the more general abstract definition. For the purposes of this definition it is irrelevant how the operations came to satisfy the laws, whether by fiat or proof. All concrete Boolean algebras satisfy the laws (by proof rather than fiat), whence every concrete Boolean algebra is a Boolean algebra according to our definitions.

This axiomatic definition of a Boolean algebra as a set and certain operations satisfying certain laws or axioms by fiat is entirely analogous to the abstract definitions of group, ring, field, etc. characteristic of modern or abstract algebra.

Given any complete axiomatization of Boolean algebra, such as the axioms for a complemented distributive lattice, a sufficient condition for an algebraic structure of this kind to satisfy all the Boolean laws is that it satisfy just those axioms. The following is therefore an equivalent definition: a Boolean algebra is a complemented distributive lattice. The section on axiomatization lists other axiomatizations, any of which can be made the basis of an equivalent definition.

Although every concrete Boolean algebra is a Boolean algebra, not every Boolean algebra need be concrete. Let n be a square-free positive integer, one not divisible by the square of an integer, for example 30 but not 12. The operations of greatest common divisor, least common multiple, and division into n (that is, ¬x = n/x) can be shown to satisfy all the Boolean laws when their arguments range over the positive divisors of n. Hence those divisors form a Boolean algebra. These divisors are not subsets of a set, making the divisors of n a Boolean algebra that is not concrete according to our definitions.

However, if each divisor of n is represented by the set of its prime factors, this nonconcrete Boolean algebra is isomorphic to the concrete Boolean algebra consisting of all sets of prime factors of n, with union corresponding to least common multiple, intersection to greatest common divisor, and complement to division into n. So this example, while not technically concrete, is at least "morally" concrete via this representation, called an isomorphism.
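The divisor example is small enough to verify exhaustively. This Python sketch checks the complement laws and one De Morgan law for n = 30; the helper names are ours.

```python
from math import gcd
from itertools import product

n = 30                                   # square-free, as required above
divisors = [d for d in range(1, n + 1) if n % d == 0]

lcm  = lambda x, y: x * y // gcd(x, y)   # join
comp = lambda x: n // x                  # complement: division into n

for x in divisors:
    # Complement laws: meet(x, comp(x)) = 1 (bottom), join(x, comp(x)) = n (top).
    assert gcd(x, comp(x)) == 1
    assert lcm(x, comp(x)) == n

for x, y in product(divisors, repeat=2):
    # One De Morgan law: comp(lcm(x, y)) = gcd(comp(x), comp(y)).
    assert comp(lcm(x, y)) == gcd(comp(x), comp(y))

print("divisors of", n, "form a Boolean algebra under gcd/lcm")
```

Running the same checks with a non-square-free n such as 12 makes the complement laws fail (gcd(2, 6) = 2 ≠ 1), which is why square-freeness is required.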
This example is an instance of the following notion: a Boolean algebra is called representable when it is isomorphic to a concrete Boolean algebra. The obvious next question is answered positively: every Boolean algebra is representable. That is, up to isomorphism, abstract and concrete Boolean algebras are the same thing. This result depends on the Boolean prime ideal theorem, a choice principle slightly weaker than the axiom of choice.

This strong relationship implies a weaker result strengthening the observation in the previous subsection to the following easy consequence of representability: the laws satisfied by all Boolean algebras coincide with those satisfied by the prototypical Boolean algebra. It is weaker in the sense that it does not of itself imply representability. Boolean algebras are special here; for example, a relation algebra is a Boolean algebra with additional structure, but it is not the case that every relation algebra is representable in the sense appropriate to relation algebras.

The above definition of an abstract Boolean algebra as a set together with operations satisfying "the" Boolean laws raises the question of what those laws are. A simplistic answer is "all Boolean laws", which can be defined as all equations that hold for the Boolean algebra of 0 and 1. However, since there are infinitely many such laws, this is not a satisfactory answer in practice, leading to the question of whether it suffices to require only finitely many laws to hold. In the case of Boolean algebras, the answer is "yes": the finitely many equations listed above suffice. Thus, Boolean algebra is said to be finitely axiomatizable or finitely based.

Moreover, the number of equations needed can be further reduced. To begin with, some of the above laws are implied by some of the others. A sufficient subset of the above laws consists of the pairs of associativity, commutativity, and absorption laws, distributivity of ∧ over ∨ (or the other distributivity law; one suffices), and the two complement laws. In fact, this is the traditional axiomatization of Boolean algebra as a complemented distributive lattice.

By introducing additional laws not listed above, it becomes possible to shorten the list of needed equations yet further; for instance, with the vertical bar representing the Sheffer stroke operation, the single axiom ((a∣b)∣c)∣(a∣((a∣c)∣a)) = c is sufficient to completely axiomatize Boolean algebra. It is also possible to find longer single axioms using more conventional operations; see Minimal axioms for Boolean algebra.[32]

Propositional logic is a logical system that is intimately connected to Boolean algebra.[5] Many syntactic concepts of Boolean algebra carry over to propositional logic with only minor changes in notation and terminology, while the semantics of propositional logic are defined via Boolean algebras in a way that the tautologies (theorems) of propositional logic correspond to equational theorems of Boolean algebra.

Syntactically, every Boolean term corresponds to a propositional formula of propositional logic. In this translation between Boolean algebra and propositional logic, Boolean variables x, y, ... become propositional variables (or atoms) P, Q, ...; Boolean terms such as x ∨ y become propositional formulas P ∨ Q; 0 becomes false or ⊥, and 1 becomes true or ⊤. It is convenient when referring to generic propositions to use Greek letters Φ, Ψ, ... as metavariables (variables outside the language of propositional calculus, used when talking about propositional calculus) to denote propositions.

The semantics of propositional logic rely on truth assignments. The essential idea of a truth assignment is that the propositional variables are mapped to elements of a fixed Boolean algebra, and then the truth value of a propositional formula using these letters is the element of the Boolean algebra that is obtained by computing the value of the Boolean term corresponding to the formula.
In classical semantics, only the two-element Boolean algebra is used, while in Boolean-valued semantics arbitrary Boolean algebras are considered. A tautology is a propositional formula that is assigned truth value 1 by every truth assignment of its propositional variables to an arbitrary Boolean algebra (or, equivalently, every truth assignment to the two-element Boolean algebra).

These semantics permit a translation between tautologies of propositional logic and equational theorems of Boolean algebra. Every tautology Φ of propositional logic can be expressed as the Boolean equation Φ = 1, which will be a theorem of Boolean algebra. Conversely, every theorem Φ = Ψ of Boolean algebra corresponds to the tautologies (Φ ∨ ¬Ψ) ∧ (¬Φ ∨ Ψ) and (Φ ∧ Ψ) ∨ (¬Φ ∧ ¬Ψ). If → is in the language, these last tautologies can also be written as (Φ → Ψ) ∧ (Ψ → Φ), or as two separate theorems Φ → Ψ and Ψ → Φ; if ≡ is available, then the single tautology Φ ≡ Ψ can be used.

One motivating application of propositional calculus is the analysis of propositions and deductive arguments in natural language.[33] Whereas the proposition "if x = 3, then x + 1 = 4" depends on the meanings of such symbols as + and 1, the proposition "if x = 3, then x = 3" does not; it is true merely by virtue of its structure, and remains true whether "x = 3" is replaced by "x = 4" or "the moon is made of green cheese". The generic or abstract form of this tautology is "if P, then P", or in the language of Boolean algebra, P → P.[citation needed]

Replacing P by x = 3, or by any other proposition, is called instantiation of P by that proposition. The result of instantiating P in an abstract proposition is called an instance of the proposition. Thus, x = 3 → x = 3 is a tautology by virtue of being an instance of the abstract tautology P → P. All occurrences of the instantiated variable must be instantiated with the same proposition, to avoid such nonsense as P → x = 3 or x = 3 → x = 4.

Propositional calculus restricts attention to abstract propositions, those built up from propositional variables using Boolean operations. Instantiation is still possible within propositional calculus, but only by instantiating propositional variables by abstract propositions, such as instantiating Q by Q → P in P → (Q → P) to yield the instance P → ((Q → P) → P).

(The availability of instantiation as part of the machinery of propositional calculus avoids the need for metavariables within the language of propositional calculus, since ordinary propositional variables can be considered within the language to denote arbitrary propositions. The metavariables themselves are outside the reach of instantiation, not being part of the language of propositional calculus, but rather part of the same language for talking about it that this sentence is written in, where there is a need to be able to distinguish propositional variables and their instantiations as being distinct syntactic entities.)
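Over the two-element Boolean algebra, tautology checking amounts to evaluating the corresponding Boolean term under every assignment. A minimal Python sketch (the helper names are ours):

```python
from itertools import product

def is_tautology(phi, nvars):
    """phi is a tautology iff the equation phi = 1 holds under every
    assignment of 0/1 values to its variables."""
    return all(phi(*vals) == 1 for vals in product((0, 1), repeat=nvars))

implies = lambda p, q: (1 - p) | q      # P -> Q encoded as (not P) or Q

assert is_tautology(lambda p: implies(p, p), 1)                 # P -> P
assert is_tautology(lambda p, q: implies(p, implies(q, p)), 2)  # P -> (Q -> P)
assert not is_tautology(lambda p, q: implies(p, q), 2)          # P -> Q is not
print("checks passed")
```

Checking 2^n assignments is exponential in the number of variables, which is just the Boolean satisfiability problem mentioned earlier seen from the other side.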
An axiomatization of propositional calculus is a set of tautologies called axioms and one or more inference rules for producing new tautologies from old. A proof in an axiom system A is a finite nonempty sequence of propositions, each of which is either an instance of an axiom of A or follows by some rule of A from propositions appearing earlier in the proof (thereby disallowing circular reasoning). The last proposition is the theorem proved by the proof. Every nonempty initial segment of a proof is itself a proof, whence every proposition in a proof is itself a theorem. An axiomatization is sound when every theorem is a tautology, and complete when every tautology is a theorem.[34]

Propositional calculus is commonly organized as a Hilbert system, whose operations are just those of Boolean algebra and whose theorems are Boolean tautologies, those Boolean terms equal to the Boolean constant 1. Another form is sequent calculus, which has two sorts, propositions as in ordinary propositional calculus, and pairs of lists of propositions called sequents, such as A ∨ B, A ∧ C, ... ⊢ A, B → C, .... The two halves of a sequent are called the antecedent and the succedent respectively. The customary metavariable denoting an antecedent or part thereof is Γ, and for a succedent Δ; thus Γ, A ⊢ Δ would denote a sequent whose succedent is a list Δ and whose antecedent is a list Γ with an additional proposition A appended after it. The antecedent is interpreted as the conjunction of its propositions, the succedent as the disjunction of its propositions, and the sequent itself as the entailment of the succedent by the antecedent.

Entailment differs from implication in that whereas the latter is a binary operation that returns a value in a Boolean algebra, the former is a binary relation which either holds or does not hold. In this sense, entailment is an external form of implication, meaning external to the Boolean algebra, thinking of the reader of the sequent as also being external and interpreting and comparing antecedents and succedents in some Boolean algebra. The natural interpretation of ⊢ is as ≤ in the partial order of the Boolean algebra defined by x ≤ y just when x ∨ y = y. This ability to mix external implication ⊢ and internal implication → in the one logic is among the essential differences between sequent calculus and propositional calculus.[35]

Boolean algebra as the calculus of two values is fundamental to computer circuits, computer programming, and mathematical logic, and is also used in other areas of mathematics such as set theory and statistics.[5]

In the early 20th century, several electrical engineers[who?] intuitively recognized that Boolean algebra was analogous to the behavior of certain types of electrical circuits. Claude Shannon formally proved that such behavior was logically equivalent to Boolean algebra in his 1937 master's thesis, A Symbolic Analysis of Relay and Switching Circuits.

Today, all modern general-purpose computers perform their functions using two-value Boolean logic; that is, their electrical circuits are a physical manifestation of two-value Boolean logic. They achieve this in various ways: as voltages on wires in high-speed circuits and capacitive storage devices, as orientations of a magnetic domain in ferromagnetic storage devices, as holes in punched cards or paper tape, and so on. (Some early computers used decimal circuits or mechanisms instead of two-valued logic circuits.)

Of course, it is possible to code more than two symbols in any given medium. For example, one might use respectively 0, 1, 2, and 3 volts to code a four-symbol alphabet on a wire, or holes of different sizes in a punched card. In practice, the tight constraints of high speed, small size, and low power combine to make noise a major factor. This makes it hard to distinguish between symbols when there are several possible symbols that could occur at a single site. Rather than attempting to distinguish between four voltages on one wire, digital designers have settled on two voltages per wire, high and low. Computers use two-value Boolean circuits for the above reasons.
The most common computer architectures use ordered sequences of Boolean values, called bits, of 32 or 64 values, e.g. 01101000110101100101010101001011. When programming in machine code, assembly language, and certain other programming languages, programmers work with the low-level digital structure of the data registers. These registers operate on voltages, where zero volts represents Boolean 0, and a reference voltage (often +5 V, +3.3 V, or +1.8 V) represents Boolean 1. Such languages support both numeric operations and logical operations. In this context, "numeric" means that the computer treats sequences of bits as binary numbers (base two numbers) and executes arithmetic operations like add, subtract, multiply, or divide. "Logical" refers to the Boolean logical operations of disjunction, conjunction, and negation between two sequences of bits, in which each bit in one sequence is simply compared to its counterpart in the other sequence. Programmers therefore have the option of working in and applying the rules of either numeric algebra or Boolean algebra as needed. A core differentiating feature between these families of operations is the existence of the carry operation in the first but not the second.

Other areas where two values are a good choice are the law and mathematics. In everyday relaxed conversation, nuanced or complex answers such as "maybe" or "only on the weekend" are acceptable. In more focused situations such as a court of law or theorem-based mathematics, however, it is deemed advantageous to frame questions so as to admit a simple yes-or-no answer (is the defendant guilty or not guilty, is the proposition true or false) and to disallow any other answer. However limiting this might prove in practice for the respondent, the principle of the simple yes–no question has become a central feature of both judicial and mathematical logic, making two-valued logic deserving of organization and study in its own right.

A central concept of set theory is membership. An organization may permit multiple degrees of membership, such as novice, associate, and full. With sets, however, an element is either in or out. The candidates for membership in a set work just like the wires in a digital computer: each candidate is either a member or a nonmember, just as each wire is either high or low.

Algebra being a fundamental tool in any area amenable to mathematical treatment, these considerations combine to make the algebra of two values of fundamental importance to computer hardware, mathematical logic, and set theory.

Two-valued logic can be extended to multi-valued logic, notably by replacing the Boolean domain {0, 1} with the unit interval [0, 1], in which case, rather than only taking values 0 or 1, any value between and including 0 and 1 can be assumed. Algebraically, negation (NOT) is replaced with 1 − x, conjunction (AND) is replaced with multiplication (xy), and disjunction (OR) is defined via De Morgan's law. Interpreting these values as logical truth values yields a multi-valued logic, which forms the basis for fuzzy logic and probabilistic logic. In these interpretations, a value is interpreted as the "degree" of truth: to what extent a proposition is true, or the probability that the proposition is true.
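The algebraic reading of the multi-valued extension translates directly into code; a small Python sketch (the f_ prefixes are ours):

```python
# Fuzzy extension on the unit interval [0, 1], per the text:
f_not = lambda x: 1 - x
f_and = lambda x, y: x * y
f_or  = lambda x, y: f_not(f_and(f_not(x), f_not(y)))   # OR via De Morgan

x, y = 0.8, 0.5
print(f_not(x))      # 0.2 (up to floating point)
print(f_and(x, y))   # 0.4
print(f_or(x, y))    # 0.9 = 1 - (0.2 * 0.5)
```

Restricting x and y to the values 0 and 1 recovers ordinary two-valued Boolean logic, so the extension is conservative over the original algebra.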
The original application for Boolean operations was mathematical logic, where it combines the truth values, true or false, of individual formulas.

Natural languages such as English have words for several Boolean operations, in particular conjunction (and), disjunction (or), negation (not), and implication (implies). But not is synonymous with and not. When used to combine situational assertions such as "the block is on the table" and "cats drink milk", which naïvely are either true or false, the meanings of these logical connectives often have the meaning of their logical counterparts. However, with descriptions of behavior such as "Jim walked through the door", one starts to notice differences such as failure of commutativity; for example, the conjunction of "Jim opened the door" with "Jim walked through the door" in that order is not equivalent to their conjunction in the other order, since and usually means and then in such cases. Questions can be similar: the order "Is the sky blue, and why is the sky blue?" makes more sense than the reverse order. Conjunctive commands about behavior are like behavioral assertions, as in get dressed and go to school. Disjunctive commands such as love me or leave me or fish or cut bait tend to be asymmetric, via the implication that one alternative is less preferable. Conjoined nouns such as tea and milk generally describe aggregation, as with set union, while tea or milk is a choice. However, context can reverse these senses, as in your choices are coffee and tea, which usually means the same as your choices are coffee or tea (alternatives). Double negation, as in "I don't not like milk", rarely means literally "I do like milk" but rather conveys some sort of hedging, as though to imply that there is a third possibility. "Not not P" can be loosely interpreted as "surely P", and although P necessarily implies "not not P", the converse is suspect in English, much as with intuitionistic logic. In view of the highly idiosyncratic usage of conjunctions in natural languages, Boolean algebra cannot be considered a reliable framework for interpreting them.

Boolean operations are used in digital logic to combine the bits carried on individual wires, thereby interpreting them over {0, 1}. When a vector of n identical binary gates is used to combine two bit vectors each of n bits, the individual bit operations can be understood collectively as a single operation on values from a Boolean algebra with 2^n elements.

Naive set theory interprets Boolean operations as acting on subsets of a given set X. As we saw earlier, this behavior exactly parallels the coordinate-wise combinations of bit vectors, with the union of two sets corresponding to the disjunction of two bit vectors, and so on.

The 256-element free Boolean algebra on three generators is deployed in computer displays based on raster graphics, which use bit blit to manipulate whole regions consisting of pixels, relying on Boolean operations to specify how the source region should be combined with the destination, typically with the help of a third region called the mask. Modern video cards offer all 2^(2^3) = 256 ternary operations for this purpose, with the choice of operation being a one-byte (8-bit) parameter. The constants SRC = 0xaa or 0b10101010, DST = 0xcc or 0b11001100, and MSK = 0xf0 or 0b11110000 allow Boolean operations such as (SRC^DST)&MSK (meaning XOR the source and destination and then AND the result with the mask) to be written directly as a constant denoting a byte calculated at compile time: 0x60 in the (SRC^DST)&MSK example, 0x66 if just SRC^DST, etc.
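The raster-operation byte is straightforward to compute: each of the 8 bit positions of the constants records the operation's output for one of the 2^3 input combinations, so evaluating the expression on the constants themselves yields the code.

```python
# The compile-time byte for a raster operation, using the constants from
# the text; evaluating the expression on SRC/DST/MSK tabulates the
# operation over all eight input combinations at once.
SRC, DST, MSK = 0xAA, 0xCC, 0xF0

print(hex(SRC ^ DST))           # 0x66: XOR of source and destination
print(hex((SRC ^ DST) & MSK))   # 0x60: ... then AND with the mask
```

The same trick works for any ternary Boolean expression over SRC, DST, and MSK, which is why a single byte suffices to select among all 256 operations.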
At run time the video card interprets the byte as the raster operation indicated by the original expression in a uniform way that requires remarkably little hardware and which takes time completely independent of the complexity of the expression.

Solid modeling systems for computer-aided design offer a variety of methods for building objects from other objects, combination by Boolean operations being one of them. In this method the space in which objects exist is understood as a set S of voxels (the three-dimensional analogue of pixels in two-dimensional graphics) and shapes are defined as subsets of S, allowing objects to be combined as sets via union, intersection, etc. One obvious use is in building a complex shape from simple shapes simply as the union of the latter. Another use is in sculpting, understood as removal of material: any grinding, milling, routing, or drilling operation that can be performed with physical machinery on physical materials can be simulated on the computer with the Boolean operation x ∧ ¬y or x − y, which in set theory is set difference: remove the elements of y from those of x. Thus, given two shapes, one to be machined and the other the material to be removed, the result of machining the former to remove the latter is described simply as their set difference.

Search engine queries also employ Boolean logic. For this application, each web page on the Internet may be considered to be an "element" of a "set". The following examples use a syntax supported by Google.[NB 1]
https://en.wikipedia.org/wiki/Boolean_logic
In computer networking, IP address spoofing or IP spoofing is the creation of Internet Protocol (IP) packets with a false source IP address, for the purpose of impersonating another computing system.[1]

The basic protocol for sending data over the Internet and many other computer networks is the Internet Protocol (IP). The protocol specifies that each IP packet must have a header which contains (among other things) the IP address of the sender of the packet. The source IP address is normally the address that the packet was sent from, but the sender's address in the header can be altered, so that to the recipient it appears that the packet came from another source. The protocol requires the receiving computer to send back a response to the source IP address; therefore, spoofing is mainly used when the sender can anticipate the network response or does not care about the response.

The source IP address provides only limited information about the sender. It may provide general information on the region, city, and town from which the packet was sent. It does not provide information on the identity of the sender or the computer being used.

IP address spoofing involving the use of a trusted IP address can be used by network intruders to overcome network security measures, such as authentication based on IP addresses. This type of attack is most effective where trust relationships exist between machines. For example, it is common on some corporate networks to have internal systems trust each other, so that users can log in without a username or password provided they are connecting from another machine on the internal network, which would require them already being logged in. By spoofing a connection from a trusted machine, an attacker on the same network may be able to access the target machine without authentication.

IP address spoofing is most frequently used in denial-of-service attacks,[2] where the objective is to flood the target with an overwhelming volume of traffic, and the attacker does not care about receiving responses to the attack packets. Packets with spoofed IP addresses are more difficult to filter, since each spoofed packet appears to come from a different address, and they hide the true source of the attack. Denial-of-service attacks that use spoofing typically randomly choose addresses from the entire IP address space, though more sophisticated spoofing mechanisms might avoid non-routable addresses or unused portions of the IP address space. The proliferation of large botnets makes spoofing less important in denial-of-service attacks, but attackers typically have spoofing available as a tool, if they want to use it, so defenses against denial-of-service attacks that rely on the validity of the source IP address in attack packets might have trouble with spoofed packets. In DDoS attacks, the attacker may decide to spoof the IP source address to randomly generated addresses, so the victim machine cannot distinguish between the spoofed packets and legitimate packets. The replies are then sent to random addresses that do not end up anywhere in particular. Such packets-to-nowhere are called backscatter, and there are network telescopes monitoring backscatter to measure the statistical intensity of DDoS attacks on the Internet over time.[3]

The use of packets with a false source IP address is not always evidence of malicious intent.
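The header layout shows why this is possible. The following C sketch lays out an IPv4 header per RFC 791 (the struct and its field names are this sketch's own) and writes an arbitrary value into the source field; addresses are drawn from documentation ranges. Nothing in IP authenticates that field, and actually emitting such a packet would further require a raw socket, a packed struct, a computed checksum, and elevated privileges.

    #include <stdint.h>
    #include <string.h>
    #include <stdio.h>
    #include <arpa/inet.h>

    /* A minimal IPv4 header layout (no options), per RFC 791. Real
       wire code must pack this struct and fill in the checksum. */
    struct ipv4_hdr {
        uint8_t  ver_ihl;    /* version (4) and header length in words */
        uint8_t  tos;
        uint16_t total_len;
        uint16_t id;
        uint16_t frag_off;
        uint8_t  ttl;
        uint8_t  proto;
        uint16_t checksum;   /* covers the header only; recomputed by
                                every router, so it proves nothing
                                about who sent the packet */
        uint32_t src;        /* source address: whatever the sender says */
        uint32_t dst;
    };

    int main(void) {
        struct ipv4_hdr h;
        memset(&h, 0, sizeof h);
        h.ver_ihl = 0x45;                    /* IPv4, 5-word header */
        h.ttl = 64;
        h.dst = inet_addr("203.0.113.9");    /* documentation address */
        h.src = inet_addr("198.51.100.77");  /* arbitrary, possibly false */
        printf("source field set to 0x%08x\n", (unsigned)ntohl(h.src));
        return 0;
    }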
For example, in performance testing of websites, hundreds or even thousands of "vusers" (virtual users) may be created, each executing a test script against the website under test, in order to simulate what will happen when the system goes "live" and a large number of users log in simultaneously.[citation needed] Since each user will normally have its own IP address, commercial testing products (such as HP LoadRunner, WebLOAD, and others) can use IP spoofing, allowing each user its own "return address" as well.[citation needed]

IP spoofing is also used in some server-side load balancing. It lets the load balancer spray incoming traffic without needing to be in the return path from the servers to the client. This saves a networking hop through the switches and the load balancer, as well as outbound message-processing load on the load balancer. Output usually has more packets and bytes, so the savings are significant.[4][5]

Configuration and services that are vulnerable to IP spoofing:

Packet filtering is one defense against IP spoofing attacks. The gateway to a network usually performs ingress filtering, which is blocking of packets from outside the network with a source address inside the network. This prevents an outside attacker spoofing the address of an internal machine. Ideally, the gateway would also perform egress filtering on outgoing packets, which is blocking of packets from inside the network with a source address that is not inside. This prevents an attacker within the filtered network from launching IP spoofing attacks against external machines. An intrusion detection system (IDS) is a common use of packet filtering, which has been used to secure environments for sharing data over network- and host-based IDS approaches.[6]

It is also recommended to design network protocols and services so that they do not rely on the source IP address for authentication. Some upper-layer protocols have their own defense against IP spoofing attacks. For example, Transmission Control Protocol (TCP) uses sequence numbers negotiated with the remote machine to ensure that arriving packets are part of an established connection. Since the attacker normally cannot see any reply packets, the sequence number must be guessed in order to hijack the connection. The poor implementation in many older operating systems and network devices, however, means that TCP sequence numbers can be predicted.

The term spoofing is also sometimes used to refer to header forgery, the insertion of false or misleading information in e-mail or netnews headers. Falsified headers are used to mislead the recipient, or network applications, as to the origin of a message. This is a common technique of spammers and sporgers, who wish to conceal the origin of their messages to avoid being tracked.
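A minimal sketch of the ingress rule described above, in C; the /16 prefix and the function name are hypothetical, and a real gateway would apply the test per interface:

    #include <stdint.h>
    #include <stdbool.h>
    #include <stdio.h>

    #define NET_PREFIX 0xC0A80000u  /* 192.168.0.0 (hypothetical network) */
    #define NET_MASK   0xFFFF0000u  /* /16 */

    /* Ingress filtering: a packet arriving from outside must not
       claim a source address inside the protected prefix. */
    static bool ingress_permit(uint32_t src_addr) {
        return (src_addr & NET_MASK) != NET_PREFIX;
    }

    int main(void) {
        printf("%d\n", ingress_permit(0xC0A80101)); /* 192.168.1.1 -> 0, drop */
        printf("%d\n", ingress_permit(0x08080808)); /* 8.8.8.8     -> 1, pass */
        return 0;
    }

Egress filtering is the mirror-image test applied to outbound packets: drop anything whose source is not inside the prefix.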
https://en.wikipedia.org/wiki/IP_address_spoofing
Virtual crime can be described as a criminal act conducted in a virtual world – usually massively multiplayer online role-playing games (MMORPGs). To grasp the definition of virtual crime, the modern interpretation of the term "virtual" must be assessed to portray the implications of virtual crime. In this sense, virtual crime describes those online acts that "evoke the effects of real crime" but are not widely considered to be prosecutable acts.[1]

There are several interpretations of the term "virtual crime". One scholar defined virtual crime as needing to have all the qualities of a real crime, and so it was not a new subset of crime at all.[2] It is difficult to prove that there are real-life implications of virtual crime, thus it is not widely accepted as prosecutable.[2]

Examples of virtual crimes include mugging, sexual assault, theft, and construction of sweatshops – all of which are usually committed within virtual worlds, metaverses, and economies.[3][4]

MMORPG – Massively multiplayer online role-playing game, a video game that combines aspects of a role-playing video game and a massively multiplayer online game. MMORPGs are a platform susceptible to virtual crime.

MMOG or MMO – Massively multiplayer online game, an online video game with a large number of players on the same server. MMOGs are also platforms susceptible to virtual crime.

Metaverse – In science fiction, the metaverse is a hypothetical iteration of the Internet as a single, universal, and immersive virtual world facilitated by the use of virtual reality (VR), augmented reality (AR), and mixed/extended reality (MR/XR) headsets.[3] In colloquial usage, a metaverse is a network of 3D virtual worlds focused on social and economic connection.[3][5][6][7] In scientific research, it is defined as "a three-dimensional online environment in which users represented by avatars interact with each other in virtual spaces decoupled from the real physical world".[5]

Virtual world – Also called a virtual space, a virtual world is a computer-simulated environment which may be populated by many users who can create a personal avatar, simultaneously and independently explore the virtual world, participate in its activities, and communicate with others.[8][9][10] This is where virtual crime takes place.
Virtual economy – Also called a synthetic economy, a virtual economy exists within a virtual world, and users utilize it to buy, sell, and invest in virtual items, services, and properties.[4] With the rise of virtual worlds, virtual economies see an increase in usage, demand, and currency exchange within, much like in real life.[4] In 2014, the exchange of currency for virtual property in Second Life, a popular virtual world, was US$3.2 billion.[4] For perspective, this was the estimated combined annual trade for virtual economies in 2004.[4]

Individuals or players within virtual worlds explore, build their characters, and collect items through gameplay or various tasks.[4] These goods and services carry demonstrable value by standard conceptions of economic value, because players are willing to substitute real economic resources of time and money (monthly fees) in exchange for these resources.[4] However, in most games, players do not own, materially or intellectually, any part of the game world, and merely pay to use it.[4] The ownership players have over in-game assets has evolved with the emergence of blockchain games.[11]

As virtual worlds become more popular and virtual economies rise, many issues and many opportunities arise as well.[4] For example, eBay, along with specialist trading sites, has allowed players to sell their wares. This has attracted fraudulent sales as well as theft.[12] Many game developers, such as Blizzard Entertainment (responsible for World of Warcraft), oppose and even prohibit the practice.[13]

In the online world of Britannia, the currency of one Annum equates to about US$3.40.[14] If someone were to steal another player's virtual currency, they could convert it to US dollars via PayPal, though this problem has not yet been reported. This stirs controversy over whether or not this should be dealt with like real crime, as there are real-life implications.[15]

While not resulting in physical injury or physical assault, virtual sexual assault can inflict emotional harm and psychological trauma.[16] One of the earliest reported instances of virtual sexual assault occurred in 1993 in the computer-programming world of LambdaMOO.[16] In 2007, a Belgian citizen reported an instance of non-consensual sexual activity in the virtual world Second Life to Belgian police.[16]

In 2005, in the game The Sims Online, a 17-year-old boy going by the in-game name "Evangeline" was discovered to have built a cyber-brothel, where customers would pay sim-money for minutes of cybersex. This led to the cancellation of his accounts but no legal action, mainly because he was above the age of consent in his area.[17]

In July 2018, a mother in the United States posted on Facebook that her daughter's avatar on Roblox had been gang-raped by two other users.
Roblox stated that it was outraged that a "bad actor" had violated its community policies and rules of conduct, and that it had zero tolerance for such acts.[18] The incident led The Village Voice to reprint its 1993 article, "A Rape in Cyberspace".[19] In July 2021, a previously convicted sex offender was arrested in Illinois for allegedly grooming and soliciting a minor through the use of Roblox.[20]

In November 2021, a beta user of Horizon Worlds reported being groped in-game and that other users supported the conduct.[21] Meta responded that there are built-in tools to block interactions with other users, which are not enabled by default, and that although the incident was "absolutely unfortunate," it provides good feedback in making the blocking feature "trivially easy and findable."[21] A month later on Horizon Worlds, metaverse researcher and psychotherapist Nina Jane Patel reported that her avatar was gang-raped within 60 seconds of joining the platform.[22] Elena Martellozzo, an associate professor of criminology at Middlesex University, says that such abuse may be the result of disinhibition due to the lack of face-to-face interaction, which is exacerbated in the metaverse.[22]

In 2022, a BBC News researcher posing as a 13-year-old girl on VRChat was approached by adult men and directed to sex shops.[23] BBC News also reported that a safety campaigner knows of children who were groomed in games and forced to take part in virtual sex.[23]

More examples of sexual assault in the virtual-reality space include an incident in 2021: Chanelle Siggens logged into the virtual-reality game Population One, and another player simulated groping and ejaculating on her.[24] In 2024, the BBC reported that police were investigating a virtual sexual assault case.[25]

The virtual economies of many MMOs, and the exchange of virtual items and currency for real money, have resulted in the existence of in-game sweatshops. In virtual sweatshops, workers in the developing world – typically China, although there have been reports of this type of activity in Eastern European countries – earn real-world wages for long days spent monotonously performing in-game tasks.[26] Instances typically involve farming of resources or currency, which has given rise to the epithet Chinese Adena Farmer, because of its first reported widespread use in Lineage II. More egregious cases involve using exploits such as duping currency or items. There have also been reports of collusion or vertical integration among farmers and virtual currency exchanges.
In 2002, a company called Blacksnow Interactive, a game-currency exchange, admitted to using workers in a "virtual sweatshop" in Tijuana, Mexico, to farm money and items from Ultima Online and Dark Age of Camelot.[27] When Mythic Entertainment cracked down on the practice, Blacksnow attempted to sue the game company.[27]

In November 2007, it was reported that a Dutch teenager had been arrested for allegedly stealing virtual furniture from "rooms" in the 3D social-networking service Habbo Hotel.[28] The teenagers involved were accused of creating fake Habbo websites in order to lure users into entering their account details, which would then be used to steal virtual furniture bought with real money, totaling €4000.[29]

In China, Qiu Chengwei was sentenced to life in prison in 2005 after stabbing and killing fellow The Legend of Mir 3 gamer Zhu Caoyuan.[30] In the game, Qiu had lent Zhu a powerful sword (a "dragon sabre"), which Zhu then went on to sell on eBay for 7,200 yuan (about £473 or US$870).[30]

The term "virtual mugging" was coined in Japan in 2005, when a player of Lineage II used bots to defeat other players' characters and take their items.[31] The Kagawa prefectural police arrested a Chinese foreign exchange student on August 16 following the reports of virtual mugging and online sales of the stolen items.[31]

In Sweden, a man threatened the families of 26 underage girls if they did not perform sexual acts online; he was sentenced to 10 years in prison and made to pay $131,590 in damages.[32] Official prosecution proceedings regarding virtual crime currently exist in countries like Sweden, but not for a majority of the modern world.[32]
https://en.wikipedia.org/wiki/Virtual_crime
Anti-computer forensics or counter-forensics are techniques used to obstruct forensic analysis.

Anti-forensics has only recently[when?] been recognized as a legitimate field of study. One of the more widely known and accepted definitions comes from Marcus Rogers. One of the earliest detailed presentations of anti-forensics, in Phrack Magazine in 2002, defines anti-forensics as "the removal, or hiding, of evidence in an attempt to mitigate the effectiveness of a forensics investigation".[1]

A more abbreviated definition is given by Scott Berinato in his article entitled The Rise of Anti-Forensics: "Anti-forensics is more than technology. It is an approach to criminal hacking that can be summed up like this: Make it hard for them to find you and impossible for them to prove they found you."[2] Neither author takes into account using anti-forensics methods to ensure the privacy of one's personal data.

Anti-forensics methods are often broken down into several sub-categories to make classification of the various tools and techniques simpler. One of the more widely accepted sub-category breakdowns was developed by Marcus Rogers, who has proposed the following sub-categories: data hiding, artifact wiping, trail obfuscation, and attacks against the CF (computer forensics) processes and tools.[3] Attacks against forensics tools directly have also been called counter-forensics.[4]

Within the field of digital forensics, there is much debate over the purpose and goals of anti-forensic methods. The conventional wisdom is that anti-forensic tools are purely malicious in intent and design. Others believe that these tools should be used to illustrate deficiencies in digital forensic procedures, digital forensic tools, and forensic examiner education. This sentiment was echoed at the 2005 Black Hat conference by anti-forensic tool authors James Foster and Vinnie Liu.[5] They stated that by exposing these issues, forensic investigators will have to work harder to prove that collected evidence is both accurate and dependable. They believe that this will result in better tools and education for the forensic examiner. Also, counter-forensics has significance for defence against espionage, as recovering information with forensic tools serves the goals of spies equally as well as those of investigators.

Data hiding is the process of making data difficult to find while also keeping it accessible for future use. "Obfuscation and encryption of data give an adversary the ability to limit identification and collection of evidence by investigators while allowing access and use to themselves."[6]

Some of the more common forms of data hiding include encryption, steganography, and various other forms of hardware/software-based data concealment. Each of the different data-hiding methods makes digital forensic examinations difficult. When the different data-hiding methods are combined, they can make a successful forensic investigation nearly impossible.

One of the more commonly used techniques to defeat computer forensics is data encryption. In a presentation on encryption and anti-forensic methodologies, the Vice President of Secure Computing, Paul Henry, referred to encryption as a "forensic expert's nightmare".[7]

The majority of publicly available encryption programs allow the user to create virtual encrypted disks which can only be opened with a designated key. Through the use of modern encryption algorithms and various encryption techniques, these programs make the data virtually impossible to read without the designated key.
File-level encryption encrypts only the file contents. This leaves important information such as file name, size, and timestamps unencrypted. Parts of the content of the file can be reconstructed from other locations, such as temporary files, swap files, and deleted, unencrypted copies.

Most encryption programs have the ability to perform a number of additional functions that make digital forensic efforts increasingly difficult. Some of these functions include the use of a keyfile, full-volume encryption, and plausible deniability. The widespread availability of software containing these functions has put the field of digital forensics at a great disadvantage.

Steganography is a technique where information or files are hidden within another file in an attempt to hide data by leaving it in plain sight. "Steganography produces dark data that is typically buried within light data (e.g., a non-perceptible digital watermark buried within a digital photograph)."[8] While some experts have argued that the use of steganography techniques is not very widespread and therefore the subject shouldn't be given a lot of thought, most experts agree that steganography has the capability of disrupting the forensic process when used correctly.[2]

According to Jeffrey Carr, a 2007 edition of Technical Mujahid (a bi-monthly terrorist publication) outlined the importance of using a steganography program called Secrets of the Mujahideen. According to Carr, the program was touted as giving the user the capability to avoid detection by current steganalysis programs. It did this through the use of steganography in conjunction with file compression.[9]

Other forms of data hiding involve the use of tools and techniques to hide data throughout various locations in a computer system. Some of these places can include "memory, slack space, hidden directories, bad blocks, alternate data streams, (and) hidden partitions."[3]

One of the better-known tools often used for data hiding is called Slacker (part of the Metasploit framework).[10] Slacker breaks up a file and places each piece of that file into the slack space of other files, thereby hiding it from the forensic examination software.[8] Another data-hiding technique involves the use of bad sectors. To perform this technique, the user changes a particular sector from good to bad, and then data is placed onto that particular cluster. The belief is that forensic examination tools will see these clusters as bad and continue on without any examination of their contents.[8]

The methods used in artifact wiping are tasked with permanently eliminating particular files or entire file systems. This can be accomplished through the use of a variety of methods that include disk-cleaning utilities, file-wiping utilities, and disk degaussing/destruction techniques.[3]

Disk-cleaning utilities use a variety of methods to overwrite the existing data on disks (see data remanence). The effectiveness of disk-cleaning utilities as anti-forensic tools is often challenged, as some believe they are not completely effective. Experts who don't believe that disk-cleaning utilities are acceptable for disk sanitization base their opinions on current DoD policy, which states that the only acceptable form of sanitization is degaussing. (See National Industrial Security Program.) Disk-cleaning utilities are also criticized because they leave signatures that the file system was wiped, which in some cases is unacceptable.
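As a concrete illustration of one data-hiding technique mentioned above, the following self-contained C sketch hides a message in the least significant bits of a carrier buffer (standing in for raw pixel samples); it is a toy, not any particular tool's method:

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* Each byte of the carrier donates its lowest bit to the message. */
    static void hide(uint8_t *carrier, const uint8_t *msg, size_t msg_len) {
        for (size_t i = 0; i < msg_len * 8; i++) {
            uint8_t bit = (msg[i / 8] >> (7 - i % 8)) & 1;
            carrier[i] = (carrier[i] & ~1u) | bit;   /* overwrite the LSB */
        }
    }

    static void reveal(const uint8_t *carrier, uint8_t *msg, size_t msg_len) {
        memset(msg, 0, msg_len);
        for (size_t i = 0; i < msg_len * 8; i++)
            msg[i / 8] |= (carrier[i] & 1) << (7 - i % 8);
    }

    int main(void) {
        uint8_t pixels[64] = {0};   /* pretend these are image samples */
        uint8_t out[8];
        hide(pixels, (const uint8_t *)"secret!", 8);   /* 7 chars + NUL */
        reveal(pixels, out, 8);
        printf("%s\n", out);        /* prints: secret! */
        return 0;
    }

Because each sample changes by at most one, the carrier remains visually indistinguishable from the original, which is what makes purely visual inspection useless against this technique.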
Some of the widely used disk-cleaning utilities include DBAN, srm, BCWipe Total WipeOut, KillDisk, PC Inspector, and CyberScrub's cyberCide. Another option, which is approved by the NIST and the NSA, is CMRR Secure Erase, which uses the Secure Erase command built into the ATA specification.

File-wiping utilities are used to delete individual files from an operating system. The advantage of file-wiping utilities is that they can accomplish their task in a relatively short amount of time, as opposed to disk-cleaning utilities, which take much longer. Another advantage of file-wiping utilities is that they generally leave a much smaller signature than disk-cleaning utilities. There are two primary disadvantages of file-wiping utilities: first, they require user involvement in the process, and second, some experts believe that file-wiping programs don't always correctly and completely wipe file information.[11][12] Some of the widely used file-wiping utilities include BCWipe, R-Wipe & Clean, Eraser, Aevita Wipe & Delete, and CyberScrub's PrivacySuite. On Linux, tools like shred and srm can also be used to wipe single files.[13][14] SSDs are by design more difficult to wipe, since the firmware can write to other cells, therefore allowing data recovery. In these instances, ATA Secure Erase should be used on the whole drive, with tools like hdparm that support it.[15]

Disk degaussing is a process by which a magnetic field is applied to a digital media device. The result is a device that is entirely clean of any previously stored data. Degaussing is rarely used as an anti-forensic method, despite the fact that it is an effective means to ensure data has been wiped. This is attributed to the high cost of degaussing machines, which are difficult for the average consumer to afford.

A more commonly used technique to ensure data wiping is the physical destruction of the device. The NIST recommends that "physical destruction can be accomplished using a variety of methods, including disintegration, incineration, pulverizing, shredding and melting."[16]

The purpose of trail obfuscation is to confuse, disorient, and divert the forensic examination process. Trail obfuscation covers a variety of techniques and tools that include "log cleaners, spoofing, misinformation, backbone hopping, zombied accounts, trojan commands."[3]

One of the more widely known trail-obfuscation tools is Timestomp (part of the Metasploit Framework).[10] Timestomp gives the user the ability to modify file metadata pertaining to access, creation, and modification times/dates.[2] By using programs such as Timestomp, a user can render any number of files useless in a legal setting by directly calling into question the files' credibility.[citation needed]

Another well-known trail-obfuscation program is Transmogrify (also part of the Metasploit Framework).[10] In most file types, the header of the file contains identifying information. A .jpg would have header information that identifies it as a .jpg; a .doc would have information that identifies it as a .doc, and so on. Transmogrify allows the user to change the header information of a file, so a .jpg header could be changed to a .doc header. If a forensic examination program or operating system were to conduct a search for images on a machine, it would simply see a .doc file and skip over it.[2]

In the past, anti-forensic tools have focused on attacking the forensic process by destroying data, hiding data, or altering data-usage information.
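The kind of metadata manipulation Timestomp performs can be illustrated with the portable POSIX interface; this sketch (the file name is hypothetical, and Timestomp itself additionally targets NTFS-specific timestamps) rewrites a file's access and modification times to an arbitrary instant:

    #include <stdio.h>
    #include <sys/time.h>

    int main(void) {
        struct timeval times[2];
        times[0].tv_sec  = 946684800;  /* access time: 2000-01-01 00:00 UTC */
        times[0].tv_usec = 0;
        times[1] = times[0];           /* modification time: same instant */

        /* utimes() overwrites both timestamps in one call; afterwards a
           naive timeline reconstruction sees the forged date. */
        if (utimes("example.txt", times) != 0)
            perror("utimes");
        return 0;
    }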
Anti-forensics has recently moved into a new realm where tools and techniques are focused on attacking the forensic tools that perform the examinations. These new anti-forensic methods have benefited from a number of factors, including well-documented forensic examination procedures, widely known forensic-tool vulnerabilities, and digital forensic examiners' heavy reliance on their tools.[3]

During a typical forensic examination, the examiner creates an image of the computer's disks. This keeps the original computer (evidence) from being tainted by forensic tools. Hashes are created by the forensic examination software to verify the integrity of the image. One of the recent anti-tool techniques targets the integrity of the hash that is created to verify the image. By affecting the integrity of the hash, any evidence that is collected during the subsequent investigation can be challenged.[3]

To prevent physical access to data while the computer is powered on (from a grab-and-go theft, for instance, as well as seizure by law enforcement), there are different solutions that could be implemented:

Some of these methods rely on shutting the computer down, while the data might be retained in the RAM from a couple of seconds up to a couple of minutes, theoretically allowing for a cold boot attack.[21][22][23] Cryogenically freezing the RAM might extend this time even further, and some attacks in the wild have been spotted.[24] Methods to counteract this attack exist and can overwrite the memory before shutting down. Some anti-forensic tools even detect the temperature of the RAM and perform a shutdown when it falls below a certain threshold.[25][26]

Attempts to create a tamper-resistant desktop computer have been made (as of 2020, the ORWL model is one of the best examples). However, the security of this particular model is debated by security researcher and Qubes OS founder Joanna Rutkowska.[27]

While the study and applications of anti-forensics are generally available to protect users from forensic attacks on their confidential data by their adversaries (e.g., investigative journalists, human rights defenders, activists, corporate or government espionage), Marcus Rogers of Purdue University notes that anti-forensics tools can also be used by criminals. Rogers uses a more traditional "crime scene" approach when defining anti-forensics: "Attempts to negatively affect the existence, amount and/or quality of evidence from a crime scene, or make the analysis and examination of evidence difficult or impossible to conduct."[3]

Anti-forensic methods rely on several weaknesses in the forensic process, including the human element, dependency on tools, and the physical/logical limitations of computers.[28] By reducing the forensic process's susceptibility to these weaknesses, an examiner can reduce the likelihood of anti-forensic methods successfully impacting an investigation.[28] This may be accomplished by providing increased training for investigators and corroborating results using multiple tools.
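To make the imaging step above concrete: a sketch, assuming OpenSSL's libcrypto, that computes the SHA-256 digest of an image file. Recomputing the digest later and comparing it against the recorded value is how an examiner demonstrates the image has not changed; the anti-tool attacks described above aim precisely at undermining trust in this step.

    #include <stdio.h>
    #include <openssl/evp.h>

    int main(int argc, char **argv) {
        if (argc != 2) { fprintf(stderr, "usage: %s image\n", argv[0]); return 1; }
        FILE *f = fopen(argv[1], "rb");
        if (!f) { perror("fopen"); return 1; }

        EVP_MD_CTX *ctx = EVP_MD_CTX_new();
        EVP_DigestInit_ex(ctx, EVP_sha256(), NULL);

        unsigned char buf[4096];
        size_t n;
        while ((n = fread(buf, 1, sizeof buf, f)) > 0)
            EVP_DigestUpdate(ctx, buf, n);        /* hash the image stream */

        unsigned char md[EVP_MAX_MD_SIZE];
        unsigned int len;
        EVP_DigestFinal_ex(ctx, md, &len);
        EVP_MD_CTX_free(ctx);
        fclose(f);

        for (unsigned int i = 0; i < len; i++) printf("%02x", md[i]);
        printf("\n");
        return 0;
    }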
https://en.wikipedia.org/wiki/Anti-computer_forensics
In information theory and coding theory with applications in computer science and telecommunications, error detection and correction (EDAC) or error control are techniques that enable reliable delivery of digital data over unreliable communication channels. Many communication channels are subject to channel noise, and thus errors may be introduced during transmission from the source to a receiver. Error detection techniques allow detecting such errors, while error correction enables reconstruction of the original data in many cases.

Error detection is the detection of errors caused by noise or other impairments during transmission from the transmitter to the receiver. Error correction is the detection of errors and reconstruction of the original, error-free data.

In classical antiquity, copyists of the Hebrew Bible were paid for their work according to the number of stichs (lines of verse). As the prose books of the Bible were hardly ever written in stichs, the copyists, in order to estimate the amount of work, had to count the letters.[1] This also helped ensure accuracy in the transmission of the text with the production of subsequent copies.[2][3] Between the 7th and 10th centuries CE, a group of Jewish scribes formalized and expanded this to create the Numerical Masorah to ensure accurate reproduction of the sacred text. It included counts of the number of words in a line, section, book, and groups of books, noting the middle stich of a book, word-use statistics, and commentary.[1] Standards became such that a deviation in even a single letter in a Torah scroll was considered unacceptable.[4] The effectiveness of their error correction method was verified by the accuracy of copying through the centuries, demonstrated by the discovery of the Dead Sea Scrolls in 1947–1956, dating from c. 150 BCE – 75 CE.[5]

The modern development of error correction codes is credited to Richard Hamming in 1947.[6] A description of Hamming's code appeared in Claude Shannon's A Mathematical Theory of Communication[7] and was quickly generalized by Marcel J. E. Golay.[8]

All error-detection and correction schemes add some redundancy (i.e., some extra data) to a message, which receivers can use to check the consistency of the delivered message and to recover data that has been determined to be corrupted. Error detection and correction schemes can be either systematic or non-systematic. In a systematic scheme, the transmitter sends the original (error-free) data and attaches a fixed number of check bits (or parity data), which are derived from the data bits by some encoding algorithm. If error detection is required, a receiver can simply apply the same algorithm to the received data bits and compare its output with the received check bits; if the values do not match, an error has occurred at some point during the transmission. If error correction is required, a receiver can apply the decoding algorithm to the received data bits and the received check bits to recover the original error-free data. In a system that uses a non-systematic code, the original message is transformed into an encoded message carrying the same information and that has at least as many bits as the original message.

Good error-control performance requires the scheme to be selected based on the characteristics of the communication channel. Common channel models include memoryless models, where errors occur randomly and with a certain probability, and dynamic models, where errors occur primarily in bursts.
Consequently, error-detecting and -correcting codes can be generally distinguished between random-error-detecting/correcting and burst-error-detecting/correcting. Some codes can also be suitable for a mixture of random errors and burst errors.

If the channel characteristics cannot be determined, or are highly variable, an error-detection scheme may be combined with a system for retransmission of erroneous data. This is known as automatic repeat request (ARQ), and is most notably used in the Internet. An alternate approach for error control is hybrid automatic repeat request (HARQ), which is a combination of ARQ and error-correction coding.

There are three major types of error correction:[9]

Automatic repeat request (ARQ) is an error-control method for data transmission that makes use of error-detection codes, acknowledgment and/or negative acknowledgment messages, and timeouts to achieve reliable data transmission. An acknowledgment is a message sent by the receiver to indicate that it has correctly received a data frame. Usually, when the transmitter does not receive the acknowledgment before the timeout occurs (i.e., within a reasonable amount of time after sending the data frame), it retransmits the frame until it is either correctly received or the error persists beyond a predetermined number of retransmissions. Three types of ARQ protocols are Stop-and-wait ARQ, Go-Back-N ARQ, and Selective Repeat ARQ. ARQ is appropriate if the communication channel has varying or unknown capacity, such as is the case on the Internet. However, ARQ requires the availability of a back channel, results in possibly increased latency due to retransmissions, and requires the maintenance of buffers and timers for retransmissions, which in the case of network congestion can put a strain on the server and overall network capacity.[10] For example, ARQ is used on shortwave radio data links in the form of ARQ-E, or combined with multiplexing as ARQ-M.

Forward error correction (FEC) is a process of adding redundant data such as an error-correcting code (ECC) to a message so that it can be recovered by a receiver even when a number of errors (up to the capability of the code being used) are introduced, either during the process of transmission or on storage. Since the receiver does not have to ask the sender for retransmission of the data, a backchannel is not required in forward error correction. Error-correcting codes are used in lower-layer communication such as cellular networks, high-speed fiber-optic communication, and Wi-Fi,[11][12] as well as for reliable storage in media such as flash memory, hard disks, and RAM.[13] Error-correcting codes are usually distinguished between convolutional codes and block codes.

Shannon's theorem is an important theorem in forward error correction, and describes the maximum information rate at which reliable communication is possible over a channel that has a certain error probability or signal-to-noise ratio (SNR). This strict upper limit is expressed in terms of the channel capacity. More specifically, the theorem says that there exist codes such that, with increasing encoding length, the probability of error on a discrete memoryless channel can be made arbitrarily small, provided that the code rate is smaller than the channel capacity. The code rate is defined as the fraction k/n of k source symbols and n encoded symbols. The actual maximum code rate allowed depends on the error-correcting code used, and may be lower.
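The ARQ retransmission loop can be sketched in a few lines of C. This toy models loss with rand() rather than real timers and frames; the retry limit plays the role of the predetermined number of retransmissions described above.

    #include <stdio.h>
    #include <stdlib.h>
    #include <stdbool.h>

    #define MAX_RETRIES 8

    static bool channel_delivers(void) { return rand() % 100 < 70; } /* 70% */

    /* Stop-and-wait: send one frame, wait for its acknowledgment,
       retransmit on timeout until acknowledged or out of retries. */
    static bool send_frame(int seq) {
        for (int attempt = 1; attempt <= MAX_RETRIES; attempt++) {
            bool data_ok = channel_delivers();            /* frame arrives? */
            bool ack_ok  = data_ok && channel_delivers(); /* ACK returns?  */
            if (ack_ok) {
                printf("frame %d acknowledged on attempt %d\n", seq, attempt);
                return true;
            }
            printf("frame %d: timeout, retransmitting\n", seq);
        }
        return false;   /* error persists beyond the retry budget */
    }

    int main(void) {
        for (int seq = 0; seq < 3; seq++)
            if (!send_frame(seq))
                printf("giving up on frame %d\n", seq);
        return 0;
    }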
This is because Shannon's proof was only of existential nature, and did not show how to construct codes that are both optimal and have efficient encoding and decoding algorithms.

Hybrid ARQ is a combination of ARQ and forward error correction. There are two basic approaches:[10] The latter approach is particularly attractive on an erasure channel when using a rateless erasure code.

Error detection is most commonly realized using a suitable hash function (or specifically, a checksum, cyclic redundancy check, or other algorithm). A hash function adds a fixed-length tag to a message, which enables receivers to verify the delivered message by recomputing the tag and comparing it with the one provided. There exists a vast variety of different hash function designs. However, some are of particularly widespread use because of either their simplicity or their suitability for detecting certain kinds of errors (e.g., the cyclic redundancy check's performance in detecting burst errors). A random-error-correcting code based on minimum distance coding can provide a strict guarantee on the number of detectable errors, but it may not protect against a preimage attack.

A repetition code is a coding scheme that repeats the bits across a channel to achieve error-free communication. Given a stream of data to be transmitted, the data are divided into blocks of bits. Each block is transmitted some predetermined number of times. For example, to send the bit pattern 1011, the four-bit block can be repeated three times, thus producing 1011 1011 1011. If this twelve-bit pattern was received as 1010 1011 1011 – where the first block is unlike the other two – an error has occurred. A repetition code is very inefficient and can be susceptible to problems if the error occurs in exactly the same place for each group (e.g., 1010 1010 1010 in the previous example would be detected as correct). The advantage of repetition codes is that they are extremely simple, and they are in fact used in some transmissions of numbers stations.[14][15]

A parity bit is a bit that is added to a group of source bits to ensure that the number of set bits (i.e., bits with value 1) in the outcome is even or odd. It is a very simple scheme that can be used to detect a single error or any other odd number (i.e., three, five, etc.) of errors in the output. An even number of flipped bits will make the parity bit appear correct even though the data is erroneous. Parity bits added to each word sent are called transverse redundancy checks, while those added at the end of a stream of words are called longitudinal redundancy checks. For example, if each of a series of m-bit words has a parity bit added, showing whether there were an odd or even number of ones in that word, any word with a single error in it will be detected. It will not be known where in the word the error is, however. If, in addition, after each stream of n words a parity sum is sent, each bit of which shows whether there were an odd or even number of ones at that bit position sent in the most recent group, the exact position of the error can be determined and the error corrected. This method is only guaranteed to be effective, however, if there is no more than one error in every group of n words. With more error-correction bits, more errors can be detected and in some cases corrected. There are also other bit-grouping techniques.

A checksum of a message is a modular arithmetic sum of message code words of a fixed word length (e.g., byte values).
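Both simple schemes just described fit in a few lines of C; this sketch decodes a triple repetition code by majority vote and computes an even parity bit over one byte:

    #include <stdio.h>
    #include <stdint.h>

    /* Triple repetition: majority vote over the three received copies
       corrects any single flipped copy. */
    static int rep3_decode(int a, int b, int c) { return (a + b + c) >= 2; }

    /* Even parity over an 8-bit word: the check bit makes the total
       number of 1s even; any single-bit error flips the check. */
    static int parity8(uint8_t w) {
        int p = 0;
        for (int i = 0; i < 8; i++) p ^= (w >> i) & 1;
        return p;
    }

    int main(void) {
        /* The bit 1 sent as 1 1 1; one copy corrupted in transit. */
        printf("decoded: %d\n", rep3_decode(1, 0, 1));          /* still 1 */

        uint8_t word = 0x6B;            /* 0110 1011: five 1s, parity 1 */
        int check = parity8(word);
        printf("parity bit: %d\n", check);
        printf("error detected: %d\n", parity8(word ^ 0x10) != check); /* 1 */
        return 0;
    }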
The sum may be negated by means of a ones'-complement operation prior to transmission to detect unintentional all-zero messages. Checksum schemes include parity bits, check digits, and longitudinal redundancy checks. Some checksum schemes, such as the Damm algorithm, the Luhn algorithm, and the Verhoeff algorithm, are specifically designed to detect errors commonly introduced by humans in writing down or remembering identification numbers.

A cyclic redundancy check (CRC) is a non-secure hash function designed to detect accidental changes to digital data in computer networks. It is not suitable for detecting maliciously introduced errors. It is characterized by specification of a generator polynomial, which is used as the divisor in a polynomial long division over a finite field, taking the input data as the dividend. The remainder becomes the result. A CRC has properties that make it well suited for detecting burst errors. CRCs are particularly easy to implement in hardware and are therefore commonly used in computer networks and storage devices such as hard disk drives. The parity bit can be seen as a special-case 1-bit CRC.

The output of a cryptographic hash function, also known as a message digest, can provide strong assurances about data integrity, whether changes of the data are accidental (e.g., due to transmission errors) or maliciously introduced. Any modification to the data will likely be detected through a mismatching hash value. Furthermore, given some hash value, it is typically infeasible to find some input data (other than the one given) that will yield the same hash value. If an attacker can change not only the message but also the hash value, then a keyed hash or message authentication code (MAC) can be used for additional security. Without knowing the key, it is not possible for the attacker to easily or conveniently calculate the correct keyed hash value for a modified message.

Digital signatures can provide strong assurances about data integrity, whether the changes of the data are accidental or maliciously introduced. Digital signatures are perhaps most notable for being part of the HTTPS protocol for securely browsing the web.

Any error-correcting code can be used for error detection. A code with minimum Hamming distance d can detect up to d − 1 errors in a code word. Using minimum-distance-based error-correcting codes for error detection can be suitable if a strict limit on the minimum number of errors to be detected is desired. Codes with minimum Hamming distance d = 2 are degenerate cases of error-correcting codes and can be used to detect single errors. The parity bit is an example of a single-error-detecting code.

Applications that require low latency (such as telephone conversations) cannot use automatic repeat request (ARQ); they must use forward error correction (FEC). By the time an ARQ system discovers an error and re-transmits it, the re-sent data will arrive too late to be usable. Applications where the transmitter immediately forgets the information as soon as it is sent (such as most television cameras) cannot use ARQ; they must use FEC because when an error occurs, the original data is no longer available. Applications that use ARQ must have a return channel; applications having no return channel cannot use ARQ. Applications that require extremely low error rates (such as digital money transfers) must use ARQ due to the possibility of uncorrectable errors with FEC.
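For illustration, here is a minimal bitwise CRC-32 in C, using the IEEE generator polynomial in its reflected form (0xEDB88320). Production code normally uses a table-driven variant, but the divide-by-the-generator structure is the same:

    #include <stdio.h>
    #include <stdint.h>
    #include <string.h>

    /* Bitwise CRC-32: the message, as a polynomial over GF(2), is
       divided by the generator; the remainder is the check value. */
    static uint32_t crc32(const uint8_t *data, size_t len) {
        uint32_t crc = 0xFFFFFFFFu;
        for (size_t i = 0; i < len; i++) {
            crc ^= data[i];
            for (int k = 0; k < 8; k++)
                crc = (crc >> 1) ^ (0xEDB88320u & -(crc & 1));
        }
        return ~crc;
    }

    int main(void) {
        const char *msg = "123456789";
        printf("%08x\n", crc32((const uint8_t *)msg, strlen(msg)));
        /* The standard check value for "123456789" is cbf43926. */
        return 0;
    }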
Reliability and inspection engineering also make use of the theory of error-correcting codes,[16] as well as natural language.[17] In a typical TCP/IP stack, error control is performed at multiple levels.

The development of error-correction codes was tightly coupled with the history of deep-space missions due to the extreme dilution of signal power over interplanetary distances and the limited power availability aboard space probes. Whereas early missions sent their data uncoded, starting in 1968, digital error correction was implemented in the form of (sub-optimally decoded) convolutional codes and Reed–Muller codes.[18] The Reed–Muller code was well suited to the noise the spacecraft was subject to (approximately matching a bell curve), and was implemented for the Mariner spacecraft and used on missions between 1969 and 1977.

The Voyager 1 and Voyager 2 missions, which started in 1977, were designed to deliver color imaging and scientific information from Jupiter and Saturn.[19] This resulted in increased coding requirements, and thus, the spacecraft were supported by (optimally Viterbi-decoded) convolutional codes that could be concatenated with an outer Golay (24,12,8) code. The Voyager 2 craft additionally supported an implementation of a Reed–Solomon code. The concatenated Reed–Solomon–Viterbi (RSV) code allowed for very powerful error correction, and enabled the spacecraft's extended journey to Uranus and Neptune. After ECC system upgrades in 1989, both craft used V2 RSV coding.

The Consultative Committee for Space Data Systems currently recommends usage of error-correction codes with performance similar to the Voyager 2 RSV code as a minimum. Concatenated codes are increasingly falling out of favor with space missions, and are replaced by more powerful codes such as turbo codes or LDPC codes. The different kinds of deep-space and orbital missions that are conducted suggest that trying to find a one-size-fits-all error-correction system will be an ongoing problem. For missions close to Earth, the nature of the noise in the communication channel is different from that which a spacecraft on an interplanetary mission experiences. Additionally, as a spacecraft increases its distance from Earth, the problem of correcting for noise becomes more difficult.

The demand for satellite transponder bandwidth continues to grow, fueled by the desire to deliver television (including new channels and high-definition television) and IP data. Transponder availability and bandwidth constraints have limited this growth. Transponder capacity is determined by the selected modulation scheme and the proportion of capacity consumed by FEC.

Error detection and correction codes are often used to improve the reliability of data storage media.[20] A parity track capable of detecting single-bit errors was present on the first magnetic tape data storage in 1951. The optimal rectangular code used in group coded recording tapes not only detects but also corrects single-bit errors. Some file formats, particularly archive formats, include a checksum (most often CRC32) to detect corruption and truncation and can employ redundancy or parity files to recover portions of corrupted data. Reed–Solomon codes are used in compact discs to correct errors caused by scratches. Modern hard drives use Reed–Solomon codes to detect and correct minor errors in sector reads, and to recover corrupted data from failing sectors and store that data in the spare sectors.[21] RAID systems use a variety of error-correction techniques to recover data when a hard drive completely fails.
Filesystems such as ZFS or Btrfs, as well as some RAID implementations, support data scrubbing and resilvering, which allows bad blocks to be detected and (hopefully) recovered before they are used.[22] The recovered data may be re-written to exactly the same physical location, to spare blocks elsewhere on the same piece of hardware, or the data may be rewritten onto replacement hardware.

Dynamic random-access memory (DRAM) may provide stronger protection against soft errors by relying on error-correcting codes. Such error-correcting memory, known as ECC or EDAC-protected memory, is particularly desirable for mission-critical applications, such as scientific computing, financial, medical, etc., as well as extraterrestrial applications due to the increased radiation in space. Error-correcting memory controllers traditionally use Hamming codes, although some use triple modular redundancy. Interleaving allows distributing the effect of a single cosmic ray potentially upsetting multiple physically neighboring bits across multiple words by associating neighboring bits to different words. As long as a single-event upset (SEU) does not exceed the error threshold (e.g., a single error) in any particular word between accesses, it can be corrected (e.g., by a single-bit error-correcting code), and the illusion of an error-free memory system may be maintained.[23]

In addition to hardware providing features required for ECC memory to operate, operating systems usually contain related reporting facilities that are used to provide notifications when soft errors are transparently recovered. One example is the Linux kernel's EDAC subsystem (previously known as Bluesmoke), which collects the data from error-checking-enabled components inside a computer system; besides collecting and reporting back the events related to ECC memory, it also supports other checksumming errors, including those detected on the PCI bus.[24][25][26] A few systems[specify] also support memory scrubbing to catch and correct errors early, before they become unrecoverable.
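A Hamming(7,4) encoder and single-error corrector, of the kind traditionally used by ECC memory controllers, fits in a short C sketch; bit positions are numbered 1 to 7, with parity bits at the power-of-two positions 1, 2, and 4:

    #include <stdio.h>
    #include <stdint.h>

    static uint8_t ham_encode(uint8_t d) {         /* d: 4 data bits d3..d0 */
        int d0 = d & 1, d1 = (d >> 1) & 1, d2 = (d >> 2) & 1, d3 = (d >> 3) & 1;
        int p1 = d0 ^ d1 ^ d3;                     /* covers positions 3,5,7 */
        int p2 = d0 ^ d2 ^ d3;                     /* covers positions 3,6,7 */
        int p4 = d1 ^ d2 ^ d3;                     /* covers positions 5,6,7 */
        /* codeword, position 1 in the LSB: p1 p2 d0 p4 d1 d2 d3 */
        return p1 | p2 << 1 | d0 << 2 | p4 << 3 | d1 << 4 | d2 << 5 | d3 << 6;
    }

    static uint8_t ham_correct(uint8_t c) {        /* returns corrected word */
        int b[8];
        for (int i = 1; i <= 7; i++) b[i] = (c >> (i - 1)) & 1;
        int s1 = b[1] ^ b[3] ^ b[5] ^ b[7];
        int s2 = b[2] ^ b[3] ^ b[6] ^ b[7];
        int s4 = b[4] ^ b[5] ^ b[6] ^ b[7];
        int syndrome = s1 | s2 << 1 | s4 << 2;     /* position of the bad bit */
        if (syndrome) c ^= 1u << (syndrome - 1);
        return c;
    }

    int main(void) {
        uint8_t code  = ham_encode(0xB);           /* data bits 1011 */
        uint8_t noisy = code ^ (1u << 4);          /* flip position 5 */
        printf("corrected ok: %d\n", ham_correct(noisy) == code);  /* 1 */
        return 0;
    }

The three recomputed parity checks form a syndrome that spells out the position of the flipped bit directly, which is why the parity bits sit at the power-of-two positions.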
https://en.wikipedia.org/wiki/Error_detection_and_correction
Plan 9 from Bell Labs is a distributed operating system which originated from the Computing Science Research Center (CSRC) at Bell Labs in the mid-1980s and built on UNIX concepts first developed there in the late 1960s. Since 2000, Plan 9 has been free and open-source. The final official release was in early 2015.

Under Plan 9, UNIX's everything is a file metaphor is extended via a pervasive network-centric filesystem, and the cursor-addressed, terminal-based I/O at the heart of UNIX-like operating systems is replaced by a windowing system and graphical user interface without cursor addressing, although rc, the Plan 9 shell, is text-based.

The name Plan 9 from Bell Labs is a reference to the Ed Wood 1957 cult science fiction Z-movie Plan 9 from Outer Space.[17] The system continues to be used and developed by operating system researchers and hobbyists.[18][19]

Plan 9 from Bell Labs was originally developed, starting in the late 1980s,[19] by members of the Computing Science Research Center at Bell Labs, the same group that originally developed Unix and the C programming language.[20] The Plan 9 team was initially led by Rob Pike, Ken Thompson, Dave Presotto, and Phil Winterbottom, with support from Dennis Ritchie as head of the Computing Techniques Research Department. Over the years, many notable developers have contributed to the project, including Brian Kernighan, Tom Duff, Doug McIlroy, Bjarne Stroustrup, and Bruce Ellis.[21]

Plan 9 replaced Unix as Bell Labs's primary platform for operating systems research.[22] It explored several changes to the original Unix model that facilitate the use and programming of the system, notably in distributed multi-user environments. After several years of development and internal use, Bell Labs shipped the operating system to universities in 1992. Three years later, Plan 9 was made available to commercial parties by AT&T via the book publisher Harcourt Brace. With source licenses costing $350, AT&T targeted the embedded systems market rather than the computer market at large. Ritchie commented that the developers did not expect to do "much displacement" given how established other operating systems had become.[23]

By early 1996, the Plan 9 project had been "put on the back burner" by AT&T in favor of Inferno, intended to be a rival to Sun Microsystems' Java platform.[24] In the late 1990s, Bell Labs' new owner Lucent Technologies dropped commercial support for the project, and in 2000, a third release was distributed under an open-source license.[25] A fourth release under a new free software license occurred in 2002.[26] In early 2015, the final official release of Plan 9 occurred.[25]

A user and development community, including current and former Bell Labs personnel, produced minor daily releases in the form of ISO images. Bell Labs hosted the development.[27] The development source tree is accessible over the 9P and HTTP protocols and is used to update existing installations.[28] In addition to the official components of the OS included in the ISOs, Bell Labs also hosts a repository of externally developed applications and tools.[29]

As Bell Labs moved on to later projects in recent years, development of the official Plan 9 system stopped. On March 23, 2021, development resumed following the transfer of copyright from Bell Labs to the Plan 9 Foundation.[10][30][31] Unofficial development for the system also continues on the 9front fork, where active contributors provide monthly builds and new functionality.
So far, the 9front fork has provided the system with Wi-Fi drivers, audio drivers, USB support, and a built-in game emulator, along with other features.[32][33] Other recent Plan 9-inspired operating systems include Harvey OS[34] and Jehanne OS.[35]

Plan 9 from Bell Labs is like the Quakers: distinguished by its stress on the 'Inner Light,' noted for simplicity of life, in particular for plainness of speech. Like the Quakers, Plan 9 does not proselytize.

Plan 9 is a distributed operating system, designed to make a network of heterogeneous and geographically separated computers function as a single system.[38] In a typical Plan 9 installation, users work at terminals running the window system rio, and they access CPU servers which handle computation-intensive processes. Permanent data storage is provided by additional network hosts acting as file servers and archival storage.[39]

Its designers state that "[t]he foundations of the system are built on two ideas: a per-process name space and a simple message-oriented file system protocol."

The first idea (a per-process name space) means that, unlike on most operating systems, processes (running programs) each have their own view of the namespace, corresponding to what other operating systems call the file system; a single path name may refer to different resources for different processes. The potential complexity of this setup is controlled by a set of conventional locations for common resources.[41][42]

The second idea (a message-oriented filesystem) means that processes can offer their services to other processes by providing virtual files that appear in the other processes' namespace. The client process's input/output on such a file becomes inter-process communication between the two processes. This way, Plan 9 generalizes the Unix notion of the filesystem as the central point of access to computing resources. It carries over Unix's idea of device files to provide access to peripheral devices (mice, removable media, etc.) and the possibility to mount filesystems residing on physically distinct filesystems into a hierarchical namespace, but adds the possibility to mount a connection to a server program that speaks a standardized protocol and treat its services as part of the namespace.

For example, the original window system, called 8½, exploited these possibilities as follows. Plan 9 represents the user interface on a terminal by means of three pseudo-files: mouse, which can be read by a program to get notification of mouse movements and button clicks; cons, which can be used to perform textual input/output; and bitblt, writing to which enacts graphics operations (see bit blit). The window system multiplexes these devices: when creating a new window to run some program in, it first sets up a new namespace in which mouse, cons, and bitblt are connected to itself, hiding the actual device files to which it itself has access. The window system thus receives all input and output commands from the program and handles them appropriately, by sending output to the actual screen device and giving the currently focused program the keyboard and mouse input.[39] The program does not need to know if it is communicating directly with the operating system's device drivers or with the window system; it only has to assume that its namespace is set up so that these special files provide the kind of input and accept the kind of messages that it expects.
Plan 9's distributed operation relies on the per-process namespaces as well, allowing client and server processes to communicate across machines in the way just outlined. For example, the cpu command starts a remote session on a computation server. The command exports part of its local namespace, including the user's terminal's devices (mouse, cons, bitblt), to the server, so that remote programs can perform input/output using the terminal's mouse, keyboard, and display, combining the effects of remote login and a shared network filesystem.[39][40]

All programs that wish to provide services-as-files to other programs speak a unified protocol, called 9P. Compared to other systems, this reduces the number of custom programming interfaces. 9P is a generic, medium-agnostic, byte-oriented protocol that provides for messages delivered between a server and a client.[43] The protocol is used to refer to and communicate with processes, programs, and data, including both the user interface and the network.[44] With the release of the 4th edition, it was modified and renamed 9P2000.[26]

Unlike most other operating systems, Plan 9 does not provide special application programming interfaces (such as Berkeley sockets, X resources, or ioctl system calls) to access devices.[43] Instead, Plan 9 device drivers implement their control interface as a file system, so that the hardware can be accessed by the ordinary file input/output operations read and write. Consequently, sharing the device across the network can be accomplished by mounting the corresponding directory tree to the target machine.[17]

Plan 9 allows the user to collect the files (called names) from different directory trees in a single location. The resulting union directory behaves as the concatenation of the underlying directories (the order of concatenation can be controlled); if the constituent directories contain files having the same name, a listing of the union directory (ls or lc) will simply report duplicate names.[45] Resolution of a single path name is performed top-down: if the directories top and bottom are unioned into u, with top first, then u/name denotes top/name if it exists, bottom/name only if it exists and top/name does not exist, and no file if neither exists. No recursive unioning of subdirectories is performed, so if top/subdir exists, the files in bottom/subdir are not accessible through the union.[46]

A union directory can be created by using a sequence of bind commands:

    bind /arm/bin /bin
    bind -a /acme/bin/arm /bin
    bind -b /usr/alice/bin /bin

In the example above, /arm/bin is mounted at /bin, the contents of /arm/bin replacing the previous contents of /bin. Acme's bin directory is then union mounted after /bin, and Alice's personal bin directory is union mounted before. When a file is requested from /bin, it is first looked for in /usr/alice/bin, then in /arm/bin, and then finally in /acme/bin/arm.

The separate process namespaces thus usually replace the notion of a search path in the shell. A path environment variable ($path) still exists in the rc shell (the shell mainly used in Plan 9); however, rc's path environment variable conventionally contains only the /bin and . directories, and modifying the variable is discouraged; instead, additional commands should be added by binding several directories together as a single /bin.[47][39] Unlike in Plan 9, the path environment variable of Unix shells should be set to include the additional directories whose executable files need to be added as commands.

Furthermore, the kernel can keep separate mount tables for each process,[37] and can thus provide each process with its own file system namespace.
Processes' namespaces can be constructed independently, and the user may work simultaneously with programs that have heterogeneous namespaces.[40] Namespaces may be used to create an isolated environment similar to chroot, but in a more secure way.[43]

Plan 9's union directory architecture inspired 4.4BSD and Linux union file system implementations,[45] although the developers of the BSD union mounting facility found the non-recursive merging of directories in Plan 9 "too restrictive for general purpose use".[46]

Instead of having system calls specifically for process management, Plan 9 provides the /proc file system. Each process appears as a directory containing information and control files which can be manipulated by the ordinary file I/O system calls.[8] The file system approach allows Plan 9 processes to be managed with simple file management tools such as ls and cat; however, the processes cannot be copied and moved as files.[8]

Plan 9 does not have specialised system calls or ioctls for accessing the networking stack or networking hardware. Instead, the /net file system is used. Network connections are controlled by reading and writing control messages to control files. Sub-directories such as /net/tcp and /net/udp are used as an interface to their respective protocols (see the sketch at the end of this section).[8]

To reduce the complexity of managing character encodings, Plan 9 uses Unicode throughout the system. The initial Unicode implementation was ISO/IEC 10646-1:1993. Ken Thompson invented UTF-8, which became the native encoding in Plan 9. The entire system was converted to general use in 1992.[49] UTF-8 preserves backwards compatibility with traditional null-terminated strings, enabling more reliable information processing and the chaining of multilingual string data with Unix pipes between multiple processes. Using a single UTF-8 encoding with characters for all cultures and regions eliminates the need for switching between code sets.[50]

Though interesting on their own, the design concepts of Plan 9 were supposed to be most useful when combined. For example, to implement a network address translation (NAT) server, a union directory can be created, overlaying the router's /net directory tree with its own /net. Similarly, a virtual private network (VPN) can be implemented by overlaying in a union directory a /net hierarchy from a remote gateway, using secured 9P over the public Internet. A union directory with the /net hierarchy and filters can be used to sandbox an untrusted application or to implement a firewall.[43] In the same manner, a distributed computing network can be composed with a union directory of /proc hierarchies from remote hosts, which allows interacting with them as if they were local. When used together, these features allow a complex distributed computing environment to be assembled by reusing the existing hierarchical name system.[8]

As a benefit of the system's design, most tasks in Plan 9 can be accomplished by using the ls, cat, grep, cp and rm utilities in combination with the rc shell (the default Plan 9 shell).

Factotum is an authentication and key management server for Plan 9. It handles authentication on behalf of other programs such that both secret keys and implementation details need only be known to Factotum.[51]

Unlike Unix, Plan 9 was designed with graphics in mind.[44] After booting, a Plan 9 terminal will run the rio windowing system, in which the user can create new windows displaying rc.[52] Graphical programs invoked from this shell replace it in its window. The plumber provides an inter-process communication mechanism which allows system-wide hyperlinking.
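As an illustration of the /net interface described above, the following hedged sketch in Plan 9 C opens a TCP connection using only open, read and write on files, with no socket API; the address 192.0.2.1!80 and the HTTP request are placeholders:

#include <u.h>
#include <libc.h>

void
main(void)
{
	char dir[40], data[64];
	int ctl, fd, n;

	/* opening the clone file allocates a new connection directory;
	   reading the same fd back yields that directory's number */
	ctl = open("/net/tcp/clone", ORDWR);
	if(ctl < 0)
		sysfatal("open clone: %r");
	n = read(ctl, dir, sizeof dir - 1);
	if(n <= 0)
		sysfatal("read clone: %r");
	dir[n] = '\0';

	/* connections are controlled by writing textual messages to ctl */
	if(fprint(ctl, "connect 192.0.2.1!80") < 0)
		sysfatal("connect: %r");

	/* the data file now carries the byte stream of the connection */
	snprint(data, sizeof data, "/net/tcp/%d/data", atoi(dir));
	fd = open(data, ORDWR);
	if(fd < 0)
		sysfatal("open data: %r");
	write(fd, "GET / HTTP/1.0\r\n\r\n", 18);
	exits(nil);
}

Because the whole exchange is file I/O, mounting a remote machine's /net in place of the local one transparently turns this program into a client that connects from that machine instead.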
Sam and acme are Plan 9's text editors.[53]

Plan 9 supports the Kfs, Paq, Cwfs, FAT, and Fossil file systems. The last was designed at Bell Labs specifically for Plan 9 and provides snapshot storage capability. It can be used directly with a hard drive or backed with Venti, an archival file system and permanent data storage system.

The distribution package for Plan 9 includes special compiler variants and programming languages, and provides a tailored set of libraries along with a windowing user interface system specific to Plan 9.[54] The bulk of the system is written in a dialect of C (ANSI C with some extensions and some other features left out). The compilers for this language were custom built with portability in mind; according to their author, they "compile quickly, load slowly, and produce medium quality object code".[55]

A concurrent programming language called Alef was available in the first two editions, but was then dropped for maintenance reasons and replaced by a threading library for C.[56][57]

Though Plan 9 was supposed to be a further development of Unix concepts, compatibility with preexisting Unix software was never a goal for the project. Many command-line utilities of Plan 9 share the names of Unix counterparts, but work differently.[48]

Plan 9 can support POSIX applications and can emulate the Berkeley socket interface through the ANSI/POSIX Environment (APE), which implements an interface close to ANSI C and POSIX, with some common extensions (the native Plan 9 C interfaces conform to neither standard). It also includes a POSIX-compatible shell. APE's authors claim to have used it to port the X Window System (X11) to Plan 9, although they do not ship X11 "because supporting it properly is too big a job".[58] Some Linux binaries can be used with the help of a "linuxemu" (Linux emulator) application; however, it is still a work in progress.[59] Conversely, the Vx32 virtual machine allows a slightly modified Plan 9 kernel to run as a user process in Linux, supporting unmodified Plan 9 programs.[60]

In 1991, Plan 9's designers compared their system to other early-nineties operating systems in terms of size, showing that the source code for a minimal ("working, albeit not very useful") version was less than one-fifth the size of a Mach microkernel without any device drivers (5,899 or 4,622 lines of code for Plan 9, depending on the metric, vs. 25,530 lines). The complete kernel comprised 18,000 lines of code.[39] (According to a 2006 count, the kernel was then some 150,000 lines, but this was compared against more than 4.8 million in Linux.[43])

Within the operating systems research community, as well as the commercial Unix world, other attempts at achieving distributed computing and remote filesystem access were made concurrently with the Plan 9 design effort. These included the Network File System and the associated vnode architecture developed at Sun Microsystems, and more radical departures from the Unix model such as the Sprite OS from UC Berkeley.
Sprite developer Brent Welch points out that the SunOS vnode architecture is limited compared to Plan 9's capabilities in that it does not support remote device access and remote inter-process communication cleanly, even though it could have, had the preexisting UNIX domain sockets (which "can essentially be used to name user-level servers") been integrated with the vnode architecture.[41]

One critique of the "everything is a file", communication-by-textual-message design of Plan 9 pointed out limitations of this paradigm compared to the typed interfaces of Sun's object-oriented operating system, Spring:

Plan 9 constrains everything to look like a file. In most cases the real interface type comprises the protocol of messages that must be written to, and read from, a file descriptor. This is difficult to specify and document, and prohibits any automatic type checking at all, except for file errors at run time. (...) [A] path name relative to a process' implicit root context is the only way to name a service. Binding a name to an object can only be done by giving an existing name for the object, in the same context as the new name. As such, interface references simply cannot be passed between processes, much less across networks. Instead, communication has to rely on conventions, which are prone to error and do not scale.

A later retrospective comparison of Plan 9, Sprite and a third contemporary distributed research operating system, Amoeba, found that

the environments they [Amoeba and Sprite] build are tightly coupled within the OS, making communication with external services difficult. Such systems suffer from the radical departure from the UNIX model, which also discourages portability of already existing software to the platform (...). The lack of developers, the very small range of supported hardware and the small, even compared to Plan 9, user base have also significantly slowed the adoption of those systems (...). In retrospect, Plan 9 was the only research distributed OS from that time which managed to attract developers and be used in commercial projects long enough to warrant its survival to this day.

Plan 9 demonstrated that an integral concept of Unix—that every system interface could be represented as a set of files—could be successfully implemented in a modern distributed system.[52] Some features from Plan 9, like the UTF-8 character encoding of Unicode, have been implemented in other operating systems. Unix-like operating systems such as Linux have implemented 9P2000, Plan 9's protocol for accessing remote files, and have adopted features of rfork, Plan 9's process creation mechanism.[64] Additionally, in Plan 9 from User Space, several of Plan 9's applications and tools, including the sam and acme editors, have been ported to Unix and Linux systems and have achieved some level of popularity. Several projects seek to replace the GNU operating system programs surrounding the Linux kernel with the Plan 9 operating system programs.[65][66] The 9wm window manager was inspired by 8½, the older windowing system of Plan 9;[67] wmii is also heavily influenced by Plan 9.[63] In computer science research, Plan 9 has been used as a grid computing platform[68][62] and as a vehicle for research into ubiquitous computing without middleware.[69] In commerce, Plan 9 underlies Coraid storage systems.
However, Plan 9 has never approached Unix in popularity, and has been primarily a research tool:

[I]t looks like Plan 9 failed simply because it fell short of being a compelling enough improvement on Unix to displace its ancestor. Compared to Plan 9, Unix creaks and clanks and has obvious rust spots, but it gets the job done well enough to hold its position. There is a lesson here for ambitious system architects: the most dangerous enemy of a better solution is an existing codebase that is just good enough.

Other factors that contributed to the low adoption of Plan 9 include the lack of commercial backing, the low number of end-user applications, and the lack of device drivers.[52][53]

Plan 9 proponents and developers claim that the problems hindering its adoption have been solved, that its original goals as a distributed system, development environment, and research platform have been met, and that it enjoys moderate but growing popularity.[citation needed] Inferno, through its hosted capabilities, has been a vehicle for bringing Plan 9 technologies to other systems as a hosted part of heterogeneous computing grids.[70][71][72]

Several projects work to extend Plan 9, including 9atom and 9front. These forks augment Plan 9 with additional hardware drivers and software, including an improved version of the Upas e-mail system, the Go compiler, Mercurial version control system support (and now also a Git implementation), and other programs.[19][73] Plan 9 was ported to the Raspberry Pi single-board computer.[74][75] The Harvey project attempts to replace the custom Plan 9 C compiler with GCC, to leverage modern development tools such as GitHub and Coverity, and to speed up development.[76]

Since Windows 10 version 1903, the Windows Subsystem for Linux implements the Plan 9 Filesystem Protocol as a server, and the host Windows operating system acts as a client.[77]

Starting with the release of the fourth edition in April 2002,[26] the full source code of Plan 9 from Bell Labs has been freely available under the Lucent Public License 1.02, which is considered an open-source license by the Open Source Initiative (OSI) and a free software license by the Free Software Foundation, and which passes the Debian Free Software Guidelines.[43]

In February 2014, the University of California, Berkeley, was authorized by the then Plan 9 copyright holder, Alcatel-Lucent, to release all Plan 9 software previously governed by the Lucent Public License, Version 1.02 under the GPL-2.0-only license.[92]

On March 23, 2021, ownership of Plan 9 transferred from Bell Labs to the Plan 9 Foundation,[93] and all previous releases have been relicensed under the MIT License.[10]
https://en.wikipedia.org/wiki/Plan_9_from_Bell_Labs
Participatory culture, an opposing concept to consumer culture, is a culture in which private individuals (the public) do not act as consumers only, but also as contributors or producers (prosumers).[1] The term is most often applied to the production or creation of some type of published media. Recent advances in technologies (mostly personal computers and the Internet) have enabled private persons to create and publish such media, usually through the Internet.[2] Since technology now enables new forms of expression and engagement in public discourse, participatory culture not only supports individual creation but also informal relationships that pair novices with experts.[3] This new culture, as it relates to the Internet, has been described as Web 2.0.[4] In participatory culture, "young people creatively respond to a plethora of electronic signals and cultural commodities in ways that surprise their makers, finding meanings and identities never meant to be there and defying simple nostrums that bewail the manipulation or passivity of 'consumers'."[2]

The increasing access to the Internet has come to play an integral part in the expansion of participatory culture because it increasingly enables people to work collaboratively, generate and disseminate news, ideas, and creative works, and connect with people who share similar goals and interests (see affinity groups). The potential of participatory culture for civic engagement and creative expression has been investigated by media scholar Henry Jenkins. In 2009, Jenkins and co-authors Ravi Purushotma, Katie Clinton, Margaret Weigel and Alice Robison authored a white paper entitled Confronting the Challenges of Participatory Culture: Media Education for the 21st Century.[5] This paper describes a participatory culture as one:

Participatory culture has been around longer than the Internet. The emergence of the Amateur Press Association in the middle of the 19th century is an example of historical participatory culture; at that time, young people were hand typing and printing their own publications. These publications were mailed throughout a network of people and resemble what are now called social networks. The evolution from zines, radio shows, group projects, and gossip to blogs, podcasts, wikis, and social networks has impacted society greatly. With web services such as eBay, Blogger, Wikipedia, Photobucket, Facebook, and YouTube, it is no wonder that culture has become more participatory. The implications of the gradual shift from production to produsage are profound and will affect the very core of culture, economy, society, and democracy.[6]

Forms of participatory culture can be manifested in affiliations, expressions, collaborative problem solving, and circulations. Affiliations include both formal and informal memberships in online communities such as discussion boards or social media. Expression refers to the types of media that could be created; this may manifest as memes, fan fiction, or other forms of mash-up. When individuals and groups work together on a particular form of media or media product, like a wiki, they engage in collaborative problem solving. Finally, circulation refers to the means through which the communication may be spread; this could include blogs, vlogs, podcasts, and even some forms of social media.[3] Some of the most popular apps that involve participation include Facebook, Snapchat, Instagram, Tinder, LinkedIn, Twitter, and TikTok.
Fan fiction creators were one of the first communities to show that the public could participate in pop culture,[7] changing, growing, and altering TV show storylines during their runs, as well as strengthening a series' popularity after the last episode aired. Some fan fiction creators develop theories and speculation, while others create 'new' material outside the confines of the original content. Fans expand on the original story, sending characters who fall in love within the series through different adventures and sexualities. These communities are composed of audiences and readers from around the world, of different ages and backgrounds, coming together to develop theories and possibilities about current TV shows, books and films, or to expand and continue the stories of TV shows, books, and movies that have come to a close.[8]

As technology continues to enable new avenues for communication, collaboration, and circulation of ideas, it has also given rise to new opportunities for consumers to create their own content. Barriers like time and money are beginning to become less significant for large groups of consumers. For example, the creation of movies once required large amounts of expensive equipment, but now movie clips can be made with equipment that is affordable to a growing number of people. The ease with which consumers create new material has also grown. Extensive knowledge of computer programming is no longer necessary to create content on the Internet. Media sharing over the Internet acts as a platform that invites users to participate and create communities sharing similar interests through duplicated sources, original content, and re-purposed material.

People no longer blindly absorb and consume what large media corporations distribute.[9] Today a great many consumers also produce their own content (the "prosumers" referred to above).[10] Participatory culture attracts so much interest partly because there are so many different social media platforms to participate in and contribute to. These platforms are among the leaders of the social media industry,[11] and are a key reason people are able to take part in media creation. Today, millions of people across the world have the ability to post, quote, film, or create whatever they want.[12] With the aid of these platforms, reaching a global audience has never been easier.[13]

Social media have become a major factor in politics and civics, not just in elections but in raising funds, spreading information, gaining support for legislation and petitions, and other political activities.[14] Social media make it easier for the public to make an impact and participate in politics. One study showed the connection between Facebook messages among friends and how these messages influenced political expression, voting, and information seeking in the 2012 United States presidential election.[15] Social media mobilize people easily and effectively, and do the same for the circulation of information. This can accomplish political goals such as gaining support for legislation, but social media can also greatly influence elections. The impact social media can have on elections was shown in the 2016 United States presidential election, during which hundreds of fake news stories about candidates were shared on Facebook tens of millions of times.
Some people do not recognize fake news and vote based on false information.[16]

Not only has hardware increased the individual's ability to submit content to the Internet so that it may reach a wide audience, but numerous Internet sites have also increased access. Websites like Flickr, Wikipedia, and Facebook encourage the submission of content to the Internet. They increase the ease with which a user may post content by allowing them to submit information with nothing more than a web browser; the need for additional software is eliminated. These websites also serve to create online communities for the production of content. These communities and their web services have been labelled as part of Web 2.0.[17]

The relationship between Web 2.0 tools and participatory culture is more than just material, however. As the mindsets and skill sets of participatory practices have been increasingly taken up, people are increasingly likely to exploit new tools and technology in 2.0 ways. One example is the use of cellphone technology to engage "smart mobs" for political change worldwide. In countries where cellphone usage exceeds use of any other form of digital technology, passing information via mobile phone has helped bring about significant political and social change. Notable examples include the so-called "Orange Revolution" in Ukraine,[18] the overthrow of Philippine President Joseph Estrada,[19] and regular political protests worldwide.[20]

Participatory media offer several ways for people to create, connect, and share their content or build friendships. YouTube encourages people to create and upload content to share around the world, creating an environment for content creators new and old. Discord allows people, primarily gamers, to connect with each other around the world and acts as a live chat room. Twitch is a streaming media website where content creators can "go live" for viewers all around the world. Often, these participatory sites host community events such as charity events or memorial streams for someone important to the Twitch community.

The smartphone is one example that combines the elements of interactivity, identity, and mobility. The mobility of the smartphone demonstrates that media are no longer bound by time and space and can be used in any context. Technology continues to progress in this direction as it becomes more user-driven and less restricted to schedules and locations: for example, the progression of movies from theaters, to private home viewing, to the smartphone that can be watched anytime and anywhere. The smartphone also enhances participatory culture through increased levels of interactivity. Instead of merely watching, users are actively involved in making decisions, navigating pages, contributing their own content and choosing which links to follow. This goes beyond the "keyboard" level of interactivity, where a person presses a key and the expected letter appears, and becomes a dynamic activity with continually new options and changing settings, without a set formula to follow. The consumer's role shifts from passive receiver to active contributor. The smartphone epitomizes this with the endless choices and ways to get personally involved with multiple media at the same time, in a nonlinear way.[21]

The smartphone also contributes to participatory culture by changing the perception of identity.
A user can hide behind an avatar, a false profile, or simply an idealized self when interacting with others online. There is no accountability to be who one says one is. The ability to slide in and out of roles changes the effect of media on culture, and also on the user.[22] Now not only are people active participants in media and culture, but so are their imagined selves.

In Understanding Digital Culture, Vincent Miller makes the argument that the lines between producer and consumer have become blurry. Producers are those who create content and cultural objects, and consumers are the audience or purchasers of those objects. Referring to Axel Bruns' idea of the "prosumer", Miller argues: "With the advent of convergent new media and the plethora of choice in sources for information, as well as the increased capacity for individuals to produce content themselves, this shift away from producer hegemony to audience or consumer power would seem to have accelerated, thus eroding the producer-consumer distinction" (p. 87). The "prosumer" is the end result of an increasingly common strategy of encouraging feedback between producers and consumers, "which allows for more consumer influence over the production of goods."[23]

Bruns (2008) refers to produsage, therefore, as a community collaboration that participants can access in order to share "content, contributions, and tasks throughout the networked community" (p. 14). This is similar to how Wikipedia allows users to write, edit, and ultimately use content. Produsers are active participants who are empowered by their participation as network builders. Bruns (2008) describes this empowerment of users as different from the typical "top-down mediated spaces of the traditional mediaspheres" (p. 14). Produsage occurs when the users are the producers and vice versa, essentially eliminating the need for these "top-down" interventions. The collaboration of each participant is based on a principle of inclusivity; each member contributes valuable information for another user to use, add to, or change. In a community of learners, collaboration through produsage can provide access to content for every participant, not just those with some kind of authority; every participant has authority. This leads to Bruns' (2008) idea of "equipotentiality: the assumption that while the skills and abilities of all the participants in the produsage project are not equal, they have an equal ability to make a worthy contribution to the project" (p. 25). Because there is no longer a distinction between producers and consumers, every participant has an equal chance to participate meaningfully in produsage.[24]

In July 2020, an academic description reported on the nature and rise of the "robot prosumer", derived from modern-day technology and the related participatory culture, which in turn was substantially predicted earlier by Frederik Pohl and other science fiction writers.[25][26][27]

An important contribution has been made by media theorist Mirko Tobias Schäfer, who distinguishes explicit and implicit participation (2011). Explicit participation describes the conscious and active engagement of users in fan communities or of developers in creative processes. Implicit participation is more subtle and often unfolds without the user's knowledge. In her book The Culture of Connectivity, José van Dijck emphasizes the importance of recognizing this distinction in order to thoroughly analyze user agency as a techno-cultural construct (2013).
Van Dijck (2013) outlines the various ways in which explicit participation can be conceptualized. The first is the statistical conception of user demographics. Websites may "publish facts and figures about their user intensity (e.g., unique monthly users), their national and global user diversity, and relevant demographic facts" (p. 33). For instance, Facebook publishes user demographic data such as gender, age, income, education level and more.[28] Explicit participation can also take place on the research end, where an experimental subject interacts with a platform for research purposes. Van Dijck (2013) references Leon et al. (2011), giving an example of an experimental study where "a number of users may be selected to perform tasks so researchers can observe their ability to control privacy settings" (p. 33). Lastly, explicit participation may inform ethnographic data through observational studies, or qualitative interview-based research concerning user habits.[29]

Implicit participation is achieved by implementing user activities into user interfaces and back-end design. Schäfer argues that the success of popular Web 2.0 and social media applications thrives on implicit participation. The notion of implicit participation expands theories of participatory culture as formulated by Henry Jenkins and Axel Bruns, who both focus most prominently on explicit participation (p. 44). Considering implicit participation therefore allows for a more accurate analysis of the role of technology in co-shaping user interactions and user-generated content (pp. 51–52).[30]

The term "textual poachers" originated with de Certeau and has been popularized by Jenkins.[31] Jenkins uses this term to describe how some fans go through content, such as a favourite movie, and engage with the parts that interest them, unlike audiences who watch the show more passively and move on to the next thing.[32] Jenkins takes a stand against the stereotypical portrayal of fans as obsessive nerds who are out of touch with reality. He demonstrates that fans are pro-active constructors of an alternative culture using elements "poached" and reworked from the mass media.[32] Specifically, fans use what they have poached to become producers themselves, creating new cultural materials in a variety of analytical and creative formats, from "meta" essays to fan fiction, comics, music, and more.[33] In this way, fans become active participants in the construction and circulation of textual meanings. Fans usually interact with each other through fan groups, fanzines, and social events; Trekkers (fans of Star Trek), for example, even interact with each other through annual conventions.[34]

In a participatory culture, fans are actively involved in the production, which may also influence producer decisions within the medium. Fans do not only interact with each other but also try to interact with media producers to express their opinions: for example, about how a relationship between two characters in a TV show should end. Fans are therefore both readers and producers of culture. Participatory culture transforms the media consumption experience into the production of new texts, indeed into the production of new cultures and new communities. The result is an autonomous, self-sufficient fan culture.[35]

Participatory culture lacks representation of the female, which has created a misrepresentation of women online. This, in turn, makes it difficult for women to represent themselves with authenticity, and deters the participation of women in participatory culture.
The content viewed on the Internet in participatory situations is biased because of the overrepresentation of male-generated information and the ideologies created by the male presence in media; this creates a submissive role for the female user, who unconsciously accepts patriarchal ideologies as reality. With males in the dominant positions, "media industries [engage]… existing technologies to break up and reformulate media texts for reasons of their own".[36]

Design intent from the male perspective is a main issue deterring accurate female representation. Females active in participatory culture are at a disadvantage because the content they are viewing is not designed with their participation in mind. Instead of producing male-biased content, "feminist interaction design should seek to bring about political emancipation… it should also force designers to question their own position to assert what an 'improved society' is and how to achieve it".[37] The current interactions and interfaces of participatory culture fail to "challenge the hegemonic dominance, legitimacy and appropriateness of positivist epistemologies; theorize from the margins; and problematize gender".[38] Men typically are more involved in the technology industry, as "relatively fewer women work in the industry that designs technology now... only in the areas of HCI/usability is the gender balance of workforce anything like equal".[38] Since technology and design are at the crux of the creation of participatory culture, "much can – and should – be said about who does what, and it is fair to raise the question of whether an industry of men can design for women".[38]

"Although the members of the group are not directly teaching or perhaps even indicating the object of… representation, their activities inevitably lead to the exposure of the other individual to that object and this leads to that individual acquiring the same narrow… representations as the other group members have. Social learning of this type (another, similar process is known as local enhancement) has been shown to lead to relatively stable social transmission of behavior over time".[36] Local enhancement is the driving mechanism that influences the audience to embody and recreate the messages produced in media. Statistically, men are actively engaged in the production of these problematic representations, whereas women do not contribute to the portrayal of women's experiences because of the local enhancement that takes place on the web.

There is no exact figure for the percentage of female users; in 2011 numerous surveys fluctuated slightly in their numbers, but none seemed to surpass 15 percent.[39] This shows a large gender disparity among online users contributing Wikipedia content. Bias arises as the content presented in Wikipedia appears to be more male-oriented.[40]

Participatory culture has been hailed by some as a way to reform communication and enhance the quality of media. According to media scholar Henry Jenkins, one result of the emergence of participatory cultures is an increase in the number of media resources available, giving rise to increased competition between media outlets. Producers of media are forced to pay more attention to the needs of consumers, who can turn to other sources for information.[41]

Howard Rheingold and others have argued that the emergence of participatory cultures will enable deep social change.
Until as recently as the end of the 20th century, Rheingold argues, a handful of generally privileged, generally wealthy people controlled nearly all forms of mass communication: newspapers, television, magazines, books and encyclopedias. Today, however, tools for media production and dissemination are readily available and allow for what Rheingold labels "participatory media".[42]

As participation becomes easier, the diversity of voices that can be heard also increases. At one time only a few mass media giants controlled most of the information that flowed into the homes of the public, but with the advance of technology even a single person has the ability to spread information around the world. This diversification of media is beneficial because, in cases where the control of media becomes concentrated, those in control gain the ability to influence the opinions and information that flow into the public domain.[43] Media concentration provides opportunity for corruption, but as information continues to be accessed from more and more places it becomes increasingly difficult to control its flow according to the will of an agenda.

Participatory culture is also seen as a more democratic form of communication, as it stimulates the audience to take an active part because they can help shape the flow of ideas across media formats.[43] The democratic tendency lent to communication by participatory culture allows new models of production that are not based on a hierarchical standard. In the face of increased participation, the traditional hierarchies will not disappear, but "community, collaboration, and self-organization" can become the foundation of corporations as powerful alternatives.[44] Although there may be no real hierarchy evident in many collaborative websites, their ability to form large pools of collective intelligence is not compromised.

Participatory culture civics organizations mobilize participatory cultures towards political action. They build on participatory cultures and organize such communities toward civic and political goals.[45] Examples include the Harry Potter Alliance, Invisible Children, Inc., and Nerdfighters, each of which leverages shared cultural interests to connect and organize members towards explicit political goals. These groups run campaigns by informing, connecting, and eventually organizing their members through new media platforms. Neta Kligler-Vilenchik identified three mechanisms used to translate cultural interests into political outcomes:[46]

Social and participatory media allow for—and, indeed, call for—a shift in how we approach teaching and learning in the classroom. The increased availability of the Internet in classrooms allows for greater access to information. For example, it is no longer necessary for relevant knowledge to be contained in some combination of the teacher and textbooks; today, knowledge can be more decentralized and made available for all learners to access. The teacher, then, can help facilitate efficient and effective means of accessing, interpreting, and making use of that knowledge.[47]

Jenkins believes that participatory culture can play a role in the education of young people as a new form of implicit curriculum.[48] He finds a growing body of academic research showing the potential benefits of participatory cultures, both formal and informal, for the education of young people.
These include peer-to-peer learning opportunities, awareness of intellectual property and multiculturalism, cultural expression, the development of skills valued in the modern workplace, and a more empowered conception of citizenship.[48]

Rachael Sullivan discusses how some online platforms can pose a challenge. In a book review, she focuses on Reddit and on user-created content that can be offensive and inappropriate.[49] Memes, GIFs, and other content that users create can be negative and used primarily for trolling. Reddit's platform lets any user in the community post without restrictions or barriers, whether the contribution is positive or negative. This creates the potential for backlash against Reddit, as it does not restrict content that could be considered offensive or pejorative, which can reflect negatively on the community as a whole. On the other hand, Reddit would likely face similar backlash for restricting what others would consider their right to free speech, although free speech protections pertain to government action, not to private companies.

YouTube has been the launching pad for many up-and-coming pop stars; both Justin Bieber and One Direction can credit their presence on YouTube as the catalyst for their respective careers. Other users have gained fame or notoriety by expounding on how simple it can be to become a popular YouTuber. Charlie's "How to Get Featured on YouTube" is one such example, in that his library consists solely of videos on how to get featured, and nothing else. YouTube offers the younger generation the opportunity to test out their content while gaining feedback via likes, dislikes, and comments to find out where they need to improve.

All people want to be consumers in some situations and active contributors in others. Being a consumer or an active contributor is not an attribute of a person, but of a context.[50] The important criterion that needs to be taken into account is personally meaningful activity. Participatory cultures empower humans to be active contributors in personally meaningful activities. The drawback of such cultures is that they may force humans to cope with the burden of being an active contributor in personally irrelevant activities. This trade-off can be illustrated by the potential and drawbacks of "do-it-yourself societies": the trend began with self-service restaurants and self-service gas stations a few decades ago, and it has accelerated greatly over the last ten years.

Through modern tools (including electronic commerce supported by the Web), humans are empowered to do many tasks themselves that were previously done by skilled domain workers serving as agents and intermediaries. While this shift provides power, freedom, and control to customers (e.g., banking can be done at any time of day with ATMs, and from any location with the Web), it has also led to some less desirable consequences. People may consider some of these tasks not personally meaningful and would therefore be more than content with a consumer role. Aside from simple tasks that require little or no learning effort, customers lack the experience that professionals have acquired and maintained through daily use of systems, and the broad background knowledge needed to do these tasks efficiently and effectively.
The tools used to do these tasks — banking, travel reservations, buying airline tickets, checking out groceries at the supermarket — are core technologies for the professionals, but occasional technologies for the customers. This puts a new, substantial burden on customers rather than having skilled domain workers do these tasks.[50]

Significantly, too, as businesses increasingly recruit participatory practices and resources to market goods and services, consumers who are comfortable working within participatory media are at a distinct advantage over those who are less comfortable. Not only do consumers who are resistant to making use of the affordances of participatory culture have decreased access to knowledge, goods, and services, but they are also less likely to take advantage of the increased leverage inherent in engaging with businesses as a prosumer.[50]

This category is linked to the issue of the digital divide, the concern with providing access to technology for all learners. The movement to break down the digital divide has included efforts to bring computers into classrooms, libraries, and other public places. These efforts have been largely successful, but as Jenkins et al. argue, the concern is now with the quality of access to available technologies. They explain:

What a person can accomplish with an outdated machine in a public library with mandatory filtering software and no opportunity for storage or transmission pales in comparison to what [a] person can accomplish with a home computer with unfettered Internet access, high band-width, and continuous connectivity. (Current legislation to block access to social networking software in schools and public libraries will further widen the participation gap.) The school system's inability to close this participation gap has negative consequences for everyone involved. On the one hand, those youth who are most advanced in media literacies are often stripped of their technologies and robbed of their best techniques for learning in an effort to ensure a uniform experience for all in the classroom. On the other hand, many youth who have had no exposure to these new kinds of participatory cultures outside school find themselves struggling to keep up with their peers. (Jenkins et al., p. 15)

Passing out technology free of charge is not enough to ensure that youth and adults learn how to use the tools effectively. Most American youths now have at least minimal access to networked computers, be it at school or in public libraries, but "children who have access to home computers demonstrate more positive attitudes towards computers, show more enthusiasm, and report more enthusiasm and ease when using computers than those who do not" (Wartella, O'Keefe, and Scantlin (2000), p. 8). As the children with more access to computers gain more comfort in using them, the less tech-savvy students get pushed aside. More than a simple binary is at work here, as working-class youths may still have access to some technologies (e.g. gaming consoles) while other forms remain unattainable. This inequality allows certain skills, such as play, to develop in some children, while other skills, such as the ability to produce and distribute self-created media, remain unavailable to them.[3]

In a participatory culture, one of the key challenges encountered is the participation gap. This comes into play with the integration of media and society. Some of the largest challenges we face with regard to the participation gap concern education, learning, accessibility, and privacy.
All of these factors are significant setbacks for the relatively new integration of youth participation in today's popular forms of media.

Education is one realm where the participation gap is very prominent. Today's education system heavily emphasizes integrating media into its curriculum, and classrooms increasingly use computers and technology as learning aids. While this benefits students and teachers by enhancing learning environments and giving them access to a wealth of information, it also presents many problems. The participation gap leaves many schools, as well as their teachers and students, at a disadvantage as they struggle to use current technology in their curriculum. Many schools do not have the funding to invest in computers or new technologies for their academic programs; unable to afford computers, cameras, and interactive learning tools, their students cannot access the tools that wealthier schools have.

Another challenge is that, as we integrate new technology into schools and academics, we need to be able to teach people how to use these instruments. Teaching both students and adults how to use new media technologies is essential so that they can actively participate as their peers do. Additionally, teaching children how to navigate the information available on new media technologies is very important given how much content is available on the Internet today. For beginners this can be overwhelming, and teaching kids as well as adults how to identify pertinent, reliable, and viable information will help them improve how they use media technologies.

One huge aspect of the participation gap is access. Access to the Internet and computers is a luxury in some households, and in today's society access to a computer and the Internet is often taken for granted by the education system and many other institutions. Almost everything we do today is based online; from banking to shopping to homework and ordering food, we spend much of our time doing everyday tasks online. Those who cannot access these things are automatically put at a severe disadvantage: they cannot participate in the activities of their peers and may suffer both academically and socially.

The last feature of the participation gap is privacy concerns. We put everything on the Internet these days, from pictures to personal information, and it is important to question how this content will be used. Who owns this content? Where does it go, and where is it stored? For example, the controversy over Facebook's ownership of and rights to users' content has been a hot-button issue over the past few years. It is disconcerting to many people to find out that content they have posted to a particular website is no longer under their control, but may be retained and used by the website in the future.

All of the above-mentioned issues are key factors in the participation gap. They play a large role in the challenges we face as we incorporate new media technology into everyday life. These challenges affect how many populations interact with the changing media in society and unfortunately leave many at a disadvantage. This divide between users of new media and those who are unable to access these technologies is also referred to as the digital divide. It leaves low-income families and children at a severe disadvantage that affects them in the present as well as the future.
Students, for example, are strongly affected: without access to the Internet or a computer, they are unable to do homework and projects and will consequently struggle in school. Poor grades can lead to frustration with academia and may in turn lead to delinquent behavior, low-income jobs, decreased chances of pursuing higher education, and poor job skills.

Increased facility with technology does not necessarily lead to an increased ability to interpret how technology exerts its own pressure on us. Indeed, with increased access to information, judging the viability of that information becomes increasingly difficult.[51] It is crucial, then, to find ways to help young learners develop tactics for engaging critically with the tools and resources they use. This is identified as a "breakdown of traditional forms of professional training and socialization that might prepare young people for their increasingly public roles as media makers and community participants" (Jenkins et al., p. 5). For example, throughout most of the last half of the 20th century, learners who wanted to become journalists would generally engage in a formal apprenticeship through journalism classes and work on a high school newspaper. This work would be guided by a teacher who was an expert in the rules and norms of journalism and who would confer that knowledge on student-apprentices. With increasing access to Web 2.0 tools, however, anybody can be a journalist of sorts, with or without an apprenticeship to the discipline. A key goal in media education, then, must be to find ways to help learners develop techniques for active reflection on the choices they make—and the contributions they offer—as members of a participatory culture.

As teachers, administrators, and policymakers consider the role of new media and participatory practices in the school environment, they will need to find ways to address multiple challenges. These include finding ways to work with the decentralization of knowledge inherent in online spaces; developing policies with respect to filtering software that protect learners and schools without limiting students' access to sites that enable participation; and considering the role of assessment in classrooms that embrace participatory practices.

Cultures are substantially defined by their media and their tools for thinking, working, learning, and collaborating. Unfortunately, a large number of new media are designed to see humans only as consumers; and people, particularly young people in educational institutions, form mindsets based on their exposure to specific media. The current mindset about learning, teaching, and education is dominated by a view in which teaching is often fitted "into a mold in which a single, presumably omniscient teacher explicitly tells or shows presumably unknowing learners something they presumably know nothing about".[52] A critical challenge is the reformulation and reconceptualization of this impoverished and misleading conception. Learning should not take place in a separate phase and in a separate place, but should be integrated into people's lives, allowing them to construct solutions to their own problems. As they experience breakdowns in doing so, they should be able to learn on demand by gaining access to directly relevant information.
The direct usefulness of new knowledge in actual problem situations greatly improves the motivation to learn the new material, because the time and effort invested in learning are immediately worthwhile for the task at hand — not merely for some putative long-term gain.

In order to create the active-contributor mindsets that serve as the foundation of participatory cultures, learning cannot be restricted to finding knowledge that is "out there". Rather than serving as the "reproductive organ of a consumer society",[53] educational institutions must cultivate the development of an active-contributor mindset by creating the habits, tools and skills that help people become empowered and willing to contribute actively to the design of their lives and communities. Beyond supporting contributions from individual designers, educational institutions need to build a culture and mindset of sharing, supported by effective technologies and sustained by the personal motivation to occasionally work for the benefit of groups and communities. This includes finding ways for people to see work done for the benefit of others as being "on-task", rather than as extra work for which there is no recognition and no reward.

Jenkins et al. believe that conversation surrounding the digital divide should focus on opportunities to participate and to develop the cultural competencies and social skills required to take part, rather than getting stuck on the question of technological access. As institutions, schools have been slow to take up participatory culture. Instead, after-school programs currently devote more attention to the development of new media literacies: a set of cultural competencies and social skills that young people need in the new media landscape. Participatory culture shifts this literacy from the individual level to community involvement. Networking and collaboration develop social skills that are vital to the new literacies. Although new, these skills build on an existing foundation of traditional literacy, research skills, technical skills, and critical analysis skills taught in the classroom.

Metadesign is "design for designers".[54] It represents an emerging conceptual framework aimed at defining and creating the social and technical infrastructures in which participatory cultures can come alive and new forms of collaborative design can take place. It extends the traditional notion of system design beyond the original development of a system, allowing users to become co-designers and co-developers. It is grounded in the basic assumption that future uses and problems cannot be completely anticipated at design time, when a system is developed. At use time, users will discover mismatches between their needs and the support that an existing system can provide for them. These mismatches will lead to breakdowns that serve as potential sources of new insights, new knowledge, and new understanding. Meta-design supports participatory cultures as follows:
https://en.wikipedia.org/wiki/Participatory_culture
A number of countries have attempted to restrict the import of cryptography tools. Countries may wish to restrict the import of cryptography technologies for a number of reasons: The Electronic Privacy Information Center and Global Internet Liberty Campaign reports use a color code to indicate the level of restriction, with the following meanings:
https://en.wikipedia.org/wiki/Restrictions_on_the_import_of_cryptography
The golden-section search is a technique for finding an extremum (minimum or maximum) of a function inside a specified interval. For a strictly unimodal function with an extremum inside the interval, it will find that extremum, while for an interval containing multiple extrema (possibly including the interval boundaries), it will converge to one of them. If the only extremum on the interval is on a boundary of the interval, it will converge to that boundary point. The method operates by successively narrowing the range of values on the specified interval, which makes it relatively slow, but very robust. The technique derives its name from the fact that the algorithm maintains the function values for four points whose three interval widths are in the ratio $\varphi : 1 : \varphi$, where $\varphi$ is the golden ratio. These ratios are maintained for each iteration and are maximally efficient. Excepting boundary points, when searching for a minimum, the central point is always less than or equal to the outer points, assuring that a minimum is contained between the outer points. The converse is true when searching for a maximum. The algorithm is the limit of Fibonacci search (also described below) for many function evaluations. Fibonacci search and golden-section search were discovered by Kiefer (1953) (see also Avriel and Wilde (1966)).

The discussion here is posed in terms of searching for a minimum (searching for a maximum is similar) of a unimodal function. Unlike finding a zero, where two function evaluations with opposite sign are sufficient to bracket a root, when searching for a minimum, three values are necessary. The golden-section search is an efficient way to progressively reduce the interval locating the minimum. The key is to observe that regardless of how many points have been evaluated, the minimum lies within the interval defined by the two points adjacent to the point with the least value so far evaluated.

The diagram above illustrates a single step in the technique for finding a minimum. The functional values of $f(x)$ are on the vertical axis, and the horizontal axis is the $x$ parameter. The value of $f(x)$ has already been evaluated at the three points $x_1$, $x_2$, and $x_3$. Since $f_2$ is smaller than either $f_1$ or $f_3$, it is clear that a minimum lies inside the interval from $x_1$ to $x_3$.

The next step in the minimization process is to "probe" the function by evaluating it at a new value of $x$, namely $x_4$. It is most efficient to choose $x_4$ somewhere inside the largest interval, i.e. between $x_2$ and $x_3$. From the diagram, it is clear that if the function yields $f_{4a} > f(x_2)$, then a minimum lies between $x_1$ and $x_4$, and the new triplet of points will be $x_1$, $x_2$, and $x_4$. However, if the function yields the value $f_{4b} < f(x_2)$, then a minimum lies between $x_2$ and $x_3$, and the new triplet of points will be $x_2$, $x_4$, and $x_3$. Thus, in either case, we can construct a new narrower search interval that is guaranteed to contain the function's minimum.
From the diagram above, it is seen that the new search interval will be either between x1 and x4 with a length of a + c, or between x2 and x3 with a length of b. The golden-section search requires that these intervals be equal. If they are not, a run of "bad luck" could lead to the wider interval being used many times, thus slowing down the rate of convergence. To ensure that b = a + c, the algorithm should choose x4 = x1 + (x3 − x2).

However, there still remains the question of where x2 should be placed in relation to x1 and x3. The golden-section search chooses the spacing between these points in such a way that these points have the same proportion of spacing as the subsequent triple x1, x2, x4 or x2, x4, x3. By maintaining the same proportion of spacing throughout the algorithm, we avoid a situation in which x2 is very close to x1 or x3 and guarantee that the interval width shrinks by the same constant proportion in each step.

Mathematically, to ensure that the spacing after evaluating f(x4) is proportional to the spacing prior to that evaluation, if f(x4) is f4a and our new triplet of points is x1, x2, and x4, then we want

    c / a = a / b.

However, if f(x4) is f4b and our new triplet of points is x2, x4, and x3, then we want

    c / (b − c) = a / b.

Eliminating c from these two simultaneous equations yields

    (b / a)^2 − (b / a) = 1,

or

    b / a = φ,

where φ is the golden ratio:

    φ = (1 + √5) / 2 ≈ 1.618.

The appearance of the golden ratio in the proportional spacing of the evaluation points is how this search algorithm gets its name.

Any number of termination conditions may be applied, depending upon the application. The interval ΔX = X4 − X1 is a measure of the absolute error in the estimation of the minimum X and may be used to terminate the algorithm. The value of ΔX is reduced by a factor of r = φ − 1 for each iteration, so the number of iterations to reach an absolute error of ΔX is about ln(ΔX/ΔX0) / ln(r), where ΔX0 is the initial value of ΔX.

Because smooth functions are flat (their first derivative is close to zero) near a minimum, attention must be paid not to expect too great an accuracy in locating the minimum. The termination condition provided in the book Numerical Recipes in C is based on testing the gaps among x1, x2, x3 and x4, terminating when the relative accuracy bound

    |x3 − x1| < τ (|x2| + |x4|)

is satisfied, where τ is a tolerance parameter of the algorithm and |x| is the absolute value of x. The check is based on the bracket size relative to its central value, because the squared relative error in x is approximately proportional to the absolute error in f(x) in typical cases. For that same reason, the Numerical Recipes text recommends τ = √ε, where ε is the required absolute precision of f(x).

Note: the examples here describe an algorithm for finding the minimum of a function. For a maximum, the comparison operators need to be reversed.
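The derivation above maps directly onto code. Below is a minimal Python sketch (the name golden_section_min and its parameters are illustrative choices, not from any particular library) that maintains the golden-ratio bracket and applies a Numerical Recipes-style relative-accuracy stopping test:

    import math

    INVPHI = (math.sqrt(5) - 1) / 2   # 1/phi = phi - 1 ~ 0.618

    def golden_section_min(f, a, b, tau=1e-8, max_iter=200):
        """Locate the minimum of a unimodal f on [a, b].

        Each iteration shrinks the bracket by the constant factor 1/phi,
        so convergence is linear but guaranteed for unimodal f.
        """
        # Interior probe points placed in the golden ratio.
        c = b - INVPHI * (b - a)
        d = a + INVPHI * (b - a)
        fc, fd = f(c), f(d)
        for _ in range(max_iter):
            # NR-style stop: bracket small relative to the interior points.
            if abs(b - a) <= tau * (abs(c) + abs(d)):
                break
            if fc < fd:                # minimum lies in [a, d]
                b, d, fd = d, c, fc
                c = b - INVPHI * (b - a)
                fc = f(c)
            else:                      # minimum lies in [c, b]
                a, c, fc = c, d, fd
                d = a + INVPHI * (b - a)
                fd = f(d)
        return (a + b) / 2

    # Example: the minimum of (x - 2)^2 is found near x = 2.
    print(golden_section_min(lambda x: (x - 2) ** 2, 0.0, 5.0))

Because one of the two interior points is reused at each step, only one new function evaluation is needed per iteration, which is the source of the method's efficiency.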
A very similar algorithm can also be used to find the extremum (minimum or maximum) of a sequence of values that has a single local minimum or local maximum. In order to approximate the probe positions of golden-section search while probing only integer sequence indices, the variant of the algorithm for this case typically maintains a bracketing of the solution in which the length of the bracketed interval is a Fibonacci number. For this reason, the sequence variant of golden-section search is often called Fibonacci search (a sketch appears below). Fibonacci search was first devised by Kiefer (1953) as a minimax search for the maximum (minimum) of a unimodal function in an interval. The bisection method is a similar algorithm for finding a zero of a function. Note that, for bracketing a zero, only two points are needed, rather than three. The interval ratio decreases by 2 in each step, rather than by the golden ratio.
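As a companion to the continuous version above, here is a hedged Python sketch of the integer variant (the name fibonacci_argmin and the pad-with-infinity trick are illustrative choices, not a standard API). The bracket always has Fibonacci length, so both probe positions fall on integer indices and one of the two probes is reused at each step:

    import math

    def fibonacci_argmin(seq):
        """Index of the minimum of a unimodal sequence (a sketch).

        Indices past the end of the sequence are treated as +infinity,
        which pads the bracket out to Fibonacci length.
        """
        n = len(seq)
        if n == 1:
            return 0
        val = lambda i: seq[i] if i < n else math.inf

        fib = [1, 1, 2]                  # F(0), F(1), F(2), ...
        while fib[-1] < n - 1:
            fib.append(fib[-1] + fib[-2])
        k = len(fib) - 1                 # bracket [lo, lo + F(k)]

        lo = 0
        x1, x2 = lo + fib[k - 2], lo + fib[k - 1]
        f1, f2 = val(x1), val(x2)
        while k > 2:
            k -= 1
            if f1 < f2:                  # minimum in [lo, x2]
                x2, f2 = x1, f1          # reuse the surviving probe
                x1 = lo + fib[k - 2]
                f1 = val(x1)
            else:                        # minimum in [x1, old right end]
                lo = x1
                x1, f1 = x2, f2
                x2 = lo + fib[k - 1]
                f2 = val(x2)
        # Bracket is now tiny; pick the best remaining candidate index.
        candidates = [i for i in (lo, x1, x2, lo + fib[k]) if i < n]
        return min(candidates, key=val)

    print(fibonacci_argmin([5, 3, 1, 2, 4]))  # 2, the index of the minimum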
https://en.wikipedia.org/wiki/Golden-section_search
Earliest deadline first (EDF) or least time to go is a dynamic priority scheduling algorithm used in real-time operating systems to place processes in a priority queue. Whenever a scheduling event occurs (a task finishes, a new task is released, etc.) the queue is searched for the process closest to its deadline. This process is the next to be scheduled for execution.

EDF is an optimal scheduling algorithm on preemptive uniprocessors, in the following sense: if a collection of independent jobs, each characterized by an arrival time, an execution requirement and a deadline, can be scheduled (by any algorithm) in a way that ensures all the jobs complete by their deadline, then EDF will schedule this collection of jobs so they all complete by their deadline. When scheduling periodic processes that have deadlines equal to their periods, EDF has a utilization bound of 100%. Thus, the schedulability test[1] for EDF is

    U = sum(i = 1 to n) Ci / Ti ≤ 1,

where the Ci are the worst-case computation times of the n processes and the Ti are their respective inter-arrival periods (assumed to be equal to the relative deadlines).[2] That is, EDF can guarantee that all deadlines are met provided that the total CPU utilization is not more than 100%. Compared to fixed-priority scheduling techniques like rate-monotonic scheduling, EDF can guarantee all the deadlines in the system at higher loading.

Note that this schedulability test applies only when each deadline equals its period. When a deadline is less than its period, things are different. Here is an example: four periodic tasks need scheduling, where each task is written TaskNo(computation time, relative deadline, period). They are T0(5,13,20), T1(3,7,11), T2(4,6,10) and T3(1,1,20). This task group has a utilization no greater than 1.0, calculated as 5/20 + 3/11 + 4/10 + 1/20 = 0.97 (rounded to two digits), but it is still unschedulable; see the EDF Scheduling Failure figure for details.

EDF is also an optimal scheduling algorithm on non-preemptive uniprocessors, but only among the class of scheduling algorithms that do not allow inserted idle time. When scheduling periodic processes that have deadlines equal to their periods, a sufficient (but not necessary) schedulability test for EDF becomes[3]

    sum(i = 1 to n) Ci / Ti ≤ 1 − p,

where p represents the penalty for non-preemption, given by max{Ci} / min{Ti}. If this factor can be kept small, non-preemptive EDF can be beneficial, as it has low implementation overhead.

However, when the system is overloaded, the set of processes that will miss deadlines is largely unpredictable (it will be a function of the exact deadlines and the time at which the overload occurs). This is a considerable disadvantage to a real-time systems designer. The algorithm is also difficult to implement in hardware, and there is a tricky issue of representing deadlines in different ranges (deadlines cannot be more precise than the granularity of the clock used for the scheduling). If modular arithmetic is used to calculate future deadlines relative to now, the field storing a future relative deadline must accommodate at least the value of (("duration" {of the longest expected time to completion} × 2) + "now"). Therefore EDF is not commonly found in industrial real-time computer systems. Instead, most real-time computer systems use fixed-priority scheduling (usually rate-monotonic scheduling).
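To make the utilization test concrete, here is a small Python sketch (the function name is illustrative). The comment shows why it must not be applied to the T0–T3 example above, where one relative deadline is shorter than its period:

    def edf_utilization_ok(tasks):
        """Necessary and sufficient EDF test when deadline == period.

        tasks: list of (Ci, Ti) pairs of worst-case computation time
        and period (with relative deadline assumed equal to period).
        """
        return sum(c / t for c, t in tasks) <= 1.0

    # Utilization of the example set: 5/20 + 3/11 + 4/10 + 1/20 ~ 0.97.
    # The test passes, but T3's relative deadline (1) is shorter than
    # its period (20), so the test does not apply and the set is in
    # fact unschedulable, as the text explains.
    print(edf_utilization_ok([(5, 20), (3, 11), (4, 10), (1, 20)]))  # True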
With fixed priorities, it is easy to predict that overload conditions will cause the low-priority processes to miss deadlines, while the highest-priority process will still meet its deadline. There is a significant body of research dealing with EDF scheduling in real-time computing; it is possible to calculate worst-case response times of processes in EDF, to deal with other types of processes than periodic processes, and to use servers to regulate overloads.

Consider 3 periodic processes scheduled on a preemptive uniprocessor. The execution times and periods are as shown in the following table:

    Process   Execution time   Period
    P1        1                8
    P2        2                5
    P3        4                10

In this example, the units of time may be considered to be schedulable time slices. The deadlines are that each periodic process must complete within its period.

In the timing diagram, the columns represent time slices with time increasing to the right, and the processes all start their periods at time slice 0. The timing diagram's alternating blue and white shading indicates each process's periods, with deadlines at the color changes. The first process scheduled by EDF is P2, because its period is shortest, and therefore it has the earliest deadline. Likewise, when P2 completes, P1 is scheduled, followed by P3. At time slice 5, both P2 and P3 have the same deadline, needing to complete before time slice 10, so EDF may schedule either one.

The utilization will be

    1/8 + 2/5 + 4/10 = 37/40 = 0.925 = 92.5%.

Since the least common multiple of the periods is 40, the scheduling pattern can repeat every 40 time slices. But only 37 of those 40 time slices are used by P1, P2, or P3. Since the utilization, 92.5%, is not greater than 100%, the system is schedulable with EDF.

Undesirable deadline interchanges may occur with EDF scheduling. A process may use a shared resource inside a critical section, to prevent it from being pre-emptively descheduled in favour of another process with an earlier deadline. If so, it becomes important for the scheduler to assign the running process the earliest deadline from among the other processes waiting for the resource; otherwise the processes with earlier deadlines might miss them. This is especially important if the process running the critical section takes much longer to complete and exit its critical section, which will delay releasing the shared resource. But the process might still be pre-empted in favour of others that have earlier deadlines but do not share the critical resource. This hazard of deadline interchange is analogous to priority inversion when using fixed-priority pre-emptive scheduling.

To speed up the deadline search within the ready queue, the queue entries can be kept sorted according to their deadlines. When a new process or a periodic process is given a new deadline, it is inserted before the first process with a later deadline. This way, the processes with the earliest deadlines are always at the beginning of the queue.

In a heavy-traffic analysis of the behavior of a single-server queue under an earliest-deadline-first scheduling policy with reneging,[4] the processes have deadlines and are served only until their deadlines elapse. The fraction of "reneged work", defined as the residual work not serviced due to elapsed deadlines, is an important performance measure. It is commonly accepted that an implementation of fixed-priority pre-emptive scheduling (FPS) is simpler than a dynamic priority scheduler like EDF.
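The worked example can be reproduced in a few lines of Python. The following is a minimal discrete-time sketch of an EDF dispatcher (a toy model with illustrative names, not a kernel scheduler; it assumes all deadlines are met, as they are here). Running it shows P2 dispatched first, as described above, and a 40-slice pattern with three idle slices:

    def simulate_edf(tasks, horizon):
        """Toy discrete-time EDF simulator.

        tasks: list of (name, computation_time, period); each job's
        deadline is the end of its period. Returns the name of the
        task run in each time slice.
        """
        # Remaining work and absolute deadline of each task's current job.
        state = [{"name": n, "c": c, "t": t, "rem": c, "dl": t}
                 for n, c, t in tasks]
        timeline = []
        for now in range(horizon):
            for s in state:
                if now > 0 and now % s["t"] == 0:   # release a new job
                    s["rem"] = s["c"]
                    s["dl"] = now + s["t"]
            ready = [s for s in state if s["rem"] > 0]
            if ready:
                job = min(ready, key=lambda s: s["dl"])  # earliest deadline
                job["rem"] -= 1
                timeline.append(job["name"])
            else:
                timeline.append("idle")
        return timeline

    print(simulate_edf([("P1", 1, 8), ("P2", 2, 5), ("P3", 4, 10)], 40))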
However, when comparing the maximum usage of an optimal scheduling under fixed priority (with the priority of each thread given by rate-monotonic scheduling), EDF can reach 100% utilization while the theoretical maximum for rate-monotonic scheduling is around 69%.

In addition, the worst-case overhead of an EDF implementation (fully preemptive or limited/non-preemptive) for periodic and/or sporadic tasks can be made proportional to the logarithm of the largest time representation required by a given system (to encode deadlines and periods) using digital search trees.[5] In practical cases, such as embedded systems using a fixed, 32-bit representation of time, scheduling decisions can be made using this implementation in a small fixed constant time that is independent of the number of system tasks. In such situations, experiments have found little discernible difference in overhead between EDF and FPS, even for task sets of (comparatively) large cardinality.[5]

Note that EDF does not make any specific assumption on the periodicity of the tasks; hence, it can be used for scheduling periodic as well as aperiodic tasks.[2]

Earliest deadline first (EDF) scheduling finds its most significant applications in real-time systems where missing deadlines can lead to critical consequences; these domains typically require deterministic timing guarantees. Although EDF implementations are not common in commercial real-time kernels, several open-source and real-time kernels implement EDF.
https://en.wikipedia.org/wiki/Earliest_deadline_first_scheduling
The actor model in computer science is a mathematical model of concurrent computation that treats an actor as the basic building block of concurrent computation. In response to a message it receives, an actor can: make local decisions, create more actors, send more messages, and determine how to respond to the next message received. Actors may modify their own private state, but can only affect each other indirectly through messaging (removing the need for lock-based synchronization).

The actor model originated in 1973.[1] It has been used both as a framework for a theoretical understanding of computation and as the theoretical basis for several practical implementations of concurrent systems. The relationship of the model to other work is discussed in actor model and process calculi.

According to Carl Hewitt, unlike previous models of computation, the actor model was inspired by physics, including general relativity and quantum mechanics.[citation needed] It was also influenced by the programming languages Lisp, Simula, early versions of Smalltalk, capability-based systems, and packet switching. Its development was "motivated by the prospect of highly parallel computing machines consisting of dozens, hundreds, or even thousands of independent microprocessors, each with its own local memory and communications processor, communicating via a high-performance communications network."[2] Since that time, the advent of massive concurrency through multi-core and manycore computer architectures has revived interest in the actor model.

Following Hewitt, Bishop, and Steiger's 1973 publication, Irene Greif developed an operational semantics for the actor model as part of her doctoral research.[3] Two years later, Henry Baker and Hewitt published a set of axiomatic laws for actor systems.[4][5] Other major milestones include William Clinger's 1981 dissertation introducing a denotational semantics based on power domains[2] and Gul Agha's 1985 dissertation which further developed a transition-based semantic model complementary to Clinger's.[6] This resulted in the full development of actor model theory.

Major software implementation work was done by Russ Atkinson, Giuseppe Attardi, Henry Baker, Gerry Barber, Peter Bishop, Peter de Jong, Ken Kahn, Henry Lieberman, Carl Manning, Tom Reinhardt, Richard Steiger and Dan Theriault in the Message Passing Semantics Group at Massachusetts Institute of Technology (MIT). Research groups led by Chuck Seitz at California Institute of Technology (Caltech) and Bill Dally at MIT constructed computer architectures that further developed the message passing in the model. See Actor model implementation.

Research on the actor model has been carried out at California Institute of Technology, Kyoto University Tokoro Laboratory, Microelectronics and Computer Technology Corporation (MCC), MIT Artificial Intelligence Laboratory, SRI, Stanford University, University of Illinois at Urbana–Champaign,[7] Pierre and Marie Curie University (University of Paris 6), University of Pisa, University of Tokyo Yonezawa Laboratory, Centrum Wiskunde & Informatica (CWI) and elsewhere.

The actor model adopts the philosophy that everything is an actor. This is similar to the everything is an object philosophy used by some object-oriented programming languages. An actor is a computational entity that, in response to a message it receives, can concurrently: send a finite number of messages to other actors; create a finite number of new actors; and designate the behavior to be used for the next message it receives. There is no assumed sequence to the above actions, and they could be carried out in parallel.
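The following is a minimal Python sketch of these ideas (the Actor class and counter example are illustrative, not from any actor library): each actor has a private mailbox and a behavior, its own thread processes messages one at a time, and the behavior may designate a replacement behavior for the next message.

    import threading, queue, time

    class Actor:
        """A toy actor: a mailbox, a behavior, and a private thread.

        Only the actor's own thread touches its state, so no locks
        are needed; other actors affect it only by sending messages.
        """
        def __init__(self, behavior):
            self.behavior = behavior      # handler for the next message
            self.mailbox = queue.Queue()
            threading.Thread(target=self._run, daemon=True).start()

        def send(self, message):          # asynchronous, fire-and-forget
            self.mailbox.put(message)

        def _run(self):
            while True:
                msg = self.mailbox.get()
                # A behavior may send messages, create actors, and return
                # the behavior for the next message (None keeps the old one).
                self.behavior = self.behavior(self, msg) or self.behavior

    # Example: a counter whose state is captured in its behavior.
    def counter(n=0):
        def behavior(self, msg):
            if msg == "inc":
                return counter(n + 1)     # designate a new behavior
            print("count =", n)
        return behavior

    a = Actor(counter())
    a.send("inc"); a.send("inc"); a.send("show")
    time.sleep(0.1)                       # let the thread drain the mailbox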
Decoupling the sender from the communications it sends was a fundamental advance of the actor model, enabling asynchronous communication and control structures as patterns of passing messages.[8]

Recipients of messages are identified by address, sometimes called a "mailing address". Thus an actor can only communicate with actors whose addresses it has. It can obtain addresses from a message it receives, or it can know the address of an actor it has itself created.

The actor model is characterized by inherent concurrency of computation within and among actors, dynamic creation of actors, inclusion of actor addresses in messages, and interaction only through direct asynchronous message passing with no restriction on message arrival order.

Over the years, several different formal systems have been developed which permit reasoning about systems in the actor model. There are also formalisms that are not fully faithful to the actor model in that they do not formalize the guaranteed delivery of messages (see Attempts to relate actor semantics to algebra and linear logic). The actor model can be used as a framework for modeling, understanding, and reasoning about a wide range of concurrent systems.[15]

The actor model is about the semantics of message passing. Arguably, the first concurrent programs were interrupt handlers. During the course of its normal operation a computer needed to be able to receive information from outside (characters from a keyboard, packets from a network, etc.). So when the information arrived, the execution of the computer was interrupted and special code (called an interrupt handler) was called to put the information in a data buffer where it could be subsequently retrieved.

In the early 1960s, interrupts began to be used to simulate the concurrent execution of several programs on one processor.[17] Having concurrency with shared memory gave rise to the problem of concurrency control. Originally, this problem was conceived as being one of mutual exclusion on a single computer. Edsger Dijkstra developed semaphores and later, between 1971 and 1973,[18] Tony Hoare[19] and Per Brinch Hansen[20] developed monitors to solve the mutual exclusion problem. However, neither of these solutions provided a programming language construct that encapsulated access to shared resources. This encapsulation was later accomplished by the serializer construct ([Hewitt and Atkinson 1977, 1979] and [Atkinson 1980]).

The first models of computation (e.g., Turing machines, Post productions, the lambda calculus, etc.) were based on mathematics and made use of a global state to represent a computational step (later generalized in [McCarthy and Hayes 1969] and [Dijkstra 1976]; see Event orderings versus global state). Each computational step was from one global state of the computation to the next global state. The global state approach was continued in automata theory for finite-state machines and push-down stack machines, including their nondeterministic versions. Such nondeterministic automata have the property of bounded nondeterminism; that is, if a machine always halts when started in its initial state, then there is a bound on the number of states in which it halts. Edsger Dijkstra further developed the nondeterministic global state approach.
Dijkstra's model gave rise to a controversy concerning unbounded nondeterminism (also called unbounded indeterminacy), a property of concurrency by which the amount of delay in servicing a request can become unbounded as a result of arbitration of contention for shared resources, while still guaranteeing that the request will eventually be serviced. Hewitt argued that the actor model should provide the guarantee of service. In Dijkstra's model, although there could be an unbounded amount of time between the execution of sequential instructions on a computer, a (parallel) program that started out in a well-defined state could terminate in only a bounded number of states [Dijkstra 1976]. Consequently, his model could not provide the guarantee of service. Dijkstra argued that it was impossible to implement unbounded nondeterminism.

Hewitt argued otherwise: there is no bound that can be placed on how long it takes a computational circuit called an arbiter to settle (see metastability (electronics)).[21] Arbiters are used in computers to deal with the circumstance that computer clocks operate asynchronously with respect to input from outside, e.g., keyboard input, disk access, network input, etc. So it could take an unbounded time for a message sent to a computer to be received, and in the meantime the computer could traverse an unbounded number of states.

The actor model features unbounded nondeterminism, which was captured in a mathematical model by Will Clinger using domain theory.[2] In the actor model, there is no global state.[dubious–discuss]

Messages in the actor model are not necessarily buffered. This was a sharp break with previous approaches to models of concurrent computation. The lack of buffering caused a great deal of misunderstanding at the time of the development of the actor model and is still a controversial issue. Some researchers argued that the messages are buffered in the "ether" or the "environment". Also, messages in the actor model are simply sent (like packets in IP); there is no requirement for a synchronous handshake with the recipient.

A natural development of the actor model was to allow addresses in messages. Influenced by packet-switched networks [1961 and 1964], Hewitt proposed the development of a new model of concurrent computation in which communications would not have any required fields at all: they could be empty. Of course, if the sender of a communication desired a recipient to have access to addresses which the recipient did not already have, the address would have to be sent in the communication.

For example, an actor might need to send a message to a recipient actor from which it later expects to receive a response, but the response will actually be handled by a third actor component that has been configured to receive and handle the response (for example, a different actor implementing the observer pattern). The original actor could accomplish this by sending a communication that includes the message it wishes to send, along with the address of the third actor that will handle the response. This third actor that will handle the response is called the resumption (sometimes also called a continuation or stack frame). When the recipient actor is ready to send a response, it sends the response message to the resumption actor address that was included in the original communication.
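Reusing the toy Actor class from the sketch above, the resumption pattern might look as follows (adder, printer, and the tuple message format are illustrative assumptions): the request carries the address of the actor that is to handle the reply, rather than implying a reply to the sender.

    # The request message carries a reply-to address (the "resumption").
    def adder(self, msg):
        x, y, resumption = msg
        resumption.send(x + y)          # respond to the designated actor

    def printer(self, msg):
        print("result:", msg)

    calc = Actor(adder)
    out = Actor(printer)
    calc.send((1, 2, out))              # "out", not the sender, gets the reply
    time.sleep(0.1)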
So, the ability of actors to create new actors with which they can exchange communications, along with the ability to include the addresses of other actors in messages, gives actors the ability to create and participate in arbitrarily variable topological relationships with one another, much as the objects in Simula and other object-oriented languages may also be relationally composed into variable topologies of message-exchanging objects.

As opposed to the previous approach based on composing sequential processes, the actor model was developed as an inherently concurrent model. In the actor model sequentiality was a special case that derived from concurrent computation, as explained in actor model theory.

Hewitt argued against adding the requirement that messages must arrive in the order in which they are sent to the actor. If output message ordering is desired, then it can be modeled by a queue actor that provides this functionality. Such a queue actor would queue the messages that arrived so that they could be retrieved in FIFO order (a sketch of such a queue actor appears below). So if an actor X sent a message M1 to an actor Y, and later X sent another message M2 to Y, there is no requirement that M1 arrives at Y before M2. In this respect the actor model mirrors packet-switching systems, which do not guarantee that packets are received in the order sent. Not providing the order-of-delivery guarantee allows packet switching to buffer packets, use multiple paths to send packets, resend damaged packets, and provide other optimizations.

As a further example, actors are allowed to pipeline the processing of messages. What this means is that in the course of processing a message M1, an actor can designate the behavior to be used to process the next message, and then in fact begin processing another message M2 before it has finished processing M1. Just because an actor is allowed to pipeline the processing of messages does not mean that it must pipeline the processing. Whether a message is pipelined is an engineering tradeoff. How would an external observer know whether the processing of a message by an actor has been pipelined? There is no ambiguity in the definition of an actor created by the possibility of pipelining. Of course, it is possible to perform the pipeline optimization incorrectly in some implementations, in which case unexpected behavior may occur.

Another important characteristic of the actor model is locality. Locality means that in processing a message, an actor can send messages only to addresses that it receives in the message, addresses that it already had before it received the message, and addresses for actors that it creates while processing the message. (But see Synthesizing addresses of actors.) Also, locality means that there is no simultaneous change in multiple locations. In this way it differs from some other models of concurrency, e.g., the Petri net model, in which tokens are simultaneously removed from multiple locations and placed in other locations.

The idea of composing actor systems into larger ones is an important aspect of modularity that was developed in Gul Agha's doctoral dissertation,[6] developed later by Gul Agha, Ian Mason, Scott Smith, and Carolyn Talcott.[9]

A key innovation was the introduction of behavior specified as a mathematical function to express what an actor does when it processes a message, including specifying a new behavior to process the next message that arrives. Behaviors provided a mechanism to mathematically model the sharing in concurrency.
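Here is a sketch of such a queue actor, again built on the toy Actor class above (the message format and names are illustrative; note also that Python's Queue mailbox happens to deliver in FIFO order, so this only illustrates how ordering can be modeled on top of an order-free transport, not a property of the model itself):

    def fifo_queue(pending=()):
        """Behavior holding the messages queued so far (immutably)."""
        def behavior(self, msg):
            kind, payload = msg
            if kind == "put":                 # enqueue a message
                return fifo_queue(pending + (payload,))
            if kind == "get" and pending:     # payload is the reply-to actor
                payload.send(pending[0])
                return fifo_queue(pending[1:])
        return behavior

    q = Actor(fifo_queue())
    consumer = Actor(lambda self, msg: print("dequeued:", msg))
    q.send(("put", "M1")); q.send(("put", "M2"))
    q.send(("get", consumer)); q.send(("get", consumer))  # M1, then M2
    time.sleep(0.1)

Note how each "put" and "get" designates a new behavior holding the updated queue contents, which is exactly the behavior-as-function idea described above.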
Behaviors also freed the actor model from implementation details, e.g., the Smalltalk-72 token stream interpreter. However, the efficient implementation of systems described by the actor model requires extensive optimization. See Actor model implementation for details.

Other concurrency systems (e.g., process calculi) can be modeled in the actor model using a two-phase commit protocol.[22]

There is a Computational Representation Theorem in the actor model for systems which are closed in the sense that they do not receive communications from outside. The mathematical denotation denoted by a closed system S is constructed from an initial behavior ⊥S and a behavior-approximating function progressionS. These obtain increasingly better approximations and construct a denotation (meaning) for S as follows [Hewitt 2008; Clinger 1981]:

    DenoteS = ⊔ (i ∈ ω) progressionS^i(⊥S).

In this way, S can be mathematically characterized in terms of all its possible behaviors (including those involving unbounded nondeterminism). Although DenoteS is not an implementation of S, it can be used to prove a generalization of the Church-Turing-Rosser-Kleene thesis [Kleene 1943]. A consequence of the above theorem is that a finite actor can nondeterministically respond with an uncountable[clarify] number of different outputs.

One of the key motivations for the development of the actor model was to understand and deal with the control structure issues that arose in development of the Planner programming language.[citation needed] Once the actor model was initially defined, an important challenge was to understand the power of the model relative to Robert Kowalski's thesis that "computation can be subsumed by deduction". Hewitt argued that Kowalski's thesis turned out to be false for the concurrent computation in the actor model (see Indeterminacy in concurrent computation). Nevertheless, attempts were made to extend logic programming to concurrent computation. However, Hewitt and Agha [1991] claimed that the resulting systems were not deductive in the following sense: computational steps of the concurrent logic programming systems do not follow deductively from previous steps (see Indeterminacy in concurrent computation). Recently, logic programming has been integrated into the actor model in a way that maintains logical semantics.[21]

Migration in the actor model is the ability of actors to change locations. E.g., in his dissertation, Aki Yonezawa modeled a post office that customer actors could enter, change locations within while operating, and exit. An actor that can migrate can be modeled by having a location actor that changes when the actor migrates. However, the faithfulness of this modeling is controversial and the subject of research.[citation needed]

The security of actors can be protected in several ways. A delicate point in the actor model is the ability to synthesize the address of an actor. In some cases security can be used to prevent the synthesis of addresses (see Security). However, if an actor address is simply a bit string then clearly it can be synthesized, although it may be difficult or even infeasible to guess the address of an actor if the bit strings are long enough. SOAP uses a URL for the address of an endpoint where an actor can be reached.
Since a URL is a character string, it can clearly be synthesized, although encryption can make it virtually impossible to guess. Synthesizing the addresses of actors is usually modeled using mapping. The idea is to use an actor system to perform the mapping to the actual actor addresses. For example, on a computer the memory structure of the computer can be modeled as an actor system that does the mapping. In the case of SOAP addresses, it is modeling the DNS and the rest of the URL mapping.

Robin Milner's initial published work on concurrency[23] was also notable in that it was not based on composing sequential processes. His work differed from the actor model because it was based on a fixed number of processes of fixed topology communicating numbers and strings using synchronous communication. The original communicating sequential processes (CSP) model[24] published by Tony Hoare differed from the actor model because it was based on the parallel composition of a fixed number of sequential processes connected in a fixed topology, and communicating using synchronous message-passing based on process names (see Actor model and process calculi history). Later versions of CSP abandoned communication based on process names in favor of anonymous communication via channels, an approach also used in Milner's work on the calculus of communicating systems (CCS) and the π-calculus.

These early models by Milner and Hoare both had the property of bounded nondeterminism. Modern, theoretical CSP ([Hoare 1985] and [Roscoe 2005]) explicitly provides unbounded nondeterminism. Petri nets and their extensions (e.g., coloured Petri nets) are like actors in that they are based on asynchronous message passing and unbounded nondeterminism, while they are like early CSP in that they define fixed topologies of elementary processing steps (transitions) and message repositories (places).

The actor model has been influential on both theory development and practical software development. The actor model has influenced the development of the π-calculus and subsequent process calculi. In his Turing lecture, Robin Milner wrote:[25]

"Now, the pure lambda-calculus is built with just two kinds of thing: terms and variables. Can we achieve the same economy for a process calculus? Carl Hewitt, with his actors model, responded to this challenge long ago; he declared that a value, an operator on values, and a process should all be the same kind of thing: an actor. This goal impressed me, because it implies the homogeneity and completeness of expression ... But it was long before I could see how to attain the goal in terms of an algebraic calculus... So, in the spirit of Hewitt, our first step is to demand that all things denoted by terms or accessed by names—values, registers, operators, processes, objects—are all of the same kind of thing; they should all be processes."

The actor model has had extensive influence on commercial practice. For example, Twitter has used actors for scalability.[26] Also, Microsoft has used the actor model in the development of its Asynchronous Agents Library.[27] There are multiple other actor libraries listed in the actor libraries and frameworks section below.

According to Hewitt [2006], the actor model addresses issues in computer and communications architecture, concurrent programming languages, and Web services. Many of the ideas introduced in the actor model are now also finding application in multi-agent systems for these same reasons [Hewitt 2006b 2007b].
The key difference is that agent systems (in most definitions) impose extra constraints upon the actors, typically requiring that they make use of commitments and goals. A number of different programming languages employ the actor model or some variation of it. Actor libraries or frameworks have also been implemented to permit actor-style programming in languages that don't have actors built in.
https://en.wikipedia.org/wiki/Actor_model