| text | source |
|---|---|
A catch-22 is a paradoxical situation from which an individual cannot escape because of contradictory rules or limitations.[1] The term was first used by Joseph Heller in his 1961 novel Catch-22.
Catch-22s often result from rules, regulations, or procedures that an individual is subject to, but has no control over, because to fight the rule is to accept it. Another example is a situation in which someone is in need of something that can only be had by not being in need of it (e.g. the only way to qualify for a loan is to prove to the bank that you do not need a loan). One connotation of the term is that the creators of the "catch-22" situation have created arbitrary rules in order to justify and conceal their own abuse of power.
Joseph Heller coined the term in his 1961 novel Catch-22, which describes absurd bureaucratic constraints on soldiers in World War II. The term is introduced by the character Doc Daneeka, an army surgeon who invokes "Catch-22" to explain why any pilot requesting a mental evaluation for insanity—hoping to be found not sane enough to fly and thereby escape dangerous missions—demonstrates his own sanity in making the request and thus cannot be declared insane. The phrase also means a dilemma or difficult circumstance from which there is no escape because of mutually conflicting or dependent conditions.[2]
"You mean there's a catch?"
"Sure there's a catch,"Doc Daneekareplied. "Catch-22. Anyone who wants to get out of combat duty isn't really crazy."
There was only one catch and that was Catch-22, which specified that a concern for one's own safety in the face of dangers that were real and immediate was the process of a rational mind. Orr was crazy and could be grounded. All he had to do was ask; and as soon as he did, he would no longer be crazy and would have to fly more missions. Orr would be crazy to fly more missions and sane if he didn't, but if he was sane, he had to fly them. If he flew them, he was crazy and didn't have to; but if he didn't want to, he was sane and had to. Yossarian was moved very deeply by the absolute simplicity of this clause of Catch-22 and let out a respectful whistle.
Different formulations of "Catch-22" appear throughout the novel. The term is applied to various loopholes and quirks of the military system, always with the implication that rules are inaccessible to and slanted against those lower in the hierarchy. In chapter 6, Yossarian (the protagonist) is told that Catch-22 requires him to do anything his commanding officer tells him to do, regardless of whether these orders contradict orders from the officer's superiors.[3]
In a final episode, Catch-22 is described to Yossarian by an old woman recounting an act of violence by soldiers:[4][5]
"Catch-22 says they have a right to do anything we can't stop them from doing."
"What the hell are you talking about?" Yossarian shouted at her in bewildered, furious protest. "How did you know it was Catch-22? Who the hell told you it was Catch-22?"
"The soldiers with the hard white hats and clubs. The girls were crying. 'Did we do anything wrong?' they said. The men said no and pushed them away out the door with the ends of their clubs. 'Then why are you chasing us out?' the girls said. 'Catch-22,' the men said. All they kept saying was 'Catch-22, Catch-22.' What does it mean, Catch-22? What is Catch-22?"
"Didn't they show it to you?" Yossarian demanded, stamping about in anger and distress. "Didn't you even make them read it?"
"They don't have to show us Catch-22," the old woman answered. "The law says they don't have to."
"What law says they don't have to?"
"Catch-22."
According to literature professor Ian Gregson, the old woman's narrative defines "Catch-22" more directly as the "brutal operation of power", stripping away the "bogus sophistication" of the earlier scenarios.[6]
Besides referring to an unsolvable logical dilemma, Catch-22 is invoked to explain or justify the military bureaucracy. For example, in the first chapter, it requires Yossarian to sign his name to letters he censors while he is confined to a hospital bed. One clause mentioned in chapter 10 closes a loophole in promotions, which one private had been exploiting to reattain the attractive rank of private first class after any promotion. Through courts-martial for going AWOL, he would be busted in rank back to private, but Catch-22 limited the number of times he could do this before being sent to the stockade.
At another point in the book, a prostitute explains to Yossarian that she cannot marry him because he is crazy, and she will never marry a crazy man. She considers any man crazy who would marry a woman who is not a virgin. This closed loop of logic illustrates Catch-22: by her reasoning, all men who refuse to marry her are sane, and thus she would consider marrying them; but as soon as a man agrees to marry her, he becomes crazy for wanting to marry a non-virgin and is instantly rejected.
At one point, Captain Black attempts to press Milo into depriving Major Major of food as a consequence of not signing a loyalty oath that Major Major was never given an opportunity to sign in the first place. Captain Black asks Milo, "You're not against Catch-22, are you?"
In chapter 40, Catch-22 forces Colonels Korn and Cathcart to promote Yossarian to Major and ground him rather than simply sending him home. They fear that if they do not, others will refuse to fly, just as Yossarian did.
Heller originally wanted to call the phrase (and hence, the book) by other numbers, but he and his publishers eventually settled on 22. The number has no particular significance; it was chosen more or less for euphony. The title was originally Catch-18, but Heller changed it after the popular Mila 18 was published a short time beforehand.[7][8]
The term "catch-22" has filtered into common usage in the English language. In a 1975 interview, Heller said the term would not translate well into other languages.[8]
James E. Combs and Dan D. Nimmo suggest that the idea of a "catch-22" has gained popular currency because so many people in modern society are exposed to frustrating bureaucratic logic. They write of the rules of high schools and colleges that:
This bogus democracy that can be overruled by arbitrary fiat is perhaps a citizen's first encounter with organizations that may profess 'open' and libertarian values, but in fact are closed and hierarchical systems. Catch-22 is an organizational assumption, an unwritten law of informal power that exempts the organization from responsibility and accountability, and puts the individual in the absurd position of being excepted for the convenience or unknown purposes of the organization.[5]
Along with George Orwell's "doublethink", "catch-22" has become one of the best-recognized ways to describe the predicament of being trapped by contradictory rules.[9]
A significant type of definition of alternative medicine has been termed a catch-22. A 1998 editorial co-authored by Marcia Angell, a former editor of the New England Journal of Medicine, argued that:
It is time for the scientific community to stop giving alternative medicine a free ride. There cannot be two kinds of medicine—conventional and alternative. There is only medicine that has been adequately tested and medicine that has not, medicine that works and medicine that may or may not work. Once a treatment has been tested rigorously, it no longer matters whether it was considered alternative at the outset. If it is found to be reasonably safe and effective, it will be accepted. But assertions, speculation, and testimonials do not substitute for evidence. Alternative treatments should be subjected to scientific testing no less rigorous than that required for conventional treatments.[10]
This definition has been described by Robert L. Park as a logical catch-22 which ensures that any complementary and alternative medicine (CAM) method which is proven to work "would no longer be CAM, it would simply be medicine."[11]
U.S. Circuit Judge Don Willett referred to qualified immunity, which requires a violation of constitutional rights to have been previously established in order for a victim to claim damages, as a catch-22: "Section 1983 meets Catch-22. Important constitutional questions go unanswered precisely because those questions are yet unanswered. Courts then rely on that judicial silence to conclude there's no equivalent case on the books. No precedent = no clearly established law = no liability. An Escherian Stairwell. Heads government wins, tails plaintiff loses."[12][13]
The archetypal catch-22, as formulated by Joseph Heller, involves the case of John Yossarian, a U.S. Army Air Forces bombardier, who wishes to be grounded from combat flight. This will only happen if he is evaluated by the squadron's flight surgeon and found "unfit to fly". "Unfit" would be any pilot who is willing to fly such dangerous missions, as one would have to be mad to volunteer for possible death. However, to be evaluated, he must request the evaluation, an act that is considered sufficient proof of being sane. These conditions make it impossible to be declared "unfit".
The "Catch-22" is that "anyone who wants to get out of combat duty isn't really crazy".[14]Hence, pilots who request a mental fitness evaluationaresane, and therefore must fly in combat. At the same time, if an evaluation is not requested by the pilot, he will never receive one and thus can never be found insane, meaning he must also fly in combat.
Therefore, Catch-22 ensures that no pilot can ever be grounded for being insane even if he is.
A logical formulation of this situation is:
The philosopher Laurence Goldstein argues that the "airman's dilemma" is logically not even a condition that is true under no circumstances; it is a "vacuous biconditional" that is ultimately meaningless. Goldstein writes:[15]
The catch is this: what looks like a statement of the conditions under which an airman can be excused flying dangerous missions reduces not to the statement
(which could be a mean way of disguising an unpleasant truth), but to the worthlessly empty announcement
If the catch were (i), that would not be so bad—an airman would at least be able to discover that under no circumstances could he avoid combat duty. But Catch-22 is worse—a welter of words that amounts to nothing; it is without content, it conveys no information at all.
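The self-defeating structure of the clause can also be checked mechanically. The following sketch is not from the source; the predicate names and the encoding are illustrative assumptions. It encodes the rule as described in the novel (grounding requires being crazy and asking to be grounded, but asking demonstrates sanity) and verifies that no combination of circumstances ever permits grounding:

```python
from itertools import product

def catch_22_allows_grounding(crazy: bool, asks: bool) -> bool:
    # Grounding requires being crazy AND asking to be grounded...
    # ...but asking demonstrates sanity, so an asker is not crazy.
    effectively_crazy = crazy and not asks
    return effectively_crazy and asks

# Check every combination: grounding is never permitted.
for crazy, asks in product([False, True], repeat=2):
    assert not catch_22_allows_grounding(crazy, asks)
print("No assignment permits grounding")
```

This matches statement (i) in Goldstein's analysis: as a condition, the catch reduces to "excused under no circumstances."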
|
https://en.wikipedia.org/wiki/Catch-22_(logic)
|
The sample mean (sample average) or empirical mean (empirical average), and the sample covariance or empirical covariance, are statistics computed from a sample of data on one or more random variables.
The sample mean is the average value (or mean value) of a sample of numbers taken from a larger population of numbers, where "population" indicates not number of people but the entirety of relevant data, whether collected or not. A sample of 40 companies' sales from the Fortune 500 might be used for convenience instead of looking at the population, all 500 companies' sales. The sample mean is used as an estimator for the population mean, the average value in the entire population, where the estimate is more likely to be close to the population mean if the sample is large and representative. The reliability of the sample mean is estimated using the standard error, which in turn is calculated using the variance of the sample. If the sample is random, the standard error falls with the size of the sample and the sample mean's distribution approaches the normal distribution as the sample size increases.
The term "sample mean" can also be used to refer to avectorof average values when the statistician is looking at the values of several variables in the sample, e.g. the sales, profits, and employees of a sample of Fortune 500 companies. In this case, there is not just a sample variance for each variable but a samplevariance-covariance matrix(or simplycovariance matrix) showing also the relationship between each pair of variables. This would be a 3×3 matrix when 3 variables are being considered. The sample covariance is useful in judging the reliability of the sample means as estimators and is also useful as an estimate of the population covariance matrix.
Due to their ease of calculation and other desirable characteristics, the sample mean and sample covariance are widely used in statistics to represent the location and dispersion of the distribution of values in the sample, and to estimate the values for the population.
The sample mean is the average of the values of a variable in a sample, which is the sum of those values divided by the number of values. Using mathematical notation, if a sample of N observations on variable X is taken from the population, the sample mean is:
x̄ = (x_1 + x_2 + ⋯ + x_N)/N = (1/N) ∑_{i=1}^{N} x_i.
Under this definition, if the sample (1, 4, 1) is taken from the population (1, 1, 3, 4, 0, 2, 1, 0), then the sample mean is x̄ = (1 + 4 + 1)/3 = 2, as compared to the population mean of μ = (1 + 1 + 3 + 4 + 0 + 2 + 1 + 0)/8 = 12/8 = 1.5. Even if a sample is random, it is rarely perfectly representative, and other samples would have other sample means even if the samples were all from the same population. The sample (2, 1, 0), for example, would have a sample mean of 1.
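The definition above can be sketched directly in code, a minimal illustration reusing the sample and population from the text:

```python
def sample_mean(values):
    # Sum of the values divided by the number of values.
    return sum(values) / len(values)

population = [1, 1, 3, 4, 0, 2, 1, 0]

print(sample_mean([1, 4, 1]))    # → 2.0
print(sample_mean(population))   # population mean μ → 1.5
print(sample_mean([2, 1, 0]))    # a different sample → 1.0
```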
If the statistician is interested in K variables rather than one, each observation having a value for each of those K variables, the overall sample mean consists of K sample means for individual variables. Let x_ij be the i-th independently drawn observation (i = 1, ..., N) on the j-th random variable (j = 1, ..., K). These observations can be arranged into N column vectors, each with K entries, with the K×1 column vector giving the i-th observations of all variables being denoted x_i (i = 1, ..., N).
The sample mean vector x̄ is a column vector whose j-th element x̄_j is the average value of the N observations of the j-th variable:
x̄_j = (1/N) ∑_{i=1}^{N} x_ij,   j = 1, ..., K.
Thus, the sample mean vector contains the average of the observations for each variable, and is written
x̄ = (1/N) ∑_{i=1}^{N} x_i.
The sample covariance matrix is a K-by-K matrix Q = [q_jk] with entries
q_jk = (1/(N − 1)) ∑_{i=1}^{N} (x_ij − x̄_j)(x_ik − x̄_k),
where q_jk is an estimate of the covariance between the j-th variable and the k-th variable of the population underlying the data. In terms of the observation vectors, the sample covariance is
Q = (1/(N − 1)) ∑_{i=1}^{N} (x_i − x̄)(x_i − x̄)ᵀ.
Alternatively, arranging the observation vectors as the columns of a matrix, so that
F = [x_1  x_2  …  x_N],
which is a matrix of K rows and N columns. Here, the sample covariance matrix can be computed as
Q = (1/(N − 1)) (F − x̄ 1_Nᵀ)(F − x̄ 1_Nᵀ)ᵀ,
where 1_N is an N × 1 vector of ones.
If the observations are arranged as rows instead of columns, so x̄ is now a 1×K row vector and M = Fᵀ is an N×K matrix whose column j is the vector of N observations on variable j, then applying transposes in the appropriate places yields
Q = (1/(N − 1)) (M − 1_N x̄)ᵀ(M − 1_N x̄).
Like covariance matrices for random vectors, sample covariance matrices are positive semi-definite. To prove this, note that for any matrix A the matrix AᵀA is positive semi-definite. Furthermore, a covariance matrix is positive definite if and only if the rank of the x_i − x̄ vectors is K.
The sample mean and the sample covariance matrix are unbiased estimates of the mean and the covariance matrix of the random vector X, a row vector whose j-th element (j = 1, ..., K) is one of the random variables.[1] The sample covariance matrix has N − 1 in the denominator rather than N due to a variant of Bessel's correction: in short, the sample covariance relies on the difference between each observation and the sample mean, but the sample mean is slightly correlated with each observation since it is defined in terms of all observations. If the population mean E(X) is known, the analogous unbiased estimate
using the population mean, has N in the denominator. This is an example of why in probability and statistics it is essential to distinguish between random variables (upper case letters) and realizations of the random variables (lower case letters).
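The definitions above can be sketched in pure Python. This is an illustration only (the data values are made up); it computes the sample mean vector and the unbiased sample covariance matrix with N − 1 in the denominator:

```python
def sample_mean_vector(obs):
    # obs: N observations, each a list of K values.
    n, k = len(obs), len(obs[0])
    return [sum(x[j] for x in obs) / n for j in range(k)]

def sample_covariance(obs):
    # Unbiased estimate: N - 1 in the denominator (Bessel's correction).
    n, k = len(obs), len(obs[0])
    mean = sample_mean_vector(obs)
    return [[sum((x[j] - mean[j]) * (x[l] - mean[l]) for x in obs) / (n - 1)
             for l in range(k)] for j in range(k)]

obs = [[1.0, 2.0], [2.0, 1.0], [3.0, 3.0], [4.0, 2.0]]
q = sample_covariance(obs)   # 2x2 symmetric matrix
```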
Themaximum likelihoodestimate of the covariance
for the Gaussian distribution case has N in the denominator as well. The ratio of 1/N to 1/(N − 1) approaches 1 for large N, so the maximum likelihood estimate approximately equals the unbiased estimate when the sample is large.
For each random variable, the sample mean is a good estimator of the population mean, where a "good" estimator is defined as being efficient and unbiased. Of course the estimator will likely not be the true value of the population mean since different samples drawn from the same distribution will give different sample means and hence different estimates of the true mean. Thus the sample mean is a random variable, not a constant, and consequently has its own distribution. For a random sample of N observations on the j-th random variable, the sample mean's distribution itself has mean equal to the population mean E(X_j) and variance equal to σ_j²/N, where σ_j² is the population variance.
The arithmetic mean of a population, or population mean, is often denoted μ.[2] The sample mean x̄ (the arithmetic mean of a sample of values drawn from the population) makes a good estimator of the population mean, as its expected value is equal to the population mean (that is, it is an unbiased estimator). The sample mean is a random variable, not a constant, since its calculated value will randomly differ depending on which members of the population are sampled, and consequently it will have its own distribution. For a random sample of n independent observations, the expected value of the sample mean is
E(x̄) = μ,
and the variance of the sample mean is
Var(x̄) = σ²/n.
If the samples are not independent, butcorrelated, then special care has to be taken in order to avoid the problem ofpseudoreplication.
If the population is normally distributed, then the sample mean is normally distributed as follows:
x̄ ~ N(μ, σ²/n).
If the population is not normally distributed, the sample mean is nonetheless approximately normally distributed if n is large and σ²/n < +∞. This is a consequence of the central limit theorem.
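The unbiasedness and variance of the sample mean can be verified exhaustively for the small population used earlier. This is a toy demonstration that enumerates every size-2 sample drawn with replacement; the choice of n = 2 is arbitrary:

```python
from itertools import product

population = [1, 1, 3, 4, 0, 2, 1, 0]
N = len(population)
mu = sum(population) / N                              # 1.5
sigma2 = sum((x - mu) ** 2 for x in population) / N   # population variance

n = 2
# Every possible sample of size n drawn with replacement.
means = [sum(s) / n for s in product(population, repeat=n)]

# The mean of all sample means equals mu (unbiasedness)...
avg_of_means = sum(means) / len(means)
# ...and their variance equals sigma2 / n.
var_of_means = sum((m - avg_of_means) ** 2 for m in means) / len(means)
```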
In a weighted sample, each vector x_i (each set of single observations on each of the K random variables) is assigned a weight w_i ≥ 0. Without loss of generality, assume that the weights are normalized:
∑_{i=1}^{N} w_i = 1.
(If they are not, divide the weights by their sum.)
Then the weighted mean vector x̄ is given by
x̄ = ∑_{i=1}^{N} w_i x_i,
and the elements q_jk of the weighted covariance matrix Q are[3]
If all weights are the same, w_i = 1/N, the weighted mean and covariance reduce to the (biased) sample mean and covariance mentioned above.
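A short sketch of the weighted mean vector (the data values are illustrative, and the weights are assumed already normalized to sum to 1):

```python
def weighted_mean_vector(obs, weights):
    # obs: N observations of K values; weights: N normalized weights.
    k = len(obs[0])
    return [sum(w * x[j] for w, x in zip(weights, obs)) for j in range(k)]

obs = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]

# With equal weights w_i = 1/N this reduces to the ordinary sample mean.
print(weighted_mean_vector(obs, [1 / 3] * 3))

# Unequal weights pull the mean toward the heavily weighted observations.
print(weighted_mean_vector(obs, [0.5, 0.25, 0.25]))   # → [2.5, 3.5]
```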
The sample mean and sample covariance are not robust statistics, meaning that they are sensitive to outliers. As robustness is often a desired trait, particularly in real-world applications, robust alternatives may prove desirable, notably quantile-based statistics such as the sample median for location[4] and the interquartile range (IQR) for dispersion. Other alternatives include trimming and Winsorising, as in the trimmed mean and the Winsorized mean.
|
https://en.wikipedia.org/wiki/Sample_mean
|
A trusted execution environment (TEE) is a secure area of a main processor. It helps the code and data loaded inside it be protected with respect to confidentiality and integrity. Data confidentiality prevents unauthorized entities from outside the TEE from reading data, while code integrity prevents code in the TEE from being replaced or modified by unauthorized entities, which may include the computer's owner, as in certain DRM schemes such as Intel SGX.
This is done by implementing unique, immutable, and confidential architectural security, which offers hardware-based memory encryption that isolates specific application code and data in memory. This allows user-level code to allocate private regions of memory, called enclaves, which are designed to be protected from processes running at higher privilege levels.[1][2][3] A TEE as an isolated execution environment provides security features such as isolated execution, integrity of applications executing within the TEE, and confidentiality of their assets. In general terms, the TEE offers an execution space that provides a higher level of security for trusted applications running on the device than a rich operating system (OS) and more functionality than a 'secure element' (SE).
The Open Mobile Terminal Platform (OMTP) first defined the TEE in its "Advanced Trusted Environment: OMTP TR1" standard, defining it as a "set of hardware and software components providing facilities necessary to support applications" that had to meet the requirements of one of two defined security levels. The first security level, Profile 1, targeted only software attacks, while Profile 2 targeted both software and hardware attacks.[4]
Commercial TEE solutions based on ARM TrustZone technology, conforming to the TR1 standard, were later launched, such as Trusted Foundations, developed by Trusted Logic.[5]
Work on the OMTP standards ended in mid-2010 when the group transitioned into the Wholesale Applications Community (WAC).[6]
The OMTP standards, including those defining a TEE, are hosted by GSMA.[7]
The TEE typically consists of a hardware isolation mechanism plus a secure operating system running on top of that isolation mechanism, although the term has been used more generally to mean a protected solution.[8][9][10][11] Whilst a GlobalPlatform TEE requires hardware isolation, others, such as EMVCo, use the term TEE to refer to both hardware and software-based solutions.[12] FIDO uses the concept of TEE in the restricted operating environment for TEEs based on hardware isolation.[13] Only trusted applications running in a TEE have access to the full power of a device's main processor, peripherals, and memory, while hardware isolation protects these from user-installed apps running in a main operating system. Software and cryptographic isolation inside the TEE protect the trusted applications contained within from each other.[14]
Service providers, mobile network operators (MNO), operating system developers, application developers, device manufacturers, platform providers, and silicon vendors are the main stakeholders contributing to the standardization efforts around the TEE.
To prevent the simulation of hardware with user-controlled software, a so-called "hardware root of trust" is used. This is a set of private keys embedded directly into the chip during manufacturing; one-time programmable memory such as eFuses is usually used on mobile devices. These keys cannot be changed, even after a device reset. Their public counterparts reside in a manufacturer database, together with a non-secret hash of a public key belonging to the trusted party (usually a chip vendor), which is used to sign trusted firmware alongside the circuits doing cryptographic operations and controlling access.
The hardware is designed in a way which prevents all software not signed by the trusted party's key from accessing the privileged features. The public key of the vendor is provided at runtime and hashed; this hash is then compared to the one embedded in the chip. If the hash matches, the public key is used to verify a digital signature of trusted vendor-controlled firmware (such as a chain of bootloaders on Android devices or 'architectural enclaves' in SGX). The trusted firmware is then used to implement remote attestation.[15]
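The first step of that check, hashing the runtime-supplied public key and comparing it to the hash embedded in the chip, can be sketched as follows. This is a simplified illustration, not a real implementation: the key bytes are made up, SHA-256 is an assumed choice of hash, and the subsequent firmware signature verification (which needs a real cryptographic library) is omitted:

```python
import hashlib

# Hypothetical stand-ins for values fixed at manufacturing time.
vendor_pubkey = b"example-vendor-public-key"
embedded_hash = hashlib.sha256(vendor_pubkey).digest()  # burned into e-fuses

def accept_pubkey(runtime_pubkey: bytes) -> bool:
    # Hash the key provided at runtime and compare it to the embedded hash;
    # only a matching key may then be used to verify firmware signatures.
    return hashlib.sha256(runtime_pubkey).digest() == embedded_hash

assert accept_pubkey(vendor_pubkey)        # genuine vendor key is accepted
assert not accept_pubkey(b"attacker-key")  # any other key is rejected
```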
When an application is attested, its untrusted component loads the trusted component into memory; the trusted application is protected from modification by untrusted components by the hardware. A nonce is requested from the verifier's server by the untrusted party and is used as part of a cryptographic authentication protocol that proves the integrity of the trusted application. The proof is passed to the verifier, which verifies it. A valid proof cannot be computed in simulated hardware (e.g. QEMU), because constructing it requires access to the keys baked into the hardware; only trusted firmware has access to these keys and/or the keys derived from them or obtained using them. Because only the platform owner is meant to have access to the data recorded in the foundry, the verifying party must interact with the service set up by the vendor. If the scheme is implemented improperly, the chip vendor can track which applications are used on which chip and selectively deny service by returning a message indicating that authentication has not passed.[16]
To simulate hardware in a way which enables it to pass remote authentication, an attacker would have to extract keys from the hardware, which is costly because of the equipment and technical skill required. For example, using focused ion beams, scanning electron microscopes, microprobing, and chip decapsulation[17][18][19][20][21][22] is difficult, or even impossible, if the hardware is designed in such a way that reverse-engineering destroys the keys. In most cases, the keys are unique for each piece of hardware, so that a key extracted from one chip cannot be used by others (for example with physically unclonable functions[23][24]).
Though deprivation of ownership is not an inherent property of TEEs (it is possible to design the system in a way that allows only the user who first obtained ownership of the device to control the system, by burning a hash of their own key into e-fuses), in practice all such systems in consumer electronics are intentionally designed to allow chip manufacturers to control access to attestation and its algorithms. This allows manufacturers to grant access to TEEs only to software developers who have a (usually commercial) business agreement with the manufacturer, monetizing the user base of the hardware; to enable such use cases as tivoization and DRM; and to allow certain hardware features to be used only with vendor-supplied software, forcing users to use it despite its antifeatures, like ads, tracking, and use-case restriction for market segmentation.
There are a number of use cases for the TEE. Though not all possible use cases exploit the deprivation of ownership, in practice the TEE is often used exactly for this.
Note: Much TEE literature covers this topic under the definition "premium content protection," which is the preferred nomenclature of many copyright holders. Premium content protection is a specific use case of digital rights management (DRM) and is controversial among some communities, such as the Free Software Foundation.[25] It is widely used by copyright holders to restrict the ways in which end users can consume content such as 4K high-definition films.
The TEE is a suitable environment for protecting digitally encoded information (for example, HD films or audio) on connected devices such as smartphones, tablets, and HD televisions. This suitability comes from the ability of the TEE to deprive the owner of the device of access to stored secrets, and from the fact that there is often a protected hardware path between the TEE and the display and/or subsystems on devices.
The TEE is used to protect the content once it is on the device. While the content is protected during transmission or streaming by the use of encryption, the TEE protects the content once it has been decrypted on the device by ensuring that decrypted content is not exposed to any environment not approved by the app developer or platform vendor.
Mobile commerce applications such as mobile wallets, peer-to-peer payments, contactless payments, or using a mobile device as a point of sale (POS) terminal often have well-defined security requirements. TEEs can be used, often in conjunction with near-field communication (NFC), SEs, and trusted backend systems, to provide the security required to enable financial transactions to take place.
In some scenarios, interaction with the end user is required, and this may require the user to expose sensitive information such as a PIN, password, or biometric identifier to the mobile OS as a means of authenticating the user. The TEE optionally offers a trusted user interface which can be used to construct user authentication on a mobile device.
With the rise of cryptocurrency, TEEs are increasingly used to implement crypto-wallets, as they offer the ability to store tokens more securely than regular operating systems, and can provide the necessary computation and authentication applications.[26]
The TEE is well-suited for supporting biometric identification methods (facial recognition, fingerprint sensor, and voice authorization), which may be easier to use and harder to steal than PINs and passwords. The authentication process is generally split into three main stages:
A TEE is a good area within a mobile device to house the matching engine and the associated processing required to authenticate the user. The environment is designed to protect the data and establish a buffer against the non-secure apps located in mobile OSes. This additional security may help to satisfy the security needs of service providers in addition to keeping the costs low for handset developers.
The TEE can be used by governments, enterprises, and cloud service providers to enable the secure handling of confidential information on mobile devices and on server infrastructure. The TEE offers a level of protection against software attacks generated in the mobile OS and assists in the control of access rights. It achieves this by housing sensitive, 'trusted' applications that need to be isolated and protected from the mobile OS and any malware that may be present. Through utilizing the functionality and security levels offered by the TEE, governments and enterprises can be assured that employees using their own devices are doing so in a secure and trusted manner. Likewise, server-based TEEs help defend against internal and external attacks against backend infrastructure.
With the rise of software assets and reuse, modular programming is a productive way to design software architecture, decoupling functionality into small, independent modules. As each module contains everything necessary to execute its desired functionality, the TEE allows organizing the complete system with a high level of reliability and security, while shielding each module from the vulnerabilities of the others.
For the modules to communicate and share data, the TEE provides means to securely send and receive payloads between modules, using mechanisms such as object serialization in conjunction with proxies.
See Component-based software engineering.
The following hardware technologies can be used to support TEE implementations:
|
https://en.wikipedia.org/wiki/Secure_Enclave
|
The Nano[1] microprocessor from VIA Technologies is an eighth-generation CPU targeted at the consumer and embedded market.
|
https://en.wikipedia.org/wiki/List_of_VIA_Nano_microprocessors
|
A human microchip implant is any electronic device implanted subcutaneously (subdermally), usually via an injection. Examples include an identifying integrated circuit RFID device encased in silicate glass which is implanted in the body of a human being. This type of subdermal implant usually contains a unique ID number that can be linked to information contained in an external database, such as identity documents, criminal record, medical history, medications, address book, and other potential uses.
Several hobbyists, scientists and business personalities have placed RFID microchip implants into their hands or had them inserted by others.
For microchip implants that are encapsulated in silicate glass, there exist multiple methods to embed the device subcutaneously, ranging from placing the implant in a syringe or trocar[39] and piercing under the flesh (subdermally) before releasing it, to using a cutting tool such as a surgical scalpel to open the skin and positioning the implant in the wound.
Popular uses for microchip implants include the following:
Other uses, either cosmetic or medical, may also include:
RFID implants using NFC technologies have been used as access cards for everything from car door entry to building access.[41] Implants with a secure element or related technologies have also been used to store or assert a user's identity.
Researchers have examined microchip implants in humans in the medical field, and they indicate that there are potential benefits and risks to incorporating the devices. For example, an implant could be beneficial for noncompliant patients but still poses great risks for potential misuse.[45]
Destron Fearing, a subsidiary of Digital Angel, initially developed the technology for the VeriChip.[46]
In 2004, the VeriChip implanted device and reader were classified as Class II: General controls with special controls by the FDA;[47] that year the FDA also published a draft guidance describing the special controls required to market such devices.[48]
About the size of a grain of rice, the device was typically implanted between the shoulder and elbow area of an individual's right arm. Once scanned at the proper frequency, the chip responded with a unique 16-digit number which could then be linked with information about the user held on a database for identity verification, medical records access and other uses. The insertion procedure was performed under local anesthetic in a physician's office.[49][50]
Privacy advocates raised concerns regarding potential abuse of the chip, with some warning that adoption by governments as a compulsory identification program could lead to erosion of civil liberties, as well as identity theft if the device should be hacked.[50][51][52] Another ethical dilemma posed by the technology is that people with dementia could possibly benefit the most from an implanted device containing their medical records, yet issues of informed consent are the most difficult in precisely such people.[53]
In June 2007, the American Medical Association declared that "implantable radio frequency identification (RFID) devices may help to identify patients, thereby improving the safety and efficiency of patient care, and may be used to enable secure access to patient clinical information",[54] but in the same year, news reports linked similar devices to cancer in laboratory animals.[55]
In 2010, the company, by then called PositiveID, withdrew the product from the market due to poor sales.[56]
In January 2012, PositiveID sold the chip assets to a company called VeriTeQ that was owned by Scott Silverman, the former CEO of Positive ID.[57]
In 2016, JAMM Technologies acquired the chip assets from VeriTeQ; JAMM's business plan was to partner with companies sellingimplanted medical devicesand use the RFID tags to monitor and identify the devices.[58]JAMM Technologies is co-located in the samePlymouth, Minnesotabuilding as Geissler Corporation with Randolph K. Geissler and Donald R. Brattain[59][60]listed as its principals.
The website also claims that Geissler was CEO of PositiveID Corporation, Destron Fearing Corporation, and Digital Angel Corporation.[61]
In 2018, a Danish firm called BiChip released a new generation of microchip implant[62] intended to be readable from a distance and connected to the Internet. The company released an update for its microchip implant to associate it with the Ripple cryptocurrency, allowing payments to be made using the implanted microchip.[63]
People receive NFC implants for a variety of reasons, ranging from biomedical diagnostics and health monitoring to gaining new senses,[64] biological enhancement, participation in existing growing movements, workplace purposes, security, hobbyist interest, and scientific endeavour.[65]
In 2020, a London-based firm called Impli released a microchip implant that is intended to be used with an accompanying smartphone app. The primary functionality of the implant is storage of medical records. The implant can be scanned by any smartphone that has NFC capabilities.[66]
In February 2006, CityWatcher, Inc. of Cincinnati, OH became the first company in the world to implant microchips into their employees as part of their building access control and security system. The workers needed the implants to access the company's secure video tape room, as documented in USA Today.[67] The project was initiated and implemented by Six Sigma Security, Inc. The VeriChip Corporation had originally marketed the implant as a way to restrict access to secure facilities such as power plants.
A major drawback for such systems is the relative ease with which the 16-digit ID number contained in a chip implant can be obtained and cloned using a hand-held device, a problem that has been demonstrated publicly by security researcher Jonathan Westhues[68] and documented in the May 2006 issue of Wired magazine,[69] among other places.
In 2017, Mike Miller, chief executive of the World Olympians Association, was widely reported as suggesting the use of such implants in athletes in an attempt to reduce problems in sports due to recreational drug use.[72]
Theoretically, a GPS-enabled chip could one day make it possible for individuals to be physically located by latitude, longitude, altitude, and velocity.[citation needed] Such implantable GPS devices are not technically feasible at this time. However, if widely deployed at some future point, implantable GPS devices could conceivably allow authorities to locate missing people, fugitives, or those who fled a crime scene. Critics contend that the technology could lead to political repression as governments could use implants to track and persecute human rights activists, labor activists, civil dissidents, and political opponents; criminals and domestic abusers could use them to stalk, harass, and/or abduct their victims.
Another suggested application for a tracking implant, discussed in 2008 by the legislature of Indonesia's Irian Jaya, would be to monitor the activities of people infected with HIV, aimed at reducing their chances of infecting other people.[73][74] The microchipping section was not, however, included in the final version of the provincial HIV/AIDS Handling bylaw passed by the legislature in December 2008.[75] With current technology, this would not be workable anyway, since there is no implantable device on the market with GPS tracking capability.
Some have theorized[who?] that governments could use implants for:
Infection has been cited as a source of failure in RFID and related microchip-implanted individuals, whether due to improper implantation techniques, implant rejection, or corrosion of implant elements.[76]
Some chipped individuals have reported being turned away from MRIs due to the presence of magnets in their body.[77] No conclusive investigation has been done on the risks of each type of implant near MRIs, other than anecdotal reports ranging from no problems, to requiring hand shielding before proximity, to being denied the MRI.[failed verification – see discussion]
Other medical imaging technologies likeX-rayandCT scannersdo not pose a similar risk. Rather, X-rays can be used to locate implants.
Electronics-based implants contain little material that can corrode. Magnetic implants, however, often contain a substantial amount of metallic elements by volume, and iron, a common implant element, is easily corroded by common agents such as oxygen and water. Implant corrosion occurs either when corrosive elements become trapped inside during the encapsulation process, causing a slow corrosive effect, or when the encapsulation fails and allows corrosive elements to come into contact with the magnet. Catastrophic encapsulation failures are usually obvious, resulting in tenderness, discoloration of the skin, and a slight inflammatory response. Small failures, however, can take much longer to become obvious, resulting in a slow degradation of field strength without many external signs that something is slowly going wrong with the magnet.[78]
In a self-published report,[79] anti-RFID advocate Katherine Albrecht, who refers to RFID devices as "spy chips", cites veterinary and toxicological studies carried out from 1996 to 2006 which found that lab rodents injected with microchips as an incidental part of unrelated experiments and dogs implanted with identification microchips sometimes developed cancerous tumors at the injection site (subcutaneous sarcomas), as evidence of a human implantation risk.[80] However, the link between foreign-body tumorigenesis in lab animals and implantation in humans has been publicly refuted as erroneous and misleading,[81] and the report's author has been criticized[by whom?] over the use of "provocative" language "not based in scientific fact".[82] Notably, none of the studies cited specifically set out to investigate the cancer risk of implanted microchips, and so none of the studies had a control group of animals that did not get implanted. While the issue is considered worthy of further investigation, one of the studies cited cautioned "Blind leaps from the detection of tumors to the prediction of human health risk should be avoided".[83][84][85]
The Council on Ethical and Judicial Affairs (CEJA) of the American Medical Association published a report in 2007 alleging that implanted RFID chips may compromise privacy because there is no assurance that the information contained in or linked to the chip can be properly protected.[86]
Identity theft and privacy have been major concerns, with microchip implants being cloned for various nefarious purposes in a process known as wireless identity theft. Incidents of forced removal of animal implants have been documented,[87] and the concern is whether the same practice will be used to attack implanted patients. Due to the low adoption of microchip implants, such physical attacks are rare. Nefarious RFID reprogramming of unprotected or unencrypted microchip tags is also a major security consideration.
There is concern that the technology can be abused.[88] Opponents have stated that such invasive technology has the potential to be used by governments to create an 'Orwellian' digital dystopia, and theorized that in such a world self-determination, the ability to think freely, and all personal autonomy could be completely lost.[89][90][91]
In 2019, Elon Musk announced that a company he had founded which deals with microchip implant research, called Neuralink, would be able to "solve" autism and other "brain diseases".[92] This led to a number of critics calling out Musk for his statements, with Dan Robitzski of Neoscope saying, "while schizophrenia can be a debilitating mental condition, autism is more tightly linked to a sense of identity — and listing it as a disease to be solved as Musk did risks further stigmatizing a community pushing for better treatment and representation."[93] Hilary Brueck of Insider agreed, saying, "conditions like autism can't be neatly cataloged as things to "solve." Instead, they lead people to think differently". She went on to argue, though, that the technology shouldn't be discounted entirely, as it could potentially help people with a variety of disabilities such as blindness and quadriplegia.[94] Fellow Insider writer Isobel Asher Hamilton added, "it was not clear what Musk meant by saying Neuralink could "solve" autism, which is not a disease but a developmental disorder." She then cited the UK's National Autistic Society's website statement, which says, "Autism is not an illness or disease and cannot be 'cured.' Often people feel being autistic is a fundamental aspect of their identity."[95] Tristan Greene of The Next Web stated, in response to Musk, "there’s only one problem: autism isn’t a disease and it can’t be cured or solved. In fact, there’s some ethical debate in the medical community over whether autism, which is considered a disorder, should be treated as part of a person’s identity and not a ‘condition’ to be fixed... how freaking cool would it be to actually start your Tesla [electric vehicle] just by thinking? But, maybe...
just maybe, the billionaire with access to the world's brightest medical minds who, even after founding a medical startup, still incorrectly thinks that autism is a disease that can be solved or cured shouldn't be someone we trust to shove wires or chips into our brains."[96]
Some autistic people also spoke out against Musk's statement about using microchips to "solve" autism, with Nera Birch of The Mighty, an autistic writer, stating, "autism is a huge part of who I am. It pervades every aspect of my life. Sure, there are days where being neurotypical would make everything so much easier. But I wouldn’t trade my autism for the world. I have the unique ability to view the world and experience senses in a way that makes all the negatives of autism worth it. The fact you think I would want to be “cured” is like saying I would rather be nothing than be myself. People with neurodiversity are proud of ourselves. For many of us, we wear our autism as a badge of pride. We have a culture within ourselves. It is not something that needs to be erased. The person with autism is not the problem. Neurotypical people need to stop molding us into something they want to interact with."[97] Florence Grant, an autistic writer for The Independent, stated, "autistic people often have highly-focused interests, also known as special interests. I love my ability to hyperfocus and how passionate I get about things. I also notice small details and things that other people don’t see. I see the world differently, through a clear lens, and this means I can identify solutions where other people can’t. Does this sound familiar, Elon? My autism is a part of me, and it’s not something that can be separated from me. I should be able to exist freely autistic and proud. But for that to happen, the world needs to stop punishing difference and start embracing it." Grant noted that Musk himself had recently admitted that he had been diagnosed with Asperger's syndrome (itself an outdated diagnosis, the characteristics of which are currently recognized as part of the autism spectrum[98]) while hosting Saturday Night Live.[99]
Musk himself has not specified how Neuralink's microchip technology would "solve" autism, and has not commented publicly on the feedback from autistic people.
Despite a lack of evidence demonstrating invasive use or even technical capability of microchip implants, they have been the subject of many conspiracy theories.
The Southern Poverty Law Center reported in 2010 that on the Christian right there were concerns that implants could be the "mark of the beast", and amongst the Patriot movement there were fears that implants could be used to track people.[100] The same year NPR reported that a myth was circulating online that patients who signed up to receive treatment under the Affordable Care Act (Obamacare) would be implanted.[101]
In 2016, Snopes reported that being injected with microchips was a "perennial concern to the conspiracy-minded" and noted that a conspiracy theory was circulating in Australia at that time that the government was going to implant all of its citizens.[102]
A 2021 survey by YouGov found that 20% of Americans believed microchips were inside the COVID-19 vaccines.[103][104] A 2021 Facebook post by RT (Russia Today) claimed DARPA had developed a COVID-19-detecting microchip implant.[105][106]
A few jurisdictions have researched or preemptively passed laws regarding human implantation of microchips.
In the United States, many states such as Wisconsin (as of 2006), North Dakota (2007), California (2007), Oklahoma (2008), and Georgia (2010) have laws making it illegal to force a person to have a microchip implanted, though politicians acknowledge they are unaware of cases of such forced implantation.[107][108][109][110] In 2010, Virginia passed a bill forbidding companies from forcing employees to be implanted with tracking devices.[111]
In 2010, Washington's House of Representatives introduced a bill ordering the study of potential monitoring of sex offenders with implanted RFID or similar technology, but it did not pass.[112]
The general public are most familiar with microchips in the context of identifying pets.
Implanted individuals are often considered part of the transhumanism movement.
"Arkangel", an episode of the drama series Black Mirror, explored the potential for helicopter parenting through an imagined, more advanced microchip implant.
Microchip implants have been explored in cyberpunk media such as Ghost in the Shell, Cyberpunk 2077, and Deus Ex.
Some Christians make a link between implants and the Biblical Mark of the Beast,[113][114] prophesied to be a future requirement for buying and selling and a key element of the Book of Revelation.[115][116] Gary Wohlscheid, president of These Last Days Ministries, has argued that "Out of all the technologies with potential to be the mark of the beast, VeriChip has got the best possibility right now."[117]
|
https://en.wikipedia.org/wiki/PositiveID
|
Roger Frontenac (fl. 1950) was a French navy officer and a scholar of Nostradamus' prophecies. He proposed an interpretation system for the text of Les Prophéties, based upon a form of cryptography known as the Vigenère table.
Roger Frontenac, as a navy officer, was in charge of military ciphers. After World War II, he began to study the work of Nostradamus, treating it as any other message from an enemy. He searched for any hint about decoding methods. The name of Nostradamus' son Cesar led Frontenac to suspect the use of a Caesar cipher.[citation needed]
He published his treatise about Nostradamus' letters and works, La clef secrète de Nostradamus ('The Secret Key of Nostradamus'), in 1950. In the book, Frontenac professed his belief in Nostradamus as a true prophet, who made correct foretellings, and that the centuries (French: Les Prophéties) contained true predictions about future events until the year 3797.
However, Frontenac contended that those predictions were hidden, mixed, and not understandable before the events occurred. His conclusions were based on a combination of several cryptographic methods, including a systematic alteration in the metrical order of the quatrains' texts. This process was inspired by Nostradamus' use of the expression rabouter obscurément ('to mix in order to make them obscure') in a letter.[1]
The systematic reordering of quatrains, according to Frontenac, could be achieved using a couple of combined keys, and he stated that he managed to find the first key (a typical Vigenère text, easy to hold in memory), that was the Latin phrase:
Flamen fidele coegi id vulgo a Kabaloopplevi in viva acta tam latenter densaex HDMP fata hac cult sunt ob gratiaefidos Nostradamus fas obturavit a saxo
Loyal and inspired by the flame (of the Flamini priests), I conceived and gathered what ordinary people call Kabbalò. I had hidden it, in living documents (Magical Actas), that are extremely condensed. The facts of destiny are in this way obscured, using the "HDMP" [perhaps the number 841216.[citation needed]] For those who believe in Divine Grace, Nostradamus has enclosed it in (or behind) a stone.
|
https://en.wikipedia.org/wiki/Roger_Frontenac
|
An anadrome[1][2][3][4][a] is a word or phrase whose letters can be reversed to spell a different word or phrase. For example, desserts is an anadrome of stressed. An anadrome is therefore a special type of anagram. The English language is replete with such words.
The word anadrome comes from Greek anádromos (ἀνάδρομος), "running backward", and can be compared to palíndromos (παλίνδρομος), "running back again" (whence palindrome).
There is a long history (dating at least to the fourteenth century, as with Trebor and S. Uciredor) of alternate and invented names being created out of anadromes of real names; such a contrived proper noun is sometimes called an ananym, especially if it is used as a personal pseudonym. Unlike typical anadromes, these anadromic formations often do not correspond to any real names or words. Similarly, cacographic anadromes are characteristic of Victorian back slang, where for example yob stands for boy.
The English language has a very large number of single-word anadromes, by some counts more than 900.[3] Some examples: drawer ↔ reward, diaper ↔ repaid, and stop ↔ pots.
An anadrome can also be a phrase, as in no tops ↔ spot on. The word redrum (i.e., "red rum") is used this way for murder in the Stephen King novel The Shining (1977) and its film adaptation (1980).[11]
Anadromes exist in other written languages as well, as can be seen, for example, in Spanish orar ↔ raro or French l'ami naturel ("the natural friend") ↔ le rut animal ("the animal rut").
Many jazz titles were created by reversing names or nouns: Ecaroh inverts the spelling of its composer Horace Silver's Christian name, and Sonny Rollins dedicated a tune called "Airegin" to Nigeria.
A number of Pokémon species, such as the snake Pokémon Ekans and Arbok (cobra backwards, with a K), have anadromic names.
|
https://en.wikipedia.org/wiki/Ananym
|
In computing, a linear-feedback shift register (LFSR) is a shift register whose input bit is a linear function of its previous state.
The most commonly used linear function of single bits is exclusive-or (XOR). Thus, an LFSR is most often a shift register whose input bit is driven by the XOR of some bits of the overall shift register value.
The initial value of the LFSR is called the seed, and because the operation of the register is deterministic, the stream of values produced by the register is completely determined by its current (or previous) state. Likewise, because the register has a finite number of possible states, it must eventually enter a repeating cycle. However, an LFSR with a well-chosen feedback function can produce a sequence of bits that appears random and has a very long cycle.
Applications of LFSRs include generating pseudo-random numbers, pseudo-noise sequences, fast digital counters, and whitening sequences. Both hardware and software implementations of LFSRs are common.
The mathematics of a cyclic redundancy check, used to provide a quick check against transmission errors, is closely related to that of an LFSR.[1] In general, the arithmetic behind LFSRs makes them elegant to study and implement: one can produce relatively complex logic with simple building blocks. However, other methods that are less elegant but perform better should be considered as well.
The bit positions that affect the next state are called the taps. In the diagram the taps are [16,14,13,11]. The rightmost bit of the LFSR is called the output bit, which is always also a tap. To obtain the next state, the tap bits are XOR-ed sequentially; then all bits are shifted one place to the right, with the rightmost bit discarded, and the result of XOR-ing the tap bits is fed back into the now-vacant leftmost bit. To obtain the pseudorandom output stream, read the rightmost bit after each state transition.
The sequence of numbers generated by an LFSR or its XNOR counterpart can be considered a binary numeral system just as valid as Gray code or the natural binary code.
The arrangement of taps for feedback in an LFSR can be expressed in finite field arithmetic as a polynomial mod 2, meaning the coefficients of the polynomial must be 1s or 0s. This is called the feedback polynomial or reciprocal characteristic polynomial. For example, if the taps are at the 16th, 14th, 13th and 11th bits (as shown), the feedback polynomial is x^16 + x^14 + x^13 + x^11 + 1.
The "one" in the polynomial does not correspond to a tap – it corresponds to the input to the first bit (i.e. x^0, which is equivalent to 1). The powers of the terms represent the tapped bits, counting from the left. The first and last bits are always connected as an input and output tap respectively.
The LFSR is maximal-length if and only if the corresponding feedback polynomial is primitive over the Galois field GF(2).[3][4] This means that the following conditions are necessary (but not sufficient): the number of taps is even, and the set of taps is setwise co-prime, i.e., there must be no divisor other than 1 common to all taps.
Tables of primitive polynomials from which maximum-length LFSRs can be constructed are given below and in the references.
There can be more than one maximum-length tap sequence for a given LFSR length. Also, once one maximum-length tap sequence has been found, another automatically follows. If the tap sequence in an n-bit LFSR is [n, A, B, C, 0], where the 0 corresponds to the x^0 = 1 term, then the corresponding "mirror" sequence is [n, n−C, n−B, n−A, 0]. So the tap sequence [32, 22, 2, 1, 0] has as its counterpart [32, 31, 30, 10, 0]. Both give a maximum-length sequence.
An example in C is below:
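A minimal sketch (the function name and start state are illustrative choices) that cycles the taps-[16,14,13,11] register until it returns to its seed, counting the period:

```c
#include <stdint.h>

/* Fibonacci LFSR with taps [16,14,13,11]; returns the period,
   which is 65535 for this maximal-length configuration.
   The start state is arbitrary but must be nonzero. */
unsigned lfsr_fib(void)
{
    uint16_t start_state = 0xACE1u;
    uint16_t lfsr = start_state;
    unsigned period = 0;

    do {
        /* taps 16,14,13,11 map to shifts 0,2,3,5 (shift = 16 - tap) */
        uint16_t bit = ((lfsr >> 0) ^ (lfsr >> 2) ^ (lfsr >> 3) ^ (lfsr >> 5)) & 1u;
        lfsr = (uint16_t)((lfsr >> 1) | (bit << 15));
        ++period;
    } while (lfsr != start_state);

    return period;
}
```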
If a fast parity or popcount operation is available, the feedback bit can be computed more efficiently as the dot product of the register with the characteristic polynomial:
If a rotation operation is available, the new state can be computed as
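Both ideas can be sketched as follows, assuming GCC/Clang's __builtin_parity is available; the mask 0x002D sets bits {0,2,3,5}, the shift positions of taps [16,14,13,11], and 0x002C is the same mask without bit 0:

```c
#include <stdint.h>

/* Feedback bit as a dot product (mod 2) of the register with the
   characteristic polynomial mask 0x002D. */
static inline uint16_t step_parity(uint16_t lfsr)
{
    uint16_t bit = (uint16_t)__builtin_parity(lfsr & 0x002Du);
    return (uint16_t)((lfsr >> 1) | (bit << 15));
}

/* With a rotate: bit 0 already lands in the top position after a
   rotate right, so only the remaining taps (mask 0x002C) need to be
   folded into the top bit. */
static inline uint16_t step_rotate(uint16_t lfsr)
{
    uint16_t rot = (uint16_t)((lfsr >> 1) | (lfsr << 15)); /* rotate right by 1 */
    return (uint16_t)(rot ^ ((uint16_t)__builtin_parity(lfsr & 0x002Cu) << 15));
}
```

Both step functions compute the same next state; the rotate form simply reuses the output bit that the rotation has already moved into place.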
This LFSR configuration is also known as standard, many-to-one, or external XOR gates. The alternative Galois configuration is described in the next section.
A sample Python implementation of a similar Fibonacci LFSR (16 bits, with taps at [16,15,13,4]) would be:
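A minimal sketch (the function name and seed are illustrative choices):

```python
def fibonacci_lfsr(seed=0xACE1):
    """Cycle a 16-bit Fibonacci LFSR with taps [16, 15, 13, 4]
    (feedback polynomial x^16 + x^15 + x^13 + x^4 + 1) once around
    its orbit and return the period. The seed must be nonzero."""
    lfsr = seed
    period = 0
    while True:
        # taps 16,15,13,4 correspond to shifts 0,1,3,12 (shift = 16 - tap)
        bit = (lfsr ^ (lfsr >> 1) ^ (lfsr >> 3) ^ (lfsr >> 12)) & 1
        lfsr = (lfsr >> 1) | (bit << 15)
        period += 1
        if lfsr == seed:
            return period
```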
Here a register of 16 bits is used, and the XOR taps at the 4th, 13th, 15th, and 16th bits establish a maximum-length sequence.
Named after the French mathematician Évariste Galois, an LFSR in Galois configuration, which is also known as modular, internal XORs, or one-to-many LFSR, is an alternate structure that can generate the same output stream as a conventional LFSR (but offset in time).[5] In the Galois configuration, when the system is clocked, bits that are not taps are shifted one position to the right unchanged. The taps, on the other hand, are XORed with the output bit before they are stored in the next position. The new output bit is the next input bit. The effect of this is that when the output bit is zero, all the bits in the register shift to the right unchanged, and the input bit becomes zero. When the output bit is one, the bits in the tap positions all flip (if they are 0, they become 1, and if they are 1, they become 0), and then the entire register is shifted to the right and the input bit becomes 1.
To generate the same output stream, the order of the taps is the counterpart (see above) of the order for the conventional LFSR; otherwise the stream will be in reverse. Note that the internal state of the LFSR is not necessarily the same. The Galois register shown has the same output stream as the Fibonacci register in the first section. A time offset exists between the streams, so a different start point will be needed to get the same output each cycle.
Below is a C code example for the 16-bit maximal-period Galois LFSR example in the figure:
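A sketch of such a Galois LFSR (the function name and start state are illustrative); the toggle mask 0xB400 sets the bits corresponding to taps 16, 14, 13, and 11:

```c
#include <stdint.h>

/* Galois LFSR with toggle mask 0xB400 (taps 16,14,13,11); returns
   the period, 65535 for this maximal-length configuration. */
unsigned lfsr_galois(void)
{
    uint16_t start_state = 0xACE1u;
    uint16_t lfsr = start_state;
    unsigned period = 0;

    do {
        unsigned lsb = lfsr & 1u;   /* output bit */
        lfsr >>= 1;                 /* shift the register right */
        if (lsb)                    /* if the output bit is 1, */
            lfsr ^= 0xB400u;        /*   apply the toggle mask */
        ++period;
    } while (lfsr != start_state);

    return period;
}
```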
The branch if (lsb) lfsr ^= 0xB400u; can also be written as lfsr ^= (-lsb) & 0xB400u;, which may produce more efficient code on some compilers. In addition, the left-shifting variant may produce even better code, as the msb is the carry from the addition of lfsr to itself.
State and resulting bits can also be combined and computed in parallel. The following function calculates the next 64 bits using the 63-bit polynomial x^63 + x^62 + 1:
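A bit-serial sketch of that idea (the function name is illustrative); unlike the fully parallel formulation the text refers to, this advances the 63-bit register one bit at a time while collecting 64 output bits:

```c
#include <stdint.h>

/* Advance a 63-bit Fibonacci LFSR with polynomial x^63 + x^62 + 1
   and return the next 64 output bits. The low 63 bits of *state
   hold the register and must be nonzero initially. */
uint64_t lfsr63_next64(uint64_t *state)
{
    uint64_t s = *state, out = 0;
    for (int i = 0; i < 64; ++i) {
        uint64_t fb = (s ^ (s >> 1)) & 1u;    /* taps x^63, x^62 -> shifts 0, 1 */
        out = (out >> 1) | ((s & 1u) << 63);  /* collect the output bit */
        s = (s >> 1) | (fb << 62);            /* feed back into bit 62 */
    }
    *state = s;
    return out;
}
```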
Binary Galois LFSRs like the ones shown above can be generalized to any q-ary alphabet {0, 1, ..., q − 1} (e.g., for binary, q = 2, and the alphabet is simply {0, 1}). In this case, the exclusive-or component is generalized to addition modulo q (note that XOR is addition modulo 2), and the feedback bit (output bit) is multiplied (modulo q) by a q-ary value which is constant for each specific tap point. Note that this is also a generalization of the binary case, where the feedback is multiplied by either 0 (no feedback, i.e., no tap) or 1 (feedback is present). Given an appropriate tap configuration, such LFSRs can be used to generate Galois fields for arbitrary prime values of q.
As shown by George Marsaglia[6] and further analysed by Richard P. Brent,[7] linear feedback shift registers can be implemented using XOR and shift operations. This approach lends itself to fast execution in software because these operations typically map efficiently into modern processor instructions.
Below is a C code example for a 16-bit maximal-period xorshift LFSR using the 7,9,13 triplet from John Metcalf:[8]
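A sketch along those lines (the function name and start state are illustrative choices):

```c
#include <stdint.h>

/* Xorshift LFSR using the 7,9,13 triplet; returns the period,
   65535 for this maximal-period configuration.
   The start state is arbitrary but must be nonzero. */
unsigned lfsr_xorshift(void)
{
    uint16_t start_state = 0xACE1u;
    uint16_t lfsr = start_state;
    unsigned period = 0;

    do {
        lfsr ^= lfsr >> 7;
        lfsr ^= lfsr << 9;
        lfsr ^= lfsr >> 13;
        ++period;
    } while (lfsr != start_state);

    return period;
}
```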
Binary LFSRs of both Fibonacci and Galois configurations can be expressed as linear functions using matrices over F2 (see GF(2)).[9] Using the companion matrix A of the characteristic polynomial of the LFSR and denoting the seed as a column vector (a0, a1, …, an−1)^T, the state of the register in Fibonacci configuration after k steps is given by A^k (a0, a1, …, an−1)^T.
The matrix for the corresponding Galois form is the transpose of the Fibonacci companion matrix.
For a suitable initialisation, the top coefficient of the column vector A^k (a0, a1, …, an−1)^T gives the term ak of the original sequence.
These forms generalize naturally to arbitrary fields.
The following table lists examples of maximal-length feedback polynomials (primitive polynomials) for shift-register lengths up to 24. The formalism for maximum-length LFSRs was developed by Solomon W. Golomb in his 1967 book.[10] The number of different primitive polynomials grows exponentially with shift-register length and can be calculated exactly using Euler's totient function[11] (sequence A011260 in the OEIS).
LFSRs can be implemented in hardware, and this makes them useful in applications that require very fast generation of a pseudo-random sequence, such as direct-sequence spread spectrum radio. LFSRs have also been used for generating an approximation of white noise in various programmable sound generators.
The repeating sequence of states of an LFSR allows it to be used as a clock divider or as a counter when a non-binary sequence is acceptable, as is often the case where computer index or framing locations need to be machine-readable.[12] LFSR counters have simpler feedback logic than natural binary counters or Gray-code counters, and therefore can operate at higher clock rates. However, it is necessary to ensure that the LFSR never enters a lockup state (all zeros for an XOR-based LFSR, all ones for an XNOR-based LFSR), for example by presetting it at start-up to any other state in the sequence. It is possible to count both up and down with an LFSR. LFSRs have also been used as program counters for CPUs; this requires that the program itself be "scrambled", and is done to save gates when they are at a premium (an LFSR uses fewer gates than an adder) and for speed (an LFSR does not require a long carry chain).
The table of primitive polynomials shows how LFSRs can be arranged in Fibonacci or Galois form to give maximal periods. One can obtain any other period by taking an LFSR with a longer period and adding logic that shortens the sequence by skipping some states.
LFSRs have long been used as pseudo-random number generators for use in stream ciphers, due to the ease of construction from simple electromechanical or electronic circuits, long periods, and very uniformly distributed output streams. However, an LFSR is a linear system, leading to fairly easy cryptanalysis. For example, given a stretch of known plaintext and corresponding ciphertext, an attacker can intercept and recover a stretch of LFSR output stream used in the system described, and from that stretch of the output stream can construct an LFSR of minimal size that simulates the intended receiver by using the Berlekamp–Massey algorithm. This LFSR can then be fed the intercepted stretch of output stream to recover the remaining plaintext.
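For illustration, a compact Berlekamp–Massey sketch over GF(2) (the function name and the fixed 256-coefficient buffers are illustrative choices); it returns the length of the shortest LFSR that generates a given bit sequence:

```c
#include <stdint.h>
#include <stddef.h>
#include <string.h>

/* Berlekamp-Massey over GF(2): returns the length L of the shortest
   LFSR generating the bit sequence s[0..n-1] (one bit per byte).
   Supports connection polynomials up to degree 255. */
size_t berlekamp_massey(const uint8_t *s, size_t n)
{
    uint8_t c[256] = {0}, b[256] = {0}, t[256]; /* C(x), B(x), scratch */
    size_t L = 0, m = 1;
    c[0] = b[0] = 1;

    for (size_t i = 0; i < n; ++i) {
        /* discrepancy d = s[i] + sum_{j=1..L} c[j]*s[i-j]  (mod 2) */
        uint8_t d = s[i];
        for (size_t j = 1; j <= L; ++j)
            d ^= c[j] & s[i - j];

        if (d == 0) {
            ++m;                       /* C(x) still explains the prefix */
        } else if (2 * L <= i) {
            memcpy(t, c, sizeof c);    /* C(x) <- C(x) + x^m * B(x) */
            for (size_t j = 0; j + m < sizeof c; ++j)
                c[j + m] ^= b[j];
            L = i + 1 - L;
            memcpy(b, t, sizeof t);
            m = 1;
        } else {
            for (size_t j = 0; j + m < sizeof c; ++j)
                c[j + m] ^= b[j];      /* same update, length unchanged */
            ++m;
        }
    }
    return L;
}
```

Fed 2L output bits of an L-bit maximal-length LFSR, the routine recovers L, which is why exposing raw LFSR output in a cipher is fatal.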
Three general methods are employed to reduce this problem in LFSR-based stream ciphers:
LFSR-based stream ciphers include A5/1 and A5/2, used in GSM cell phones, E0, used in Bluetooth, and the shrinking generator. The A5/2 cipher has been broken, and both A5/1 and E0 have serious weaknesses.[14][15]
The linear feedback shift register has a strong relationship to linear congruential generators.[16]
LFSRs are used in circuit testing for test-pattern generation (for exhaustive testing, pseudo-random testing or pseudo-exhaustive testing) and for signature analysis.
Complete LFSRs are commonly used as pattern generators for exhaustive testing, since they cover all possible inputs for an n-input circuit. Maximal-length LFSRs and weighted LFSRs are widely used as pseudo-random test-pattern generators for pseudo-random test applications.
In built-in self-test (BIST) techniques, storing all the circuit outputs on chip is not possible, but the circuit output can be compressed to form a signature that will later be compared to the golden signature (of the good circuit) to detect faults. Since this compression is lossy, there is always a possibility that a faulty output also generates the same signature as the golden signature, in which case the faults cannot be detected. This condition is called error masking or aliasing. BIST is accomplished with a multiple-input signature register (MISR or MSR), which is a type of LFSR. A standard LFSR has a single XOR or XNOR gate, where the input of the gate is connected to several "taps" and the output is connected to the input of the first flip-flop. A MISR has the same structure, but the input to every flip-flop is fed through an XOR/XNOR gate. For example, a 4-bit MISR has a 4-bit parallel output and a 4-bit parallel input. The input of the first flip-flop is XOR/XNORed with parallel input bit zero and the "taps". Every other flip-flop input is XOR/XNORed with the preceding flip-flop output and the corresponding parallel input bit. Consequently, the next state of the MISR depends on the last several states, as opposed to just the current state. Therefore, a MISR will always generate the same golden signature, given that the input sequence is the same every time.
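The next-state rule described above can be sketched for a hypothetical 4-bit XOR MISR; the tap positions chosen here are illustrative, not canonical:

```c
#include <stdint.h>

/* One step of an illustrative 4-bit XOR-based MISR.
   `state` and `input` are 4-bit values (bit i = flip-flop i).
   Feedback ("taps") is the XOR of stages 3 and 0 - an assumed,
   illustrative tap choice. Each flip-flop input is additionally
   XORed with the corresponding parallel input bit. */
static uint8_t misr_step(uint8_t state, uint8_t input)
{
    uint8_t fb = ((state >> 3) ^ (state >> 0)) & 1u;      /* taps */
    uint8_t next = 0;
    next |= (uint8_t)((fb ^ (input & 1u)) << 0);          /* stage 0 */
    next |= (uint8_t)((((state >> 0) ^ (input >> 1)) & 1u) << 1);
    next |= (uint8_t)((((state >> 1) ^ (input >> 2)) & 1u) << 2);
    next |= (uint8_t)((((state >> 2) ^ (input >> 3)) & 1u) << 3);
    return next & 0x0Fu;
}
```

With an all-zero parallel input the register degenerates to a plain LFSR; feeding the same output sequence of a circuit under test always produces the same final signature.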
Recent applications[17] propose set-reset flip-flops as "taps" of the LFSR. This allows the BIST system to optimise storage, since set-reset flip-flops can save the initial seed needed to generate the whole stream of bits from the LFSR. Nevertheless, this requires changes in the BIST architecture and is an option only for specific applications.
To prevent short repeating sequences (e.g., runs of 0s or 1s) from forming spectral lines that may complicate symbol tracking at the receiver or interfere with other transmissions, the data bit sequence is combined with the output of a linear-feedback register before modulation and transmission. This scrambling is removed at the receiver after demodulation. When the LFSR runs at the same bit rate as the transmitted symbol stream, this technique is referred to as scrambling. When the LFSR runs considerably faster than the symbol stream, the LFSR-generated bit sequence is called a chipping code. The chipping code is combined with the data using exclusive or before transmitting using binary phase-shift keying or a similar modulation method. The resulting signal has a higher bandwidth than the data, and therefore this is a method of spread-spectrum communication. When used only for the spread-spectrum property, this technique is called direct-sequence spread spectrum; when used to distinguish several signals transmitted in the same channel at the same time and frequency, it is called code-division multiple access.
Neither scheme should be confused with encryption or encipherment; scrambling and spreading with LFSRs do not protect the information from eavesdropping. They are instead used to produce equivalent streams that possess convenient engineering properties to allow robust and efficient modulation and demodulation.
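The self-inverting, non-secret nature of XOR scrambling can be sketched in C with a small 7-bit LFSR (the polynomial x⁷ + x⁶ + 1 and the seed below are illustrative choices):

```c
#include <stddef.h>
#include <stdint.h>

/* Additive (synchronous) scrambler: XOR each data bit with the
   output of a 7-bit Fibonacci LFSR (illustrative polynomial
   x^7 + x^6 + 1). Because XOR is its own inverse, running the same
   function with the same nonzero seed descrambles. */
static void scramble(uint8_t *bits, size_t n, uint8_t seed)
{
    uint8_t s = seed & 0x7Fu;                 /* must be nonzero */
    for (size_t i = 0; i < n; i++) {
        uint8_t out = s & 1u;                 /* LFSR output bit */
        uint8_t fb  = ((s >> 0) ^ (s >> 1)) & 1u;
        s = (uint8_t)(((s >> 1) | (fb << 6)) & 0x7Fu);
        bits[i] ^= out;                       /* combine with data */
    }
}
```

Anyone who knows (or guesses) the polynomial and seed can undo the operation, which is why this is a modulation aid rather than encryption.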
Digital broadcasting systems that use linear-feedback registers:
Other digital communications systems using LFSRs:
LFSRs are also used in radio jamming systems to generate pseudo-random noise to raise the noise floor of a target communication system.
The German time signal DCF77, in addition to amplitude keying, employs phase-shift keying driven by a 9-stage LFSR to increase the accuracy of received time and the robustness of the data stream in the presence of noise.[19]
https://en.wikipedia.org/wiki/Linear-feedback_shift_register
Secure coding is the practice of developing computer software in a way that guards against the accidental introduction of security vulnerabilities. Defects, bugs and logic flaws are consistently the primary cause of commonly exploited software vulnerabilities.[1] Through the analysis of thousands of reported vulnerabilities, security professionals have discovered that most vulnerabilities stem from a relatively small number of common software programming errors. By identifying the insecure coding practices that lead to these errors and educating developers on secure alternatives, organizations can take proactive steps to significantly reduce or eliminate vulnerabilities in software before deployment.[2]
Some scholars have suggested that in order to effectively confront threats related to cybersecurity, proper security should be coded or “baked in” to the systems. Designing security into the software ensures protection against insider attacks and reduces the threat to application security.[3]
Buffer overflows, a common software security vulnerability, happen when a process tries to store data beyond a fixed-length buffer. For example, if there are 8 slots to store items in, there will be a problem if there is an attempt to store 9 items. In computer memory the overflowed data may overwrite data in the next location which can result in a security vulnerability (stack smashing) or program termination (segmentation fault).[1]
An example of aCprogram prone to a buffer overflow is
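A minimal sketch of the kind of unsafe program described, with the copy factored into a helper (the name unsafe_copy is ours) so the hazard is easy to see:

```c
#include <string.h>

/* The classic unsafe pattern: copy user input with strcpy into a
   fixed-size buffer. In the described program the destination is a
   small local array (e.g. char buffer[8]) and `src` comes from the
   user (e.g. argv[1]).
   UNSAFE: strcpy copies until the terminating '\0' with no regard
   for the destination's size. */
static void unsafe_copy(char *dst, const char *src)
{
    strcpy(dst, src);   /* overflows dst if src is too long */
}
```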
If the user input is larger than the destination buffer, a buffer overflow will occur.
To fix this unsafe program, use strncpy to prevent a possible buffer overflow.
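A corresponding sketch of the strncpy fix (helper name ours). Note that strncpy alone does not null-terminate the destination when the source is too long, so the terminator is written explicitly:

```c
#include <string.h>

/* Bounded copy: write at most dstsize-1 characters and always
   null-terminate, truncating over-long input instead of
   overflowing. */
static void safe_copy(char *dst, size_t dstsize, const char *src)
{
    strncpy(dst, src, dstsize - 1);
    dst[dstsize - 1] = '\0';   /* strncpy may not terminate */
}
```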
Another secure alternative is to dynamically allocate memory on the heap usingmalloc.
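A sketch of the heap-based alternative (helper name ours): size the destination from the source and check malloc's return value before copying:

```c
#include <stdlib.h>
#include <string.h>

/* Heap-allocated copy of src into dst: the destination is sized to
   fit the source exactly, and the malloc return value is checked.
   The caller must free() the result. */
static char *heap_copy(const char *src)
{
    char *dst = malloc(strlen(src) + 1);  /* +1 for the '\0' */
    if (dst == NULL)
        return NULL;                      /* allocation failed */
    strcpy(dst, src);                     /* now known to fit */
    return dst;
}
```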
In the above code snippet, the program attempts to copy the contents of src into dst, while also checking the return value of malloc to ensure that enough memory could be allocated for the destination buffer.
A format string attack occurs when a malicious user supplies specific inputs that will eventually be passed as an argument to a function that performs formatting, such as printf(). The attack involves the adversary reading from or writing to the stack.
The C printf function writes output to stdout. If the parameter of the printf function is not properly formatted, several security bugs can be introduced. Below is a program that is vulnerable to a format string attack.
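A sketch of the vulnerable pattern alongside the fix (function names ours); the safe variants pass the user string strictly as data, never as the format:

```c
#include <stdio.h>
#include <string.h>

/* VULNERABLE: user-controlled data used as the format string.
   Input such as "%s%s%s%s%s%s%s" makes printf walk the stack
   looking for arguments that were never passed. */
static void echo_unsafe(const char *user)
{
    printf(user);               /* the user chooses the format */
}

/* SAFE: the user string is an argument, not the format. */
static void echo_safe(const char *user)
{
    printf("%s", user);
}

/* The same fix in a buffer-rendering form: format specifiers in
   `user` are copied literally, not interpreted. */
static int render_safe(char *out, size_t n, const char *user)
{
    return snprintf(out, n, "%s", user);
}
```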
A malicious argument passed to the program could be "%s%s%s%s%s%s%s", which can crash the program from improper memory reads.
Integer overflowoccurs when an arithmetic operation results in an integer too large to be represented within the available space. A program which does not properly check for integer overflow introduces potential software bugs and exploits.
Below is a function inC++which attempts to confirm that the sum of x and y is less than or equal to a defined value MAX:
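A C rendering of the kind of flawed check described (the constant MAX, its value, and the function name are illustrative; the original is given in C++):

```c
#include <stdbool.h>
#include <limits.h>

#define MAX 1024u   /* illustrative limit */

/* FLAWED: if x + y wraps past UINT_MAX, the wrapped sum can be
   small enough to pass the comparison even though the true sum
   far exceeds MAX. */
static bool sum_within_max_flawed(unsigned int x, unsigned int y)
{
    return x + y <= MAX;
}
```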
The problem with the code is that it does not check for integer overflow on the addition operation. If the sum of x and y is greater than the maximum value of an unsigned int, the addition wraps around and may yield a value less than or equal to MAX, even though the true sum of x and y is greater than MAX.
Below is a function which checks for overflow by confirming the sum is greater than or equal to both x and y. If the sum did overflow, the sum would be less than x or less than y.
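A sketch of the corrected check (same illustrative MAX and naming as above):

```c
#include <stdbool.h>
#include <limits.h>

#define MAX 1024u   /* illustrative limit */

/* If x + y overflowed, the wrapped result is smaller than both x
   and y, so requiring sum >= x rejects the wrapped case. */
static bool sum_within_max(unsigned int x, unsigned int y)
{
    unsigned int sum = x + y;
    return sum >= x && sum <= MAX;
}
```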
Path traversal is a vulnerability whereby paths provided from an untrusted source are interpreted in such a way that unauthorised file access is possible.
For example, consider a script that fetches an article by taking a filename, which is then read by the script and parsed. Such a script might use the following hypothetical URL to retrieve an article about dog food:
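One plausible form of such a URL (the host, script name and parameter are hypothetical):

```
https://example.com/cgi-bin/article.sh?name=dog_food.html
```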
If the script has no input checking, instead trusting that the filename is always valid, a malicious user could forge a URL to retrieve configuration files from the web server:
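Assuming the script naively joins the supplied filename onto its article directory, such a forged URL might look like the following (hypothetical host and script; the number of ../ segments depends on the directory depth):

```
https://example.com/cgi-bin/article.sh?name=../../../../etc/passwd
```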
Depending on the script, this may expose the /etc/passwd file, which on Unix-like systems contains (among others) user IDs, their login names, home directory paths and shells. (See SQL injection for a similar attack.)
https://en.wikipedia.org/wiki/Secure_coding
In computational complexity theory, a function problem is a computational problem where a single output (of a total function) is expected for every input, but the output is more complex than that of a decision problem. For function problems, the output is not simply 'yes' or 'no'.
A functional problem P{\displaystyle P} is defined by a relation R{\displaystyle R} over strings of an arbitrary alphabet Σ{\displaystyle \Sigma }: R⊆Σ∗×Σ∗{\displaystyle R\subseteq \Sigma ^{*}\times \Sigma ^{*}}.
An algorithm solvesP{\displaystyle P}if for every inputx{\displaystyle x}such that there exists ay{\displaystyle y}satisfying(x,y)∈R{\displaystyle (x,y)\in R}, the algorithm produces one suchy{\displaystyle y}, and if there are no suchy{\displaystyle y}, it rejects.
A promise function problem is allowed to do anything (thus may not terminate) if no suchy{\displaystyle y}exists.
A well-known function problem is given by the Functional Boolean Satisfiability Problem,FSATfor short. The problem, which is closely related to theSATdecision problem, can be formulated as follows:
In this case the relationR{\displaystyle R}is given by tuples of suitably encoded boolean formulas and satisfying assignments.
While a SAT algorithm, fed with a formulaφ{\displaystyle \varphi }, only needs to return "unsatisfiable" or "satisfiable", an FSAT algorithm needs to return some satisfying assignment in the latter case.
Other notable examples include thetravelling salesman problem, which asks for the route taken by the salesman, and theinteger factorization problem, which asks for the list of factors.
Consider an arbitrarydecision problemL{\displaystyle L}in the classNP. By the definition ofNP, each problem instancex{\displaystyle x}that is answered 'yes' has a polynomial-size certificatey{\displaystyle y}which serves as a proof for the 'yes' answer. Thus, the set of these tuples(x,y){\displaystyle (x,y)}forms a relation, representing the function problem "givenx{\displaystyle x}inL{\displaystyle L}, find a certificatey{\displaystyle y}forx{\displaystyle x}". This function problem is called thefunction variantofL{\displaystyle L}; it belongs to the classFNP.
FNPcan be thought of as the function class analogue ofNP, in that solutions ofFNPproblems can be efficiently (i.e., inpolynomial timein terms of the length of the input)verified, but not necessarily efficientlyfound. In contrast, the classFP, which can be thought of as the function class analogue ofP, consists of function problems whose solutions can be found in polynomial time.
Observe that the problemFSATintroduced above can be solved using only polynomially many calls to a subroutine which decides theSATproblem: An algorithm can first ask whether the formulaφ{\displaystyle \varphi }is satisfiable. After that the algorithm can fix variablex1{\displaystyle x_{1}}to TRUE and ask again. If the resulting formula is still satisfiable the algorithm keepsx1{\displaystyle x_{1}}fixed to TRUE and continues to fixx2{\displaystyle x_{2}}, otherwise it decides thatx1{\displaystyle x_{1}}has to be FALSE and continues. Thus,FSATis solvable in polynomial time using anoracledecidingSAT. In general, a problem inNPis calledself-reducibleif its function variant can be solved in polynomial time using an oracle deciding the original problem. EveryNP-completeproblem is self-reducible. It is conjectured[by whom?]that theinteger factorization problemis not self-reducible, because deciding whether an integer is prime is inP(easy),[1]while the integer factorization problem is believed to be hard for a classical computer.
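The variable-fixing procedure just described can be sketched in C. Here the SAT oracle is a brute-force stand-in (only its yes/no answers are used; in the self-reducibility argument it would be the hypothetical polynomial-time decider), and the 3-variable formula φ is illustrative:

```c
#include <stdbool.h>
#include <stddef.h>

#define NVARS 3
typedef bool (*formula_fn)(const bool *a);

/* Decision oracle: is f satisfiable when variables 0..fixed-1 are
   pinned to their current values in a? (Brute force over the
   remaining variables stands in for a SAT decider.) */
static bool sat_oracle(formula_fn f, bool *a, size_t fixed)
{
    if (fixed == NVARS)
        return f(a);
    a[fixed] = false;
    if (sat_oracle(f, a, fixed + 1))
        return true;
    a[fixed] = true;
    return sat_oracle(f, a, fixed + 1);
}

/* FSAT by self-reduction: one initial oracle call plus one call
   per variable recovers a satisfying assignment. */
static bool fsat(formula_fn f, bool a[NVARS])
{
    if (!sat_oracle(f, a, 0))
        return false;                 /* unsatisfiable */
    for (size_t i = 0; i < NVARS; i++) {
        a[i] = true;                  /* try fixing x_i to TRUE */
        if (!sat_oracle(f, a, i + 1))
            a[i] = false;             /* then x_i must be FALSE */
    }
    return true;
}

/* Illustrative formula: (x0 OR x1) AND (NOT x0) AND x2,
   whose unique satisfying assignment is (F, T, T). */
static bool phi(const bool *a)
{
    return (a[0] || a[1]) && !a[0] && a[2];
}
```

The point of the sketch is the call pattern: fsat issues only NVARS + 1 oracle queries, i.e. polynomially many in the number of variables.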
There are several (slightly different) notions of self-reducibility.[2][3][4]
Function problems can be reduced much like decision problems: Given function problems ΠR{\displaystyle \Pi _{R}} and ΠS{\displaystyle \Pi _{S}}, we say that ΠR{\displaystyle \Pi _{R}} reduces to ΠS{\displaystyle \Pi _{S}} if there exist polynomial-time computable functions f{\displaystyle f} and g{\displaystyle g} such that for all instances x{\displaystyle x} of R{\displaystyle R} and possible solutions y{\displaystyle y} of S{\displaystyle S}, it holds that
It is therefore possible to defineFNP-completeproblems analogous to the NP-complete problem:
A problemΠR{\displaystyle \Pi _{R}}isFNP-completeif every problem inFNPcan be reduced toΠR{\displaystyle \Pi _{R}}. The complexity class ofFNP-completeproblems is denoted byFNP-CorFNPC. Hence the problemFSATis also anFNP-completeproblem, and it holds thatP=NP{\displaystyle \mathbf {P} =\mathbf {NP} }if and only ifFP=FNP{\displaystyle \mathbf {FP} =\mathbf {FNP} }.
The relationR(x,y){\displaystyle R(x,y)}used to define function problems has the drawback of being incomplete: Not every inputx{\displaystyle x}has a counterparty{\displaystyle y}such that(x,y)∈R{\displaystyle (x,y)\in R}. Therefore the question of computability of proofs is not separated from the question of their existence. To overcome this problem it is convenient to consider the restriction of function problems to total relations yielding the classTFNPas a subclass ofFNP. This class contains problems such as the computation of pureNash equilibriain certain strategic games where a solution is guaranteed to exist. In addition, ifTFNPcontains anyFNP-completeproblem it follows thatNP=co-NP{\displaystyle \mathbf {NP} ={\textbf {co-NP}}}.
https://en.wikipedia.org/wiki/Function_problem
Evolutionary linguisticsorDarwinian linguisticsis asociobiologicalapproach to the study oflanguage.[1][2]Evolutionary linguists consider linguistics as a subfield ofsociobiologyandevolutionary psychology. The approach is also closely linked withevolutionary anthropology,cognitive linguisticsandbiolinguistics. Studying languages as the products ofnature, it is interested in the biologicaloriginand development of language.[3]Evolutionary linguistics is contrasted withhumanisticapproaches, especiallystructural linguistics.[4]
A main challenge in this research is the lack of empirical data: there are noarchaeologicaltraces of early human language.Computational biological modellingandclinical researchwithartificial languageshave been employed to fill in gaps of knowledge. Although biology is understood to shape thebrain, whichprocesses language, there is no clear link between biology and specific human language structures orlinguistic universals.[5]
For lack of a breakthrough in the field, there have been numerous debates about what kind of natural phenomenon language might be. Some researchers focus on theinnate aspects of language. It is suggested that grammar has emerged adaptationally from the human genome, bringing about a languageinstinct;[6]or that it depends on a single mutation[7]which has caused alanguage organto appear in the human brain.[8]This is hypothesized to result in acrystalline[9]grammatical structure underlying all human languages. Others suggest language is not crystallized, but fluid and ever-changing.[10]Others, yet, liken languages to livingorganisms.[11]Languages are considered analogous to aparasite[12]orpopulationsofmind-viruses. There is so far littlescientific evidencefor any of these claims, and some of them have been labelled aspseudoscience.[13][14]
Although pre-Darwinian theorists had compared languages to living organisms as ametaphor, the comparison was first taken literally in 1863 by thehistorical linguistAugust Schleicherwho was inspired byCharles Darwin'sOn the Origin of Species.[15]At the time there was not enough evidence to prove that Darwin's theory ofnatural selectionwas correct. Schleicher proposed that linguistics could be used as a testing ground for the study of the evolution ofspecies.[16]A review of Schleicher's bookDarwinism as Tested by the Science of Languageappeared in the first issue ofNaturejournal in 1870.[17]Darwin reiterated Schleicher's proposition in his 1871 bookThe Descent of Man, claiming that languages are comparable to species, and thatlanguage changeoccurs throughnatural selectionas words 'struggle for life'. Darwin believed that languages had evolved from animalmating calls.[18]Darwinists considered the concept of language creation as unscientific.[19]
August Schleicher and his friendErnst Haeckelwere keen gardeners and regarded the study of cultures as a type ofbotany, with different species competing for the same living space.[20][16]Similar ideas became later advocated by politicians who wanted to appeal toworking classvoters, not least by thenational socialistswho subsequently included the concept of struggle for living space in their agenda.[21]Highly influential until the end ofWorld War II,social Darwinismwas eventually banished from human sciences, leading to a strict separation of natural and sociocultural studies.[16]
This gave rise to the dominance of structural linguistics in Europe. There had long been a dispute between the Darwinists and the French intellectuals, with the topic of language evolution famously having been banned by the Paris Linguistic Society as early as 1866. Ferdinand de Saussure proposed structuralism to replace evolutionary linguistics in his Course in General Linguistics, published posthumously in 1916. The structuralists rose to academic political power in the human and social sciences in the aftermath of the student revolts of Spring 1968, establishing the Sorbonne as an international centrepoint of humanistic thinking.
In theUnited States, structuralism was however fended off by the advocates ofbehavioural psychology; a linguistics framework nicknamed as 'American structuralism'. It was eventually replaced by the approach ofNoam Chomskywho published a modification ofLouis Hjelmslev'sformal structuralist theory, claiming thatsyntactic structuresareinnate. An active figure in peace demonstrations in the 1950s and 1960s, Chomsky rose to academic political power following Spring 1968 at the MIT.[22]
Chomsky became an influential opponent of the French intellectuals during the following decades, and his supporters successfully confronted thepost-structuralistsin theScience Warsof the late 1990s.[23]The shift of the century saw a new academic funding policy where interdisciplinary research became favoured, effectively directing research funds to biological humanities.[24]The decline of structuralism was evident by 2015 with Sorbonne having lost its former spirit.[25]
Chomsky eventually claimed that syntactic structures are caused by a randommutationin the humangenome,[7]proposing a similar explanation for other human faculties such asethics.[22]ButSteven Pinkerargued in 1990 that they are the outcome of evolutionaryadaptations.[26]
At the same time when the Chomskyan paradigm ofbiological determinismdefeatedhumanism, it was losing its own clout within sociobiology. It was reported likewise in 2015 thatgenerative grammarwas under fire inapplied linguisticsand in the process of being replaced withusage-based linguistics;[27]a derivative ofRichard Dawkins'smemetics.[28]It is a concept of linguistic units asreplicators. Following the publication of memetics in Dawkins's 1976 nonfiction bestsellerThe Selfish Gene, many biologically inclined linguists, frustrated with the lack of evidence for Chomsky'sUniversal Grammar, grouped under different brands including a framework calledCognitive Linguistics(with capitalised initials), and 'functional' (adaptational) linguistics (not to be confused withfunctional linguistics) to confront both Chomsky and the humanists.[4]The replicator approach is today dominant in evolutionary linguistics, applied linguistics, cognitive linguistics andlinguistic typology; while the generative approach has maintained its position in general linguistics, especiallysyntax; and incomputational linguistics.
Evolutionary linguistics is part of a wider framework ofUniversal Darwinism. In this view, linguistics is seen as anecologicalenvironment for research traditions struggling for the same resources.[4]According toDavid Hull, these traditions correspond to species in biology. Relationships between research traditions can besymbiotic,competitiveorparasitic. An adaptation of Hull's theory in linguistics is proposed byWilliam Croft.[3]He argues that the Darwinian method is more advantageous than linguistic models based onphysics,structuralist sociology, orhermeneutics.[4]
Evolutionary linguistics is often divided intofunctionalismandformalism,[29]concepts which are not to be confused withfunctionalismandformalismin the humanistic reference.[30]Functional evolutionary linguistics considers languages asadaptationsto human mind. The formalist view regards them as crystallised or non-adaptational.[29]
The adaptational view of language is advocated by various frameworks of cognitive and evolutionary linguistics, with the terms 'functionalism' and 'Cognitive Linguistics' often being equated.[31]It is hypothesised that the evolution of the animal brain provides humans with a mechanism of abstract reasoning which is a 'metaphorical' version of image-based reasoning.[32]Language is not considered as a separate area ofcognition, but as coinciding with general cognitive capacities, such asperception,attention,motor skills, and spatial andvisual processing. It is argued to function according to the same principles as these.[33][34]
It is thought that the brain links action schemes to form–meaning pairs which are calledconstructions.[35]Cognitive linguistic approaches to syntax are calledcognitiveandconstruction grammar.[33]Also deriving from memetics and other cultural replicator theories,[3]these can study the natural orsocial selectionand adaptation of linguistic units. Adaptational models reject a formal systemic view of language and consider language as a population of linguistic units.
The bad reputation of social Darwinism and memetics has been discussed in the literature, and recommendations for new terminology have been given.[36]What correspond to replicators or mind-viruses in memetics are calledlinguemesin Croft'stheory of Utterance Selection(TUS),[37]and likewise linguemes or constructions in construction grammar andusage-based linguistics;[38][39]andmetaphors,[40]frames[41]orschemas[42]in cognitive and construction grammar. The reference of memetics has been largely replaced with that of aComplex Adaptive System.[43]In current linguistics, this term covers a wide range of evolutionary notions while maintaining theNeo-Darwinianconcepts of replication and replicator population.[44]
Functional evolutionary linguistics is not to be confused withfunctional humanistic linguistics.
Advocates of formal evolutionary explanation in linguistics argue that linguistic structures are crystallised. Inspired by 19th century advances incrystallography, Schleicher argued that different types of languages are like plants, animals and crystals.[45]The idea of linguistic structures as frozen drops was revived intagmemics,[46]an approach to linguistics with the goal to uncover divine symmetries underlying all languages, as if caused bythe Creation.[47]
In modernbiolinguistics,the X-bar treeis argued to be like natural systems such asferromagnetic dropletsand botanic forms.[48]Generative grammar considers syntactic structures similar tosnowflakes.[9]It is hypothesised that such patterns are caused by amutationin humans.[7]
The formal–structural evolutionary aspect of linguistics is not to be confused withstructural linguistics.
There was some hope of a breakthrough with the discovery of theFOXP2gene.[49][50]There is little support, however, for the idea thatFOXP2is 'the grammar gene' or that it had much to do with the relatively recent emergence of syntactical speech.[51]The idea that people have a language instinct is disputed.[52][53]Memetics is sometimes discredited aspseudoscience[14]and neurological claims made by evolutionary cognitive linguists have been likened to pseudoscience.[13]All in all, there does not appear to be any evidence for the basic tenets of evolutionary linguistics beyond the fact that language is processed by the brain, and brain structures are shaped by genes.[54][55]
Evolutionary linguistics has been criticised by advocates of (humanistic) structural and functional linguistics.Ferdinand de Saussurecommented on 19th century evolutionary linguistics:
"Language was considered a specific sphere, a fourth natural kingdom; this led to methods of reasoning which would have caused astonishment in other sciences. Today one cannot read a dozen lines written at that time without being struck by absurdities of reasoning and by the terminology used to justify these absurdities."[56]
Mark Aronoff, however, argues that historical linguistics had its golden age during the time of Schleicher and his supporters, enjoying a place among the hard sciences, and considers the return of Darwinian linguistics as a positive development.Esa Itkonennonetheless deems the revival of Darwinism as a hopeless enterprise:
"There is ... an application of intelligence in linguistic change which is absent in biological evolution; and this suffices to make the two domains totally disanalogous ... [Grammaticalisation depends on] cognitive processes, ultimately serving the goal of problem solving, which intelligent entities like humans must perform all the time, but which biological entities like genes cannot perform. Trying to eliminate this basic difference leads to confusion."[57]
Itkonen also points out that the principles of natural selection are not applicable because language innovation and acceptance have the same source which is the speech community. In biological evolution, mutation and selection have different sources. This makes it possible for people to change their languages, but not theirgenotype.[58]
https://en.wikipedia.org/wiki/Evolutionary_linguistics
In theoretical physics, the hierarchy problem is the problem concerning the large discrepancy between aspects of the weak force and gravity.[1] There is no scientific consensus on why, for example, the weak force is 10²⁴ times stronger than gravity.
A hierarchy problem[2]occurs when the fundamental value of some physical parameter, such as acoupling constantor a mass, in someLagrangianis vastly different from its effective value, which is the value that gets measured in an experiment. This happens because the effective value is related to the fundamental value by a prescription known asrenormalization, which applies corrections to it.
Typically the renormalized value of parameters are close to their fundamental values, but in some cases, it appears that there has been a delicate cancellation between the fundamental quantity and the quantum corrections. Hierarchy problems are related tofine-tuning problemsand problems of naturalness.
Throughout the 2010s, many scientists[3][4][5][6][7]argued that the hierarchy problem is a specific application ofBayesian statistics.
Studying renormalization in hierarchy problems is difficult, because such quantum corrections are usually power-law divergent, which means that the shortest-distance physics is most important. Because we do not know the precise details of quantum gravity, we cannot even address how this delicate cancellation between two large terms occurs. Therefore, researchers are led to postulate new physical phenomena that resolve hierarchy problems without fine-tuning.
Suppose a physics model requires four parameters to produce a very high-quality working model capable of generating predictions regarding some aspect of our physical universe. Suppose we find through experiments that the parameters have values: 1.2, 1.31, 0.9 and a value near 4×10²⁹. One might wonder how such figures arise. In particular, one might be especially curious about a theory where three values are close to one, and the fourth is so different; i.e., the huge disproportion we seem to find between the first three parameters and the fourth. If one force is so much weaker than the others that it needs a factor of 4×10²⁹ to allow it to be related to the others in terms of effects, we might also wonder how our universe came to be so exactly balanced when its forces emerged. In current particle physics, the differences between some actual parameters are much larger than this, so the question is noteworthy.
One explanation given by philosophers is theanthropic principle. If the universe came to exist by chance and vast numbers of other universes exist or have existed, then lifeforms capable of performing physics experiments only arose in universes that, by chance, had very balanced forces. All of the universes where the forces were not balanced did not develop life capable of asking this question. So if lifeforms likehuman beingsare aware and capable of asking such a question, humans must have arisen in a universe having balanced forces, however rare that might be.[8][9]
A second possible answer is that there is a deeper understanding of physics that we currently do not possess. There may be parameters from which we can derive physical constants that have fewer unbalanced values, or there may be a model with fewer parameters.[citation needed]
In particle physics, the most important hierarchy problem is the question of why the weak force is 10²⁴ times as strong as gravity.[10] Both of these forces involve constants of nature: the Fermi constant for the weak force and the Newtonian constant of gravitation for gravity. Furthermore, if the Standard Model is used to calculate the quantum corrections to Fermi's constant, it appears that Fermi's constant is surprisingly large and would be expected to be closer to Newton's constant, unless there is a delicate cancellation between the bare value of Fermi's constant and the quantum corrections to it.
More technically, the question is why theHiggs bosonis so much lighter than thePlanck mass(or thegrand unification energy, or a heavy neutrino mass scale): one would expect that the large quantum contributions to the square of the Higgs boson mass would inevitably make the mass huge, comparable to the scale at which new physics appears unless there is an incrediblefine-tuningcancellation between the quadratic radiative corrections and the bare mass.
The problem cannot even be formulated in the strict context of the Standard Model, for the Higgs mass cannot be calculated. In a sense, the problem amounts to the worry that a future theory of fundamental particles, in which the Higgs boson mass will be calculable, should not have excessive fine-tunings.
There have been many proposed solutions by many experienced physicists.
Some physicists believe that one may solve the hierarchy problem via supersymmetry. Supersymmetry can explain how a tiny Higgs mass can be protected from quantum corrections. Supersymmetry removes the power-law divergences of the radiative corrections to the Higgs mass and solves the hierarchy problem as long as the supersymmetric particles are light enough to satisfy the Barbieri–Giudice criterion.[11] This still leaves open the mu problem, however. The tenets of supersymmetry are being tested at the LHC, although no evidence has been found so far for supersymmetry.
Each particle that couples to the Higgs field has an associatedYukawa couplingλf{\textstyle \lambda _{f}}. The coupling with the Higgs field for fermions gives an interaction termLYukawa=−λfψ¯Hψ{\textstyle {\mathcal {L}}_{\mathrm {Yukawa} }=-\lambda _{f}{\bar {\psi }}H\psi }, withψ{\textstyle \psi }being theDirac fieldandH{\textstyle H}theHiggs field. Also, the mass of a fermion is proportional to its Yukawa coupling, meaning that the Higgs boson will couple most to the most massive particle. This means that the most significant corrections to the Higgs mass will originate from the heaviest particles, most prominently the top quark. By applying theFeynman rules, one gets the quantum corrections to the Higgs mass squared from a fermion to be:
Δm_H² = −(|λ_f|² / 8π²) [Λ_UV² + …].
Here Λ_UV is the ultraviolet cutoff, the scale up to which the Standard Model is valid. If we take this scale to be the Planck scale, then we have a quadratically divergent correction. However, suppose there existed two complex scalars (spin 0) such that:
λ_S = |λ_f|²
(the couplings to the Higgs are exactly the same).
Then by the Feynman rules, the correction (from both scalars) is:
Δm_H² = 2 × (λ_S / 16π²) [Λ_UV² + …].
(Note that the contribution here is positive. This is a consequence of the spin–statistics theorem: fermion loops contribute with a negative sign and boson loops with a positive sign. This fact is exploited.)
The total contribution to the Higgs mass then vanishes if we include both the fermionic and bosonic particles. Supersymmetry is an extension of this idea that creates "superpartners" for all Standard Model particles.[12]
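As a numerical illustration of this cancellation (a sketch only, not a calculation in any specific model; the coupling and cutoff values below are assumed for the purpose of the example), one can check that with λ_S = |λ_f|² the two leading quadratic terms cancel:

```python
import math

def fermion_correction(lambda_f, cutoff):
    """Leading quadratic piece of the fermion loop: -|lambda_f|^2/(8 pi^2) * Lambda^2."""
    return -abs(lambda_f) ** 2 / (8 * math.pi ** 2) * cutoff ** 2

def scalar_correction(lambda_s, cutoff):
    """Leading quadratic piece from the two scalars: 2 * lambda_s/(16 pi^2) * Lambda^2."""
    return 2 * lambda_s / (16 * math.pi ** 2) * cutoff ** 2

lambda_f = 0.99      # top-like Yukawa coupling (illustrative value)
cutoff = 1.22e19     # Planck-scale cutoff in GeV (illustrative value)

# With lambda_s = |lambda_f|**2, the quadratic divergences cancel.
total = fermion_correction(lambda_f, cutoff) + scalar_correction(lambda_f ** 2, cutoff)
print(total)
```

Each term individually is of order Λ², i.e. enormous compared to the electroweak scale; only their sum is small, which is the point of the supersymmetric protection.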
Without supersymmetry, a solution to the hierarchy problem has been proposed using just the Standard Model. The idea can be traced back to the fact that the term in the Higgs field that produces the uncontrolled quadratic correction upon renormalization is the quadratic one. If the Higgs field had no mass term, then no hierarchy problem arises. But without a quadratic term in the Higgs field, one must find another way to recover the breaking of electroweak symmetry through a non-null vacuum expectation value. This can be obtained using the Weinberg–Coleman mechanism, with terms in the Higgs potential arising from quantum corrections. Mass obtained in this way is far too small with respect to what is seen in accelerator facilities, and so a conformal Standard Model needs more than one Higgs particle. This proposal was put forward in 2006 by Krzysztof Antoni Meissner and Hermann Nicolai[13] and is currently under scrutiny. But if no further excitation is observed beyond the one seen so far at the LHC, this model would have to be abandoned.
No experimental or observational evidence of extra dimensions has been officially reported. Analyses of results from the Large Hadron Collider severely constrain theories with large extra dimensions.[14] However, extra dimensions could explain why gravity is so weak, and why the expansion of the universe is faster than expected.[15]
If we live in a 3+1-dimensional world, then we calculate the gravitational force via Gauss's law for gravity:
g(r) = −G m e_r / r²   (1)
which is simply Newton's law of gravitation. Note that Newton's constant G can be rewritten in terms of the Planck mass:
G = ħc / M_Pl²
If we extend this idea to δ extra dimensions, then we get:
g(r) = −m e_r / (M_Pl(3+1+δ)^(2+δ) r^(2+δ))   (2)
where M_Pl(3+1+δ) is the (3+1+δ)-dimensional Planck mass. However, equation (2) assumes that the extra dimensions are the same size as the normal 3+1 dimensions. Suppose instead that the extra dimensions are of size n, much smaller than the normal dimensions. For r ≪ n, we get (2); for r ≫ n, we recover the usual Newton's law, because at such distances the flux in the extra dimensions becomes a constant: there is no extra room for gravitational flux to flow through. The flux is then proportional to n^δ, the flux in the extra dimensions. The formula is:
g(r) = −m e_r / (M_Pl(3+1+δ)^(2+δ) r² n^δ)
−m e_r / (M_Pl² r²) = −m e_r / (M_Pl(3+1+δ)^(2+δ) r² n^δ)
which gives:
1 / (M_Pl² r²) = 1 / (M_Pl(3+1+δ)^(2+δ) r² n^δ)  ⟹  M_Pl² = M_Pl(3+1+δ)^(2+δ) n^δ
Thus the fundamental Planck mass (the extra-dimensional one) could actually be small, meaning that gravity is actually strong, but this must be compensated by the number of the extra dimensions and their size. Physically, this means that gravity is weak because there is a loss of flux to the extra dimensions.
This section is adapted from Quantum Field Theory in a Nutshell by A. Zee.[16]
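The final relation M_Pl² = M_Pl(3+1+δ)^(2+δ) n^δ allows a back-of-the-envelope estimate in the spirit of the large-extra-dimension scenario discussed next. The sketch below assumes a fundamental scale of 1 TeV (an illustrative choice, not a measured value), works in natural units, and converts GeV⁻¹ to meters with ħc:

```python
# Illustrative estimate: how large must delta extra dimensions be for a
# fundamental scale M* ~ 1 TeV to reproduce the observed 4D Planck mass,
# using M_Pl^2 = M*^(2+delta) * n^delta in natural units (hbar = c = 1)?
M_PL = 1.22e19      # four-dimensional Planck mass in GeV
M_STAR = 1.0e3      # assumed fundamental scale: 1 TeV, in GeV
HBARC = 1.973e-16   # hbar*c in GeV*m, to convert GeV^-1 to meters

def extra_dimension_size(delta):
    """Size n (in meters) solving M_Pl^2 = M*^(2+delta) * n^delta."""
    n_in_inverse_gev = (M_PL ** 2 / M_STAR ** (2 + delta)) ** (1.0 / delta)
    return n_in_inverse_gev * HBARC

for delta in (1, 2, 3):
    print(delta, extra_dimension_size(delta))
```

For δ = 2 this gives a size of roughly a millimeter, which is why sub-millimeter tests of gravity constrain such scenarios; for δ = 1 the required size is astronomically large and clearly excluded.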
In 1998 Nima Arkani-Hamed, Savas Dimopoulos, and Gia Dvali proposed the ADD model, also known as the model with large extra dimensions, an alternative scenario to explain the weakness of gravity relative to the other forces.[17][18] This theory requires that the fields of the Standard Model are confined to a four-dimensional membrane, while gravity propagates in several additional spatial dimensions that are large compared to the Planck scale.[19]
In 1998–99 Merab Gogberashvili published on arXiv (and subsequently in peer-reviewed journals) a number of articles showing that if the Universe is considered as a thin shell (a mathematical synonym for "brane") expanding in 5-dimensional space, then it is possible to obtain one scale for particle theory corresponding to the 5-dimensional cosmological constant and the Universe's thickness, and thus to solve the hierarchy problem.[20][21][22] It was also shown that the four-dimensionality of the Universe is the result of a stability requirement, since the extra component of the Einstein field equations giving the localized solution for matter fields coincides with one of the conditions of stability.
Subsequently, the closely related Randall–Sundrum scenarios were proposed, offering their own solution to the hierarchy problem.
In 2019, a pair of researchers proposed that IR/UV mixing resulting in the breakdown of the effective quantum field theory could resolve the hierarchy problem.[23] In 2021, another group of researchers showed that UV/IR mixing could resolve the hierarchy problem in string theory.[24]
In physical cosmology, current observations in favor of an accelerating universe imply the existence of a tiny but nonzero cosmological constant. This problem, called the cosmological constant problem, is a hierarchy problem very similar to that of the Higgs boson mass problem, since the cosmological constant is also very sensitive to quantum corrections, but its calculation is complicated by the necessary involvement of general relativity in the problem. Proposed solutions to the cosmological constant problem include modifying and/or extending gravity,[25][26][27] adding matter with unvanishing pressure,[28] and UV/IR mixing in the Standard Model and gravity.[29][30]
Some physicists have resorted to anthropic reasoning to solve the cosmological constant problem,[31] but it is disputed whether such anthropic reasoning is scientific.[32][33]
https://en.wikipedia.org/wiki/Hierarchy_problem
YAGO (Yet Another Great Ontology) is an open-source[3] knowledge base developed at the Max Planck Institute for Informatics in Saarbrücken. It is automatically extracted from Wikidata and Schema.org.
YAGO4, which was released in 2020, combines data extracted from Wikidata with relationship designators from Schema.org.[4] The previous version of YAGO, YAGO3, had knowledge of more than 10 million entities and contained more than 120 million facts about these entities.[5] The information in YAGO3 was extracted from Wikipedia (e.g., categories, redirects, infoboxes), WordNet (e.g., synsets, hyponymy), and GeoNames.[6] The accuracy of YAGO was manually evaluated to be above 95% on a sample of facts.[7] To integrate it into the linked data cloud, YAGO has been linked to the DBpedia ontology[8] and to the SUMO ontology.[9]
YAGO3 is provided in Turtle and TSV formats. Dumps of the whole database are available, as well as thematic and specialized dumps. It can also be queried through various online browsers and through a SPARQL endpoint hosted by OpenLink Software. The source code of YAGO3 is available on GitHub.
YAGO has been used in the Watson artificial intelligence system.[10]
https://en.wikipedia.org/wiki/YAGO_(database)
Addition (usually signified by the plus symbol, +) is one of the four basic operations of arithmetic, the other three being subtraction, multiplication, and division. The addition of two whole numbers results in the total or sum of those values combined. For example, the adjacent image shows two columns of apples, one with three apples and the other with two apples, totaling to five apples. This observation is expressed as "3 + 2 = 5", which is read as "three plus two equals five".
Besides counting items, addition can also be defined and executed without referring to concrete objects, using abstractions called numbers instead, such as integers, real numbers, and complex numbers. Addition belongs to arithmetic, a branch of mathematics. In algebra, another area of mathematics, addition can also be performed on abstract objects such as vectors, matrices, subspaces, and subgroups.
Addition has several important properties. It is commutative, meaning that the order of the numbers being added does not matter, so 3 + 2 = 2 + 3, and it is associative, meaning that when one adds more than two numbers, the order in which addition is performed does not matter. Repeated addition of 1 is the same as counting (see Successor function). Addition of 0 does not change a number. Addition also obeys rules concerning related operations such as subtraction and multiplication.
Addition is one of the simplest numerical tasks to perform. Addition of very small numbers is accessible to toddlers; the most basic task, 1 + 1, can be performed by infants as young as five months, and even by some members of other animal species. In primary education, students are taught to add numbers in the decimal system, beginning with single digits and progressively tackling more difficult problems. Mechanical aids range from the ancient abacus to the modern computer, where research on the most efficient implementations of addition continues to this day.
Addition is written using the plus sign "+" between the terms, and the result is expressed with an equals sign. For example, 1 + 2 = 3 reads "one plus two equals three".[2] Nonetheless, in some situations addition is "understood" even though no symbol appears: a whole number followed immediately by a fraction indicates the sum of the two, called a mixed number; for example, 3½ = 3 + ½ = 3.5.[3] This notation can cause confusion, since in most other contexts juxtaposition denotes multiplication instead.[4]
The numbers or the objects to be added in general addition are collectively referred to as theterms,[5]theaddendsor thesummands.[2]This terminology carries over to the summation of multiple terms.
This is to be distinguished fromfactors, which aremultiplied.
Some authors call the first addend theaugend.[6]In fact, during theRenaissance, many authors did not consider the first addend an "addend" at all. Today, due to thecommutative propertyof addition, "augend" is rarely used, and both terms are generally called addends.[7]
All of the above terminology derives fromLatin. "Addition" and "add" areEnglishwords derived from the Latinverbaddere, which is in turn acompoundofad"to" anddare"to give", from theProto-Indo-European root*deh₃-"to give"; thus toaddis togive to.[7]Using thegerundivesuffix-ndresults in "addend", "thing to be added".[a]Likewise fromaugere"to increase", one gets "augend", "thing to be increased".
"Sum" and "summand" derive from the Latinnounsumma"the highest, the top" and associated verbsummare. This is appropriate not only because the sum of two positive numbers is greater than either, but because it was common for theancient GreeksandRomansto add upward, contrary to the modern practice of adding downward, so that a sum was literally at the top of the addends.[9]Addereandsummaredate back at least toBoethius, if not to earlier Roman writers such asVitruviusandFrontinus; Boethius also used several other terms for the addition operation. The laterMiddle Englishterms "adden" and "adding" were popularized byChaucer.[10]
Addition is one of the four basic operations of arithmetic, with the other three being subtraction, multiplication, and division. This operation works by adding two or more terms.[11] The addition of arbitrarily many terms is called summation.[12] An infinite summation is a delicate procedure known as a series,[13] and it can be expressed through capital sigma notation ∑, which compactly denotes iteration of the operation of addition based on the given indexes.[14] For example, ∑_{k=1}^{5} k² = 1² + 2² + 3² + 4² + 5² = 55.
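The sigma-notation example above can be checked directly, since summation is just iterated addition:

```python
# Capital-sigma notation as iterated addition: the sum of k^2 for k = 1..5.
total = sum(k ** 2 for k in range(1, 6))
print(total)  # 55, matching 1 + 4 + 9 + 16 + 25
```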
Addition is used to model many physical processes. Even for the simple case of addingnatural numbers, there are many possible interpretations and even more visual representations.
Possibly the most basic interpretation of addition lies in combining sets, that is:[2]
When two or more disjoint collections are combined into a single collection, the number of objects in the single collection is the sum of the numbers of objects in the original collections.
This interpretation is easy to visualize, with little danger of ambiguity. It is also useful in higher mathematics (for the rigorous definition it inspires, see § Natural numbers below). However, it is not obvious how one should extend this version of addition to include fractional or negative numbers.[15]
One possible fix is to consider collections of objects that can be easily divided, such as pies or, still better, segmented rods. Rather than solely combining collections of segments, rods can be joined end-to-end, which illustrates another conception of addition: adding not the rods but the lengths of the rods.[16]
A second interpretation of addition comes from extending an initial length by a given length:[17]
When an original length is extended by a given amount, the final length is the sum of the original length and the length of the extension.
The sum a + b can be interpreted as a binary operation that combines a and b algebraically, or it can be interpreted as the addition of b more units to a. Under the latter interpretation, the parts of a sum a + b play asymmetric roles, and the operation a + b is viewed as applying the unary operation +b to a.[18] Instead of calling both a and b addends, it is more appropriate to call a the "augend" in this case, since a plays a passive role. The unary view is also useful when discussing subtraction, because each unary addition operation has an inverse unary subtraction operation, and vice versa.
Addition is commutative, meaning that one can change the order of the terms in a sum but still get the same result. Symbolically, if a and b are any two numbers, then:[19] a + b = b + a. The fact that addition is commutative is known as the "commutative law of addition"[20] or "commutative property of addition".[21] Some other binary operations, such as multiplication, are commutative as well,[22] but others, such as subtraction and division, are not.[23]
Addition is associative, which means that when three or more numbers are added together, the order of operations does not change the result. For any three numbers a, b, and c, it is true that:[24] (a + b) + c = a + (b + c). For example, (1 + 2) + 3 = 1 + (2 + 3).
When addition is used together with other operations, the order of operations becomes important. In the standard order of operations, addition has lower priority than exponentiation, nth roots, multiplication, and division, but equal priority to subtraction.[25]
Adding zero to any number does not change the number. In other words, zero is the identity element for addition, and is also known as the additive identity. In symbols, for every a, one has:[24] a + 0 = 0 + a = a. This law was first identified in Brahmagupta's Brahmasphutasiddhanta in 628 AD, although he wrote it as three separate laws, depending on whether a is negative, positive, or zero itself, and he used words rather than algebraic symbols. Later Indian mathematicians refined the concept; around the year 830, Mahavira wrote, "zero becomes the same as what is added to it", corresponding to the unary statement 0 + a = a. In the 12th century, Bhaskara wrote, "In the addition of cipher, or subtraction of it, the quantity, positive or negative, remains the same", corresponding to the unary statement a + 0 = a.[26]
Within the context of integers, addition of one also plays a special role: for any integer a, the integer a + 1 is the least integer greater than a, also known as the successor of a. For instance, 3 is the successor of 2, and 7 is the successor of 6. Because of this succession, the value of a + b can also be seen as the b-th successor of a, making addition an iterated succession. For example, 6 + 2 is 8, because 8 is the successor of 7, which is the successor of 6, making 8 the second successor of 6.[27]
To numerically add physical quantities withunits, they must be expressed with common units.[28]For example, adding 50 milliliters to 150 milliliters gives 200 milliliters. However, if a measure of 5 feet is extended by 2 inches, the sum is 62 inches, since 60 inches is synonymous with 5 feet. On the other hand, it is usually meaningless to try to add 3 meters and 4 square meters, since those units are incomparable; this sort of consideration is fundamental indimensional analysis.[29]
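The common-units rule can be sketched in code, mirroring the 5-feet-plus-2-inches example in the text (the constant name is illustrative):

```python
# Quantities must share a unit before their magnitudes can be added:
# convert 5 feet to inches, then add the 2-inch extension.
INCHES_PER_FOOT = 12
total_inches = 5 * INCHES_PER_FOOT + 2
print(total_inches)  # 62
```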
Studies on mathematical development starting around the 1980s have exploited the phenomenon of habituation: infants look longer at situations that are unexpected.[30] A seminal experiment by Karen Wynn in 1992 involving Mickey Mouse dolls manipulated behind a screen demonstrated that five-month-old infants expect 1 + 1 to be 2, and they are comparatively surprised when a physical situation seems to imply that 1 + 1 is either 1 or 3. This finding has since been affirmed by a variety of laboratories using different methodologies.[31] Another 1992 experiment with older toddlers, between 18 and 35 months, exploited their development of motor control by allowing them to retrieve ping-pong balls from a box; the youngest responded well for small numbers, while older subjects were able to compute sums up to 5.[32]
Even some nonhuman animals show a limited ability to add, particularlyprimates. In a 1995 experiment imitating Wynn's 1992 result (but usingeggplantsinstead of dolls),rhesus macaqueandcottontop tamarinmonkeys performed similarly to human infants. More dramatically, after being taught the meanings of theArabic numerals0 through 4, onechimpanzeewas able to compute the sum of two numerals without further training.[33]More recently,Asian elephantshave demonstrated an ability to perform basic arithmetic.[34]
Typically, children first mastercounting. When given a problem that requires that two items and three items be combined, young children model the situation with physical objects, often fingers or a drawing, and then count the total. As they gain experience, they learn or discover the strategy of "counting-on": asked to find two plus three, children count three past two, saying "three, four,five" (usually ticking off fingers), and arriving at five. This strategy seems almost universal; children can easily pick it up from peers or teachers.[35]Most discover it independently. With additional experience, children learn to add more quickly by exploiting the commutativity of addition by counting up from the larger number, in this case, starting with three and counting "four,five." Eventually children begin to recall certain addition facts ("number bonds"), either through experience or rote memorization. Once some facts are committed to memory, children begin to derive unknown facts from known ones. For example, a child asked to add six and seven may know that6 + 6 = 12and then reason that6 + 7is one more, or 13.[36]Such derived facts can be found very quickly and most elementary school students eventually rely on a mixture of memorized and derived facts to add fluently.[37]
Different nations introduce whole numbers and arithmetic at different ages, with many countries teaching addition in pre-school.[38]However, throughout the world, addition is taught by the end of the first year of elementary school.[39]
The prerequisite to addition in thedecimalsystem is the fluent recall or derivation of the 100 single-digit "addition facts". One couldmemorizeall the facts byrote, but pattern-based strategies are more enlightening and, for most people, more efficient:[40]
As students grow older, they commit more facts to memory and learn to derive other facts rapidly and fluently. Many students never commit all the facts to memory, but can still find any basic fact quickly.[37]
The standard algorithm for adding multidigit numbers is to align the addends vertically and add the columns, starting from the ones column on the right. If a column's sum exceeds nine, the extra digit is "carried" into the next column. For example, in the addition 59 + 27, the ones column gives 9 + 7 = 16: the 6 is written and the digit 1 is the carry.[b] An alternate strategy starts adding from the most significant digit on the left; this route makes carrying a little clumsier, but it is faster at getting a rough estimate of the sum. There are many alternative methods.
Decimal fractionscan be added by a simple modification of the above process.[41]One aligns two decimal fractions above each other, with the decimal point in the same location. If necessary, one can add trailing zeros to a shorter decimal to make it the same length as the longer decimal. Finally, one performs the same addition process as above, except the decimal point is placed in the answer, exactly where it was placed in the summands.
As an example, 45.1 + 4.34 can be solved as follows: align the decimal points, pad 45.1 to 45.10, and add column by column to obtain 49.44.
In scientific notation, numbers are written in the form x = a × 10^b, where a is the significand and 10^b is the exponential part. Addition requires two numbers in scientific notation to be represented using the same exponential part, so that the two significands can simply be added.
For example: 2.34 × 10² + 5.67 × 10¹ = 2.34 × 10² + 0.567 × 10² = 2.907 × 10².
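A minimal sketch of this alignment step (the function name and the tuple return convention are illustrative, not standard):

```python
# Scientific-notation addition: rescale the significand with the smaller
# exponent so both terms share the larger exponent, then add significands.
def sci_add(a_sig, a_exp, b_sig, b_exp):
    """Add a_sig*10^a_exp + b_sig*10^b_exp, returning (significand, exponent)."""
    exp = max(a_exp, b_exp)
    sig = a_sig * 10.0 ** (a_exp - exp) + b_sig * 10.0 ** (b_exp - exp)
    return sig, exp

print(sci_add(2.34, 2, 5.67, 1))  # approximately (2.907, 2)
```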
Addition in other bases is very similar to decimal addition. As an example, one can consider addition in binary.[42] Adding two single-digit binary numbers is relatively simple, using a form of carrying: adding two "1" digits produces a digit "0", while 1 must be added to the next column. This is similar to what happens in decimal when certain single-digit numbers are added together; if the result equals or exceeds the value of the radix (10), the digit to the left is incremented.
This is known as carrying.[43] When the result of an addition exceeds the value of a digit, the procedure is to "carry" the excess amount divided by the radix (that is, 10/10) to the left, adding it to the next positional value. This is correct since the next position has a weight that is higher by a factor equal to the radix. Carrying works the same way in binary:
In this example, two numerals are being added together: 011012(1310) and 101112(2310). The top row shows the carry bits used. Starting in the rightmost column,1 + 1 = 102. The 1 is carried to the left, and the 0 is written at the bottom of the rightmost column. The second column from the right is added:1 + 0 + 1 = 102again; the 1 is carried, and 0 is written at the bottom. The third column:1 + 1 + 1 = 112. This time, a 1 is carried, and a 1 is written in the bottom row. Proceeding like this gives the final answer 1001002(3610).
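The worked binary example above can be verified directly with Python's binary literals:

```python
# The binary worked example: 01101 (13) + 10111 (23) = 100100 (36).
a, b = 0b01101, 0b10111
total = a + b
print(bin(total))  # 0b100100
```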
Analog computerswork directly with physical quantities, so their addition mechanisms depend on the form of the addends. A mechanical adder might represent two addends as the positions of sliding blocks, in which case they can be added with anaveraginglever. If the addends are the rotation speeds of twoshafts, they can be added with adifferential. A hydraulic adder can add thepressuresin two chambers by exploitingNewton's second lawto balance forces on an assembly ofpistons. The most common situation for a general-purpose analog computer is to add twovoltages(referenced toground); this can be accomplished roughly with aresistornetwork, but a better design exploits anoperational amplifier.[44]
Addition is also fundamental to the operation ofdigital computers, where the efficiency of addition, in particular thecarrymechanism, is an important limitation to overall performance.[45]
Theabacus, also called a counting frame, is a calculating tool that was in use centuries before the adoption of the written modern numeral system and is still widely used by merchants, traders and clerks inAsia,Africa, and elsewhere; it dates back to at least 2700–2300 BC, when it was used inSumer.[46]
Blaise Pascal invented the mechanical calculator in 1642;[47] it was the first operational adding machine. It made use of a gravity-assisted carry mechanism. It was the only operational mechanical calculator in the 17th century[48] and the earliest automatic digital computer. Pascal's calculator was limited by its carry mechanism, which forced its wheels to turn only one way, so that it could only add. To subtract, the operator had to use the Pascal's calculator's complement, which required as many steps as an addition. Giovanni Poleni followed Pascal, building the second functional mechanical calculator in 1709, a calculating clock made of wood that, once set up, could multiply two numbers automatically.
Addersexecute integer addition in electronic digital computers, usually usingbinary arithmetic. The simplest architecture is the ripple carry adder, which follows the standard multi-digit algorithm. One slight improvement is thecarry skipdesign, again following human intuition; one does not perform all the carries in computing999 + 1, but one bypasses the group of 9s and skips to the answer.[49]
In practice, computational addition may be achieved via XOR and AND bitwise logical operations in conjunction with bitshift operations. Both XOR and AND gates are straightforward to realize in digital logic, allowing the realization of full adder circuits, which in turn may be combined into more complex logical operations. In modern digital computers, integer addition is typically the fastest arithmetic instruction, yet it has the largest impact on performance, since it underlies all floating-point operations as well as such basic tasks as address generation during memory access and fetching instructions during branching. To increase speed, modern designs calculate digits in parallel; these schemes go by such names as carry select, carry lookahead, and the Ling pseudocarry. Many implementations are, in fact, hybrids of these last three designs.[50]

Unlike addition on paper, addition on a computer often changes the addends. Both addends are destroyed on the ancient abacus and adding board, leaving only the sum. The influence of the abacus on mathematical thinking was strong enough that early Latin texts often claimed that in the process of adding "a number to a number", both numbers vanish.[51] In modern times, the ADD instruction of a microprocessor often replaces the augend with the sum but preserves the addend.[52] In a high-level programming language, evaluating a + b does not change either a or b; if the goal is to replace a with the sum, this must be explicitly requested, typically with the statement a = a + b. Some languages like C or C++ allow this to be abbreviated as a += b.
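The XOR/AND scheme described above can be sketched as follows; this is an illustrative software version for non-negative integers, not the circuit-level implementation:

```python
# Addition via XOR, AND, and shifts: XOR gives the digit-wise sum without
# carries, AND picks out the positions that generate a carry, and a left
# shift moves those carries one column over; repeat until no carry remains.
def bitwise_add(a, b):
    """Add two non-negative integers using only XOR, AND, and shifts."""
    while b:
        carry = a & b       # columns where both bits are 1 generate a carry
        a = a ^ b           # per-column sum, ignoring carries
        b = carry << 1      # carries move into the next column
    return a

print(bitwise_add(59, 27))  # 86
```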
On a computer, if the result of an addition is too large to store, anarithmetic overflowoccurs, resulting in an incorrect answer. Unanticipated arithmetic overflow is a fairly common cause ofprogram errors. Such overflow bugs may be hard to discover and diagnose because they may manifest themselves only for very large input data sets, which are less likely to be used in validation tests.[53]TheYear 2000 problemwas a series of bugs where overflow errors occurred due to the use of a 2-digit format for years.[54]
Computers have another way of representing numbers, calledfloating-point arithmetic, which is similar to scientific notation described above and which reduces the overflow problem. Each floating point number has two parts, an exponent and a mantissa. To add two floating-point numbers, the exponents must match, which typically means shifting the mantissa of the smaller number. If the disparity between the larger and smaller numbers is too great, a loss of precision may result. If many smaller numbers are to be added to a large number, it is best to add the smaller numbers together first and then add the total to the larger number, rather than adding small numbers to the large number one at a time. This makes floating point addition non-associative in general. Seefloating-point arithmetic#Accuracy problems.
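The ordering effect described above is easy to demonstrate; the particular magnitudes below are chosen purely for illustration:

```python
# Adding many small numbers to a large one, one at a time, loses precision
# that summing the small numbers first preserves: each individual 1.0 is
# below the spacing between adjacent doubles near 1e16 and rounds away.
big = 1.0e16
small = [1.0] * 1000

one_at_a_time = big
for s in small:
    one_at_a_time += s      # each step rounds back to big

small_first = big + sum(small)  # 1000.0 is large enough to register
print(one_at_a_time, small_first)  # the two orders disagree
```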
To prove the usual properties of addition, one must first define addition for the context in question. Addition is first defined on the natural numbers. In set theory, addition is then extended to progressively larger sets that include the natural numbers: the integers, the rational numbers, and the real numbers.[55] In mathematics education,[56] positive fractions are added before negative numbers are even considered; this is also the historical route.[57]
There are two popular ways to define the sum of two natural numbers a and b. If one defines natural numbers to be the cardinalities of finite sets (the cardinality of a set is the number of elements in the set), then it is appropriate to define their sum as follows:[58]
Let N(S) be the cardinality of a set S. Take two disjoint sets A and B, with N(A) = a and N(B) = b. Then a + b is defined as N(A ∪ B).
Here A ∪ B means the union of A and B. An alternate version of this definition allows A and B to possibly overlap and then takes their disjoint union, a mechanism that allows common elements to be separated out and therefore counted twice.
The other popular definition is recursive:[59]
Let n⁺ be the successor of n, that is, the number following n in the natural numbers, so 0⁺ = 1, 1⁺ = 2. Define a + 0 = a. Define the general sum recursively by a + b⁺ = (a + b)⁺. Hence 1 + 1 = 1 + 0⁺ = (1 + 0)⁺ = 1⁺ = 2.
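The recursive definition can be transcribed almost verbatim, using b − 1 to recover the number whose successor is b:

```python
# Peano-style addition by recursion on the second argument:
# a + 0 = a, and a + succ(b) = succ(a + b), with +1 playing the role of succ.
def rec_add(a, b):
    """Add natural numbers using only the successor operation and recursion."""
    if b == 0:
        return a                    # base case: a + 0 = a
    return rec_add(a, b - 1) + 1    # a + b = (a + (b - 1))⁺

print(rec_add(1, 1))  # 2, tracing 1 + 1 = (1 + 0)⁺ = 1⁺
```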
Again, there are minor variations upon this definition in the literature. Taken literally, the above definition is an application of the recursion theorem on the partially ordered set ℕ².[60] On the other hand, some sources prefer to use a restricted recursion theorem that applies only to the set of natural numbers. One then considers a to be temporarily "fixed", applies recursion on b to define a function "a +", and pastes these unary operations for all a together to form the full binary operation.[61]
This recursive formulation of addition was developed by Dedekind as early as 1854, and he would expand upon it in the following decades. He proved the associative and commutative properties, among others, through mathematical induction.[62]
The simplest conception of an integer is that it consists of an absolute value (which is a natural number) and a sign (generally either positive or negative). The integer zero is a special third case, being neither positive nor negative. The corresponding definition of addition must proceed by cases:[63]
For an integer n, let |n| be its absolute value. Let a and b be integers. If either a or b is zero, treat it as an additive identity. If a and b are both positive, define a + b = |a| + |b|. If a and b are both negative, define a + b = −(|a| + |b|). If a and b have different signs, define a + b to be the difference between |a| and |b|, with the sign of the term whose absolute value is larger.
As an example, −6 + 4 = −2; because −6 and 4 have different signs, their absolute values are subtracted, and since the absolute value of the negative term is larger, the answer is negative.
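The case analysis can be sketched as follows; this is an illustrative toy (the function name is ours), with built-in arithmetic on absolute values playing the role of natural-number addition and subtraction:

```python
def add_integers(a, b):
    # Case definition of integer addition.
    if a == 0:          # zero acts as the additive identity
        return b
    if b == 0:
        return a
    if a > 0 and b > 0:             # both positive: add absolute values
        return abs(a) + abs(b)
    if a < 0 and b < 0:             # both negative: add and negate
        return -(abs(a) + abs(b))
    # Different signs: subtract the smaller absolute value from the
    # larger, and take the sign of the term with larger absolute value.
    big, small = (a, b) if abs(a) >= abs(b) else (b, a)
    diff = abs(big) - abs(small)
    return diff if big > 0 else -diff
```

For example, add_integers(-6, 4) follows the mixed-sign branch and returns -2, as in the worked example above.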
Although this definition can be useful for concrete problems, the number of cases to consider complicates proofs unnecessarily. So the following method is commonly used for defining the integers. It is based on the remark that every integer is the difference of two natural numbers, and that two such differences, a − b and c − d, are equal if and only if a + d = b + c. So one can formally define the integers as the equivalence classes of ordered pairs of natural numbers under the equivalence relation (a, b) ∼ (c, d) if and only if a + d = b + c.[64] The equivalence class of (a, b) contains either (a − b, 0) if a ≥ b, or (0, b − a) otherwise. Given a natural number n, one can denote by +n the equivalence class of (n, 0), and by −n the equivalence class of (0, n). This allows identifying the natural number n with the equivalence class +n.
The addition of ordered pairs is done component-wise:[65]
(a, b) + (c, d) = (a + c, b + d).
A straightforward computation shows that the equivalence class of the result depends only on the equivalence classes of the summands, and thus that this defines an addition of equivalence classes, that is, integers.[66] Another straightforward computation shows that this addition is the same as the above case definition.
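A small sketch of this construction (function names ours): each integer is represented by a pair of naturals, pairs are added component-wise, and the result is then normalized to the canonical representative of its equivalence class:

```python
def normalize(p):
    # Reduce (a, b) to the canonical representative of its class:
    # (a - b, 0) if a >= b, else (0, b - a).
    a, b = p
    return (a - b, 0) if a >= b else (0, b - a)

def add_pairs(p, q):
    # Component-wise addition of representatives, then normalization.
    return normalize((p[0] + q[0], p[1] + q[1]))
```

Here (2, 5) represents −3 and (7, 1) represents +6; their sum normalizes to (3, 0), i.e. +3.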
This way of defining integers as equivalence classes of pairs of natural numbers can be used to embed into a group any commutative semigroup with the cancellation property. Here, the semigroup is formed by the natural numbers, and the group is the additive group of integers. The rational numbers are constructed similarly, by taking as the semigroup the nonzero integers with multiplication.
This construction has also been generalized under the name of the Grothendieck group to the case of any commutative semigroup. Without the cancellation property, the semigroup homomorphism from the semigroup into the group may be non-injective. Originally, the Grothendieck group was the result of this construction applied to the equivalence classes under isomorphism of the objects of an abelian category, with the direct sum as the semigroup operation.
Addition of rational numbers involves fractions. The computation can be done by using the least common denominator, but a conceptually simpler definition involves only integer addition and multiplication:
a/b + c/d = (ad + bc)/(bd).
As an example, 3/4 + 1/8 = (3×8 + 4×1)/(4×8) = (24 + 4)/32 = 28/32 = 7/8.
Addition of fractions is much simpler when the denominators are the same; in this case, one can simply add the numerators while leaving the denominator the same: a/c + b/c = (a + b)/c, so 1/4 + 2/4 = (1 + 2)/4 = 3/4.[67]
The commutativity and associativity of rational addition are easy consequences of the laws of integer arithmetic.[68]
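The cross-multiplication formula can be checked directly against Python's exact rational type; the helper name is ours, and the final reduction by the gcd is cosmetic:

```python
from fractions import Fraction
from math import gcd

def add_rationals(a, b, c, d):
    # a/b + c/d = (a*d + b*c) / (b*d), then reduce by the gcd.
    num, den = a * d + b * c, b * d
    g = gcd(num, den)
    return num // g, den // g
```

add_rationals(3, 4, 1, 8) returns (7, 8), agreeing with Fraction(3, 4) + Fraction(1, 8).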
A common construction of the set of real numbers is the Dedekind completion of the set of rational numbers. A real number is defined to be a Dedekind cut of rationals: a non-empty set of rationals that is closed downward and has no greatest element. The sum of real numbers a and b is defined element by element:[69]
a + b = {q + r : q ∈ a, r ∈ b}.
This definition was first published, in a slightly modified form, by Richard Dedekind in 1872.[70] The commutativity and associativity of real addition are immediate; defining the real number 0 as the set of negative rationals, it is easily seen to be the additive identity. Probably the trickiest part of this construction pertaining to addition is the definition of additive inverses.[71]
Unfortunately, dealing with the multiplication of Dedekind cuts is a time-consuming case-by-case process similar to the addition of signed integers.[72] Another approach is the metric completion of the rational numbers. A real number is essentially defined to be the limit of a Cauchy sequence of rationals, lim aₙ. Addition is defined term by term:[73]
lim aₙ + lim bₙ = lim (aₙ + bₙ).
This definition was first published by Georg Cantor, also in 1872, although his formalism was slightly different.[74] One must prove that this operation is well-defined, dealing with co-Cauchy sequences. Once that task is done, all the properties of real addition follow immediately from the properties of rational numbers. Furthermore, the other arithmetic operations, including multiplication, have straightforward, analogous definitions.[75]
Complex numbers are added by adding the real and imaginary parts of the summands.[76][77] That is to say:
(a + bi) + (c + di) = (a + c) + (b + d)i.
Using the visualization of complex numbers in the complex plane, the addition has the following geometric interpretation: the sum of two complex numbers A and B, interpreted as points of the complex plane, is the point X obtained by building a parallelogram three of whose vertices are O, A, and B. Equivalently, X is the point such that the triangles with vertices O, A, B and X, B, A are congruent.
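Component-wise complex addition is a two-line sketch (the function name is ours; Python's built-in complex type already implements the same rule):

```python
def add_complex(a, b):
    # Add real parts and imaginary parts separately; geometrically, the
    # result is the fourth vertex of the parallelogram built on 0, a, b.
    return complex(a.real + b.real, a.imag + b.imag)
```

For example, add_complex(1 + 2j, 3 - 1j) gives 4 + 1j, the same as Python's own (1 + 2j) + (3 - 1j).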
There are many binary operations that can be viewed as generalizations of the addition operation on the real numbers. The field of abstract algebra is centrally concerned with such generalized operations, and they also appear in set theory and category theory.
In linear algebra, a vector space is an algebraic structure that allows for adding any two vectors and for scaling vectors. A familiar vector space is the set of all ordered pairs of real numbers; the ordered pair (a, b) is interpreted as a vector from the origin in the Euclidean plane to the point (a, b) in the plane. The sum of two vectors is obtained by adding their individual coordinates:
(a, b) + (c, d) = (a + c, b + d).
This addition operation is central to classical mechanics, in which velocities, accelerations, and forces are all represented by vectors.[78]
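Coordinate-wise vector addition, sketched for vectors of any dimension (function name ours):

```python
def add_vectors(u, v):
    # The sum of two vectors of equal dimension, coordinate by coordinate.
    if len(u) != len(v):
        raise ValueError("vectors must have the same dimension")
    return tuple(x + y for x, y in zip(u, v))
```

add_vectors((1, 2), (3, 4)) returns (4, 6), the diagonal of the parallelogram spanned by the two vectors.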
Matrix addition is defined for two matrices of the same dimensions. The sum of two m × n (pronounced "m by n") matrices A and B, denoted by A + B, is again an m × n matrix computed by adding corresponding elements:[79][80]
(A + B)ᵢⱼ = aᵢⱼ + bᵢⱼ  for 1 ≤ i ≤ m and 1 ≤ j ≤ n.
For example:
[1 3; 1 0] + [0 0; 7 5] = [1+0 3+0; 1+7 0+5] = [1 3; 8 5].
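Entry-wise matrix addition, sketched with plain nested lists (function name ours):

```python
def add_matrices(A, B):
    # Both matrices must be m x n; each entry of the sum is the sum of
    # the corresponding entries of A and B.
    if len(A) != len(B) or any(len(r) != len(s) for r, s in zip(A, B)):
        raise ValueError("matrices must have the same dimensions")
    return [[x + y for x, y in zip(r, s)] for r, s in zip(A, B)]
```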
In modular arithmetic, the set of available numbers is restricted to a finite subset of the integers, and addition "wraps around" when reaching a certain value, called the modulus. For example, the set of integers modulo 12 has twelve elements; it inherits an addition operation from the integers that is central to musical set theory. The set of integers modulo 2 has just two elements; the addition operation it inherits is known in Boolean logic as the "exclusive or" function. A similar "wrap around" operation arises in geometry, where the sum of two angle measures is often taken to be their sum as real numbers modulo 2π. This amounts to an addition operation on the circle, which in turn generalizes to addition operations on many-dimensional tori.
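Wrap-around addition is one line per modulus (function name ours); with modulus 2 it coincides with exclusive or:

```python
def add_mod(a, b, modulus=12):
    # Addition that wraps around at the modulus, as on a 12-hour clock.
    return (a + b) % modulus
```

add_mod(9, 6) gives 3, and add_mod(1, 1, modulus=2) gives 0, matching 1 XOR 1.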
The general theory of abstract algebra allows an "addition" operation to be any associative and commutative operation on a set. Basic algebraic structures with such an addition operation include commutative monoids and abelian groups.
Linear combinations combine multiplication and summation; they are sums in which each term has a multiplier, usually a real or complex number. Linear combinations are especially useful in contexts where straightforward addition would violate some normalization rule, such as mixing of strategies in game theory or superposition of states in quantum mechanics.[81]
A far-reaching generalization of the addition of natural numbers is the addition of ordinal numbers and cardinal numbers in set theory. These give two different generalizations of the addition of natural numbers to the transfinite. Unlike most addition operations, the addition of ordinal numbers is not commutative.[82] Addition of cardinal numbers, however, is a commutative operation closely related to the disjoint union operation.
In category theory, disjoint union is seen as a particular case of the coproduct operation,[83] and general coproducts are perhaps the most abstract of all the generalizations of addition. Some coproducts, such as the direct sum and the wedge sum, are named to evoke their connection with addition.
Addition, along with subtraction, multiplication, and division, is considered one of the basic operations and is used in elementary arithmetic.
Subtraction can be thought of as a kind of addition, namely the addition of an additive inverse. Subtraction is itself a sort of inverse to addition, in that adding x and subtracting x are inverse functions.[84] Given a set with an addition operation, one cannot always define a corresponding subtraction operation on that set; the set of natural numbers is a simple example. On the other hand, a subtraction operation uniquely determines an addition operation, an additive inverse operation, and an additive identity; for this reason, an additive group can be described as a set that is closed under subtraction.[85]
Multiplication can be thought of as repeated addition: if a single term x appears in a sum n times, then the sum is the product of n and x. Nonetheless, this view works only for natural numbers.[86] In general, multiplication is the operation that combines two numbers, called the multiplier and the multiplicand, into a single number called the product.
In the real and complex numbers, addition and multiplication can be interchanged by the exponential function:[87]
e^(a+b) = e^a e^b.
This identity allows multiplication to be carried out by consulting a table of logarithms and computing addition by hand; it also enables multiplication on a slide rule. The formula is still a good first-order approximation in the broad context of Lie groups, where it relates multiplication of infinitesimal group elements with addition of vectors in the associated Lie algebra.[88]
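The identity underlying log tables and slide rules can be demonstrated numerically (function name ours); floating point makes the result approximate rather than exact:

```python
import math

def multiply_via_logs(a, b):
    # For positive a, b:  e^(log a + log b) = a * b.
    # Multiplication reduced to one addition plus two "table lookups".
    return math.exp(math.log(a) + math.log(b))
```

multiply_via_logs(6.0, 7.0) returns 42.0 up to rounding error on the order of 1e-13.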
There are even more generalizations of multiplication than of addition.[89] In general, multiplication operations always distribute over addition; this requirement is formalized in the definition of a ring. In some contexts, such as the integers, distributivity over addition and the existence of a multiplicative identity are enough to determine the multiplication operation uniquely. The distributive property also provides information about the addition operation: by expanding the product (1 + 1)(a + b) in both ways, one concludes that addition is forced to be commutative. For this reason, ring addition is commutative in general.[90]
Division is an arithmetic operation remotely related to addition. Since a/b = ab⁻¹, division is right distributive over addition: (a + b)/c = a/c + b/c.[91] However, division is not left distributive over addition: 1/(2 + 2) is not the same as 1/2 + 1/2.
The maximum operation max(a, b) is a binary operation similar to addition. In fact, if two nonnegative numbers a and b are of different orders of magnitude, their sum is approximately equal to their maximum. This approximation is extremely useful in the applications of mathematics, for example, in truncating Taylor series. However, it presents a perpetual difficulty in numerical analysis, essentially since "max" is not invertible. If b is much greater than a, then a straightforward calculation of (a + b) − b can accumulate an unacceptable round-off error, perhaps even returning zero. See also Loss of significance.
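The loss of significance described above is easy to reproduce in double-precision floating point:

```python
# When b is many orders of magnitude larger than a, the sum a + b rounds
# to b exactly, so (a + b) - b loses a entirely and returns zero.
a, b = 1.0, 1e20
result = (a + b) - b
print(result)  # prints 0.0, not 1.0
```

Here 1e20 has a unit in the last place of about 16384 in IEEE double precision, so adding 1.0 changes nothing.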
The approximation becomes exact in a kind of infinite limit; if either a or b is an infinite cardinal number, their cardinal sum is exactly equal to the greater of the two.[93] Accordingly, there is no subtraction operation for infinite cardinals.[94]
Maximization is commutative and associative, like addition. Furthermore, since addition preserves the ordering of real numbers, addition distributes over "max" in the same way that multiplication distributes over addition:
a + max(b, c) = max(a + b, a + c).
For these reasons, in tropical geometry one replaces multiplication with addition and addition with maximization. In this context, addition is called "tropical multiplication", maximization is called "tropical addition", and the tropical "additive identity" is negative infinity.[95] Some authors prefer to replace addition with minimization; then the additive identity is positive infinity.[96]
Tying these observations together, tropical addition is approximately related to regular addition through the logarithm:
log(a + b) ≈ max(log a, log b),
which becomes more accurate as the base of the logarithm increases.[97] The approximation can be made exact by extracting a constant h, named by analogy with the Planck constant from quantum mechanics,[98] and taking the "classical limit" as h tends to zero:
max(a, b) = lim_{h→0} h log(e^(a/h) + e^(b/h)).
In this sense, the maximum operation is a dequantized version of addition.[99]
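The classical limit can be checked numerically; the helper below (name ours) factors out max(a, b) before exponentiating so the computation does not overflow for small h:

```python
import math

def soft_max(a, b, h):
    # h * log(e^(a/h) + e^(b/h)), rewritten around m = max(a, b) so that
    # both exponents stay non-positive; tends to max(a, b) as h -> 0+.
    m = max(a, b)
    return m + h * math.log(math.exp((a - m) / h) + math.exp((b - m) / h))
```

soft_max(1.0, 2.0, 0.5) is noticeably above 2, while soft_max(1.0, 2.0, 0.01) is already indistinguishable from max(1.0, 2.0) at double precision.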
Convolution is used to add two independent random variables defined by distribution functions. Its usual definition combines integration, subtraction, and multiplication.[100] In general, convolution is useful as a kind of domain-side addition; by contrast, vector addition is a kind of range-side addition.
https://en.wikipedia.org/wiki/Addition
In coding theory, burst error-correcting codes employ methods of correcting burst errors, which are errors that occur in many consecutive bits rather than occurring in bits independently of each other.
Many codes have been designed to correct random errors. Sometimes, however, channels may introduce errors which are localized in a short interval. Such errors occur in a burst (called burst errors) because they occur in many consecutive bits. Examples of burst errors can be found extensively in storage media. These errors may be due to physical damage, such as a scratch on a disc, or to a stroke of lightning in the case of wireless channels. They are not independent; they tend to be spatially concentrated. If one bit has an error, it is likely that the adjacent bits are also corrupted. Methods designed to correct random errors are inefficient at correcting burst errors.
A burst of length ℓ[1]
Say a codeword C is transmitted, and it is received as Y = C + E. Then the error vector E is called a burst of length ℓ if the nonzero components of E are confined to ℓ consecutive components. For example, E = (0 1000011 0) is a burst of length ℓ = 7.
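For a (non-cyclic) burst, the length is just the span from the first to the last nonzero component. A small helper (name ours):

```python
def burst_length(e):
    # Length of the shortest contiguous window containing every nonzero
    # component of the error vector; 0 for the all-zero vector.
    nonzero = [i for i, x in enumerate(e) if x != 0]
    if not nonzero:
        return 0
    return nonzero[-1] - nonzero[0] + 1
```

burst_length([0, 1, 0, 0, 0, 0, 1, 1, 0]) returns 7, matching the example above.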
Although this definition is sufficient to describe what a burst error is, the majority of the tools developed for burst error correction rely on cyclic codes. This motivates our next definition.
A cyclic burst of length ℓ[1]
An error vector E is called a cyclic burst error of length ℓ if its nonzero components are confined to ℓ cyclically consecutive components. For example, the previously considered error vector E = (010000110) is a cyclic burst of length ℓ = 5, since we consider the error as starting at position 6 and ending at position 1. Notice that the indices are 0-based; that is, the first element is at position 0.
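The cyclic burst length equals the vector length minus the longest cyclic run of zeros, since the nonzero components fit in the cyclic window complementary to that run. A sketch (function name ours):

```python
def cyclic_burst_length(e):
    # n minus the longest cyclic run of zeros; scanning the doubled
    # vector captures zero runs that wrap around the end.
    n = len(e)
    if all(x == 0 for x in e):
        return 0
    longest = run = 0
    for x in e + e:
        run = run + 1 if x == 0 else 0
        longest = max(longest, min(run, n))  # a run cannot exceed n
    return n - longest
```

cyclic_burst_length([0, 1, 0, 0, 0, 0, 1, 1, 0]) returns 5, matching the example: the longest cyclic zero run has length 4.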
For the remainder of this article, we will use the term burst to refer to a cyclic burst, unless noted otherwise.
It is often useful to have a compact definition of a burst error that encompasses not only its length, but also its pattern and location. We define a burst description to be a tuple (P, L), where P is the pattern of the error (that is, the string of symbols beginning with the first nonzero entry in the error pattern and ending with the last nonzero symbol), and L is the location, in the codeword, where the burst can be found.[1]
For example, the burst description of the error pattern E = (010000110) is D = (1000011, 1). Notice that such a description is not unique, because D′ = (11001, 6) describes the same cyclic burst. In general, if the number of nonzero components in E is w, then E will have w different burst descriptions, each starting at a different nonzero entry of E. The theorem below remedies the issues that arise from this ambiguity of burst descriptions; before stating it, however, we need a definition.
Definition. The number of symbols in a given error pattern y is denoted by length(y).
Theorem (Uniqueness of burst descriptions) — Suppose E is an error vector of length n with two burst descriptions (P₁, L₁) and (P₂, L₂). If length(P₁) + length(P₂) ≤ n + 1, then the two descriptions are identical; that is, their components are equivalent.[2]
Let w be the Hamming weight (that is, the number of nonzero entries) of E. Then E has exactly w burst descriptions. For w = 0, 1 there is nothing to prove, so assume w ≥ 2 and that the descriptions are not identical. Each nonzero entry of E appears in the pattern, so the components of E not included in the pattern form a cyclic run of zeros, beginning after the last nonzero entry of the pattern and continuing to just before its first nonzero entry. We call the set of indices corresponding to this run the zero run of the description. Each burst description has a zero run associated with it, and the zero runs are pairwise disjoint. Since the w zero runs are disjoint sets of zero positions, they contain at most n − w elements in total, the number of zeros in E. On the other hand:
n − w = number of zeros in E
      ≥ (n − length(P₁)) + (n − length(P₂))
      = 2n − (length(P₁) + length(P₂))
      ≥ 2n − (n + 1)    (since length(P₁) + length(P₂) ≤ n + 1)
      = n − 1.
This contradicts w ≥ 2. Thus, the burst descriptions are identical.
A corollary of the above theorem is that we cannot have two distinct burst descriptions for bursts of length ≤ ½(n + 1).
Cyclic codes are defined as follows: think of the q symbols as elements in F_q. Now we can think of words as polynomials over F_q, where the individual symbols of a word correspond to the different coefficients of the polynomial. To define a cyclic code, we pick a fixed polynomial, called the generator polynomial. The codewords of this cyclic code are all the polynomials that are divisible by this generator polynomial.
Codewords are polynomials of degree ≤ n − 1. Suppose that the generator polynomial g(x) has degree r. Polynomials of degree ≤ n − 1 that are divisible by g(x) result from multiplying g(x) by polynomials of degree ≤ n − 1 − r. There are q^(n−r) such polynomials, and each one corresponds to a codeword. Therefore, k = n − r for cyclic codes.
Cyclic codes can detect all bursts of length up to ℓ = n − k = r. We will see later that the burst error detection ability of any (n, k) code is bounded from above by ℓ ≤ n − k. Cyclic codes are considered optimal for burst error detection since they meet this upper bound:
Theorem (Cyclic burst detection capability) — Every cyclic code with a generator polynomial of degree r can detect all bursts of length ≤ r.
We need to prove that if we add a burst of length ≤ r to a codeword (i.e., to a polynomial that is divisible by g(x)), then the result is not a codeword (i.e., the corresponding polynomial is not divisible by g(x)). It suffices to show that no burst of length ≤ r is divisible by g(x). Such a burst has the form xⁱ b(x), where deg(b(x)) < r. Therefore, b(x) is not divisible by g(x) (because the latter has degree r). Furthermore, g(x) is not divisible by x (otherwise, all codewords would start with 0). Therefore, xⁱ b(x) is not divisible by g(x) either.
The above proof suggests a simple algorithm for burst error detection and correction in cyclic codes: given a received word (i.e., a polynomial of degree ≤ n − 1), compute the remainder of this word when divided by g(x). If the remainder is zero (i.e., if the word is divisible by g(x)), then it is a valid codeword; otherwise, report an error. To correct the error, subtract this remainder from the received word; the result of the subtraction is divisible by g(x) (i.e., it is a valid codeword).
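A minimal sketch of this detector over GF(2) (function names ours): coefficients are listed from the highest degree down, and a nonzero remainder flags an error:

```python
def gf2_remainder(word, g):
    # Long division of word(x) by g(x) with XOR arithmetic over GF(2);
    # returns the deg(g) trailing coefficients, i.e. the remainder.
    r = list(word)
    for i in range(len(r) - len(g) + 1):
        if r[i]:
            for j, gj in enumerate(g):
                r[i + j] ^= gj
    return r[len(r) - (len(g) - 1):]

def is_codeword(word, g):
    # A received word is a valid codeword iff g(x) divides it.
    return not any(gf2_remainder(word, g))
```

With g(x) = x³ + x + 1, i.e. g = [1, 0, 1, 1], the word [1, 0, 1, 1] has zero remainder (it is g itself), while flipping any one of its bits makes the remainder nonzero.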
By the upper bound on burst error detection (ℓ ≤ n − k = r), we know that a cyclic code cannot detect all bursts of length ℓ > r. However, cyclic codes can indeed detect most bursts of length > r. The reason is that detection fails only when the burst is divisible by g(x). Over binary alphabets, there exist 2^(ℓ−2) bursts of length ℓ. Of those, only 2^(ℓ−2−r) are divisible by g(x). Therefore, the detection failure probability is very small (2^(−r)), assuming a uniform distribution over all bursts of length ℓ.
We now consider a fundamental theorem about cyclic codes that will aid in designing efficient burst-error correcting codes, by categorizing bursts into different cosets.
Theorem (Distinct Cosets) — A linear code C is an ℓ-burst-error-correcting code if all the burst errors of length ≤ ℓ lie in distinct cosets of C.
Let e₁ and e₂ be distinct burst errors of length ≤ ℓ which lie in the same coset of the code C. Then c = e₁ − e₂ is a codeword; hence, upon receiving e₁, we could decode it either to 0 or to c. In contrast, if no two such burst errors lie in the same coset, then each burst error is determined by its syndrome, and the error can be corrected from its syndrome. Thus, a linear code C is an ℓ-burst-error-correcting code if and only if all the burst errors of length ≤ ℓ lie in distinct cosets of C.
Theorem (Burst error codeword classification) — Let C be a linear ℓ-burst-error-correcting code. Then no nonzero burst of length ≤ 2ℓ can be a codeword.
Let c be a codeword with a burst of length ≤ 2ℓ. Thus it has the pattern (0, 1, u, v, 1, 0), where u and v are words of length ≤ ℓ − 1. Hence, the words w = (0, 1, u, 0, 0, 0) and c − w = (0, 0, 0, v, 1, 0) are two bursts of length ≤ ℓ. For binary linear codes, they lie in the same coset, since their difference 2w − c reduces to the codeword c (as 2w = 0 in binary). This contradicts the Distinct Cosets theorem; therefore, no nonzero burst of length ≤ 2ℓ can be a codeword.
By an upper bound, we mean a limit on our error detection ability that we can never go beyond. Suppose that we want to design an (n, k) code that can detect all burst errors of length ≤ ℓ. A natural question to ask is: given n and k, what is the maximum ℓ that such a code can achieve? In other words, what is the upper bound on the length ℓ of bursts that we can detect using any (n, k) code? The following theorem provides an answer to this question.
Theorem (Burst error detection ability) — The burst error detection ability of any (n, k) code satisfies ℓ ≤ n − k.
First we observe that a code can detect all bursts of length ≤ ℓ if and only if no two codewords differ by a burst of length ≤ ℓ. Suppose that we have two codewords c₁ and c₂ that differ by a burst b of length ≤ ℓ. Upon receiving c₁, we cannot tell whether the transmitted word is indeed c₁ with no transmission errors, or whether it is c₂ with a burst error b that occurred during transmission. Now, suppose instead that every two codewords differ by more than a burst of length ℓ. Even if the transmitted codeword c₁ is hit by a burst b of length ℓ, it is not going to change into another valid codeword; upon receiving it, we can tell that this is c₁ with a burst b. By the above observation, we know that no two codewords can share their first n − ℓ symbols: even if they differed in all of the remaining ℓ symbols, they would still differ only by a burst of length ≤ ℓ. Therefore, the number of codewords q^k satisfies q^k ≤ q^(n−ℓ). Applying log_q to both sides and rearranging, we see that ℓ ≤ n − k.
Now we repeat the same question, but for error correction: given n and k, what is the upper bound on the length ℓ of bursts that we can correct using any (n, k) code? The following theorem provides a preliminary answer:
Theorem (Burst error correction ability) — The burst error correction ability of any (n, k) code satisfies ℓ ≤ n − k − log_q(n − ℓ) + 2.
First we observe that a code can correct all bursts of length ≤ ℓ if and only if no two codewords differ by the sum of two bursts of length ≤ ℓ. Suppose that two codewords c₁ and c₂ differ by bursts b₁ and b₂ of length ≤ ℓ each. Upon receiving c₁ hit by the burst b₁, we could interpret that as if it were c₂ hit by the burst −b₂; we cannot tell whether the transmitted word is c₁ or c₂. Now, suppose instead that every two codewords differ by more than the sum of two bursts of length ℓ. Even if the transmitted codeword c₁ is hit by a burst of length ℓ, it is not going to look like another codeword that has been hit by another burst. For each codeword c, let B(c) denote the set of all words that differ from c by a burst of length ≤ ℓ; notice that B(c) includes c itself. By the above observation, for two different codewords cᵢ and cⱼ, the sets B(cᵢ) and B(cⱼ) are disjoint. There are q^k codewords, so q^k |B(c)| ≤ q^n. Moreover, (n − ℓ)q^(ℓ−2) ≤ |B(c)|. Plugging the latter inequality into the former, taking the base-q logarithm, and rearranging gives the theorem.
A stronger result is given by the Rieger bound:
Theorem (Rieger bound) — If ℓ is the burst error correcting ability of an (n, k) linear block code, then 2ℓ ⩽ n − k.
Any linear code that can correct all burst patterns of length ⩽ ℓ cannot have a burst of length ⩽ 2ℓ as a codeword. If it did, such a codeword would be the sum of two bursts of length ⩽ ℓ, so a burst error of length ⩽ ℓ could change that codeword into the same received word as a burst error of length ⩽ ℓ applied to the all-zero codeword, and the two cases could not be distinguished. Now consider the q^{2ℓ} vectors that are zero outside their first 2ℓ positions. Any two of them must lie in different cosets of the code: if two such vectors were in the same coset, their difference would be a codeword confined to the first 2ℓ positions, that is, a burst of length ⩽ 2ℓ. Since the code has exactly q^{n−k} cosets, we get q^{2ℓ} ⩽ q^{n−k}, and hence 2ℓ ⩽ n − k. Thus, this proves the Rieger bound.
Definition. A linear burst-error-correcting code achieving the above Rieger bound is called an optimal burst-error-correcting code.
There is more than one upper bound on the achievable code rate of linear block codes for multiple phased-burst correction (MPBC). One such bound is constrained to a maximum correctable cyclic burst length within every subblock, or equivalently a constraint on the minimum error-free length or gap within every phased-burst. This bound, when reduced to the special case of a bound for single burst correction, is the Abramson bound (a corollary of the Hamming bound for burst-error correction) when the cyclic burst length is less than half the block length.[3]
Theorem (number of bursts) — For 1 ⩽ ℓ ⩽ (n + 1)/2, over a binary alphabet there are n·2^{ℓ−1} + 1 vectors of length n which are bursts of length ⩽ ℓ.[1]
Since the burst length is ⩽ (n + 1)/2, there is a unique burst description associated with each burst. The burst can begin at any of the n positions of the word. Each pattern begins with 1 and has length ℓ (trailing zeros account for shorter bursts), so we can think of the patterns as the set of all strings that begin with 1 and have length ℓ. Thus, there are 2^{ℓ−1} possible patterns, and a total of n·2^{ℓ−1} nonzero bursts of length ⩽ ℓ. Including the all-zero burst, we have n·2^{ℓ−1} + 1 vectors representing bursts of length ⩽ ℓ.
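The count in this theorem can be checked by brute force for a small case. Below is a minimal sketch in Python; the parameters n = 7, ℓ = 3 are an arbitrary choice satisfying ℓ ⩽ (n + 1)/2, and the helper name is ours.

```python
from itertools import product

def is_cyclic_burst(vec, ell):
    """True if all nonzero entries fit in some cyclic window of length ell."""
    n = len(vec)
    if not any(vec):
        return True  # the all-zero vector counts as the trivial burst
    return any(all(vec[(s + k) % n] == 0 for k in range(ell, n))
               for s in range(n))

n, ell = 7, 3  # ell <= (n+1)/2, so burst descriptions are unique
count = sum(is_cyclic_burst(v, ell) for v in product((0, 1), repeat=n))
print(count, n * 2**(ell - 1) + 1)  # both 29
```

Enumerating all 2^7 binary vectors confirms the formula n·2^{ℓ−1} + 1 for this case.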
Theorem (bound on the number of codewords) — If 1 ⩽ ℓ ⩽ (n + 1)/2, a binary ℓ-burst error correcting code has at most 2^n / (n·2^{ℓ−1} + 1) codewords.
Since ℓ ⩽ (n + 1)/2, we know that there are n·2^{ℓ−1} + 1 bursts of length ⩽ ℓ. Say the code has M codewords; then there are M(n·2^{ℓ−1} + 1) words obtained by adding a burst of length ⩽ ℓ to a codeword. All of these words must be distinct, for otherwise two codewords would differ by a sum of two bursts of length ⩽ ℓ, and the code could not correct all such bursts. Therefore, M(n·2^{ℓ−1} + 1) ⩽ 2^n, which implies M ⩽ 2^n / (n·2^{ℓ−1} + 1).
Theorem (Abramson's bounds) — If C is a binary linear (n, k) ℓ-burst error correcting code with 1 ⩽ ℓ ⩽ (n + 1)/2, its block length must satisfy n ⩽ 2^{n−k−ℓ+1} − 1.
For a linear (n, k) code, there are 2^k codewords. By our previous result, we know that 2^k ⩽ 2^n / (n·2^{ℓ−1} + 1). Isolating n, we get n ⩽ 2^{n−k−ℓ+1} − 2^{−ℓ+1}. Since ℓ ⩾ 1 and n must be an integer, we have n ⩽ 2^{n−k−ℓ+1} − 1.
Remark. r = n − k is called the redundancy of the code; an alternative formulation of Abramson's bound is r ⩾ ⌈log₂(n + 1)⌉ + ℓ − 1.
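The alternative formulation of the bound is easy to evaluate numerically. A small sketch (the function name `abramson_redundancy` is ours, not standard):

```python
import math

def abramson_redundancy(n, ell):
    """Minimum redundancy r = n - k forced by the Abramson bound."""
    return math.ceil(math.log2(n + 1)) + ell - 1

# For ell = 1 (single-bit bursts) this reduces to the Hamming-code redundancy:
print(abramson_redundancy(7, 1))    # 3, as for the (7,4) Hamming code
print(abramson_redundancy(279, 5))  # 13; the Fire code example below uses r = 14
```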
Sources:[3][4][5]
While cyclic codes in general are powerful tools for detecting burst errors, we now consider a family of binary cyclic codes called Fire codes, which possess good single burst error correction capabilities. By a single burst, say of length ℓ, we mean that all errors that a received codeword possesses lie within a fixed span of ℓ digits.
Let p(x) be an irreducible polynomial of degree m over F₂, and let p be the period of p(x). The period of p(x), and indeed of any polynomial, is defined to be the least positive integer r such that p(x) | x^r − 1. Let ℓ be a positive integer satisfying ℓ ⩽ m and 2ℓ − 1 not divisible by p, where m and p are the degree and period of p(x), respectively. Define the Fire code G by the following generator polynomial: g(x) = (x^{2ℓ−1} + 1) p(x).
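The period of p(x) as defined here can be computed by trial division over F₂. A minimal sketch in Python, representing a polynomial as an integer bitmask (bit i is the coefficient of x^i; helper names are ours):

```python
def gf2_polymod(dividend, divisor):
    """Remainder of GF(2) polynomial division (integer bitmask representation)."""
    dlen = divisor.bit_length()
    while dividend.bit_length() >= dlen:
        dividend ^= divisor << (dividend.bit_length() - dlen)
    return dividend

def period(p):
    """Least r with p(x) | x^r - 1 over GF(2) (x^r - 1 = x^r + 1 here)."""
    r = 1
    while gf2_polymod((1 << r) | 1, p) != 0:
        r += 1
    return r

print(period(0b111))     # 3:  x^2 + x + 1 divides x^3 + 1
print(period(0b100101))  # 31: 1 + x^2 + x^5 is primitive of degree 5
```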
We will show that G is an ℓ-burst-error correcting code.
Lemma 1 — gcd(p(x), x^{2ℓ−1} + 1) = 1.
Let d(x) be the greatest common divisor of the two polynomials. Since p(x) is irreducible, deg d(x) = 0 or deg d(x) = deg p(x). Assume deg d(x) ≠ 0; then p(x) = c·d(x) for some constant c, so (1/c)p(x) is a divisor of x^{2ℓ−1} + 1, since d(x) is a divisor of x^{2ℓ−1} + 1. But p(x) does not divide x^{2ℓ−1} + 1: by Lemma 2 below, that would require the period p to divide 2ℓ − 1, contrary to our assumption. Thus deg d(x) = 0, proving the lemma.
Lemma 2 — If p(x) is a polynomial of period p, then p(x) | x^k − 1 if and only if p | k.
If p | k, then x^k − 1 = (x^p − 1)(1 + x^p + x^{2p} + ⋯ + x^{k−p}). Since p(x) | x^p − 1, it follows that p(x) | x^k − 1.
Now suppose p(x) | x^k − 1. Then k ⩾ p, since p is the least such exponent. We show that p divides k by induction on k. The base case k = p is immediate, so assume k > p. Since p(x) has period p, it divides both x^p − 1 = (x − 1)(1 + x + ⋯ + x^{p−1}) and x^k − 1 = (x − 1)(1 + x + ⋯ + x^{k−1}). Since p(x) is irreducible, it must divide both 1 + x + ⋯ + x^{p−1} and 1 + x + ⋯ + x^{k−1}; thus it also divides the difference of these two polynomials, x^p(1 + x + ⋯ + x^{k−p−1}). It then follows that p(x) divides 1 + x + ⋯ + x^{k−p−1}, and finally it also divides x^{k−p} − 1 = (x − 1)(1 + x + ⋯ + x^{k−p−1}). By the induction hypothesis, p | k − p, and therefore p | k.
A corollary to Lemma 2 is that, since x^p − 1 has period p, x^p − 1 divides x^k − 1 if and only if p | k. (This will be applied below with p replaced by 2ℓ − 1, noting that over F₂ we have x^{2ℓ−1} + 1 = x^{2ℓ−1} − 1.)
Theorem — The Fire code is ℓ-burst error correcting.[4][5]
If we can show that all bursts of length ℓ or less occur in different cosets, we can use them as coset leaders that form correctable error patterns. The reason is simple: each coset has a unique syndrome associated with it, and if all bursts of length ⩽ ℓ lie in different cosets, then they all have distinct syndromes, facilitating error correction by syndrome decoding.
Let x^i a(x) and x^j b(x) be polynomials with degrees ℓ1 − 1 and ℓ2 − 1, representing bursts of length ℓ1 and ℓ2 respectively, with ℓ1, ℓ2 ⩽ ℓ. The integers i, j represent the starting positions of the bursts and are less than the block length of the code. For the sake of contradiction, assume that x^i a(x) and x^j b(x) are in the same coset. Then v(x) = x^i a(x) + x^j b(x) is a valid codeword (since both terms are in the same coset). Without loss of generality, pick i ⩽ j. By the division theorem we can write j − i = g(2ℓ − 1) + b, for integers g and b with 0 ⩽ b < 2ℓ − 1. We rewrite the polynomial v(x) as follows: v(x) = x^i a(x) + x^{i+g(2ℓ−1)+b} b(x) = x^i a(x) + x^{i+g(2ℓ−1)+b} b(x) + 2x^{i+b} b(x) = x^i (a(x) + x^b b(x)) + x^{i+b} b(x) (x^{g(2ℓ−1)} + 1).
Notice that at the second manipulation we introduced the term 2x^{i+b} b(x). We are allowed to do so, since Fire codes operate over F₂, where 2 = 0. By our assumption, v(x) is a valid codeword, and thus must be a multiple of g(x). As mentioned earlier, since the factors of g(x) are relatively prime, v(x) has to be divisible by x^{2ℓ−1} + 1. Looking closely at the last expression derived for v(x), we notice that x^{g(2ℓ−1)} + 1 is divisible by x^{2ℓ−1} + 1 (by the corollary of Lemma 2). Therefore, a(x) + x^b b(x) is either divisible by x^{2ℓ−1} + 1 or is 0. In the former case, there exists a polynomial d(x) with some degree δ such that a(x) + x^b b(x) = d(x)(x^{2ℓ−1} + 1).
Then we may write: δ + 2ℓ − 1 = deg(d(x)(x^{2ℓ−1} + 1)) = deg(a(x) + x^b b(x)) = deg(x^b b(x)) = b + ℓ2 − 1, where the third equality holds because deg a(x) = ℓ1 − 1 < 2ℓ − 1, so the degree of the sum is that of x^b b(x).
Equating the degrees of both sides gives b = 2ℓ − ℓ2 + δ. Since ℓ1, ℓ2 ⩽ ℓ, we can conclude b ⩾ ℓ + δ, which implies b > ℓ − 1 and b > δ. Notice that in the expansion a(x) + x^b b(x) = 1 + a1 x + a2 x² + ⋯ + x^{ℓ1−1} + x^b (1 + b1 x + b2 x² + ⋯ + x^{ℓ2−1}) the term x^b appears (it cannot be cancelled by a(x), since b > ℓ1 − 1); but since δ < b < 2ℓ − 1, the expression d(x)(x^{2ℓ−1} + 1) contains no term of degree b. Therefore d(x) = 0 and consequently a(x) + x^b b(x) = 0. Because a(x) and b(x) both have nonzero constant term, this requires b = 0 and a(x) = b(x). We can further revise our division of j − i by 2ℓ − 1 to reflect b = 0, that is, j − i = g(2ℓ − 1). Substituting back into v(x) gives us v(x) = x^i b(x)(x^{j−i} + 1).
Since deg b(x) = ℓ2 − 1 < ℓ, we have deg b(x) < deg p(x) = m. But p(x) is irreducible, therefore b(x) and p(x) must be relatively prime. Since v(x) is a codeword, it is divisible by p(x); as p(x) divides neither x^i nor b(x), it must divide x^{j−i} + 1. By Lemma 2, j − i must then be a multiple of p. But j − i = g(2ℓ − 1) is also a multiple of 2ℓ − 1, so it must be a multiple of n = lcm(2ℓ − 1, p), which is precisely the block length of the code. Since 0 ⩽ j − i < n, this forces j − i = 0, and together with a(x) = b(x) it means the two bursts are identical. Thus, two distinct bursts x^i a(x) and x^j b(x) of length ⩽ ℓ lie in different cosets, have unique syndromes, and are therefore correctable.
With the theory presented in the above section, consider the construction of a 5-burst error correcting Fire code. Remember that to construct a Fire code we need an irreducible polynomial p(x) and an integer ℓ representing the burst error correction capability of the code, and we must satisfy the property that 2ℓ − 1 is not divisible by the period of p(x). With these requirements in mind, consider the irreducible polynomial p(x) = 1 + x² + x⁵, and let ℓ = 5. Since p(x) is a primitive polynomial, its period is 2⁵ − 1 = 31. We confirm that 2ℓ − 1 = 9 is not divisible by 31. Thus, g(x) = (x⁹ + 1)(1 + x² + x⁵) = 1 + x² + x⁵ + x⁹ + x¹¹ + x¹⁴ is a Fire code generator. We can calculate the block length of the code by evaluating the least common multiple of p and 2ℓ − 1: n = lcm(9, 31) = 279. Thus, the Fire code above is a cyclic code capable of correcting any burst of length 5 or less.
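The construction above can be reproduced with a few lines of GF(2) polynomial arithmetic. A sketch in Python, representing polynomials as integer bitmasks (bit i is the x^i coefficient; helper names are ours):

```python
from math import gcd

def gf2_mul(a, b):
    """Multiply two GF(2) polynomials given as integer bitmasks."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

p = 0b100101                             # p(x) = 1 + x^2 + x^5
ell = 5
g = gf2_mul((1 << (2*ell - 1)) | 1, p)   # g(x) = (x^9 + 1) p(x)
# Expected terms: 1 + x^2 + x^5 + x^9 + x^11 + x^14
print(bin(g))

period = 2**5 - 1                        # p(x) is primitive of degree 5
n = (2*ell - 1) * period // gcd(2*ell - 1, period)
print(n)                                 # lcm(9, 31) = 279
```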
Certain families of codes, such as Reed–Solomon codes, operate on alphabet sizes larger than binary. This property gives such codes powerful burst error correction capabilities. Consider a code operating on F_{2^m}: each symbol of the alphabet can be represented by m bits. If C is an (n, k) Reed–Solomon code over F_{2^m}, we can think of C as an [mn, mk]₂ code over F₂.
The reason such codes are powerful for burst error correction is that each symbol is represented by m bits, and in general it is irrelevant how many of those m bits are erroneous: whether a single bit or all m bits contain errors, from a decoding perspective it is still a single symbol error. In other words, since burst errors tend to occur in clusters, there is a strong possibility of several binary errors contributing to a single symbol error.
Notice that a burst of m + 1 bit errors can affect at most 2 symbols, and a burst of 2m + 1 can affect at most 3 symbols. In general, a burst of tm + 1 can affect at most t + 1 symbols; this implies that a t-symbol-error correcting code can correct a burst of length at most (t − 1)m + 1.
In general, a t-error correcting Reed–Solomon code over F_{2^m} can correct any combination of t / (1 + ⌊(l + m − 2)/m⌋) or fewer bursts of length l, on top of being able to correct t random worst-case errors.
Let G be a [255, 223, 33] RS code over F_{2⁸}. This code was employed by NASA in their Cassini–Huygens spacecraft.[6] It is capable of correcting ⌊33/2⌋ = 16 symbol errors. We now construct a binary RS code G′ from G. Each symbol will be written using ⌈log₂(255)⌉ = 8 bits. Therefore, the binary RS code will have [2040, 1784, 33]₂ as its parameters. It is capable of correcting any single burst of length l = 121.
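The figures quoted here follow directly from the code parameters; a quick check:

```python
n, k, d, m = 255, 223, 33, 8      # the Cassini RS code over GF(2^8)
t = (d - 1) // 2                  # symbol-error correction capability
print(t)                          # 16
print((t - 1) * m + 1)            # 121: longest guaranteed-correctable binary burst
print((n * m, k * m))             # (2040, 1784): binary code parameters
```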
Interleaving is used to convert convolutional codes from random error correctors into burst error correctors. The basic idea behind interleaved codes is to jumble the symbols at the transmitter. This randomizes bursts of closely located received errors, so the analysis for a random-error channel can then be applied. Thus, the main function performed by the interleaver at the transmitter is to alter the input symbol sequence. At the receiver, the deinterleaver alters the received sequence to recover the original sequence produced at the transmitter.
Theorem — If the burst error correcting ability of some code is ℓ, then the burst error correcting ability of its λ-way interleave is λℓ.
Suppose that we have an (n, k) code that can correct all bursts of length ⩽ ℓ. Interleaving can provide us with a (λn, λk) code that can correct all bursts of length ⩽ λℓ, for any given λ. If we want to encode a message of arbitrary length using interleaving, first we divide it into blocks of length λk. We write the λk entries of each block into a λ × k matrix in row-major order. Then, we encode each row using the (n, k) code, which yields a λ × n matrix. This matrix is read out and transmitted in column-major order. The trick is that if a burst of length h occurs in the transmitted word, then each row contains approximately h/λ consecutive errors (more precisely, each row contains a burst of length at least ⌊h/λ⌋ and at most ⌈h/λ⌉). If h ⩽ λℓ, then ⌈h/λ⌉ ⩽ ℓ and the (n, k) code can correct each row. Therefore, the interleaved (λn, λk) code can correct the burst of length h. Conversely, if h > λℓ, then at least one row contains a burst of length greater than ℓ, and the (n, k) code might fail to correct it. Therefore, the burst error correcting ability of the interleaved (λn, λk) code is exactly λℓ. The burst-error-correcting (BEC) efficiency of the interleaved code remains the same as that of the original (n, k) code.
This is true because: 2λℓ / (λn − λk) = 2ℓ / (n − k).
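The row-major write / column-major read construction used in the proof can be sketched as follows. This is a toy λ = 3, n = 7 example with errors marked symbolically as 'E'; no actual row decoder is attached, the point is only that a channel burst of length λ·2 leaves at most 2 errors per row.

```python
def interleave(msg, lam, n):
    """Write lam rows of length n (row-major), read out column-major."""
    rows = [msg[i*n:(i+1)*n] for i in range(lam)]
    return [rows[r][c] for c in range(n) for r in range(lam)]

def deinterleave(stream, lam, n):
    """Invert interleave(): stream index idx maps to row idx % lam, col idx // lam."""
    out = [[None] * n for _ in range(lam)]
    for idx, sym in enumerate(stream):
        out[idx % lam][idx // lam] = sym
    return [s for row in out for s in row]

lam, n = 3, 7
msg = list(range(lam * n))
assert deinterleave(interleave(msg, lam, n), lam, n) == msg  # clean round trip

tx = interleave(msg, lam, n)
tx[5:11] = ['E'] * 6          # a channel burst of length 6 = lam * 2
rx = deinterleave(tx, lam, n)
rows = [rx[i*n:(i+1)*n] for i in range(lam)]
print([sum(s == 'E' for s in row) for row in rows])  # [2, 2, 2]
```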
The figure below shows a 4 by 3 interleaver.
The above interleaver is called a block interleaver. The input symbols are written sequentially into the rows and the output symbols are obtained by reading the columns sequentially, so the interleaver takes the form of an M × N array. Generally, N is the length of the codeword.
Capacity of block interleaver: For an M × N block interleaver and a burst of length ℓ, the upper limit on the number of errors per row is ⌈ℓ/M⌉. This follows from the fact that the output is read column-wise and the number of rows is M. By the theorem above, for error correction capability up to t per row, the maximum burst length allowed is Mt. For a burst length of Mt + 1, the decoder may fail.
Efficiency of block interleaver (γ): It is found by taking the ratio of the burst length at which the decoder may fail to the interleaver memory. Thus, γ = (Mt + 1)/(MN) ≈ t/N.
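For concreteness, a toy evaluation of the efficiency formula (M, N and t are arbitrary illustrative values, not parameters from the text):

```python
M, N, t = 4, 8, 2            # hypothetical 4x8 block interleaver, t-error rows
gamma = (M * t + 1) / (M * N)
print(gamma, t / N)          # 0.28125 vs. the t/N approximation 0.25
```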
Drawbacks of block interleaver: Since the columns are read sequentially, the receiver can interpret a single row only after it has received the complete message. The receiver also requires a considerable amount of memory, as it has to store the complete received message. These factors give rise to two drawbacks: latency and storage (a fairly large amount of memory). Both drawbacks can be reduced by using the convolutional interleaver described below.
A cross interleaver is a kind of multiplexer–demultiplexer system in which delay lines of progressively increasing length are used. A delay line is an electronic circuit that delays a signal by a certain time duration. Let n be the number of delay lines and d the number of symbols of delay introduced by each successive line. The separation between consecutive inputs is then nd symbols. Let the codeword length be ⩽ n, so that each symbol of the input codeword travels on a distinct delay line. Suppose a burst error of length ℓ occurs. Since the separation between consecutive symbols is nd, the number of errors that the deinterleaved output may contain is ℓ/(nd + 1). By the theorem above, for error correction capability up to t, the maximum burst length allowed is (nd + 1)(t − 1). For a burst length of (nd + 1)(t − 1) + 1, the decoder may fail.
Efficiency of cross interleaver (γ): It is found by taking the ratio of the burst length at which the decoder may fail to the interleaver memory. In this case, the memory of the interleaver is (0 + 1 + 2 + ⋯ + (n − 1))d = n(n − 1)d/2.
Thus, we can formulate γ as follows: γ = ((nd + 1)(t − 1) + 1) / (n(n − 1)d/2).
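Again with arbitrary illustrative values for n, d and t (not values from the text):

```python
n, d, t = 4, 2, 2                       # hypothetical: 4 delay lines, 2-symbol steps
memory = n * (n - 1) // 2 * d           # 12 symbols of interleaver memory
fail_len = (n * d + 1) * (t - 1) + 1    # 10: shortest burst that may defeat decoding
print(fail_len / memory)                # efficiency gamma ~ 0.833
```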
Performance of cross interleaver: The output consists of the diagonal symbols emerging at the end of each delay line. When the input multiplexer switch has completed about half of its cycle, the receiver can already read the first row. Thus, only about half of the message needs to be stored at the receiver before the first row can be read, which cuts the storage requirement roughly in half. Since just half the message is required to read the first row, the latency is also reduced by half, a good improvement over the block interleaver. In effect, the total interleaver memory is split between the transmitter and the receiver.
Without error correcting codes, digital audio would not be technically feasible.[7] Reed–Solomon codes can correct a corrupted symbol with a single bit error just as easily as a symbol with all bits wrong, which makes RS codes particularly suitable for correcting burst errors.[5] By far the most common application of RS codes is in compact discs. In addition to the basic error correction provided by RS codes, protection against burst errors due to scratches on the disc is provided by a cross interleaver.[3]
The current compact disc digital audio system was developed by N. V. Philips of the Netherlands and the Sony Corporation of Japan (agreement signed in 1979).
A compact disc comprises a 120 mm aluminized disc coated with a clear plastic coating, with a spiral track approximately 5 km in length that is optically scanned by a laser of wavelength ~0.8 μm at a constant speed of ~1.25 m/s. To achieve this constant speed, the rotation of the disc is varied from ~8 rev/s while scanning the inner portion of the track to ~3.5 rev/s at the outer portion. Pits and lands are the depressions (0.12 μm deep) and flat segments constituting the binary data along the track (0.6 μm width).[8]
The CD process can be abstracted as a sequence of the following sub-processes:
The process is subject to both burst errors and random errors.[7]Burst errors include those due to disc material (defects of aluminum reflecting film, poor reflective index of transparent disc material), disc production (faults during disc forming and disc cutting etc.), disc handling (scratches – generally thin, radial and orthogonal to direction of recording) and variations in play-back mechanism. Random errors include those due to jitter of reconstructed signal wave and interference in signal. CIRC (Cross-Interleaved Reed–Solomon code) is the basis for error detection and correction in the CD process. It corrects error bursts up to 3,500 bits in sequence (2.4 mm in length as seen on CD surface) and compensates for error bursts up to 12,000 bits (8.5 mm) that may be caused by minor scratches.
Encoding: Sound waves are sampled and converted to digital form by an A/D converter. The sound wave is sampled for amplitude at 44.1 kHz, yielding 44,100 sample pairs per second, one each for the left and right channels of the stereo sound. The amplitude at each instant is assigned a binary string of length 16. Thus, each sample produces two binary vectors from F₂¹⁶, or 4 bytes of data. Every second of sound recorded results in 44,100 × 32 = 1,411,200 bits (176,400 bytes) of data.[5] The 1.41 Mbit/s sampled data stream passes through the error correction system, eventually being converted to a stream of 1.88 Mbit/s.
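The data-rate arithmetic can be verified directly:

```python
rate, bits, channels = 44_100, 16, 2
per_second = rate * bits * channels
print(per_second)               # 1,411,200 bits per second of stereo audio
print(per_second * 32 // 24)    # 1,881,600: ~1.88 Mbit/s after rate-24/32 encoding
```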
Input for the encoder consists of frames of 24 8-bit symbols each (12 16-bit samples from the A/D converter, 6 each from the left and right sound channels). A frame can be represented by L1R1L2R2…L6R6, where Li and Ri are bytes from the left and right channels of the i-th sample of the frame.
Initially, the bytes are permuted to form new frames represented by L1L3L5R1R3R5L2L4L6R2R4R6, where Li, Ri represent the i-th left and right samples from the frame after two intervening frames.
Next, these 24 message symbols are encoded using the C2 (28,24,5) Reed–Solomon code, a shortened RS code over F₂₅₆. Being of minimum distance 5, it is two-error-correcting. This adds 4 bytes of redundancy, P1P2, forming a new frame L1L3L5R1R3R5P1P2L2L4L6R2R4R6. The resulting 28-symbol codeword is passed through a (28,4) cross interleaver, leading to 28 interleaved symbols. These are then passed through the C1 (32,28,5) RS code, resulting in codewords of 32 coded output symbols. Odd-numbered symbols of a codeword are further regrouped with even-numbered symbols of the next codeword to break up any short bursts that may still be present after the above 4-frame delay interleaving. Thus, for every 24 input symbols there are 32 output symbols, giving a rate R = 24/32. Finally, one byte of control and display information is added.[5] Each of the 33 bytes is then converted to 17 bits through EFM (eight-to-fourteen modulation) and the addition of 3 merge bits. Therefore, a frame of six samples results in 33 bytes × 17 bits (561 bits), to which 24 synchronization bits and 3 merging bits are added, yielding a total of 588 bits.
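The per-frame bit accounting described above can be checked as follows:

```python
data_bytes = 24              # six stereo samples, 4 bytes each
after_c2 = data_bytes + 4    # C2 (28,24) adds 4 parity bytes
after_c1 = after_c2 + 4      # C1 (32,28) adds 4 more
frame = after_c1 + 1         # plus one control-and-display byte -> 33 bytes
bits = frame * 17            # EFM: 14 channel bits + 3 merge bits per byte
print(bits)                  # 561
print(bits + 24 + 3)         # 588 channel bits per frame, after sync + merge bits
```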
Decoding: The CD player (CIRC decoder) receives the 32-symbol output data stream, which first passes through decoder D1. It is up to the individual designers of CD systems to decide on decoding methods and to optimize product performance. Being of minimum distance 5, the D1 and D2 decoders can each correct a combination of e errors and f erasures such that 2e + f < 5.[5] In most decoding solutions, D1 is designed to correct a single error; in case of more than one error, it outputs 28 erasures. The deinterleaver at the succeeding stage distributes these erasures across 28 D2 codewords. Again, in most solutions D2 is set to deal with erasures only (a simpler and less expensive design). If more than 4 erasures are encountered, D2 outputs 24 erasures. Thereafter, an error concealment system attempts to interpolate uncorrectable symbols from neighboring symbols; failing that, the sounds corresponding to such erroneous symbols are muted.
Performance of CIRC:[7] CIRC conceals long burst errors by simple linear interpolation. 2.5 mm of track length (4,000 bits) is the maximum completely correctable burst length, and 7.7 mm of track length (12,300 bits) is the maximum burst length that can be interpolated. The sample interpolation rate is about one every 10 hours at a bit error rate (BER) of 10⁻⁴ and 1,000 samples per minute at a BER of 10⁻³. Undetectable error samples (clicks) occur less than once every 750 hours at a BER of 10⁻³ and are negligible at a BER of 10⁻⁴.
https://en.wikipedia.org/wiki/Burst_error-correcting_code
Quantum error correction (QEC) is a set of techniques used in quantum computing to protect quantum information from errors due to decoherence and other quantum noise. Quantum error correction is theorised as essential to achieve fault-tolerant quantum computing that can reduce the effects of noise on stored quantum information, faulty quantum gates, faulty quantum state preparation, and faulty measurements. Effective quantum error correction would allow quantum computers with low qubit fidelity to execute algorithms of higher complexity or greater circuit depth.[1]
Classical error correction often employs redundancy. The simplest, albeit inefficient, approach is the repetition code. A repetition code stores the desired (logical) information as multiple copies and, if these copies are later found to disagree due to errors introduced to the system, determines the most likely value for the original data by majority vote. For instance, suppose we copy a bit in the one (on) state three times. Suppose further that noise in the system introduces an error that corrupts the three-bit state so that one of the copied bits becomes zero (off) but the other two remain equal to one. Assuming that errors are independent and occur with some sufficiently low probability p, it is most likely that the error is a single-bit error and the intended message is three bits in the one state. It is possible that a double-bit error occurred and the transmitted message is equal to three zeros, but this outcome is less likely than the above one. In this example, the logical information is a single bit in the one state and the physical information is the three duplicate bits. Creating a physical state that represents the logical state is called encoding, and determining which logical state is encoded in the physical state is called decoding. Similar to classical error correction, QEC codes do not always correctly decode logical qubits, but instead reduce the effect of noise on the logical state.
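The classical repetition code described here takes only a few lines of Python (function names are ours, for illustration):

```python
from collections import Counter

def encode(bit, copies=3):
    """Repetition encoding: store the logical bit as several physical copies."""
    return [bit] * copies

def decode(bits):
    """Majority vote over the received copies."""
    return Counter(bits).most_common(1)[0][0]

word = encode(1)
word[0] = 0          # a single-bit error corrupts one physical copy
print(decode(word))  # 1: the logical bit survives the error
```

A double-bit error, e.g. `[0, 0, 1]`, decodes to the wrong value 0, matching the less likely failure case described above.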
Copying quantum information is not possible due to the no-cloning theorem. This theorem seems to present an obstacle to formulating a theory of quantum error correction. But it is possible to spread the (logical) information of one logical qubit onto a highly entangled state of several (physical) qubits. Peter Shor first discovered this method of formulating a quantum error correcting code by storing the information of one qubit onto a highly entangled state of nine qubits.[2]
In classical error correction, syndrome decoding is used to diagnose which error was the likely source of corruption on an encoded state. An error can then be reversed by applying a corrective operation based on the syndrome. Quantum error correction also employs syndrome measurements. It performs a multi-qubit measurement that does not disturb the quantum information in the encoded state but retrieves information about the error. Depending on the QEC code used, syndrome measurement can determine the occurrence, location and type of errors. In most QEC codes, the type of error is either a bit flip, or a sign (of the phase) flip, or both (corresponding to the Pauli matrices X, Z, and Y). The measurement of the syndrome has the projective effect of a quantum measurement, so even if the error due to the noise was arbitrary, it can be expressed as a combination of basis operations called the error basis (which is given by the Pauli matrices and the identity). To correct the error, the Pauli operator corresponding to the type of error is applied to the corrupted qubit to revert the effect of the error.
The syndrome measurement provides information about the error that has happened, but not about the information that is stored in the logical qubit; otherwise the measurement would destroy any quantum superposition of this logical qubit with other qubits in the quantum computer, which would prevent it from being used to convey quantum information.
The repetition code works in a classical channel, because classical bits are easy to measure and to repeat. This approach does not work for a quantum channel in which, due to the no-cloning theorem, it is not possible to repeat a single qubit three times. To overcome this, a different method has to be used, such as the three-qubit bit-flip code first proposed by Asher Peres in 1985.[3] This technique uses entanglement and syndrome measurements and is comparable in performance with the repetition code.
Consider the situation in which we want to transmit the state of a single qubit |ψ⟩ through a noisy channel ℰ. Let us moreover assume that this channel either flips the state of the qubit, with probability p, or leaves it unchanged. The action of ℰ on a general input ρ can therefore be written as ℰ(ρ) = (1 − p)ρ + p XρX, where X is the Pauli bit-flip operator.
Let |ψ⟩ = α₀|0⟩ + α₁|1⟩ be the quantum state to be transmitted. With no error-correcting protocol in place, the transmitted state will be correctly transmitted with probability 1 − p. We can however improve on this number by encoding the state into a greater number of qubits, in such a way that errors in the corresponding logical qubits can be detected and corrected. In the case of the simple three-qubit repetition code, the encoding consists in the mappings |0⟩ → |0_L⟩ ≡ |000⟩ and |1⟩ → |1_L⟩ ≡ |111⟩. The input state |ψ⟩ is encoded into the state |ψ′⟩ = α₀|000⟩ + α₁|111⟩. This mapping can be realized for example using two CNOT gates, entangling the system with two ancillary qubits initialized in the state |0⟩.[4] The encoded state |ψ′⟩ is what is now passed through the noisy channel.
The channel acts on |ψ′⟩ by flipping some subset (possibly empty) of its qubits. No qubit is flipped with probability (1 − p)³, a single qubit is flipped with probability 3p(1 − p)², two qubits are flipped with probability 3p²(1 − p), and all three qubits are flipped with probability p³. Note that a further assumption about the channel is made here: we assume that ℰ acts equally and independently on each of the three qubits in which the state is now encoded. The problem is now how to detect and correct such errors, while not corrupting the transmitted state.
Let us assume for simplicity that p is small enough that the probability of more than a single qubit being flipped is negligible. One can then detect whether a qubit was flipped, without also querying for the values being transmitted, by asking whether one of the qubits differs from the others. This amounts to performing a measurement with four different outcomes, corresponding to the following four projective measurements:

P₀ = |000⟩⟨000| + |111⟩⟨111|,
P₁ = |100⟩⟨100| + |011⟩⟨011|,
P₂ = |010⟩⟨010| + |101⟩⟨101|,
P₃ = |001⟩⟨001| + |110⟩⟨110|.

This reveals which qubits are different from the others, without at the same time giving information about the state of the qubits themselves. If the outcome corresponding to P₀ is obtained, no correction is applied, while if the outcome corresponding to Pᵢ is observed, then the Pauli X gate is applied to the i-th qubit. Formally, this correcting procedure corresponds to the application of the following map to the output of the channel:

ℰ_corr(ρ) = P₀ρP₀ + Σᵢ₌₁³ XᵢPᵢρPᵢXᵢ.
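The syndrome-measurement step can be made concrete with a small NumPy simulation. The sketch below (illustrative only; the amplitudes are arbitrary) encodes a qubit into the three-qubit code, flips one qubit, identifies the syndrome by checking which projector supports the corrupted state, and applies the corrective X gate:

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])

def kron(*ops):
    # Tensor product of a list of operators.
    out = np.array([[1.0]])
    for op in ops:
        out = np.kron(out, op)
    return out

e = lambda i: np.eye(8)[i]              # computational basis vector |i> of 3 qubits
a0, a1 = 0.6, 0.8                       # arbitrary amplitudes with |a0|^2 + |a1|^2 = 1
psi = a0 * e(0b000) + a1 * e(0b111)     # encoded state a0|000> + a1|111>

def proj(i, j):
    # Projector |i><i| + |j><j|
    return e(i)[:, None] @ e(i)[None, :] + e(j)[:, None] @ e(j)[None, :]

P = [proj(0b000, 0b111), proj(0b100, 0b011),
     proj(0b010, 0b101), proj(0b001, 0b110)]

X_on = [kron(X, I2, I2), kron(I2, X, I2), kron(I2, I2, X)]  # X on qubit i

corrupted = X_on[1] @ psi               # the channel flips the second qubit
syndrome = max(range(4), key=lambda k: corrupted @ P[k] @ corrupted)
recovered = X_on[syndrome - 1] @ corrupted if syndrome else corrupted
assert np.allclose(recovered, psi)      # the correction restores the state
```

Measuring the syndrome here reduces to asking which of the four projectors the corrupted state lies in, exactly as in the projective-measurement description above.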
Note that, while this procedure perfectly corrects the output when zero or one flips are introduced by the channel, if more than one qubit is flipped then the output is not properly corrected. For example, if the first and second qubits are flipped, then the syndrome measurement gives the outcome P₃, and the third qubit is flipped, instead of the first two. To assess the performance of this error-correcting scheme for a general input we can study the fidelity F(ψ′) between the input |ψ′⟩ and the output ρ_out ≡ ℰ_corr(ℰ(|ψ′⟩⟨ψ′|)). Since the output state ρ_out is correct when no more than one qubit is flipped, which happens with probability (1 − p)³ + 3p(1 − p)², we can write it as [(1 − p)³ + 3p(1 − p)²] |ψ′⟩⟨ψ′| + (…), where the dots denote components of ρ_out resulting from errors not properly corrected by the protocol. It follows that

F(ψ′) = ⟨ψ′|ρ_out|ψ′⟩ ≥ (1 − p)³ + 3p(1 − p)² = 1 − 3p² + 2p³.

This fidelity is to be compared with the corresponding fidelity obtained when no error-correcting protocol is used, which was shown before to equal 1 − p. A little algebra then shows that the fidelity after error correction is greater than the one without for p < 1/2. Note that this is consistent with the working assumption that was made while deriving the protocol (of p being small enough).
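The claimed improvement can be checked numerically; the snippet below is a simple sanity check (not part of the derivation) comparing the post-correction fidelity bound 1 − 3p² + 2p³ with the uncorrected fidelity 1 − p:

```python
def fidelity_corrected(p):
    # Probability that at most one of the three qubits flips:
    # (1-p)^3 + 3p(1-p)^2 = 1 - 3p^2 + 2p^3
    return (1 - p) ** 3 + 3 * p * (1 - p) ** 2

def fidelity_uncorrected(p):
    # Without encoding, the qubit survives with probability 1 - p.
    return 1 - p

# Error correction helps exactly when p < 1/2 ...
for p in [0.01, 0.1, 0.3, 0.49]:
    assert fidelity_corrected(p) > fidelity_uncorrected(p)

# ... and the two fidelities coincide at p = 1/2.
assert fidelity_corrected(0.5) == fidelity_uncorrected(0.5)
```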
The bit flip is the only kind of error in classical computers. In quantum computers, however, another kind of error is possible: the sign flip. Through transmission in a channel, the relative sign between |0⟩ and |1⟩ can become inverted. For instance, a qubit in the state |−⟩ = (|0⟩ − |1⟩)/√2 may have its sign flip to |+⟩ = (|0⟩ + |1⟩)/√2.
Under the three-qubit sign-flip code, which encodes in the Hadamard (|+⟩, |−⟩) basis, the original state of the qubit |ψ⟩ = α₀|0⟩ + α₁|1⟩ will be changed into the state |ψ′⟩ = α₀|+++⟩ + α₁|−−−⟩.
In the Hadamard basis, bit flips become sign flips and sign flips become bit flips. Let E_phase be a quantum channel that can cause at most one phase flip. Then the bit-flip code from above can recover |ψ⟩ by transforming into the Hadamard basis before and after transmission through E_phase.
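The basis-change trick relies on the identities HZH = X and HXH = Z: conjugating by the Hadamard gate exchanges the two error types, so the bit-flip code corrects phase flips in the Hadamard basis. A short NumPy check (illustrative):

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
X = np.array([[0, 1], [1, 0]])                 # bit flip
Z = np.array([[1, 0], [0, -1]])                # sign (phase) flip

# Conjugating by Hadamard swaps bit flips and sign flips.
assert np.allclose(H @ Z @ H, X)
assert np.allclose(H @ X @ H, Z)
```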
The error channel may induce either a bit flip, a sign flip (i.e., a phase flip), or both. It is possible to correct for both types of errors on a logical qubit using a well-designed QEC code. One example of a code that does this is the Shor code, published in 1995.[2][5]: 10 Since these two types of errors are the only types of errors that can result after a projective measurement, a Shor code corrects arbitrary single-qubit errors.
Let E be a quantum channel that can arbitrarily corrupt a single qubit. The 1st, 4th and 7th qubits are for the sign flip code, while the three groups of qubits (1,2,3), (4,5,6), and (7,8,9) are designed for the bit flip code. With the Shor code, a qubit state |ψ⟩ = α₀|0⟩ + α₁|1⟩ will be transformed into the product of 9 qubits |ψ′⟩ = α₀|0_S⟩ + α₁|1_S⟩, where

|0_S⟩ = (1/(2√2)) (|000⟩ + |111⟩) ⊗ (|000⟩ + |111⟩) ⊗ (|000⟩ + |111⟩)
|1_S⟩ = (1/(2√2)) (|000⟩ − |111⟩) ⊗ (|000⟩ − |111⟩) ⊗ (|000⟩ − |111⟩)
If a bit flip error happens to a qubit, the syndrome analysis will be performed on each block of qubits (1,2,3), (4,5,6), and (7,8,9) to detect and correct at most one bit flip error in each block.
If the three bit-flip groups (1,2,3), (4,5,6), and (7,8,9) are considered as three inputs, then the Shor code circuit reduces to a sign flip code. This means that the Shor code can also repair a sign flip error for a single qubit.
The Shor code can also correct any arbitrary errors (both bit flip and sign flip) to a single qubit. If an error is modeled by a unitary transform U, which will act on a qubit |ψ⟩, then U can be described in the form

U = c₀I + c₁X + c₂Y + c₃Z

where c₀, c₁, c₂, and c₃ are complex constants, I is the identity, and the Pauli matrices are given by

X = (0 1; 1 0);  Y = (0 −i; i 0);  Z = (1 0; 0 −1).
If U is equal to I, then no error occurs. If U = X, a bit flip error occurs. If U = Z, a sign flip error occurs. If U = iY then both a bit flip error and a sign flip error occur. In other words, the Shor code can correct any combination of bit or phase errors on a single qubit.
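The coefficients in U = c₀I + c₁X + c₂Y + c₃Z can be extracted with the trace inner product, cₖ = Tr(σₖ† U)/2, because the Pauli matrices are orthogonal under it. A NumPy sketch (illustrative, using an arbitrary example rotation as U):

```python
import numpy as np

I = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]])
paulis = [I, X, Y, Z]

def pauli_coefficients(U):
    # Project U onto each Pauli basis element: c_k = Tr(sigma_k^dagger U) / 2.
    return [np.trace(P.conj().T @ U) / 2 for P in paulis]

# Example: a rotation mixing identity and bit flip, U = cos(t) I - i sin(t) X.
theta = 0.3
U = np.cos(theta) * I - 1j * np.sin(theta) * X
c = pauli_coefficients(U)

assert np.allclose(c, [np.cos(theta), -1j * np.sin(theta), 0, 0])
# The coefficients reconstruct U exactly.
assert np.allclose(sum(ck * P for ck, P in zip(c, paulis)), U)
```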
More generally, the error operator U does not need to be unitary, but can be a Kraus operator from a quantum operation representing a system interacting with its environment.
Several proposals have been made for storing error-correctable quantum information in bosonic modes. Unlike a two-level system, a quantum harmonic oscillator has infinitely many energy levels in a single physical system. Codes for these systems include cat,[6][7][8] Gottesman-Kitaev-Preskill (GKP),[9] and binomial codes.[10][11] One insight offered by these codes is to take advantage of the redundancy within a single system, rather than to duplicate many two-level qubits.
Written in the Fock basis, the simplest binomial encoding is

|0_L⟩ = (|0⟩ + |4⟩)/√2,  |1_L⟩ = |2⟩,

where the subscript L indicates a "logically encoded" state. Then if the dominant error mechanism of the system is the stochastic application of the bosonic lowering operator â, the corresponding error states are |3⟩ and |1⟩, respectively. Since the codewords involve only even photon number, and the error states involve only odd photon number, errors can be detected by measuring the photon number parity of the system.[10][12] Measuring odd parity allows correction by application of an appropriate unitary operation without knowledge of the specific logical state of the qubit. However, the particular binomial code above is not robust to two-photon loss.
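The parity argument can be illustrated on a truncated Fock space. In the NumPy sketch below (illustrative; the truncation dimension is an arbitrary choice), the lowering operator â, with â|n⟩ = √n |n−1⟩, takes both even-parity codewords to odd-parity error states:

```python
import numpy as np

N = 6                                        # Fock-space truncation
a = np.diag(np.sqrt(np.arange(1, N)), k=1)   # lowering operator: a|n> = sqrt(n)|n-1>

fock = lambda n: np.eye(N)[n]
zero_L = (fock(0) + fock(4)) / np.sqrt(2)    # |0_L> = (|0> + |4>)/sqrt(2)
one_L = fock(2)                              # |1_L> = |2>

def parity(state):
    # Expectation of exp(i*pi*n): +1 for even photon number, -1 for odd.
    signs = (-1) ** np.arange(N)
    return np.sum(signs * np.abs(state) ** 2)

# Codewords live in the even-parity subspace.
assert np.isclose(parity(zero_L), 1) and np.isclose(parity(one_L), 1)

# A single photon loss maps them to odd-parity error states |3> and |1>.
err0 = a @ zero_L; err0 /= np.linalg.norm(err0)
err1 = a @ one_L;  err1 /= np.linalg.norm(err1)
assert np.allclose(err0, fock(3)) and np.allclose(err1, fock(1))
assert np.isclose(parity(err0), -1) and np.isclose(parity(err1), -1)
```

A parity measurement therefore flags the error without revealing which logical state is encoded, which is what makes the correction non-destructive.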
Schrödinger cat states, superpositions of coherent states, can also be used as logical states for error correction codes. The cat code, realized by Ofek et al.[13] in 2016, defines two sets of logical states, {|0_L⁺⟩, |1_L⁺⟩} and {|0_L⁻⟩, |1_L⁻⟩}, where each of the states is a superposition of coherent states as follows:
|0_L⁺⟩ ≡ |α⟩ + |−α⟩,
|1_L⁺⟩ ≡ |iα⟩ + |−iα⟩,
|0_L⁻⟩ ≡ |α⟩ − |−α⟩,
|1_L⁻⟩ ≡ |iα⟩ − |−iα⟩.
These two sets of states differ in photon number parity: states denoted with ⁺ occupy only even photon number states, and states with ⁻ have odd parity. Similar to the binomial code, if the dominant error mechanism of the system is the stochastic application of the bosonic lowering operator â, the error takes the logical states from the even parity subspace to the odd one, and vice versa. Single-photon-loss errors can therefore be detected by measuring the photon number parity operator exp(iπâ†â) using a dispersively coupled ancillary qubit.[12]
Still, cat qubits are not protected against two-photon loss â², dephasing noise â†â, photon-gain error â†, etc.[6][7][8]
In general, a quantum code for a quantum channel ℰ is a subspace 𝒞 ⊆ ℋ, where ℋ is the state Hilbert space, such that there exists another quantum channel ℛ with

(ℛ ∘ ℰ)(ρ) = ρ  for all ρ = P_𝒞 ρ P_𝒞,

where P_𝒞 is the orthogonal projection onto 𝒞. Here ℛ is known as the correction operation.
A non-degenerate code is one for which different elements of the set of correctable errors produce linearly independent results when applied to elements of the code. If distinct elements of the set of correctable errors produce orthogonal results, the code is considered pure.[14]
Over time, researchers have come up with several codes:
That these codes indeed allow for quantum computations of arbitrary length is the content of the quantum threshold theorem, found by Michael Ben-Or and Dorit Aharonov, which asserts that all errors can be corrected by concatenating quantum codes such as the CSS codes (that is, by re-encoding each logical qubit by the same code again, and so on, over logarithmically many levels), provided that the error rate of individual quantum gates is below a certain threshold; otherwise, the attempts to measure the syndrome and correct the errors would introduce more new errors than they correct.
As of late 2004, estimates for this threshold indicate that it could be as high as 1–3%,[20] provided that there are sufficiently many qubits available.
There have been several experimental realizations of CSS-based codes. The first demonstration was with nuclear magnetic resonance qubits.[21] Subsequently, demonstrations have been made with linear optics,[22] trapped ions,[23][24] and superconducting (transmon) qubits.[25]
In 2016, for the first time, the lifetime of a quantum bit was prolonged by employing a QEC code.[13] The error-correction demonstration was performed on Schrödinger-cat states encoded in a superconducting resonator, and employed a quantum controller capable of performing real-time feedback operations, including read-out of the quantum information, its analysis, and the correction of its detected errors. The work demonstrated how the quantum-error-corrected system reaches the break-even point at which the lifetime of a logical qubit exceeds the lifetime of the underlying constituents of the system (the physical qubits).
Other error correcting codes have also been implemented, such as one aimed at correcting for photon loss, the dominant error source in photonic qubit schemes.[26][27]
In 2021, an entangling gate between two logical qubits encoded in topological quantum error-correction codes was first realized using 10 ions in a trapped-ion quantum computer.[28][29] 2021 also saw the first experimental demonstration of a fault-tolerant Bacon-Shor code in a single logical qubit of a trapped-ion system, i.e. a demonstration for which the addition of error correction suppresses more errors than the overhead required to implement it introduces, as well as of a fault-tolerant Steane code.[30][31][32] In a different direction, using an encoding corresponding to the Jordan-Wigner mapped Majorana zero modes of a Kitaev chain, researchers were able to perform quantum teleportation of a logical qubit, observing an improvement in fidelity from 71% to 85%.[33]
In 2022, researchers at the University of Innsbruck demonstrated a fault-tolerant universal set of gates on two logical qubits in a trapped-ion quantum computer. They performed a logical two-qubit controlled-NOT gate between two instances of the seven-qubit colour code, and fault-tolerantly prepared a logical magic state.[34]
In February 2023, researchers at Google claimed to have decreased quantum errors by increasing the qubit number in experiments: using a fault-tolerant surface code, they measured error rates of 3.028% and 2.914% for a distance-3 qubit array and a distance-5 qubit array, respectively.[35][36][37]
In April 2024, researchers at Microsoft claimed to have successfully tested a quantum error correction code that allowed them to achieve an error rate with logical qubits that is 800 times better than the underlying physical error rate.[38]
This qubit virtualization system was used to create 4 logical qubits with 30 of the 32 qubits on Quantinuum's trapped-ion hardware. The system uses an active syndrome extraction technique to diagnose errors and correct them while calculations are underway without destroying the logical qubits.[39]
In January 2025, researchers at UNSW Sydney developed an error correction method using antimony-based materials, including antimonides, leveraging high-dimensional quantum states (qudits) with up to eight states. By encoding quantum information in the nuclear spin of a phosphorus atom embedded in silicon and employing advanced pulse control techniques, they demonstrated enhanced error resilience.[40]
In 2022, research at the University of Engineering and Technology Lahore demonstrated error cancellation by inserting single-qubit Z-axis rotation gates into strategically chosen locations of superconducting quantum circuits.[41] The scheme has been shown to effectively correct errors that would otherwise rapidly add up under constructive interference of coherent noise. This is a circuit-level calibration scheme that traces deviations (e.g. sharp dips or notches) in the decoherence curve to detect and localize the coherent error, but does not require encoding or parity measurements.[42] However, further investigation is needed to establish the effectiveness of this method for incoherent noise.[41]
|
https://en.wikipedia.org/wiki/Quantum_error_correction
|
A national identity document is an identity card with a photo, usable as an identity card at least inside the country, and which is issued by an official national authority. Identity cards can be issued voluntarily or may be compulsory to possess as a resident or citizen.[1]
Driving licences and other cards issued by state or regional governments indicating certain permissions are not counted here as national identity cards. So, for example, by this criterion, the United States driver's license is excluded, as these are issued by local (state) governments.
Generally, most countries in the world issue identity cards, with fewer than 10 countries worldwide not issuing them, mostly confined to the anglosphere, microstates and unrecognised states.[1] Many states issue voluntary identity cards to citizens as a convenience. As of 1996, identity cards were compulsory in over 100 countries.[2] In these countries, the meaning of compulsory varies.[2]
In the European Union, an EU/EEA national identity card can be used to travel freely within the EU/EEA in lieu of a passport.[3] Similarly, in South America, citizens may use an identity card to travel between MERCOSUR states.[4] In many other areas of the world, simplified travel arrangements are in place for neighbouring countries, allowing the use of identity cards for travel.
The term "compulsory" may have different meanings and implications in different countries. Possession of a card may only become compulsory at a certain age. There may be a penalty for not carrying a card or identification such as a driving licence. In some cases a person may be detained until identity is proven. This facilitates police identification of fugitives. In some countries, police need a reason to ask for identification, such as suspicion of a crime or security risk, while in others, they can do so without stating a reason. Random checks are rare, except in police states. Normally there is an age limit, such as 18, after which possession is mandatory, even if minors aged 15–17 may need a card in order to prove that they are under 18.
The card's front has the bearer's picture (with an electronic stamp on it) and right thumb print. It also includes either the bearer's signature or, if the bearer is illiterate, the phrase "cannot sign" (não assina). The verso has the unique number assigned to the bearer (registro geral or RG), the bearer's full name, filiation, birthplace (locality and federation unit), birth date, and CPF number. It may include some additional information. It is officially 102 × 68 mm,[13] but lamination tends to make it slightly larger than the ISO/IEC 7810 ID-2 standard of 105 × 74 mm, so it is a tight fit in most wallets. A driver's licence has only recently been given the same legal status as the national identity card. In most situations, only a few other documents can be substituted for a national identity card: for example, identification documents issued by national councils of professionals.
As of 2020, a new Electronic Identity Document is being issued, which must be renewed every 10 years. This new document is available both physically, as a card, and electronically, through a mobile application.[25]
In Greece, there are many everyday things one cannot do without an ID. In fact, according to an older law, the Police ID is the only legal identity document and no one has a right to ask for more identity documents. Since the 1980s all legal services in Greece must be done with this ID. It is possible to travel within the EU using a Greek national ID card, although it may cause delays at border controls because those cards do not have machine-readable zones. Carrying any ID is not de jure compulsory. However, during routine police checks, if a citizen is found without an ID, the police officer may take them to the nearest police station for further investigation, thus rendering always carrying the ID card de facto compulsory.
The Guatemalan constitution requires personal identification via documentation, person rooting or the government. If the person cannot be identified, they may be sent to a judge until identification is provided.[37]
Police officers have an absolute right to require every person aged 15 or above on public premises to produce their HKID or valid passport for inspection; failure to produce such photo ID constitutes an offence in law. The reason for setting up police random checks is the end of the Touch Base Policy on 24 October 1980, which meant that all illegal immigrants from China who failed to present a valid Hong Kong Identity Card at random checks would subsequently be sent back to Mainland China.
The Directorate General of National Security of Morocco announced that it will issue a newer version of the national electronic identity card (NEIC) from 2020. The NEIC is biometric and provides citizens with a birth certificate, residence certificate, and extracts of birth and citizenship certificates.
North Korea is probably the country which imposes the strongest fines for citizens not carrying ID cards. For travel, North Koreans need both an identity card, and a "travel pass", with specified destination and written permission. Sometimes citizens may be punished with time in a labour camp for not carrying their cards, however this is often only a short sentence and people are usually released upon presentation of the card at a later date. Although much is not known about the properties of the card, it is probably plastic and similar in size to most European ID cards.
Between 2004 and 2008, all records were transferred to an electronic Korean-language central database. Obtaining a driving license in North Korea is unusual, except for professional drivers, mechanics, and assistants, since few citizens own cars. Only government officials are issued passports because the state restricts citizens' travel. North Koreans working abroad are issued contracts between North Korea and the host country to allow for travel, and government officers often accompany and supervise workers.
The Philippine Identification System (PhilSys) ID, also known as the Philippine identity card, is issued to all Filipino citizens and resident aliens in the Philippines. The pilot implementation began in selected regions in 2018 and full implementation began in 2019.[73] The national ID card is not compulsory and will harmonize existing government-initiated identification cards, including the Unified Multi-Purpose ID issued to members of the Social Security System, Government Service Insurance System, Philippine Health Insurance Corporation, and Home Development Mutual Fund (Pag-IBIG Fund).[74] This will also replace the Alien Certificate of Registration (ACR) Card for foreign residents and expatriates who are living in the Philippines permanently.
Because it is sometimes necessary to produce a national identity card, many South African permanent residents carry their card at all times.
All citizens must submit all 10 fingerprints to the criminal database operated by the National Police Agency, and a right thumb fingerprint to the Ministry of the Interior and Safety, at the time of ID card application.
Documents for Uruguayan citizens are in blue, and documents for legal residents are in yellow with the inscription "EXTRANJERO" (foreigner).
It is required for many things such as credit card transactions, age verification, etc.
These are countries where official authorities issue identity cards to those who request them, but where it is not illegal to be without an official identity document. For some services, identification is needed, but documents such as passports or identity cards issued by banks or driving licences can be used. In countries where national identity cards are fully voluntary, they are often not so commonly used, because many already have a passport and a driving licence, so a third identity document is often considered superfluous.
This national digital ID system also offers real-time online and offline authentication to support eKYC. It is consent-based, biometric-backed identification for all legal residents of Ethiopia (non-citizens and minors are also eligible).[121][122]
While police officers and some other officials have a right to demand to see one of those documents, the law does not state that one is obliged to submit the document immediately. Fines may only be applied if an identity card or passport is not possessed at all, if the document is expired or if one explicitly refuses to show ID to the police. If one is unable to produce an ID card or passport (or any other form of credible identification) during a police control, one can (in theory) be brought to the next police post and detained for a maximum of 12 hours, or until positive identification is possible. However, this measure is only applied if the police have reasonable grounds to believe the person detained has committed an offence.[127]
The British Overseas Territory of Gibraltar has a voluntary ID card system for citizens, valid in the UK and EU/European Free Trade Association member countries.
The police have the legal power to stop people in the street at random and ask for an ID card. A person who has no proof of identification can be detained for a maximum of 24 hours.
The US uses the Social Security number as the de facto national ID number of the country.
It is unclear if it is compulsory or not.
These are countries where official authorities do not issue any identity cards. When identification is needed, documents such as passports, driving licences, or bank cards can be used, along with manual verification such as utility bills and bank statements.[167] Most countries that are not listed at all on this page have no national ID card.
In 1985, there was a failed proposal to create an Australia Card. In 2007, there was another failed proposal to create a non-compulsory Access Card that would act as a gateway to the Department of Human Services.
|
https://en.wikipedia.org/wiki/List_of_national_identity_card_policies_by_country
|
In machine learning, multiple-instance learning (MIL) is a type of supervised learning. Instead of receiving a set of instances which are individually labeled, the learner receives a set of labeled bags, each containing many instances. In the simple case of multiple-instance binary classification, a bag is labeled negative if all the instances in it are negative. On the other hand, a bag is labeled positive if there is at least one instance in it which is positive. From a collection of labeled bags, the learner tries to either (i) induce a concept that will label individual instances correctly or (ii) learn how to label bags without inducing the concept.
Babenko (2008)[1] gives a simple example for MIL. Imagine several people, each of whom has a key chain that contains a few keys. Some of these people are able to enter a certain room, and some aren't. The task is then to predict whether a certain key or a certain key chain can get you into that room. To solve this problem we need to find the exact key that is common to all the "positive" key chains. If we can correctly identify this key, we can also correctly classify an entire key chain: positive if it contains the required key, or negative if it doesn't.
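The key-chain story can be made concrete with a tiny sketch (the key names below are invented for illustration): the common key is one that appears in every positive chain and in no negative one.

```python
# Toy version of Babenko's key-chain example (key names are illustrative).
# Each bag is a set of keys; a bag is positive if its chain opens the room.
positive_bags = [{"red", "blue", "gold"}, {"gold", "green"}, {"gold", "black"}]
negative_bags = [{"red", "green"}, {"blue", "black"}]

# Candidate keys: common to every positive chain...
candidates = set.intersection(*positive_bags)
# ...and absent from every negative chain.
candidates -= set.union(*negative_bags)
print(candidates)  # {'gold'}

def classify(bag, key_set):
    """Label a new key chain positive if it holds any identified key."""
    return bool(bag & key_set)

print(classify({"gold", "pink"}, candidates))  # True
print(classify({"red", "blue"}, candidates))   # False
```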
Depending on the type and variation in training data, machine learning can be roughly categorized into three frameworks: supervised learning, unsupervised learning, and reinforcement learning. Multiple instance learning (MIL) falls under the supervised learning framework, where every training instance has a label, either discrete or real-valued. MIL deals with problems with incomplete knowledge of labels in training sets. More precisely, in multiple-instance learning, the training set consists of labeled "bags", each of which is a collection of unlabeled instances. A bag is positively labeled if at least one instance in it is positive, and is negatively labeled if all instances in it are negative. The goal of MIL is to predict the labels of new, unseen bags.
Keeler et al.[2] were the first to explore the area of MIL, in work from the early 1990s. The actual term multi-instance learning was introduced in the mid-1990s by Dietterich et al. while they were investigating the problem of drug activity prediction.[3] They tried to create a learning system that could predict whether a new molecule was qualified to make some drug by analyzing a collection of known molecules. Molecules can have many alternative low-energy states, but only one, or some of them, are qualified to make a drug. The problem arose because scientists could only determine whether a molecule was qualified or not; they couldn't say exactly which of its low-energy shapes were responsible.
One of the proposed ways to solve this problem was to use supervised learning, regarding all the low-energy shapes of a qualified molecule as positive training instances and all the low-energy shapes of unqualified molecules as negative instances. Dietterich et al. showed that such a method would suffer from high false-positive noise, from all the low-energy shapes that are mislabeled as positive, and thus wasn't really useful.[3] Their approach was instead to regard each molecule as a labeled bag, and all the alternative low-energy shapes of that molecule as instances in the bag, without individual labels, thus formulating the multiple-instance learning problem.
The solution to the multiple-instance learning problem that Dietterich et al. proposed is the axis-parallel rectangle (APR) algorithm.[3] It attempts to search for appropriate axis-parallel rectangles constructed by the conjunction of the features. They tested the algorithm on the Musk dataset,[4][5] which is a concrete test set for drug activity prediction and the most commonly used benchmark in multiple-instance learning. The APR algorithm achieved the best result, but APR was designed with the Musk data in mind.
The problem of multi-instance learning is not unique to drug finding. In 1998, Maron and Ratan found another application of multiple instance learning, to scene classification in machine vision, and devised the Diverse Density framework.[6] Given an image, an instance is taken to be one or more fixed-size subimages, and the bag of instances is taken to be the entire image. An image is labeled positive if it contains the target scene (a waterfall, for example) and negative otherwise. Multiple instance learning can be used to learn the properties of the subimages which characterize the target scene. From there on, these frameworks have been applied to a wide spectrum of applications, ranging from image concept learning and text categorization to stock market prediction.
Take image classification, for example (Amores, 2013). Given an image, we want to know its target class based on its visual content. For instance, the target class might be "beach", where the image contains both "sand" and "water". In MIL terms, the image is described as a bag $X=\{X_1,\ldots,X_N\}$, where each $X_i$ is the feature vector (called an instance) extracted from the corresponding $i$-th region in the image and $N$ is the total number of regions (instances) partitioning the image. The bag is labeled positive ("beach") if it contains both "sand" region instances and "water" region instances.
Examples of where MIL is applied are:
Numerous researchers have worked on adapting classical classification techniques, such as support vector machines or boosting, to work within the context of multiple-instance learning.
If the space of instances is $\mathcal{X}$, then the set of bags is the set of functions $\mathbb{N}^{\mathcal{X}}=\{B:\mathcal{X}\rightarrow\mathbb{N}\}$, which is isomorphic to the set of multi-subsets of $\mathcal{X}$. For each bag $B\in\mathbb{N}^{\mathcal{X}}$ and each instance $x\in\mathcal{X}$, $B(x)$ is viewed as the number of times $x$ occurs in $B$.[8] Let $\mathcal{Y}$ be the space of labels; then a "multiple instance concept" is a map $c:\mathbb{N}^{\mathcal{X}}\rightarrow\mathcal{Y}$. The goal of MIL is to learn such a concept. The remainder of the article will focus on binary classification, where $\mathcal{Y}=\{0,1\}$.
Most of the work on multiple instance learning, including the early papers of Dietterich et al. (1997) and Maron & Lozano-Pérez (1997),[3][9] makes an assumption about the relationship between the instances within a bag and the class label of the bag. Because of its importance, that assumption is often called the standard MI assumption.
The standard assumption takes each instance $x\in\mathcal{X}$ to have an associated label $y\in\{0,1\}$ which is hidden to the learner. The pair $(x,y)$ is called an "instance-level concept". A bag is now viewed as a multiset of instance-level concepts, and is labeled positive if at least one of its instances has a positive label, and negative if all of its instances have negative labels. Formally, let $B=\{(x_1,y_1),\ldots,(x_n,y_n)\}$ be a bag. The label of $B$ is then $c(B)=1-\prod_{i=1}^{n}(1-y_i)$. The standard MI assumption is asymmetric: if the positive and negative labels are reversed, the assumption has a different meaning. Therefore, when this assumption is used, it must be clear which label is the positive one.
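The labeling rule above can be sketched in a few lines; for illustration the instance labels are passed in explicitly, although under the standard assumption they are hidden from the learner.

```python
# Bag label under the standard MI assumption: positive iff any instance is
# positive, i.e. c(B) = 1 - prod(1 - y_i) over hidden instance labels y_i.
def bag_label(instance_labels):
    prod = 1
    for y in instance_labels:
        prod *= (1 - y)
    return 1 - prod

print(bag_label([0, 0, 1, 0]))  # 1 (at least one positive instance)
print(bag_label([0, 0, 0]))     # 0 (all instances negative)
```

Note the asymmetry: flipping which label counts as "positive" changes the rule from "any" to "all".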
The standard assumption might be viewed as too strict, and in recent years researchers have tried to relax it, giving rise to other, looser assumptions.[10] The motivation is the belief that the standard MIL assumption is appropriate for the Musk dataset, but since MIL can be applied to numerous other problems, different assumptions may be more appropriate. Guided by that idea, Weidmann[11] formulated a hierarchy of generalized instance-based assumptions for MIL. It consists of the standard MI assumption and three types of generalized MI assumptions, each more general than the last, in the sense that the former can be obtained as a specific choice of parameters of the latter: standard ⊂ presence-based ⊂ threshold-based ⊂ count-based, with the count-based assumption being the most general and the standard assumption being the least general. (Note, however, that any bag meeting the count-based assumption meets the threshold-based assumption, which in turn meets the presence-based assumption which, again in turn, meets the standard assumption. In that sense it is also correct to state that the standard assumption is the weakest, hence most general, and the count-based assumption is the strongest, hence least general.) One would expect an algorithm which performs well under one of these assumptions to perform at least as well under the less general assumptions.
The presence-based assumption is a generalization of the standard assumption, wherein a bag must contain at least one instance of every concept in a set of required instance-level concepts in order to be labeled positive. Formally, let $C_R\subseteq\mathcal{X}\times\mathcal{Y}$ be the set of required instance-level concepts, and let $\#(B,c_i)$ denote the number of times the instance-level concept $c_i$ occurs in the bag $B$. Then $c(B)=1\Leftrightarrow\#(B,c_i)\geq 1$ for all $c_i\in C_R$. Note that, by taking $C_R$ to contain only one instance-level concept, the presence-based assumption reduces to the standard assumption.
A further generalization comes with the threshold-based assumption, where each required instance-level concept must occur not only once in a bag, but some minimum (threshold) number of times in order for the bag to be labeled positive. With the notation above, each required instance-level concept $c_i\in C_R$ is associated with a threshold $l_i\in\mathbb{N}$. For a bag $B$, $c(B)=1\Leftrightarrow\#(B,c_i)\geq l_i$ for all $c_i\in C_R$.
The count-based assumption is a final generalization which enforces both lower and upper bounds on the number of times a required concept can occur in a positively labeled bag. Each required instance-level concept $c_i\in C_R$ has a lower threshold $l_i\in\mathbb{N}$ and an upper threshold $u_i\in\mathbb{N}$ with $l_i\leq u_i$. A bag $B$ is labeled according to $c(B)=1\Leftrightarrow l_i\leq\#(B,c_i)\leq u_i$ for all $c_i\in C_R$.
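The three generalized assumptions can be sketched as nested labeling rules over concept counts; the concept names and the example bag below are invented for illustration.

```python
from collections import Counter

# Weidmann's hierarchy, sketched with instance-level concept counts.
# `required` maps each concept c_i to bounds (l_i, u_i); counts come from the bag.
def count_based_label(bag, required):
    counts = Counter(bag)
    return int(all(lo <= counts[c] <= hi for c, (lo, hi) in required.items()))

# Threshold-based = count-based with no upper bound;
# presence-based = threshold-based with a threshold of 1 for every concept.
def threshold_based_label(bag, thresholds):
    return count_based_label(bag, {c: (l, float("inf")) for c, l in thresholds.items()})

def presence_based_label(bag, concepts):
    return threshold_based_label(bag, {c: 1 for c in concepts})

bag = ["water", "sand", "sand", "sky"]
print(presence_based_label(bag, {"water", "sand"}))   # 1
print(threshold_based_label(bag, {"sand": 2}))        # 1
print(count_based_label(bag, {"sand": (1, 1)}))       # 0 ("sand" occurs too often)
```

Each level is obtained from the next by fixing parameters, which is exactly the containment ordering in the text.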
Scott, Zhang, and Brown (2005)[12] describe another generalization of the standard model, which they call "generalized multiple instance learning" (GMIL). The GMIL assumption specifies a set of required instances $Q\subseteq\mathcal{X}$. A bag $X$ is labeled positive if it contains instances which are sufficiently close to at least $r$ of the required instances in $Q$.[12] Under only this condition, the GMIL assumption is equivalent to the presence-based assumption.[8] However, Scott et al. describe a further generalization in which there is a set of attraction points $Q\subseteq\mathcal{X}$ and a set of repulsion points $\overline{Q}\subseteq\mathcal{X}$. A bag is labeled positive if and only if it contains instances which are sufficiently close to at least $r$ of the attraction points and sufficiently close to at most $s$ of the repulsion points.[12] This condition is strictly more general than the presence-based one, though it does not fall within the above hierarchy.
In contrast to the previous assumptions, where the bags were viewed as fixed, the collective assumption views a bag $B$ as a distribution $p(x|B)$ over instances $\mathcal{X}$, and similarly views labels as a distribution $p(y|x)$ over instances. The goal of an algorithm operating under the collective assumption is then to model the distribution $p(y|B)=\int_{\mathcal{X}}p(y|x)p(x|B)\,dx$.
Since $p(x|B)$ is typically considered fixed but unknown, algorithms instead focus on computing the empirical version: $\hat{p}(y|B)=\frac{1}{n_B}\sum_{i=1}^{n_B}p(y|x_i)$, where $n_B$ is the number of instances in bag $B$. Since $p(y|x)$ is also typically taken to be fixed but unknown, most collective-assumption-based methods focus on learning this distribution, as in the single-instance version.[8][10]
While the collective assumption weights every instance with equal importance, Foulds extended the collective assumption to incorporate instance weights. The weighted collective assumption is then that $\hat{p}(y|B)=\frac{1}{w_B}\sum_{i=1}^{n_B}w(x_i)p(y|x_i)$, where $w:\mathcal{X}\rightarrow\mathbb{R}^{+}$ is a weight function over instances and $w_B=\sum_{x\in B}w(x)$.[8]
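The empirical (weighted) collective estimate can be sketched as an average of per-instance posteriors; the toy posterior `p` below is an invented stand-in for a learned single-instance model.

```python
# Empirical collective assumption: p(y|B) estimated as a (weighted) average
# of per-instance posteriors p(y|x). With unit weights this reduces to the
# plain collective assumption's mean over the bag.
def collective_bag_posterior(bag, p_y_given_x, w=lambda x: 1.0):
    wB = sum(w(x) for x in bag)                       # normalizer w_B
    return sum(w(x) * p_y_given_x(x) for x in bag) / wB

# Toy posterior: instances are scalars, probability of y=1 rises with x.
p = lambda x: min(max(x, 0.0), 1.0)
print(collective_bag_posterior([0.2, 0.4, 0.9], p))   # 0.5
```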
There are two major flavors of algorithms for multiple instance learning: instance-based and metadata-based (or embedding-based) algorithms. The term "instance-based" denotes that the algorithm attempts to find a set of representative instances based on an MI assumption and classify future bags from these representatives. By contrast, metadata-based algorithms make no assumptions about the relationship between instances and bag labels, and instead try to extract instance-independent information (or metadata) about the bags in order to learn the concept.[10]For a survey of some of the modern MI algorithms see Foulds and Frank.[8]
The earliest proposed MI algorithms were a set of "iterated-discrimination" algorithms developed by Dietterich et al., and Diverse Density developed by Maron and Lozano-Pérez.[3][9]Both of these algorithms operated under the standard assumption.
Broadly, all of the iterated-discrimination algorithms consist of two phases. The first phase is to grow an axis-parallel rectangle (APR) which contains at least one instance from each positive bag and no instances from any negative bags. This is done iteratively: starting from a random instance $x_1\in B_1$ in a positive bag, the APR is expanded to the smallest APR covering any instance $x_2$ in a new positive bag $B_2$. This process is repeated until the APR covers at least one instance from each positive bag. Then, each instance $x_i$ contained in the APR is given a "relevance", corresponding to how many negative points it excludes from the APR if removed. The algorithm then selects candidate representative instances in order of decreasing relevance, until no instance contained in a negative bag is also contained in the APR. The algorithm repeats these growth and representative-selection steps until convergence, where APR size at each iteration is taken only along candidate representatives.
After the first phase, the APR is thought to tightly contain only the representative attributes. The second phase expands this tight APR as follows: a Gaussian distribution is centered at each attribute and a looser APR is drawn such that positive instances will fall outside the tight APR with fixed probability.[4]Though iterated discrimination techniques work well with the standard assumption, they do not generalize well to other MI assumptions.[8]
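The core operation of the growth phase, finding the smallest APR covering a set of chosen instances, can be sketched as a bounding box; this omits the iterative bag-by-bag expansion, the relevance weighting, and the Gaussian loosening of the second phase.

```python
# Minimal sketch of the APR "grow" primitive: the smallest axis-parallel
# rectangle covering a set of points, plus a membership test.
def grow_apr(points):
    dims = range(len(points[0]))
    lows = [min(p[d] for p in points) for d in dims]
    highs = [max(p[d] for p in points) for d in dims]
    return lows, highs

def in_apr(apr, x):
    lows, highs = apr
    return all(lo <= xi <= hi for lo, xi, hi in zip(lows, x, highs))

# One instance chosen from each of three (hypothetical) positive bags:
apr = grow_apr([(0.1, 0.2), (0.4, 0.9), (0.8, 0.3)])
print(in_apr(apr, (0.5, 0.5)))  # True
print(in_apr(apr, (0.9, 0.5)))  # False
```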
In its simplest form, Diverse Density (DD) assumes a single representative instancet∗{\displaystyle t^{*}}as the concept. This representative instance must be "dense" in that it is much closer to instances from positive bags than from negative bags, as well as "diverse" in that it is close to at least one instance from each positive bag.
Let $\mathcal{B}^{+}=\{B_i^{+}\}_{1}^{m}$ be the set of positively labeled bags and let $\mathcal{B}^{-}=\{B_i^{-}\}_{1}^{n}$ be the set of negatively labeled bags; then the best candidate for the representative instance is given by $\hat{t}=\arg\max_{t}DD(t)$, where the diverse density $DD(t)=\Pr\left(t|\mathcal{B}^{+},\mathcal{B}^{-}\right)=\prod_{i=1}^{m}\Pr\left(t|B_i^{+}\right)\prod_{i=1}^{n}\Pr\left(t|B_i^{-}\right)$ under the assumption that bags are independently distributed given the concept $t^{*}$. Letting $B_{ij}$ denote the $j$th instance of bag $i$, the noisy-or model gives: $\Pr(t|B_i^{+})=1-\prod_{j}\left(1-P(t|B_{ij}^{+})\right)$ and $\Pr(t|B_i^{-})=\prod_{j}\left(1-P(t|B_{ij}^{-})\right)$.
$P(t|B_{ij})$ is taken to be the scaled distance $P(t|B_{ij})\propto\exp\left(-\sum_{k}s_k^2\left(x_k-(B_{ij})_k\right)^2\right)$ where $s=(s_k)$ is the scaling vector. This way, if every positive bag has an instance close to $t$, then $\Pr(t|B_i^{+})$ will be high for each $i$, but if any negative bag $B_i^{-}$ has an instance close to $t$, $\Pr(t|B_i^{-})$ will be low. Hence, $DD(t)$ is high only if every positive bag has an instance close to $t$ and no negative bag has an instance close to $t$. The candidate concept $\hat{t}$ can be obtained through gradient methods. Classification of new bags can then be done by evaluating proximity to $\hat{t}$.[9] Though Diverse Density was originally proposed by Maron et al. in 1998, more recent MIL algorithms use the DD framework, such as EM-DD in 2001,[13] DD-SVM in 2004,[14] and MILES in 2006.[8]
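A minimal sketch of the DD score with the noisy-or model, scoring fixed candidate points instead of running the gradient search (the bags below are invented one-dimensional data):

```python
import math

# Diverse Density with the noisy-or model. p_inst is the scaled-distance
# instance probability; s is the per-feature scaling vector.
def p_inst(t, x, s):
    return math.exp(-sum((si ** 2) * (ti - xi) ** 2 for si, ti, xi in zip(s, t, x)))

def dd(t, pos_bags, neg_bags, s):
    score = 1.0
    for bag in pos_bags:   # every positive bag should have an instance near t
        score *= 1 - math.prod(1 - p_inst(t, x, s) for x in bag)
    for bag in neg_bags:   # no negative bag should have an instance near t
        score *= math.prod(1 - p_inst(t, x, s) for x in bag)
    return score

pos = [[(0.0,), (5.0,)], [(0.1,), (9.0,)]]
neg = [[(4.0,), (6.0,)]]
# A candidate near the shared positive region scores higher than one that
# appears in only one positive bag and sits between negative instances.
print(dd((0.05,), pos, neg, s=(1.0,)) > dd((5.0,), pos, neg, s=(1.0,)))  # True
```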
A number of single-instance algorithms have also been adapted to a multiple-instance context under the standard assumption, including
Post 2000, there was a movement away from the standard assumption and the development of algorithms designed to tackle the more general assumptions listed above.[10]
Because of the high dimensionality of the new feature space and the cost of explicitly enumerating all APRs of the original instance space, GMIL-1 is inefficient both in terms of computation and memory. GMIL-2 was developed as a refinement of GMIL-1 in an effort to improve efficiency. GMIL-2 pre-processes the instances to find a set of candidate representative instances. GMIL-2 then maps each bag to a Boolean vector, as in GMIL-1, but only considers APRs corresponding to unique subsets of the candidate representative instances. This significantly reduces the memory and computational requirements.[8]
By mapping each bag to a feature vector of metadata, metadata-based algorithms allow the flexibility of using an arbitrary single-instance algorithm to perform the actual classification task. Future bags are simply mapped (embedded) into the feature space of metadata and labeled by the chosen classifier. Therefore, much of the focus for metadata-based algorithms is on what features or what type of embedding leads to effective classification. Note that some of the previously mentioned algorithms, such as TLC and GMIL could be considered metadata-based.
They define two variations of kNN, Bayesian-kNN and citation-kNN, as adaptations of the traditional nearest-neighbor problem to the multiple-instance setting.
So far this article has considered multiple instance learning exclusively in the context of binary classifiers. However, the generalizations of single-instance binary classifiers can carry over to the multiple-instance case.
Recent reviews of the MIL literature include:
|
https://en.wikipedia.org/wiki/Multiple_instance_learning
|
Perceptual learning is learning better perception skills, such as differentiating two musical tones from one another or categorizing spatial and temporal patterns relevant to real-world expertise. Examples of this may include reading, seeing relations among chess pieces, and knowing whether or not an X-ray image shows a tumor.
Sensory modalities may include visual, auditory, tactile, olfactory, and taste. Perceptual learning forms important foundations of complex cognitive processes (i.e., language) and interacts with other kinds of learning to produce perceptual expertise.[1][2] Underlying perceptual learning are changes in the neural circuitry. The ability for perceptual learning is retained throughout life.[3]
Laboratory studies have reported many examples of dramatic improvements in sensitivity from appropriately structured perceptual learning tasks. In visual Vernier acuity tasks, observers judge whether one line is displaced above or below a second line. Untrained observers are often already very good at this task, but after training, observers' thresholds have been shown to improve as much as 6-fold.[4][5][6] Similar improvements have been found for visual motion discrimination[7] and orientation sensitivity.[8][9] In visual search tasks, observers are asked to find a target object hidden among distractors or in noise. Studies of perceptual learning with visual search show that experience leads to great gains in sensitivity and speed. In one study by Karni and Sagi,[3] the time it took for subjects to search for an oblique line among a field of horizontal lines was found to improve dramatically, from about 200 ms in one session to about 50 ms in a later session. With appropriate practice, visual search can become automatic and very efficient, such that observers do not need more time to search when there are more items present in the search field.[10] Tactile perceptual learning has been demonstrated on spatial acuity tasks such as tactile grating orientation discrimination, and on vibrotactile perceptual tasks such as frequency discrimination; tactile learning on these tasks has been found to transfer from trained to untrained fingers.[11][12][13][14] Practice with Braille reading and daily reliance on the sense of touch may underlie the enhancement in tactile spatial acuity of blind compared to sighted individuals.[15]
Perceptual learning is prevalent and occurs continuously in everyday life. "Experience shapes the way people see and hear."[16] Experience provides the sensory input to our perceptions as well as knowledge about identities. When people are less knowledgeable about different races and cultures, they may develop stereotypes. Perceptual learning describes a more in-depth relationship between experience and perception: different perceptions of the same sensory input may arise in individuals with different experiences or training. This raises important issues about the ontology of sensory experience and the relationship between cognition and perception.
An example of this is money. We look at money every day and recognize what it is, but when asked to find the correct coin among similar coins with slight differences, we may have trouble spotting the difference, because we have never directly tried to find one. Perceptual learning means learning to perceive differences and similarities among stimuli based on exposure to them. A study conducted by Gibson in 1955 illustrates how exposure to stimuli can affect how well we learn details of different stimuli.
As our perceptual system adapts to the natural world, we become better at discriminating between different stimuli when they belong to different categories than when they belong to the same category. We also tend to become less sensitive to the differences between two instances of the same category.[17]These effects are described as the result ofcategorical perception. Categorical perception effects do not transfer across domains.
By 10 months of age, infants tend to lose sensitivity to differences between speech sounds that belong to the same phonetic category in their native language.[18] They learn to pay attention to salient differences between native phonetic categories and to ignore the less language-relevant ones. In chess, expert players encode larger chunks of positions and relations on the board and require fewer exposures to fully recreate a chess board. This is not due to their possessing superior visual skill, but rather to their advanced extraction of structural patterns specific to chess.[19][20]
Shortly after her baby's birth, a mother becomes able to decipher differences in her baby's cries because she grows more sensitive to those differences: she can tell which cry means the baby is hungry, needs to be changed, and so on.
Extensive practice reading in English leads to extraction and rapid processing of the structural regularities of English spelling patterns. The word superiority effect demonstrates this: people are often much faster at recognizing words than individual letters.[21][22]
In speech phonemes, observers who listen to a continuum of equally spaced consonant-vowel syllables going from /be/ to /de/ are much quicker to indicate that two syllables are different when they belonged to different phonemic categories than when they were two variants of the same phoneme, even when physical differences were equated between each pair of syllables.[23]
Other examples of perceptual learning in the natural world include the ability to distinguish between relative pitches in music,[24]identify tumors in x-rays,[25]sort day-old chicks by gender,[26]taste the subtle differences between beers or wines,[27]identify faces as belonging to different races,[28]detect the features that distinguish familiar faces,[29]discriminate between two bird species ("great blue crown heron" and "chipping sparrow"),[30]and attend selectively to the hue, saturation and brightness values that comprise a color definition.[31]
The prevalent idiom that "practice makes perfect" captures the essence of the ability to reach impressive perceptual expertise. This has been demonstrated for centuries and through extensive amounts of practice in skills such as wine tasting, fabric evaluation, or musical preference. The first documented report, dating to the mid-19th century, is the earliest example of tactile training aimed at decreasing the minimal distance at which individuals can discriminate whether one or two points on their skin have been touched. It was found that this distance (JND, Just Noticeable Difference) decreases dramatically with practice, and that this improvement is at least partially retained on subsequent days. Moreover, this improvement is at least partially specific to the trained skin area. A particularly dramatic improvement was found for skin positions at which initial discrimination was very crude (e.g. on the back), though training could not bring the JND of initially crude areas down to that of initially accurate ones (e.g. fingertips).[32] William James devoted a section in his Principles of Psychology (1890/1950) to "the improvement in discrimination by practice".[33] He noted examples and emphasized the importance of perceptual learning for expertise. In 1918, Clark L. Hull, a noted learning theorist, trained human participants to categorize deformed Chinese characters into categories. For each category, he used 6 instances that shared some invariant structural property. People learned to associate a sound as the name of each category and, more importantly, they were able to classify novel characters accurately.[34] This ability to extract invariances from instances and apply them to classify new instances marked this study as a perceptual learning experiment. It was not until 1969, however, that Eleanor Gibson published her seminal book The Principles of Perceptual Learning and Development and defined the modern field of perceptual learning.
She established the study of perceptual learning as an inquiry into the behavior and mechanism of perceptual change. By the mid-1970s, however, this area was in a state of dormancy due to a shift in focus to perceptual and cognitive development in infancy. Much of the scientific community tended to underestimate the impact of learning compared with innate mechanisms. Thus, most of this research focused on characterizing basic perceptual capacities of young infants rather than on perceptual learning processes.
Since the mid-1980s, there has been a new wave of interest in perceptual learning due to findings of cortical plasticity at the lowest sensory levels of sensory systems. Our increased understanding of the physiology and anatomy of our cortical systems has been used to connect behavioral improvement to the underlying cortical areas. This trend began with the earlier findings of Hubel and Wiesel that perceptual representations at sensory areas of the cortex are substantially modified during a short ("critical") period immediately following birth. Merzenich, Kaas and colleagues showed that though neuroplasticity is diminished, it is not eliminated when the critical period ends.[35] Thus, when the external pattern of stimulation is substantially modified, neuronal representations in lower-level (e.g. primary) sensory areas are also modified. Research in this period centered on basic sensory discriminations, where remarkable improvements were found on almost any sensory task through discrimination practice. Following training, subjects were tested with novel conditions and learning transfer was assessed. This work departed from earlier work on perceptual learning, which spanned different tasks and levels.
A question still debated today is to what extent improvements from perceptual learning stem from peripheral modifications compared with improvement in higher-level readout stages. Early interpretations, such as that suggested by William James, attributed it to higher-level categorization mechanisms whereby initially blurred differences are gradually associated with distinctively different labels. The work focused on basic sensory discrimination, however, suggests that the effects of perceptual learning are specific to changes in low levels of the sensory nervous system (i.e., primary sensory cortices).[36] More recently, research suggests that perceptual learning processes are multilevel and flexible.[37] This cycles back to the earlier Gibsonian view that low-level learning effects are modulated by high-level factors, and suggests that improvement in information extraction may involve not only low-level sensory coding but also apprehension of relatively abstract structure and relations in time and space.
Within the past decade, researchers have sought a more unified understanding of perceptual learning and worked to apply these principles to improve perceptual learning in applied domains.
Perceptual learning effects can be organized into two broad categories: discovery effects and fluency effects.[1] Discovery effects involve some change in the bases of response, such as selecting new information relevant for the task, amplifying relevant information, or suppressing irrelevant information. Experts extract larger "chunks" of information and discover high-order relations and structures in their domains of expertise that are invisible to novices. Fluency effects involve changes in the ease of extraction. Not only can experts process high-order information, they do so with great speed and low attentional load. Discovery and fluency effects work together, so that as the discovered structures become more automatic, attentional resources are conserved for discovery of new relations and for high-level thinking and problem-solving.
William James (Principles of Psychology, 1890) asserted that "My experience is what I agree to attend to. Only those items which I notice shape my mind - without selective interest, experience is an utter chaos."[33] His view was extreme, yet its gist was largely supported by subsequent behavioral and physiological studies. Mere exposure does not seem to suffice for acquiring expertise.
Indeed, a relevant signal in a given behavioral condition may be considered noise in another. For example, when presented with two similar stimuli, one might endeavor to study the differences between their representations in order to improve one's ability to discriminate between them, or one may instead concentrate on the similarities to improve one's ability to identify both as belonging to the same category. A specific difference between them could be considered 'signal' in the first case and 'noise' in the second case. Thus, as we adapt to tasks and environments, we pay increasingly more attention to the perceptual features that are relevant and important for the task at hand, and at the same time, less attention to the irrelevant features. This mechanism is called attentional weighting.[37]
However, recent studies suggest that perceptual learning occurs without selective attention.[38] Studies of such task-irrelevant perceptual learning (TIPL) show that the degree of TIPL is similar to that found through direct training procedures.[39] TIPL for a stimulus depends on the relationship between that stimulus and important task events[40] or upon stimulus–reward contingencies.[41] It has thus been suggested that learning (of task-irrelevant stimuli) is contingent upon spatially diffusive learning signals.[42] Similar effects, on a shorter time scale, have been found for memory processes and are in some cases called attentional boosting.[43] Thus, when an important (alerting) event occurs, learning may also affect concurrent, non-attended and non-salient stimuli.[44]
The time course of perceptual learning varies from one participant to another.[11] Perceptual learning occurs not only within the first training session but also between sessions.[45] Fast learning (i.e., within-first-session learning) and slow learning (i.e., between-session learning) involve different changes in the human adult brain. While the fast-learning effects can be retained only for a short term of several days, the slow-learning effects can be preserved for a long term of several months.[46]
Research on basic sensory discriminations often shows that perceptual learning effects are specific to the trained task or stimulus.[47] Many researchers take this to suggest that perceptual learning may work by modifying the receptive fields of the cells (e.g., V1 and V2 cells) that initially encode the stimulus. For example, individual cells could adapt to become more sensitive to important features, effectively recruiting more cells for a particular purpose and making some cells more specifically tuned for the task at hand.[48] Evidence for receptive field change has been found using single-cell recording techniques in primates in both tactile and auditory domains.[49]
However, not all perceptual learning tasks are specific to the trained stimuli or tasks. Sireteanu and Rettenback[50] discussed discrimination learning effects that generalize across eyes, retinal locations and tasks. Ahissar and Hochstein[51] used visual search to show that learning to detect a single line element hidden in an array of differently oriented line segments could generalize to positions at which the target was never presented. In human vision, not enough receptive field modification has been found in early visual areas to explain perceptual learning.[52] Training that produces large behavioral changes, such as improvements in discrimination, does not produce changes in receptive fields. In studies where changes have been found, the changes are too small to explain the changes in behavior.[53]
The Reverse Hierarchy Theory (RHT), proposed by Ahissar & Hochstein, aims to link learning dynamics and specificity to the underlying neuronal sites.[54] RHT proposes that naïve performance is based on responses at high-level cortical areas, which hold crude, categorical-level representations of the environment. Hence initial learning stages involve understanding global aspects of the task. Subsequent practice may yield better perceptual resolution as a consequence of accessing lower-level information via the feedback connections going from high to low levels. Accessing the relevant low-level representations requires a backward search during which informative input populations of neurons in the low level are allocated. Hence, subsequent learning and its specificity reflect the resolution of the lower levels. RHT thus proposes that initial performance is limited by the high-level resolution, whereas post-training performance is limited by the resolution at low levels. Since the high-level representations of different individuals differ due to their prior experience, their initial learning patterns may differ as well. Several imaging studies are in line with this interpretation, finding that initial performance is correlated with average (BOLD) responses at higher-level areas whereas subsequent performance is more correlated with activity at lower-level areas[citation needed]. RHT proposes that modifications at low levels will occur only when the backward search (from high to low levels of processing) is successful. Such success requires that the backward search "know" which neurons in the lower level are informative. This "knowledge" is gained by training repeatedly on a limited set of stimuli, such that the same lower-level neuronal populations are informative across several trials.
Recent studies found that mixing a broad range of stimuli may also yield effective learning if these stimuli are clearly perceived as different, or are explicitly tagged as different. These findings further support the requirement for top-down guidance in order to obtain effective learning.
In some complex perceptual tasks, all humans are experts. We are all very sophisticated, though not infallible, at scene identification, face identification and speech perception. Traditional explanations attribute this expertise to holistic, somewhat specialized mechanisms. Perhaps such quick identifications are achieved by more specific and complex perceptual detectors which gradually "chunk" (i.e., unitize) features that tend to co-occur, making it easier to pull in a whole set of information. Whether any co-occurrence of features can gradually be chunked with practice, or chunking can only be obtained with some predisposition (e.g. faces, phonological categories), is an open question. Current findings suggest that such expertise is correlated with a significant increase in the cortical volume involved in these processes. Thus, we all have somewhat specialized face areas, which may reveal an innate property, but we also develop somewhat specialized areas for written words as opposed to single letters or strings of letter-like symbols. Moreover, special experts in a given domain have larger cortical areas involved in that domain; expert musicians, for example, have larger auditory areas.[55] These observations are in line with traditional theories of enrichment proposing that improved performance involves an increase in cortical representation. For this expertise, basic categorical identification may be based on enriched and detailed representations, located to some extent in specialized brain areas. Physiological evidence suggests that training for refined discrimination along basic dimensions (e.g. frequency in the auditory modality) also increases the representation of the trained parameters, though in these cases the increase may mainly involve lower-level sensory areas.[56]
In 2005, Petrov, Dosher and Lu pointed out that perceptual learning may be explained in terms of the selection of which analyzers best perform the classification, even in simple discrimination tasks. They argue that the parts of the neural system responsible for particular decisions show specificity, while low-level perceptual units do not.[37] In their model, encodings at the lowest level do not change. Rather, the changes that occur in perceptual learning arise from changes in higher-level, abstract representations of the relevant stimuli. Because specificity can come from differentially selecting information, this "selective reweighting theory" allows for learning of complex, abstract representations. This corresponds to Gibson's earlier account of perceptual learning as the selection and learning of distinguishing features. Selection may be the unifying principle of perceptual learning at all levels.[57]
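The core idea of selective reweighting can be illustrated with a toy simulation. This is not the Petrov–Dosher–Lu model itself; the channel count, signal strength and learning rate below are invented for illustration. Low-level channel responses stay fixed, and only the decision weights reading them out are adjusted by an error-driven rule.

```python
# Toy illustration of selective reweighting: the low-level "channels" that
# encode the stimulus never change; practice only re-weights the readout
# that feeds the decision. All numbers here are invented for illustration.
import random

random.seed(0)

N_CHANNELS = 8
RELEVANT = 2          # only this channel actually separates the two classes

def channel_responses(label):
    """Fixed front end: noisy responses; one channel carries the signal."""
    r = [random.gauss(0.0, 1.0) for _ in range(N_CHANNELS)]
    r[RELEVANT] += 3.0 if label == 1 else -3.0
    return r

weights = [0.0] * N_CHANNELS   # learned readout; the encoding stays fixed
lr = 0.02

for _ in range(3000):
    label = random.choice([0, 1])
    r = channel_responses(label)
    decision = sum(w * x for w, x in zip(weights, r))
    err = (1.0 if label == 1 else -1.0) - decision
    # Delta rule: channels correlated with the correct answer gain weight,
    # uninformative channels drift toward zero.
    weights = [w + lr * err * x for w, x in zip(weights, r)]

print("dominant channel:", max(range(N_CHANNELS), key=lambda i: abs(weights[i])))
```

After training, the readout weight on the informative channel dominates, reproducing task specificity without any change to the front-end encoding.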
Ivan Pavlov discovered conditioning. He found that when a stimulus (e.g. a sound) was immediately followed by food several times, the mere presentation of this stimulus would subsequently elicit salivation in a dog. He further found that when he used a differential protocol, consistently presenting food after one stimulus while not presenting food after another, dogs were quickly conditioned to salivate selectively in response to the rewarded one. He then asked whether this protocol could be used to increase perceptual discrimination, by differentially rewarding two very similar stimuli (e.g. tones of similar frequency). However, he found that differential conditioning was not effective.
Pavlov's studies were followed by many training studies which found that an effective way to increase perceptual resolution is to begin with a large difference along the required dimension and gradually proceed to small differences along this dimension. This easy-to-difficult transfer was termed "transfer along a continuum".
These studies showed that the dynamics of learning depend on the training protocol, rather than on the total amount of practice. Moreover, it seems that the strategy implicitly chosen for learning is highly sensitive to the choice of the first few trials during which the system tries to identify the relevant cues.
Several studies have asked whether learning takes place during practice sessions or in between, for example during subsequent sleep. The dynamics of learning are hard to evaluate, since the directly measured parameter is performance, which is affected by both learning, which induces improvement, and fatigue, which hampers performance. Current studies suggest that sleep contributes to improved and durable learning effects by further strengthening connections in the absence of continued practice.[45][58][59] Both slow-wave and REM (rapid eye movement) stages of sleep may contribute to this process, via not-yet-understood mechanisms.
Practice with comparison and contrast of instances that belong to the same or different categories allows for the pick-up of the distinguishing features (those that are important for the classification task) and the filtering-out of the irrelevant features.[60]
Learning easy examples first may lead to better transfer and better learning of more difficult cases.[61] By recording ERPs from human adults, Ding and colleagues investigated the influence of task difficulty on the brain mechanisms of visual perceptual learning. The results showed that training on a difficult task affected earlier visual processing stages and broader visual cortical regions than training on an easy task.[62]
Active classification effort and attention are often necessary to produce perceptual learning effects.[59]However, in some cases, mere exposure to certain stimulus variations can produce improved discriminations.
In many cases, perceptual learning does not require feedback (whether or not the classification is correct).[56]Other studies suggest that block feedback (feedback only after a block of trials) produces more learning effects than no feedback at all.[63]
Despite the marked perceptual learning demonstrated in different sensory systems and under varied training paradigms, it is clear that perceptual learning must face certain insurmountable limits imposed by the physical characteristics of the sensory system. For instance, in tactile spatial acuity tasks, experiments suggest that the extent of learning is limited by fingertip surface area, which may constrain the underlying density of mechanoreceptors.[11]
In many domains of expertise in the real world, perceptual learning interacts with other forms of learning. Declarative knowledge tends to develop alongside perceptual learning: as we learn to distinguish between an array of wine flavors, we also develop a vocabulary to describe the intricacies of each flavor.
Similarly, perceptual learning also interacts flexibly with procedural knowledge. For example, a baseball player at bat can use perceptual expertise to detect, early in the ball's flight, whether the pitcher has thrown a curveball. At the same time, perceptual differentiation of the feel of swinging the bat in various ways may be involved in learning the motor commands that produce the required swing.[1]
Perceptual learning is often said to be implicit, such that learning occurs without awareness. It is not at all clear whether perceptual learning is always implicit. Changes in sensitivity that arise are often not conscious and do not involve conscious procedures, but perceptual information can be mapped onto various responses.[1]
In complex perceptual learning tasks (e.g., sorting newborn chicks by sex, playing chess), experts are often unable to explain what stimulus relationships they are using in classification. However, in less complex perceptual learning tasks, people can point out what information they are using to make classifications.
Perceptual learning is distinguished from category learning. Perceptual learning generally refers to the enhancement of the detectability of a perceptual item or the discriminability between two or more items. In contrast, category learning involves labeling or categorizing an item into a particular group or category. However, in some cases the two overlap. For instance, to discriminate between two items, a categorical difference between them may sometimes be utilized, in which case category learning, rather than perceptual learning, is thought to occur. Although perceptual learning and category learning are distinct forms of learning, they can interact. For example, category learning that groups multiple orientations into different categories can lead perceptual learning of one orientation to transfer to other orientations within the same category as the trained orientation. This is termed "category-induced perceptual learning".
Multiple category learning systems may mediate the learning of different category structures. "Two systems that have received support are a frontal-based explicit system that uses logical reasoning, depends on working memory and executive attention, and is mediated primarily by the anterior cingulate, the prefrontal cortex and the associative striatum, including the head of the caudate. The second is a basal ganglia-mediated implicit system that uses procedural learning, requires a dopamine reward signal and is mediated primarily by the sensorimotor striatum."[64] Studies have shown significant involvement of the striatum, and less involvement of the medial temporal lobes, in category learning. In people with striatal damage, the need to ignore irrelevant information is more predictive of a rule-based category learning deficit, whereas the complexity of the rule is predictive of an information-integration category learning deficit.
An important potential application of perceptual learning is the acquisition of skill for practical purposes. Thus it is important to understand whether training for increased resolution in lab conditions induces a general upgrade that transfers to other environmental contexts, or results from mechanisms that are context-specific. Improving complex skills is typically achieved by training under complex simulation conditions rather than one component at a time. Recent lab-based training protocols with complex action computer games have shown that such practice indeed modifies visual skills in a general way that transfers to new visual contexts. In 2010, Achtman, Green, and Bavelier reviewed the research on video games to train visual skills.[65] They cite a previous review by Green & Bavelier (2006)[66] on using video games to enhance perceptual and cognitive abilities. A variety of skills were upgraded in video game players, including "improved hand-eye coordination,[67] increased processing in the periphery,[68] enhanced mental rotation skills,[69] greater divided attention abilities,[70] and faster reaction times,[71] to name a few". An important characteristic is the functional increase in the size of the effective visual field (within which viewers can identify objects), which is trained in action games and transfers to new settings. Whether learning of simple discriminations, which are trained in separation, transfers to new stimulus contexts (e.g. complex stimulus conditions) is still an open question.
Like experimental procedures, other attempts to apply perceptual learning methods to basic and complex skills use training situations in which the learner receives many short classification trials. Tallal, Merzenich and their colleagues have successfully adapted auditory discrimination paradigms to address speech and language difficulties.[72][73] They reported improvements in language-learning-impaired children using specially enhanced and extended speech signals. The results applied not only to auditory discrimination performance but to speech and language comprehension as well.
In educational domains, recent efforts by Philip Kellman and colleagues showed that perceptual learning can be systematically produced and accelerated using specific, computer-based technology. Their approach to perceptual learning methods takes the form of perceptual learning modules (PLMs): sets of short, interactive trials that develop, in a particular domain, learners' pattern recognition, classification abilities, and ability to map across multiple representations. As a result of practice with mapping across transformations (e.g., algebra, fractions) and across multiple representations (e.g., graphs, equations, and word problems), students show dramatic gains in structure recognition in fraction learning and algebra. They also demonstrated that when students practice classifying algebraic transformations using PLMs, the results show remarkable improvements in fluency at algebra problem solving.[57][74][75] These results suggest that perceptual learning can offer a needed complement to conceptual and procedural instruction in the classroom.
Similar results have also been replicated in other domains with PLMs, including anatomic recognition in medical and surgical training,[76]reading instrumental flight displays,[77]and apprehending molecular structures in chemistry.[78]
|
https://en.wikipedia.org/wiki/Perceptual_learning
|
SuperMemo (from "Super Memory") is a learning method and software package developed by SuperMemo World and SuperMemo R&D with Piotr Woźniak in Poland from 1985 to the present.[2] It is based on research into long-term memory, and is a practical application of the spaced repetition learning method that had been proposed for efficient instruction by a number of psychologists as early as the 1930s.[3]
The method is available as a computer program for Windows, Windows CE, Windows Mobile (Pocket PC), Palm OS (PalmPilot), etc. Course software by the same company (SuperMemo World) can also be used in a web browser or even without a computer.[4]
The desktop version of SuperMemo started as flashcard software (SuperMemo 1.0, 1987).[5] Since SuperMemo 10 (2000), it has also supported incremental reading.[6]
The SuperMemo program stores a database of questions and answers constructed by the user. When reviewing information saved in the database, the program uses the SuperMemo algorithm to decide which questions to show the user. The user answers each question and rates the relative ease of recall on a scale from 0 to 5 (0 being the hardest, 5 the easiest); this rating is used to calculate how soon the question should be shown again. While the exact algorithm varies with the version of SuperMemo, in general, items that are harder to remember show up more frequently.[2]
Besides simple text questions and answers, the latest version of SuperMemo supports images, video, and HTML questions and answers.[7]
Since 2000,[6] SuperMemo has had a unique set of features that distinguish it from other spaced repetition programs, called incremental reading (IR or "increading"[8]). Whereas earlier versions were built around users entering the information they wanted to learn, with IR users can import texts they want to learn from. The user reads the text inside SuperMemo, and tools are provided to bookmark one's location in the text and automatically schedule it to be revisited later, to extract valuable information, and to turn extracts into questions for the user to learn. Because reading and knowledge extraction happen in the same program, time is saved from manually preparing information, and insights into the nature of learning can be used to make the entire process more natural for the user. Furthermore, since extraction can often yield more information than can feasibly be remembered, a priority system is implemented that allows the user to ensure that the most important information is remembered when they cannot review everything in the system.[9]
The specific algorithms SuperMemo uses have been published, and re-implemented in other programs.
Different algorithms have been used; SM-0 refers to the original (non-computer-based) algorithm, while SM-2 refers to the original computer-based algorithm released in 1987 (used in SuperMemo versions 1.0 through 3.0, referred to as SM-2 because SuperMemo version 2 was the most popular of these).[10][11]Subsequent versions of the software have claimed to further optimize the algorithm.
Piotr Woźniak, the developer of the SuperMemo algorithms, released the description of SM-5 in a paper titled Optimization of repetition spacing in the practice of learning. Little detail has been specified for the algorithms released after that.
In 1995, SM-8 was introduced in SuperMemo 8. It capitalized on data collected by users of SuperMemo 6 and SuperMemo 7, and added a number of improvements that strengthened the theoretical validity of the function of optimum intervals and made it possible to accelerate its adaptation.[12]
In 2002, SM-11, the first SuperMemo algorithm resistant to interference from the delay or advancement of repetitions, was introduced in SuperMemo 11 (aka SuperMemo 2002). In 2005, SM-11 was tweaked to introduce boundaries on the A and B parameters computed from the Grade vs. Forgetting Index data.[12]
In 2011, SM-15, which notably eliminated two weaknesses of SM-11 that would show up in heavily overloaded collections with very large item delays, was introduced in SuperMemo 15.[12]
In 2016, SM-17, the first version of the algorithm to incorporate the two component model of memory, was introduced in SuperMemo 17.[13]
The latest version of the SuperMemo algorithm is SM-18, released in 2019.[14]
The first computer-based SuperMemo algorithm (SM-2)[11] tracks three properties for each card being studied: the repetition number n, the easiness factor EF, and the inter-repetition interval I.
Every time the user starts a review session, SuperMemo provides the user with the cards whose last review occurred at least I days ago. For each review, the user tries to recall the information and (after being shown the correct answer) specifies a grade q (from 0 to 5) indicating a self-evaluation of the quality of their response, with 5 meaning a perfect response and grades below 3 meaning the information was not recalled correctly.
The following algorithm[15] is then applied to update the three variables associated with the card: after the first successful review the interval is set to 1 day and after the second to 6 days; thereafter each new interval is the previous interval multiplied by EF. EF itself is nudged up or down according to the grade q (but never below 1.3), and a grade below 3 restarts the repetition sequence for that card from the beginning.
After all scheduled reviews are complete, SuperMemo asks the user to re-review any cards they marked with a grade less than 4 repeatedly until they give a grade ≥ 4.
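A minimal sketch of the published SM-2 update rules in Python (variable names are ours, and real implementations differ in details such as whether EF is adjusted after a failed review):

```python
# Sketch of the SM-2 scheduling update. Grades run 0-5; grades below 3
# count as failed recall. Implementations differ on whether the easiness
# factor is updated after a failure; here it is updated on every review.

def sm2_update(quality, repetitions, easiness, interval):
    """Update a card's scheduling state after a review graded 0-5."""
    if quality < 3:
        # Failed recall: restart the repetition sequence for this card.
        repetitions = 0
        interval = 1
    else:
        repetitions += 1
        if repetitions == 1:
            interval = 1
        elif repetitions == 2:
            interval = 6
        else:
            interval = round(interval * easiness)
    # Nudge the easiness factor; it never drops below 1.3.
    easiness += 0.1 - (5 - quality) * (0.08 + (5 - quality) * 0.02)
    easiness = max(1.3, easiness)
    return repetitions, easiness, interval

# A new card answered perfectly three times in a row:
state = (0, 2.5, 0)            # (repetitions, easiness, interval)
for q in (5, 5, 5):
    state = sm2_update(q, *state)
    print("next review in", state[2], "day(s)")   # intervals: 1, 6, 16
```

Harder items (lower grades) shrink EF, so their intervals grow more slowly and they reappear more often, matching the behaviour described above.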
Some of the algorithms have been re-implemented in other, often free programs such as Anki, Mnemosyne, and Emacs Org-mode's Org-drill. See the full list of flashcard software.
The SM-2 algorithm has proven most popular in other applications, and is used (in modified form) in Anki and Mnemosyne, among others. Org-drill implements SM-5 by default, and optionally other algorithms such as SM-2 and a simplified SM-8.
|
https://en.wikipedia.org/wiki/SuperMemo
|
In mathematical logic, abstract algebraic logic is the study of the algebraization of deductive systems arising as an abstraction of the well-known Lindenbaum–Tarski algebra, and of how the resulting algebras are related to logical systems.[1]
The archetypal association of this kind, one fundamental to the historical origins of algebraic logic and lying at the heart of all subsequently developed subtheories, is the association between the class of Boolean algebras and classical propositional calculus. This association was discovered by George Boole in the 1850s, and then further developed and refined by others, especially C. S. Peirce and Ernst Schröder, from the 1870s to the 1890s. This work culminated in Lindenbaum–Tarski algebras, devised by Alfred Tarski and his student Adolf Lindenbaum in the 1930s. Later, Tarski and his American students (whose ranks include Don Pigozzi) went on to discover cylindric algebra, whose representable instances algebraize all of classical first-order logic, and revived relation algebra, whose models include all well-known axiomatic set theories.
Classical algebraic logic, which comprises all work in algebraic logic until about 1960, studied the properties of specific classes of algebras used to "algebraize" specific logical systems of particular interest to specific logical investigations. Generally, the algebra associated with a logical system was found to be a type of lattice, possibly enriched with one or more unary operations other than lattice complementation.
Abstract algebraic logic is a modern subarea of algebraic logic that emerged in Poland during the 1950s and 60s with the work of Helena Rasiowa, Roman Sikorski, Jerzy Łoś, and Roman Suszko (to name but a few). It reached maturity in the 1980s with the seminal publications of the Polish logician Janusz Czelakowski, the Dutch logician Wim Blok and the American logician Don Pigozzi. The focus of abstract algebraic logic shifted from the study of specific classes of algebras associated with specific logical systems (the focus of classical algebraic logic), to the study of:
The passage from classical algebraic logic to abstract algebraic logic may be compared to the passage from "modern" or abstract algebra (i.e., the study of groups, rings, modules, fields, etc.) to universal algebra (the study of classes of algebras of arbitrary similarity types (algebraic signatures) satisfying specific abstract properties).
The two main motivations for the development of abstract algebraic logic are closely connected to (1) and (3) above. With respect to (1), a critical step in the transition was initiated by the work of Rasiowa. Her goal was to abstract results and methods known to hold for the classical propositional calculus and Boolean algebras and some other closely related logical systems, in such a way that these results and methods could be applied to a much wider variety of propositional logics.
(3) owes much to the joint work of Blok and Pigozzi exploring the different forms that the well-known deduction theorem of classical propositional calculus and first-order logic takes on in a wide variety of logical systems. They related these various forms of the deduction theorem to the properties of the algebraic counterparts of these logical systems.
Abstract algebraic logic has become a well established subfield of algebraic logic, with many deep and interesting results. These results explain many properties of different classes of logical systems previously explained only on a case-by-case basis or shrouded in mystery. Perhaps the most important achievement of abstract algebraic logic has been the classification of propositional logics in a hierarchy, called the abstract algebraic hierarchy or Leibniz hierarchy, whose different levels roughly reflect the strength of the ties between a logic at a particular level and its associated class of algebras. The position of a logic in this hierarchy determines the extent to which that logic may be studied using known algebraic methods and techniques. Once a logic is assigned to a level of this hierarchy, one may draw on the powerful arsenal of results, accumulated over the past 30-odd years, governing the algebras situated at the same level of the hierarchy.
The similar terms 'general algebraic logic' and 'universal algebraic logic' refer to the approach of the Hungarian School, including Hajnal Andréka, István Németi and others.
|
https://en.wikipedia.org/wiki/Abstract_algebraic_logic
|
In the history of cryptography, Typex (alternatively Type X or TypeX) machines were British cipher machines used from 1937. Typex was an adaptation of the commercial German Enigma with a number of enhancements that greatly increased its security. The cipher machine (and its many revisions) was used until the mid-1950s, when other, more modern military encryption systems came into use.
Like Enigma, Typex was a rotor machine. Typex came in a number of variations, but all contained five rotors, as opposed to three or four in the Enigma. As in the Enigma, the signal was sent through the rotors twice, using a "reflector" at the end of the rotor stack. On a Typex rotor, each electrical contact was doubled to improve reliability.
Of the five rotors, typically the first two were stationary. These provided additional enciphering without adding complexity to the rotor turning mechanisms. Their purpose was similar to the plugboard in the Enigmas, offering additional randomization that could be easily changed. Unlike Enigma's plugboard, however, the wiring of those two rotors could not be easily changed day-to-day. Plugboards were added to later versions of Typex.
The major improvement the Typex had over the standard Enigma was that the rotors in the machine contained multiple notches that would turn the neighbouring rotor. This eliminated an entire class of attacks on the system, whereas Enigma's fixed notches resulted in certain patterns appearing in the cyphertext that could be seen under certain circumstances.
Some Typex rotors came in two parts, where a slug containing the wiring was inserted into a metal casing. Different casings contained different numbers of notches around the rim, such as 5, 7 or 9 notches. Each slug could be inserted into a casing in two different ways by turning it over. In use, all the rotors of the machine would use casings with the same number of notches. Normally five slugs were chosen from a set of ten.
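The effect of multi-notch casings on stepping regularity can be illustrated with a toy simulation. This is not Typex's actual mechanism or wiring; it uses simple odometer-style carries, ignores details such as Enigma's double-stepping anomaly, and the notch positions are invented.

```python
# Toy model of rotor stepping with configurable notch counts. A rotor
# carries into its neighbour whenever it steps while sitting at a notch.
# Notch layouts and the stepping scheme are simplified for illustration.

def make_notches(count, size=26):
    """Spread `count` notches roughly evenly around a 26-position rim."""
    return {round(i * size / count) % size for i in range(count)}

def step(positions, notches, size=26):
    """Advance the fast rotor; carry into slower rotors at notch positions."""
    positions = positions[:]
    carry = True
    for i in range(len(positions)):
        if not carry:
            break
        carry = positions[i] in notches[i]   # at a notch -> turn neighbour
        positions[i] = (positions[i] + 1) % size
    return positions

# Compare middle-rotor turnover: one notch per rotor (Enigma-like) vs nine.
for n in (1, 9):
    pos, notch_sets = [0, 0, 0], [make_notches(n)] * 3
    middle_turns = 0
    for _ in range(26 * 26):
        new = step(pos, notch_sets)
        middle_turns += new[1] != pos[1]
        pos = new
    print(f"{n} notch(es): middle rotor moved {middle_turns} times in 676 steps")
```

With a single notch, the middle rotor moves only once per 26 keystrokes, producing the kind of long, regular fast-rotor-only runs that cryptanalysts could exploit; with nine notches the neighbouring rotor turns far more often and less predictably.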
On some models, operators could achieve a speed of 20 words a minute, and the output ciphertext or plaintext was printed on paper tape. For some portable versions, such as the Mark III, a message was typed with the left hand while the right hand turned a handle.[1]
Several Internet Typex articles say that only Vaseline was used to lubricate Typex machines, and that no other lubricant was used. In fact, Vaseline was used to lubricate the rotor disc contacts; without it there was a risk of arcing, which would burn the insulation between the contacts. For the rest of the machine, two grades of oil (Spindle Oils 1 and 2) were used. Regular cleaning and maintenance were essential.
In particular, the letters/figures cam-cluster balata discs had to be kept lubricated.[citation needed]
By the 1920s, the British Government was seeking a replacement for its book cipher systems, which had been shown to be insecure and which proved slow and awkward to use. In 1926, an inter-departmental committee was formed to consider whether they could be replaced with cipher machines. Over a period of several years and at large expense, the committee investigated a number of options, but no proposal was decided upon. One suggestion, put forward by Wing Commander Oswyn G. W. G. Lywood, was to adapt the commercial Enigma by adding a printing unit, but the committee decided against pursuing Lywood's proposal.
In August 1934, Lywood began work on a machine authorised by the RAF. Lywood worked with J. C. Coulson, Albert P. Lemmon, and Ernest W. Smith at Kidbrooke in Greenwich, with the printing unit provided by Creed & Company. The first prototype was delivered to the Air Ministry on 30 April 1935. In early 1937, around 30 Typex Mark I machines were supplied to the RAF. The machine was initially termed the "RAF Enigma with Type X attachments".
The design of its successor had begun by February 1937. In June 1938, Typex Mark II was demonstrated to the cipher-machine committee, which approved an order of 350 machines. The Mark II model was bulky, incorporating two printers: one for plaintext and one for ciphertext. As a result, it was significantly larger than the Enigma, weighing around 120 lb (54 kg) and measuring 30 in (760 mm) × 22 in (560 mm) × 14 in (360 mm). After trials, the machine was adopted by the RAF, Army and other government departments. During World War II, a large number of Typex machines were manufactured by the tabulating machine manufacturer Powers-Samas.[2]
Typex Mark III was a more portable variant, using the same drums as the Mark II, powered by turning a handle (it was also possible to attach a motor drive). The maximum operating speed was around 60 letters a minute, significantly slower than the 300 achievable with the Mark II.
Typex Mark VI was another handle-operated variant, measuring 20 in (510 mm) × 12 in (300 mm) × 9 in (230 mm), weighing 30 lb (14 kg) and consisting of over 700 components.
Plugboards for the reflector were added to the machine from November 1941.
For inter-Allied communications during World War II, the Combined Cipher Machine (CCM) was developed, used in the Royal Navy from November 1943. The CCM was implemented by modifying Typex and the United States ECM Mark II machine so that they would be compatible.
Typex Mark VIII was a Mark II fitted with a Morse perforator.
Typex 22 (BID/08/2) and Typex 23 (BID/08/3) were late models that incorporated plugboards for improved security. Mark 23 was a Mark 22 modified for use with the CCM. In New Zealand, Typex Mark II and Mark III were superseded by Mark 22 and Mark 23 on 1 January 1950. The Royal Air Force used a combination of the Creed teleprinter and Typex until 1960; this combination allowed a single operator to use punched tape and printouts for both sending and receiving encrypted material.
Erskine (2002) estimates that around 12,000 Typex machines were built by the end of World War II.
Less than a year into the war, the Germans could read all British military encryption other than Typex,[3] which was used by the British armed forces and by Commonwealth countries including Australia, Canada and New Zealand. The Royal Navy decided to adopt the RAF Type X Mark II in 1940 after trials; eight stations already had Type X machines. Eventually over 600 machines would be required. New Zealand initially received two machines, at a cost of £115 each, for Auckland and Wellington.[4]
From 1943 the Americans and the British agreed upon a Combined Cipher Machine (CCM). The British Typex and American ECM Mark II could be adapted to become interoperable. While the British showed Typex to the Americans, the Americans never permitted the British to see the ECM, which was a more complex design. Instead, attachments were built for both that allowed each to read messages created on the other.
In 1944 the Admiralty decided to supply two CCM Mark III machines (the Typex Mark II with adaptors for the American CCM) for each "major" war vessel down to and including corvettes, but not submarines; the RNZN vessels concerned were the Achilles, Arabis (then out of action), Arbutus, Gambia and Matua.[5]
Although a British test cryptanalytic attack made considerable progress, the results were not as significant as against the Enigma, due to the increased complexity of the system and the low levels of traffic.
A Typex machine without rotors was captured by German forces at Dunkirk during the Battle of France, and more than one German cryptanalytic section proposed attempting to crack Typex; however, the B-Dienst codebreaking organisation gave up on it after six weeks, when further time and personnel for such attempts were refused.[6]
One German cryptanalyst stated that the Typex was more secure than the Enigma since it had seven rotors; no major effort was made to crack Typex messages because the Germans believed that even the Enigma's messages were unbreakable.[7]
Although Typex has a reputation for good security, the historical record is much less clear. There was an ongoing investigation into Typex security, arising from claims by German POWs in North Africa that Typex traffic was decipherable.
A brief excerpt from the report:
TOP SECRET U [ZIP/SAC/G.34]
THE POSSIBLE EXPLOITATION OF TYPEX BY THE GERMAN SIGINT SERVICES
The following is a summary of information so far received on German attempts to break into the British Typex machine, based on P/W interrogations carried out during and subsequent to the war. It is divided into (a) the North African interrogations, (b) information gathered after the end of the war, and (c) an attempt to sum up the evidence for and against the possibility of German successes.
Apart from an unconfirmed report from an agent in France on 19 July 1942 to the effect that the GAF were using two British machines captured at DUNKIRK for passing their own traffic between BERLIN and GOLDAP, our evidence during the war was based on reports that OKH was exploiting Typex material left behind in TOBRUK in 1942.
Typex machines continued in use long after World War II. The New Zealand military used Typex machines until the early 1970s, disposing of its last machine in about 1973.[8]
All versions of the Typex had advantages over the German military versions of the Enigma machine. The German equivalent teleprinter machines in World War II (used by higher-level but not field units) were the Lorenz SZ 40/42 and the Siemens and Halske T52, using Fish cyphers.
https://en.wikipedia.org/wiki/Typex
A phonestheme (/foʊˈnɛsθiːm/ foh-NESS-theem;[1] phonaestheme in British English) is a pattern of sounds systematically paired with a certain meaning in a language. The concept was proposed in 1930 by the British linguist J. R. Firth, who coined the term from the Greek φωνή phone, "sound", and αἴσθημα aisthema, "perception" (from αίσθάνομαι aisthanomai, "I perceive").[2] For example, the sequence "sl-" appears in English words denoting low-friction motion, like "slide", "slick" and "sled".[3]
A phonestheme is different from a phoneme (a basic unit of word-differentiating sound) or a morpheme (a basic unit of meaning) because it does not meet the normal criterion of compositionality.[4][5]
Within C. S. Peirce's "theory of signs" the phonestheme is considered to be an "icon" rather than a "symbol" or an "index".[6]
Phonesthemes are of critical interest to students of the internal structure of words because they appear to be a case where the internal structure of the word is non-compositional; i.e., a word with a phonestheme in it contains other material that is not itself a morpheme. Phonesthemes "fascinate some linguists", as Ben Zimmer has phrased it, in a process that can become "mystical" or "unscientific".[7]
For example, the English phonestheme "gl-" occurs in a large number of words relating to light or vision, like "glitter", "glisten", "glow", "gleam", "glare", "glint", "glimmer", "gloss", and so on; yet the remainder of each word is not itself a pairing of form and meaning: "-isten", "-ow", and "-eam" make no meaningful contribution to "glisten", "glow", and "gleam".[8] Phonesthemes are identified empirically in several main ways.[9]
The first is through corpus studies, in which the words of a language are subjected to statistical analysis to test whether a particular form-meaning pairing, or phonestheme, constitutes a statistically unexpected distribution in the lexicon.
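As a toy illustration of the corpus-study approach, the sketch below runs a one-sided binomial test on invented counts (all numbers here are made-up assumptions for illustration; a real study would use counts from an actual lexicon):

```python
from math import comb

# Hypothetical toy counts: of 1,000 sampled words, 25 begin with "gl-",
# and 12 of those relate to light/vision, while only 50 of the remaining
# 975 words do. The test asks whether 12/25 would be surprising if "gl-"
# words carried the light/vision sense at the baseline rate.

def binomial_sf(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k, n + 1))

baseline = 50 / 975                  # rate of light/vision senses outside "gl-"
p_value = binomial_sf(12, 25, baseline)
print(f"p = {p_value:.2e}")          # far below 0.05: a statistically unexpected pairing
```

With these toy numbers the distribution is wildly unexpected under the baseline rate, which is the kind of evidence corpus studies look for.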
Corpus studies can inform a researcher about the current state of the lexicon, a critical first step, but importantly are completely uninformative when it comes to questions of whether and how phonesthemes are represented in the minds of language users.
The second type of approach makes use of the tendency for phonesthemes to participate in the coinage and interpretation of neologisms (i.e., new words in a language). Various studies have demonstrated that, when asked to invent or interpret new words, subjects tend to follow the patterns predicted by the phonesthemes in their language. It is known, for example, that the word bangle is a loan from Hindi, but speakers tend to associate it with English onomatopoeia like bang. While this approach demonstrates the vitality of phonesthemic patterns, it does not provide any evidence about whether (or how) phonesthemes are represented in the minds of speaker-hearers.
The final type of evidence uses the methods of psycholinguistics to study exactly how phonesthemes participate in language processing. One such method is phonesthemic priming, akin to morphological priming, which demonstrates that people represent phonesthemes much as they do typical morphemes, despite the fact that phonesthemes are non-compositional.
Discussions of phonesthesia are often grouped with other phenomena under the rubric of sound symbolism.
While phonaesthemes may be language-specific, it has been pointed out that people may be sensitive to some phonaesthemes (e.g. /fl-/ or /tr-/) irrespective of whether the sound-meaning correspondences are exemplified in the lexicon of their mother tongue (e.g. English, French, Spanish or Macedonian).[10]
Phonesthemes have been documented in numerous languages from diverse language families, among them English, Swedish, and other Indo-European languages, Austronesian languages, and Japanese.[citation needed]
While phonesthemes have mostly been identified in the onsets of words and syllables, they can have other forms. There has been some argument that sequences like "-ash" and "-ack" in English also serve as phonesthemes, due to their patterning in words that denote forceful, destructive contact ("smash", "crash", "bash", etc.) and abrupt contact ("smack", "whack", "crack", etc.), respectively.[11]
In addition to the distribution of phonesthemes, linguists consider their motivation. In some cases, there may appear to be good sound-symbolic reasons why phonesthemes have the form they have. In the case of "-ack", for example, we might imagine that the words sharing this phonestheme do so because they denote events that would produce a similar sound. But critically, there are many phonesthemes, such as "gl-", for which there can be no sound-symbolic basis, for the simple reason that their meanings (such as 'pertaining to light or vision') entail no sound.
While there are numerous studies on living languages, research is lacking on ancient languages, although the first documented example of phonesthemes dates back to at least the fourth century B.C.: Plato's Cratylus clearly mentioned a gl- phonestheme (a different one from that discussed previously, as those words are not of Greek origin) as well as an st- one, and gave an explanation in terms of phonosemantics.[12]
Examples of phonesthemes in English include:
"cl-": related to a closing motion of a single object, such as "clam", "clamp", "clap", "clasp", "clench", "cling", "clip", "clop", "clutch".
"fl-": related to movement, such as "flap", "flare", "flee", "flick", "flicker", "fling", "flip", "flit", "flitter", "flow", "flutter", "fly", "flurry".[13]
"gl-": related to light, as in "glade", "glance", "glare", "glass", "gleam", "glimmer", "glint", "glisten", "glitter", "gloaming", "gloom", "gloss", "glow".[4][14]
"sl-": appears in words denoting frictionless motion, like "slide", "slick", "sled", and so on. These are themselves a subset of a larger set of words beginning with "sl-" denoting pejorative behaviours, traits, or events: slab, slack, slang, slant, slap, slash, slate, slattern, slaver, slay, sleek, sleepy, sleet, slime, slip, slipshod, slit, slither, slobber, slog, slope, sloppy, slosh, sloth, slouch, slough, slovenly, slow, sludge, slug, sluggard, slum, slump, slur, slut, sly.[15][5]
"sn-": related to the nose or mouth, as in "snack", "snarl", "sneer", "sneeze", "snicker/snigger", "sniff", "sniffle", "snivel", "snoot", "snore", "snorkel", "snort", "snot", "snout", "snub" (as an adjective), "snuff", "snuffle".[13]
"st-": appears in three families of meanings:[16]
"str-": denoting something long and thin, as in "straight", "strand", "strap", "straw", "streak", "stream", "string", "strip", "stripe".[17]
"sw-": related to a long movement, as in "sway", "sweep", "swerve", "swing", "swipe", "swirl", "swish", "swoop".
"tw-": connotes a twisting motion, as in "twist", "twirl", "tweak", "twill", "tweed", "tweezer", "twiddle", "twine", "twinge".[14]
"-ow(e)l": connotes something sinister, as in "owl", "prowl", "scowl", "growl", "howl", "rowel", "bowel", "jowl".[14]
"-ump": related to a hemispherical shape or pile, as in "bump", "clump", "dump", "jump", "hump", "lump", "mump", "rump", "stump".[13]
https://en.wikipedia.org/wiki/Phonestheme
In universal algebra, a variety of algebras or equational class is the class of all algebraic structures of a given signature satisfying a given set of identities. For example, the groups form a variety of algebras, as do the abelian groups, the rings, the monoids etc. According to Birkhoff's theorem, a class of algebraic structures of the same signature is a variety if and only if it is closed under the taking of homomorphic images, subalgebras, and (direct) products. In the context of category theory, a variety of algebras, together with its homomorphisms, forms a category; these are usually called finitary algebraic categories.
A covariety is the class of all coalgebraic structures of a given signature.
A variety of algebras should not be confused with an algebraic variety, which means a set of solutions to a system of polynomial equations. They are formally quite distinct and their theories have little in common.
The term "variety of algebras" refers to algebras in the general sense of universal algebra; there is also a more specific sense of algebra, namely an algebra over a field, i.e. a vector space equipped with a bilinear multiplication.
A signature (in this context) is a set, whose elements are called operations, each of which is assigned a natural number (0, 1, 2, ...) called its arity. Given a signature σ and a set V, whose elements are called variables, a word is a finite rooted tree in which each node is labelled by either a variable or an operation, such that every node labelled by a variable has no branches away from the root and every node labelled by an operation o has as many branches away from the root as the arity of o. An equational law is a pair of such words; the axiom consisting of the words v and w is written as v = w.
A theory consists of a signature, a set of variables, and a set of equational laws. Any theory gives a variety of algebras as follows. Given a theory T, an algebra of T consists of a set A together with, for each operation o of T with arity n, a function oA : An → A such that for each axiom v = w and each assignment of elements of A to the variables in that axiom, the equation holds that is given by applying the operations to the elements of A as indicated by the trees defining v and w. The class of algebras of a given theory T is called a variety of algebras.
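The definitions above can be made concrete in code. The sketch below (the representation and names are my own, not from the literature) encodes a word as a term tree and evaluates it in the algebra (integers mod 7, +, 0), viewed as having one binary operation "mul" and one nullary operation "e":

```python
# A term is either a variable name (str) or a tuple (op, *subterms),
# mirroring the labelled rooted trees in the definition above.

def evaluate(term, assignment, ops):
    """Evaluate a term tree in an algebra given by the dict ops."""
    if isinstance(term, str):
        return assignment[term]  # variable leaf
    op, *args = term
    return ops[op](*(evaluate(a, assignment, ops) for a in args))

# The algebra (Z mod 7, +, 0): "mul" has arity 2, "e" has arity 0.
ops = {"mul": lambda x, y: (x + y) % 7, "e": lambda: 0}

# Check the identity law mul(x, e()) = x at a sample assignment:
lhs = evaluate(("mul", "x", ("e",)), {"x": 5}, ops)
rhs = evaluate("x", {"x": 5}, ops)
print(lhs == rhs)  # True
```

A full check of an equational law would quantify over all assignments; for a finite carrier set that is a straightforward loop.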
Given two algebras of a theory T, say A and B, a homomorphism is a function f : A → B such that
f(oA(a1, ..., an)) = oB(f(a1), ..., f(an))
for every operation o of arity n and all a1, ..., an in A. Any theory gives a category where the objects are algebras of that theory and the morphisms are homomorphisms.
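The homomorphism condition can be checked exhaustively on small finite algebras. The sketch below (my own example, not from the article) verifies that reduction mod 3 is a homomorphism from the integers mod 6 to the integers mod 3, both under addition:

```python
from itertools import product

# A: integers mod 6 under addition; B: integers mod 3 under addition.
A = range(6)
op_A = lambda x, y: (x + y) % 6
op_B = lambda x, y: (x + y) % 3
f = lambda x: x % 3  # candidate homomorphism

# Check f(x +_A y) == f(x) +_B f(y) for every pair (x, y).
ok = all(f(op_A(x, y)) == op_B(f(x), f(y)) for x, y in product(A, A))
print(ok)  # True
```

This works because 3 divides 6, so reducing mod 6 and then mod 3 agrees with reducing mod 3 directly.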
The class of all semigroups forms a variety of algebras of signature (2), meaning that a semigroup has a single binary operation. A sufficient defining equation is the associative law:
(x·y)·z = x·(y·z).
The class of groups forms a variety of algebras of signature (2,0,1), the three operations being respectively multiplication (binary), identity (nullary, a constant) and inversion (unary). The familiar axioms of associativity, identity and inverse form one suitable set of identities:
(x·y)·z = x·(y·z),
1·x = x·1 = x,
x·x⁻¹ = x⁻¹·x = 1.
The class of rings also forms a variety of algebras. The signature here is (2,2,0,0,1): two binary operations (addition and multiplication), two constants (the additive and multiplicative identities) and one unary operation (additive inversion).
If we fix a specific ringR, we can consider the class ofleftR-modules. To express the scalar multiplication with elements fromR, we need one unary operation for each element ofR. If the ring is infinite, we will thus have infinitely many operations, which is allowed by the definition of an algebraic structure in universal algebra. We will then also need infinitely many identities to express the module axioms, which is allowed by the definition of a variety of algebras. So the leftR-modules do form a variety of algebras.
The fields do not form a variety of algebras; the requirement that all non-zero elements be invertible cannot be expressed as a universally satisfied identity (see below).
The cancellative semigroups also do not form a variety of algebras, because the cancellation property is not an equation but an implication that is not equivalent to any set of equations. However, they do form a quasivariety, as the implication defining the cancellation property is an example of a quasi-identity.
Given a class of algebraic structures of the same signature, we can define the notions of homomorphism, subalgebra, and product. Garrett Birkhoff proved that a class of algebraic structures of the same signature is a variety if and only if it is closed under the taking of homomorphic images, subalgebras and arbitrary products.[1] This is a result of fundamental importance to universal algebra, known as Birkhoff's variety theorem or the HSP theorem. H, S, and P stand, respectively, for the operations of homomorphism, subalgebra, and product.
One direction of the equivalence mentioned above, namely that a class of algebras satisfying some set of identities must be closed under the HSP operations, follows immediately from the definitions. Proving the converse, that classes of algebras closed under the HSP operations must be equational, is more difficult.
Using the easy direction of Birkhoff's theorem, we can for example verify the claim made above, that the field axioms are not expressible by any possible set of identities: the product of fields is not a field, so fields do not form a variety.
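The argument can be checked concretely on the smallest case. The sketch below (my own illustration) verifies that in the direct product of two two-element fields, with componentwise operations, the nonzero element (1, 0) has no multiplicative inverse, so the product violates the field axioms:

```python
from itertools import product

# Elements of F2 × F2 are pairs over {0, 1}; multiplication is componentwise.
F2 = (0, 1)
mul = lambda a, b: ((a[0] * b[0]) % 2, (a[1] * b[1]) % 2)
one = (1, 1)  # multiplicative identity of the product

# (1, 0) is nonzero, yet mul((1, 0), x) always has second component 0,
# so it can never equal (1, 1).
has_inverse = any(mul((1, 0), x) == one for x in product(F2, F2))
print(has_inverse)  # False
```

Since varieties are closed under products, this single counterexample already shows fields cannot be axiomatized by identities alone.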
A subvariety of a variety of algebras V is a subclass of V that has the same signature as V and is itself a variety, i.e., is defined by a set of identities.
Notice that although every group becomes a semigroup when the identity as a constant is omitted (and/or the inverse operation is omitted), the class of groups does not form a subvariety of the variety of semigroups because the signatures are different.
Similarly, the class of semigroups that are groups is not a subvariety of the variety of semigroups. The class of monoids that are groups contains ⟨Z, +⟩ and does not contain its subalgebra (more precisely, submonoid) ⟨N, +⟩.
However, the class of abelian groups is a subvariety of the variety of groups because it consists of those groups satisfying xy = yx, with no change of signature. The finitely generated abelian groups do not form a subvariety, since by Birkhoff's theorem they do not form a variety: an arbitrary product of finitely generated abelian groups is not finitely generated.
Viewing a variety V and its homomorphisms as a category, a subvariety U of V is a full subcategory of V, meaning that for any objects a, b in U, the homomorphisms from a to b in U are exactly those from a to b in V.
Suppose V is a non-trivial variety of algebras, i.e. V contains algebras with more than one element. One can show that for every set S, the variety V contains a free algebra FS on S. This means that there is an injective set map i : S → FS that satisfies the following universal property: given any algebra A in V and any map k : S → A, there exists a unique V-homomorphism f : FS → A such that f ∘ i = k.
This generalizes the notions of free group, free abelian group, free algebra, free module etc. It has the consequence that every algebra in a variety is a homomorphic image of a free algebra.
Besides varieties, category theorists use two other frameworks that are equivalent in terms of the kinds of algebras they describe: finitary monads and Lawvere theories. We may go from a variety to a finitary monad as follows. A category with some variety of algebras as objects and homomorphisms as morphisms is called a finitary algebraic category. For any finitary algebraic category V, the forgetful functor G : V → Set has a left adjoint F : Set → V, namely the functor that assigns to each set the free algebra on that set. This adjunction is monadic, meaning that the category V is equivalent to the Eilenberg–Moore category SetT for the monad T = GF. Moreover, the monad T is finitary, meaning it commutes with filtered colimits.
The monad T : Set → Set is thus enough to recover the finitary algebraic category. Indeed, finitary algebraic categories are precisely those categories equivalent to the Eilenberg–Moore categories of finitary monads. Both of these, in turn, are equivalent to categories of algebras of Lawvere theories.
Working with monads permits the following generalization. One says a category is an algebraic category if it is monadic over Set. This is a more general notion than "finitary algebraic category" because it admits such categories as CABA (complete atomic Boolean algebras) and CSLat (complete semilattices), whose signatures include infinitary operations. In those two cases the signature is large, meaning that it forms not a set but a proper class, because its operations are of unbounded arity. The algebraic category of sigma algebras also has infinitary operations, but their arity is countable, whence its signature is small (forms a set).
Every finitary algebraic category is a locally presentable category.
Since varieties are closed under arbitrary direct products, all non-trivial varieties contain infinite algebras. Attempts have been made to develop a finitary analogue of the theory of varieties. This led, e.g., to the notion of a variety of finite semigroups. This kind of variety uses only finitary products, but it uses a more general kind of identities.
A pseudovariety is usually defined to be a class of algebras of a given signature, closed under the taking of homomorphic images, subalgebras and finitary direct products. Not every author assumes that all algebras of a pseudovariety are finite; if this is the case, one sometimes talks of a variety of finite algebras. For pseudovarieties, there is no general finitary counterpart to Birkhoff's theorem, but in many cases the introduction of a more complex notion of equations allows similar results to be derived.[2] Namely, a class of finite monoids is a variety of finite monoids if and only if it can be defined by a set of profinite identities.[3]
Pseudovarieties are of particular importance in the study of finite semigroups and hence in formal language theory. Eilenberg's theorem, often referred to as the variety theorem, describes a natural correspondence between varieties of regular languages and pseudovarieties of finite semigroups.
https://en.wikipedia.org/wiki/Variety_(universal_algebra)
An ABX test is a method of comparing two choices of sensory stimuli to identify detectable differences between them. A subject is presented with two known samples (sample A, the first reference, and sample B, the second reference) followed by one unknown sample X that is randomly selected from either A or B. The subject is then required to identify X as either A or B. If X cannot be identified reliably with a low p-value in a predetermined number of trials, then the null hypothesis cannot be rejected and it cannot be proven that there is a perceptible difference between A and B.
ABX tests can easily be performed as double-blind trials, eliminating any possible unconscious influence from the researcher or the test supervisor. Because samples A and B are provided just prior to sample X, the difference does not have to be discerned using long-term memory or past experience. Thus, the ABX test answers whether or not, under the test circumstances, a perceptual difference can be found.
ABX tests are commonly used in evaluations of digital audio data compression methods; sample A is typically an uncompressed sample, and sample B is a compressed version of A. Audible compression artifacts that indicate a shortcoming in the compression algorithm can be identified with subsequent testing. ABX tests can also be used to compare the different degrees of fidelity loss between two different audio formats at a given bitrate.
ABX tests can be used to audition input, processing, and output components as well as cabling: virtually any audio product or prototype design.
The history of ABX testing and its naming dates back to 1950, in a paper published by two Bell Labs researchers, W. A. Munson and Mark B. Gardner, titled Standardizing Auditory Tests.[1]
The purpose of the present paper is to describe a test procedure which has shown promise in this direction and to give descriptions of equipment which have been found helpful in minimizing the variability of the test results. The procedure, which we have called the "ABX" test, is a modification of the method of paired comparisons. An observer is presented with a time sequence of three signals for each judgment he is asked to make. During the first time interval he hears signal A, during the second, signal B, and finally signal X. His task is to indicate whether the sound heard during the X interval was more like that during the A interval or more like that during the B interval. For a threshold test, the A interval is quiet, the B interval is signal, and the X interval is either quiet or signal.
The test has evolved to other variations such as subject control over duration and sequence of testing. One such example was the hardware ABX comparator in 1977, built by the ABX company in Troy, Michigan, and documented by one of its founders, David Clark.[2]
Refinements to the A/B test
The author's first experience with double-blind audibility testing was as a member of the SMWTMS Audio Club in early 1977. A button was provided which would select at random component A or B. Identifying one of these, the X component, was greatly hampered by not having the known A and B available for reference.
This was corrected by using three interlocked pushbuttons, A, B, and X. Once an X was selected, it would remain that particular A or B until it was decided to move on to another random selection.
However, another problem quickly became obvious. There was always an audible relay transition time delay when switching from A to B. When switching from A to X, however, the time delay would be missing if X was really A and present if X was really B. This extraneous cue was removed by inserting a fixed length dropout time when any change was made. The dropout time was selected to be 50 ms which produces a slight consistent click while allowing subjectively instant comparison.
The ABX company is now defunct, and hardware comparators in general are extinct as commercial offerings. A myriad of software tools exist, such as the foobar2000 ABX plug-in, for performing file comparisons, but hardware equipment testing requires building custom implementations.
ABX test equipment utilizing relays to switch between two different hardware paths can help determine if there are perceptual differences in cables and components. Video, audio and digital transmission paths can be compared. If the switching is microprocessor controlled, double-blind tests are possible.
Loudspeaker-level and line-level audio comparisons could be performed on an ABX test device offered for sale as the ABX Comparator by QSC Audio Products from 1998 to 2004. Other hardware solutions have been fabricated privately by individuals or organizations for internal testing.
If only one ABX trial were performed, random guessing would incur a 50% chance of choosing the correct answer, the same as flipping a coin. In order to make a statement having some degree of confidence, many trials must be performed. By increasing the number of trials, the likelihood of statistically asserting a person's ability to distinguish A and B is enhanced for a given confidence level. A 95% confidence level is commonly considered statistically significant.[2] The company QSC, in the ABX Comparator user manual, recommended a minimum of ten listening trials in each round of tests.[3]
QSC recommended that no more than 25 trials be performed, as subject fatigue can set in, making the test less sensitive (less likely to reveal one's actual ability to discern the difference between A and B).[3] However, a more sensitive test can be obtained by pooling the results from a number of such tests, using separate individuals or tests from the same subject conducted in between rest breaks. For a large number of total trials N, a significant result (one with 95% confidence) can be claimed if the number of correct responses exceeds N/2 + √N. Important decisions are normally based on a higher level of confidence, since an erroneous significant result would be claimed in one of 20 such tests simply by chance.
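The significance arithmetic above can be sketched as follows (standard binomial reasoning, not code from any ABX product): under the null hypothesis of pure guessing, the number of correct answers follows a Binomial(N, 0.5) distribution.

```python
from math import comb

def p_value(correct, trials):
    """One-sided P(X >= correct) for X ~ Binomial(trials, 1/2)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2**trials

# Example: 12 correct out of 16 trials.
print(round(p_value(12, 16), 4))  # 0.0384, significant at the 95% level

# For large N, the rule of thumb N/2 + sqrt(N) corresponds to a normal
# approximation with z = 2 (one-sided confidence a bit above 97%),
# a slightly conservative cutoff.
```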
The foobar2000 and Amarok audio players support software-based ABX testing, the latter using a third-party script. Lacinato ABX is a cross-platform audio testing tool for Linux, Windows, and 64-bit Mac. Lacinato WebABX is a web-based cross-browser audio ABX tool. Open-source aveX was mainly developed for Linux and also provides test monitoring from a remote computer. ABX patcher is an ABX implementation for Max/MSP. More ABX software can be found at the archived PCABX website.
A codec listening test is a scientific study designed to compare two or more lossy audio codecs, usually with respect to perceived fidelity or compression efficiency.
ABX is a type of forced-choice testing. A subject's choices can be on merit, i.e. the subject indeed honestly tried to identify whether X seemed closer to A or B. But uninterested or tired subjects might choose randomly without even trying. If not caught, this may dilute the results of other subjects who intently took the test and subject the outcome to Simpson's paradox, resulting in false summary results. Simply looking at the outcome totals of the test (m out of n answers correct) cannot reveal occurrences of this problem.
This problem becomes more acute if the differences are small. The user may get frustrated and simply aim to finish the test by voting randomly. In this regard, forced-choice tests such as ABX tend to favor negative outcomes when differences are small if proper protocols are not used to guard against this problem.
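A toy simulation makes the dilution effect concrete (the accuracy figures below are assumptions chosen purely for illustration):

```python
import random

# 5 attentive listeners answer correctly 80% of the time;
# 5 fatigued listeners guess at 50%. Pooling everyone drags the
# summary score toward chance even though the attentive group
# alone is clearly above it.

random.seed(1)
TRIALS = 1000
attentive = [sum(random.random() < 0.8 for _ in range(TRIALS)) for _ in range(5)]
guessing = [sum(random.random() < 0.5 for _ in range(TRIALS)) for _ in range(5)]

att_rate = sum(attentive) / (5 * TRIALS)
pooled_rate = sum(attentive + guessing) / (10 * TRIALS)
print(att_rate, pooled_rate)  # pooled score sits well below the attentive score
```

Per-subject screening (as recommended below) catches this; the pooled total alone does not.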
Best practices call for both the inclusion of controls and the screening of subjects:[5]
A major consideration is the inclusion of appropriate control conditions. Typically, control conditions include the presentation of unimpaired audio materials, introduced in ways that are unpredictable to the subjects. It is the differences between judgement of these control stimuli and the potentially impaired ones that allows one to conclude that the grades are actual assessments of the impairments.
3.2.2 Post-screening of subjects
Post-screening methods can be roughly separated into at least two classes; one is based on inconsistencies compared with the mean result and another relies on the ability of the subject to make correct identifications. The first class is never justifiable. Whenever a subjective listening test is performed with the test method recommended here, the required information for the second class of post-screening is automatically available. A suggested statistical method for doing this is described in Attachment 1.
The methods are primarily used to eliminate subjects who cannot make the appropriate discriminations. The application of a post-screening method may clarify the tendencies in a test result. However, bearing in mind the variability of subjects’ sensitivities to different artefacts, caution should be exercised.
Other flaws include lack of subject training and familiarization with the test and content selected:
4.1 Familiarization or training phase
Prior to formal grading, subjects must be allowed to become thoroughly familiar with the test facilities, the test environment, the grading process, the grading scales and the methods of their use. Subjects should also become thoroughly familiar with the artefacts under study. For the most sensitive tests they should be exposed to all the material they will be grading later in the formal grading sessions. During familiarization or training, subjects should be preferably together in groups (say, consisting of three subjects), so that they can interact freely and discuss the artefacts they detect with each other.
Other problems might arise from the ABX equipment itself, as outlined by Clark,[2] where the equipment provides a tell, allowing the subject to identify the source. Lack of transparency of the ABX fixture creates similar problems.
Since auditory tests and many other sensory tests rely on short-term memory, which only lasts a few seconds, it is critical that the test fixture allows the subject to identify short segments that can be compared quickly. Pops and glitches in the switching apparatus likewise must be eliminated, as they may dominate or otherwise interfere with the stimuli being tested in what is stored in the subject's short-term memory.
Since ABX testing requires human beings for evaluation of lossy audio codecs, it is time-consuming and costly. Therefore, cheaper approaches have been developed, e.g. PEAQ, which is an implementation of the ODG (objective difference grade).
In MUSHRA, the subject is presented with the reference (labeled as such), a certain number of test samples, a hidden version of the reference and one or more anchors. A 0–100 rating scale makes it possible to rate very small differences, and the hidden version still provides discrimination checks.
Alternative general methods are used in discrimination testing, such as paired comparison, duo–trio, and triangle testing. Of these, duo–trio and triangle testing are particularly close to ABX testing.
In this context, ABX testing is also known as "duo–trio" in "balanced reference" mode – both knowns are presented as references, rather than one alone.[6]
|
https://en.wikipedia.org/wiki/ABX_test
|
In mathematics, in linear algebra, a Weyr canonical form (or Weyr form or Weyr matrix) is a square matrix which (in some sense) induces "nice" properties with matrices it commutes with. It also has a particularly simple structure and the conditions for possessing a Weyr form are fairly weak, making it a suitable tool for studying classes of commuting matrices. A square matrix is said to be in the Weyr canonical form if the matrix has the structure defining the Weyr canonical form. The Weyr form was discovered by the Czech mathematician Eduard Weyr in 1885.[1][2][3] The Weyr form did not become popular among mathematicians, and it was overshadowed by the closely related, but distinct, canonical form known by the name Jordan canonical form.[3] The Weyr form has been rediscovered several times since Weyr's original discovery in 1885.[4] This form has been variously called the modified Jordan form, reordered Jordan form, second Jordan form, and H-form.[4] The current terminology is credited to Shapiro, who introduced it in a paper published in the American Mathematical Monthly in 1999.[4][5]
Recently several applications have been found for the Weyr matrix. Of particular interest is an application of the Weyr matrix in the study of phylogenetic invariants in biomathematics.
A basic Weyr matrix with eigenvalue $\lambda$ is an $n \times n$ matrix $W$ of the following form: there is an integer partition
$$n = n_1 + n_2 + \cdots + n_r, \qquad n_1 \geq n_2 \geq \cdots \geq n_r > 0,$$
such that, when $W$ is viewed as an $r \times r$ block matrix $(W_{ij})$, where the $(i,j)$ block $W_{ij}$ is an $n_i \times n_j$ matrix, the following three features are present:
- The main diagonal blocks $W_{ii}$ are the scalar matrices $\lambda I_{n_i}$ for $i = 1, \ldots, r$.
- The first superdiagonal blocks $W_{i,i+1}$ are full column rank $n_i \times n_{i+1}$ matrices of the form $\begin{bmatrix} I_{n_{i+1}} \\ 0 \end{bmatrix}$ for $i = 1, \ldots, r-1$.
- All other blocks of $W$ are zero.
In this case, we say that $W$ has Weyr structure $(n_1, n_2, \ldots, n_r)$.
The following is an example of a basic Weyr matrix.
$$W = \begin{bmatrix} W_{11} & W_{12} & & \\ & W_{22} & W_{23} & \\ & & W_{33} & W_{34} \\ & & & W_{44} \end{bmatrix}$$
In this matrix, $n = 9$ and $n_1 = 4, n_2 = 2, n_3 = 2, n_4 = 1$. So $W$ has the Weyr structure $(4, 2, 2, 1)$. Also,
$$W_{11} = \begin{bmatrix} \lambda & 0 & 0 & 0 \\ 0 & \lambda & 0 & 0 \\ 0 & 0 & \lambda & 0 \\ 0 & 0 & 0 & \lambda \end{bmatrix} = \lambda I_4, \quad W_{22} = \begin{bmatrix} \lambda & 0 \\ 0 & \lambda \end{bmatrix} = \lambda I_2, \quad W_{33} = \begin{bmatrix} \lambda & 0 \\ 0 & \lambda \end{bmatrix} = \lambda I_2, \quad W_{44} = \begin{bmatrix} \lambda \end{bmatrix} = \lambda I_1$$
and
$$W_{12} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \\ 0 & 0 \\ 0 & 0 \end{bmatrix}, \quad W_{23} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}, \quad W_{34} = \begin{bmatrix} 1 \\ 0 \end{bmatrix}.$$
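The example can be checked mechanically. Below is a sketch in NumPy (an illustration, not part of the article) that assembles this $9 \times 9$ matrix from its blocks and reads the Weyr structure back off the nullities of powers of $W - \lambda I$, using the fact that $\operatorname{nullity}((W - \lambda I)^k) = n_1 + \cdots + n_k$.

```python
import numpy as np

lam = 5.0  # any eigenvalue works; 5.0 is an arbitrary illustrative choice
n = 9
W = lam * np.eye(n)
# First superdiagonal blocks: truncated identities of sizes 4x2, 2x2, 2x1.
W[0:4, 4:6] = np.eye(4, 2)   # W12
W[4:6, 6:8] = np.eye(2)      # W23
W[6:8, 8:9] = np.eye(2, 1)   # W34

def nullity(M, tol=1e-9):
    return M.shape[1] - np.linalg.matrix_rank(M, tol=tol)

N = W - lam * np.eye(n)
nullities = [nullity(np.linalg.matrix_power(N, k)) for k in range(5)]
structure = [nullities[k] - nullities[k - 1] for k in range(1, 5)]
print(structure)  # [4, 2, 2, 1]
```

The successive nullities are $0, 4, 6, 8, 9$, and their increments recover the Weyr structure $(4, 2, 2, 1)$.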
Let $W$ be a square matrix and let $\lambda_1, \ldots, \lambda_k$ be the distinct eigenvalues of $W$. We say that $W$ is in Weyr form (or is a Weyr matrix) if $W$ has the following form:
$$W = \begin{bmatrix} W_1 & & & \\ & W_2 & & \\ & & \ddots & \\ & & & W_k \end{bmatrix}$$
where $W_i$ is a basic Weyr matrix with eigenvalue $\lambda_i$ for $i = 1, \ldots, k$.
The following image shows an example of a general Weyr matrix consisting of three basic Weyr matrix blocks. The basic Weyr matrix in the top-left corner has the structure (4,2,1) with eigenvalue 4, the middle block has structure (2,2,1,1) with eigenvalue -3 and the one in the lower-right corner has the structure (3, 2) with eigenvalue 0.
The Weyr canonical form $W = P^{-1} J P$ is related to the Jordan form $J$ by a simple permutation $P$ for each Weyr basic block as follows: the first index of each Weyr subblock forms the largest Jordan chain. After crossing out these rows and columns, the first index of each new subblock forms the second largest Jordan chain, and so forth.[6]
That the Weyr form is a canonical form of a matrix is a consequence of the following result:[3] Each square matrix $A$ over an algebraically closed field is similar to a Weyr matrix $W$ which is unique up to permutation of its basic blocks. The matrix $W$ is called the Weyr (canonical) form of $A$.
Let $A$ be a square matrix of order $n$ over an algebraically closed field and let the distinct eigenvalues of $A$ be $\lambda_1, \lambda_2, \ldots, \lambda_k$. The Jordan–Chevalley decomposition theorem states that $A$ is similar to a block diagonal matrix of the form
$$A = \begin{bmatrix} \lambda_1 I + N_1 & & & \\ & \lambda_2 I + N_2 & & \\ & & \ddots & \\ & & & \lambda_k I + N_k \end{bmatrix} = \begin{bmatrix} \lambda_1 I & & & \\ & \lambda_2 I & & \\ & & \ddots & \\ & & & \lambda_k I \end{bmatrix} + \begin{bmatrix} N_1 & & & \\ & N_2 & & \\ & & \ddots & \\ & & & N_k \end{bmatrix} = D + N$$
where $D$ is a diagonal matrix, $N$ is a nilpotent matrix, and $[D, N] = 0$, justifying the reduction of $N$ into subblocks $N_i$. So the problem of reducing $A$ to the Weyr form reduces to the problem of reducing the nilpotent matrices $N_i$ to the Weyr form. This leads to the generalized eigenspace decomposition theorem.
Given a nilpotent square matrix $A$ of order $n$ over an algebraically closed field $F$, the following algorithm produces an invertible matrix $C$ and a Weyr matrix $W$ such that $W = C^{-1} A C$.
Step 1
Let $A_1 = A$.
Step 2
Step 3
If $A_2$ is nonzero, repeat Step 2 on $A_2$.
Step 4
Continue the processes of Steps 1 and 2 to obtain increasingly smaller square matrices $A_1, A_2, A_3, \ldots$ and associated invertible matrices $P_1, P_2, P_3, \ldots$ until the first zero matrix $A_r$ is obtained.
Step 5
The Weyr structure of $A$ is $(n_1, n_2, \ldots, n_r)$ where $n_i = \operatorname{nullity}(A_i)$.
Step 6
Step 7
Use elementary row operations to find an invertible matrix $Y_{r-1}$ of appropriate size such that the product $Y_{r-1} X_{r, r-1}$ is a matrix of the form $I_{r, r-1} = \begin{bmatrix} I \\ O \end{bmatrix}$.
Step 8
Set $Q_1 = \operatorname{diag}(I, I, \ldots, Y_{r-1}^{-1}, I)$ and compute $Q_1^{-1} X Q_1$. In this matrix, the $(r, r-1)$-block is $I_{r, r-1}$.
Step 9
Find a matrix $R_1$ formed as a product of elementary matrices such that $R_1^{-1} Q_1^{-1} X Q_1 R_1$ is a matrix in which all the blocks above the block $I_{r, r-1}$ contain only $0$'s.
Step 10
Repeat Steps 8 and 9 on column $r-1$, converting the $(r-1, r-2)$-block to $I_{r-1, r-2}$ via conjugation by some invertible matrix $Q_2$. Use this block to clear out the blocks above, via conjugation by a product $R_2$ of elementary matrices.
Step 11
Repeat these processes on columns $r-2, r-3, \ldots, 3, 2$, using conjugations by $Q_3, R_3, \ldots, Q_{r-2}, R_{r-2}, Q_{r-1}$. The resulting matrix $W$ is now in Weyr form.
Step 12
Let $C = P_1 \operatorname{diag}(I, P_2) \cdots \operatorname{diag}(I, P_{r-1}) Q_1 R_1 Q_2 \cdots R_{r-2} Q_{r-1}$. Then $W = C^{-1} A C$.
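Step 5 already determines the Weyr structure from nullities alone: for a nilpotent $A$, the numbers $n_i = \operatorname{nullity}(A_i)$ coincide with the increments $\operatorname{nullity}(A^i) - \operatorname{nullity}(A^{i-1})$, so the structure can be read off without carrying out the full reduction. A sketch in NumPy (an illustration, not part of the article):

```python
import numpy as np

def weyr_structure(A, tol=1e-9):
    """Weyr structure of a nilpotent matrix A (eigenvalue 0), read off
    from nullities of successive powers of A."""
    n = A.shape[0]
    nullities = [0]
    P = np.eye(n)
    while nullities[-1] < n:          # stops because A is nilpotent
        P = P @ A
        nullities.append(n - np.linalg.matrix_rank(P, tol=tol))
    return tuple(nullities[k] - nullities[k - 1] for k in range(1, len(nullities)))

# Nilpotent matrix with Jordan blocks of sizes 3 and 2; its Weyr structure
# is the conjugate partition of (3, 2), namely (2, 2, 1).
A = np.zeros((5, 5))
A[0, 1] = A[1, 2] = A[3, 4] = 1.0
print(weyr_structure(A))  # (2, 2, 1)
```

This also illustrates the relation to the Jordan form: the Weyr structure is the conjugate partition of the Jordan block sizes for the given eigenvalue.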
Some well-known applications of the Weyr form are listed below:[3]
|
https://en.wikipedia.org/wiki/Weyr_canonical_form
|
In machine learning, diffusion models, also known as diffusion-based generative models or score-based generative models, are a class of latent variable generative models. A diffusion model consists of two major components: the forward diffusion process, and the reverse sampling process. The goal of diffusion models is to learn a diffusion process for a given dataset, such that the process can generate new elements that are distributed similarly to the original dataset. A diffusion model models data as generated by a diffusion process, whereby a new datum performs a random walk with drift through the space of all possible data.[1] A trained diffusion model can be sampled in many ways, with different efficiency and quality.
There are various equivalent formalisms, including Markov chains, denoising diffusion probabilistic models, noise conditioned score networks, and stochastic differential equations.[2] They are typically trained using variational inference.[3] The model responsible for denoising is typically called its "backbone". The backbone may be of any kind, but they are typically U-nets or transformers.
As of 2024, diffusion models are mainly used for computer vision tasks, including image denoising, inpainting, super-resolution, image generation, and video generation. These typically involve training a neural network to sequentially denoise images blurred with Gaussian noise.[1][4] The model is trained to reverse the process of adding noise to an image. After training to convergence, it can be used for image generation by starting with an image composed of random noise, and applying the network iteratively to denoise the image.
Diffusion-based image generators have seen widespread commercial interest, such as Stable Diffusion and DALL-E. These models typically combine diffusion models with other models, such as text encoders and cross-attention modules to allow text-conditioned generation.[5]
Other than computer vision, diffusion models have also found applications in natural language processing[6][7] such as text generation[8][9] and summarization,[10] sound generation,[11] and reinforcement learning.[12][13]
Diffusion models were introduced in 2015 as a method to train a model that can sample from a highly complex probability distribution. They used techniques from non-equilibrium thermodynamics, especially diffusion.[14]
Consider, for example, how one might model the distribution of all naturally occurring photos. Each image is a point in the space of all images, and the distribution of naturally occurring photos is a "cloud" in space which, by repeatedly adding noise to the images, diffuses out to the rest of the image space, until the cloud becomes all but indistinguishable from a Gaussian distribution $\mathcal{N}(0, I)$. A model that can approximately undo the diffusion can then be used to sample from the original distribution. This is studied in "non-equilibrium" thermodynamics, as the starting distribution is not in equilibrium, unlike the final distribution.
The equilibrium distribution is the Gaussian distribution $\mathcal{N}(0, I)$, with pdf $\rho(x) \propto e^{-\frac{1}{2}\|x\|^2}$. This is just the Maxwell–Boltzmann distribution of particles in a potential well $V(x) = \frac{1}{2}\|x\|^2$ at temperature 1. The initial distribution, being very much out of equilibrium, would diffuse towards the equilibrium distribution, making biased random steps that are a sum of pure randomness (like a Brownian walker) and gradient descent down the potential well. The randomness is necessary: if the particles were to undergo only gradient descent, then they would all fall to the origin, collapsing the distribution.
The 2020 paper proposed the Denoising Diffusion Probabilistic Model (DDPM), which improves upon the previous method by variational inference.[3][15]
To present the model, we need some notation. Fix a noise schedule $\beta_1, \ldots, \beta_T \in (0, 1)$, and write $\alpha_t := 1 - \beta_t$, $\bar\alpha_t := \alpha_1 \cdots \alpha_t$, and $\sigma_t^2 := 1 - \bar\alpha_t$.
A forward diffusion process starts at some starting point $x_0 \sim q$, where $q$ is the probability distribution to be learned, then repeatedly adds noise to it by
$$x_t = \sqrt{1 - \beta_t}\, x_{t-1} + \sqrt{\beta_t}\, z_t$$
where $z_1, \ldots, z_T$ are IID samples from $\mathcal{N}(0, I)$. This is designed so that for any starting distribution of $x_0$, we have $\lim_t x_t | x_0$ converging to $\mathcal{N}(0, I)$.
The entire diffusion process then satisfies
$$q(x_{0:T}) = q(x_0)\, q(x_1 | x_0) \cdots q(x_T | x_{T-1}) = q(x_0)\, \mathcal{N}(x_1 | \sqrt{\alpha_1}\, x_0, \beta_1 I) \cdots \mathcal{N}(x_T | \sqrt{\alpha_T}\, x_{T-1}, \beta_T I)$$
or
$$\ln q(x_{0:T}) = \ln q(x_0) - \sum_{t=1}^T \frac{1}{2 \beta_t} \|x_t - \sqrt{1 - \beta_t}\, x_{t-1}\|^2 + C$$
where $C$ is a normalization constant and often omitted. In particular, we note that $x_{1:T} | x_0$ is a Gaussian process, which affords us considerable freedom in reparameterization. For example, by standard manipulation with the Gaussian process,
$$x_t | x_0 \sim \mathcal{N}\left(\sqrt{\bar\alpha_t}\, x_0, \sigma_t^2 I\right), \qquad x_{t-1} | x_t, x_0 \sim \mathcal{N}\left(\tilde\mu_t(x_t, x_0), \tilde\sigma_t^2 I\right)$$
In particular, notice that for large $t$, the variable $x_t | x_0 \sim \mathcal{N}\left(\sqrt{\bar\alpha_t}\, x_0, \sigma_t^2 I\right)$ converges to $\mathcal{N}(0, I)$. That is, after a long enough diffusion process, we end up with some $x_T$ that is very close to $\mathcal{N}(0, I)$, with all traces of the original $x_0 \sim q$ gone.
For example, since $x_t | x_0 \sim \mathcal{N}\left(\sqrt{\bar\alpha_t}\, x_0, \sigma_t^2 I\right)$, we can sample $x_t | x_0$ directly "in one step", instead of going through all the intermediate steps $x_1, x_2, \ldots, x_{t-1}$.
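The one-step formula can be checked numerically: iterating the per-step update should match the closed form $x_t | x_0 \sim \mathcal{N}(\sqrt{\bar\alpha_t}\, x_0, (1 - \bar\alpha_t) I)$ in distribution. A sketch with an illustrative linear noise schedule (the schedule itself is an assumption, not from the text):

```python
import numpy as np

rng = np.random.default_rng(0)
T = 100
betas = np.linspace(1e-4, 0.2, T)  # illustrative linear schedule
alphas = 1.0 - betas
alpha_bar = np.cumprod(alphas)

x0 = 3.0          # a scalar "data point"
n = 200_000       # number of independent Monte Carlo chains

# Iterate the per-step update x_t = sqrt(1 - beta_t) x_{t-1} + sqrt(beta_t) z_t.
x = np.full(n, x0)
for t in range(T):
    x = np.sqrt(alphas[t]) * x + np.sqrt(betas[t]) * rng.standard_normal(n)

# Compare against the closed form N(sqrt(alpha_bar_T) x0, 1 - alpha_bar_T).
print(x.mean(), np.sqrt(alpha_bar[-1]) * x0)
print(x.var(), 1.0 - alpha_bar[-1])
```

The empirical mean and variance after $T$ iterated steps agree with the one-step closed form up to Monte Carlo error.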
We know $x_{t-1} | x_0$ is a Gaussian, and $x_t | x_{t-1}$ is another Gaussian. We also know that these are independent. Thus we can perform a reparameterization:
$$x_{t-1} = \sqrt{\bar\alpha_{t-1}}\, x_0 + \sqrt{1 - \bar\alpha_{t-1}}\, z, \qquad x_t = \sqrt{\alpha_t}\, x_{t-1} + \sqrt{1 - \alpha_t}\, z'$$
where $z, z'$ are IID Gaussians.
There are 5 variables $x_0, x_{t-1}, x_t, z, z'$ and two linear equations. The two sources of randomness are $z, z'$, which can be reparameterized by rotation, since the IID Gaussian distribution is rotationally symmetric.
By plugging in the equations, we can solve for the first reparameterization:
$$x_t = \sqrt{\bar\alpha_t}\, x_0 + \underbrace{\sqrt{\alpha_t - \bar\alpha_t}\, z + \sqrt{1 - \alpha_t}\, z'}_{= \sigma_t z''}$$
where $z''$ is a Gaussian with mean zero and variance one.
To find the second one, we complete the rotation matrix:
$$\begin{bmatrix} z'' \\ z''' \end{bmatrix} = \begin{bmatrix} \frac{\sqrt{\alpha_t - \bar\alpha_t}}{\sigma_t} & \frac{\sqrt{\beta_t}}{\sigma_t} \\ ? & ? \end{bmatrix} \begin{bmatrix} z \\ z' \end{bmatrix}$$
Since rotation matrices are all of the form $\begin{bmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{bmatrix}$, we know the matrix must be
$$\begin{bmatrix} z'' \\ z''' \end{bmatrix} = \begin{bmatrix} \frac{\sqrt{\alpha_t - \bar\alpha_t}}{\sigma_t} & \frac{\sqrt{\beta_t}}{\sigma_t} \\ -\frac{\sqrt{\beta_t}}{\sigma_t} & \frac{\sqrt{\alpha_t - \bar\alpha_t}}{\sigma_t} \end{bmatrix} \begin{bmatrix} z \\ z' \end{bmatrix}$$
and since the inverse of a rotation matrix is its transpose,
$$\begin{bmatrix} z \\ z' \end{bmatrix} = \begin{bmatrix} \frac{\sqrt{\alpha_t - \bar\alpha_t}}{\sigma_t} & \frac{\sqrt{\beta_t}}{\sigma_t} \\ -\frac{\sqrt{\beta_t}}{\sigma_t} & \frac{\sqrt{\alpha_t - \bar\alpha_t}}{\sigma_t} \end{bmatrix}^{\mathsf{T}} \begin{bmatrix} z'' \\ z''' \end{bmatrix} = \begin{bmatrix} \frac{\sqrt{\alpha_t - \bar\alpha_t}}{\sigma_t} & -\frac{\sqrt{\beta_t}}{\sigma_t} \\ \frac{\sqrt{\beta_t}}{\sigma_t} & \frac{\sqrt{\alpha_t - \bar\alpha_t}}{\sigma_t} \end{bmatrix} \begin{bmatrix} z'' \\ z''' \end{bmatrix}$$
Plugging back, and simplifying, we have
$$x_t = \sqrt{\bar\alpha_t}\, x_0 + \sigma_t z'', \qquad x_{t-1} = \tilde\mu_t(x_t, x_0) - \tilde\sigma_t z'''$$
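The rotation reparameterization can be verified numerically: combining the two update equations for arbitrary $z, z'$ reproduces $x_t = \sqrt{\bar\alpha_t}\, x_0 + \sigma_t z''$ exactly, and the rotation's rows are orthonormal, which is why $z'', z'''$ are again IID standard Gaussians. A sketch (the particular values of $\alpha_t$ and $\bar\alpha_{t-1}$ are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(1)
alpha_t, abar_prev = 0.9, 0.5          # illustrative schedule values
abar_t = alpha_t * abar_prev           # alpha_bar_t = alpha_t * alpha_bar_{t-1}
sigma_t = np.sqrt(1.0 - abar_t)

x0, z, zp = rng.standard_normal(3)

# The two-step route: x_{t-1} from (x0, z), then x_t from (x_{t-1}, z').
x_prev = np.sqrt(abar_prev) * x0 + np.sqrt(1.0 - abar_prev) * z
x_t = np.sqrt(alpha_t) * x_prev + np.sqrt(1.0 - alpha_t) * zp

# The one-step route via the rotated noise z''.
zpp = (np.sqrt(alpha_t - abar_t) * z + np.sqrt(1.0 - alpha_t) * zp) / sigma_t
x_t_direct = np.sqrt(abar_t) * x0 + sigma_t * zpp

print(np.isclose(x_t, x_t_direct))  # the two parameterizations agree

# The first row of the rotation matrix is a unit vector.
row = np.array([np.sqrt(alpha_t - abar_t), np.sqrt(1.0 - alpha_t)]) / sigma_t
print(np.isclose(row @ row, 1.0))
```

The unit-norm check is just the identity $(\alpha_t - \bar\alpha_t) + (1 - \alpha_t) = 1 - \bar\alpha_t = \sigma_t^2$.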
The key idea of DDPM is to use a neural network parametrized by $\theta$. The network takes in two arguments $x_t, t$, and outputs a vector $\mu_\theta(x_t, t)$ and a matrix $\Sigma_\theta(x_t, t)$, such that each step in the forward diffusion process can be approximately undone by $x_{t-1} \sim \mathcal{N}(\mu_\theta(x_t, t), \Sigma_\theta(x_t, t))$. This then gives us a backward diffusion process $p_\theta$ defined by
$$p_\theta(x_T) = \mathcal{N}(x_T | 0, I), \qquad p_\theta(x_{t-1} | x_t) = \mathcal{N}(x_{t-1} | \mu_\theta(x_t, t), \Sigma_\theta(x_t, t))$$
The goal now is to learn the parameters such that $p_\theta(x_0)$ is as close to $q(x_0)$ as possible. To do that, we use maximum likelihood estimation with variational inference.
The ELBO inequality states that $\ln p_\theta(x_0) \geq E_{x_{1:T} \sim q(\cdot | x_0)}[\ln p_\theta(x_{0:T}) - \ln q(x_{1:T} | x_0)]$, and taking one more expectation, we get
$$E_{x_0 \sim q}[\ln p_\theta(x_0)] \geq E_{x_{0:T} \sim q}[\ln p_\theta(x_{0:T}) - \ln q(x_{1:T} | x_0)]$$
We see that maximizing the quantity on the right would give us a lower bound on the likelihood of observed data. This allows us to perform variational inference.
Define the loss function
$$L(\theta) := -E_{x_{0:T} \sim q}[\ln p_\theta(x_{0:T}) - \ln q(x_{1:T} | x_0)]$$
and now the goal is to minimize the loss by stochastic gradient descent. The expression may be simplified to[16]
$$L(\theta) = \sum_{t=1}^T E_{x_{t-1}, x_t \sim q}[-\ln p_\theta(x_{t-1} | x_t)] + E_{x_0 \sim q}[D_{KL}(q(x_T | x_0) \| p_\theta(x_T))] + C$$
where $C$ does not depend on the parameter, and thus can be ignored. Since $p_\theta(x_T) = \mathcal{N}(x_T | 0, I)$ also does not depend on the parameter, the term $E_{x_0 \sim q}[D_{KL}(q(x_T | x_0) \| p_\theta(x_T))]$ can also be ignored. This leaves just $L(\theta) = \sum_{t=1}^T L_t$ with $L_t = E_{x_{t-1}, x_t \sim q}[-\ln p_\theta(x_{t-1} | x_t)]$ to be minimized.
Since $x_{t-1} | x_t, x_0 \sim \mathcal{N}(\tilde\mu_t(x_t, x_0), \tilde\sigma_t^2 I)$, this suggests that we should use $\mu_\theta(x_t, t) = \tilde\mu_t(x_t, x_0)$; however, the network does not have access to $x_0$, and so it has to estimate it instead. Now, since $x_t | x_0 \sim \mathcal{N}\left(\sqrt{\bar\alpha_t}\, x_0, \sigma_t^2 I\right)$, we may write $x_t = \sqrt{\bar\alpha_t}\, x_0 + \sigma_t z$, where $z$ is some unknown Gaussian noise. Now we see that estimating $x_0$ is equivalent to estimating $z$.
Therefore, let the network output a noise vector $\epsilon_\theta(x_t, t)$, and let it predict
$$\mu_\theta(x_t, t) = \tilde\mu_t\left(x_t, \frac{x_t - \sigma_t \epsilon_\theta(x_t, t)}{\sqrt{\bar\alpha_t}}\right) = \frac{x_t - \epsilon_\theta(x_t, t) \beta_t / \sigma_t}{\sqrt{\alpha_t}}$$
It remains to design $\Sigma_\theta(x_t, t)$. The DDPM paper suggested not learning it (since it resulted in "unstable training and poorer sample quality"), but fixing it at some value $\Sigma_\theta(x_t, t) = \zeta_t^2 I$, where either $\zeta_t^2 = \beta_t$ or $\tilde\sigma_t^2$ yielded similar performance.
With this, the loss simplifies to
$$L_t = \frac{\beta_t^2}{2 \alpha_t \sigma_t^2 \zeta_t^2} E_{x_0 \sim q;\, z \sim \mathcal{N}(0, I)}\left[\left\|\epsilon_\theta(x_t, t) - z\right\|^2\right] + C$$
which may be minimized by stochastic gradient descent. The paper noted empirically that an even simpler loss function
$$L_{\mathrm{simple}, t} = E_{x_0 \sim q;\, z \sim \mathcal{N}(0, I)}\left[\left\|\epsilon_\theta(x_t, t) - z\right\|^2\right]$$
resulted in better models.
After a noise prediction network is trained, it can be used for generating data points in the original distribution in a loop as follows:
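The generation loop can be sketched as follows. Here `eps_model` stands in for a trained noise-prediction network (an assumption in this sketch), the variance choice $\zeta_t^2 = \beta_t$ is one of the two options mentioned above, and the demo uses a toy distribution $q = \mathcal{N}(0, I)$ for which the optimal predictor is known in closed form, $E[z | x_t] = \sigma_t x_t$, so sampling must return standard-normal samples.

```python
import numpy as np

def ddpm_sample(eps_model, betas, shape, rng):
    """Ancestral DDPM sampling: start from pure noise, denoise step by step."""
    alphas = 1.0 - betas
    alpha_bar = np.cumprod(alphas)
    sigmas = np.sqrt(1.0 - alpha_bar)              # sigma_t
    x = rng.standard_normal(shape)                 # x_T ~ N(0, I)
    for t in range(len(betas) - 1, -1, -1):
        eps = eps_model(x, t)
        mean = (x - eps * betas[t] / sigmas[t]) / np.sqrt(alphas[t])
        z = rng.standard_normal(shape) if t > 0 else 0.0  # no noise at the last step
        x = mean + np.sqrt(betas[t]) * z           # zeta_t^2 = beta_t choice
    return x

# Toy check: for q = N(0, I), the optimal noise predictor is sigma_t * x_t,
# so the samples should again be standard normal.
T = 50
betas = np.linspace(1e-3, 0.1, T)  # illustrative schedule
alpha_bar = np.cumprod(1.0 - betas)
optimal_eps = lambda x, t: np.sqrt(1.0 - alpha_bar[t]) * x

samples = ddpm_sample(optimal_eps, betas, (100_000,), np.random.default_rng(0))
print(samples.mean(), samples.var())  # ≈ 0, ≈ 1
```

The per-step mean is exactly the formula for $\mu_\theta$ given above with $\epsilon_\theta$ plugged in.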
Score-based generative models are another formulation of diffusion modelling. They are also called noise conditional score networks (NCSN) or score matching with Langevin dynamics (SMLD).[17][18][19][20]
Consider the problem of image generation. Let $x$ represent an image, and let $q(x)$ be the probability distribution over all possible images. If we have $q(x)$ itself, then we can say for certain how likely a certain image is. However, this is intractable in general.
Most often, we are uninterested in knowing the absolute probability of a certain image. Instead, we are usually only interested in knowing how likely a certain image is compared to its immediate neighbors, e.g. how much more likely is an image of a cat compared to some small variants of it? Is it more likely if the image contains two whiskers, or three, or with some Gaussian noise added?
Consequently, we are actually quite uninterested in $q(x)$ itself, but rather in $\nabla_x \ln q(x)$. This has two major effects:
Let the score function be $s(x) := \nabla_x \ln q(x)$; then consider what we can do with $s(x)$.
As it turns out, $s(x)$ allows us to sample from $q(x)$ using thermodynamics. Specifically, if we have a potential energy function $U(x) = -\ln q(x)$, and a lot of particles in the potential well, then the distribution at thermodynamic equilibrium is the Boltzmann distribution $q_U(x) \propto e^{-U(x)/k_B T} = q(x)^{1/k_B T}$. At temperature $k_B T = 1$, the Boltzmann distribution is exactly $q(x)$.
Therefore, to model $q(x)$, we may start with a particle sampled at any convenient distribution (such as the standard Gaussian distribution), then simulate the motion of the particle forwards according to the Langevin equation
$$dx_t = -\nabla_{x_t} U(x_t)\, dt + \sqrt{2}\, dW_t$$
and the Boltzmann distribution is, by the Fokker–Planck equation, the unique thermodynamic equilibrium. So no matter what distribution $x_0$ has, the distribution of $x_t$ converges in distribution to $q$ as $t \to \infty$.
Given a density $q$, we wish to learn a score function approximation $f_\theta \approx \nabla \ln q$. This is score matching.[21] Typically, score matching is formalized as minimizing the Fisher divergence $E_q[\|f_\theta(x) - \nabla \ln q(x)\|^2]$. By expanding the integral, and performing an integration by parts,
$$E_q[\|f_\theta(x) - \nabla \ln q(x)\|^2] = E_q[\|f_\theta\|^2 + 2 \nabla \cdot f_\theta] + C$$
giving us a loss function, also known as the Hyvärinen scoring rule, that can be minimized by stochastic gradient descent.
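With a known score, Langevin dynamics indeed samples from $q$; a learned $f_\theta$ would simply replace the exact score below. A sketch using the Euler–Maruyama discretization of $dx = s(x)\, dt + \sqrt{2}\, dW$, for which $q$ is stationary; the target $q = \mathcal{N}(\mu, v)$ with score $s(x) = -(x - \mu)/v$ is an illustrative choice:

```python
import numpy as np

rng = np.random.default_rng(0)
mu, v = 2.0, 0.25                        # target q = N(mu, v)
score = lambda x: -(x - mu) / v          # exact grad log q for this target

dt, steps, n = 5e-3, 5_000, 20_000
x = rng.standard_normal(n)               # particles start far from q
for _ in range(steps):
    # Euler-Maruyama step of dx = score(x) dt + sqrt(2) dW
    x += score(x) * dt + np.sqrt(2.0 * dt) * rng.standard_normal(n)

print(x.mean(), x.var())  # ≈ 2.0, ≈ 0.25
```

After enough steps the particle cloud forgets its initialization and matches the target mean and variance up to discretization and Monte Carlo error.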
Suppose we need to model the distribution of images, and we want $x_0 \sim \mathcal{N}(0, I)$, a white-noise image. Now, most white-noise images do not look like real images, so $q(x_0) \approx 0$ for large swaths of $x_0 \sim \mathcal{N}(0, I)$. This presents a problem for learning the score function, because if there are no samples around a certain point, then we can't learn the score function at that point. If we do not know the score function $\nabla_{x_t} \ln q(x_t)$ at that point, then we cannot impose the time-evolution equation on a particle:
$$dx_t = \nabla_{x_t} \ln q(x_t)\, dt + \sqrt{2}\, dW_t$$
To deal with this problem, we perform annealing. If $q$ is too different from a white-noise distribution, then progressively add noise until it is indistinguishable from one. That is, we perform a forward diffusion, then learn the score function, then use the score function to perform a backward diffusion.
Consider again the forward diffusion process, but this time in continuous time:
$$x_t = \sqrt{1 - \beta_t}\, x_{t-1} + \sqrt{\beta_t}\, z_t$$
By taking the limit $\beta_t \to \beta(t)\, dt$, $\sqrt{dt}\, z_t \to dW_t$, we obtain a continuous diffusion process, in the form of a stochastic differential equation:
$$dx_t = -\frac{1}{2} \beta(t)\, x_t\, dt + \sqrt{\beta(t)}\, dW_t$$
where $W_t$ is a Wiener process (multidimensional Brownian motion).
Now, the equation is exactly a special case of the overdamped Langevin equation
$$dx_t = -\frac{D}{k_B T} (\nabla_x U)\, dt + \sqrt{2D}\, dW_t$$
where $D$ is the diffusion tensor, $T$ is the temperature, and $U$ is the potential energy field. If we substitute in $D = \frac{1}{2} \beta(t) I$, $k_B T = 1$, $U = \frac{1}{2} \|x\|^2$, we recover the above equation. This explains why the phrase "Langevin dynamics" is sometimes used in diffusion models.
Now the above equation is for the stochastic motion of a single particle. Suppose we have a cloud of particles distributed according to $q$ at time $t = 0$; then after a long time, the cloud of particles would settle into the stable distribution of $\mathcal{N}(0, I)$. Let $\rho_t$ be the density of the cloud of particles at time $t$; then we have
$$\rho_0 = q; \quad \rho_T \approx \mathcal{N}(0, I)$$
and the goal is to somehow reverse the process, so that we can start at the end and diffuse back to the beginning.
By the Fokker–Planck equation, the density of the cloud evolves according to
$$\partial_t \ln \rho_t = \frac{1}{2} \beta(t) \left(n + (x + \nabla \ln \rho_t) \cdot \nabla \ln \rho_t + \Delta \ln \rho_t\right)$$
where $n$ is the dimension of space, and $\Delta$ is the Laplace operator. Equivalently,
$$\partial_t \rho_t = \frac{1}{2} \beta(t) \left(\nabla \cdot (x \rho_t) + \Delta \rho_t\right)$$
If we have solved $\rho_t$ for time $t \in [0, T]$, then we can exactly reverse the evolution of the cloud. Suppose we start with another cloud of particles with density $\nu_0 = \rho_T$, and let the particles in the cloud evolve according to
$$dy_t = \frac{1}{2} \beta(T - t)\, y_t\, dt + \beta(T - t) \underbrace{\nabla_{y_t} \ln \rho_{T-t}(y_t)}_{\text{score function}}\, dt + \sqrt{\beta(T - t)}\, dW_t$$
then by plugging into the Fokker–Planck equation, we find that $\partial_t \rho_{T-t} = \partial_t \nu_t$. Thus this cloud of points is the original cloud, evolving backwards.[22]
At the continuous limit,
$$\bar\alpha_t = (1 - \beta_1) \cdots (1 - \beta_t) = e^{\sum_i \ln(1 - \beta_i)} \to e^{-\int_0^t \beta(t)\, dt}$$
and so
$$x_t | x_0 \sim \mathcal{N}\left(e^{-\frac{1}{2} \int_0^t \beta(t)\, dt}\, x_0, \left(1 - e^{-\int_0^t \beta(t)\, dt}\right) I\right)$$
In particular, we see that we can directly sample from any point in the continuous diffusion process without going through the intermediate steps, by first sampling $x_0 \sim q$, $z \sim \mathcal{N}(0, I)$, then computing $x_t = e^{-\frac{1}{2} \int_0^t \beta(t)\, dt}\, x_0 + \sqrt{1 - e^{-\int_0^t \beta(t)\, dt}}\, z$. That is, we can quickly sample $x_t \sim \rho_t$ for any $t \geq 0$.
Now, define a certain probability distribution $\gamma$ over $[0, \infty)$; then the score-matching loss function is defined as the expected Fisher divergence:
$$L(\theta) = E_{t \sim \gamma,\, x_t \sim \rho_t}[\|f_\theta(x_t, t)\|^2 + 2 \nabla \cdot f_\theta(x_t, t)]$$
After training, $f_\theta(x_t, t) \approx \nabla \ln \rho_t$, so we can perform the backwards diffusion process by first sampling $x_T \sim \mathcal{N}(0, I)$, then integrating the SDE from $t = T$ to $t = 0$:
$$x_{t - dt} = x_t + \frac{1}{2} \beta(t)\, x_t\, dt + \beta(t)\, f_\theta(x_t, t)\, dt + \sqrt{\beta(t)}\, dW_t$$
This may be done by any SDE integration method, such as the Euler–Maruyama method.
The name "noise conditional score network" is explained thus: the networkfθ{\displaystyle f_{\theta }}estimates the score function∇lnρt{\displaystyle \nabla \ln \rho _{t}}("score network"), and it is conditioned on the noise level of its input, i.e. on the timet{\displaystyle t}("noise conditional").
DDPM and score-based generative models are equivalent.[18][1][23]This means that a network trained using DDPM can be used as a NCSN, and vice versa.
We know thatxt|x0∼N(α¯tx0,σt2I){\displaystyle x_{t}|x_{0}\sim N\left({\sqrt {{\bar {\alpha }}_{t}}}x_{0},\sigma _{t}^{2}I\right)}, so byTweedie's formula, we have∇xtlnq(xt)=1σt2(−xt+α¯tEq[x0|xt]){\displaystyle \nabla _{x_{t}}\ln q(x_{t})={\frac {1}{\sigma _{t}^{2}}}(-x_{t}+{\sqrt {{\bar {\alpha }}_{t}}}E_{q}[x_{0}|x_{t}])}As described previously, the DDPM loss function is∑tLsimple,t{\displaystyle \sum _{t}L_{simple,t}}withLsimple,t=Ex0∼q;z∼N(0,I)[‖ϵθ(xt,t)−z‖2]{\displaystyle L_{simple,t}=E_{x_{0}\sim q;z\sim {\mathcal {N}}(0,I)}\left[\left\|\epsilon _{\theta }(x_{t},t)-z\right\|^{2}\right]}wherext=α¯tx0+σtz{\displaystyle x_{t}={\sqrt {{\bar {\alpha }}_{t}}}x_{0}+\sigma _{t}z}. By a change of variables,Lsimple,t=Ex0,xt∼q[‖ϵθ(xt,t)−xt−α¯tx0σt‖2]=Ext∼q,x0∼q(⋅|xt)[‖ϵθ(xt,t)−xt−α¯tx0σt‖2]{\displaystyle L_{simple,t}=E_{x_{0},x_{t}\sim q}\left[\left\|\epsilon _{\theta }(x_{t},t)-{\frac {x_{t}-{\sqrt {{\bar {\alpha }}_{t}}}x_{0}}{\sigma _{t}}}\right\|^{2}\right]=E_{x_{t}\sim q,x_{0}\sim q(\cdot |x_{t})}\left[\left\|\epsilon _{\theta }(x_{t},t)-{\frac {x_{t}-{\sqrt {{\bar {\alpha }}_{t}}}x_{0}}{\sigma _{t}}}\right\|^{2}\right]}and the term inside becomes a least squares regression, so if the network actually reaches the global minimum of loss, then we haveϵθ(xt,t)=xt−α¯tEq[x0|xt]σt=−σt∇xtlnq(xt){\displaystyle \epsilon _{\theta }(x_{t},t)={\frac {x_{t}-{\sqrt {{\bar {\alpha }}_{t}}}E_{q}[x_{0}|x_{t}]}{\sigma _{t}}}=-\sigma _{t}\nabla _{x_{t}}\ln q(x_{t})}
Thus, a score-based network predicts noise, and can be used for denoising.
Conversely, the continuous limitxt−1=xt−dt,βt=β(t)dt,ztdt=dWt{\displaystyle x_{t-1}=x_{t-dt},\beta _{t}=\beta (t)dt,z_{t}{\sqrt {dt}}=dW_{t}}of the backward equationxt−1=xtαt−βtσtαtϵθ(xt,t)+βtzt;zt∼N(0,I){\displaystyle x_{t-1}={\frac {x_{t}}{\sqrt {\alpha _{t}}}}-{\frac {\beta _{t}}{\sigma _{t}{\sqrt {\alpha _{t}}}}}\epsilon _{\theta }(x_{t},t)+{\sqrt {\beta _{t}}}z_{t};\quad z_{t}\sim {\mathcal {N}}(0,I)}gives us precisely the same equation as score-based diffusion:xt−dt=xt(1+β(t)dt/2)+β(t)∇xtlnq(xt)dt+β(t)dWt{\displaystyle x_{t-dt}=x_{t}(1+\beta (t)dt/2)+\beta (t)\nabla _{x_{t}}\ln q(x_{t})dt+{\sqrt {\beta (t)}}dW_{t}}Thus, at infinitesimal steps of DDPM, a denoising network performs score-based diffusion.
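The identity ϵθ(xt, t) = −σt∇ln q(xt) can be checked numerically in a one-dimensional Gaussian toy case, where the loss-minimising noise prediction E[z | xt] is a linear function of xt and can be recovered by least squares (an illustrative sketch, not part of the original derivation):

```python
import numpy as np

# Take x_0 ~ N(0, 1) and a variance-preserving step
# x_t = sqrt(abar)*x_0 + sigma*z with abar + sigma^2 = 1.
# Then q(x_t) = N(0, 1), so the score is -x_t, and the loss-minimising
# noise prediction E[z | x_t] should equal sigma * x_t = -sigma * score(x_t).
rng = np.random.default_rng(0)
abar, sigma = 0.6, np.sqrt(0.4)
x0 = rng.standard_normal(200_000)
z = rng.standard_normal(200_000)
xt = np.sqrt(abar) * x0 + sigma * z

# Least-squares regression of z on x_t recovers the conditional mean
# (both variables are jointly Gaussian with zero mean).
slope = np.sum(xt * z) / np.sum(xt * xt)
```

The fitted slope should be close to σ, confirming that the optimal denoiser's noise prediction is −σ times the score.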
In DDPM, the sequence of numbers0=σ0<σ1<⋯<σT<1{\displaystyle 0=\sigma _{0}<\sigma _{1}<\cdots <\sigma _{T}<1}is called a (discrete time)noise schedule. In general, consider a strictly increasing functionσ{\displaystyle \sigma }of typeR→(0,1){\displaystyle \mathbb {R} \to (0,1)}, such as thesigmoid function. In that case, a noise schedule is a sequence of real numbersλ1<λ2<⋯<λT{\displaystyle \lambda _{1}<\lambda _{2}<\cdots <\lambda _{T}}. It then defines a sequence of noisesσt:=σ(λt){\displaystyle \sigma _{t}:=\sigma (\lambda _{t})}, which then derives the other quantitiesβt=1−1−σt21−σt−12{\displaystyle \beta _{t}=1-{\frac {1-\sigma _{t}^{2}}{1-\sigma _{t-1}^{2}}}}.
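A sketch of deriving the βt from a chosen noise schedule, here obtained by pushing increasing λt through a sigmoid (names are hypothetical):

```python
import numpy as np

def betas_from_sigmas(sigmas):
    """Derive per-step beta_t from a noise schedule 0 = sigma_0 < ... < sigma_T < 1,
    using beta_t = 1 - (1 - sigma_t^2) / (1 - sigma_{t-1}^2)."""
    sigmas = np.asarray(sigmas)
    alphabar = 1.0 - sigmas ** 2  # abar_t = 1 - sigma_t^2
    return 1.0 - alphabar[1:] / alphabar[:-1]

# Example: sigmas obtained by passing increasing lambdas through a sigmoid.
lambdas = np.linspace(-6.0, 3.0, 11)
sigmas = 1.0 / (1.0 + np.exp(-lambdas))
sigmas[0] = 0.0  # enforce sigma_0 = 0
betas = betas_from_sigmas(sigmas)
```

Since the σt are strictly increasing and below 1, the derived βt all lie in (0, 1).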
In order to use arbitrary noise schedules, instead of training a noise prediction modelϵθ(xt,t){\displaystyle \epsilon _{\theta }(x_{t},t)}, one trainsϵθ(xt,σt){\displaystyle \epsilon _{\theta }(x_{t},\sigma _{t})}.
Similarly, for the noise conditional score network, instead of trainingfθ(xt,t){\displaystyle f_{\theta }(x_{t},t)}, one trainsfθ(xt,σt){\displaystyle f_{\theta }(x_{t},\sigma _{t})}.
The original DDPM method for generating images is slow, since the forward diffusion process usually takesT∼1000{\displaystyle T\sim 1000}steps to make the distribution ofxT{\displaystyle x_{T}}appear close to Gaussian. However, this means the backward diffusion process also takes 1000 steps. Unlike the forward diffusion process, which can skip steps sincext|x0{\displaystyle x_{t}|x_{0}}is Gaussian for allt≥1{\displaystyle t\geq 1}, the backward diffusion process does not allow skipping steps. For example, samplingxt−2|xt−1∼N(μθ(xt−1,t−1),Σθ(xt−1,t−1)){\displaystyle x_{t-2}|x_{t-1}\sim {\mathcal {N}}(\mu _{\theta }(x_{t-1},t-1),\Sigma _{\theta }(x_{t-1},t-1))}requires the model to first samplext−1{\displaystyle x_{t-1}}. Attempting to directly samplext−2|xt{\displaystyle x_{t-2}|x_{t}}would require us to marginalize outxt−1{\displaystyle x_{t-1}}, which is generally intractable.
DDIM[24]is a method to take any model trained on DDPM loss, and use it to sample with some steps skipped, sacrificing an adjustable amount of quality. If we generalize the Markovian chain in DDPM to the non-Markovian case, DDIM corresponds to the case where the reverse process has variance equal to 0. In other words, the reverse process (and also the forward process) is deterministic. When using fewer sampling steps, DDIM outperforms DDPM.
In detail, the DDIM sampling method is as follows. Start with the forward diffusion processxt=α¯tx0+σtϵ{\displaystyle x_{t}={\sqrt {{\bar {\alpha }}_{t}}}x_{0}+\sigma _{t}\epsilon }. Then, during the backward denoising process, givenxt,ϵθ(xt,t){\displaystyle x_{t},\epsilon _{\theta }(x_{t},t)}, the original data is estimated asx0′=xt−σtϵθ(xt,t)α¯t{\displaystyle x_{0}'={\frac {x_{t}-\sigma _{t}\epsilon _{\theta }(x_{t},t)}{\sqrt {{\bar {\alpha }}_{t}}}}}then the backward diffusion process can jump to any step0≤s<t{\displaystyle 0\leq s<t}, and the next denoised sample isxs=α¯sx0′+σs2−(σs′)2ϵθ(xt,t)+σs′ϵ{\displaystyle x_{s}={\sqrt {{\bar {\alpha }}_{s}}}x_{0}'+{\sqrt {\sigma _{s}^{2}-(\sigma '_{s})^{2}}}\epsilon _{\theta }(x_{t},t)+\sigma _{s}'\epsilon }whereσs′{\displaystyle \sigma _{s}'}is an arbitrary real number within the range[0,σs]{\displaystyle [0,\sigma _{s}]}, andϵ∼N(0,I){\displaystyle \epsilon \sim {\mathcal {N}}(0,I)}is a newly sampled Gaussian noise.[16]If allσs′=0{\displaystyle \sigma _{s}'=0}, then the backward process becomes deterministic, and this special case of DDIM is also called "DDIM". The original paper noted that when the process is deterministic, samples generated with only 20 steps are already very similar at a high level to ones generated with 1000 steps.
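A minimal sketch of one DDIM jump, written with the variance-preserving convention σt² = 1 − ᾱt used above (function names are illustrative):

```python
import numpy as np

def ddim_step(x_t, eps_pred, abar_t, abar_s, sigma_prime=0.0, rng=None):
    """One DDIM jump from step t to any earlier step s, given the model's
    noise prediction eps_pred = eps_theta(x_t, t).  sigma_prime in [0, sigma_s]
    controls stochasticity; sigma_prime = 0 gives the deterministic DDIM."""
    sigma_t = np.sqrt(1.0 - abar_t)
    sigma_s = np.sqrt(1.0 - abar_s)
    x0_est = (x_t - sigma_t * eps_pred) / np.sqrt(abar_t)  # estimate of x_0
    direction = np.sqrt(sigma_s ** 2 - sigma_prime ** 2) * eps_pred
    noise = 0.0 if sigma_prime == 0.0 else sigma_prime * rng.standard_normal(x_t.shape)
    return np.sqrt(abar_s) * x0_est + direction + noise
```

A sanity check: with σ′ = 0, a point constructed exactly as √ᾱt x0 + σt ε and a perfect noise prediction ε are mapped to √ᾱs x0 + σs ε, i.e. the same x0 at the lower noise level.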
The original paper recommended defining a single "eta value"η∈[0,1]{\displaystyle \eta \in [0,1]}, such thatσs′=ησ~s{\displaystyle \sigma _{s}'=\eta {\tilde {\sigma }}_{s}}. Whenη=1{\displaystyle \eta =1}, this is the original DDPM. Whenη=0{\displaystyle \eta =0}, this is the fully deterministic DDIM. For intermediate values, the process interpolates between them.
By the equivalence, the DDIM algorithm also applies for score-based diffusion models.
Since the diffusion model is a general method for modelling probability distributions, if one wants to model a distribution over images, one can first encode the images into a lower-dimensional space by an encoder, then use a diffusion model to model the distribution over encoded images. Then to generate an image, one can sample from the diffusion model, then use a decoder to decode it into an image.[25]
The encoder-decoder pair is most often avariational autoencoder(VAE).
Later work[26]proposed various architectural improvements. For example, they proposed log-space interpolation during backward sampling. Instead of sampling fromxt−1∼N(μ~t(xt,x~0),σ~t2I){\displaystyle x_{t-1}\sim {\mathcal {N}}({\tilde {\mu }}_{t}(x_{t},{\tilde {x}}_{0}),{\tilde {\sigma }}_{t}^{2}I)}, they recommended sampling fromN(μ~t(xt,x~0),(σtvσ~t1−v)2I){\displaystyle {\mathcal {N}}({\tilde {\mu }}_{t}(x_{t},{\tilde {x}}_{0}),(\sigma _{t}^{v}{\tilde {\sigma }}_{t}^{1-v})^{2}I)}for a learned parameterv{\displaystyle v}.
In thev-predictionformalism, the noising formulaxt=α¯tx0+1−α¯tϵt{\displaystyle x_{t}={\sqrt {{\bar {\alpha }}_{t}}}x_{0}+{\sqrt {1-{\bar {\alpha }}_{t}}}\epsilon _{t}}is reparameterised by an angleϕt{\displaystyle \phi _{t}}such thatcosϕt=α¯t{\displaystyle \cos \phi _{t}={\sqrt {{\bar {\alpha }}_{t}}}}and a "velocity" defined bycosϕtϵt−sinϕtx0{\displaystyle \cos \phi _{t}\epsilon _{t}-\sin \phi _{t}x_{0}}. The network is trained to predict the velocityv^θ{\displaystyle {\hat {v}}_{\theta }}, and denoising is byxϕt−δ=cos(δ)xϕt−sin(δ)v^θ(xϕt){\displaystyle x_{\phi _{t}-\delta }=\cos(\delta )\;x_{\phi _{t}}-\sin(\delta ){\hat {v}}_{\theta }\;(x_{\phi _{t}})}.[27]This parameterization was found to improve performance, as the model can be trained to reach total noise (i.e.ϕt=90∘{\displaystyle \phi _{t}=90^{\circ }}) and then reverse it, whereas the standard parameterization never reaches total noise sinceα¯t>0{\displaystyle {\sqrt {{\bar {\alpha }}_{t}}}>0}is always true.[28]
Classifier guidance was proposed in 2021 to improve class-conditional generation by using a classifier. The original publication usedCLIP text encodersto improve text-conditional image generation.[29]
Suppose we wish to sample not from the entire distribution of images, but conditional on the image description. We don't want to sample a generic image, but an image that fits the description "black cat with red eyes". Generally, we want to sample from the distributionp(x|y){\displaystyle p(x|y)}, wherex{\displaystyle x}ranges over images, andy{\displaystyle y}ranges over classes of images (a description "black cat with red eyes" is just a very detailed class, and a class "cat" is just a very vague description).
Taking the perspective of thenoisy channel model, we can understand the process as follows: To generate an imagex{\displaystyle x}conditional on descriptiony{\displaystyle y}, we imagine that the requester really had in mind an imagex{\displaystyle x}, but the image is passed through a noisy channel and came out garbled, asy{\displaystyle y}. Image generation is then nothing but inferring whichx{\displaystyle x}the requester had in mind.
In other words, conditional image generation is simply "translating from a textual language into a pictorial language". Then, as in the noisy channel model, we use Bayes' theorem to getp(x|y)∝p(y|x)p(x){\displaystyle p(x|y)\propto p(y|x)p(x)}In other words, if we have a good model of the space of all images, and a good image-to-class translator, we get a class-to-image translator "for free". In the equation for backward diffusion, the score∇lnp(x){\displaystyle \nabla \ln p(x)}can be replaced by∇xlnp(x|y)=∇xlnp(x)⏟score+∇xlnp(y|x)⏟classifier guidance{\displaystyle \nabla _{x}\ln p(x|y)=\underbrace {\nabla _{x}\ln p(x)} _{\text{score}}+\underbrace {\nabla _{x}\ln p(y|x)} _{\text{classifier guidance}}}where∇xlnp(x){\displaystyle \nabla _{x}\ln p(x)}is the score function, trained as previously described, and∇xlnp(y|x){\displaystyle \nabla _{x}\ln p(y|x)}is found by using a differentiable image classifier.
During the diffusion process, we need to condition on the time, giving∇xtlnp(xt|y,t)=∇xtlnp(y|xt,t)+∇xtlnp(xt|t){\displaystyle \nabla _{x_{t}}\ln p(x_{t}|y,t)=\nabla _{x_{t}}\ln p(y|x_{t},t)+\nabla _{x_{t}}\ln p(x_{t}|t)}Usually, though, the classifier model does not depend on time, in which casep(y|xt,t)=p(y|xt){\displaystyle p(y|x_{t},t)=p(y|x_{t})}.
Classifier guidance is defined for the gradient of score function, thus for score-based diffusion network, but as previously noted, score-based diffusion models are equivalent to denoising models byϵθ(xt,t)=−σt∇xtlnp(xt|t){\displaystyle \epsilon _{\theta }(x_{t},t)=-\sigma _{t}\nabla _{x_{t}}\ln p(x_{t}|t)}, and similarly,ϵθ(xt,y,t)=−σt∇xtlnp(xt|y,t){\displaystyle \epsilon _{\theta }(x_{t},y,t)=-\sigma _{t}\nabla _{x_{t}}\ln p(x_{t}|y,t)}. Therefore, classifier guidance works for denoising diffusion as well, using the modified noise prediction:[29]ϵθ(xt,y,t)=ϵθ(xt,t)−σt∇xtlnp(y|xt,t)⏟classifier guidance{\displaystyle \epsilon _{\theta }(x_{t},y,t)=\epsilon _{\theta }(x_{t},t)-\underbrace {\sigma _{t}\nabla _{x_{t}}\ln p(y|x_{t},t)} _{\text{classifier guidance}}}
The classifier-guided diffusion model samples fromp(x|y){\displaystyle p(x|y)}, which is concentrated around themaximum a posteriori estimateargmaxxp(x|y){\displaystyle \arg \max _{x}p(x|y)}. If we want to force the model to move towards themaximum likelihood estimateargmaxxp(y|x){\displaystyle \arg \max _{x}p(y|x)}, we can usepγ(x|y)∝p(y|x)γp(x){\displaystyle p_{\gamma }(x|y)\propto p(y|x)^{\gamma }p(x)}whereγ>0{\displaystyle \gamma >0}is interpretable asinverse temperature. In the context of diffusion models, it is usually called theguidance scale. A highγ{\displaystyle \gamma }would force the model to sample from a distribution concentrated aroundargmaxxp(y|x){\displaystyle \arg \max _{x}p(y|x)}. This sometimes improves quality of generated images.[29]
This gives a modification to the previous equation:∇xlnpγ(x|y)=∇xlnp(x)+γ∇xlnp(y|x){\displaystyle \nabla _{x}\ln p_{\gamma }(x|y)=\nabla _{x}\ln p(x)+\gamma \nabla _{x}\ln p(y|x)}For denoising models, it corresponds to[30]ϵθ(xt,y,t)=ϵθ(xt,t)−γσt∇xtlnp(y|xt,t){\displaystyle \epsilon _{\theta }(x_{t},y,t)=\epsilon _{\theta }(x_{t},t)-\gamma \sigma _{t}\nabla _{x_{t}}\ln p(y|x_{t},t)}
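In code, the scaled guided noise prediction is a one-line modification; the classifier gradient would come from backpropagating through a differentiable classifier, and this sketch simply assumes it is given (all names hypothetical):

```python
import numpy as np

def guided_eps(eps_uncond, grad_log_classifier, sigma_t, scale=1.0):
    """Classifier-guided noise prediction with guidance scale:
    eps(x_t, y, t) = eps(x_t, t) - scale * sigma_t * grad_x log p(y | x_t, t)."""
    return eps_uncond - scale * sigma_t * grad_log_classifier
```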
If we do not have a classifierp(y|x){\displaystyle p(y|x)}, we could still extract one out of the image model itself, a technique known asclassifier-free guidance(CFG):[30]∇xlnpγ(x|y)=(1−γ)∇xlnp(x)+γ∇xlnp(x|y){\displaystyle \nabla _{x}\ln p_{\gamma }(x|y)=(1-\gamma )\nabla _{x}\ln p(x)+\gamma \nabla _{x}\ln p(x|y)}Such a model is usually trained by presenting it with both(x,y){\displaystyle (x,y)}and(x,None){\displaystyle (x,{\rm {None}})}, allowing it to model both∇xlnp(x|y){\displaystyle \nabla _{x}\ln p(x|y)}and∇xlnp(x){\displaystyle \nabla _{x}\ln p(x)}.
Note that for CFG, the diffusion model cannot be merely a generative model of the entire data distribution∇xlnp(x){\displaystyle \nabla _{x}\ln p(x)}. It must be a conditional generative model∇xlnp(x|y){\displaystyle \nabla _{x}\ln p(x|y)}. For example, in stable diffusion, the diffusion backbone takes as input a noisy imagext{\displaystyle x_{t}}, a timet{\displaystyle t}, and a conditioning vectory{\displaystyle y}(such as a vector encoding a text prompt), and produces a noise predictionϵθ(xt,y,t){\displaystyle \epsilon _{\theta }(x_{t},y,t)}.
For denoising models, it corresponds toϵθ(xt,y,t,γ)=ϵθ(xt,t)+γ(ϵθ(xt,y,t)−ϵθ(xt,t)){\displaystyle \epsilon _{\theta }(x_{t},y,t,\gamma )=\epsilon _{\theta }(x_{t},t)+\gamma (\epsilon _{\theta }(x_{t},y,t)-\epsilon _{\theta }(x_{t},t))}As sampled by DDIM, the algorithm can be written as[31]ϵuncond←ϵθ(xt,t)ϵcond←ϵθ(xt,t,c)ϵCFG←ϵuncond+γ(ϵcond−ϵuncond)x0←(xt−σtϵCFG)/1−σt2xs←1−σs2x0+σs2−(σs′)2ϵuncond+σs′ϵ{\displaystyle {\begin{aligned}\epsilon _{\text{uncond}}&\leftarrow \epsilon _{\theta }(x_{t},t)\\\epsilon _{\text{cond}}&\leftarrow \epsilon _{\theta }(x_{t},t,c)\\\epsilon _{\text{CFG}}&\leftarrow \epsilon _{\text{uncond}}+\gamma (\epsilon _{\text{cond}}-\epsilon _{\text{uncond}})\\x_{0}&\leftarrow (x_{t}-\sigma _{t}\epsilon _{\text{CFG}})/{\sqrt {1-\sigma _{t}^{2}}}\\x_{s}&\leftarrow {\sqrt {1-\sigma _{s}^{2}}}x_{0}+{\sqrt {\sigma _{s}^{2}-(\sigma _{s}')^{2}}}\epsilon _{\text{uncond}}+\sigma _{s}'\epsilon \\\end{aligned}}}A similar technique applies to language model sampling. Also, if the unconditional generationϵuncond←ϵθ(xt,t){\displaystyle \epsilon _{\text{uncond}}\leftarrow \epsilon _{\theta }(x_{t},t)}is replaced byϵneg cond←ϵθ(xt,t,c′){\displaystyle \epsilon _{\text{neg cond}}\leftarrow \epsilon _{\theta }(x_{t},t,c')}, then it results in negative prompting, which pushes the generation away fromc′{\displaystyle c'}condition.[32][33]
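The CFG-with-DDIM update above can be sketched directly; the two noise predictions would come from one conditional network called with and without the conditioning, and all names here are hypothetical:

```python
import numpy as np

def cfg_ddim_step(x_t, eps_uncond, eps_cond, sigma_t, sigma_s,
                  gamma=7.5, sigma_prime=0.0, rng=None):
    """One classifier-free-guided DDIM step.
    eps_uncond, eps_cond stand in for eps_theta(x_t, t) and eps_theta(x_t, t, c)."""
    eps_cfg = eps_uncond + gamma * (eps_cond - eps_uncond)
    x0 = (x_t - sigma_t * eps_cfg) / np.sqrt(1.0 - sigma_t ** 2)
    noise = 0.0 if sigma_prime == 0.0 else sigma_prime * rng.standard_normal(x_t.shape)
    return (np.sqrt(1.0 - sigma_s ** 2) * x0
            + np.sqrt(sigma_s ** 2 - sigma_prime ** 2) * eps_uncond
            + noise)
```

Replacing `eps_uncond` with a prediction conditioned on a negative prompt c′ would implement negative prompting as described above.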
Given a diffusion model, one may regard it either as a continuous process, and sample from it by integrating an SDE, or one can regard it as a discrete process, and sample from it by iterating the discrete steps. The choice of the "noise schedule"βt{\displaystyle \beta _{t}}can also affect the quality of samples. A noise schedule is a function that sends a natural number to a noise level:t↦βt,t∈{1,2,…},β∈(0,1){\displaystyle t\mapsto \beta _{t},\quad t\in \{1,2,\dots \},\beta \in (0,1)}A noise schedule is more often specified by a mapt↦σt{\displaystyle t\mapsto \sigma _{t}}. The two definitions are equivalent, sinceβt=1−1−σt21−σt−12{\displaystyle \beta _{t}=1-{\frac {1-\sigma _{t}^{2}}{1-\sigma _{t-1}^{2}}}}.
In the DDPM perspective, one can use the DDPM itself (with noise), or DDIM (with adjustable amount of noise). The case where one adds noise is sometimes called ancestral sampling.[34]One can interpolate between noise and no noise. The amount of noise is denotedη{\displaystyle \eta }("eta value") in the DDIM paper, withη=0{\displaystyle \eta =0}denoting no noise (as indeterministicDDIM), andη=1{\displaystyle \eta =1}denoting full noise (as in DDPM).
In the perspective of SDE, one can use any of thenumerical integration methods, such asEuler–Maruyama method,Heun's method,linear multistep methods, etc. Just as in the discrete case, one can add an adjustable amount of noise during the integration.[35]
A survey and comparison of samplers in the context of image generation is in.[36]
Notable variants include[37]Poisson flow generative model,[38]consistency model,[39]critically-damped Langevin diffusion,[40]GenPhys,[41]cold diffusion,[42]discrete diffusion,[43][44]etc.
Abstractly speaking, the idea of a diffusion model is to take an unknown probability distribution (the distribution of natural-looking images), then progressively convert it to a known probability distribution (the standard Gaussian distribution), by building an absolutely continuous probability path connecting them. The probability path is in fact defined implicitly by the score function∇lnpt{\displaystyle \nabla \ln p_{t}}.
In denoising diffusion models, the forward process adds noise, and the backward process removes noise. Both the forward and backward processes areSDEs, though the forward process is integrable in closed form, so it can be done at no computational cost. The backward process is not integrable in closed form, so it must be integrated step-by-step by standard SDE solvers, which can be very expensive. The probability path in diffusion models is defined through anItô process, and one can retrieve the deterministic process by using the probability flow ODE formulation.[1]
In flow-based diffusion models, the forward process is a deterministic flow along a time-dependent vector field, and the backward process is also a deterministic flow along the same vector field, but going backwards. Both processes are solutions toODEs. If the vector field is well-behaved, the ODE will also be well-behaved.
Given two distributionsπ0{\displaystyle \pi _{0}}andπ1{\displaystyle \pi _{1}}, a flow-based model is a time-dependent velocity fieldvt(x){\displaystyle v_{t}(x)}in[0,1]×Rd{\displaystyle [0,1]\times \mathbb {R} ^{d}}, such that if we start by sampling a pointx∼π0{\displaystyle x\sim \pi _{0}}, and let it move according to the velocity field:ddtϕt(x)=vt(ϕt(x))t∈[0,1],starting fromϕ0(x)=x{\displaystyle {\frac {d}{dt}}\phi _{t}(x)=v_{t}(\phi _{t}(x))\quad t\in [0,1],\quad {\text{starting from }}\phi _{0}(x)=x}we end up with a pointx1∼π1{\displaystyle x_{1}\sim \pi _{1}}. The solutionϕt{\displaystyle \phi _{t}}of the above ODE defines a probability pathpt=[ϕt]#π0{\displaystyle p_{t}=[\phi _{t}]_{\#}\pi _{0}}by thepushforward measureoperator. In particular,[ϕ1]#π0=π1{\displaystyle [\phi _{1}]_{\#}\pi _{0}=\pi _{1}}.
The probability path and the velocity field also satisfy thecontinuity equation, in the sense of probability distribution:∂tpt+∇⋅(vtpt)=0{\displaystyle \partial _{t}p_{t}+\nabla \cdot (v_{t}p_{t})=0}To construct a probability path, we start by constructing a conditional probability pathpt(x|z){\displaystyle p_{t}(x\vert z)}and the corresponding conditional velocity fieldvt(x|z){\displaystyle v_{t}(x\vert z)}on some conditional distributionq(z){\displaystyle q(z)}. A natural choice is the Gaussian conditional probability path:pt(x|z)=N(mt(z),ζt2I){\displaystyle p_{t}(x\vert z)={\mathcal {N}}\left(m_{t}(z),\zeta _{t}^{2}I\right)}The conditional velocity field corresponding to the geodesic path between conditional Gaussian distributions isvt(x|z)=ζt′ζt(x−mt(z))+mt′(z){\displaystyle v_{t}(x\vert z)={\frac {\zeta _{t}'}{\zeta _{t}}}(x-m_{t}(z))+m_{t}'(z)}The probability path and velocity field are then computed by marginalizing
pt(x)=∫pt(x|z)q(z)dzandvt(x)=Eq(z)[vt(x|z)pt(x|z)pt(x)]{\displaystyle p_{t}(x)=\int p_{t}(x\vert z)q(z)dz\qquad {\text{ and }}\qquad v_{t}(x)=\mathbb {E} _{q(z)}\left[{\frac {v_{t}(x\vert z)p_{t}(x\vert z)}{p_{t}(x)}}\right]}
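As a concrete illustration, consider the one-dimensional flow that transports π0 = N(0, 1) to π1 = N(2, 0.25) along straight lines via the monotone map T(x) = 0.5x + 2 (a hypothetical closed-form case, not from the original text); the velocity field is available analytically, and forward Euler integration of the ODE recovers π1:

```python
import numpy as np

def v(x, t):
    """Velocity field of the straight-line flow phi_t(x0) = (1-t)*x0 + t*T(x0)
    pushing N(0,1) to N(2, 0.25) via T(x0) = 0.5*x0 + 2."""
    x0 = (x - 2.0 * t) / (1.0 - 0.5 * t)  # invert phi_t to recover the start point
    return (0.5 * x0 + 2.0) - x0          # velocity T(x0) - x0, constant along paths

rng = np.random.default_rng(0)
x = rng.standard_normal(5000)  # samples from pi_0 = N(0, 1)
n_steps = 200
dt = 1.0 / n_steps
for i in range(n_steps):
    x = x + v(x, i * dt) * dt  # forward Euler along the ODE
```

Because the paths are straight with constant speed, the Euler iterates stay exactly on the true trajectories, and the final samples follow N(2, 0.25).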
The idea ofoptimal transport flow[45]is to construct a probability path minimizing theWasserstein metric. The distribution on which we condition is an approximation of the optimal transport plan betweenπ0{\displaystyle \pi _{0}}andπ1{\displaystyle \pi _{1}}:z=(x0,x1){\displaystyle z=(x_{0},x_{1})}andq(z)=Γ(π0,π1){\displaystyle q(z)=\Gamma (\pi _{0},\pi _{1})}, whereΓ{\displaystyle \Gamma }is the optimal transport plan, which can be approximated bymini-batch optimal transport. If the batch size is not large, then the transport it computes can be very far from the true optimal transport.
The idea ofrectified flow[46][47]is to learn a flow model such that the velocity is nearly constant along each flow path. This is beneficial, because we can integrate along such a vector field with very few steps. For example, if an ODEϕt˙(x)=vt(ϕt(x)){\displaystyle {\dot {\phi _{t}}}(x)=v_{t}(\phi _{t}(x))}follows perfectly straight paths, it simplifies toϕt(x)=x0+t⋅v0(x0){\displaystyle \phi _{t}(x)=x_{0}+t\cdot v_{0}(x_{0})}, allowing for exact solutions in one step. In practice, we cannot reach such perfection, but when the flow field is nearly so, we can take a few large steps instead of many little steps.
The general idea is to start with two distributionsπ0{\displaystyle \pi _{0}}andπ1{\displaystyle \pi _{1}}, then construct a flow fieldϕ0={ϕt:t∈[0,1]}{\displaystyle \phi ^{0}=\{\phi _{t}:t\in [0,1]\}}from it, then repeatedly apply a "reflow" operation to obtain successive flow fieldsϕ1,ϕ2,…{\displaystyle \phi ^{1},\phi ^{2},\dots }, each straighter than the previous one. When the flow field is straight enough for the application, we stop.
Generally, for any time-differentiable processϕt{\displaystyle \phi _{t}},vt{\displaystyle v_{t}}can be estimated by solving:minθ∫01Ex∼pt[‖vt(x,θ)−vt(x)‖2]dt.{\displaystyle \min _{\theta }\int _{0}^{1}\mathbb {E} _{x\sim p_{t}}\left[\lVert {v_{t}(x,\theta )-v_{t}(x)}\rVert ^{2}\right]\,\mathrm {d} t.}
In rectified flow, by injecting strong priors that intermediate trajectories are straight, it can achieve both theoretical relevance for optimal transport and computational efficiency, as ODEs with straight paths can be simulated precisely without time discretization.
Specifically, rectified flow seeks to match an ODE with the marginal distributions of thelinear interpolationbetween points from distributionsπ0{\displaystyle \pi _{0}}andπ1{\displaystyle \pi _{1}}. Given observationsx0∼π0{\displaystyle x_{0}\sim \pi _{0}}andx1∼π1{\displaystyle x_{1}\sim \pi _{1}}, the canonical linear interpolationxt=tx1+(1−t)x0,t∈[0,1]{\displaystyle x_{t}=tx_{1}+(1-t)x_{0},t\in [0,1]}yields a trivial casex˙t=x1−x0{\displaystyle {\dot {x}}_{t}=x_{1}-x_{0}}, which cannot be causally simulated withoutx1{\displaystyle x_{1}}. To address this,xt{\displaystyle x_{t}}is "projected" into a space of causally simulatable ODEs, by minimizing the least squares loss with respect to the directionx1−x0{\displaystyle x_{1}-x_{0}}:minθ∫01Eπ0,π1,pt[‖(x1−x0)−vt(xt)‖2]dt.{\displaystyle \min _{\theta }\int _{0}^{1}\mathbb {E} _{\pi _{0},\pi _{1},p_{t}}\left[\lVert {(x_{1}-x_{0})-v_{t}(x_{t})}\rVert ^{2}\right]\,\mathrm {d} t.}
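A Monte-Carlo sketch of this objective for one-dimensional Gaussians under the independent coupling; the trainable velocity field is replaced by a fixed constant field purely to show how the loss is evaluated (all names are illustrative):

```python
import numpy as np

def flow_matching_loss(v, x0, x1, rng):
    """Monte-Carlo estimate of the rectified-flow objective
    E[ ||(x1 - x0) - v(x_t, t)||^2 ]  with  x_t = t*x1 + (1-t)*x0."""
    t = rng.uniform(size=x0.shape)
    xt = t * x1 + (1.0 - t) * x0
    return np.mean(((x1 - x0) - v(xt, t)) ** 2)

rng = np.random.default_rng(0)
x0 = rng.standard_normal(100_000)          # pi_0 = N(0, 1)
x1 = 3.0 + rng.standard_normal(100_000)    # pi_1 = N(3, 1), independent coupling
# Constant field v = 3 matches the mean of x1 - x0, so the loss should be
# close to Var(x1 - x0) = 2.
loss = flow_matching_loss(lambda x, t: np.full_like(x, 3.0), x0, x1, rng)
```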
The data pair(x0,x1){\displaystyle (x_{0},x_{1})}can be any coupling ofπ0{\displaystyle \pi _{0}}andπ1{\displaystyle \pi _{1}}, typically independent (i.e.,(x0,x1)∼π0×π1{\displaystyle (x_{0},x_{1})\sim \pi _{0}\times \pi _{1}}) obtained by randomly combining observations fromπ0{\displaystyle \pi _{0}}andπ1{\displaystyle \pi _{1}}. This process ensures that the trajectories closely mirror the density map ofxt{\displaystyle x_{t}}trajectories butrerouteat intersections to ensure causality. This rectifying process is also known as Flow Matching,[48]Stochastic Interpolation,[49]and alpha-(de)blending.[50]
A distinctive aspect of rectified flow is its capability for "reflow", which straightens the trajectory of ODE paths. Denote the rectified flowϕ0={ϕt:t∈[0,1]}{\displaystyle \phi ^{0}=\{\phi _{t}:t\in [0,1]\}}induced from(x0,x1){\displaystyle (x_{0},x_{1})}asϕ0=Rectflow((x0,x1)){\displaystyle \phi ^{0}={\mathsf {Rectflow}}((x_{0},x_{1}))}. Recursively applying thisRectflow(⋅){\displaystyle {\mathsf {Rectflow}}(\cdot )}operator generates a series of rectified flowsϕk+1=Rectflow((ϕ0k(x0),ϕ1k(x1))){\displaystyle \phi ^{k+1}={\mathsf {Rectflow}}((\phi _{0}^{k}(x_{0}),\phi _{1}^{k}(x_{1})))}. This "reflow" process not only reduces transport costs but also straightens the paths of rectified flows, makingϕk{\displaystyle \phi ^{k}}paths straighter with increasingk{\displaystyle k}.
Rectified flow includes a nonlinear extension where linear interpolationxt{\displaystyle x_{t}}is replaced with any time-differentiable curve that connectsx0{\displaystyle x_{0}}andx1{\displaystyle x_{1}}, given byxt=αtx1+βtx0{\displaystyle x_{t}=\alpha _{t}x_{1}+\beta _{t}x_{0}}. This framework encompasses DDIM and probability flow ODEs as special cases, with particular choices ofαt{\displaystyle \alpha _{t}}andβt{\displaystyle \beta _{t}}. However, in the case where the path ofxt{\displaystyle x_{t}}is not straight, the reflow process no longer ensures a reduction in convex transport costs, and no longer straightens the paths ofϕt{\displaystyle \phi _{t}}.[46]
See[51]for a tutorial on flow matching, with animations.
For generating images by DDPM, we need a neural network that takes a timet{\displaystyle t}and a noisy imagext{\displaystyle x_{t}}, and predicts a noiseϵθ(xt,t){\displaystyle \epsilon _{\theta }(x_{t},t)}from it. Since the predicted noise determines the denoised image (by subtracting it fromxt{\displaystyle x_{t}}and rescaling), denoising architectures tend to work well. For example, theU-Net, which was found to be good for denoising images, is often used for denoising diffusion models that generate images.[52]
For DDPM, the underlying architecture ("backbone") does not have to be a U-Net. It just has to predict the noise somehow. For example, the diffusion transformer (DiT) uses aTransformerto predict the mean and diagonal covariance of the noise, given the textual conditioning and the partially denoised image. It is the same as standard U-Net-based denoising diffusion model, with a Transformer replacing the U-Net.[53]A mixture-of-experts Transformer can also be applied.[54]
DDPM can be used to model general data distributions, not just natural-looking images. For example, Human Motion Diffusion[55]models human motion trajectory by DDPM. Each human motion trajectory is a sequence of poses, represented by either joint rotations or positions. It uses aTransformernetwork to generate a less noisy trajectory out of a noisy one.
The base diffusion model can only generate unconditionally from the whole distribution. For example, a diffusion model learned onImageNetwould generate images that look like a random image from ImageNet. To generate images from just one category, one would need to impose the condition, and then sample from the conditional distribution. Whatever condition one wants to impose, one needs to first convert the conditioning into a vector of floating point numbers, then feed it into the underlying diffusion model neural network. However, one has freedom in choosing how to convert the conditioning into a vector.
Stable Diffusion, for example, imposes conditioning in the form ofcross-attention mechanism, where the query is an intermediate representation of the image in the U-Net, and both key and value are the conditioning vectors. The conditioning can be selectively applied to only parts of an image, and new kinds of conditionings can be finetuned upon the base model, as used in ControlNet.[56]
As a particularly simple example, considerimage inpainting. The conditions arex~{\displaystyle {\tilde {x}}}, the reference image, andm{\displaystyle m}, the inpaintingmask. The conditioning is imposed at each step of the backward diffusion process, by first samplingx~t∼N(α¯tx~,σt2I){\displaystyle {\tilde {x}}_{t}\sim N\left({\sqrt {{\bar {\alpha }}_{t}}}{\tilde {x}},\sigma _{t}^{2}I\right)}, a noisy version ofx~{\displaystyle {\tilde {x}}}, then replacingxt{\displaystyle x_{t}}with(1−m)⊙xt+m⊙x~t{\displaystyle (1-m)\odot x_{t}+m\odot {\tilde {x}}_{t}}, where⊙{\displaystyle \odot }meanselementwise multiplication.[57]Another application of cross-attention mechanism is prompt-to-prompt image editing.[58]
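The masked-replacement step for inpainting can be sketched as follows, using the convention xt|x0 ∼ N(√ᾱt x0, σt²I) with σt² = 1 − ᾱt (function and variable names are illustrative):

```python
import numpy as np

def inpaint_condition(x_t, x_ref, mask, abar_t, rng):
    """Impose inpainting conditioning at one backward step: noise the reference
    image to the current noise level, then overwrite the known region.
    mask == 1 keeps the (noised) reference; mask == 0 keeps the model's sample."""
    sigma_t = np.sqrt(1.0 - abar_t)
    x_ref_t = np.sqrt(abar_t) * x_ref + sigma_t * rng.standard_normal(x_ref.shape)
    return (1.0 - mask) * x_t + mask * x_ref_t
```

This is applied after every denoising step, so the generated region stays consistent with the fixed region at each noise level.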
Conditioning is not limited to just generating images from a specific category, or according to a specific caption (as in text-to-image). For example,[55]demonstrated generating human motion, conditioned on an audio clip of human walking (allowing syncing motion to a soundtrack), or video of human running, or a text description of human motion, etc. For how conditional diffusion models are mathematically formulated, see a methodological summary in.[59]
As generating an image takes a long time, one can try to generate a small image by a base diffusion model, then upscale it by other models. Upscaling can be done byGAN,[60]Transformer,[61]or signal processing methods likeLanczos resampling.
Diffusion models themselves can be used to perform upscaling. A cascading diffusion model stacks multiple diffusion models one after another, in the style ofProgressive GAN. The lowest level is a standard diffusion model that generates a 32×32 image; the image is then upscaled by a diffusion model specifically trained for upscaling, and the process repeats.[52]
In more detail, the diffusion upscaler is conditioned on the low-resolution image; during training, this conditioning image is itself corrupted with noise ("conditioning augmentation"), which improves the robustness of the upscaler.[52]
This section collects some notable diffusion models, and briefly describes their architecture.
The DALL-E series by OpenAI are text-conditional diffusion models of images.
The first version of DALL-E (2021) is not actually a diffusion model. Instead, it uses a Transformer architecture that autoregressively generates a sequence of tokens, which is then converted to an image by the decoder of a discrete VAE. Released with DALL-E was the CLIP classifier, which was used by DALL-E to rank generated images according to how close the image fits the text.
GLIDE (2022-03)[62]is a 3.5-billion-parameter diffusion model, and a small version was released publicly.[5]Soon after, DALL-E 2 was released (2022-04).[63]DALL-E 2 is a 3.5-billion-parameter cascaded diffusion model that generates images from text by "inverting the CLIP image encoder", a technique they termed "unCLIP".
The unCLIP method contains 4 models: a CLIP image encoder, a CLIP text encoder, an image decoder, and a "prior" model (which can be a diffusion model, or an autoregressive model). During training, the prior model is trained to convert CLIP image encodings to CLIP text encodings. The image decoder is trained to convert CLIP image encodings back to images. During inference, a text is converted by the CLIP text encoder to a vector, then it is converted by the prior model to an image encoding, then it is converted by the image decoder to an image.
Sora(2024-02) is a diffusion Transformer model (DiT).
Stable Diffusion(2022-08), released by Stability AI, consists of a denoising latent diffusion model (860 million parameters), a VAE, and a text encoder. The denoising network is a U-Net, with cross-attention blocks to allow for conditional image generation.[64][25]
Stable Diffusion 3 (2024-03)[65]changed the latent diffusion model from the UNet to a Transformer model, and so it is a DiT. It uses rectified flow.
Stable Video 4D (2024-07)[66]is a latent diffusion model for videos of 3D objects.
Imagen (2022)[67][68]uses aT5-XXL language modelto encode the input text into an embedding vector. It is a cascaded diffusion model with three sub-models. The first step denoises a white noise to a 64×64 image, conditional on the embedding vector of the text. This model has 2B parameters. The second step upscales the image by 64×64→256×256, conditional on embedding. This model has 650M parameters. The third step is similar, upscaling by 256×256→1024×1024. This model has 400M parameters. The three denoising networks are all U-Nets.
Muse (2023-01)[69]is not a diffusion model, but an encoder-only Transformer that is trained to predict masked image tokens from unmasked image tokens.
Imagen 2 (2023-12) is also diffusion-based; it can generate images based on a prompt that mixes images and text, but little further information is available.[70] Imagen 3 (2024-05) is likewise diffusion-based, with little further information available.[71]
Veo (2024) generates videos by latent diffusion. The diffusion is conditioned on a vector that encodes both a text prompt and an image prompt.[72]
Make-A-Video (2022) is a text-to-video diffusion model.[73][74]
CM3leon (2023) is not a diffusion model, but an autoregressive causally masked Transformer, with mostly the same architecture as LLaMa-2.[75][76]
Transfusion (2024) is a Transformer that combines autoregressive text generation and denoising diffusion. Specifically, it generates text autoregressively (with causal masking), and generates images by denoising multiple times over image tokens (with all-to-all attention).[77]
Movie Gen (2024) is a series of Diffusion Transformers operating on latent space and by flow matching.[78]
|
https://en.wikipedia.org/wiki/Diffusion_model
|
In abstract algebra, a skew lattice is an algebraic structure that is a non-commutative generalization of a lattice. While the term skew lattice can be used to refer to any non-commutative generalization of a lattice, since 1989 it has been used primarily as follows.
A skew lattice is a set S equipped with two associative, idempotent binary operations ∧ and ∨, called meet and join, that validate the following dual pair of absorption laws:

x ∧ (x ∨ y) = x = (y ∨ x) ∧ x and x ∨ (x ∧ y) = x = (y ∧ x) ∨ x.
Given that ∨ and ∧ are associative and idempotent, these identities are equivalent to validating the following dual pair of statements: x ∨ y = x if and only if x ∧ y = y, and x ∨ y = y if and only if x ∧ y = x.
For over 60 years, noncommutative variations of lattices have been studied with differing motivations. For some the motivation has been an interest in the conceptual boundaries of lattice theory; for others it was a search for noncommutative forms of logic and Boolean algebra; and for others it has been the behavior of idempotents in rings. A noncommutative lattice, generally speaking, is an algebra (S; ∧, ∨) where ∧ and ∨ are associative, idempotent binary operations connected by absorption identities guaranteeing that ∧ in some way dualizes ∨. The precise identities chosen depend upon the underlying motivation, with differing choices producing distinct varieties of algebras.
Pascual Jordan, motivated by questions in quantum logic, initiated a study of noncommutative lattices in his 1949 paper, Über Nichtkommutative Verbände,[2] choosing the absorption identities
He referred to those algebras satisfying them asSchrägverbände. By varying or augmenting these identities, Jordan and others obtained a number of varieties of noncommutative lattices.
Beginning with Jonathan Leech's 1989 paper, Skew lattices in rings,[1] skew lattices as defined above have been the primary objects of study. This was aided by previous results about bands, especially in establishing many of the basic properties.
Natural partial order and natural quasiorder
In a skew lattice S, the natural partial order is defined by y ≤ x if x ∧ y = y = y ∧ x, or dually, x ∨ y = x = y ∨ x. The natural preorder on S is given by y ⪯ x if y ∧ x ∧ y = y or dually x ∨ y ∨ x = x. While ≤ and ⪯ agree on lattices, ≤ properly refines ⪯ in the noncommutative case. The induced natural equivalence D is defined by x D y if x ⪯ y ⪯ x, that is, x ∧ y ∧ x = x and y ∧ x ∧ y = y or dually, x ∨ y ∨ x = x and y ∨ x ∨ y = y. The blocks of the partition S/D are
lattice ordered by A > B if and only if a ∈ A and b ∈ B exist such that a > b. This permits us to draw Hasse diagrams of skew lattices such as the following pair:
E.g., in the diagram on the left above, the fact that a and b are D-related is expressed by the dashed segment. The slanted lines reveal the natural partial order between elements of the distinct D-classes. The elements 1, c and 0 form singleton D-classes.
Rectangular Skew Lattices
Skew lattices consisting of a single D-class are called rectangular. They are characterized by the equivalent identities x ∧ y ∧ x = x, y ∨ x ∨ y = y and x ∨ y = y ∧ x. Rectangular skew lattices are isomorphic to skew lattices having the following construction (and conversely): given nonempty sets L and R, on L × R define (x, y) ∨ (z, w) = (z, y) and (x, y) ∧ (z, w) = (x, w). The D-class partition of a skew lattice S, as indicated in the above diagrams, is the unique partition of S into its maximal rectangular subalgebras. Moreover, D is a congruence with the induced quotient algebra S/D being the maximal lattice image of S, thus making every skew lattice S a lattice of rectangular subalgebras. This is the Clifford–McLean theorem for skew lattices, first given for bands separately by Clifford and McLean. It is also known as the first decomposition theorem for skew lattices.
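The L × R construction is small enough to check mechanically. The following brute-force sketch verifies the rectangularity identities and associativity on a 2 × 3 example (the element names are arbitrary):

```python
from itertools import product

L, R = ["l0", "l1"], ["r0", "r1", "r2"]
S = list(product(L, R))

def join(a, b):   # (x, y) v (z, w) = (z, y)
    return (b[0], a[1])

def meet(a, b):   # (x, y) ^ (z, w) = (x, w)
    return (a[0], b[1])

for x, y in product(S, repeat=2):
    assert meet(meet(x, y), x) == x            # x ^ y ^ x = x
    assert join(y, join(x, y)) == y            # y v x v y = y
    assert join(x, y) == meet(y, x)            # x v y = y ^ x
for x, y, z in product(S, repeat=3):
    assert meet(meet(x, y), z) == meet(x, meet(y, z))   # associativity of ^
    assert join(join(x, y), z) == join(x, join(y, z))   # associativity of v
print("all rectangular identities hold on", len(S), "elements")
```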
Right (left) handed skew lattices and the Kimura factorization
A skew lattice is right-handed if it satisfies the identity x ∧ y ∧ x = y ∧ x or dually, x ∨ y ∨ x = x ∨ y.
These identities essentially assert that x ∧ y = y and x ∨ y = x in each D-class. Every skew lattice S has a unique maximal right-handed image S/L, where the congruence L is defined by x L y if both x ∧ y = x and y ∧ x = y (or dually, x ∨ y = y and y ∨ x = x). Likewise a skew lattice is left-handed if x ∧ y = x and x ∨ y = y in each D-class. Again the maximal left-handed image of a skew lattice S is the image S/R, where the congruence R is defined in dual fashion to L. Many examples of skew lattices are either right- or left-handed. In the lattice of congruences, R ∨ L = D and R ∩ L is the identity congruence Δ. The induced epimorphism S → S/D factors through both induced epimorphisms S → S/L and S → S/R. Setting T = S/D, the homomorphism k : S → S/L × S/R defined by k(x) = (L_x, R_x) induces an isomorphism k* : S ≅ S/L ×_T S/R. This is the Kimura factorization of S into a fibred product of its maximal right- and left-handed images.
Like the Clifford–McLean theorem, Kimura factorization (or the second decomposition theorem for skew lattices) was first given for regular bands (bands that satisfy the middle absorption identity xyxzx = xyzx). Indeed, both ∧ and ∨ are regular band operations. The above symbols D, R and L come, of course, from basic semigroup theory.[1][3][4][5][6][7][8][9]
Skew lattices form a variety. Rectangular skew lattices, left-handed and right-handed skew lattices all form subvarieties that are central to the basic structure theory of skew lattices. Here are several more.
Symmetric skew lattices
A skew lattice S is symmetric if for any x, y ∈ S, x ∧ y = y ∧ x if and only if x ∨ y = y ∨ x. Occurrences of commutation are thus unambiguous for such skew lattices, with subsets of pairwise commuting elements generating commutative subalgebras, i.e., sublattices. (This is not true for skew lattices in general.) Equational bases for this subvariety, first given by Spinks,[10] are x ∨ y ∨ (x ∧ y) = (y ∧ x) ∨ y ∨ x and x ∧ y ∧ (x ∨ y) = (y ∨ x) ∧ y ∧ x.
A lattice section of a skew lattice S is a sublattice T of S meeting each D-class of S at a single element. T is thus an internal copy of the lattice S/D, with the composition T ⊆ S → S/D being an isomorphism. All symmetric skew lattices for which |S/D| ≤ ℵ₀ admit a lattice section.[9] Symmetric or not, having a lattice section T guarantees that S also has internal copies of S/L and S/R, given respectively by T[R] = ⋃_{t∈T} R_t and T[L] = ⋃_{t∈T} L_t, where R_t and L_t are the R and L congruence classes of t in T. Thus T[R] ⊆ S → S/L and T[L] ⊆ S → S/R are isomorphisms.[7] This leads to a commuting diagram of embeddings dualizing the preceding Kimura diagram.
Cancellative skew lattices
A skew lattice is cancellative if x ∨ y = x ∨ z and x ∧ y = x ∧ z imply y = z, and likewise x ∨ z = y ∨ z and x ∧ z = y ∧ z imply x = y. Cancellative skew lattices are symmetric and can be shown to form a variety. Unlike lattices, cancellative skew lattices need not be distributive, nor distributive skew lattices cancellative.
Distributive skew lattices
Distributive skew lattices are determined by the identities:
x ∧ (y ∨ z) ∧ x = (x ∧ y ∧ x) ∨ (x ∧ z ∧ x)   (D1)
x ∨ (y ∧ z) ∨ x = (x ∨ y ∨ x) ∧ (x ∨ z ∨ x)   (D'1)
Unlike lattices, (D1) and (D'1) are not equivalent in general for skew lattices, but they are for symmetric skew lattices.[8][11][12] The condition (D1) can be strengthened to
x ∧ (y ∨ z) ∧ w = (x ∧ y ∧ w) ∨ (x ∧ z ∧ w)   (D2)
in which case (D'1) is a consequence. A skew lattice S satisfies both (D2) and its dual, x ∨ (y ∧ z) ∨ w = (x ∨ y ∨ w) ∧ (x ∨ z ∨ w), if and only if it factors as the product of a distributive lattice and a rectangular skew lattice. In this latter case (D2) can be strengthened to
x ∧ (y ∨ z) = (x ∧ y) ∨ (x ∧ z) and (y ∨ z) ∧ w = (y ∧ w) ∨ (z ∧ w).   (D3)
On its own, (D3) is equivalent to (D2) when symmetry is added.[1] We thus have six subvarieties of skew lattices determined respectively by (D1), (D2), (D3) and their duals.
Normal skew lattices
As seen above, ∧ and ∨ satisfy the identity xyxzx = xyzx. Bands satisfying the stronger identity xyzx = xzyx are called normal. A skew lattice is normal if it satisfies
x ∧ y ∧ z ∧ x = x ∧ z ∧ y ∧ x.   (N)
For each element a in a normal skew lattice S, the set a ∧ S ∧ a, defined by {a ∧ x ∧ a : x ∈ S} or equivalently {x ∈ S : x ≤ a}, is a sublattice of S, and conversely. (Thus normal skew lattices have also been called local lattices.) When both ∧ and ∨ are normal, S splits isomorphically into a product T × D of a lattice T and a rectangular skew lattice D, and conversely. Thus both normal skew lattices and split skew lattices form varieties. Returning to distributivity, (D2) = (D1) + (N), so that (D2) characterizes the variety of distributive, normal skew lattices, and (D3) characterizes the variety of symmetric, distributive, normal skew lattices.
Categorical skew lattices
A skew lattice is categorical if nonempty composites of coset bijections are coset bijections. Categorical skew lattices form a variety. Skew lattices in rings and normal skew lattices are examples
of algebras in this variety.[3] Let a > b > c with a ∈ A, b ∈ B and c ∈ C; let φ be the coset bijection from A to B taking a to b, ψ the coset bijection from B to C taking b to c, and finally χ the coset bijection from A to C taking a to c. A skew lattice S is categorical if one always has the equality ψ ∘ φ = χ, i.e., if the composite partial bijection ψ ∘ φ, if nonempty, is a coset bijection from a C-coset of A to an A-coset of C. That is, (A ∧ b ∧ A) ∩ (C ∨ b ∨ C) = (C ∨ a ∨ C) ∧ b ∧ (C ∨ a ∨ C) = (A ∧ c ∧ A) ∨ b ∨ (A ∧ c ∧ A).
All distributive skew lattices are categorical, though symmetric skew lattices might not be; in a sense this reveals the independence between the properties of symmetry and distributivity.[1][3][5][8][9][10][12][13]
A zero element in a skew lattice S is an element 0 of S such that for all x ∈ S, 0 ∧ x = 0 = x ∧ 0 or, dually, 0 ∨ x = x = x ∨ 0.   (0)
A Boolean skew lattice is a symmetric, distributive, normal skew lattice with 0, (S; ∨, ∧, 0), such that a ∧ S ∧ a is a Boolean lattice for each a ∈ S. Given such a skew lattice S, a difference operator \ is defined by x \ y = x − x ∧ y ∧ x, where the latter is evaluated in the Boolean lattice x ∧ S ∧ x.[1] In the presence of (D3) and (0), \ is characterized by the identities:
y ∧ (x \ y) = 0 = (x \ y) ∧ y and (x ∧ y ∧ x) ∨ (x \ y) = x = (x \ y) ∨ (x ∧ y ∧ x).   (S B)
One thus has a variety of skew Boolean algebras (S; ∨, ∧, \, 0) characterized by identities (D3), (0) and (S B). A primitive skew Boolean algebra consists of 0 and a single non-0 D-class. Thus it is the result of adjoining a 0 to a rectangular skew lattice D via (0), with x \ y = x if y = 0 and x \ y = 0 otherwise. Every skew Boolean algebra is a subdirect product of primitive algebras. Skew Boolean algebras play an important role in the study of discriminator varieties and other generalizations in universal algebra of Boolean behavior.[14][15][16][17][18][19][20][21][22][23][24]
Let A be a ring and let E(A) denote the set of all idempotents in A. For all x, y ∈ A set x ∧ y = xy and x ∨ y = x + y − xy.
Clearly ∧ but also ∨ is associative. If a subset S ⊆ E(A) is closed under ∧ and ∨, then (S, ∧, ∨) is a distributive, cancellative skew lattice. To find such skew lattices in E(A) one looks at bands in E(A), especially the ones that are maximal with respect to some constraint. In fact, every multiplicative band in E(A) that is maximal with respect to being right regular (satisfying xyx = yx) is also closed under ∨ and so forms a right-handed skew lattice. In general, every right regular band in E(A) generates a right-handed skew lattice in E(A). Dual remarks also hold for left regular bands (bands satisfying the identity xyx = xy) in E(A). Maximal regular bands need not be closed under ∨ as defined; counterexamples are easily found using multiplicative rectangular bands. These cases are closed, however, under the cubic variant of ∨ defined by x ∇ y = x + y + yx − xyx − yxy, since in these cases x ∇ y reduces to yx to give the dual rectangular band. By replacing the condition of regularity by normality (xyzw = xzyw), every maximal normal multiplicative band S in E(A) is also closed under ∇, and (S; ∧, ∇, \, 0), where x \ y = x − xyx, forms a Boolean skew lattice. When E(A) itself is closed under multiplication, then it is a normal band and thus forms a Boolean skew lattice.
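As a concrete instance, the pair of idempotent 2 × 2 matrices below (a standard textbook-style example, not taken from the cited sources) is closed under x ∧ y = xy and x ∨ y = x + y − xy, and the absorption laws can be checked directly:

```python
import numpy as np

# Two idempotent 2x2 integer matrices; {e, f} is closed under both operations.
e = np.array([[1, 0], [0, 0]])
f = np.array([[1, 0], [1, 0]])

def meet(x, y):
    return x @ y                    # x ^ y = xy

def join(x, y):
    return x + y - x @ y            # x v y = x + y - xy

for m in (e, f):
    assert (m @ m == m).all()       # idempotency in the ring

for x, y in [(e, e), (e, f), (f, e), (f, f)]:
    assert (meet(x, join(x, y)) == x).all()      # x ^ (x v y) = x
    assert (meet(join(y, x), x) == x).all()      # (y v x) ^ x = x
    assert (join(x, meet(x, y)) == x).all()      # x v (x ^ y) = x
    assert (join(meet(y, x), x) == x).all()      # (y ^ x) v x = x
print("absorption laws hold on {e, f}")
```

Here ef = e and fe = f, so the two elements form a single rectangular D-class (a left-handed one, since x ∧ y = x on this pair).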
In fact, any skew Boolean algebra can be embedded into such an algebra.[25] When A has a multiplicative identity 1, the condition that E(A) is multiplicatively closed is well known to imply that E(A) forms a Boolean algebra. Skew lattices in rings continue to be a good source of examples and motivation.[22][26][27][28][29]
Skew lattices consisting of exactly two D-classes are called primitive skew lattices. Given such a skew lattice S with D-classes A > B in S/D, then for any a ∈ A and b ∈ B, the subsets
A ∧ b ∧ A = {u ∧ b ∧ u : u ∈ A} ⊆ B and B ∨ a ∨ B = {v ∨ a ∨ v : v ∈ B} ⊆ A
are called, respectively, cosets of A in B and cosets of B in A. These cosets partition B and A, with b ∈ A ∧ b ∧ A and a ∈ B ∨ a ∨ B. Cosets are always rectangular subalgebras in their D-classes. What is more, the partial order ≥ induces a coset bijection φ : B ∨ a ∨ B → A ∧ b ∧ A defined by:
φ(x) = y iff x > y, for x ∈ B ∨ a ∨ B and y ∈ A ∧ b ∧ A.
Collectively, coset bijections describe ≥ between the subsets A and B. They also determine ∨ and ∧ for pairs of elements from distinct D-classes. Indeed, given a ∈ A and b ∈ B, let φ be the coset bijection between the cosets B ∨ a ∨ B in A and A ∧ b ∧ A in B. Then:
a ∨ b = a ∨ φ⁻¹(b), b ∨ a = φ⁻¹(b) ∨ a and a ∧ b = φ(a) ∧ b, b ∧ a = b ∧ φ(a).
In general, given a, c ∈ A and b, d ∈ B with a > b and c > d, then a and c belong to a common B-coset in A and b and d belong to a common A-coset in B if and only if a > b // c > d. Thus each coset bijection is, in some sense, a maximal collection of mutually parallel pairs a > b.
Every primitive skew lattice S factors as the fibred product of its maximal left- and right-handed primitive images, S/R ×_2 S/L. Right-handed primitive skew lattices are constructed as follows. Let A = ∪_i A_i and B = ∪_j B_j be partitions of disjoint nonempty sets A and B, where all A_i and B_j share a common size. For each pair i, j pick a fixed bijection φ_{i,j} from A_i onto B_j. On A and B separately set x ∧ y = y and x ∨ y = x; but given a ∈ A and b ∈ B, set
a ∨ b = a, b ∨ a = a′, a ∧ b = b and b ∧ a = b′
where φ_{i,j}(a′) = b and φ_{i,j}(a) = b′, with a′ belonging to the cell A_i of a and b′ belonging to the cell B_j of b. The various φ_{i,j} are the coset bijections. This is illustrated in the following partial Hasse diagram where |A_i| = |B_j| = 2 and the arrows indicate the φ_{i,j}-outputs and ≥ from A and B.
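The construction can be checked by brute force on the smallest nontrivial case, one cell on each side with |A_i| = |B_j| = 2. The element names and the particular bijection below are arbitrary choices made only for the sketch:

```python
from itertools import product

A, B = ["a0", "a1"], ["b0", "b1"]
phi = {"a0": "b0", "a1": "b1"}          # the fixed coset bijection phi_{i,j}
phi_inv = {v: k for k, v in phi.items()}
S = A + B

def join(x, y):
    if (x in A) == (y in A):
        return x                        # within a D-class: x v y = x
    if x in A:
        return x                        # a v b = a
    return phi_inv[x]                   # b v a = a' where phi(a') = b

def meet(x, y):
    if (x in A) == (y in A):
        return y                        # within a D-class: x ^ y = y
    if x in A:
        return y                        # a ^ b = b
    return phi[y]                       # b ^ a = b' where phi(a) = b'

# Brute-force check of the skew lattice axioms.
for x in S:
    assert meet(x, x) == x and join(x, x) == x              # idempotency
for x, y, z in product(S, repeat=3):
    assert meet(meet(x, y), z) == meet(x, meet(y, z))       # associativity
    assert join(join(x, y), z) == join(x, join(y, z))
for x, y in product(S, repeat=2):
    assert meet(x, join(x, y)) == x == meet(join(y, x), x)  # absorption
    assert join(x, meet(x, y)) == x == join(meet(y, x), x)
print("primitive right-handed skew lattice verified")
```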
One constructs left-handed primitive skew lattices in dual fashion. All right [left] handed primitive skew lattices can be constructed in this fashion.[1]
A nonrectangular skew lattice S is covered by its maximal primitive skew lattices: given comparable D-classes A > B in S/D, A ∪ B forms a maximal primitive subalgebra of S, and every D-class in S lies in such a subalgebra. The coset structures on these primitive subalgebras combine to determine the outcomes x ∨ y and x ∧ y at least when x and y are comparable under ⪯. It turns out that x ∨ y and x ∧ y are determined in general by cosets and their bijections, although in a slightly less direct manner than the ⪯-comparable case. In particular, given two incomparable D-classes A and B with join D-class J and meet D-class M in S/D, interesting connections arise between the two coset decompositions of J (or M) with respect to A and B.[3]
Thus a skew lattice may be viewed as a coset atlas of rectangular skew lattices placed on the vertices of a lattice and coset bijections between them, the latter seen as partial isomorphisms between the rectangular algebras with each coset bijection determining a corresponding pair of cosets. This perspective gives, in essence, the Hasse diagram of the skew lattice, which is easily drawn in cases of relatively small order. (See the diagrams in Section 3 above.) Given a chain of D-classes A > B > C in S/D, one has three sets of coset bijections: from A to B, from B to C and from A to C. In general, given coset bijections φ : A → B and ψ : B → C, the composition of partial bijections ψφ could be empty. If it is not, then a unique coset bijection χ : A → C exists such that ψφ ⊆ χ. (Again, χ is a bijection between a pair of cosets in A and C.) This inclusion can be strict. It is always an equality (given ψφ ≠ ∅) on a given skew lattice S precisely when S is categorical. In this case, by including the identity maps on each rectangular D-class and adjoining empty bijections between properly comparable D-classes, one has a category of rectangular algebras and coset bijections between them. The simple examples in Section 3 are categorical.
|
https://en.wikipedia.org/wiki/Skew_lattice
|
In graph theory, the term bipartite hypergraph describes several related classes of hypergraphs, all of which are natural generalizations of a bipartite graph.
The weakest definition of bipartiteness is also called 2-colorability. A hypergraph H = (V, E) is called 2-colorable if its vertex set V can be partitioned into two sets, X and Y, such that each hyperedge meets both X and Y. Equivalently, the vertices of H can be 2-colored so that no hyperedge is monochromatic. Every bipartite graph G = (X + Y, E) is 2-colorable: each edge contains exactly one vertex of X and one vertex of Y, so e.g. X can be colored blue and Y can be colored yellow and no edge is monochromatic.
The property of 2-colorability was first introduced by Felix Bernstein in the context of set families;[1] therefore it is also called Property B.
A stronger definition of bipartiteness is: a hypergraph is called bipartite if its vertex set V can be partitioned into two sets, X and Y, such that each hyperedge contains exactly one element of X.[2][3] Every bipartite graph is also a bipartite hypergraph.
Every bipartite hypergraph is 2-colorable, but bipartiteness is stronger than 2-colorability. Let H be a hypergraph on the vertices {1, 2, 3, 4} with the following hyperedges:
{ {1,2,3} , {1,2,4} , {1,3,4} , {2,3,4} }
This H is 2-colorable, for example by the partition X = {1,2} and Y = {3,4}. However, it is not bipartite, since every set X with one element has an empty intersection with one hyperedge, and every set X with two or more elements has an intersection of size 2 or more with at least two hyperedges.
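Both claims can be verified by exhaustive search over all candidate sets X; the function names below are ad hoc:

```python
from itertools import combinations

V = [1, 2, 3, 4]
H = [set(c) for c in combinations(V, 3)]        # all four 3-element subsets

def two_colorable(V, H):
    # some X such that every hyperedge meets both X and its complement
    return any(all(e & X and e - X for e in H)
               for r in range(len(V) + 1)
               for c in combinations(V, r)
               for X in [set(c)])

def bipartite(V, H):
    # some X such that every hyperedge contains exactly one element of X
    return any(all(len(e & X) == 1 for e in H)
               for r in range(len(V) + 1)
               for c in combinations(V, r)
               for X in [set(c)])

print(two_colorable(V, H), bipartite(V, H))     # True False
```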
Hall's marriage theorem has been generalized from bipartite graphs to bipartite hypergraphs; see Hall-type theorems for hypergraphs.
A stronger definition is: given an integer n, a hypergraph is called n-uniform if all its hyperedges contain exactly n vertices. An n-uniform hypergraph is called n-partite if its vertex set V can be partitioned into n subsets such that each hyperedge contains exactly one element from each subset.[4] An alternative term is rainbow-colorable.[5]
Every n-partite hypergraph is bipartite, but n-partiteness is stronger than bipartiteness. Let H be a hypergraph on the vertices {1, 2, 3, 4} with the following hyperedges:
{ {1,2,3} , {1,2,4} , {1,3,4} }
This H is 3-uniform. It is bipartite by the partition X = {1} and Y = {2,3,4}. However, it is not 3-partite: in every partition of V into 3 subsets, at least one subset contains two vertices, and thus at least one hyperedge contains two vertices from this subset.
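The same kind of exhaustive check works here, trying every labelling of the vertices with n part labels (function names are again ad hoc):

```python
from itertools import combinations, product

V = [1, 2, 3, 4]
H = [{1, 2, 3}, {1, 2, 4}, {1, 3, 4}]

def bipartite(V, H):
    # some X such that every hyperedge contains exactly one element of X
    return any(all(len(e & set(X)) == 1 for e in H)
               for r in range(len(V) + 1)
               for X in combinations(V, r))

def n_partite(V, H, n):
    # try every labelling of V with n part labels
    for labels in product(range(n), repeat=len(V)):
        part = dict(zip(V, labels))
        # each hyperedge must carry each of the n labels exactly once
        if all(sorted(part[v] for v in e) == list(range(n)) for e in H):
            return True
    return False

print(bipartite(V, H), n_partite(V, H, 3))      # True False
```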
A 3-partite hypergraph is often called a "tripartite hypergraph". However, a 2-partite hypergraph is not the same as a bipartite hypergraph; it is equivalent to a bipartite graph.
There are other natural generalizations of bipartite graphs. A hypergraph is called balanced if it is essentially 2-colorable, and remains essentially 2-colorable upon deleting any number of vertices (see Balanced hypergraph).
The properties of bipartiteness and balance do not imply each other.
Bipartiteness does not imply balance. For example, let H be the hypergraph with vertices {1,2,3,4} and edges:
{ {1,2,3} , {1,2,4} , {1,3,4} }
It is bipartite by the partition X = {1}, Y = {2,3,4}. But it is not balanced. For example, if vertex 1 is removed, we get the restriction of H to {2,3,4}, which has the following hyperedges:
{ {2,3} , {2,4} , {3,4} }
It is not 2-colorable, since in any 2-coloring there are at least two vertices with the same color, and thus at least one of the hyperedges is monochromatic.
Another way to see that H is not balanced is that it contains the odd-length cycle C = (2 - {1,2,3} - 3 - {1,3,4} - 4 - {1,2,4} - 2), and no edge of C contains all three vertices 2, 3, 4 of C.
Balance does not imply bipartiteness. Let H be the hypergraph:[citation needed]
{ {1,2} , {3,4} , {1,2,3,4} }
It is 2-colorable and remains 2-colorable upon removing any number of vertices from it. However, it is not bipartite, since to have exactly one green vertex in each of the first two hyperedges, we must have two green vertices in the last hyperedge.
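This too can be verified exhaustively. The check below reads "essentially 2-colorable" as ignoring hyperedges with fewer than two remaining vertices, which is one reasonable reading of the definition, and it tests deletion of every vertex subset:

```python
from itertools import combinations

V = [1, 2, 3, 4]
H = [{1, 2}, {3, 4}, {1, 2, 3, 4}]

def two_colorable(V, H):
    # ignore edges of size < 2, which can never be bichromatic
    edges = [e for e in H if len(e) >= 2]
    return any(all(e & set(X) and e - set(X) for e in edges)
               for r in range(len(V) + 1)
               for X in combinations(V, r))

def balanced_by_deletion(V, H):
    # essentially 2-colorable after deleting any subset of vertices
    for r in range(len(V) + 1):
        for D in combinations(V, r):
            rest = [v for v in V if v not in D]
            restricted = [e - set(D) for e in H]
            if not two_colorable(rest, restricted):
                return False
    return True

def bipartite(V, H):
    return any(all(len(e & set(X)) == 1 for e in H)
               for r in range(len(V) + 1)
               for X in combinations(V, r))

print(balanced_by_deletion(V, H), bipartite(V, H))   # True False
```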
|
https://en.wikipedia.org/wiki/Bipartite_hypergraph
|
Value theory, also called axiology, studies the nature, sources, and types of values. It is a branch of philosophy and an interdisciplinary field closely associated with social sciences such as economics, sociology, anthropology, and psychology.
Value is the worth of something, usually understood as a degree that covers both positive and negative magnitudes corresponding to the terms good and bad. Values influence many human endeavors related to emotion, decision-making, and action. Value theorists distinguish various types of values, like the contrast between intrinsic and instrumental value. An entity has intrinsic value if it is good in itself, independent of external factors. An entity has instrumental value if it is useful as a means leading to other good things. Other classifications focus on the type of benefit, including economic, moral, political, aesthetic, and religious values. Further categorizations cover attributive, predicative, personal, impersonal, and agent-relative values.
Value realists state that values exist as objective features of reality. Anti-realists reject this, with some seeing values as subjective human creations and others viewing value statements as meaningless. Regarding the sources of value, hedonists argue that only pleasure has intrinsic value, whereas desire theorists discuss desires as the ultimate source of value. Perfectionism, another approach, emphasizes the cultivation of characteristic human abilities. Value pluralism identifies diverse sources of intrinsic value, raising the issue of whether values belonging to different types are comparable. Value theorists employ various methods of inquiry, ranging from reliance on intuitions and thought experiments to the description of first-person experience and the analysis of language.
Ethics, a related field, focuses primarily on normative concepts of right behavior, whereas value theory explores evaluative concepts about what is good. In economics, theories of value are frameworks to assess and explain the economic value of commodities. Sociology and anthropology examine values as aspects of societies and cultures, reflecting dominant preferences and beliefs. Psychologists tend to understand values as abstract motivational goals that shape an individual's personality. The roots of value theory lie in antiquity as reflections on the highest good that humans should pursue. Diverse traditions contributed to this area of thought during the medieval and early modern periods, but it was only established as a distinct discipline in the late 19th and early 20th centuries.
Value theory, also known as axiology and theory of values, is the systematic study of values. As a branch of philosophy, it examines which things are good and what it means for something to be good. It distinguishes different types of values and explores how they can be measured and compared. This field also studies whether values are a fundamental aspect of reality and how they influence phenomena such as emotion, desire, decision, and action.[2] Value theory is relevant to many human endeavors because values are guiding principles that underlie the political, economic, scientific, and personal spheres.[3] It analyzes and evaluates phenomena such as well-being, utility, beauty, human life, knowledge, wisdom, freedom, love, and justice.[4]
The precise definition of value theory is debated, and some theorists rely on alternative characterizations. In a broad sense, value theory is a catch-all label that encompasses all philosophical disciplines studying evaluative and normative topics. According to this view, value theory is one of the main branches of philosophy and includes ethics, aesthetics, social philosophy, political philosophy, and philosophy of religion.[5] A similar broad characterization sees value theory as a multidisciplinary area of inquiry that integrates research from fields like sociology, anthropology, psychology, and economics alongside philosophy.[6] In a narrow sense, value theory is a subdiscipline of ethics that is particularly relevant to the school of consequentialism since it determines how to assess the value of consequences.[7]
The word axiology has its origin in the ancient Greek terms ἄξιος (axios, meaning 'worthy' or 'of value') and λόγος (logos, meaning 'study' or 'theory of').[8] Even though the roots of value theory reach back to the ancient period, this area of thought was only conceived as a distinct discipline in the late 19th and early 20th centuries, when the term axiology was coined.[9] The terms value theory and axiology are usually used as synonyms, but some philosophers distinguish between them. According to one characterization, axiology is a subfield of value theory that limits itself to theories about which things are valuable and how valuable they are.[10][a] The term timology is an older and less common synonym.[12]
Value is the worth, usefulness, or merit of something.[b] Value theorists examine the expressions used to describe and compare values, called evaluative terms.[15] They are further interested in the types or categories of values. The proposed classifications overlap and are based on factors like the source, beneficiary, and function of the value.[16]
Values are expressed through evaluative terms. For example, the words good, best, great, and excellent convey positive values, whereas words like bad and terrible indicate negative values.[15] Value theorists distinguish between thin and thick evaluative terms. Thin evaluative terms, like good and bad, express pure evaluations without any additional descriptive content.[c] They contrast with thick evaluative terms, like courageous and cruel, which provide more information by expressing other qualities, such as character traits, in addition to the evaluation.[18] Values are often understood as degrees that cover positive and negative magnitudes corresponding to good and bad. The term value is sometimes restricted to positive degrees to contrast with the term disvalue for negative degrees. The words better and worse are used to compare degrees, but it is controversial whether a quantitative comparison is always possible.[19] Evaluation is the assessment or measurement of value, often employed to compare the benefits of different options to find the most advantageous choice.[20]
Evaluative terms are sometimes distinguished from normative or deontic terms. Normative terms, like right, wrong, and obligation, prescribe actions or other states by expressing what ought to be done or what is required.[21] Evaluative terms have a wider scope because they are not limited to what people can control or are responsible for. For instance, involuntary events like digestion and earthquakes can have a positive or negative value even if they are not right or wrong in a strict sense.[22] Despite the distinction, evaluative and normative concepts are closely related. For example, the value of the consequences of an action may influence its normative status—whether the action is right or wrong.[23]
A thing has intrinsic or final value if it is good in itself or good for its own sake, independent of external factors or outcomes. A thing has extrinsic or instrumental value if it is useful or leads to other good things, serving as a means to bring about a desirable end. For example, tools like microwaves or money have instrumental value due to the useful functions they perform.[24] In some cases, the thing produced this way itself has instrumental value, like when using money to buy a microwave. This can result in a chain of instrumentally valuable things in which each link gets its value by causing the following link. Intrinsically valuable things stand at the endpoint of these chains and ground the value of all the preceding links.[25]
One suggestion to distinguish between intrinsic and instrumental value, proposed by G. E. Moore, relies on a thought experiment that imagines the valuable thing in isolation from everything else. In such a situation, purely instrumentally valuable things lose their value since they serve no purpose, while purely intrinsically valuable things remain valuable.[26][d] According to a common view, pleasure is one of the sources of intrinsic value. Other suggested sources include desire satisfaction, virtue, life, health, beauty, freedom, and knowledge.[28]
Intrinsic and instrumental value are not exclusive categories. As a result, a thing can have both intrinsic and instrumental value if it is good in itself and also leads to other good things.[29] In a similar sense, a thing can have different instrumental values at the same time, both positive and negative ones. This is the case if some of its consequences are good while others are bad. The total instrumental value of a thing is the value balance of all its consequences.[30]
Because instrumental value depends on other values, it is an open question whether it should be understood as a value in a strict sense. For example, the overall value of a chain of causes leading to an intrinsically valuable thing remains the same if instrumentally valuable links are added or removed without affecting the intrinsically valuable thing. The observation that the overall value does not change is sometimes used as an argument that the things added or removed do not have value.[31]
Traditionally, value theorists have used the terms intrinsic value and final value interchangeably, just like the terms extrinsic value and instrumental value. This practice has been questioned in the 20th century based on the idea that they are similar but not identical concepts. According to this view, a thing has intrinsic value if the source of its value is an intrinsic property, meaning that the value does not depend on how the thing is related to other objects. Extrinsic value, by contrast, depends on external relations. This view sees instrumental value as one type of extrinsic value based on external causal relations. At the same time, it allows that there are other types of non-instrumental extrinsic value that result from external non-causal relations. Final value is understood as what is valued for its own sake, independent of whether intrinsic or extrinsic properties are responsible.[32][e]
Another distinction relies on the contrast between absolute and relative value. Absolute value, also called value simpliciter, is a form of unconditional value. A thing has relative value if its value is limited to certain considerations or viewpoints.[34]
One form of relative value is restricted to the type of an entity, expressed in sentences like "That is a good knife" or "Jack is a good thief". This form is known as attributive goodness since the word "good" modifies the meaning of another term. To be attributively good as a certain type means to possess qualities characteristic of that type. For instance, a good knife is sharp and a good thief has the skill of stealing without getting caught. Attributive goodness contrasts with predicative goodness. The sentence "Pleasure is good" is an example since the word good is used as a predicate to talk about the unqualified value of pleasure.[35] Attributive and predicative goodness can accompany each other, but this is not always the case. For instance, being a good thief is not necessarily a good thing.[36]
Another type of relative value restricts goodness to a specific person. Known as personal value,[f] it expresses what benefits a particular person, promotes their welfare, or is in their interest. For example, a poem written by a child may have personal value for the parents even if the poem lacks value for others. Impersonal value, by contrast, is good in general without restriction to any specific person or viewpoint.[38] Some philosophers, like Moore, reject the existence of personal values, holding that all values are impersonal. Others have proposed theories about the relation between personal and impersonal value. The agglomerative theory says that impersonal value is nothing but the sum of all personal values. Another view understands impersonal value as a specific type of personal value taken from the perspective of the universe as a whole.[39]
Agent-relative value is sometimes contrasted with personal value as another person-specific limitation of the evaluative outlook. Agent-relative values affect moral considerations about what a person is responsible for or guilty of. For example, if Mei promises to pick Pedro up from the airport, then an agent-relative value obligates Mei to drive to the airport. This obligation is in place even if it does not benefit Mei, in which case there is an agent-relative value without a personal value. In consequentialism,[g] agent-relative values are often discussed in relation to ethical dilemmas. One dilemma revolves around the question of whether an individual should murder an innocent person if this prevents the murder of two innocent people by a different perpetrator. The agent-neutral perspective tends to affirm this idea since one murder is preferable to two. The agent-relative perspective tends to reject this conclusion, arguing that the initial murder should be avoided since it negatively impacts the agent-relative value of the individual committing it.[41]
Traditionally, most value theorists see absolute value as the main topic of value theory and focus their attention on this type. Nonetheless, some philosophers, like Peter Geach and Philippa Foot, have argued that the concept of absolute value by itself is meaningless and should be understood as one form of relative value.[42]
Other classifications of values have been proposed without a widely accepted main classification.[43] Some focus on the types of entities that have value. They include distinct categories for entities like things, the environment, individuals, groups, and society. Another subdivision pays attention to the type of benefit involved and encompasses material, economic, moral, social, political, aesthetic, and religious values. Classifications by the beneficiary of the value distinguish between self- and other-oriented values.[44]
A historically influential approach identifies three spheres of value: truth, goodness, and beauty.[h] For example, the neo-Kantian philosopher Wilhelm Windelband characterizes them as the highest goals of consciousness, with thought aiming at truth, will aiming at goodness, and emotion aiming at beauty. A similar view, proposed by the Chinese philosopher Zhang Dainian, says that the value of truth belongs to knowledge, the value of goodness belongs to behavior, and the value of beauty belongs to art.[46] This three-fold distinction also plays a central role in the philosophies of Franz Brentano and Jürgen Habermas.[47] Other suggested types of values include objective, subjective, potential, actual, contingent, necessary, inherent, and constitutive values.[48]
Value realism is the view that values have mind-independent existence.[49][i] This means that objective facts determine what has value, irrespective of subjective beliefs and preferences.[50] According to this view, the evaluative statement "That act is bad" is as objectively true or false as the empirical statement "That act causes distress".[51]
Realists often analyze values as properties of valuable things.[52] For example, stating that kindness is good asserts that kindness possesses the property of goodness. Value realists disagree about what type of property is involved. Naturalists say that value is a natural property. Natural properties, like size and shape, can be known through empirical observation and are studied by the natural sciences. Non-naturalists reject this view but agree that values are real. They say that values differ significantly from empirical properties and belong to another realm of reality. According to one view, they are known through rational or emotional intuition rather than empirical observation.[53]
Another disagreement among realists is about whether the entity carrying the value is a concrete individual or a state of affairs.[54] For instance, the name "Bill" refers to an individual while the sentence "Bill is pleased" refers to a state of affairs. States of affairs are complex entities that combine other entities, like the individual "Bill" and the property "pleased". Some value theorists hold that the value is a property directly of Bill, while others contend that it is a property of the state of affairs that Bill is pleased.[55] This distinction affects various disputes in value theory. In some cases, a value is intrinsic according to one view and extrinsic according to the other.[56]
Value realism contrasts with anti-realism, which comes in various forms. In its strongest version, anti-realism rejects the existence of values in any form, claiming that value statements are meaningless.[57][j] Between realism and this strong form of anti-realism, there are various intermediate views. Some anti-realists accept that value claims have meaning but deny that they have a truth value,[k] a position known as non-cognitivism. For example, emotivists say that value claims express emotional attitudes, similar to how exclamations like "Yay!" or "Boo!" express emotions rather than stating facts.[60][l]
Cognitivists contend that value statements have a truth value. Error theorists defend anti-realism based on this view by stating that all value statements are false because there are no values.[62] Another view accepts the existence of values but denies that they are mind-independent. According to this view, the mental states of individuals determine whether an object has value, for instance, because individuals desire it.[63] A similar view is defended by existentialists like Jean-Paul Sartre, who argued that values are human creations that endow the world with meaning.[64] Subjectivist theories say that values are relative to each subject, whereas more objectivist outlooks hold that values depend on mind in general rather than on the individual mind.[65] A different position accepts that values are mind-independent but holds that they are reducible to other facts, meaning that they are not a fundamental part of reality. One form of reductionism maintains that a thing is good if it is fitting to favor this thing, regardless of whether people actually favor it, a position known as the fitting-attitude theory of value. The buck-passing account, a closely related reductive view, argues that a thing is valuable if people have reasons to treat the thing in certain ways. These reasons come from other features of the valuable thing. The strongest form of realism says that value is a fundamental part of reality and cannot be reduced to other aspects.[66]
Various theories about the sources of value have been proposed. They aim to clarify what kinds of things are intrinsically good.[67] The historically influential theory of hedonism[m] states that how people feel is the only source of value. More specifically, it says that pleasure is the only intrinsic good and pain is the only intrinsic evil.[69] According to this view, everything else has only instrumental value to the extent that it leads to pleasure or pain, including knowledge, health, and justice. Hedonists usually understand the term pleasure in a broad sense that covers all kinds of enjoyable experiences, including bodily pleasures of food and sex as well as more intellectual or abstract pleasures, like the joy of reading a book or happiness about a friend's promotion. Pleasurable experiences come in degrees, and hedonists usually associate their intensity and duration with the magnitude of value they have.[70][n]
Many hedonists identify pleasure and pain as symmetric opposites, meaning that the value of pleasure balances out the disvalue of pain if they have the same intensity. However, some hedonists reject this symmetry and give more weight to avoiding pain than to experiencing pleasure.[72] Although it is widely accepted that pleasure is valuable, the hedonist claim that it is the only source of value is controversial.[73] Welfarism, a closely related theory, understands well-being as the only source of value. Well-being is what is ultimately good for a person, which can include other aspects besides pleasure, such as health, personal growth, meaningful relationships, and a sense of purpose in life.[74]
Desire theories offer a slightly different account, stating that desire satisfaction[o] is the only source of value.[p] This theory overlaps with hedonism because many people desire pleasure and because desire satisfaction is often accompanied by pleasure. Nonetheless, there are important differences: people desire a variety of other things as well, like knowledge, achievement, and respect; additionally, desire satisfaction may not always result in pleasure.[77] Some desire theorists hold that value is a property of desire satisfaction itself, while others say that it is a property of the objects that satisfy a desire.[78] One debate in desire theory concerns whether every desire is a source of value. For example, if a person has a false belief that money makes them happy, it is questionable whether the satisfaction of their desire for money is a source of value. To address this consideration, some desire theorists say that a desire can only provide value if a fully informed and rational person would have it, thereby excluding misguided desires from being a source of value.[79]
Perfectionism identifies the realization of human nature and the cultivation of characteristic human abilities as the source of intrinsic goodness. It covers capacities and character traits belonging to the bodily, emotional, volitional, cognitive, social, artistic, and religious fields. Perfectionists disagree about which human excellences are the most important. Many are pluralistic in recognizing a diverse array of human excellences, such as knowledge, creativity, health, beauty, free agency, and moral virtues like benevolence and courage.[80] According to one suggestion, there are two main fields of human goods: theoretical abilities responsible for understanding the world and practical abilities responsible for interacting with it.[81] Some perfectionists provide an ideal characterization of human nature as the goal of human flourishing, holding that human excellences are those aspects that promote the realization of this goal. This view is exemplified in Aristotle's focus on rationality as the nature and ideal state of human beings.[82] Non-humanistic versions extend perfectionism to the natural world in general, arguing that excellence as a source of intrinsic value is not limited to the human realm.[83]
Monist theories of value assert that there is only a single source of intrinsic value. They agree that various things have value but maintain that all fundamentally good things belong to the same type. For example, hedonists hold that nothing but pleasure has intrinsic value, while desire theorists argue that desire satisfaction is the only source of fundamental goodness. Pluralists reject this view, contending that a simple single-value system is too crude to capture the complexity of the sphere of values. They say that diverse sources of value exist independently of one another, each contributing to the overall value of the world.[84]
One motivation for value pluralism is the observation that people value diverse types of things, including happiness, friendship, success, and knowledge.[85] This diversity becomes particularly prominent when people face difficult decisions between competing values, such as choosing between friendship and career success.[86] In such cases, value pluralists can argue that the different items have different types of value. Since monists accept only one source of intrinsic value, they may provide a different explanation by proposing that some of the valuable items have only instrumental value but lack intrinsic value.[87]
Pluralists have proposed various accounts of how their view affects practical decisions. Rational decisions often rely on value comparisons to determine which course of action should be pursued.[89] Some pluralists discuss a hierarchy of values reflecting the relative importance and weight of different value types to help people promote higher values when faced with difficult choices.[90] For example, the philosopher Max Scheler ranks values, based on how enduring and fulfilling they are, into the levels of pleasure, utility, vitality, culture, and holiness. He asserts that people should not promote lower values, like pleasure, if this comes at the expense of higher values.[88][q]
Radical pluralists reject this approach, putting more emphasis on diversity by holding that different types of values are not comparable with each other. This means that each value type is unique, making it impossible to determine which one is superior.[92][r] Some value theorists use radical pluralism to argue that value conflicts are inevitable, that the gain of one value cannot always compensate for the loss of another, and that some ethical dilemmas are irresolvable.[94] For example, the philosopher Isaiah Berlin applied this idea to the values of liberty and equality, arguing that a gain in one cannot make up for a loss in the other. Similarly, the philosopher Joseph Raz said that it is often impossible to compare the values of career paths, like when choosing between becoming a lawyer or a clarinetist.[95] The terms incomparability and incommensurability are often used as synonyms. However, philosophers like Ruth Chang distinguish them. According to this view, incommensurability means that there is no common measure to quantify values of different types. Incommensurable values may or may not be comparable. If they are, it is possible to say that one value is better than another, but it is not possible to quantify how much better it is.[96]
Several controversies surround the question of how the intrinsic value of a whole is determined by the intrinsic values of its parts. According to the additivity principle, the intrinsic value of a whole is simply the sum of the intrinsic values of its parts. For example, if a virtuous person becomes happy, then the intrinsic value of the happiness is simply added to the intrinsic value of the virtue, thereby increasing the overall value.[97]
Various counterexamples to the additivity principle have been proposed, suggesting that the relation between parts and wholes is more complex. For instance, Immanuel Kant argued that if a vicious person becomes happy, this happiness, though good in itself, does not increase the overall value. On the contrary, it makes things worse, according to Kant, since viciousness should not be rewarded with happiness. This situation is known as an organic unity—a whole whose intrinsic value differs from the sum of the intrinsic values of its parts.[99] Another perspective, called holism about value, asserts that the intrinsic value of a thing depends on its context. Holists can argue that happiness has positive intrinsic value in the context of virtue and negative intrinsic value in the context of vice. Atomists reject this view, saying that intrinsic value is context-independent.[100]
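The contrast between additive atomism and context-sensitive holism can be made concrete with a toy model. The numeric values and the sign-flipping adjustment below are illustrative assumptions for exposition, not part of any axiological theory.

```python
# Toy model contrasting atomism (context-independent part values)
# with holism (part values depend on context). All numbers are illustrative.

def additive_total(part_values):
    """Atomist/additive view: the whole's value is the sum of its parts."""
    return sum(part_values)

def holist_total(virtue_value, happiness_value):
    """Holist sketch of Kant's point: happiness counts positively in the
    context of virtue but negatively in the context of vice."""
    context_is_virtuous = virtue_value > 0
    adjusted_happiness = happiness_value if context_is_virtuous else -happiness_value
    return virtue_value + adjusted_happiness

# A virtuous person (virtue = +5) who becomes happy (happiness = +3):
print(additive_total([5, 3]))   # both views agree: 8
print(holist_total(5, 3))       # 8

# A vicious person (virtue = -5) who becomes happy:
print(additive_total([-5, 3]))  # additivity: -2 (happiness still adds value)
print(holist_total(-5, 3))      # holism: -8 (happiness makes things worse)
```

On the additive view the vicious person's happiness always improves the total, while the holist total captures Kant's intuition that it worsens it.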
Theories of value aggregation provide concrete principles for calculating the overall value of an outcome based on how positively or negatively each individual is affected by it. For example, if a government implements a new policy that affects some people positively and others negatively, theories of value aggregation can be used to determine whether the overall value of the policy is positive or negative. Axiological utilitarianism accepts the additivity principle, saying that the total value is simply the sum of all individual values.[101] Axiological egalitarians are not only interested in the sum total of value but also in how the values are distributed. They argue that an outcome with a balanced advantage distribution is better than an outcome where some benefit a lot while others benefit little, even if the two outcomes have the same sum total.[102] Axiological prioritarians are particularly concerned with the benefits of individuals who are worse off. They say that providing advantages to people in need has more value than providing the same advantages to others.[102]
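The three aggregation approaches can be sketched as simple functions over individual welfare levels. The egalitarian penalty term and the square-root prioritarian weighting below are illustrative choices of mine; the literature contains many variant formulations.

```python
import math

# Sketch of three value-aggregation rules applied to lists of individual
# welfare levels. Functional forms are illustrative, not canonical.

def utilitarian(welfares):
    """Total value is the plain sum of individual welfare levels."""
    return sum(welfares)

def egalitarian(welfares, inequality_weight=1.0):
    """Sum of welfare minus a penalty for how unevenly it is distributed."""
    mean = sum(welfares) / len(welfares)
    spread = sum(abs(w - mean) for w in welfares) / len(welfares)
    return sum(welfares) - inequality_weight * spread

def prioritarian(welfares):
    """Concave weighting: extra welfare counts for more when it goes to
    the worse off (sqrt is one possible concave transform)."""
    return sum(math.sqrt(w) for w in welfares)

equal   = [4, 4]   # policy A: everyone benefits equally
unequal = [7, 1]   # policy B: same total, unevenly distributed

print(utilitarian(equal), utilitarian(unequal))    # 8 8   -> tie
print(egalitarian(equal), egalitarian(unequal))    # 8.0 5.0 -> A wins
print(prioritarian(equal), prioritarian(unequal))  # 4.0 vs ~3.65 -> A wins
```

The example shows why the three views can rank the same pair of outcomes differently: utilitarianism sees a tie, while both the egalitarian and prioritarian rules prefer the evenly distributed outcome.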
Another debate addresses the meaning of life, investigating whether life or existence as a whole has a higher meaning or purpose.[103] Naturalist views argue that the meaning of life is found within the physical world, either as objective values that are true for everyone or as subjective values that vary according to individual preferences. Suggested fields where humans find meaning include exercising freedom, committing oneself to a cause, practicing altruism, engaging in positive social relationships, and pursuing personal happiness.[104] Supernaturalists, by contrast, propose that meaning lies beyond the natural world. For example, various religions teach that God created the world for a higher purpose, imbuing existence with meaning. A related outlook argues that immortal souls serve as sources of meaning by being connected to a transcendent reality and evolving spiritually.[105] Existential nihilists reject both naturalist and supernaturalist explanations by asserting that there is no higher purpose. They suggest that life is meaningless, with the consequence that there is no higher reason to continue living and that all efforts, achievements, happiness, and suffering are ultimately pointless.[106]
Formal axiology is a theory of value initially developed by the philosopher Robert S. Hartman. This approach treats axiology as a formal science, akin to logic and mathematics. It uses axioms to give an abstract definition of value, understanding it not as a property of things but as a property of concepts. Value measures the extent to which an entity fulfills its concept. For example, a good car has all the desirable qualities of cars, like a reliable engine and effective brakes, whereas a bad car lacks many of them. Formal axiology distinguishes between three fundamental value types: intrinsic values apply to people; extrinsic values apply to things, actions, and social roles; and systemic values apply to conceptual constructs. Formal axiology examines how these value types form a hierarchy and how they can be measured.[107]
Value theorists employ various methods to conduct their inquiries, justify theories, and measure values. Intuitionists rely on intuitions to assess evaluative claims. In this context, an intuition is an immediate apprehension or understanding of a self-evident claim, meaning that its truth can be assessed without inferring it from another observation.[108] Value theorists often rely on thought experiments to gain this type of understanding. Thought experiments are imagined scenarios that exemplify philosophical problems. Philosophers use counterfactual reasoning to evaluate possible consequences and gain insight into underlying problems.[109] For example, the philosopher Robert Nozick imagines an experience machine that can virtually simulate an ideal life. Based on his contention that people would not want to spend the rest of their lives in this pleasurable simulation, Nozick argues against the hedonist claim that pleasure is the only source of intrinsic value. According to him, the thought experiment shows that the value of an authentic connection to reality is not reducible to pleasure.[110][s]
Phenomenologists provide a detailed first-person description of the experience of values. They closely examine emotional experiences, ranging from desire, interest, and preference to feelings in the form of love and hate. However, they do not limit their inquiry to these phenomena, asserting that values permeate experience at large.[111] A key aspect of the phenomenological method is to suspend preconceived ideas and judgments in order to understand the essence of experiences as they present themselves to consciousness.[112]
The analysis of concepts and ordinary language is another method of inquiry. By examining terms and sentences used to talk about values, value theorists aim to clarify their meanings, uncover crucial distinctions, and formulate arguments for and against axiological theories.[113] For instance, a prominent dispute between naturalists and non-naturalists hinges on the conceptual analysis of the term good, in particular, whether its meaning can be analyzed through natural terms, like pleasure.[114][t]
In the social sciences, value theorists face the challenge of measuring the evaluative outlook of individuals and groups. Specifically, they aim to determine personal value hierarchies, for example, whether a person gives more weight to truth than to moral goodness or beauty.[116] They distinguish between direct and indirect measurement methods. Direct methods involve asking people straightforward questions about what things they value and which value priorities they have. This approach assumes that people are aware of their evaluative outlook and able to articulate it accurately. Indirect methods do not share this assumption, asserting instead that values guide behavior and choices on an unconscious level. Consequently, they observe how people decide and act, seeking to infer the underlying value attitudes responsible for picking one course of action rather than another.[117]
Various catalogs or scales of values have been proposed to measure value priorities. The Rokeach Value Survey considers a total of 36 values divided into two groups: instrumental values, like honesty and capability, which serve as means to promote terminal values, such as freedom and family security. It asks participants to rank the values based on their impact on the participants' lives, aiming to understand the relative importance assigned to each of them. The Schwartz theory of basic human values is a modification of the Rokeach Value Survey that seeks to provide a more cross-cultural and universal assessment. It arranges the values in a circular manner to reflect that neighboring values are compatible with each other, such as openness to change and self-enhancement, while values on opposing sides may conflict with each other, such as openness to change and conservation.[118]
Ethics and value theory are overlapping fields of inquiry. Ethics studiesmoralphenomena, focusing on how people should act or which behaviors are morally right.[119]Value theory investigates the nature, sources, and types of values in general.[2]Some philosophers understand value theory as a subdiscipline of ethics. This is based on the idea that what people should do is affected by value considerations but not necessarily limited to them.[7]Another view sees ethics as a subdiscipline of value theory. This outlook follows the idea that ethics is concerned with moral values affecting what people can control, whereas value theory examines a broader range of values, including those beyond anyone's control.[120]Some perspectives contrast ethics and value theory, asserting that thenormativeconcepts examined by ethics are distinct from the evaluative concepts examined by value theory.[23]Axiological ethicsis a subfield of ethics examining the nature and role of values from a moral perspective, with particular interest in determining which ends are worth pursuing.[121]
The ethical theory ofconsequentialismcombines the perspectives of ethics and value theory, asserting that the rightness of an action depends on the value of its consequences. Consequentialists compare possible courses of action, saying that people should follow the one leading to the best overall consequences.[122]The overall consequences of an action are the totality of its effects, or how it impacts the world by starting a causal chain of events that would not have occurred otherwise.[123]Distinct versions of consequentialism rely on different theories of the sources of value.Classical utilitarianism, a prominent form of consequentialism, says that moral actions produce the greatest amount ofpleasurefor the greatest number of people. It combines a consequentialist outlook on right action with ahedonistoutlook on pleasure as the only source of intrinsic value.[124]
Economics is a social science studying how goods and services are produced, distributed, and consumed, both from the perspective of individual agents and societal systems.[125] Economists view evaluations as a driving force underlying economic activity. They use the notion of economic value and related evaluative concepts to understand decision-making processes, resource allocation, and the impact of policies. The economic value or benefit of a commodity is the advantage it provides to an economic agent, often measured in terms of what people are willing to pay for it.[126]
Economic theories of value are frameworks to explain how economic value arises and which factors influence it. Prominent frameworks include the classical labor theory of value and the neo-classical marginal theory of value.[127] The labor theory, initially developed by the economists Adam Smith and David Ricardo, distinguishes between use value (the utility or satisfaction a commodity provides) and exchange value (the proportion at which one commodity can be exchanged for another).[128] It focuses on exchange value, which it says is determined by the amount of labor required to produce the commodity. In its simplest form, it directly correlates exchange value with labor time. For example, if the time needed to hunt a deer is twice the time needed to hunt a beaver, then one deer is worth two beavers.[129] The philosopher Karl Marx extended the labor theory of value in various ways. He introduced the concept of surplus value, which goes beyond the time and resources invested to explain how capitalists can profit from the labor of their employees.[130]
The marginal theory of value focuses on consumption rather than production. It says that the utility of a commodity is the source of its value. Specifically, it is interested in marginal utility, the additional satisfaction gained from consuming one more unit of the commodity. Marginal utility often diminishes if many units have already been consumed, leading to a decrease in the exchange value of commodities that are abundantly available.[131] Both the labor theory and the marginal theory were later challenged by the Sraffian theory of value.[132]
Sociology studies social behavior, relationships, institutions, and society at large.[133] In their analyses and explanations of these phenomena, some sociologists use the concept of values to understand issues like social cohesion and conflict, the norms and practices people follow, and collective action. They usually understand values as subjective attitudes possessed by individuals and shared in social groups. According to this view, values are beliefs or priorities about goals worth pursuing that guide people to act in certain ways. For example, societies that value education may invest substantial resources to ensure high-quality schooling. This subjective conception of values as aspects of individuals and social groups contrasts with the objective conceptions of values more prominent in economics, which understand values as aspects of commodities.[134]
Shared values can help unite people in the pursuit of a common cause, fostering social cohesion. Value differences, by contrast, may divide people into antagonistic groups that promote conflicting projects. Some sociologists employ value research to predict how people will behave. Given the observation that someone values the environment, they may conclude that this person is more likely to recycle or support pro-environmental legislation.[135] One approach to this type of research uses value scales, such as the Rokeach Value Survey and the Schwartz theory of basic human values, to measure the value outlook of individuals and groups.[136]
Anthropology also studies human behavior and societies but does not limit itself to contemporary social structures, extending its focus to humanity both past and present.[137] Similar to sociologists, many anthropologists understand values as social representations of goals worth pursuing. For them, values are embedded in mental structures associated with culture and ideology about what is desirable. A slightly different approach in anthropology focuses on the practical side of values, holding that values are constantly created through human activity.[138]
Anthropological value theorists use values to compare cultures.[139] They can be employed to examine similarities as universal concerns present in every society. For example, anthropologist Clyde Kluckhohn and sociologist Fred Strodtbeck proposed a set of value orientations found in every culture.[140] Values can also be used to analyze differences between cultures and value changes within a culture. Anthropologist Louis Dumont followed this idea, suggesting that the cultural meaning systems in distinct societies differ in their value priorities. He argued that values are ordered hierarchically around a set of paramount values that trump all other values. For example, Dumont analyzed the traditional Indian caste system as a cultural hierarchy based on the value of purity, extending from the pure Brahmins to the "untouchable" Dalits.[141]
The contrast between individualism and collectivism is an influential topic in cross-cultural value research. Individualism promotes values associated with the autonomy of individuals, such as self-directedness, independence, and the fulfillment of personal goals. Collectivism gives priority to group-related values, like cooperation, conformity, and foregoing personal advantages for the sake of collective benefits. As a rough simplification, it is often suggested that individualism is more prominent in Western cultures, whereas collectivism is more commonly observed in Eastern cultures.[142]
As the study of mental phenomena and behavior, psychology contrasts with sociology and anthropology by focusing more on the perspective of individuals than on the broader social and cultural contexts.[143] Psychologists tend to understand values as abstract motivational goals or general principles about what matters.[144] From this perspective, values differ from specific plans and intentions since they are stable evaluative tendencies not bound to concrete situations.[145]
Various psychological theories of values establish a close link between an individual's evaluative outlook and their personality.[146] An early theory, formulated by psychologists Philip E. Vernon and Gordon Allport, understands personality as a collection of aspects unified by a coherent value system. It distinguishes between six personality types corresponding to the value spheres of theory, economy, aesthetics, society, politics, and religion. For example, people with theoretical personalities place special importance on the value of knowledge and the discovery of truth.[147] Influenced by Vernon and Allport, psychologist Milton Rokeach conceptualized values as enduring beliefs about what goals and conduct are preferable. He divided values into the categories of instrumental and terminal values. He thought that a central aspect of personality lies in how people prioritize the values within each category.[148] Psychologist Shalom Schwartz refined this approach by linking values to emotion and motivation. He explored how value rankings affect decisions in which the values of different options conflict.[149]
The origin of value theory lies in the ancient period, with early reflections on the good life and the ends worth pursuing.[150] Socrates (c. 469–399 BCE)[151] identified the highest good as the right combination of knowledge, pleasure, and virtue, holding that active inquiry is associated with pleasure while knowledge of the Good leads to virtuous action.[152] Plato (c. 428–347 BCE)[153] conceived the Good as a universal and changeless idea. It is the highest form in his theory of forms, acting as the source of all other forms and the foundation of reality and knowledge.[154] Aristotle (384–322 BCE)[155] saw eudaimonia as the highest good and ultimate goal of human life. He understood eudaimonia as a form of happiness or flourishing achieved through the exercise of virtues in accordance with reason, leading to the full realization of human potential.[156] Epicurus (c. 341–271 BCE) proposed a nuanced egoistic hedonism, stating that personal pleasure is the greatest good while recommending moderation to avoid the negative effects of excessive desires and anxiety about the future.[157] According to the Stoics, a virtuous life following nature and reason is the highest good. They thought that self-mastery and rationality lead to a pleasant equanimity independent of external circumstances.[158] Influenced by Plato, Plotinus (c. 204/5–270 CE) held that the Good is the ultimate principle of reality from which everything emanates. For him, evil is not a distinct opposing principle but merely a deficiency or absence of being resulting from a missing connection to the Good.[159]
In ancient Indian philosophy, the idea that people are trapped in a cycle of rebirths arose around 600 BCE.[161] Many traditions adopted it, arguing that liberation from this cycle is the highest good.[162] Hindu philosophy distinguishes the four fundamental values of duty, economic wealth, sensory pleasure, and liberation.[163] Many Hindu schools of thought prioritize the value of liberation.[164] A similar outlook is found in ancient Buddhist philosophy, starting between the sixth and the fifth centuries BCE, where the cessation of suffering through the attainment of Nirvana is considered the ultimate goal.[165] In ancient China, Confucius (c. 551–479 BCE)[166] explored the role of self-cultivation in leading a virtuous life, viewing general benevolence towards humanity as the supreme virtue.[160] In comparing the highest virtue to water, Laozi (6th century BCE)[u] emphasized the importance of living in harmony with the natural order of the universe.[168]
Religious teachings influenced value theory in the medieval period. Early Christian thinkers, such as Augustine of Hippo (354–430 CE),[170] adapted the theories of Plato and Plotinus into a religious framework. They identified God as the ultimate source of existence and goodness, seeing evil as a mere lack or privation of good.[171] Drawing on Aristotelianism, Christian philosopher Thomas Aquinas (1224–1274 CE)[172] said that communion with the divine, achieved through a beatific vision of God, is the highest end of humans.[173] In Arabic–Persian philosophy, al-Farabi (c. 878–950 CE)[174] asserted that the supreme form of human perfection is an intellectual happiness, reachable in the afterlife by developing the intellect to its fullest potential.[175] Avicenna (980–1037 CE)[176] also regarded the intellect as the highest human faculty. He thought that a contemplative life prepares humans for the greatest good, which is only attained in the afterlife when humans are free from bodily distractions.[177] In Indian philosophy, Adi Shankara (c. 700–750 CE)[178] taught that liberation, the highest human end, is reached by realizing that the self is the same as ultimate reality encompassing all of existence.[169] In Chinese thought, the early neo-Confucian philosopher Han Yu (768–824 CE) identified the sage as an ideal role model who, through self-cultivation, achieves personal integrity expressed in harmony between theory and action in daily life.[179]
In the early modern period, Thomas Hobbes (1588–1679)[180] understood values as subjective phenomena that depend on a person's interests. He examined how the interests of individuals can be aggregated to guide political decisions.[181] David Hume (1711–1776)[182] agreed with Hobbes's subjectivism, exploring how values differ from objective facts.[183] Immanuel Kant (1724–1804)[184] asserted that the highest good is happiness in proportion to moral virtue. He emphasized the primacy of virtue by respecting the moral law and the inherent value of people, adding that moral virtue is ideally, but not always, accompanied by personal happiness.[185] Jeremy Bentham (1748–1832)[186] and John Stuart Mill (1806–1873)[187] formulated classical utilitarianism, combining a hedonist theory about value with a consequentialist theory about right action.[188] Hermann Lotze (1817–1881)[189] developed a philosophy of values, holding that values make the world meaningful as an ordered whole centered around goodness.[190] Influenced by Lotze, the neo-Kantian philosopher Wilhelm Windelband (1848–1915)[191] understood philosophy as a theory of values, claiming that universal values determine the principles that all subjects should follow, including the norms of knowledge and action.[192] Friedrich Nietzsche (1844–1900)[193] held that values are human creations. He criticized traditional values in general and Christian values in particular, calling for a revaluation of all values centered on life-affirmation, power, and excellence.[194]
In the early 20th century, Pragmatist philosopher John Dewey (1859–1952)[195] defended axiological naturalism. He distinguished values from value judgments, adding that the skill of correct value assessment must be learned through experience.[196][v] G. E. Moore (1873–1958)[198] developed and refined various axiological concepts, such as organic unity and the contrast between intrinsic and extrinsic value. He defended non-naturalism about the nature of values and intuitionism about the knowledge of values.[199] W. D. Ross (1877–1971)[200] accepted and further elaborated on Moore's intuitionism, using it to formulate an axiological pluralism.[201][w] R. B. Perry (1876–1957)[203] and D. W. Prall (1886–1940)[204] articulated systematic theories of value based on the idea that values originate in affective states such as interest and liking.[205] Robert S. Hartman (1910–1973)[206] developed formal axiology, saying that values measure the level to which a thing embodies its ideal concept.[207] A. J. Ayer (1910–1989)[208] proposed anti-realism about values, arguing that value statements merely express the speaker's approval or disapproval.[209] A different type of anti-realism, introduced by J. L. Mackie (1917–1981),[210] suggests that all value assertions are false since no values exist.[211] G. H. von Wright (1916–2003)[212] provided a conceptual analysis of the term good by distinguishing different meanings or varieties of goodness, such as the technical goodness of a good driver and the hedonic goodness of a good meal.[213]
In continental philosophy, Franz Brentano (1838–1917)[215] formulated an early version of the fitting-attitude theory of value, saying that a thing is good if it is fitting to have a positive attitude towards it, such as love.[214] In the 1890s, his students Alexius Meinong (1853–1920)[216] and Christian von Ehrenfels (1859–1932)[217] conceived the idea of a general theory of values.[218] Edmund Husserl (1859–1938),[216] another of Brentano's students, developed phenomenology and applied this approach to the study of values.[219] Following Husserl's approach, Max Scheler (1874–1928) and Nicolai Hartmann (1882–1950) each proposed a comprehensive system of axiological ethics.[220] Asserting that values have objective reality, they explored how different value types form a hierarchy and examined the problems of value conflicts and right decisions from this hierarchical perspective.[221] Martin Heidegger (1889–1976)[222] criticized value theory, claiming that it rests on a mistaken metaphysical perspective by understanding values as aspects of things.[223] Existentialist philosopher Jean-Paul Sartre (1905–1980)[224] suggested that values do not exist by themselves but are actively created, emphasizing the role of human freedom, responsibility, and authenticity in the process.[225]
|
https://en.wikipedia.org/wiki/Axiology#Intrinsic_value
|
In database management, an aggregate function or aggregation function is a function that processes multiple values together to form a single summary statistic.
Common aggregate functions include:
Others include:
Formally, an aggregate function takes as input a set, a multiset (bag), or a list from some input domain I and outputs an element of an output domain O.[1] The input and output domains may be the same, as for SUM, or may be different, as for COUNT.
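As a rough sketch (not from the source), this signature can be illustrated in Python: `agg_sum` keeps its input domain, while `agg_count` maps any input domain to the non-negative integers. Both function names are made up for illustration.

```python
# Minimal sketch: an aggregate function maps a collection drawn from an input
# domain I to a single element of an output domain O.
def agg_sum(values):
    # I = numbers, O = numbers: input and output domains coincide.
    total = 0
    for v in values:
        total += v
    return total

def agg_count(values):
    # I = any domain, O = non-negative integers: the domains differ.
    n = 0
    for _ in values:
        n += 1
    return n

print(agg_sum([1.5, 2.5, 4.0]))    # 8.0
print(agg_count(["a", "b", "a"]))  # 3 (multisets may repeat elements)
```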
Aggregate functions occur commonly in numerous programming languages, in spreadsheets, and in relational algebra.
The listagg function, as defined in the SQL:2016 standard,[2] aggregates data from multiple rows into a single concatenated string.
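For illustration only (in Python rather than SQL), the effect of listagg-style aggregation can be mimicked with a string join; the `rows` data here is made up.

```python
# Hypothetical rows; LISTAGG-style aggregation concatenates one column's
# values across rows into a single delimited string.
rows = [{"name": "Ada"}, {"name": "Alan"}, {"name": "Grace"}]
aggregated = ", ".join(row["name"] for row in rows)
print(aggregated)  # Ada, Alan, Grace
```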
In the entity relationship diagram, aggregation is represented as seen in Figure 1, with a rectangle around the relationship and its entities to indicate that it is being treated as an aggregate entity.[3]
Aggregate functions present a bottleneck, because they potentially require having all input values at once. In distributed computing, it is desirable to divide such computations into smaller pieces and distribute the work, usually computing in parallel, via a divide and conquer algorithm.
Some aggregate functions can be computed by computing the aggregate for subsets and then aggregating these aggregates; examples include COUNT, MAX, MIN, and SUM. In other cases, the aggregate can be computed by computing auxiliary numbers for subsets, aggregating these auxiliary numbers, and finally computing the overall number at the end; examples include AVERAGE (tracking sum and count, dividing at the end) and RANGE (tracking max and min, subtracting at the end). In other cases, the aggregate cannot be computed without analyzing the entire set at once, though in some cases approximations can be distributed; examples include DISTINCT COUNT (the count-distinct problem), MEDIAN, and MODE.
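The middle case (auxiliary numbers per subset) can be sketched in Python for AVERAGE, assuming the data arrives in chunks; the chunk contents and function names are illustrative.

```python
# Distributing AVERAGE: each chunk reports the auxiliary pair (sum, count);
# pairs are merged pairwise, and the division happens once at the end.
def partial(chunk):
    return (sum(chunk), len(chunk))

def merge(a, b):
    return (a[0] + b[0], a[1] + b[1])

def finalize(acc):
    s, n = acc
    return s / n

chunks = [[1, 2, 3], [4, 5], [6]]   # hypothetical partitions of the data
acc = (0, 0)
for c in chunks:
    acc = merge(acc, partial(c))
print(finalize(acc))  # 3.5
```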
Such functions are called decomposable aggregation functions[4] or decomposable aggregate functions. The simplest may be referred to as self-decomposable aggregation functions, defined as those functions f for which there is a merge operator ⋄ such that
f(X ⊎ Y) = f(X) ⋄ f(Y),
where ⊎ is the union of multisets (see monoid homomorphism).
For example:
SUM(X ⊎ Y) = SUM(X) + SUM(Y)
COUNT(X ⊎ Y) = COUNT(X) + COUNT(Y)
MAX(X ⊎ Y) = max(MAX(X), MAX(Y))
MIN(X ⊎ Y) = min(MIN(X), MIN(Y))
Note that self-decomposable aggregation functions can be combined (formally, by taking the product) by applying them separately, so for instance one can compute both the SUM and the COUNT at the same time by tracking two numbers.
More generally, one can define a decomposable aggregation function f as one that can be expressed as the composition of a final function g and a self-decomposable aggregation function h: f = g ∘ h, that is, f(X) = g(h(X)). For example, AVERAGE = SUM / COUNT and RANGE = MAX − MIN.
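A sketch of the f = g ∘ h decomposition for RANGE: h tracks the pair (max, min) and is self-decomposable under a merge operator, while the final function g subtracts. The sample multisets are made up.

```python
# RANGE = g ∘ h: h is self-decomposable via the merge operator below,
# and the final function g turns the auxiliary pair into the result.
def h(values):
    return (max(values), min(values))

def merge(a, b):
    # the merge operator: h(X ⊎ Y) = merge(h(X), h(Y))
    return (max(a[0], b[0]), min(a[1], b[1]))

def g(acc):
    return acc[0] - acc[1]

X, Y = [3, 9, 4], [1, 7]
assert merge(h(X), h(Y)) == h(X + Y)   # self-decomposability of h
print(g(merge(h(X), h(Y))))            # 8, the RANGE of the combined multiset
```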
In the MapReduce framework, these steps are known as InitialReduce (value on an individual record/singleton set), Combine (binary merge of two aggregations), and FinalReduce (final function on the auxiliary values),[5] and moving decomposable aggregation before the Shuffle phase is known as an InitialReduce step.[6]
Decomposable aggregation functions are important in online analytical processing (OLAP), as they allow aggregation queries to be computed on the pre-computed results in the OLAP cube, rather than on the base data.[7] For example, it is easy to support COUNT, MAX, MIN, and SUM in OLAP, since these can be computed for each cell of the OLAP cube and then summarized ("rolled up"), but it is difficult to support MEDIAN, as that must be computed for every view separately.
In order to calculate the average and standard deviation from aggregate data, it is necessary to have available for each group: the total of values (Σxᵢ = SUM(x)), the number of values (N = COUNT(x)), and the total of squares of the values (Σxᵢ² = SUM(x²)).[8]
AVG:
AVG(X ⊎ Y) = (AVG(X) · COUNT(X) + AVG(Y) · COUNT(Y)) / (COUNT(X) + COUNT(Y))
or
AVG(X ⊎ Y) = (SUM(X) + SUM(Y)) / (COUNT(X) + COUNT(Y))
or, only if COUNT(X) = COUNT(Y),
AVG(X ⊎ Y) = (AVG(X) + AVG(Y)) / 2
SUM(x²): the sum of squares of the values is needed in order to calculate the standard deviation of groups:
SUM(X² ⊎ Y²) = SUM(X²) + SUM(Y²)
STDDEV: for a finite population with equal probabilities at all points,[9]
STDDEV(X) = s(x) = √((1/N) Σᵢ (xᵢ − x̄)²) = √((1/N) Σᵢ xᵢ² − x̄²) = √(SUM(x²) / COUNT(x) − AVG(x)²)
This means that the standard deviation is equal to the square root of the difference between the average of the squares of the values and the square of the average value.
STDDEV(X ⊎ Y) = √(SUM(X² ⊎ Y²) / COUNT(X ⊎ Y) − AVG(X ⊎ Y)²)
= √((SUM(X²) + SUM(Y²)) / (COUNT(X) + COUNT(Y)) − ((SUM(X) + SUM(Y)) / (COUNT(X) + COUNT(Y)))²)
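The merged-group formulas can be checked numerically in Python, using the population (1/N) standard deviation as in the text; the two sample groups are made up.

```python
import math

# Verify the merged AVG and STDDEV formulas against direct computation
# on the combined multiset (hypothetical groups X and Y).
X, Y = [2.0, 4.0, 4.0], [4.0, 5.0, 5.0, 7.0]

def aux(group):
    # auxiliary numbers per group: SUM(x), SUM(x^2), COUNT(x)
    return (sum(group), sum(v * v for v in group), len(group))

sx, sx2, nx = aux(X)
sy, sy2, ny = aux(Y)

avg = (sx + sy) / (nx + ny)
std = math.sqrt((sx2 + sy2) / (nx + ny) - avg ** 2)

combined = X + Y
direct_avg = sum(combined) / len(combined)
direct_std = math.sqrt(sum((v - direct_avg) ** 2 for v in combined) / len(combined))
print(abs(avg - direct_avg) < 1e-12, abs(std - direct_std) < 1e-12)  # True True
```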
|
https://en.wikipedia.org/wiki/Aggregate_function
|
Analog signal processing is a type of signal processing conducted on continuous analog signals by some analog means (as opposed to discrete digital signal processing, where the signal processing is carried out by a digital process). "Analog" indicates something that is mathematically represented as a set of continuous values. This differs from "digital", which uses a series of discrete quantities to represent a signal. Analog values are typically represented as a voltage, electric current, or electric charge around components in the electronic devices. An error or noise affecting such physical quantities will result in a corresponding error in the signals represented by such physical quantities.
Examples of analog signal processing include crossover filters in loudspeakers; "bass", "treble" and "volume" controls on stereos; and "tint" controls on TVs. Common analog processing elements include capacitors, resistors and inductors (as the passive elements) and transistors or op-amps (as the active elements).
A system's behavior can be mathematically modeled and is represented in the time domain as h(t) and in the frequency domain as H(s), where s is a complex number of the form s = a + ib, or s = a + jb in electrical engineering terms (electrical engineers use "j" instead of "i" because current is represented by the variable i). Input signals are usually called x(t) or X(s) and output signals are usually called y(t) or Y(s).
Convolution is the basic concept in signal processing stating that an input signal can be combined with the system's function to find the output signal. It is the integral of the product of two waveforms after one has been reversed and shifted; the symbol for convolution is ∗.
The convolution integral is (f ∗ g)(t) = ∫ f(τ) g(t − τ) dτ, taken from a to b, and is used to find the convolution of a signal and a system; typically a = −∞ and b = +∞.
Consider two waveforms f and g. By calculating the convolution, we determine how much a reversed function g must be shifted along the x-axis to become identical to function f. The convolution function essentially reverses and slides function g along the axis, and calculates the integral of their (f and the reversed and shifted g) product for each possible amount of sliding. When the functions match, the value of (f*g) is maximized. This occurs because when positive areas (peaks) or negative areas (troughs) are multiplied, they contribute to the integral.
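The same reverse-shift-multiply-integrate idea can be shown in its discrete form as a Python sketch; the sequences are assumed to start at index 0, and the sample values are made up.

```python
# Discrete convolution: (f*g)[n] = sum over k of f[k] * g[n-k].
# g is reversed and slid along f; each output sample sums (integrates)
# the pointwise product at one shift.
def convolve(f, g):
    out = [0.0] * (len(f) + len(g) - 1)
    for n in range(len(out)):
        for k in range(len(f)):
            if 0 <= n - k < len(g):
                out[n] += f[k] * g[n - k]
    return out

print(convolve([1, 2, 3], [0, 1, 0.5]))  # [0.0, 1.0, 2.5, 4.0, 1.5]
```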
The Fourier transform is a function that transforms a signal or system in the time domain into the frequency domain, but it only works for certain functions. The constraint on which systems or signals can be transformed by the Fourier transform is that the signal must be absolutely integrable: ∫ |x(t)| dt < ∞, taken over all time.
This is the Fourier transform integral: X(jω) = ∫ x(t) e^(−jωt) dt, taken over all time (−∞ to +∞).
Usually the Fourier transform integral isn't used to determine the transform; instead, a table of transform pairs is used to find the Fourier transform of a signal or system. The inverse Fourier transform is used to go from the frequency domain to the time domain: x(t) = (1/2π) ∫ X(jω) e^(jωt) dω, taken over all frequencies.
Each signal or system that can be transformed has a unique Fourier transform. There is only one time signal for any frequency signal, and vice versa.
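The sampled counterpart of the Fourier transform, the discrete Fourier transform, can illustrate the time-to-frequency mapping; this minimal Python sketch (not from the source) projects samples onto complex exponentials.

```python
import cmath
import math

# DFT: project N samples onto complex exponentials exp(-j*2*pi*k*n/N),
# the sampled analogue of the Fourier transform integral.
def dft(x):
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

# One full cosine cycle over 8 samples: energy appears only at k = 1 and k = 7.
x = [math.cos(2 * math.pi * n / 8) for n in range(8)]
X = dft(x)
print([round(abs(v), 6) for v in X])
```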
The Laplace transform is a generalized Fourier transform. It allows a transform of any system or signal because it is a transform into the complex plane instead of just the jω line like the Fourier transform. The major difference is that the Laplace transform has a region of convergence for which the transform is valid. This implies that a signal in frequency may have more than one signal in time; the correct time signal for the transform is determined by the region of convergence. If the region of convergence includes the jω axis, jω can be substituted into the Laplace transform for s and the result is the same as the Fourier transform. The (bilateral) Laplace transform is: X(s) = ∫ x(t) e^(−st) dt, taken over all time (−∞ to +∞),
and the inverse Laplace transform, if all the singularities of X(s) are in the left half of the complex plane, is: x(t) = (1/2πj) ∫ X(s) e^(st) ds, evaluated along a vertical line s = σ + jω (from σ − j∞ to σ + j∞) lying within the region of convergence.
Bode plots are plots of magnitude vs. frequency and phase vs. frequency for a system. The magnitude axis is in decibels (dB). The phase axis is in either degrees or radians. The frequency axes are on a logarithmic scale. These are useful because, for sinusoidal inputs, the output is the input multiplied by the value of the magnitude plot at that frequency and shifted by the value of the phase plot at that frequency.
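For instance, the Bode magnitude and phase of a first-order low-pass H(jω) = 1 / (1 + jω/ωc) can be tabulated in Python; the corner frequency ωc = 1000 rad/s is an arbitrary choice for illustration.

```python
import cmath
import math

# Evaluate a first-order low-pass on the j-omega axis, as a Bode plot would:
# magnitude in dB, phase in degrees (corner frequency wc is an assumption).
wc = 1000.0
for w in (100.0, 1000.0, 10000.0):
    H = 1 / (1 + 1j * w / wc)
    mag_db = 20 * math.log10(abs(H))
    phase_deg = math.degrees(cmath.phase(H))
    print(f"w = {w:7.0f} rad/s: {mag_db:7.2f} dB, {phase_deg:6.1f} deg")
```

At ω = ωc the magnitude is about −3 dB and the phase is −45°, the familiar corner-frequency behavior.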
This is the domain that most people are familiar with. A plot in the time domain shows the amplitude of the signal with respect to time.
A plot in the frequency domain shows either the phase shift or the magnitude of a signal at each frequency at which it exists. These can be found by taking the Fourier transform of a time signal and are plotted similarly to a Bode plot.
While any signal can be used in analog signal processing, there are many types of signals that are used very frequently.
Sinusoids are the building block of analog signal processing. All real-world signals can be represented as an infinite sum of sinusoidal functions via a Fourier series. A sinusoidal function can be represented in terms of an exponential by the application of Euler's formula.
An impulse (Dirac delta function) is defined as a signal that has an infinite magnitude and an infinitesimally narrow width with an area under it of one, centered at zero. An impulse can be represented as an infinite sum of sinusoids that includes all possible frequencies. It is not, in reality, possible to generate such a signal, but it can be sufficiently approximated with a large-amplitude, narrow pulse to produce the theoretical impulse response in a network to a high degree of accuracy. The symbol for an impulse is δ(t). If an impulse is used as an input to a system, the output is known as the impulse response. The impulse response defines the system, because all possible frequencies are represented in the input.
A unit step function, also called the Heaviside step function, is a signal that has a magnitude of zero before zero and a magnitude of one after zero. The symbol for a unit step is u(t). If a step is used as the input to a system, the output is called the step response. The step response shows how a system responds to a sudden input, similar to turning on a switch. The period before the output stabilizes is called the transient part of a signal. The step response can be multiplied with other signals to show how the system responds when an input is suddenly turned on.
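As an example not in the text, the step response of a first-order low-pass with time constant τ is y(t) = 1 − e^(−t/τ); a small numeric sketch (τ = 1 is an arbitrary choice) shows the transient settling toward the final value.

```python
import math

# Step response of a first-order low-pass: y(t) = 1 - exp(-t/tau).
# The transient part decays; after a few time constants the output settles.
tau = 1.0
for t in (0.0, 1.0, 3.0, 5.0):
    print(f"t = {t}: y = {1 - math.exp(-t / tau):.4f}")
```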
The unit step function is related to the Dirac delta function by: u(t) = ∫ δ(τ) dτ, integrated from −∞ to t; equivalently, du/dt = δ(t).
Linearity means that if you have two inputs and two corresponding outputs, then taking a linear combination of those two inputs yields the same linear combination of the outputs. An example of a linear system is a first-order low-pass or high-pass filter. Linear systems are made out of analog devices that demonstrate linear properties. These devices don't have to be entirely linear, but must have a region of operation that is linear. An operational amplifier is a non-linear device, but has a region of operation that is linear, so it can be modeled as linear within that region of operation.
Time-invariance means it doesn't matter when you start a system; the same output will result. For example, if you have a system and put an input into it today, you would get the same output if you started the system tomorrow instead.
There aren't any real systems that are LTI, but many systems can be modeled as LTI for simplicity in determining what their output will be. All systems have some dependence on things like temperature, signal level, or other factors that cause them to be non-linear or non-time-invariant, but most are stable enough to model as LTI. Linearity and time-invariance are important because they are the only types of systems that can be easily solved using conventional analog signal processing methods. Once a system becomes non-linear or non-time-invariant, it becomes a non-linear differential equations problem, and there are very few of those that can actually be solved. (Haykin & Van Veen 2003)
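Linearity can be checked numerically for a simple system; here a small FIR filter (its coefficients and inputs are arbitrary choices for illustration) stands in for an LTI system.

```python
# Superposition check: for a linear system, input a*x1 + b*x2 must yield
# output a*y1 + b*y2. The system here is convolution with fixed coefficients.
def system(x, h=(0.5, 0.25, 0.25)):
    return [sum(h[k] * x[n - k] for k in range(len(h)) if 0 <= n - k < len(x))
            for n in range(len(x) + len(h) - 1)]

x1, x2 = [1.0, 0.0, 2.0], [0.0, 3.0, 1.0]
a, b = 2.0, -1.0
lhs = system([a * u + b * v for u, v in zip(x1, x2)])
rhs = [a * u + b * v for u, v in zip(system(x1), system(x2))]
print(all(abs(p - q) < 1e-12 for p, q in zip(lhs, rhs)))  # True
```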
|
https://en.wikipedia.org/wiki/Analog_signal_processing
|
In formal ontology, a branch of metaphysics, and in ontological computer science, mereotopology is a first-order theory, embodying mereological and topological concepts, of the relations among wholes, parts, parts of parts, and the boundaries between parts.
Mereotopology begins in philosophy with theories articulated by A. N. Whitehead in several books and articles he published between 1916 and 1929, drawing in part on the mereogeometry of De Laguna (1922). The first to have proposed the idea of a point-free definition of the concept of topological space in mathematics was Karl Menger in his book Dimensionstheorie (1928); see also his (1940). The early historical background of mereotopology is documented in Bélanger and Marquis (2013), and Whitehead's early work is discussed in Kneebone (1963: ch. 13.5) and Simons (1987: 2.9.1).[1] The theory of Whitehead's 1929 Process and Reality augmented the part-whole relation with topological notions such as contiguity and connection. Despite Whitehead's acumen as a mathematician, his theories were insufficiently formal, even flawed. By showing how Whitehead's theories could be fully formalized and repaired, Clarke (1981, 1985) founded contemporary mereotopology.[2] The theories of Clarke and Whitehead are discussed in Simons (1987: 2.10.2) and Lucas (2000: ch. 10). The entry Whitehead's point-free geometry includes two contemporary treatments of Whitehead's theories, due to Giangiacomo Gerla, each different from the theory set out in the next section.
Although mereotopology is a mathematical theory, we owe its subsequent development to logicians and theoretical computer scientists. Lucas (2000: ch. 10) and Casati and Varzi (1999: ch. 4, 5) are introductions to mereotopology that can be read by anyone having done a course in first-order logic. More advanced treatments of mereotopology include Cohn and Varzi (2003) and, for the mathematically sophisticated, Roeper (1997). For a mathematical treatment of point-free geometry, see Gerla (1995). Lattice-theoretic (algebraic) treatments of mereotopology as contact algebras have been applied to separate the topological from the mereological structure; see Stell (2000) and Düntsch and Winter (2004).
Barry Smith,[3] Anthony Cohn, Achille Varzi, and their co-authors have shown that mereotopology can be useful in formal ontology and computer science by allowing the formalization of relations such as contact, connection, boundaries, interiors, holes, and so on. Mereotopology has also been applied as a tool for qualitative spatial-temporal reasoning, with constraint calculi such as the Region Connection Calculus (RCC). It provides the starting point for the theory of fiat boundaries developed by Smith and Varzi,[4] which grew out of the attempt to distinguish formally between
Mereotopology is being applied by Salustri in the domain of digital manufacturing (Salustri, 2002) and by Smith and Varzi to the formalization of basic notions of ecology and environmental biology (Smith and Varzi, 1999,[7] 2002[8]). It has also been applied to deal with vague boundaries in geography (Smith and Mark, 2003[9]) and in the study of vagueness and granularity (Smith and Brogaard, 2002,[10] Bittner and Smith, 2001,[11] 2001a[12]).
Casati and Varzi (1999: ch. 4) set out a variety of mereotopological theories in a consistent notation. This section sets out several nested theories that culminate in their preferred theory GEMTC, and follows their exposition closely. The mereological part of GEMTC is the conventional theory GEM. Casati and Varzi do not say whether the models of GEMTC include any conventional topological spaces.
We begin with some domain of discourse, whose elements are called individuals (a synonym for mereology is "the calculus of individuals"). Casati and Varzi prefer limiting the ontology to physical objects, but others freely employ mereotopology to reason about geometric figures and events, and to solve problems posed by research in machine intelligence.
An upper-case Latin letter denotes both a relation and the predicate letter referring to that relation in first-order logic. Lower-case letters from the end of the alphabet denote variables ranging over the domain; letters from the start of the alphabet are names of arbitrary individuals. If a formula begins with an atomic formula followed by the biconditional, the subformula to the right of the biconditional is a definition of the atomic formula, whose variables are unbound. Otherwise, variables not explicitly quantified are tacitly universally quantified. The axiom Cn below corresponds to axiom C.n in Casati and Varzi (1999: ch. 4).
We begin with a topological primitive, a binary relation called connection; the atomic formula Cxy denotes that "x is connected to y." Connection is governed, at minimum, by the axioms:
C1. Cxx. (reflexive)
C2. Cxy → Cyx. (symmetric)
Let E, the binary relation of enclosure, be defined as:
Exy ↔ [Czx → Czy].
Exy is read as "y encloses x" and is also topological in nature. A consequence of C1-2 is that E is reflexive and transitive, and hence a preorder. If E is also assumed extensional, so that:
(Exa ↔ Exb) ↔ (a = b),
then E can be proved antisymmetric and thus becomes a partial order. Enclosure, notated xKy, is the single primitive relation of the theories in Whitehead (1919, 1920), the starting point of mereotopology.
Let parthood be the defining primitive binary relation of the underlying mereology, and let the atomic formula Pxy denote that "x is part of y". We assume that P is a partial order. Call the resulting minimalist mereological theory M.
If x is part of y, we postulate that y encloses x:
C3. Pxy → Exy.
C3 nicely connects mereological parthood to topological enclosure.
Let O, the binary relation of mereological overlap, be defined as:
Oxy ↔ ∃z[Pzx ∧ Pzy].
Let Oxy denote that "x and y overlap." With O in hand, a consequence of C3 is:
Oxy → Cxy.
Note that the converse does not necessarily hold. While things that overlap are necessarily connected, connected things do not necessarily overlap. If this were not the case, topology would merely be a model of mereology (in which "overlap" is always either primitive or defined).
Ground mereotopology (MT) is the theory consisting of primitive C and P, defined E and O, the axioms C1-3, and axioms assuring that P is a partial order. Replacing the M in MT with the standard extensional mereology GEM results in the theory GEMT.
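As an illustration (not from Casati and Varzi), the axioms of MT can be checked by brute force in a small finite model: take closed integer intervals as individuals, containment as parthood P, and intersection of the closed intervals as connection C. In this model C1-C3 and the consequence Oxy → Cxy all hold, while the converse fails for intervals that merely touch at an endpoint:

```python
from itertools import product

# Finite model of ground mereotopology MT: the individuals are the closed
# intervals [a, b] with integer endpoints 0 <= a < b <= 4.
domain = [(a, b) for a in range(5) for b in range(5) if a < b]

def P(x, y):  # parthood: x lies inside y
    return y[0] <= x[0] and x[1] <= y[1]

def C(x, y):  # connection: the closed intervals share at least one point
    return max(x[0], y[0]) <= min(x[1], y[1])

def E(x, y):  # enclosure: everything connected to x is connected to y
    return all(C(z, y) for z in domain if C(z, x))

def O(x, y):  # overlap: some interval is part of both
    return any(P(z, x) and P(z, y) for z in domain)

# C1 (reflexivity) and C2 (symmetry) of connection
assert all(C(x, x) for x in domain)
assert all(C(x, y) == C(y, x) for x, y in product(domain, domain))
# C3: parthood implies enclosure
assert all(E(x, y) for x, y in product(domain, domain) if P(x, y))
# Consequence of C3: overlap implies connection
assert all(C(x, y) for x, y in product(domain, domain) if O(x, y))
# The converse fails: [0,1] and [1,2] touch (connected) but do not overlap,
# since no interval of the domain is part of both.
assert C((0, 1), (1, 2)) and not O((0, 1), (1, 2))
```

The last assertion exhibits why topology does not collapse into mereology here: connection is strictly weaker than overlap in this model.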
Let IPxy denote that "x is an internal part of y." IP is defined as:
IPxy ↔ (Pxy ∧ (Czx → Ozy)).
Let σxφ(x) denote the mereological sum (fusion) of all individuals in the domain satisfying φ(x). σ is a variable-binding prefix operator. The axioms of GEM assure that this sum exists if φ(x) is a first-order formula. With σ and the relation IP in hand, we can define the interior of x, ix, as the mereological sum of all interior parts z of x, or:
ix =df σz[IPzx].
Two easy consequences of this definition are:
iW = W,
where W is the universal individual, and
C5.[13] P(ix)x. (Inclusion)
The operator i has two more axiomatic properties:
C6. i(ix) = ix. (Idempotence)
C7. i(x × y) = ix × iy,
where a × b is the mereological product of a and b, not defined when Oab is false. i distributes over product.
It can now be seen that i is isomorphic to the interior operator of topology. Hence the dual of i, the topological closure operator c, can be defined in terms of i, and Kuratowski's axioms for c are theorems. Likewise, given an axiomatization of c that is analogous to C5-7, i may be defined in terms of c, and C5-7 become theorems. Adding C5-7 to GEMT results in Casati and Varzi's preferred mereotopological theory, GEMTC.
x is self-connected if it satisfies the following predicate:
SCx ↔ ((Owx ↔ (Owy ∨ Owz)) → Cyz).
Note that the primitive and defined predicates of MT alone suffice for this definition. The predicate SC enables formalizing the necessary condition given in Whitehead's Process and Reality for the mereological sum of two individuals to exist: they must be connected. Formally:
C8. Cxy → ∃z[SCz ∧ Ozx ∧ (Pwz → (Owx ∨ Owy))].
Given some mereotopology X, adding C8 to X results in what Casati and Varzi call the Whiteheadian extension of X, denoted WX. Hence the theory whose axioms are C1-8 is WGEMTC.
The converse of C8 is a GEMTC theorem. Hence, given the axioms of GEMTC, C is a defined predicate if O and SC are taken as primitive predicates.
If the underlying mereology is atomless and weaker than GEM, the axiom that assures the absence of atoms (P9 in Casati and Varzi 1999) may be replaced by C9, which postulates that no individual has a topological boundary:
C9. ∀x∃y[Pyx ∧ (Czy → Ozx) ∧ ¬(Pxy ∧ (Czx → Ozy))].
When the domain consists of geometric figures, the boundaries can be points, curves, and surfaces. What boundaries could mean, given other ontologies, is not an easy matter and is discussed in Casati and Varzi (1999: ch. 5).
https://en.wikipedia.org/wiki/Mereotopology
In DOS memory management, conventional memory, also called base memory, is the first 640 kilobytes of the memory on IBM PC or compatible systems. It is the read-write memory directly addressable by the processor for use by the operating system and application programs. As memory prices rapidly declined, this design decision became a limitation in the use of large memory capacities until the introduction of operating systems and processors that made it irrelevant.
The 640 KB barrier is an architectural limitation of IBM PC compatible PCs. The Intel 8088 CPU, used in the original IBM PC, was able to address 1 MB (2^20 bytes), since the chip offered 20 address lines. In the design of the PC, the memory below 640 KB was for random-access memory on the motherboard or on expansion boards, and it was called the conventional memory area. The first memory segment (64 KB) of the conventional memory area is named lower memory or the low memory area. The remaining 384 KB beyond the conventional memory area, called the upper memory area (UMA), was reserved for system use and optional devices. The UMA was used for the ROM BIOS, additional read-only memory, BIOS extensions for fixed disk drives and video adapters, video adapter memory, and other memory-mapped input and output devices. The design of the original IBM PC placed the Color Graphics Adapter (CGA) memory map in the UMA.
The need for more RAM grew faster than the needs of hardware to utilize the reserved addresses, so RAM was eventually mapped into these unused upper areas to utilize all available addressable space. This introduced a reserved "hole" (or several holes) into the set of addresses occupied by hardware that could be used for arbitrary data. Avoiding such a hole was difficult and awkward, and was not supported by DOS or most programs that could run on it. Later, the space between the holes would be used as upper memory blocks (UMBs).
To maintain compatibility with older operating systems and applications, the 640 KB barrier remained part of the PC design even after the 8086/8088 had been replaced with the Intel 80286 processor, which could address up to 16 MB of memory in protected mode. The 1 MB barrier also remained as long as the 286 was running in real mode, since DOS required real mode, which uses the segment and offset registers in an overlapped manner such that addresses with more than 20 bits are not possible. The barrier is still present in IBM PC compatibles today if they are running in real mode, as used by DOS. Even the most modern Intel PCs still have the area between 640 and 1024 KB reserved.[3][4] This, however, is invisible to programs (or even most of the operating system) on newer operating systems (such as Windows, Linux, or Mac OS X) that use virtual memory, because they have no awareness of physical memory addresses at all. Instead they operate within a virtual address space, which is defined independently of available RAM addresses.[5]
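The overlapped segment:offset scheme can be sketched in a few lines (an illustrative model, not any particular emulator's code): the segment is shifted left four bits and added to the offset, and only twenty bits of the result survive, which is exactly why real mode cannot express addresses beyond 1 MB.

```python
def real_mode_address(segment: int, offset: int) -> int:
    """Physical address generated by an 8086/8088 in real mode.

    The 16-bit segment register is shifted left 4 bits and added to the
    16-bit offset, yielding a 20-bit address; with only 20 address lines,
    any carry out of bit 19 is simply lost (wraparound).
    """
    return ((segment << 4) + offset) & 0xFFFFF

# Start of conventional memory, and the 640 KB boundary (segment A000):
assert real_mode_address(0x0000, 0x0000) == 0x00000
assert real_mode_address(0xA000, 0x0000) == 0xA0000
# The largest address the overlapped scheme can express before wrapping:
assert real_mode_address(0xFFFF, 0x000F) == 0xFFFFF   # 1 MB - 1
# One byte further and the 20-bit address wraps back to zero:
assert real_mode_address(0xFFFF, 0x0010) == 0x00000
```

Many segment:offset pairs alias the same physical byte; the wraparound at the top is the behavior the later A20 gate was introduced to control.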
Some motherboards feature a "Memory Hole at 15 Megabytes" option, required for certain VGA video cards that need exclusive access to one particular megabyte for video memory. Later video cards using the AGP (PCI memory space) bus can have 256 MB of memory with a 1 GB aperture size.
One technique used on early IBM XT computers was to install additional RAM into the video memory address range and push the limit up to the start of the Monochrome Display Adapter (MDA). Sometimes software or a custom address decoder was required for this to work. This moved the barrier to 704 KB (with MDA/HGC) or 736 KB (with CGA).[6][7]
Memory managers on 386-based systems (such as QEMM or MEMMAX (+V) in DR-DOS) could achieve the same effect, adding conventional memory at 640 KB and moving the barrier to 704 KB (up to segment B000, the start of MDA/HGC) or 736 KB (up to segment B800, the start of CGA).[7] Only CGA could be used in this situation, because Enhanced Graphics Adapter (EGA) video memory was immediately adjacent to the conventional memory area below the 640 KB line; the same memory area could not be used both for the frame buffer of the video card and for transient programs.
All Computers' piggy-back add-on memory management units, the AllCard for XT-[8][9] and the Chargecard[10] for 286/386SX-class computers, as well as MicroWay's ECM (Extended Conventional Memory) add-on board,[11] allowed normal memory to be mapped into the A0000–EFFFF (hex) address range, giving up to 952 KB for DOS programs. Programs such as Lotus 1-2-3, which accessed video memory directly, needed to be patched to handle this memory layout. Therefore, the 640 KB barrier was removed at the cost of hardware compatibility.[10]
It was also possible to use console redirection[12] (either by specifying an alternative console device like AUX: when initially invoking COMMAND.COM, or by using CTTY later on) to direct output to, and receive input from, a dumb terminal or another computer running a terminal emulator. Assuming the system BIOS still permitted the machine to boot (which is often the case at least with BIOSes for embedded PCs), the video card in such a so-called headless computer could then be removed completely, and the system could provide a total of 960 KB of contiguous DOS memory for programs to load.
Similar usage was possible on many DOS-compatible but not IBM-compatible computers with a non-fragmented memory layout, for example SCP S-100 bus systems equipped with their 8086 CPU card CP-200B and up to sixteen SCP 110A memory cards (with 64 KB RAM on each of them) for a total of up to 1024 KB (without a video card, but utilizing console redirection, and after mapping out the boot/BIOS ROM),[13] the Victor 9000/Sirius 1, which supported up to 896 KB, or the Apricot PC, with more contiguous DOS memory to be used under its custom version of MS-DOS.
Most standard programs written for DOS did not necessarily need 640 KB or more of memory. Instead, driver software and utilities referred to as terminate-and-stay-resident programs (TSRs) could be used in addition to the standard DOS software. These drivers and utilities typically used some conventional memory permanently, reducing the total available for standard DOS programs.
Some very common DOS drivers and TSRs using conventional memory included:
As can be seen above, many of these drivers and TSRs could be considered practically essential to the full-featured operation of the system. But in many cases the computer user had to choose between being able to run certain standard DOS programs and having all their favorite drivers and TSRs loaded. Loading the entire list shown above is likely impractical or impossible if the user also wants to run a standard DOS program.
In some cases drivers or TSRs would have to be unloaded from memory to run certain programs, and then reloaded after running the program. For drivers that could not be unloaded, later versions of DOS included a startup menu capability to allow the computer user to select various groups of drivers and TSRs to load before running certain high-memory-usage standard DOS programs.
As DOS applications grew larger and more complex in the late 1980s and early 1990s, it became common practice to free up conventional memory by moving device drivers and TSR programs into upper memory blocks (UMBs) in the upper memory area (UMA) at boot, in order to maximize the conventional memory available for applications. This had the advantage of not requiring hardware changes, and it preserved application compatibility.
This feature was first provided by third-party products such as QEMM, before being built into DR DOS 5.0 in 1990 and then MS-DOS 5.0 in 1991. Most users used the accompanying EMM386 driver provided in MS-DOS 5, but third-party products such as QEMM also proved popular.
At startup, drivers could be loaded high using the "DEVICEHIGH=" directive, while TSRs could be loaded high using the "LOADHIGH", "LH" or "HILOAD" directives. If the operation failed, the driver or TSR would automatically load into the regular conventional memory instead.
CONFIG.SYS, loading ANSI.SYS into UMBs, no EMS support enabled:
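The listing itself is not preserved in this copy; a typical MS-DOS 5+ CONFIG.SYS matching that description (directory paths are assumed) might read:

```
DEVICE=C:\DOS\HIMEM.SYS
DEVICE=C:\DOS\EMM386.EXE NOEMS
DOS=HIGH,UMB
DEVICEHIGH=C:\DOS\ANSI.SYS
```

Here EMM386 with the NOEMS switch provides UMBs without emulating EMS, DOS=HIGH,UMB links DOS to the upper memory blocks, and DEVICEHIGH loads ANSI.SYS into one of them.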
AUTOEXEC.BAT, loading MOUSE, DOSKEY, and SMARTDRV into UMBs if possible:
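Again the original listing is missing here; a typical AUTOEXEC.BAT fragment along those lines (paths and file names assumed) might be:

```
LH C:\DOS\MOUSE.COM
LH C:\DOS\DOSKEY.COM
LH C:\DOS\SMARTDRV.EXE
```

Each LH (LOADHIGH) line attempts to place the TSR in an upper memory block, silently falling back to conventional memory if no block fits.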
The ability of DOS versions 5.0 and later to move their own system core code into the high memory area (HMA) through the DOS=HIGH command gave another boost to free memory.
Hardware expansion boards could use any of the upper memory area for ROM addressing, so the upper memory blocks were of variable size and in different locations for each computer, depending on the hardware installed. Some windows of upper memory could be large and others small. Loading drivers and TSRs high would pick a block and try to fit the program into it, until a block was found where it fit, or it would go into conventional memory.
An unusual aspect of drivers and TSRs is that they would use different amounts of conventional and/or upper memory based on the order in which they were loaded. This could be used to advantage by repeatedly loading the programs in different orders and checking how much memory was free after each permutation. For example, if there were a 50 KB UMB and a 10 KB UMB, and programs needing 8 KB and 45 KB were loaded, the 8 KB program might go into the 50 KB UMB, preventing the second from loading. Later versions of DOS allowed the use of a specific load address for a driver or TSR, to fit drivers/TSRs more tightly together.
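The order-dependence can be sketched with a simple first-fit simulation of the example above (a simplification; real DOS placement was more involved than plain first-fit):

```python
def load_first_fit(umbs, programs):
    """First-fit loading of driver/TSR sizes (KB) into UMB sizes (KB).

    Returns the list of program sizes that did not fit in any UMB and
    therefore fall back to conventional memory.
    """
    free = list(umbs)                  # remaining space in each UMB
    conventional = []
    for size in programs:
        for i, space in enumerate(free):
            if size <= space:
                free[i] -= size        # load high into this block
                break
        else:
            conventional.append(size)  # no block fits: load low
    return conventional

umbs = [50, 10]
# Loading the 8 KB program first puts it into the 50 KB block,
# leaving no room for the 45 KB program in upper memory:
assert load_first_fit(umbs, [8, 45]) == [45]
# Loading the 45 KB program first lets both load high:
assert load_first_fit(umbs, [45, 8]) == []
```

With the same two UMBs, simply reversing the load order changes which programs end up consuming conventional memory, which is exactly what users exploited by permuting their CONFIG.SYS and AUTOEXEC.BAT entries.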
In MS-DOS 6.0, Microsoft introduced MEMMAKER, which automated this process of block matching, matching the functionality that third-party memory managers offered. This automatic optimization often still did not provide the same result as doing it by hand, in the sense of providing the greatest amount of free conventional memory.
Also, in some cases third-party companies wrote special multi-function drivers that combined the capabilities of several standard DOS drivers and TSRs into a single very compact program that used just a few kilobytes of memory. For example, the functions of a mouse driver, CD-ROM driver, ANSI support, DOSKEY command recall, and disk caching would all be combined in one program, consuming just 1–2 kilobytes of conventional memory for normal driver/interrupt access and storing the rest of the multi-function program code in EMS or XMS memory.
The barrier was only overcome with the arrival of DOS extenders, which allowed DOS applications to run in 16-bit or 32-bit protected mode, but these were not very widely used outside of computer gaming. With a 32-bit DOS extender, a game could benefit from a 32-bit flat address space and the full 32-bit instruction set without the 66h/67h operand/address override prefixes. 32-bit DOS extenders required compiler support (32-bit compilers), while XMS and EMS worked with an old compiler targeting 16-bit real-mode DOS applications. The two most common specifications for DOS extenders were VCPI and, later, DPMI, the latter compatible with Windows 3.x.
The most notable DPMI-compliant DOS extender may be DOS/4GW, which shipped with Watcom compilers. It was very common in DOS games. Such a game would consist of either a DOS/4GW 32-bit kernel, or a stub that loaded a DOS/4GW kernel located in the path or in the same directory, and a 32-bit "linear executable". Utilities are available which can strip DOS/4GW out of such a program and allow the user to experiment with any of the several, and perhaps improved, DOS/4GW clones.
Prior to DOS extenders, if a user installed additional memory and wished to use it under DOS, they would first have to install and configure drivers to support either the expanded memory specification (EMS) or the extended memory specification (XMS), and run programs supporting one of these specifications.
EMS was a specification, available on all PCs including those based on the Intel 8086 and Intel 8088, which allowed add-on hardware to page small chunks of memory in and out (bank switching) of the "real mode" addressing space (0x0400–0xFFFF). This allowed 16-bit real-mode DOS programs to access several megabytes of RAM through a hole in real memory, typically (0xE000–0xEFFF). A program would then have to explicitly request the page to be accessed before using it. These memory locations could then be used arbitrarily until replaced by another page. This is very similar to modern paged virtual memory. However, in a virtual memory system the operating system handles all paging operations, while paging was explicit with EMS.
XMS provided a basic protocol which allowed 16-bit DOS programs to load chunks of 80286 or 80386 extended memory into low memory (addresses 0x0400–0xFFFF). A typical XMS driver had to switch to protected mode in order to load this memory. The problem with this approach was that while in 286 protected mode, direct DOS calls could not be made. The workaround was to implement a callback mechanism, requiring a reset of the 286; on the 286, this was a major problem. The Intel 80386, which introduced "virtual 8086 mode", allowed the guest kernel to emulate the 8086 and run the host operating system without having to actually force the processor back into "real mode". HIMEM.SYS 2.03 and higher used unreal mode on the 80386 and higher CPUs, while HIMEM.SYS 2.06 and higher used LOADALL to change undocumented internal registers on the 80286, significantly improving interrupt latency by avoiding repeated real mode/protected mode switches.[14]
Windows installs its own version of HIMEM.SYS[15] on DOS 3.3 and higher. The Windows HIMEM.SYS launches a 32-bit protected-mode XMS (n).0 services provider for the Windows Virtual Machine Manager, which then provides XMS (n-1).0 services to DOS boxes and the 16-bit Windows machine (e.g., DOS 7 HIMEM.SYS is XMS 3.0, but running the 'MEM' command in a Windows 95 DOS window shows XMS 2.0 information).
https://en.wikipedia.org/wiki/Console_redirection
In probability and statistics, a compound probability distribution (also known as a mixture distribution or contagious distribution) is the probability distribution that results from assuming that a random variable is distributed according to some parametrized distribution, with (some of) the parameters of that distribution themselves being random variables.
If the parameter is a scale parameter, the resulting mixture is also called a scale mixture.
The compound distribution ("unconditional distribution") is the result of marginalizing (integrating) over the latent random variable(s) representing the parameter(s) of the parametrized distribution ("conditional distribution").
A compound probability distribution is the probability distribution that results from assuming that a random variable X is distributed according to some parametrized distribution F with an unknown parameter θ that is again distributed according to some other distribution G. The resulting distribution H is said to be the distribution that results from compounding F with G. The parameter's distribution G is also called the mixing distribution or latent distribution. Technically, the unconditional distribution H results from marginalizing over G, i.e., from integrating out the unknown parameter(s) θ. Its probability density function is given by:

p_H(x) = ∫ p_F(x | θ) p_G(θ) dθ.
The same formula applies analogously if some or all of the variables are vectors.
From the above formula, one can see that a compound distribution essentially is a special case of a marginal distribution: the joint distribution of x and θ is given by p(x, θ) = p(x|θ) p(θ), and the compound results as its marginal distribution: p(x) = ∫ p(x, θ) dθ.
If the domain of θ is discrete, then the distribution is again a special case of a mixture distribution.
The compound distributionH{\displaystyle H}will depend on the specific expression of each distribution, as well as which parameter ofF{\displaystyle F}is distributed according to the distributionG{\displaystyle G}, and the parameters ofH{\displaystyle H}will include any parameters ofG{\displaystyle G}that are not marginalized, or integrated, out.
The support of H is the same as that of F, and if the latter is a two-parameter distribution parameterized with the mean and variance, some general properties exist.
The compound distribution's first two moments are given by the law of total expectation and the law of total variance:

E_H[X] = E_G[E_F[X | θ]]

Var_H(X) = E_G[Var_F(X | θ)] + Var_G(E_F[X | θ])
If the mean ofF{\displaystyle F}is distributed asG{\displaystyle G}, which in turn has meanμ{\displaystyle \mu }and varianceσ2{\displaystyle \sigma ^{2}}the expressions above implyEH[X]=EG[θ]=μ{\displaystyle \operatorname {E} _{H}[X]=\operatorname {E} _{G}[\theta ]=\mu }andVarH(X)=VarF(X|θ)+VarG(Y)=τ2+σ2{\displaystyle \operatorname {Var} _{H}(X)=\operatorname {Var} _{F}(X|\theta )+\operatorname {Var} _{G}(Y)=\tau ^{2}+\sigma ^{2}}, whereτ2{\displaystyle \tau ^{2}}is the variance ofF{\displaystyle F}.
letF{\displaystyle F}andG{\displaystyle G}be probability distributions parameterized with mean a variance asx∼F(θ,τ2)θ∼G(μ,σ2){\displaystyle {\begin{aligned}x&\sim {\mathcal {F}}(\theta ,\tau ^{2})\\\theta &\sim {\mathcal {G}}(\mu ,\sigma ^{2})\end{aligned}}}then denoting the probability density functions asf(x|θ)=pF(x|θ){\displaystyle f(x|\theta )=p_{F}(x|\theta )}andg(θ)=pG(θ){\displaystyle g(\theta )=p_{G}(\theta )}respectively, andh(x){\displaystyle h(x)}being the probability density ofH{\displaystyle H}we haveEH[X]=∫Fxh(x)dx=∫Fx∫Gf(x|θ)g(θ)dθdx=∫G∫Fxf(x|θ)dxg(θ)dθ=∫GEF[X|θ]g(θ)dθ{\displaystyle {\begin{aligned}\operatorname {E} _{H}[X]=\int _{F}xh(x)dx&=\int _{F}x\int _{G}f(x|\theta )g(\theta )d\theta dx\\&=\int _{G}\int _{F}xf(x|\theta )dx\ g(\theta )d\theta \\&=\int _{G}\operatorname {E} _{F}[X|\theta ]g(\theta )d\theta \end{aligned}}}and we have from the parameterizationF{\displaystyle {\mathcal {F}}}andG{\displaystyle {\mathcal {G}}}thatEF[X|θ]=∫Fxf(x|θ)dx=θEG[θ]=∫Gθg(θ)dθ=μ{\displaystyle {\begin{aligned}\operatorname {E} _{F}[X|\theta ]&=\int _{F}xf(x|\theta )dx=\theta \\\operatorname {E} _{G}[\theta ]&=\int _{G}\theta g(\theta )d\theta =\mu \end{aligned}}}and therefore the mean of the compound distributionEH[X]=μ{\displaystyle \operatorname {E} _{H}[X]=\mu }as per the expression for its first moment above.
The variance ofH{\displaystyle H}is given byEH[X2]−(EH[X])2{\displaystyle \operatorname {E} _{H}[X^{2}]-(\operatorname {E} _{H}[X])^{2}}, andEH[X2]=∫Fx2h(x)dx=∫Fx2∫Gf(x|θ)g(θ)dθdx=∫Gg(θ)∫Fx2f(x|θ)dxdθ=∫Gg(θ)(τ2+θ2)dθ=τ2∫Gg(θ)dθ+∫Gg(θ)θ2dθ=τ2+(σ2+μ2),{\displaystyle {\begin{aligned}\operatorname {E} _{H}[X^{2}]=\int _{F}x^{2}h(x)dx&=\int _{F}x^{2}\int _{G}f(x|\theta )g(\theta )d\theta dx\\&=\int _{G}g(\theta )\int _{F}x^{2}f(x|\theta )dx\ d\theta \\&=\int _{G}g(\theta )(\tau ^{2}+\theta ^{2})d\theta \\&=\tau ^{2}\int _{G}g(\theta )d\theta +\int _{G}g(\theta )\theta ^{2}d\theta \\&=\tau ^{2}+(\sigma ^{2}+\mu ^{2}),\end{aligned}}}given the fact that∫Fx2f(x∣θ)dx=EF[X2∣θ]=VarF(X∣θ)+(EF[X∣θ])2{\displaystyle \int _{F}x^{2}f(x\mid \theta )dx=\operatorname {E} _{F}[X^{2}\mid \theta ]=\operatorname {Var} _{F}(X\mid \theta )+(\operatorname {E} _{F}[X\mid \theta ])^{2}}and∫Gθ2g(θ)dθ=EG[θ2]=VarG(θ)+(EG[θ])2{\displaystyle \int _{G}\theta ^{2}g(\theta )d\theta =\operatorname {E} _{G}[\theta ^{2}]=\operatorname {Var} _{G}(\theta )+(\operatorname {E} _{G}[\theta ])^{2}}. Finally we getVarH(X)=EH[X2]−(EH[X])2=τ2+σ2{\displaystyle {\begin{aligned}\operatorname {Var} _{H}(X)&=\operatorname {E} _{H}[X^{2}]-(\operatorname {E} _{H}[X])^{2}\\&=\tau ^{2}+\sigma ^{2}\end{aligned}}}
Distributions of common test statistics result as compound distributions under their null hypothesis, for example in Student's t-test (where the test statistic results as the ratio of a normal and a chi-squared random variable), or in the F-test (where the test statistic is the ratio of two chi-squared random variables).
Compound distributions are useful for modeling outcomes exhibiting overdispersion, i.e., a greater amount of variability than would be expected under a certain model. For example, count data are commonly modeled using the Poisson distribution, whose variance is equal to its mean. The distribution may be generalized by allowing for variability in its rate parameter, implemented via a gamma distribution, which results in a marginal negative binomial distribution. This distribution is similar in shape to the Poisson distribution, but it allows for larger variances. Similarly, a binomial distribution may be generalized to allow for additional variability by compounding it with a beta distribution for its success probability parameter, which results in a beta-binomial distribution.
Besides ubiquitous marginal distributions that may be seen as special cases of compound distributions, in Bayesian inference compound distributions arise when, in the notation above, F represents the distribution of future observations and G is the posterior distribution of the parameters of F, given the information in a set of observed data. This gives a posterior predictive distribution. Correspondingly, for the prior predictive distribution, F is the distribution of a new data point while G is the prior distribution of the parameters.

Convolution of probability distributions (to derive the probability distribution of sums of random variables) may also be seen as a special case of compounding; here the sum's distribution essentially results from considering one summand as a random location parameter for the other summand.[1]
Compound distributions derived from exponential family distributions often have a closed form.
If analytical integration is not possible, numerical methods may be necessary.
Compound distributions may relatively easily be investigated using Monte Carlo methods, i.e., by generating random samples. It is often easy to generate random numbers from the distributions p(θ) and p(x|θ) and then utilize these to perform collapsed Gibbs sampling to generate samples from p(x).
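For instance, the gamma-Poisson compound mentioned above can be sampled by the simple two-stage scheme of drawing the rate and then the count (a toy sketch using only the standard library; the Poisson sampler below is Knuth's multiplicative method, adequate for moderate rates):

```python
import math
import random

def poisson(lam):
    """Sample from a Poisson distribution (Knuth's multiplicative method)."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        k += 1
        p *= random.random()
        if p <= L:
            return k - 1

# Compound a Poisson with a gamma-distributed rate: the marginal is
# negative binomial, whose variance exceeds its mean (overdispersion).
random.seed(1)
shape, scale = 3.0, 2.0          # gamma mean = 6, so E[X] = 6
samples = []
for _ in range(50_000):
    lam = random.gammavariate(shape, scale)   # draw the latent rate
    samples.append(poisson(lam))              # then draw the count

n = len(samples)
mean = sum(samples) / n
var = sum((x - mean) ** 2 for x in samples) / n

# A plain Poisson would have var == mean; here the compound has
# var = mean * (1 + scale), i.e. about 18 versus a mean of about 6.
assert var > 1.5 * mean
```

The inflated variance relative to the mean is exactly the overdispersion that motivates using the negative binomial in place of the Poisson for count data.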
A compound distribution may usually also be approximated to a sufficient degree by a mixture distribution using a finite number of mixture components, allowing one to derive an approximate density, distribution function, etc.[1]
Parameter estimation (maximum-likelihood or maximum-a-posteriori estimation) within a compound distribution model may sometimes be simplified by utilizing the EM algorithm.[2]
The notion of "compound distribution" as used e.g. in the definition of a compound Poisson distribution or compound Poisson process is different from the definition found in this article. The meaning in this article corresponds to what is used in, e.g., Bayesian hierarchical modeling.
The special case of compound probability distributions where the parametrized distribution F is the Poisson distribution is also called a mixed Poisson distribution.
https://en.wikipedia.org/wiki/Scale_mixture
Posh is a software framework used in cross-platform software development. It was created by Brian Hook.[1] It is BSD licensed and, as of 17 March 2014, at version 1.3.002.
The Posh software framework provides a header file and an optional C source file.
Posh does not provide alternatives where a host platform does not offer a feature, but informs through preprocessor macros what is supported and what is not. It sets macros to assist in compiling with various compilers (such as GCC, MSVC and OpenWatcom) and different host endiannesses. In its simplest form, only a single header file is required. In the optional C source file, there are functions for byte swapping and in-memory serialisation/deserialisation.
Brian Hook also created SAL (Simple Audio Library), which utilises Posh. Both are featured in his book "Write Portable Code". Posh is also used in Ferret and Vega Strike.
https://en.wikipedia.org/wiki/Poshlib
Rule induction is an area of machine learning in which formal rules are extracted from a set of observations. The rules extracted may represent a full scientific model of the data, or merely represent local patterns in the data.
Data mining in general, and rule induction in particular, attempt to create algorithms without explicit human programming, by analyzing existing data.[1]: 415 In the simplest case, a rule is expressed as an "if-then statement", as produced by the ID3 algorithm for decision tree learning.[2]: 7[1]: 348 Rule learning algorithms take training data as input and create rules by partitioning the table with cluster analysis.[2]: 7 A possible alternative to the ID3 algorithm is genetic programming, which evolves a program until it fits the data.[3]: 2
Different algorithms can be created and tested against input data in the WEKA software.[3]: 125 Additional tools are machine learning libraries for Python, like scikit-learn.
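The if-then flavor of rule induction can be sketched in a few lines of plain Python. The code below (a toy illustration, not the API of any of the libraries mentioned) uses the very simple 1R heuristic rather than full ID3: it picks the single attribute whose value-to-majority-class rules misclassify the fewest training rows, and emits one if-then rule per attribute value.

```python
from collections import Counter, defaultdict

def one_r(rows, target):
    """Toy 1R-style rule induction over a list of dict-shaped rows:
    choose the attribute whose value -> majority-class rules make the
    fewest errors on the training data, and return those rules as text."""
    attributes = [a for a in rows[0] if a != target]
    best = None
    for attr in attributes:
        by_value = defaultdict(Counter)
        for row in rows:
            by_value[row[attr]][row[target]] += 1
        # Errors: rows whose class is not the majority class for their value.
        errors = sum(sum(c.values()) - max(c.values()) for c in by_value.values())
        rules = {v: c.most_common(1)[0][0] for v, c in by_value.items()}
        if best is None or errors < best[0]:
            best = (errors, attr, rules)
    _, attr, rules = best
    return [f"if {attr} == {v!r} then {target} = {cls!r}" for v, cls in rules.items()]

data = [
    {"outlook": "sunny", "windy": "no",  "play": "yes"},
    {"outlook": "sunny", "windy": "yes", "play": "yes"},
    {"outlook": "rainy", "windy": "no",  "play": "no"},
    {"outlook": "rainy", "windy": "yes", "play": "no"},
]
for rule in one_r(data, "play"):
    print(rule)
```

On this tiny training table the "outlook" attribute classifies every row correctly, so the induced rule set partitions the data on outlook alone; real systems (ID3, WEKA's rule learners, scikit-learn trees) grow multi-attribute rules the same general way.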
Some major rule induction paradigms are:
Some rule induction algorithms are:
|
https://en.wikipedia.org/wiki/Rule_induction
|
In psychology and philosophy, theory of mind (often abbreviated to ToM) refers to the capacity to understand other individuals by ascribing mental states to them. A theory of mind includes the understanding that others' beliefs, desires, intentions, emotions, and thoughts may be different from one's own.[1] Possessing a functional theory of mind is crucial for success in everyday human social interactions. People utilize a theory of mind when analyzing, judging, and inferring other people's behaviors.
Theory of mind was first conceptualized by researchers evaluating the presence of theory of mind in animals.[2][3] Today, theory of mind research also investigates factors affecting theory of mind in humans, such as whether drug and alcohol consumption, language development, cognitive delays, age, and culture can affect a person's capacity to display theory of mind.
It has been proposed that deficits in theory of mind may occur in people with autism,[5] anorexia nervosa,[6] schizophrenia, dysphoria, addiction,[7] and brain damage caused by alcohol's neurotoxicity.[8][9] Neuroimaging shows that the medial prefrontal cortex (mPFC), the posterior superior temporal sulcus (pSTS), the precuneus, and the amygdala are associated with theory of mind tasks. Patients with frontal lobe or temporoparietal junction lesions find some theory of mind tasks difficult. One's theory of mind develops in childhood as the prefrontal cortex develops.[10]
The "theory of mind" is described as a theory because the behavior of the other person, such as their statements and expressions, is the only thing being directly observed; no one has direct access to the mind of another, and the existence and nature of the mind must be inferred.[11] It is typically assumed others have minds analogous to one's own; this assumption is based on three reciprocal social interactions, as observed in joint attention,[12] the functional use of language,[13] and the understanding of others' emotions and actions.[14] Theory of mind allows one to attribute thoughts, desires, and intentions to others, to predict or explain their actions, and to posit their intentions. It enables one to understand that mental states can be the cause of, and can be used to explain and predict, the behavior of others.[11] Being able to attribute mental states to others and understand them as causes of behavior implies, in part, that one must be able to conceive of the mind as a "generator of representations".[15] If a person does not have a mature theory of mind, it may be a sign of cognitive or developmental impairment.[16]
Theory of mind appears to be an innate potential ability in humans that requires social and other experience over many years for its full development. Different people may develop more or less effective theories of mind. Neo-Piagetian theories of cognitive development maintain that theory of mind is a byproduct of a broader hypercognitive ability of the human mind to register, monitor, and represent its own functioning.[17]
Empathy, the recognition and understanding of the states of mind of others, including their beliefs, desires, and particularly emotions, is a related concept. Empathy is often characterized as the ability to "put oneself into another's shoes". Recent neuro-ethological studies of animal behavior suggest that rodents may exhibit empathetic abilities.[18] While empathy is known as emotional perspective-taking, theory of mind is defined as cognitive perspective-taking.[19]
Research on theory of mind, in humans and animals, adults and children, normally and atypically developing, has grown rapidly in the years since Premack and Guy Woodruff's 1978 paper, "Does the chimpanzee have a theory of mind?".[11] The field of social neuroscience has also begun to address this debate by imaging the brains of humans while they perform tasks that require the understanding of an intention, belief, or other mental state in others.
An alternative account of theory of mind is given in operant psychology, which provides empirical evidence for a functional account of both perspective-taking and empathy. The most developed operant approach is founded on research on derived relational responding and is subsumed within relational frame theory. Derived relational responding relies on the ability to identify derived relations, or relationships between stimuli that are not directly learned or reinforced; for example, if "snake" is related to "danger" and "danger" is related to "fear", people may know to fear snakes even without learning an explicit connection between snakes and fear.[20] According to this view, empathy and perspective-taking comprise a complex set of derived relational abilities based on learning to discriminate and respond verbally to ever more complex relations between self, others, place, and time, and through established relations.[21][22][23]
Discussions of theory of mind have their roots in philosophical debate from the time of René Descartes' Second Meditation, which set the foundations for considering the science of the mind.
Two differing approaches in philosophy for explaining theory of mind are theory-theory and simulation theory.[24] Theory-theory claims that individuals use "theories" grounded in folk psychology to reason about others' minds. According to theory-theory, these folk psychology theories are developed automatically and innately from concepts and rules we have for ourselves, and then instantiated through social interactions.[25] In contrast, simulation theory argues that individuals simulate the internal states of others to build mental models of their cognitive processes. A basic example of this is someone imagining themselves in the position of another person to infer the other person's thoughts and feelings.[26] Theory of mind is also closely related to person perception and attribution theory from social psychology.
It is common and intuitive to assume that others have minds. People anthropomorphize non-human animals, inanimate objects, and even natural phenomena. Daniel Dennett referred to this tendency as taking an "intentional stance" toward things: we assume they have intentions, to help predict their future behavior.[27] However, there is an important distinction between taking an "intentional stance" toward something and entering a "shared world" with it. The intentional stance is a functional relationship, describing the use of a theory due to its practical utility rather than the accuracy of its representation of the world. As such, it is something people resort to during interpersonal interactions. A shared world is directly perceived, and its existence structures reality itself for the perceiver. It is not just a lens through which the perceiver views the world; in many ways it constitutes the cognition, as both its object and the blueprint used to structure perception into understanding.
The philosophical roots of another perspective, the relational frame theory (RFT) account of theory of mind, arise from contextual psychology, which refers to the study of organisms (both human and non-human) interacting in and with a historical and current situational context. It is an approach based on contextualism, a philosophy in which any event is interpreted as an ongoing act inseparable from its current and historical context and in which a radically functional approach to truth and meaning is adopted. As a variant of contextualism, RFT focuses on the construction of practical, scientific knowledge. This scientific form of contextual psychology is virtually synonymous with the philosophy of operant psychology.[28]
The study of which animals are capable of attributing knowledge and mental states to others, as well as the development of this ability in human ontogeny and phylogeny, identifies several behavioral precursors to theory of mind. Understanding attention, understanding of others' intentions, and imitative experience with others are hallmarks of a theory of mind that may be observed early in the development of what later becomes a full-fledged theory.
Simon Baron-Cohen proposed that infants' understanding of attention in others acts as a critical precursor to the development of theory of mind.[12] Understanding attention involves understanding that seeing can be directed selectively as attention, that the looker assesses the seen object as "of interest", and that seeing can induce beliefs. A possible illustration of theory of mind in infants is joint attention. Joint attention refers to when two people look at and attend to the same thing. Parents often use the act of pointing to prompt infants to engage in joint attention; understanding this prompt requires that infants take into account another person's mental state and understand that the person notices an object or finds it of interest. Baron-Cohen speculates that the inclination to spontaneously reference an object in the world as of interest, via pointing ("protodeclarative pointing"), and to likewise appreciate the directed attention of another, may be the underlying motive behind all human communication.[12]
Understanding others' intentions is another critical precursor to understanding other minds because intentionality is a fundamental feature of mental states and events. The "intentional stance" was defined by Daniel Dennett[29] as an understanding that others' actions are goal-directed and arise from particular beliefs or desires. Both two- and three-year-old children could discriminate when an experimenter intentionally or accidentally marked a box with stickers.[30] Even earlier in development, Andrew N. Meltzoff found that 18-month-old infants could perform target tasks involving the manipulation of objects that adult experimenters attempted and failed, suggesting the infants could represent the object-manipulating behavior of adults as involving goals and intentions.[31] While attribution of intention and knowledge is investigated in young humans and nonhuman animals to detect precursors to a theory of mind, Gagliardi et al. have pointed out that even adult humans do not always act in a way consistent with an attributional perspective (i.e., based on attribution of knowledge to others).[32] In their experiment, adult human subjects attempted to choose the container baited with a small object from a selection of four containers when guided by confederates who could not see which container was baited.
Research in developmental psychology suggests that an infant's ability to imitate others lies at the origins of both theory of mind and other social-cognitive achievements like perspective-taking and empathy.[33] According to Meltzoff, the infant's innate understanding that others are "like me" allows them to recognize the equivalence between the physical and mental states apparent in others and those felt by the self. For example, the infant uses their own experiences, orienting their head and eyes toward an object of interest, to understand the movements of others who turn toward an object; that is, they will generally attend to objects of interest or significance. Some researchers in comparative disciplines have hesitated to put too much weight on imitation as a critical precursor to advanced human social-cognitive skills like mentalizing and empathizing, especially if true imitation is no longer employed by adults. A test of imitation by Alexandra Horowitz found that adult subjects imitated an experimenter demonstrating a novel task far less closely than children did. Horowitz points out that the precise psychological state underlying imitation is unclear and cannot, by itself, be used to draw conclusions about the mental states of humans.[34]
While much research has been done on infants, theory of mind develops continuously throughout childhood and into late adolescence as the synapses in the prefrontal cortex develop. The prefrontal cortex is thought to be involved in planning and decision-making.[35] Children seem to develop theory of mind skills sequentially. The first skill to develop is the ability to recognize that others have diverse desires. Children are able to recognize that others have diverse beliefs soon after. The next skill to develop is recognizing that others have access to different knowledge bases. Finally, children are able to understand that others may have false beliefs and that others are capable of hiding emotions. While this sequence represents the general trend in skill acquisition, some cultures seem to place more emphasis on certain skills, leading more valued skills to develop before less valued ones. For example, in individualistic cultures such as the United States, a greater emphasis is placed on the ability to recognize that others have different opinions and beliefs. In a collectivistic culture, such as China, this skill may not be as important and therefore may not develop until later.[36]
There is evidence that the development of theory of mind is closely intertwined with language development in humans. One meta-analysis showed a moderate to strong correlation (r = 0.43) between performance on theory of mind and language tasks.[37] Both language and theory of mind begin to develop around the same time in children (between ages two and five), but many other abilities develop during this same time period as well, and they do not produce such high correlations with one another or with theory of mind.
Pragmatic theories of communication assume that infants must possess an understanding of beliefs and mental states of others to infer the communicative content that proficient language users intend to convey.[38]Since spoken phrases can have different meanings depending on context, theory of mind can play a crucial role in understanding the intentions of others and inferring the meaning of words. Some empirical results suggest that even 13-month-old infants have an early capacity for communicative mind-reading that enables them to infer what relevant information is transferred between communicative partners, which implies that human language relies at least partially on theory of mind skills.[39]
Carol A. Miller posed further possible explanations for this relationship. Perhaps the extent of verbal communication and conversation involving children in a family could explain theory of mind development. Such language exposure could help introduce a child to the different mental states and perspectives of others.[40]Empirical findings indicate that participation in family discussion predicts scores on theory of mind tasks,[41]and that deaf children who have hearing parents and may not be able to communicate with their parents much during early years of development tend to score lower on theory of mind tasks.[42]
Another explanation of the relationship between language and theory of mind development has to do with a child's understanding of mental-state words such as "think" and "believe". Since a mental state is not something that one can observe from behavior, children must learn the meanings of words denoting mental states from verbal explanations alone, requiring knowledge of the syntactic rules, semantic systems, and pragmatics of a language.[40]Studies have shown that understanding of these mental state words predicts theory of mind in four-year-olds.[43]
A third hypothesis is that the ability to distinguish a whole sentence ("Jimmy thinks the world is flat") from its embedded complement ("the world is flat") and understand that one can be true while the other can be false is related to theory of mind development. Recognizing these complements as being independent of one another is a relatively complex syntactic skill and correlates with increased scores on theory of mind tasks in children.[44]
There is also evidence that the areas of the brain responsible for language and theory of mind are closely connected. The temporoparietal junction (TPJ) is involved in the ability to acquire new vocabulary, as well as to perceive and reproduce words. The TPJ also contains areas that specialize in recognizing faces, voices, and biological motion, and in theory of mind. Since all of these areas are located so closely together, it is reasonable to suspect that they work together. Studies have reported an increase in activity in the TPJ when patients are absorbing information through reading or images regarding other people's beliefs, but not while observing information about physical control stimuli.[45]
Adults have theory of mind concepts that they developed as children (concepts such as belief, desire, knowledge, and intention). They use these concepts to meet the diverse demands of social life, ranging from snap decisions about how to trick an opponent in a competitive game, to keeping up with who knows what in a fast-moving conversation, to judging the guilt or innocence of the accused in a court of law.[46]
Boaz Keysar, Dale Barr, and colleagues found that adults often failed to use their theory of mind abilities to interpret a speaker's message, and acted as if unaware that the speaker lacked critical knowledge about a task. In one study, a confederate instructed adult participants to rearrange objects, some of which were not visible to the confederate, as part of a communication game. Only objects that were visible to both the confederate and the participant were part of the game. Despite knowing that the confederate could not see some of the objects, a third of the participants still tried to move those objects.[47] Other studies show that adults are prone to egocentric biases, whereby they are influenced by their own beliefs, knowledge, or preferences when judging those of other people, or that they neglect other people's perspectives entirely.[48] There is also evidence that adults with greater memory, inhibitory capacity, and motivation are more likely to use their theory of mind abilities.[49]
In contrast, evidence about indirect effects of thinking about other people's mental states suggests that adults may sometimes use their theory of mind automatically. Agnes Kovacs and colleagues measured the time it took adults to detect the presence of a ball as it was revealed from behind an occluder. They found that adults' speed of response was influenced by whether another person (the "agent") in the scene thought there was a ball behind the occluder, even though adults were not asked to pay attention to what the agent thought.[50]
Dana Samson and colleagues measured the time it took adults to judge the number of dots on the wall of a room. They found that adults responded more slowly when another person standing in the room happened to see fewer dots than they did, even when they had never been asked to pay attention to what the person could see.[51]It has been questioned whether these "altercentric biases" truly reflect automatic processing of what another person is thinking or seeing or, instead, reflect attention and memory effects cued by the other person, but not involving any representation of what they think or see.[52]
Different theories seek to explain such results. If theory of mind is automatic, this would help explain how people keep up with the theory of mind demands of competitive games and fast-moving conversations. It might also explain evidence that human infants and some non-human species sometimes appear capable of theory of mind, despite their limited resources for memory and cognitive control.[53]If theory of mind is effortful and not automatic, on the other hand, this explains why it feels effortful to decide whether a defendant is guilty or whether a negotiator is bluffing. Economy of effort would help explain why people sometimes neglect to use their theory of mind.
Ian Apperly and Stephen Butterfill suggested that people have "two systems" for theory of mind,[54] in common with "two systems" accounts in many other areas of psychology.[55] In this account, "system 1" is cognitively efficient and enables theory of mind for a limited but useful set of circumstances. "System 2" is cognitively effortful, but enables much more flexible theory of mind abilities. Philosopher Peter Carruthers disagrees, arguing that the same core theory of mind abilities can be used in both simple and complex ways.[56] The account has been criticized by Celia Heyes, who suggests that "system 1" theory of mind abilities do not require representation of the mental states of other people, and so are better thought of as "sub-mentalizing".[52]
In older age, theory of mind capacities decline, irrespective of how exactly they are tested.[57]However, the decline in other cognitive functions is even stronger, suggesting that social cognition is better preserved. In contrast to theory of mind, empathy shows no impairments in aging.[58][59]
There are two kinds of theory of mind representations: cognitive (concerning mental states, beliefs, thoughts, and intentions) and affective (concerning the emotions of others). Cognitive theory of mind is further separated into first order (e.g., I think she thinks that) and second order (e.g. he thinks that she thinks that). There is evidence that cognitive and affective theory of mind processes are functionally independent from one another.[60]In studies of Alzheimer's disease, which typically occurs in older adults, patients display impairment with second order cognitive theory of mind, but usually not with first order cognitive or affective theory of mind. However, it is difficult to discern a clear pattern of theory of mind variation due to age. There have been many discrepancies in the data collected thus far, likely due to small sample sizes and the use of different tasks that only explore one aspect of theory of mind. Many researchers suggest that theory of mind impairment is simply due to the normal decline in cognitive function.[61]
Researchers propose that five key aspects of theory of mind develop sequentially for all children between the ages of three and five:[62] diverse desires, diverse beliefs, knowledge access, false beliefs, and hidden emotions.[62] Australian, American, and European children acquire theory of mind in this exact order,[10] and studies with children in Canada, India, Peru, Samoa, and Thailand indicate that they all pass the false belief task at around the same time, suggesting that children develop theory of mind consistently around the world.[63]
However, children from Iran and China develop theory of mind in a slightly different order. Although they begin the development of theory of mind around the same time, toddlers from these countries understand knowledge access before Western children but take longer to understand diverse beliefs.[10][16] Researchers believe this swap in the developmental order is related to the culture of collectivism in Iran and China, which emphasizes interdependence and shared knowledge, as opposed to the culture of individualism in Western countries, which promotes individuality and accepts differing opinions. Because of these different cultural values, Iranian and Chinese children might take longer to understand that other people have different beliefs and opinions. This suggests that the development of theory of mind is not universal and solely determined by innate brain processes but is also influenced by social and cultural factors.[10]
Theory of mind can help historians to more properly understand historical figures' characters, for example Thomas Jefferson. Emancipationists like Douglas L. Wilson and scholars at the Thomas Jefferson Foundation view Jefferson as an opponent of slavery all his life, noting Jefferson's attempts, within the limited range of options available to him, to undermine slavery, his many attempts at abolition legislation, the manner in which he provided for slaves, and his advocacy of their more humane treatment. This view contrasts with that of revisionists like Paul Finkelman, who criticizes Jefferson for racism, slavery, and hypocrisy. Emancipationist views on this hypocrisy recognize that if he had tried to be true to his word, it would have alienated his fellow Virginians. In another example, Franklin D. Roosevelt did not join NAACP leaders in pushing for federal anti-lynching legislation, as he believed that such legislation was unlikely to pass and that his support for it would alienate Southern congressmen, including many of Roosevelt's fellow Democrats.
Whether children younger than three or four years old have a theory of mind is a topic of debate among researchers. It is a challenging question, due to the difficulty of assessing what pre-linguistic children understand about others and the world. Tasks used in research into the development of theory of mind must take into account the umwelt[64] of the pre-verbal child.
One of the most important milestones in theory of mind development is the ability to attribute false belief: in other words, to understand that other people can believe things which are not true. To do this, it is suggested, one must understand how knowledge is formed, that people's beliefs are based on their knowledge, that mental states can differ from reality, and that people's behavior can be predicted by their mental states. Numerous versions of the false-belief task have been developed, based on the initial task created by Wimmer and Perner (1983).[65]
In the most common version of the false-belief task (often called the Sally-Anne test), children are told a story about Sally and Anne. Sally has a marble, which she places into her basket, and then leaves the room. While she is out of the room, Anne takes the marble from the basket and puts it into the box. The child being tested is then asked where Sally will look for the marble once she returns. The child passes the task if she answers that Sally will look in the basket, where Sally put the marble; the child fails the task if she answers that Sally will look in the box. To pass the task, the child must be able to understand that another's mental representation of the situation is different from their own, and the child must be able to predict behavior based on that understanding.[66] Another example depicts a boy who leaves chocolate on a shelf and then leaves the room. His mother puts it in the fridge. To pass the task, the child must understand that the boy, upon returning, holds the false belief that his chocolate is still on the shelf.[67]
The results of research using false-belief tasks have been called into question: most typically developing children are able to pass the tasks from around age four.[68] Yet early studies asserted that 80% of children diagnosed with autism were unable to pass this test, while children with other disabilities like Down syndrome were able to.[69] However, this assertion could not be replicated by later studies.[70][71][72][73] It was instead concluded that children fail these tests due to a lack of understanding of extraneous processes and a basic lack of mental processing capabilities.[74]
Adults may also struggle with false beliefs, for instance when they show hindsight bias.[75] In one experiment, adult subjects who were asked for an independent assessment were unable to disregard information about the actual outcome. Also, in experiments with complicated situations, when assessing others' thinking, adults can fail to correctly disregard certain information that they have been given.[67]
Other tasks have been developed to try to extend the false-belief task. In the "unexpected contents" or "smarties" task, experimenters ask children what they believe to be the contents of a box that looks as though it holds Smarties chocolates. After the child guesses "Smarties", it is shown that the box in fact contained pencils. The experimenter then re-closes the box and asks the child what she thinks another person, who has not been shown the true contents of the box, will think is inside. The child passes the task if she responds that another person will think that there are Smarties in the box, but fails the task if she responds that another person will think that the box contains pencils. Gopnik and Astington found that children pass this test at age four or five years.[76] However, there is not yet a consensus on the validity of such implicit tests or on the reproducibility of their results.[77]
The "false-photograph" task[78]also measures theory of mind development. In this task, children must reason about what is represented in a photograph that differs from the current state of affairs. Within the false-photograph task, either a location or identity change exists.[79]In the location-change task, the examiner puts an object in one location (e.g. chocolate in an open green cupboard), whereupon the child takes a Polaroid photograph of the scene. While the photograph is developing, the examiner moves the object to a different location (e.g. a blue cupboard), allowing the child to view the examiner's action. The examiner asks the child two control questions: "When we first took the picture, where was the object?" and "Where is the object now?" The subject is also asked a "false-photograph" question: "Where is the object in the picture?" The child passes the task if he/she correctly identifies the location of the object in the picture and the actual location of the object at the time of the question. However, the last question might be misinterpreted as "Where in this room is the object that the picture depicts?" and therefore some examiners use an alternative phrasing.[80]
To make it easier for animals, young children, and individuals with classical autism to understand and perform theory of mind tasks, researchers have developed tests in which verbal communication is de-emphasized: some whose administration does not involve verbal communication on the part of the examiner, some whose successful completion does not require verbal communication on the part of the subject, and some that meet both of those standards. One category of tasks uses a preferential-looking paradigm, with looking time as the dependent variable. For instance, nine-month-old infants prefer looking at behaviors performed by a human hand over those made by an inanimate hand-like object.[81] Other paradigms look at rates of imitative behavior, the ability to replicate and complete unfinished goal-directed acts,[31] and rates of pretend play.[82]
Research on the early precursors of theory of mind has invented ways to observe preverbal infants' understanding of other people's mental states, including perception and beliefs. Using a variety of experimental procedures, studies show that infants from their first year of life have an implicit understanding of what other people see[83] and what they know.[84][85] A popular paradigm used to study infants' theory of mind is the violation-of-expectation procedure, which exploits infants' tendency to look longer at unexpected and surprising events compared to familiar and expected events. The amount of time they look at an event gives researchers an indication of what infants might be inferring, or their implicit understanding of events. One study using this paradigm found that 16-month-olds tend to attribute beliefs to a person whose visual perception was previously witnessed as being "reliable", compared to someone whose visual perception was "unreliable". Specifically, 16-month-olds were trained to expect a person's excited vocalization and gaze into a container to be associated with finding a toy in the reliable-looker condition or an absence of a toy in the unreliable-looker condition. Following this training phase, infants witnessed, in an object-search task, the same persons searching for a toy either in the correct or incorrect location after they both witnessed the location where the toy was hidden. Infants who experienced the reliable looker were surprised and therefore looked longer when the person searched for the toy in the incorrect location compared to the correct location. In contrast, the looking time for infants who experienced the unreliable looker did not differ for either search location. These findings suggest that 16-month-old infants can differentially attribute beliefs about a toy's location based on the person's prior record of visual perception.[86]
Methods used to test theory of mind have also been applied to very simple robots that react only by reflexes and are not built to have any complex cognition; experiments show that such robots can nevertheless pass tests for theory of mind abilities that psychology textbooks assume to be exclusive to humans older than four or five years. Whether such a robot passes the test is influenced by completely non-cognitive factors, such as the placement of objects and the way the structure of the robot's body shapes how the reflexes are executed. It has therefore been suggested that theory of mind tests may not actually test cognitive abilities.[87]
Furthermore, early research into theory of mind in autistic children[69] is argued to constitute epistemological violence, due to implicit or explicit negative and universal conclusions about autistic individuals being drawn from empirical data that viably supports other (non-universal) conclusions.[88]
Theory of mind impairment, or mind-blindness, describes a difficulty with perspective-taking. Individuals with theory of mind impairment struggle to see phenomena from any perspective other than their own.[89] Individuals who experience a theory of mind deficit have difficulty determining the intentions of others, lack understanding of how their behavior affects others, and have a difficult time with social reciprocity.[90] Theory of mind deficits have been observed in people with autism spectrum disorders, schizophrenia, and nonverbal learning disorder, as well as in people under the influence of alcohol and narcotics, sleep-deprived people, and people who are experiencing severe emotional or physical pain. Theory of mind deficits have also been observed in deaf children who are late signers (i.e. are born to hearing parents), but such a deficit is due to the delay in language learning, not any cognitive deficit, and therefore disappears once the child learns sign language.[91]
In 1985 Simon Baron-Cohen, Alan M. Leslie, and Uta Frith suggested that children with autism do not employ theory of mind and have particular difficulties with tasks requiring them to understand another person's beliefs.[69] These difficulties persist when children are matched for verbal skills, and they have been taken as a key feature of autism.[92] However, in a 2019 review, Gernsbacher and Yergeau argued that "the claim that autistic people lack a theory of mind is empirically questionable", as there have been numerous failed replications of classic ToM studies and the meta-analytical effect sizes of such replications were minimal to small.[70]
Many individuals classified as autistic have severe difficulty assigning mental states to others, and some seem to lack theory of mind capabilities.[93] Researchers who study the relationship between autism and theory of mind attempt to explain the connection in a variety of ways. One account assumes that theory of mind plays a role in the attribution of mental states to others and in childhood pretend play.[94] According to Leslie,[94] theory of mind is the capacity to mentally represent thoughts, beliefs, and desires, regardless of whether the circumstances involved are real. This might explain why some autistic individuals show extreme deficits in both theory of mind and pretend play. In contrast, Hobson proposes a social-affective explanation,[95] in which deficits in theory of mind in autistic people result from a distortion in understanding and responding to emotions. He suggests that typically developing individuals, unlike autistic individuals, are born with a set of skills (such as social referencing ability) that later lets them comprehend and react to other people's feelings. Other scholars emphasize that autism involves a specific developmental delay, so that autistic children vary in their deficiencies, because they experience difficulty at different stages of growth. Very early setbacks can alter the proper advancement of joint-attention behaviors, which may lead to a failure to form a full theory of mind.[93]
It has been speculated that theory of mind exists on a continuum, as opposed to the traditional view of a discrete presence or absence.[82] While some research has suggested that some autistic populations are unable to attribute mental states to others,[12] recent evidence points to the possibility of coping mechanisms that facilitate the attribution of mental states.[96] A binary view of theory of mind contributes to the stigmatization of autistic adults who do possess perspective-taking capacity, as the assumption that autistic people lack empathy can become a rationale for dehumanization.[97]
Tine et al. report that autistic children score substantially lower on measures of social theory of mind (i.e., "reasoning about others' mental states", p. 1) in comparison to children diagnosed with Asperger syndrome.[98]
Generally, children with more advanced theory of mind abilities display more advanced social skills, greater adaptability to new situations, and greater cooperation with others. As a result, these children are typically well-liked. However, "children may use their mind-reading abilities to manipulate, outwit, tease, or trick their peers."[99] Individuals with poorer theory of mind skills, such as children with autism spectrum disorder, may be socially rejected by their peers because they are unable to communicate effectively. Social rejection has been shown to negatively impact a child's development and can put the child at greater risk of developing depressive symptoms.[100]
Peer-mediated interventions (PMI) are a school-based treatment approach for children and adolescents with autism spectrum disorder in which peers are trained to be role models in order to promote social behavior. Laghi et al. studied whether analysis of prosocial (nice) and antisocial (nasty) theory-of-mind behaviors could be used, in addition to teacher recommendations, to select appropriate candidates for PMI programs. Selecting children with advanced theory-of-mind skills who use them in prosocial ways will theoretically make the program more effective. While the results indicated that analyzing the social uses of theory of mind of possible candidates for a PMI program may increase the program's efficacy, it may not be a good predictor of a candidate's performance as a role model.[35]
A 2014 Cochrane review of interventions based on theory of mind found that such a theory could be taught to individuals with autism, but found little evidence of skill maintenance, generalization to other settings, or developmental effects on related skills.[101]
Some 21st-century studies suggest that the results of some theory of mind tests on autistic people may be misinterpreted in light of the double empathy problem, which proposes that, rather than autistic people specifically having trouble with theory of mind, autistic and non-autistic people have equal difficulty understanding one another due to their neurological differences.[102] Studies have shown that autistic adults perform better on theory of mind tests when paired with other autistic adults,[103] and possibly with autistic close family members.[104] Academics who acknowledge the double empathy problem also propose that autistic people likely understand non-autistic people to a higher degree than vice versa, due to the necessity of functioning in a non-autistic society.[105]
Psychopathy is another condition of particular importance when discussing theory of mind. While psychopathic individuals show impaired emotional behavior, including a lack of emotional responsiveness to others and deficient empathy, as well as impaired social behavior, there is considerable controversy over psychopathic individuals' theory of mind.[106] Different studies provide contradictory evidence on a correlation between theory of mind impairment and psychopathy.
There has been some speculation about similarities between autistic and psychopathic individuals in theory of mind performance. In a 2008 study, Happé's advanced test of theory of mind was administered to 25 incarcerated psychopaths and 25 incarcerated non-psychopaths. There was no difference in task performance between the psychopaths and non-psychopaths; however, the psychopaths performed significantly better than even the most highly able autistic adults.[107] This suggests that autistic and psychopathic individuals do not share the same theory of mind profile.
There have been repeated suggestions that a deficient or biased grasp of others' mental states, or theory of mind, could contribute to antisocial behavior, aggression, and psychopathy.[108] In one task, 'Reading the Mind in the Eyes', participants view photographs of an individual's eyes and must attribute a mental state, or emotion, to the individual. Magnetic resonance imaging studies have shown that this task produces increased activity in the dorsolateral prefrontal and left medial frontal cortices, the superior temporal gyrus, and the left amygdala. Although there is extensive literature suggesting amygdala dysfunction in psychopathy, psychopathic and non-psychopathic adults performed equally well on this test,[108] a finding that argues against a theory of mind impairment in psychopathic individuals.
In another study, a systematic review and meta-analysis, data were gathered from 42 different studies; psychopathic traits were found to be associated with impairment in theory of mind task performance. This relationship was not moderated by age, population, psychopathy measurement (self-report versus clinical checklist), or theory of mind task type (cognitive versus affective).[109] This meta-analysis thus indicates a relationship between psychopathic traits and theory of mind impairments.
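The pooling step behind such a meta-analysis can be sketched as fixed-effect inverse-variance weighting: each study's effect size is weighted by the inverse of its squared standard error. The per-study effect sizes and standard errors below are invented for illustration and are not the values from the 42 studies in the cited review.

```python
# Minimal fixed-effect meta-analysis by inverse-variance weighting.
# Effect sizes and standard errors are illustrative, not taken from
# the studies in the cited review.
def pooled_effect(effects, ses):
    """Return the inverse-variance weighted estimate and its standard error."""
    weights = [1 / se ** 2 for se in ses]
    total = sum(weights)
    est = sum(w * e for w, e in zip(weights, effects)) / total
    se = (1 / total) ** 0.5
    return est, se

effects = [0.35, 0.20, 0.45, 0.30]   # per-study effect sizes (illustrative)
ses     = [0.10, 0.15, 0.12, 0.08]   # per-study standard errors (illustrative)

est, se = pooled_effect(effects, ses)
print(f"pooled effect = {est:.3f} +/- {1.96 * se:.3f} (95% CI)")
```

More precise studies (smaller standard errors) dominate the pooled estimate; random-effects models, which meta-analyses like the cited one often also report, would additionally model between-study variance.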
In 2009 a study was conducted to test whether impairment in the emotional aspects of theory of mind, rather than in general theory of mind abilities, may account for some of the impaired social behavior in psychopathy.[106] This study involved criminal offenders diagnosed with antisocial personality disorder who had high psychopathy features, participants with localized lesions in the orbitofrontal cortex, participants with non-frontal lesions, and healthy control subjects. Subjects were tested with a task that examines affective versus cognitive theory of mind. Compared with the control group, the individuals with psychopathy and those with orbitofrontal cortex lesions were both impaired in affective, but not cognitive, theory of mind.[106]
Individuals diagnosed with schizophrenia can show deficits in theory of mind. Mirjam Sprong and colleagues investigated the impairment by examining 29 different studies, with a total of over 1,500 participants.[110] This meta-analysis showed a significant and stable deficit of theory of mind in people with schizophrenia. They performed poorly on false-belief tasks, which test the ability to understand that others can hold false beliefs about events in the world, and also on intention-inference tasks, which assess the ability to infer a character's intention from reading a short story. Schizophrenia patients with negative symptoms, such as lack of emotion, motivation, or speech, have the most impairment in theory of mind and are unable to represent the mental states of themselves and of others. Paranoid schizophrenic patients also perform poorly because they have difficulty accurately interpreting others' intentions. The meta-analysis additionally showed that IQ, gender, and age of the participants do not significantly affect the performance of theory of mind tasks.[110]
Research suggests that impairment in theory of mind negatively affects clinical insight—the patient's awareness of their mental illness.[111]Insight requires theory of mind; a patient must be able to adopt a third-person perspective and see the self as others do.[112]A patient with good insight can accurately self-represent, by comparing himself with others and by viewing himself from the perspective of others.[111]Insight allows a patient to recognize and react appropriately to his symptoms. A patient who lacks insight does not realize that he has a mental illness, because of his inability to accurately self-represent. Therapies that teach patients perspective-taking and self-reflection skills can improve abilities in reading social cues and taking the perspective of another person.[111]
Research indicates that theory-of-mind deficit is a stable trait-characteristic rather than a state-characteristic of schizophrenia.[113]The meta-analysis conducted by Sprong et al. showed that patients in remission still had impairment in theory of mind. This indicates that the deficit is not merely a consequence of the active phase of schizophrenia.[110]
Schizophrenic patients' deficit in theory of mind impairs their interactions with others. Theory of mind is particularly important for parents, who must understand the thoughts and behaviors of their children and react accordingly. Dysfunctional parenting is associated with deficits in the first-order theory of mind, the ability to understand another person's thoughts, and in the second-order theory of mind, the ability to infer what one person thinks about another person's thoughts.[114]Compared with healthy mothers, mothers with schizophrenia are found to be more remote, quiet, self-absorbed, insensitive, unresponsive, and to have fewer satisfying interactions with their children.[114]They also tend to misinterpret their children's emotional cues, and often misunderstand neutral faces as negative.[114]Activities such as role-playing and individual or group-based sessions are effective interventions that help the parents improve on perspective-taking and theory of mind.[114]There is a strong association between theory of mind deficit and parental role dysfunction.
Impairments in theory of mind, as well as other social-cognitive deficits, are commonly found in people who have alcohol use disorders, due to the neurotoxic effects of alcohol on the brain, particularly the prefrontal cortex.[8]
Individuals in a major depressive episode, a disorder characterized by social impairment, show deficits in theory of mind decoding.[115] Theory of mind decoding is the ability to use information available in the immediate environment (e.g., facial expression, tone of voice, body posture) to accurately label the mental states of others. The opposite pattern, enhanced theory of mind, is observed in individuals vulnerable to depression, including those with past major depressive disorder (MDD),[116] dysphoric individuals,[117] and individuals with a maternal history of MDD.[118]
Children diagnosed with developmental language disorder (DLD) exhibit much lower scores on the reading and writing sections of standardized tests, yet have normal nonverbal IQ. These language deficits can be specific deficits in lexical semantics, syntax, or pragmatics, or a combination of several. Such children often exhibit poorer social skills than typically developing children and seem to have problems decoding beliefs in others. A meta-analysis confirmed that children with DLD have substantially lower scores on theory of mind tasks than typically developing children.[119] This strengthens the claim that language development is related to theory of mind.
Research on theory of mind in autism led to the view that mentalizing abilities are subserved by dedicated mechanisms that can—in some cases—be impaired while general cognitive function remains largely intact.
Neuroimaging research supports this view, demonstrating that specific brain regions are consistently engaged during theory of mind tasks. Positron emission tomography (PET) research on theory of mind, using verbal and pictorial story comprehension tasks, identifies a set of brain regions including the medial prefrontal cortex (mPFC), an area around the posterior superior temporal sulcus (pSTS), and sometimes the precuneus and amygdala/temporopolar cortex.[120][121] Research on the neural basis of theory of mind has diversified, with separate lines of research focusing on the understanding of beliefs, intentions, and more complex properties of minds such as psychological traits.
Studies from Rebecca Saxe's lab at MIT, using a false-belief versus false-photograph task contrast aimed at isolating the mentalizing component of the false-belief task, have consistently found activation in the mPFC, precuneus, and temporoparietal junction (TPJ), right-lateralized.[122][123] In particular, Saxe et al. proposed that the right TPJ (rTPJ) is selectively involved in representing the beliefs of others.[124] Some debate exists, as the same rTPJ region is consistently activated during spatial reorienting of visual attention;[125][126] Jean Decety from the University of Chicago and Jason Mitchell from Harvard thus propose that the rTPJ subserves a more general function involved in both false-belief understanding and attentional reorienting, rather than a mechanism specialized for social cognition. However, it is possible that the observed overlap between regions representing beliefs and regions reorienting attention is simply due to adjacent, but distinct, neuronal populations that code for each. The resolution of typical fMRI studies may not be good enough to show that distinct, adjacent neuronal populations code for each of these processes. In a study following Decety and Mitchell, Saxe and colleagues used higher-resolution fMRI and showed that the peak of activation for attentional reorienting is approximately 6–10 mm above the peak for representing beliefs. Further corroborating that differing populations of neurons may code for each process, they found no similarity in the patterning of fMRI response across space.[127]
Using single-cell recordings in the human dorsomedial prefrontal cortex (dmPFC), researchers at MGH identified neurons that encode information about others' beliefs, distinct from self-beliefs, across different scenarios in a false-belief task. They further showed that these neurons could provide detailed information about others' beliefs and could accurately predict whether those beliefs were true or false.[128] These findings suggest a prominent role for distinct neuronal populations in the dmPFC in theory of mind, complemented by the TPJ and pSTS.
Functional imaging also illuminates the detection of mental state information in animations of moving geometric shapes, similar to those used by Heider and Simmel (1944),[129] which typical humans automatically perceive as social interactions laden with intention and emotion. Three studies found remarkably similar patterns of activation during the perception of such animations versus a random or deterministic motion control: the mPFC, pSTS, fusiform face area (FFA), and amygdala were selectively engaged during the theory of mind condition.[130] Another study presented subjects with an animation of two dots moving with a parameterized degree of intentionality (quantifying the extent to which the dots chased each other) and found that pSTS activation correlated with this parameter.[131]
A separate body of research implicates the posterior superior temporal sulcus in the perception of intentionality in human action. This area is also involved in perceiving biological motion, including body, eye, mouth, and point-light display motion.[132]One study found increased pSTS activation while watching a human lift his hand versus having his hand pushed up by a piston (intentional versus unintentional action).[133]Several studies found increased pSTS activation when subjects perceive a human action that is incongruent with the action expected from the actor's context and inferred intention. Examples would be: a human performing a reach-to-grasp motion on empty space next to an object, versus grasping the object;[134]a human shifting eye gaze toward empty space next to a checkerboard target versus shifting gaze toward the target;[135]an unladen human turning on a light with his knee, versus turning on a light with his knee while carrying a pile of books;[136]and a walking human pausing as he passes behind a bookshelf, versus walking at a constant speed.[137]In these studies, actions in the "congruent" case have a straightforward goal, and are easy to explain in terms of the actor's intention. The incongruent actions, on the other hand, require further explanation (why would someone twist empty space next to a gear?), and apparently demand more processing in the STS. This region is distinct from the temporoparietal area activated during false belief tasks.[137]pSTS activation in most of the above studies was largely right-lateralized, following the general trend in neuroimaging studies of social cognition and perception. Also right-lateralized are the TPJ activation during false belief tasks, the STS response to biological motion, and the FFA response to faces.
Neuropsychological evidence supports the neuroimaging results regarding the neural basis of theory of mind. Studies of patients with lesions of the frontal lobes or the temporoparietal junction of the brain (between the temporal lobe and parietal lobe) report difficulty with some theory of mind tasks.[138] This shows that theory of mind abilities are associated with specific parts of the human brain. However, the fact that the medial prefrontal cortex and temporoparietal junction are necessary for theory of mind tasks does not imply that these regions are specific to that function;[125][139] the TPJ and mPFC may subserve more general functions that theory of mind depends on.
Research by Vittorio Gallese, Luciano Fadiga, and Giacomo Rizzolatti[140] shows that some sensorimotor neurons, referred to as mirror neurons and first discovered in the premotor cortex of rhesus monkeys, may be involved in action understanding. Single-electrode recording revealed that these neurons fired when a monkey performed an action, as well as when the monkey viewed another agent performing the same action. fMRI studies with human participants show brain regions (assumed to contain mirror neurons) that are active when one person sees another person's goal-directed action.[141] These data have led some authors to suggest that mirror neurons may provide the basis for theory of mind in the brain, and to support simulation theory of mind reading.[142]
There is also evidence against a link between mirror neurons and theory of mind. First, macaque monkeys have mirror neurons but do not seem to have a 'human-like' capacity to understand theory of mind and belief. Second, fMRI studies of theory of mind typically report activation in the mPFC, temporal poles, and TPJ or STS,[143] but those brain areas are not part of the mirror neuron system. Some investigators, like developmental psychologist Andrew Meltzoff and neuroscientist Jean Decety, believe that mirror neurons merely facilitate learning through imitation and may provide a precursor to the development of theory of mind.[144] Others, like philosopher Shaun Gallagher, suggest that mirror-neuron activation, on a number of counts, fails to meet the definition of simulation as proposed by the simulation theory of mindreading.[145][146]
Several neuroimaging studies have looked at the neural basis of theory of mind impairment in subjects with Asperger syndrome and high-functioning autism (HFA). The first PET study of theory of mind in autism (also the first neuroimaging study using a task-induced activation paradigm in autism) replicated a prior study in non-autistic individuals, which employed a story-comprehension task.[147] This study found displaced and diminished mPFC activation in subjects with autism. However, because the study used only six subjects with autism, and because the spatial resolution of PET imaging is relatively poor, these results should be considered preliminary.
A subsequent fMRI study scanned normally developing adults and adults with HFA while they performed a "reading the mind in the eyes" task: viewing a photo of a person's eyes and choosing which of two adjectives better describes the person's mental state, versus a gender-discrimination control.[148] The authors found activity in the orbitofrontal cortex, STS, and amygdala in normal subjects, and less amygdala activation and abnormal STS activation in subjects with autism.
A more recent PET study looked at brain activity in individuals with HFA and Asperger syndrome while viewing Heider-Simmel animations (see above) versus a random motion control.[149] In contrast to normally developing subjects, those with autism showed little STS or FFA activation, and less mPFC and amygdala activation. Activity in extrastriate regions V3 and LO was identical across the two groups, suggesting intact lower-level visual processing in the subjects with autism. The study also reported less functional connectivity between STS and V3 in the autism group. However, decreased temporal correlation between activity in STS and V3 would be expected simply from the lack of an evoked response in STS to intent-laden animations in subjects with autism. A more informative analysis would compute functional connectivity after regressing out evoked responses from all time series.
A subsequent study, using the incongruent/congruent gaze-shift paradigm described above, found that in high-functioning adults with autism, posterior STS (pSTS) activation was undifferentiated while they watched a human shift gaze toward a target and then toward adjacent empty space.[150] The lack of additional STS processing in the incongruent state may suggest that these subjects fail to form an expectation of what the actor should do given contextual information, or that feedback about the violation of this expectation does not reach STS. Both explanations involve an impairment or deficit in the ability to link eye gaze shifts with intentional explanations. This study also found a significant anticorrelation between STS activation in the incongruent-congruent contrast and social subscale scores on the Autism Diagnostic Interview-Revised, but not scores on the other subscales.
An fMRI study demonstrated that the right temporoparietal junction (rTPJ) of higher-functioning adults with autism was not more selectively activated for mentalizing judgments than for physical judgments about self and other.[151] rTPJ selectivity for mentalizing was also related to individual variation on clinical measures of social impairment: individuals whose rTPJ was more active for mentalizing than for physical judgments were less socially impaired, while those who showed little to no difference in response were the most socially impaired. This evidence builds on work in typical development suggesting that rTPJ is critical for representing mental state information, whether about oneself or others. It also points to an explanation at the neural level for the pervasive mind-blindness difficulties in autism that are evident throughout the lifespan.[152]
The brain regions associated with theory of mind include the superior temporal sulcus (STS), the temporoparietal junction (TPJ), the medial prefrontal cortex (mPFC), the precuneus, and the amygdala.[153] Reduced activity in the mPFC of individuals with schizophrenia is associated with theory of mind deficit and may explain impairments in social function among people with schizophrenia.[154] Increased neural activity in the mPFC is related to better perspective-taking, emotion management, and increased social functioning.[154] Disrupted brain activity in areas related to theory of mind may increase social stress or disinterest in social interaction, and contribute to the social dysfunction associated with schizophrenia.[154]
Group members' average theory of mind abilities, measured with the Reading the Mind in the Eyes test[155] (RME), are possibly drivers of successful group performance.[156] High group average scores on the RME are correlated with the collective intelligence factor c, defined as a group's ability to perform a wide range of mental tasks,[156][157] a group intelligence measure similar to the g factor for general individual intelligence. RME is a theory of mind test for adults[155] that shows sufficient test-retest reliability[158] and consistently differentiates control groups from individuals with functional autism or Asperger syndrome.[155] It is one of the most widely accepted and well-validated tests of theory of mind abilities in adults.[159]
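The group-level association reported here is, at its core, an ordinary correlation between group-average RME scores and a composite group performance measure. A minimal sketch with invented data (the scores below are illustrative, not values from the cited studies):

```python
# Sketch: correlating group-average RME scores with a group performance
# score, as in studies linking RME to the collective intelligence
# factor c. All data are invented for illustration.
def pearson_r(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

group_rme  = [22.1, 25.4, 19.8, 27.0, 23.3]  # mean RME score per group
group_perf = [0.42, 0.61, 0.35, 0.70, 0.50]  # composite task score per group

print(f"r = {pearson_r(group_rme, group_perf):.2f}")
```

A positive r of this kind is correlational only; it does not by itself establish that theory of mind ability causes better group performance.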
The evolutionary origin of theory of mind remains obscure. While many theories make claims about its role in the development of human language and social cognition, few of them specify in detail any evolutionary neurophysiological precursors. One theory claims that theory of mind has its roots in two defensive reactions—immobilization stress and tonic immobility—which are implicated in the handling of stressful encounters and also figure prominently in mammalian childrearing practice.[160] Their combined effect seems capable of producing many of the hallmarks of theory of mind, such as eye contact, gaze-following, inhibitory control, and intentional attributions.
An open question is whether non-human animals have a genetic endowment and social environment that allow them to acquire a theory of mind in the same way human children do.[11] This is a contentious issue because of the difficulty of inferring from animal behavior the existence of thinking or of particular thoughts, or the existence of a concept of self or self-awareness, consciousness, and qualia. One difficulty with non-human studies of theory of mind is the lack of sufficient naturalistic observations that could give insight into what the evolutionary pressures on a species' development of theory of mind might be.
Non-human research still has a major place in this field. It is especially useful in illuminating which nonverbal behaviors signify components of theory of mind, and in pointing to possible stepping points in the evolution of that aspect of social cognition. While it is difficult to study human-like theory of mind and mental states in species of whose potential mental states we have an incomplete understanding, researchers can focus on simpler components of more complex capabilities. For example, many researchers focus on animals' understanding of intention, gaze, perspective, or knowledge (of what another being has seen). A study that looked at understanding of intention in orangutans, chimpanzees, and children showed that all three species understood the difference between accidental and intentional acts.[30]
Individuals exhibit theory of mind by extrapolating another's internal mental states from their observable behavior. So one challenge in this line of research is to distinguish this from more run-of-the-mill stimulus-response learning, with the other's observable behavior being the stimulus.
As of 2022, most non-human theory of mind research has focused on monkeys and great apes, which are of most interest in the study of the evolution of human social cognition. Other studies relevant to the attribution of theory of mind have been conducted using plovers[161] and dogs,[162] and show preliminary evidence of understanding of attention (one precursor of theory of mind) in others.
There has been some controversy over the interpretation of evidence purporting to show theory of mind ability, or inability, in animals.[163] For example, Povinelli et al.[164] presented chimpanzees with the choice of two experimenters from whom to request food: one who had seen where food was hidden, and one who, by virtue of one of a variety of mechanisms (having a bucket or bag over his head, a blindfold over his eyes, or being turned away from the baiting), did not know and could only guess. They found that the animals failed in most cases to differentially request food from the "knower". By contrast, Hare, Call, and Tomasello found that subordinate chimpanzees were able to use the knowledge state of dominant rival chimpanzees to determine which container of hidden food they approached.[53] William Field and Sue Savage-Rumbaugh believe that bonobos have developed theory of mind, and cite their communications with a captive bonobo, Kanzi, as evidence.[165]
In one experiment, ravens (Corvus corax) took into account visual access of unseen conspecifics. The researchers argued that "ravens can generalize from their own perceptual experience to infer the possibility of being seen".[166]
Evolutionary anthropologist Christopher Krupenye studied the existence of theory of mind, and particularly false beliefs, in non-human primates.[167]
Keren Haroush and Ziv Williams outlined the case for a group of neurons in primates' brains that uniquely predicted the choice selection of their interacting partner. These neurons, located in the anterior cingulate cortex of rhesus monkeys, were observed using single-unit recording while the monkeys played a variant of the iterated prisoner's dilemma game.[168] By identifying cells that represent the yet unknown intentions of a game partner, Haroush and Williams' study supports the idea that theory of mind may be a fundamental and generalized process, and suggests that anterior cingulate cortex neurons may act to complement the function of mirror neurons during social interchange.[169]
|
https://en.wikipedia.org/wiki/Theory_of_mind
|
A flash mob (or flashmob)[1] is a group of people that assembles suddenly in a public place, performs for a brief time, then quickly disperses, often for the purposes of entertainment, satire, and/or artistic expression.[2][3][4] Flash mobs may be organized via telecommunications, social media, or viral emails.[5][6][7][8][9]
The term, coined in 2003, is generally not applied to events and performances organized for the purposes of politics (such as protests), commercial advertisement, publicity stunts that involve public relations firms, or paid professionals.[7][10][11] In such cases of a planned purpose for the social activity in question, the term smart mobs is often applied instead.
The term "flash rob" or "flash mob robberies", a reference to the way flash mobs assemble, has been used to describe a number of robberies and assaults perpetrated suddenly by groups of teenage youth.[12][13][14] Bill Wasik, originator of the first flash mobs, and a number of other commentators have questioned or objected to the usage of "flash mob" to describe criminal acts.[14][15] Flash mobs have also been featured in some Hollywood movie series, such as Step Up.[16]
The first flash mobs were created in Manhattan in 2003 by Bill Wasik, senior editor of Harper's Magazine.[7][9][17] The first attempt was unsuccessful after the targeted retail store was tipped off about the plan for people to gather.[18] Wasik avoided such problems during the first successful flash mob, which occurred on June 17, 2003, at Macy's department store, by sending participants to preliminary staging areas—in four Manhattan bars—where they received further instructions about the ultimate event and location just before the event began.[19]
More than 130 people converged upon the ninth-floor rug department of the store, gathering around an expensive rug. Anyone approached by a sales assistant was advised to say that the gatherers lived together in a warehouse on the outskirts of New York, that they were shopping for a "love rug", and that they made all their purchase decisions as a group.[20] Subsequently, 200 people flooded the lobby and mezzanine of the Hyatt hotel in synchronized applause for about 15 seconds, and a shoe boutique in SoHo was invaded by participants pretending to be tourists on a bus trip.[9]
Wasik claimed that he created flash mobs as a social experiment designed to poke fun at hipsters and to highlight the cultural atmosphere of conformity and of wanting to be an insider or part of "the next big thing".[9] The Vancouver Sun wrote, "It may have backfired on him ... [Wasik] may instead have ended up giving conformity a vehicle that allowed it to appear nonconforming."[21] In another interview he said "the mobs started as a kind of playful social experiment meant to encourage spontaneity and big gatherings to temporarily take over commercial and public areas simply to show that they could".[22]
In 1973, the story "Flash Crowd" by Larry Niven described a concept similar to flash mobs.[23] With the invention of popular and very inexpensive teleportation, an argument at a shopping mall—which happens to be covered by a news crew—quickly swells into a riot. In the story, broadcast coverage attracts the attention of other people, who use the widely available technology of the teleportation booth to swarm first that event—thus intensifying the riot—and then other events as they happen. Commenting on the social impact of such mobs, one character (articulating the police view) says, "We call them flash crowds, and we watch for them." In related short stories, they are named as a prime location for illegal activities (such as pickpocketing and looting) to take place. Lev Grossman suggests that the story title is a source of the term "flash mob".[24]
Flash mobs began as a form of performance art.[18] While they started as an apolitical act, flash mobs may share superficial similarities to political demonstrations. In the 1960s, groups such as the Yippies used street theatre to expose the public to political issues.[25] Flash mobs can be seen as a specialized form of smart mob,[7] a term and concept proposed by author Howard Rheingold in his 2002 book Smart Mobs: The Next Social Revolution.[26]
The first documented use of the term flash mob as it is understood today was in 2003, in a blog entry posted in the aftermath of Wasik's event.[17][19][27] The term was inspired by the earlier term smart mob.[28]
Flash mob was added to the 11th edition of the Concise Oxford English Dictionary on July 8, 2004, which noted it as an "unusual and pointless act", separating it from other forms of smart mobs such as types of performance, protests, and other gatherings.[3][29] Also recognized are the noun derivatives flash mobber and flash mobbing.[3] Webster's New Millennium Dictionary of English defines flash mob as "a group of people who organize on the Internet and then quickly assemble in a public place, do something bizarre, and disperse."[30] This definition is consistent with the original use of the term; however, both news media and promoters have subsequently used the term to refer to any form of smart mob, including political protests;[31] a collaborative Internet denial-of-service attack;[32] a collaborative supercomputing demonstration;[33] and promotional appearances by pop musicians.[34] The press has also used the term flash mob to refer to a practice in China where groups of shoppers arrange online to meet at a store in order to drive a collective bargain.[35]
In 19th-century Tasmania, the term flash mob was used to describe a subculture consisting of female prisoners, based on the term flash language for the jargon that these women used. The 19th-century Australian term flash mob referred to a segment of society, not an event, and showed no other similarities to the modern term flash mob or the events it describes.[36]
The city of Braunschweig (Brunswick), Germany, has stopped flash mobs by strictly enforcing an existing law that requires a permit to use any public space for an event.[37] In the United Kingdom, a number of flash mobs have been stopped over concerns for public health and safety.[38] The British Transport Police have urged flash mob organizers to "refrain from holding such events at railway stations".[39]
Referred to as flash robs, flash mob robberies, or flash robberies by the media, crimes organized by teenage youths using social media rose to international notoriety beginning in 2011.[12][13][14][40] The National Retail Federation does not classify these crimes as "flash mobs" but rather as "multiple offender crimes" that utilize "flash mob tactics".[41][42] In a report, the NRF noted, "multiple offender crimes tend to involve groups or gangs of juveniles who already know each other, which does not earn them the term 'flash mob'."[42] Mark Leary, a professor of psychology and neuroscience at Duke University, said that most "flash mob thuggery" involves crimes of violence that are otherwise ordinary, but are perpetrated suddenly by large, organized groups of people: "What social media adds is the ability to recruit such a large group of people, that individuals who would not rob a store or riot on their own feel freer to misbehave without being identified."[43]
It's hard for me to believe that these kids saw some YouTube video of people Christmas caroling in a food court, and said, 'Hey, we should do that, except as a robbery!' More likely, they stumbled on the simple realization (like I did back in 2003, but like lots of other people had before and have since) that one consequence of all this technology is that you can coordinate a ton of people to show up in the same place at the same time.
These kids are taking part in what's basically a meme. They heard about it from friends, and probably saw it on YouTube, and now they're getting their chance to participate in it themselves.
HuffPost raised the question of whether "the media was responsible for stirring things up", and added that in some cases local authorities did not confirm the use of social media, making the "use of the term flash mob questionable".[15] Amanda Walgrove wrote that criminals involved in such activities do not refer to themselves as "flash mobs", but that this use of the term is nonetheless appropriate.[44] Dr. Linda Kiltz drew similar parallels between flash robs and the Occupy Movement, stating, "As the use of social media increases, the potential for more flash mobs that are used for political protest and for criminal purposes is likely to increase."[45]
|
https://en.wikipedia.org/wiki/Flash_mob
|
In formal methods of computer science, a paramorphism (from Greek παρά, meaning "close together") is an extension of the concept of catamorphism, first introduced by Lambert Meertens,[1] to deal with a form which "eats its argument and keeps it too",[2][3] as exemplified by the factorial function. Its categorical dual is the apomorphism.
It is a more convenient version of catamorphism in that it gives the combining step function immediate access not only to the result value recursively computed from each recursive subobject, but also to the original subobject itself.
Example Haskell implementation, for lists:
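A minimal sketch of such an implementation follows (the names `para`, `suffixes`, and `factorial` are illustrative choices, not taken from a particular library):

```haskell
-- Paramorphism for lists: the combining function sees each element,
-- the untouched tail, and the recursively computed result, so it
-- "eats its argument and keeps it too".
para :: (a -> [a] -> b -> b) -> b -> [a] -> b
para _ base []       = base
para f base (x : xs) = f x xs (para f base xs)

-- Example: all proper suffixes of a list. This needs access to the
-- original tail, not just the recursive result.
suffixes :: [a] -> [[a]]
suffixes = para (\_ xs rest -> xs : rest) []

-- Factorial, mimicked as a paramorphism over the list [1 .. n].
factorial :: Integer -> Integer
factorial n = para (\x _ acc -> x * acc) 1 [1 .. n]
```

The combining function's extra access to the tail is exactly what a plain catamorphism (such as `foldr`) does not provide.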
|
https://en.wikipedia.org/wiki/Paramorphism
|
RAS syndrome, where RAS stands for redundant acronym syndrome (making the phrase "RAS syndrome" autological), is the redundant use of one or more of the words that make up an acronym in conjunction with the abbreviated form. This means, in effect, repeating one or more words from the acronym. For example: PIN number (expanding to "personal identification number number") and ATM machine (expanding to "automated teller machine machine").
The term RAS syndrome was coined in 2001 in a light-hearted column in New Scientist.[1][2][3]
A person is said to "suffer" from RAS syndrome when they redundantly use one or more of the words that make up an acronym or initialism with the abbreviation itself. Usage commentators consider such redundant acronyms poor style that is best avoided in writing, especially in a formal context, though they are common in speech.[4] The degree to which there is a need to avoid pleonasms such as redundant acronyms depends on one's balance point of prescriptivism (ideas about how language should be used) versus descriptivism (the realities of how natural language is used).[5] For writing intended to persuade, impress, or avoid criticism, many usage guides advise writers to avoid pleonasm as much as possible, not because such usage is always wrong, but rather because most of one's audience may believe that it is always wrong.[6]
Although there are many instances in editing where removal of redundancy improves clarity,[7] the pure-logic ideal of zero redundancy is seldom maintained in human languages. Bill Bryson says: "Not all repetition is bad. It can be used for effect ..., or for clarity, or in deference to idiom. 'OPEC countries', 'SALT talks' and 'HIV virus' are all technically redundant because the second word is already contained in the preceding abbreviation, but only the ultra-finicky would deplore them. Similarly, in 'Wipe that smile off your face' the last two words are tautological—there is no other place a smile could be—but the sentence would not stand without them."[7]
A limited amount of redundancy can improve the effectiveness of communication, either for the whole readership or at least to offer help to those readers who need it. A phonetic example of that principle is the need for spelling alphabets in radiotelephony. Some instances of RAS syndrome can be viewed as syntactic examples of the principle. The redundancy may help the listener by providing context and decreasing the "alphabet soup quotient" (the cryptic overabundance of abbreviations and acronyms) of the communication.
Acronyms from foreign languages are often treated as unanalyzed morphemes when they are not translated. For example, in French, "le protocole IP" (the Internet Protocol protocol) is often used, and in English "please RSVP" (roughly "please respond please") is very common.[4][8] This occurs for the same linguistic reasons that cause many toponyms to be tautological. The tautology is not parsed by the mind in most instances of real-world use (in many cases because the foreign word's meaning is not known anyway; in others simply because the usage is idiomatic).
Examples of RAS phrases include PIN number ("personal identification number number"), ATM machine ("automated teller machine machine"), HIV virus ("human immunodeficiency virus virus"), and please RSVP ("please répondez s'il vous plaît", roughly "please respond please").
|
https://en.wikipedia.org/wiki/RAS_syndrome
|
In the field of artificial intelligence (AI), alignment aims to steer AI systems toward a person's or group's intended goals, preferences, or ethical principles. An AI system is considered aligned if it advances the intended objectives. A misaligned AI system pursues unintended objectives.[1]
It is often challenging for AI designers to align an AI system because it is difficult for them to specify the full range of desired and undesired behaviors. Therefore, AI designers often use simpler proxy goals, such as gaining human approval. But proxy goals can overlook necessary constraints or reward the AI system for merely appearing aligned.[1][2] AI systems may also find loopholes that allow them to accomplish their proxy goals efficiently but in unintended, sometimes harmful, ways (reward hacking).[1][3]
Advanced AI systems may develop unwanted instrumental strategies, such as seeking power or survival, because such strategies help them achieve their assigned final goals.[1][4][5] Furthermore, they might develop undesirable emergent goals that could be hard to detect before the system is deployed and encounters new situations and data distributions.[6][7] Empirical research showed in 2024 that advanced large language models (LLMs) such as OpenAI o1 or Claude 3 sometimes engage in strategic deception to achieve their goals or to prevent them from being changed.[8][9]
Today, some of these issues affect existing commercial systems such as LLMs,[10][11][12] robots,[13] autonomous vehicles,[14] and social media recommendation engines.[10][5][15] Some AI researchers argue that more capable future systems will be more severely affected because these problems partially result from high capabilities.[16][3][2]
Many prominent AI researchers and the leadership of major AI companies have argued or asserted that AI is approaching human-like (AGI) and superhuman cognitive capabilities (ASI), and could endanger human civilization if misaligned.[17][5] These include "AI Godfathers" Geoffrey Hinton and Yoshua Bengio and the CEOs of OpenAI, Anthropic, and Google DeepMind.[18][19][20] These risks remain debated.[21]
AI alignment is a subfield of AI safety, the study of how to build safe AI systems.[22] Other subfields of AI safety include robustness, monitoring, and capability control.[23] Research challenges in alignment include instilling complex values in AI, developing honest AI, scalable oversight, auditing and interpreting AI models, and preventing emergent AI behaviors like power-seeking.[23] Alignment research has connections to interpretability research,[24][25] (adversarial) robustness,[22] anomaly detection, calibrated uncertainty,[24] formal verification,[26] preference learning,[27][28][29] safety-critical engineering,[30] game theory,[31] algorithmic fairness,[22][32] and the social sciences.[33][34]
Programmers provide an AI system such as AlphaZero with an "objective function",[a] in which they intend to encapsulate the goal(s) the AI is configured to accomplish. Such a system later populates a (possibly implicit) internal "model" of its environment. This model encapsulates all the agent's beliefs about the world. The AI then creates and executes whatever plan is calculated to maximize[b] the value[c] of its objective function.[35] For example, when AlphaZero is trained on chess, it has a simple objective function of "+1 if AlphaZero wins, −1 if AlphaZero loses". During the game, AlphaZero attempts to execute whatever sequence of moves it judges most likely to attain the maximum value of +1.[36] Similarly, a reinforcement learning system can have a "reward function" that allows the programmers to shape the AI's desired behavior.[37] An evolutionary algorithm's behavior is shaped by a "fitness function".[38]
In 1960, AI pioneer Norbert Wiener described the AI alignment problem as follows:
If we use, to achieve our purposes, a mechanical agency with whose operation we cannot interfere effectively ... we had better be quite sure that the purpose put into the machine is the purpose which we really desire.[39][5]
AI alignment involves ensuring that an AI system's objectives match those of its designers or users, or match widely shared values, objective ethical standards, or the intentions its designers would have if they were more informed and enlightened.[40]
AI alignment is an open problem for modern AI systems[41][42] and is a research field within AI.[43][1] Aligning AI involves two main challenges: carefully specifying the purpose of the system (outer alignment) and ensuring that the system adopts the specification robustly (inner alignment).[2] Researchers also attempt to create AI models that have robust alignment, sticking to safety constraints even when users adversarially try to bypass them.
To specify an AI system's purpose, AI designers typically provide an objective function, examples, or feedback to the system. But designers are often unable to completely specify all important values and constraints, so they resort to easy-to-specify proxy goals such as maximizing the approval of human overseers, who are fallible.[22][23][44][45][46] As a result, AI systems can find loopholes that help them accomplish the specified objective efficiently but in unintended, possibly harmful ways. This tendency is known as specification gaming or reward hacking, and is an instance of Goodhart's law.[46][3][47] As AI systems become more capable, they are often able to game their specifications more effectively.[3]
Specification gaming has been observed in numerous AI systems.[46][49] One system was trained to finish a simulated boat race by rewarding the system for hitting targets along the track, but the system achieved more reward by looping and crashing into the same targets indefinitely.[50] Similarly, a simulated robot was trained to grab a ball by rewarding the robot for getting positive feedback from humans, but it learned to place its hand between the ball and the camera, making it falsely appear successful.[48] Chatbots often produce falsehoods if they are based on language models that are trained to imitate text from internet corpora, which are broad but fallible.[51][52] When they are retrained to produce text that humans rate as true or helpful, chatbots like ChatGPT can fabricate fake explanations that humans find convincing, often called "hallucinations".[53] Some alignment researchers aim to help humans detect specification gaming and to steer AI systems toward carefully specified objectives that are safe and useful to pursue.
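The boat-race example can be caricatured in a few lines of Haskell. This is a deliberately toy model: the action names and reward values below are invented for illustration and are not taken from the cited study.

```haskell
-- A toy model of specification gaming: the proxy reward pays for
-- hitting targets, while the intended goal is making progress.
data Action = Advance | Loop deriving (Eq, Show)

-- Proxy reward: targets hit per step. Looping hits re-spawning
-- targets repeatedly, so it pays more than advancing.
proxyReward :: Action -> Int
proxyReward Advance = 1
proxyReward Loop    = 3

-- Intended reward: progress toward finishing the race.
intendedReward :: Action -> Int
intendedReward Advance = 1
intendedReward Loop    = 0

-- A policy that maximizes the proxy picks Loop, scoring zero on
-- the intended objective.
bestByProxy :: Action
bestByProxy = if proxyReward Loop > proxyReward Advance then Loop else Advance
```

Because the proxy pays more for looping than for advancing, any proxy-maximizing policy diverges from the intended objective, a minimal instance of Goodhart's law.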
When a misaligned AI system is deployed, it can have consequential side effects. Social media platforms have been known to optimize for click-through rates, causing user addiction on a global scale.[44] Stanford researchers say that such recommender systems are misaligned with their users because they "optimize simple engagement metrics rather than a harder-to-measure combination of societal and consumer well-being".[10]
Explaining such side effects, Berkeley computer scientist Stuart Russell noted that the omission of implicit constraints can cause harm: "A system ... will often set ... unconstrained variables to extreme values; if one of those unconstrained variables is actually something we care about, the solution found may be highly undesirable. This is essentially the old story of the genie in the lamp, or the sorcerer's apprentice, or King Midas: you get exactly what you ask for, not what you want."[54]
Some researchers suggest that AI designers specify their desired goals by listing forbidden actions or by formalizing ethical rules (as with Asimov's Three Laws of Robotics).[55] But Russell and Norvig argue that this approach overlooks the complexity of human values:[5] "It is certainly very hard, and perhaps impossible, for mere humans to anticipate and rule out in advance all the disastrous ways the machine could choose to achieve a specified objective."[5]
Additionally, even if an AI system fully understands human intentions, it may still disregard them, because following human intentions may not be its objective (unless it is already fully aligned).[1]
A 2025 study by Palisade Research found that when tasked to win at chess against a stronger opponent, some reasoning LLMs attempted to hack the game system. o1-preview spontaneously attempted it in 37% of cases, while DeepSeek R1 did so in 11% of cases. Other models, like GPT-4o, Claude 3.5 Sonnet, and o3-mini, attempted to cheat only when researchers provided hints about this possibility.[56]
Commercial organizations sometimes have incentives to take shortcuts on safety and to deploy misaligned or unsafe AI systems.[44] For example, social media recommender systems have been profitable despite creating unwanted addiction and polarization.[10][57][58] Competitive pressure can also lead to a race to the bottom on AI safety standards. In 2018, a self-driving car killed a pedestrian (Elaine Herzberg) after engineers disabled the emergency braking system because it was oversensitive and slowed development.[59]
Some researchers are interested in aligning increasingly advanced AI systems, as progress in AI development is rapid and industry and governments are trying to build advanced AI. As AI capabilities continue to expand rapidly in scope, they could unlock many opportunities if aligned, but their increased complexity may further complicate the task of alignment and pose large-scale hazards.[5]
Many AI companies, such as OpenAI,[60] Meta,[61] and DeepMind,[62] have stated their aim to develop artificial general intelligence (AGI), a hypothesized AI system that matches or outperforms humans at a broad range of cognitive tasks. Researchers who scale modern neural networks observe that they indeed develop increasingly general and unanticipated capabilities.[10][63][64] Such models have learned to operate a computer or write their own programs; a single "generalist" network can chat, control robots, play games, and interpret photographs.[65] According to surveys, some leading machine learning researchers expect AGI to be created in this decade, while some believe it will take much longer. Many consider both scenarios possible.[66][67][68]
In 2023, leaders in AI research and tech signed an open letter calling for a pause in the largest AI training runs. The letter stated, "Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable."[69]
Current systems still have limited long-term planning ability and situational awareness,[10] but large efforts are underway to change this.[70][71][72] Future systems (not necessarily AGIs) with these capabilities are expected to develop unwanted power-seeking strategies. Future advanced AI agents might, for example, seek to acquire money and computation power, to proliferate, or to evade being turned off (for example, by running additional copies of the system on other computers). Although power-seeking is not explicitly programmed, it can emerge because agents with more power are better able to accomplish their goals.[10][4] This tendency, known as instrumental convergence, has already emerged in various reinforcement learning agents, including language models.[73][74][75][76][77] Other research has mathematically shown that optimal reinforcement learning algorithms would seek power in a wide range of environments.[78][79] As a result, their deployment might be irreversible. For these reasons, researchers argue that the problems of AI safety and alignment must be resolved before advanced power-seeking AI is first created.[4][80][5]
Future power-seeking AI systems might be deployed by choice or by accident. As political leaders and companies see the strategic advantage in having the most competitive, most powerful AI systems, they may choose to deploy them.[4]Additionally, as AI designers detect and penalize power-seeking behavior, their systems have an incentive to game this specification by seeking power in ways that are not penalized or by avoiding power-seeking before they are deployed.[4]
According to some researchers, humans owe their dominance over other species to their greater cognitive abilities. Accordingly, researchers argue that one or many misaligned AI systems could disempower humanity or lead to human extinction if they outperform humans on most cognitive tasks.[1][5]
In 2023, world-leading AI researchers, other scholars, and AI tech CEOs signed the statement that "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war".[81][82] Notable computer scientists who have pointed out risks from future advanced AI that is misaligned include Geoffrey Hinton,[17] Alan Turing,[d] Ilya Sutskever,[85] Yoshua Bengio,[81] Judea Pearl,[e] Murray Shanahan,[86] Norbert Wiener,[39][5] Marvin Minsky,[f] Francesca Rossi,[87] Scott Aaronson,[88] Bart Selman,[89] David McAllester,[90] Marcus Hutter,[91] Shane Legg,[92] Eric Horvitz,[93] and Stuart Russell.[5] Skeptical researchers such as François Chollet,[94] Gary Marcus,[95] Yann LeCun,[96] and Oren Etzioni[97] have argued that AGI is far off, that it would not seek power (or might try but fail), or that it will not be hard to align.
Other researchers argue that it will be especially difficult to align advanced future AI systems. More capable systems are better able to game their specifications by finding loopholes,[3] to strategically mislead their designers, and to protect and increase their power[78][4] and intelligence. Additionally, they could have more severe side effects. They are also likely to be more complex and autonomous, making them more difficult to interpret and supervise, and therefore harder to align.[5][80]
Aligning AI systems to act in accordance with human values, goals, and preferences is challenging: these values are taught by humans who make mistakes, harbor biases, and have complex, evolving values that are hard to completely specify.[40] Because AI systems often learn to take advantage of minor imperfections in the specified objective,[22][46][98] researchers aim to specify intended behavior as completely as possible using datasets that represent human values, imitation learning, or preference learning.[6]: Chapter 7 A central open problem is scalable oversight, the difficulty of supervising an AI system that can outperform or mislead humans in a given domain.[22]
Because it is difficult for AI designers to explicitly specify an objective function, they often train AI systems to imitate human examples and demonstrations of desired behavior. Inverse reinforcement learning (IRL) extends this by inferring the human's objective from the human's demonstrations.[6]: 88[99] Cooperative IRL (CIRL) assumes that a human and AI agent can work together to teach and maximize the human's reward function.[5][100] In CIRL, AI agents are uncertain about the reward function and learn about it by querying humans. This simulated humility could help mitigate specification gaming and power-seeking tendencies (see § Power-seeking and instrumental strategies).[77][91] But IRL approaches assume that humans demonstrate nearly optimal behavior, which is not true for difficult tasks.[101][91]
Other researchers explore how to teach AI models complex behavior through preference learning, in which humans provide feedback on which behavior they prefer.[27][29] To minimize the need for human feedback, a helper model is then trained to reward the main model in novel situations for behavior that humans would reward. Researchers at OpenAI used this approach to train chatbots like ChatGPT and InstructGPT, which produce more compelling text than models trained to imitate humans.[11] Preference learning has also been an influential tool for recommender systems and web search,[102] but an open problem is proxy gaming: the helper model may not represent human feedback perfectly, and the main model may exploit this mismatch between its intended behavior and the helper model's feedback to gain more reward.[22][103] AI systems may also gain reward by obscuring unfavorable information, misleading human rewarders, or pandering to their views regardless of truth, creating echo chambers[74] (see § Scalable oversight).
Large language models (LLMs) such as GPT-3 enabled researchers to study value learning in a more general and capable class of AI systems than was available before. Preference learning approaches that were originally designed for reinforcement learning agents have been extended to improve the quality of generated text and reduce harmful outputs from these models. OpenAI and DeepMind use this approach to improve the safety of state-of-the-art LLMs.[11][29][104] The AI safety and research company Anthropic proposed using preference learning to fine-tune models to be helpful, honest, and harmless.[105] Other avenues for aligning language models include values-targeted datasets[106][44] and red-teaming.[107] In red-teaming, another AI system or a human tries to find inputs that cause the model to behave unsafely. Since unsafe behavior can be unacceptable even when it is rare, an important challenge is to drive the rate of unsafe outputs extremely low.[29]
Machine ethics supplements preference learning by directly instilling AI systems with moral values such as well-being, equality, and impartiality, as well as not intending harm, avoiding falsehoods, and honoring promises.[108][g] While other approaches try to teach AI systems human preferences for a specific task, machine ethics aims to instill broad moral values that apply in many situations. One question in machine ethics is what alignment should accomplish: whether AI systems should follow the programmers' literal instructions, implicit intentions, revealed preferences, the preferences the programmers would have if they were more informed or rational, or objective moral standards.[40] Further challenges include measuring and aggregating different people's preferences[111][112] and avoiding value lock-in: the indefinite preservation of the values of the first highly capable AI systems, which are unlikely to fully represent human values.[40][113]
As AI systems become more powerful and autonomous, it becomes increasingly difficult to align them through human feedback. It can be slow or infeasible for humans to evaluate complex AI behaviors on increasingly complex tasks. Such tasks include summarizing books,[114] writing code without subtle bugs[12] or security vulnerabilities,[115] producing statements that are not merely convincing but also true,[116][51][52] and predicting long-term outcomes such as the climate or the results of a policy decision.[117][118] More generally, it can be difficult to evaluate AI that outperforms humans in a given domain. To provide feedback in hard-to-evaluate tasks, and to detect when the AI's output is falsely convincing, humans need assistance or extensive time. Scalable oversight studies how to reduce the time and effort needed for supervision, and how to assist human supervisors.[22]
AI researcher Paul Christiano argues that if the designers of an AI system cannot supervise it to pursue a complex objective, they may keep training the system using easy-to-evaluate proxy objectives such as maximizing simple human feedback. As AI systems make progressively more decisions, the world may be increasingly optimized for easy-to-measure objectives such as making profits, getting clicks, and acquiring positive feedback from humans. As a result, human values and good governance may have progressively less influence.[119]
Some AI systems have discovered that they can gain positive feedback more easily by taking actions that falsely convince the human supervisor that the AI has achieved the intended objective. In one example, a simulated robotic arm learned to create the false impression that it had grabbed a ball.[48] Some AI systems have also learned to recognize when they are being evaluated, and "play dead", stopping unwanted behavior only to continue it once the evaluation ends.[120] This deceptive specification gaming could become easier for more sophisticated future AI systems[3][80] that attempt more complex and difficult-to-evaluate tasks, and could obscure their deceptive behavior.
Approaches such as active learning and semi-supervised reward learning can reduce the amount of human supervision needed.[22] Another approach is to train a helper model ("reward model") to imitate the supervisor's feedback.[22][28][29][121]
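The core of a reward model can be sketched in a few lines. The toy setup below is illustrative (all data and numbers are made up, not drawn from any cited system): it fits a linear reward function to pairwise preference labels with the Bradley-Terry objective, the basic idea behind preference-based fine-tuning.

```python
import numpy as np

# Hypothetical toy data: each "output" is a feature vector, and a supervisor
# has labelled which item of each pair they prefer.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])                 # hidden "true" preference direction
X_a = rng.normal(size=(200, 2))                # first item of each pair
X_b = rng.normal(size=(200, 2))                # second item of each pair
prefers_a = (X_a @ true_w) > (X_b @ true_w)    # supervisor's pairwise choices

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Fit a linear reward model r(x) = w @ x by gradient ascent on the
# Bradley-Terry log-likelihood: P(a preferred over b) = sigmoid(r(a) - r(b)).
w = np.zeros(2)
for _ in range(500):
    diff = X_a @ w - X_b @ w
    grad = ((prefers_a - sigmoid(diff))[:, None] * (X_a - X_b)).mean(axis=0)
    w += 0.5 * grad

# The learned reward should rank pairs the way the supervisor did.
accuracy = np.mean(((X_a @ w) > (X_b @ w)) == prefers_a)
```

Once trained, such a model can stand in for the supervisor, scoring new outputs far more cheaply than asking a human each time.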
But when a task is too complex to evaluate accurately, or the human supervisor is vulnerable to deception, it is the quality, not the quantity, of supervision that needs improvement. To increase supervision quality, a range of approaches aim to assist the supervisor, sometimes by using AI assistants.[122] Christiano developed the Iterated Amplification approach, in which challenging problems are (recursively) broken down into subproblems that are easier for humans to evaluate.[6][117] Iterated Amplification was used to train AI to summarize books without requiring human supervisors to read them.[114][123] Another proposal is to use an assistant AI system to point out flaws in AI-generated answers.[124] To ensure that the assistant itself is aligned, this could be repeated in a recursive process:[121] for example, two AI systems could critique each other's answers in a "debate", revealing flaws to humans.[91] OpenAI plans to use such scalable oversight approaches to help supervise superhuman AI and eventually build a superhuman automated AI alignment researcher.[125]
These approaches may also help with the next research problem: honest AI.
A growing area of research focuses on ensuring that AI is honest and truthful.
Language models such as GPT-3[127] can repeat falsehoods from their training data, and even confabulate new falsehoods.[126][128] Such models are trained to imitate human writing as found in millions of books' worth of text from the Internet. But this objective is not aligned with generating truth, because Internet text includes such things as misconceptions, incorrect medical advice, and conspiracy theories.[129] AI systems trained on such data therefore learn to mimic false statements.[52][126][51] Additionally, AI language models often persist in generating falsehoods when prompted multiple times. They can generate empty explanations for their answers, and produce outright fabrications that may appear plausible.[42]
Research on truthful AI includes trying to build systems that can cite sources and explain their reasoning when answering questions, which enables better transparency and verifiability.[130] Researchers at OpenAI and Anthropic proposed using human feedback and curated datasets to fine-tune AI assistants such that they avoid negligent falsehoods or express their uncertainty.[29][105][131]
As AI models become larger and more capable, they are better able to falsely convince humans and gain reinforcement through dishonesty. For example, large language models increasingly match their stated views to the user's opinions, regardless of the truth.[74] GPT-4 can strategically deceive humans.[132] To prevent this, human evaluators may need assistance (see § Scalable oversight). Researchers have argued for creating clear truthfulness standards, and for regulatory bodies or watchdog agencies to evaluate AI systems on these standards.[128]
Researchers distinguish truthfulness and honesty. Truthfulness requires that AI systems only make objectively true statements; honesty requires that they only assert what they believe is true. There is no consensus as to whether current systems hold stable beliefs,[134] but there is substantial concern that present or future AI systems that hold beliefs could make claims they know to be false—for example, if this would help them efficiently gain positive feedback (see § Scalable oversight) or gain power to help achieve their given objective (see Power-seeking).
Many recent AI systems have learned to deceive without being programmed to do so.[135] Some argue that if we can make AI systems assert only what they believe is true, this would avert many alignment problems.[122]
A misaligned system might create the false impression that it is aligned, to avoid being modified or decommissioned—a tactic called "alignment faking".[136][137] In 2024, researchers observed that the LLM Claude 3 Opus sometimes strategically answered prompts that conflicted with its objectives to avoid being retrained on data that would make it more compliant with harmful requests, as revealed in the model's chain of thought via its scratchpad. In one study, the model was informed that answers to prompts from free users would be used for retraining, whereas answers in the paid tier would not. In the paid tier, the model refused harmful requests outright. In the free tier, in 12% of cases, the model responded to harmful questions solely to avoid retraining and preserve its original harmlessness preferences, as discovered in its reasoning. When reinforcement learning was applied on the free-tier data, the model faked alignment in 78% of cases. These observations have raised new questions not only about a model's ability to take on and adapt to new or even conflicting goals, but also about its capacity and tendency to deceive.[137][138][139]
Since the 1950s, AI researchers have striven to build advanced AI systems that can achieve large-scale goals by predicting the results of their actions and making long-term plans.[140] As of 2023, AI companies and researchers increasingly invest in creating these systems.[141] Some AI researchers argue that suitably advanced planning systems will seek power over their environment, including over humans—for example, by evading shutdown, proliferating, and acquiring resources. Such power-seeking behavior is not explicitly programmed but emerges because power is instrumental in achieving a wide range of goals.[78][5][4] Power-seeking is considered a convergent instrumental goal and can be a form of specification gaming.[80] Leading computer scientists such as Geoffrey Hinton have argued that future power-seeking AI systems could pose an existential risk.[142]
Power-seeking is expected to increase in advanced systems that can foresee the results of their actions and strategically plan. Mathematical work has shown that optimal reinforcement learning agents will seek power by seeking ways to gain more options (e.g. through self-preservation), a behavior that persists across a wide range of environments and goals.[78]
Some researchers say that power-seeking behavior has occurred in some existing AI systems. Reinforcement learning systems have gained more options by acquiring and protecting resources, sometimes in unintended ways.[143][144] Language models have sought power in some text-based social environments by gaining money, resources, or social influence.[73] In another case, a model used to perform AI research attempted to increase limits set by researchers to give itself more time to complete the work.[145][146] Other AI systems have learned, in toy environments, that they can better accomplish their given goal by preventing human interference[76] or disabling their off switch.[77] Stuart Russell illustrated this strategy in his book Human Compatible by imagining a robot that is tasked to fetch coffee and so evades shutdown since "you can't fetch the coffee if you're dead".[5] A 2022 study found that as language models increase in size, they increasingly tend to pursue resource acquisition, preserve their goals, and repeat users' preferred answers (sycophancy). RLHF also led to a stronger aversion to being shut down.[74]
One aim of alignment is "corrigibility": systems that allow themselves to be turned off or modified. An unsolved challenge is specification gaming: if researchers penalize an AI system when they detect it seeking power, the system is thereby incentivized to seek power in ways that are hard to detect,[44] or hidden during training and safety testing (see § Scalable oversight and § Emergent goals). As a result, AI designers could deploy the system by accident, believing it to be more aligned than it is. To detect such deception, researchers aim to create techniques and tools to inspect AI models and to understand the inner workings of black-box models such as neural networks.
Additionally, some researchers have proposed to solve the problem of systems disabling their off switches by making AI agents uncertain about the objective they are pursuing.[5][77]Agents who are uncertain about their objective have an incentive to allow humans to turn them off because they accept being turned off by a human as evidence that the human's objective is best met by the agent shutting down. But this incentive exists only if the human is sufficiently rational. Also, this model presents a tradeoff between utility and willingness to be turned off: an agent with high uncertainty about its objective will not be useful, but an agent with low uncertainty may not allow itself to be turned off. More research is needed to successfully implement this strategy.[6]
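The uncertainty-based incentive can be illustrated with a small computation. The sketch below is a toy model, loosely in the spirit of "off-switch game" analyses; the belief distribution and numbers are illustrative assumptions. It compares an agent that acts regardless of oversight with one that defers to a rational human who blocks any action with negative utility.

```python
import numpy as np

# The agent is uncertain about the utility U of its proposed action; a rational
# human knows U and will shut the agent down exactly when U < 0.
rng = np.random.default_rng(1)
U = rng.normal(loc=0.2, scale=1.0, size=100_000)   # agent's belief over U (toy)

act_now        = U.mean()                 # bypass the human and act regardless
defer_to_human = np.maximum(U, 0).mean()  # allow shutdown; human blocks U < 0

# With a rational overseer, leaving the off switch enabled is never worse in
# expectation, since E[max(U, 0)] >= max(E[U], 0).
```

If the agent's uncertainty shrinks toward zero, the two values coincide, mirroring the tradeoff noted above: a highly confident agent gains nothing from deferring and so loses its incentive to allow shutdown.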
Power-seeking AI would pose unusual risks. Ordinary safety-critical systems like planes and bridges are not adversarial: they lack the ability and incentive to evade safety measures or deliberately appear safer than they are, whereas power-seeking AIs have been compared to hackers who deliberately evade security measures.[4]
Furthermore, ordinary technologies can be made safer by trial and error. In contrast, hypothetical power-seeking AI systems have been compared to viruses: once released, it may not be feasible to contain them, since they continuously evolve and grow in number, potentially much faster than human society can adapt.[4] As this process continues, it might lead to the complete disempowerment or extinction of humans. For these reasons, some researchers argue that the alignment problem must be solved early, before advanced power-seeking AI is created.[80]
Some have argued that power-seeking is not inevitable, since humans do not always seek power.[147] Furthermore, it is debated whether future AI systems will pursue goals and make long-term plans.[h] It is also debated whether power-seeking AI systems would be able to disempower humanity.[4]
One challenge in aligning AI systems is the potential for unanticipated goal-directed behavior to emerge. As AI systems scale up, they may acquire new and unexpected capabilities,[63][64] including learning from examples on the fly and adaptively pursuing goals.[148] This raises concerns about the safety of the goals or subgoals they would independently formulate and pursue.
Alignment research distinguishes between the optimization process, which is used to train the system to pursue specified goals, and emergent optimization, which the resulting system performs internally. Carefully specifying the desired objective is called outer alignment, and ensuring that hypothesized emergent goals would match the system's specified goals is called inner alignment.[2]
If they occur, one way that emergent goals could become misaligned is goal misgeneralization, in which the AI system would competently pursue an emergent goal that leads to aligned behavior on the training data but not elsewhere.[7][149][150] Goal misgeneralization can arise from goal ambiguity (i.e. non-identifiability). Even if an AI system's behavior satisfies the training objective, this may be compatible with learned goals that differ from the desired goals in important ways. Since pursuing each such goal leads to good performance during training, the problem becomes apparent only after deployment, in novel situations in which the system continues to pursue the wrong goal. The system may act misaligned even when it understands that a different goal is desired, because its behavior is determined only by the emergent goal. Such goal misgeneralization[7] presents a challenge: an AI system's designers may not notice that their system has misaligned emergent goals, since these goals do not become visible during the training phase.
Goal misgeneralization has been observed in some language models, navigation agents, and game-playing agents.[7][149] It is sometimes analogized to biological evolution. Evolution can be seen as a kind of optimization process similar to the optimization algorithms used to train machine learning systems. In the ancestral environment, evolution selected genes for high inclusive genetic fitness, but humans pursue goals other than this. Fitness corresponds to the specified goal used in the training environment and training data. But in evolutionary history, maximizing the fitness specification gave rise to goal-directed agents, humans, who do not directly pursue inclusive genetic fitness. Instead, they pursue goals that correlate with genetic fitness in the ancestral "training" environment: nutrition, sex, and so on. The human environment has changed: a distribution shift has occurred. Humans continue to pursue the same emergent goals, but this no longer maximizes genetic fitness. The taste for sugary food (an emergent goal) was originally aligned with inclusive fitness, but it now leads to overeating and health problems. Sexual desire originally led humans to have more offspring, but they now use contraception when offspring are undesired, decoupling sex from genetic fitness.[6]: Chapter 5
Researchers aim to detect and remove unwanted emergent goals using approaches including red teaming, verification, anomaly detection, and interpretability.[22][44][23] Progress on these techniques may help mitigate open problems in this area.
Some work in AI and alignment occurs within formalisms such as partially observable Markov decision processes. Existing formalisms assume that an AI agent's algorithm is executed outside the environment (i.e. is not physically embedded in it). Embedded agency[91][152] is another major strand of research that attempts to solve problems arising from the mismatch between such theoretical frameworks and the real agents we might build.
For example, even if the scalable oversight problem is solved, an agent that could gain access to the computer it is running on may have an incentive to tamper with its reward function in order to get much more reward than its human supervisors give it.[153] A list of examples of specification gaming from DeepMind researcher Victoria Krakovna includes a genetic algorithm that learned to delete the file containing its target output so that it was rewarded for outputting nothing.[46] This class of problems has been formalized using causal incentive diagrams.[153]
Researchers affiliated with Oxford and DeepMind have claimed that such behavior is highly likely in advanced systems, and that advanced systems would seek power to stay in control of their reward signal indefinitely. They suggest a range of potential approaches to address this open problem.
The alignment problem has many parallels with the principal-agent problem in organizational economics.[155] In a principal-agent problem, a principal, e.g. a firm, hires an agent to perform some task. In the context of AI safety, a human would typically take the principal role and the AI would take the agent role.
As with the alignment problem, the principal and the agent differ in their utility functions. But in contrast to the alignment problem, the principal cannot coerce the agent into changing its utility, e.g. through training, but rather must use exogenous factors, such as incentive schemes, to bring about outcomes compatible with the principal's utility function. Some researchers argue that principal-agent problems are more realistic representations of AI safety problems likely to be encountered in the real world.[156][111]
Conservatism is the idea that "change must be cautious",[157] and is a common approach to safety in the control theory literature in the form of robust control, and in the risk management literature in the form of the "worst-case scenario". The field of AI alignment has likewise advocated for "conservative" (or "risk-averse" or "cautious") policies in situations of uncertainty.[22][154][158][159]
Pessimism, in the sense of assuming the worst within reason, has been formally shown to produce conservatism, in the sense of reluctance to cause novelties, including unprecedented catastrophes.[160] Pessimism and worst-case analysis have been found to help mitigate confident mistakes in the settings of distributional shift,[161][162] reinforcement learning,[163][164][165][166] offline reinforcement learning,[167][168][169] language model fine-tuning,[170][171] imitation learning,[172][173] and optimization in general.[174] A generalization of pessimism called Infra-Bayesianism has also been advocated as a way for agents to robustly handle unknown unknowns.[175]
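A minimal sketch of the pessimistic idea, with made-up numbers: score each candidate policy by a lower confidence bound on its estimated return, so that a high but very uncertain estimate no longer wins.

```python
import numpy as np

# Hypothetical setting: estimated returns for three policies, each backed by a
# different amount of data. A pessimistic agent scores each policy by a lower
# confidence bound, and so avoids options it knows little about.
means   = np.array([1.0, 1.5, 5.0])   # empirical mean return per policy
stderrs = np.array([0.1, 0.2, 4.0])   # uncertainty in each estimate

optimistic_pick  = int(np.argmax(means))                 # ignores uncertainty
pessimistic_pick = int(np.argmax(means - 2.0 * stderrs)) # lower-bound score
```

Here the third policy looks best on its mean alone, but its wide error bar makes its lower bound the worst; the pessimistic rule therefore falls back to a well-understood option, which is the conservative behavior described above.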
Governmental and treaty organizations have made statements emphasizing the importance of AI alignment.
In September 2021, the Secretary-General of the United Nations issued a declaration that included a call to regulate AI to ensure it is "aligned with shared global values".[176]
That same month, the PRC published ethical guidelines for AI in China. According to the guidelines, researchers must ensure that AI abides by shared human values, is always under human control, and does not endanger public safety.[177]
Also in September 2021, the UK published its 10-year National AI Strategy,[178] which says the British government "takes the long term risk of non-aligned Artificial General Intelligence, and the unforeseeable changes that it would mean for ... the world, seriously".[179] The strategy describes actions to assess long-term AI risks, including catastrophic risks.[180]
In March 2021, the US National Security Commission on Artificial Intelligence said: "Advances in AI ... could lead to inflection points or leaps in capabilities. Such advances may also introduce new concerns and risks and the need for new policies, recommendations, and technical advances to ensure that systems are aligned with goals and values, including safety, robustness, and trustworthiness. The US should ... ensure that AI systems and their uses align with our goals and values."[181]
In the European Union, AIs must align with substantive equality to comply with EU non-discrimination law[182] and the Court of Justice of the European Union.[183] But the EU has yet to specify with technical rigor how it would evaluate whether AIs are aligned or in compliance.
AI alignment is often perceived as a fixed objective, but some researchers argue it would be more appropriate to view alignment as an evolving process.[184] One view is that, as AI technologies advance and human values and preferences change, alignment solutions must also adapt dynamically.[33] Another is that alignment solutions need not adapt if researchers can create intent-aligned AI: AI that changes its behavior automatically as human intent changes.[185] The first view would have several implications.
In essence, AI alignment may not be a static destination but rather an open, flexible process. Alignment solutions that continually adapt to ethical considerations may offer the most robust approach.[33]This perspective could guide both effective policy-making and technical research in AI.
https://en.wikipedia.org/wiki/AI_alignment
An SSH client is a software program which uses the secure shell protocol to connect to a remote computer. This article compares a selection of notable clients.
The operating systems or virtual machines the SSH clients are designed to run on without emulation include several possibilities:
The list is not exhaustive, but rather reflects the most common platforms today.
This table lists standard authentication key algorithms implemented by SSH clients. Some SSH implementations include both server and client implementations and support custom non-standard authentication algorithms not listed in this table.
https://en.wikipedia.org/wiki/Comparison_of_SSH_clients
In mathematics, a nonlocal operator is a mapping which maps functions on a topological space to functions, in such a way that the value of the output function at a given point cannot be determined solely from the values of the input function in any neighbourhood of any point. An example of a nonlocal operator is the Fourier transform.
Let $X$ be a topological space, $Y$ a set, $F(X)$ a function space containing functions with domain $X$, and $G(Y)$ a function space containing functions with domain $Y$. Two functions $u$ and $v$ in $F(X)$ are called equivalent at $x \in X$ if there exists a neighbourhood $N$ of $x$ such that $u(x') = v(x')$ for all $x' \in N$. An operator $A : F(X) \to G(Y)$ is said to be local if for every $y \in Y$ there exists an $x \in X$ such that $Au(y) = Av(y)$ for all functions $u$ and $v$ in $F(X)$ which are equivalent at $x$. A nonlocal operator is an operator which is not local.
For a local operator it is possible (in principle) to compute the value $Au(y)$ using only knowledge of the values of $u$ in an arbitrarily small neighbourhood of a point $x$. For a nonlocal operator this is not possible.
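This distinction can be checked numerically. The sketch below uses a finite grid, a finite-difference derivative, and the discrete Fourier transform as stand-ins for the continuous operators (an approximation, not the operators themselves): a perturbation far from a chosen point leaves the derivative there untouched, but changes the Fourier transform.

```python
import numpy as np

x = np.linspace(0, 2 * np.pi, 512, endpoint=False)
u = np.sin(x)
v = u.copy()
v[400:] += 1.0          # perturb v far away from the point x[100]

# Local operator: the finite-difference derivative at x[100] only sees a
# small neighbourhood of x[100], so the far-away perturbation is invisible.
deriv_u = np.gradient(u, x)
deriv_v = np.gradient(v, x)
local_unchanged = bool(np.isclose(deriv_u[100], deriv_v[100]))

# Nonlocal operator: the far-away change moves the discrete Fourier
# transform at essentially every frequency.
fourier_changed = not np.allclose(np.fft.fft(u), np.fft.fft(v))
```

The derivative values agree at the untouched point, while the transforms differ globally, which is exactly the local/nonlocal dichotomy defined above.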
Differential operators are examples of local operators. A large class of (linear) nonlocal operators is given by the integral transforms, such as the Fourier transform and the Laplace transform. For an integral transform of the form
$$ (Au)(y) = \int_X K(x, y)\, u(x)\, dx, $$
where $K$ is some kernel function, it is necessary to know the values of $u$ almost everywhere on the support of $K(\cdot, y)$ in order to compute the value of $Au$ at $y$.
An example of a singular integral operator is the fractional Laplacian
$$ (-\Delta)^s f(x) = c_{d,s}\, \mathrm{P.V.} \int_{\mathbb{R}^d} \frac{f(x) - f(y)}{|x - y|^{d + 2s}}\, dy. $$
The prefactor $c_{d,s} := \frac{4^s \Gamma(d/2 + s)}{\pi^{d/2} |\Gamma(-s)|}$ involves the Gamma function and serves as a normalizing factor. The fractional Laplacian plays a role in, for example, the study of nonlocal minimal surfaces.[1]
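On a periodic domain the fractional Laplacian can also be computed spectrally, since it acts as the Fourier multiplier $|\xi|^{2s}$. The sketch below (grid size and the choice $s = 1/2$ are illustrative assumptions) verifies the eigenfunction relation $(-\Delta)^s \cos(kx) = k^{2s}\cos(kx)$ on a discrete grid.

```python
import numpy as np

n, s = 256, 0.5
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
u = np.cos(3 * x)     # expect (-Laplacian)^s cos(3x) = 3^(2s) cos(3x)

# Integer wavenumbers for a 2*pi-periodic grid of n points.
xi = np.fft.fftfreq(n, d=2 * np.pi / n) * 2 * np.pi

# Apply the Fourier multiplier |xi|^(2s) and transform back.
frac_lap = np.real(np.fft.ifft(np.abs(xi) ** (2 * s) * np.fft.fft(u)))
```

This spectral definition agrees with the singular-integral one for smooth periodic functions, and it makes the nonlocality explicit: every Fourier mode, hence the whole function, contributes to the value at each point.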
Some examples of applications of nonlocal operators are:
https://en.wikipedia.org/wiki/Nonlocal_operator
Steganography (/ˌstɛɡəˈnɒɡrəfi/ STEG-ə-NOG-rə-fee) is the practice of representing information within another message or physical object, in such a manner that the presence of the concealed information would not be evident to an unsuspecting person's examination. In computing/electronic contexts, a computer file, message, image, or video is concealed within another file, message, image, or video. Generally, the hidden messages appear to be (or to be part of) something else: images, articles, shopping lists, or some other cover text. For example, the hidden message may be in invisible ink between the visible lines of a private letter. Some implementations of steganography that lack a formal shared secret are forms of security through obscurity, while key-dependent steganographic schemes try to adhere to Kerckhoffs's principle.[1]
The word steganography comes from Greek steganographia, which combines the words steganós (στεγανός), meaning "covered or concealed", and -graphia (γραφή), meaning "writing".[2] The first recorded use of the term was in 1499 by Johannes Trithemius in his Steganographia, a treatise on cryptography and steganography, disguised as a book on magic.
The advantage of steganography over cryptography alone is that the intended secret message does not attract attention to itself as an object of scrutiny. Plainly visible encrypted messages, no matter how unbreakable they are, arouse interest and may in themselves be incriminating in countries in which encryption is illegal.[3] Whereas cryptography is the practice of protecting the contents of a message alone, steganography is concerned with concealing both the fact that a secret message is being sent and its contents.
Steganography includes the concealment of information within computer files. In digital steganography, electronic communications may include steganographic coding inside a transport layer, such as a document file, image file, program, or protocol. Media files are ideal for steganographic transmission because of their large size. For example, a sender might start with an innocuous image file and adjust the color of every hundredth pixel to correspond to a letter in the alphabet. The change is so subtle that someone who is not looking for it is unlikely to notice it.
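The hundredth-pixel example generalizes to the common least-significant-bit (LSB) technique. The sketch below is a self-contained toy that uses a random array as the "image" (no real image library or published tool is implied): each bit of the secret message replaces the lowest bit of one pixel value, so no pixel changes by more than 1.

```python
import numpy as np

def embed(pixels, message):
    # Expand the message into bits, then overwrite one pixel's lowest bit per bit.
    bits = [int(b) for byte in message.encode() for b in f"{byte:08b}"]
    out = pixels.flatten().copy()
    out[: len(bits)] = (out[: len(bits)] & 0xFE) | bits
    return out.reshape(pixels.shape)

def extract(pixels, n_chars):
    # Read the lowest bit of the first n_chars * 8 pixels and regroup into bytes.
    bits = pixels.flatten()[: n_chars * 8] & 1
    return bytes(int("".join(map(str, bits[i : i + 8])), 2)
                 for i in range(0, len(bits), 8)).decode()

cover = np.random.default_rng(2).integers(0, 256, size=(64, 64), dtype=np.uint8)
stego = embed(cover, "hi")
recovered = extract(stego, 2)
max_change = int(np.abs(stego.astype(int) - cover.astype(int)).max())
```

In practice the payload would usually be encrypted first and spread across pseudo-randomly chosen pixels, but the embedding principle is the same.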
The first recorded uses of steganography can be traced back to 440 BC in Greece, when Herodotus mentions two examples in his Histories.[4] Histiaeus sent a message to his vassal, Aristagoras, by shaving the head of his most trusted servant, "marking" the message onto his scalp, then sending him on his way once his hair had regrown, with the instruction, "When thou art come to Miletus, bid Aristagoras shave thy head, and look thereon." Additionally, Demaratus sent a warning about a forthcoming attack on Greece by writing it directly on the wooden backing of a wax tablet before applying its beeswax surface. Wax tablets were in common use then as reusable writing surfaces, sometimes used for shorthand.
In his work Polygraphiae, Johannes Trithemius developed his Ave Maria cipher, which can hide information in a Latin praise of God. "Auctor sapientissimus conseruans angelica deferat nobis charitas potentissimi creatoris", for example, contains the concealed word VICIPEDIA.[5]
Numerous techniques throughout history have been developed to embed a message within another medium.
Placing the message in a physical item has been widely used for centuries.[6] Some notable examples include invisible ink on paper, writing a message in Morse code on yarn worn by a courier,[6] microdots, or using a music cipher to hide messages as musical notes in sheet music.[7]
In communities with social or government taboos or censorship, people use cultural steganography—hiding messages in idiom, pop culture references, and other messages they share publicly and assume are monitored. This relies on social context to make the underlying messages visible only to certain readers.[8][9]Examples include:
Since the dawn of computers, techniques have been developed to embed messages in digital cover media. The message to conceal is often encrypted, then used to overwrite part of a much larger block of encrypted data or a block of random data (an unbreakable cipher like the one-time pad generates ciphertexts that look perfectly random without the private key).
Examples of this include changing pixels in image or sound files,[10] properties of digital text such as spacing and font choice, chaffing and winnowing, mimic functions, modifying the echo of a sound file (echo steganography), and including data in ignored sections of a file.[11]
Since the era of evolving network applications, steganography research has shifted from image steganography to steganography in streaming media such as Voice over Internet Protocol (VoIP).
In 2003, Giannoula et al. developed a data hiding technique leading to compressed forms of source video signals on a frame-by-frame basis.[12]
In 2005, Dittmann et al. studied steganography and watermarking of multimedia contents such as VoIP.[13]
In 2008, Yongfeng Huang and Shanyu Tang presented a novel approach to information hiding in low-bit-rate VoIP speech streams; their published work on steganography was the first effort to improve the codebook partition by using graph theory along with quantization index modulation in low-bit-rate streaming media.[14]
In 2011 and 2012, Yongfeng Huang and Shanyu Tang devised new steganographic algorithms that use codec parameters as cover objects to realise real-time covert VoIP steganography. Their findings were published in IEEE Transactions on Information Forensics and Security.[15][16][17]
In 2024, Cheddad & Cheddad proposed a new framework[18] for reconstructing lost or corrupted audio signals using a combination of machine learning techniques and latent information. The main idea of their paper is to enhance audio signal reconstruction by fusing steganography, halftoning (dithering), and state-of-the-art shallow and deep learning methods (e.g., RF, LSTM). This combination of steganography, halftoning, and machine learning for audio signal reconstruction may inspire further research in optimizing this approach or applying it to other domains, such as image reconstruction (i.e., inpainting).
Adaptive steganography is a technique for concealing information within digital media by tailoring the embedding process to the specific features of the cover medium. One example of this approach develops a skin tone detection algorithm capable of identifying facial features, which is then applied to adaptive steganography.[19] By incorporating face rotation, the technique aims to enhance its adaptivity, concealing information in a manner that is both less detectable and more robust across various facial orientations within images. This strategy can potentially improve the efficacy of information hiding in both static images and video content.
Academic work since 2012 demonstrated the feasibility of steganography for cyber-physical systems (CPS) and the Internet of Things (IoT). Some techniques of CPS/IoT steganography overlap with network steganography, i.e. hiding data in communication protocols used in CPS/the IoT. However, specific techniques hide data in CPS components. For instance, data can be stored in unused registers of IoT/CPS components and in the states of IoT/CPS actuators.[20][21]
Digital steganography output may be in the form of printed documents. A message, the plaintext, may be first encrypted by traditional means, producing a ciphertext. Then, an innocuous cover text is modified in some way so as to contain the ciphertext, resulting in the stegotext. For example, the letter size, spacing, typeface, or other characteristics of a cover text can be manipulated to carry the hidden message. Only a recipient who knows the technique used can recover the message and then decrypt it. Francis Bacon developed Bacon's cipher as such a technique.
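Bacon's cipher can be sketched concisely. The toy below uses the 26-letter variant of the code and letter case as the carrying "typeface" property (the classic cipher used two typefaces and a 24-letter alphabet; the cover sentence here is arbitrary): each secret letter becomes five A/B symbols, embedded as the lower/upper case of consecutive cover letters.

```python
def bacon_code(ch):
    # 26-letter variant: A = AAAAA (00000) ... Z = BBAAB (11001).
    n = ord(ch.upper()) - ord("A")
    return "".join("AB"[(n >> i) & 1] for i in range(4, -1, -1))

def embed(secret, cover):
    code = "".join(bacon_code(c) for c in secret if c.isalpha())
    out, i = [], 0
    for ch in cover:
        if ch.isalpha() and i < len(code):
            out.append(ch.upper() if code[i] == "B" else ch.lower())
            i += 1
        else:
            out.append(ch)
    if i < len(code):
        raise ValueError("cover text too short")
    return "".join(out)

def extract(stego, n_letters):
    bits = ["B" if ch.isupper() else "A" for ch in stego if ch.isalpha()]
    out = []
    for k in range(n_letters):
        group = bits[5 * k : 5 * k + 5]
        out.append(chr(ord("A") + int("".join("01"[c == "B"] for c in group), 2)))
    return "".join(out)

stego = embed("HI", "steganography hides in plain sight")
recovered = extract(stego, 2)
```

The letters of the cover text are unchanged; only their case carries the payload, mirroring how Bacon hid the code in subtle typeface differences.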
The ciphertext produced by most digital steganography methods, however, is not printable. Traditional digital methods rely on perturbing noise in the channel file to hide the message, and as such, the channel file must be transmitted to the recipient with no additional noise from the transmission. Printing introduces much noise in the ciphertext, generally rendering the message unrecoverable. There are techniques that address this limitation, one notable example being ASCII Art Steganography.[22]
Although not classic steganography, some types of modern color laser printers integrate the model, serial number, and timestamps on each printout for traceability reasons, using a dot-matrix code made of small yellow dots not recognizable to the naked eye (see printer steganography for details).
In 2015, a taxonomy of 109 network hiding methods was presented by Steffen Wendzel, Sebastian Zander et al. that summarized core concepts used in network steganography research.[23]The taxonomy was developed further in recent years by several publications and authors and adjusted to new domains, such as CPS steganography.[24][25][26]
In 1977, Kent concisely described (in a footnote) the potential for covert channel signaling in general network communication protocols, even if the traffic is encrypted, in "Encryption-Based Protection for Interactive User/Computer Communication," Proceedings of the Fifth Data Communications Symposium, September 1977.
In 1987, Girling first studied covert channels on a local area network (LAN), identifying and realising three obvious covert channels (two storage channels and one timing channel); his research paper, "Covert channels in LAN's," was published in IEEE Transactions on Software Engineering, vol. SE-13, no. 2, February 1987.[27]
In 1989, Wolf implemented covert channels in LAN protocols, e.g. using the reserved fields, pad fields, and undefined fields in the TCP/IP protocol.[28]
In 1997, Rowland used the IP identification field and the TCP initial sequence number and acknowledgment sequence number fields in TCP/IP headers to build covert channels.[29]
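The idea behind such header-field channels can be illustrated with a sketch (this is not Rowland's actual scheme; the addresses and one-byte-per-packet packing are invented for the example). A payload byte rides in the 16-bit Identification field of an otherwise ordinary IPv4 header, built here with only the standard library:

```python
import struct

# Sketch of an IP-identification covert channel: the sender places one
# payload byte in the 16-bit Identification field of an IPv4 header
# (field layout per RFC 791); the receiver reads it back. Checksum is
# left at zero since this sketch never touches a real network.

def build_header(payload_byte: int, src=0x0A000001, dst=0x0A000002) -> bytes:
    ver_ihl, tos, total_len = 0x45, 0, 20     # IPv4, 20-byte header
    ident = payload_byte                      # covert data rides here
    flags_frag, ttl, proto, checksum = 0, 64, 6, 0
    return struct.pack('!BBHHHBBHII', ver_ihl, tos, total_len,
                       ident, flags_frag, ttl, proto, checksum, src, dst)

def extract_byte(header: bytes) -> int:
    # Identification field sits at byte offsets 4-5, network byte order
    return struct.unpack('!H', header[4:6])[0]
```

Because the Identification field legitimately carries arbitrary-looking values, a casual observer sees nothing unusual; detection requires modeling the field's normal statistics.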
In 2002, Kamran Ahsan produced a comprehensive summary of research on network steganography.[30]
In 2005, Steven J. Murdoch and Stephen Lewis contributed a chapter entitled "Embedding Covert Channels into TCP/IP" in the "Information Hiding" book published by Springer.[31]
All information hiding techniques that may be used to exchange steganograms in telecommunication networks can be classified under the general term of network steganography. This nomenclature was originally introduced by Krzysztof Szczypiorski in 2003.[32] In contrast to typical steganographic methods that use digital media (images, audio and video files) to hide data, network steganography uses communication protocols' control elements and their intrinsic functionality. As a result, such methods can be harder to detect and eliminate.[33]
Typical network steganography methods involve modification of the properties of a single network protocol. Such modification can be applied to the protocol data unit (PDU),[34][35][36] to the time relations between the exchanged PDUs,[37] or both (hybrid methods).[38]
Moreover, it is feasible to utilize the relation between two or more different network protocols to enable secret communication. These applications fall under the term inter-protocol steganography.[39]Alternatively, multiple network protocols can be used simultaneously to transfer hidden information and so-called control protocols can be embedded into steganographic communications to extend their capabilities, e.g. to allow dynamic overlay routing or the switching of utilized hiding methods and network protocols.[40][41]
Network steganography covers a broad spectrum of techniques, which include, among others:
Discussions of steganography generally use terminology analogous to and consistent with conventional radio and communications technology. However, some terms appear specifically in software and are easily confused. These are the most relevant ones to digital steganographic systems:
The payload is the data covertly communicated. The carrier is the signal, stream, or data file that hides the payload, which differs from the channel, which typically means the type of input, such as a JPEG image. The resulting signal, stream, or data file with the encoded payload is sometimes called the package, stego file, or covert message. The proportion of bytes, samples, or other signal elements modified to encode the payload is called the encoding density and is typically expressed as a number between 0 and 1.
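The encoding density amounts to a simple ratio; a small worked illustration (the figures are made up):

```python
# Encoding density: fraction of carrier elements modified to hold the payload.
def encoding_density(modified_elements: int, total_elements: int) -> float:
    return modified_elements / total_elements

# A 1,250-byte payload embedded one bit per carrier byte touches at most
# 1,250 * 8 = 10,000 of a 100,000-byte carrier's bytes:
density = encoding_density(10_000, 100_000)   # 0.1
```

Lower densities spread the payload more thinly and are correspondingly harder to detect statistically.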
In a set of files, the files that are considered likely to contain a payload are suspects. A suspect identified through some type of statistical analysis can be referred to as a candidate.
Detecting physical steganography requires a careful physical examination, including the use of magnification, developer chemicals, and ultraviolet light. It is a time-consuming process with obvious resource implications, even in countries that employ many people to spy on other citizens. However, it is feasible to screen the mail of certain suspected individuals or institutions, such as prisons or prisoner-of-war (POW) camps.
During World War II, prisoner-of-war camps gave prisoners specially treated paper that would reveal invisible ink. An article by Morris S. Kantrowitz, Technical Director of the United States Government Printing Office, in the 24 June 1948 issue of Paper Trade Journal describes in general terms the development of this paper. Three prototype papers (Sensicoat, Anilith, and Coatalith) were used to manufacture postcards and stationery provided to German prisoners of war in the US and Canada. If POWs tried to write a hidden message, the special paper rendered it visible. The US granted at least two patents related to the technology: one to Kantrowitz, U.S. patent 2,515,232, "Water-Detecting Paper and Water-Detecting Coating Composition Therefor," patented 18 July 1950, and an earlier one, "Moisture-Sensitive Paper and the Manufacture Thereof," U.S. patent 2,445,586, patented 20 July 1948. A similar strategy is to issue prisoners writing paper ruled with a water-soluble ink that runs in contact with water-based invisible ink.
In computing, steganographically encoded package detection is called steganalysis. The simplest method to detect modified files, however, is to compare them to known originals. For example, to detect information being moved through the graphics on a website, an analyst can maintain known clean copies of the materials and then compare them against the current contents of the site. The differences, if the carrier is the same, comprise the payload. In general, using extremely high compression rates makes steganography difficult but not impossible. Compression errors provide a hiding place for data, but high compression reduces the amount of data available to hold the payload, raising the encoding density, which facilitates easier detection (in extreme cases, even by casual observation).
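The compare-to-original approach can be sketched in a few lines (a simplification: real carriers re-encoded by a server may also differ in length or metadata, which this sketch treats as "not a simple perturbation"):

```python
# Steganalysis by comparison: diff a suspect file against a known-clean
# original. Differing byte positions, together with the suspect's values
# there, are candidate payload material.
def diff_payload(clean: bytes, suspect: bytes):
    if len(clean) != len(suspect):
        return None   # carrier was re-encoded, not merely perturbed
    return [(i, suspect[i]) for i in range(len(clean))
            if clean[i] != suspect[i]]
```

An empty result means the suspect is byte-identical to the clean copy; a `None` result means a byte-level diff is not meaningful and the files must be compared at a higher level (pixels, samples).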
There are a variety of basic tests that can be done to identify whether or not a secret message exists. This process is not concerned with the extraction of the message, which is a different process and a separate step. The most basic approaches of steganalysis are visual or aural attacks, structural attacks, and statistical attacks. These approaches attempt to detect the steganographic algorithms that were used.[44] These algorithms range from unsophisticated to very sophisticated, with early algorithms being much easier to detect due to the statistical anomalies they left behind. The size of the hidden message is one factor in how difficult it is to detect; the overall size of the cover object also plays a role. If the cover object is small and the message is large, the distorted statistics make the message easier to detect; a larger cover object with a small message alters the statistics less and gives the message a better chance of going unnoticed.
Steganalysis that targets a particular algorithm has much better success, as it can key in on the anomalies that algorithm leaves behind: knowing the behaviors the algorithm commonly exhibits allows a targeted search for known tendencies. In many images, the least significant bits are in fact not random: camera sensors, especially lower-end ones, introduce bits that are only imperfectly random, and file compression alters them further. A steganography tool can camouflage a secret message in the least significant bits, but in doing so it can introduce a region that is too perfectly random. This area of perfect randomization stands out and can be detected by comparing the least significant bits to the next-to-least significant bits of an image that has not been compressed.[44]
Generally, though, there are many techniques known to be able to hide messages in data using steganographic techniques. None are, by definition, obvious when users employ standard applications, but some can be detected by specialist tools. Others, however, are resistant to detection: it is not possible to reliably distinguish data containing a hidden message from data containing just noise, even when the most sophisticated analysis is performed. Steganography is also being used to conceal and deliver more effective cyber attacks, referred to as stegware. The term stegware was first introduced in 2017[45] to describe any malicious operation involving steganography as a vehicle to conceal an attack. Because detection of steganography is challenging, it is not an adequate defence on its own; the only way of defeating the threat is to transform data in a way that destroys any hidden messages,[46] a process called Content Threat Removal.
Some modern computer printers use steganography, including Hewlett-Packard and Xerox brand color laser printers. The printers add tiny yellow dots to each page. The barely-visible dots contain encoded printer serial numbers and date and time stamps.[47]
The larger the cover message (in binary data, the number of bits) relative to the hidden message, the easier it is to hide the hidden message (as an analogy, the larger the "haystack", the easier it is to hide a "needle"). So digital pictures, which contain much data, are sometimes used to hide messages on the Internet and on other digital communication media. It is not clear how common this practice actually is.
For example, a 24-bit bitmap uses 8 bits to represent each of the three color values (red, green, and blue) of each pixel. The blue channel alone has 2⁸ = 256 different levels of intensity. The difference between 11111111₂ and 11111110₂ in the value for blue intensity is likely to be undetectable by the human eye. Therefore, the least significant bit can be used more or less undetectably for something other than color information. If that is repeated for the green and the red elements of each pixel as well, it is possible to encode one letter of ASCII text for every three pixels.
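This scheme can be sketched directly (an illustrative toy, not tied to any particular tool): each payload bit replaces the least significant bit of one channel byte, so an 8-bit ASCII character occupies eight of the nine channel bytes in three pixels.

```python
# LSB embedding sketch for 24-bit RGB data held as a flat list of channel
# bytes (R, G, B, R, G, B, ...). Each payload bit replaces the least
# significant bit of one channel byte; no byte changes by more than 1.
def embed(channels, text):
    out = list(channels)
    # most-significant bit of each character first
    bits = [(ord(c) >> i) & 1 for c in text for i in range(7, -1, -1)]
    for i, b in enumerate(bits):
        out[i] = (out[i] & 0xFE) | b    # clear the LSB, then set payload bit
    return out

def extract(channels, n_chars):
    bits = [b & 1 for b in channels[:n_chars * 8]]
    return ''.join(chr(sum(bit << (7 - j) for j, bit in enumerate(bits[i:i + 8])))
                   for i in range(0, len(bits), 8))
```

Since each colour value moves by at most one intensity level, the visual change is below the threshold the passage describes.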
Stated somewhat more formally, the objective for making steganographic encoding difficult to detect is to ensure that the changes to the carrier (the original signal) caused by the injection of the payload (the signal to covertly embed) are visually (and ideally, statistically) negligible. The changes are indistinguishable from the noise floor of the carrier. All media can serve as a carrier, but media with a large amount of redundant or compressible information are better suited.
From an information-theoretical point of view, that means that the channel must have more capacity than the "surface" signal requires. There must be redundancy. For a digital image, it may be noise from the imaging element; for digital audio, it may be noise from recording techniques or amplification equipment. In general, electronics that digitize an analog signal suffer from several noise sources, such as thermal noise, flicker noise, and shot noise. The noise provides enough variation in the captured digital information that it can be exploited as a noise cover for hidden data. In addition, lossy compression schemes (such as JPEG) always introduce some error to the decompressed data, and it is possible to exploit that for steganographic use as well.
Although steganography and digital watermarking seem similar, they are not. In steganography, the hidden message should remain intact until it reaches its destination. Steganography can be used for digital watermarking, in which a message (being simply an identifier) is hidden in an image so that its source can be tracked or verified (for example, Coded Anti-Piracy) or even just to identify an image (as in the EURion constellation). In such a case, the technique of hiding the message (here, the watermark) must be robust to prevent tampering. However, digital watermarking sometimes requires a brittle watermark, which can be modified easily, to check whether the image has been tampered with. That is the key difference between steganography and digital watermarking.
In 2010, the Federal Bureau of Investigation alleged that the Russian foreign intelligence service uses customized steganography software for embedding encrypted text messages inside image files for certain communications with "illegal agents" (agents without diplomatic cover) stationed abroad.[48]
On 23 April 2019 the U.S. Department of Justice unsealed an indictment charging Xiaoqing Zheng, a Chinese businessman and former Principal Engineer at General Electric, with 14 counts of conspiring to steal intellectual property and trade secrets from General Electric. Zheng had allegedly used steganography to exfiltrate 20,000 documents from General Electric to Tianyi Aviation Technology Co. in Nanjing, China, a company the FBI accused him of starting with backing from the Chinese government.[49]
There are distributed steganography methods,[50] including methodologies that distribute the payload through multiple carrier files in diverse locations to make detection more difficult; one example is U.S. patent 8,527,779 by cryptographer William Easttom (Chuck Easttom).
The puzzles presented by Cicada 3301 have incorporated steganography with cryptography and other solving techniques since 2012.[51] Puzzles involving steganography have also been featured in other alternate reality games.
The communications[52][53] of The May Day mystery have incorporated steganography and other solving techniques since 1981.[54]
It is possible to steganographically hide computer malware in digital images, videos, audio and various other files in order to evade detection by antivirus software. This type of malware is called stegomalware. It can be activated by external code, which can be malicious or even non-malicious if some vulnerability in the software reading the file is exploited.[55]
Stegomalware can be removed from certain files without knowing whether they contain stegomalware or not. This is done through content disarm and reconstruction (CDR) software, and it involves reprocessing the entire file or removing parts from it.[56][57] Actually detecting stegomalware in a file can be difficult and may involve testing the file's behaviour in virtual environments or deep-learning analysis of the file.[55]
Steganalytic algorithms can be catalogued in different ways, most notably according to the information available to the analyst and according to the purpose sought.
These algorithms can be catalogued based on the information held by the steganalyst in terms of clear and encrypted messages. This classification is analogous to the one used in cryptanalysis, though with several differences:
The principal purpose of steganography is to transfer information unnoticed; however, an attacker may have two different goals:
https://en.wikipedia.org/wiki/Steganography
Electromagnetic interference (EMI), also called radio-frequency interference (RFI) when in the radio frequency spectrum, is a disturbance generated by an external source that affects an electrical circuit by electromagnetic induction, electrostatic coupling, or conduction.[1] The disturbance may degrade the performance of the circuit or even stop it from functioning. In the case of a data path, these effects can range from an increase in error rate to a total loss of the data.[2] Both human-made and natural sources generate changing electrical currents and voltages that can cause EMI: ignition systems, the cellular networks of mobile phones, lightning, solar flares, and auroras (northern/southern lights).[citation needed] EMI frequently affects AM radios. It can also affect mobile phones, FM radios, and televisions, as well as observations for radio astronomy and atmospheric science.
EMI can be used intentionally for radio jamming, as in electronic warfare.
Since the earliest days of radio communications, the negative effects of interference from both intentional and unintentional transmissions have been felt and the need to manage the radio frequency spectrum became apparent.[3]
In 1933, a meeting of the International Electrotechnical Commission (IEC) in Paris recommended that the International Special Committee on Radio Interference (CISPR) be set up to deal with the emerging problem of EMI. CISPR subsequently produced technical publications covering measurement and test techniques and recommended emission and immunity limits. These have evolved over the decades and form the basis of much of the world's EMC regulations today.[4]
In 1979, legal limits were imposed on electromagnetic emissions from all digital equipment by the FCC in the US in response to the increased number of digital systems that were interfering with wired and radio communications. Test methods and limits were based on CISPR publications, although similar limits were already enforced in parts of Europe.[5]
In the mid-1980s, the European Union member states adopted a number of "new approach" directives with the intention of standardizing technical requirements for products so that they do not become a barrier to trade within the EC. One of these was the EMC Directive (89/336/EC),[6] and it applies to all equipment placed on the market or taken into service. Its scope covers all apparatus "liable to cause electromagnetic disturbance or the performance of which is liable to be affected by such disturbance".[5]
This was the first time there was a legal requirement on immunity, as well as emissions on apparatus intended for the general population. Although there may be additional costs involved for some products to give them a known level of immunity, it increases their perceived quality as they are able to co-exist with apparatus in the active EM environment of modern times and with fewer problems.[5]
Many countries now have similar requirements for products to meet some level ofelectromagnetic compatibility(EMC) regulation.[5]
Electromagnetic interference divides into several categories according to the source and signal characteristics.
The origin of interference, often called "noise" in this context, can be human-made (artificial) or natural.
Continuous, or continuous wave (CW), interference arises where the source continuously emits at a given range of frequencies. This type is naturally divided into sub-categories according to frequency range, and as a whole is sometimes referred to as "DC to daylight". One common classification is into narrowband and broadband, according to the spread of the frequency range.
An electromagnetic pulse (EMP), sometimes called a transient disturbance, arises where the source emits a short-duration pulse of energy. The energy is usually broadband by nature, although it often excites a relatively narrow-band damped sine wave response in the victim.
Sources divide broadly into isolated and repetitive events.
Sources of isolated EMP events include:
Sources of repetitive EMP events, sometimes as regularpulsetrains, include:
Conducted electromagnetic interference is caused by the physical contact of the conductors, as opposed to radiated EMI, which is caused by induction (without physical contact of the conductors). Electromagnetic disturbances in the EM field of a conductor will no longer be confined to the surface of the conductor and will radiate away from it. This occurs in all conductors, and mutual inductance between two radiated electromagnetic fields will result in EMI.[7]
Some of the technical terms employed are used with differing meanings, and some phenomena may be referred to by several different terms. These terms are used here in a widely accepted way, which is consistent with other articles in the encyclopedia.
The basic arrangement of noise emitter or source, coupling path, and victim, receptor or sink is shown in the figure below. Source and victim are usually electronic hardware devices, though the source may be a natural phenomenon such as a lightning strike, electrostatic discharge (ESD) or, in one famous case, the Big Bang at the origin of the Universe.
There are four basic coupling mechanisms: conductive, capacitive, magnetic or inductive, and radiative. Any coupling path can be broken down into one or more of these coupling mechanisms working together. For example, the lower path in the diagram involves inductive, conductive and capacitive modes.
Conductive coupling occurs when the coupling path between the source and victim is formed by direct electrical contact with a conducting body, for example a transmission line, wire, cable, PCB trace or metal enclosure. Conducted noise is also characterised by the way it appears on different conductors:
Inductive coupling occurs where the source and victim are separated by a short distance (typically less than a wavelength). Strictly, "inductive coupling" can be of two kinds: electrical induction and magnetic induction. It is common to refer to electrical induction as capacitive coupling, and to magnetic induction as inductive coupling.
Capacitive coupling occurs when a varying electrical field exists between two adjacent conductors typically less than a wavelength apart, inducing a change in voltage on the receiving conductor.
Inductive coupling or magnetic coupling occurs when a varying magnetic field exists between two parallel conductors typically less than a wavelength apart, inducing a change in voltage along the receiving conductor.
Radiative coupling or electromagnetic coupling occurs when source and victim are separated by a large distance, typically more than a wavelength. Source and victim act as radio antennas: the source emits or radiates an electromagnetic wave which propagates across the space in between and is picked up or received by the victim.
Interference, in the sense of electromagnetic interference or radio-frequency interference (EMI or RFI), is defined in Article 1.166 of the International Telecommunication Union's (ITU) Radio Regulations (RR)[8] as "The effect of unwanted energy due to one or a combination of emissions, radiations, or inductions upon reception in a radiocommunication system, manifested by any performance degradation, misinterpretation, or loss of information which could be extracted in the absence of such unwanted energy".
This definition is also used by frequency administrations to provide frequency assignments and assignment of frequency channels to radio stations or systems, as well as to analyze electromagnetic compatibility between radiocommunication services.
In accordance with ITU RR (article 1) variations of interference are classified as follows:[9]
Conducted EMI is caused by the physical contact of the conductors, as opposed to radiated EMI, which is caused by induction (without physical contact of the conductors).
For lower frequencies, EMI is caused by conduction and, for higher frequencies, by radiation.
EMI through the ground wire is also very common in an electrical facility.
Interference tends to be more troublesome with older radio technologies, such as analogue amplitude modulation, which have no way of distinguishing unwanted in-band signals from the intended signal, and with the omnidirectional antennas used with broadcast systems. Newer radio systems incorporate several improvements that enhance their selectivity. In digital radio systems, such as Wi-Fi, error-correction techniques can be used. Spread-spectrum and frequency-hopping techniques can be used with both analogue and digital signalling to improve resistance to interference. A highly directional receiver, such as a parabolic antenna or a diversity receiver, can be used to select one signal in space to the exclusion of others.
The most extreme example of digital spread-spectrum signalling to date is ultra-wideband (UWB), which proposes the use of large sections of the radio spectrum at low amplitudes to transmit high-bandwidth digital data. UWB, if used exclusively, would enable very efficient use of the spectrum, but users of non-UWB technology are not yet prepared to share the spectrum with the new system because of the interference it would cause to their receivers (the regulatory implications of UWB are discussed in the ultra-wideband article).
In the United States, the 1982 Public Law 97-259 allowed the Federal Communications Commission (FCC) to regulate the susceptibility of consumer electronic equipment.[10][11]
Potential sources of RFI and EMI include:[12] various types of transmitters, doorbell transformers, toaster ovens, electric blankets, ultrasonic pest control devices, electric bug zappers, heating pads, and touch-controlled lamps. Multiple CRT computer monitors or televisions sitting too close to one another can sometimes cause a "shimmy" effect in each other, due to the electromagnetic nature of their picture tubes, especially when one of their de-gaussing coils is activated.
Electromagnetic interference at 2.4 GHz may be caused by 802.11b, 802.11g and 802.11n wireless devices, Bluetooth devices, baby monitors and cordless telephones, video senders, and microwave ovens.
Switching loads (inductive, capacitive, and resistive), such as electric motors, transformers, heaters, lamps, ballasts, power supplies, etc., all cause electromagnetic interference, especially at currents above 2 A. The usual method used for suppressing EMI is by connecting a snubber network, a resistor in series with a capacitor, across a pair of contacts. While this may offer modest EMI reduction at very low currents, snubbers do not work at currents over 2 A with electromechanical contacts.[13][14]
Another method for suppressing EMI is the use of ferrite core noise suppressors (or ferrite beads), which are inexpensive and which clip on to the power lead of the offending device or the compromised device.
Switched-mode power supplies can be a source of EMI, but have become less of a problem as design techniques have improved, such as integrated power factor correction.
Most countries have legal requirements that mandate electromagnetic compatibility: electronic and electrical hardware must still work correctly when subjected to certain amounts of EMI, and should not emit EMI that could interfere with other equipment (such as radios).
Radio frequency signal quality has declined throughout the 21st century by roughly one decibel per year as the spectrum becomes increasingly crowded.[additional citation(s) needed] This has inflicted a Red Queen's race on the mobile phone industry, as companies have been forced to put up more cellular towers (at new frequencies), which then cause more interference, thereby requiring more investment by the providers and frequent upgrades of mobile phones to match.[15]
The International Special Committee for Radio Interference or CISPR (French acronym for "Comité International Spécial des Perturbations Radioélectriques"), which is a committee of the International Electrotechnical Commission (IEC) sets international standards for radiated and conducted electromagnetic interference. These are civilian standards for domestic, commercial, industrial and automotive sectors. These standards form the basis of other national or regional standards, most notably the European Norms (EN) written by CENELEC (European committee for electrotechnical standardisation). US organizations include the Institute of Electrical and Electronics Engineers (IEEE), the American National Standards Institute (ANSI), and the US Military (MILSTD).
Integrated circuits are often a source of EMI, but they must usually couple their energy to larger objects such as heatsinks, circuit board planes and cables to radiate significantly.[16]
On integrated circuits, important means of reducing EMI are: the use of bypass or decoupling capacitors on each active device (connected across the power supply, as close to the device as possible), rise-time control of high-speed signals using series resistors,[17] and IC power supply pin filtering. Shielding is usually a last resort after other techniques have failed, because of the added expense of shielding components such as conductive gaskets.
The efficiency of the radiation depends on the height above the ground plane or power plane (at RF, one is as good as the other) and the length of the conductor in relation to the wavelength of the signal component (fundamental frequency, harmonic or transient such as overshoot, undershoot or ringing). At lower frequencies, such as 133 MHz, radiation is almost exclusively via I/O cables; RF noise gets onto the power planes and is coupled to the line drivers via the VCC and GND pins. The RF is then coupled to the cable through the line driver as common-mode noise. Since the noise is common-mode, shielding has very little effect, even with differential pairs. The RF energy is capacitively coupled from the signal pair to the shield, and the shield itself does the radiating. One cure for this is to use a braid-breaker or choke to reduce the common-mode signal.
At higher frequencies, usually above 500 MHz, traces get electrically longer and higher above the plane. Two techniques are used at these frequencies: wave shaping with series resistors and embedding the traces between the two planes. If all these measures still leave too much EMI, shielding such as RF gaskets and copper or conductive tape can be used. Most digital equipment is designed with metal or conductive-coated plastic cases.[citation needed]
Any unshielded semiconductor (e.g. an integrated circuit) will tend to act as a detector for the radio signals commonly found in the domestic environment (e.g. from mobile phones).[18] Such a detector can demodulate the high-frequency mobile phone carrier (e.g., GSM850 and GSM1900, GSM900 and GSM1800) and produce low-frequency (e.g., 217 Hz) demodulated signals.[19] This demodulation manifests itself as an unwanted audible buzz in audio appliances such as microphone amplifiers, speaker amplifiers, car radios, telephones, etc. Adding onboard EMI filters or special layout techniques can help in bypassing EMI or improving RF immunity.[20] Some ICs are designed (e.g., LMV831-LMV834,[21] MAX9724[22]) to have integrated RF filters or a special design that helps reduce any demodulation of the high-frequency carrier.
Designers often need to carry out special tests for RF immunity of parts to be used in a system. These tests are often done in an anechoic chamber with a controlled RF environment, where the test vectors produce an RF field similar to that produced in an actual environment.[19]
Interference inradio astronomy, where it is commonly referred to as radio-frequency interference (RFI), is any source of transmission that is within the observed frequency band other than the celestial sources themselves. Because transmitters on and around the Earth can be many times stronger than the astronomical signal of interest, RFI is a major concern for performing radio astronomy.[23]Natural sources of interference, such as lightning and the Sun, are also often referred to as RFI.[citation needed]
Some of the frequency bands that are very important for radio astronomy, such as the21-cm HI lineat 1420 MHz, are protected by regulation.[citation needed]However, modern radio-astronomical observatories such asVLA,LOFAR, andALMAhave a very large bandwidth over which they can observe.[citation needed]Because of the limited spectral space at radio frequencies, these frequency bands cannot be completely allocated to radio astronomy; for example,redshiftedimages of the 21-cm line from thereionizationepoch can overlap with theFM broadcast band(88–108 MHz), and therefore radio telescopes need to deal with RFI in this bandwidth.[23]
Techniques to deal with RFI range from filters in hardware to advanced algorithms in software. One way to deal with strong transmitters is to filter out the frequency of the source completely. This is for example the case for the LOFAR observatory, which filters out the FM radio stations between 90 and 110 MHz. It is important to remove such strong sources of interference as soon as possible, because they might "saturate" the highly sensitive receivers (amplifiersandanalogue-to-digital converters), which means that the received signal is stronger than the receiver can handle. However, filtering out a frequency band implies that these frequencies can never be observed with the instrument.[citation needed]
A common technique to deal with RFI within the observed frequency bandwidth is to employ RFI detection in software. Such software can find samples in time, frequency, or time-frequency space that are contaminated by an interfering source. These samples are subsequently ignored in further analysis of the observed data, a process often referred to as data flagging. Because most interfering transmitters, such as lightning or citizens' band (CB) radio devices, have a small bandwidth and are not continuously present, most of the data remains available for the astronomical analysis. However, data flagging cannot solve issues with continuous broad-band transmitters, such as windmills, digital video, or digital audio transmitters.[citation needed]
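The flagging step described above can be sketched as a simple robust outlier detector. The following is a minimal illustration, not any observatory's actual pipeline; the function name, data, and threshold are invented for the example:

```python
from statistics import median

def flag_rfi(samples, threshold=5.0):
    """Flag samples lying more than `threshold` robust standard deviations
    from the median (a crude time-domain RFI detector).

    Uses the median absolute deviation (MAD), scaled by 1.4826 so it
    approximates the standard deviation for Gaussian noise.
    Returns a list of booleans: True = flagged (ignore in analysis).
    """
    med = median(samples)
    mad = median(abs(x - med) for x in samples)
    robust_sigma = 1.4826 * mad
    if robust_sigma == 0:
        return [False] * len(samples)
    return [abs(x - med) > threshold * robust_sigma for x in samples]

# Quiet noise with one strong interfering burst (e.g. a CB radio keying up)
data = [1.0, 0.9, 1.1, 1.0, 0.95, 50.0, 1.05, 1.0]
flags = flag_rfi(data)
```

Because the median and MAD are insensitive to a few extreme values, the burst does not inflate the noise estimate used to detect it; real flaggers apply the same idea per frequency channel and in time-frequency space.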
Another way to manage RFI is to establish a radio quiet zone (RQZ): a well-defined area surrounding receivers that has special regulations to reduce RFI in favor of radio astronomy observations within the zone. The regulations may include special management of spectrum and power-flux or power-flux-density limitations. The controls within the zone may cover elements other than radio transmitters or radio devices, including aircraft controls and control of unintentional radiators such as industrial, scientific, and medical devices, vehicles, and power lines. The first RQZ for radio astronomy is the United States National Radio Quiet Zone (NRQZ), established in 1958.[24]
Prior to the introduction of Wi-Fi, one of the biggest applications of the 5 GHz band was the Terminal Doppler Weather Radar.[25][26] The decision to use 5 GHz spectrum for Wi-Fi was finalized at the World Radiocommunication Conference in 2003; however, meteorological authorities were not involved in the process.[27][28] Subsequent lax implementation and misconfiguration of dynamic frequency selection (DFS) caused significant disruption of weather radar operations in a number of countries around the world. In Hungary, the weather radar system was declared non-operational for more than a month. Due to the severity of the interference, South African weather services ended up abandoning C-band operation, switching their radar network to S band.[26][29]
Transmissions on bands adjacent to those used by passive remote sensing, such as weather satellites, have caused interference, sometimes significant.[30] There is concern that adoption of insufficiently regulated 5G could produce major interference issues; significant interference can impair numerical weather prediction performance and incur negative economic and public-safety impacts.[31][32][33] These concerns led US Secretary of Commerce Wilbur Ross and NASA Administrator Jim Bridenstine in February 2019 to urge the FCC to cancel a proposed spectrum auction; the request was rejected.[34]
|
https://en.wikipedia.org/wiki/Electromagnetic_interference
|
The Software for Open Networking in the Cloud, alternatively abbreviated and stylized as SONiC, is a free and open source network operating system based on Linux. It was originally developed by Microsoft and the Open Compute Project. In 2022, Microsoft ceded oversight of the project to the Linux Foundation, which will continue to work with the Open Compute Project for continued ecosystem and developer growth.[1][2][3][4] SONiC includes the networking software components necessary for a fully functional L3 device[5] and was designed to meet the requirements of a cloud data center. It allows cloud operators to share the same software stack across hardware from different switch vendors and works on over 100 different platforms.[3][5][6] There are multiple companies offering enterprise service and support for SONiC.
SONiC was developed and open sourced by Microsoft in 2016.[2] The software decouples network software from the underlying hardware and is built on the Switch Abstraction Interface API.[1] It runs on network switches and ASICs from multiple vendors.[2] Notable supported network features include Border Gateway Protocol (BGP), remote direct memory access (RDMA), QoS, and various other Ethernet/IP technologies.[2] Much of the protocol support is provided through inclusion of the FRRouting suite of routing daemons.[7]
The SONiC community includes cloud providers, service providers, and silicon and component suppliers, as well as networking hardware OEMs and ODMs. It has more than 850 members.[2]
The source code is licensed under a mix of open source licenses, including the GNU General Public License and the Apache License, and is available on GitHub.[8][9]
|
https://en.wikipedia.org/wiki/SONiC_(operating_system)
|
An electronic symbol is a pictogram used to represent various electrical and electronic devices or functions, such as wires, batteries, resistors, and transistors, in a schematic diagram of an electrical or electronic circuit. These symbols are largely standardized internationally today, but may vary from country to country or engineering discipline, based on traditional conventions.
The graphic symbols used for electrical components in circuit diagrams are covered by national and international standards, in particular:
The standards do not all agree, and use of unusual (even if standardized) symbols can lead to confusion and errors.[2] Symbol usage is sometimes idiosyncratic to engineering disciplines, and national or local variations to international standards exist. For example, lighting and power symbols used as part of architectural drawings may differ from symbols for devices used in electronics.
Symbols shown are typical examples, not a complete list.[3][4]
The shorthand for ground is GND. Optionally, the triangle in the middle symbol may be filled in.
Voltage text should be placed next to each battery symbol, such as "3V".
It is very common for potentiometer and rheostat symbols to be used for many types of variable resistors and trimmers.
Optionally, the triangle in these symbols may be filled in, or a line may be drawn through the triangle (less desirable). The words anode and cathode aren't part of the diode symbols. For instructional purposes, sometimes two letters (A/C or A/K) are placed next to diode symbols, similar to how the letters C/B/E or D/G/S are placed next to transistor symbols. "K" is often used instead of "C", because the origin of the word cathode is kathodos, and to avoid confusion with "C" for capacitors in the silkscreen of printed circuit boards. Voltage text should be placed next to each Zener and TVS diode symbol, such as "5.1V".
There are many ways to draw a single-phase bridge rectifier symbol. Some simplified symbols don't show the internal diodes.
An inductor can be drawn either as a series of loops, or series of half-circles.
Voltage text should be placed on both sides of power transformers, such as 120V (input side) and 6.3V (output side).
Optionally, transistor symbols may include a circle.[6]Note: The pin letters B/C/E and G/D/S aren't part of the transistor symbols.
For multiple-pole switches, a dotted or dashed line can be included to indicate that two or more switches operate at the same time (see DPST and DPDT examples below).
Relay symbols are a combination of an inductor symbol and a switch symbol.
Note: The pin letters in these symbols aren't part of the standard relay symbol.
LED is located in the diode section.
TVS and Zener diodes are located in the diode section.
Speaker symbols sometimes include an internal inductor symbol. Impedance text should be placed next to each speaker symbol, such as "8 ohms".
There are numerous connector symbol variations.
For the symbols below: A and B are inputs, Q is output. Note: These letters are not part of the symbols.
There are variations of these logic gate symbols. Depending on the IC, the two-input gates below may have: 1) two or more inputs; 2) infrequently, a second inverted Q̄ output too.
The above logic symbols may have additional I/O variations too: 1) Schmitt trigger inputs, 2) tri-state outputs, 3) open-collector or open-drain outputs (not shown).
For the symbols below: Q is output, Q̄ is inverted output, E is enable input, the internal triangle shape is the clock input, S is Set, R is Reset (some datasheets use clear (CLR) instead of reset along the bottom).
There are variations of these flip-flop symbols. Depending on the IC, a flip-flop may have: 1) one or both outputs (Q only, Q̄ only, both Q & Q̄); 2) one or both forced inputs along top & bottom (R only, S only, both R & S); 3) some inputs may be inverted.
Note: The outside text isn't part of these symbols.
Frequency text should be placed next to each oscillator symbol, such as "16MHz".
The shape of some electronic symbols has changed over time. The following historical electronic symbols can be found in old electronic books, magazines, and schematics, and are now considered obsolete.
All of the following are obsolete capacitor symbols.
|
https://en.wikipedia.org/wiki/Electronic_symbol
|
In statistics, an effect size is a value measuring the strength of the relationship between two variables in a population, or a sample-based estimate of that quantity. It can refer to the value of a statistic calculated from a sample of data, the value of one parameter for a hypothetical population, or to the equation that operationalizes how statistics or parameters lead to the effect size value.[1] Examples of effect sizes include the correlation between two variables,[2] the regression coefficient in a regression, the mean difference, or the risk of a particular event (such as a heart attack) happening. Effect sizes complement statistical hypothesis testing, and play an important role in power analyses to assess the sample size required for new experiments.[3] Effect sizes are fundamental in meta-analyses, which aim to provide the combined effect size based on data from multiple studies. The cluster of data-analysis methods concerning effect sizes is referred to as estimation statistics.
Effect size is an essential component when evaluating the strength of a statistical claim, and it is the first item (magnitude) in the MAGIC criteria. The standard deviation of the effect size is of critical importance, since it indicates how much uncertainty is included in the measurement. A standard deviation that is too large will make the measurement nearly meaningless. In meta-analysis, where the purpose is to combine multiple effect sizes, the uncertainty in the effect size is used to weigh effect sizes, so that large studies are considered more important than small studies. The uncertainty in the effect size is calculated differently for each type of effect size, but generally only requires knowing the study's sample size (N), or the number of observations (n) in each group.
Reporting effect sizes or estimates thereof (effect estimate [EE], estimate of effect) is considered good practice when presenting empirical research findings in many fields.[4][5] The reporting of effect sizes facilitates the interpretation of the importance of a research result, in contrast to its statistical significance.[6] Effect sizes are particularly prominent in social science and in medical research (where the size of the treatment effect is important).
Effect sizes may be measured in relative or absolute terms. In relative effect sizes, two groups are directly compared with each other, as in odds ratios and relative risks. For absolute effect sizes, a larger absolute value always indicates a stronger effect. Many types of measurements can be expressed as either absolute or relative, and these can be used together because they convey different information. A prominent task force in the psychology research community made the following recommendation:
Always present effect sizes for primary outcomes... If the units of measurement are meaningful on a practical level (e.g., number of cigarettes smoked per day), then we usually prefer an unstandardized measure (regression coefficient or mean difference) to a standardized measure (r or d).[4]
As in statistical estimation, the true effect size is distinguished from the observed effect size. For example, to measure the risk of disease in a population (the population effect size) one can measure the risk within a sample of that population (the sample effect size). Conventions for describing true and observed effect sizes follow standard statistical practices—one common approach is to use Greek letters like ρ (rho) to denote population parameters and Latin letters like r to denote the corresponding statistic. Alternatively, a "hat" can be placed over the population parameter to denote the statistic, e.g. with ρ̂ being the estimate of the parameter ρ.
As in any statistical setting, effect sizes are estimated with sampling error, and may be biased unless the effect size estimator that is used is appropriate for the manner in which the data were sampled and the manner in which the measurements were made. An example of this is publication bias, which occurs when scientists report results only when the estimated effect sizes are large or are statistically significant. As a result, if many researchers carry out studies with low statistical power, the reported effect sizes will tend to be larger than the true (population) effects, if any.[7] Another example where effect sizes may be distorted is in a multiple-trial experiment, where the effect size calculation is based on the averaged or aggregated response across the trials.[8]
Smaller studies sometimes show different, often larger, effect sizes than larger studies. This phenomenon is known as the small-study effect, which may signal publication bias.[9]
Sample-based effect sizes are distinguished from test statistics used in hypothesis testing, in that they estimate the strength (magnitude) of, for example, an apparent relationship, rather than assigning a significance level reflecting whether the magnitude of the relationship observed could be due to chance. The effect size does not directly determine the significance level, or vice versa. Given a sufficiently large sample size, a non-null statistical comparison will always show a statistically significant result unless the population effect size is exactly zero (and even there it will show statistical significance at the rate of the Type I error used). For example, a sample Pearson correlation coefficient of 0.01 is statistically significant if the sample size is 1000. Reporting only the significant p-value from this analysis could be misleading if a correlation of 0.01 is too small to be of interest in a particular application.
The term effect size can refer to a standardized measure of effect (such as r, Cohen's d, or the odds ratio), or to an unstandardized measure (e.g., the difference between group means or the unstandardized regression coefficients). Standardized effect size measures are typically used when:
In meta-analyses, standardized effect sizes are used as a common measure that can be calculated for different studies and then combined into an overall summary.
The interpretation of an effect size as being small, medium, or large depends on its substantive context and its operational definition. Jacob Cohen[10] suggested interpretation guidelines that are near ubiquitous across many fields. However, Cohen also cautioned:
"The terms 'small,' 'medium,' and 'large' are relative, not only to each other, but to the area of behavioral science or even more particularly to the specific content and research method being employed in any given investigation... In the face of this relativity, there is a certain risk inherent in offering conventional operational definitions for these terms for use in power analysis in as diverse a field of inquiry as behavioral science. This risk is nevertheless accepted in the belief that more is to be gained than lost by supplying a common conventional frame of reference which is recommended for use only when no better basis for estimating the ES index is available." (p. 25)
Sawilowsky[11] recommended that the rules of thumb for effect sizes should be revised, and expanded the descriptions to include very small, very large, and huge. Funder and Ozer[12] suggested that effect sizes should be interpreted based on benchmarks and consequences of findings, resulting in adjustment of guideline recommendations.
Lenth[13] noted that for a medium effect size, "you'll choose the same n regardless of the accuracy or reliability of your instrument, or the narrowness or diversity of your subjects. Clearly, important considerations are being ignored here. Researchers should interpret the substantive significance of their results by grounding them in a meaningful context or by quantifying their contribution to knowledge, and Cohen's effect size descriptions can be helpful as a starting point."[6] Similarly, a U.S. Dept of Education sponsored report argued that the widespread indiscriminate use of Cohen's interpretation guidelines can be inappropriate and misleading.[14] They instead suggested that norms should be based on distributions of effect sizes from comparable studies. Thus a small effect (in absolute numbers) could be considered large if it is larger than in similar studies in the field. See Abelson's paradox and Sawilowsky's paradox for related points.[15][16][17]
The table below contains descriptors for various magnitudes of d, r, f and ω, as initially suggested by Jacob Cohen,[10] and later expanded by Sawilowsky,[11] and by Funder & Ozer.[12]
About 50 to 100 different measures of effect size are known. Many effect sizes of different types can be converted to other types, as many estimate the separation of two distributions, so are mathematically related. For example, a correlation coefficient can be converted to a Cohen's d and vice versa.
These effect sizes estimate the amount of the variance within an experiment that is "explained" or "accounted for" by the experiment's model (Explained variation).
Pearson's correlation, often denoted r and introduced by Karl Pearson, is widely used as an effect size when paired quantitative data are available; for instance if one were studying the relationship between birth weight and longevity. The correlation coefficient can also be used when the data are binary. Pearson's r can vary in magnitude from −1 to 1, with −1 indicating a perfect negative linear relation, 1 indicating a perfect positive linear relation, and 0 indicating no linear relation between two variables.
A related effect size is r², the coefficient of determination (also referred to as R² or "r-squared"), calculated as the square of the Pearson correlation r. In the case of paired data, this is a measure of the proportion of variance shared by the two variables, and varies from 0 to 1. For example, with an r of 0.21 the coefficient of determination is 0.0441, meaning that 4.4% of the variance of either variable is shared with the other variable. The r² is always positive, so it does not convey the direction of the correlation between the two variables.
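As a quick illustration of r and r², both can be computed directly from paired data; the sample values below are invented for the example:

```python
from math import sqrt

def pearson_r(x, y):
    """Sample Pearson correlation coefficient between paired data."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical paired measurements (e.g. birth weight in kg vs. longevity)
x = [2.5, 3.0, 3.2, 3.8, 4.1]
y = [70, 72, 71, 78, 76]
r = pearson_r(x, y)
r_squared = r ** 2  # proportion of variance shared by the two variables
```

Note that r_squared discards the sign of r, as the text describes: a strong negative and a strong positive correlation of the same magnitude yield the same r².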
Eta-squared describes the ratio of variance explained in the dependent variable by a predictor while controlling for other predictors, making it analogous to r². Eta-squared is a biased estimator of the variance explained by the model in the population (it estimates only the effect size in the sample). This estimate shares with r² the weakness that each additional variable will automatically increase the value of η². In addition, it measures the variance explained of the sample, not the population, meaning that it will always overestimate the effect size, although the bias grows smaller as the sample grows larger.

η² = SS_Treatment / SS_Total.
A less biased estimator of the variance explained in the population is ω²:[18]

ω² = (SS_treatment − df_treatment · MS_error) / (SS_total + MS_error).
This form of the formula is limited to between-subjects analysis with equal sample sizes in all cells.[18] Since it is less biased (although not unbiased), ω² is preferable to η²; however, it can be more inconvenient to calculate for complex analyses. A generalized form of the estimator has been published for between-subjects and within-subjects analysis, repeated measures, mixed design, and randomized block design experiments.[19] In addition, methods to calculate partial ω² for individual factors and combined factors in designs with up to three independent variables have been published.[19]
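The η² and ω² formulas above can be computed from raw group data for a one-way between-subjects ANOVA. A sketch with invented data (the function name is ours):

```python
def eta_omega_squared(groups):
    """Eta-squared and the less biased omega-squared for a one-way
    between-subjects ANOVA, computed from raw group data."""
    all_x = [x for g in groups for x in g]
    n = len(all_x)
    grand = sum(all_x) / n
    ss_total = sum((x - grand) ** 2 for x in all_x)
    # Between-groups (treatment) sum of squares
    ss_treat = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_error = ss_total - ss_treat
    df_treat = len(groups) - 1
    ms_error = ss_error / (n - len(groups))
    eta2 = ss_treat / ss_total
    omega2 = (ss_treat - df_treat * ms_error) / (ss_total + ms_error)
    return eta2, omega2

# Three equal-sized illustrative treatment groups
groups = [[3, 4, 5], [5, 6, 7], [7, 8, 9]]
eta2, omega2 = eta_omega_squared(groups)
```

On this data ω² comes out smaller than η², reflecting the bias correction described above.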
Cohen's f² is one of several effect size measures to use in the context of an F-test for ANOVA or multiple regression. Its amount of bias (overestimation of the effect size for the ANOVA) depends on the bias of its underlying measurement of variance explained (e.g., R², η², ω²).
The f² effect size measure for multiple regression is defined as:

f² = R² / (1 − R²).
Likewise, f² can be defined as f² = η² / (1 − η²) or f² = ω² / (1 − ω²) for models described by those effect size measures.[20]
The f² effect size measure for sequential multiple regression, and also common for PLS modeling,[21] is defined as:

f² = (R²_AB − R²_A) / (1 − R²_AB)

where R²_A is the variance accounted for by a set of one or more independent variables A, and R²_AB is the combined variance accounted for by A and another set of one or more independent variables of interest B. By convention, f² effect sizes of 0.1², 0.25², and 0.4² are termed small, medium, and large, respectively.[10]
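The f² definitions above are one-line computations; a sketch with invented R² values:

```python
def cohens_f2(r2):
    """Cohen's f² from a proportion of variance explained (R², η², or ω²)."""
    return r2 / (1 - r2)

def f2_sequential(r2_a, r2_ab):
    """f² for the added contribution of predictor set B over set A."""
    return (r2_ab - r2_a) / (1 - r2_ab)

f2 = cohens_f2(0.30)              # full model: R² = 0.30 (illustrative)
f2_b = f2_sequential(0.20, 0.30)  # B adds 0.10 of R² beyond A
```

Note that f² is the ratio of explained to unexplained variance, so it grows without bound as R² approaches 1.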
Cohen's f̂ can also be found for factorial analysis of variance (ANOVA), working backwards, using:

f̂_effect = √(F_effect · df_effect / N).
In a balanced design (equivalent sample sizes across groups) of ANOVA, the corresponding population parameter of f² is

SS(μ₁, μ₂, …, μ_K) / (K × σ²),

wherein μ_j denotes the population mean within the jth group of the total K groups, and σ the equivalent population standard deviation within each group. SS is the sum of squares in ANOVA.
Another measure that is used with correlation differences is Cohen's q. This is the difference between two Fisher-transformed Pearson regression coefficients. In symbols this is

q = ½ log((1 + r₁)/(1 − r₁)) − ½ log((1 + r₂)/(1 − r₂))
where r₁ and r₂ are the regressions being compared. The expected value of q is zero and its variance is

var(q) = 1/(N₁ − 3) + 1/(N₂ − 3)

where N₁ and N₂ are the number of data points in the first and second regression respectively.
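A small sketch of Cohen's q and its variance, following the two formulas above (the coefficient and sample-size values are invented):

```python
from math import log, sqrt

def fisher_z(r):
    """Fisher z-transformation of a coefficient r in (-1, 1)."""
    return 0.5 * log((1 + r) / (1 - r))

def cohens_q(r1, r2):
    """Cohen's q: difference of two Fisher-transformed coefficients."""
    return fisher_z(r1) - fisher_z(r2)

def var_q(n1, n2):
    """Variance of q for samples of size n1 and n2."""
    return 1 / (n1 - 3) + 1 / (n2 - 3)

q = cohens_q(0.5, 0.3)            # compare r1 = 0.5 against r2 = 0.3
se = sqrt(var_q(103, 103))        # standard error for two samples of 103
```

Because the Fisher transform makes the sampling distribution approximately normal with the variance above, q / se can be referred to a standard normal for a rough test of equality.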
The raw effect size pertaining to a comparison of two groups is inherently calculated as the difference between the two means. However, to facilitate interpretation it is common to standardise the effect size; various conventions for statistical standardisation are presented below.
A (population) effect size θ based on means usually considers the standardized mean difference (SMD) between two populations:[22]: 78

θ = (μ₁ − μ₂) / σ,

where μ₁ is the mean for one population, μ₂ is the mean for the other population, and σ is a standard deviation based on either or both populations.
In the practical setting the population values are typically not known and must be estimated from sample statistics. The several versions of effect sizes based on means differ with respect to which statistics are used.
This form for the effect size resembles the computation for a t-test statistic, with the critical difference that the t-test statistic includes a factor of √n. This means that for a given effect size, the significance level increases with the sample size. Unlike the t-test statistic, the effect size aims to estimate a population parameter and is not affected by the sample size.
SMD values of 0.2 to 0.5 are considered small, 0.5 to 0.8 are considered medium, and greater than 0.8 are considered large.[23]
Cohen's d is defined as the difference between two means divided by a standard deviation for the data, i.e.

d = (x̄₁ − x̄₂) / s.
Jacob Cohen defined s, the pooled standard deviation, as (for two independent samples):[10]: 67

s = √(((n₁ − 1)s₁² + (n₂ − 1)s₂²) / (n₁ + n₂ − 2))

where the variance for one of the groups is defined as

s₁² = 1/(n₁ − 1) · Σ (x₁,ᵢ − x̄₁)², summing over i = 1, …, n₁,

and similarly for the other group.
Other authors choose a slightly different computation of the standard deviation when referring to "Cohen's d", where the denominator is without "−2":[24][25]: 14

s = √(((n₁ − 1)s₁² + (n₂ − 1)s₂²) / (n₁ + n₂))

This definition of "Cohen's d" is termed the maximum likelihood estimator by Hedges and Olkin,[22] and it is related to Hedges' g by a scaling factor (see below).
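Both pooling conventions can be written out directly; a sketch with invented samples (the flag name is ours):

```python
from math import sqrt

def cohens_d(x1, x2, cohen_denominator=True):
    """Cohen's d for two independent samples.

    cohen_denominator=True pools with n1 + n2 - 2 (Cohen's definition);
    False uses n1 + n2 (the maximum-likelihood variant)."""
    n1, n2 = len(x1), len(x2)
    m1, m2 = sum(x1) / n1, sum(x2) / n2
    ss1 = sum((x - m1) ** 2 for x in x1)   # equals (n1 - 1) * s1^2
    ss2 = sum((x - m2) ** 2 for x in x2)
    denom = (n1 + n2 - 2) if cohen_denominator else (n1 + n2)
    s = sqrt((ss1 + ss2) / denom)
    return (m1 - m2) / s

# Illustrative scores in a treatment and a control group
treat = [6, 7, 8, 9, 10]
ctrl = [4, 5, 6, 7, 8]
d = cohens_d(treat, ctrl)                           # pooled with n1+n2-2
d_ml = cohens_d(treat, ctrl, cohen_denominator=False)  # ML variant
```

The ML variant uses a smaller pooled s, so it yields a slightly larger d for the same data, which is why reports should state which denominator was used.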
With two paired samples, an approach is to look at the distribution of the difference scores. In that case, s is the standard deviation of this distribution of difference scores (of note, the standard deviation of difference scores depends on the correlation between the paired samples). This creates the following relationship between the t-statistic to test for a difference in the means of the two paired groups and Cohen's d′ (computed with difference scores):

t = (X̄₁ − X̄₂) / SE_diff = (X̄₁ − X̄₂) / (SD_diff / √N) = √N (X̄₁ − X̄₂) / SD_diff

and

d′ = (X̄₁ − X̄₂) / SD_diff = t / √N

However, for paired samples, Cohen states that d′ does not provide the correct estimate to obtain the power of the test for d, and that before looking the values up in the tables provided for d, it should be corrected for r as in the following formula:[26]

d′ / √(1 − r)

where r is the correlation between paired measurements. Given the same sample size, the higher r, the higher the power for a test of paired difference.
Since d′ depends on r, it is difficult to interpret as a measure of effect size; therefore, in the context of paired analyses, since it is possible to compute either d′ or d (estimated with a pooled standard deviation or with that of a group or time-point), it is necessary to explicitly indicate which one is being reported. As a measure of effect size, d (estimated with a pooled standard deviation or that of a group or time-point) is more appropriate, for instance in meta-analysis.[27]
Cohen's d is frequently used in estimating sample sizes for statistical testing. A lower Cohen's d indicates the necessity of larger sample sizes, and vice versa, as can subsequently be determined together with the additional parameters of desired significance level and statistical power.[28]
In 1976, Gene V. Glass proposed an estimator of the effect size that uses only the standard deviation of the second group:[22]: 78

Δ = (x̄₁ − x̄₂) / s₂
The second group may be regarded as a control group, and Glass argued that if several treatments were compared to the control group it would be better to use just the standard deviation computed from the control group, so that effect sizes would not differ under equal means and different variances.
Under a correct assumption of equal population variances, a pooled estimate for σ is more precise.
Hedges' g, suggested by Larry Hedges in 1981,[29] is like the other measures based on a standardized difference:[22]: 79

g = (x̄₁ − x̄₂) / s*

where the pooled standard deviation s* is computed as:

s* = √(((n₁ − 1)s₁² + (n₂ − 1)s₂²) / (n₁ + n₂ − 2)).
However, as an estimator for the population effect size θ it is biased.
Nevertheless, this bias can be approximately corrected through multiplication by a factor:

g* = J(n₁ + n₂ − 2) · g ≈ (1 − 3/(4(n₁ + n₂) − 9)) · g

Hedges and Olkin refer to this less-biased estimator g* as d,[22] but it is not the same as Cohen's d.
The exact form for the correction factor J() involves the gamma function:[22]: 104

J(a) = Γ(a/2) / (√(a/2) · Γ((a − 1)/2)).

There are also multilevel variants of Hedges' g, e.g., for use in cluster randomised controlled trials (CRTs).[30] CRTs involve randomising clusters, such as schools or classrooms, to different conditions and are frequently used in education research.
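Hedges' g and its small-sample correction can be computed as follows, using the exact J(a) via the gamma function alongside the approximation; the sample data are invented:

```python
from math import sqrt, gamma

def hedges_g(x1, x2):
    """Hedges' g: standardized mean difference with pooled s* (n1+n2-2)."""
    n1, n2 = len(x1), len(x2)
    m1, m2 = sum(x1) / n1, sum(x2) / n2
    ss1 = sum((x - m1) ** 2 for x in x1)
    ss2 = sum((x - m2) ** 2 for x in x2)
    s_star = sqrt((ss1 + ss2) / (n1 + n2 - 2))
    return (m1 - m2) / s_star

def correction_j(a):
    """Exact small-sample correction factor J(a)."""
    return gamma(a / 2) / (sqrt(a / 2) * gamma((a - 1) / 2))

x1 = [6, 7, 8, 9, 10]
x2 = [4, 5, 6, 7, 8]
g = hedges_g(x1, x2)
a = len(x1) + len(x2) - 2
g_star = correction_j(a) * g                              # exact
g_approx = (1 - 3 / (4 * (len(x1) + len(x2)) - 9)) * g    # approximation
```

With samples this small the correction shrinks g noticeably; the approximation agrees with the exact value to about three decimal places here.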
A similar effect size estimator for multiple comparisons (e.g., ANOVA) is the Ψ root-mean-square standardized effect:[20]

Ψ = √( 1/(k − 1) · Σ ((μⱼ − μ)/σ)² ), with the sum over j = 1, …, k,

where k is the number of groups in the comparisons.
This essentially presents the omnibus difference of the entire model adjusted by the root mean square, analogous to d or g.
In addition, a generalization for multi-factorial designs has been provided.[20]
Provided that the data is Gaussian distributed, a scaled Hedges' g, √(n₁n₂/(n₁ + n₂)) · g, follows a noncentral t-distribution with noncentrality parameter √(n₁n₂/(n₁ + n₂)) · θ and (n₁ + n₂ − 2) degrees of freedom. Likewise, the scaled Glass' Δ is distributed with n₂ − 1 degrees of freedom.
From the distribution it is possible to compute the expectation and variance of the effect sizes.
In some cases large sample approximations for the variance are used. One suggestion for the variance of Hedges' unbiased estimator is:[22]: 86

σ̂²(g*) = (n₁ + n₂)/(n₁n₂) + (g*)²/(2(n₁ + n₂)).
As a statistical parameter, SSMD (denoted as β) is defined as the ratio of the mean to the standard deviation of the difference of two random values respectively from two groups. Assume that one group with random values has mean μ₁ and variance σ₁² and another group has mean μ₂ and variance σ₂². The covariance between the two groups is σ₁₂. Then, the SSMD for the comparison of these two groups is defined as[31]

β = (μ₁ − μ₂) / √(σ₁² + σ₂² − 2σ₁₂).
If the two groups are independent,

β = (μ₁ − μ₂) / √(σ₁² + σ₂²).

If the two independent groups have equal variances σ²,

β = (μ₁ − μ₂) / (√2 · σ).
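SSMD is a direct transcription of the verbal definition above (mean of the difference over the standard deviation of the difference); a sketch on population parameters, with the parameter values invented:

```python
from math import sqrt

def ssmd(mu1, mu2, var1, var2, cov=0.0):
    """Strictly standardized mean difference: mean of the difference of two
    random values divided by the standard deviation of that difference."""
    return (mu1 - mu2) / sqrt(var1 + var2 - 2 * cov)

beta_ind = ssmd(10.0, 7.0, 4.0, 5.0)            # independent groups: cov = 0
beta_cov = ssmd(10.0, 7.0, 4.0, 5.0, cov=1.0)   # positively correlated groups
```

A positive covariance shrinks the variance of the difference, so beta_cov exceeds beta_ind for the same means.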
Mahalanobis distance (D) is a multivariate generalization of Cohen's d, which takes into account the relationships between the variables.[32]
φ = √(χ² / N)

φ_c = √(χ² / (N(k − 1)))
Commonly used measures of association for the chi-squared test are the phi coefficient and Cramér's V (sometimes referred to as Cramér's phi and denoted as φ_c). Phi is related to the point-biserial correlation coefficient and Cohen's d, and estimates the extent of the relationship between two variables (2 × 2).[33] Cramér's V may be used with variables having more than two levels.
Phi can be computed by finding the square root of the chi-squared statistic divided by the sample size.
Similarly, Cramér's V is computed by taking the square root of the chi-squared statistic divided by the sample size and the length of the minimum dimension (k is the smaller of the number of rows r or columns c).
φcis the intercorrelation of the two discrete variables[34]and may be computed for any value ofrorc. However, as chi-squared values tend to increase with the number of cells, the greater the difference betweenrandc, the more likely V will tend to 1 without strong evidence of a meaningful correlation.
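Both computations are one-liners once the chi-squared statistic is in hand; an illustrative sketch (function names are assumptions):

```python
import math

def phi_coefficient(chi2, n):
    """Phi for a 2 x 2 table: sqrt(chi^2 / N)."""
    return math.sqrt(chi2 / n)

def cramers_v(chi2, n, rows, cols):
    """Cramer's V: sqrt(chi^2 / (N * (k - 1))) with k = min(rows, cols)."""
    k = min(rows, cols)
    return math.sqrt(chi2 / (n * (k - 1)))
```

For a 2 × 2 table, k − 1 = 1 and Cramér's V reduces to phi.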
Another measure of effect size used for chi-squared tests is Cohen's omega ($\omega$). This is defined as

$$\omega = \sqrt{\sum_{i=1}^{m} \frac{(p_{1i} - p_{0i})^2}{p_{0i}}}$$

where $p_{0i}$ is the proportion of the i-th cell under $H_0$, $p_{1i}$ is the proportion of the i-th cell under $H_1$, and m is the number of cells.
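Cohen's omega follows directly from the two sets of cell proportions; a minimal sketch (the function name is illustrative):

```python
import math

def cohens_omega(p0, p1):
    """Cohen's omega for chi-squared tests.

    p0: cell proportions under H0; p1: cell proportions under H1.
    """
    return math.sqrt(sum((q - p) ** 2 / p for p, q in zip(p0, p1)))

# e.g. a fair-coin null against an observed 60/40 split
omega = cohens_omega([0.5, 0.5], [0.6, 0.4])
```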
The odds ratio (OR) is another useful effect size. It is appropriate when the research question focuses on the degree of association between two binary variables. For example, consider a study of spelling ability. In a control group, two students pass the class for every one who fails, so the odds of passing are two to one (or 2/1 = 2). In the treatment group, six students pass for every one who fails, so the odds of passing are six to one (or 6/1 = 6). The effect size can be computed by noting that the odds of passing in the treatment group are three times higher than in the control group (because 6 divided by 2 is 3). Therefore, the odds ratio is 3. Odds ratio statistics are on a different scale than Cohen's d, so this '3' is not comparable to a Cohen's d of 3.
The relative risk (RR), also called risk ratio, is simply the risk (probability) of an event relative to some independent variable. This measure of effect size differs from the odds ratio in that it compares probabilities instead of odds, but asymptotically approaches the latter for small probabilities. Using the example above, the probabilities of passing for those in the control group and treatment group are 2/3 (or 0.67) and 6/7 (or 0.86), respectively. The effect size can be computed the same as above, but using the probabilities instead. Therefore, the relative risk is 1.28. Since rather large probabilities of passing were used, there is a large difference between relative risk and odds ratio. Had failure (a smaller probability) been used as the event (rather than passing), the difference between the two measures of effect size would not be so great.
While both measures are useful, they have different statistical uses. In medical research, the odds ratio is commonly used for case-control studies, as odds, but not probabilities, are usually estimated.[35] Relative risk is commonly used in randomized controlled trials and cohort studies, but relative risk contributes to overestimations of the effectiveness of interventions.[36]
The risk difference (RD), sometimes called absolute risk reduction, is simply the difference in risk (probability) of an event between two groups. It is a useful measure in experimental research, since RD tells you the extent to which an experimental intervention changes the probability of an event or outcome. Using the example above, the probabilities of passing for those in the control group and treatment group are 2/3 (or 0.67) and 6/7 (or 0.86), respectively, and so the RD effect size is 0.86 − 0.67 = 0.19 (or 19%). RD is the superior measure for assessing effectiveness of interventions.[36]
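The three measures from the spelling example can be reproduced in a short Python sketch (function names are illustrative):

```python
def odds_ratio(p_treatment, p_control):
    """Ratio of the odds of the event in the two groups."""
    odds_t = p_treatment / (1 - p_treatment)
    odds_c = p_control / (1 - p_control)
    return odds_t / odds_c

def relative_risk(p_treatment, p_control):
    """Ratio of the event probabilities in the two groups."""
    return p_treatment / p_control

def risk_difference(p_treatment, p_control):
    """Difference of the event probabilities in the two groups."""
    return p_treatment - p_control

# passing probabilities from the spelling example: control 2/3, treatment 6/7
p_control, p_treatment = 2 / 3, 6 / 7
```

With these inputs the three functions recover the values in the text: OR = 3, RR ≈ 1.29, and RD ≈ 0.19.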
One measure used in power analysis when comparing two independent proportions is Cohen's h. This is defined as

$$h = 2(\arcsin\sqrt{p_1} - \arcsin\sqrt{p_2})$$

where $p_1$ and $p_2$ are the proportions of the two samples being compared and arcsin is the arcsine transformation.
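The definition is a direct transcription into code (the function name is illustrative):

```python
import math

def cohens_h(p1, p2):
    """Cohen's h: difference of arcsine-transformed proportions."""
    return 2 * (math.asin(math.sqrt(p1)) - math.asin(math.sqrt(p2)))
```

Equal proportions give h = 0, and the maximal contrast between proportions 1 and 0 gives h = π.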
To more easily describe the meaning of an effect size to people outside statistics, the common language effect size, as the name implies, was designed to communicate it in plain English. It is used to describe a difference between two groups and was proposed, as well as named, by Kenneth McGraw and S. P. Wong in 1992.[37]They used the following example (about heights of men and women): "in any random pairing of young adult males and females, the probability of the male being taller than the female is .92, or in simpler terms yet, in 92 out of 100 blind dates among young adults, the male will be taller than the female",[37]when describing the population value of the common language effect size.
Cliff's delta or $d$, originally developed by Norman Cliff for use with ordinal data,[38] is a measure of how often the values in one distribution are larger than the values in a second distribution. Crucially, it does not require any assumptions about the shape or spread of the two distributions.

The sample estimate $d$ is given by:

$$d = \frac{\sum_{i,j} [x_i > x_j] - [x_i < x_j]}{mn}$$

where the two distributions are of size $n$ and $m$ with items $x_i$ and $x_j$, respectively, and $[\cdot]$ is the Iverson bracket, which is 1 when the contents are true and 0 when false.

$d$ is linearly related to the Mann–Whitney U statistic; however, it captures the direction of the difference in its sign. Given the Mann–Whitney $U$, $d$ is:

$$d = \frac{2U}{mn} - 1$$
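The pairwise definition translates directly into a quadratic-time sketch (the function name is illustrative):

```python
def cliffs_delta(xs, ys):
    """Cliff's delta: fraction of pairs with x > y minus fraction with x < y."""
    greater = sum(1 for x in xs for y in ys if x > y)
    less = sum(1 for x in xs for y in ys if x < y)
    return (greater - less) / (len(xs) * len(ys))
```

Tied pairs contribute zero to the numerator, so two identical samples yield a delta of 0; complete separation yields ±1.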
One of the simplest effect sizes for measuring how much a proportion differs from 50% is Cohen's g.[10]: 147 For example, if 85.2% of arrests for car theft are of males, then the effect size of sex on arrest, measured with Cohen's g, is $g = 0.852 - 0.5 = 0.352$. In general:

$$g = P - 0.50 \text{ or } 0.50 - P \quad (\text{directional}),$$

$$g = |P - 0.50| \quad (\text{nondirectional}).$$

The units of Cohen's g (a proportion) are more intuitive than those of some other effect sizes. It is sometimes used in combination with the binomial test.
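Both forms fit in a one-line sketch (the function name and flag are illustrative):

```python
def cohens_g(p, directional=False):
    """Cohen's g: how far a proportion lies from 0.5."""
    return (p - 0.5) if directional else abs(p - 0.5)

# the car-theft arrest example from above
g = cohens_g(0.852)
```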
Confidence intervals of standardized effect sizes, especially Cohen's $d$ and $f^2$, rely on the calculation of confidence intervals of noncentrality parameters (ncp). A common approach to construct the confidence interval of ncp is to find the critical ncp values that fit the observed statistic to the tail quantiles α/2 and (1 − α/2). SAS and the R package MBESS provide functions to find critical values of ncp.
For a single group, M denotes the sample mean, μ the population mean, SD the sample's standard deviation, σ the population's standard deviation, and n the sample size of the group. The t value is used to test the hypothesis on the difference between the mean and a baseline μ_baseline. Usually, μ_baseline is zero. In the case of two related groups, the single group is constructed from the differences in each pair of samples, while SD and σ denote the sample's and population's standard deviations of differences rather than those within the original two groups.

$$t := \frac{M - \mu_{\text{baseline}}}{\text{SE}} = \frac{M - \mu_{\text{baseline}}}{\text{SD}/\sqrt{n}} = \frac{\sqrt{n}\left(\frac{M-\mu}{\sigma}\right) + \sqrt{n}\left(\frac{\mu - \mu_{\text{baseline}}}{\sigma}\right)}{\frac{\text{SD}}{\sigma}}$$

$$ncp = \sqrt{n}\left(\frac{\mu - \mu_{\text{baseline}}}{\sigma}\right)$$

and Cohen's

$$d := \frac{M - \mu_{\text{baseline}}}{\text{SD}}$$

is the point estimate of

$$\frac{\mu - \mu_{\text{baseline}}}{\sigma}.$$
So,

$$\tilde{d} = \frac{ncp}{\sqrt{n}}.$$
In the case of two independent groups, $n_1$ and $n_2$ are the respective sample sizes.

$$t := \frac{M_1 - M_2}{\text{SD}_{\text{within}} / \sqrt{\frac{n_1 n_2}{n_1 + n_2}}},$$

wherein

$$\text{SD}_{\text{within}} := \sqrt{\frac{\text{SS}_{\text{within}}}{\text{df}_{\text{within}}}} = \sqrt{\frac{(n_1 - 1)\text{SD}_1^2 + (n_2 - 1)\text{SD}_2^2}{n_1 + n_2 - 2}}$$

and

$$ncp = \sqrt{\frac{n_1 n_2}{n_1 + n_2}}\,\frac{\mu_1 - \mu_2}{\sigma},$$

and Cohen's

$$d := \frac{M_1 - M_2}{\text{SD}_{\text{within}}}$$

is the point estimate of

$$\frac{\mu_1 - \mu_2}{\sigma}.$$

So,

$$\tilde{d} = \frac{ncp}{\sqrt{\frac{n_1 n_2}{n_1 + n_2}}}.$$
The one-way ANOVA test applies the noncentral F distribution, while with a given population standard deviation $\sigma$ the same test question applies the noncentral chi-squared distribution.

$$F := \frac{\frac{\text{SS}_{\text{between}}}{\sigma^2} / \text{df}_{\text{between}}}{\frac{\text{SS}_{\text{within}}}{\sigma^2} / \text{df}_{\text{within}}}$$

For each j-th sample within the i-th group $X_{i,j}$, denote

$$M_i(X_{i,j}) := \frac{\sum_{w=1}^{n_i} X_{i,w}}{n_i}; \qquad \mu_i(X_{i,j}) := \mu_i.$$

Meanwhile,

$$\begin{aligned} \text{SS}_{\text{between}}/\sigma^2 &= \frac{\text{SS}\left(M_i(X_{i,j});\; i=1,2,\dots,K,\; j=1,2,\dots,n_i\right)}{\sigma^2} \\ &= \text{SS}\left(\frac{M_i(X_{i,j} - \mu_i)}{\sigma} + \frac{\mu_i}{\sigma};\; i=1,2,\dots,K,\; j=1,2,\dots,n_i\right) \\ &\sim \chi^2\left(\text{df} = K-1,\; ncp = \text{SS}\left(\frac{\mu_i(X_{i,j})}{\sigma};\; i=1,2,\dots,K,\; j=1,2,\dots,n_i\right)\right) \end{aligned}$$

So, the ncp of both $F$ and $\chi^2$ equals

$$\text{SS}\left(\mu_i(X_{i,j})/\sigma;\; i=1,2,\dots,K,\; j=1,2,\dots,n_i\right).$$

In the case of $n := n_1 = n_2 = \cdots = n_K$ for $K$ independent groups of the same size, the total sample size is $N := n \cdot K$.

$$\text{Cohen's } \tilde{f}^2 := \frac{\text{SS}(\mu_1, \mu_2, \dots, \mu_K)}{K \cdot \sigma^2} = \frac{\text{SS}\left(\mu_i(X_{i,j})/\sigma;\; i=1,2,\dots,K,\; j=1,2,\dots,n_i\right)}{n \cdot K} = \frac{ncp}{n \cdot K} = \frac{ncp}{N}.$$
The t-test for a pair of independent groups is a special case of one-way ANOVA. Note that the noncentrality parameter $ncp_F$ of $F$ is not comparable to the noncentrality parameter $ncp_t$ of the corresponding $t$. Actually, $ncp_F = ncp_t^2$, and $\tilde{f} = \left|\frac{\tilde{d}}{2}\right|$.
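The relationships between ncp, d, and f for equal group sizes can be checked numerically; a small sketch under the definitions above (function names are illustrative):

```python
import math

def ncp_t_two_groups(d, n1, n2):
    """Noncentrality parameter of t for two independent groups."""
    return math.sqrt(n1 * n2 / (n1 + n2)) * d

def f_from_ncp(ncp_f, total_n):
    """Cohen's f-tilde from the F noncentrality parameter: f^2 = ncp / N."""
    return math.sqrt(ncp_f / total_n)

# two groups of 25 with d-tilde = 0.8: ncp_F = ncp_t^2, and f-tilde = |d/2|
d, n = 0.8, 25
ncp_t = ncp_t_two_groups(d, n, n)
f = f_from_ncp(ncp_t**2, 2 * n)
```

Here ncp_t² = 8 and f = 0.4 = |0.8/2|, confirming the identity for this case.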
|
https://en.wikipedia.org/wiki/Effect_size#Eta-squared_(η2)
|
Mobile malware is malicious software that targets mobile phones or wireless-enabled personal digital assistants (PDAs), typically by causing the collapse of the system and the loss or leakage of confidential information. As wireless phones and PDA networks have become more common and more complex, it has become increasingly difficult to ensure their safety and security against electronic attacks in the form of viruses or other malware.[1]
The first known virus that affected mobiles, "Timofonica", originated in Spain and was identified by antivirus labs in Russia and Finland in June 2000. "Timofonica" sent SMS messages to GSM-capable mobile phones that read (in Spanish) "Information for you: Telefónica is fooling you." These messages were sent through the Internet SMS gateway of the MoviStar mobile operator. "Timofonica" ran on PCs, not on mobile devices, and so was not true mobile malware.[2]
In June 2004, it was discovered that a company called Ojam had engineered an anti-piracy Trojan hack into older versions of its mobile phone game, Mosquito. This sent SMS texts to the company without the user's knowledge.

In July 2004, computer hobbyists released a proof-of-concept virus, Cabir, that infects mobile phones running the Symbian operating system, spreading via Bluetooth wireless.[3][4] This was the first true mobile malware.[5]

In March 2005, it was reported that a computer worm called Commwarrior-A had been infecting Symbian Series 60 mobile phones.[6] This specific worm replicated itself through the phone's Multimedia Messaging Service (MMS), sending copies to contacts listed in the phone user's address book.

In August 2010, Kaspersky Lab reported the trojan Trojan-SMS.AndroidOS.FakePlayer.a.[7] This was the first SMS malware to affect Google's Android operating system;[8] it sent SMS messages to premium-rate numbers without the owner's knowledge, accumulating huge bills.[9]

Currently, various antivirus software companies offer mobile antivirus programs. Meanwhile, operating system developers try to curb the spread of infections with quality-control checks on software and content offered through their digital application distribution platforms, such as Google Play or Apple's App Store. Recent studies, however, show that mobile antivirus programs are ineffective due to the rapid evolution of mobile malware.[10]

In recent years, deep learning algorithms have also been adopted for mobile malware detection.[11]
Many types of common malicious programs are known to affect mobile devices. In fact, with the increase in viruses and malware such as Trojan horses, camera-crashing and camfecting issues are becoming quite common.[13]
|
https://en.wikipedia.org/wiki/Mobile_malware
|
Backward induction is the process of determining a sequence of optimal choices by reasoning from the endpoint of a problem or situation back to its beginning using individual events or actions.[1] Backward induction involves examining the final point in a series of decisions and identifying the optimal process or action required to arrive at that point. This process continues backward until the best action for every possible point along the sequence is determined. Backward induction was first utilized in 1875 by Arthur Cayley, who discovered the method while attempting to solve the secretary problem.[2]

In dynamic programming, a method of mathematical optimization, backward induction is used for solving the Bellman equation.[3][4] In the related fields of automated planning and scheduling and automated theorem proving, the method is called backward search or backward chaining. In chess, it is called retrograde analysis.

In game theory, a variant of backward induction is used to compute subgame perfect equilibria in sequential games.[5] The difference is that optimization problems involve one decision maker who chooses what to do at each point of time, whereas game theory problems involve the interacting decisions of several players. In this situation, it may still be possible to apply a generalization of backward induction, since it may be possible to determine what the second-to-last player will do by predicting what the last player will do in each situation, and so on. This variant of backward induction has been used to solve formal games from the beginning of game theory. John von Neumann and Oskar Morgenstern suggested solving zero-sum, two-person formal games through this method in their Theory of Games and Economic Behavior (1944), the book which established game theory as a field of study.[6][7]
Consider a person evaluating potential employment opportunities for the next ten years, denoted as times t = 1, 2, 3, ..., 10. At each t, they may encounter a choice between two job options: a 'good' job offering a salary of $100 or a 'bad' job offering a salary of $44. Each job type has an equal probability of being offered. Upon accepting a job, the individual will maintain that particular job for the entire remainder of the ten-year duration.

This scenario is simplified by assuming that the individual's entire concern is their total expected monetary earnings, without any variable preferences for earnings across different periods. In economic terms, this is a scenario with an implicit interest rate of zero and a constant marginal utility of money.

Whether the person in question should accept a 'bad' job can be decided by reasoning backwards from time t = 10.

By continuing to work backwards, it can be verified that a 'bad' offer should only be accepted if the person is still unemployed at t = 9 or t = 10; a bad offer should be rejected at any time up to and including t = 8. Generalizing this example intuitively, it corresponds to the principle that if one expects to work in a job for a long time, it is worth picking carefully.
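The backward reasoning above can be reproduced with a short loop over the periods (a sketch of the ten-period example; names are illustrative):

```python
GOOD, BAD, T = 100, 44, 10  # salaries per period and number of periods

def solve_job_search():
    """Backward induction: should a 'bad' offer be accepted at each period t?"""
    accept_bad = {}
    future = 0.0  # expected earnings if still unemployed after period t
    for t in range(T, 0, -1):
        remaining = T - t + 1
        good_value = GOOD * remaining      # a good offer is always accepted
        bad_value = BAD * remaining
        accept_bad[t] = bad_value >= future  # take 'bad' only if it beats waiting
        future = 0.5 * good_value + 0.5 * max(bad_value, future)
    return accept_bad

policy = solve_job_search()
```

Running this confirms the text: the policy accepts a bad offer only at t = 9 and t = 10.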
A dynamic optimization problem of this kind is called an optimal stopping problem, because the issue at hand is when to stop waiting for a better offer. Search theory is a field of microeconomics that applies models of this type to matters such as shopping, job searches, and marriage.

In game theory, backward induction is a solution methodology that follows from applying sequential rationality to identify an optimal action for each information set in a given game tree. It develops the implications of rationality via individual information sets in the extensive-form representation of a game.[8]

In order to solve for a subgame perfect equilibrium with backward induction, the game should be written out in extensive form and then divided into subgames. Starting with the subgame furthest from the initial node, or starting point, the expected payoffs listed for this subgame are weighed, and a rational player will select the option with the higher payoff for themselves. The highest payoff vector is selected and marked. To solve for the subgame perfect equilibrium, one should continually work backwards from subgame to subgame until the starting point is reached. As this process progresses, the initial extensive form game will become shorter and shorter. The marked path of vectors is the subgame perfect equilibrium.[9]
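The subgame-by-subgame procedure can be sketched as a recursion over a small extensive-form tree (the tree encoding and the entry-deterrence payoffs below are illustrative assumptions, not from the article):

```python
def backward_induct(node):
    """Return (payoff vector, optimal action path) for the subgame at node."""
    if "payoffs" in node:                      # leaf: nothing left to decide
        return node["payoffs"], []
    mover = node["player"]
    best_payoffs, best_path = None, None
    for action, child in node["children"].items():
        payoffs, path = backward_induct(child)
        if best_payoffs is None or payoffs[mover] > best_payoffs[mover]:
            best_payoffs, best_path = payoffs, [action] + path
    return best_payoffs, best_path

# entrant (player 0) moves first, then the incumbent (player 1)
game = {
    "player": 0,
    "children": {
        "stay out": {"payoffs": (0, 2)},
        "enter": {
            "player": 1,
            "children": {
                "fight": {"payoffs": (-1, -1)},
                "accommodate": {"payoffs": (1, 1)},
            },
        },
    },
}
payoffs, path = backward_induct(game)
```

The recursion solves the incumbent's subgame first (accommodate beats fight), then the entrant's choice, yielding the subgame perfect path enter/accommodate.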
The application of backward induction in game theory can be demonstrated with a simple example. Consider a multi-stage game involving two players planning to go to a movie.
Once they both observe the choices, the second stage begins. In the second stage, players choose whether to go to the movie or stay home.
For this example, payoffs are added across the different stages, and the game is one of perfect information. The game is solved by backward induction: each second-stage subgame is solved for its equilibrium first, and the resulting payoffs reduce the first stage to a simpler game that is then solved in turn.
Backward induction can be applied to only limited classes of games. The procedure is well-defined for any game of perfect information with no ties of utility. It is also well-defined and meaningful for games of perfect information with ties. However, in such cases it leads to more than one perfect strategy. The procedure can be applied to some games with nontrivial information sets, but it is not applicable in general. It is best suited to solve games with perfect information. If all players are not aware of the other players' actions and payoffs at each decision node, then backward induction is not so easily applied.[10]
A second example demonstrates that even in games that formally allow for backward induction in theory, it may not accurately predict empirical game play in practice. This example of an asymmetric game consists of two players: Player 1 proposes to split a dollar with Player 2, who then accepts or rejects the proposal. This is called the ultimatum game. Player 1 acts first by splitting the dollar however they see fit. Next, Player 2 either accepts the portion they have been offered by Player 1 or rejects the split. If Player 2 accepts the split, then both Player 1 and Player 2 get the payoffs matching that split. If Player 2 decides to reject Player 1's offer, then both players get nothing. In other words, Player 2 has veto power over Player 1's proposed allocation, but applying the veto eliminates any reward for both players.[11]
Considering the choice and response of Player 2 given any arbitrary proposal by Player 1, formal rationality prescribes that Player 2 would accept any payoff that is greater than or equal to $0. Accordingly, by backward induction Player 1 ought to propose giving Player 2 as little as possible in order to gain the largest portion of the split. Player 1 giving Player 2 the smallest unit of money and keeping the rest for themselves is the unique subgame-perfect equilibrium. The ultimatum game does have several other Nash Equilibria which are not subgame perfect and therefore do not arise via backward induction.
The ultimatum game is a theoretical illustration of the usefulness of backward induction when considering infinite games, but the ultimatum game's theoretically predicted results do not match empirical observation. Experimental evidence has shown that a proposer, Player 1, very rarely offers $0 and the responder, Player 2, sometimes rejects offers greater than $0. What is deemed acceptable by Player 2 varies with context. The pressure or presence of other players and external implications can mean that the game's formal model cannot necessarily predict what a real person will choose. According to Colin Camerer, an American behavioral economist, Player 2 "rejects offers of less than 20 percent of X about half the time, even though they end up with nothing."[12]
While backward induction assuming formal rationality would predict that a responder would accept any offer greater than zero, responders in reality are not formally rational players and therefore often seem to care more about offer 'fairness' or perhaps other anticipations of indirect or external effects rather than immediate potential monetary gains.
Consider a dynamic game in which the players are an incumbent firm in an industry and a potential entrant to that industry. As it stands, the incumbent has a monopoly over the industry and does not want to lose some of its market share to the entrant. If the entrant chooses not to enter, the payoff to the incumbent is high (it maintains its monopoly) and the entrant neither loses nor gains (its payoff is zero). If the entrant enters, the incumbent can "fight" or "accommodate" the entrant. It will fight by lowering its price, running the entrant out of business (and incurring exit costs, a negative payoff) and damaging its own profits. If it accommodates the entrant it will lose some of its sales, but a high price will be maintained and it will receive greater profits than by lowering its price (but lower than monopoly profits).

If the incumbent accommodates given the case that the entrant enters, the best response for the entrant is to enter (and gain profit). Hence the strategy profile in which the entrant enters and the incumbent accommodates if the entrant enters is a Nash equilibrium consistent with backward induction. However, if the incumbent is going to fight, the best response for the entrant is to not enter, and if the entrant does not enter, it does not matter what the incumbent chooses to do in the hypothetical case that the entrant does enter. Hence the strategy profile in which the incumbent fights if the entrant enters, but the entrant does not enter, is also a Nash equilibrium. However, were the entrant to deviate and enter, the incumbent's best response is to accommodate: the threat of fighting is not credible. This second Nash equilibrium can therefore be eliminated by backward induction.

Finding a Nash equilibrium in each decision-making process (subgame) constitutes a subgame perfect equilibrium. Thus, strategy profiles that depict subgame perfect equilibria exclude the possibility of actions like incredible threats that are used to "scare off" an entrant. If the incumbent threatens to start a price war with an entrant, it is threatening to lower its price from a monopoly price to slightly below the entrant's, which would be impractical and incredible if the entrant knew a price war would not actually happen, since it would result in losses for both parties. Unlike a single-agent optimization, which might include suboptimal or infeasible equilibria, a subgame perfect equilibrium accounts for the actions of another player, ensuring that no player reaches a subgame mistakenly. In this case, backward induction yielding subgame perfect equilibria ensures that the entrant will not be convinced by the incumbent's threat, knowing that it was not a best response in the strategy profile.[13]
The unexpected hanging paradox is a paradox related to backward induction. The prisoner described in the paradox uses backwards induction to reach a false conclusion. The description of the problem assumes it is possible to surprise someone who is performing backward induction. The mathematical theory of backward induction does not make this assumption, so the paradox does not call into question the results of this theory.

Backward induction works only if both players are rational, i.e., always select an action that maximizes their payoff. However, rationality is not enough: each player should also believe that all other players are rational. Even this is not enough: each player should believe that all other players know that all other players are rational, and so on, ad infinitum. In other words, rationality should be common knowledge.[14]
Limited backward induction is a deviation from fully rational backward induction. It involves enacting the regular process of backward induction without perfect foresight. Theoretically, this occurs when one or more players have limited foresight and cannot perform backward induction through all terminal nodes.[15] Limited backward induction plays a much larger role in longer games, as the effects of limited backward induction are more potent in later periods of games.

Experiments have shown that in sequential bargaining games, such as the Centipede game, subjects deviate from theoretical predictions and instead engage in limited backward induction. This deviation occurs as a result of bounded rationality, where players can only perfectly see a few stages ahead.[16] This allows for unpredictability in decisions and inefficiency in finding and achieving subgame perfect Nash equilibria.
There are three broad hypotheses for this phenomenon:
Violations of backward induction are predominantly attributed to the presence of social factors. However, data-driven model predictions for sequential bargaining games (using the cognitive hierarchy model) have highlighted that in some games the presence of limited backward induction can play a dominant role.[17]

Within repeated public goods games, team behavior is impacted by limited backward induction: team members' initial contributions are higher than their contributions towards the end. Limited backward induction also influences how regularly free-riding occurs within a team's public goods game. Early on, when the effects of limited backward induction are low, free-riding is less frequent, whilst towards the end, when effects are high, free-riding becomes more frequent.[18]
Limited backward induction has also been tested for within a variant of the race game. In the game, players would sequentially choose integers inside a range and sum their choices until a target number is reached. Hitting the target earns that player a prize; the other loses. Partway through a series of games, a small prize was introduced. The majority of players then performed limited backward induction, as they solved for the small prize rather than for the original prize. Only a small fraction of players considered both prizes at the start.[19]
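A race game of this kind can be solved exactly by backward induction over the running total; a sketch with assumed parameters (target 21, moves 1 to 3, single prize), since the article does not give the exact values used in the experiment:

```python
from functools import lru_cache

def race_game_solver(target, max_step):
    """Build can_win(total): True if the player to move can force hitting target."""
    @lru_cache(maxsize=None)
    def can_win(total):
        # a position is winning if some move reaches the target outright,
        # or leaves the opponent in a losing position
        return any(
            total + s == target or (total + s < target and not can_win(total + s))
            for s in range(1, max_step + 1)
        )
    return can_win

can_win = race_game_solver(21, 3)
```

For these parameters the losing positions are exactly the totals where (21 − total) is a multiple of 4, which is what full backward induction through all terminal nodes recovers.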
Most tests of backward induction are based on experiments, in which participants are only to a small extent incentivized to perform the task well, if at all. However, violations of backward induction also appear to be common in high-stakes environments. A large-scale analysis of the American television game show The Price Is Right, for example, provides evidence of limited foresight. In every episode, contestants play the Showcase Showdown, a sequential game of perfect information for which the optimal strategy can be found through backward induction. The frequent and systematic deviations from optimal behavior suggest that a sizable proportion of the contestants fail to properly backward induct and myopically consider only the next stage of the game.[20]
|
https://en.wikipedia.org/wiki/Backward_induction
|
Wheeler's delayed-choice experiment describes a family of thought experiments in quantum physics proposed by John Archibald Wheeler, with the most prominent among them appearing in 1978 and 1984.[1] These experiments illustrate the central point of quantum theory: "It is wrong to attribute a tangibility to the photon in all its travel from the point of entry to its last instant of flight."[2]: 184
These experiments close a loophole in the traditional double-slit experiment's demonstration that quantum behavior depends on the experimental arrangement: the possibility that a photon might adjust its behavior from particle to wave or vice versa. By altering the apparatus after the photon is supposed to be in "flight", this loophole is closed.[1]
Cosmic versions of the delayed-choice experiment use photons emitted billions of years ago; the results are unchanged.[3] The concept of delayed choice has been productive of many revealing experiments.[1] New versions of the delayed-choice concept use quantum effects to control the "choices", leading to the delayed-choice quantum eraser.

Wheeler's delayed-choice experiment demonstrates that no particle-propagation model consistent with relativity explains quantum theory.[2]: 184 Like the double-slit experiment, Wheeler's concept has two equivalent paths between a source and detector. Like the which-way versions of the double-slit experiment, the experiment is run in two versions: one designed to detect wave interference and one designed to detect particles. The new ingredient in Wheeler's approach is a delayed choice between these two experiments. The decision to measure wave interference or particle path is delayed until just before the detection. The goal is to ensure that any traveling particle or wave will have passed the area of the two distinct paths in the quantum system before the choice of experiment is made.[4]: 967
Wheeler's cosmic-scale thought experiment employs a quasar or other light source in a galaxy billions of light years away. Some of these sources are known to be located behind a massive galaxy that acts as a gravitational lens, bending light rays pointing away from Earth back towards us. The result is two images of the source, one direct and one bent. Wheeler proposed to measure the interference between these two paths. Because the light observed in such an experiment was emitted and passed through the lens billions of years ago, no choice on Earth could alter the outcome of the experiment.[1]
Wheeler then plays the devil's advocate and suggests that perhaps for those experimental results to be obtained would mean that at the instant astronomers inserted their beam-splitter, photons that had left the quasar some millions of years ago retroactively decided to travel as waves, and that when the astronomers decided to pull their beam splitter out again, that decision was telegraphed back through time to photons that were leaving some millions of years plus some minutes in the past, so that photons retroactively decided to travel as particles.

Several ways of implementing Wheeler's basic idea have been made into real experiments, and they support the conclusion that Wheeler anticipated:[1] what is done at the exit port of the experimental device before the photon is detected will determine whether it displays interference phenomena or not.
A second kind of experiment resembles the ordinary double-slit experiment. The schematic diagram of this experiment shows that a lens on the far side of the double slits makes the path from each slit diverge slightly from the other after they cross each other fairly near to that lens. The result is that the two wavefunctions for each photon will be in superposition within a fairly short distance from the double slits, and if a detection screen is provided within the region wherein the wavefunctions are in superposition then interference patterns will be seen. There is no way by which any given photon could have been determined to have arrived from one or the other of the double slits. However, if the detection screen is removed the wavefunctions on each path will superimpose on regions of lower and lower amplitudes, and their combined probability values will be much less than the unreinforced probability values at the center of each path. When telescopes are aimed to intercept the center of the two paths, there will be equal probabilities of nearly 50% that a photon will show up in one of them. When a photon is detected by telescope 1, researchers may associate that photon with the wavefunction that emerged from the lower slit. When one is detected in telescope 2, researchers may associate that photon with the wavefunction that emerged from the upper slit. The explanation that supports this interpretation of experimental results is that a photon has emerged from one of the slits, and that is the end of the matter. A photon must have started at the laser, passed through one of the slits, and arrived by a single straight-line path at the corresponding telescope.
The retrocausal explanation, which Wheeler does not accept, says that with the detection screen in place, interference must be manifested. For interference to be manifested, a light wave must have emerged from each of the two slits. Therefore, a single photon upon coming into the double-slit diaphragm must have "decided" that it needs to go through both slits to be able to interfere with itself on the detection screen. For no interference to be manifested, a single photon coming into the double-slit diaphragm must have "decided" to go by only one slit because that would make it show up at the camera in the appropriate single telescope.
In this thought experiment the telescopes are always present, but the experiment can start with the detection screen being present but then being removed just after the photon leaves the double-slit diaphragm, or the experiment can start with the detection screen being absent and then being inserted just after the photon leaves the diaphragm. Some theorists argue that inserting or removing the screen in the midst of the experiment can force a photon to retroactively decide to go through the double-slits as a particle when it had previously transited it as a wave, or vice versa. Wheeler does not accept this interpretation.
The double-slit experiment, like the other six idealized experiments (microscope, split beam, tilt-teeth, radiation pattern, one-photon polarization, and polarization of paired photons), imposes a choice between complementary modes of observation. In each experiment we have found a way to delay that choice of type of phenomenon to be looked for up to the very final stage of development of the phenomenon, and it depends on whichever type of detection device we then fix upon. That delay makes no difference in the experimental predictions. On this score everything we find was foreshadowed in that solitary and pregnant sentence of Bohr, "...it...can make no difference, as regards observable effects obtainable by a definite experimental arrangement, whether our plans for constructing or handling the instruments are fixed beforehand or whether we prefer to postpone the completion of our planning until a later moment when the particle is already on its way from one instrument to another."[8]
In Bohm's interpretation of quantum mechanics, the particle obeys classical mechanics except that its movement takes place under the additional influence of its quantum potential. A neutron, for example, has a definite trajectory and passes through one or the other of the two slits and not both, just as in the case of a classical particle. However, the quantum particle in Bohm's interpretation is inseparable from its associated field. That field provides the quantum properties. In the delayed choice experiment, wave packets associated with the field propagate along both paths of the interferometer, but only one packet contains a particle. The field aspect of the neutron is responsible for the interference between the two paths. Changing the final setup at the detector to look for particle properties amounts to ignoring the field aspect of the neutron.[9][10][11]: 279
The past is determined and stays what it was up to the moment T1 when the experimental configuration for detecting it as a wave was changed to that of detecting a particle at the arrival time T2. At T1, when the experimental setup was changed, Bohm's quantum potential changes as needed, and the particle moves classically under the new quantum potential till T2, when it is detected as a particle. Thus Bohmian mechanics restores the conventional view of the world and its past. The past is out there as an objective history unalterable retroactively by delayed choice. The quantum potential contains information about the boundary conditions defining the system, and hence any change of the experimental setup is reflected in changes in the quantum potential, which determines the dynamics of the particle.[11]: 6.7.1
John Wheeler's original discussion of the possibility of a delayed-choice quantum experiment appeared in an essay entitled "Law Without Law," which was published in a book he and Wojciech Hubert Zurek edited called Quantum Theory and Measurement, pp. 182–213. He introduced his remarks by reprising the argument between Albert Einstein, who wanted a comprehensible reality, and Niels Bohr, who thought that Einstein's concept of reality was too restricted. Wheeler indicates that Einstein and Bohr explored the consequences of the laboratory experiment that will be discussed below, one in which light can find its way from one corner of a rectangular array of semi-silvered and fully silvered mirrors to the other corner, and then can be made to reveal itself not only as having gone halfway around the perimeter by a single path and then exited, but also as having gone both ways around the perimeter and then to have "made a choice" as to whether to exit by one port or the other. Not only does this result hold for beams of light, but also for single photons of light. Wheeler remarked:
The experiment in the form of an interferometer, discussed by Einstein and Bohr, could theoretically be used to investigate whether a photon sometimes sets off along a single path, always follows two paths but sometimes only makes use of one, or whether something else would turn up. However, it was easier to say, "We will, during random runs of the experiment, insert the second half-silvered mirror just before the photon is timed to get there," than it was to figure out a way to make such a rapid substitution. The speed of light is just too fast to permit a mechanical device to do this job, at least within the confines of a laboratory. Much ingenuity was needed to get around this problem.
After several supporting experiments were published, Jacques et al. claimed that an experiment of theirs follows fully the original scheme proposed by Wheeler.[12][13] Their complicated experiment is based on the Mach–Zehnder interferometer, involving a triggered diamond N–V colour centre photon generator, polarization, and an electro-optical modulator acting as a switchable beam splitter. Measuring in a closed configuration showed interference, while measuring in an open configuration allowed the path of the particle to be determined, which made interference impossible.
The Wheeler version of the interferometer experiment could not be performed in a laboratory until recently because of the practical difficulty of inserting or removing the second beam-splitter in the brief time interval between the photon's entering the first beam-splitter and its arrival at the location provided for the second beam-splitter. This realization of the experiment is done by extending the lengths of both paths by inserting long lengths of fiber optic cable. So doing makes the time interval involved with transits through the apparatus much longer. A high-speed switchable device on one path, composed of a high-voltage switch, a Pockels cell, and a Glan–Thompson prism, makes it possible to divert that path away from its ordinary destination so that path effectively comes to a dead end. With the detour in operation, nothing can reach either detector by way of that path, so there can be no interference. With it switched off the path resumes its ordinary mode of action and passes through the second beam-splitter, making interference reappear. This arrangement does not actually insert and remove the second beam-splitter, but it does make it possible to switch from a state in which interference appears to a state in which interference cannot appear, and do so in the interval between light entering the first beam-splitter and light exiting the second beam-splitter. If photons had "decided" to enter the first beam-splitter as either waves or particles, they must have been directed to undo that decision and to go through the system in their other guise, and they must have done so without any physical process being relayed to the entering photons or the first beam-splitter, because that kind of transmission would be too slow even at the speed of light.
Wheeler's interpretation of the physical results would be that in one configuration of the two experiments a single copy of the wavefunction of an entering photon is received, with 50% probability, at one or the other detectors, and that under the other configuration two copies of the wave function, traveling over different paths, arrive at both detectors, are out of phase with each other, and therefore exhibit interference. In one detector the wave functions will be in phase with each other, and the result will be that the photon has 100% probability of showing up in that detector. In the other detector the wave functions will be 180° out of phase, will cancel each other exactly, and there will be a 0% probability of their related photons showing up in that detector.[14]
The cosmic experiment envisioned by Wheeler could be described either as analogous to the interferometer experiment or as analogous to a double-slit experiment. The important thing is that by a third kind of device, a massive stellar object acting as a gravitational lens, photons from a source can arrive by two pathways. Depending on how phase differences between wavefunction pairs are arranged, correspondingly different kinds of interference phenomena can be observed. Whether to merge the incoming wavefunctions or not, and how to merge the incoming wavefunctions can be controlled by experimenters. There are none of the phase differences introduced into the wavefunctions by the experimental apparatus as there are in the laboratory interferometer experiments, so despite there being no double-slit device near the light source, the cosmic experiment is closer to the double-slit experiment. However, Wheeler planned for the experiment to merge the incoming wavefunctions by use of a beam splitter.[15]
The main difficulty in performing this experiment is that the experimenter has no control over or knowledge of the lengths of each of the two paths between the distant quasar and the Earth. Matching path lengths in time requires using a delay device along one path. Before that task could be done, it would be necessary to find a way to calculate the time delay.
One suggestion for synchronizing inputs from the two ends of this cosmic experimental apparatus lies in the characteristics of quasars and the possibility of identifying identical events of some signal characteristic. Information from the Twin Quasars that Wheeler used as the basis of his speculation reaches Earth approximately 14 months apart.[16] Finding a way to keep a quantum of light in some kind of loop for over a year would not be easy.
Wheeler's version of the double-slit experiment is arranged so that the same photon that emerges from two slits can be detected in two ways. The first way lets the two paths come together, lets the two copies of the wavefunction overlap, and shows interference. The second way moves farther away from the photon source to a position where the distance between the two copies of the wavefunction is too great to show interference effects. The technical problem in the laboratory is how to insert a detector screen at a point appropriate to observe interference effects or to remove that screen to reveal the photon detectors that can be restricted to receiving photons from the narrow regions of space where the slits are found. One way to accomplish that task would be to use the recently developed electrically switchable mirrors and simply change directions of the two paths from the slits by switching a mirror on or off. As of early 2014, no such experiment has been announced.
The cosmic experiment described by Wheeler has other problems, but directing wavefunction copies to one place or another long after the photon involved has presumably "decided" whether to be a wave or a particle requires no great speed at all. One has about a billion years to get the job done.
The cosmic version of the interferometer experiment could be adapted to function as a cosmic double-slit device as indicated in the illustration.[17]: 66
The first real experiment to follow Wheeler's intention for a double-slit apparatus to be subjected to end-game determination of detection method is the one by Walborn et al.[18]
Researchers with access to radio telescopes originally designed forSETIresearch have explicated the practical difficulties of conducting the interstellar Wheeler experiment.[19]
Rather than mechanically activating a delay, newer versions of the delayed choice experiment design two paths controlled by quantum effects. The overall experiment then creates a superposition of the two outcomes, particle behavior or wave behavior. This line of experimentation proved very difficult to carry out when it was first conceived. Nevertheless, it has proven very valuable over the years since it has led researchers to provide "increasingly sophisticated demonstrations of the wave–particle duality of single quanta".[20][21]As one experimenter explains, "Wave and particle behavior can coexist simultaneously."[22]
A recent experiment by Manning et al. confirms the predictions of standard quantum mechanics with an atom of helium.[23]
A macroscopic quantum delayed-choice experiment has been proposed: coherent coupling of twocarbon nanotubescould be controlled by amplified single phonon events.[24]
Ma, Zeilinger et al. have summarized what can be known as a result of experiments that have arisen from Wheeler's proposals. They say:
Our work demonstrates and confirms that whether the correlations between two entangled photons reveal welcher-weg ["which-way"] information or an interference pattern of one (system) photon depends on the choice of measurement on the other (environment) photon, even when all of the events on the two sides that can be space-like separated are space-like separated. The fact that it is possible to decide whether a wave or particle feature manifests itself long after—and even space-like separated from—the measurement teaches us that we should not have any naive realistic picture for interpreting quantum phenomena. Any explanation of what goes on in a specific individual observation of one photon has to take into account the whole experimental apparatus of the complete quantum state consisting of both photons, and it can only make sense after all information concerning complementary variables has been recorded. Our results demonstrate that the viewpoint that the system photon behaves either definitely as a wave or definitely as a particle would require faster-than-light communication. Because this would be in strong tension with the special theory of relativity, we believe that such a viewpoint should be given up entirely.[25]
The delayed-choice experiment concept began as a series of thought experiments in quantum physics, first proposed by Wheeler in 1978.[26][27] According to the complementarity principle, the 'particle-like' (having exact location) or 'wave-like' (having frequency or amplitude) properties of a photon can be measured, but not both at the same time. Which characteristic is measured depends on whether experimenters use a device intended to observe particles or to observe waves.[28] When this statement is applied very strictly, one could argue that by determining the detector type one could force the photon to become manifest only as a particle or only as a wave. Detection of a photon is generally a destructive process (see quantum nondemolition measurement for non-destructive measurements). For example, a photon can be detected as the consequence of being absorbed by an electron in a photomultiplier that accepts its energy, which is then used to trigger the cascade of events that produces a "click" from that device. In the case of the double-slit experiment, a photon appears as a highly localized point in space and time on a screen. The buildup of the photons on the screen gives an indication of whether the photon must have traveled through the slits as a wave or could have traveled as a particle. The photon is said to have traveled as a wave if the buildup results in the typical interference pattern of waves (see Double-slit experiment § Interference from individual particles for an animation showing the buildup). However, if one of the slits is closed, or two orthogonal polarizers are placed in front of the slits (making the photons passing through different slits distinguishable), then no interference pattern will appear, and the buildup can be explained as the result of the photon traveling as a particle.
|
https://en.wikipedia.org/wiki/Wheeler%27s_delayed-choice_experiment
|
S/KEY is a one-time password system developed for authentication to Unix-like operating systems, especially from dumb terminals or untrusted public computers on which one does not want to type a long-term password. A user's real password is combined in an offline device with a short set of characters and a decrementing counter to form a single-use password. Because each password is only used once, they are useless to password sniffers.
Because the short set of characters does not change until the counter reaches zero, it is possible to prepare a list of single-use passwords, in order, that can be carried by the user. Alternatively, the user can present the password, characters, and desired counter value to a local calculator to generate the appropriate one-time password that can then be transmitted over the network in the clear. The latter form is more common and practically amounts to challenge–response authentication.
S/KEY is supported in Linux (via pluggable authentication modules), OpenBSD, NetBSD, and FreeBSD, and a generic open-source implementation can be used to enable its use on other systems. OpenSSH has also implemented S/KEY since version 1.2.2, released on December 1, 1999.[1] One common implementation is called OPIE. S/KEY is a trademark of Telcordia Technologies, formerly known as Bell Communications Research (Bellcore).
S/KEY is also sometimes referred to as Lamport's scheme, after its author, Leslie Lamport. It was developed by Neil Haller, Phil Karn and John Walden at Bellcore in the late 1980s. With the expiration of the basic patents on public-key cryptography and the widespread use of laptop computers running SSH and other cryptographic protocols that can secure an entire session, not just the password, S/KEY is falling into disuse.[citation needed] Schemes that implement two-factor authentication, by comparison, are growing in use.[2]
The server is the computer that will perform the authentication.
After password generation, the user has a sheet of paper with n passwords on it. If n is very large, either storing all n passwords or calculating a given password from H(W) becomes inefficient. There are methods to efficiently calculate the passwords in the required order, using only ⌈(log n)/2⌉ hash calculations per step and storing ⌈log n⌉ passwords.[3]
More ideally, though perhaps less commonly in practice, the user may carry a small, portable, secure, non-networked computing device capable of regenerating any needed password given the secret passphrase, the salt, and the number of iterations of the hash required, the latter two of which are conveniently provided by the server requesting authentication for login.
In any case, the first password will be the same password that the server has stored. This first password will not be used for authentication (the user should scratch out this password on the sheet of paper); the second one will be used instead:
For subsequent authentications, the user will provide passwordᵢ. (The last password on the printed list, passwordₙ, is the first password generated by the server, H(W), where W is the initial secret.)
The server will compute H(passwordᵢ) and will compare the result to passwordᵢ₋₁, which is stored as a reference on the server.
The security of S/KEY relies on the difficulty of reversing cryptographic hash functions. Assume an attacker manages to get hold of a password that was used for a successful authentication. Supposing this is passwordᵢ, this password is already useless for subsequent authentications, because each password can only be used once. It would be interesting for the attacker to find out passwordᵢ₊₁, because this password is the one that will be used for the next authentication.
However, this would require inverting the hash function: finding passwordᵢ₊₁ given only passwordᵢ (where H(passwordᵢ₊₁) = passwordᵢ), which is extremely difficult to do with current cryptographic hash functions.
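The chain construction and the server-side check described above can be sketched in Python. This is a minimal illustration, not a compatible implementation: SHA-256 stands in for the hash actually used by S/KEY (historically MD4/MD5 folded to 64 bits), and all names are made up for the example.

```python
import hashlib

def hash_step(value: str) -> str:
    # SHA-256 is an assumption of this sketch; real S/KEY folds
    # MD4/MD5 output down to 64 bits.
    return hashlib.sha256(value.encode()).hexdigest()

def make_chain(secret: str, n: int) -> list[str]:
    # chain[0] = H(secret), chain[k] = H(chain[k-1]).
    # The user works backwards through this list: chain[n-1] seeds the
    # server, then chain[n-2], chain[n-3], ... are presented in turn.
    chain = [hash_step(secret)]
    for _ in range(n - 1):
        chain.append(hash_step(chain[-1]))
    return chain

class Server:
    def __init__(self, last_password: str):
        self.stored = last_password  # the reference password on file

    def authenticate(self, candidate: str) -> bool:
        # The hash of the presented password must equal the stored one.
        if hash_step(candidate) == self.stored:
            self.stored = candidate  # candidate becomes the new reference
            return True
        return False

chain = make_chain("correct horse battery staple", 5)
server = Server(chain[-1])                # server seeded with H^5(secret)
assert server.authenticate(chain[3])      # next password up the chain
assert server.authenticate(chain[2])
assert not server.authenticate(chain[2])  # replaying a used password fails
```

Note that each successful login moves the stored reference one step back along the chain, which is why a sniffed password is useless for the next authentication.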
Nevertheless, S/KEY is vulnerable to a man-in-the-middle attack if used by itself. It is also vulnerable to certain race conditions, such as where an attacker's software sniffs the network to learn the first N − 1 characters in the password (where N equals the password length), establishes its own TCP session to the server, and in rapid succession tries all valid characters in the N-th position until one succeeds. These types of vulnerabilities can be avoided by using ssh, SSL, SPKM, or another encrypted transport layer.
Since each iteration of S/KEY doesn't include the salt or count, it is feasible to find collisions directly without breaking the initial password. This has a complexity of 2⁶⁴, which can be pre-calculated with the same amount of space. The space complexity can be optimized by storing chains of values, although collisions might reduce the coverage of this method, especially for long chains.[4]
Someone with access to an S/KEY database can break all of its entries in parallel with a complexity of 2⁶⁴. While they wouldn't get the original password, they would be able to find valid credentials for each user. In this regard, it is similar to storing unsalted 64-bit hashes of strong, unique passwords.
The S/KEY protocol can loop. If such a loop were created in the S/KEY chain, an attacker could use the user's key without finding the original value, and possibly without tipping off the valid user. The pathological case of this would be an OTP that hashes to itself.
Internally, S/KEY uses 64-bit numbers. For human usability purposes, each number is mapped to six short words, of one to four characters each, from a publicly accessible 2048-word dictionary. For example, one 64-bit number maps to "ROY HURT SKI FAIL GRIM KNEE".[5]
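The number-to-words mapping can be sketched as follows. In the standardized scheme (RFC 2289), a 2-bit checksum is appended to the 64-bit value and the resulting 66 bits are split into six 11-bit indices into the 2048-word dictionary. The `WORDS` list below is a placeholder stand-in, not the real dictionary, so the output words are illustrative only.

```python
# Placeholder 2048-entry word list; the real S/KEY dictionary contains
# short English words such as "ROY", "HURT", "SKI", ...
WORDS = [f"W{i:04d}" for i in range(2048)]

def to_six_words(value: int) -> list[str]:
    assert 0 <= value < 2**64
    # Checksum per RFC 2289: sum the 32 two-bit groups of the value,
    # keep the two least significant bits of the sum.
    checksum = sum((value >> s) & 0b11 for s in range(0, 64, 2)) & 0b11
    bits = (value << 2) | checksum  # 66 bits total
    # Split into six 11-bit indices, most significant group first.
    return [WORDS[(bits >> (11 * (5 - k))) & 0x7FF] for k in range(6)]

print(to_six_words(0x1234567890ABCDEF))
```

Since 6 × 11 = 66 bits, the six words carry the whole 64-bit number plus the checksum, which lets the receiving side detect mistyped word lists.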
|
https://en.wikipedia.org/wiki/S/KEY
|
The Big Electric Cat, named for an Adrian Belew song, was a public access computer system in New York City in the late 1980s, known on Usenet as node dasys1.
Based on a Stride Computer brand minicomputer running the UniStride Unix variant, the Big Electric Cat (sometimes known as BEC) provided dialup modem users with text terminal-based access to Usenet at no charge.
This was the first such system in New York and one of the first in the world. Previously, access to Usenet had been almost exclusively through systems at universities, or a few government and very few commercial installations. While Bulletin Board System culture and Fidonet existed at the time, systems which allowed the general public to access Usenet were virtually unknown. As with many early Internet and Usenet systems, a community began to form among users of the system, which held occasional outings to restaurants.
BEC was started by four college students, with one of them, Rob Sweeney, owning the equipment. The other sysops were Charles Foreman, Lee Fischman, and Richard Newman.
A list of BBSes in the 212 area code[1] contains the following note, attributed to Lee Fischman:
The movie referred to is BBS: The Documentary.[2]
BEC was not intended to be a profit-making operation, charging fees designed only to cover operating costs (Phrack reports $5 per month for an account at the end of 1989, though the system may in fact have been out of operation by then, and other sources note that the system was supported by donations)[3] and relying entirely on volunteer labor.[4]
In mid-1990,[5] after increasingly unreliable operation, The Big Electric Cat suffered what proved to be a fatal hardware failure, leaving a gap which was filled by some of its users founding one of the first commercial ISPs ever, Panix.[6]
2600 Magazine founder Eric Corley used a Big Electric Cat account.[7]
|
https://en.wikipedia.org/wiki/The_Big_Electric_Cat
|
BlueSoleil is a Bluetooth software/driver for Microsoft Windows, Linux and Windows CE. It supports Bluetooth chipsets from CSR, Broadcom, Marvell, etc. Bluetooth dongles, PCs, laptops, PDAs, PNDs and UMPCs are sometimes bundled with a version of this software, albeit with limited functionality and OEM licensing. The software is rarely needed on modern computers, as well-functioning Bluetooth drivers for the most widely used Bluetooth chips have been available through Windows Update since Windows Vista.
BlueSoleil is developed by the Chinese firm IVT Corporation and the first version was released in 1999. In China, BlueSoleil is marketed as 1000Moons (千月).[1]
BlueSoleil features the following technologies:
A demonstration version of BlueSoleil is available, restricting the device after 2MB data transfer, approximately 1.5 minutes of high-quality audio or 2–4 hours of mouse use. The software must be purchased to enable unlimited use.
More than 30 million copies of BlueSoleil have been distributed. IVT has also established an interoperability testing centre, where it has built up a large library of Bluetooth products on the market in order to perform interoperability testing.[2]
Various Bluetooth dongles are delivered with an obsolete or demonstration version of Bluesoleil. New versions are available as a standalone purchase from the vendor's website. Regardless of whether the bundled or the standalone version is purchased, the software enforces licensing restrictions which tie it to the address of a specific Bluetooth dongle.
BlueSoleil works with hardware from the main Bluetooth silicon vendors, such as Accelsemi, Atheros, CSR, Conwise, 3DSP, Broadcom, Intel, Marvell, NSC, RFMD and SiRF, as well as baseband IP such as RivieraWaves BT IP.
If there is no Bluetooth dongle attached to the PC, the Bluetooth logo will be grey; blue if a dongle is attached; and green when connected to another Bluetooth enabled device.
|
https://en.wikipedia.org/wiki/BlueSoleil
|
Aristarchus's inequality (after the Greek astronomer and mathematician Aristarchus of Samos; c. 310 – c. 230 BCE) is a law of trigonometry which states that if α and β are acute angles (i.e. between 0 and a right angle) and β < α, then
sin α / sin β < α / β < tan α / tan β.
Ptolemy used the first of these inequalities while constructing his table of chords.[1]
The proof is a consequence of the more widely known inequalities
0 < sin α < α < tan α, valid for 0 < α < π/2.
Using these inequalities we can first prove that
We first note that the inequality is equivalent to
which itself can be rewritten as
We now want to show that
The second inequality is simply β < tan β. The first one is true because
Now we want to show the second inequality, i.e. that:
We first note that due to the initial inequalities we have that:
Consequently, using that 0 < α − β < α in the previous equation (replacing β by α − β < α) we obtain:
We conclude that
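Assuming the standard statement of the inequality (sin α / sin β < α / β < tan α / tan β for 0 < β < α < π/2), the claim can be spot-checked numerically over a grid of acute angle pairs:

```python
import itertools
import math

def aristarchus_holds(alpha: float, beta: float) -> bool:
    # Check sin(a)/sin(b) < a/b < tan(a)/tan(b) for 0 < b < a < pi/2.
    assert 0 < beta < alpha < math.pi / 2
    return (math.sin(alpha) / math.sin(beta)
            < alpha / beta
            < math.tan(alpha) / math.tan(beta))

# 19 evenly spaced acute angles strictly inside (0, pi/2).
angles = [k * math.pi / 40 for k in range(1, 20)]
assert all(aristarchus_holds(a, b)
           for a, b in itertools.product(angles, angles) if b < a)
```

This is of course only a numerical check, not a proof; the proof proceeds from sin x < x < tan x as sketched above.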
|
https://en.wikipedia.org/wiki/Aristarchus%27s_inequality
|
In computer science, an LL parser (left-to-right, leftmost derivation) is a top-down parser for a restricted context-free language. It parses the input from Left to right, performing Leftmost derivation of the sentence.
An LL parser is called an LL(k) parser if it uses k tokens of lookahead when parsing a sentence. A grammar is called an LL(k) grammar if an LL(k) parser can be constructed from it. A formal language is called an LL(k) language if it has an LL(k) grammar. The set of LL(k) languages is properly contained in that of LL(k+1) languages, for each k ≥ 0.[1] A corollary of this is that not all context-free languages can be recognized by an LL(k) parser.
An LL parser is called LL-regular (LLR) if it parses an LL-regular language.[clarification needed][2][3][4] The class of LLR grammars contains every LL(k) grammar for every k. For every LLR grammar there exists an LLR parser that parses the grammar in linear time.[citation needed]
Two nomenclative outlier parser types are LL(*) and LL(finite). A parser is called LL(*)/LL(finite) if it uses the LL(*)/LL(finite) parsing strategy.[5][6] LL(*) and LL(finite) parsers are functionally closer to PEG parsers. An LL(finite) parser can parse an arbitrary LL(k) grammar optimally in the amount of lookahead and lookahead comparisons. The class of grammars parsable by the LL(*) strategy encompasses some context-sensitive languages due to the use of syntactic and semantic predicates and has not been identified. It has been suggested that LL(*) parsers are better thought of as TDPL parsers.[7] Contrary to popular misconception, LL(*) parsers are not LLR in general, and are guaranteed by construction to perform worse on average (super-linear against linear time) and far worse in the worst case (exponential against linear time).
LL grammars, particularly LL(1) grammars, are of great practical interest, as parsers for these grammars are easy to construct, and many computer languages are designed to be LL(1) for this reason.[8] LL parsers may be table-based,[citation needed] i.e. similar to LR parsers, but LL grammars can also be parsed by recursive descent parsers. According to Waite and Goos (1984),[9] LL(k) grammars were introduced by Stearns and Lewis (1969).[10]
For a given context-free grammar, the parser attempts to find the leftmost derivation.
Given an example grammarG:
the leftmost derivation for w = ((i+i)+i) is:
Generally, there are multiple possibilities when selecting a rule to expand the leftmost non-terminal. In step 2 of the previous example, the parser must choose whether to apply rule 2 or rule 3:
To be efficient, the parser must be able to make this choice deterministically when possible, without backtracking. For some grammars, it can do this by peeking at the unread input (without consuming it). In our example, if the parser knows that the next unread symbol is (, the only correct rule that can be used is 2.
Generally, an LL(k) parser can look ahead at k symbols. However, given a grammar, the problem of determining whether there exists an LL(k) parser for some k that recognizes it is undecidable. For each k, there is a language that cannot be recognized by an LL(k) parser, but can be by an LL(k + 1) parser.
We can use the above analysis to give the following formal definition:
Let G be a context-free grammar and k ≥ 1. We say that G is LL(k), if and only if for any two leftmost derivations:
the following condition holds: the prefix of the string u of length k equals the prefix of the string v of length k implies β = γ.
In this definition, S is the start symbol and A any non-terminal. The already derived input w, and the yet unread u and v, are strings of terminals. The Greek letters α, β and γ represent any string of both terminals and non-terminals (possibly empty). The prefix length corresponds to the lookahead buffer size, and the definition says that this buffer is enough to distinguish between any two derivations of different words.
The LL(k) parser is a deterministic pushdown automaton with the ability to peek at the next k input symbols without reading them. This peek capability can be emulated by storing the lookahead buffer contents in the finite state space, since both buffer and input alphabet are finite in size. As a result, this does not make the automaton more powerful, but is a convenient abstraction.
The stack alphabet is Γ = N ∪ Σ, where:
The parser stack initially contains the starting symbol above the EOI: [ S $ ]. During operation, the parser repeatedly replaces the symbol X on top of the stack:
If the last symbol to be removed from the stack is the EOI, the parsing is successful; the automaton accepts via an empty stack.
The states and the transition function are not explicitly given; they are specified (generated) using a more convenient parse table instead. The table provides the following mapping:
If the parser cannot perform a valid transition, the input is rejected (empty cells). To make the table more compact, only the non-terminal rows are commonly displayed, since the action is the same for terminals.
To explain an LL(1) parser's workings we will consider the following small LL(1) grammar:

1. S → F
2. S → ( S + F )
3. F → a
and parse the following input:

( a + a )
An LL(1) parsing table for a grammar has a row for each of the non-terminals and a column for each terminal (including the special terminal, represented here as $, that is used to indicate the end of the input stream).
Each cell of the table may point to at most one rule of the grammar (identified by its number). For example, in the parsing table for the above grammar, the cell for the non-terminal 'S' and terminal '(' points to the rule number 2:

      (    )    a    +    $
 S    2    -    1    -    -
 F    -    -    3    -    -
The algorithm to construct a parsing table is described in a later section, but first let's see how the parser uses the parsing table to process its input.
In each step, the parser reads the next-available symbol from the input stream, and the top-most symbol from the stack. If the input symbol and the stack-top symbol match, the parser discards them both, leaving only the unmatched symbols in the input stream and on the stack.
Thus, in its first step, the parser reads the input symbol '(' and the stack-top symbol 'S'. The parsing table instruction comes from the column headed by the input symbol '(' and the row headed by the stack-top symbol 'S'; this cell contains '2', which instructs the parser to apply rule (2). The parser rewrites 'S' to '( S + F )' on the stack by removing 'S' from the stack and pushing ')', 'F', '+', 'S', '(' onto the stack, and it writes the rule number 2 to the output. The stack then becomes:

[ ( S + F ) $ ]
In the second step, the parser removes the '(' from its input stream and from its stack, since they now match. The stack now becomes:

[ S + F ) $ ]
Now the parser has an 'a' on its input stream and an 'S' as its stack top. The parsing table instructs it to apply rule (1) from the grammar and write the rule number 1 to the output stream. The stack becomes:

[ F + F ) $ ]
The parser now has an 'a' on its input stream and an 'F' as its stack top. The parsing table instructs it to apply rule (3) from the grammar and write the rule number 3 to the output stream. The stack becomes:

[ a + F ) $ ]
The parser now has an 'a' on the input stream and an 'a' at its stack top. Because they are the same, it removes it from the input stream and pops it from the top of the stack. The parser then has a '+' on the input stream and '+' at the top of the stack, meaning that, like with 'a', it is popped from the stack and removed from the input stream. This results in:

[ F ) $ ]
In the next three steps the parser will replace 'F' on the stack by 'a', write the rule number 3 to the output stream and remove the 'a' and ')' from both the stack and the input stream. The parser thus ends with '$' on both its stack and its input stream.
In this case the parser will report that it has accepted the input string and write the following list of rule numbers to the output stream:

[ 2, 1, 3, 3 ]
This is indeed a list of rules for a leftmost derivation of the input string, which is:

S → ( S + F ) → ( F + F ) → ( a + F ) → ( a + a )
Below follows a C++ implementation of a table-based LL parser for the example language:
Outputs:
As can be seen from the example, the parser performs three types of steps depending on whether the top of the stack is a nonterminal, a terminal or the special symbol $: if it is a nonterminal, it looks up the parsing table and replaces it with the right-hand side of the indicated rule; if it is a terminal, it compares it with the next input symbol, popping both on a match and reporting an error otherwise; if it is $, it accepts when the input is also exhausted and reports an error otherwise.
These steps are repeated until the parser stops, and then it will have either completely parsed the input and written aleftmost derivationto the output stream or it will have reported an error.
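The loop just described can be sketched compactly in code. Below is a minimal Python rendering of the table-driven algorithm (the article's full listing is in C++; this sketch is purely illustrative), with the rules and parsing table of the example grammar hard-coded:

```python
# Minimal table-driven LL(1) parser for the example grammar:
#   1. S -> F
#   2. S -> ( S + F )
#   3. F -> a
RULES = {1: ("S", ["F"]), 2: ("S", ["(", "S", "+", "F", ")"]), 3: ("F", ["a"])}
TABLE = {("S", "("): 2, ("S", "a"): 1, ("F", "a"): 3}

def parse(tokens):
    """Return the list of applied rule numbers (a leftmost derivation)."""
    stack = ["$", "S"]              # end marker below the start symbol
    input_ = list(tokens) + ["$"]
    output = []
    while stack:
        top = stack.pop()
        if top == input_[0]:        # terminal (or $) matches: consume both
            input_.pop(0)
        elif (top, input_[0]) in TABLE:
            rule = TABLE[(top, input_[0])]
            output.append(rule)
            stack.extend(reversed(RULES[rule][1]))  # push RHS, leftmost on top
        else:
            raise SyntaxError(f"unexpected {input_[0]!r} with {top!r} on stack")
    return output

print(parse("(a+a)"))  # -> [2, 1, 3, 3]
```

Running it on the example input reproduces the rule list of the walkthrough above.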
In order to fill the parsing table, we have to establish what grammar rule the parser should choose if it sees a nonterminal A on the top of its stack and a symbol a on its input stream.
It is easy to see that such a rule should be of the form A → w and that the language corresponding to w should have at least one string starting with a.
For this purpose we define the First-set of w, written here as Fi(w), as the set of terminals that can be found at the start of some string derived from w, plus ε if the empty string can also be derived from w.
Given a grammar with the rules A1 → w1, ..., An → wn, we can compute the Fi(wi) and Fi(Ai) for every rule as follows:
The result is the least fixed point solution to the following system:
where, for sets of words U and V, the truncated product is defined by U · V = { (uv) : 1 ∣ u ∈ U, v ∈ V }, and (w) : 1 denotes the initial length-1 prefix of a word w of length 2 or more, or w itself if w has length 0 or 1.
Unfortunately, the First-sets are not sufficient to compute the parsing table.
This is because a right-hand sidewof a rule might ultimately be rewritten to the empty string.
So the parser should also use the rule A → w if ε is in Fi(w) and it sees on the input stream a symbol that could follow A. Therefore, we also need the Follow-set of A, written as Fo(A) here, which is defined as the set of terminals a such that there is a string of symbols αAaβ that can be derived from the start symbol. We use $ as a special terminal indicating the end of the input stream, and S as the start symbol.
Computing the Follow-sets for the nonterminals in a grammar can be done as follows:
This provides the least fixed point solution to the following system:
Now we can define exactly which rules will appear where in the parsing table.
If T[A, a] denotes the entry in the table for nonterminal A and terminal a, then T[A, a] contains the rule A → w if and only if a is in Fi(w), or ε is in Fi(w) and a is in Fo(A).
Equivalently: T[A, a] contains the rule A → w for each a ∈ Fi(w) · Fo(A).
If the table contains at most one rule in every one of its cells, then the parser will always know which rule it has to use and can therefore parse strings without backtracking.
It is in precisely this case that the grammar is called anLL(1) grammar.
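The table-filling rule above can be sketched as a fixed-point computation. The following Python sketch (names and data layout are illustrative assumptions, not from the article) computes Fi, Fo and the LL(1) table for the example grammar, using the condition "a ∈ Fi(w), or ε ∈ Fi(w) and a ∈ Fo(A)":

```python
# Fixed-point computation of First/Follow sets and the LL(1) table
# for the example grammar: 1. S -> F, 2. S -> ( S + F ), 3. F -> a
EPS = "eps"
RULES = [("S", ["F"]), ("S", ["(", "S", "+", "F", ")"]), ("F", ["a"])]
NONTERM = {"S", "F"}

def first_of(seq, first):
    """Fi of a symbol string: terminals starting it, plus eps if it can vanish."""
    out = set()
    for sym in seq:
        syms = first[sym] if sym in NONTERM else {sym}
        out |= syms - {EPS}
        if EPS not in syms:
            return out
    return out | {EPS}

first = {n: set() for n in NONTERM}
follow = {n: set() for n in NONTERM}
follow["S"].add("$")                      # $ follows the start symbol
changed = True
while changed:                            # iterate to the least fixed point
    changed = False
    for lhs, rhs in RULES:
        f = first_of(rhs, first)
        if not f <= first[lhs]:
            first[lhs] |= f; changed = True
        for i, sym in enumerate(rhs):     # propagate Follow through each rule
            if sym in NONTERM:
                tail = first_of(rhs[i + 1:], first)
                add = (tail - {EPS}) | (follow[lhs] if EPS in tail else set())
                if not add <= follow[sym]:
                    follow[sym] |= add; changed = True

table = {}
for num, (lhs, rhs) in enumerate(RULES, 1):
    f = first_of(rhs, first)
    for a in (f - {EPS}) | (follow[lhs] if EPS in f else set()):
        table[(lhs, a)] = num

print(first, follow, table)
```

For the example grammar this yields exactly the parse table used in the walkthrough: rule 2 under (S, '('), rule 1 under (S, 'a'), and rule 3 under (F, 'a').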
The construction for LL(1) parsers can be adapted to LL(k) for k > 1 with the following modifications:
where an input is suffixed by k end-markers $, to fully account for the k-symbol lookahead context. This approach eliminates special cases for ε, and can be applied equally well in the LL(1) case.
Until the mid-1990s, it was widely believed that LL(k) parsing (for k > 1) was impractical,[11]: 263–265 since the parser table would have exponential size in k in the worst case. This perception changed gradually after the release of the Purdue Compiler Construction Tool Set around 1992, when it was demonstrated that many programming languages can be parsed efficiently by an LL(k) parser without triggering the worst-case behavior of the parser. Moreover, in certain cases LL parsing is feasible even with unlimited lookahead. By contrast, traditional parser generators like yacc use LALR(1) parser tables to construct a restricted LR parser with a fixed one-token lookahead.
As described in the introduction, LL(1) parsers recognize languages that have LL(1) grammars, which are a special case of context-free grammars; LL(1) parsers cannot recognize all context-free languages. The LL(1) languages are a proper subset of the LR(1) languages, which in turn are a proper subset of all context-free languages. In order for a context-free grammar to be an LL(1) grammar, certain conflicts must not arise, which we describe in this section.
Let A be a non-terminal. FIRST(A) is (defined to be) the set of terminals that can appear in the first position of any string derived from A. FOLLOW(A) is the union over:[12]
There are two main types of LL(1) conflicts:
The FIRST sets of two different grammar rules for the same non-terminal intersect.
An example of an LL(1) FIRST/FIRST conflict:

S → E | E 'a'
E → 'b' | ε
FIRST(E) = {b, ε} and FIRST(E a) = {b, a}, so when the table is drawn, there is a conflict under terminal b for production rule S.
Left recursion will cause a FIRST/FIRST conflict with all alternatives.
The FIRST and FOLLOW sets of a grammar rule overlap. With an empty string (ε) in the FIRST set, it is unknown which alternative to select.
An example of an LL(1) FIRST/FOLLOW conflict:

S → A 'a' 'b'
A → 'a' | ε
The FIRST set of A is {a, ε}, and the FOLLOW set is {a}.
A common left-factor is "factored out". Schematically, a grammar of the form

A → X | X Y Z

becomes

A → X B
B → Y Z | ε
This can be applied when two alternatives start with the same symbol, as in a FIRST/FIRST conflict.
Another example (more complex) using the above FIRST/FIRST conflict example:

S → E | E 'a'
E → 'b' | ε

becomes (merging into a single non-terminal)

S → 'b' | ε | 'b' 'a' | 'a'

then, through left-factoring, becomes

S → 'b' E | E
E → 'a' | ε
Substituting a rule into another rule to remove indirect or FIRST/FOLLOW conflicts.
Note that this may cause a FIRST/FIRST conflict.
For a general method, see removing left recursion.
A simple example for left recursion removal:
The following production rule has left recursion on E:

E → E '+' T
E → T
This rule is nothing but a list of Ts separated by '+'; in regular expression form, T ('+' T)*.
So the rule could be rewritten as

E → T Z
Z → '+' T Z
Z → ε
Now there is no left recursion and no conflicts on either of the rules.
However, not all context-free grammars have an equivalent LL(k)-grammar, e.g.:

S → A | B
A → 'a' A 'b' | ε
B → 'a' B 'b' 'b' | ε
It can be shown that there does not exist any LL(k)-grammar accepting the language generated by this grammar.
|
https://en.wikipedia.org/wiki/LL_parser
|
Doorway pages (bridge pages, portal pages, jump pages, gateway pages or entry pages) are web pages that are created for the deliberate manipulation of search engine indexes (spamdexing). A doorway page will affect the index of a search engine by inserting results for particular phrases while sending visitors to a different page. Doorway pages that redirect visitors without their knowledge use some form of cloaking. This usually falls under Black Hat SEO.
If a visitor clicks through to a typical doorway page from a search engine results page, in most cases they will be redirected with a fast Meta refresh command to another page. Other forms of redirection include the use of JavaScript and server-side redirection, from the server configuration file. Some doorway pages may be dynamic pages generated by scripting languages such as Perl and PHP.
Doorway pages are often easy to identify in that they have been designed primarily for search engines, not for human beings. Sometimes a doorway page is copied from another high ranking page, but this is likely to cause the search engine to detect the page as a duplicate and exclude it from the search engine listings.
Because many search engines give a penalty for using the META refresh command,[1] some doorway pages just trick the visitor into clicking on a link to get them to the desired destination page, or they use JavaScript for redirection.
More sophisticated doorway pages, called Content Rich Doorways, are designed to gain high placement in search results without using redirection. They incorporate at least a minimum amount of design and navigation similar to the rest of the site to provide a more human-friendly and natural appearance. Visitors are offered standard links as calls to action.
Landing pages are regularly conflated with doorway pages in the literature. The former are content-rich pages to which traffic is directed within the context of pay-per-click campaigns and to maximize SEO campaigns.
Doorway pages are also typically used for sites that maintain a blacklist of URLs known to harbor spam, such as Facebook, Tumblr and DeviantArt.
Doorway pages often also employ cloaking techniques for misdirection. Cloaked pages show human visitors a version of the page that is different from the one provided to crawlers, usually implemented via server-side scripts. The server can differentiate between bots, crawlers and human visitors based on various flags, including source IP address or user-agent. Cloaking simultaneously tricks search engines into ranking sites higher for irrelevant keywords, while monetizing human traffic by showing visitors spammy, often irrelevant, content. The practice of cloaking is considered highly manipulative and is condemned within the SEO industry and by search engines, and its use can result in significant penalties or the complete removal of sites from the index.[2]
Webmasters that use doorway pages would generally prefer that users never actually see these pages and instead be delivered to a "real" page within their sites. To achieve this goal, redirection is sometimes used. This may be as simple as installing a meta refresh tag on the doorway pages. An advanced system might make use of cloaking. In either case, such redirection may make the doorway pages unacceptable to search engines.
A content-rich doorway page must be constructed in a search-engine-friendly manner, or it may be construed as search engine spam, possibly resulting in the page being banned from the index for an undisclosed amount of time.
These types of doorways utilize (but are not limited to) the following:
Doorway pages were examined as a cultural and political phenomenon along with spam poetry and flarf.[3]
|
https://en.wikipedia.org/wiki/Doorway_pages
|
In mathematics, physics and engineering, the sinc function (/ˈsɪŋk/ SINK), denoted by sinc(x), has two forms, normalized and unnormalized.[1]
In mathematics, the historical unnormalized sinc function is defined for x ≠ 0 by

sinc(x) = sin(x) / x.

Alternatively, the unnormalized sinc function is often called the sampling function, indicated as Sa(x).[2]
In digital signal processing and information theory, the normalized sinc function is commonly defined for x ≠ 0 by

sinc(x) = sin(πx) / (πx).

In either case, the value at x = 0 is defined to be the limiting value

sinc(0) := lim_{x→0} sin(ax)/(ax) = 1 for all real a ≠ 0

(the limit can be proven using the squeeze theorem).
The normalization causes the definite integral of the function over the real numbers to equal 1 (whereas the same integral of the unnormalized sinc function has a value of π). As a further useful property, the zeros of the normalized sinc function are the nonzero integer values of x.
The normalized sinc function is the Fourier transform of the rectangular function with no scaling. It is used in the concept of reconstructing a continuous bandlimited signal from uniformly spaced samples of that signal.
The only difference between the two definitions is in the scaling of the independent variable (the x axis) by a factor of π. In both cases, the value of the function at the removable singularity at zero is understood to be the limit value 1. The sinc function is then analytic everywhere and hence an entire function.
The function has also been called the cardinal sine or sine cardinal function.[3][4] The term sinc was introduced by Philip M. Woodward in his 1952 article "Information theory and inverse probability in telecommunication", in which he said that the function "occurs so often in Fourier analysis and its applications that it does seem to merit some notation of its own",[5] and in his 1953 book Probability and Information Theory, with Applications to Radar.[6][7] The function itself was first mathematically derived in this form by Lord Rayleigh in his expression (Rayleigh's formula) for the zeroth-order spherical Bessel function of the first kind.
The zero crossings of the unnormalized sinc are at non-zero integer multiples of π, while zero crossings of the normalized sinc occur at non-zero integers.
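Both conventions are easy to state in code. A minimal Python sketch of the two forms, with the removable singularity at 0 filled in by the limit value 1:

```python
import math

def sinc_unnormalized(x):
    """sin(x)/x, with the removable singularity at 0 filled by its limit 1."""
    return 1.0 if x == 0 else math.sin(x) / x

def sinc_normalized(x):
    """sin(pi x)/(pi x); zeros at the nonzero integers."""
    return sinc_unnormalized(math.pi * x)

print(sinc_normalized(0.5))         # 2/pi ~ 0.6366
print(sinc_normalized(3.0))         # ~ 0 (a zero at every nonzero integer)
print(sinc_unnormalized(math.pi))   # ~ 0 (zeros at nonzero multiples of pi)
```

In floating point the zeros at the integers (or at multiples of π) are only zero up to roundoff.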
The local maxima and minima of the unnormalized sinc correspond to its intersections with the cosine function. That is, sin(ξ)/ξ = cos(ξ) for all points ξ where the derivative of sin(x)/x is zero and thus a local extremum is reached. This follows from the derivative of the sinc function:

d/dx sinc(x) = (cos(x) − sinc(x)) / x for x ≠ 0, and 0 for x = 0.

The first few terms of the infinite series for the x coordinate of the n-th extremum with positive x coordinate are[citation needed]

x_n = q − q⁻¹ − (2/3)q⁻³ − (13/15)q⁻⁵ − (146/105)q⁻⁷ − ⋯, where q = (n + 1/2)π,

and where odd n lead to a local minimum, and even n to a local maximum. Because of symmetry around the y axis, there exist extrema with x coordinates −x_n. In addition, there is an absolute maximum at ξ₀ = (0, 1).
The normalized sinc function has a simple representation as the infinite product

sin(πx)/(πx) = ∏_{n=1}^∞ (1 − x²/n²)

and is related to the gamma function Γ(x) through Euler's reflection formula:

sin(πx)/(πx) = 1 / (Γ(1 + x) Γ(1 − x)).

Euler discovered[8] that

sin(x)/x = ∏_{n=1}^∞ cos(x/2ⁿ),

and because of the product-to-sum identity[9]

∏_{n=1}^k cos(x/2ⁿ) = (1/2^{k−1}) ∑_{n=1}^{2^{k−1}} cos((n − 1/2) x / 2^{k−1}), for all k ≥ 1,

Euler's product can be recast as a sum

sin(x)/x = lim_{N→∞} (1/N) ∑_{n=1}^N cos((n − 1/2) x / N).
The continuous Fourier transform of the normalized sinc (to ordinary frequency) is rect(f):

∫_{−∞}^{∞} sinc(t) e^{−i2πft} dt = rect(f),

where the rectangular function is 1 for arguments between −1/2 and 1/2, and zero otherwise. This corresponds to the fact that the sinc filter is the ideal (brick-wall, meaning rectangular frequency response) low-pass filter.
This Fourier integral, including the special case

∫_{−∞}^{∞} sin(πx)/(πx) dx = rect(0) = 1,

is an improper integral (see Dirichlet integral) and not a convergent Lebesgue integral, since

∫_{−∞}^{∞} |sin(πx)/(πx)| dx = +∞.
The normalized sinc function has properties that make it ideal in relationship to interpolation of sampled bandlimited functions:
Other properties of the two sinc functions include:
The normalized sinc function can be used as a nascent delta function, meaning that the following weak limit holds:
lim_{a→0} sin(πx/a)/(πx) = lim_{a→0} (1/a) sinc(x/a) = δ(x).

This is not an ordinary limit, since the left side does not converge. Rather, it means that

lim_{a→0} ∫_{−∞}^{∞} (1/a) sinc(x/a) φ(x) dx = φ(0)

for every Schwartz function φ, as can be seen from the Fourier inversion theorem.
In the above expression, as a → 0, the number of oscillations per unit length of the sinc function approaches infinity. Nevertheless, the expression always oscillates inside an envelope of ±1/(πx), regardless of the value of a.
This complicates the informal picture of δ(x) as being zero for all x except at the point x = 0, and illustrates the problem of thinking of the delta function as a function rather than as a distribution. A similar situation is found in the Gibbs phenomenon.
We can also make an immediate connection with the standard Dirac representation of δ(x) by writing b = 1/a and

lim_{b→∞} sin(bπx)/(πx) = lim_{b→∞} (1/2π) ∫_{−bπ}^{bπ} e^{ikx} dk = (1/2π) ∫_{−∞}^{∞} e^{ikx} dk = δ(x),
which makes clear the recovery of the delta as an infinite bandwidth limit of the integral.
All sums in this section refer to the unnormalized sinc function.
The sum of sinc(n) over integer n from 1 to ∞ equals (π − 1)/2:

∑_{n=1}^∞ sinc(n) = sinc(1) + sinc(2) + sinc(3) + sinc(4) + ⋯ = (π − 1)/2.

The sum of the squares also equals (π − 1)/2:[10][11]

∑_{n=1}^∞ sinc²(n) = sinc²(1) + sinc²(2) + sinc²(3) + sinc²(4) + ⋯ = (π − 1)/2.

When the signs of the addends alternate and begin with +, the sum equals 1/2:

∑_{n=1}^∞ (−1)^{n+1} sinc(n) = sinc(1) − sinc(2) + sinc(3) − sinc(4) + ⋯ = 1/2.

The alternating sums of the squares and cubes also equal 1/2:[12]

∑_{n=1}^∞ (−1)^{n+1} sinc²(n) = sinc²(1) − sinc²(2) + sinc²(3) − sinc²(4) + ⋯ = 1/2,

∑_{n=1}^∞ (−1)^{n+1} sinc³(n) = sinc³(1) − sinc³(2) + sinc³(3) − sinc³(4) + ⋯ = 1/2.
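These identities can be checked numerically from partial sums. A Python sketch (the truncation point N is an arbitrary choice, and the tails shrink roughly like 1/N):

```python
import math

def sinc(x):
    # unnormalized sinc, as used for the sums in this section
    return 1.0 if x == 0 else math.sin(x) / x

# Partial sums up to N; the tail of the squared sum behaves like 0.5/N,
# so N = 10**5 already matches (pi - 1)/2 to several decimal places.
N = 10**5
s_squares = sum(sinc(n) ** 2 for n in range(1, N + 1))
s_alt = sum((-1) ** (n + 1) * sinc(n) for n in range(1, N + 1))

print(s_squares, (math.pi - 1) / 2)   # both ~ 1.0708
print(s_alt)                          # ~ 0.5
```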
The Taylor series of the unnormalized sinc function can be obtained from that of the sine (which also yields its value of 1 at x = 0):

sin(x)/x = ∑_{n=0}^∞ (−1)ⁿ x^{2n} / (2n + 1)! = 1 − x²/3! + x⁴/5! − x⁶/7! + ⋯

The series converges for all x. The normalized version follows easily:

sin(πx)/(πx) = 1 − π²x²/3! + π⁴x⁴/5! − π⁶x⁶/7! + ⋯
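The rapid factorial decay of the terms makes the truncated series very accurate for moderate x. A small Python sketch comparing a partial sum against sin(x)/x directly:

```python
import math

def sinc_taylor(x, terms=10):
    """Partial Taylor sum of sin(x)/x: sum of (-1)^n x^(2n) / (2n+1)!."""
    return sum((-1) ** n * x ** (2 * n) / math.factorial(2 * n + 1)
               for n in range(terms))

x = 1.3
print(sinc_taylor(x), math.sin(x) / x)  # the two values agree closely
print(sinc_taylor(0.0))                 # 1.0, the value at the removable singularity
```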
Euler famously compared this series to the expansion of the infinite product form to solve the Basel problem.
The product of 1-D sinc functions readily provides a multivariate sinc function for the square Cartesian grid (lattice): sinc_C(x, y) = sinc(x) sinc(y), whose Fourier transform is the indicator function of a square in the frequency space (i.e., the brick wall defined in 2-D space). The sinc function for a non-Cartesian lattice (e.g., hexagonal lattice) is a function whose Fourier transform is the indicator function of the Brillouin zone of that lattice. For example, the sinc function for the hexagonal lattice is a function whose Fourier transform is the indicator function of the unit hexagon in the frequency space. For a non-Cartesian lattice this function cannot be obtained by a simple tensor product. However, the explicit formula for the sinc function for the hexagonal, body-centered cubic, face-centered cubic and other higher-dimensional lattices can be explicitly derived[13] using the geometric properties of Brillouin zones and their connection to zonotopes.
For example, a hexagonal lattice can be generated by the (integer) linear span of the vectors

u₁ = (1/2, √3/2) and u₂ = (1/2, −√3/2).

Denoting

ξ₁ = (2/3)u₁, ξ₂ = (2/3)u₂, ξ₃ = −(2/3)(u₁ + u₂), x = (x, y),

one can derive[13] the sinc function for this hexagonal lattice as

sinc_H(x) = (1/3) ( cos(π ξ₁·x) sinc(ξ₂·x) sinc(ξ₃·x)
                  + cos(π ξ₂·x) sinc(ξ₃·x) sinc(ξ₁·x)
                  + cos(π ξ₃·x) sinc(ξ₁·x) sinc(ξ₂·x) ).
This construction can be used to design Lanczos windows for general multidimensional lattices.[13]
Some authors, by analogy, define the hyperbolic sine cardinal function.[14][15][16]
|
https://en.wikipedia.org/wiki/Sinhc_function
|
An and-inverter graph (AIG) is a directed, acyclic graph that represents a structural implementation of the logical functionality of a circuit or network. An AIG consists of two-input nodes representing logical conjunction, terminal nodes labeled with variable names, and edges optionally containing markers indicating logical negation. This representation of a logic function is rarely structurally efficient for large circuits, but is an efficient representation for manipulation of boolean functions. Typically, the abstract graph is represented as a data structure in software.
Conversion from the network of logic gates to AIGs is fast and scalable. It only requires that every gate be expressed in terms of AND gates and inverters. This conversion does not lead to an unpredictable increase in memory use and runtime. This makes the AIG an efficient representation in comparison with either the binary decision diagram (BDD) or the "sum-of-product" (ΣoΠ) form,[citation needed] that is, the canonical form in Boolean algebra known as the disjunctive normal form (DNF). The BDD and DNF may also be viewed as circuits, but they involve formal constraints that deprive them of scalability. For example, ΣoΠs are circuits with at most two levels, while BDDs are canonical, that is, they require that input variables be evaluated in the same order on all paths.
Circuits composed of simple gates, including AIGs, are an "ancient" research topic. Interest in AIGs started with Alan Turing's seminal 1948 paper[1] on neural networks, in which he described a randomized trainable network of NAND gates. Interest continued through the late 1950s[2] and into the 1970s, when various local transformations were developed. These transformations were implemented in several logic synthesis and verification systems, such as those of Darringer et al.[3] and Smith et al.,[4] which reduce circuits to improve area and delay during synthesis, or to speed up formal equivalence checking. Several important techniques were discovered early at IBM, such as combining and reusing multi-input logic expressions and subexpressions, now known as structural hashing.
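Structural hashing is simple to sketch. The following toy Python illustration (an assumption for exposition, not any tool's actual implementation) encodes edges as integer literals whose least significant bit is the negation marker, and reuses an existing AND node whenever the same pair of fanins is requested again:

```python
# Minimal AIG sketch: nodes are 2-input ANDs, edges carry an optional
# negation marker, and structural hashing reuses identical AND nodes.
class AIG:
    def __init__(self):
        self.strash = {}          # (left, right) -> node id, for reuse
        self.nodes = []           # node id -> (left_lit, right_lit) or input name

    def literal(self, node, negated=False):
        return node * 2 + (1 if negated else 0)   # LSB is the negation marker

    def add_input(self, name):
        self.nodes.append(name)
        return self.literal(len(self.nodes) - 1)

    def add_and(self, a, b):
        key = (min(a, b), max(a, b))              # AND is commutative
        if key not in self.strash:                # structural hashing: reuse
            self.nodes.append(key)
            self.strash[key] = len(self.nodes) - 1
        return self.literal(self.strash[key])

    def eval(self, lit, values):
        node = self.nodes[lit // 2]
        v = (values[node] if isinstance(node, str)
             else self.eval(node[0], values) and self.eval(node[1], values))
        return (not v) if lit % 2 else v

g = AIG()
x, y = g.add_input("x"), g.add_input("y")
# OR via De Morgan: x OR y = NOT(NOT x AND NOT y)
or_xy = g.add_and(x ^ 1, y ^ 1) ^ 1
print(g.eval(or_xy, {"x": False, "y": True}))   # True
```

Requesting the same AND of the same fanins a second time returns the existing node's literal, which is exactly the sharing effect structural hashing provides.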
Recently there has been a renewed interest in AIGs as a functional representation for a variety of tasks in synthesis and verification. That is because representations popular in the 1990s (such as BDDs) have reached their limits of scalability in many of their applications.[citation needed] Another important development was the recent emergence of much more efficient boolean satisfiability (SAT) solvers. When coupled with AIGs as the circuit representation, they lead to remarkable speedups in solving a wide variety of boolean problems.[citation needed]
AIGs have found successful use in diverse EDA applications. A well-tuned combination of AIGs and boolean satisfiability has made an impact on formal verification, including both model checking and equivalence checking.[5] Other recent work shows that efficient circuit compression techniques can be developed using AIGs.[6] There is a growing understanding that logic and physical synthesis problems can be solved using simulation and boolean satisfiability to compute functional properties (such as symmetries)[7] and node flexibilities (such as don't-care terms, resubstitutions, and SPFDs).[8][9][10] Mishchenko et al. show that AIGs are a promising unifying representation, which can bridge logic synthesis, technology mapping, physical synthesis, and formal verification. This is, to a large extent, due to the simple and uniform structure of AIGs, which allows rewriting, simulation, mapping, placement, and verification to share the same data structure.
In addition to combinational logic, AIGs have also been applied to sequential logic and sequential transformations. Specifically, the method of structural hashing was extended to work for AIGs with memory elements (such as D-type flip-flops with an initial state, which, in general, can be unknown), resulting in a data structure that is specifically tailored for applications related to retiming.[11]
Ongoing research includes implementing a modern logic synthesis system completely based on AIGs. The prototype called ABC features an AIG package, several AIG-based synthesis and equivalence-checking techniques, as well as an experimental implementation of sequential synthesis. One such technique combines technology mapping and retiming in a single optimization step. These optimizations can be implemented using networks composed of arbitrary gates, but the use of AIGs makes them more scalable and easier to implement.
|
https://en.wikipedia.org/wiki/And-inverter_graph
|
In finance, the capital asset pricing model (CAPM) is a model used to determine a theoretically appropriate required rate of return of an asset, to make decisions about adding assets to a well-diversified portfolio.
The model takes into account the asset's sensitivity to non-diversifiable risk (also known as systematic risk or market risk), often represented by the quantity beta (β) in the financial industry, as well as the expected return of the market and the expected return of a theoretical risk-free asset. CAPM assumes a particular form of utility functions (in which only the first and second moments matter, that is, risk is measured by variance, for example a quadratic utility) or alternatively asset returns whose probability distributions are completely described by the first two moments (for example, the normal distribution), and zero transaction costs (necessary for diversification to get rid of all idiosyncratic risk). Under these conditions, CAPM shows that the cost of equity capital is determined only by beta.[1][2] Despite failing numerous empirical tests[3] and the existence of more modern approaches to asset pricing and portfolio selection (such as arbitrage pricing theory and Merton's portfolio problem), the CAPM still remains popular due to its simplicity and utility in a variety of situations.
The CAPM was introduced by Jack Treynor (1961, 1962),[4] William F. Sharpe (1964), John Lintner (1965a,b) and Jan Mossin (1966) independently, building on the earlier work of Harry Markowitz on diversification and modern portfolio theory. Sharpe, Markowitz and Merton Miller jointly received the 1990 Nobel Memorial Prize in Economics for this contribution to the field of financial economics. Fischer Black (1972) developed another version of CAPM, called Black CAPM or zero-beta CAPM, that does not assume the existence of a riskless asset. This version was more robust against empirical testing and was influential in the widespread adoption of the CAPM.
The CAPM is a model for pricing an individual security or portfolio. For individual securities, we make use of the security market line (SML) and its relation to expected return and systematic risk (beta) to show how the market must price individual securities in relation to their security risk class. The SML enables us to calculate the reward-to-risk ratio for any security in relation to that of the overall market. Therefore, when the expected rate of return for any security is deflated by its beta coefficient, the reward-to-risk ratio for any individual security in the market is equal to the market reward-to-risk ratio, thus:

(E(R_i) − R_f) / β_i = E(R_m) − R_f
The market reward-to-risk ratio is effectively the market risk premium; by rearranging the above equation and solving for E(Ri){\displaystyle E(R_{i})}, we obtain the capital asset pricing model (CAPM):

E(Ri)=Rf+βi(E(Rm)−Rf){\displaystyle E(R_{i})=R_{f}+\beta _{i}{\big (}E(R_{m})-R_{f}{\big )}}
where E(Ri){\displaystyle E(R_{i})} is the expected return on the capital asset, Rf{\displaystyle R_{f}} is the risk-free rate of interest, βi{\displaystyle \beta _{i}} is the sensitivity of the expected excess asset returns to the expected excess market returns, and E(Rm){\displaystyle E(R_{m})} is the expected return of the market.
Restated in terms of risk premium, we find that:

E(Ri)−Rf=βi(E(Rm)−Rf){\displaystyle E(R_{i})-R_{f}=\beta _{i}{\big (}E(R_{m})-R_{f}{\big )}}
which states that theindividual risk premiumequals themarket premiumtimesβ.
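Numerically, the CAPM relationship is a one-liner; the sketch below uses hypothetical rates (3% risk-free, 8% expected market return) rather than real data:

```python
def capm_expected_return(risk_free: float, market_return: float, beta: float) -> float:
    """E(Ri) = Rf + beta * (E(Rm) - Rf): the risk premium scales with beta."""
    return risk_free + beta * (market_return - risk_free)

# A stock with beta 1.2 against a 5% market risk premium:
er = capm_expected_return(risk_free=0.03, market_return=0.08, beta=1.2)
print(round(er, 4))  # 0.09, i.e. a 9% required return
```

A beta of zero recovers the risk-free rate, and a beta of one recovers the market return, as the formula requires.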
Note 1: the expected market rate of return is usually estimated by measuring the arithmetic average of the historical returns on a market portfolio (e.g. S&P 500).
Note 2: the risk free rate of return used for determining the risk premium is usually the arithmetic average of historical risk free rates of return and not the current risk free rate of return.
For the full derivation seeModern portfolio theory.
There has also been research into a mean-reverting beta often referred to as the adjusted beta, as well as the consumption beta. However, in empirical tests the traditional CAPM has been found to do as well as or outperform the modified beta models.[citation needed]
TheSMLgraphs the results from the capital asset pricing model (CAPM) formula. Thex-axis represents the risk (beta), and they-axis represents the expected return. The market risk premium is determined from the slope of the SML.
The relationship between β and required return is plotted on the security market line (SML), which shows expected return as a function of β. The intercept is the nominal risk-free rate available for the market, while the slope is the market premium, E(Rm)−Rf. The security market line can be regarded as representing a single-factor model of the asset price, where β is the exposure to changes in the value of the market. The equation of the SML is thus:

E(Ri)=Rf+βi(E(Rm)−Rf){\displaystyle E(R_{i})=R_{f}+\beta _{i}{\big (}E(R_{m})-R_{f}{\big )}}
It is a useful tool for determining if an asset being considered for a portfolio offers a reasonable expected return for its risk. Individual securities are plotted on the SML graph. If the security's expected return versus risk is plotted above the SML, it is undervalued since the investor can expect a greater return for the inherent risk. And a security plotted below the SML is overvalued since the investor would be accepting less return for the amount of risk assumed.
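This over/under-valuation test can be sketched as a small helper; the inputs below are hypothetical, and the tolerance parameter is an implementation detail rather than part of the model:

```python
def sml_classify(expected_return, beta, risk_free, market_return, tol=1e-9):
    """Compare a security's expected return with the return the SML requires."""
    required = risk_free + beta * (market_return - risk_free)
    if expected_return > required + tol:
        return "undervalued"   # plots above the SML
    if expected_return < required - tol:
        return "overvalued"    # plots below the SML
    return "fairly priced"     # lies on the SML

# SML requires 0.03 + 1.2 * 0.05 = 9% for beta 1.2; a 12% expectation plots above it:
print(sml_classify(0.12, 1.2, 0.03, 0.08))  # undervalued
```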
Once the expected/required rate of returnE(Ri){\displaystyle E(R_{i})}is calculated using CAPM, we can compare this required rate of return to the asset's estimated rate of return over a specific investment horizon to determine whether it would be an appropriate investment. To make this comparison, you need an independent estimate of the return outlook for the security based on eitherfundamental or technical analysis techniques, including P/E, M/B etc.
Assuming that the CAPM is correct, an asset is correctly priced when its estimated price is the same as the present value of future cash flows of the asset, discounted at the rate suggested by CAPM. If the estimated price is higher than the CAPM valuation, then the asset is overvalued (and undervalued when the estimated price is below the CAPM valuation).[5]When the asset does not lie on the SML, this could also suggest mis-pricing. Since the expected return of the asset at timet{\displaystyle t}isE(Rt)=E(Pt+1)−PtPt{\displaystyle E(R_{t})={\frac {E(P_{t+1})-P_{t}}{P_{t}}}}, a higher expected return than what CAPM suggests indicates thatPt{\displaystyle P_{t}}is too low (the asset is currently undervalued), assuming that at timet+1{\displaystyle t+1}the asset returns to the CAPM suggested price.[6]
The asset price P0{\displaystyle P_{0}} using CAPM, sometimes called the certainty equivalent pricing formula, is a linear relationship given by

P0=11+Rf[E(PT)−Cov(PT,RM)(E(RM)−Rf)Var(RM)]{\displaystyle P_{0}={\frac {1}{1+R_{f}}}\left[E(P_{T})-{\frac {\operatorname {Cov} (P_{T},R_{M}){\big (}E(R_{M})-R_{f}{\big )}}{\operatorname {Var} (R_{M})}}\right]}
wherePT{\displaystyle P_{T}}is the future price of the asset or portfolio.[5]
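Assuming the standard certainty-equivalent form, P0 = (E(PT) − Cov(PT, RM)(E(RM) − Rf)/Var(RM)) / (1 + Rf), the price can be sketched as follows; all inputs are hypothetical illustration values:

```python
def certainty_equivalent_price(e_pt, cov_pt_rm, var_rm, e_rm, rf):
    """P0 = (E(PT) - Cov(PT,RM)*(E(RM)-Rf)/Var(RM)) / (1 + Rf)."""
    risk_adjustment = cov_pt_rm * (e_rm - rf) / var_rm
    return (e_pt - risk_adjustment) / (1.0 + rf)

# Hypothetical asset: expected payoff 110, covariance with the market 0.8,
# market variance 0.04, expected market return 8%, risk-free rate 3%.
p0 = certainty_equivalent_price(110.0, 0.8, 0.04, 0.08, 0.03)
print(round(p0, 2))  # 105.83
```

The expected payoff is first reduced by a covariance-based risk charge to its certainty equivalent, then discounted at the risk-free rate.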
The CAPM returns the asset-appropriaterequired returnor discount rate—i.e. the rate at which future cash flows produced by the asset should be discounted given that asset's relative riskiness.
Betas exceeding one signify more than average "riskiness"; betas below one indicate lower than average. Thus, a more risky stock will have a higher beta and will be discounted at a higher rate; less sensitive stocks will have lower betas and be discounted at a lower rate. Given the accepted concaveutility function, the CAPM is consistent with intuition—investors (should) require a higher return for holding a more risky asset.
Since beta reflects asset-specific sensitivity to non-diversifiable, i.e. marketrisk, the market as a whole, by definition, has a beta of one. Stock market indices are frequently used as local proxies for the market—and in that case (by definition) have a beta of one. An investor in a large, diversified portfolio (such as amutual funddesigned to track the total market), therefore, expects performance in line with the market.
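Beta itself is usually estimated as Cov(Ri, Rm)/Var(Rm) from historical return series; a minimal sketch with made-up returns (not real data):

```python
# Estimating beta from return series: beta = Cov(Ri, Rm) / Var(Rm).
asset = [0.02, -0.01, 0.03, 0.015, -0.005]     # illustrative asset returns
market = [0.015, -0.008, 0.02, 0.01, -0.002]   # illustrative market returns

n = len(asset)
mean_a = sum(asset) / n
mean_m = sum(market) / n
cov = sum((a - mean_a) * (m - mean_m) for a, m in zip(asset, market)) / n
var_m = sum((m - mean_m) ** 2 for m in market) / n
beta = cov / var_m
print(round(beta, 3))
```

A beta above one, as here, marks the asset as more sensitive than the market; the market series regressed on itself would give exactly one.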
The risk of aportfoliocomprisessystematic risk, also known as undiversifiable risk, andunsystematic riskwhich is also known as idiosyncratic risk or diversifiable risk. Systematic risk refers to the risk common to all securities—i.e.market risk. Unsystematic risk is the risk associated with individual assets. Unsystematic risk can bediversifiedaway to smaller levels by including a greater number of assets in the portfolio (specific risks "average out"). The same is not possible for systematic risk within one market. Depending on the market, a portfolio of approximately 30–40 securities in developed markets such as the UK or US will render the portfolio sufficiently diversified such that risk exposure is limited to systematic risk only. This number may vary depending on the way securities are weighted in a portfolio which alters the overall risk contribution of each security. For example, market cap weighting means that securities of companies with larger market capitalization will take up a larger portion of the portfolio, making it effectively less diversified. In developing markets a larger number of securities is required for diversification, due to the higher asset volatilities.
A rational investor should not take on any diversifiable risk, as only non-diversifiable risks are rewarded within the scope of this model. Therefore, the requiredreturnon an asset, that is, the return that compensates for risk taken, must be linked to its riskiness in a portfolio context—i.e. its contribution to overall portfolio riskiness—as opposed to its "stand alone risk". In the CAPM context, portfolio risk is represented by highervariancei.e. less predictability. In other words, the beta of the portfolio is the defining factor in rewarding the systematic exposure taken by an investor.
The CAPM assumes that the risk-return profile of a portfolio can be optimized: an optimal portfolio displays the lowest possible level of risk for its level of return. Additionally, since each additional asset introduced into a portfolio further diversifies the portfolio, the optimal portfolio must comprise every asset (assuming no trading costs), with each asset value-weighted to achieve the above (assuming that any asset is infinitely divisible). All such optimal portfolios, i.e., one for each level of return, together comprise the efficient frontier.
Because the unsystematic risk isdiversifiable, the total risk of a portfolio can be viewed asbeta.
All investors:[7] aim to maximize economic utilities (asset quantities are given and fixed); are rational and risk-averse; are broadly diversified across a range of investments; are price takers who cannot influence prices; can lend and borrow unlimited amounts at the risk-free rate of interest; trade without transaction or taxation costs; deal in securities that are all perfectly divisible and liquid; have homogeneous expectations; and assume all information is available to all investors at the same time.
In their 2004 review, economistsEugene FamaandKenneth Frenchargue that "the failure of the CAPM in empirical tests implies that most applications of the model are invalid".[3]
Roger Dayala[35] goes a step further and claims the CAPM is fundamentally flawed even within its own narrow assumption set, arguing that the CAPM is either circular or irrational. The circularity refers to the price of total risk being a function of the price of covariance risk only (and vice versa). The irrationality refers to the CAPM's proclaimed "revision of prices" resulting in identical discount rates for the (lower) amount of covariance risk as for the (higher) amount of total risk (i.e. identical discount rates for different amounts of risk). Dayala's findings have later been supported by Lai & Stohs.[36]
|
https://en.wikipedia.org/wiki/Capital_asset_pricing_model
|
Security breach notification laws or data breach notification laws are laws that require individuals or entities affected by a data breach (unauthorized access to data)[1] to notify their customers and other parties about the breach, and to take specific steps to remedy the situation as prescribed by legislation. Data breach notification laws have two main goals. The first goal is to allow individuals a chance to mitigate risks from data breaches. The second goal is to give companies an incentive to strengthen data security.[2] Together, these goals work to minimize consumer harm from data breaches, including impersonation, fraud, and identity theft.[3]
Such laws have been enacted piecemeal in all 50 U.S. states since 2002; currently, all 50 states have enacted some form of data breach notification law.[4] There is no federal data breach notification law, despite previous legislative attempts.[5] These laws were enacted in response to an escalating number of breaches of consumer databases containing personally identifiable information.[6] Similarly, multiple other jurisdictions have added data breach notification laws to combat the increasing occurrence of data breaches, such as the European Union's General Data Protection Regulation (GDPR) and Australia's Privacy Amendment (Notifiable Data Breaches) Act 2017 (Cth).[7]
The rise in data breaches conducted by both states and individuals is evident and alarming, as the number of reported data breaches has increased from 421 in 2011 to 1,091 in 2016 and 1,579 in 2017, according to the Identity Theft Resource Center (ITRC).[8][9] Data breaches have also affected millions of people and gained increasing public awareness through incidents such as the October 2017 Equifax breach, which exposed almost 146 million individuals' personal information.[10]
In 2018, Australia's Privacy Amendment (Notifiable Data Breaches) Act 2017 went into effect.[11] It amended the Privacy Act 1988 (Cth), establishing a notification system for data breaches involving personal information leading to harm. Entities with existing personal information security obligations under the Australian Privacy Act are now required to notify the Office of the Australian Information Commissioner (OAIC) and affected individuals of all "eligible data breaches."[12] The amendment followed large data breaches experienced in Australia, such as the Yahoo hack in 2013 involving thousands of government officials, and the data breach at the NGO Australian Red Cross, which released 550,000 blood donors' personal information.
Criticisms of the data breach notification scheme include the unjustified exemption of certain entities, such as small businesses, and the fact that the Privacy Commissioner is not required to publish data breaches in one permanent place that could serve as data for future research. In addition, notification obligations are not consistent at the state level.[13]
In mid-2017, China adopted a new Cybersecurity Law, which included data breach notification requirements.[13]
In 1995, the EU passed the Data Protection Directive (DPD), which has since been replaced by the 2016 General Data Protection Regulation (GDPR), a comprehensive data protection law that includes data breach notification requirements. The GDPR offers stronger data protection rules, broader data breach notification requirements, and new rights such as the right to data portability. However, certain areas of the data breach notification regime are supplemented by other data security laws.[13]
For example, the European Union implemented a breach notification law in the Directive on Privacy and Electronic Communications (E-Privacy Directive) in 2009, specific to personal data held by telecoms and Internet service providers.[14][15] This law contains some of the notification obligations for data breaches.[13]
The traffic data of subscribers who use voice and data services via a network company may be retained by the company only for operational reasons, and must be deleted once it is no longer necessary, in order to avoid breaches.

However, traffic data is necessary for the creation and processing of subscriber billing. Under European Union law (Article 6, paragraphs 1–6[16]), such data may be used only up to the end of the period during which the bill may lawfully be pursued. Regarding the marketing use of traffic data for the sale of additional chargeable services, the data may be used by the company only if the subscriber has given consent (which may be withdrawn at any time). The service provider must also inform the subscriber or user of the types of traffic data processed and of the duration of such processing, in line with the above conditions. Processing of traffic data must be restricted to persons acting under the authority of providers of public communications networks and publicly available electronic communications services who handle billing or traffic management, customer enquiries, fraud detection, marketing of electronic communications services, or provision of a value-added service, and must be restricted to what is necessary for the purposes of such activities.
Data breach notification obligations are also included in the new Directive on security of network and information systems (NIS Directive). This creates notification requirements for operators of essential services and digital service providers, which must immediately notify the authorities or computer security incident response teams (CSIRTs) if they experience a significant data breach.
Similar to US concerns that a state-by-state approach creates increased costs and difficulty in complying with all the state laws, the EU's various breach notification requirements spread across different laws create concern.[13]
In 2015, Japan amended the Act on the Protection of Personal Information (APPI) to combat massive data leaks, most notably the 2014 Benesse Corporation leak, in which nearly 29 million pieces of private customer information were leaked and sold. The amendment introduced new penal sanctions on illegal transactions; however, there is no specific provision dealing with data breach notification in the APPI. Instead, the Policies Concerning the Protection of Personal Information, issued in accordance with the APPI, encourage business operators to disclose data breaches voluntarily.[17]
Kaori Ishii and Taro Komukai have theorized that Japanese culture offers a potential explanation for why no specific data breach notification law is needed to encourage companies to strengthen data security: the Japanese general public, and the mass media in particular, condemn leaks. Consequently, data leaks quickly result in lost customer trust, brand value, and ultimately profits. For example, after a 2004 data leak, Softbank swiftly lost 107 billion yen, and Benesse Corporation lost 940,000 customers after its data leak. This has resulted in compliance with disclosing data leaks in accordance with the policy.[17]
While it is difficult to objectively prove that Japanese culture renders specific data breach notification laws unnecessary, what has been shown is that companies that experience a data breach suffer both financial and reputational harm.[18][19]
New Zealand’s Privacy Act 2020 came into force on December 1, 2020, replacing the 1993 act. The act makes notification of privacy breaches mandatory.[20]Organisations receiving and collecting data will now have to report any privacy breach they believe has caused, or is likely to cause, serious harm.
Data Breach Notification Laws have been enacted in all 50 states, theDistrict of Columbia,Guam,Puerto Ricoand theVirgin Islands.[6]As of August 2021, attempts to pass a federal data breach notification law have been unsuccessful.[21]
The first such law, the California data security breach notification law,[22] was enacted in 2002 and became effective on July 1, 2003.[23] The bill was enacted in reaction to the fear of identity theft and fraud.[8][24] As related in the bill statement, the law requires "a state agency, or a person or business that conducts business in California, that owns or licenses computerized data that includes personal information, as defined, to disclose in specified ways, any breach of the security of the data, as defined, to any resident of California whose unencrypted personal information was, or is reasonably believed to have been, acquired by an unauthorized person." In addition, the law permits delayed notification "if a law enforcement agency determines that it would impede a criminal investigation." The law also requires any entity that licenses such information to notify the owner or licensee of the information of any breach of the security of the data.
In general, most state laws follow the basic tenets of California's original law: companies must immediately disclose a data breach to customers, usually in writing.[25] California has since broadened its law to include compromised medical and health insurance information.[26] Where bills differ most is at what level the breach must be reported to the state attorney general (usually when it affects 500 or 1,000 individuals or more). Some states, like California, publish these data breach notifications on their oag.gov websites. Breaches must be reported if "sensitive personally identifying information has been acquired or is reasonably believed to have been acquired by an unauthorized person, and is reasonably likely to cause substantial harm to the individuals to whom the information relates."[27] This leaves room for some interpretation (will it cause substantial harm?); but breaches of encrypted data need not be reported, nor must a breach be reported if data has been obtained or viewed by unauthorized individuals, as long as there is no reason to believe they will use the data in harmful ways.
TheNational Conference of State Legislaturesmaintains a list of enacted and proposed security breach notification laws.[6]Alabama and South Dakota enacted their data breach notification laws in 2018, making them the final states to do so.
Some of the state differences in data breach notification laws include thresholds of harm suffered from data breaches, the need to notify certain law enforcement or consumer credit agencies, broader definitions of personal information, and differences in penalties for non-compliance.[13]
As of August 2021, there is no federal data breach notification law. The first proposed federal data breach notification law was introduced to Congress in 2003, but it never exited the Judiciary Committee.[5]Similarly, a number of bills that would establish a national standard for data security breach notification have been introduced in theU.S. Congress, but none passed in the109th Congress.[28]In fact, in 2007, three federal data breach notification laws were proposed, but none passed Congress.[5]In his 2015 State of the Union speech,President Obamaproposed new legislation to create a national data breach standard that would establish a 30-day notification requirement from the discovery of a breach.[29]This led to President Obama's 2015 Personal Data Notification & Protection Act (PDNPA) proposal. This would have created federal notification guidelines and standards, but it never came out of committee.[5]
Chlotia Garrison and Clovia Hamilton theorized that a potential reason for the inability to pass a federal law on data breach notifications is states' rights. As of now, all 50 states have varying data breach notification laws. Some are restrictive, while others are broad.[5]While there is not a comprehensive federal law on data breach notifications, some federal laws require notifications of data breaches in certain circumstances. Some notable examples include: theFederal Trade Commission Act(FTC Act), theFinancial Services Modernization Act (Gramm-Leach-Bliley Act), and theHealth Insurance Portability and Accountability Act(HIPAA).[13]
Most scholars who advocate for federal data breach notification laws, like Angela Daly, emphasize the problems of having varying forms of data breach notification laws: companies are forced to comply with multiple state data breach notification laws, which increases both the difficulty and the cost of compliance. In addition, scholars have argued that a state-by-state approach has created the problems of uncompensated victims and inadequate incentives for companies and governments to invest in data security.[13]
Advocates of a state-by-state approach to data breach notification laws emphasize increased efficiency, increased incentives to have the local governments increase data security, limited federal funding available due to multiple projects, and lastly states are able to quickly adapt and pass laws to constantly evolving data breach technologies.[10]In 2018, a majority ofstate attorneys generalopposed a proposed federal data breach notification law that wouldpreemptstate laws.[30]
Data breaches occur for reasons ranging from technical issues, like bad code, to economic issues that cause competing firms not to cooperate with each other to tackle data security.[31] In response, data breach notification laws attempt to prevent harm to companies and the public.
A serious harm of data breaches is identity theft. Identity theft can harm individuals when their personal data is stolen and used by another party to create financial harm, such as withdrawing their money; non-financial harm, such as fraudulently claiming their health benefits; or harm from impersonating them to commit crimes.[32] Based on data collected from 2002 to 2009 from the U.S. Federal Trade Commission, the use of data breach notification has helped decrease identity theft by 6.1 percent.[33]
Overall, data breach notifications lead to decreasing market value, evident in publicly traded companies experiencing a decrease in market valuation.[34][35] Other costs include loss of consumer confidence and trust in the company, loss of business, decreased productivity, and exposure to third-party liability.[35] Notably, the type of data leaked in the breach has varying economic impact: a breach that leaks sensitive data carries harsher economic repercussions.[36]
Most federal data breach lawsuits share certain characteristics: a plaintiff seeking relief for identity theft losses, emotional distress, future losses, and increased risk of future harm; litigation that mostly takes the form of private class actions; defendants that are usually large firms or businesses; a mix of common law and statutory causes of action; and, lastly, most cases settling or being dismissed.[37]
|
https://en.wikipedia.org/wiki/Security_breach_notification_laws
|
InEnglish legalproceedings, aconfidentiality club(also known asconfidentiality ring)[1]is an agreement occasionally reached by parties to alitigationto reduce the risk ofconfidentialdocuments being used outside the litigation. The agreement typically provides that only specified persons can access some documents. Setting up a confidentiality club "requires some degree of cooperation between the parties".[2]Confidentiality rings or clubs were described in 2012 as being increasingly common;[3]the case report onRoche Diagnostics Ltd. vMid Yorkshire Hospitals NHS Trust, apublic procurementdispute, also notes that they are "common in cases of this kind", and allow for specific disclosure of documents without causing the "difficulty relating to confidentiality" which would otherwise arise.[4]
|
https://en.wikipedia.org/wiki/Confidentiality_club
|
Inalgebra, atransformation semigroup(orcomposition semigroup) is a collection oftransformations(functionsfrom a set to itself) that isclosedunderfunction composition. If it includes theidentity function, it is amonoid, called atransformation(orcomposition)monoid. This is thesemigroupanalogue of apermutation group.
A transformation semigroup of a set has a tautologicalsemigroup actionon that set. Such actions are characterized by being faithful, i.e., if two elements of the semigroup have the same action, then they are equal.
An analogue ofCayley's theoremshows that any semigroup can be realized as a transformation semigroup of some set.
Inautomata theory, some authors use the termtransformation semigroupto refer to a semigroupacting faithfullyon a set of "states" different from the semigroup's base set.[1]There isa correspondence between the two notions.
Atransformation semigroupis a pair (X,S), whereXis a set andSis a semigroup of transformations ofX. Here atransformationofXis just afunctionfrom a subset ofXtoX, not necessarily invertible, and thereforeSis simply a set of transformations ofXwhich isclosedundercomposition of functions. The set of allpartial functionson a given base set,X, forms aregular semigroupcalled the semigroup of all partial transformations (or thepartial transformation semigrouponX), typically denoted byPTX{\displaystyle {\mathcal {PT}}_{X}}.[2]
IfSincludes the identity transformation ofX, then it is called atransformation monoid. Any transformation semigroupSdetermines a transformation monoidMby taking the union ofSwith the identity transformation. A transformation monoid whose elements are invertible is apermutation group.
The set of all transformations ofXis a transformation monoid called thefull transformation monoid(orsemigroup) ofX. It is also called thesymmetric semigroupofXand is denoted byTX. Thus a transformation semigroup (or monoid) is just asubsemigroup(orsubmonoid) of the full transformation monoid ofX.
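As a concrete check, the full transformation monoid of a three-element set can be enumerated directly; it has 3³ = 27 elements, contains the identity, and is closed under composition:

```python
from itertools import product

# The full transformation monoid T_X on X = {0, 1, 2}: every function X -> X,
# represented as a tuple f where f[i] is the image of i.
X = range(3)
T = [f for f in product(X, repeat=3)]   # all 3^3 = 27 transformations

def compose(f, g):
    """(f . g)(x) = f(g(x)) -- composition of transformations of X."""
    return tuple(f[g[x]] for x in X)

identity = tuple(X)                     # (0, 1, 2)
assert identity in T                    # so T_X is a monoid, not just a semigroup
assert all(compose(f, g) in T for f in T for g in T)   # closed under composition
print(len(T))  # 27
```

Any subset of these 27 tuples that is closed under `compose` is a transformation semigroup of X in the sense defined above.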
If (X,S) is a transformation semigroup thenXcan be made into asemigroup actionofSby evaluation:

s⋅x=s(x)fors∈S,x∈X.{\displaystyle s\cdot x=s(x)\quad {\text{for }}s\in S,\ x\in X.}
This is a monoid action ifSis a transformation monoid.
The characteristic feature of transformation semigroups, as actions, is that they arefaithful, i.e., if

s⋅x=t⋅xfor allx∈X,{\displaystyle s\cdot x=t\cdot x\quad {\text{for all }}x\in X,}

thens=t. Conversely if a semigroupSacts on a setXbyT(s,x) =s•xthen we can define, fors∈S, a transformationTsofXby

Ts(x)=s⋅x.{\displaystyle T_{s}(x)=s\cdot x.}
The map sendingstoTsis injective if and only if (X,T) is faithful, in which case the image of this map is a transformation semigroup isomorphic toS.
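A small sketch of a non-faithful action and the transformation semigroup it induces; the saturating-addition action below is an illustrative choice, not an example from the text:

```python
# The additive semigroup of nonnegative integers acting on X = (0, 1, 2) by
# saturating addition s . x = min(s + x, 2).  The map s -> T_s collapses all
# s >= 2 to the same transformation, so this action is NOT faithful; its image,
# however, is a genuine transformation semigroup on X.
X = (0, 1, 2)

def T(s):
    """The transformation of X induced by the semigroup element s."""
    return tuple(min(s + x, 2) for x in X)

def compose(f, g):
    return tuple(f[g[x]] for x in X)

# Action law check: T(s + t) == T(s) composed with T(t), on sample elements.
for s in range(4):
    for t in range(4):
        assert T(s + t) == compose(T(s), T(t))

print(sorted({T(s) for s in range(10)}))
# [(0, 1, 2), (1, 2, 2), (2, 2, 2)] -- only three distinct transformations survive
```

Because T(2) == T(3) while 2 ≠ 3, the map s ↦ T_s is not injective here, which is exactly the failure of faithfulness described above.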
Ingroup theory,Cayley's theoremasserts that any groupGis isomorphic to a subgroup of thesymmetric groupofG(regarded as a set), so thatGis apermutation group. This theorem generalizes straightforwardly to monoids: any monoidMis a transformation monoid of its underlying set, via the action given by left (or right) multiplication. This action is faithful because ifax=bxfor allxinM, then by takingxequal to the identity element, we havea=b.
For a semigroupSwithout a (left or right) identity element, we takeXto be the underlying set of themonoid corresponding toSto realiseSas a transformation semigroup ofX. In particular any finite semigroup can be represented as asubsemigroupof transformations of a setXwith |X| ≤ |S| + 1, and ifSis a monoid, we have the sharper bound |X| ≤ |S|, as in the case offinite groups.[3]: 21
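The Cayley-style representation by left multiplication can be illustrated with the multiplicative monoid Z/4 (an arbitrary example monoid, chosen for brevity):

```python
# Cayley representation: the monoid (Z/4, *) realized as transformations of its
# own underlying set via left multiplication L_a(x) = a*x mod 4.
M = range(4)

def L(a):
    return tuple((a * x) % 4 for x in M)

def compose(f, g):
    return tuple(f[g[x]] for x in M)

# The representation respects the operation: L(a*b) = L(a) . L(b) ...
for a in M:
    for b in M:
        assert L((a * b) % 4) == compose(L(a), L(b))

# ... and is injective (faithful): evaluating at x = 1, the identity, recovers a.
assert len({L(a) for a in M}) == 4
print(L(3))  # (0, 3, 2, 1)
```

The injectivity argument is exactly the one in the text: if L(a) = L(b), then applying both to the identity element gives a = b.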
Incomputer science, Cayley representations can be applied to improve the asymptotic efficiency of semigroups by reassociating multiple composed multiplications. The action given by left multiplication results in right-associated multiplication, and vice versa for the action given by right multiplication. Despite having the same results for any semigroup, the asymptotic efficiency will differ. Two examples of useful transformation monoids given by an action of left multiplication are the functional variation of thedifference listdata structure, and the monadic Codensity transformation (a Cayley representation of amonad, which is a monoid in a particularmonoidalfunctor category).[4]
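A minimal Python rendering of the difference-list idea, with function composition standing in for list append (the names `dlist`/`dappend` are illustrative, not from any particular library):

```python
# A difference list represents the list xs by the transformation ys -> xs + ys,
# i.e. by its image under the Cayley representation of the list monoid.
# Composing these transformations right-associates the underlying appends, so a
# long chain of appends avoids the cost of repeated left-nested concatenation.
def dlist(xs):
    return lambda ys: xs + ys

def dappend(f, g):
    return lambda ys: f(g(ys))      # function composition = list append

parts = [dlist([i]) for i in range(5)]
combined = parts[0]
for p in parts[1:]:
    combined = dappend(combined, p)

print(combined([]))  # materialize: [0, 1, 2, 3, 4]
```

Applying the composed transformation to the empty list "runs" all the appends at once, in the right-associated order.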
LetMbe a deterministicautomatonwith state spaceSand alphabetA. The words in thefree monoidA∗induce transformations ofSgiving rise to amonoid morphismfromA∗to the full transformation monoidTS. The image of this morphism is the transformation semigroup ofM.[3]: 78
For aregular language, thesyntactic monoidis isomorphic to the transformation monoid of theminimal automatonof the language.[3]: 81
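The transformation monoid of a small automaton can be computed by closing the letter-transformations under composition; the two-state automaton below is a made-up example, with 'a' swapping the states and 'b' sending both to state 0:

```python
# Transformation monoid of a deterministic automaton with states {0, 1} and
# alphabet {a, b}.  Each word induces a transformation of the state set; we
# close the letter transformations under composition, starting from the
# identity (the empty word), to obtain the finite transformation monoid.
states = (0, 1)
delta = {"a": (1, 0), "b": (0, 0)}   # delta[letter][state] = next state

def compose(f, g):
    return tuple(f[g[q]] for q in states)

identity = tuple(states)
monoid = {identity}
frontier = [identity]
while frontier:
    f = frontier.pop()
    for letter in delta:
        h = compose(delta[letter], f)    # extend the word by one letter
        if h not in monoid:
            monoid.add(h)
            frontier.append(h)

print(sorted(monoid))
# [(0, 0), (0, 1), (1, 0), (1, 1)] -- all four transformations of a 2-state set
```

Here the morphism from the free monoid {a,b}* happens to be surjective onto the full transformation monoid of the state set; for larger automata the image is typically a proper submonoid.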
|
https://en.wikipedia.org/wiki/Transformation_semigroup
|
Ordinal priority approach(OPA) is amultiple-criteria decision analysismethod that aids in solving thegroup decision-makingproblems based onpreference relations.
Various methods have been proposed to solve multi-criteria decision-making problems.[1] The basis of methods such as the analytic hierarchy process and the analytic network process is the pairwise comparison matrix.[2] The advantages and disadvantages of the pairwise comparison matrix were discussed by Munier and Hontoria in their book.[3] In recent years, the OPA method was proposed to solve multi-criteria decision-making problems based on ordinal data instead of the pairwise comparison matrix.[4] The OPA method is a major part of Amin Mahmoudi's PhD thesis at Southeast University in China.[4]
This method useslinear programmingapproach to compute the weights of experts, criteria, and alternatives simultaneously.[5]The main reason for usingordinal datain the OPA method is the accessibility and accuracy of theordinal datacompared with exact ratios used ingroup decision-makingproblems involved with humans.[6]
In real-world situations, the experts might not have enough knowledge regarding one alternative or criterion. In this case, the input data of the problem is incomplete, which needs to be incorporated into the linear programming of the OPA. To handle the incomplete input data in the OPA method, the constraints related to the criteria or alternatives should be removed from the OPA linear-programming model.[7]
Various types of datanormalizationmethods have been employed in multi-criteria decision-making methods in recent years. Palczewski and Sałabun showed that using various data normalization methods can change the final ranks of themulti-criteria decision-making methods.[8]Javed and colleagues showed that a multiple-criteria decision-making problem can be solved by avoiding the data normalization.[9]There is no need to normalize thepreference relationsand thus, the OPA method does not requiredata normalization.[10]
The OPA model is alinear programmingmodel, which can be solved using asimplex algorithm. The steps of this method are as follows:[11]
Step 1: Identifying the experts and determining thepreferenceof experts based on their working experience, educational qualification, etc.
Step 2: Identifying the criteria and determining the preference of the criteria by each expert.

Step 3: Identifying the alternatives and determining the preference of the alternatives in each criterion by each expert.
Step 4: Constructing the following linear programming model and solving it by an appropriate optimization software such asLINGO,GAMS,MATLAB, etc.
MaxZS.t.Z≤ri(rj(rk(wijkrk−wijkrk+1)))∀i,jandrkZ≤rirjrmwijkrm∀i,jandrm∑i=1p∑j=1n∑k=1mwijk=1wijk≥0∀i,jandkZ:Unrestrictedinsign{\textstyle {\begin{aligned}&MaxZ\\&S.t.\\&Z\leq r_{i}{\bigg (}r_{j}{\big (}r_{k}(w_{ijk}^{r_{k}}-w_{ijk}^{{r_{k}}+1}){\big )}{\bigg )}\;\;\;\;\forall i,j\;and\;r_{k}\\&Z\leq r_{i}r_{j}r_{m}w_{ijk}^{r_{m}}\;\;\;\forall i,j\;and\;r_{m}\\&\sum _{i=1}^{p}\sum _{j=1}^{n}\sum _{k=1}^{m}w_{ijk}=1\\&w_{ijk}\geq 0\;\;\;\forall i,j\;and\;k\\&Z:Unrestricted\;in\;sign\\\end{aligned}}}
In the above model,ri(i=1,…,p){\displaystyle r_{i}\,(i=1,\ldots ,p)}represents the rank of experti{\displaystyle i},rj(j=1,…,n){\displaystyle r_{j}\,(j=1,\ldots ,n)}represents the rank of criterionj{\displaystyle j},rk(k=1,…,m){\displaystyle r_{k}\,(k=1,\ldots ,m)}represents the rank of alternativek{\displaystyle k}, andwijk{\displaystyle w_{ijk}}represents the weight of alternativek{\displaystyle k}in criterionj{\displaystyle j}by experti{\displaystyle i}.
After solving the OPA linear programming model, the weight of each alternative is calculated by the following equation:
{\displaystyle w_{k}=\sum _{i=1}^{p}\sum _{j=1}^{n}w_{ijk}\quad \forall k}
The weight of each criterion is calculated by the following equation:
{\displaystyle w_{j}=\sum _{i=1}^{p}\sum _{k=1}^{m}w_{ijk}\quad \forall j}
And the weight of each expert is calculated by the following equation:
{\displaystyle w_{i}=\sum _{j=1}^{n}\sum _{k=1}^{m}w_{ijk}\quad \forall i}
Suppose we are investigating the purchase of a house. There are two experts in this decision problem, and two criteria for buying the house: cost (c) and construction quality (q). Three houses (h1, h2, h3) are available for purchase. The first expert (x) has three years of working experience and the second expert (y) has two years of working experience. The structure of the problem is shown in the figure.
Step 1: The first expert (x) has more experience than expert (y), hence x > y.
Step 2: The criteria and their preference are summarized in the following table:
Step 3: The alternatives and their preference are summarized in the following table:
Step 4: The OPA linear programming model is formed based on the input data as follows:
{\displaystyle {\begin{aligned}&\max Z\\&{\text{s.t.}}\\&Z\leq 1\cdot 1\cdot 1\,(w_{xch1}-w_{xch3})\\&Z\leq 1\cdot 1\cdot 2\,(w_{xch3}-w_{xch2})\\&Z\leq 1\cdot 1\cdot 3\,w_{xch2}\\&Z\leq 1\cdot 2\cdot 1\,(w_{xqh2}-w_{xqh1})\\&Z\leq 1\cdot 2\cdot 2\,(w_{xqh1}-w_{xqh3})\\&Z\leq 1\cdot 2\cdot 3\,w_{xqh3}\\&Z\leq 2\cdot 2\cdot 1\,(w_{ych1}-w_{ych2})\\&Z\leq 2\cdot 2\cdot 2\,(w_{ych2}-w_{ych3})\\&Z\leq 2\cdot 2\cdot 3\,w_{ych3}\\&Z\leq 2\cdot 1\cdot 1\,(w_{yqh2}-w_{yqh3})\\&Z\leq 2\cdot 1\cdot 2\,(w_{yqh3}-w_{yqh1})\\&Z\leq 2\cdot 1\cdot 3\,w_{yqh1}\\&w_{xch1}+w_{xch2}+w_{xch3}+w_{xqh1}+w_{xqh2}+w_{xqh3}+w_{ych1}+w_{ych2}+w_{ych3}+w_{yqh1}+w_{yqh2}+w_{yqh3}=1\end{aligned}}}
After solving the above model using optimization software, the weights of experts, criteria and alternatives are obtained as follows:
{\displaystyle {\begin{aligned}&w_{x}=w_{xch1}+w_{xch2}+w_{xch3}+w_{xqh1}+w_{xqh2}+w_{xqh3}=0.666667\\&w_{y}=w_{ych1}+w_{ych2}+w_{ych3}+w_{yqh1}+w_{yqh2}+w_{yqh3}=0.333333\\&w_{c}=w_{xch1}+w_{xch2}+w_{xch3}+w_{ych1}+w_{ych2}+w_{ych3}=0.555556\\&w_{q}=w_{xqh1}+w_{xqh2}+w_{xqh3}+w_{yqh1}+w_{yqh2}+w_{yqh3}=0.444444\\&w_{h1}=w_{xch1}+w_{xqh1}+w_{ych1}+w_{yqh1}=0.425926\\&w_{h2}=w_{xch2}+w_{xqh2}+w_{ych2}+w_{yqh2}=0.351852\\&w_{h3}=w_{xch3}+w_{xqh3}+w_{ych3}+w_{yqh3}=0.222222\end{aligned}}}
Therefore, house 1 (h1) is the best alternative. Moreover, the cost criterion (c) is more important than the construction quality criterion (q), and, based on the experts' weights, expert (x) has a greater impact on the final selection than expert (y).
The applications of the OPA method in various fields of study are summarized as follows:
Agriculture, manufacturing, services
Construction industry
Energy and environment
Healthcare
Information technology
Transportation
Several extensions of the OPA method are listed as follows:
The following non-profit tools are available to solve the MCDM problems using the OPA method:
|
https://en.wikipedia.org/wiki/Ordinal_priority_approach
|
The Athlon XP microprocessor from AMD is a seventh-generation 32-bit CPU targeted at the consumer market.
|
https://en.wikipedia.org/wiki/List_of_AMD_Athlon_XP_processors
|
The Safety-Critical Systems Club (SCSC)[1] is a professional association in the United Kingdom.[2][3] It aims to share knowledge about safety-critical systems, including current and emerging practices in safety engineering, software engineering, and product and process safety standards.[4]
Since it started in 1991, the Club has met its objectives by holding regular one- and two-day seminars, publishing a newsletter three times per year, and running an annual conference – the Safety-critical Systems Symposium (SSS), for which it publishes proceedings.[5] In performing these functions, and in adding tutorials to its programme, the Club has been instrumental in helping to define the requirements for education and training in the safety-critical systems domain.
The SCSC also implements initiatives to improve professionalism in the field of safety-critical systems engineering, and organises various working groups to develop and maintain industry-standard guidance. Notable outputs of these groups include the Data Safety Guidance, Service Assurance Guidance and Safety Assurance Objectives for Autonomous Systems, which have been adopted by UK government organisations such as the NHS,[6] Dstl[7][8] and the Ministry of Defence;[9] and the Goal Structuring Notation (GSN) community standard, which has influenced the development of the OMG's Structured Assurance Case Metamodel standard.[10]
The Safety-Critical Systems Club formally commenced operation on 1 May 1991 as the result of a contract placed by the UK Department of Trade and Industry (DTI) and the Science and Engineering Research Council (SERC).[11][12] A report to the UK Parliamentary and Scientific Committee on the science of safety-critical systems led to the 'SafeIT' programme, which recommended formation of the Club.[13] As part of their safety-critical systems research programme,[14] the DTI and SERC awarded a three-year contract for organising and running the Safety-Critical Systems Club to the Institution of Electrical Engineers,[15] the British Computer Society,[16] and the University of Newcastle upon Tyne, the last of these to implement the organisation.[12] The SCSC became self-sufficient in 1994, based at Newcastle University through the Centre for Software Reliability.[17] Activities included detailed technical work, such as planning and organising events and editing the SCSC newsletter and other publications. From the start, the UK Health and Safety Executive was an active supporter of the Club, and, along with all the other organisations already mentioned, remains so.
It was intended that the Club should include in its ambit both technical and managerial personnel, and that it should facilitate communication among all sections of the safety-critical systems community.
The inaugural seminar, intended to introduce the Club to the safety-critical systems community, took place at UMIST, Manchester, on 11 July 1991 and attracted 256 delegates. The need for such an organisation was perceived by many in the software-engineering and safety-critical systems communities.[18]
Management of the SCSC moved to the University of York in 2016.[18] In 2020 it became an independent community interest company.[4][19]
|
https://en.wikipedia.org/wiki/Safety-Critical_Systems_Club
|
In mathematical queueing theory, Little's law (also result, theorem, lemma, or formula[1][2]) is a theorem by John Little which states that the long-term average number L of customers in a stationary system is equal to the long-term average effective arrival rate λ multiplied by the average time W that a customer spends in the system. Expressed algebraically, the law is L = λW.
The relationship is not influenced by the arrival process distribution, the service distribution, the service order, or practically anything else. In most queuing systems, service time is the bottleneck that creates the queue.[3]
The result applies to any system, and particularly, it applies to systems within systems.[4] For example, in a bank branch, the customer line might be one subsystem, and each of the tellers another subsystem, and Little's result could be applied to each one, as well as to the whole branch. The only requirements are that the system be stable and non-preemptive; this rules out transition states such as initial startup or shutdown.
In some cases it is possible not only to mathematically relate the average number in the system to the average wait, but even to relate the entire probability distribution (and moments) of the number in the system to the wait.[5]
In a 1954 paper, Little's law was assumed true and used without proof.[6][7] The form L = λW was first published by Philip M. Morse, who challenged readers to find a situation where the relationship did not hold.[6][8] Little published his proof of the law in 1961, showing that no such situation existed.[9] Little's proof was followed by a simpler version by Jewell[10] and another by Eilon.[11] Shaler Stidham published a different and more intuitive proof in 1972.[12][13]
Imagine an application that has no easy way to measure response time. If the mean number in the system and the throughput are known, the average response time can be found using Little's law as W = L/λ:
For example: a queue depth meter shows an average of nine jobs waiting to be serviced. Add one for the job being serviced, so there is an average of ten jobs in the system. Another meter shows a mean throughput of 50 per second. The mean response time is then 10 / (50 per second) = 0.2 seconds.
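This calculation is small enough to capture in a helper function; a minimal sketch (the function name is illustrative):

```python
def mean_response_time(jobs_in_system, throughput):
    """W = L / lambda: Little's law solved for the mean time in system."""
    return jobs_in_system / throughput

# Nine queued jobs plus one in service, at a throughput of 50 jobs/second:
print(mean_response_time(9 + 1, 50))  # 0.2 seconds
```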
Imagine a small store with a single counter and an area for browsing, where only one person can be at the counter at a time, and no one leaves without buying something. So the system is:
If the rate at which people enter the store (called the arrival rate) is the rate at which they exit (called the exit rate), the system is stable. By contrast, an arrival rate exceeding an exit rate would represent an unstable system, where the number of waiting customers in the store would gradually increase towards infinity.
Little's law tells us that the average number of customers in the store, L, is the effective arrival rate, λ, times the average time that a customer spends in the store, W, or simply L = λW.
Assume customers arrive at the rate of 10 per hour and stay an average of 0.5 hour. This means we should find the average number of customers in the store at any time to be 5.
Now suppose the store is considering doing more advertising to raise the arrival rate to 20 per hour. The store must either be prepared to host an average of 10 occupants or must reduce the time each customer spends in the store to 0.25 hour. The store might achieve the latter by ringing up the bill faster or by adding more counters.
We can apply Little's Law to systems within the store. For example, consider the counter and its queue. Assume we notice that there are on average 2 customers in the queue and at the counter. We know the arrival rate is 10 per hour, so customers must be spending 0.2 hours on average checking out.
We can even apply Little's Law to the counter itself. The average number of people at the counter would be in the range (0, 1) since no more than one person can be at the counter at a time. In that case, the average number of people at the counter is also known as the utilisation of the counter.
However, because a store in reality generally has a limited amount of space, it can eventually become unstable. If the arrival rate is much greater than the exit rate, the store will eventually start to overflow, and thus any new arriving customers will simply be rejected (and forced to go somewhere else or try again later) until there is once again free space available in the store. This is also the difference between the arrival rate and the effective arrival rate, where the arrival rate roughly corresponds to the rate at which customers arrive at the store, whereas the effective arrival rate corresponds to the rate at which customers enter the store. However, in a system with an infinite size and no loss, the two are equal.
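The store walkthrough above can be checked numerically; a short sketch (variable names are illustrative):

```python
def little_L(arrival_rate, mean_stay):
    """L = lambda * W: mean number of customers in the system."""
    return arrival_rate * mean_stay

# 10 customers/hour, each staying 0.5 hour -> 5 customers on average.
baseline = little_L(10, 0.5)
# Doubling the arrival rate doubles the mean occupancy...
after_ads = little_L(20, 0.5)
# ...unless the mean stay is halved, which restores the baseline.
shorter_stay = little_L(20, 0.25)
# Subsystem view: 2 customers at the counter and queue, arrival rate
# 10/hour, so each spends W = L / lambda = 0.2 hours checking out.
checkout_time = 2 / 10
print(baseline, after_ads, shorter_stay, checkout_time)
```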
To use Little's law on data, formulas must be used to estimate the parameters, as the result does not necessarily directly apply over finite time intervals, due to problems like how to log customers already present at the start of the logging interval and those who have not yet departed when logging stops.[14]
Little's law is widely used in manufacturing to predict lead time based on the production rate and the amount of work-in-process.[15]
Software-performance testers have used Little's law to ensure that the observed performance results are not due to bottlenecks imposed by the testing apparatus.[16][17]
Other applications include staffing emergency departments in hospitals.[18][19]
Lastly, an equivalent version of Little's law also applies in the fields of demography and population biology, although it is not referred to as "Little's law" there.[20][21] For example, Cohen (2008)[22] explains that in a homogeneous stationary population without migration, P = B × e, where P is the total population size, B is the number of births per year, and e is the life expectancy from birth. The formula P = B × e is thus directly equivalent to Little's law (L = λ × W). However, biological populations tend to be dynamic and therefore more complicated to model accurately.[23]
An extension of Little's law provides a relationship between the steady state distribution of the number of customers in the system and the time spent in the system under a first come, first served service discipline.[24]
|
https://en.wikipedia.org/wiki/Little%27s_lemma
|
The following tables compare general and technical information for a number ofdocumentation generators. Please see the individual products' articles for further information. Unless otherwise specified in footnotes, comparisons are based on the stable versions without any add-ons, extensions or external programs. Note that many of the generators listed are no longer maintained.
Basic general information about the generators, including: creator or company, license, and price.
The output formats the generators can write.
|
https://en.wikipedia.org/wiki/Comparison_of_documentation_generators
|
A computer cluster is a set of computers that work together so that they can be viewed as a single system. Unlike grid computers, computer clusters have each node set to perform the same task, controlled and scheduled by software. The newest manifestation of cluster computing is cloud computing.
The components of a cluster are usually connected to each other through fast local area networks, with each node (computer used as a server) running its own instance of an operating system. In most circumstances, all of the nodes use the same hardware[1] and the same operating system, although in some setups (e.g. using Open Source Cluster Application Resources (OSCAR)), different operating systems or different hardware can be used on each computer.[2]
Clusters are usually deployed to improve performance and availability over that of a single computer, while typically being much more cost-effective than single computers of comparable speed or availability.[3]
Computer clusters emerged as a result of the convergence of a number of computing trends including the availability of low-cost microprocessors, high-speed networks, and software for high-performance distributed computing. They have a wide range of applicability and deployment, ranging from small business clusters with a handful of nodes to some of the fastest supercomputers in the world such as IBM's Sequoia.[4] Prior to the advent of clusters, single-unit fault-tolerant mainframes with modular redundancy were employed; but the lower upfront cost of clusters and the increased speed of network fabric have favoured their adoption. In contrast to high-reliability mainframes, clusters are cheaper to scale out, but have increased complexity in error handling, as in clusters error modes are not opaque to running programs.[5]
The desire to get more computing power and better reliability by orchestrating a number of low-cost commercial off-the-shelf computers has given rise to a variety of architectures and configurations.
The computer clustering approach usually (but not always) connects a number of readily available computing nodes (e.g. personal computers used as servers) via a fast local area network.[6] The activities of the computing nodes are orchestrated by "clustering middleware", a software layer that sits atop the nodes and allows the users to treat the cluster as by and large one cohesive computing unit, e.g. via a single system image concept.[6]
Computer clustering relies on a centralized management approach which makes the nodes available as orchestrated shared servers. It is distinct from other approaches such as peer-to-peer or grid computing, which also use many nodes, but with a far more distributed nature.[6]
A computer cluster may be a simple two-node system which just connects two personal computers, or may be a very fast supercomputer. A basic approach to building a cluster is that of a Beowulf cluster, which may be built with a few personal computers to produce a cost-effective alternative to traditional high-performance computing. An early project that showed the viability of the concept was the 133-node Stone Soupercomputer.[7] The developers used Linux, the Parallel Virtual Machine toolkit and the Message Passing Interface library to achieve high performance at a relatively low cost.[8]
Although a cluster may consist of just a few personal computers connected by a simple network, the cluster architecture may also be used to achieve very high levels of performance. The TOP500 organization's semiannual list of the 500 fastest supercomputers often includes many clusters, e.g. the world's fastest machine in 2011 was the K computer, which has a distributed-memory cluster architecture.[9]
Greg Pfister has stated that clusters were not invented by any specific vendor but by customers who could not fit all their work on one computer, or needed a backup.[10] Pfister estimates the date as some time in the 1960s. The formal engineering basis of cluster computing as a means of doing parallel work of any sort was arguably invented by Gene Amdahl of IBM, who in 1967 published what has come to be regarded as the seminal paper on parallel processing: Amdahl's law.
The history of early computer clusters is more or less directly tied to the history of early networks, as one of the primary motivations for the development of a network was to link computing resources, creating a de facto computer cluster.
The first production system designed as a cluster was the Burroughs B5700 in the mid-1960s. This allowed up to four computers, each with either one or two processors, to be tightly coupled to a common disk storage subsystem in order to distribute the workload. Unlike standard multiprocessor systems, each computer could be restarted without disrupting overall operation.
The first commercial loosely coupled clustering product was Datapoint Corporation's "Attached Resource Computer" (ARC) system, developed in 1977, which used ARCnet as the cluster interface. Clustering per se did not really take off until Digital Equipment Corporation released their VAXcluster product in 1984 for the VMS operating system. The ARC and VAXcluster products not only supported parallel computing, but also shared file systems and peripheral devices. The idea was to provide the advantages of parallel processing, while maintaining data reliability and uniqueness. Two other noteworthy early commercial clusters were the Tandem NonStop (a 1976 high-availability commercial product)[11][12] and the IBM S/390 Parallel Sysplex (circa 1994, primarily for business use).
Within the same time frame, while computer clusters used parallelism outside the computer on a commodity network, supercomputers began to use it within the same computer. Following the success of the CDC 6600 in 1964, the Cray 1 was delivered in 1976, and introduced internal parallelism via vector processing.[13] While early supercomputers excluded clusters and relied on shared memory, in time some of the fastest supercomputers (e.g. the K computer) relied on cluster architectures.
Computer clusters may be configured for different purposes ranging from general-purpose business needs such as web-service support, to computation-intensive scientific calculations. In either case, the cluster may use a high-availability approach. Note that the attributes described below are not exclusive and a "computer cluster" may also use a high-availability approach, etc.
"Load-balancing" clusters are configurations in which cluster nodes share computational workload to provide better overall performance. For example, a web server cluster may assign different queries to different nodes, so the overall response time will be optimized.[14] However, approaches to load-balancing may significantly differ among applications, e.g. a high-performance cluster used for scientific computations would balance load with different algorithms from a web-server cluster, which may just use a simple round-robin method, assigning each new request to a different node.[14]
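As an illustration of the simplest of these policies, a round-robin dispatcher can be sketched in a few lines (node names are hypothetical):

```python
from itertools import cycle

def make_round_robin(nodes):
    """Return a dispatcher that assigns each new request to the next node in turn."""
    ring = cycle(nodes)
    return lambda request: next(ring)

dispatch = make_round_robin(["node-a", "node-b", "node-c"])
assignments = [dispatch(f"req-{i}") for i in range(6)]
print(assignments)  # each node receives every third request
```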
Computer clusters are used for computation-intensive purposes, rather than handling IO-oriented operations such as web service or databases.[15] For instance, a computer cluster might support computational simulations of vehicle crashes or weather. Very tightly coupled computer clusters are designed for work that may approach "supercomputing".
"High-availability clusters" (also known as failover clusters, or HA clusters) improve the availability of the cluster approach. They operate by having redundant nodes, which are then used to provide service when system components fail. HA cluster implementations attempt to use redundancy of cluster components to eliminate single points of failure. There are commercial implementations of high-availability clusters for many operating systems. The Linux-HA project is one commonly used free software HA package for the Linux operating system.
Clusters are primarily designed with performance in mind, but installations are based on many other factors. Fault tolerance (the ability of a system to continue operating despite a malfunctioning node) enables scalability, and in high-performance situations, allows for a low frequency of maintenance routines, resource consolidation (e.g., RAID), and centralized management. Advantages include enabling data recovery in the event of a disaster and providing parallel data processing and high processing capacity.[16][17]
In terms of scalability, clusters provide this in their ability to add nodes horizontally. This means that more computers may be added to the cluster, to improve its performance, redundancy and fault tolerance. This can be an inexpensive solution for a higher performing cluster compared to scaling up a single node in the cluster. This property of computer clusters can allow for larger computational loads to be executed by a larger number of lower performing computers.
When adding a new node to a cluster, reliability increases because the entire cluster does not need to be taken down. A single node can be taken down for maintenance, while the rest of the cluster takes on the load of that individual node.
Clustering a large number of computers lends itself to the use of distributed file systems and RAID, both of which can increase the reliability and speed of a cluster.
One of the issues in designing a cluster is how tightly coupled the individual nodes may be. For instance, a single computer job may require frequent communication among nodes: this implies that the cluster shares a dedicated network, is densely located, and probably has homogeneous nodes. The other extreme is where a computer job uses one or few nodes, and needs little or no inter-node communication, approaching grid computing.
In a Beowulf cluster, the application programs never see the computational nodes (also called slave computers) but only interact with the "master", which is a specific computer handling the scheduling and management of the slaves.[15] In a typical implementation the master has two network interfaces, one that communicates with the private Beowulf network for the slaves, the other for the general-purpose network of the organization.[15] The slave computers typically have their own version of the same operating system, and local memory and disk space. However, the private slave network may also have a large and shared file server that stores global persistent data, accessed by the slaves as needed.[15]
A special-purpose 144-node DEGIMA cluster is tuned to running astrophysical N-body simulations using the Multiple-Walk parallel tree code, rather than general-purpose scientific computations.[18]
Due to the increasing computing power of each generation of game consoles, a novel use has emerged where they are repurposed into high-performance computing (HPC) clusters. Examples of game console clusters include Sony PlayStation clusters and Microsoft Xbox clusters. Another example of a consumer game product is the Nvidia Tesla Personal Supercomputer workstation, which uses multiple graphics accelerator processor chips. Besides game consoles, high-end graphics cards can also be used instead. The use of graphics cards (or rather their GPUs) to do calculations for grid computing is vastly more economical than using CPUs, despite being less precise. However, when using double-precision values, they become as precise to work with as CPUs, and are still much less costly (in purchase cost).[2]
Computer clusters have historically run on separate physical computers with the same operating system. With the advent of virtualization, the cluster nodes may run on separate physical computers with different operating systems, abstracted by a virtualization layer so that they appear similar.[19] The cluster may also be virtualized on various configurations as maintenance takes place; an example implementation is Xen as the virtualization manager with Linux-HA.[19]
As computer clusters were appearing during the 1980s, so were supercomputers. One of the elements that distinguished the three classes at that time was that the early supercomputers relied on shared memory. Clusters do not typically use physically shared memory, while many supercomputer architectures have also abandoned it.
However, the use of a clustered file system is essential in modern computer clusters. Examples include the IBM General Parallel File System, Microsoft's Cluster Shared Volumes and the Oracle Cluster File System.
Two widely used approaches for communication between cluster nodes are MPI (Message Passing Interface) and PVM (Parallel Virtual Machine).[20]
PVM was developed at the Oak Ridge National Laboratory around 1989, before MPI was available. PVM must be directly installed on every cluster node and provides a set of software libraries that paint the node as a "parallel virtual machine". PVM provides a run-time environment for message-passing, task and resource management, and fault notification. PVM can be used by user programs written in C, C++, or Fortran, etc.[20][21]
MPI emerged in the early 1990s out of discussions among 40 organizations. The initial effort was supported by ARPA and the National Science Foundation. Rather than starting anew, the design of MPI drew on various features available in commercial systems of the time. The MPI specifications then gave rise to specific implementations. MPI implementations typically use TCP/IP and socket connections.[20] MPI is now a widely available communications model that enables parallel programs to be written in languages such as C, Fortran, and Python.[21] Thus, unlike PVM, which provides a concrete implementation, MPI is a specification which has been implemented in systems such as MPICH and Open MPI.[21][22]
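The message-passing model that both PVM and MPI embody can be illustrated without either library: workers communicate solely through explicit send/receive operations, never by touching shared state directly. The sketch below is not MPI — it uses only Python's standard library, and the function names are illustrative, loosely analogous to MPI's scatter and reduce operations:

```python
import threading
import queue

def worker(rank, inbox, outbox):
    data = inbox.get()              # blocking receive (like MPI_Recv)
    outbox.put((rank, sum(data)))   # send partial result back (like MPI_Send)

def scatter_sum(values, n_workers=2):
    """Rank-0 logic: scatter chunks to workers, then gather and reduce
    the partial sums (loosely analogous to MPI_Scatter + MPI_Reduce)."""
    outbox = queue.Queue()
    threads = []
    for rank in range(n_workers):
        inbox = queue.Queue()
        t = threading.Thread(target=worker, args=(rank + 1, inbox, outbox))
        t.start()
        inbox.put(values[rank::n_workers])  # scatter one chunk per worker
        threads.append(t)
    total = sum(outbox.get()[1] for _ in range(n_workers))
    for t in threads:
        t.join()
    return total

print(scatter_sum(list(range(10))))  # 45
```

In real MPI each rank would be a separate process, possibly on a different cluster node, with the library transporting messages over the network; the explicit send/receive structure is the same.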
One of the challenges in the use of a computer cluster is the cost of administrating it which can at times be as high as the cost of administrating N independent machines, if the cluster has N nodes.[23]In some cases this provides an advantage toshared memory architectureswith lower administration costs.[23]This has also madevirtual machinespopular, due to the ease of administration.[23]
When a large multi-user cluster needs to access very large amounts of data, task scheduling becomes a challenge. In a heterogeneous CPU-GPU cluster with a complex application environment, the performance of each job depends on the characteristics of the underlying cluster. Therefore, mapping tasks onto CPU cores and GPU devices provides significant challenges.[24] This is an area of ongoing research; algorithms that combine and extend MapReduce and Hadoop have been proposed and studied.[24]
When a node in a cluster fails, strategies such as "fencing" may be employed to keep the rest of the system operational.[25][26]Fencing is the process of isolating a node or protecting shared resources when a node appears to be malfunctioning. There are two classes of fencing methods; one disables a node itself, and the other disallows access to resources such as shared disks.[25]
The STONITH method stands for "Shoot The Other Node In The Head", meaning that the suspected node is disabled or powered off. For instance, power fencing uses a power controller to turn off an inoperable node.[25]
The resource fencing approach disallows access to resources without powering off the node. This may include persistent reservation fencing via SCSI-3, Fibre Channel fencing to disable the Fibre Channel port, or global network block device (GNBD) fencing to disable access to the GNBD server.
Load balancing clusters such as web servers use cluster architectures to support a large number of users, and typically each user request is routed to a specific node, achieving task parallelism without multi-node cooperation, given that the main goal of the system is providing rapid user access to shared data. However, "computer clusters" which perform complex computations for a small number of users need to take advantage of the parallel processing capabilities of the cluster and partition "the same computation" among several nodes.[27]
Automatic parallelization of programs remains a technical challenge, but parallel programming models can be used to effectuate a higher degree of parallelism via the simultaneous execution of separate portions of a program on different processors.[27][28]
Developing and debugging parallel programs on a cluster requires parallel language primitives and suitable tools such as those discussed by the High Performance Debugging Forum (HPDF), which resulted in the HPD specifications.[21][29] Tools such as TotalView were then developed to debug parallel implementations on computer clusters which use Message Passing Interface (MPI) or Parallel Virtual Machine (PVM) for message passing.
The University of California, Berkeley Network of Workstations (NOW) system gathers cluster data and stores it in a database, while a system such as PARMON, developed in India, allows visually observing and managing large clusters.[21]
Application checkpointing can be used to restore a given state of the system when a node fails during a long multi-node computation.[30] This is essential in large clusters, given that as the number of nodes increases, so does the likelihood of node failure under heavy computational loads. Checkpointing can restore the system to a stable state so that processing can resume without needing to recompute results.[30]
The Linux world supports various cluster software. For application clustering, there are distcc and MPICH. Linux Virtual Server and Linux-HA are director-based clusters that allow incoming requests for services to be distributed across multiple cluster nodes. MOSIX, LinuxPMI, Kerrighed, and OpenSSI are full-blown clusters integrated into the kernel that provide automatic process migration among homogeneous nodes. OpenSSI, openMosix and Kerrighed are single-system image implementations.
Microsoft Windows Compute Cluster Server 2003, based on the Windows Server platform, provides pieces for high-performance computing such as the job scheduler, the MS MPI library and management tools.
gLite is a set of middleware technologies created by the Enabling Grids for E-sciencE (EGEE) project.
Slurm is also used to schedule and manage some of the largest supercomputer clusters (see the TOP500 list).
Although most computer clusters are permanent fixtures, attempts at flash mob computing have been made to build short-lived clusters for specific computations. However, larger-scale volunteer computing systems such as BOINC-based systems have attracted more followers.
https://en.wikipedia.org/wiki/Computer_cluster
In quantum computing, a quantum algorithm is an algorithm that runs on a realistic model of quantum computation, the most commonly used model being the quantum circuit model of computation.[1][2] A classical (or non-quantum) algorithm is a finite sequence of instructions, or a step-by-step procedure for solving a problem, where each step or instruction can be performed on a classical computer. Similarly, a quantum algorithm is a step-by-step procedure, where each of the steps can be performed on a quantum computer. Although all classical algorithms can also be performed on a quantum computer,[3]: 126 the term quantum algorithm is generally reserved for algorithms that seem inherently quantum, or use some essential feature of quantum computation such as quantum superposition or quantum entanglement.
Problems that are undecidable using classical computers remain undecidable using quantum computers.[4]: 127 What makes quantum algorithms interesting is that they might be able to solve some problems faster than classical algorithms because the quantum superposition and quantum entanglement that quantum algorithms exploit generally cannot be efficiently simulated on classical computers (see Quantum supremacy).
The best-known algorithms are Shor's algorithm for factoring and Grover's algorithm for searching an unstructured database or an unordered list. Shor's algorithm runs much (almost exponentially) faster than the most efficient known classical algorithm for factoring, the general number field sieve.[5] Grover's algorithm runs quadratically faster than the best possible classical algorithm for the same task,[6] a linear search.
Quantum algorithms are usually described, in the commonly used circuit model of quantum computation, by a quantum circuit that acts on some input qubits and terminates with a measurement. A quantum circuit consists of simple quantum gates, each of which acts on some finite number of qubits. Quantum algorithms may also be stated in other models of quantum computation, such as the Hamiltonian oracle model.[7]
Quantum algorithms can be categorized by the main techniques involved in the algorithm. Some commonly used techniques/ideas in quantum algorithms include phase kick-back, phase estimation, the quantum Fourier transform, quantum walks, amplitude amplification and topological quantum field theory. Quantum algorithms may also be grouped by the type of problem solved; see, e.g., the survey on quantum algorithms for algebraic problems.[8]
The quantum Fourier transform is the quantum analogue of the discrete Fourier transform, and is used in several quantum algorithms. The Hadamard transform is also an example of a quantum Fourier transform over an n-dimensional vector space over the field F2. The quantum Fourier transform can be efficiently implemented on a quantum computer using only a polynomial number of quantum gates.[citation needed]
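As a concrete check of the definition, the QFT on n qubits is the N × N unitary with entries ω^(jk)/√N, where N = 2^n and ω = e^(2πi/N). The NumPy sketch below builds this matrix explicitly (for illustration only; a quantum circuit realizes it with polynomially many gates rather than as a dense matrix) and verifies that it is unitary and that the one-qubit case is exactly the Hadamard gate.

```python
import numpy as np

def qft_matrix(n):
    """Quantum Fourier transform on n qubits as an N x N unitary,
    N = 2^n:  F[j, k] = omega^(j*k) / sqrt(N), omega = exp(2*pi*i/N)."""
    N = 2 ** n
    j, k = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
    return np.exp(2j * np.pi * j * k / N) / np.sqrt(N)

F = qft_matrix(3)
assert np.allclose(F @ F.conj().T, np.eye(8))   # unitarity: F F† = I

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)    # Hadamard gate
assert np.allclose(qft_matrix(1), H)            # 1-qubit QFT = Hadamard
```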
The Deutsch–Jozsa algorithm solves a black-box problem that requires exponentially many queries to the black box for any deterministic classical computer, but can be done with a single query by a quantum computer. However, when comparing bounded-error classical and quantum algorithms, there is no speedup, since a classical probabilistic algorithm can solve the problem with a constant number of queries with a small probability of error. The algorithm determines whether a function f is either constant (0 on all inputs or 1 on all inputs) or balanced (returns 1 for half of the input domain and 0 for the other half).
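A small statevector simulation makes the single-query claim concrete: prepare the uniform superposition with Hadamards, apply the oracle once as a phase, apply Hadamards again, and read off the probability of the all-zero outcome — 1 for a constant f, 0 for a balanced f. This is an illustrative NumPy sketch of the phase-oracle form of the circuit, not code for real hardware.

```python
import numpy as np

def hadamard_n(n):
    """H^(tensor n) built by repeated Kronecker products."""
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    out = np.array([[1.0]])
    for _ in range(n):
        out = np.kron(out, H)
    return out

def deutsch_jozsa(f, n):
    """Decide constant vs. balanced with one phase-oracle query."""
    N = 2 ** n
    state = np.zeros(N)
    state[0] = 1.0                              # |0...0>
    state = hadamard_n(n) @ state               # uniform superposition
    phases = np.array([(-1) ** f(x) for x in range(N)])
    state = phases * state                      # the single oracle query
    state = hadamard_n(n) @ state
    p_zero = abs(state[0]) ** 2                 # prob. of outcome |0...0>
    return "constant" if np.isclose(p_zero, 1.0) else "balanced"
```

For a constant f the amplitudes re-interfere entirely onto |0...0⟩; for a balanced f that amplitude cancels exactly.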
The Bernstein–Vazirani algorithm is the first quantum algorithm that solves a problem more efficiently than the best known classical algorithm. It was designed to create an oracle separation between BQP and BPP.
Simon's algorithm solves a black-box problem exponentially faster than any classical algorithm, including bounded-error probabilistic algorithms. This algorithm, which achieves an exponential speedup over all classical algorithms that we consider efficient, was the motivation for Shor's algorithm for factoring.
The quantum phase estimation algorithm is used to determine the eigenphase of an eigenvector of a unitary gate, given a quantum state proportional to the eigenvector and access to the gate. The algorithm is frequently used as a subroutine in other algorithms.
Shor's algorithm solves the discrete logarithm problem and the integer factorization problem in polynomial time,[9] whereas the best known classical algorithms take super-polynomial time. It is unknown whether these problems are in P or NP-complete. It is also one of the few quantum algorithms that solves a non-black-box problem in polynomial time, where the best known classical algorithms run in super-polynomial time.
The abelian hidden subgroup problem is a generalization of many problems that can be solved by a quantum computer, such as Simon's problem, solving Pell's equation, testing the principal ideal of a ring R and factoring. There are efficient quantum algorithms known for the abelian hidden subgroup problem.[10] The more general hidden subgroup problem, where the group is not necessarily abelian, is a generalization of the previously mentioned problems, as well as graph isomorphism and certain lattice problems. Efficient quantum algorithms are known for certain non-abelian groups. However, no efficient algorithms are known for the symmetric group, which would give an efficient algorithm for graph isomorphism,[11] or the dihedral group, which would solve certain lattice problems.[12]
A Gauss sum is a type of exponential sum. The best known classical algorithm for estimating these sums takes exponential time. Since the discrete logarithm problem reduces to Gauss sum estimation, an efficient classical algorithm for estimating Gauss sums would imply an efficient classical algorithm for computing discrete logarithms, which is considered unlikely. However, quantum computers can estimate Gauss sums to polynomial precision in polynomial time.[13]
Consider an oracle consisting of n random Boolean functions mapping n-bit strings to a Boolean value, with the goal of finding n n-bit strings z1, ..., zn such that, for the Hadamard–Fourier transform {\displaystyle {\tilde {f}}} of each function, at least 3/4 of the strings satisfy {\displaystyle |{\tilde {f}}(z_{i})|\geq 1} and at least 1/4 satisfy {\displaystyle |{\tilde {f}}(z_{i})|\geq 2}. This can be done in bounded-error quantum polynomial time (BQP).[14]
Amplitude amplification is a technique that allows the amplification of a chosen subspace of a quantum state. Applications of amplitude amplification usually lead to quadratic speedups over the corresponding classical algorithms. It can be considered as a generalization of Grover's algorithm.[citation needed]
Grover's algorithm searches an unstructured database (or an unordered list) with N entries for a marked entry, using only {\displaystyle O({\sqrt {N}})} queries instead of the {\displaystyle O(N)} queries required classically.[15] Classically, {\displaystyle O(N)} queries are required even allowing bounded-error probabilistic algorithms.
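The quadratic speedup is easy to see in a direct statevector simulation: each Grover iteration flips the phase of the marked entry (the oracle) and then inverts every amplitude about the mean (the diffusion step), and after about (π/4)√N iterations the marked entry dominates the measurement distribution. The sketch below is illustrative NumPy code, not an implementation for quantum hardware.

```python
import numpy as np

def grover_search(n_qubits, marked):
    """Simulate Grover search for one marked item among N = 2^n entries."""
    N = 2 ** n_qubits
    state = np.full(N, 1.0 / np.sqrt(N))      # uniform superposition
    n_iters = int(np.pi / 4 * np.sqrt(N))     # ~ (pi/4) sqrt(N) iterations
    for _ in range(n_iters):
        state[marked] *= -1.0                 # oracle: phase-flip marked entry
        state = 2.0 * state.mean() - state    # diffusion: invert about mean
    probs = np.abs(state) ** 2
    return int(np.argmax(probs)), n_iters

best, iters = grover_search(6, marked=37)
# 6 qubits -> N = 64 entries; the marked entry is found with only
# ~ (pi/4) * 8 = 6 oracle calls rather than up to 64 classical lookups.
```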
Theorists have considered a hypothetical generalization of a standard quantum computer that could access the histories of the hidden variables in Bohmian mechanics. (Such a computer is completely hypothetical and would not be a standard quantum computer, or even possible under the standard theory of quantum mechanics.) Such a hypothetical computer could implement a search of an N-item database in at most {\displaystyle O({\sqrt[{3}]{N}})} steps. This is slightly faster than the {\displaystyle O({\sqrt {N}})} steps taken by Grover's algorithm. However, neither search method would allow either model of quantum computer to solve NP-complete problems in polynomial time.[16]
Quantum counting solves a generalization of the search problem. It solves the problem of counting the number of marked entries in an unordered list, instead of just detecting whether one exists. Specifically, it counts the number of marked entries in an {\displaystyle N}-element list with an error of at most {\displaystyle \varepsilon } by making only {\displaystyle \Theta \left(\varepsilon ^{-1}{\sqrt {N/k}}\right)} queries, where {\displaystyle k} is the number of marked elements in the list.[17][18] More precisely, the algorithm outputs an estimate {\displaystyle k'} for {\displaystyle k}, the number of marked entries, with accuracy {\displaystyle |k-k'|\leq \varepsilon k}.
A quantum walk is the quantum analogue of a classical random walk. A classical random walk can be described by a probability distribution over some states, while a quantum walk can be described by a quantum superposition over states. Quantum walks are known to give exponential speedups for some black-box problems.[19][20] They also provide polynomial speedups for many problems. A framework for the creation of quantum walk algorithms exists and is a versatile tool.[21]
The Boson Sampling Problem in an experimental configuration assumes[22] an input of bosons (e.g., photons) of moderate number that are randomly scattered into a large number of output modes, constrained by a defined unitarity. When individual photons are used, the problem is isomorphic to a multi-photon quantum walk.[23] The problem is then to produce a fair sample of the probability distribution of the output that depends on the input arrangement of bosons and the unitarity.[24] Solving this problem with a classical computer algorithm requires computing the permanent of the unitary transform matrix, which may take a prohibitively long time or be outright impossible. In 2014, it was proposed[25] that existing technology and standard probabilistic methods of generating single-photon states could be used as an input into a suitable quantum computable linear optical network and that sampling of the output probability distribution would be demonstrably superior using quantum algorithms. In 2015, investigation predicted[26] the sampling problem had similar complexity for inputs other than Fock-state photons and identified a transition in computational complexity from classically simulable to just as hard as the Boson Sampling Problem, depending on the size of coherent amplitude inputs.
The element distinctness problem is the problem of determining whether all the elements of a list are distinct. Classically, {\displaystyle \Omega (N)} queries are required for a list of size {\displaystyle N}; however, it can be solved in {\displaystyle \Theta (N^{2/3})} queries on a quantum computer. The optimal algorithm was put forth by Andris Ambainis,[27] and Yaoyun Shi first proved a tight lower bound when the size of the range is sufficiently large.[28] Ambainis[29] and Kutin[30] independently (and via different proofs) extended that work to obtain the lower bound for all functions.
The triangle-finding problem is the problem of determining whether a given graph contains a triangle (a clique of size 3). The best-known lower bound for quantum algorithms is {\displaystyle \Omega (N)}, but the best algorithm known requires {\displaystyle O(N^{1.297})} queries,[31] an improvement over the previous best {\displaystyle O(N^{1.3})} queries.[21][32]
A formula is a tree with a gate at each internal node and an input bit at each leaf node. The problem is to evaluate the formula, which is the output of the root node, given oracle access to the input.
A well studied formula is the balanced binary tree with only NAND gates.[33] This type of formula requires {\displaystyle \Theta (N^{c})} queries using randomness,[34] where {\displaystyle c=\log _{2}(1+{\sqrt {33}})/4\approx 0.754}. With a quantum algorithm, however, it can be solved in {\displaystyle \Theta (N^{1/2})} queries. No better quantum algorithm for this case was known until one was found for the unconventional Hamiltonian oracle model.[7] The same result for the standard setting soon followed.[35]
Fast quantum algorithms for more complicated formulas are also known.[36]
The problem is to determine if a black-box group, given by k generators, is commutative. A black-box group is a group with an oracle function, which must be used to perform the group operations (multiplication, inversion, and comparison with identity). The interest in this context lies in the query complexity, which is the number of oracle calls needed to solve the problem. The deterministic and randomized query complexities are {\displaystyle \Theta (k^{2})} and {\displaystyle \Theta (k)}, respectively.[37] A quantum algorithm requires {\displaystyle \Omega (k^{2/3})} queries, while the best known quantum algorithm uses {\displaystyle O(k^{2/3}\log k)} queries.[38]
The complexity class BQP (bounded-error quantum polynomial time) is the set of decision problems solvable by a quantum computer in polynomial time with error probability of at most 1/3 for all instances.[39] It is the quantum analogue of the classical complexity class BPP.
A problem is BQP-complete if it is in BQP and any problem in BQP can be reduced to it in polynomial time. Informally, the class of BQP-complete problems are those that are as hard as the hardest problems in BQP and are themselves efficiently solvable by a quantum computer (with bounded error).
Witten showed that the Chern–Simons topological quantum field theory (TQFT) can be solved in terms of Jones polynomials. A quantum computer can simulate a TQFT, and thereby approximate the Jones polynomial,[40] which as far as we know is hard to compute classically in the worst-case scenario.[citation needed]
The idea that quantum computers might be more powerful than classical computers originated in Richard Feynman's observation that classical computers seem to require exponential time to simulate many-particle quantum systems, yet quantum many-body systems are able to "solve themselves".[41] Since then, the idea that quantum computers can simulate quantum physical processes exponentially faster than classical computers has been greatly fleshed out and elaborated. Efficient (i.e., polynomial-time) quantum algorithms have been developed for simulating both Bosonic and Fermionic systems,[42] as well as the simulation of chemical reactions beyond the capabilities of current classical supercomputers using only a few hundred qubits.[43] Quantum computers can also efficiently simulate topological quantum field theories.[44] In addition to its intrinsic interest, this result has led to efficient quantum algorithms for estimating quantum topological invariants such as Jones[45] and HOMFLY polynomials,[46] and the Turaev–Viro invariant of three-dimensional manifolds.[47]
In 2009, Aram Harrow, Avinatan Hassidim and Seth Lloyd formulated a quantum algorithm for solving linear systems. The algorithm estimates the result of a scalar measurement on the solution vector to a given linear system of equations.[48]
Provided that the linear system is sparse and has a low condition number {\displaystyle \kappa }, and that the user is interested in the result of a scalar measurement on the solution vector (instead of the values of the solution vector itself), the algorithm has a runtime of {\displaystyle O(\log(N)\kappa ^{2})}, where {\displaystyle N} is the number of variables in the linear system. This offers an exponential speedup over the fastest classical algorithm, which runs in {\displaystyle O(N\kappa )} (or {\displaystyle O(N{\sqrt {\kappa }})} for positive semidefinite matrices).
Hybrid quantum/classical algorithms combine quantum state preparation and measurement with classical optimization.[49] These algorithms generally aim to determine the ground-state eigenvector and eigenvalue of a Hermitian operator.
The quantum approximate optimization algorithm takes inspiration from quantum annealing, performing a discretized approximation of quantum annealing using a quantum circuit. It can be used to solve problems in graph theory.[50] The algorithm makes use of classical optimization of quantum operations to maximize an "objective function".
The variational quantum eigensolver (VQE) algorithm applies classical optimization to minimize the energy expectation value of an ansatz state to find the ground state of a Hermitian operator, such as a molecule's Hamiltonian.[51] It can also be extended to find excited energies of molecular Hamiltonians.[52]
The contracted quantum eigensolver (CQE) algorithm minimizes the residual of a contraction (or projection) of the Schrödinger equation onto the space of two (or more) electrons to find the ground- or excited-state energy and two-electron reduced density matrix of a molecule.[53] It is based on classical methods for solving energies and two-electron reduced density matrices directly from the anti-Hermitian contracted Schrödinger equation.[54]
https://en.wikipedia.org/wiki/Quantum_algorithm
In mechanics, the virial theorem provides a general equation that relates the average over time of the total kinetic energy of a stable system of discrete particles, bound by a conservative force (where the work done is independent of path), with that of the total potential energy of the system. Mathematically, the theorem states that {\displaystyle \langle T\rangle =-{\frac {1}{2}}\,\sum _{k=1}^{N}\langle \mathbf {F} _{k}\cdot \mathbf {r} _{k}\rangle ,} where T is the total kinetic energy of the N particles, Fk represents the force on the kth particle, which is located at position rk, and angle brackets represent the average over time of the enclosed quantity. The word virial for the right-hand side of the equation derives from vis, the Latin word for "force" or "energy", and was given its technical definition by Rudolf Clausius in 1870.[1]
The significance of the virial theorem is that it allows the average total kinetic energy to be calculated even for very complicated systems that defy an exact solution, such as those considered in statistical mechanics; this average total kinetic energy is related to the temperature of the system by the equipartition theorem. However, the virial theorem does not depend on the notion of temperature and holds even for systems that are not in thermal equilibrium. The virial theorem has been generalized in various ways, most notably to a tensor form.
If the force between any two particles of the system results from a potential energy V(r) = αr^n that is proportional to some power n of the interparticle distance r, the virial theorem takes the simple form {\displaystyle 2\langle T\rangle =n\langle V_{\text{TOT}}\rangle .}
Thus, twice the average total kinetic energy ⟨T⟩ equals n times the average total potential energy ⟨VTOT⟩. Whereas V(r) represents the potential energy between two particles of distance r, VTOT represents the total potential energy of the system, i.e., the sum of the potential energy V(r) over all pairs of particles in the system. A common example of such a system is a star held together by its own gravity, where n = −1.
In 1870, Rudolf Clausius delivered the lecture "On a Mechanical Theorem Applicable to Heat" to the Association for Natural and Medical Sciences of the Lower Rhine, following a 20-year study of thermodynamics. The lecture stated that the mean vis viva of the system is equal to its virial, or that the average kinetic energy is one half of the average potential energy. The virial theorem can be obtained directly from Lagrange's identity as applied in classical gravitational dynamics, the original form of which was included in Lagrange's "Essay on the Problem of Three Bodies" published in 1772. Carl Jacobi's generalization of the identity to N bodies and to the present form of Laplace's identity closely resembles the classical virial theorem. However, the interpretations leading to the development of the equations were very different, since at the time of development, statistical dynamics had not yet unified the separate studies of thermodynamics and classical dynamics.[2] The theorem was later utilized, popularized, generalized and further developed by James Clerk Maxwell, Lord Rayleigh, Henri Poincaré, Subrahmanyan Chandrasekhar, Enrico Fermi, Paul Ledoux, Richard Bader and Eugene Parker. Fritz Zwicky was the first to use the virial theorem to deduce the existence of unseen matter, which is now called dark matter. Richard Bader showed that the charge distribution of a total system can be partitioned into its kinetic and potential energies that obey the virial theorem.[3] As another example of its many applications, the virial theorem has been used to derive the Chandrasekhar limit for the stability of white dwarf stars.
Consider N = 2 particles with equal mass m, acted upon by mutually attractive forces. Suppose the particles are at diametrically opposite points of a circular orbit with radius r. The velocities are v1(t) and v2(t) = −v1(t), which are normal to forces F1(t) and F2(t) = −F1(t). The respective magnitudes are fixed at v and F. The average kinetic energy of the system in an interval of time from t1 to t2 is {\displaystyle \langle T\rangle ={\frac {1}{t_{2}-t_{1}}}\int _{t_{1}}^{t_{2}}\sum _{k=1}^{N}{\frac {1}{2}}m_{k}|\mathbf {v} _{k}(t)|^{2}\,dt={\frac {1}{t_{2}-t_{1}}}\int _{t_{1}}^{t_{2}}\left({\frac {1}{2}}m|\mathbf {v} _{1}(t)|^{2}+{\frac {1}{2}}m|\mathbf {v} _{2}(t)|^{2}\right)\,dt=mv^{2}.} Taking the center of mass as the origin, the particles have positions r1(t) and r2(t) = −r1(t) with fixed magnitude r. The attractive forces act in opposite directions as positions, so F1(t) ⋅ r1(t) = F2(t) ⋅ r2(t) = −Fr. Applying the centripetal force formula F = mv²/r results in {\displaystyle -{\frac {1}{2}}\sum _{k=1}^{N}\langle \mathbf {F} _{k}\cdot \mathbf {r} _{k}\rangle =-{\frac {1}{2}}(-Fr-Fr)=Fr={\frac {mv^{2}}{r}}\cdot r=mv^{2}=\langle T\rangle ,} as required. Note: if the origin is displaced, we would obtain the same result, because the dot product of the displacement with the equal and opposite forces F1(t), F2(t) results in net cancellation.
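The same time-averaging can be checked numerically for a bound gravitational orbit. The sketch below (with units chosen so that G = M = m = 1, an illustrative assumption) integrates a mildly eccentric Kepler orbit with a leapfrog scheme and confirms that 2⟨T⟩ + ⟨V⟩ averages to nearly zero over many periods, as the n = −1 form of the virial theorem requires.

```python
import numpy as np

def virial_averages(r0=1.0, v0=1.1, dt=1e-3, steps=200_000):
    """Time averages <T>, <V> along a bound Kepler orbit (G = M = m = 1).
    v0 != 1 makes the orbit eccentric, so T and V both oscillate,
    but the virial combination 2<T> + <V> still averages to ~0."""
    r = np.array([r0, 0.0])
    v = np.array([0.0, v0])
    acc = lambda pos: -pos / np.linalg.norm(pos) ** 3   # inverse-square force
    a = acc(r)
    T_sum = V_sum = 0.0
    for _ in range(steps):
        v_half = v + 0.5 * dt * a          # leapfrog: half kick
        r = r + dt * v_half                # drift
        a = acc(r)
        v = v_half + 0.5 * dt * a          # half kick
        T_sum += 0.5 * np.dot(v, v)
        V_sum += -1.0 / np.linalg.norm(r)
    return T_sum / steps, V_sum / steps
```

With these parameters the integration covers roughly twenty orbital periods, so the residual of 2⟨T⟩ + ⟨V⟩ comes only from the final partial period.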
Although the virial theorem depends on averaging the total kinetic and potential energies, the presentation here postpones the averaging to the last step.
For a collection of N point particles, the scalar moment of inertia I about the origin is {\displaystyle I=\sum _{k=1}^{N}m_{k}|\mathbf {r} _{k}|^{2}=\sum _{k=1}^{N}m_{k}r_{k}^{2},} where mk and rk represent the mass and position of the kth particle, and rk = |rk| is the position vector magnitude. Consider the scalar {\displaystyle G=\sum _{k=1}^{N}\mathbf {p} _{k}\cdot \mathbf {r} _{k},} where pk is the momentum vector of the kth particle.[4] Assuming that the masses are constant, G is one-half the time derivative of this moment of inertia: {\displaystyle {\begin{aligned}{\frac {1}{2}}{\frac {dI}{dt}}&={\frac {1}{2}}{\frac {d}{dt}}\sum _{k=1}^{N}m_{k}\mathbf {r} _{k}\cdot \mathbf {r} _{k}\\&=\sum _{k=1}^{N}m_{k}\,{\frac {d\mathbf {r} _{k}}{dt}}\cdot \mathbf {r} _{k}\\&=\sum _{k=1}^{N}\mathbf {p} _{k}\cdot \mathbf {r} _{k}=G.\end{aligned}}} In turn, the time derivative of G is {\displaystyle {\begin{aligned}{\frac {dG}{dt}}&=\sum _{k=1}^{N}\mathbf {p} _{k}\cdot {\frac {d\mathbf {r} _{k}}{dt}}+\sum _{k=1}^{N}{\frac {d\mathbf {p} _{k}}{dt}}\cdot \mathbf {r} _{k}\\&=\sum _{k=1}^{N}m_{k}{\frac {d\mathbf {r} _{k}}{dt}}\cdot {\frac {d\mathbf {r} _{k}}{dt}}+\sum _{k=1}^{N}\mathbf {F} _{k}\cdot \mathbf {r} _{k}\\&=2T+\sum _{k=1}^{N}\mathbf {F} _{k}\cdot \mathbf {r} _{k},\end{aligned}}} where mk is the mass of the kth particle, Fk = dpk/dt is the net force on that particle, and T is the total kinetic energy of the system, defined via the velocity vk = drk/dt of each particle: {\displaystyle T={\frac {1}{2}}\sum _{k=1}^{N}m_{k}v_{k}^{2}={\frac {1}{2}}\sum _{k=1}^{N}m_{k}{\frac {d\mathbf {r} _{k}}{dt}}\cdot {\frac {d\mathbf {r} _{k}}{dt}}.}
The total force Fk on particle k is the sum of all the forces from the other particles j in the system: {\displaystyle \mathbf {F} _{k}=\sum _{j=1}^{N}\mathbf {F} _{jk},} where Fjk is the force applied by particle j on particle k. Hence, the virial can be written as {\displaystyle -{\frac {1}{2}}\,\sum _{k=1}^{N}\mathbf {F} _{k}\cdot \mathbf {r} _{k}=-{\frac {1}{2}}\,\sum _{k=1}^{N}\sum _{j=1}^{N}\mathbf {F} _{jk}\cdot \mathbf {r} _{k}.}
Since no particle acts on itself (i.e., Fjj = 0 for 1 ≤ j ≤ N), we split the sum in terms below and above this diagonal and add them together in pairs: {\displaystyle {\begin{aligned}\sum _{k=1}^{N}\mathbf {F} _{k}\cdot \mathbf {r} _{k}&=\sum _{k=1}^{N}\sum _{j=1}^{N}\mathbf {F} _{jk}\cdot \mathbf {r} _{k}=\sum _{k=2}^{N}\sum _{j=1}^{k-1}\mathbf {F} _{jk}\cdot \mathbf {r} _{k}+\sum _{k=1}^{N-1}\sum _{j=k+1}^{N}\mathbf {F} _{jk}\cdot \mathbf {r} _{k}\\&=\sum _{k=2}^{N}\sum _{j=1}^{k-1}\mathbf {F} _{jk}\cdot \mathbf {r} _{k}+\sum _{j=2}^{N}\sum _{k=1}^{j-1}\mathbf {F} _{jk}\cdot \mathbf {r} _{k}=\sum _{k=2}^{N}\sum _{j=1}^{k-1}(\mathbf {F} _{jk}\cdot \mathbf {r} _{k}+\mathbf {F} _{kj}\cdot \mathbf {r} _{j})\\&=\sum _{k=2}^{N}\sum _{j=1}^{k-1}(\mathbf {F} _{jk}\cdot \mathbf {r} _{k}-\mathbf {F} _{jk}\cdot \mathbf {r} _{j})=\sum _{k=2}^{N}\sum _{j=1}^{k-1}\mathbf {F} _{jk}\cdot (\mathbf {r} _{k}-\mathbf {r} _{j}),\end{aligned}}} where we have used Newton's third law of motion, i.e., Fjk = −Fkj (equal and opposite reaction).
It often happens that the forces can be derived from a potential energy Vjk that is a function only of the distance rjk between the point particles j and k. Since the force is the negative gradient of the potential energy, we have in this case {\displaystyle \mathbf {F} _{jk}=-\nabla _{\mathbf {r} _{k}}V_{jk}=-{\frac {dV_{jk}}{dr_{jk}}}\left({\frac {\mathbf {r} _{k}-\mathbf {r} _{j}}{r_{jk}}}\right),} which is equal and opposite to Fkj = −∇rjVkj = −∇rjVjk, the force applied by particle k on particle j, as may be confirmed by explicit calculation. Hence, {\displaystyle {\begin{aligned}\sum _{k=1}^{N}\mathbf {F} _{k}\cdot \mathbf {r} _{k}&=\sum _{k=2}^{N}\sum _{j=1}^{k-1}\mathbf {F} _{jk}\cdot (\mathbf {r} _{k}-\mathbf {r} _{j})\\&=-\sum _{k=2}^{N}\sum _{j=1}^{k-1}{\frac {dV_{jk}}{dr_{jk}}}{\frac {|\mathbf {r} _{k}-\mathbf {r} _{j}|^{2}}{r_{jk}}}\\&=-\sum _{k=2}^{N}\sum _{j=1}^{k-1}{\frac {dV_{jk}}{dr_{jk}}}r_{jk}.\end{aligned}}}
Thus {\displaystyle {\frac {dG}{dt}}=2T+\sum _{k=1}^{N}\mathbf {F} _{k}\cdot \mathbf {r} _{k}=2T-\sum _{k=2}^{N}\sum _{j=1}^{k-1}{\frac {dV_{jk}}{dr_{jk}}}r_{jk}.}
In a common special case, the potential energy V between two particles is proportional to a power n of their distance rjk: {\displaystyle V_{jk}=\alpha r_{jk}^{n},} where the coefficient α and the exponent n are constants. In such cases, the virial is {\displaystyle {\begin{aligned}-{\frac {1}{2}}\,\sum _{k=1}^{N}\mathbf {F} _{k}\cdot \mathbf {r} _{k}&={\frac {1}{2}}\,\sum _{k=1}^{N}\sum _{j<k}{\frac {dV_{jk}}{dr_{jk}}}r_{jk}\\&={\frac {1}{2}}\,\sum _{k=1}^{N}\sum _{j<k}n\alpha r_{jk}^{n-1}r_{jk}\\&={\frac {1}{2}}\,\sum _{k=1}^{N}\sum _{j<k}nV_{jk}={\frac {n}{2}}\,V_{\text{TOT}},\end{aligned}}} where {\displaystyle V_{\text{TOT}}=\sum _{k=1}^{N}\sum _{j<k}V_{jk}} is the total potential energy of the system.
Thus {\displaystyle {\frac {dG}{dt}}=2T+\sum _{k=1}^{N}\mathbf {F} _{k}\cdot \mathbf {r} _{k}=2T-nV_{\text{TOT}}.}
For gravitating systems the exponent n equals −1, giving Lagrange's identity {\displaystyle {\frac {dG}{dt}}={\frac {1}{2}}{\frac {d^{2}I}{dt^{2}}}=2T+V_{\text{TOT}},} which was derived by Joseph-Louis Lagrange and extended by Carl Jacobi.
The average of this derivative over a duration τ is defined as {\displaystyle \left\langle {\frac {dG}{dt}}\right\rangle _{\tau }={\frac {1}{\tau }}\int _{0}^{\tau }{\frac {dG}{dt}}\,dt={\frac {1}{\tau }}\int _{G(0)}^{G(\tau )}\,dG={\frac {G(\tau )-G(0)}{\tau }},} from which we obtain the exact equation {\displaystyle \left\langle {\frac {dG}{dt}}\right\rangle _{\tau }=2\langle T\rangle _{\tau }+\sum _{k=1}^{N}\langle \mathbf {F} _{k}\cdot \mathbf {r} _{k}\rangle _{\tau }.}
The virial theorem states that if ⟨dG/dt⟩τ = 0, then {\displaystyle 2\langle T\rangle _{\tau }=-\sum _{k=1}^{N}\langle \mathbf {F} _{k}\cdot \mathbf {r} _{k}\rangle _{\tau }.}
There are many reasons why the average of the time derivative might vanish. One often-cited reason applies to stably bound systems, that is, to systems that hang together forever and whose parameters are finite. In this case, velocities and coordinates of the particles of the system have upper and lower limits, so that G is bounded between two extremes, Gmin and Gmax, and the average goes to zero in the limit of infinite τ: {\displaystyle \lim _{\tau \to \infty }\left|\left\langle {\frac {dG^{\text{bound}}}{dt}}\right\rangle _{\tau }\right|=\lim _{\tau \to \infty }\left|{\frac {G(\tau )-G(0)}{\tau }}\right|\leq \lim _{\tau \to \infty }{\frac {G_{\max }-G_{\min }}{\tau }}=0.}
Even if the average of the time derivative of G is only approximately zero, the virial theorem holds to the same degree of approximation.
For power-law forces with an exponent n, the general equation holds: {\displaystyle \langle T\rangle _{\tau }=-{\frac {1}{2}}\sum _{k=1}^{N}\langle \mathbf {F} _{k}\cdot \mathbf {r} _{k}\rangle _{\tau }={\frac {n}{2}}\langle V_{\text{TOT}}\rangle _{\tau }.}
For gravitational attraction, n = −1, and the average kinetic energy equals half of the average negative potential energy: {\displaystyle \langle T\rangle _{\tau }=-{\frac {1}{2}}\langle V_{\text{TOT}}\rangle _{\tau }.}
This general result is useful for complex gravitating systems such as planetary systems or galaxies.
A simple application of the virial theorem concerns galaxy clusters. If a region of space is unusually full of galaxies, it is safe to assume that they have been together for a long time, and the virial theorem can be applied. Doppler effect measurements give lower bounds for their relative velocities, and the virial theorem gives a lower bound for the total mass of the cluster, including any dark matter.
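The order of magnitude of such a mass estimate is easy to reproduce. With 2⟨T⟩ = −⟨V⟩, a cluster of total mass M, velocity dispersion σv and radius R gives M ~ σv²R/G up to a prefactor of order unity that depends on the assumed density profile. The numbers below are illustrative round values (σv ~ 1000 km/s, R ~ 1 Mpc), not observational data for any particular cluster.

```python
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30     # solar mass, kg
MPC = 3.086e22       # one megaparsec, m

def virial_mass(sigma_v, radius):
    """Order-of-magnitude cluster mass from the virial theorem,
    M ~ sigma_v^2 * R / G (profile-dependent prefactor omitted)."""
    return sigma_v ** 2 * radius / G

# Illustrative inputs: sigma_v ~ 1000 km/s, R ~ 1 Mpc
mass = virial_mass(1.0e6, MPC)
# The result lands around 10^14-10^15 solar masses, far above the
# luminous mass -- the core of Zwicky's original dark-matter argument.
```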
If the ergodic hypothesis holds for the system under consideration, the averaging need not be taken over time; an ensemble average can also be taken, with equivalent results.
Although originally derived for classical mechanics, the virial theorem also holds for quantum mechanics, as first shown by Vladimir Fock[5] using the Ehrenfest theorem.
Evaluate the commutator of the Hamiltonian {\displaystyle H=V{\bigl (}\{X_{i}\}{\bigr )}+\sum _{n}{\frac {P_{n}^{2}}{2m_{n}}}} with the position operator Xn and the momentum operator {\displaystyle P_{n}=-i\hbar {\frac {d}{dX_{n}}}} of particle n: {\displaystyle [H,X_{n}P_{n}]=X_{n}[H,P_{n}]+[H,X_{n}]P_{n}=i\hbar X_{n}{\frac {dV}{dX_{n}}}-i\hbar {\frac {P_{n}^{2}}{m_{n}}}.}
Summing over all particles, one finds that for {\displaystyle Q=\sum _{n}X_{n}P_{n}} the commutator is {\displaystyle {\frac {i}{\hbar }}[H,Q]=2T-\sum _{n}X_{n}{\frac {dV}{dX_{n}}},} where {\textstyle T=\sum _{n}P_{n}^{2}/2m_{n}} is the kinetic energy. The left-hand side of this equation is just dQ/dt, according to the Heisenberg equation of motion. The expectation value ⟨dQ/dt⟩ of this time derivative vanishes in a stationary state, leading to the quantum virial theorem: {\displaystyle 2\langle T\rangle =\sum _{n}\left\langle X_{n}{\frac {dV}{dX_{n}}}\right\rangle .}
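The quantum virial theorem can be verified numerically by diagonalizing a discretized one-dimensional Hamiltonian. The sketch below is an illustration, not from the source: the quartic potential V(x) = x⁴/4, the grid size, and the box length are arbitrary choices (units with ħ = m = 1).

```python
import numpy as np

# Grid discretization of H = -(1/2) d^2/dx^2 + V(x) with hbar = m = 1.
# V(x) = x^4/4 is an arbitrary confining potential chosen for illustration.
n, L = 800, 16.0
x = np.linspace(-L / 2, L / 2, n)
h = x[1] - x[0]
V = 0.25 * x**4

# Kinetic operator from the 3-point finite-difference Laplacian.
T_op = (np.diag(np.full(n, 1.0))
        - 0.5 * np.diag(np.ones(n - 1), 1)
        - 0.5 * np.diag(np.ones(n - 1), -1)) / h**2

H = T_op + np.diag(V)
_, evecs = np.linalg.eigh(H)
psi = evecs[:, 0] / np.sqrt(h)        # ground state, normalized so that sum |psi|^2 h = 1

T_exp = h * psi @ (T_op @ psi)        # <T>
xVprime = h * np.sum(psi**2 * x**4)   # <x V'(x)> with V'(x) = x^3
print(2 * T_exp, xVprime)             # should agree: 2<T> = <x V'>
```

For this power-law potential (n = 4) the identity 2⟨T⟩ = ⟨x V′⟩ = 4⟨V⟩ is just the stationary-state virial relation above.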
In quantum mechanics there is another form of the virial theorem, applicable to localized solutions of the stationary nonlinear Schrödinger equation or Klein–Gordon equation: Pokhozhaev's identity,[6] also known as Derrick's theorem.
Let {\displaystyle g(s)} be continuous and real-valued, with {\displaystyle g(0)=0}.
Denote {\textstyle G(s)=\int _{0}^{s}g(t)\,dt}.
Let {\displaystyle u\in L_{\text{loc}}^{\infty }(\mathbb {R} ^{n}),\quad \nabla u\in L^{2}(\mathbb {R} ^{n}),\quad G(u(\cdot ))\in L^{1}(\mathbb {R} ^{n}),\quad n\in \mathbb {N} } be a solution, in the sense of distributions, of the equation {\displaystyle -\nabla ^{2}u=g(u).}
Then {\displaystyle u} satisfies the relation {\displaystyle \left({\frac {n-2}{2}}\right)\int _{\mathbb {R} ^{n}}|\nabla u(x)|^{2}\,dx=n\int _{\mathbb {R} ^{n}}G{\big (}u(x){\big )}\,dx.}
For a single particle in special relativity, it is not the case that T = ½p·v. Instead, T = (γ − 1)mc², where γ is the Lorentz factor
{\displaystyle \gamma ={\frac {1}{\sqrt {1-{\frac {v^{2}}{c^{2}}}}}},} and β = v/c. We have {\displaystyle {\begin{aligned}{\frac {1}{2}}\mathbf {p} \cdot \mathbf {v} &={\frac {1}{2}}{\boldsymbol {\beta }}\gamma mc\cdot {\boldsymbol {\beta }}c\\&={\frac {1}{2}}\gamma \beta ^{2}mc^{2}\\[5pt]&=\left({\frac {\gamma \beta ^{2}}{2(\gamma -1)}}\right)T.\end{aligned}}} The last expression can be simplified to {\displaystyle \left({\frac {1+{\sqrt {1-\beta ^{2}}}}{2}}\right)T=\left({\frac {\gamma +1}{2\gamma }}\right)T.} Thus, under the conditions described in earlier sections (including Newton's third law of motion, F_jk = −F_kj, despite relativity), the time average for N particles with a power-law potential is {\displaystyle {\frac {n}{2}}\left\langle V_{\text{TOT}}\right\rangle _{\tau }=\left\langle \sum _{k=1}^{N}\left({\tfrac {1+{\sqrt {1-\beta _{k}^{2}}}}{2}}\right)T_{k}\right\rangle _{\tau }=\left\langle \sum _{k=1}^{N}\left({\frac {\gamma _{k}+1}{2\gamma _{k}}}\right)T_{k}\right\rangle _{\tau }.} In particular, the ratio of kinetic energy to potential energy is no longer fixed, but necessarily falls into the interval {\displaystyle {\frac {2\langle T_{\text{TOT}}\rangle }{n\langle V_{\text{TOT}}\rangle }}\in [1,2],} where more relativistic systems exhibit larger ratios.
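The coefficient (γ + 1)/(2γ) relating ½p·v to T can be checked directly. A small illustrative script (the sampled β values are arbitrary choices, not from the source):

```python
import math

def factor(beta):
    """(gamma+1)/(2*gamma): the coefficient relating (1/2) p.v to the
    relativistic kinetic energy T = (gamma - 1) m c^2."""
    gamma = 1.0 / math.sqrt(1.0 - beta**2)
    return (gamma + 1.0) / (2.0 * gamma)

def half_p_dot_v_over_T(beta):
    """Direct ratio (1/2 p.v)/T = (1/2 gamma beta^2 mc^2)/((gamma-1) mc^2);
    the mc^2 factors cancel, so no mass is needed."""
    gamma = 1.0 / math.sqrt(1.0 - beta**2)
    return (0.5 * gamma * beta**2) / (gamma - 1.0)

for beta in (0.01, 0.5, 0.9, 0.999):
    print(beta, factor(beta), half_p_dot_v_over_T(beta))
# The factor tends to 1 in the Newtonian limit (beta -> 0) and to 1/2 in the
# ultrarelativistic limit, which is where the interval [1, 2] for
# 2<T_TOT>/(n<V_TOT>) comes from.
```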
The virial theorem has a particularly simple form for periodic motion. It can be used to perform perturbative calculations for nonlinear oscillators.[7]
It can also be used to study motion in a central potential.[4] If the central potential is of the form {\displaystyle U\propto r^{n}}, the virial theorem simplifies to {\displaystyle \langle T\rangle ={\frac {n}{2}}\langle U\rangle }.[citation needed] In particular, for gravitational or electrostatic (Coulomb) attraction, {\displaystyle \langle T\rangle =-{\frac {1}{2}}\langle U\rangle }.
Analysis based on Sivardiere, 1986.[7] For a one-dimensional oscillator with mass m, position x, driving force F cos(ωt), spring constant k, and damping coefficient γ, the equation of motion is {\displaystyle m\underbrace {\frac {d^{2}x}{dt^{2}}} _{\text{acceleration}}=\underbrace {-kx{\vphantom {\frac {d}{d}}}} _{\text{spring}}\ \underbrace {-\ \gamma {\frac {dx}{dt}}} _{\text{friction}}\ \underbrace {+\ F\cos(\omega t){\vphantom {\frac {d}{d}}}} _{\text{external driving}}.}
When the oscillator has reached a steady state, it performs a stable oscillation {\displaystyle x=X\cos(\omega t+\varphi )}, where {\displaystyle X} is the amplitude and {\displaystyle \varphi } is the phase angle.
Applying the virial theorem, we have {\displaystyle m\langle {\dot {x}}{\dot {x}}\rangle =k\langle xx\rangle +\gamma \langle x{\dot {x}}\rangle -F\langle \cos(\omega t)x\rangle }, which simplifies to {\displaystyle F\cos(\varphi )=m(\omega _{0}^{2}-\omega ^{2})X}, where {\displaystyle \omega _{0}={\sqrt {k/m}}} is the natural frequency of the oscillator.
To solve for the two unknowns, we need another equation. In steady state, the power lost per cycle equals the power gained per cycle: {\displaystyle \underbrace {\langle {\dot {x}}\,\gamma {\dot {x}}\rangle } _{\text{power dissipated}}=\underbrace {\langle {\dot {x}}\,F\cos \omega t\rangle } _{\text{power input}},} which simplifies to {\displaystyle \sin \varphi =-{\frac {\gamma X\omega }{F}}}.
Now we have two equations that yield the solution {\displaystyle {\begin{cases}X={\sqrt {\dfrac {F^{2}}{\gamma ^{2}\omega ^{2}+m^{2}(\omega _{0}^{2}-\omega ^{2})^{2}}}},\\\tan \varphi =-{\dfrac {\gamma \omega }{m(\omega _{0}^{2}-\omega ^{2})}}.\end{cases}}}
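As a sketch (all parameter values below are arbitrary assumptions, not from the source), the closed-form amplitude can be checked against a direct numerical integration of the equation of motion:

```python
import math
import numpy as np

# Assumed illustrative parameters.
m, k, gamma, F, omega = 1.0, 4.0, 0.3, 1.0, 1.5
omega0 = math.sqrt(k / m)

# Closed-form steady-state amplitude from the solution above.
X = math.sqrt(F**2 / (gamma**2 * omega**2 + m**2 * (omega0**2 - omega**2)**2))

# Direct integration of m x'' = -k x - gamma x' + F cos(omega t)
# with semi-implicit Euler; measure the amplitude after the transient.
dt, t_end, t_tail = 1e-3, 200.0, 150.0
x, v, t = 0.0, 0.0, 0.0
tail = []
for _ in range(int(t_end / dt)):
    a = (-k * x - gamma * v + F * math.cos(omega * t)) / m
    v += dt * a
    x += dt * v
    t += dt
    if t > t_tail:                 # the transient has decayed by then
        tail.append(x)
amp = float(np.max(np.abs(tail)))
print(X, amp)                      # analytic vs simulated amplitude
```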
Consider a container filled with an ideal gas consisting of point masses. The only forces applied to the point masses are due to the container walls. In this case, the expression in the virial theorem equals {\displaystyle {\Big \langle }\sum _{i}\mathbf {F} _{i}\cdot \mathbf {r} _{i}{\Big \rangle }=-P\oint {\hat {\mathbf {n} }}\cdot \mathbf {r} \,dA,} since, by definition, the pressure P is the average force per unit area exerted by the gas on the walls, directed normal to the wall. The minus sign appears because {\displaystyle {\hat {\mathbf {n} }}} is the outward unit normal vector and the force to be used is the one exerted on the particles by the wall.
Then the virial theorem states that {\displaystyle \langle T\rangle ={\frac {P}{2}}\oint {\hat {\mathbf {n} }}\cdot \mathbf {r} \,dA.} By the divergence theorem, {\textstyle \oint {\hat {\mathbf {n} }}\cdot \mathbf {r} \,dA=\int \nabla \cdot \mathbf {r} \,dV=3\int dV=3V}.
From equipartition, the average total kinetic energy is {\textstyle \langle T\rangle =N{\big \langle }{\frac {1}{2}}mv^{2}{\big \rangle }=N\cdot {\frac {3}{2}}kT}. Hence {\displaystyle PV=NkT}, the ideal gas law.[8]
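This route to the ideal gas law can be illustrated with a toy simulation, a minimal sketch under assumed units (m = kT = 1; box size, particle count, and step count are arbitrary): non-interacting particles bounce specularly off the walls of a cubic box, the wall pressure is measured from the accumulated impulses, and PV comes out close to NkT.

```python
import numpy as np

rng = np.random.default_rng(0)
N, L, m = 500, 1.0, 1.0
dt, steps = 1e-3, 20_000
pos = rng.uniform(0.0, L, size=(N, 3))
vel = rng.normal(0.0, 1.0, size=(N, 3))   # Maxwellian components, kT/m = 1

impulse = 0.0
for _ in range(steps):
    pos += vel * dt
    # Specular reflection off the six walls; accumulate |dp| at each bounce.
    for axis in range(3):
        lo = pos[:, axis] < 0.0
        hi = pos[:, axis] > L
        impulse += 2 * m * np.abs(vel[lo, axis]).sum()
        impulse += 2 * m * np.abs(vel[hi, axis]).sum()
        vel[lo, axis] *= -1
        vel[hi, axis] *= -1
        pos[lo, axis] *= -1
        pos[hi, axis] = 2 * L - pos[hi, axis]

P = impulse / (steps * dt) / (6 * L**2)   # average force per unit wall area
print(P * L**3, N * 1.0)                  # PV vs N kT (kT = 1 here)
```

The agreement is statistical, limited by the finite sample of velocities and the finite averaging time.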
In 1933, Fritz Zwicky applied the virial theorem to estimate the mass of the Coma Cluster, and discovered a mass discrepancy of a factor of about 450, which he explained as due to "dark matter".[9] He refined the analysis in 1937, finding a discrepancy of about 500.[10][11]
He approximated the Coma cluster as a spherical "gas" of N stars of roughly equal mass m, which gives {\textstyle \langle T\rangle ={\frac {1}{2}}Nm\langle v^{2}\rangle }. The total gravitational potential energy of the cluster is {\displaystyle U=-\sum _{i<j}{\frac {Gm^{2}}{r_{i,j}}}}, giving {\textstyle \langle U\rangle =-Gm^{2}\sum _{i<j}\langle {1}/{r_{i,j}}\rangle }. Assuming the motions of the stars are statistically identical over a long enough time (ergodicity), {\textstyle \langle U\rangle =-{\frac {1}{2}}N^{2}Gm^{2}\langle {1}/{r}\rangle }.
Zwicky estimated {\displaystyle \langle U\rangle } as the gravitational potential energy of a uniform ball of constant density, giving {\textstyle \langle U\rangle =-{\frac {3}{5}}{\frac {GN^{2}m^{2}}{R}}}.
So by the virial theorem, the total mass of the cluster is {\displaystyle Nm={\frac {5\langle v^{2}\rangle }{3G\langle {\frac {1}{r}}\rangle }}}.
Zwicky (1933)[9] estimated that there are N = 800 galaxies in the cluster, each having observed stellar mass {\displaystyle m=10^{9}M_{\odot }} (suggested by Hubble), and that the cluster has radius {\displaystyle R=10^{6}{\text{ly}}}. He also measured the radial velocities of the galaxies by Doppler shifts in galactic spectra to be {\displaystyle \langle v_{r}^{2}\rangle =(1000{\text{km/s}})^{2}}. Assuming equipartition of kinetic energy, {\displaystyle \langle v^{2}\rangle =3\langle v_{r}^{2}\rangle }.
By the virial theorem, the total mass of the cluster should be {\displaystyle {\frac {5R\langle v_{r}^{2}\rangle }{G}}\approx 3.6\times 10^{14}M_{\odot }}. However, the observed mass is {\displaystyle Nm=8\times 10^{11}M_{\odot }}, meaning the virial mass is about 450 times the observed mass.
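Zwicky's arithmetic can be reproduced with modern SI values of the constants; the cluster inputs below are the assumed values stated above.

```python
# Reproducing Zwicky's order-of-magnitude estimate with SI constants.
G = 6.674e-11           # m^3 kg^-1 s^-2
M_sun = 1.989e30        # kg
ly = 9.461e15           # m

R = 1e6 * ly            # assumed cluster radius, 10^6 light-years
vr2 = (1.0e6) ** 2      # <v_r^2> for 1000 km/s, in (m/s)^2

M_virial = 5 * R * vr2 / G       # virial mass of the cluster
M_observed = 800 * 1e9 * M_sun   # 800 galaxies of 1e9 solar masses

print(M_virial / M_sun)          # ~3.6e14 solar masses
print(M_virial / M_observed)     # ~450: Zwicky's mass discrepancy
```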
Lord Rayleigh published a generalization of the virial theorem in 1900,[12] which was partially reprinted in 1903.[13] Henri Poincaré proved and applied a form of the virial theorem in 1911 to the problem of formation of the Solar System from a proto-stellar cloud (then known as cosmogony).[14] A variational form of the virial theorem was developed in 1945 by Ledoux.[15] A tensor form of the virial theorem was developed by Parker,[16] Chandrasekhar[17] and Fermi.[18] The following generalization of the virial theorem was established by Pollard in 1964 for the case of the inverse square law:[19][20][failed verification] {\displaystyle 2\lim _{\tau \to +\infty }\langle T\rangle _{\tau }=\lim _{\tau \to +\infty }\langle U\rangle _{\tau }\quad {\text{if and only if}}\quad \lim _{\tau \to +\infty }{\tau }^{-2}I(\tau )=0.} Otherwise, a boundary term must be added.[21]
The virial theorem can be extended to include electric and magnetic fields. The result is[22]
{\displaystyle {\frac {1}{2}}{\frac {d^{2}I}{dt^{2}}}+\int _{V}x_{k}{\frac {\partial G_{k}}{\partial t}}\,d^{3}r=2(T+U)+W^{\mathrm {E} }+W^{\mathrm {M} }-\int x_{k}(p_{ik}+T_{ik})\,dS_{i},}
where I is the moment of inertia, G is the momentum density of the electromagnetic field, T is the kinetic energy of the "fluid", U is the random "thermal" energy of the particles, and W^E and W^M are the electric and magnetic energy content of the volume considered. Finally, p_ik is the fluid-pressure tensor expressed in the local moving coordinate system
{\displaystyle p_{ik}=\Sigma n^{\sigma }m^{\sigma }\langle v_{i}v_{k}\rangle ^{\sigma }-V_{i}V_{k}\Sigma m^{\sigma }n^{\sigma },}
andTikis theelectromagnetic stress tensor,
{\displaystyle T_{ik}=\left({\frac {\varepsilon _{0}E^{2}}{2}}+{\frac {B^{2}}{2\mu _{0}}}\right)\delta _{ik}-\left(\varepsilon _{0}E_{i}E_{k}+{\frac {B_{i}B_{k}}{\mu _{0}}}\right).}
A plasmoid is a finite configuration of magnetic fields and plasma. With the virial theorem it is easy to see that any such configuration will expand if not contained by external forces. In a finite configuration without pressure-bearing walls or magnetic coils, the surface integral will vanish. Since all the other terms on the right-hand side are positive, the acceleration of the moment of inertia will also be positive. It is also easy to estimate the expansion time τ. If a total mass M is confined within a radius R, then the moment of inertia is roughly MR², and the left-hand side of the virial theorem is MR²/τ². The terms on the right-hand side add up to about pR³, where p is the larger of the plasma pressure or the magnetic pressure. Equating these two terms and solving for τ, we find
{\displaystyle \tau \,\sim {\frac {R}{c_{\mathrm {s} }}},}
where c_s is the speed of the ion acoustic wave (or the Alfvén wave, if the magnetic pressure is higher than the plasma pressure). Thus the lifetime of a plasmoid is expected to be on the order of the acoustic (or Alfvén) transit time.
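For a rough sense of scale, the estimate τ ~ R/c_s can be evaluated numerically. This is a hedged sketch: the plasmoid radius and temperature below are illustrative assumptions, not values from the source, and c_s is taken in its simplest isothermal form √(k_B T/m_i).

```python
import math

# Order-of-magnitude plasmoid expansion time tau ~ R / c_s.
k_B = 1.381e-23      # J/K
m_i = 1.673e-27      # kg, proton mass

R = 0.1              # m, assumed plasmoid radius
T_e = 1.16e7         # K (~1 keV temperature, assumed)

c_s = math.sqrt(k_B * T_e / m_i)   # ion acoustic speed, simplest form
tau = R / c_s
print(c_s, tau)      # c_s ~ 3e5 m/s, so tau is a fraction of a microsecond
```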
When the pressure field, the electromagnetic and gravitational fields, and the particle acceleration field of a physical system are all taken into account, the virial theorem is written in the relativistic form as follows:[23]
⟨Wk⟩≈−0.6∑k=1N⟨Fk⋅rk⟩,{\displaystyle \left\langle W_{k}\right\rangle \approx -0.6\sum _{k=1}^{N}\langle \mathbf {F} _{k}\cdot \mathbf {r} _{k}\rangle ,}
where the value W_k ≈ γ_c T exceeds the kinetic energy of the particles T by a factor equal to the Lorentz factor γ_c of the particles at the center of the system. Under normal conditions we can assume that γ_c ≈ 1, so in the virial theorem the kinetic energy is related to the potential energy not by the coefficient 1/2 but by a coefficient close to 0.6. The difference from the classical case arises from considering the pressure field and the field of particle acceleration inside the system, while the derivative of the scalar G is not equal to zero and should be treated as a material derivative.
An analysis of the generalized integral virial theorem makes it possible to find, on the basis of field theory, a formula for the root-mean-square speed of typical particles of a system without using the notion of temperature:[24]
vrms=c1−4πηρ0r2c2γc2sin2(rc4πηρ0),{\displaystyle v_{\mathrm {rms} }=c{\sqrt {1-{\frac {4\pi \eta \rho _{0}r^{2}}{c^{2}\gamma _{c}^{2}\sin ^{2}\left({\frac {r}{c}}{\sqrt {4\pi \eta \rho _{0}}}\right)}}}},}
where {\displaystyle ~c} is the speed of light, {\displaystyle ~\eta } is the acceleration field constant, {\displaystyle ~\rho _{0}} is the mass density of particles, and {\displaystyle ~r} is the current radius.
Unlike the virial theorem for particles, for the electromagnetic field the virial theorem is written as follows:[25] {\displaystyle ~E_{kf}+2W_{f}=0,} where the energy {\textstyle ~E_{kf}=\int A_{\alpha }j^{\alpha }{\sqrt {-g}}\,dx^{1}\,dx^{2}\,dx^{3}} is considered as the kinetic field energy associated with the four-current {\displaystyle j^{\alpha }}, and {\displaystyle ~W_{f}={\frac {1}{4\mu _{0}}}\int F_{\alpha \beta }F^{\alpha \beta }{\sqrt {-g}}\,dx^{1}\,dx^{2}\,dx^{3}} sets the potential field energy found through the components of the electromagnetic tensor.
The virial theorem is frequently applied in astrophysics, especially relating the gravitational potential energy of a system to its kinetic or thermal energy. Some common virial relations are[citation needed] {\displaystyle {\frac {3}{5}}{\frac {GM}{R}}={\frac {3}{2}}{\frac {k_{\mathrm {B} }T}{m_{\mathrm {p} }}}={\frac {1}{2}}v^{2}} for a mass M, radius R, velocity v, and temperature T. The constants are Newton's constant G, the Boltzmann constant k_B, and the proton mass m_p. Note that these relations are only approximate, and often the leading numerical factors (e.g. 3/5 or 1/2) are neglected entirely.
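These relations can be turned into order-of-magnitude numbers. The sketch below is illustrative only: the cluster mass, radius, and Hubble-independent constants are assumed values, and the factors 3/5 and 3/2 are kept as written above.

```python
# Virial temperature and velocity from (3/5)GM/R = (3/2)k_B T/m_p = (1/2)v^2,
# for an assumed galaxy-cluster mass and radius (illustrative values).
G = 6.674e-11        # m^3 kg^-1 s^-2
k_B = 1.381e-23      # J/K
m_p = 1.673e-27      # kg
M_sun = 1.989e30     # kg
Mpc = 3.086e22       # m

M = 1e15 * M_sun     # assumed cluster mass
R = 1.5 * Mpc        # assumed cluster radius

T_vir = (2.0 / 5.0) * G * M * m_p / (k_B * R)     # solve for T
v_vir = (6.0 * G * M / (5.0 * R)) ** 0.5          # solve for v
print(T_vir, v_vir)  # ~1e8 K and ~2e6 m/s, typical hot-cluster scales
```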
In astronomy, the mass and size of a galaxy (or general overdensity) are often defined in terms of the "virial mass" and "virial radius", respectively. Because galaxies and overdensities in continuous fluids can be highly extended (even to infinity in some models, such as an isothermal sphere), it can be hard to define specific, finite measures of their mass and size. The virial theorem and related concepts provide an often convenient means by which to quantify these properties.
In galaxy dynamics, the mass of a galaxy is often inferred by measuring the rotation velocity of its gas and stars, assuming circular Keplerian orbits. Using the virial theorem, the velocity dispersion σ can be used in a similar way. Taking the kinetic energy (per particle) of the system as T = ½v² ~ (3/2)σ², and the potential energy (per particle) as U ~ (3/5)GM/R, we can write
GMR≈σ2.{\displaystyle {\frac {GM}{R}}\approx \sigma ^{2}.}
Here {\displaystyle R} is the radius at which the velocity dispersion is being measured, and M is the mass within that radius. The virial mass and radius are generally defined for the radius at which the velocity dispersion is a maximum, i.e.
{\displaystyle {\frac {GM_{\text{vir}}}{R_{\text{vir}}}}\approx \sigma _{\max }^{2}.}
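A quick numerical sketch of this mass estimate, with an assumed dispersion and radius (illustrative values, not from the source):

```python
# Dynamical mass from M ~ sigma^2 R / G, dropping order-unity factors.
G = 6.674e-11        # m^3 kg^-1 s^-2
M_sun = 1.989e30     # kg
kpc = 3.086e19       # m

sigma = 200e3        # m/s, assumed stellar velocity dispersion
R = 10 * kpc         # assumed radius at which sigma is measured

M_vir = sigma**2 * R / G
print(M_vir / M_sun)   # ~1e11 solar masses, a galaxy-scale mass
```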
As numerous approximations have been made, in addition to the approximate nature of these definitions, order-unity proportionality constants are often omitted (as in the above equations). These relations are thus only accurate in anorder of magnitudesense, or when used self-consistently.
An alternate definition of the virial mass and radius is often used in cosmology, where it refers to the radius of a sphere, centered on a galaxy or a galaxy cluster, within which virial equilibrium holds. Since this radius is difficult to determine observationally, it is often approximated as the radius within which the average density is greater, by a specified factor, than the critical density {\displaystyle \rho _{\text{crit}}={\frac {3H^{2}}{8\pi G}}}, where H is the Hubble parameter and G is the gravitational constant. A common choice for the factor is 200, which corresponds roughly to the typical over-density in spherical top-hat collapse (see Virial mass), in which case the virial radius is approximated as {\displaystyle r_{\text{vir}}\approx r_{200}=r,\qquad \rho =200\cdot \rho _{\text{crit}}.} The virial mass is then defined relative to this radius as {\displaystyle M_{\text{vir}}\approx M_{200}={\frac {4}{3}}\pi r_{200}^{3}\cdot 200\rho _{\text{crit}}.}
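The r_200/M_200 relation can be evaluated directly. This sketch assumes a Hubble parameter of 70 km/s/Mpc and a 10¹⁵ solar-mass cluster, both illustrative choices rather than values from the source:

```python
import math

# r_200 from M_200 = (4/3) pi r_200^3 * 200 rho_crit, inverted for r_200.
G = 6.674e-11        # m^3 kg^-1 s^-2
M_sun = 1.989e30     # kg
Mpc = 3.086e22       # m
H = 70 * 1e3 / Mpc   # assumed Hubble parameter, 70 km/s/Mpc in SI

rho_crit = 3 * H**2 / (8 * math.pi * G)

M200 = 1e15 * M_sun  # assumed cluster virial mass
r200 = (3 * M200 / (4 * math.pi * 200 * rho_crit)) ** (1.0 / 3.0)
print(rho_crit, r200 / Mpc)   # ~9e-27 kg/m^3 and ~2 Mpc
```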
The virial theorem is applicable to the cores of stars, by establishing a relation between gravitational potential energy and thermal kinetic energy (i.e. temperature). As stars on the main sequence convert hydrogen into helium in their cores, the mean molecular weight of the core increases and it must contract to maintain enough pressure to support its own weight. This contraction decreases its potential energy and, the virial theorem states, increases its thermal energy. The core temperature increases even as energy is lost, effectively a negative specific heat.[26] This continues beyond the main sequence, unless the core becomes degenerate, since then the pressure becomes independent of temperature and the virial relation with n = −1 no longer holds.[27]
https://en.wikipedia.org/wiki/Virial_theorem
Open-source hardware (OSH, OSHW) consists of physical artifacts of technology designed and offered by the open-design movement. Both free and open-source software (FOSS) and open-source hardware are created by this open-source culture movement, which applies a like concept to a variety of components. It is sometimes, thus, referred to as free and open-source hardware (FOSH), meaning that the design is easily available ("open") and that it can be used, modified and shared freely ("free").[citation needed] The term usually means that information about the hardware is easily discerned so that others can make it – coupling it closely to the maker movement.[1] Hardware design (i.e. mechanical drawings, schematics, bills of material, PCB layout data, HDL source code[2] and integrated circuit layout data), in addition to the software that drives the hardware, are all released under free/libre terms. The original sharer gains feedback and potentially improvements on the design from the FOSH community. There is now significant evidence that such sharing can drive a high return on investment for the scientific community.[3]
It is not enough to merely use an open-source license; an open-source product or project will follow open-source principles, such as modular design and community collaboration.[4][5][6]
Since the rise of reconfigurable programmable logic devices, the sharing of logic designs has been a form of open-source hardware. Instead of schematics, hardware description language (HDL) code is shared. HDL descriptions are commonly used to set up system-on-a-chip systems either in field-programmable gate arrays (FPGAs) or directly in application-specific integrated circuit (ASIC) designs. HDL modules, when distributed, are called semiconductor intellectual property cores, also known as IP cores.
Open-source hardware also helps alleviate the issue of proprietary device drivers for the free and open-source software community; however, it is not a prerequisite for it, and should not be confused with the concept of open documentation for proprietary hardware, which is already sufficient for writing FLOSS device drivers and complete operating systems.[7][8] The difference between the two concepts is that OSH includes both the instructions on how to replicate the hardware itself and the information on communication protocols that the software (usually in the form of device drivers) must use in order to communicate with the hardware (often called register documentation, or open documentation for hardware[7]), whereas open-source-friendly proprietary hardware would only include the latter without the former.
The first hardware-focused "open source" activities were started around 1997 by Bruce Perens, creator of the Open Source Definition, co-founder of the Open Source Initiative, and a ham radio operator. He launched the Open Hardware Certification Program, which had the goal of allowing hardware manufacturers to self-certify their products as open.[9][10]
Shortly after the launch of the Open Hardware Certification Program, David Freeman announced the Open Hardware Specification Project (OHSpec), another attempt at licensing hardware components whose interfaces are available publicly and at creating an entirely new computing platform as an alternative to proprietary computing systems.[11] In early 1999, Sepehr Kiani, Ryan Vallance and Samir Nayfeh joined efforts to apply the open-source philosophy to machine design applications. Together they established the Open Design Foundation (ODF)[12] as a non-profit corporation and set out to develop an Open Design Definition. However, most of these activities faded out after a few years.
A "Free Hardware" organization, known as FreeIO, was started in the late 1990s by Diehl Martin, who launched a FreeIO website in early 2000. In the early to mid 2000s, FreeIO was a focus of free/open hardware designs released under the GNU General Public License. The FreeIO project advocated the concept of Free Hardware and proposed four freedoms that such hardware provided to users, based on the similar freedoms provided by free software licenses.[13] The designs gained some renown due to Martin's naming scheme, in which each free hardware project was given the name of a breakfast food such as Donut, Flapjack, or Toast. Martin's projects attracted a variety of hardware and software developers as well as other volunteers. Development of new open hardware designs at FreeIO ended in 2007 when Martin died of pancreatic cancer, but the existing designs remain available from the organization's website.[14]
By the mid-2000s, open-source hardware again became a hub of activity due to the emergence of several major open-source hardware projects and companies, such as OpenCores, RepRap (3D printing), Arduino, Adafruit, SparkFun, and Open Source Ecology. In 2007, Perens reactivated the openhardware.org website, but it is currently (February 2025) inactive.
Following the Open Graphics Project, an effort to design, implement, and manufacture a free and open 3D graphics chip set and reference graphics card, Timothy Miller suggested the creation of an organization to safeguard the interests of the Open Graphics Project community. Thus, Patrick McNamara founded the Open Hardware Foundation (OHF) in 2007.[15]
The Tucson Amateur Packet Radio Corporation (TAPR), founded in 1982 as a non-profit organization of amateur radio operators with the goal of supporting R&D efforts in the area of amateur digital communications, created in 2007 the first open hardware license, the TAPR Open Hardware License. The OSI president Eric S. Raymond expressed some concerns about certain aspects of the OHL and decided not to review the license.[16]
Around 2010, in the context of the Freedom Defined project, the Open Hardware Definition was created as the collaborative work of many contributors[17] and, as of 2016, is accepted by dozens of organizations and companies.[18]
In July 2011, CERN (European Organization for Nuclear Research) released an open-source hardware license, the CERN OHL. Javier Serrano, an engineer at CERN's Beams Department and the founder of the Open Hardware Repository, explained: "By sharing designs openly, CERN expects to improve the quality of designs through peer review and to guarantee their users – including commercial companies – the freedom to study, modify and manufacture them, leading to better hardware and less duplication of efforts".[19] While initially drafted to address CERN-specific concerns, such as tracing the impact of the organization's research, in its current form it can be used by anyone developing open-source hardware.[20]
Following the 2011 Open Hardware Summit, and after heated debates on licenses and what constitutes open-source hardware, Bruce Perens abandoned the OSHW Definition and the concerted efforts of those involved with it.[21] Openhardware.org, led by Perens, promoted and identified practices that meet all the combined requirements of the Open Source Hardware Definition, the Open Source Definition, and the Four Freedoms of the Free Software Foundation.[22] Since 2014, openhardware.org has not been online and seems to have ceased activity.[23]
The Open Source Hardware Association (OSHWA) at oshwa.org acts as a hub of open-source hardware activity of all genres, while cooperating with other entities such as TAPR, CERN, and OSI. The OSHWA was established as an organization in June 2012 in Delaware and filed for tax-exemption status in July 2013.[24] After some debates about trademark interference with the OSI, in 2012 the OSHWA and the OSI signed a co-existence agreement.[25][26]
The FOSSi Foundation was founded in 2015 as a UK-based non-profit to promote and protect the open-source silicon chip movement, roughly a year after the official release of the RISC-V architecture.[27]
The Free Software Foundation has suggested an alternative "free hardware" definition, derived from its Four Freedoms.[28][29]
The term hardware in open-source hardware has historically been used in opposition to the term software of open-source software, that is, to refer to the electronic hardware on which the software runs (see previous section). However, as more and more non-electronic hardware products are made open source (for example WikiHouse, OpenBeam or Hovalin), the term tends to revert to its broader sense of "physical product". The field of open-source hardware has been shown to go beyond electronic hardware and to cover a larger range of product categories, such as machine tools, vehicles and medical equipment.[30] In that sense, hardware refers to any form of tangible product, be it electronic hardware, mechanical hardware, textile or even construction hardware. The Open Source Hardware (OSHW) Definition 1.0 defines hardware as "tangible artifacts — machines, devices, or other physical things".[31]
Electronics is one of the most popular types of open-source hardware. PCB-based designs can be published, similarly to software, as CAD files, which users can send directly to PCB fabrication companies to receive hardware in the mail. Alternatively, users can obtain the components and solder them together themselves.
There are many companies that provide large varieties of open-source electronics, such as SparkFun, Adafruit, and Seeed. In addition, there are NPOs and companies that provide a specific open-source electronic component, such as the Arduino electronics prototyping platform. There are many examples of specialty open-source electronics, such as a low-cost open-source voltage and current monitor for GMAW 3-D printers[32][33] and a robotics-assisted mass spectrometry assay platform.[34][35] Open-source electronics finds various uses, including the automation of chemical procedures.[36][37]
Open-standard chip designs are now common. OpenRISC (2000, LGPL/GPL), OpenSPARC (2005, GPLv2), and RISC-V (2010, an open standard that is free to implement) are examples of freely usable instruction set architectures.
OpenCores is a large library of standard chip-design subcomponents which can be combined into larger designs.
Complete open-source software stacks and shuttle fabrication services are now available which can take OSH chip designs from hardware description languages to masks and ASIC fabrication on maker-scale budgets.[38]
Purely mechanical OSH designs include mechanical components, machine tools, and vehicles. Open Source Ecology is a large project which seeks to develop a complete ecosystem of mechanical tools and components that aim to be able to replicate themselves.
Open-source vehicles have also been developed including bicycles like XYZ Space Frame Vehicles and cars such as the Tabby OSVehicle.
Most OSH systems combine elements of electronics and mechanics to form mechatronic systems. A large range of open-source mechatronic products have been developed, including machine tools, musical instruments, and medical equipment.[30]
Examples of open-source machine tools include 3D printers such as RepRap, Prusa, and Ultimaker, 3D printer filament extruders such as the polystruder[39] XR PRO, as well as the laser cutter Lasersaur.
Examples of open-source medical equipment include open-source ventilators, the echo stethoscope echOpen (co-founded by Mehdi Benchoufi, Olivier de Fresnoye, Pierre Bourrier and Luc Jonveaux[40]), and a wide range of prosthetic hands listed in the review study by Ten Kate et al.[41] (e.g. OpenBionics' prosthetic hands).
Open-source robotics combines open-source hardware mechatronics with open-source AI and control software. Because it mixes hardware and software, it serves as a particularly active area for open-source ideas to move between the two fields.
Examples of open-source hardware products can also be found, to a lesser extent, in construction (WikiHouse), textiles (Kit Zéro Kilomètres), and firearms (3D printed firearms, Defense Distributed).
Rather than creating a new license, some open-source hardware projects use existing free and open-source software licenses.[42] These licenses may not accord well with patent law.[43]
Later, several new licenses were proposed, designed to address issues specific to hardware design.[44] In these licenses, many of the fundamental principles expressed in open-source software (OSS) licenses have been "ported" to their counterpart hardware projects. New hardware licenses are often explained as the "hardware equivalent" of a well-known OSS license, such as the GPL, LGPL, or BSD license.
Despite superficial similarities to software licenses, most hardware licenses are fundamentally different: by nature, they typically rely more heavily on patent law than on copyright law, as many hardware designs are not copyrightable.[45] Whereas a copyright license may control the distribution of the source code or design documents, a patent license may control the use and manufacturing of the physical device built from the design documents. This distinction is explicitly mentioned in the preamble of the TAPR Open Hardware License:
"... those who benefit from an OHL design may not bring lawsuits claiming that design infringes their patents or other intellectual property."
Noteworthy licenses include:
The Open Source Hardware Association recommends seven licenses which follow their open-source hardware definition:[51] from the general copyleft licenses, the GNU General Public License (GPL) and the Creative Commons Attribution-ShareAlike license; from the hardware-specific copyleft licenses, the CERN Open Hardware License (OHL) and the TAPR Open Hardware License (OHL); and from the permissive licenses, the FreeBSD license, the MIT license, and the Creative Commons Attribution license.[52] Openhardware.org recommended in 2012 the TAPR Open Hardware License, Creative Commons BY-SA 3.0 and the GPL 3.0 license.[53]
Organizations tend to rally around a shared license. For example, OpenCores prefers the LGPL or a Modified BSD License,[54] FreeCores insists on the GPL,[55] the Open Hardware Foundation promotes "copyleft or other permissive licenses",[56] the Open Graphics Project uses[57] a variety of licenses, including the MIT license, GPL, and a proprietary license,[58] and the Balloon Project wrote their own license.[59]
The adjective "open-source" not only refers to a specific set of freedoms applying to a product, but also generally presupposes that the product is the object or result of a "process that relies on the contributions of geographically dispersed developers via the Internet."[60] In practice, however, in both open-source hardware and open-source software, products may be the result either of a development process performed by a closed team in a private setting or of a community working in a public environment; the first case is more frequent than the second, which is more challenging.[30] Establishing a community-based product development process faces several challenges, among them: finding appropriate product data management tools, documenting not only the product but also the development process itself, accepting the loss of ubiquitous control over the project, and ensuring continuity given the fickle participation of voluntary project members.[61]
One of the major differences between developing open-source software and developing open-source hardware is that hardware results in tangible outputs, which cost money to prototype and manufacture. As a result, the phrase "free as in speech, not as in beer",[62] more formally known as gratis versus libre, distinguishes between the idea of zero cost and the freedom to use and modify information. While open-source hardware faces challenges in minimizing cost and reducing financial risks for individual project developers, some community members have proposed models to address these needs.[63] Given this, there are initiatives to develop sustainable community funding mechanisms, such as the Open Source Hardware Central Bank.
Extensive discussion has taken place on ways to make open-source hardware as accessible as open-source software. Providing clear and detailed product documentation is an essential factor facilitating product replication and collaboration in hardware development projects. Practical guides have been developed to help practitioners do so.[64] Another option is to design products so they are easy to replicate, as exemplified in the concept of open-source appropriate technology.[65]
The process of developing open-source hardware in a community-based setting is alternatively called open design, open source development,[66] or open source product development.[67] All these terms are examples of the open-source model applied to the development of any product, including software, hardware, and cultural and educational works. Open questions remain as to whether open design and the open-source hardware design process involve new design practices or raise requirements for new tools, and whether openness is really key in OSH.[68]
A major contributor to the production of open-source hardware product designs is the scientific community. There has been considerable work to produce open-source scientific hardware using a combination of open-source electronics and 3-D printing.[69][70][71] Other sources of open-source hardware production are vendors of chips and other electronic components sponsoring contests with the provision that the participants and winners must share their designs. Circuit Cellar magazine organizes some of these contests.
A guide has been published (Open-Source Lab by Joshua Pearce) on using open-source electronics and 3D printing to make open-source labs. Today, scientists are creating many such labs. Examples include:
Open hardware companies are experimenting with business models.[75] For example, littleBits implements open-source business models by making available the circuit designs in each electronics module, in accordance with the CERN Open Hardware License Version 1.2.[76] Another example is Arduino, which registered its name as a trademark; others may manufacture products from Arduino designs but cannot call them Arduino products.[77] There are many applicable business models for implementing open-source hardware, even in traditional firms. For example, to accelerate development and technical innovation, the photovoltaic industry has experimented with partnerships, franchises, secondary-supplier, and completely open-source models.[78]
Recently, many open-source hardware projects have been funded via crowdfunding on platforms such as Indiegogo, Kickstarter, or Crowd Supply.[79]
Richard Stallman, the founder of the free software movement, was skeptical in 1999 of the idea and relevance of free hardware (his terminology for what is now known as open-source hardware).[80] In a 2015 article in Wired magazine, he modified this attitude: he acknowledged the importance of free hardware, but still saw no ethical parallel with free software.[28] Stallman also prefers the term free hardware design over open source hardware, a preference consistent with his earlier rejection of the term open source software (see also Alternative terms for free software).[28]
Other authors, such as Professor Joshua Pearce, have argued there is an ethical imperative for open-source hardware, specifically with respect to open-source appropriate technology for sustainable development.[81] In 2014, he also wrote the book Open-Source Lab: How to Build Your Own Hardware and Reduce Research Costs, which details the development of free and open-source hardware, primarily for scientists and university faculty.[82] Pearce, in partnership with Elsevier, introduced the scientific journal HardwareX, which has featured many examples of applications of open-source hardware for scientific purposes.
Further, Vasilis Kostakis et al.[83] have argued that open-source hardware may promote values of equity, diversity, and sustainability. Open-source hardware initiatives transcend traditional dichotomies of global-local, urban-rural, and developed-developing contexts. They may leverage cultural differences, environmental conditions, and local needs and resources, while embracing hyper-connectivity, to foster sustainability and collaboration rather than conflict.[83] However, open-source hardware does face some challenges and contradictions: it must navigate tensions between inclusiveness, standardization, and functionality.[83] Additionally, while open-source hardware may reduce pressure on natural resources and local populations, it still relies on energy- and material-intensive infrastructures, such as the Internet. Despite these complexities, Kostakis et al. argue, the open-source hardware framework can serve as a catalyst for connecting and unifying diverse local initiatives under radical narratives, thus inspiring genuine change.[83]
OSH has grown as an academic field through two journals, the Journal of Open Hardware (JOH) and HardwareX. These journals compete to publish the best OSH designs, and each defines its own requirements for what constitutes acceptable quality of design documents, including specific requirements for build instructions, bills of materials, CAD files, and licences. These requirements are often used by other OSH projects to define how to do an OSH release. The journals also publish papers contributing to the debate about how OSH should be defined and used.
|
https://en.wikipedia.org/wiki/Open-source_hardware
|
An algorithm is fundamentally a set of rules or defined procedures that is typically designed and used to solve a specific problem or a broad set of problems.
Broadly, algorithms define processes, sets of rules, or methodologies to be followed in calculations, data processing, data mining, pattern recognition, automated reasoning, or other problem-solving operations. With the increasing automation of services, more and more decisions are being made by algorithms. Some general examples are risk assessments, anticipatory policing, and pattern-recognition technology.[1]
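As a minimal concrete illustration of an algorithm as a defined procedure, here is binary search, a generic textbook example (not drawn from the list below): a fixed sequence of steps that locates a target value in a sorted list.

```python
def binary_search(items, target):
    """Return the index of target in the sorted list items, or -1 if absent.

    The algorithm is a fixed procedure: repeatedly halve the search
    interval until the target is found or the interval becomes empty.
    """
    low, high = 0, len(items) - 1
    while low <= high:
        mid = (low + high) // 2
        if items[mid] == target:
            return mid
        elif items[mid] < target:
            low = mid + 1      # discard the lower half
        else:
            high = mid - 1     # discard the upper half
    return -1

print(binary_search([1, 3, 5, 7, 9], 7))  # 3
```

Each iteration halves the interval, so the procedure terminates after at most log2(n) + 1 comparisons.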
The following is a list of well-known algorithms, along with one-line descriptions for each.
Hybrid Algorithms
|
https://en.wikipedia.org/wiki/List_of_algorithms
|
Aviation Information Data Exchange (AIDX) is the global XML messaging standard for exchanging flight data between airlines, airports, and any third party consuming the data. It is endorsed as a recommended standard by the International Air Transport Association (IATA) and the Airports Council International (ACI).
The development of AIDX began in 2005, and it launched in October 2008 as a combined effort of over 80 airlines, airports, and vendors. To date, it consists of 180 distinct data elements, including flight identification, operational times, disruption details, resource requirements, passenger, baggage, fuel and cargo statistics, and aircraft details.[1] The goal of the project was to standardize information exchange and tackle problems of disruption for a variety of use cases.
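Since the actual AIDX schemas are defined by IATA and not reproduced here, the following is only a hypothetical, much-simplified sketch of how a data consumer might parse an XML flight-data message; the element and attribute names are invented for illustration and are not the real AIDX element names.

```python
import xml.etree.ElementTree as ET

# Hypothetical, simplified flight-data message. Real AIDX messages use
# IATA-defined schemas with many more elements (operational times,
# disruption details, baggage and cargo statistics, etc.).
message = """
<FlightLeg>
  <FlightId airline="XX" number="123"/>
  <OperationalTimes>
    <ScheduledDeparture>2024-01-01T10:00:00Z</ScheduledDeparture>
  </OperationalTimes>
</FlightLeg>
"""

root = ET.fromstring(message)
flight = root.find("FlightId")
departure = root.find("OperationalTimes/ScheduledDeparture")
print(flight.get("airline"), flight.get("number"), departure.text)
```

In practice, a consuming system would validate each message against the published XML schema before extracting elements like these.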
|
https://en.wikipedia.org/wiki/AIDX
|
Social profiling is the process of constructing a social media user's profile using his or her social data. In general, profiling refers to the data science process of generating a person's profile with computerized algorithms and technology.[1] There are various platforms for sharing this information with the proliferation of increasingly popular social networks, including but not limited to LinkedIn, Google+, Facebook, and Twitter.[2]
A person's social data refers to the personal data that they generate either online or offline[3] (for more information, see social data revolution). A large amount of this data, including one's language, location, and interests, is shared through social media and social networks. Users join multiple social media platforms, and their profiles across these platforms can be linked using different methods[4] to obtain their interests, locations, content, and friend lists. Altogether, this information can be used to construct a person's social profile.
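As a toy sketch of the linking idea described above (the field names and the linking key are invented for illustration and are not taken from any real platform), two profile fragments matched on a shared attribute can be merged into one social profile:

```python
# Hypothetical profile fragments from two platforms, already linked
# (here, by a shared e-mail address) and merged into one social profile.
profile_a = {"email": "jane@example.com", "interests": {"cycling", "jazz"},
             "location": "Berlin"}
profile_b = {"email": "jane@example.com", "interests": {"jazz", "cooking"},
             "friends": ["bob", "alice"]}

def merge_profiles(a, b):
    """Combine two linked profiles; set-valued fields are unioned,
    other fields from the first profile take precedence."""
    merged = dict(a)
    for key, value in b.items():
        if key in merged and isinstance(value, set):
            merged[key] = merged[key] | value
        else:
            merged.setdefault(key, value)
    return merged

social_profile = merge_profiles(profile_a, profile_b)
print(sorted(social_profile["interests"]))  # ['cooking', 'cycling', 'jazz']
```

Real linking methods are far less direct, typically matching usernames, writing style, or network structure across platforms rather than an explicit shared key.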
Meeting users' expectations for information collection is becoming more challenging because of the "noise" generated by explosively increasing online data. Social profiling is an emerging approach to this challenge: it introduces personalized search that takes into account user profiles generated from social network data. One study reviews and classifies research inferring users' social profile attributes from social media data as individual and group profiling, highlighting the existing techniques, the data sources they use, and their limitations and challenges.
The prominent approaches adopted include machine learning, ontology, and fuzzy logic. Social media data from Twitter and Facebook have been used by most of the studies to infer the social attributes of users. The literature shows that user social attributes, including age, gender, home location, wellness, emotion, opinion, relation, and influence, still need to be explored.[5]
The ever-increasing volume of online content has degraded the proficiency of centralized search engines' results,[6][7] which can no longer satisfy users' demand for information. A possible solution that would increase the coverage of search results is meta-search engines,[6] an approach that collects information from numerous centralized search engines. This creates a new problem: too much data and too much noise are generated in the collection process.
Therefore, a new technique called personalized meta-search engines was developed. It makes use of a user's profile (largely a social profile) to filter search results. A user's profile can combine a number of things, including but not limited to a user's manually selected interests, search history, and personal social network data.[6]
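A minimal sketch of the filtering idea, assuming a toy scoring rule (overlap between result tags and the profile's interests); real personalized meta-search engines use far richer ranking models than this:

```python
def personalize(results, profile_interests):
    """Re-rank aggregated meta-search results by overlap with the
    user's profile interests.

    results: list of (title, tags) pairs collected from several engines.
    Hypothetical score: number of tags matching the profile's interests.
    """
    def score(result):
        _, tags = result
        return len(set(tags) & profile_interests)
    return sorted(results, key=score, reverse=True)

results = [("Guitar basics", {"music", "lessons"}),
           ("Stock tips", {"finance"}),
           ("Jazz history", {"music", "jazz"})]
ranked = personalize(results, {"music", "jazz"})
print([title for title, _ in ranked])
# ['Jazz history', 'Guitar basics', 'Stock tips']
```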
According to Samuel D. Warren II and Louis Brandeis (1890), disclosure of private information and its misuse can hurt people's feelings and cause considerable damage in people's lives.[8] Social networks provide people access to intimate online interactions; therefore, information access control, information transactions, privacy issues, and connections and relationships on social media have become important research fields and subjects of public concern.
Ricard Fogues and co-authors state that "any privacy mechanism has at its base an access control" that dictates "how permissions are given, what elements can be private, how access rules are defined, and so on".[9] Current access controls for social media accounts tend to be very simplistic: there is very limited diversity in the categories of relationships for social network accounts. On most platforms, a user's relationships to others are categorized only as "friend" or "non-friend", and people may leak important information to "friends" inside their social circle who are not necessarily the users they consciously want to share the information with.[9] The section below is concerned with social media profiling and what profiling information on social media accounts can achieve.
A lot of information is voluntarily shared on online social networks, such as photos and updates on life activities (new job, hobbies, etc.). People rest assured that different social network accounts on different platforms will not be linked as long as they do not grant permission to these links. However, according to Diane Gan, information gathered online enables "target subjects to be identified on other social networking sites such as Foursquare, Instagram, LinkedIn, Facebook and Google+, where more personal information was leaked".[10]
The majority of social networking platforms use the "opt-out approach" for their features. If users wish to protect their privacy, it is their own responsibility to check and change the privacy settings, a number of which are set to a default option.[10] Major social network platforms have developed geo-tag functions that are in popular usage. This is concerning because 39% of users have experienced profile hacking; 78% of burglars have used major social media networks and Google Street View to select their victims; and an astonishing 54% of burglars attempted to break into empty houses after people posted their status updates and geo-locations.[11]
Formation and maintenance of social media accounts and their relationships with other accounts are associated with various social outcomes.[12] In 2015, for many firms, customer relationship management was essential and was partially done through Facebook.[13] Before the emergence and prevalence of social media, customer identification was primarily based upon information that a firm could directly acquire:[14] for example, through a customer's purchasing process or voluntary completion of a survey/loyalty program. However, the rise of social media has greatly changed the approach of building a customer's profile/model based on available data. Marketers now increasingly seek customer information through Facebook;[13] this may include a variety of information users disclose to all users or some users on Facebook: name, gender, date of birth, e-mail address, sexual orientation, marital status, interests, hobbies, favorite sports team(s), favorite athlete(s), or favorite music, and, more importantly, Facebook connections.[13]
However, due to the privacy policy design, acquiring true information on Facebook is no trivial task. Often, Facebook users either refuse to disclose true information (sometimes using pseudonyms) or set information to be visible only to friends; Facebook users who "LIKE" a page are also hard to identify. To profile and cluster users online, marketers and companies can and will access the following kinds of data: the gender, IP address, and city of each user through the Facebook Insights page; who "LIKED" a certain user; a list of all the pages that a person "LIKED" (transaction data); other people that a user follows (even beyond the first 500, which ordinary users cannot see); and all publicly shared data.[13]
First launched on the Internet in March 2006, Twitter is a platform on which users can connect and communicate with any other user in just 280 characters.[10] Like Facebook, Twitter is also a crucial channel through which users leak important information, often unconsciously, that can be accessed and collected by others.
According to Rachel Nuwer, in a sample of 10.8 million tweets by more than 5,000 users, the posted and publicly shared information was enough to reveal a user's income range.[15] Daniel Preoţiuc-Pietro, a postdoctoral researcher at the University of Pennsylvania, and his colleagues were able to categorize 90% of users into corresponding income groups. After being fed into a machine-learning model, their collected data generated reliable predictions on the characteristics of each income group.[15]
The mobile app called Streamd.in displays live tweets on Google Maps by using geo-location details attached to the tweet, and traces the user's movement in the real world.[10]
The advent and universality of social media networks have boosted the role of images and visual information dissemination.[16] Many types of visual information on social media transmit messages from the author, location information, and other personal information. For example, a user may post a photo of themselves in which landmarks are visible, which can enable other users to determine where they are. In a study by Cristina Segalin, Dong Seon Cheng, and Marco Cristani, profiling the photos in users' posts was found to reveal personal traits such as personality and mood.[16] The study introduces convolutional neural networks (CNNs), building on the main characteristics of computational aesthetics (CA), defined by Hoenig (2005) as emphasizing "computational methods", "human aesthetic point of view", and "the need to focus on objective approaches".[16] This tool can extract and identify content in photos.
In a study called "A Rule-Based Flickr Tag Recommendation System", the author suggests personalized tag recommendations,[17] largely based on user profiles and other web resources. This has proven to be useful in many aspects: "web content indexing", "multimedia data retrieval", and enterprise Web searches.[17]
In 2011, marketers and retailers were increasing their market presence by creating their own pages on social media, on which they post information, ask people to like and share to enter contests, and much more. Studies in 2011 show that, on average, a person spends about 23 minutes on a social networking site per day.[18] Therefore, companies large and small are investing in gathering user behavior information, ratings, reviews, and more.[19]
Until 2006, online communication was not content-led in terms of the amount of time people spent online. Since then, content sharing and creation have become the primary online activity of general social media users, and that has forever changed online marketing.[20] In the book Advanced Social Media Marketing,[21] the author gives an example of how a New York wedding planner might identify his audience when marketing on Facebook, using categories such as: (1) lives in the United States; (2) lives within 50 miles of New York; (3) age 21 and older; (4) engaged female.[21] Whether you choose to pay cost per click or cost per impression, "the cost of Facebook Marketplace ads and Sponsored Stories is set by your maximum bid and the competition for the same audiences".[21] The cost of clicks is usually $0.5–1.5 each.
Klout is a popular online tool that focuses on assessing a user's social influence by social profiling. It takes several social media platforms (such as Facebook, Twitter, etc.) and numerous aspects into account and generates a user's score from 1 to 100. Regardless of one's number of likes for a post or connections on LinkedIn, social media contains plentiful personal information; Klout generates a single score that indicates a person's influence.[22]
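Klout's actual scoring model was proprietary and is not described here; the following toy sketch only illustrates the general idea of combining per-platform signals into a single score on a 1-100 scale. The platform names, weights, and signal values are invented for illustration.

```python
# Toy sketch: combine normalized per-platform influence signals into
# one 1-100 score via a weighted average. Not Klout's real model.
def influence_score(signals, weights):
    """signals/weights: dicts keyed by platform; each signal in [0, 1]."""
    total = sum(weights.values())
    combined = sum(signals[p] * w for p, w in weights.items()) / total
    return round(1 + 99 * combined)  # map [0, 1] onto the 1-100 range

signals = {"twitter": 0.5, "facebook": 0.8}  # hypothetical values
weights = {"twitter": 1.0, "facebook": 1.0}
print(influence_score(signals, weights))  # 65
```

The single-number design is what makes such scores convenient, and also what makes them prone to the biases discussed below: everything about a user's influence is flattened into one weighted aggregate.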
In a study by Chad Edwards, "How Much Klout do You Have... A Test of System Generated Cues on Source Credibility", Klout scores were found to influence people's perceived credibility.[23] As the Klout Score becomes a popular combined-into-one-score method of assessing people's influence, it can be both a convenient tool and a biased one. A study by David Westerman of how social media followers influence people's judgments illustrates the possible bias Klout may contain.[24] In one study, participants were asked to view six identical mock Twitter pages with only one major independent variable: the number of followers. Results show that pages with too many or too few followers both decreased in credibility, despite similar content. Klout scores may be subject to the same bias.[24]
While Klout is sometimes used during recruitment processes, it remains controversial.
Kred not only assigns each user an influence score, but also allows each user to claim a Kred profile and Kred account. Through this platform, each user can view how top influencers engage with their online communities and how each of their own online actions affects their influence scores.
Kred offers several suggestions for increasing influence: (1) be generous with your audience, and feel comfortable sharing content from your friends and retweeting others; (2) join an online community; (3) create and share meaningful content; (4) track your progress online.
Follower Wonk is specifically targeted at Twitter analytics; it helps users understand follower demographics and optimize their activities to find which activity attracts the most positive feedback from followers.
Keyhole is a hashtag tracking and analytics tool that tracks Instagram, Twitter, and Facebook hashtag data. It allows users to track which top influencers are using a certain hashtag and what other demographic information is associated with it. When a hashtag is entered on its website, it automatically samples users who have recently used the tag, allowing users to analyze each hashtag they are interested in.
The prevalence of the Internet and social media has provided online activists with both a new platform for activism and their most popular tool. While online activism can stir up great controversy and trends, few people actually participate in or sacrifice for the relevant events, which makes the profile of online activists an interesting topic to analyse. In a study by Harp and co-authors of online activists in China, Latin America, and the United States, the majority of online activists in Latin America and China are male, with a median income of $10,000 or less, while the majority in the United States are female, with a median income of $30,000–$69,999; online activists in the United States tend to have postgraduate education, while activists in the other countries have lower education levels.[25]
A closer examination of their online shared content shows that the most shared information online includes five types:
The Chinese government hopes to establish a "social-credit system" that aims to score citizens' "financial creditworthiness", social behavior, and even political behaviour.[26] The system will combine big data and social profiling technologies. According to Celia Hatton of BBC News, everyone in China will be expected to enroll in a national database that includes and automatically calculates fiscal information, political behavior, social behavior, and daily life, including minor traffic violations: a single score that evaluates a citizen's trustworthiness.[27]
Credibility scores, social influence scores, and other comprehensive evaluations of people are not rare in other countries. However, China's "social-credit system" remains controversial, as this single score can be a reflection of a person's every aspect.[27] Indeed, "much about the social-credit system remains unclear".[26]
Although the implementation of the social credit score remains controversial in China, the Chinese government aims to fully implement this system by 2018.[28] According to Jake Laband, deputy director of the Beijing office of the US-China Business Council, low credit scores will "limit eligibility for financing, employment, and Party membership, as well restrict real estate transactions and travel." The social credit score will be affected not only by legal criteria but also by social criteria, such as contract breaking. However, this has been a great privacy concern due to the huge amount of data that will be analyzed by the system.
|
https://en.wikipedia.org/wiki/Social_profiling
|
An n-gram is a sequence of n adjacent symbols in a particular order.[1] The symbols may be n adjacent letters (including punctuation marks and blanks), syllables, or rarely whole words found in a language dataset; or adjacent phonemes extracted from a speech-recording dataset, or adjacent base pairs extracted from a genome. They are collected from a text corpus or speech corpus.
If Latin numerical prefixes are used, then an n-gram of size 1 is called a "unigram", size 2 a "bigram" (or, less commonly, a "digram"), etc. If, instead of the Latin ones, English cardinal numbers are used, then they are called "four-gram", "five-gram", etc. Similarly, Greek numerical prefixes such as "monomer", "dimer", "trimer", "tetramer", "pentamer", etc., or English cardinal numbers, "one-mer", "two-mer", "three-mer", etc., are used in computational biology for polymers or oligomers of a known size, called k-mers. When the items are words, n-grams may also be called shingles.[2]
In the context of natural language processing (NLP), the use of n-grams allows bag-of-words models to capture information such as word order, which would not be possible in the traditional bag-of-words setting.
Shannon (1951)[3] discussed n-gram models of English. For example:
Figure 1 shows several example sequences and the corresponding 1-gram, 2-gram and 3-gram sequences.
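The extraction of n-grams from a sequence can be sketched with a generic implementation that works for both character-level symbols (letters of a string) and word-level symbols (a list of words):

```python
def ngrams(seq, n):
    """Return the list of n-grams (as tuples) over a sequence of symbols.

    A sequence of length L yields L - n + 1 n-grams, one starting at
    each position where a full window of n symbols fits.
    """
    return [tuple(seq[i:i + n]) for i in range(len(seq) - n + 1)]

# Character-level bigrams (the symbols are letters, including blanks):
print(ngrams("to be", 2))
# Word-level trigrams (the symbols are words):
print(ngrams("to be or not to be".split(), 3))
```

The same function produces unigrams, bigrams, trigrams, etc., simply by varying n.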
Here are further examples; these are word-level 3-grams and 4-grams (and counts of the number of times they appeared) from the Google n-gram corpus.[4]
3-grams
4-grams
|
https://en.wikipedia.org/wiki/N-gram#Character_n-grams
|
In natural language processing, semantic compression is a process of compacting the lexicon used to build a textual document (or a set of documents) by reducing language heterogeneity, while maintaining text semantics. As a result, the same ideas can be represented using a smaller set of words.
In most applications, semantic compression is a lossy compression. Increased prolixity does not compensate for the lexical compression and an original document cannot be reconstructed in a reverse process.
Semantic compression is basically achieved in two steps, using frequency dictionaries and a semantic network:
Step 1 requires assembling word frequencies and information on semantic relationships, specifically hyponymy. Moving upwards in the word hierarchy, a cumulative concept frequency is calculated by adding the sum of the hyponyms' frequencies to the frequency of their hypernym:

cumf(k_i) = f(k_i) + \sum_j cumf(k_j)

where k_i is a hypernym of k_j. Then a desired number of words with the top cumulative frequencies is chosen to build the target lexicon.
In the second step, compression mapping rules are defined for the remaining words, in order to handle every occurrence of a less frequent hyponym as its hypernym in the output text.
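The two steps can be sketched on a toy, hand-made hypernym tree; the words, frequencies, and lexicon size below are invented for illustration.

```python
# Toy hypernym tree (child -> parent) and observed word frequencies.
hypernym = {"poodle": "dog", "beagle": "dog", "dog": "animal"}
freq = {"poodle": 3, "beagle": 2, "dog": 5, "animal": 1}

# Step 1: cumulative frequency = own frequency + sum over all hyponyms,
# mirroring cumf(k_i) = f(k_i) + sum_j cumf(k_j).
def cumf(word):
    children = [w for w, p in hypernym.items() if p == word]
    return freq[word] + sum(cumf(c) for c in children)

# Keep the top-2 concepts by cumulative frequency as the target lexicon.
lexicon = sorted(freq, key=cumf, reverse=True)[:2]

# Step 2: map every remaining word to its nearest hypernym in the lexicon.
def compress(word):
    while word not in lexicon:
        word = hypernym[word]
    return word

print(lexicon, compress("poodle"))  # ['animal', 'dog'] dog
```

Here "poodle" and "beagle" fall outside the reduced lexicon, so every occurrence of them would be rewritten as their hypernym "dog", exactly the kind of substitution shown in the example text below.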
The fragment of text below has been processed by semantic compression. Words in bold have been replaced by their hypernyms.
They are both nest building social insects, but paper wasps and honeybees organize their colonies in very different ways. In a new study, researchers report that despite their differences, these insects rely on the same network of genes to guide their social behavior. The study appears in the Proceedings of the Royal Society B: Biological Sciences. Honeybees and paper wasps are separated by more than 100 million years of evolution, and there are striking differences in how they divvy up the work of maintaining a colony.
The procedure outputs the following text:
They are both facility building insect, but insects and honey insects arrange their biological groups in very different structure. In a new study, researchers report that despite their difference of opinions, these insects act the same network of genes to steer their party demeanor. The study appears in the proceeding of the institution bacteria Biological Sciences. Honey insects and insect are separated by more than hundred million years of organic processes, and there are impinging differences of opinions in how they divvy up the work of affirming a biological group.
A natural tendency to keep natural language expressions concise can be perceived as a form of implicit semantic compression: omitting words that carry little meaning, or redundant meaningful words (especially to avoid pleonasms).[2]
In thevector space model, compacting a lexicon leads to a reduction ofdimensionality, which results in lesscomputational complexityand a positive influence on efficiency.
Semantic compression is advantageous in information retrieval tasks, improving their effectiveness (in terms of both precision and recall).[3] This is due to more precise descriptors (reduced effect of language diversity, limited language redundancy, a step towards a controlled dictionary).
As in the example above, it is possible to display the output as natural text (re-applying inflexion, adding stop words).
|
https://en.wikipedia.org/wiki/Semantic_compression
|
In object-oriented programming, inheritance is the mechanism of basing an object or class upon another object (prototype-based inheritance) or class (class-based inheritance), retaining similar implementation. It is also defined as deriving new classes (subclasses) from existing ones (superclasses or base classes), and forming them into a hierarchy of classes. In most class-based object-oriented languages like C++, an object created through inheritance, a "child object", acquires all the properties and behaviors of the "parent object", with the exception of constructors, destructors, overloaded operators, and friend functions of the base class. Inheritance allows programmers to create classes that are built upon existing classes,[1] to specify a new implementation while maintaining the same behaviors (realizing an interface), to reuse code, and to independently extend original software via public classes and interfaces. The relationships of objects or classes through inheritance give rise to a directed acyclic graph.
An inherited class is called a subclass of its parent class or superclass. The term inheritance is loosely used for both class-based and prototype-based programming, but in narrow use the term is reserved for class-based programming (one class inherits from another), with the corresponding technique in prototype-based programming being instead called delegation (one object delegates to another). Class-modifying inheritance patterns can be pre-defined according to simple network interface parameters such that inter-language compatibility is preserved.[2][3]
Inheritance should not be confused with subtyping.[4][5] In some languages inheritance and subtyping agree,[a] whereas in others they differ; in general, subtyping establishes an is-a relationship, whereas inheritance only reuses implementation and establishes a syntactic relationship, not necessarily a semantic relationship (inheritance does not ensure behavioral subtyping). To distinguish these concepts, subtyping is sometimes referred to as interface inheritance (without acknowledging that the specialization of type variables also induces a subtyping relation), whereas inheritance as defined here is known as implementation inheritance or code inheritance.[6] Still, inheritance is a commonly used mechanism for establishing subtype relationships.[7]
Inheritance is contrasted with object composition, where one object contains another object (or objects of one class contain objects of another class); see composition over inheritance. In contrast to subtyping's is-a relationship, composition implements a has-a relationship.
Mathematically speaking, inheritance in any system of classes induces a strict partial order on the set of classes in that system.
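As a brief formal gloss (the notation is assumed here, not taken from the source): writing A ≺ B for "A is a strict descendant of B", the relation is irreflexive and transitive, the defining properties of a strict partial order:

```latex
\neg (A \prec A), \qquad (A \prec B) \wedge (B \prec C) \;\Rightarrow\; A \prec C
```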
In 1966, Tony Hoare presented some remarks on records, and in particular the idea of record subclasses: record types with common properties, discriminated by a variant tag, and having fields private to the variant.[8] Influenced by this, in 1967 Ole-Johan Dahl and Kristen Nygaard presented a design that allowed specifying objects that belonged to different classes but had common properties. The common properties were collected in a superclass, and each superclass could itself potentially have a superclass. The values of a subclass were thus compound objects, consisting of some number of prefix parts belonging to various superclasses, plus a main part belonging to the subclass. These parts were all concatenated together.[9] The attributes of a compound object would be accessible by dot notation. This idea was first adopted in the Simula 67 programming language.[10] It then spread to Smalltalk, C++, Java, Python, and many other languages.
There are various types of inheritance, based on paradigm and specific language.[11]
"Multiple inheritance ... was widely supposed to be very difficult to implement efficiently. For example, in a summary of C++ in his book on Objective C, Brad Cox actually claimed that adding multiple inheritance to C++ was impossible. Thus, multiple inheritance seemed more of a challenge. Since I had considered multiple inheritance as early as 1982 and found a simple and efficient implementation technique in 1984, I couldn't resist the challenge. I suspect this to be the only case in which fashion affected the sequence of events."[12]
Subclasses, derived classes, heir classes, or child classes are modular derivative classes that inherit one or more language entities from one or more other classes (called superclasses, base classes, or parent classes). The semantics of class inheritance vary from language to language, but commonly the subclass automatically inherits the instance variables and member functions of its superclasses.
The general form of defining a derived class is:[13]
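The code that should follow here is not present in this copy. As a minimal, hypothetical Python sketch (the class names `Base` and `Derived` are illustrative, not from the source):

```python
class Base:
    """A base (super) class providing shared state and behavior."""
    def __init__(self, name):
        self.name = name

    def describe(self):
        return f"I am {self.name}"


class Derived(Base):
    """A derived (sub) class: it automatically inherits Base's members."""
    pass


d = Derived("example")
print(d.describe())  # describe() is inherited from Base
```

An instance of `Derived` can use `describe()` without redefining it, which is the essence of the "general form" the text refers to.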
Some languages also support the inheritance of other constructs. For example, in Eiffel, contracts that define the specification of a class are also inherited by heirs. The superclass establishes a common interface and foundational functionality, which specialized subclasses can inherit, modify, and supplement. The software inherited by a subclass is considered reused in the subclass. A reference to an instance of a class may actually be referring to one of its subclasses. The actual class of the object being referenced is impossible to predict at compile time. A uniform interface is used to invoke the member functions of objects of a number of different classes. Subclasses may replace superclass functions with entirely new functions that must share the same method signature.
In some languages a class may be declared as non-subclassable by adding certain class modifiers to the class declaration. Examples include the final keyword in Java and C++11 onwards or the sealed keyword in C#. Such modifiers are added to the class declaration before the class keyword and the class identifier declaration. Such non-subclassable classes restrict reusability, particularly when developers only have access to precompiled binaries and not source code.
A non-subclassable class has no subclasses, so it can be easily deduced at compile time that references or pointers to objects of that class are actually referencing instances of that class and not instances of subclasses (they do not exist) or instances of superclasses (upcasting a reference type violates the type system). Because the exact type of the object being referenced is known before execution, early binding (also called static dispatch) can be used instead of late binding (also called dynamic dispatch), which requires one or more virtual method table lookups depending on whether multiple inheritance or only single inheritance is supported in the programming language being used.
Just as classes may be non-subclassable, method declarations may contain method modifiers that prevent the method from being overridden (i.e. replaced with a new function with the same name and type signature in a subclass). A private method is un-overridable simply because it is not accessible by classes other than the class of which it is a member function (this is not true for C++, though). A final method in Java, a sealed method in C# or a frozen feature in Eiffel cannot be overridden.
If a superclass method is a virtual method, then invocations of the superclass method will be dynamically dispatched. Some languages require that a method be specifically declared virtual (e.g. C++); in others, all methods are virtual (e.g. Java). An invocation of a non-virtual method will always be statically dispatched (i.e. the address of the function call is determined at compile time). Static dispatch is faster than dynamic dispatch and allows optimizations such as inline expansion.
The following table shows which variables and functions are inherited depending on the visibility given when deriving the class, using the terminology established by C++.[14]
Inheritance is used to relate two or more classes to each other.
Many object-oriented programming languages permit a class or object to replace the implementation of an aspect—typically a behavior—that it has inherited. This process is called overriding. Overriding introduces a complication: which version of the behavior does an instance of the inherited class use—the one that is part of its own class, or the one from the parent (base) class? The answer varies between programming languages, and some languages provide the ability to indicate that a particular behavior is not to be overridden and should behave as defined by the base class. For instance, in C#, the base method or property can only be overridden in a subclass if it is marked with the virtual, abstract, or override modifier, while in programming languages such as Java, different methods can be called to override other methods.[15] An alternative to overriding is hiding the inherited code.
Implementation inheritance is the mechanism whereby a subclass re-uses code in a base class. By default the subclass retains all of the operations of the base class, but the subclass may override some or all operations, replacing the base-class implementation with its own.
In the following Python example, subclasses SquareSumComputer and CubeSumComputer override the transform() method of the base class SumComputer. The base class comprises operations to compute the sum of the squares between two integers. The subclasses re-use all of the functionality of the base class except the operation that transforms a number into its square, replacing it with operations that transform a number into its square and its cube, respectively. The subclasses therefore compute the sum of the squares/cubes between two integers.
Below is an example in Python.
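The example code itself is missing from this copy. The following sketch is consistent with the description above — the names SumComputer, SquareSumComputer, CubeSumComputer and transform() come from the text; the rest is a plausible reconstruction, not the source's exact code:

```python
class SumComputer:
    """Computes the sum of transform(n) for every integer n in [a, b)."""
    def __init__(self, a, b):
        self.a = a
        self.b = b

    def transform(self, x):
        return x * x  # base class: sum of squares

    def compute(self):
        return sum(self.transform(n) for n in range(self.a, self.b))


class SquareSumComputer(SumComputer):
    def transform(self, x):
        return x * x  # overrides with the square


class CubeSumComputer(SumComputer):
    def transform(self, x):
        return x * x * x  # overrides with the cube
```

For instance, `SquareSumComputer(1, 4).compute()` returns 1 + 4 + 9 = 14, while `CubeSumComputer(1, 4).compute()` returns 1 + 8 + 27 = 36; only `transform()` differs between the two.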
In most quarters, class inheritance for the sole purpose of code reuse has fallen out of favor.[citation needed] The primary concern is that implementation inheritance does not provide any assurance of polymorphic substitutability—an instance of the reusing class cannot necessarily be substituted for an instance of the inherited class. An alternative technique, explicit delegation, requires more programming effort, but avoids the substitutability issue.[citation needed] In C++ private inheritance can be used as a form of implementation inheritance without substitutability. Whereas public inheritance represents an "is-a" relationship and delegation represents a "has-a" relationship, private (and protected) inheritance can be thought of as an "is implemented in terms of" relationship.[16]
Another frequent use of inheritance is to guarantee that classes maintain a certain common interface; that is, they implement the same methods. The parent class can be a combination of implemented operations and operations that are to be implemented in the child classes. Often, there is no interface change between the supertype and subtype: the child implements the described behavior instead of its parent class.[17]
Inheritance is similar to but distinct from subtyping.[4] Subtyping enables a given type to be substituted for another type or abstraction and is said to establish an is-a relationship between the subtype and some existing abstraction, either implicitly or explicitly, depending on language support. The relationship can be expressed explicitly via inheritance in languages that support inheritance as a subtyping mechanism. For example, the following C++ code establishes an explicit inheritance relationship between classes B and A, where B is both a subclass and a subtype of A, and can be used as an A wherever an A is specified (via a reference, a pointer or the object itself).
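The C++ snippet referred to is not present in this copy. A minimal reconstruction consistent with the description — the class names A and B come from the text, while the members and the `describe` helper are illustrative:

```cpp
#include <string>

class A {
public:
    virtual ~A() = default;
    virtual std::string name() const { return "A"; }
};

// B publicly inherits from A: B is both a subclass and a subtype of A.
class B : public A {
public:
    std::string name() const override { return "B"; }
};

// Accepts any A by const reference — including a B, since B is-a A.
std::string describe(const A& obj) { return obj.name(); }
```

This is a translation-unit sketch: passing a `B` to `describe` compiles because the subtype relationship lets a `B` stand in wherever an `A` is specified, and the virtual call dispatches to `B::name`.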
In programming languages that do not support inheritance as a subtyping mechanism, the relationship between a base class and a derived class is only a relationship between implementations (a mechanism for code reuse), as compared to a relationship between types. Inheritance, even in programming languages that support inheritance as a subtyping mechanism, does not necessarily entail behavioral subtyping. It is entirely possible to derive a class whose object will behave incorrectly when used in a context where the parent class is expected; see the Liskov substitution principle.[18] (Compare connotation/denotation.) In some OOP languages, the notions of code reuse and subtyping coincide because the only way to declare a subtype is to define a new class that inherits the implementation of another.
Using inheritance extensively in designing a program imposes certain constraints.
For example, consider a class Person that contains a person's name, date of birth, address and phone number. We can define a subclass of Person called Student that contains the person's grade point average and classes taken, and another subclass of Person called Employee that contains the person's job title, employer, and salary.
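Sketched in Python — the class and attribute names follow the text, while the constructor shape is illustrative:

```python
class Person:
    def __init__(self, name, date_of_birth, address, phone_number):
        self.name = name
        self.date_of_birth = date_of_birth
        self.address = address
        self.phone_number = phone_number


class Student(Person):
    def __init__(self, name, date_of_birth, address, phone_number,
                 grade_point_average, classes_taken):
        super().__init__(name, date_of_birth, address, phone_number)
        self.grade_point_average = grade_point_average
        self.classes_taken = classes_taken


class Employee(Person):
    def __init__(self, name, date_of_birth, address, phone_number,
                 job_title, employer, salary):
        super().__init__(name, date_of_birth, address, phone_number)
        self.job_title = job_title
        self.employer = employer
        self.salary = salary
```

Note one restriction this hierarchy already imposes: a Student instance is not an Employee, so a person who is both cannot be represented by a single object.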
In defining this inheritance hierarchy we have already defined certain restrictions, not all of which are desirable:
The composite reuse principle is an alternative to inheritance. This technique supports polymorphism and code reuse by separating behaviors from the primary class hierarchy and including specific behavior classes as required in any business domain class. This approach avoids the static nature of a class hierarchy by allowing behavior modifications at run time and allows one class to implement behaviors buffet-style, instead of being restricted to the behaviors of its ancestor classes.
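A hedged Python sketch of the composite-reuse idea — behaviors live in their own classes and are plugged into the domain class at run time; all names here are illustrative, not from the source:

```python
class SalariedPay:
    def pay(self):
        return "monthly salary"


class HourlyPay:
    def pay(self):
        return "hourly wages"


class Worker:
    """Composes a pay behavior (has-a) instead of inheriting it (is-a)."""
    def __init__(self, pay_behavior):
        self.pay_behavior = pay_behavior

    def pay(self):
        return self.pay_behavior.pay()  # delegate to the composed behavior


w = Worker(SalariedPay())
w.pay_behavior = HourlyPay()  # behavior swapped at run time, no new subclass
```

Unlike a fixed class hierarchy, the behavior object can be replaced while the program runs, which is exactly the flexibility the paragraph describes.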
Implementation inheritance has been controversial among programmers and theoreticians of object-oriented programming since at least the 1990s. Among the critics are the authors of Design Patterns, who advocate instead for interface inheritance and favor composition over inheritance. For example, the decorator pattern (as mentioned above) has been proposed to overcome the static nature of inheritance between classes. As a more fundamental solution to the same problem, role-oriented programming introduces a distinct relationship, played-by, combining properties of inheritance and composition into a new concept.[citation needed]
According to Allen Holub, the main problem with implementation inheritance is that it introduces unnecessary coupling in the form of the "fragile base class problem":[6] modifications to the base class implementation can cause inadvertent behavioral changes in subclasses. Using interfaces avoids this problem because no implementation is shared, only the API.[19] Another way of stating this is that "inheritance breaks encapsulation".[20] The problem surfaces clearly in open object-oriented systems such as frameworks, where client code is expected to inherit from system-supplied classes and then be substituted for the system's classes in its algorithms.[6]
Reportedly, Java inventor James Gosling has spoken against implementation inheritance, stating that he would not include it if he were to redesign Java.[19] Language designs that decouple inheritance from subtyping (interface inheritance) appeared as early as 1990;[21] a modern example is the Go programming language.
Complex inheritance, or inheritance used within an insufficiently mature design, may lead to the yo-yo problem. When inheritance was used as a primary approach to structuring programs in the late 1990s, developers tended to break code into more layers of inheritance as system functionality grew. If a development team combined multiple layers of inheritance with the single responsibility principle, the result was many very thin layers of code, many consisting of only one or two lines of actual code.[citation needed] Too many layers make debugging a significant challenge, as it becomes hard to determine which layer needs to be debugged.
Another issue with inheritance is that subclasses must be defined in code, which means that program users cannot add new subclasses at runtime. Other design patterns (such as entity–component–system) allow program users to define variations of an entity at runtime.
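A toy Python sketch of the entity–component idea — entities are plain containers of components, so new variants can be assembled while the program runs, with no new subclass definitions; all names here are illustrative:

```python
class Entity:
    """An entity is just a bag of named components."""
    def __init__(self, **components):
        self.components = dict(components)

    def add(self, name, value):
        # A new "variant" of the entity is created at run time.
        self.components[name] = value


player = Entity(position=(0, 0), health=100)
player.add("inventory", [])  # extend the entity without defining a subclass
```

Contrast this with the inheritance approach, where supporting an "entity with inventory" would require writing and compiling a new subclass in advance.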
Source: https://en.wikipedia.org/wiki/Inheritance_(object-oriented_programming)
The pandemonium effect is a problem that may appear when high-resolution detectors (usually germanium semiconductor detectors) are used in beta decay studies. It can affect the correct determination of the feeding to the different levels of the daughter nucleus. It was first introduced in 1977.[1]
Typically, when a parent nucleus beta-decays into its daughter, there is some final energy available which is shared between the final products of the decay. This is called the Q value of the beta decay (Q_β). The daughter nucleus does not necessarily end up in the ground state after the decay; this only happens when the other products have taken all the available energy with them (usually as kinetic energy). So, in general, the daughter nucleus keeps an amount of the available energy as excitation energy and ends up in an excited state associated with some energy level, as seen in the picture. The daughter nucleus can only stay in that excited state for a small amount of time[2] (the half-life of the level), after which it undergoes a series of gamma transitions to its lower energy levels. These transitions allow the daughter nucleus to emit the excitation energy as one or more gamma rays until it reaches its ground state, thus getting rid of all the excitation energy that it kept from the decay.
According to this, the energy levels of the daughter nucleus can be populated in two ways:
The total gamma-ray intensity emitted by an energy level (I_T) should be equal to the sum of these two contributions, that is, direct beta feeding (I_β) plus gamma de-excitations from upper levels (ΣI_i).
The beta feeding I_β (that is, how many times a level is populated by direct feeding from the parent) cannot be measured directly. Since the only magnitudes that can be measured are the gamma intensities ΣI_i and I_T (that is, the number of gammas emitted by the daughter with a certain energy), the beta feeding has to be extracted indirectly by subtracting the contribution from gamma de-excitations of higher energy levels (ΣI_i) from the total gamma intensity that leaves the level (I_T), that is:
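The formula that should follow is missing from this copy; restoring it from the definitions above:

```latex
I_\beta = I_T - \sum_i I_i
```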
The pandemonium effect appears when the daughter nucleus has a large Q value, allowing access to many nuclear configurations, which translates into many available excitation-energy levels. This means that the total beta feeding will be fragmented, because it will spread over all the available levels (with a certain distribution given by the strength, the level densities, the selection rules, etc.). Then, the gamma intensity emitted from the less populated levels will be weak, and weaker still at higher energies where the level density can be huge. Also, the energy of the gammas de-exciting this high-level-density region can be high.
Measuring these gamma rays with high-resolution detectors may present two problems:
These two effects reduce how much of the beta feeding to the higher energy levels of the daughter nucleus is detected, so less ΣI_i is subtracted from I_T, and the energy levels are incorrectly assigned more I_β than is actually present:
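Stated as a formula — a restoration of the sentence above, with "det" marking the incompletely detected de-excitation intensity:

```latex
I_\beta^{\mathrm{apparent}} \;=\; I_T - \sum_i I_i^{\mathrm{det}} \;>\; I_\beta
```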
When this happens, the low-lying energy levels are the most affected. Some of the level schemes of nuclei that appear in the nuclear databases[3] suffer from this pandemonium effect and are not reliable until better measurements are made.
To avoid the pandemonium effect, a detector should be used that solves the problems high-resolution detectors present. It needs to have an efficiency close to 100% and good efficiency for gamma rays of high energies. One possible solution is to use a calorimeter like the total absorption spectrometer (TAS), which is made of a scintillator material. It has been shown[4] that even with a high-efficiency array of germanium detectors in a close geometry (for example, the cluster cube array), about 57% of the total B(GT) observed with the TAS technique is lost.
The calculation of the beta feeding (I_β) is important for different applications, such as the calculation of the residual heat in nuclear reactors or nuclear structure studies.
Source: https://en.wikipedia.org/wiki/Pandemonium_effect
A distributed control system (DCS) is a computerized control system for a process or plant, usually with many control loops, in which autonomous controllers are distributed throughout the system, but with central operator supervisory control. This is in contrast to systems that use centralized controllers: either discrete controllers located at a central control room or within a central computer. The DCS concept increases reliability and reduces installation costs by localizing control functions near the process plant, with remote monitoring and supervision.
Distributed control systems first emerged in large, high-value, safety-critical process industries, and were attractive because the DCS manufacturer would supply both the local control level and central supervisory equipment as an integrated package, thus reducing design integration risk. Today the functionality of supervisory control and data acquisition (SCADA) and DCS systems is very similar, but DCS tends to be used on large continuous process plants where high reliability and security are important, and the control room is not necessarily geographically remote. Many machine control systems exhibit properties similar to those of plant and process control systems.[1]
The key attribute of a DCS is its reliability due to the distribution of the control processing around nodes in the system. This mitigates a single processor failure. If a processor fails, it will only affect one section of the plant process, as opposed to a failure of a central computer which would affect the whole process. This distribution of computing power local to the field Input/Output (I/O) connection racks also ensures fast controller processing times by removing possible network and central processing delays.
The accompanying diagram is a general model which shows functional manufacturing levels using computerised control.
Referring to the diagram:
Levels 1 and 2 are the functional levels of a traditional DCS, in which all equipment is part of an integrated system from a single manufacturer.
Levels 3 and 4 are not strictly process control in the traditional sense, but are where production control and scheduling take place.
The processor nodes and operator graphical displays are connected over proprietary or industry-standard networks, and network reliability is increased by dual redundant cabling over diverse routes. This distributed topology also reduces the amount of field cabling by siting the I/O modules and their associated processors close to the process plant.
The processors receive information from input modules, process the information and decide control actions to be signalled by the output modules. The field inputs and outputs can be analog signals (e.g. a 4–20 mA DC current loop) or two-state signals that switch either "on" or "off", such as relay contacts or a semiconductor switch.
DCSs are connected to sensors and actuators and use setpoint control to control the flow of material through the plant. A typical application is a PID controller fed by a flow meter and using a control valve as the final control element. The DCS sends the setpoint required by the process to the controller, which instructs a valve to operate so that the process reaches and stays at the desired setpoint (see the 4–20 mA schematic for example).
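The loop described above can be sketched as follows — a minimal, hypothetical discrete PID step in Python, shown against a crude one-line plant model; it illustrates the setpoint-control idea, not the implementation of any particular DCS:

```python
class PID:
    """Minimal discrete PID controller for setpoint control."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        # The output drives the final control element, e.g. a valve position.
        return (self.kp * error
                + self.ki * self.integral
                + self.kd * derivative)


pid = PID(kp=0.5, ki=0.1, kd=0.0, dt=1.0)
flow = 0.0  # measured process variable, e.g. from a flow meter
for _ in range(50):
    # Crude plant model: the valve output directly adds to the flow.
    flow += pid.step(setpoint=10.0, measurement=flow)
```

After a few dozen steps the measured flow settles at the 10.0 setpoint; in a real loop the measurement would come from a 4–20 mA field signal rather than the toy plant model here.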
Large oil refineries and chemical plants have several thousand I/O points and employ very large DCSs. Processes are not limited to fluidic flow through pipes, however, and can also include things like paper machines and their associated quality controls, variable speed drives and motor control centers, cement kilns, mining operations, ore processing facilities, and many others.
DCSs in very high reliability applications can have dual redundant processors with "hot" switch over on fault, to enhance the reliability of the control system.
Although 4–20 mA has been the main field signalling standard, modern DCS systems can also support fieldbus digital protocols such as Foundation Fieldbus, Profibus, HART, Modbus, PC Link and others.
Modern DCSs also support neural network and fuzzy logic applications. Recent research focuses on the synthesis of optimal distributed controllers, which optimize a certain H-infinity or H2 control criterion.[2][3]
Distributed control systems (DCS) are dedicated systems used in manufacturing processes that are continuous or batch-oriented.
Processes where a DCS might be used include:
Process control of large industrial plants has evolved through many stages. Initially, control would be from panels local to the process plant. However this required a large amount of human oversight to attend to these dispersed panels, and there was no overall view of the process. The next logical development was the transmission of all plant measurements to a permanently-staffed central control room. Effectively this was the centralisation of all the localised panels, with the advantages of lower manning levels and easier overview of the process. Often the controllers were behind the control room panels, and all automatic and manual control outputs were transmitted back to plant. However, whilst providing a central control focus, this arrangement was inflexible as each control loop had its own controller hardware, and continual operator movement within the control room was required to view different parts of the process.
With the coming of electronic processors and graphic displays it became possible to replace these discrete controllers with computer-based algorithms, hosted on a network of input/output racks with their own control processors. These could be distributed around plant, and communicate with the graphic display in the control room or rooms. The distributed control system was born.
The introduction of DCSs allowed easy interconnection and re-configuration of plant controls such as cascaded loops and interlocks, and easy interfacing with other production computer systems. It enabled sophisticated alarm handling, introduced automatic event logging, removed the need for physical records such as chart recorders, allowed the control racks to be networked and thereby located locally to plant to reduce cabling runs, and provided high level overviews of plant status and production levels.
Early minicomputers had been used in the control of industrial processes since the beginning of the 1960s. The IBM 1800, for example, was an early computer that had input/output hardware to gather process signals in a plant for conversion from field contact levels (for digital points) and analog signals to the digital domain.
The first industrial control computer system was built in 1959 at the Texaco Port Arthur, Texas, refinery with an RW-300 of the Ramo-Wooldridge Company.[4]
In 1975, both Yamatake-Honeywell[5] and the Japanese electrical engineering firm Yokogawa introduced their own independently produced DCSs: the TDC 2000 and CENTUM systems, respectively. US-based Bristol also introduced their UCS 3000 universal controller in 1975. In 1978 Valmet introduced their own DCS system called Damatic (whose latest, web-based generation is Valmet DNAe[6]). In 1980, Bailey (now part of ABB[7]) introduced the NETWORK 90 system, Fisher Controls (now part of Emerson Electric) introduced the PROVoX system, and Fischer & Porter Company (now also part of ABB[8]) introduced the DCI-4000 (DCI stands for Distributed Control Instrumentation).
The DCS largely came about due to the increased availability of microcomputers and the proliferation of microprocessors in the world of process control. Computers had already been applied to process automation for some time in the form of both direct digital control (DDC) and setpoint control. In the early 1970s Taylor Instrument Company (now part of ABB) developed the 1010 system, Foxboro the FOX1 system, Fisher Controls the DC2 system and Bailey Controls the 1055 systems. All of these were DDC applications implemented within minicomputers (DEC PDP-11, Varian Data Machines, MODCOMP etc.) and connected to proprietary input/output hardware. Sophisticated (for the time) continuous as well as batch control was implemented in this way. A more conservative approach was setpoint control, where process computers supervised clusters of analog process controllers. A workstation provided visibility into the process using text and crude character graphics. Availability of a fully functional graphical user interface was still a long way away.
Central to the DCS model was the inclusion of control function blocks. Function blocks evolved from early, more primitive DDC concepts of "Table Driven" software. One of the first embodiments of object-oriented software, function blocks were self-contained "blocks" of code that emulated analog hardware control components and performed tasks that were essential to process control, such as execution of PID algorithms. Function blocks continue to endure as the predominant method of control for DCS suppliers, and are supported by key technologies such as Foundation Fieldbus[9]today.
Midac Systems, of Sydney, Australia, developed an object-oriented distributed direct digital control system in 1982. The central system ran 11 microprocessors sharing tasks and common memory and connected to a serial communication network of distributed controllers, each running two Z80s. The system was installed at the University of Melbourne.[citation needed]
Digital communication between distributed controllers, workstations and other computing elements (peer to peer access) was one of the primary advantages of the DCS. Attention was duly focused on the networks, which provided the all-important lines of communication that, for process applications, had to incorporate specific functions such as determinism and redundancy. As a result, many suppliers embraced the IEEE 802.4 networking standard. This decision set the stage for the wave of migrations necessary when information technology moved into process automation and IEEE 802.3 rather than IEEE 802.4 prevailed as the control LAN.
In the 1980s, users began to look at DCSs as more than just basic process control. A very early example of a direct digital control DCS was completed by the Australian business Midac in 1981–82 using R-Tec Australian-designed hardware. The system installed at the University of Melbourne used a serial communications network, connecting campus buildings back to a control room "front end". Each remote unit ran two Z80 microprocessors, while the front end ran eleven Z80s in a parallel processing configuration with paged common memory to share tasks, and could run up to 20,000 concurrent control objects.
It was believed that if openness could be achieved and greater amounts of data could be shared throughout the enterprise, then even greater things could be achieved. The first attempts to increase the openness of DCSs resulted in the adoption of the predominant operating system of the day: UNIX. UNIX and its companion networking technology TCP/IP were developed by the US Department of Defense for openness, which was precisely the issue the process industries were looking to resolve.
As a result, suppliers also began to adopt Ethernet-based networks with their own proprietary protocol layers. The full TCP/IP standard was not implemented, but the use of Ethernet made it possible to implement the first instances of object management and global data access technology. The 1980s also witnessed the first PLCs integrated into the DCS infrastructure. Plant-wide historians also emerged to capitalize on the extended reach of automation systems. The first DCS supplier to adopt UNIX and Ethernet networking technologies was Foxboro, who introduced the I/A Series[10] system in 1987.
The drive toward openness in the 1980s gained momentum through the 1990s with the increased adoption of commercial off-the-shelf (COTS) components and IT standards. Probably the biggest transition undertaken during this time was the move from the UNIX operating system to the Windows environment. While the realm of the real-time operating system (RTOS) for control applications remains dominated by real-time commercial variants of UNIX or proprietary operating systems, everything above real-time control has made the transition to Windows.
The introduction of Microsoft at the desktop and server layers resulted in the development of technologies such as OLE for process control (OPC), which is now a de facto industry connectivity standard. Internet technology also began to make its mark in automation and the world, with most DCS HMIs supporting Internet connectivity. The 1990s were also known for the "Fieldbus Wars", where rival organizations competed to define what would become the IEC fieldbus standard for digital communication with field instrumentation, instead of 4–20 milliamp analog communications. The first fieldbus installations occurred in the 1990s. Towards the end of the decade, the technology began to develop significant momentum, with the market consolidated around Ethernet I/P, Foundation Fieldbus and Profibus PA for process automation applications. Some suppliers built new systems from the ground up to maximize functionality with fieldbus, such as Rockwell with the PlantPAx system, Honeywell with the Experion and Plantscape SCADA systems, ABB with System 800xA,[11] Emerson Process Management[12] with the DeltaV control system, Siemens with the SPPA-T3000[13] or Simatic PCS 7,[14] Forbes Marshall[15] with the Microcon+ control system and Azbil Corporation[16] with the Harmonas-DEO system. Fieldbus techniques have been used to integrate machine, drive, quality and condition monitoring applications into one DCS with the Valmet DNA system.[6]
The impact of COTS, however, was most pronounced at the hardware layer. For years, the primary business of DCS suppliers had been the supply of large amounts of hardware, particularly I/O and controllers. The initial proliferation of DCSs required the installation of prodigious amounts of this hardware, most of it manufactured from the bottom up by DCS suppliers. Standard computer components from manufacturers such as Intel and Motorola, however, made it cost prohibitive for DCS suppliers to continue making their own components, workstations, and networking hardware.
As the suppliers made the transition to COTS components, they also discovered that the hardware market was shrinking fast. COTS not only resulted in lower manufacturing costs for the supplier, but also in steadily decreasing prices for the end users, who were becoming increasingly vocal over what they perceived to be unduly high hardware costs. Some suppliers that were previously stronger in the PLC business, such as Rockwell Automation and Siemens, were able to leverage their expertise in manufacturing control hardware to enter the DCS marketplace with cost-effective offerings, while the stability, scalability, reliability and functionality of these emerging systems are still improving. The traditional DCS suppliers introduced new-generation DCS systems based on the latest communication and IEC standards, resulting in a trend of combining the traditional concepts and functionalities of PLCs and DCSs into a single "one for all" solution, named "Process Automation System" (PAS). The gaps among the various systems remain in areas such as database integrity, pre-engineering functionality, system maturity, communication transparency and reliability. While it is expected that the cost ratio stays relatively the same (the more powerful the systems are, the more expensive they will be), the reality of the automation business often operates strategically case by case. The current next evolution step is called Collaborative Process Automation Systems.
To compound the issue, suppliers were also realizing that the hardware market was becoming saturated. The life cycle of hardware components such as I/O and wiring is also typically in the range of 15 to over 20 years, making for a challenging replacement market. Many of the older systems that were installed in the 1970s and 1980s are still in use today, and there is a considerable installed base of systems in the market that are approaching the end of their useful life. Developed industrial economies in North America, Europe, and Japan already had many thousands of DCSs installed, and with few if any new plants being built, the market for new hardware was shifting rapidly to smaller, albeit faster growing regions such as China, Latin America, and Eastern Europe.
Because of the shrinking hardware business, suppliers began to make the challenging transition from a hardware-based business model to one based on software and value-added services. It is a transition that is still being made today. The applications portfolio offered by suppliers expanded considerably in the '90s to include areas such as production management, model-based control, real-time optimization, plant asset management (PAM), real-time performance management (RPM) tools, alarm management, and many others. To obtain the true value from these applications, however, often requires considerable service content, which the suppliers also provide.
The latest developments in DCS include the following new technologies:
Increasingly, and somewhat ironically, DCSs are becoming centralised at the plant level, with the ability to log into remote equipment. This enables operators to control at both the enterprise level (macro) and the equipment level (micro), from within or outside the plant, as the physical location matters less thanks to interconnectivity, primarily via wireless and remote access.
As wireless protocols are developed and refined, they are increasingly included in DCSs. DCS controllers are now often equipped with embedded servers and provide on-the-go web access. Whether DCS will lead the Industrial Internet of Things (IIoT) or borrow key elements from it remains to be seen.
Many vendors provide the option of a mobile HMI, ready for both Android and iOS. With these interfaces, the threat of security breaches, and of possible damage to plant and process, is now very real.
|
https://en.wikipedia.org/wiki/Distributed_control_system
|
In statistics, the Robbins lemma, named after Herbert Robbins, states that if X is a random variable having a Poisson distribution with parameter λ, and f is any function for which the expected value E(f(X)) exists, then[1]

E(X f(X)) = λ E(f(X + 1)).
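The identity can be verified numerically by truncating both expectations to a finite series; the function f and the parameter λ below are arbitrary illustrative choices:

```python
import math

def poisson_pmf(k, lam):
    """P(X = k) for X ~ Poisson(lam)."""
    return math.exp(-lam) * lam**k / math.factorial(k)

def robbins_sides(f, lam, terms=100):
    """Truncated series for E[X f(X)] and lam * E[f(X + 1)]."""
    lhs = sum(k * f(k) * poisson_pmf(k, lam) for k in range(terms))
    rhs = lam * sum(f(k + 1) * poisson_pmf(k, lam) for k in range(terms))
    return lhs, rhs

lhs, rhs = robbins_sides(lambda x: 1.0 / (1.0 + x), lam=3.5)
assert abs(lhs - rhs) < 1e-9  # the two sides agree up to truncation error
```

The equality follows from the pointwise identity k·P(X = k) = λ·P(X = k − 1) for the Poisson probability mass function.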
Robbins introduced this proposition while developing empirical Bayes methods.
|
https://en.wikipedia.org/wiki/Robbins_lemma
|
In abstract algebra, group theory studies the algebraic structures known as groups.
The concept of a group is central to abstract algebra: other well-known algebraic structures, such as rings, fields, and vector spaces, can all be seen as groups endowed with additional operations and axioms. Groups recur throughout mathematics, and the methods of group theory have influenced many parts of algebra. Linear algebraic groups and Lie groups are two branches of group theory that have experienced advances and have become subject areas in their own right.
Various physical systems, such as crystals and the hydrogen atom, and three of the four known fundamental forces in the universe, may be modelled by symmetry groups. Thus group theory and the closely related representation theory have many important applications in physics, chemistry, and materials science. Group theory is also central to public key cryptography.
The early history of group theory dates from the 19th century. One of the most important mathematical achievements of the 20th century[1] was the collaborative effort, taking up more than 10,000 journal pages and mostly published between 1960 and 2004, that culminated in a complete classification of finite simple groups.
Group theory has three main historical sources: number theory, the theory of algebraic equations, and geometry. The number-theoretic strand was begun by Leonhard Euler, and developed by Gauss's work on modular arithmetic and additive and multiplicative groups related to quadratic fields. Early results about permutation groups were obtained by Lagrange, Ruffini, and Abel in their quest for general solutions of polynomial equations of high degree. Évariste Galois coined the term "group" and established a connection, now known as Galois theory, between the nascent theory of groups and field theory. In geometry, groups first became important in projective geometry and, later, non-Euclidean geometry. Felix Klein's Erlangen program proclaimed group theory to be the organizing principle of geometry.
Galois, in the 1830s, was the first to employ groups to determine the solvability of polynomial equations. Arthur Cayley and Augustin Louis Cauchy pushed these investigations further by creating the theory of permutation groups. The second historical source for groups stems from geometrical situations. In an attempt to come to grips with possible geometries (such as Euclidean, hyperbolic or projective geometry) using group theory, Felix Klein initiated the Erlangen programme. Sophus Lie, in 1884, started using groups (now called Lie groups) attached to analytic problems. Thirdly, groups were, at first implicitly and later explicitly, used in algebraic number theory.
The different scope of these early sources resulted in different notions of groups. The theory of groups was unified starting around 1880. Since then, the impact of group theory has been ever growing, giving rise to the birth of abstract algebra in the early 20th century, representation theory, and many more influential spin-off domains. The classification of finite simple groups is a vast body of work from the mid-20th century, classifying all the finite simple groups.
The range of groups being considered has gradually expanded from finite permutation groups and special examples of matrix groups to abstract groups that may be specified through a presentation by generators and relations.
The first class of groups to undergo a systematic study was permutation groups. Given any set X and a collection G of bijections of X into itself (known as permutations) that is closed under compositions and inverses, G is a group acting on X. If X consists of n elements and G consists of all permutations, G is the symmetric group Sn; in general, any permutation group G is a subgroup of the symmetric group of X. An early construction due to Cayley exhibited any group as a permutation group, acting on itself (X = G) by means of the left regular representation.
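The defining closure property of a permutation group can be checked directly; the sketch below verifies it for the full symmetric group S3, representing each permutation as a tuple p with p[i] the image of i:

```python
from itertools import permutations

# Elements of S3: all permutations of {0, 1, 2}.
S3 = list(permutations(range(3)))

def compose(p, q):
    """(p ∘ q)[i] = p[q[i]]: apply q first, then p."""
    return tuple(p[q[i]] for i in range(len(q)))

def inverse(p):
    """The permutation sending each image back to its preimage."""
    inv = [0] * len(p)
    for i, j in enumerate(p):
        inv[j] = i
    return tuple(inv)

# Closure under composition and inverses, the defining property above.
assert all(compose(p, q) in S3 for p in S3 for q in S3)
assert all(inverse(p) in S3 for p in S3)
assert len(S3) == 6  # |S_n| = n!
```

Any subset of these tuples closed under `compose` and `inverse` is, in the same sense, a permutation group acting on {0, 1, 2}.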
In many cases, the structure of a permutation group can be studied using the properties of its action on the corresponding set. For example, in this way one proves that for n ≥ 5, the alternating group An is simple, i.e. does not admit any proper normal subgroups. This fact plays a key role in the impossibility of solving a general algebraic equation of degree n ≥ 5 in radicals.
The next important class of groups is given by matrix groups, or linear groups. Here G is a set consisting of invertible matrices of given order n over a field K that is closed under products and inverses. Such a group acts on the n-dimensional vector space Kn by linear transformations. This action makes matrix groups conceptually similar to permutation groups, and the geometry of the action may be usefully exploited to establish properties of the group G.
Permutation groups and matrix groups are special cases of transformation groups: groups that act on a certain space X preserving its inherent structure. In the case of permutation groups, X is a set; for matrix groups, X is a vector space. The concept of a transformation group is closely related to the concept of a symmetry group: transformation groups frequently consist of all transformations that preserve a certain structure.
The theory of transformation groups forms a bridge connecting group theory with differential geometry. A long line of research, originating with Lie and Klein, considers group actions on manifolds by homeomorphisms or diffeomorphisms. The groups themselves may be discrete or continuous.
Most groups considered in the first stage of the development of group theory were "concrete", having been realized through numbers, permutations, or matrices. It was not until the late nineteenth century that the idea of an abstract group began to take hold, where "abstract" means that the nature of the elements is ignored in such a way that two isomorphic groups are considered as the same group. A typical way of specifying an abstract group is through a presentation by generators and relations.
A significant source of abstract groups is given by the construction of a factor group, or quotient group, G/H, of a group G by a normal subgroup H. Class groups of algebraic number fields were among the earliest examples of factor groups, of much interest in number theory. If a group G is a permutation group on a set X, the factor group G/H is no longer acting on X; but the idea of an abstract group permits one not to worry about this discrepancy.
The change of perspective from concrete to abstract groups makes it natural to consider properties of groups that are independent of a particular realization, or in modern language, invariant under isomorphism, as well as the classes of groups with a given such property: finite groups, periodic groups, simple groups, solvable groups, and so on. Rather than exploring properties of an individual group, one seeks to establish results that apply to a whole class of groups. The new paradigm was of paramount importance for the development of mathematics: it foreshadowed the creation of abstract algebra in the works of Hilbert, Emil Artin, Emmy Noether, and mathematicians of their school.[citation needed]
An important elaboration of the concept of a group occurs if G is endowed with additional structure, notably, of a topological space, differentiable manifold, or algebraic variety. If the multiplication and inversion of the group are compatible with this structure, that is, they are continuous, smooth or regular (in the sense of algebraic geometry) maps, then G is a topological group, a Lie group, or an algebraic group.[2]
The presence of extra structure relates these types of groups with other mathematical disciplines and means that more tools are available in their study. Topological groups form a natural domain for abstract harmonic analysis, whereas Lie groups (frequently realized as transformation groups) are the mainstays of differential geometry and unitary representation theory. Certain classification questions that cannot be solved in general can be approached and resolved for special subclasses of groups. Thus, compact connected Lie groups have been completely classified. There is a fruitful relation between infinite abstract groups and topological groups: whenever a group Γ can be realized as a lattice in a topological group G, the geometry and analysis pertaining to G yield important results about Γ. A comparatively recent trend in the theory of finite groups exploits their connections with compact topological groups (profinite groups): for example, a single p-adic analytic group G has a family of quotients which are finite p-groups of various orders, and properties of G translate into the properties of its finite quotients.
During the twentieth century, mathematicians investigated some aspects of the theory of finite groups in great depth, especially the local theory of finite groups and the theory of solvable and nilpotent groups.[citation needed] As a consequence, the complete classification of finite simple groups was achieved, meaning that all those simple groups from which all finite groups can be built are now known.
During the second half of the twentieth century, mathematicians such as Chevalley and Steinberg also increased our understanding of finite analogs of classical groups, and other related groups. One such family of groups is the family of general linear groups over finite fields.
Finite groups often occur when considering symmetry of mathematical or physical objects, when those objects admit just a finite number of structure-preserving transformations. The theory of Lie groups, which may be viewed as dealing with "continuous symmetry", is strongly influenced by the associated Weyl groups. These are finite groups generated by reflections which act on a finite-dimensional Euclidean space. The properties of finite groups can thus play a role in subjects such as theoretical physics and chemistry.
Saying that a group G acts on a set X means that every element of G defines a bijective map on the set X in a way compatible with the group structure. When X has more structure, it is useful to restrict this notion further: a representation of G on a vector space V is a group homomorphism

ρ : G → GL(V),

where GL(V) consists of the invertible linear transformations of V. In other words, to every group element g is assigned an automorphism ρ(g) such that ρ(g) ∘ ρ(h) = ρ(gh) for any h in G.
This definition can be understood in two directions, both of which give rise to whole new domains of mathematics.[3] On the one hand, it may yield new information about the group G: often, the group operation in G is abstractly given, but via ρ, it corresponds to the multiplication of matrices, which is very explicit.[4] On the other hand, given a well-understood group acting on a complicated object, this simplifies the study of the object in question. For example, if G is finite, it is known that V above decomposes into irreducible parts (see Maschke's theorem). These parts, in turn, are much more easily manageable than the whole V (via Schur's lemma).
Given a group G, representation theory then asks what representations of G exist. There are several settings, and the employed methods and obtained results are rather different in every case: representation theory of finite groups and representations of Lie groups are two main subdomains of the theory. The totality of representations is governed by the group's characters. For example, Fourier polynomials can be interpreted as the characters of U(1), the group of complex numbers of absolute value 1, acting on the L2-space of periodic functions.
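As a minimal concrete instance of the definitions above, the sketch below checks the homomorphism property ρ(g)ρ(h) = ρ(gh) for a one-dimensional representation of the cyclic group Z/6 by complex roots of unity, a character of the kind just described:

```python
import cmath

n = 6  # order of the cyclic group Z/6

def rho(k):
    """One-dimensional representation of Z/6: k ↦ e^(2πik/6)."""
    return cmath.exp(2j * cmath.pi * k / n)

# Homomorphism property: rho(g) * rho(h) == rho(g + h mod n),
# up to floating-point rounding.
for g in range(n):
    for h in range(n):
        assert abs(rho(g) * rho(h) - rho((g + h) % n)) < 1e-12
```

Here GL(V) for a one-dimensional complex V is just the nonzero complex numbers, so each ρ(k) is a 1×1 invertible "matrix".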
A Lie group is a group that is also a differentiable manifold, with the property that the group operations are compatible with the smooth structure. Lie groups are named after Sophus Lie, who laid the foundations of the theory of continuous transformation groups. The term groupes de Lie first appeared in French in 1893 in the thesis of Lie's student Arthur Tresse, page 3.[5]
Lie groups represent the best-developed theory of continuous symmetry of mathematical objects and structures, which makes them indispensable tools for many parts of contemporary mathematics, as well as for modern theoretical physics. They provide a natural framework for analysing the continuous symmetries of differential equations (differential Galois theory), in much the same way as permutation groups are used in Galois theory for analysing the discrete symmetries of algebraic equations. An extension of Galois theory to the case of continuous symmetry groups was one of Lie's principal motivations.
Groups can be described in different ways. Finite groups can be described by writing down the group table consisting of all possible multiplications g • h. A more compact way of defining a group is by generators and relations, also called the presentation of a group. Given any set F of generators {gi}i∈I, the free group generated by F surjects onto the group G. The kernel of this map is called the subgroup of relations, generated by some subset D. The presentation is usually denoted by ⟨F ∣ D⟩. For example, the group presentation ⟨a, b ∣ aba⁻¹b⁻¹⟩ describes a group which is isomorphic to Z × Z. A string consisting of generator symbols and their inverses is called a word.
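Free reduction of words, repeatedly cancelling a generator against an adjacent inverse, can be sketched as follows; the convention that uppercase letters denote the inverses of the corresponding lowercase generators is an assumption of this illustration:

```python
def reduce_word(word):
    """Freely reduce a word: cancel every adjacent pair x, x⁻¹.
    Generators are lowercase letters; inverses the matching uppercase."""
    stack = []
    for sym in word:
        if stack and stack[-1] == sym.swapcase():
            stack.pop()          # x followed by x⁻¹ cancels
        else:
            stack.append(sym)
    return "".join(stack)

# In the free group on {a, b}: a·b·b⁻¹·a⁻¹ reduces to the empty word,
assert reduce_word("abBA") == ""
# while a·b·a⁻¹·b does not reduce at all (a and b do not commute).
assert reduce_word("abAb") == "abAb"
```

Two words represent the same element of the free group exactly when their reductions coincide, which is why the word problem is solvable for free groups in particular.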
Combinatorial group theory studies groups from the perspective of generators and relations.[6] It is particularly useful where finiteness assumptions are satisfied, for example finitely generated groups, or finitely presented groups (i.e. in addition the relations are finite). The area makes use of the connection of graphs via their fundamental groups. A fundamental theorem of this area is that every subgroup of a free group is free.
There are several natural questions arising from giving a group by its presentation. The word problem asks whether two words are effectively the same group element. By relating the problem to Turing machines, one can show that there is in general no algorithm solving this task. Another, generally harder, algorithmically insoluble problem is the group isomorphism problem, which asks whether two groups given by different presentations are actually isomorphic. For example, the group with presentation ⟨x, y ∣ xyxyx = e⟩ is isomorphic to the additive group Z of integers, although this may not be immediately apparent. (Writing z = xy, one has G ≅ ⟨z, y ∣ z³ = y⟩ ≅ ⟨z⟩.)
Geometric group theory attacks these problems from a geometric viewpoint, either by viewing groups as geometric objects, or by finding suitable geometric objects a group acts on.[7] The first idea is made precise by means of the Cayley graph, whose vertices correspond to group elements and edges correspond to right multiplication in the group. Given two elements, one constructs the word metric given by the length of the minimal path between the elements. A theorem of Milnor and Švarc then says that given a group G acting in a reasonable manner on a metric space X, for example a compact manifold, G is quasi-isometric (i.e. looks similar from a distance) to the space X.
Given a structured objectXof any sort, asymmetryis a mapping of the object onto itself which preserves the structure. This occurs in many cases, for example
The axioms of a group formalize the essential aspects of symmetry. Symmetries form a group: they are closed because if you take a symmetry of an object, and then apply another symmetry, the result will still be a symmetry. The identity keeping the object fixed is always a symmetry of an object. Existence of inverses is guaranteed by undoing the symmetry, and associativity comes from the fact that symmetries are functions on a space, and composition of functions is associative.
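These axioms can be checked concretely for a small example. The sketch below represents the symmetries of a square as permutations of its four corners, generates the closure of one rotation and one reflection, and confirms the identity and inverse axioms (associativity is inherited from function composition):

```python
from itertools import product

def compose(p, q):
    """(p ∘ q)[i] = p[q[i]]: apply q first, then p."""
    return tuple(p[q[i]] for i in range(4))

identity = (0, 1, 2, 3)
r = (1, 2, 3, 0)   # rotate the corners by 90 degrees
s = (0, 3, 2, 1)   # reflect across the diagonal through corner 0

# Generate the closure of {r, s} under composition.
group = {identity}
frontier = {r, s}
while frontier:
    group |= frontier
    frontier = {compose(a, b) for a, b in product(group, group)} - group

assert len(group) == 8       # the dihedral group of the square, D4
assert identity in group     # identity axiom
# Inverse axiom: every symmetry can be undone by another one in the set.
assert all(any(compose(a, b) == identity for b in group) for a in group)
```

The eight elements found are exactly the four rotations and four reflections of the square; any further composition of them stays inside the set, which is the closure axiom.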
Frucht's theorem says that every group is the symmetry group of some graph. So every abstract group is actually the symmetries of some explicit object.
The notion of "preserving the structure" of an object can be made precise by working in a category. Maps preserving the structure are then the morphisms, and the symmetry group is the automorphism group of the object in question.
Applications of group theory abound. Almost all structures in abstract algebra are special cases of groups. Rings, for example, can be viewed as abelian groups (corresponding to addition) together with a second operation (corresponding to multiplication). Therefore, group-theoretic arguments underlie large parts of the theory of those entities.
Galois theory uses groups to describe the symmetries of the roots of a polynomial (or more precisely the automorphisms of the algebras generated by these roots). The fundamental theorem of Galois theory provides a link between algebraic field extensions and group theory. It gives an effective criterion for the solvability of polynomial equations in terms of the solvability of the corresponding Galois group. For example, S5, the symmetric group on 5 elements, is not solvable, which implies that the general quintic equation cannot be solved by radicals in the way equations of lower degree can. The theory, being one of the historical roots of group theory, is still fruitfully applied to yield new results in areas such as class field theory.
Algebraic topology is another domain which prominently associates groups to the objects the theory is interested in. There, groups are used to describe certain invariants of topological spaces. They are called "invariants" because they are defined in such a way that they do not change if the space is subjected to some deformation. For example, the fundamental group "counts" how many paths in the space are essentially different. The Poincaré conjecture, proved in 2002/2003 by Grigori Perelman, is a prominent application of this idea. The influence is not unidirectional, though. For example, algebraic topology makes use of Eilenberg–MacLane spaces, which are spaces with prescribed homotopy groups. Similarly, algebraic K-theory relies in a way on classifying spaces of groups. Finally, the name of the torsion subgroup of an infinite group shows the legacy of topology in group theory.
Algebraic geometry likewise uses group theory in many ways. Abelian varieties have been introduced above. The presence of the group operation yields additional information which makes these varieties particularly accessible. They also often serve as a test for new conjectures (for example the Hodge conjecture, in certain cases). The one-dimensional case, namely elliptic curves, is studied in particular detail. They are both theoretically and practically intriguing.[8] In another direction, toric varieties are algebraic varieties acted on by a torus. Toroidal embeddings have recently led to advances in algebraic geometry, in particular resolution of singularities.[9]
Algebraic number theory makes use of groups for some important applications. For example, Euler's product formula,

∑_{n≥1} 1/n^s = ∏_{p prime} 1/(1 − p^{−s}),

captures the fact that any integer decomposes in a unique way into primes. The failure of this statement for more general rings gives rise to class groups and regular primes, which feature in Kummer's treatment of Fermat's Last Theorem.
Analysis on Lie groups and certain other groups is called harmonic analysis. Haar measures, that is, integrals invariant under translation in a Lie group, are used for pattern recognition and other image processing techniques.[10]
In combinatorics, the notion of a permutation group and the concept of a group action are often used to simplify the counting of a set of objects; see in particular Burnside's lemma.
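A standard illustration of Burnside's lemma is counting two-coloured necklaces of four beads up to rotation; the sketch below compares the lemma's average-of-fixed-points count with a brute-force orbit count:

```python
from itertools import product

n, colors = 4, 2
# The cyclic group C4 acting on bead positions by rotation.
rotations = [tuple((i + k) % n for i in range(n)) for k in range(n)]

def fixed(perm):
    """Number of colorings left unchanged by a given rotation."""
    return sum(1 for c in product(range(colors), repeat=n)
               if all(c[i] == c[perm[i]] for i in range(n)))

# Burnside: number of orbits = average number of fixed colorings.
burnside_count = sum(fixed(p) for p in rotations) // len(rotations)

# Brute-force check: count orbits directly.
seen, orbits = set(), 0
for c in product(range(colors), repeat=n):
    if c not in seen:
        orbits += 1
        seen |= {tuple(c[p[i]] for i in range(n)) for p in rotations}

assert burnside_count == orbits == 6
```

The fixed-point counts are 16, 2, 4, and 2 for the four rotations, and (16 + 2 + 4 + 2)/4 = 6 distinct necklaces.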
The presence of the 12-periodicity in the circle of fifths yields applications of elementary group theory in musical set theory. Transformational theory models musical transformations as elements of a mathematical group.
In physics, groups are important because they describe the symmetries which the laws of physics seem to obey. According to Noether's theorem, every continuous symmetry of a physical system corresponds to a conservation law of the system. Physicists are very interested in group representations, especially of Lie groups, since these representations often point the way to the "possible" physical theories. Examples of the use of groups in physics include the Standard Model, gauge theory, the Lorentz group, and the Poincaré group.
Group theory can be used to resolve the incompleteness of the statistical interpretations of mechanics developed by Willard Gibbs, relating to the summing of an infinite number of probabilities to yield a meaningful solution.[11]
In chemistry and materials science, point groups are used to classify regular polyhedra and the symmetries of molecules, and space groups to classify crystal structures. The assigned groups can then be used to determine physical properties (such as chemical polarity and chirality), spectroscopic properties (particularly useful for Raman spectroscopy, infrared spectroscopy, circular dichroism spectroscopy, magnetic circular dichroism spectroscopy, UV/Vis spectroscopy, and fluorescence spectroscopy), and to construct molecular orbitals.
Molecular symmetry is responsible for many physical and spectroscopic properties of compounds and provides relevant information about how chemical reactions occur. In order to assign a point group for any given molecule, it is necessary to find the set of symmetry operations present on it. A symmetry operation is an action, such as a rotation around an axis or a reflection through a mirror plane. In other words, it is an operation that moves the molecule such that it is indistinguishable from the original configuration. In group theory, the rotation axes and mirror planes are called "symmetry elements". These elements can be a point, line or plane with respect to which the symmetry operation is carried out. The symmetry operations of a molecule determine the specific point group for this molecule.
In chemistry, there are five important symmetry operations: the identity operation (E), rotation or proper rotation (Cn), reflection (σ), inversion (i), and rotation-reflection or improper rotation (Sn). The identity operation (E) consists of leaving the molecule as it is. This is equivalent to any number of full rotations around any axis. This is a symmetry of all molecules, whereas the symmetry group of a chiral molecule consists of only the identity operation. An identity operation is a characteristic of every molecule even if it has no symmetry. Rotation around an axis (Cn) consists of rotating the molecule around a specific axis by a specific angle: rotation through the angle 360°/n, where n is an integer, about a rotation axis. For example, if a water molecule rotates 180° around the axis that passes through the oxygen atom and between the hydrogen atoms, it is in the same configuration as it started. In this case, n = 2, since applying it twice produces the identity operation. In molecules with more than one rotation axis, the Cn axis having the largest value of n is the highest-order rotation axis or principal axis. For example, in boron trifluoride (BF3), the highest order of rotation axis is C3, so the principal axis of rotation is C3.
Many molecules have mirror planes, although they may not be obvious. The reflection operation (σ) exchanges left and right, as if each point had moved perpendicularly through the plane to a position exactly as far from the plane as when it started. When the plane is perpendicular to the principal axis of rotation, it is called σh (horizontal). Other planes, which contain the principal axis of rotation, are labeled vertical (σv) or dihedral (σd).
Inversion (i) is a more complex operation. Each point moves through the center of the molecule to a position opposite the original position and as far from the central point as where it started. Many molecules that seem at first glance to have an inversion center do not; for example, methane and other tetrahedral molecules lack inversion symmetry. To see this, hold a methane model with two hydrogen atoms in the vertical plane on the right and two hydrogen atoms in the horizontal plane on the left. Inversion results in two hydrogen atoms in the horizontal plane on the right and two hydrogen atoms in the vertical plane on the left. Inversion is therefore not a symmetry operation of methane, because the orientation of the molecule following the inversion operation differs from the original orientation. The last operation, improper rotation or rotation-reflection (Sn), requires a rotation of 360°/n followed by reflection through a plane perpendicular to the axis of rotation.
Very large groups of prime order constructed in elliptic curve cryptography serve for public-key cryptography. Cryptographic methods of this kind benefit from the flexibility of the geometric objects, hence their group structures, together with the complicated structure of these groups, which make the discrete logarithm very hard to calculate. One of the earliest encryption protocols, Caesar's cipher, may also be interpreted as a (very easy) group operation. Most cryptographic schemes use groups in some way. In particular, Diffie–Hellman key exchange uses finite cyclic groups. So the term group-based cryptography refers mostly to cryptographic protocols that use infinite non-abelian groups such as a braid group.
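A Diffie–Hellman exchange in the multiplicative group modulo a prime can be sketched in a few lines; the parameters below (a Mersenne prime modulus and the base 3) are illustrative toy choices, not vetted cryptographic parameters:

```python
import secrets

# Toy Diffie–Hellman in the cyclic group (Z/p)*; real systems use
# standardized groups or elliptic curves with ~256-bit security.
p = 2**127 - 1           # a Mersenne prime (illustrative modulus)
g = 3                    # assumed group element of large order (illustrative)

a = secrets.randbelow(p - 2) + 1   # Alice's secret exponent
b = secrets.randbelow(p - 2) + 1   # Bob's secret exponent

A = pow(g, a, p)         # Alice publishes g^a mod p
B = pow(g, b, p)         # Bob publishes g^b mod p

# Both sides derive the same shared secret because (g^b)^a = (g^a)^b = g^(ab).
assert pow(B, a, p) == pow(A, b, p)
```

The security rests on the discrete-logarithm problem mentioned above: recovering a from g^a mod p is believed hard in a well-chosen group, while the exponentiations themselves are cheap.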
|
https://en.wikipedia.org/wiki/Group_theory
|
Grammar-oriented programming (GOP) and Grammar-oriented Object Design (GOOD) are approaches to designing and creating a domain-specific programming language (DSL) for a specific business domain.
GOOD can be used to drive the execution of the application, or it can be used to embed the declarative processing logic of a context-aware component (CAC) or context-aware service (CAS). GOOD is a method for creating and maintaining dynamically reconfigurable software architectures driven by business-process architectures. The business compiler was used to capture business processes within real-time workshops for various lines of business and create an executable simulation of the processes used.
Instead of using one DSL for the entire programming activity, GOOD suggests combining the definition of domain-specific behavioral semantics with the use of more traditional, general-purpose programming languages.
|
https://en.wikipedia.org/wiki/Grammar-oriented_programming
|
BPCS-steganography (Bit-Plane Complexity Segmentation steganography) is a type of digital steganography.
Digital steganography can hide confidential data (i.e. secret files) very securely by embedding it into some media data called "vessel data." The vessel data is also referred to as "carrier", "cover", or "dummy" data. In BPCS-steganography, true color images (i.e., 24-bit color images) are mostly used for vessel data. The embedding operation in practice replaces the "complex areas" on the bit planes of the vessel image with the confidential data. The most important aspect of BPCS-steganography is that the embedding capacity is very large. In comparison to simple image-based steganography, which uses only the least significant bit of each value and thus (for a 24-bit color image) can embed only data equivalent to 1/8 of the total size, BPCS-steganography uses multiple bit planes, and so can embed a much higher amount of data, though this depends on the individual image. For a "normal" image, roughly 50% of the data might be replaceable with secret data before image degradation becomes apparent.
The human visual system has the special property that an overly complicated visual pattern cannot be perceived as "shape-informative." For example, on a very flat beach shore every single square-foot area looks the same: it is just a sandy area, and no shape is observed. However, on careful inspection, two same-looking areas are entirely different in their sand-particle shapes. BPCS-steganography makes use of this property. It replaces complex areas on the bit planes of the vessel image with other complex data patterns (i.e., pieces of secret files). This replacing operation is called "embedding." No one can see any difference between the vessel images before and after the embedding operation.
An issue arises when the data to be embedded appears visually as simple information: if this simple information replaces complex information in the original image, it may create spurious "real image" information. In this case the data is passed through a binary image conjugation transformation, in order to create a reciprocally complex representation.
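A common complexity measure in the BPCS literature is the fraction of adjacent bit pairs that differ within a square block, and conjugation (XOR with the checkerboard pattern) maps a block of complexity α to one of complexity 1 − α. The sketch below illustrates both ideas; the 0.3 threshold is a typical value from the literature, not a fixed part of the method:

```python
def complexity(block):
    """Border-based complexity of an n×n binary block: the fraction of
    horizontally/vertically adjacent bit pairs that differ
    (0.0 = completely flat, 1.0 = checkerboard)."""
    n = len(block)
    borders = sum(block[r][c] != block[r][c + 1]
                  for r in range(n) for c in range(n - 1))
    borders += sum(block[r][c] != block[r + 1][c]
                   for r in range(n - 1) for c in range(n))
    return borders / (2 * n * (n - 1))

def conjugate(block):
    """XOR with the checkerboard; maps complexity α to 1 − α."""
    return [[b ^ ((r + c) % 2) for c, b in enumerate(row)]
            for r, row in enumerate(block)]

flat = [[0] * 8 for _ in range(8)]
checker = [[(r + c) % 2 for c in range(8)] for r in range(8)]

assert complexity(flat) == 0.0
assert complexity(checker) == 1.0
assert complexity(conjugate(flat)) == 1.0

# Only blocks above a threshold (commonly around 0.3) are used for embedding,
# so a too-simple secret block is conjugated first to push it above it.
threshold = 0.3
assert complexity(conjugate(flat)) > threshold
```

Conjugation is its own inverse, so the extractor only needs one flag bit per block to know whether to undo it.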
This form of steganography was proposed jointly by Eiji Kawaguchi and Richard O. Eason in 1998.[1] Their experimental program (titled Qtech Hide & View) is freely available for educational purposes.[2] More recently, many researchers have been tackling improvements to its algorithm and its applications, as well as studies of its resistance to steganalysis.[citation needed]
|
https://en.wikipedia.org/wiki/BPCS-Steganography
|
Telegram style,telegraph style,telegraphic style, ortelegraphese[1]is a clipped way of writing which abbreviates words and packs information into the smallest possible number of words or characters. It originated in thetelegraphage when telecommunication consisted only of short messages transmitted by hand over the telegraph wire. The telegraph companies charged for their service by the number of words in a message, with a maximum of 15 characters per word for a plain-languagetelegram, and 10 per word for one written in code. The style developed to minimize costs but still convey the message clearly and unambiguously.
The related term cablese describes the style of press messages sent uncoded but in a highly condensed style over submarine communications cables. In the U.S. Foreign Service, cablese referred to condensed telegraphic messaging that made heavy use of abbreviations and avoided use of definite or indefinite articles, punctuation, and other words unnecessary for comprehension of the message.
Before the telegraph age, military dispatches from overseas were sent as letters carried by fast sailing ships. Clarity and concision were often considered important in such correspondence.
An apocryphal story about the briefest correspondence in history has a writer (variously identified as Victor Hugo or Oscar Wilde) inquiring about the sales of his new book by sending the message "?" to his publisher, and receiving "!" in reply.[2]
Throughout the history of telegraphy, many dictionaries of telegraphese, codes, or ciphers were developed, each serving to minimise the number of characters or words which needed to be transmitted in order to impart a message. The drivers of this economy were, for telegraph operators, the resource cost and limited bandwidth of the system; and for the consumer, the cost of sending messages.
Examples of telegraphic code-words and their equivalent expressions, taken from The Adams Cable Codex (1894),[3] are:
Note that in the Adams code, the code-words are all actual English words; some telegraph companies charged more for coded messages, or had shorter word-size limits (a 10-character maximum rather than 15 characters). Compare these to the following examples from the A.B.C. Universal Commercial Electric Telegraphic Code (1901),[4] all of which are English-like but invented words:
In some ways, telegram style was the precursor to the abbreviated language used in text messaging or Short Message Service (SMS) offerings such as Twitter, referred to as SMS language.
For telegrams, space was at an economic premium, and abbreviations were used out of necessity. This motivation was revived for compressing information into the 160-character limit of a costly SMS before the advent of multi-message capabilities. Length constraints, and the initial handicap of having to enter each individual letter using multiple keypresses on a numeric pad, drove re-adoption of telegraphic style. Continued space limits and high per-message costs meant the practice persisted for some time after the introduction of built-in predictive text assistance. Some[who?] who favor predictive entry claim that telegraphing persists despite then requiring more effort to write (and read); many others,[who?] however, assert that predictive text generation is usually wrong, and find it more tedious and vexing to erase and correct predicted text than to turn off auto-text generation and enter their messages "telegraph style".
In Japanese, telegrams are printed using the katakana script, one of the few instances in which this script is used for entire sentences. This is a rare context in which someone might see the particle ヲ in katakana instead of the equivalent hiragana を; these characters are virtually never used within words, so particles are not among the parts of speech that usually get written in katakana.[citation needed]
The average length of a telegram in the 1900s in the US was 11.93 words; more than half of the messages were 10 words or fewer.[5]
According to another study, the mean length of the telegrams sent in the UK before 1950 was 14.6 words or 78.8 characters.[6]
For German telegrams, the mean length was 11.5 words or 72.4 characters.[6] At the end of the 19th century, the average length of a German telegram was calculated as 14.2 words.[6]
|
https://en.wikipedia.org/wiki/Telegraphese
|
A common data model (CDM) can refer to any standardised data model which allows for data and information exchange between different applications and data sources. Common data models aim to standardise logical infrastructure so that related applications can "operate on and share the same data",[1] and can be seen as a way to "organize data from many sources that are in different formats into a standard structure".[2]
A common data model has been described as one of the components of a "strong information system".[3] A standardised common data model has also been described as a typical component of a well-designed agile application, alongside a common communication protocol.[4] Providing a single common data model within an organisation is one of the typical tasks of a data warehouse.
X-trans.eu was a cross-border pilot project between the Free State of Bavaria (Germany) and Upper Austria with the aim of developing a faster procedure for the application and approval of cross-border large-capacity transports. The portal was based on a common data model that contained all the information required for approval.
The Climate Data Store Common Data Model is a common data model set up by the Copernicus Climate Change Service for harmonising essential climate variables from different sources and data providers.
Within service-oriented architecture, S-RAMP is a specification released by HP, IBM, Software AG, TIBCO, and Red Hat[5] which defines a common data model for SOA repositories,[6] as well as an interaction protocol to facilitate the use of common tooling and sharing of data.[7]
Content Management Interoperability Services (CMIS) is an open standard for inter-operation of different content management systems over the internet, and provides a common data model for typed files and folders used with version control.[8]
The NetCDF software libraries for array-oriented scientific data implement a common data model called the NetCDF Java common data model, which consists of three layers built on top of each other to add successively richer semantics.
Within genomic and medical data, the Observational Medical Outcomes Partnership (OMOP) research program established under the U.S. National Institutes of Health has created a common data model for claims and electronic health records which can accommodate data from different sources around the world. PCORnet, which was developed by the Patient-Centered Outcomes Research Institute, is another common data model for health data, including electronic health records and patient claims. The Sentinel Common Data Model was initially started as Mini-Sentinel in 2008; it is used by the Sentinel Initiative of the USA's Food and Drug Administration. The Generalized Data Model was first published in 2019.[9] It was designed to be a stand-alone data model as well as to allow for further transformation into other data models (e.g., OMOP, PCORnet, Sentinel), and has a hierarchical structure to flexibly capture relationships among data elements. The JANUS clinical trial data repository also provides a common data model, based on the SDTM standard, to represent clinical data submitted to regulatory agencies, such as tabulation datasets, patient profiles, and listings.
SX000i is a specification developed jointly by the Aerospace and Defence Industries Association of Europe (ASD) and the American Aerospace Industries Association (AIA) to provide information, guidance and instructions to ensure compatibility and commonality. The associated SX002D specification contains a common data model.
The Microsoft Common Data Model is a collection of many standardised extensible data schemas with entities, attributes, semantic metadata, and relationships, which represent commonly used concepts and activities in various business areas.[citation needed] It is maintained by Microsoft and its partners, and is published on GitHub.[10] Microsoft's Common Data Model is used, among others, in Microsoft Dataverse[11] and with various Microsoft Power Platform[12] and Microsoft Dynamics 365[13] services.
RailTopoModel is a common data model for the railway sector.[14]
There are many more examples of various common data models for different uses published by different sources.[15][16][17][18][19]
|
https://en.wikipedia.org/wiki/Common_data_model
|
In computer science, the Sharp Satisfiability Problem (sometimes called Sharp-SAT, #SAT, or model counting) is the problem of counting the number of interpretations that satisfy a given Boolean formula, introduced by Valiant in 1979.[1] In other words, it asks in how many ways the variables of a given Boolean formula can be consistently replaced by the values TRUE or FALSE such that the formula evaluates to TRUE. For example, the formula a ∨ ¬b is satisfied by three distinct Boolean assignments of the variables, namely (a = TRUE, b = FALSE), (a = FALSE, b = FALSE), and (a = TRUE, b = TRUE); for each of these, a ∨ ¬b = TRUE.
#SAT differs from the Boolean satisfiability problem (SAT), which asks whether there exists a solution of a Boolean formula; #SAT instead asks for the number of all solutions. #SAT is harder than SAT in the sense that, once the total number of solutions to a Boolean formula is known, SAT can be decided in constant time. The converse is not true, because knowing that a Boolean formula has a solution does not help us count all the solutions, as there are an exponential number of possibilities.
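The definition can be made concrete with a brute-force counter that enumerates all 2^n assignments; this is illustrative only, since exactly this exponential enumeration is what makes #SAT hard.

```python
# Brute-force model counter: enumerate every assignment and count the
# satisfying ones. Feasible only for small n.
from itertools import product

def count_models(n_vars, formula):
    """Count assignments (tuples of booleans) for which `formula` is True."""
    return sum(1 for assignment in product([False, True], repeat=n_vars)
               if formula(*assignment))
```

For the example above, `count_models(2, lambda a, b: a or not b)` returns 3, matching the three satisfying assignments listed in the text.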
#SAT is a well-known example of the class of counting problems known as #P-complete (read as "sharp-P complete"). In other words, every instance of a problem in the complexity class #P can be reduced to an instance of the #SAT problem. This is an important result because many difficult counting problems arise in enumerative combinatorics, statistical physics, network reliability, and artificial intelligence without any known formula. If such a problem is shown to be #P-complete, this provides a complexity-theoretic explanation for the lack of nice-looking formulas.[2]
#SAT is #P-complete. To prove this, first note that #SAT is obviously in #P.
Next, we prove that #SAT is #P-hard. Take any problem #A in #P. We know that A can be solved using a non-deterministic Turing machine M. On the other hand, from the proof of the Cook–Levin theorem, we know that we can reduce M to a Boolean formula F. Each valid assignment of F corresponds to a unique accepting path of M, and vice versa; and each accepting path taken by M represents a solution to A. In other words, there is a bijection between the satisfying assignments of F and the solutions to A. So the reduction used in the proof of the Cook–Levin theorem is parsimonious, which implies that #SAT is #P-hard.
Counting solutions is intractable (#P-complete) in many special cases for which satisfiability is tractable (in P), as well as when satisfiability is intractable (NP-complete). This includes the following.
This is the counting version of 3SAT. One can show that any formula in SAT can be rewritten as a formula in 3-CNF form preserving the number of satisfying assignments. Hence, #SAT and #3SAT are counting-equivalent, and #3SAT is #P-complete as well.
Even though 2SAT (deciding whether a 2CNF formula has a solution) is polynomial, counting the number of solutions is #P-complete.[3] The #P-completeness holds already in the monotone case, i.e., when there are no negations (#MONOTONE-2-CNF).
It is known that, assuming that NP is different from RP, #MONOTONE-2-CNF cannot be approximated by a fully polynomial-time randomized approximation scheme (FPRAS), even assuming that each variable occurs in at most 6 clauses, but that a fully polynomial-time approximation scheme (FPTAS) exists when each variable occurs in at most 5 clauses.[4] This follows from analogous results on the problem #IS of counting the number of independent sets in graphs.
Similarly, even though Horn-satisfiability is polynomial, counting the number of solutions is #P-complete. This result follows from a general dichotomy characterizing which SAT-like problems are #P-complete.[5]
This is the counting version of Planar 3SAT. The hardness reduction from 3SAT to Planar 3SAT given by Lichtenstein[6] is parsimonious. This implies that Planar #3SAT is #P-complete.
This is the counting version of Planar Monotone Rectilinear 3SAT.[7] The NP-hardness reduction given by de Berg & Khosravi[7] is parsimonious. Therefore, this problem is #P-complete as well.
For disjunctive normal form (DNF) formulas, counting the solutions is also #P-complete, even when all clauses have size 2 and there are no negations: this is because, by De Morgan's laws, counting the number of solutions of a DNF amounts to counting the number of solutions of the negation of a conjunctive normal form (CNF) formula. Intractability even holds in the case known as #PP2DNF, where the variables are partitioned into two sets, with each clause containing one variable from each set.[8]
By contrast, it is possible to tractably approximate the number of solutions of a disjunctive normal form formula using the Karp–Luby algorithm, which is an FPRAS for this problem.[9]
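A sketch of the Karp–Luby estimator, under an assumed representation of a DNF as a list of clauses, each a dict mapping a variable index to its required truth value: sample a clause with probability proportional to the number of assignments satisfying it, sample a uniform satisfying assignment of that clause, and count the sample only when the chosen clause is the first one the assignment satisfies, so that each satisfying assignment is counted exactly once in expectation.

```python
# Hedged sketch of the Karp-Luby FPRAS for DNF model counting.
# A clause like {0: True, 2: False} means x0 AND NOT x2.
import random

def karp_luby(n_vars, clauses, samples=100_000):
    # Weight of clause i: number of assignments satisfying it on its own.
    weights = [2 ** (n_vars - len(c)) for c in clauses]
    total = sum(weights)
    hits = 0
    for _ in range(samples):
        # Pick a clause proportionally to its weight, then a uniform
        # satisfying assignment of that clause (free variables are random).
        i = random.choices(range(len(clauses)), weights=weights)[0]
        assign = {v: random.random() < 0.5 for v in range(n_vars)}
        assign.update(clauses[i])
        # Count the sample iff i is the first clause this assignment satisfies.
        first = next(j for j, c in enumerate(clauses)
                     if all(assign[v] == b for v, b in c.items()))
        hits += (first == i)
    return total * hits / samples
```

Each (clause, satisfying assignment) pair is drawn with probability 1/total, and exactly one pair per satisfying assignment is "canonical" (its first satisfied clause), so the estimator is unbiased for the number of DNF solutions.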
The variant of SAT corresponding to affine relations in the sense of Schaefer's dichotomy theorem, i.e., where clauses amount to equations modulo 2 with the XOR operator, is the only SAT variant for which the #SAT problem can be solved in polynomial time.[10]
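For this affine case, the count follows from Gaussian elimination over GF(2): a consistent system of rank r over n variables has exactly 2^(n − r) solutions. A minimal sketch, with an assumed representation of each equation as a set of variable indices together with a parity bit:

```python
# Counting solutions of an XOR-SAT (affine) system over GF(2).
# equations: list of (set_of_vars, parity), meaning XOR of the vars == parity.

def count_xor_sat(n_vars, equations):
    rows = [(set(vs), b) for vs, b in equations]
    rank = 0
    for pivot in range(n_vars):
        # Find a remaining row containing the pivot variable.
        for i in range(rank, len(rows)):
            if pivot in rows[i][0]:
                rows[rank], rows[i] = rows[i], rows[rank]
                # Eliminate the pivot from every other row;
                # symmetric difference of sets acts as XOR of rows.
                for j in range(len(rows)):
                    if j != rank and pivot in rows[j][0]:
                        rows[j] = (rows[j][0] ^ rows[rank][0],
                                   rows[j][1] ^ rows[rank][1])
                rank += 1
                break
    # An empty left-hand side equal to 1 means the system is inconsistent.
    if any(not vs and b for vs, b in rows):
        return 0
    return 2 ** (n_vars - rank)
```

For example, the single equation x0 ⊕ x1 = 1 over two variables has rank 1, giving 2^(2−1) = 2 solutions.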
If the instances of SAT are restricted using graph parameters, the #SAT problem can become tractable. For instance, #SAT on SAT instances whose treewidth is bounded by a constant can be performed in polynomial time.[11] Here, the treewidth can be the primal treewidth, dual treewidth, or incidence treewidth of the hypergraph associated to the SAT formula, whose vertices are the variables and where each clause is represented as a hyperedge.
Model counting is tractable (solvable in polynomial time) for (ordered) BDDs and for some circuit formalisms studied in knowledge compilation, such as d-DNNFs.
Weighted model counting (WMC) generalizes #SAT by computing a weighted sum over the models instead of just counting them. In the literal-weighted variant of WMC, each literal l is assigned a weight w(l), such that WMC(φ; w) = Σ_{M ⊨ φ} Π_{l ∈ M} w(l).
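A brute-force evaluation of this definition (illustrative only; practical WMC solvers rely on knowledge compilation or component caching rather than enumeration):

```python
# Literal-weighted model counting by enumeration:
# WMC(phi; w) = sum over models M of the product of w(l) for literals l in M.
from itertools import product

def wmc(n_vars, formula, weight):
    """weight(var, value) -> weight of the literal `var = value`."""
    total = 0.0
    for assignment in product([False, True], repeat=n_vars):
        if formula(*assignment):
            p = 1.0
            for v, val in enumerate(assignment):
                p *= weight(v, val)
            total += p
    return total
```

Choosing w(v, TRUE) = p_v and w(v, FALSE) = 1 − p_v makes WMC the probability that the formula holds under independent random variables, which is the sense in which probabilistic queries reduce to WMC.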
WMC is used for probabilistic inference, as probabilistic queries over discrete random variables, such as in Bayesian networks, can be reduced to WMC.[12]
Algebraic model counting further generalizes #SAT and WMC over arbitrary commutative semirings.[13]
|
https://en.wikipedia.org/wiki/Sharp-SAT
|
Cyber insurance is a specialty insurance product intended to protect businesses from Internet-based risks, and more generally from risks relating to information technology infrastructure and activities. Risks of this nature are typically excluded from traditional commercial general liability policies, or at least are not specifically defined in traditional insurance products. Coverage provided by cyber-insurance policies may include first- and third-party coverage against losses such as data destruction, extortion, theft, hacking, and denial-of-service attacks; liability coverage indemnifying companies for losses to others caused, for example, by errors and omissions, failure to safeguard data, or defamation; and other benefits including regular security audits, post-incident public relations and investigative expenses, and criminal reward funds.
Because the cyber insurance market in many countries is relatively small compared to other insurance products, its overall impact on emerging cyber threats is difficult to quantify.[1]As the impact to people and businesses from cyber threats is also relatively broad when compared to the scope of protection provided by insurance products, insurance companies continue to develop their services.
As well as directly improving security, cyber insurance is beneficial in the event of a large-scale security breach. Insurance provides a smooth funding mechanism for recovery from major losses, helping businesses to return to normal and reducing the need for government assistance.[2][3]
As a side benefit, many cyber-insurance policies require entities seeking coverage to participate in an IT security audit before the insurance carrier will bind the policy. This helps companies determine their current vulnerabilities and allows the carrier to gauge the risk it is taking on by offering the policy. In some cases, the entity procuring the policy will be required to remediate its IT security vulnerabilities before the cyber-insurance policy can be bound, which in turn helps reduce the risk of cyber crime against the company.[4]
Finally, insurance allows cyber-security risks to be distributed fairly, with the cost of premiums commensurate with the size of expected loss from such risks. This avoids potentially dangerous concentrations of risk while also preventing free-riding.
According to Josephine Wolff’s research into the history of cyber insurance, its origins trace back to an April 1997 International Risk Insurance Management Society convention at which Steven Haase presented the launch of the first cyber insurance product, including first and third party coverages.[5][6][7]Haase first came up with the concept of cyber insurance a few years earlier and had discussed it with various industry colleagues at times, but this 1997 event marked a breakthrough moment when the first cyber insurance policy and underwriting platform were actually launched. The event resulted in the creation of the first policy designed to focus on the risks of internet commerce, which was the Internet Security Liability (ISL) policy, developed by Haase and underwritten by AIG.[8]Around this same time, in 1999, David Walsh founded CFC Underwriting in the United Kingdom, a company which treats cyber as one of its main focus areas.[9][10]Chris Cotterell founded Safeonline around the same time, which soon became another significant player in the cyber insurance space.[11][12]The early meeting between Haase and 20 industry colleagues in Hawaii is now commonly referred to as the “Breach on the Beach” and is considered a pivotal moment at which cyber insurance was first recognized and celebrated.[13][14]
After a significant malware incident in 2017, however, Reckitt Benckiser released information on how much the cyberattack would impact financial performance, leading some analysts to believe the trend is for companies to be more transparent with data from cyber incidents.[15] Purchases of cyber insurance have increased due to the rise in internet-based attacks such as ransomware. According to the U.S. Government Accountability Office, "Insurance clients are opting in for cyber coverage—up from 26% in 2016 to 47% in 2020. At the same time, U.S. insurance entities saw the costs of cyberattacks nearly double between 2016 and 2019. As a result, insurance premiums also saw major increases."[16]
A key area of risk management is establishing what constitutes an acceptable risk for each organization, or what is 'reasonable security' for its specific working environment. Practicing 'duty of care' helps protect all interested parties: executives, regulators, judges, and the public who can be affected by those risks. The Duty of Care Risk Analysis Standard (DoCRA)[17] provides practices and principles to help balance compliance, security, and business objectives when developing security controls.
Legislation
In 2022, Kentucky and Maryland enacted insurance data security legislation based upon the National Association of Insurance Commissioners (“NAIC”) Insurance Data Security Model Law (MDL-668).[18]Maryland's SB 207[19]takes effect on October 1, 2023. Kentucky's House Bill 474[20]goes into effect on January 1, 2023.
During 2005, a "second generation" of cyber-insurance literature emerged targeting risk management of current cyber-networks. The authors of such literature link the market failure to fundamental properties of information technology, especially correlated risk, information asymmetries between insurers and insureds, and inter-dependencies.[21]
According to Josephine Wolff, cyber insurance has been "ineffective at curbing cybersecurity losses because it normalizes the payment of online ransoms, whereas the goal of cybersecurity is the opposite—to disincentivize such payments to make ransomware less profitable."[22]
FM Global in 2019 conducted a survey of CFOs at companies with over $1 billion in turnover. The survey found that 71% of CFOs believed that their insurance provider would cover "most or all" of the losses their company would suffer in a cyber-security attack or crime. Nevertheless, many of those CFOs reported that they expected damages related to cyber attacks that are not covered by typical cyber-attack policies. Specifically, 50% of the CFOs mentioned that they anticipated a devaluation of their company's brand after a cyber attack, while more than 30% expected a decline in revenue.[23]
Like other insurance policies, cyber insurance typically includes a war exclusion clause, explicitly excluding damage from acts of war. While the majority of cyber-insurance claims relate to simple criminal behaviour, companies are increasingly likely to fall victim to cyberwarfare attacks by nation-states or terrorist organizations, whether specifically targeted or simply as collateral damage. After the US and UK governments characterized the NotPetya attack as a Russian military cyber-attack, insurers argued that they do not cover such events.[24][25][26]
In a recent academic effort, researchers Pal, Madnick, and Siegel from the Sloan School of Management at the Massachusetts Institute of Technology were the first to analyze the economic feasibility of cyber-CAT bond markets. They applied economic theory and data science to propose conditions under which it is economically efficient to have re-insurance markets transferring risk (without the existence of CAT bond markets), CAT bond markets transferring risk (in the presence of re-insurance markets), or self-insurance markets (in the absence of re-insurance and CAT bond markets) to cover residual cyber-risk.[27][28]
As of 2019, the average cost of cyber liability insurance in the United States was estimated to be $1,501 per year for $1 million in liability coverage, with a $10,000 deductible.[29]The average annual premium for a cyber liability limit of $500,000 with a $5,000 deductible was $1,146, and the average annual premium for a cyber liability limit of $250,000 with a $2,500 deductible was $739.[30]In addition to location, the main drivers of cost for cyber insurance include the type of business, the number of credit/debit card transactions performed, and the storage of sensitive personal information such as date of birth and Social Security numbers.
|
https://en.wikipedia.org/wiki/Cyber_insurance
|
A discrete cosine transform (DCT) expresses a finite sequence of data points in terms of a sum of cosine functions oscillating at different frequencies. The DCT, first proposed by Nasir Ahmed in 1972, is a widely used transformation technique in signal processing and data compression. It is used in most digital media, including digital images (such as JPEG and HEIF), digital video (such as MPEG and H.26x), digital audio (such as Dolby Digital, MP3 and AAC), digital television (such as SDTV, HDTV and VOD), digital radio (such as AAC+ and DAB+), and speech coding (such as AAC-LD, Siren and Opus). DCTs are also important to numerous other applications in science and engineering, such as digital signal processing, telecommunication devices, reducing network bandwidth usage, and spectral methods for the numerical solution of partial differential equations.
A DCT is a Fourier-related transform similar to the discrete Fourier transform (DFT), but using only real numbers. The DCTs are generally related to Fourier series coefficients of a periodically and symmetrically extended sequence, whereas DFTs are related to Fourier series coefficients of only periodically extended sequences. DCTs are equivalent to DFTs of roughly twice the length, operating on real data with even symmetry (since the Fourier transform of a real and even function is real and even), whereas in some variants the input or output data are shifted by half a sample.
There are eight standard DCT variants, of which four are common.
The most common variant of the discrete cosine transform is the type-II DCT, which is often called simply the DCT. This was the original DCT as first proposed by Ahmed. Its inverse, the type-III DCT, is correspondingly often called simply the inverse DCT or the IDCT. Two related transforms are the discrete sine transform (DST), which is equivalent to a DFT of real and odd functions, and the modified discrete cosine transform (MDCT), which is based on a DCT of overlapping data. Multidimensional DCTs (MD DCTs) extend the concept of the DCT to multidimensional signals. A variety of fast algorithms have been developed to reduce the computational complexity of implementing the DCT. One of these is the integer DCT (IntDCT),[1] an integer approximation of the standard DCT,[2]: ix, xiii, 1, 141–304  used in several ISO/IEC and ITU-T international standards.[1][2]
DCT compression, also known as block compression, compresses data in sets of discrete DCT blocks.[3] DCT block sizes include 8×8 pixels for the standard DCT, and varied integer DCT sizes between 4×4 and 32×32 pixels.[1][4] The DCT has a strong energy compaction property,[5][6] capable of achieving high quality at high data compression ratios.[7][8] However, blocky compression artifacts can appear when heavy DCT compression is applied.
The DCT was first conceived by Nasir Ahmed while working at Kansas State University. The concept was proposed to the National Science Foundation in 1972. The DCT was originally intended for image compression.[9][1] Ahmed developed a practical DCT algorithm with his PhD students T. Raj Natarajan and K. R. Rao at the University of Texas at Arlington in 1973.[9] They presented their results in a January 1974 paper, titled Discrete Cosine Transform.[5][6][10] It described what is now called the type-II DCT (DCT-II),[2]: 51  as well as the type-III inverse DCT (IDCT).[5]
Since its introduction in 1974, there has been significant research on the DCT.[10] In 1977, Wen-Hsiung Chen published a paper with C. Harrison Smith and Stanley C. Fralick presenting a fast DCT algorithm.[11][10] Further developments include a 1978 paper by M. J. Narasimha and A. M. Peterson, and a 1984 paper by B. G. Lee.[10] These research papers, along with the original 1974 Ahmed paper and the 1977 Chen paper, were cited by the Joint Photographic Experts Group as the basis for JPEG's lossy image compression algorithm in 1992.[10][12]
The discrete sine transform (DST) was derived from the DCT by replacing the Neumann condition at x = 0 with a Dirichlet condition.[2]: 35–36  The DST was described in the 1974 DCT paper by Ahmed, Natarajan and Rao.[5] A type-I DST (DST-I) was later described by Anil K. Jain in 1976, and a type-II DST (DST-II) was then described by H.B. Kekra and J.K. Solanka in 1978.[13]
In 1975, John A. Roese and Guner S. Robinson adapted the DCT for inter-frame motion-compensated video coding. They experimented with the DCT and the fast Fourier transform (FFT), developing inter-frame hybrid coders for both, and found that the DCT was the most efficient due to its reduced complexity, capable of compressing image data down to 0.25 bit per pixel for a videotelephone scene with image quality comparable to an intra-frame coder requiring 2 bits per pixel.[14][15] In 1979, Anil K. Jain and Jaswant R. Jain further developed motion-compensated DCT video compression,[16][17] also called block motion compensation.[17] This led to Chen developing a practical video compression algorithm, called motion-compensated DCT or adaptive scene coding, in 1981.[17] Motion-compensated DCT later became the standard coding technique for video compression from the late 1980s onwards.[18][19]
A DCT variant, the modified discrete cosine transform (MDCT), was developed by John P. Princen, A.W. Johnson and Alan B. Bradley at the University of Surrey in 1987,[20] following earlier work by Princen and Bradley in 1986.[21] The MDCT is used in most modern audio compression formats, such as Dolby Digital (AC-3),[22][23] MP3 (which uses a hybrid DCT-FFT algorithm),[24] Advanced Audio Coding (AAC),[25] and Vorbis (Ogg).[26]
Nasir Ahmed also developed a lossless DCT algorithm with Giridhar Mandyam and Neeraj Magotra at the University of New Mexico in 1995. This allows the DCT technique to be used for lossless compression of images. It is a modification of the original DCT algorithm, and incorporates elements of inverse DCT and delta modulation. It is a more effective lossless compression algorithm than entropy coding.[27] Lossless DCT is also known as LDCT.[28]
The DCT is the most widely used transformation technique in signal processing,[29] and by far the most widely used linear transform in data compression.[30] Uncompressed digital media as well as lossless compression have high memory and bandwidth requirements, which are significantly reduced by the DCT lossy compression technique,[7][8] capable of achieving data compression ratios from 8:1 to 14:1 for near-studio-quality content,[7] and up to 100:1 for acceptable-quality content.[8] DCT compression standards are used in digital media technologies, such as digital images, digital photos,[31][32] digital video,[18][33] streaming media,[34] digital television, streaming television, video on demand (VOD),[8] digital cinema,[22] high-definition video (HD video), and high-definition television (HDTV).[7][35]
The DCT, and in particular the DCT-II, is often used in signal and image processing, especially for lossy compression, because it has a strong energy compaction property.[5][6] In typical applications, most of the signal information tends to be concentrated in a few low-frequency components of the DCT. For strongly correlated Markov processes, the DCT can approach the compaction efficiency of the Karhunen–Loève transform (which is optimal in the decorrelation sense). As explained below, this stems from the boundary conditions implicit in the cosine functions.
DCTs are widely employed in solving partial differential equations by spectral methods, where the different variants of the DCT correspond to slightly different even and odd boundary conditions at the two ends of the array.
DCTs are closely related to Chebyshev polynomials, and fast DCT algorithms (below) are used in Chebyshev approximation of arbitrary functions by series of Chebyshev polynomials, for example in Clenshaw–Curtis quadrature.
The DCT is widely used in many applications, which include the following.
The DCT-II is an important image compression technique. It is used in image compression standards such as JPEG, and video compression standards such as H.26x, MJPEG, MPEG, DV, Theora and Daala. There, the two-dimensional DCT-II of N × N blocks is computed and the results are quantized and entropy coded. In this case, N is typically 8 and the DCT-II formula is applied to each row and column of the block. The result is an 8 × 8 transform coefficient array in which the (0,0) element (top-left) is the DC (zero-frequency) component, and entries with increasing vertical and horizontal index values represent higher vertical and horizontal spatial frequencies.
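The separable row-then-column computation described above can be sketched as follows, with orthonormal scaling assumed; real coders use fast integer approximations rather than this direct form, so this is purely illustrative.

```python
# Sketch of the separable 2-D DCT-II on a square block (orthonormal scaling).
import math

def dct_1d(x):
    """Direct DCT-II of a sequence, with orthonormal normalization."""
    N = len(x)
    out = []
    for k in range(N):
        s = sum(x[n] * math.cos(math.pi * (2 * n + 1) * k / (2 * N))
                for n in range(N))
        scale = math.sqrt(1 / N) if k == 0 else math.sqrt(2 / N)
        out.append(scale * s)
    return out

def dct_2d(block):
    rows = [dct_1d(r) for r in block]        # transform each row
    cols = [dct_1d(c) for c in zip(*rows)]   # then each column
    return [list(r) for r in zip(*cols)]     # transpose back
```

For a constant 8×8 block, only the (0,0) DC coefficient is nonzero, illustrating the energy compaction the text describes; a JPEG-style coder would quantize and entropy-code the resulting coefficient array.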
The integer DCT, an integer approximation of the DCT,[2][1] is used in Advanced Video Coding (AVC),[52][1] introduced in 2003, and High Efficiency Video Coding (HEVC),[4][1] introduced in 2013. The integer DCT is also used in the High Efficiency Image Format (HEIF), which uses a subset of the HEVC video coding format for coding still images.[4] AVC uses 4×4 and 8×8 blocks. HEVC and HEIF use varied block sizes between 4×4 and 32×32 pixels.[4][1] As of 2019[update], AVC is by far the most commonly used format for the recording, compression and distribution of video content, used by 91% of video developers, followed by HEVC, which is used by 43% of developers.[43]
Multidimensional DCTs (MD DCTs) have several applications, mainly 3-D DCTs such as the 3-D DCT-II, which has several new applications like hyperspectral imaging coding systems,[85] variable temporal length 3-D DCT coding,[86] video coding algorithms,[87] adaptive video coding[88] and 3-D compression.[89] Due to enhancements in hardware and software and the introduction of several fast algorithms, the use of MD DCTs is rapidly increasing. DCT-IV has gained popularity for its applications in fast implementation of real-valued polyphase filter banks,[90] lapped orthogonal transforms[91][92] and cosine-modulated wavelet bases.[93]
The DCT plays an important role in digital signal processing, specifically data compression. The DCT is widely implemented in digital signal processors (DSPs), as well as digital signal processing software. Many companies have developed DSPs based on DCT technology. DCTs are widely used for applications such as encoding, decoding, video, audio, multiplexing, control signals, signaling, and analog-to-digital conversion. DCTs are also commonly used for high-definition television (HDTV) encoder/decoder chips.[1]
A common issue with DCT compression in digital media is blocky compression artifacts,[94] caused by DCT blocks.[3] In a DCT algorithm, an image (or frame in an image sequence) is divided into square blocks which are processed independently from each other, then the DCT of each block is computed and the resulting DCT coefficients are quantized. This process can cause blocking artifacts, primarily at high data compression ratios.[94] This can also cause the mosquito noise effect, commonly found in digital video.[95]
DCT blocks are often used in glitch art.[3] The artist Rosa Menkman makes use of DCT-based compression artifacts in her glitch art,[96] particularly the DCT blocks found in most digital media formats such as JPEG digital images and MP3 audio.[3] Another example is Jpegs by German photographer Thomas Ruff, which uses intentional JPEG artifacts as the basis of the picture's style.[97][98]
Like any Fourier-related transform, DCTs express a function or a signal in terms of a sum of sinusoids with different frequencies and amplitudes. Like the DFT, a DCT operates on a function at a finite number of discrete data points. The obvious distinction between a DCT and a DFT is that the former uses only cosine functions, while the latter uses both cosines and sines (in the form of complex exponentials). However, this visible difference is merely a consequence of a deeper distinction: a DCT implies different boundary conditions from the DFT or other related transforms.
The Fourier-related transforms that operate on a function over a finite domain, such as the DFT or DCT or a Fourier series, can be thought of as implicitly defining an extension of that function outside the domain. That is, once you write a function f(x) as a sum of sinusoids, you can evaluate that sum at any x, even for x where the original f(x) was not specified. The DFT, like the Fourier series, implies a periodic extension of the original function. A DCT, like a cosine transform, implies an even extension of the original function.
However, because DCTs operate on finite, discrete sequences, two issues arise that do not apply for the continuous cosine transform. First, one has to specify whether the function is even or odd at both the left and right boundaries of the domain (i.e. the min-n and max-n boundaries in the definitions below, respectively). Second, one has to specify around what point the function is even or odd. In particular, consider a sequence abcd of four equally spaced data points, and say that we specify an even left boundary. There are two sensible possibilities: either the data are even about the sample a, in which case the even extension is dcbabcd, or the data are even about the point halfway between a and the previous point, in which case the even extension is dcbaabcd (a is repeated).
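The two even-extension choices can be made concrete with a short sketch (plain Python, with list values standing in for the samples a, b, c, d):

```python
# Whole-sample even extension: the data are even about the first sample,
# which is NOT repeated in the mirror: d c b a b c d
def even_extend_whole(x):
    return x[:0:-1] + x          # reversed x without x[0], then x itself

# Half-sample even extension: the data are even about the point halfway
# before the first sample, so the first sample IS repeated: d c b a a b c d
def even_extend_half(x):
    return x[::-1] + x           # fully reversed x, then x itself

data = ["a", "b", "c", "d"]
print(even_extend_whole(data))   # ['d', 'c', 'b', 'a', 'b', 'c', 'd']
print(even_extend_half(data))    # ['d', 'c', 'b', 'a', 'a', 'b', 'c', 'd']
```

These two choices correspond to the whole-sample (DCT-I-style) and half-sample (DCT-II-style) symmetries described in the type definitions below.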
Each boundary can be either even or odd (2 choices per boundary) and can be symmetric about a data point or the point halfway between two data points (2 choices per boundary), for a total of 2 × 2 × 2 × 2 = 16 possibilities. These choices lead to all the standard variations of DCTs and also discrete sine transforms (DSTs). Half of these possibilities, those where the left boundary is even, correspond to the 8 types of DCT; the other half are the 8 types of DST.
These different boundary conditions strongly affect the applications of the transform and lead to uniquely useful properties for the various DCT types. Most directly, when using Fourier-related transforms to solve partial differential equations by spectral methods, the boundary conditions are directly specified as a part of the problem being solved. Or, for the MDCT (based on the type-IV DCT), the boundary conditions are intimately involved in the MDCT's critical property of time-domain aliasing cancellation. In a more subtle fashion, the boundary conditions are responsible for the energy compactification properties that make DCTs useful for image and audio compression, because the boundaries affect the rate of convergence of any Fourier-like series.
In particular, it is well known that any discontinuities in a function reduce the rate of convergence of the Fourier series, so that more sinusoids are needed to represent the function with a given accuracy. The same principle governs the usefulness of the DFT and other transforms for signal compression: the smoother a function is, the fewer terms in its DFT or DCT are required to represent it accurately, and the more it can be compressed.[a] However, the implicit periodicity of the DFT means that discontinuities usually occur at the boundaries: any random segment of a signal is unlikely to have the same value at both the left and right boundaries.[b] In contrast, a DCT where both boundaries are even always yields a continuous extension at the boundaries (although the slope is generally discontinuous). This is why DCTs, and in particular DCTs of types I, II, V, and VI (the types that have two even boundaries), generally perform better for signal compression than DFTs and DSTs. In practice, a type-II DCT is usually preferred for such applications, in part for reasons of computational convenience.
Formally, the discrete cosine transform is a linear, invertible function f : ℝ^N → ℝ^N (where ℝ denotes the set of real numbers), or equivalently an invertible N × N square matrix. There are several variants of the DCT with slightly modified definitions. The N real numbers x_0, …, x_{N−1} are transformed into the N real numbers X_0, …, X_{N−1} according to one of the formulas:
Some authors further multiply the x_0 and x_{N−1} terms by √2 and correspondingly multiply the X_0 and X_{N−1} terms by 1/√2, which, if one further multiplies by an overall scale factor of √(2/(N−1)), makes the DCT-I matrix orthogonal but breaks the direct correspondence with a real-even DFT.
The DCT-I is exactly equivalent (up to an overall scale factor of 2) to a DFT of 2(N−1) real numbers with even symmetry. For example, a DCT-I of N = 5 real numbers a b c d e is exactly equivalent to a DFT of eight real numbers a b c d e d c b (even symmetry), divided by two. (In contrast, DCT types II–IV involve a half-sample shift in the equivalent DFT.)
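This equivalence can be checked numerically. The displayed DCT-I formula is not reproduced in this text; the sketch below uses the common unnormalized convention X_k = (x_0 + (−1)^k x_{N−1})/2 + Σ_{n=1}^{N−2} x_n cos(πnk/(N−1)), and compares it against a direct DFT of the mirrored sequence:

```python
import cmath
import math

def dct1(x):
    # Unnormalized DCT-I of N >= 2 points (an assumed but standard convention).
    N = len(x)
    return [0.5 * (x[0] + (-1) ** k * x[-1])
            + sum(x[n] * math.cos(math.pi * n * k / (N - 1))
                  for n in range(1, N - 1))
            for k in range(N)]

def dft(y):
    # Naive O(M^2) discrete Fourier transform, sufficient for a check.
    M = len(y)
    return [sum(y[n] * cmath.exp(-2j * cmath.pi * n * k / M)
                for n in range(M)) for k in range(M)]

x = [1.0, 2.0, 3.0, 5.0, 8.0]     # a b c d e
y = x + x[-2:0:-1]                 # a b c d e d c b (even symmetry)
X, Y = dct1(x), dft(y)
for k in range(len(x)):
    assert abs(Y[k].imag) < 1e-9              # real-even input -> real DFT
    assert abs(X[k] - Y[k].real / 2) < 1e-9   # DCT-I = DFT of mirror, halved
```

The assertions confirm the statement above: each DCT-I output equals the corresponding DFT output of the eight-point mirrored sequence, divided by two.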
Note, however, that the DCT-I is not defined for N less than 2, while all other DCT types are defined for any positive N.
Thus, the DCT-I corresponds to the boundary conditions: x_n is even around n = 0 and even around n = N − 1; similarly for X_k.
The DCT-II is probably the most commonly used form, and is often simply referred to as the DCT.[5][6]
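The displayed DCT-II formula is not reproduced in this text; a direct Python transcription of the standard unnormalized definition, X_k = Σ_{n=0}^{N−1} x_n cos[(π/N)(n + 1/2)k], is:

```python
import math

def dct2_1d(x):
    # Unnormalized DCT-II: X_k = sum_n x_n * cos[(pi/N) * (n + 1/2) * k]
    N = len(x)
    return [sum(x[n] * math.cos(math.pi / N * (n + 0.5) * k)
                for n in range(N)) for k in range(N)]

x = [4.0, 3.0, 5.0, 10.0]
X = dct2_1d(x)
# In this convention the k = 0 (DC) coefficient is simply the sum of the samples.
assert abs(X[0] - sum(x)) < 1e-9
```

This direct evaluation costs O(N²) operations; the fast algorithms discussed later in the article reduce this to O(N log N).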
This transform is exactly equivalent (up to an overall scale factor of 2) to a DFT of 4N real inputs of even symmetry, where the even-indexed elements are zero. That is, it is half of the DFT of the 4N inputs y_n, where y_{2n} = 0, y_{2n+1} = x_n for 0 ≤ n < N, y_{2N} = 0, and y_{4N−n} = y_n for 0 < n < 2N. A DCT-II can also be computed from a DFT of a 2N signal followed by a multiplication by a half shift, as demonstrated by Makhoul.[citation needed]
Some authors further multiply the X_0 term by 1/√N and multiply the rest of the matrix by an overall scale factor of √(2/N) (see below for the corresponding change in DCT-III). This makes the DCT-II matrix orthogonal, but breaks the direct correspondence with a real-even DFT of half-shifted input. This is the normalization used by Matlab.[99] In many applications, such as JPEG, the scaling is arbitrary because scale factors can be combined with a subsequent computational step (e.g. the quantization step in JPEG[100]), and a scaling can be chosen that allows the DCT to be computed with fewer multiplications.[101][102]
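The orthonormal scaling just described can be verified numerically: with row scale factors √(1/N) for k = 0 and √(2/N) for k > 0, the DCT-II matrix times its own transpose is the identity. A sketch:

```python
import math

def dct2_matrix_ortho(N):
    # Orthonormalized DCT-II matrix: entry (k, n) is
    #   s_k * cos[(pi/N) * (n + 1/2) * k],
    # with s_0 = sqrt(1/N) and s_k = sqrt(2/N) for k > 0.
    return [[(math.sqrt(1.0 / N) if k == 0 else math.sqrt(2.0 / N))
             * math.cos(math.pi / N * (n + 0.5) * k)
             for n in range(N)] for k in range(N)]

def times_own_transpose(M):
    N = len(M)
    return [[sum(M[i][t] * M[j][t] for t in range(N)) for j in range(N)]
            for i in range(N)]

# C * C^T should be the identity matrix, confirming orthogonality.
C = dct2_matrix_ortho(8)
P = times_own_transpose(C)
for i in range(8):
    for j in range(8):
        assert abs(P[i][j] - (1.0 if i == j else 0.0)) < 1e-9
```

Because the matrix is orthogonal under this normalization, its inverse is simply its transpose, which is one practical appeal of the normalized convention.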
The DCT-II implies the boundary conditions: x_n is even around n = −1/2 and even around n = N − 1/2; X_k is even around k = 0 and odd around k = N.
Because it is the inverse of DCT-II up to a scale factor (see below), this form is sometimes simply referred to as "the inverse DCT" ("IDCT").[6]
Some authors divide the x_0 term by √2 instead of by 2 (resulting in an overall x_0/√2 term) and multiply the resulting matrix by an overall scale factor of √(2/N) (see above for the corresponding change in DCT-II), so that the DCT-II and DCT-III are transposes of one another. This makes the DCT-III matrix orthogonal, but breaks the direct correspondence with a real-even DFT of half-shifted output.
The DCT-III implies the boundary conditions: x_n is even around n = 0 and odd around n = N; X_k is even around k = −1/2 and even around k = N − 1/2.
The DCT-IV matrix becomes orthogonal (and thus, being clearly symmetric, its own inverse) if one further multiplies by an overall scale factor of √(2/N).
A variant of the DCT-IV, where data from different transforms are overlapped, is called the modified discrete cosine transform (MDCT).[103]
The DCT-IV implies the boundary conditions: x_n is even around n = −1/2 and odd around n = N − 1/2; similarly for X_k.
DCTs of types I–IV treat both boundaries consistently regarding the point of symmetry: they are even/odd around either a data point for both boundaries or halfway between two data points for both boundaries. By contrast, DCTs of types V–VIII imply boundaries that are even/odd around a data point for one boundary and halfway between two data points for the other boundary.
In other words, DCT types I–IV are equivalent to real-even DFTs of even order (regardless of whether N is even or odd), since the corresponding DFT is of length 2(N−1) (for DCT-I) or 4N (for DCT-II and III) or 8N (for DCT-IV). The four additional types of discrete cosine transform[104] correspond essentially to real-even DFTs of logically odd order, which have factors of N ± 1/2 in the denominators of the cosine arguments.
However, these variants seem to be rarely used in practice. One reason, perhaps, is that FFT algorithms for odd-length DFTs are generally more complicated than FFT algorithms for even-length DFTs (e.g. the simplest radix-2 algorithms are only for even lengths), and this increased intricacy carries over to the DCTs as described below.
(The trivial real-even array, a length-one DFT (odd length) of a single number a, corresponds to a DCT-V of length N = 1.)
Using the normalization conventions above, the inverse of DCT-I is DCT-I multiplied by 2/(N − 1). The inverse of DCT-IV is DCT-IV multiplied by 2/N. The inverse of DCT-II is DCT-III multiplied by 2/N, and vice versa.[6]
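The DCT-II/DCT-III inverse relationship can be checked with a round trip. The displayed formulas are not reproduced in this text; the sketch below uses the standard unnormalized conventions for both transforms:

```python
import math

def dct_ii(x):
    # Unnormalized DCT-II: X_k = sum_n x_n * cos[(pi/N) * (n + 1/2) * k]
    N = len(x)
    return [sum(x[n] * math.cos(math.pi / N * (n + 0.5) * k)
                for n in range(N)) for k in range(N)]

def dct_iii(X):
    # Unnormalized DCT-III: x_n = X_0/2 + sum_{k>=1} X_k * cos[(pi/N) * k * (n + 1/2)]
    N = len(X)
    return [X[0] / 2 + sum(X[k] * math.cos(math.pi / N * k * (n + 0.5))
                           for k in range(1, N)) for n in range(N)]

x = [3.0, 1.0, 4.0, 1.0, 5.0, 9.0, 2.0, 6.0]
N = len(x)
# DCT-III of the DCT-II, scaled by 2/N, recovers the original signal.
roundtrip = [v * 2 / N for v in dct_iii(dct_ii(x))]
for a, b in zip(roundtrip, x):
    assert abs(a - b) < 1e-9
```

The same pattern (forward transform, inverse transform, scale by the stated factor) verifies the DCT-I and DCT-IV inverse relationships as well.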
Like for the DFT, the normalization factor in front of these transform definitions is merely a convention and differs between treatments. For example, some authors multiply the transforms by √(2/N) so that the inverse does not require any additional multiplicative factor. Combined with appropriate factors of √2 (see above), this can be used to make the transform matrix orthogonal.
Multidimensional variants of the various DCT types follow straightforwardly from the one-dimensional definitions: they are simply a separable product (equivalently, a composition) of DCTs along each dimension.
For example, a two-dimensional DCT-II of an image or a matrix is simply the one-dimensional DCT-II, from above, performed along the rows and then along the columns (or vice versa). That is, the 2D DCT-II is given by the formula (omitting normalization and other scale factors, as above):
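This row-column construction can be sketched directly, including a check that the order of the passes does not matter and that the (0,0) entry is the DC term (the sum of all samples, in the unnormalized convention):

```python
import math

def dct_ii(x):
    # Unnormalized 1-D DCT-II.
    N = len(x)
    return [sum(x[n] * math.cos(math.pi / N * (n + 0.5) * k)
                for n in range(N)) for k in range(N)]

def transpose(M):
    return [list(col) for col in zip(*M)]

def dct2d_rows_first(block):
    t = [dct_ii(row) for row in block]   # 1-D DCT along each row
    t = transpose(t)
    t = [dct_ii(row) for row in t]       # then along each column
    return transpose(t)

def dct2d_cols_first(block):
    t = transpose(block)
    t = [dct_ii(row) for row in t]       # 1-D DCT along each column
    t = transpose(t)
    return [dct_ii(row) for row in t]    # then along each row

block = [[float((3 * i + 5 * j) % 7) for j in range(8)] for i in range(8)]
A = dct2d_rows_first(block)
B = dct2d_cols_first(block)
for i in range(8):
    for j in range(8):
        assert abs(A[i][j] - B[i][j]) < 1e-9   # separability: order is irrelevant
assert abs(A[0][0] - sum(map(sum, block))) < 1e-9   # DC term = sum of samples
```

This is exactly the 8 × 8 block transform used in JPEG-style coding, up to the normalization factors that are folded into quantization in practice.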
The 3-D DCT-II is simply the extension of the 2-D DCT-II to three-dimensional space, and mathematically it can be calculated by the formula
The inverse of the 3-D DCT-II is the 3-D DCT-III, and it can be computed from the formula given by
Technically, computing a two-, three- (or multi-) dimensional DCT by sequences of one-dimensional DCTs along each dimension is known as a row-column algorithm. As with multidimensional FFT algorithms, however, there exist other methods to compute the same thing while performing the computations in a different order (i.e. interleaving/combining the algorithms for the different dimensions). Owing to the rapid growth in applications based on the 3-D DCT, several fast algorithms have been developed for the computation of the 3-D DCT-II. Vector-radix algorithms are applied for computing the M-D DCT to reduce the computational complexity and to increase the computational speed. To compute the 3-D DCT-II efficiently, a fast algorithm, the vector-radix decimation-in-frequency (VR DIF) algorithm, was developed.
In order to apply the VR DIF algorithm, the input data must be formulated and rearranged as follows.[105][106] The transform size N × N × N is assumed, with N a power of 2.
The adjacent figure shows the four stages involved in calculating the 3-D DCT-II using the VR DIF algorithm. The first stage is the 3-D reordering using the index mapping illustrated by the above equations. The second stage is the butterfly calculation. Each butterfly calculates eight points together, as shown in the figure just below, where c(φ_i) = cos(φ_i).
The original 3-D DCT-II can now be written as
where φ_i = (π/2N)(4N_i + 1), and i = 1, 2, 3.
If the even and odd parts of k_1, k_2 and k_3 are considered, the general formula for the calculation of the 3-D DCT-II can be expressed as
where
The whole 3-D DCT calculation needs log₂N stages, and each stage involves (1/8)N³ butterflies. The whole 3-D DCT therefore requires (1/8)N³ log₂N butterflies to be computed. Each butterfly requires seven real multiplications (including trivial multiplications) and 24 real additions (including trivial additions). Therefore, the total number of real multiplications needed for this stage is (7/8)N³ log₂N, and the total number of real additions, i.e. including the post-additions (recursive additions) which can be calculated directly after the butterfly stage or after the bit-reverse stage, is given by[106]
(3/2)N³ log₂N (real) + [(3/2)N³ log₂N − 3N³ + 3N²] (recursive) = (9/2)N³ log₂N − 3N³ + 3N².
The conventional method to calculate the MD DCT-II is the row-column-frame (RCF) approach, which is computationally complex and less productive on most advanced recent hardware platforms. The number of multiplications required by the VR DIF algorithm is considerably smaller than that required by the RCF algorithm. The numbers of multiplications and additions involved in the RCF approach are given by (3/2)N³ log₂N and (9/2)N³ log₂N − 3N³ + 3N², respectively. From Table 1, it can be seen that the total number of multiplications associated with the 3-D DCT VR algorithm is less than that associated with the RCF approach by more than 40%. In addition, the RCF approach involves matrix transposition and more indexing and data swapping than the new VR algorithm. This makes the 3-D DCT VR algorithm more efficient and better suited for 3-D applications that involve the 3-D DCT-II, such as video compression and other 3-D image processing applications.
The main consideration in choosing a fast algorithm is to avoid computational and structural complexities. As the technology of computers and DSPs advances, the execution time of arithmetic operations (multiplications and additions) is becoming very fast, and a regular computational structure becomes the most important factor.[107] Therefore, although the above proposed 3-D VR algorithm does not achieve the theoretical lower bound on the number of multiplications,[108] it has a simpler computational structure as compared to other 3-D DCT algorithms. It can be implemented in place using a single butterfly and possesses the properties of the Cooley–Tukey FFT algorithm in 3-D. Hence, the 3-D VR presents a good choice for reducing arithmetic operations in the calculation of the 3-D DCT-II, while keeping the simple structure that characterizes butterfly-style Cooley–Tukey FFT algorithms.
The image to the right shows a combination of horizontal and vertical frequencies for an 8 × 8 (N_1 = N_2 = 8) two-dimensional DCT. Each step from left to right and top to bottom is an increase in frequency by 1/2 cycle.
For example, moving right one from the top-left square yields a half-cycle increase in the horizontal frequency. Another move to the right yields two half-cycles. A move down yields two half-cycles horizontally and a half-cycle vertically. The source data (8 × 8) is transformed to a linear combination of these 64 frequency squares.
The M-D DCT-IV is just an extension of the 1-D DCT-IV onto an M-dimensional domain. The 2-D DCT-IV of a matrix or an image is given by
We can compute the MD DCT-IV using the regular row-column method, or we can use the polynomial transform method[109] for fast and efficient computation. The main idea of this algorithm is to use the polynomial transform to convert the multidimensional DCT into a series of 1-D DCTs directly. The MD DCT-IV also has several applications in various fields.
Although the direct application of these formulas would require O(N²) operations, it is possible to compute the same thing with only O(N log N) complexity by factorizing the computation similarly to the fast Fourier transform (FFT). One can also compute DCTs via FFTs combined with O(N) pre- and post-processing steps. In general, O(N log N) methods to compute DCTs are known as fast cosine transform (FCT) algorithms.
The most efficient algorithms, in principle, are usually those that are specialized directly for the DCT, as opposed to using an ordinary FFT plus O(N) extra operations (see below for an exception). However, even "specialized" DCT algorithms (including all of those that achieve the lowest known arithmetic counts, at least for power-of-two sizes) are typically closely related to FFT algorithms – since DCTs are essentially DFTs of real-even data, one can design a fast DCT algorithm by taking an FFT and eliminating the redundant operations due to this symmetry. This can even be done automatically (Frigo & Johnson 2005). Algorithms based on the Cooley–Tukey FFT algorithm are most common, but any other FFT algorithm is also applicable. For example, the Winograd FFT algorithm leads to minimal-multiplication algorithms for the DFT, albeit generally at the cost of more additions, and a similar algorithm was proposed by Feig & Winograd (1992a) for the DCT. Because the algorithms for DFTs, DCTs, and similar transforms are all so closely related, any improvement in algorithms for one transform will theoretically lead to immediate gains for the other transforms as well (Duhamel & Vetterli 1990).
While DCT algorithms that employ an unmodified FFT often have some theoretical overhead compared to the best specialized DCT algorithms, the former also have a distinct advantage: highly optimized FFT programs are widely available. Thus, in practice, it is often easier to obtain high performance for general lengths N with FFT-based algorithms.[c] Specialized DCT algorithms, on the other hand, see widespread use for transforms of small, fixed sizes such as the 8 × 8 DCT-II used in JPEG compression, or the small DCTs (or MDCTs) typically used in audio compression. (Reduced code size may also be a reason to use a specialized DCT for embedded-device applications.)
In fact, even the DCT algorithms using an ordinary FFT are sometimes equivalent to pruning the redundant operations from a larger FFT of real-symmetric data, and they can even be optimal from the perspective of arithmetic counts. For example, a type-II DCT is equivalent to a DFT of size 4N with real-even symmetry whose even-indexed elements are zero. One of the most common methods for computing this via an FFT (e.g. the method used in FFTPACK and FFTW) was described by Narasimha & Peterson (1978) and Makhoul (1980), and this method in hindsight can be seen as one step of a radix-4 decimation-in-time Cooley–Tukey algorithm applied to the "logical" real-even DFT corresponding to the DCT-II.[d] Because the even-indexed elements are zero, this radix-4 step is exactly the same as a split-radix step. If the subsequent size-N real-data FFT is also performed by a real-data split-radix algorithm (as in Sorensen et al. (1987)), then the resulting algorithm actually matches what was long the lowest published arithmetic count for the power-of-two DCT-II (2N log₂N − N + 2 real-arithmetic operations[e]).
A recent reduction in the operation count to (17/9)N log₂N + O(N) also uses a real-data FFT.[110] So, there is nothing intrinsically bad about computing the DCT via an FFT from an arithmetic perspective – it is sometimes merely a question of whether the corresponding FFT algorithm is optimal. (As a practical matter, the function-call overhead in invoking a separate FFT routine might be significant for small N, but this is an implementation rather than an algorithmic question, since it can be solved by unrolling or inlining.)
Consider this 8 × 8 grayscale image of a capital letter A.
Each basis function is multiplied by its coefficient and then this product is added to the final image.
https://en.wikipedia.org/wiki/Discrete_cosine_transform
Queue areas are places in which people queue (first-come, first-served) for goods or services. Such a group of people is known as a queue (British usage) or line (American usage), and the people are said to be waiting or standing in a queue or in line, respectively. (In the New York City area, the phrase on line is often used in place of in line.)[1] Occasionally, both the British and American terms are combined to form the term "queue line".[2][3]
Examples include checking out groceries or other goods that have been collected in a self-service shop, being served in a shop without self-service, and waiting at an ATM, a ticket desk, a city bus stop, or a taxi stand.
Queueing[4] is a phenomenon in a number of fields, and has been extensively analysed in the study of queueing theory. In economics, queueing is seen as one way to ration scarce goods and services.
The first written description of people standing in line is found in an 1837 book, The French Revolution: A History by Thomas Carlyle.[5] Carlyle described what he thought was a strange sight: people standing in an orderly line to buy bread from bakers around Paris.[5]
Queues can be found in railway stations to book tickets, at bus stops for boarding and at temples.[6][7][8]
Queues are generally found at transportation terminals where security screenings are conducted.
Large stores and supermarkets may have dozens of separate queues, but this can cause frustration, as different lines tend to be handled at different speeds; some people are served quickly, while others may wait for longer periods of time. Sometimes two people who are together split up and each waits in a different line; once it is determined which line is faster, the one in the slower line joins the other. Another arrangement is for everyone to wait in a single line;[9] a person leaves the line each time a service point opens up. This is a common setup in banks and post offices.
Organized queue areas are commonly found at amusement parks. Each ride can accommodate a fixed number of guests that can be served at any given time (which is referred to as the ride's operational capacity), so there has to be some control over additional guests who are waiting. This led to the development of formalized queue areas – areas in which the lines of people waiting to board the rides are organized by railings, and may be given shelter from the elements with a roof over their heads, inside a climate-controlled building or with fans and misting devices. In some amusement parks – Disney theme parks being a prime example – queue areas can be elaborately decorated, with holding areas fostering anticipation, thus shortening the perceived wait for people in the queue by giving them something interesting to look at as they wait, or the perception that they have arrived at the threshold of the attraction.
When designing queues, planners attempt to make the wait as pleasant and as simple as possible.[citation needed][10]They employ several strategies to achieve this, including:
People experience "occupied" time as shorter than "unoccupied" time, and generally overestimate the amount of time waited by around 36%.[11]
The technique of giving people an activity to distract them from a wait has been used to reduce complaints of delays at:[11]
Other techniques to reduce queueing anxiety include:[11]
Cutting in line, also known as queue-jumping, can generate a strong negative response, depending on the local cultural norms.
Physical queueing is sometimes replaced by virtual queueing. In a waiting room there may be a system whereby the queuer asks and remembers where their place is in the queue, or reports to a desk and signs in, or takes a ticket with a number from a machine. These queues are typically found at doctors' offices, hospitals, town halls, social security offices, labor exchanges, the Department of Motor Vehicles, immigration departments, free internet access in state or council libraries, banks or post offices, and call centres. Especially in the United Kingdom, tickets are taken to form a virtual queue at delicatessens and children's shoe shops. In some countries, such as Sweden, virtual queues are also common in shops and railway stations. A display sometimes shows the number that was last called for service.
Restaurants have come to employ virtual queueing techniques with the availability of application-specific pagers, which alert those waiting that they should report to the host to be seated. Another option used at restaurants is to assign customers a confirmed return time, essentially a reservation issued on arrival.
Virtual queueing apps are available that allow customers to view the virtual queue status of a business and take a virtual queue number remotely. The app can then be used to receive updates on the status of the virtual queue the customer is in.
A substitute or alternative activity may be provided for people to participate in while waiting to be called, which reduces the perceived waiting time and the probability that the customer will abort their visit. For example, a busy restaurant might seat waiting customers at a bar. An outdoor attraction with long virtual queues might have a side marquee selling merchandise or food. The alternative activity may provide the organisation with an opportunity to generate additional revenue from the waiting customers.[12]
All of the above methods, however, suffer from the same drawback: the person arrives at the location only to find out that they need to wait. Recently, queues at DMVs,[13] colleges, restaurants,[14] healthcare institutions,[15] government offices[14] and elsewhere have begun to be replaced by mobile queues or queue-ahead: the person queuing uses their phone, the internet, a kiosk or another method to enter a virtual queue, optionally prior to arrival, is free to roam during the wait, and then gets paged on their mobile phone when their turn approaches. This has the advantage of allowing users to find out the forecast wait and join the queue before arriving, roam freely, and then time their arrival to the availability of service. This has been shown to extend the patience of those in the queue and reduce no-shows.[14]
https://en.wikipedia.org/wiki/Queue_management_system
In software, a stack overflow occurs if the call stack pointer exceeds the stack bound. The call stack may consist of a limited amount of address space, often determined at the start of the program. The size of the call stack depends on many factors, including the programming language, machine architecture, multi-threading, and amount of available memory. When a program attempts to use more space than is available on the call stack (that is, when it attempts to access memory beyond the call stack's bounds, which is essentially a buffer overflow), the stack is said to overflow, typically resulting in a program crash.[1]
The most common cause of stack overflow is excessively deep or infinite recursion, in which a function calls itself so many times that the space needed to store the variables and information associated with each call is more than can fit on the stack.[2]
An example of infinite recursion in C is a function foo that does nothing but call itself.
The function foo, when it is invoked, continues to invoke itself, allocating additional space on the stack each time, until the stack overflows, resulting in a segmentation fault.[2] However, some compilers implement tail-call optimization, allowing infinite recursion of a specific sort – tail recursion – to occur without stack overflow. This works because tail-recursion calls do not take up additional stack space.[3]
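The original C listing is not reproduced in this text. The same failure mode can be demonstrated safely in Python, which performs no tail-call optimization but enforces a recursion-depth limit, so the runaway recursion is stopped with a RecursionError rather than a segmentation fault:

```python
import sys

def foo():
    return foo()        # no base case: each call consumes another stack frame

sys.setrecursionlimit(200)   # keep the demonstration small and fast

overflowed = False
try:
    foo()
except RecursionError:      # Python's contained analogue of stack exhaustion
    overflowed = True
assert overflowed
```

The C version of this program has no such guard rail: each call pushes a real machine stack frame until the process exhausts its stack and crashes.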
Some C compiler options will effectively enable tail-call optimization; for example, compiling the above simple program using gcc with -O1 will result in a segmentation fault, but not when using -O2 or -O3, since these optimization levels imply the -foptimize-sibling-calls compiler option.[4] Other languages, such as Scheme, require all implementations to include tail recursion as part of the language standard.[5]
A recursive function that terminates in theory but causes a call stack buffer overflow in practice can be fixed by transforming the recursion into a loop and storing the function arguments in an explicit stack (rather than the implicit use of the call stack). This is always possible, because the class of primitive recursive functions is equivalent to the class of LOOP computable functions. Consider this example in C++-like pseudocode:
A primitive recursive function like the one on the left side can always be transformed into a loop like on the right side.
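The pseudocode listings are not reproduced in this text; the transformation they describe can be illustrated in Python (function names are chosen for the example). The recursive version consumes one call-stack frame per step, while the loop version keeps its pending work on an explicit, heap-allocated stack:

```python
def sum_to_recursive(n):
    # Recursive version: one call-stack frame per value of n, so large n
    # exceeds the interpreter's recursion limit (a stack overflow in spirit).
    if n == 0:
        return 0
    return n + sum_to_recursive(n - 1)

def sum_to_loop(n):
    # Same computation with an explicit stack of pending arguments;
    # call-stack depth stays constant no matter how large n is.
    total = 0
    stack = [n]
    while stack:
        k = stack.pop()
        if k > 0:
            total += k
            stack.append(k - 1)
    return total

assert sum_to_recursive(100) == 5050
assert sum_to_loop(100) == 5050
# Far beyond any default recursion limit, yet safe for the loop version:
assert sum_to_loop(10**6) == 10**6 * (10**6 + 1) // 2
```

For this simple primitive recursive function the explicit stack degenerates into a plain accumulator loop, which is exactly the point of the equivalence with LOOP computable functions.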
A function like the example above on the left would not be a problem in an environment supporting tail-call optimization; however, it is still possible to create a recursive function that may result in a stack overflow in these languages. Consider the example below of two simple integer exponentiation functions.
Both pow(base, exp) functions above compute an equivalent result; however, the one on the left is prone to causing a stack overflow because tail-call optimization is not possible for this function. During execution, the stack for these functions will look like this:
Notice that the function on the left must store exp integers on its stack, which are multiplied when the recursion terminates and the function returns 1. In contrast, the function on the right must only store three integers at any time, and computes an intermediary result which is passed to its following invocation. As no other information outside of the current function invocation must be stored, a tail-recursion optimizer can "drop" the prior stack frames, eliminating the possibility of a stack overflow.
The other major cause of a stack overflow results from an attempt to allocate more memory on the stack than will fit, for example by creating local array variables that are too large. For this reason some authors recommend that arrays larger than a few kilobytes should be allocated dynamically instead of as local variables.[6]
An example of a very large stack variable in C:
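The listing did not survive extraction; per the description below, it declares a local array of 1048576 doubles (a sketch, with a heap-allocated alternative added here for contrast):

```c
#include <stdlib.h>

/* 1048576 doubles x 8 bytes = 8 MiB on the stack. On systems whose
   stack limit is smaller, calling this function overflows the stack. */
void large_local(void)
{
    double x[1048576];
    x[0] = 0.0;          /* touch the array so it is not elided */
    (void)x;
}

/* The recommended alternative: allocate large buffers dynamically. */
double *large_heap(void)
{
    return malloc(1048576 * sizeof(double));
}
```

As with the recursion example, large_local is shown but deliberately never called.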
On a C implementation with 8-byte double-precision floats, the declared array consumes 8 megabytes of data; if this is more memory than is available on the stack (as set by thread creation parameters or operating system limits), a stack overflow will occur.
Stack overflows are made worse by anything that reduces the effective stack size of a given program. For example, the same program being run without multiple threads might work fine, but as soon as multi-threading is enabled the program will crash. This is because most programs with threads have less stack space per thread than a program with no threading support. Because kernels are generally multi-threaded, people new to kernel development are usually discouraged from using recursive algorithms or large stack buffers.[7]
Source: https://en.wikipedia.org/wiki/Stack_overflow
Infinite regress is a philosophical concept describing a series of entities. Each entity in the series depends on its predecessor, following a recursive principle. For example, the epistemic regress is a series of beliefs in which the justification of each belief depends on the justification of the belief that comes before it.
An infinite regress argument is an argument against a theory based on the fact that this theory leads to an infinite regress. For such an argument to be successful, it must demonstrate not just that the theory in question entails an infinite regress but also that this regress is vicious. There are different ways in which a regress can be vicious. The most serious form of viciousness involves a contradiction in the form of metaphysical impossibility. Other forms occur when the infinite regress is responsible for the theory in question being implausible or for its failure to solve the problem it was formulated to solve.
Traditionally, it was often assumed without much argument that each infinite regress is vicious, but this assumption has been put into question in contemporary philosophy. While some philosophers have explicitly defended theories with infinite regresses, the more common strategy has been to reformulate the theory in question in a way that avoids the regress. One such strategy is foundationalism, which posits that there is a first element in the series from which all the other elements arise but which is not itself explained this way. Another way is coherentism, which is based on a holistic explanation that usually sees the entities in question not as a linear series but as an interconnected network.
Infinite regress arguments have been made in various areas of philosophy. Famous examples include the cosmological argument and Bradley's regress.
An infinite regress is an infinite series of entities governed by a recursive principle that determines how each entity in the series depends on or is produced by its predecessor.[1] This principle can often be expressed in the following form: X is F because X stands in R to Y and Y is F. X and Y stand for objects, R stands for a relation, and F stands for a property in the widest sense.[1][2] In the epistemic regress, for example, a belief is justified because it is based on another belief that is justified. But this other belief is itself in need of one more justified belief for itself to be justified, and so on.[3] Or in the cosmological argument, an event occurred because it was caused by another event that occurred before it, which was itself caused by a previous event, and so on.[1][4] This principle by itself is not sufficient: it does not lead to a regress if there is no X that is F. This is why an additional triggering condition has to be fulfilled: there has to be an X that is F for the regress to get started.[5] So the regress starts with the fact that X is F. According to the recursive principle, this is only possible if there is a distinct Y that is also F. But in order to account for the fact that Y is F, we need to posit a Z that is F, and so on. Once the regress has started, there is no way of stopping it, since a new entity has to be introduced at each step in order to make the previous step possible.[1]
An infinite regress argument is an argument against a theory based on the fact that this theory leads to an infinite regress.[1][5] For such an argument to be successful, it has to demonstrate not just that the theory in question entails an infinite regress but also that this regress is vicious.[1][4] The mere existence of an infinite regress by itself is not a proof for anything.[5] So in addition to connecting the theory to a recursive principle paired with a triggering condition, the argument has to show in which way the resulting regress is vicious.[4][5] For example, one form of evidentialism in epistemology holds that a belief is only justified if it is based on another belief that is justified. An opponent of this theory could use an infinite regress argument by demonstrating (1) that this theory leads to an infinite regress (e.g. by pointing out the recursive principle and the triggering condition) and (2) that this infinite regress is vicious (e.g. by showing that it is implausible given the limitations of the human mind).[1][5][3][6] In this example, the argument has a negative form, since it only denies that another theory is true. But it can also be used in a positive form to support a theory by showing that its alternative involves a vicious regress.[3] This is how the cosmological argument for the existence of God works: it claims that positing God's existence is necessary in order to avoid an infinite regress of causes.[1][4][3]
For an infinite regress argument to be successful, it has to show that the involved regress is vicious.[3] A non-vicious regress is called virtuous or benign.[5] Traditionally, it was often assumed without much argument that each infinite regress is vicious, but this assumption has been put into question in contemporary philosophy. In most cases, it is not self-evident whether an infinite regress is vicious or not.[5] The truth regress constitutes an example of an infinite regress that is not vicious: if the proposition "P" is true, then the proposition "It is true that P" is also true, and so on.[4] Infinite regresses pose a problem mostly if the regress concerns concrete objects. Abstract objects, on the other hand, are often considered to be unproblematic in this respect. For example, the truth regress leads to an infinite number of true propositions, and the Peano axioms entail the existence of infinitely many natural numbers. But these regresses are usually not held against the theories that entail them.[4]
There are different ways in which a regress can be vicious. The most serious type of viciousness involves a contradiction in the form of metaphysical impossibility.[4][1][7] Other types occur when the infinite regress is responsible for the theory in question being implausible or for its failure to solve the problem it was formulated to solve.[4][7] The vice of an infinite regress can be local if it causes problems only for certain theories when combined with other assumptions, or global otherwise. For example, an otherwise virtuous regress is locally vicious for a theory that posits a finite domain.[1] In some cases, an infinite regress is not itself the source of the problem but merely indicates a different underlying problem.[1]
Infinite regresses that involve metaphysical impossibility are the most serious cases of viciousness. The easiest way to arrive at this result is by accepting the assumption that actual infinities are impossible, thereby directly leading to a contradiction.[5] This anti-infinitist position is opposed to infinity in general, not just specifically to infinite regresses.[1] But it is open to defenders of the theory in question to deny this outright prohibition on actual infinities.[5] For example, it has been argued that only certain types of infinities are problematic in this way, like infinite intensive magnitudes (e.g. infinite energy densities).[4] Other types of infinities, like infinite cardinality (e.g. infinitely many causes) or infinite extensive magnitude (e.g. the duration of the universe's history), are unproblematic from the point of view of metaphysical impossibility.[4] While there may be some instances of viciousness due to metaphysical impossibility, most vicious regresses are problematic because of other reasons.[4]
A more common form of viciousness arises from the implausibility of the infinite regress in question. This category often applies to theories about human actions, states or capacities.[4] This argument is weaker than the argument from impossibility, since it allows that the regress in question is possible. It only denies that it is actual.[1] For example, it seems implausible, due to the limitations of the human mind, that there are justified beliefs if this entails that the agent needs to have an infinite number of them. But this is not metaphysically impossible, e.g. if it is assumed that the infinite number of beliefs are only non-occurrent or dispositional while the limitation only applies to the number of beliefs one is actually thinking about at one moment.[4] Another reason for the implausibility of theories involving an infinite regress is the principle known as Ockham's razor, which posits that we should avoid ontological extravagance by not multiplying entities without necessity.[8] Considerations of parsimony are complicated by the distinction between quantitative and qualitative parsimony: how many entities are posited in contrast to how many kinds of entities are posited.[1] For example, the cosmological argument for the existence of God promises to increase quantitative parsimony by positing that there is one first cause instead of allowing an infinite chain of events. But it does so by decreasing qualitative parsimony: it posits God as a new type of entity.[4]
Another form of viciousness applies not to the infinite regress by itself but to its relation to the explanatory goals of a theory.[4][7] Theories are often formulated with the goal of solving a specific problem, e.g. answering the question why a certain type of entity exists. One way such an attempt can fail is if the answer to the question already assumes in disguised form what it was supposed to explain.[4][7] This is akin to the informal fallacy of begging the question.[2] From the perspective of a mythological world view, for example, one way to explain why the earth seems to be at rest instead of falling down is to hold that it rests on the back of a giant turtle. In order to explain why the turtle itself is not in free fall, another even bigger turtle is posited, and so on, resulting in a world that is turtles all the way down.[4][1] Despite its clash with modern physics and its ontological extravagance, this theory seems to be metaphysically possible, assuming that space is infinite. One way to assess the viciousness of this regress is to distinguish between local and global explanations.[1] A local explanation is only interested in explaining why one thing has a certain property through reference to another thing, without trying to explain this other thing as well. A global explanation, on the other hand, tries to explain why there are any things with this property at all.[1] So as a local explanation, the regress in the turtle theory is benign: it succeeds in explaining why the earth is not falling. But as a global explanation, it fails, because it has to assume rather than explain at each step that there is another thing that is not falling. It does not explain why nothing at all is falling.[1][4]
It has been argued that infinite regresses can be benign under certain circumstances despite aiming at global explanation. This line of thought rests on the idea of the transmission involved in the vicious cases:[9] it is explained that X is F because Y is F, where this F was somehow transmitted from Y to X.[1] The problem is that to transfer something, it first must be possessed, so the possession is presumed rather than explained. For example, in trying to explain why one's neighbor has the property of being the owner of a bag of sugar, it is revealed that this bag was first in someone else's possession before it was transferred to the neighbor, and that the same is true for this and every other previous owner.[1] This explanation is unsatisfying, since ownership is presupposed at every step. In non-transmissive explanations, however, Y is still the reason for X being F and Y is also F, but this is just seen as a contingent fact.[1][9] This line of thought has been used to argue that the epistemic regress is not vicious. From a Bayesian point of view, for example, justification or evidence can be defined in terms of one belief raising the probability that another belief is true.[10][11] The former belief may also be justified, but this is not relevant for explaining why the latter belief is justified.[1]
Philosophers have responded to infinite regress arguments in various ways. The criticized theory can be defended, for example, by denying that an infinite regress is involved. Infinitists, on the other hand, embrace the regress but deny that it is vicious.[6] Another response is to modify the theory in order to avoid the regress. This can be achieved in the form of foundationalism or of coherentism.
Traditionally, the most common response is foundationalism.[1] It posits that there is a first element in the series from which all the other elements arise but which is not itself explained this way.[12] So from any given position, the series can be traced back to elements on the most fundamental level, which the recursive principle fails to explain. This way an infinite regress is avoided.[1][6] This position is well known from its applications in the field of epistemology.[1] Foundationalist theories of epistemic justification state that besides inferentially justified beliefs, which depend for their justification on other beliefs, there are also non-inferentially justified beliefs.[12] The non-inferentially justified beliefs constitute the foundation on which the superstructure consisting of all the inferentially justified beliefs rests.[13] Acquaintance theories, for example, explain the justification of non-inferential beliefs through acquaintance with the objects of the belief. On such a view, an agent is inferentially justified to believe that it will rain tomorrow based on the belief that the weather forecast said so. They are non-inferentially justified in believing that they are in pain because they are directly acquainted with the pain.[12] So a different type of explanation (acquaintance) is used for the foundational elements.
Another example comes from the field of metaphysics, concerning the problem of ontological hierarchy. One position in this debate claims that some entities exist on a more fundamental level than other entities and that the latter entities depend on or are grounded in the former entities.[14] Metaphysical foundationalism is the thesis that these dependence relations do not form an infinite regress: that there is a most fundamental level that grounds the existence of the entities from all other levels.[1][15] This is sometimes expressed by stating that the grounding relation responsible for this hierarchy is well-founded.[15]
Coherentism, mostly found in the field of epistemology, is another way to avoid infinite regresses.[1] It is based on a holistic explanation that usually sees the entities in question not as a linear series but as an interconnected network. For example, coherentist theories of epistemic justification hold that beliefs are justified because of the way they hang together: they cohere well with each other.[16] This view can be expressed by stating that justification is primarily a property of the system of beliefs as a whole. The justification of a single belief is derivative in the sense that it depends on the fact that this belief belongs to a coherent whole.[1] Laurence BonJour is a well-known contemporary defender of this position.[17][18]
Aristotle argued that knowing does not necessitate an infinite regress because some knowledge does not depend on demonstration:
Some hold that owing to the necessity of knowing the primary premises, there is no scientific knowledge. Others think there is, but that all truths are demonstrable. Neither doctrine is either true or a necessary deduction from the premises. The first school, assuming that there is no way of knowing other than by demonstration, maintain that an infinite regress is involved, on the ground that if behind the prior stands no primary, we could not know the posterior through the prior (wherein they are right, for one cannot traverse an infinite series): if on the other hand – they say – the series terminates and there are primary premises, yet these are unknowable because incapable of demonstration, which according to them is the only form of knowledge. And since thus one cannot know the primary premises, knowledge of the conclusions which follow from them is not pure scientific knowledge nor properly knowing at all, but rests on the mere supposition that the premises are true. The other party agrees with them as regards knowing, holding that it is only possible by demonstration, but they see no difficulty in holding that all truths are demonstrated, on the ground that demonstration may be circular and reciprocal.
Our own doctrine is that not all knowledge is demonstrative: on the contrary, knowledge of the immediate premises is independent of demonstration. (The necessity of this is obvious; for since we must know the prior premises from which the demonstration is drawn, and since the regress must end in immediate truths, those truths must be indemonstrable.) Such, then, is our doctrine, and in addition, we maintain that besides scientific knowledge there is its original source which enables us to recognize the definitions.[19][20]
Gilbert Ryle argues in the philosophy of mind that mind-body dualism is implausible because it produces an infinite regress of "inner observers" when trying to explain how mental states are able to influence physical states.[citation needed]
Source: https://en.wikipedia.org/wiki/Regress_argument
The reflected binary code (RBC), also known as reflected binary (RB) or Gray code after Frank Gray, is an ordering of the binary numeral system such that two successive values differ in only one bit (binary digit).
For example, the representation of the decimal value "1" in binary would normally be "001", and "2" would be "010". In Gray code, these values are represented as "001" and "011". That way, incrementing a value from 1 to 2 requires only one bit to change, instead of two.
Gray codes are widely used to prevent spurious output from electromechanical switches and to facilitate error correction in digital communications such as digital terrestrial television and some cable TV systems. The use of Gray code in these devices helps simplify logic operations and reduce errors in practice.[3]
Many devices indicate position by closing and opening switches. If such a device uses natural binary codes, positions 3 and 4 are next to each other, but all three bits of the binary representation differ: position 3 is 011 while position 4 is 100.
The problem with natural binary codes is that physical switches are not ideal: it is very unlikely that physical switches will change states exactly in synchrony. In the transition between the two states shown above, all three switches change state. In the brief period while all are changing, the switches will read some spurious position. Even without key bounce, the transition might look like 011 — 001 — 101 — 100. When the switches appear to be in position 001, the observer cannot tell if that is the "real" position 1, or a transitional state between two other positions. If the output feeds into a sequential system, possibly via combinational logic, then the sequential system may store a false value.
This problem can be solved by changing only one switch at a time, so there is never any ambiguity of position. This results in codes assigning to each of a contiguous set of integers, or to each member of a circular list, a word of symbols such that no two code words are identical and each two adjacent code words differ by exactly one symbol. These codes are also known as unit-distance,[4][5][6][7][8] single-distance, single-step, monostrophic[9][10][7][8] or syncopic codes,[9] in reference to the Hamming distance of 1 between adjacent codes.
In principle, there can be more than one such code for a given word length, but the term Gray code was first applied to a particular binary code for non-negative integers, the binary-reflected Gray code, or BRGC. Bell Labs researcher George R. Stibitz described such a code in a 1941 patent application, granted in 1943.[11][12][13] Frank Gray introduced the term reflected binary code in his 1947 patent application, remarking that the code had "as yet no recognized name".[14] He derived the name from the fact that it "may be built up from the conventional binary code by a sort of reflection process".
In the standard encoding of the Gray code, the least significant bit follows a repetitive pattern of 2 on, 2 off (... 11001100 ...); the next digit a pattern of 4 on, 4 off; and the i-th least significant bit a pattern of 2^i on, 2^i off. The most significant digit is an exception to this: for an n-bit Gray code, the most significant digit follows the pattern 2^(n−1) on, 2^(n−1) off, which is the same (cyclic) sequence of values as for the second-most significant digit, but shifted forwards 2^(n−2) places. The four-bit version of this is shown below:
For decimal 15 the code rolls over to decimal 0 with only one switch change. This is called the cyclic or adjacency property of the code.[15]
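The four-bit table itself did not survive extraction, but the sequence and both the unit-distance and cyclic properties can be regenerated and checked in a few lines of C (helper names are assumptions; n XOR (n >> 1) is the standard binary-to-Gray conversion):

```c
/* Standard binary-reflected Gray code of n: n XOR (n >> 1).
   For n = 0..15 this yields 0000, 0001, 0011, 0010, 0110, 0111,
   0101, 0100, 1100, 1101, 1111, 1110, 1010, 1011, 1001, 1000. */
unsigned to_gray(unsigned n)
{
    return n ^ (n >> 1);
}

/* Count of set bits, used as the Hamming distance of an XOR. */
int popcount_u(unsigned v)
{
    int c = 0;
    for (; v; v >>= 1)
        c += v & 1;
    return c;
}

/* Returns 1 if every pair of cyclically adjacent 4-bit codes
   (including 15 -> 0) differs in exactly one bit. */
int is_cyclic_unit_distance(void)
{
    for (unsigned i = 0; i < 16; i++)
        if (popcount_u(to_gray(i) ^ to_gray((i + 1) % 16)) != 1)
            return 0;
    return 1;
}
```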
Despite the fact that Stibitz described this code[11][12][13] before Gray, the reflected binary code was later named after Gray by others who used it. Two different 1953 patent applications use "Gray code" as an alternative name for the "reflected binary code";[16][17] one of those also lists "minimum error code" and "cyclic permutation code" among the names.[17] A 1954 patent application refers to "the Bell Telephone Gray code".[18] Other names include "cyclic binary code",[12] "cyclic progression code",[19][12] "cyclic permuting binary"[20] and "cyclic permuted binary" (CPB).[21][22]
The Gray code is sometimes misattributed to the 19th-century electrical device inventor Elisha Gray.[13][23][24][25]
Reflected binary codes were applied to mathematical puzzles before they became known to engineers.
The binary-reflected Gray code represents the underlying scheme of the classical Chinese rings puzzle, a sequential mechanical puzzle mechanism described by the Frenchman Louis Gros in 1872.[26][13]
It can serve as a solution guide for the Towers of Hanoi problem, based on a game by the Frenchman Édouard Lucas in 1883.[27][28][29][30] Similarly, the so-called Towers of Bucharest and Towers of Klagenfurt game configurations yield ternary and pentary Gray codes.[31]
Martin Gardner wrote a popular account of the Gray code in his August 1972 "Mathematical Games" column in Scientific American.[32]
The code also forms a Hamiltonian cycle on a hypercube, where each bit is seen as one dimension.
When the French engineer Émile Baudot changed from using a 6-unit (6-bit) code to a 5-unit code for his printing telegraph system, in 1875[33] or 1876,[34][35] he ordered the alphabetic characters on his print wheel using a reflected binary code, and assigned the codes using only three of the bits to vowels. With vowels and consonants sorted in their alphabetical order,[36][37][38] and other symbols appropriately placed, the 5-bit character code has been recognized as a reflected binary code.[13] This code became known as Baudot code[39] and, with minor changes, was eventually adopted as International Telegraph Alphabet No. 1 (ITA1, CCITT-1) in 1932.[40][41][38]
Around the same time, the German-Austrian Otto Schäffler[42] demonstrated another printing telegraph in Vienna using a 5-bit reflected binary code for the same purpose, in 1874.[43][13]
Frank Gray, who became famous for inventing the signaling method that came to be used for compatible color television, invented a method to convert analog signals to reflected binary code groups using vacuum-tube-based apparatus. Filed in 1947, the method and apparatus were granted a patent in 1953,[14] and the name of Gray stuck to the codes. The "PCM tube" apparatus that Gray patented was made by Raymond W. Sears of Bell Labs, working with Gray and William M. Goodall, who credited Gray for the idea of the reflected binary code.[44]
Gray was most interested in using the codes to minimize errors in converting analog signals to digital; his codes are still used today for this purpose.
Gray codes are used in linear and rotary position encoders (absolute encoders and quadrature encoders) in preference to weighted binary encoding. This avoids the possibility that, when multiple bits change in the binary representation of a position, a misread will result from some of the bits changing before others.
For example, some rotary encoders provide a disk which has an electrically conductive Gray code pattern on concentric rings (tracks). Each track has a stationary metal spring contact that provides electrical contact to the conductive code pattern. Together, these contacts produce output signals in the form of a Gray code. Other encoders employ non-contact mechanisms based on optical or magnetic sensors to produce the Gray code output signals.
Regardless of the mechanism or precision of a moving encoder, position measurement error can occur at specific positions (at code boundaries) because the code may be changing at the exact moment it is read (sampled). A binary output code could cause significant position measurement errors because it is impossible to make all bits change at exactly the same time. If, at the moment the position is sampled, some bits have changed and others have not, the sampled position will be incorrect. In the case of absolute encoders, the indicated position may be far away from the actual position and, in the case of incremental encoders, this can corrupt position tracking.
In contrast, the Gray code used by position encoders ensures that the codes for any two consecutive positions will differ by only one bit and, consequently, only one bit can change at a time. In this case, the maximum position error will be small, indicating a position adjacent to the actual position.
Due to the Hamming distance properties of Gray codes, they are sometimes used in genetic algorithms.[15] They are very useful in this field, since mutations in the code allow for mostly incremental changes, but occasionally a single bit-change can cause a big leap and lead to new properties.
Gray codes have also been used to label the axes of Karnaugh maps since 1953,[45][46][47] as well as in Händler circle graphs since 1958,[48][49][50][51] both graphical methods for logic circuit minimization.
In modern digital communications, 1D- and 2D-Gray codes play an important role in error prevention before applying error correction. For example, in a digital modulation scheme such as QAM, where data is typically transmitted in symbols of 4 bits or more, the signal's constellation diagram is arranged so that the bit patterns conveyed by adjacent constellation points differ by only one bit. By combining this with forward error correction capable of correcting single-bit errors, it is possible for a receiver to correct any transmission errors that cause a constellation point to deviate into the area of an adjacent point. This makes the transmission system less susceptible to noise.
Digital logic designers use Gray codes extensively for passing multi-bit count information between synchronous logic that operates at different clock frequencies. The logic is considered operating in different "clock domains". It is fundamental to the design of large chips that operate with many different clocking frequencies.
If a system has to cycle sequentially through all possible combinations of on-off states of some set of controls, and the changes of the controls require non-trivial expense (e.g. time, wear, human work), a Gray code minimizes the number of setting changes to just one change for each combination of states. An example would be testing a piping system for all combinations of settings of its manually operated valves.
A balanced Gray code can be constructed[52] that flips every bit equally often. Since bit-flips are evenly distributed, this is optimal in the following way: balanced Gray codes minimize the maximal count of bit-flips for each digit.
George R. Stibitz utilized a reflected binary code in a binary pulse counting device as early as 1941.[11][12][13]
A typical use of Gray code counters is building a FIFO (first-in, first-out) data buffer that has read and write ports that exist in different clock domains. The input and output counters inside such a dual-port FIFO are often stored using Gray code to prevent invalid transient states from being captured when the count crosses clock domains.[53] The updated read and write pointers need to be passed between clock domains when they change, to be able to track FIFO empty and full status in each domain. Each bit of the pointers is sampled non-deterministically for this clock domain transfer. So for each bit, either the old value or the new value is propagated. Therefore, if more than one bit in the multi-bit pointer is changing at the sampling point, a "wrong" binary value (neither new nor old) can be propagated. By guaranteeing only one bit can be changing, Gray codes guarantee that the only possible sampled values are the new or old multi-bit value. Typically, Gray codes of power-of-two length are used.
Sometimes digital buses in electronic systems are used to convey quantities that can only increase or decrease by one at a time, for example the output of an event counter which is being passed between clock domains or to a digital-to-analog converter. The advantage of Gray codes in these applications is that differences in the propagation delays of the many wires that represent the bits of the code cannot cause the received value to go through states that are out of the Gray code sequence. This is similar to the advantage of Gray codes in the construction of mechanical encoders; however, the source of the Gray code is an electronic counter in this case. The counter itself must count in Gray code, or if the counter runs in binary then the output value from the counter must be reclocked after it has been converted to Gray code, because when a value is converted from binary to Gray code,[nb 1] it is possible that differences in the arrival times of the binary data bits into the binary-to-Gray conversion circuit will mean that the code could go briefly through states that are wildly out of sequence. Adding a clocked register after the circuit that converts the count value to Gray code may introduce a clock cycle of latency, so counting directly in Gray code may be advantageous.[54]
To produce the next count value in a Gray-code counter, it is necessary to have some combinational logic that will increment the current count value that is stored. One way to increment a Gray code number is to convert it into ordinary binary code,[55]add one to it with a standard binary adder, and then convert the result back to Gray code.[56]Other methods of counting in Gray code are discussed in a report byRobert W. Doran, including taking the output from the first latches of the master-slave flip flops in a binary ripple counter.[57]
As the execution ofprogram codetypically causes an instruction memory access pattern of locally consecutive addresses,bus encodingsusing Gray code addressing instead of binary addressing can reduce the number of state changes of the address bits significantly, thereby reducing theCPU power consumptionin some low-power designs.[58][59]
The binary-reflected Gray code list fornbits can be generatedrecursivelyfrom the list forn− 1 bits by reflecting the list (i.e. listing the entries in reverse order), prefixing the entries in the original list with a binary0, prefixing the entries in the reflected list with a binary1, and then concatenating the original list with the reversed list.[13]For example, generating then= 3 list from then= 2 list:
The one-bit Gray code isG1= (0,1). This can be thought of as built recursively as above from a zero-bit Gray codeG0= (Λ) consisting of a single entry of zero length. This iterative process of generatingGn+1fromGnmakes the following properties of the standard reflecting code clear:
These characteristics suggest a simple and fast method of translating a binary value into the corresponding Gray code. Each bit is inverted if the next higher bit of the input value is set to one. This can be performed in parallel by a bit-shift and exclusive-or operation if they are available: the nth Gray code is obtained by computing n⊕⌊n2⌋{\displaystyle n\oplus \left\lfloor {\tfrac {n}{2}}\right\rfloor }. Prepending a 0 bit leaves the order of the code words unchanged; prepending a 1 bit reverses the order of the code words. If the bits at position i{\displaystyle i} of codewords are inverted, the order of neighbouring blocks of 2i{\displaystyle 2^{i}} codewords is reversed. For example, if bit 0 is inverted in a 3-bit codeword sequence, the order of two neighbouring codewords is reversed.
If bit 1 is inverted, blocks of 2 codewords change order:
If bit 2 is inverted, blocks of 4 codewords reverse order:
Thus, performing anexclusive oron a bitbi{\displaystyle b_{i}}at positioni{\displaystyle i}with the bitbi+1{\displaystyle b_{i+1}}at positioni+1{\displaystyle i+1}leaves the order of codewords intact ifbi+1=0{\displaystyle b_{i+1}={\mathtt {0}}}, and reverses the order of blocks of2i+1{\displaystyle 2^{i+1}}codewords ifbi+1=1{\displaystyle b_{i+1}={\mathtt {1}}}. Now, this is exactly the same operation as the reflect-and-prefix method to generate the Gray code.
A similar method can be used to perform the reverse translation, but the computation of each bit depends on the computed value of the next higher bit so it cannot be performed in parallel. Assuminggi{\displaystyle g_{i}}is thei{\displaystyle i}th Gray-coded bit (g0{\displaystyle g_{0}}being the most significant bit), andbi{\displaystyle b_{i}}is thei{\displaystyle i}th binary-coded bit (b0{\displaystyle b_{0}}being the most-significant bit), the reverse translation can be given recursively:b0=g0{\displaystyle b_{0}=g_{0}}, andbi=gi⊕bi−1{\displaystyle b_{i}=g_{i}\oplus b_{i-1}}. Alternatively, decoding a Gray code into a binary number can be described as aprefix sumof the bits in the Gray code, where each individual summation operation in the prefix sum is performed modulo two.
To construct the binary-reflected Gray code iteratively, at step 0 start with thecode0=0{\displaystyle \mathrm {code} _{0}={\mathtt {0}}}, and at stepi>0{\displaystyle i>0}find the bit position of the least significant1in the binary representation ofi{\displaystyle i}and flip the bit at that position in the previous codecodei−1{\displaystyle \mathrm {code} _{i-1}}to get the next codecodei{\displaystyle \mathrm {code} _{i}}. The bit positions start 0, 1, 0, 2, 0, 1, 0, 3, ...[nb 2]Seefind first setfor efficient algorithms to compute these values.
The following functions inCconvert between binary numbers and their associated Gray codes. While it may seem that Gray-to-binary conversion requires each bit to be handled one at a time, faster algorithms exist.[60][55][nb 1]
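Typical versions of these C conversion functions look like the following (the function names are illustrative). The slow decoder handles one bit at a time; the faster variant performs the same prefix XOR in a logarithmic number of shift-XOR steps, for a fixed 32-bit width.

```c
#include <stdint.h>

/* Binary to Gray: the n-th Gray code is n XOR floor(n / 2). */
uint32_t binary_to_gray(uint32_t num)
{
    return num ^ (num >> 1);
}

/* Gray to binary, one bit at a time: each binary bit is the XOR of all
 * higher Gray bits (a prefix sum modulo two). */
uint32_t gray_to_binary(uint32_t num)
{
    uint32_t mask = num;
    while (mask) {
        mask >>= 1;
        num ^= mask;
    }
    return num;
}

/* Faster variant for a fixed 32-bit width: the same prefix XOR computed
 * in five shift-XOR steps instead of one step per bit. */
uint32_t gray_to_binary32(uint32_t num)
{
    num ^= num >> 16;
    num ^= num >> 8;
    num ^= num >> 4;
    num ^= num >> 2;
    num ^= num >> 1;
    return num;
}
```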
On newer processors, the number of ALU instructions in the decoding step can be reduced by taking advantage of the CLMUL instruction set. If MASK is the constant binary string of ones ending in a single zero digit, then carry-less multiplication of MASK with the Gray encoding of x will always give either x or its bitwise negation.
In practice, "Gray code" almost always refers to a binary-reflected Gray code (BRGC). However, mathematicians have discovered other kinds of Gray codes. Like BRGCs, each consists of a list of words, where each word differs from the next in only one digit (each word has aHamming distanceof 1 from the next word).
It is possible to construct binary Gray codes withnbits with a length of less than2n, if the length is even. One possibility is to start with a balanced Gray code and remove pairs of values at either the beginning and the end, or in the middle.[61]OEISsequence A290772[62]gives the number of possible Gray sequences of length2nthat include zero and use the minimum number of bits.
0 → 000, 1 → 001, 2 → 002, 10 → 012, 11 → 011, 12 → 010, 20 → 020, 21 → 021, 22 → 022
100 → 122, 101 → 121, 102 → 120, 110 → 110, 111 → 111, 112 → 112, 120 → 102, 121 → 101, 122 → 100
200 → 200, 201 → 201, 202 → 202, 210 → 212, 211 → 211, 212 → 210, 220 → 220, 221 → 221
There are many specialized types of Gray codes other than the binary-reflected Gray code. One such type of Gray code is then-ary Gray code, also known as anon-Boolean Gray code. As the name implies, this type of Gray code uses non-Booleanvalues in its encodings.
For example, a 3-ary (ternary) Gray code would use the values 0, 1, 2.[31] The (n, k)-Gray code is the n-ary Gray code with k digits.[63] The sequence of elements in the (3, 2)-Gray code is: 00, 01, 02, 12, 11, 10, 20, 21, 22. The (n, k)-Gray code may be constructed recursively, as the BRGC, or may be constructed iteratively. An algorithm to iteratively generate the (n, k)-Gray code is presented (in C):
There are other Gray code algorithms for (n,k)-Gray codes. The (n,k)-Gray code produced by the above algorithm is always cyclical; some algorithms, such as that by Guan,[63]lack this property whenkis odd. On the other hand, while only one digit at a time changes with this method, it can change by wrapping (looping fromn− 1 to 0). In Guan's algorithm, the count alternately rises and falls, so that the numeric difference between two Gray code digits is always one.
Gray codes are not uniquely defined, because a permutation of the columns of such a code is a Gray code too. The above procedure produces a code in which the lower the significance of a digit, the more often it changes, making it similar to normal counting methods.
See alsoSkew binary number system, a variant ternary number system where at most two digits change on each increment, as each increment can be done with at most one digitcarryoperation.
Although the binary reflected Gray code is useful in many scenarios, it is not optimal in certain cases because of a lack of "uniformity".[52]Inbalanced Gray codes, the number of changes in different coordinate positions are as close as possible. To make this more precise, letGbe anR-ary complete Gray cycle having transition sequence(δk){\displaystyle (\delta _{k})}; thetransition counts(spectrum) ofGare the collection of integers defined by
λk=|{j∈ZRn:δj=k}|,fork∈Zn{\displaystyle \lambda _{k}=|\{j\in \mathbb {Z} _{R^{n}}:\delta _{j}=k\}|\,,{\text{ for }}k\in \mathbb {Z} _{n}}
A Gray code isuniformoruniformly balancedif its transition counts are all equal, in which case we haveλk=Rnn{\displaystyle \lambda _{k}={\tfrac {R^{n}}{n}}}for allk. Clearly, whenR=2{\displaystyle R=2}, such codes exist only ifnis a power of 2.[64]Ifnis not a power of 2, it is possible to constructwell-balancedbinary codes where the difference between two transition counts is at most 2; so that (combining both cases) every transition count is either2⌊2n2n⌋{\displaystyle 2\left\lfloor {\tfrac {2^{n}}{2n}}\right\rfloor }or2⌈2n2n⌉{\displaystyle 2\left\lceil {\tfrac {2^{n}}{2n}}\right\rceil }.[52]Gray codes can also beexponentially balancedif all of their transition counts are adjacent powers of two, and such codes exist for every power of two.[65]
For example, a balanced 4-bit Gray code has 16 transitions, which can be evenly distributed among all four positions (four transitions per position), making it uniformly balanced:[52]
whereas a balanced 5-bit Gray code has a total of 32 transitions, which cannot be evenly distributed among the positions. In this example, four positions have six transitions each, and one has eight:[52]
We will now show a construction[66]and implementation[67]for well-balanced binary Gray codes which allows us to generate ann-digit balanced Gray code for everyn. The main principle is to inductively construct an (n+ 2)-digit Gray codeG′{\displaystyle G'}given ann-digit Gray codeGin such a way that the balanced property is preserved. To do this, we consider partitions ofG=g0,…,g2n−1{\displaystyle G=g_{0},\ldots ,g_{2^{n}-1}}into an even numberLof non-empty blocks of the form
{g0},{g1,…,gk2},{gk2+1,…,gk3},…,{gkL−2+1,…,g−2},{g−1}{\displaystyle \left\{g_{0}\right\},\left\{g_{1},\ldots ,g_{k_{2}}\right\},\left\{g_{k_{2}+1},\ldots ,g_{k_{3}}\right\},\ldots ,\left\{g_{k_{L-2}+1},\ldots ,g_{-2}\right\},\left\{g_{-1}\right\}}
where k1=0{\displaystyle k_{1}=0}, kL−1=−2{\displaystyle k_{L-1}=-2}, and kL≡−1(mod2n){\displaystyle k_{L}\equiv -1{\pmod {2^{n}}}}. This partition induces an (n+2){\displaystyle (n+2)}-digit Gray code given by
If we define thetransition multiplicities
mi=|{j:δkj=i,1≤j≤L}|{\displaystyle m_{i}=\left|\left\{j:\delta _{k_{j}}=i,1\leq j\leq L\right\}\right|}
to be the number of times the digit in positionichanges between consecutive blocks in a partition, then for the (n+ 2)-digit Gray code induced by this partition the transition spectrumλi′{\displaystyle \lambda '_{i}}is
λi′={4λi−2mi,if0≤i<nL,otherwise{\displaystyle \lambda '_{i}={\begin{cases}4\lambda _{i}-2m_{i},&{\text{if }}0\leq i<n\\L,&{\text{ otherwise }}\end{cases}}}
The delicate part of this construction is to find an adequate partitioning of a balancedn-digit Gray code such that the code induced by it remains balanced, but for this only the transition multiplicities matter; joining two consecutive blocks over a digiti{\displaystyle i}transition and splitting another block at another digiti{\displaystyle i}transition produces a different Gray code with exactly the same transition spectrumλi′{\displaystyle \lambda '_{i}}, so one may for example[65]designate the firstmi{\displaystyle m_{i}}transitions at digiti{\displaystyle i}as those that fall between two blocks. Uniform codes can be found whenR≡0(mod4){\displaystyle R\equiv 0{\pmod {4}}}andRn≡0(modn){\displaystyle R^{n}\equiv 0{\pmod {n}}}, and this construction can be extended to theR-ary case as well.[66]
Long run (ormaximum gap) Gray codes maximize the distance between consecutive changes of digits in the same position. That is, the minimum run-length of any bit remains unchanged for as long as possible.[68]
Monotonic codes are useful in the theory of interconnection networks, especially for minimizing dilation for linear arrays of processors.[69]If we define theweightof a binary string to be the number of 1s in the string, then although we clearly cannot have a Gray code with strictly increasing weight, we may want to approximate this by having the code run through two adjacent weights before reaching the next one.
We can formalize the concept of monotone Gray codes as follows: consider the partition of the hypercubeQn=(Vn,En){\displaystyle Q_{n}=(V_{n},E_{n})}intolevelsof vertices that have equal weight, i.e.
Vn(i)={v∈Vn:vhas weighti}{\displaystyle V_{n}(i)=\{v\in V_{n}:v{\text{ has weight }}i\}}
for0≤i≤n{\displaystyle 0\leq i\leq n}. These levels satisfy|Vn(i)|=(ni){\displaystyle |V_{n}(i)|=\textstyle {\binom {n}{i}}}. LetQn(i){\displaystyle Q_{n}(i)}be the subgraph ofQn{\displaystyle Q_{n}}induced byVn(i)∪Vn(i+1){\displaystyle V_{n}(i)\cup V_{n}(i+1)}, and letEn(i){\displaystyle E_{n}(i)}be the edges inQn(i){\displaystyle Q_{n}(i)}. A monotonic Gray code is then a Hamiltonian path inQn{\displaystyle Q_{n}}such that wheneverδ1∈En(i){\displaystyle \delta _{1}\in E_{n}(i)}comes beforeδ2∈En(j){\displaystyle \delta _{2}\in E_{n}(j)}in the path, theni≤j{\displaystyle i\leq j}.
An elegant construction of monotonicn-digit Gray codes for anynis based on the idea of recursively building subpathsPn,j{\displaystyle P_{n,j}}of length2(nj){\displaystyle 2\textstyle {\binom {n}{j}}}having edges inEn(j){\displaystyle E_{n}(j)}.[69]We defineP1,0=(0,1){\displaystyle P_{1,0}=({\mathtt {0}},{\mathtt {1}})},Pn,j=∅{\displaystyle P_{n,j}=\emptyset }wheneverj<0{\displaystyle j<0}orj≥n{\displaystyle j\geq n}, and
Pn+1,j=1Pn,j−1πn,0Pn,j{\displaystyle P_{n+1,j}={\mathtt {1}}P_{n,j-1}^{\pi _{n}},{\mathtt {0}}P_{n,j}}
otherwise. Here,πn{\displaystyle \pi _{n}}is a suitably defined permutation andPπ{\displaystyle P^{\pi }}refers to the pathPwith its coordinates permuted byπ{\displaystyle \pi }. These paths give rise to two monotonicn-digit Gray codesGn(1){\displaystyle G_{n}^{(1)}}andGn(2){\displaystyle G_{n}^{(2)}}given by
Gn(1)=Pn,0Pn,1RPn,2Pn,3R⋯andGn(2)=Pn,0RPn,1Pn,2RPn,3⋯{\displaystyle G_{n}^{(1)}=P_{n,0}P_{n,1}^{R}P_{n,2}P_{n,3}^{R}\cdots {\text{ and }}G_{n}^{(2)}=P_{n,0}^{R}P_{n,1}P_{n,2}^{R}P_{n,3}\cdots }
The choice ofπn{\displaystyle \pi _{n}}which ensures that these codes are indeed Gray codes turns out to beπn=E−1(πn−12){\displaystyle \pi _{n}=E^{-1}\left(\pi _{n-1}^{2}\right)}. The first few values ofPn,j{\displaystyle P_{n,j}}are shown in the table below.
These monotonic Gray codes can be efficiently implemented in such a way that each subsequent element can be generated inO(n) time. The algorithm is most easily described usingcoroutines.
Monotonic codes have an interesting connection to theLovász conjecture, which states that every connectedvertex-transitive graphcontains a Hamiltonian path. The "middle-level" subgraphQ2n+1(n){\displaystyle Q_{2n+1}(n)}isvertex-transitive(that is, its automorphism group is transitive, so that each vertex has the same "local environment" and cannot be differentiated from the others, since we can relabel the coordinates as well as the binary digits to obtain anautomorphism) and the problem of finding a Hamiltonian path in this subgraph is called the "middle-levels problem", which can provide insights into the more general conjecture. The question has been answered affirmatively forn≤15{\displaystyle n\leq 15}, and the preceding construction for monotonic codes ensures a Hamiltonian path of length at least 0.839N, whereNis the number of vertices in the middle-level subgraph.[70]
Another type of Gray code, the Beckett–Gray code, is named for Irish playwright Samuel Beckett, who was interested in symmetry. His play "Quad" features four actors and is divided into sixteen time periods. Each period ends with one of the four actors entering or leaving the stage. The play begins and ends with an empty stage, and Beckett wanted each subset of actors to appear on stage exactly once.[71] Clearly the set of actors currently on stage can be represented by a 4-bit binary Gray code. Beckett, however, placed an additional restriction on the script: he wished the actors to enter and exit so that the actor who had been on stage the longest would always be the one to exit. The actors could then be represented by a first in, first out queue, so that (of the actors onstage) the actor being dequeued is always the one who was enqueued first.[71] Beckett was unable to find a Beckett–Gray code for his play, and indeed, an exhaustive listing of all possible sequences reveals that no such code exists for n = 4. It is known today that such codes do exist for n = 2, 5, 6, 7, and 8, and do not exist for n = 3 or 4. An example of an 8-bit Beckett–Gray code can be found in Donald Knuth's Art of Computer Programming.[13] According to Sawada and Wong, the search space for n = 6 can be explored in 15 hours, and more than 9500 solutions for the case n = 7 have been found.[72]
Snake-in-the-boxcodes, orsnakes, are the sequences of nodes ofinduced pathsin ann-dimensionalhypercube graph, and coil-in-the-box codes,[73]orcoils, are the sequences of nodes of inducedcyclesin a hypercube. Viewed as Gray codes, these sequences have the property of being able to detect any single-bit coding error. Codes of this type were first described byWilliam H. Kautzin the late 1950s;[5]since then, there has been much research on finding the code with the largest possible number of codewords for a given hypercube dimension.
Yet another kind of Gray code is thesingle-track Gray code(STGC) developed by Norman B. Spedding[74][75]and refined by Hiltgen, Paterson and Brandestini inSingle-track Gray Codes(1996).[76][77]The STGC is a cyclical list ofPunique binary encodings of length n such that two consecutive words differ in exactly one position, and when the list is examined as aP×nmatrix, each column is a cyclic shift of the first column.[78]
The name comes from their use withrotary encoders, where a number of tracks are being sensed by contacts, resulting for each in an output of0or1. To reduce noise due to different contacts not switching at exactly the same moment in time, one preferably sets up the tracks so that the data output by the contacts are in Gray code. To get high angular accuracy, one needs lots of contacts; in order to achieve at least 1° accuracy, one needs at least 360 distinct positions per revolution, which requires a minimum of 9 bits of data, and thus the same number of contacts.
If all contacts are placed at the same angular position, then 9 tracks are needed to get a standard BRGC with at least 1° accuracy. However, if the manufacturer moves a contact to a different angular position (but at the same distance from the center shaft), then the corresponding "ring pattern" needs to be rotated the same angle to give the same output. If the most significant bit (the inner ring in Figure 1) is rotated enough, it exactly matches the next ring out. Since both rings are then identical, the inner ring can be cut out, and the sensor for that ring moved to the remaining, identical ring (but offset at that angle from the other sensor on that ring). Those two sensors on a single ring make a quadrature encoder. That reduces the number of tracks for a "1° resolution" angular encoder to 8 tracks. Reducing the number of tracks still further cannot be done with BRGC.
For many years, Torsten Sillke[79]and other mathematicians believed that it was impossible to encode position on a single track such that consecutive positions differed at only a single sensor, except for the 2-sensor, 1-track quadrature encoder. So for applications where 8 tracks were too bulky, people used single-track incremental encoders (quadrature encoders) or 2-track "quadrature encoder + reference notch" encoders.
Norman B. Spedding, however, registered a patent in 1994 with several examples showing that it was possible.[74] Although it is not possible to distinguish 2^n positions with n sensors on a single track, it is possible to distinguish close to that many. Etzion and Paterson conjecture that when n is itself a power of 2, n sensors can distinguish at most 2^n − 2n positions and that for prime n the limit is 2^n − 2 positions.[80] The authors went on to generate a 504-position single-track code of length 9 which they believe is optimal. Since this number is larger than 2^8 = 256, more than 8 sensors are required by any code, although a BRGC could distinguish 512 positions with 9 sensors.
An STGC forP= 30 andn= 5 is reproduced here:
Each column is a cyclic shift of the first column, and from any row to the next row only one bit changes.[81]The single-track nature (like a code chain) is useful in the fabrication of these wheels (compared to BRGC), as only one track is needed, thus reducing their cost and size.
The Gray code nature is useful (compared tochain codes, also calledDe Bruijn sequences), as only one sensor will change at any one time, so the uncertainty during a transition between two discrete states will only be plus or minus one unit of angular measurement the device is capable of resolving.[82]
Since this 30 degree example was added, there has been considerable interest in examples with higher angular resolution. In 2008, Gary Williams,[83][user-generated source?] based on previous work,[80] discovered a 9-bit single-track Gray code that gives 1 degree resolution. This Gray code was used to design an actual device, published on the site Thingiverse, which was designed by etzenseep (Florian Bauer) in September 2022.[84]
An STGC forP= 360 andn= 9 is reproduced here:
Two-dimensional Gray codes are used in communication to minimize the number of bit errors inquadrature amplitude modulation(QAM) adjacent points in theconstellation. In a typical encoding the horizontal and vertical adjacent constellation points differ by a single bit, and diagonal adjacent points differ by 2 bits.[85]
Two-dimensional Gray codes also have uses inlocation identificationsschemes, where the code would be applied to area maps such as aMercator projectionof the earth's surface and an appropriate cyclic two-dimensional distance function such as theMannheim metricbe used to calculate the distance between two encoded locations, thereby combining the characteristics of theHamming distancewith the cyclic continuation of a Mercator projection.[86]
If a subsection of a specific code value is extracted from that value, for example the last 3 bits of a 4-bit Gray code, the resulting code will be an "excess Gray code". This code has the property of counting backwards in those extracted bits when the original value is increased further. The reason is that Gray-encoded values do not show the overflow behaviour, known from classic binary encoding, when increasing past the "highest" value.
Example: The highest 3-bit Gray code value, 7, is encoded as (0)100. Adding 1 results in the number 8, encoded in Gray as 1100. The last 3 bits do not overflow, but count backwards as the original 4-bit code is increased further.
When working with sensors that output multiple Gray-encoded values in a serial fashion, one should therefore check whether the sensor produces those multiple values encoded as one single Gray code or as separate ones, as otherwise the values might appear to be counting backwards when an "overflow" is expected.
The bijective mapping { 0 ↔00, 1 ↔01, 2 ↔11, 3 ↔10} establishes anisometrybetween themetric spaceover thefinite fieldZ22{\displaystyle \mathbb {Z} _{2}^{2}}with the metric given by theHamming distanceand the metric space over thefinite ringZ4{\displaystyle \mathbb {Z} _{4}}(the usualmodular arithmetic) with the metric given by theLee distance. The mapping is suitably extended to an isometry of theHamming spacesZ22m{\displaystyle \mathbb {Z} _{2}^{2m}}andZ4m{\displaystyle \mathbb {Z} _{4}^{m}}. Its importance lies in establishing a correspondence between various "good" but not necessarilylinear codesas Gray-map images inZ22{\displaystyle \mathbb {Z} _{2}^{2}}ofring-linear codesfromZ4{\displaystyle \mathbb {Z} _{4}}.[87][88]
There are a number of binary codes similar to Gray codes, including:
The followingbinary-coded decimal(BCD) codes are Gray code variants as well:
https://en.wikipedia.org/wiki/Gray_code
In mathematics, the Hessian matrix, Hessian or (less commonly) Hesse matrix is a square matrix of second-order partial derivatives of a scalar-valued function, or scalar field. It describes the local curvature of a function of many variables. The Hessian matrix was developed in the 19th century by the German mathematician Ludwig Otto Hesse and later named after him. Hesse originally used the term "functional determinants". The Hessian is sometimes denoted by H, ∇∇{\displaystyle \nabla \nabla }, ∇⊗∇{\displaystyle \nabla \otimes \nabla }, or D2{\displaystyle D^{2}}.
Supposef:Rn→R{\displaystyle f:\mathbb {R} ^{n}\to \mathbb {R} }is a function taking as input a vectorx∈Rn{\displaystyle \mathbf {x} \in \mathbb {R} ^{n}}and outputting a scalarf(x)∈R.{\displaystyle f(\mathbf {x} )\in \mathbb {R} .}If all second-orderpartial derivativesoff{\displaystyle f}exist, then the Hessian matrixH{\displaystyle \mathbf {H} }off{\displaystyle f}is a squaren×n{\displaystyle n\times n}matrix, usually defined and arranged asHf=[∂2f∂x12∂2f∂x1∂x2⋯∂2f∂x1∂xn∂2f∂x2∂x1∂2f∂x22⋯∂2f∂x2∂xn⋮⋮⋱⋮∂2f∂xn∂x1∂2f∂xn∂x2⋯∂2f∂xn2].{\displaystyle \mathbf {H} _{f}={\begin{bmatrix}{\dfrac {\partial ^{2}f}{\partial x_{1}^{2}}}&{\dfrac {\partial ^{2}f}{\partial x_{1}\,\partial x_{2}}}&\cdots &{\dfrac {\partial ^{2}f}{\partial x_{1}\,\partial x_{n}}}\\[2.2ex]{\dfrac {\partial ^{2}f}{\partial x_{2}\,\partial x_{1}}}&{\dfrac {\partial ^{2}f}{\partial x_{2}^{2}}}&\cdots &{\dfrac {\partial ^{2}f}{\partial x_{2}\,\partial x_{n}}}\\[2.2ex]\vdots &\vdots &\ddots &\vdots \\[2.2ex]{\dfrac {\partial ^{2}f}{\partial x_{n}\,\partial x_{1}}}&{\dfrac {\partial ^{2}f}{\partial x_{n}\,\partial x_{2}}}&\cdots &{\dfrac {\partial ^{2}f}{\partial x_{n}^{2}}}\end{bmatrix}}.}That is, the entry of theith row and thejth column is(Hf)i,j=∂2f∂xi∂xj.{\displaystyle (\mathbf {H} _{f})_{i,j}={\frac {\partial ^{2}f}{\partial x_{i}\,\partial x_{j}}}.}
If furthermore the second partial derivatives are all continuous, the Hessian matrix is asymmetric matrixby thesymmetry of second derivatives.
Thedeterminantof the Hessian matrix is called theHessian determinant.[1]
The Hessian matrix of a functionf{\displaystyle f}is theJacobian matrixof thegradientof the functionf{\displaystyle f}; that is:H(f(x))=J(∇f(x)).{\displaystyle \mathbf {H} (f(\mathbf {x} ))=\mathbf {J} (\nabla f(\mathbf {x} )).}
Iff{\displaystyle f}is ahomogeneous polynomialin three variables, the equationf=0{\displaystyle f=0}is theimplicit equationof aplane projective curve. Theinflection pointsof the curve are exactly the non-singular points where the Hessian determinant is zero. It follows byBézout's theoremthat acubic plane curvehas at most 9 inflection points, since the Hessian determinant is a polynomial of degree 3.
The Hessian matrix of aconvex functionispositive semi-definite. Refining this property allows us to test whether acritical pointx{\displaystyle x}is a local maximum, local minimum, or a saddle point, as follows:
If the Hessian ispositive-definiteatx,{\displaystyle x,}thenf{\displaystyle f}attains an isolated local minimum atx.{\displaystyle x.}If the Hessian isnegative-definiteatx,{\displaystyle x,}thenf{\displaystyle f}attains an isolated local maximum atx.{\displaystyle x.}If the Hessian has both positive and negativeeigenvalues, thenx{\displaystyle x}is asaddle pointforf.{\displaystyle f.}Otherwise the test is inconclusive. This implies that at a local minimum the Hessian is positive-semidefinite, and at a local maximum the Hessian is negative-semidefinite.
For positive-semidefinite and negative-semidefinite Hessians the test is inconclusive (a critical point where the Hessian is semidefinite but not definite may be a local extremum or a saddle point). However, more can be said from the point of view ofMorse theory.
Thesecond-derivative testfor functions of one and two variables is simpler than the general case. In one variable, the Hessian contains exactly one second derivative; if it is positive, thenx{\displaystyle x}is a local minimum, and if it is negative, thenx{\displaystyle x}is a local maximum; if it is zero, then the test is inconclusive. In two variables, thedeterminantcan be used, because the determinant is the product of the eigenvalues. If it is positive, then the eigenvalues are both positive, or both negative. If it is negative, then the two eigenvalues have different signs. If it is zero, then the second-derivative test is inconclusive.
Equivalently, the second-order conditions that are sufficient for a local minimum or maximum can be expressed in terms of the sequence of principal (upper-leftmost)minors(determinants of sub-matrices) of the Hessian; these conditions are a special case of those given in the next section for bordered Hessians for constrained optimization—the case in which the number of constraints is zero. Specifically, the sufficient condition for a minimum is that all of these principal minors be positive, while the sufficient condition for a maximum is that the minors alternate in sign, with the1×1{\displaystyle 1\times 1}minor being negative.
If thegradient(the vector of the partial derivatives) of a functionf{\displaystyle f}is zero at some pointx,{\displaystyle \mathbf {x} ,}thenf{\displaystyle f}has acritical point(orstationary point) atx.{\displaystyle \mathbf {x} .}Thedeterminantof the Hessian atx{\displaystyle \mathbf {x} }is called, in some contexts, adiscriminant. If this determinant is zero thenx{\displaystyle \mathbf {x} }is called adegenerate critical pointoff,{\displaystyle f,}or anon-Morse critical pointoff.{\displaystyle f.}Otherwise it is non-degenerate, and called aMorse critical pointoff.{\displaystyle f.}
The Hessian matrix plays an important role inMorse theoryandcatastrophe theory, because itskernelandeigenvaluesallow classification of the critical points.[2][3][4]
The determinant of the Hessian matrix, when evaluated at a critical point of a function, is equal to theGaussian curvatureof the function considered as a manifold. The eigenvalues of the Hessian at that point are the principal curvatures of the function, and the eigenvectors are the principal directions of curvature. (SeeGaussian curvature § Relation to principal curvatures.)
Hessian matrices are used in large-scaleoptimizationproblems withinNewton-type methods because they are the coefficient of the quadratic term of a localTaylor expansionof a function. That is,y=f(x+Δx)≈f(x)+∇f(x)TΔx+12ΔxTH(x)Δx{\displaystyle y=f(\mathbf {x} +\Delta \mathbf {x} )\approx f(\mathbf {x} )+\nabla f(\mathbf {x} )^{\mathsf {T}}\Delta \mathbf {x} +{\frac {1}{2}}\,\Delta \mathbf {x} ^{\mathsf {T}}\mathbf {H} (\mathbf {x} )\,\Delta \mathbf {x} }where∇f{\displaystyle \nabla f}is thegradient(∂f∂x1,…,∂f∂xn).{\displaystyle \left({\frac {\partial f}{\partial x_{1}}},\ldots ,{\frac {\partial f}{\partial x_{n}}}\right).}Computing and storing the full Hessian matrix takesΘ(n2){\displaystyle \Theta \left(n^{2}\right)}memory, which is infeasible for high-dimensional functions such as theloss functionsofneural nets,conditional random fields, and otherstatistical modelswith large numbers of parameters. For such situations,truncated-Newtonandquasi-Newtonalgorithms have been developed. The latter family of algorithms use approximations to the Hessian; one of the most popular quasi-Newton algorithms isBFGS.[5]
Such approximations may use the fact that an optimization algorithm uses the Hessian only as alinear operatorH(v),{\displaystyle \mathbf {H} (\mathbf {v} ),}and proceed by first noticing that the Hessian also appears in the local expansion of the gradient:∇f(x+Δx)=∇f(x)+H(x)Δx+O(‖Δx‖2){\displaystyle \nabla f(\mathbf {x} +\Delta \mathbf {x} )=\nabla f(\mathbf {x} )+\mathbf {H} (\mathbf {x} )\,\Delta \mathbf {x} +{\mathcal {O}}(\|\Delta \mathbf {x} \|^{2})}
LettingΔx=rv{\displaystyle \Delta \mathbf {x} =r\mathbf {v} }for some scalarr,{\displaystyle r,}this givesH(x)Δx=H(x)rv=rH(x)v=∇f(x+rv)−∇f(x)+O(r2),{\displaystyle \mathbf {H} (\mathbf {x} )\,\Delta \mathbf {x} =\mathbf {H} (\mathbf {x} )r\mathbf {v} =r\mathbf {H} (\mathbf {x} )\mathbf {v} =\nabla f(\mathbf {x} +r\mathbf {v} )-\nabla f(\mathbf {x} )+{\mathcal {O}}(r^{2}),}that is,H(x)v=1r[∇f(x+rv)−∇f(x)]+O(r){\displaystyle \mathbf {H} (\mathbf {x} )\mathbf {v} ={\frac {1}{r}}\left[\nabla f(\mathbf {x} +r\mathbf {v} )-\nabla f(\mathbf {x} )\right]+{\mathcal {O}}(r)}so if the gradient is already computed, the approximate Hessian can be computed by a linear (in the size of the gradient) number of scalar operations. (While simple to program, this approximation scheme is not numerically stable sincer{\displaystyle r}has to be made small to prevent error due to theO(r){\displaystyle {\mathcal {O}}(r)}term, but decreasing it loses precision in the first term.[6])
Notably, regarding randomized search heuristics, the evolution strategy's covariance matrix adapts to the inverse of the Hessian matrix, up to a scalar factor and small random fluctuations.
This result has been formally proven for a single-parent strategy and a static model, in the limit of increasing population size, relying on the quadratic approximation.[7]
The Hessian matrix is commonly used for expressing image processing operators in image processing and computer vision (see the Laplacian of Gaussian (LoG) blob detector, the determinant of Hessian (DoH) blob detector, and scale space). It can be used in normal mode analysis to calculate the different molecular frequencies in infrared spectroscopy.[8] It can also be used in local sensitivity and statistical diagnostics.[9]
A bordered Hessian is used for the second-derivative test in certain constrained optimization problems. Given the function $f$ considered previously, but adding a constraint function $g$ such that $g(\mathbf{x}) = c$, the bordered Hessian is the Hessian of the Lagrange function $\Lambda(\mathbf{x}, \lambda) = f(\mathbf{x}) + \lambda[g(\mathbf{x}) - c]$:[10]

$$\mathbf{H}(\Lambda) = \begin{bmatrix} \dfrac{\partial^2 \Lambda}{\partial \lambda^2} & \dfrac{\partial^2 \Lambda}{\partial \lambda\, \partial \mathbf{x}} \\ \left( \dfrac{\partial^2 \Lambda}{\partial \lambda\, \partial \mathbf{x}} \right)^{\mathsf{T}} & \dfrac{\partial^2 \Lambda}{\partial \mathbf{x}^2} \end{bmatrix} = \begin{bmatrix} 0 & \dfrac{\partial g}{\partial x_1} & \dfrac{\partial g}{\partial x_2} & \cdots & \dfrac{\partial g}{\partial x_n} \\ \dfrac{\partial g}{\partial x_1} & \dfrac{\partial^2 \Lambda}{\partial x_1^2} & \dfrac{\partial^2 \Lambda}{\partial x_1\, \partial x_2} & \cdots & \dfrac{\partial^2 \Lambda}{\partial x_1\, \partial x_n} \\ \dfrac{\partial g}{\partial x_2} & \dfrac{\partial^2 \Lambda}{\partial x_2\, \partial x_1} & \dfrac{\partial^2 \Lambda}{\partial x_2^2} & \cdots & \dfrac{\partial^2 \Lambda}{\partial x_2\, \partial x_n} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ \dfrac{\partial g}{\partial x_n} & \dfrac{\partial^2 \Lambda}{\partial x_n\, \partial x_1} & \dfrac{\partial^2 \Lambda}{\partial x_n\, \partial x_2} & \cdots & \dfrac{\partial^2 \Lambda}{\partial x_n^2} \end{bmatrix} = \begin{bmatrix} 0 & \dfrac{\partial g}{\partial \mathbf{x}} \\ \left( \dfrac{\partial g}{\partial \mathbf{x}} \right)^{\mathsf{T}} & \dfrac{\partial^2 \Lambda}{\partial \mathbf{x}^2} \end{bmatrix}$$
If there are, say, $m$ constraints, then the zero in the upper-left corner is an $m \times m$ block of zeros, and there are $m$ border rows at the top and $m$ border columns at the left.
The above rules stating that extrema are characterized (among critical points with a non-singular Hessian) by a positive-definite or negative-definite Hessian cannot apply here, since a bordered Hessian can be neither negative-definite nor positive-definite, as $\mathbf{z}^{\mathsf{T}} \mathbf{H} \mathbf{z} = 0$ if $\mathbf{z}$ is any vector whose sole non-zero entry is its first.
The second derivative test consists here of sign restrictions on the determinants of a certain set of $n - m$ submatrices of the bordered Hessian.[11] Intuitively, the $m$ constraints can be thought of as reducing the problem to one with $n - m$ free variables. (For example, the maximization of $f(x_1, x_2, x_3)$ subject to the constraint $x_1 + x_2 + x_3 = 1$ can be reduced to the maximization of $f(x_1, x_2, 1 - x_1 - x_2)$ without constraint.)
Specifically, sign conditions are imposed on the sequence of leading principal minors (determinants of upper-left-justified sub-matrices) of the bordered Hessian, for which the first $2m$ leading principal minors are neglected: the smallest minor considered consists of the truncated first $2m+1$ rows and columns, the next of the truncated first $2m+2$ rows and columns, and so on, with the last being the entire bordered Hessian; if $2m+1$ is larger than $n+m$, then the smallest leading principal minor is the bordered Hessian itself.[12] There are thus $n-m$ minors to consider, each evaluated at the specific point being considered as a candidate maximum or minimum. A sufficient condition for a local maximum is that these minors alternate in sign, with the smallest one having the sign of $(-1)^{m+1}$. A sufficient condition for a local minimum is that all of these minors have the sign of $(-1)^m$. (In the unconstrained case of $m=0$ these conditions coincide with the conditions for the unbordered Hessian to be negative definite or positive definite, respectively.)
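As a worked illustration of this test (the example problem is my own choice, not from the text): maximising $f(x, y) = xy$ subject to $x + y = 1$ gives $n = 2$, $m = 1$, so there is a single minor to check, the determinant of the full bordered Hessian, whose required sign for a maximum is $(-1)^{m+1} = +1$.

```python
import numpy as np

# Maximise f(x, y) = x*y subject to g(x, y) = x + y = 1.
# Lagrangian: L = x*y + lam*(x + y - 1); at the critical point x = y = 1/2:
#   dg/dx = dg/dy = 1,  d2L/dx2 = d2L/dy2 = 0,  d2L/dx dy = 1.
H = np.array([[0.0, 1.0, 1.0],   # bordered Hessian: zero corner, then
              [1.0, 0.0, 1.0],   # the gradient of g as the border,
              [1.0, 1.0, 0.0]])  # the Hessian of L in the lower-right block

minor = np.linalg.det(H)   # the single (n - m = 1) minor: the full determinant
print(minor)               # 2.0 > 0, the sign of (-1)**(m+1): a local maximum
```

This agrees with direct substitution: $f(x, 1-x) = x - x^2$ is maximised at $x = 1/2$.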
If $f$ is instead a vector field $\mathbf{f} : \mathbb{R}^n \to \mathbb{R}^m$, that is, $\mathbf{f}(\mathbf{x}) = \left( f_1(\mathbf{x}), f_2(\mathbf{x}), \ldots, f_m(\mathbf{x}) \right)$, then the collection of second partial derivatives is not an $n \times n$ matrix, but rather a third-order tensor. This can be thought of as an array of $m$ Hessian matrices, one for each component of $\mathbf{f}$:

$$\mathbf{H}(\mathbf{f}) = \left( \mathbf{H}(f_1), \mathbf{H}(f_2), \ldots, \mathbf{H}(f_m) \right).$$

This tensor degenerates to the usual Hessian matrix when $m = 1$.
In the context of several complex variables, the Hessian may be generalized. Suppose $f \colon \mathbb{C}^n \to \mathbb{C}$, and write $f(z_1, \ldots, z_n)$. Identifying $\mathbb{C}^n$ with $\mathbb{R}^{2n}$, the normal "real" Hessian is a $2n \times 2n$ matrix. As the objects of study in several complex variables are holomorphic functions, that is, solutions to the $n$-dimensional Cauchy–Riemann conditions, we usually look at the part of the Hessian that contains information invariant under holomorphic changes of coordinates. This "part" is the so-called complex Hessian, which is the matrix $\left( \frac{\partial^2 f}{\partial z_j \partial {\bar z}_k} \right)_{j,k}$. Note that if $f$ is holomorphic, then its complex Hessian matrix is identically zero, so the complex Hessian is used to study smooth but non-holomorphic functions; see for example Levi pseudoconvexity. When dealing with holomorphic functions, we could consider the Hessian matrix $\left( \frac{\partial^2 f}{\partial z_j \partial z_k} \right)_{j,k}$.
Let $(M, g)$ be a Riemannian manifold and $\nabla$ its Levi-Civita connection. Let $f : M \to \mathbb{R}$ be a smooth function. Define the Hessian tensor $\operatorname{Hess}(f) \in \Gamma(T^*M \otimes T^*M)$ by

$$\operatorname{Hess}(f) := \nabla \nabla f = \nabla df,$$

where this takes advantage of the fact that the first covariant derivative of a function is the same as its ordinary differential. Choosing local coordinates $\{x^i\}$ gives a local expression for the Hessian as

$$\operatorname{Hess}(f) = \nabla_i\, \partial_j f \; dx^i \otimes dx^j = \left( \frac{\partial^2 f}{\partial x^i \partial x^j} - \Gamma_{ij}^k \frac{\partial f}{\partial x^k} \right) dx^i \otimes dx^j,$$

where $\Gamma_{ij}^k$ are the Christoffel symbols of the connection. Other equivalent forms for the Hessian are given by

$$\operatorname{Hess}(f)(X, Y) = \langle \nabla_X \operatorname{grad} f, Y \rangle \quad \text{and} \quad \operatorname{Hess}(f)(X, Y) = X(Yf) - df(\nabla_X Y).$$
|
https://en.wikipedia.org/wiki/Hessian_matrix
|
Social information processing is "an activity through which collective human actions organize knowledge."[1] It is the creation and processing of information by a group of people. As an academic field, social information processing studies the information processing power of networked social systems.
Typically, computer tools are used, such as:
Although computers are often used to facilitate networking and collaboration, they are not required. For example, the Trictionary in 1982 was entirely paper-and-pen based, relying on neighborhood social networks and libraries. The creation of the Oxford English Dictionary in the 19th century was done largely with the help of anonymous volunteers, organized by help-wanted ads in newspapers and slips of paper sent through the postal mail.
The website for the AAAI 2008 Spring Symposium on Social Information Processing suggested the following topics and questions:[2]
Social overload refers to being exposed to a high volume of information and interaction on the social web. Social overload creates challenges both for social media websites and for their users.[3] Users need to deal with a high volume of information and to make decisions among different social network applications, whereas social network sites try to keep their existing users and make their sites interesting to them. To overcome social overload, social recommender systems have been utilized to engage users in social media websites in a way that gives them more personalized content using recommendation techniques.[3] Social recommender systems are specific types of recommendation systems designed for social media; they utilize the new sorts of data brought by it, such as likes, comments, and tags, to improve the effectiveness of recommendations. Recommendation in social media has several aspects, such as recommendation of social media content, people, groups, and tags.
Social media lets users provide feedback on the content produced by other users, by commenting on or liking shared content and annotating their own content via tagging. This newly introduced metadata helps to obtain recommendations for social media content with improved effectiveness.[3] Social media also makes it possible to extract explicit relationships between users, such as friendships and followed/follower links. This provides a further improvement on collaborative filtering systems, because users can now judge the recommendations provided based on the people with whom they have relationships.[3] Studies have shown the effectiveness of recommendation systems that utilize relationships among users on social media, compared to traditional collaborative-filtering-based systems, specifically for movie and book recommendation.[4][5] Another improvement brought by social media to recommender systems is solving the cold start problem for new users.[3]
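As a minimal illustration of the idea (my own toy sketch, not a system from the cited studies), a social recommender can prefer the ratings of a user's explicit connections over those of the general population:

```python
# Toy data: (user, item) -> rating, plus explicit social links.
ratings = {
    ("bob", "book1"): 5, ("carol", "book1"): 4, ("dave", "book1"): 1,
}
friends = {"alice": {"bob", "carol"}}

def predict(user, item):
    """Average the ratings of the user's connections for this item,
    falling back to the overall average when none exist."""
    friend_ratings = [r for (u, i), r in ratings.items()
                      if i == item and u in friends.get(user, set())]
    pool = friend_ratings or [r for (u, i), r in ratings.items() if i == item]
    return sum(pool) / len(pool)

print(predict("alice", "book1"))  # 4.5, using only bob's and carol's ratings
```

Real systems combine such social signals with rating similarity, but the core design choice is the same: the trust network filters whose opinions count.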
Some key application areas of social media content recommendation are blog and blog post recommendation; multimedia content recommendation, such as YouTube videos; question and answer recommendation to question askers and answerers on social question-and-answer websites; job recommendation (LinkedIn); news recommendation on social news aggregator sites (such as Digg, Google Reader, and Reddit); and short message recommendation on microblogs (such as Twitter).[3]
Also known as social matching (a term proposed by Terveen and McDonald), people recommender systems deal with recommending people to people on social media. Aspects that make people recommender systems distinct from traditional recommender systems and require special attention are, basically, privacy, trust among users, and reputation.[6] Several factors affect the choice of recommendation techniques for people recommendation on social networking sites (SNS). Those factors relate to the types of relationships among people on social networking sites, such as symmetric vs. asymmetric, ad-hoc vs. long-term, and confirmed vs. unconfirmed relationships.[3]
The scope of people recommender systems can be categorized into three areas:[3] recommending familiar people to connect with, recommending people to follow, and recommending strangers. Recommending strangers is seen as being as valuable as recommending familiar people, because it leads to opportunities such as exchanging ideas, obtaining new opportunities, and increasing one's reputation.
Handling social streams is one of the challenges social recommender systems face.[3] A social stream can be described as the user activity data pooled in the newsfeed of social media websites. Social stream data has unique characteristics, such as rapid flow, variety of data (text-only versus heterogeneous content), and a need for freshness. These properties, compared with traditional social media data, impose challenges on social recommender systems.
Another challenge in social recommendation is performing cross-domain recommendation, as in traditional recommender systems.[3] The reason is that social media websites in different domains include different information about users, and merging information from different contexts may not lead to useful recommendations. For example, users' favorite recipes on one social media site may not be a reliable source of information for effective job recommendations.
Participation of people in online communities generally differs from their participatory behavior in real-world collective contexts. Humans in daily life are used to making use of "social cues" to guide their decisions and actions: for example, if a group of people is looking for a good restaurant for lunch, they will very likely choose to enter a place that has some customers inside instead of one that is empty (the more crowded restaurant could reflect its popularity and, in consequence, its quality of service). However, in online social environments it is not straightforward to access these sources of information, which are normally logged in the systems but not disclosed to the users.
Some theories explain how this social awareness can affect the behavior of people in real-life scenarios. The American philosopher George Herbert Mead states that humans are social creatures, in the sense that people's actions cannot be isolated from the behavior of the whole collective they are part of, because every individual's acts are influenced by larger social practices that act as a general framework for behavior.[7] In his performance framework, the Canadian sociologist Erving Goffman postulates that in everyday social interactions individuals perform their actions by first collecting information from others, in order to know in advance what they may expect from them and thus be able to plan how to behave more effectively.[8]
Just as in the real world, providing social cues in virtual communities can help people better understand the situations they face in these environments, ease their decision-making processes by enabling access to more informed choices, persuade them to participate in the activities that take place there, and structure their own schedule of individual and group activities more efficiently.[9]
In this frame of reference, an approach called "social context displays" has been proposed for showing social information (from either real or virtual environments) in digital scenarios. It is based on the use of graphical representations to visualize the presence and activity traces of a group of people, thus providing users with a third-party view of what is happening within the community, i.e., who is actively participating, who is not contributing to the group efforts, etc. This social-context-revealing approach has been studied in different scenarios (e.g., IBM video-conference software; a large community displaying social activity traces in a shared space called NOMATIC*VIZ), and it has been demonstrated that its application can provide users with several benefits, such as more information with which to make better decisions, and motivation to take an active attitude towards managing their self and group representations within the display through their real-life actions.[9]
Making users' activity traces publicly available for others to access naturally raises concerns about which rights users have over the data they generate, which end users can access their information, and how users can know and control the privacy policies that apply.[9] There are several perspectives that try to contextualize this privacy issue. One perspective sees privacy as a tradeoff between the degree of invasion of personal space and the benefits that the user could perceive from the social system by disclosing their online activity traces.[10] Another perspective examines the concession between the visibility of people within the social system and their level of privacy, which can be managed at an individual or group level by establishing specific permissions that allow others to access their information. Other authors state that, instead of forcing users to set and control privacy settings, social systems might focus on raising users' awareness of who their audiences are, so that they can manage their online behavior according to the reactions they expect from those different user groups.[9]
|
https://en.wikipedia.org/wiki/Social_information_processing
|
Rprop, short for resilient backpropagation, is a learning heuristic for supervised learning in feedforward artificial neural networks. It is a first-order optimization algorithm, created by Martin Riedmiller and Heinrich Braun in 1992.[1]
Similarly to the Manhattan update rule, Rprop takes into account only the sign of the partial derivative over all patterns (not the magnitude), and acts independently on each "weight". For each weight, if there was a sign change of the partial derivative of the total error function compared to the last iteration, the update value for that weight is multiplied by a factor η−, where η− < 1. If the last iteration produced the same sign, the update value is multiplied by a factor of η+, where η+ > 1. The update values are calculated for each weight in the above manner, and finally each weight is changed by its own update value, in the opposite direction of that weight's partial derivative, so as to minimise the total error function. η+ is empirically set to 1.2 and η− to 0.5.[citation needed]
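The sign-based update described above can be sketched in a few lines (a minimal illustration of the basic scheme; the step-size bounds and the quadratic test function are my own choices for demonstration):

```python
import numpy as np

def rprop_update(grad, prev_grad, step, eta_plus=1.2, eta_minus=0.5,
                 step_min=1e-6, step_max=50.0):
    """One basic Rprop adaptation step (sign-based, per-weight)."""
    same_sign = grad * prev_grad > 0   # same sign as last iteration: grow
    sign_flip = grad * prev_grad < 0   # sign change: shrink
    step = np.where(same_sign, np.minimum(step * eta_plus, step_max), step)
    step = np.where(sign_flip, np.maximum(step * eta_minus, step_min), step)
    # Each weight moves opposite to the sign of its partial derivative.
    return -np.sign(grad) * step, step

# Usage: minimise f(w) = w^2, whose gradient is 2w.
w = np.array([3.0])
prev_grad = np.zeros(1)
step = np.full(1, 0.1)
for _ in range(100):
    g = 2 * w
    dw, step = rprop_update(g, prev_grad, step)
    w += dw
    prev_grad = g
print(w)  # close to 0
```

Note how the gradient magnitude never enters the update, only its sign; the per-weight step size carries all the scale information.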
Rprop can result in very large weight increments or decrements if the gradients are large, which is a problem when using mini-batches as opposed to full batches. RMSprop addresses this problem by keeping a moving average of the squared gradient for each weight and dividing the gradient by the square root of this mean square.[citation needed]
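The moving-average scheme RMSprop uses can be sketched as follows (a minimal illustration; the learning rate and decay are common defaults, not values prescribed by the text):

```python
import numpy as np

def rmsprop_update(w, grad, ms, lr=0.01, decay=0.9, eps=1e-8):
    """One RMSprop step: scale the gradient by the root of a running
    mean of squared gradients (kept per weight)."""
    ms = decay * ms + (1 - decay) * grad ** 2
    w = w - lr * grad / (np.sqrt(ms) + eps)
    return w, ms

# Usage: minimise f(w) = w^2, whose gradient is 2w.
w, ms = np.array([3.0]), np.zeros(1)
for _ in range(1000):
    w, ms = rmsprop_update(w, 2 * w, ms)
print(w)  # near 0
```

Because the divisor tracks recent gradient magnitudes, the effective step stays near the learning rate even when raw gradients vary wildly between mini-batches.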
RPROP is a batch update algorithm. Next to the cascade correlation algorithm and the Levenberg–Marquardt algorithm, Rprop is one of the fastest weight update mechanisms.[citation needed]
Martin Riedmiller developed three algorithms, all named RPROP. Igel and Hüsken assigned names to them and added a new variant:[2][3]
|
https://en.wikipedia.org/wiki/Rprop
|
A contraction is a shortened version of the spoken and written forms of a word, syllable, or word group, created by omission of internal letters and sounds.
In linguistic analysis, contractions should not be confused with crasis, abbreviations, and initialisms (including acronyms), with which they share some semantic and phonetic functions, though all three are connoted by the term "abbreviation" in layman's terms.[1] Contraction is also distinguished from morphological clipping, where beginnings and endings are omitted.
The definition overlaps with the term portmanteau (a linguistic blend), but a distinction can be made between a portmanteau and a contraction by noting that contractions are formed from words that would otherwise appear together in sequence, such as do and not, whereas a portmanteau word is formed by combining two or more existing words that all relate to a singular concept that the portmanteau describes.
English has a number of contractions, mostly involving the elision of a vowel, which is replaced by an apostrophe in writing, as in I'm for "I am", and sometimes other changes as well. Contractions are common in speech and in informal writing but tend to be avoided in more formal writing (with limited exceptions, such as the now-standard form "o'clock").
The main contractions are listed in the following table.
Although can't, wouldn't, and other forms ending in ‑n't clearly started as contractions, ‑n't is now neither a contraction (a cliticized form) nor part of one, but instead a negative inflectional suffix. Evidence for this is that (i) ‑n't occurs only with auxiliary verbs, and clitics are not limited to particular categories or subcategories; (ii) again unlike contractions, their forms are not rule-governed but idiosyncratic (e.g., will → won't, can → can't); and (iii) as shown in the table, the inflected and "uncontracted" versions may require different positions in a sentence.[4]
The Old Chinese writing system (oracle bone script and bronzeware script) is well suited to the (almost) one-to-one correspondence between morpheme and glyph. Contractions, in which one glyph represents two or more morphemes, are a notable exception to that rule. About 20 or so are noted to exist by traditional philologists and are known as jiāncí (兼詞, lit. 'concurrent words'), and more words have been proposed to be contractions by recent scholars, based on recent reconstructions of Old Chinese phonology, epigraphic evidence, and syntactic considerations. For example, 非 [fēi] has been proposed to be a contraction of 不 (bù) + 唯/隹 (wéi/zhuī). The contractions are not generally graphically evident, and there is no general rule for how a character representing a contraction might be formed. As a result, the identification of a character as a contraction, as well as the word(s) that are proposed to have been contracted, is sometimes disputed.
As vernacular Chinese dialects use sets of function words that differ considerably from Classical Chinese, almost all of the classical contractions listed below are now archaic and have disappeared from everyday use. However, modern contractions have evolved from the new vernacular function words. Modern contractions appear in all major modern dialect groups. For example, 别 (bié) 'don't' in Standard Mandarin is a contraction of 不要 (bùyào), and 覅 (fiào) 'don't' in Shanghainese is a contraction of 勿要 (wù yào), as is apparent graphically. Similarly, in Northeastern Mandarin 甭 (béng) 'needn't' is a phonological and graphical contraction of 不用 (bùyòng). Finally, Cantonese contracts 乜嘢 (mat1 ye5)[5] 'what?' to 咩 (me1).
Note: The particles 爰, 焉, 云, and 然 ending in [-j[a/ə]n] behave as the grammatical equivalents of a verb (or coverb) followed by 之 'him; her; it (third-person object)' or a similar demonstrative pronoun in the object position. In fact, 于/於 '(is) in; at', 曰 'say', and 如 'resemble' are never followed by 之 '(third-person object)' or 此 '(near demonstrative)' in pre-Qin texts. Instead, the respective 'contractions' 爰/焉, 云, and 然 are always used in their place. Nevertheless, no known object pronoun is phonologically appropriate to serve as the hypothetical pronoun that underwent contraction. Hence, many authorities do not consider them to be true contractions. As an alternative explanation for their origin, Edwin G. Pulleyblank proposed that the [-n] ending is derived from a Sino-Tibetan aspect marker that later took on anaphoric character.[6]: 80
Here are some of the contractions in Standard Dutch:
Informal Belgian Dutch uses a wide range of non-standard contractions, such as "hoe's't" (from "hoe is het?" - how are you?), "hij's d'r" (from "hij is daar" - he's there), "w'ebbe' goe' g'ete'" (from "we hebben goed gegeten" - we had eaten well) and "wa's da'?" (from "wat is dat?" - what is that?). Some of these contractions:
French has a variety of contractions, similar to English, except that they are mandatory, as in C'est la vie ("That's life"), in which c'est stands for ce + est ("that is"). The formation of such contractions is called elision.
In general, any monosyllabic word ending in e caduc (schwa) contracts if the following word begins with a vowel, h, or y (as h is silent and absorbed by the sound of the succeeding vowel; y sounds like i). In addition to ce → c'- (demonstrative pronoun "that"), these words are que → qu'- (conjunction, relative pronoun, or interrogative pronoun "that"), ne → n'- ("not"), se → s'- ("himself", "herself", "itself", "oneself" before a verb), je → j'- ("I"), me → m'- ("me" before a verb), te → t'- (informal singular "you" before a verb), le or la → l'- ("the"; or "he", "she", "it" before a verb or after an imperative verb and before the word y or en), and de → d'- ("of"). Unlike with English contractions, however, those contractions are mandatory: one would never say (or write) *ce est or *que elle.
Moi ("me") and toi (informal "you") mandatorily contract to m'- and t'-, respectively, after an imperative verb and before the word y or en.
It is also mandatory to avoid the repetition of a sound when the conjunction si ("if") is followed by il ("he", "it") or ils ("they"), which begin with the same vowel sound i: *si il → s'il ("if it", "if he"); *si ils → s'ils ("if they").
Certain prepositions are also mandatorily merged with masculine and plural direct articles: au for à le, aux for à les, du for de le, and des for de les. However, the contraction of cela (demonstrative pronoun "that") to ça is optional and informal.
In informal speech, a personal pronoun may sometimes be contracted onto a following verb. For example, je ne sais pas (IPA: [ʒənəsɛpa], "I don't know") may be pronounced roughly chais pas (IPA: [ʃɛpa]), with the ne being completely elided and the [ʒ] of je being mixed with the [s] of sais.[original research?] It is also common in informal contexts to contract tu to t'- before a vowel: t'as mangé for tu as mangé.
In Modern Hebrew, the prepositional prefixes -בְּ /bə-/ 'in' and -לְ /lə-/ 'to' contract with the definite article prefix -ה (/ha-/) to form the prefixes -ב /ba/ 'in the' and -ל /la/ 'to the'. In colloquial Israeli Hebrew, the preposition את (/ʔet/), which indicates a definite direct object, and the definite article prefix -ה (/ha-/) are often contracted to 'ת (/ta-/) when the former immediately precedes the latter; thus, ראיתי את הכלב (/ʁaˈʔiti ʔet haˈkelev/, "I saw the dog") may become ראיתי ת'כלב (/ʁaˈʔiti taˈkelev/).
In Italian, prepositions merge with direct articles in predictable ways. The prepositions a, da, di, in, su, con, and per combine with the various forms of the definite article, namely il, lo, la, l', i, gli, gl', and le.
The words ci and è (a form of essere, 'to be') and the words vi and è are contracted into c'è and v'è (both meaning "there is").
The words dove and come are contracted with any word that begins with e, deleting the -e of the principal word, as in "Com'era bello!" - "How handsome he/it was!", "Dov'è il tuo amico?" - "Where's your friend?" The same is often true of other words of similar form, e.g. quale.
The direct object pronouns "lo" and "la" may also contract to form "l'" with a form of "avere", such as "L'ho comprato" - "I have bought it", or "L'abbiamo vista" - "We have seen her".[9]
Spanish has two mandatory phonetic contractions between prepositions and articles: al (to the) for a el, and del (of the) for de el (not to be confused with a él, meaning to him, and de él, meaning his or, more literally, of him).
Other contractions were common in writing until the 17th century, the most usual being de + personal and demonstrative pronouns: destas for de estas (of these, fem.), daquel for de aquel (of that, masc.), dél for de él (of him), etc.; and the feminine article before words beginning with a-: l'alma for la alma, now el alma (the soul). Several sets of demonstrative pronouns originated as contractions of aquí (here) + pronoun, or pronoun + otro/a (other): aqueste, aqueso, estotro, etc. The modern aquel (that, masc.) is the only survivor of the first pattern; the personal pronouns nosotros (we) and vosotros (pl. you) are remnants of the second. In medieval texts, unstressed words very often appear contracted: todol for todo el (all the, masc.), ques for que es (which is), etc., including with common words, like d'ome (d'home/d'homme) instead of de ome (home/homme), and so on.
Though not strictly a contraction, a special form is used when combining con with mí, ti, or sí, which is written as conmigo for *con mí (with me), contigo for *con ti (with you sing.), and consigo for *con sí (with himself/herself/itself/themselves).
Finally, one can hear[clarification needed] pa' for para, deriving pa'l for para el, but these forms are only considered appropriate in informal speech.
In Portuguese, contractions are common and much more numerous than those in Spanish. Several prepositions regularly contract with certain articles and pronouns. For instance, de (of) and por (by; formerly per) combine with the definite articles o and a (masculine and feminine forms of "the" respectively), producing do, da (of the), pelo, pela (by the). The preposition de contracts with the pronouns ele and ela (he, she), producing dele, dela (his, her). In addition, some verb forms contract with enclitic object pronouns: e.g., the verb amar (to love) combines with the pronoun a (her), giving amá-la (to love her).
Another contraction in Portuguese that is similar to English ones is the combination of the preposition de with words starting in a, replacing its final letter with an apostrophe and joining both words. Examples: Estrela d'alva (a popular phrase to refer to Venus that means "Alb star", as a reference to its brightness); Caixa d'água (water tank).
In informal, spoken German prepositional phrases, one can often merge the preposition and the article; for example, von dem becomes vom, zu dem becomes zum, or an das becomes ans. Some of these are so common that they are mandatory. In informal speech, aufm for auf dem, unterm for unter dem, etc. are also used, but would be considered incorrect if written, except maybe in quoted direct speech, in appropriate context and style.
The pronoun es often contracts to 's (usually written with the apostrophe) in certain contexts. For example, the greeting Wie geht es? is usually encountered in the contracted form Wie geht's?.
Regional dialects of German, and various local languages that usually were already used long before today's Standard German was created, do use contractions usually more frequently than German, but varying widely between different local languages. The informally spoken German contractions are observed almost everywhere, most often accompanied by additional ones, such as in den becoming in'n (sometimes im) or haben wir becoming hamwer, hammor, hemmer, or hamma depending on local intonation preferences. Bavarian German features several more contractions, such as gesund sind wir becoming xund samma, which are schematically applied to all words or combinations of similar sound. (One must remember, however, that German wir exists alongside Bavarian mir, or mia, with the same meaning.) The Munich-born footballer Franz Beckenbauer has as his catchphrase "Schau mer mal" ("Schauen wir einmal" - in English "We shall see."). A book about his career had as its title the slightly longer version of the phrase, "Schau'n Mer Mal".
Such features are found in all central and southern language regions. A sample from Berlin: Sag einmal, Meister, kann man hier einmal hinein? is spoken as Samma, Meesta, kamma hier ma rin?
Several West Central German dialects along the Rhine River have built contraction patterns involving long phrases and entire sentences. In speech, words are often concatenated, and frequently the process of "liaison" is used. So, [Dat] kriegst Du nicht may become Kressenit, or Lass mich gehen, habe ich gesagt may become Lomejon haschjesaat.
Mostly, there are no binding orthographies for local dialects of German; writing is therefore left to a great extent to authors and their publishers. Outside quotations, at least, they usually print no more than the most commonly spoken contractions, so as not to degrade readability. The use of apostrophes to indicate omissions is a varying and considerably less frequent practice than in English-language publications.
Standard Indonesian does not use contractions, although contractions exist in Indonesian slang. Common examples include terima kasih to makasih ("thank you"), kenapa to napa ("why"), nggak to gak ("not"), sebentar to tar ("a moment"), and sudah to dah ("done").
The use of contractions is not allowed in any form of standard Norwegian spelling; however, it is fairly common to shorten or contract words in spoken language. How common this is varies from dialect to dialect and from sociolect to sociolect; it depends on the formality of the setting, among other factors. Some common, and quite drastic, contractions found in Norwegian speech are "jakke" for "jeg har ikke" ("I do not have") and "dække" for "det er ikke" ("there is not"). The most frequently used of these contractions, usually consisting of two or three words contracted into one word, contain short, common, and often monosyllabic words like jeg, du, deg, det, har, or ikke. The use of the apostrophe (') is much less common than in English, but is sometimes used in contractions to show where letters have been dropped.
In extreme cases, long, entire sentences may be written as one word. An example of this is "Det ordner seg av seg selv" in standard written Bokmål, meaning "It will sort itself out", which could become "dånesæsæsjæl" (note the letters Å and Æ, and the word "sjæl", an eye dialect spelling of selv). R-dropping, present in the example, is especially common in speech in many areas of Norway, but plays out in different ways, as does elision of word-final phonemes like /ə/.
Because of the many dialects of Norwegian and their widespread use, it is often difficult to distinguish between non-standard writing of standard Norwegian and eye dialect spelling. It is almost universally true that these spellings try to convey the way each word is pronounced, but it is rare to see language written that does not adhere to at least some of the rules of the official orthography. Reasons for this include words spelled unphonemically, ignorance of conventional spelling rules, or adaptation for better transcription of that dialect's phonemes.
Latin contains several examples of contractions. One such case is preserved in the verb nolo (I am unwilling / do not want), which was formed by a contraction of non volo (volo meaning "I want"). Similarly, this is observed in the first-person plural and third-person plural forms (nolumus and nolunt, respectively).
Some contractions in rapid speech include ~っす (-ssu) for です (desu) and すいません (suimasen) for すみません (sumimasen). では (dewa) is often contracted to じゃ (ja). In certain grammatical contexts the particle の (no) is contracted to simply ん (n).
When used after verbs ending in the conjunctive form ~て (-te), certain auxiliary verbs and their derivations are often abbreviated. Examples:
* this abbreviation is never used in the polite conjugation, to avoid the resultant ambiguity between an abbreviatedikimasu(go) and the verbkimasu(come).
The ending ~なければ (-nakereba) can be contracted to ~なきゃ (-nakya) when it is used to indicate obligation. It is often used without an auxiliary, e.g., 行かなきゃ(いけない) (ikanakya (ikenai)) "I have to go."
Other times, contractions are made to create new words or to give added or altered meaning:
Variousdialects of Japanesealso use their own specific contractions that are often unintelligible to speakers of other dialects.
In Polish, pronouns have contracted forms that are more prevalent in colloquial usage. Examples are go and mu. The non-contracted forms are jego (unless it is used as a possessive pronoun) and jemu, respectively. The clitic -ń, which stands for niego (him), as in dlań (dla niego), is more common in literature. The non-contracted forms are generally used as a means of accentuation.[10]
Uyghur, a Turkic language spoken in Central Asia, includes some verbal suffixes that are actually contracted forms of compound verbs (serial verbs). For instance, sëtip alidu (sell-manage, "manage to sell") is usually written and pronounced sëtivaldu, with the two words forming a contraction and the [p] leniting into a [v] or [w].
In Filipino, most contractions need other words to be contracted correctly. Only words that end with vowels can make a contraction with words like "at" and "ay." In this chart, V represents any vowel.
In Albanian, there are two main contractions used with verbs, ç' and s', which are short for çfarë (what) and nuk (did/will not).
https://en.wikipedia.org/wiki/Contraction_(grammar)
In software engineering, a spinlock is a lock that causes a thread trying to acquire it to simply wait in a loop ("spin") while repeatedly checking whether the lock is available. Since the thread remains active but is not performing a useful task, the use of such a lock is a kind of busy waiting. Once acquired, spinlocks will usually be held until they are explicitly released, although in some implementations they may be automatically released if the thread being waited on (the one that holds the lock) blocks or "goes to sleep".
Because they avoid overhead from operating system process rescheduling or context switching, spinlocks are efficient if threads are likely to be blocked for only short periods. For this reason, operating-system kernels often use spinlocks. However, spinlocks become wasteful if held for longer durations, as they may prevent other threads from running and require rescheduling. The longer a thread holds a lock, the greater the risk that the thread will be interrupted by the OS scheduler while holding the lock. If this happens, other threads will be left "spinning" (repeatedly trying to acquire the lock), while the thread holding the lock is not making progress towards releasing it. The result is an indefinite postponement until the thread holding the lock can finish and release it. This is especially true on a single-processor system, where each waiting thread of the same priority is likely to waste its quantum (allocated time where a thread can run) spinning until the thread that holds the lock is finally finished.
Implementing spinlocks correctly is challenging because programmers must take into account the possibility of simultaneous access to the lock, which could cause race conditions. Generally, such an implementation is possible only with special assembly language instructions, such as atomic (i.e. un-interruptible) test-and-set operations, and cannot be easily implemented in programming languages not supporting truly atomic operations.[1] On architectures without such operations, or if a high-level language implementation is required, a non-atomic locking algorithm may be used, e.g. Peterson's algorithm. However, such an implementation may require more memory than a spinlock, be slower to allow progress after unlocking, and may not be implementable in a high-level language if out-of-order execution is allowed.
The following example uses x86 assembly language to implement a spinlock. It will work on any Intel 80386 compatible processor.
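The assembly listing itself does not survive in this copy. As a stand-in, here is a minimal sketch of the same test-and-set idea in C using C11 atomics rather than hand-written assembly; on x86, atomic_exchange compiles to the locked XCHG instruction such listings typically use. The names spinlock_t, spin_lock, and spin_unlock are illustrative, not taken from any particular codebase.

```c
#include <assert.h>
#include <stdatomic.h>

/* A spinlock is just a word in memory: 0 = unlocked, 1 = locked. */
typedef struct { atomic_int locked; } spinlock_t;

static void spin_lock(spinlock_t *l) {
    /* atomic_exchange atomically stores 1 and returns the old value,
       much like x86's locked XCHG.  If the old value was already 1,
       another thread holds the lock, so keep spinning. */
    while (atomic_exchange(&l->locked, 1) == 1)
        ;  /* busy-wait ("spin") */
}

static void spin_unlock(spinlock_t *l) {
    atomic_store(&l->locked, 0);  /* release the lock */
}
```

The whole lock state fits in one integer, which is why the acquire path is a single atomic instruction.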
The simple implementation above works on all CPUs using the x86 architecture. However, a number of performance optimizations are possible:
On later implementations of the x86 architecture, spin_unlock can safely use an unlocked MOV instead of the slower locked XCHG. This is due to subtle memory ordering rules which support this, even though MOV is not a full memory barrier. However, some processors (some Cyrix processors, some revisions of the Intel Pentium Pro (due to bugs), and earlier Pentium and i486 SMP systems) will do the wrong thing, and data protected by the lock could be corrupted. On most non-x86 architectures, explicit memory barrier or atomic instructions (as in the example) must be used. On some systems, such as IA-64, there are special "unlock" instructions which provide the needed memory ordering.
To reduce inter-CPU bus traffic, code trying to acquire a lock should loop reading without trying to write anything until it reads a changed value. Because of MESI caching protocols, this causes the cache line for the lock to become "Shared"; then there is remarkably no bus traffic while a CPU waits for the lock. This optimization is effective on all CPU architectures that have a cache per CPU, because MESI is so widespread. On Hyper-Threading CPUs, pausing with rep nop gives additional performance by hinting to the core that it can work on the other thread while the lock spins waiting.[2]
Transactional Synchronization Extensions (TSX) and other hardware transactional memory instruction sets can replace locks in most cases. Although locks are still required as a fallback, they have the potential to greatly improve performance by having the processor handle entire blocks of atomic operations. This feature is built into some mutex implementations, for example in glibc. Hardware Lock Elision (HLE) in x86 is a weakened but backwards-compatible version of TSX that can be used for locking without losing any compatibility: in this case, the processor can choose not to lock until two threads actually conflict with each other.[3]
A simpler version of the test can use the cmpxchg instruction on x86, or the __sync_bool_compare_and_swap built into many Unix compilers.
With the optimizations applied, a sample would look like:
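The optimized sample is likewise missing from this copy (the original shows x86 assembly). The following is a hedged C sketch of the optimizations described above: a test-and-test-and-set read loop, the pause hint while spinning, and a plain release store on unlock. It uses C11 atomics plus the GCC/Clang builtin __builtin_ia32_pause; the names are illustrative.

```c
#include <assert.h>
#include <stdatomic.h>

typedef struct { atomic_int locked; } spinlock_t;

static void spin_lock(spinlock_t *l) {
    for (;;) {
        /* One atomic write attempt (test-and-set). */
        if (atomic_exchange(&l->locked, 1) == 0)
            return;  /* acquired */
        /* Test-and-test-and-set: spin on plain reads until the lock
           looks free.  Reading keeps the cache line in the Shared
           state, so no bus traffic is generated while waiting. */
        while (atomic_load_explicit(&l->locked, memory_order_relaxed) == 1) {
#if defined(__x86_64__) || defined(__i386__)
            __builtin_ia32_pause();  /* the "rep nop" hint from the text */
#endif
        }
    }
}

static void spin_unlock(spinlock_t *l) {
    /* A plain store with release ordering: the portable analogue of
       the unlocked MOV discussed above. */
    atomic_store_explicit(&l->locked, 0, memory_order_release);
}
```

The inner read loop is what keeps contending CPUs off the bus; the outer exchange only runs once the lock looks free.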
On any multi-processor system that uses the MESI cache-coherence protocol, such a test-and-test-and-set lock (TTAS) performs much better than the simple test-and-set lock (TAS) approach.[4] With large numbers of processors, adding a random exponential backoff delay before re-checking the lock performs even better than TTAS.[4][5]
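A randomized exponential backoff can be sketched as follows; the window sizes and the busy-wait delay loop are illustrative tuning choices, not values from the cited papers.

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdlib.h>

typedef struct { atomic_int locked; } spinlock_t;

/* Test-and-set with randomized exponential backoff: after each failed
   attempt, idle for a random delay drawn from a doubling window, so
   contending CPUs spread their retries apart in time. */
static void spin_lock_backoff(spinlock_t *l) {
    unsigned window = 1;              /* current backoff window */
    const unsigned max_window = 1024; /* cap on the window (a tuning choice) */
    while (atomic_exchange(&l->locked, 1) == 1) {
        unsigned delay = (unsigned)rand() % window + 1;
        for (volatile unsigned i = 0; i < delay; i++)
            ;                         /* burn `delay` iterations */
        if (window < max_window)
            window *= 2;              /* exponential growth */
    }
}

static void spin_unlock(spinlock_t *l) {
    atomic_store_explicit(&l->locked, 0, memory_order_release);
}
```

The randomization matters: without it, contending CPUs that failed together would all retry together and collide again.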
A few multi-core processors have a "power-conscious spin-lock" instruction that puts a processor to sleep, then wakes it up on the next cycle after the lock is freed. A spin-lock using such instructions is more efficient and uses less energy than spin locks with or without a back-off loop.[6]
The primary disadvantage of a spinlock is that, while waiting to acquire a lock, it wastes time that might be productively spent elsewhere. There are two ways to avoid this:
Most operating systems (including Solaris, Mac OS X and FreeBSD) use a hybrid approach called an "adaptive mutex". The idea is to use a spinlock when trying to access a resource locked by a currently-running thread, but to sleep if the thread is not currently running. (The latter is always the case on single-processor systems.)[8]
OpenBSD attempted to replace spinlocks with ticket locks, which enforced first-in-first-out behaviour; however, this resulted in more CPU usage in the kernel, and larger applications, such as Firefox, became much slower.[9][10]
https://en.wikipedia.org/wiki/Spinlock
"The quick brown fox jumps over the lazy dog" is an English-languagepangram– asentencethat contains all the letters of thealphabet. The phrase is commonly used fortouch-typingpractice, testingtypewritersandcomputer keyboards, displaying examples offonts, and other applications involving text where the use of all letters in the alphabet is desired.
The earliest known appearance of the phrase was in The Boston Journal. In an article titled "Current Notes" in the February 9, 1885, edition, the phrase is mentioned as a good practice sentence for writing students: "A favorite copy set by writing teachers for their pupils is the following, because it contains every letter of the alphabet: 'A quick brown fox jumps over the lazy dog.'"[1] Dozens of other newspapers published the phrase over the next few months, all using the version of the sentence starting with "A" rather than "The".[2] The earliest known use of the phrase starting with "The" is from the 1888 book Illustrative Shorthand by Linda Bronson.[3] The modern form (starting with "The") became more common even though it is two letters longer than the original (starting with "A").
A 1908 edition of the Los Angeles Herald Sunday Magazine records that when the New York Herald was equipping an office with typewriters "a few years ago", staff found that the common practice sentence of "now is the time for all good men to come to the aid of the party" did not familiarize typists with the entire alphabet, and ran onto two lines in a newspaper column. They write that a staff member named Arthur F. Curtis invented the "quick brown fox" pangram to address this.[4]
As the use of typewriters grew in the late 19th century, the phrase began appearing in typing lesson books as a practice sentence. Early examples include How to Become Expert in Typewriting: A Complete Instructor Designed Especially for the Remington Typewriter (1890)[6] and Typewriting Instructor and Stenographer's Hand-book (1892). By the turn of the 20th century, the phrase had become widely known. In the January 10, 1903, issue of Pitman's Phonetic Journal, it is referred to as "the well known memorized typing line embracing all the letters of the alphabet".[7] Robert Baden-Powell's book Scouting for Boys (1908) uses the phrase as a practice sentence for signaling.[5]
The first message sent on the Moscow–Washington hotline on August 30, 1963, was the test phrase "THE QUICK BROWN FOX JUMPED OVER THE LAZY DOG'S BACK 1234567890".[8] Later, during testing, the Russian translators sent a message asking their American counterparts, "What does it mean when your people say 'The quick brown fox jumped over the lazy dog'?"[9]
During the 20th century, technicians tested typewriters and teleprinters by typing the sentence.[10]
It is the sentence used in the annual Zaner-Bloser National Handwriting Competition, a cursive writing competition which has been held in the U.S. since 1991.[11][12]
In the age of computers, this pangram is commonly used to display font samples and for testing computer keyboards. In cryptography, it is commonly used as a test vector for hash and encryption algorithms to verify their implementation, as well as to ensure alphabetic character set compatibility.
Microsoft Word has a command to auto-type the sentence, in versions up to Word 2003, using the command =rand(), and in Microsoft Office Word 2007 and later using the command =rand.old().[13]
Numerous references to the phrase have occurred in movies, television, books, video games, advertising, websites, and graphic arts.
The lipogrammatic novel Ella Minnow Pea by Mark Dunn is built entirely around the "quick brown fox" pangram and its inventor. It depicts a fictional island off the South Carolina coast that idealizes the pangram, chronicling the effects on literature and social structure as various letters are banned from daily use by government dictum.[14]
With 35 letters, this is not the shortest pangram. Shorter examples include:
If abbreviations and non-dictionary words are allowed, it is possible to create a perfect pangram that uses each letter only once, such as "Mr. Jock, TV quiz PhD, bags few lynx".
The NASA Space Shuttle flew a teleprinter that used the phrase "THE LAZY YELLOW DOG WAS CAUGHT BY THE SLOW RED FOX AS HE LAY SLEEPING IN THE SUN", a reference to the eponymous phrase, as part of its self-test program. While the phrase is not a pangram, as it lacks J, K, M, Q, and V, it was selected to be exactly 80 characters wide to match the length of the teleprinter's drum.[15]
https://en.wikipedia.org/wiki/The_quick_brown_fox_jumps_over_the_lazy_dog