Algorithms for calculating variance play a major role in computational statistics. A key difficulty in the design of good algorithms for this problem is that formulas for the variance may involve sums of squares, which can lead to numerical instability as well as to arithmetic overflow when dealing with large values.

A formula for calculating the variance of an entire population of size N is:

\sigma^2 = \frac{\sum_{i=1}^{N} x_i^2 - \left(\sum_{i=1}^{N} x_i\right)^2 / N}{N}

Using Bessel's correction to calculate an unbiased estimate of the population variance from a finite sample of n observations, the formula is:

s^2 = \frac{\sum_{i=1}^{n} x_i^2 - \left(\sum_{i=1}^{n} x_i\right)^2 / n}{n-1}

Therefore, a naïve algorithm to calculate the estimated variance is given by the following (a sketch appears after this section): accumulate n, Sum = Σ x_i and SumSq = Σ x_i² in a single pass, then return (SumSq − Sum·Sum/n)/(n − 1). This algorithm can easily be adapted to compute the variance of a finite population: simply divide by n instead of n − 1 in the last step.

Because SumSq and (Sum × Sum)/n can be very similar numbers, cancellation can cause the precision of the result to be much less than the inherent precision of the floating-point arithmetic used to perform the computation. Thus this algorithm should not be used in practice,[1][2] and several alternative, numerically stable algorithms have been proposed.[3] The problem is particularly bad if the standard deviation is small relative to the mean.

The variance is invariant with respect to changes in a location parameter, a property which can be used to avoid the catastrophic cancellation in this formula:

\operatorname{Var}(X - K) = \operatorname{Var}(X)

with K any constant, which leads to the new formula

s^2 = \frac{\sum_{i=1}^{n} (x_i - K)^2 - \left(\sum_{i=1}^{n} (x_i - K)\right)^2 / n}{n-1}.

The closer K is to the mean value, the more accurate the result will be, but just choosing a value inside the range of the samples will guarantee the desired stability. If the values (x_i − K) are small then there are no problems with the sum of their squares; on the contrary, if they are large it necessarily means that the variance is large as well. In any case the second term in the formula is always smaller than the first one, therefore no cancellation can occur.[2]

If just the first sample is taken as K, the algorithm can be written in the Python programming language (a sketch appears below). This formula also facilitates incremental computation, in which the running sums are updated as each new value arrives.

An alternative approach, using a different formula for the variance, first computes the sample mean and then computes the sum of the squares of the differences from the mean,

s^2 = \frac{\sum_{i=1}^{n} (x_i - \bar{x})^2}{n-1},

where s is the standard deviation. This "two-pass" algorithm (a sketch appears below) is numerically stable if n is small.[1][4] However, the results of both of these simple algorithms ("naïve" and "two-pass") can depend inordinately on the ordering of the data and can give poor results for very large data sets due to repeated roundoff error in the accumulation of the sums. Techniques such as compensated summation can be used to combat this error to a degree.

It is often useful to be able to compute the variance in a single pass, inspecting each value x_i only once; for example, when the data is being collected without enough storage to keep all the values, or when costs of memory access dominate those of computation. For such an online algorithm, a recurrence relation is required between quantities from which the required statistics can be calculated in a numerically stable fashion. The following formulas can be used to update the mean and (estimated) variance of the sequence, for an additional element x_n:

\bar{x}_n = \bar{x}_{n-1} + \frac{x_n - \bar{x}_{n-1}}{n}

\sigma_n^2 = \sigma_{n-1}^2 + \frac{(x_n - \bar{x}_{n-1})(x_n - \bar{x}_n) - \sigma_{n-1}^2}{n}

s_n^2 = \frac{n-2}{n-1}\, s_{n-1}^2 + \frac{(x_n - \bar{x}_{n-1})^2}{n}, \qquad n > 1.
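The code listings referred to above did not survive extraction; the following Python sketches reconstruct the naïve, shifted-data and two-pass algorithms in the standard way. The function names are illustrative, not taken from the source.

def naive_variance(data):
    # Accumulate the count, the sum and the sum of squares in one pass.
    n, total, total_sq = 0, 0.0, 0.0
    for x in data:
        n += 1
        total += x
        total_sq += x * x
    # Divide by n instead of (n - 1) here for the population variance.
    return (total_sq - total * total / n) / (n - 1)

def shifted_data_variance(data):
    # Shifting by a constant K (here the first sample) avoids catastrophic
    # cancellation; the variance itself is unchanged by the shift.
    if len(data) < 2:
        return 0.0
    K = data[0]
    n, ex, ex2 = 0, 0.0, 0.0
    for x in data:
        n += 1
        ex += x - K
        ex2 += (x - K) ** 2
    return (ex2 - ex * ex / n) / (n - 1)

def two_pass_variance(data):
    # First pass: the sample mean.  Second pass: squared deviations from it.
    n = len(data)
    mean = sum(data) / n
    return sum((x - mean) ** 2 for x in data) / (n - 1)

For example, two_pass_variance([4, 7, 13, 16]) returns 30.0, matching the worked example given later in the article.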
Here, x̄_n = (1/n) Σ_{i=1}^{n} x_i denotes the sample mean of the first n samples (x_1, …, x_n), σ_n² = (1/n) Σ_{i=1}^{n} (x_i − x̄_n)² their biased sample variance, and s_n² = (1/(n−1)) Σ_{i=1}^{n} (x_i − x̄_n)² their unbiased sample variance.

These formulas suffer from numerical instability[citation needed], as they repeatedly subtract a small number from a big number which scales with n. A better quantity for updating is the sum of squares of differences from the current mean, Σ_{i=1}^{n} (x_i − x̄_n)², here denoted M_{2,n}:

\bar{x}_n = \bar{x}_{n-1} + \frac{x_n - \bar{x}_{n-1}}{n}

M_{2,n} = M_{2,n-1} + (x_n - \bar{x}_{n-1})(x_n - \bar{x}_n)

\sigma_n^2 = \frac{M_{2,n}}{n}, \qquad s_n^2 = \frac{M_{2,n}}{n-1}.

This algorithm was found by Welford,[5][6] and it has been thoroughly analyzed.[2][7] It is also common to denote M_k = x̄_k and S_k = M_{2,k}.[8]

An example Python implementation of Welford's algorithm is given below. This algorithm is much less prone to loss of precision due to catastrophic cancellation, but might not be as efficient because of the division operation inside the loop. For a particularly robust two-pass algorithm for computing the variance, one can first compute and subtract an estimate of the mean, and then use this algorithm on the residuals.

The parallel algorithm below illustrates how to merge multiple sets of statistics calculated online. The algorithm can be extended to handle unequal sample weights by replacing the simple counter n with the sum of weights seen so far; West (1979)[9] suggests an incremental algorithm of this form.

Chan et al.[10] note that Welford's online algorithm detailed above is a special case of an algorithm that works for combining arbitrary sets A and B:

\delta = \bar{x}_B - \bar{x}_A

\bar{x}_{AB} = \bar{x}_A + \delta \cdot \frac{n_B}{n_{AB}}

M_{2,AB} = M_{2,A} + M_{2,B} + \delta^2 \cdot \frac{n_A n_B}{n_{AB}}.

This may be useful when, for example, multiple processing units may be assigned to discrete parts of the input. Chan's method for estimating the mean is numerically unstable when n_A ≈ n_B and both are large, because the numerical error in δ = x̄_B − x̄_A is not scaled down in the way that it is in the n_B = 1 case. In such cases, prefer x̄_{AB} = (n_A x̄_A + n_B x̄_B)/n_{AB}. This can be generalized to allow parallelization with AVX, with GPUs, and on computer clusters, and to covariance.[3]

Assume that all floating point operations use standard IEEE 754 double-precision arithmetic. Consider the sample (4, 7, 13, 16) from an infinite population. Based on this sample, the estimated population mean is 10, and the unbiased estimate of the population variance is 30. Both the naïve algorithm and the two-pass algorithm compute these values correctly.

Next consider the sample (10^8 + 4, 10^8 + 7, 10^8 + 13, 10^8 + 16), which gives rise to the same estimated variance as the first sample. The two-pass algorithm computes this variance estimate correctly, but the naïve algorithm returns 29.333333333333332 instead of 30. While this loss of precision may be tolerable and viewed as a minor flaw of the naïve algorithm, further increasing the offset makes the error catastrophic. Consider the sample (10^9 + 4, 10^9 + 7, 10^9 + 13, 10^9 + 16). Again the estimated population variance of 30 is computed correctly by the two-pass algorithm, but the naïve algorithm now computes it as −170.66666666666666.
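The Python implementation mentioned above is missing from this extract; the following is a minimal sketch of Welford's online algorithm. The aggregate layout (count, mean, M2) follows the update formulas given above; the function names are illustrative.

def welford_update(existing_aggregate, new_value):
    # existing_aggregate is a tuple (count, mean, M2), where M2 accumulates
    # the sum of squared differences from the current mean.
    (count, mean, M2) = existing_aggregate
    count += 1
    delta = new_value - mean
    mean += delta / count
    delta2 = new_value - mean
    M2 += delta * delta2
    return (count, mean, M2)

def welford_finalize(existing_aggregate):
    # Retrieve the mean, the biased (population) variance and the
    # unbiased (sample) variance from the aggregate.
    (count, mean, M2) = existing_aggregate
    if count < 2:
        return float('nan'), float('nan'), float('nan')
    return mean, M2 / count, M2 / (count - 1)

# Usage: stream the data one value at a time.
agg = (0, 0.0, 0.0)
for x in (1e9 + 4, 1e9 + 7, 1e9 + 13, 1e9 + 16):
    agg = welford_update(agg, x)
print(welford_finalize(agg))   # mean 1e9 + 10, sample variance 30.0

Unlike the naïve algorithm, this computes the variance of the large-offset sample above without catastrophic cancellation.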
This is a serious problem with the naïve algorithm and is due to catastrophic cancellation in the subtraction of two similar numbers at the final stage of the algorithm.

Terriberry[11] extends Chan's formulae to calculating the third and fourth central moments, needed for example when estimating skewness and kurtosis. Here the M_k are again the sums of powers of differences from the mean, Σ(x − x̄)^k. For the incremental case (i.e., B = {x}) the pairwise formulae simplify considerably: by preserving the value δ/n, only one division operation is needed and the higher-order statistics can thus be calculated for little incremental cost. An example of the online algorithm for kurtosis implemented as described is sketched below.

Pébaÿ[12] further extends these results to arbitrary-order central moments, for the incremental and the pairwise cases, and subsequently Pébaÿ et al.[13] to weighted and compound moments. One can also find there similar formulas for covariance.

Choi and Sweetman[14] offer two alternative methods to compute the skewness and kurtosis, each of which can save substantial computer memory requirements and CPU time in certain applications. The first approach is to compute the statistical moments by separating the data into bins and then computing the moments from the geometry of the resulting histogram, which effectively becomes a one-pass algorithm for higher moments. One benefit is that the statistical moment calculations can be carried out to arbitrary accuracy such that the computations can be tuned to the precision of, e.g., the data storage format or the original measurement hardware. A relative histogram of a random variable can be constructed in the conventional way: the range of potential values is divided into bins and the number of occurrences within each bin is counted and plotted such that the area of each rectangle equals the portion of the sample values within that bin. Here h(x_k) and H(x_k) represent the frequency and the relative frequency at bin x_k, and A = Σ_{k=1}^{K} h(x_k) Δx_k is the total area of the histogram. After this normalization, the n raw moments and central moments of x(t) can be calculated from the relative histogram, where the superscript (h) indicates that the moments are calculated from the histogram. For constant bin width Δx_k = Δx these expressions can be simplified using I = A/Δx.

The second approach from Choi and Sweetman[14] is an analytical methodology to combine statistical moments from individual segments of a time-history such that the resulting overall moments are those of the complete time-history. This methodology could be used for parallel computation of statistical moments with subsequent combination of those moments, or for combination of statistical moments computed at sequential times.
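The kurtosis listing referred to above did not survive extraction; the following Python sketch gives one standard single-pass formulation of the incremental higher-moment updates (names are illustrative). M2, M3 and M4 hold the running sums of second, third and fourth powers of differences from the current mean.

def online_kurtosis(data):
    n, mean, M2, M3, M4 = 0, 0.0, 0.0, 0.0, 0.0
    for x in data:
        n1 = n
        n += 1
        delta = x - mean
        delta_n = delta / n          # the preserved value delta/n
        delta_n2 = delta_n * delta_n
        term1 = delta * delta_n * n1
        mean += delta_n
        # Update order matters: M4 uses the old M3 and M2, M3 the old M2.
        M4 += term1 * delta_n2 * (n * n - 3 * n + 3) + 6 * delta_n2 * M2 - 4 * delta_n * M3
        M3 += term1 * delta_n * (n - 2) - 3 * delta_n * M2
        M2 += term1
    # Excess kurtosis of the sample.
    return (n * M4) / (M2 * M2) - 3

For example, online_kurtosis([0, 1, 2]) accumulates M2 = 2, M3 = 0, M4 = 2, matching the directly computed central moments of that sample.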
If Q sets of statistical moments are known, (γ_{0,q}, μ_q, σ_q², α_{3,q}, α_{4,q}) for q = 1, 2, …, Q, then each γ_n can be expressed in terms of the equivalent n-th raw moments, where γ_{0,q} is generally taken to be the duration of the q-th time-history, or the number of points if Δt is constant. The benefit of expressing the statistical moments in terms of γ is that the Q sets can be combined by addition, and there is no upper limit on the value of Q. The subscript c denotes the concatenated time-history or combined γ. These combined values of γ can then be inversely transformed into raw moments representing the complete concatenated time-history. Known relationships between the raw moments (m_n) and the central moments (θ_n = E[(x − μ)^n]) are then used to compute the central moments of the concatenated time-history. Finally, the statistical moments of the concatenated history are computed from the central moments.

Very similar algorithms can be used to compute the covariance. The naïve algorithm computes

\operatorname{Cov}(X, Y) = \frac{\sum_{i=1}^{n} x_i y_i - \left(\sum_{i=1}^{n} x_i\right)\left(\sum_{i=1}^{n} y_i\right)/n}{n}

(a Python sketch appears below). As for the variance, the covariance of two random variables is also shift-invariant, so given any two constant values k_x and k_y it can be written

\operatorname{Cov}(X, Y) = \operatorname{Cov}(X - k_x,\, Y - k_y),

and again choosing a value inside the range of values will stabilize the formula against catastrophic cancellation as well as make it more robust against big sums. Taking the first value of each data set as the shift constant, the algorithm can be written accordingly.

The two-pass algorithm first computes the sample means, and then the covariance

\operatorname{Cov}(X, Y) = \frac{\sum_{i=1}^{n} (x_i - \bar{x})(y_i - \bar{y})}{n}.

A slightly more accurate compensated version performs the full naïve algorithm on the residuals. The final sums Σ_i x_i and Σ_i y_i should be zero, but the second pass compensates for any small error.

A stable one-pass algorithm exists, similar to the online algorithm for computing the variance, that computes the co-moment C_n = Σ_{i=1}^{n} (x_i − x̄_n)(y_i − ȳ_n) via the update

C_n = C_{n-1} + (x_n - \bar{x}_n)(y_n - \bar{y}_{n-1}) = C_{n-1} + (x_n - \bar{x}_{n-1})(y_n - \bar{y}_n).

The apparent asymmetry in that last equation is due to the fact that (x_n − x̄_n) = ((n − 1)/n)(x_n − x̄_{n−1}), so both update terms are equal to ((n − 1)/n)(x_n − x̄_{n−1})(y_n − ȳ_{n−1}). Even greater accuracy can be achieved by first computing the means, then using the stable one-pass algorithm on the residuals. Thus the covariance can be computed as C_N/N (or C_N/(N − 1) for the sample covariance).

A small modification can also be made to compute the weighted covariance, replacing the counter n with the sum of the weights seen so far. Likewise, there is a formula for combining the covariances of two sets that can be used to parallelize the computation:[3]

C_{AB} = C_A + C_B + (\bar{x}_A - \bar{x}_B)(\bar{y}_A - \bar{y}_B) \cdot \frac{n_A n_B}{n_{AB}}.

A version of the weighted online algorithm that does batched updates also exists: with w_1, …, w_N denoting the weights, the running weighted means and the weighted co-moment are updated batch by batch, and the covariance is then recovered from the final co-moment in the same way as above.
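The covariance listings referred to above did not survive extraction; the following Python sketch reconstructs the naïve algorithm and the stable one-pass (co-moment) algorithm. Function names are illustrative.

def naive_covariance(xs, ys):
    # Accumulates sum(x), sum(y) and sum(x*y); prone to cancellation
    # when the means are large relative to the covariance.
    n = len(xs)
    sum_x = sum(xs)
    sum_y = sum(ys)
    sum_xy = sum(x * y for x, y in zip(xs, ys))
    return (sum_xy - sum_x * sum_y / n) / n

def online_covariance(pairs):
    # Stable one-pass algorithm: maintains both means and the co-moment
    # C_n = sum((x_i - mean_x) * (y_i - mean_y)).
    n = 0
    mean_x = mean_y = C = 0.0
    for x, y in pairs:
        n += 1
        dx = x - mean_x
        mean_x += dx / n
        mean_y += (y - mean_y) / n
        C += dx * (y - mean_y)   # old mean_x (via dx) and new mean_y
    return C / n                  # population covariance; use n - 1 for the sample covariance

For example, online_covariance([(0, 0), (2, 2)]) returns 1.0, the population covariance of that pair of points.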
https://en.wikipedia.org/wiki/Algorithms_for_calculating_variance
Inmathematics,point-free geometryis ageometrywhose primitiveontologicalnotion isregionrather thanpoint. Twoaxiomatic systemsare set out below, one grounded inmereology, the other inmereotopologyand known asconnection theory. Point-free geometry was first formulated byAlfred North Whitehead,[1]not as a theory ofgeometryor ofspacetime, but of "events" and of an "extensionrelation" between events. Whitehead's purposes were as muchphilosophicalas scientific and mathematical.[2] Whitehead did not set out his theories in a manner that would satisfy present-day canons of formality. The two formalfirst-order theoriesdescribed in this entry were devised by others in order to clarify and refine Whitehead's theories. Thedomain of discoursefor both theories consists of "regions." Allunquantifiedvariables in this entry should be taken as tacitlyuniversally quantified; hence all axioms should be taken asuniversal closures. No axiom requires more than three quantified variables; hence a translation of first-order theories intorelation algebrais possible. Each set of axioms has but fourexistential quantifiers. The fundamental primitivebinary relationisinclusion, denoted by theinfix operator"≤", which corresponds to the binaryParthoodrelation that is a standard feature inmereologicaltheories. The intuitive meaning ofx≤yis "xis part ofy." Assuming that equality, denoted by the infix operator "=", is part of the background logic, the binary relationProper Part, denoted by the infix operator "<", is defined as: The axioms are:[3] AmodelofG1–G7is aninclusion space. Definition.[4]Given some inclusion space S, anabstractive classis a classGof regions such thatS\Gistotally orderedby inclusion. Moreover, there does not exist a region included in all of the regions included inG. Intuitively, an abstractive class defines a geometrical entity whose dimensionality is less than that of the inclusion space. For example, if the inclusion space is theEuclidean plane, then the corresponding abstractive classes arepointsandlines. Inclusion-based point-free geometry (henceforth "point-free geometry") is essentially an axiomatization of Simons's systemW.[5]In turn,Wformalizes a theory of Whitehead[6]whose axioms are not made explicit. Point-free geometry isWwith this defect repaired. Simons did not repair this defect, instead proposing in a footnote that the reader do so as an exercise. The primitive relation ofWis Proper Part, astrict partial order. The theory[7]of Whitehead (1919) has a single primitive binary relationKdefined asxKy↔y<x. HenceKis theconverseof Proper Part. Simons'sWP1asserts that Proper Part isirreflexiveand so corresponds toG1.G3establishes that inclusion, unlike Proper Part, isantisymmetric. Point-free geometry is closely related to adense linear orderD, whose axioms areG1-3,G5, and the totality axiomx≤y∨y≤x.{\displaystyle x\leq y\lor y\leq x.}[8]Hence inclusion-based point-free geometry would be a proper extension ofD(namelyD∪ {G4,G6,G7}), were it not that theDrelation "≤" is atotal order. A different approach was proposed in Whitehead (1929), one inspired by De Laguna (1922). Whitehead took as primitive thetopologicalnotion of "contact" between two regions, resulting in a primitive "connection relation" between events. Connection theoryCis afirst-order theorythat distills the first 12 of Whitehead's 31 assumptions[9]into 6 axioms,C1-C6.[10]Cis a proper fragment of the theories proposed by Clarke,[11]who noted theirmereologicalcharacter. 
Theories that, likeC, feature both inclusion and topological primitives, are calledmereotopologies. Chas one primitiverelation, binary "connection," denoted by theprefixedpredicate letterC. Thatxis included inycan now be defined asx≤y↔ ∀z[Czx→Czy]. Unlike the case with inclusion spaces, connection theory enables defining "non-tangential" inclusion,[12]a total order that enables the construction of abstractive classes. Gerla and Miranda (2008) argue that only thus can mereotopology unambiguously define apoint. A model ofCis aconnection space. Following the verbal description of each axiom is the identifier of the corresponding axiom in Casati and Varzi (1999). Their systemSMT(strong mereotopology) consists ofC1-C3, and is essentially due to Clarke (1981).[13]Any mereotopology can be madeatomlessby invokingC4, without risking paradox or triviality. HenceCextends the atomless variant ofSMTby means of the axiomsC5andC6, suggested by chapter 2 of part 4 ofProcess and Reality.[14] Biacino and Gerla (1991) showed that everymodelof Clarke's theory is aBoolean algebra, and models of such algebras cannot distinguish connection from overlap. It is doubtful whether either fact is faithful to Whitehead's intent.
https://en.wikipedia.org/wiki/Whitehead%27s_point-free_geometry
Internetworkingis the practice ofinterconnectingmultiplecomputer networks.[1]: 169Typically, this enables any pair ofhostsin the connected networks to exchange messages irrespective of their hardware-level networking technology. The resulting system of interconnected networks is called aninternetwork, or simply aninternet. The most notable example of internetworking is theInternet, a network of networks based on many underlying hardware technologies. The Internet is defined by a unifiedglobal addressing system,packetformat, androutingmethods provided by theInternet Protocol.[2]: 103 The terminternetworkingis a combination of the componentsinter(between) andnetworking. An earlier term for an internetwork iscatenet,[3]a short-form of(con)catenating networks. The first international heterogenousresource sharingnetwork was developed by the computer science department atUniversity College London(UCL) who interconnected theARPANETwith earlyBritish academic networksbeginning in 1973.[4][5][6]In the ARPANET, the network elements used to connect individual networks were calledgateways, but the term has been deprecated in this context, because of possible confusion with functionally different devices. By 1973-4, researchers in France, the United States, and the United Kingdom had worked out an approach to internetworking where the differences between network protocols were hidden by using a common internetwork protocol, and instead of the network being responsible for reliability, as in the ARPANET, the hosts became responsible, as demonstrated in theCYCLADESnetwork.[7][8][9]Researchers atXerox PARCoutlined the idea ofEthernetand thePARC Universal Packet(PUP) for internetworking.[10][11]Research at theNational Physical Laboratoryin the United Kingdom confirmed establishing a common host protocol would be more reliable and efficient.[12]The ARPANET connection to UCL later evolved intoSATNET. In 1977, ARPA demonstrated a three-way internetworking experiment, which linked a mobile vehicle inPRNETwith nodes in the ARPANET, and, via SATNET, to nodes at UCL. TheX.25protocol, on whichpublic data networkswere based in the 1970s and 1980s, was supplemented by theX.75protocol which enabled internetworking. Today the interconnecting gateways are calledrouters. The definition of an internetwork today includes the connection of other types of computer networks such aspersonal area networks. Catenet, a short-form of(con)catenating networks,is obsolete terminolgy for a system ofpacket-switchedcommunication networks interconnected viagateways.[3] The term was coined byLouis Pouzin, who designed theCYCLADESnetwork, in an October 1973 note circulated to theInternational Network Working Group,[13][14]which was published in a 1974 paper "A Proposal for Interconnecting Packet Switching Networks".[15]Pouzin was a pioneer of internetworking at a time whennetworkmeant what is now called alocal area network. Catenet was the concept of linking these networks into anetwork of networkswith specifications for compatibility of addressing and routing. 
The term was used in technical writing in the late 1970s and early 1980s,[16]including inRFCsandIENs.[17]Catenet was gradually displaced by the short-form of the term internetwork,internet(lower-casei), when theInternet Protocolspread more widely from the mid 1980s and the use of the term internet took on a broader sense and became well known in the 1990s.[18][19][20][21][22][23][24][25] Internetworking, a combination of the componentsinter(between) andnetworking, started as a way to connect disparate types of networking technology, but it became widespread through the developing need to connect two or morelocal area networksvia some sort ofwide area network. To build an internetwork, the following are needed:[2]: 103A standardized scheme toaddresspackets to any host on any participating network; a standardizedprotocoldefining format and handling of transmitted packets; components interconnecting the participating networks byroutingpackets to their destinations based on standardized addresses. Another type of interconnection of networks often occurs within enterprises at thelink layerof the networking model, i.e. at the hardware-centric layer below the level of the TCP/IP logical interfaces. Such interconnection is accomplished withnetwork bridgesandnetwork switches. This is sometimes incorrectly termed internetworking, but the resulting system is simply a larger, singlesubnetwork, and no internetworkingprotocol, such asInternet Protocol, is required to traverse these devices. However, a single computer network may be converted into an internetwork by dividing the network into segments and logically dividing the segment traffic with routers and having an internetworking software layer that applications employ. The Internet Protocol is designed to provide anunreliable(not guaranteed)packet serviceacross the network. The architecture avoids intermediate network elements maintaining any state of the network. Instead, this function is assigned to the endpoints of each communication session. To transfer data reliably, applications must utilize an appropriatetransport layerprotocol, such asTransmission Control Protocol(TCP), which provides areliable stream. Some applications use a simpler, connection-less transport protocol,User Datagram Protocol(UDP), for tasks which do not require reliable delivery of data or that require real-time service, such asvideo streaming[26]or voice chat. Two architectural models are commonly used to describe the protocols and methods used in internetworking. TheOpen System Interconnection(OSI) reference model was developed under the auspices of theInternational Organization for Standardization(ISO) and provides a rigorous description for layering protocol functions from the underlying hardware to the software interface concepts in user applications. Internetworking is implemented in theNetwork Layer(Layer 3) of the model. TheInternet Protocol Suite, also known as the TCP/IP model, was not designed to conform to the OSI model and does not refer to it in any of the normative specifications inRequest for CommentsandInternet standards. Despite similar appearance as a layered model, it has a much less rigorous, loosely defined architecture that concerns itself only with the aspects of the style of networking in its own historical provenance. It assumes the availability of any suitable hardware infrastructure, without discussing hardware-specific low-level interfaces, and that a host has access to this local network to which it is connected via a link layer interface. 
For a period in the late 1980s and early 1990s, the network engineering community was polarized over the implementation of competing protocol suites, commonly known as the Protocol Wars. It was unclear which of the OSI model and the Internet protocol suite would result in the best and most robust computer networks.[27][28][29]
https://en.wikipedia.org/wiki/Internetworking
Mathematical and theoretical biology, orbiomathematics, is a branch ofbiologywhich employs theoretical analysis,mathematical modelsand abstractions of livingorganismsto investigate the principles that govern the structure, development and behavior of the systems, as opposed toexperimental biologywhich deals with the conduction of experiments to test scientific theories.[1]The field is sometimes calledmathematical biologyorbiomathematicsto stress the mathematical side, ortheoretical biologyto stress the biological side.[2]Theoretical biology focuses more on the development of theoretical principles for biology while mathematical biology focuses on the use of mathematical tools to study biological systems, even though the two terms interchange; overlapping asArtificial Immune SystemsofAmorphous Computation.[3][4] Mathematical biology aims at the mathematical representation and modeling ofbiological processes, using techniques and tools ofapplied mathematics. It can be useful in boththeoreticalandpracticalresearch. Describing systems in a quantitative manner means their behavior can be better simulated, and hence properties can be predicted that might not be evident to the experimenter; requiringmathematical models. Because of the complexity of theliving systems, theoretical biology employs several fields of mathematics,[5]and has contributed to the development of new techniques. Mathematics has been used in biology as early as the 13th century, whenFibonacciused the famousFibonacci seriesto describe a growing population of rabbits. In the 18th century,Daniel Bernoulliapplied mathematics to describe the effect of smallpox on the human population.Thomas Malthus' 1789 essay on the growth of the human population was based on the concept of exponential growth.Pierre François Verhulstformulated the logistic growth model in 1836.[citation needed] Fritz Müllerdescribed the evolutionary benefits of what is now calledMüllerian mimicryin 1879, in an account notable for being the first use of a mathematical argument inevolutionary ecologyto show how powerful the effect of natural selection would be, unless one includesMalthus's discussion of the effects ofpopulation growththat influencedCharles Darwin: Malthus argued that growth would be exponential (he uses the word "geometric") while resources (the environment'scarrying capacity) could only grow arithmetically.[6] The term "theoretical biology" was first used as a monograph title byJohannes Reinkein 1901, and soon after byJakob von Uexküllin 1920. One founding text is considered to beOn Growth and Form(1917) byD'Arcy Thompson,[7]and other early pioneers includeRonald Fisher,Hans Leo Przibram,Vito Volterra,Nicolas RashevskyandConrad Hal Waddington.[8] Interest in the field has grown rapidly from the 1960s onwards. Some reasons for this include: Several areas of specialized research in mathematical and theoretical biology[10][11][12][13][14]as well as external links to related projects in various universities are concisely presented in the following subsections, including also a large number of appropriate validating references from a list of several thousands of published authors contributing to this field. Many of the included examples are characterised by highly complex, nonlinear, and supercomplex mechanisms, as it is being increasingly recognised that the result of such interactions may only be understood through a combination of mathematical, logical, physical/chemical, molecular and computational models. 
Abstract relational biology (ARB) is concerned with the study of general, relational models of complex biological systems, usually abstracting out specific morphological, or anatomical, structures. Some of the simplest models in ARB are the Metabolic-Replication, or (M,R)--systems introduced by Robert Rosen in 1957–1958 as abstract, relational models of cellular and organismal organization. Other approaches include the notion ofautopoiesisdeveloped byMaturanaandVarela,Kauffman's Work-Constraints cycles, and more recently the notion of closure of constraints.[15] Algebraic biology (also known as symbolic systems biology) applies the algebraic methods ofsymbolic computationto the study of biological problems, especially ingenomics,proteomics, analysis ofmolecular structuresand study ofgenes.[16][17][18] An elaboration of systems biology to understand the more complex life processes was developed since 1970 in connection with molecular set theory, relational biology and algebraic biology. A monograph on this topic summarizes an extensive amount of published research in this area up to 1986,[19][20][21]including subsections in the following areas:computer modelingin biology and medicine, arterial system models,neuronmodels, biochemical andoscillationnetworks, quantum automata,quantum computersinmolecular biologyandgenetics,[22]cancer modelling,[23]neural nets,genetic networks, abstract categories in relational biology,[24]metabolic-replication systems,category theory[25]applications in biology and medicine,[26]automata theory,cellular automata,[27]tessellationmodels[28][29]and complete self-reproduction,chaotic systemsinorganisms, relational biology and organismic theories.[16][30] Modeling cell and molecular biology This area has received a boost due to the growing importance ofmolecular biology.[13] Modelling physiological systems Computational neuroscience(also known as theoretical neuroscience or mathematical neuroscience) is the theoretical study of the nervous system.[43][44] Ecologyandevolutionary biologyhave traditionally been the dominant fields of mathematical biology. Evolutionary biology has been the subject of extensive mathematical theorizing. The traditional approach in this area, which includes complications from genetics, ispopulation genetics. Most population geneticists consider the appearance of newallelesbymutation, the appearance of newgenotypesbyrecombination, and changes in the frequencies of existing alleles and genotypes at a small number ofgeneloci. Wheninfinitesimaleffects at a large number of gene loci are considered, together with the assumption oflinkage equilibriumorquasi-linkage equilibrium, one derivesquantitative genetics.Ronald Fishermade fundamental advances in statistics, such asanalysis of variance, via his work on quantitative genetics. Another important branch of population genetics that led to the extensive development ofcoalescent theoryisphylogenetics. Phylogenetics is an area that deals with the reconstruction and analysis of phylogenetic (evolutionary) trees and networks based on inherited characteristics[45]Traditional population genetic models deal with alleles and genotypes, and are frequentlystochastic. Many population genetics models assume that population sizes are constant. Variable population sizes, often in the absence of genetic variation, are treated by the field ofpopulation dynamics. 
Work in this area dates back to the 19th century, and even as far as 1798 whenThomas Malthusformulated the first principle of population dynamics, which later became known as theMalthusian growth model. TheLotka–Volterra predator-prey equationsare another famous example. Population dynamics overlap with another active area of research in mathematical biology:mathematical epidemiology, the study of infectious disease affecting populations. Various models of the spread ofinfectionshave been proposed and analyzed, and provide important results that may be applied to health policy decisions. Inevolutionary game theory, developed first byJohn Maynard SmithandGeorge R. Price, selection acts directly on inherited phenotypes, without genetic complications. This approach has been mathematically refined to produce the field ofadaptive dynamics. The earlier stages of mathematical biology were dominated by mathematicalbiophysics, described as the application of mathematics in biophysics, often involving specific physical/mathematical models of biosystems and their components or compartments. The following is a list of mathematical descriptions and their assumptions. A fixed mapping between an initial state and a final state. Starting from an initial condition and moving forward in time, a deterministic process always generates the same trajectory, and no two trajectories cross in state space. A random mapping between an initial state and a final state, making the state of the system arandom variablewith a correspondingprobability distribution. One classic work in this area isAlan Turing's paper onmorphogenesisentitledThe Chemical Basis of Morphogenesis, published in 1952 in thePhilosophical Transactions of the Royal Society. A model of a biological system is converted into a system of equations, although the word 'model' is often used synonymously with the system of corresponding equations. The solution of the equations, by either analytical or numerical means, describes how the biological system behaves either over time or atequilibrium. There are many different types of equations and the type of behavior that can occur is dependent on both the model and the equations used. The model often makes assumptions about the system. The equations may also make assumptions about the nature of what may occur. Molecular set theory is a mathematical formulation of the wide-sensechemical kineticsof biomolecular reactions in terms of sets of molecules and their chemical transformations represented by set-theoretical mappings between molecular sets. It was introduced byAnthony Bartholomay, and its applications were developed in mathematical biology and especially in mathematical medicine.[52]In a more general sense, Molecular set theory is the theory of molecular categories defined as categories of molecular sets and their chemical transformations represented as set-theoretical mappings of molecular sets. The theory has also contributed to biostatistics and the formulation of clinical biochemistry problems in mathematical formulations of pathological, biochemical changes of interest to Physiology, Clinical Biochemistry and Medicine.[52] Theoretical approaches to biological organization aim to understand the interdependence between the parts of organisms. They emphasize the circularities that these interdependences lead to. Theoretical biologists developed several concepts to formalize this idea. 
For example, abstract relational biology (ARB)[53]is concerned with the study of general, relational models of complex biological systems, usually abstracting out specific morphological, or anatomical, structures. Some of the simplest models in ARB are the Metabolic-Replication, or(M,R)--systems introduced byRobert Rosenin 1957–1958 as abstract, relational models of cellular and organismal organization.[54] The eukaryoticcell cycleis very complex and has been the subject of intense study, since its misregulation leads tocancers. It is possibly a good example of a mathematical model as it deals with simple calculus but gives valid results. Two research groups[55][56]have produced several models of the cell cycle simulating several organisms. They have recently produced a generic eukaryotic cell cycle model that can represent a particular eukaryote depending on the values of the parameters, demonstrating that the idiosyncrasies of the individual cell cycles are due to different protein concentrations and affinities, while the underlying mechanisms are conserved (Csikasz-Nagy et al., 2006). By means of a system ofordinary differential equationsthese models show the change in time (dynamical system) of the protein inside a single typical cell; this type of model is called adeterministic process(whereas a model describing a statistical distribution of protein concentrations in a population of cells is called astochastic process). To obtain these equations an iterative series of steps must be done: first the several models and observations are combined to form a consensus diagram and the appropriate kinetic laws are chosen to write the differential equations, such asrate kineticsfor stoichiometric reactions,Michaelis-Menten kineticsfor enzyme substrate reactions andGoldbeter–Koshland kineticsfor ultrasensitive transcription factors, afterwards the parameters of the equations (rate constants, enzyme efficiency coefficients and Michaelis constants) must be fitted to match observations; when they cannot be fitted the kinetic equation is revised and when that is not possible the wiring diagram is modified. The parameters are fitted and validated using observations of both wild type and mutants, such as protein half-life and cell size. To fit the parameters, the differential equations must be studied. This can be done either by simulation or by analysis. In a simulation, given a startingvector(list of the values of the variables), the progression of the system is calculated by solving the equations at each time-frame in small increments. In analysis, the properties of the equations are used to investigate the behavior of the system depending on the values of the parameters and variables. A system of differential equations can be represented as avector field, where each vector described the change (in concentration of two or more protein) determining where and how fast the trajectory (simulation) is heading. Vector fields can have several special points: astable point, called a sink, that attracts in all directions (forcing the concentrations to be at a certain value), anunstable point, either a source or asaddle point, which repels (forcing the concentrations to change away from a certain value), and a limit cycle, a closed trajectory towards which several trajectories spiral towards (making the concentrations oscillate). A better representation, which handles the large number of variables and parameters, is abifurcation diagramusingbifurcation theory. 
The presence of these special steady-state points at certain values of a parameter (e.g. mass) is represented by a point and once the parameter passes a certain value, a qualitative change occurs, called a bifurcation, in which the nature of the space changes, with profound consequences for the protein concentrations: the cell cycle has phases (partially corresponding to G1 and G2) in which mass, via a stable point, controls cyclin levels, and phases (S and M phases) in which the concentrations change independently, but once the phase has changed at a bifurcation event (Cell cycle checkpoint), the system cannot go back to the previous levels since at the current mass the vector field is profoundly different and the mass cannot be reversed back through the bifurcation event, making a checkpoint irreversible. In particular the S and M checkpoints are regulated by means of special bifurcations called aHopf bifurcationand aninfinite period bifurcation.[citation needed]
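The simulation procedure described above (advance the state vector in small time increments and follow the trajectory) can be illustrated with a toy model. The cell-cycle models themselves are far larger, so the following Python sketch uses the two-variable Lotka-Volterra equations mentioned earlier in the article purely as a stand-in, with made-up parameter values.

def simulate(f, state, dt, steps):
    # Forward-Euler integration: repeatedly advance the state vector by a
    # small time increment, as described for deterministic ODE models.
    trajectory = [state]
    for _ in range(steps):
        derivs = f(state)
        state = tuple(s + dt * d for s, d in zip(state, derivs))
        trajectory.append(state)
    return trajectory

def lotka_volterra(state, a=1.0, b=0.5, c=0.5, d=2.0):
    # Toy predator-prey system (illustrative parameters, not from the article).
    prey, pred = state
    return (a * prey - b * prey * pred,      # prey growth minus predation
            c * prey * pred - d * pred)      # predator growth minus death

traj = simulate(lotka_volterra, state=(4.0, 2.0), dt=0.01, steps=2000)
print(traj[-1])   # state after 20 time units; the concentrations oscillate

Analysis of such a system (fixed points, limit cycles, bifurcations) proceeds from the same equations rather than from simulated trajectories.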
https://en.wikipedia.org/wiki/Mathematical_biology
Quid pro quo(Latin: "something for something"[2]) is aLatin phraseused inEnglishto mean an exchange of goods or services, in which one transfer is contingent upon the other; "a favor for a favor". Phrases with similar meanings include: "give and take", "tit for tat", "you scratch my back, and I'll scratch yours", "this for that,"[3]and "one hand washes the other". Other languages usedo ut desto express a reciprocal exchange, which aligns with the Latin meaning,[4]whereas the widespread use ofquid pro quoin English for this concept arose from a "misunderstanding".[5] The Latin phrasequid pro quooriginally implied that something had been substituted, meaning "something for something" as inI gave you sugar for salt. Early usage by English speakers followed the original Latin meaning, with occurrences in the 1530s where the term referred to substituting one medicine for another, whether unintentionally or fraudulently.[6][7]By the end of the same century,quid pro quoevolved into a more current use to describe equivalent exchanges.[8] In 1654, the expressionquid pro quowas used to generally refer to something done for personal gain or with the expectation of reciprocity in the textThe Reign of King Charles: An History Disposed into Annalls, with a somewhat positive connotation. It refers to the covenant with Christ as something "that prove not anudum pactum, a naked contract, withoutquid pro quo." Believers in Christ have to do their part in return, namely "foresake the devil and all his works".[9] Quid pro quowould go on to be used, by English speakers in legal and diplomatic contexts, as an exchange of equally valued goods or services and continues to be today.[10] The Latin phrase corresponding to the English usage ofquid pro quoisdo ut des(Latin for "I give, so that you may give").[11]Other languages continue to usedo ut desfor this purpose, whilequid pro quo(or its equivalentqui pro quo, as widely used in Italian, French, Spanish and Portuguese) still keeps its original meaning of something being unwittingly mistaken, or erroneously told or understood, instead of something else. Incommon law,quid pro quoindicates that an item or a service has been traded in return for something of value, usually when the propriety or equity of the transaction is in question. Acontractmust involveconsideration: that is, the exchange of something of value for something else of value. For example, when buying an item of clothing or a gallon of milk, a pre-determined amount of money is exchanged for the product the customer is purchasing; therefore, they have received something but have given up something of equal value in return. In the United Kingdom, the one-sidedness of a contract is covered by theUnfair Contract Terms Act 1977and various revisions and amendments to it; a clause can be held void or the entire contract void if it is deemed unfair (that is to say, one-sided and not aquid pro quo); however, this is a civil law and not a common law matter. Political donors must be resident in the UK. There are fixed limits to how much they may donate (£5000 in any single donation), and it must be recorded in the House of CommonsRegister of Members' Interestsor at theHouse of Commons Library; thequid pro quois strictly not allowed, that a donor can by his donation have some personal gain. This is overseen by theParliamentary Commissioner for Standards. 
There are also prohibitions on donations being given in the six weeks before the election for which it is being campaigned.[citation needed]It is also illegal for donors to supportparty political broadcasts, which are tightly regulated, free to air, and scheduled and allotted to the various parties according to a formula agreed by Parliament and enacted with theCommunications Act 2003. In the United States, if an exchange appears excessively one sided, courts in some jurisdictions may question whether aquid pro quodid actually exist and the contract may be heldvoid. In cases of "quid pro quo" business contracts, the term takes on a negative connotation because major corporations may cross ethical boundaries in order to enter into these very valuable, mutually beneficial, agreements with other major big businesses. In these deals, large sums of money are often at play and can consequently lead to promises of exclusive partnerships indefinitely or promises of distortion of economic reports.[12][13] In the U.S.,lobbyistsare legally entitled to support candidates that hold positions with which the donors agree, or which will benefit the donors. Such conduct becomesbriberyonly when there is an identifiable exchange between the contribution and official acts, previous or subsequent, and the termquid pro quodenotes such an exchange.[14] In terms of criminal law,quid pro quotends to get used as a euphemism for crimes such asextortionandbribery.[15] InUnited States labor law, workplace sexual harassment can take two forms; either "quid pro quo" harassment orhostile work environmentharassment.[16]"Quid pro quo" harassment takes place when a supervisor requires sex, sexual favors, or sexual contact from an employee/job candidate as a condition of their employment. Only supervisors who have the authority to make tangible employment actions (i.e. hire, fire, promote, etc.), can commit "quid pro quo" harassment.[17]The supervising harasser must have "immediate (or successively higher) authority over the employee."[18]The power dynamic between a supervisor and subordinate/job candidate is such that a supervisor could use their position of authority to extract sexual relations based on the subordinate/job candidate's need for employment. Co-workers and non-decision making supervisors cannot engage in "quid pro quo" harassment with other employees, but an employer could potentially be liable for the behavior of these employees under a hostile work environment claim. The harassing employee's status as a supervisor is significant because if the individual is found to be a supervisor then the employing company can be heldvicariously liablefor the actions of that supervisor.[19]UnderAgency law, the employer is held responsible for the actions of the supervisor because they were in a position of power within the company at the time of the harassment. 
To establish aprima faciecase of "quid pro quo" harassment, the plaintiff must prove that they were subjected to "unwelcome sexual conduct", that submission to such conduct was explicitly or implicitly a term of their employment, and submission to or rejection of this conduct was used as a basis for an employment decision,[20]as follows: Once the plaintiff has established these three factors, the employer can not assert an affirmative defense (such as the employer had a sexual harassment policy in place to prevent and properly respond to issues of sexual harassment), but can only dispute whether the unwelcome conduct did not in fact take place, the employee was not a supervisor, and that there was no tangible employment action involved. Although these terms are popular among lawyers and scholars, neither "hostile work environment" nor "quid pro quo" are found inTitle VII of the Civil Rights Act of 1964, which prohibits employers from discriminating on the basis of race, sex, color, national origin, and religion. The Supreme Court noted inBurlington Industries, Inc. v. Ellerththat these terms are useful in differentiating between cases where threats of harassment are "carried out and those where they are not or absent altogether," but otherwise these terms serve a limited purpose.[25]Therefore, sexual harassment can take place by a supervisor, and an employer can be potentially liable, even if that supervisor's behavior does not fall within the criteria of a "Quid pro quo" harassment claim. Quid pro quowas frequently mentioned during the firstimpeachment inquiryinto U.S. presidentDonald Trump, in reference to the charge that his request for an investigation ofHunter Bidenwas a precondition for the delivery of congressionally authorized military aid during a call with Ukrainian presidentVolodymyr Zelenskyy.[26] For languages that come from Latin, such as Italian, Portuguese, Spanish and French,quid pro quois used to define a misunderstanding or blunder made by the substituting of one thing for another. The Oxford English Dictionary describes this alternative definition in English as "now rare". TheVocabolario Treccani(an authoritative dictionary published by the EncyclopediaTreccani), under the entry "qui pro quo", states that the latter expression probably derives from the Latin used in late medieval pharmaceutical compilations.[27]This can be clearly seen from the work appearing precisely under this title, "Tractatus quid pro quo," (Treatise on what substitutes for what) in the medical collection headed up byMesue cum expositione Mondini super Canones universales...(Venice: per Joannem & Gregorium de gregorijs fratres, 1497), folios 334r-335r. Some examples of what could be used in place of what in this list are:Pro uva passa dactili('in place of raisins, [use] dates');Pro mirto sumac('in place of myrtle, [use] sumac');Pro fenugreco semen lini('in place of fenugreek, [use] flaxseed'), etc. This list was an essential resource in the medieval apothecary, especially for occasions when certain essential medicinal substances were not available. SatiristAmbrose Biercedefined political influence as "a visionaryquogiven in exchange for a substantialquid",[28]making a pun onquidas a form of currency.[29] Quidis slang forpounds, the British currency, originating on this expression as in:if you want the quo you'll need to give them some quid, which explains the plural withouts, as inI gave them five hundred quid.
https://en.wikipedia.org/wiki/Quid_pro_quo
Inmathematicsandphysics,Lieb–Thirring inequalitiesprovide an upper bound on the sums of powers of the negativeeigenvaluesof aSchrödinger operatorin terms of integrals of the potential. They are named afterE. H. LiebandW. E. Thirring. The inequalities are useful in studies ofquantum mechanicsanddifferential equationsand imply, as a corollary, a lower bound on thekinetic energyofN{\displaystyle N}quantum mechanical particles that plays an important role in the proof ofstability of matter.[1] For the Schrödinger operator−Δ+V(x)=−∇2+V(x){\displaystyle -\Delta +V(x)=-\nabla ^{2}+V(x)}onRn{\displaystyle \mathbb {R} ^{n}}with real-valued potentialV(x):Rn→R,{\displaystyle V(x):\mathbb {R} ^{n}\to \mathbb {R} ,}the numbersλ1≤λ2≤⋯≤0{\displaystyle \lambda _{1}\leq \lambda _{2}\leq \dots \leq 0}denote the (not necessarily finite) sequence of negative eigenvalues. Then, forγ{\displaystyle \gamma }andn{\displaystyle n}satisfying one of the conditions there exists a constantLγ,n{\displaystyle L_{\gamma ,n}}, which only depends onγ{\displaystyle \gamma }andn{\displaystyle n}, such that whereV(x)−:=max(−V(x),0){\displaystyle V(x)_{-}:=\max(-V(x),0)}is the negative part of the potentialV{\displaystyle V}. The casesγ>1/2,n=1{\displaystyle \gamma >1/2,n=1}as well asγ>0,n≥2{\displaystyle \gamma >0,n\geq 2}were proven by E. H. Lieb and W. E. Thirring in 1976[1]and used in their proof of stability of matter. In the caseγ=0,n≥3{\displaystyle \gamma =0,n\geq 3}the left-hand side is simply the number of negative eigenvalues, and proofs were given independently by M. Cwikel,[2]E. H. Lieb[3]and G. V. Rozenbljum.[4]The resultingγ=0{\displaystyle \gamma =0}inequality is thus also called the Cwikel–Lieb–Rosenbljum bound. The remaining critical caseγ=1/2,n=1{\displaystyle \gamma =1/2,n=1}was proven to hold by T. Weidl[5]The conditions onγ{\displaystyle \gamma }andn{\displaystyle n}are necessary and cannot be relaxed. The Lieb–Thirring inequalities can be compared to the semi-classical limit. The classicalphase spaceconsists of pairs(p,x)∈R2n.{\displaystyle (p,x)\in \mathbb {R} ^{2n}.}Identifying themomentum operator−i∇{\displaystyle -\mathrm {i} \nabla }withp{\displaystyle p}and assuming that every quantum state is contained in a volume(2π)n{\displaystyle (2\pi )^{n}}in the2n{\displaystyle 2n}-dimensional phase space, the semi-classical approximation is derived with the constant While the semi-classical approximation does not need any assumptions onγ>0{\displaystyle \gamma >0}, the Lieb–Thirring inequalities only hold for suitableγ{\displaystyle \gamma }. Numerous results have been published about the best possible constantLγ,n{\displaystyle L_{\gamma ,n}}in (1) but this problem is still partly open. The semiclassical approximation becomes exact in the limit of large coupling, that is for potentialsβV{\displaystyle \beta V}theWeylasymptotics hold. This implies thatLγ,ncl≤Lγ,n{\displaystyle L_{\gamma ,n}^{\mathrm {cl} }\leq L_{\gamma ,n}}. Lieb and Thirring[1]were able to show thatLγ,n=Lγ,ncl{\displaystyle L_{\gamma ,n}=L_{\gamma ,n}^{\mathrm {cl} }}forγ≥3/2,n=1{\displaystyle \gamma \geq 3/2,n=1}.M. Aizenmanand E. H. Lieb[6]proved that for fixed dimensionn{\displaystyle n}the ratioLγ,n/Lγ,ncl{\displaystyle L_{\gamma ,n}/L_{\gamma ,n}^{\mathrm {cl} }}is amonotonic, non-increasing function ofγ{\displaystyle \gamma }. SubsequentlyLγ,n=Lγ,ncl{\displaystyle L_{\gamma ,n}=L_{\gamma ,n}^{\mathrm {cl} }}was also shown to hold for alln{\displaystyle n}whenγ≥3/2{\displaystyle \gamma \geq 3/2}byA. Laptevand T. 
Weidl.[7]Forγ=1/2,n=1{\displaystyle \gamma =1/2,\,n=1}D. Hundertmark, E. H. Lieb and L. E. Thomas[8]proved that the best constant is given byL1/2,1=2L1/2,1cl=1/2{\displaystyle L_{1/2,1}=2L_{1/2,1}^{\mathrm {cl} }=1/2}. On the other hand, it is known thatLγ,ncl<Lγ,n{\displaystyle L_{\gamma ,n}^{\mathrm {cl} }<L_{\gamma ,n}}for1/2≤γ<3/2,n=1{\displaystyle 1/2\leq \gamma <3/2,n=1}[1]and forγ<1,d≥1{\displaystyle \gamma <1,d\geq 1}.[9]In the former case Lieb and Thirring conjectured that the sharp constant is given by The best known value for the physical relevant constantL1,3{\displaystyle L_{1,3}}is1.456L1,3cl{\displaystyle 1.456L_{1,3}^{\mathrm {cl} }}[10]and the smallest known constant in the Cwikel–Lieb–Rosenbljum inequality is6.869L0,3cl{\displaystyle 6.869L_{0,3}^{\mathrm {cl} }}.[3]A complete survey of the presently best known values forLγ,n{\displaystyle L_{\gamma ,n}}can be found in the literature.[11] The Lieb–Thirring inequality forγ=1{\displaystyle \gamma =1}is equivalent to a lower bound on the kinetic energy of a given normalisedN{\displaystyle N}-particlewave functionψ∈L2(RNn){\displaystyle \psi \in L^{2}(\mathbb {R} ^{Nn})}in terms of the one-body density. For an anti-symmetric wave function such that for all1≤i,j≤N{\displaystyle 1\leq i,j\leq N}, the one-body density is defined as The Lieb–Thirring inequality (1) forγ=1{\displaystyle \gamma =1}is equivalent to the statement that where the sharp constantKn{\displaystyle K_{n}}is defined via The inequality can be extended to particles withspinstates by replacing the one-body density by the spin-summed one-body density. The constantKn{\displaystyle K_{n}}then has to be replaced byKn/q2/n{\displaystyle K_{n}/q^{2/n}}whereq{\displaystyle q}is the number of quantum spin states available to each particle (q=2{\displaystyle q=2}for electrons). If the wave function is symmetric, instead of anti-symmetric, such that for all1≤i,j≤N{\displaystyle 1\leq i,j\leq N}, the constantKn{\displaystyle K_{n}}has to be replaced byKn/N2/n{\displaystyle K_{n}/N^{2/n}}. Inequality (2) describes the minimum kinetic energy necessary to achieve a given densityρψ{\displaystyle \rho _{\psi }}withN{\displaystyle N}particles inn{\displaystyle n}dimensions. IfL1,3=L1,3cl{\displaystyle L_{1,3}=L_{1,3}^{\mathrm {cl} }}was proven to hold, the right-hand side of (2) forn=3{\displaystyle n=3}would be precisely the kinetic energy term inThomas–Fermitheory. The inequality can be compared to theSobolev inequality. M. Rumin[12]derived the kinetic energy inequality (2) (with a smaller constant) directly without the use of the Lieb–Thirring inequality. (for more information, read theStability of matterpage) The kinetic energy inequality plays an important role in the proof ofstability of matteras presented by Lieb and Thirring.[1]TheHamiltonianunder consideration describes a system ofN{\displaystyle N}particles withq{\displaystyle q}spin states andM{\displaystyle M}fixednucleiat locationsRj∈R3{\displaystyle R_{j}\in \mathbb {R} ^{3}}withchargesZj>0{\displaystyle Z_{j}>0}. The particles and nuclei interact with each other through the electrostaticCoulomb forceand an arbitrarymagnetic fieldcan be introduced. If the particles under consideration arefermions(i.e. the wave functionψ{\displaystyle \psi }is antisymmetric), then the kinetic energy inequality (2) holds with the constantKn/q2/n{\displaystyle K_{n}/q^{2/n}}(notKn/N2/n{\displaystyle K_{n}/N^{2/n}}). This is a crucial ingredient in the proof of stability of matter for a system of fermions. 
It ensures that theground stateenergyEN,M(Z1,…,ZM){\displaystyle E_{N,M}(Z_{1},\dots ,Z_{M})}of the system can be bounded from below by a constant depending only on the maximum of the nuclei charges,Zmax{\displaystyle Z_{\max }}, times the number of particles, The system is then stable of the first kind since the ground-state energy is bounded from below and also stable of the second kind, i.e. the energy of decreases linearly with the number of particles and nuclei. In comparison, if the particles are assumed to bebosons(i.e. the wave functionψ{\displaystyle \psi }is symmetric), then the kinetic energy inequality (2) holds only with the constantKn/N2/n{\displaystyle K_{n}/N^{2/n}}and for the ground state energy only a bound of the form−CN5/3{\displaystyle -CN^{5/3}}holds. Since the power5/3{\displaystyle 5/3}can be shown to be optimal, a system of bosons is stable of the first kind but unstable of the second kind. If the Laplacian−Δ=−∇2{\displaystyle -\Delta =-\nabla ^{2}}is replaced by(i∇+A(x))2{\displaystyle (\mathrm {i} \nabla +A(x))^{2}}, whereA(x){\displaystyle A(x)}is a magnetic field vector potential inRn,{\displaystyle \mathbb {R} ^{n},}the Lieb–Thirring inequality (1) remains true. The proof of this statement uses thediamagnetic inequality. Although all presently known constantsLγ,n{\displaystyle L_{\gamma ,n}}remain unchanged, it is not known whether this is true in general for the best possible constant. The Laplacian can also be replaced by other powers of−Δ{\displaystyle -\Delta }. In particular for the operator−Δ{\displaystyle {\sqrt {-\Delta }}}, a Lieb–Thirring inequality similar to (1) holds with a different constantLγ,n{\displaystyle L_{\gamma ,n}}and with the power on the right-hand side replaced byγ+n{\displaystyle \gamma +n}. Analogously a kinetic inequality similar to (2) holds, with1+2/n{\displaystyle 1+2/n}replaced by1+1/n{\displaystyle 1+1/n}, which can be used to prove stability of matter for the relativistic Schrödinger operator under additional assumptions on the chargesZk{\displaystyle Z_{k}}.[13] In essence, the Lieb–Thirring inequality (1) gives an upper bound on the distances of the eigenvaluesλj{\displaystyle \lambda _{j}}to theessential spectrum[0,∞){\displaystyle [0,\infty )}in terms of the perturbationV{\displaystyle V}. Similar inequalities can be proved forJacobi operators.[14]
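The display equations labelled (1) and (2) are missing from this extract. For reference, the following LaTeX block restates their standard forms together with the semiclassical constant; these are reconstructions of the commonly cited statements, not text copied from the source.

% Lieb--Thirring inequality (1): moments of the negative eigenvalues
\sum_{j \ge 1} |\lambda_j|^{\gamma}
  \;\le\; L_{\gamma,n} \int_{\mathbb{R}^n} V(x)_-^{\,\gamma + \frac{n}{2}} \,\mathrm{d}x .

% Semiclassical constant from the phase-space approximation
L_{\gamma,n}^{\mathrm{cl}}
  \;=\; (4\pi)^{-n/2}\, \frac{\Gamma(\gamma + 1)}{\Gamma\!\left(\gamma + \frac{n}{2} + 1\right)} .

% Kinetic-energy inequality (2) for an antisymmetric, normalised N-particle
% wave function with one-body density \rho_\psi
\sum_{i=1}^{N} \int_{\mathbb{R}^{Nn}} |\nabla_i \psi|^2 \,\mathrm{d}x
  \;\ge\; K_n \int_{\mathbb{R}^n} \rho_\psi(x)^{1 + \frac{2}{n}} \,\mathrm{d}x ,
\qquad
K_n \;=\; \frac{n}{n+2} \left( \frac{2}{(n+2)\, L_{1,n}} \right)^{2/n} .

These forms are consistent with the numerical values quoted in the article, e.g. L^{cl}_{1/2,1} = 1/4 so that L_{1/2,1} = 2 L^{cl}_{1/2,1} = 1/2.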
https://en.wikipedia.org/wiki/Lieb%E2%80%93Thirring_inequality
Syllabic octalandsplit octalare two similar notations for 8-bit and 16-bitoctal numbers, respectively, used in some historical contexts. Syllabic octalis an 8-bit octalnumber representationthat was used byEnglish Electricin conjunction with theirKDF9machine in the mid-1960s. Although the word 'byte' had been coined by the designers of theIBM 7030 Stretchfor a group of eightbits, it was not yet well known, and English Electric used the word 'syllable' for what is now called a byte. Machine codeprogramming used an unusual form ofoctal, known locally as 'bastardized octal'. It represented 8 bits with three octal digits, but the first digit represented only the two most-significant bits (with values 0..3), whilst each of the other two represented a group of three bits (with values 0..7).[1]A more polite colloquial name was 'silly octal', derived from the official name, which wassyllabic octal[2][3](also known as 'slob-octal' or 'slob' notation[4][5]). This 8-bit notation was similar to the later 16-bit split octal notation. Split octalis an unusual address notation used byHeathkit's PAM8 and portions ofHDOSfor theHeathkit H8in the late 1970s (and sometimes up to the present).[6][7]It was also used byDigital Equipment Corporation(DEC). Following this convention, 16-bit addresses were split into two 8-bit numbers printed separately in octal, that is base 8 on 8-bit boundaries: the first memory location was "000.000" and the memory location after "000.377" was "001.000" (rather than "000.400"). In order to distinguish numbers in split-octal notation from ordinary 16-bit octal numbers, the two digit groups were often separated by a slash (/),[8]dot (.),[9]colon (:),[10]comma (,),[11]hyphen (-),[12]or hash mark (#).[13][14] Mostminicomputersandmicrocomputersused either straight octal (where 377 is followed by 400) orhexadecimal. With the introduction of the optional HA8-6Z80processor replacement for the8080board, the front-panel keyboard got a new set of labels and hexadecimal notation was used instead of octal.[15] Through tricky number alignment, theHP-16Cand otherHewlett-PackardRPNcalculators supportingbase conversioncan implicitly support numbers in split octal as well.[16]
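The digit grouping described above is easy to sketch in code. The following Python helpers are illustrative only; the function names and example values are mine, not from the cited sources.

```python
def syllabic_octal(byte):
    """Render one 8-bit syllable as syllabic octal: a leading 2-bit digit
    (0..3) followed by two 3-bit digits (0..7)."""
    assert 0 <= byte <= 0xFF
    return f"{(byte >> 6) & 0x3}{(byte >> 3) & 0x7}{byte & 0x7}"

def split_octal(address, sep="."):
    """Render a 16-bit address as split octal: each 8-bit half is printed
    as its own 3-digit octal number."""
    assert 0 <= address <= 0xFFFF
    return f"{address >> 8:03o}{sep}{address & 0xFF:03o}"

print(syllabic_octal(0xFF))     # 377  (all eight bits set)
print(split_octal(0x00FF))      # 000.377
print(split_octal(0x0100))      # 001.000  (the location after 000.377)
print(f"{0x0100:06o}")          # 000400   (the same address in straight 16-bit octal)
```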
https://en.wikipedia.org/wiki/Syllabic_octal
Inmachine learningandinformation retrieval, thecluster hypothesisis an assumption about the nature of the data handled in those fields, which takes various forms. In information retrieval, it states that documents that areclusteredtogether "behave similarly with respect to relevance to information needs".[1]In terms ofclassification, it states that if points are in the same cluster, they are likely to be of the same class.[2]There may be multiple clusters forming a single class. The cluster hypothesis was first formulated by van Rijsbergen:[3]"closely associated documents tend to be relevant to the same requests". Thus, theoretically, asearch enginecould try to locate only the appropriate cluster for a query, and then allow users to browse through this cluster. Although experiments showed that the cluster hypothesis as such holds, exploiting it for retrieval did not lead to satisfactory results.[4] The cluster assumption underlies many machine learning algorithms, such as thek-nearest neighbor classification algorithmand thek-means clustering algorithm. Because the word "likely" appears in the definition, there is no clear border determining whether the assumption holds or does not hold; instead, the degree to which data adhere to the assumption can be quantitatively measured. The cluster assumption is equivalent to theLow density separation assumption, which states that the decision boundary should lie in a low-density region. To prove this, suppose the decision boundary crosses one of the clusters. Then this cluster would contain points from two different classes, so the cluster assumption would be violated on that cluster.
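Since adherence to the cluster assumption can be measured, here is one rough way to do so in code. The sketch below is a hypothetical illustration rather than a standard named measure from the cited sources: it reports how often a point's nearest neighbour carries the same class label, with values near 1 indicating data that behave as the hypothesis predicts.

```python
import numpy as np

def nearest_neighbour_agreement(X, y):
    """Fraction of points whose nearest neighbour (excluding themselves)
    carries the same label: a crude proxy for how well the cluster
    assumption holds on a labelled data set."""
    X, y = np.asarray(X, dtype=float), np.asarray(y)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)  # pairwise squared distances
    np.fill_diagonal(d2, np.inf)          # ignore self-distances
    nn = d2.argmin(axis=1)                # index of each point's nearest neighbour
    return float((y[nn] == y).mean())

# Two well-separated blobs with consistent labels should score close to 1.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.3, (50, 2)), rng.normal(3.0, 0.3, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
print(nearest_neighbour_agreement(X, y))   # typically ~1.0
```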
https://en.wikipedia.org/wiki/Cluster_hypothesis
Takethis kiss upon the brow!And, in parting from you now,Thus much let me avow—You are not wrong, who deemThat my days have been a dream;Yet if hope has flown awayIn a night, or in a day,In a vision, or in none,Is it therefore the lessgone?Allthat we see or seemIs but a dream within a dream.I stand amid the roarOf a surf-tormented shore,And I hold within my handGrains of the golden sand—How few! yet how they creepThrough my fingers to the deep,While I weep—while I weep!O God! can I not graspThem with a tighter clasp?O God! can I not saveOnefrom the pitiless wave?Isallthat we see or seemBut a dream within a dream? "A Dream Within a Dream" is a poem written by American poetEdgar Allan Poe, first published in1849. The poem has 24 lines, divided into two stanzas. The poem dramatizes the confusion felt by the narrator as he watches the important things in life slip away.[1]Realizing he cannot hold on to even one grain of sand, he is led to his final question whether all things are just a dream.[2] It has been suggested that the "golden sand" referenced in the 15th line signifies that which is to be found in anhourglass, consequently time itself.[3]Another interpretation holds that the expression evokes an image derived from the 1848 finding ofgold in California.[1]The latter interpretation seems unlikely, however, given the presence of the four, almost identical, lines describing the sand in another poem "To ——," which is regarded as a blueprint for "A Dream Within a Dream" and preceding its publication by two decades.[3] The poem was first published in the March 31, 1849, edition of theBoston-based story paperThe Flag of Our Union.[2]The same publication had only two weeks before first published Poe's short story "Hop-Frog." The next month, owner Frederick Gleason announced it could no longer pay for whatever articles or poems it published.
https://en.wikipedia.org/wiki/A_Dream_Within_a_Dream_(poem)
Inboolean logic, adisjunctive normal form(DNF) is acanonical normal formof a logical formula consisting of a disjunction of conjunctions; it can also be described as anOR of ANDs, asum of products, or — inphilosophical logic— acluster concept.[1]As anormal form, it is useful inautomated theorem proving. A logical formula is considered to be in DNF if it is adisjunctionof one or moreconjunctionsof one or moreliterals.[2][3][4]A DNF formula is infull disjunctive normal formif each of its variables appears exactly once in every conjunction and each conjunction appears at most once (up to the order of variables). As inconjunctive normal form(CNF), the only propositional operators in DNF areand(∧{\displaystyle \wedge }),or(∨{\displaystyle \vee }), andnot(¬{\displaystyle \neg }). Thenotoperator can only be used as part of a literal, which means that it can only precede apropositional variable. The following is acontext-free grammarfor DNF: WhereVariableis any variable. For example, all of the following formulas are in DNF: The formulaA∨B{\displaystyle A\lor B}is in DNF, but not in full DNF; an equivalent full-DNF version is(A∧B)∨(A∧¬B)∨(¬A∧B){\displaystyle (A\land B)\lor (A\land \lnot B)\lor (\lnot A\land B)}. The following formulas arenotin DNF: Inclassical logiceach propositional formula can be converted to DNF[6]... The conversion involves usinglogical equivalences, such asdouble negation elimination,De Morgan's laws, and thedistributive law. Formulas built from theprimitiveconnectives{∧,∨,¬}{\displaystyle \{\land ,\lor ,\lnot \}}[7]can be converted to DNF by the followingcanonical term rewriting system:[8] The full DNF of a formula can be read off itstruth table.[9][10]For example, consider the formula The correspondingtruth tableis A propositional formula can be represented by one and only one full DNF.[13]In contrast, severalplainDNFs may be possible. For example, by applying the rule((a∧b)∨(¬a∧b))⇝b{\displaystyle ((a\land b)\lor (\lnot a\land b))\rightsquigarrow b}three times, the full DNF of the aboveϕ{\displaystyle \phi }can be simplified to(¬p∧¬q)∨(¬p∧r)∨(¬q∧r){\displaystyle (\lnot p\land \lnot q)\lor (\lnot p\land r)\lor (\lnot q\land r)}. However, there are also equivalent DNF formulas that cannot be transformed one into another by this rule, see the pictures for an example. It is a theorem that all consistent formulas inpropositional logiccan be converted to disjunctive normal form.[14][15][16][17]This is called theDisjunctive Normal Form Theorem.[14][15][16][17]The formal statement is as follows: Disjunctive Normal Form Theorem:SupposeX{\displaystyle X}is a sentence in a propositional languageL{\displaystyle {\mathcal {L}}}withn{\displaystyle n}sentence letters, which we shall denote byA1,...,An{\displaystyle A_{1},...,A_{n}}. IfX{\displaystyle X}is not a contradiction, then it is truth-functionally equivalent to a disjunction of conjunctions of the form±A1∧...∧±An{\displaystyle \pm A_{1}\land ...\land \pm A_{n}}, where+Ai=Ai{\displaystyle +A_{i}=A_{i}}, and−Ai=¬Ai{\displaystyle -A_{i}=\neg A_{i}}.[15] The proof follows from the procedure given above for generating DNFs fromtruth tables. Formally, the proof is as follows: SupposeX{\displaystyle X}is a sentence in a propositional language whose sentence letters areA,B,C,…{\displaystyle A,B,C,\ldots }. 
For each row ofX{\displaystyle X}'s truth table, write out a correspondingconjunction±A∧±B∧±C∧…{\displaystyle \pm A\land \pm B\land \pm C\land \ldots }, where±A{\displaystyle \pm A}is defined to beA{\displaystyle A}ifA{\displaystyle A}takes the valueT{\displaystyle T}at that row, and is¬A{\displaystyle \neg A}ifA{\displaystyle A}takes the valueF{\displaystyle F}at that row; similarly for±B{\displaystyle \pm B},±C{\displaystyle \pm C}, etc. (thealphabetical orderingofA,B,C,…{\displaystyle A,B,C,\ldots }in the conjunctions is quite arbitrary; any other could be chosen instead). Now form thedisjunctionof all these conjunctions which correspond toT{\displaystyle T}rows ofX{\displaystyle X}'s truth table. This disjunction is a sentence inL[A,B,C,…;∧,∨,¬]{\displaystyle {\mathcal {L}}[A,B,C,\ldots ;\land ,\lor ,\neg ]},[18]which by the reasoning above is truth-functionally equivalent toX{\displaystyle X}. This construction obviously presupposes thatX{\displaystyle X}takes the valueT{\displaystyle T}on at least one row of its truth table; ifX{\displaystyle X}doesn’t, i.e., ifX{\displaystyle X}is acontradiction, thenX{\displaystyle X}is equivalent toA∧¬A{\displaystyle A\land \neg A}, which is, of course, also a sentence inL[A,B,C,…;∧,∨,¬]{\displaystyle {\mathcal {L}}[A,B,C,\ldots ;\land ,\lor ,\neg ]}.[15] This theorem is a convenient way to derive many usefulmetalogicalresults in propositional logic, such as,trivially, the result that the set of connectives{∧,∨,¬}{\displaystyle \{\land ,\lor ,\neg \}}isfunctionally complete.[15] Any propositional formula is built fromn{\displaystyle n}variables, wheren≥1{\displaystyle n\geq 1}. There are2n{\displaystyle 2n}possible literals:L={p1,¬p1,p2,¬p2,…,pn,¬pn}{\displaystyle L=\{p_{1},\lnot p_{1},p_{2},\lnot p_{2},\ldots ,p_{n},\lnot p_{n}\}}. L{\displaystyle L}has(22n−1){\displaystyle (2^{2n}-1)}non-empty subsets.[19] This is the maximum number of conjunctions a DNF can have.[13] A full DNF can have up to2n{\displaystyle 2^{n}}conjunctions, one for each row of the truth table. Example 1 Consider a formula with two variablesp{\displaystyle p}andq{\displaystyle q}. The longest possible DNF has2(2×2)−1=15{\displaystyle 2^{(2\times 2)}-1=15}conjunctions:[13] The longest possible full DNF has 4 conjunctions: they are underlined. This formula is atautology. It can be simplified to(¬p∨p){\displaystyle (\neg p\lor p)}or to(¬q∨q){\displaystyle (\neg q\lor q)}, which are also tautologies, as well as valid DNFs. Example 2 Each DNF of the e.g. formula(X1∨Y1)∧(X2∨Y2)∧⋯∧(Xn∨Yn){\displaystyle (X_{1}\lor Y_{1})\land (X_{2}\lor Y_{2})\land \dots \land (X_{n}\lor Y_{n})}has2n{\displaystyle 2^{n}}conjunctions. TheBoolean satisfiability problemonconjunctive normal formformulas isNP-complete. By theduality principle, so is the falsifiability problem on DNF formulas. Therefore, it isco-NP-hardto decide if a DNF formula is atautology. Conversely, a DNF formula is satisfiable if, and only if, one of its conjunctions is satisfiable. This can be decided inpolynomial timesimply by checking that at least one conjunction does not contain conflicting literals. An important variation used in the study ofcomputational complexityisk-DNF. A formula is ink-DNFif it is in DNF and each conjunction contains at most k literals.[20]
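The row-by-row construction used in the proof above is easy to mechanise. The sketch below is an illustrative Python helper (mine, not from the cited sources): it enumerates the truth table of a formula supplied as a Python predicate and reads off the full DNF of the rows on which the formula is true.

```python
from itertools import product

def full_dnf(formula, variables):
    """Build the full DNF of `formula` (a function taking one bool per
    variable) by collecting one conjunction per row of the truth table
    on which the formula evaluates to True."""
    conjunctions = []
    for values in product([True, False], repeat=len(variables)):
        if formula(*values):
            literals = [v if bit else f"~{v}" for v, bit in zip(variables, values)]
            conjunctions.append("(" + " & ".join(literals) + ")")
    if not conjunctions:               # a contradiction has no satisfying rows
        return "False"
    return " | ".join(conjunctions)

# Example: phi(p, q, r) = (p -> q) & r, written with Python operators.
phi = lambda p, q, r: ((not p) or q) and r
print(full_dnf(phi, ["p", "q", "r"]))
# (p & q & r) | (~p & q & r) | (~p & ~q & r)
```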
https://en.wikipedia.org/wiki/Disjunctive_normal_form
Thebouba–kiki effect(/ˈbuːbəˈkiːkiː/) ortakete–malumaphenomenon[1][2][3]is a non-arbitrarymental associationbetween certain speech sounds and certain visual shapes. The most typical research finding is that people, when presented withnonsense words, tend to associate certain ones (likeboubaandmaluma) with a rounded shape and other ones (likekikiandtakete) with a spiky shape. Its discovery dates back to the 1920s, when psychologists documented experimental participants as connecting nonsense words to shapes in consistent ways. There is a strong general tendency towards the effect worldwide; it has been robustly confirmed across a majority of cultures and languages in which it has been researched,[4]for example including among English-speaking American university students,Tamilspeakers in India, speakers of certain languages with no writing system, young children, infants, and (though to a much lesser degree) thecongenitally blind.[4]It has also been shown to occur with familiar names. The bouba–kiki effect is one form ofsound symbolism.[5] This effect was first observed by GeorgianpsychologistDimitri Uznadzein a 1924 paper.[6][non-primary source needed]He conducted an experiment with 10 participants who were given a list with nonsense words, shown six drawings for five seconds each, then instructed to pick a name for the drawing from the list of given words. He describes the different "strategies" participants developed to match words to drawings and quotes their reasoning. He also describes situations where participants described very specific forms that they associated with a nonsense word, without reference to the shown drawings. He develops a theory of four factors that influence the way names for objects are decided. In total, there were 42 words. For one particular drawing, 45% picked the same word. For three others, the percentages were 40%. Uznadze points out that this is significantly more overlap than one could expect, given the high number of possible words. He speculates that there must therefore be certain regularities "which the human soul follows in the process of name-giving". German AmericanpsychologistWolfgang Köhlerreferred to Uznadze's experiment in a 1929 book[7]which showed two forms and asked readers which shape was called "takete" and which was called "maluma". Although he does not say so outright, Köhler implies that there is a strong preference to pair the jagged shape with "takete" and the rounded shape with "maluma".[8] In 2001,V. S. Ramachandranand Edward Hubbard repeated Köhler's experiment, introducing the words "kiki" and "bouba", and asked American college undergraduates andTamilspeakers in India, "Which of these shapes is bouba and which is kiki?" In both groups, 95% to 98% selected the curvy shape as "bouba" and the jagged one as "kiki", suggesting that the human brain somehow attaches abstract meanings to the shapes and sounds consistently.[9][failed verification–see discussion] A research experiment was conducted in 2022 that found evidence supporting the idea that the bouba/kiki effect is across-cultural phenomenon. 917 participants speaking 25 different languages, with 10 different writing systems, maintain a higher than chance consistency in bouba/kiki identification, intuitively associating the "bouba" with a rounded shape and "kiki" with a sharp, pointed shape, regardless of their native language, though the effect is stronger in some languages than others. It also supports thatRoman orthographyis a factor that could enhance the bouba/kiki effect. 
However, this biasing effect of orthography is rather weak since the participants that speak languages with Roman orthography are only marginally more likely to show the bouba/kiki effect.[clarification needed][4] Daphne Maurerand colleagues showed that even children as young as 21⁄2years old may show this preference.[10]More recent work by Ozge Ozturk and colleagues in 2013 showed that even 4-month-old infants have the same sound–shape mapping biases as adults and toddlers.[11]Infants are able to differentiate between congruent trials (pairing an angular shape with "kiki" or a curvy shape with "bubu") and incongruent trials (pairing a curvy shape with "kiki" or an angular shape with "bubu"). Infants looked longer at incongruent pairings than at congruent pairings. Infants' mapping was based on the combination ofconsonantsandvowelsin the words, and neither consonants nor vowels alone sufficed for mapping. These results suggest that some sound–shape mappings precedelanguage learning, and may in fact aid in language learning by establishing a basis for matching labels to referents and narrowing the hypothesis space for young infants. Adults in this study, like infants, used a combination of consonant and vowel information to match the labels they heard with the shapes they saw. However, this was not the only strategy that was available to them. Adults, unlike infants, were also able to use consonant information alone and vowel information alone to match the labels to the shapes, albeit less frequently than the consonant–vowel combination. When vowels and consonants were put in conflict, adults used consonants more often than vowels. The effect has also been shown to emerge in other contexts, such as when words are paired with evaluative meanings (with "bouba" words associated with positive concepts and "kiki" words associated with negative concepts)[12]or when the words to be paired are existing first names, suggesting that some familiarity with the linguistic stimuli does not eliminate the effect. A study showed that individuals will pair names such as "Molly" with round silhouettes, and names such as "Kate" with sharp silhouettes. Moreover, individuals will associate different personality traits with either group of names (e.g., easygoingness with "round names"; determination with "sharp names"). This may hint at a role of abstract concepts in the effect.[13] Other research suggests that this effect does not occur in all communities,[14]and it appears that the effect breaks if the sounds do not make licit words in the language.[15]The bouba–kiki effect seems to be dependent on a longsensitive period, with high visual capacities in childhood being necessary for its typical development. Although the congenitally blind have been reported to show a bouba–kiki effect, they show a much smaller one for touched shapes than sighted individuals do for visual shapes.[16][17] A major 2021 study showed that certain languages, namely Mandarin Chinese, Turkish, Romanian, and Albanian, on average showed lower-than-50% matches for both associating bouba with roundedness and kiki with jaggedness. However, the authors consider their analysis conservative and not clear enough to confirm if these four definitively lacked the bouba–kiki phenomenon. 
For example, the phonetic structures of these languages or their participants' cultural associations with sound and shape could have led to the weaker correlations observed.[4]Further research is being conducted to further verify the correlation between low-effect languages and the bouba-kiki phenomenon. In 2019, Nathan Peiffer-Smadja and Laurent Cohen published the first study usingfMRIto explore the bouba–kiki effect.[18]They found that prefrontal activation is stronger to mismatching (bouba with spiky shape) than to matching (bouba with round shape) stimuli. A subsequent study by Kelly McCormick and colleagues reported a similar pattern of greater activation for mismatched word-shape stimuli, but with most activity inparietal regionsincluding theintraparietal sulcusandsupramarginal gyrus, regions known to play a role in sensory association and perceptual-motor processing.[19]Peiffer-Smadja and Cohen also found that sound-shape matching also influences activations in the auditory and visual cortices, suggesting an effect of matching at an early stage insensory processing.[18] Ramachandran and Hubbard suggest that the kiki/bouba effect has implications for the evolution of language, because it suggests that the naming of objects is not completely arbitrary.[9]: 17The rounded shape may most commonly be named "bouba" because the mouth makes a more rounded shape to produce that sound while a more taut, angular mouth shape is needed to make the sounds in "kiki".[20]Alternatively, the distinction may be betweencoronalordorsalconsonants like/k/andlabialconsonants like/b/,[21]or, as Fort and Schwartz suggest, the difference may be attributed to the noise a "bouba" shape makes when bounced (lower frequency and more continuous) in comparison to a spiked object.[22]Additionally, it was shown that it is not only different consonants (e.g., voiceless versus voiced) and different vowel qualities (e.g., /a/ versus /i/) that play a role in the effect, but also vowel quantity (long versus short vowels). In one study, participants rated words containing long vowels to refer to longer objects and short vowels to short objects, at least for languages that make avowel lengthdistinction.[23]The presence of these "synesthesia-like mappings" suggest that this effect may be the neurological basis forsound symbolism, in which sounds are non-arbitrarily mapped to objects and events in the world.[citation needed]Research has also indicated that the effect may be a case ofideasthesia,[24]a phenomenon in which activations of concepts (inducers) evoke perception-like experiences (concurrents). The name comes from the Greekideaandaisthesis, meaning "sensing concepts" or "sensing ideas", and was introduced by Danko Nikolić.[25]
https://en.wikipedia.org/wiki/Bouba/kiki_effect
Incryptography, thesimple XOR cipheris a type ofadditivecipher,[1]anencryption algorithmthat operates according to the principles A ⊕ 0 = A, A ⊕ A = 0, (A ⊕ B) ⊕ C = A ⊕ (B ⊕ C), and (B ⊕ A) ⊕ A = B ⊕ 0 = B, where⊕{\displaystyle \oplus }denotes theexclusive disjunction(XOR) operation.[2]This operation is sometimes called modulus 2 addition (or subtraction, which is identical).[3]With this logic, a string of text can be encrypted by applying the bitwise XOR operator to every character using a given key. To decrypt the output, merely reapply the XOR function with the key to remove the cipher. The string "Wiki" (01010111 01101001 01101011 01101001in 8-bitASCII) can be encrypted with the repeating key11110011as follows: 01010111 01101001 01101011 01101001 ⊕ 11110011 11110011 11110011 11110011 = 10100100 10011010 10011000 10011010. And conversely, for decryption: 10100100 10011010 10011000 10011010 ⊕ 11110011 11110011 11110011 11110011 = 01010111 01101001 01101011 01101001. The XOR operator is extremely common as a component in more complex ciphers. By itself, using a constant repeating key, a simple XOR cipher can trivially be broken usingfrequency analysis. If the content of any message can be guessed or otherwise known then the key can be revealed. Its primary merit is that it is simple to implement, and that the XOR operation is computationally inexpensive. A simple repeating XOR (i.e. using the same key for the XOR operation on the whole of the data) cipher is therefore sometimes used for hiding information in cases where no particular security is required. The XOR cipher is often used in computermalwareto make reverse engineering more difficult. If the key is random and is at least as long as the message, the XOR cipher is much more secure than when there is key repetition within a message.[4]When the keystream is generated by apseudo-random number generator, the result is astream cipher. With a key that istruly random, the result is aone-time pad, which isunbreakable in theory. The XOR operator in any of these ciphers is vulnerable to aknown-plaintext attack, sinceplaintext⊕{\displaystyle \oplus }ciphertext=key. It is also trivial to flip arbitrary bits in the decrypted plaintext by manipulating the ciphertext. This is calledmalleability. The primary reason XOR is so useful in cryptography is because it is "perfectly balanced"; for a given plaintext input 0 or 1, the ciphertext result is equally likely to be either 0 or 1 for a truly random key bit.[5] The four possible pairs of plaintext and key bits map to ciphertext bits as follows: (0, 0) → 0, (0, 1) → 1, (1, 0) → 1, (1, 1) → 0. It is clear that if nothing is known about the key or plaintext, nothing can be determined from the ciphertext alone.[5] Other logical operations such asANDorORdo not have such a mapping (for example, AND would produce three 0's and one 1, so knowing that a given ciphertext bit is a 0 implies that there is a 2/3 chance that the original plaintext bit was a 0, as opposed to the ideal 1/2 chance in the case of XOR).[a] Example using thePythonprogramming language.[b] A shorter example using theRprogramming language, based on apuzzleposted on Instagram byGCHQ.
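The Python and R examples referred to above did not survive in this extract. The following is a minimal sketch of a repeating-key XOR routine in Python (the function name is mine, not the original's), using the "Wiki" example from the text to check the round trip.

```python
def xor_bytes(data: bytes, key: bytes) -> bytes:
    """XOR each byte of `data` with the repeating `key`.
    Because x ^ k ^ k == x, the same function both encrypts and decrypts."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

message = b"Wiki"                      # 01010111 01101001 01101011 01101001
key = bytes([0b11110011])              # the repeating single-byte key from the example

ciphertext = xor_bytes(message, key)
print(" ".join(f"{b:08b}" for b in ciphertext))
# 10100100 10011010 10011000 10011010

assert xor_bytes(ciphertext, key) == message   # decryption restores the plaintext
```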
https://en.wikipedia.org/wiki/XOR_cipher
Digital currency(digital money,electronic moneyorelectronic currency) is anycurrency,money, or money-like asset that is primarily managed, stored or exchanged on digital computer systems, especially over theinternet. Types of digital currencies includecryptocurrency,virtual currencyandcentral bank digital currency. Digital currency may be recorded on adistributed databaseon the internet, a centralized electroniccomputer databaseowned by a company or bank, withindigital filesor even on astored-value card.[1] Digital currencies exhibit properties similar to traditional currencies, but generally do not have the classical physical form of historicalfiat currencythat can be held in the hand, like currencies with printedbanknotesor mintedcoins. However, they do have a physical form in a non-classical sense, arising from computer-to-computer and computer-to-human interactions and from the information and processing power of the servers that store and keep track of the money. This non-classical physical form allows nearly instantaneous transactions over the internet and vastly lowers the cost associated with distributing notes and coins: for example, of the types of money in theUK economy, 3% are notes and coins, and 79% is electronic money (in the form of bank deposits).[2]Usually not issued by a governmental body, virtual currencies are not consideredlegal tenderand they enableownershiptransfer across governmentalborders.[3] This type of currency may be used to buy physicalgoodsandservices, but may also be restricted to certaincommunitiessuch as for use inside an online game.[4] Digital money can either be centralized, where there is a central point of control over the money supply (for instance, a bank), ordecentralized, where the control over the money supply is predetermined or agreed upon democratically. Precursory ideas for digital currencies were presented in electronic payment methods such as theSabre (travel reservation system).[5]In 1983, a research paper titled "Blind Signatures for Untraceable Payments" byDavid Chaumintroduced the idea of digital cash.[6][7]In 1989, he foundedDigiCash, an electronic cash company, in Amsterdam to commercialize the ideas in his research.[8]It filed for bankruptcy in 1998.[8][9] e-goldwas the first widely used Internet money, introduced in 1996, and grew to several million users before the US Government shut it down in 2008. e-gold has been referred to as "digital currency" by both US officials and academia.[10][11][12][13][14]In 1997, Coca-Cola offered purchases from vending machines using mobile payments.[15]PayPallaunched its USD-denominated service in 1998. In 2009,bitcoinwas launched, which marked the start of decentralizedblockchain-based digital currencies with no central server, and no tangible assets held in reserve. Also known as cryptocurrencies, blockchain-based digital currencies proved resistant to attempts by governments to regulate them, because there was no central organization or person with the power to turn them off.[16] Origins of digital currencies date back to the 1990sDot-com bubble. Another known digital currency service wasLiberty Reserve, founded in 2006; it let users convert dollars or euros to Liberty Reserve Dollars or Euros, and exchange them freely with one another at a 1% fee. Several digital currency operations were reputed to be used for Ponzi schemes and money laundering, and were prosecuted by the U.S.
government for operating without MSB licenses.[17]Q coins or QQ coins, were used as a type of commodity-based digital currency onTencent QQ's messaging platform and emerged in early 2005. Q coins were so effective in China that they were said to have had a destabilizing effect on theChinese yuancurrency due to speculation.[18]Recent interest incryptocurrencieshas prompted renewed interest in digital currencies, withbitcoin, introduced in 2008, becoming the most widely used and accepted digital currency. Digital currency is a term that refers to a specific type of electronic currency with specific properties. Digital currency is also a term used to include the meta-group of sub-types of digital currency, the specific meaning can only be determined within the specific legal or contextual case. Legally and technically, there already are a myriad of legal definitions of digital currency and the many digital currency sub-types. Combining different possible properties, there exists an extensive number of implementations creating many and numerous sub-types of digital currency. Many governmental jurisdictions have implemented their own unique definition for digital currency, virtual currency, cryptocurrency, e-money, network money, e-cash, and other types of digital currency. Within any specific government jurisdiction, different agencies and regulators define different and often conflicting meanings for the different types of digital currency based on the specific properties of a specific currency type or sub-type. A virtual currency has been defined in 2012 by theEuropean Central Bankas "a type of unregulated, digital money, which is issued and usually controlled by its developers, and used and accepted among the members of a specificvirtual community".[19]TheUS Department of Treasuryin 2013 defined it more tersely as "a medium of exchange that operates like a currency in some environments, but does not have all the attributes of real currency".[20]The US Department of Treasury also stated that, "Virtual currency does not have legal-tender status in any jurisdiction."[20] According to theEuropean Central Bank's 2015 "Virtual currency schemes – a further analysis" report, virtual currency is a digital representation of value, not issued by a central bank, credit institution or e-money institution, which, in some circumstances, can be used as an alternative to money.[21]In the previous report of October 2012, the virtual currency was defined as a type of unregulated, digital money, which is issued and usually controlled by its developers, and used and accepted among the members of a specific virtual community.[19] According to theBank for International Settlements' November 2015 "Digital currencies" report, it is an asset represented in digital form and having some monetary characteristics.[22]Digital currency can be denominated to a sovereign currency and issued by the issuer responsible to redeem digital money for cash. In that case, digital currency represents electronic money (e-money). Digital currency denominated in its own units of value or with decentralized or automatic issuance will be considered as a virtual currency. As such, bitcoin is a digital currency but also a type of virtual currency. bitcoin and its alternatives are based on cryptographic algorithms, so these kinds of virtual currencies are also called cryptocurrencies. 
Cryptocurrencyis a sub-type of digital currency and a digitalassetthat relies oncryptographyto chain togetherdigital signaturesof asset transfers,peer-to-peernetworking anddecentralization. In some cases aproof-of-workorproof-of-stakescheme is used to create and manage the currency.[23][24][25][26]Cryptocurrencies can allow electronic money systems to be decentralized. When implemented with a blockchain, the digital ledger system or record keeping system usescryptographyto edit separate shards of database entries that are distributed across many separate servers. The first and most popular system isbitcoin, a peer-to-peer electronic monetary system based on cryptography. Most of the traditionalmoney supplyisbank moneyheld on computers. This is considered digital currency in some cases. One could argue that our increasingly cashless society means that all currencies are becoming digital currencies, but they are not presented to us as such.[27] Currency can be exchanged electronically usingdebit cardsandcredit cardsusingelectronic funds transfer at point of sale. A number of electronic money systems usecontactless paymenttransfer in order to facilitate easy payment and give the payee more confidence in not letting go of their electronic wallet during the transaction. Acentral bank digital currency(CBDC) is a form of universally accessible digital money in a nation and holds the same value as the country's paper currency. Like acryptocurrency, a CBDC is held in the form of tokens. CBDCs are different from regular digital cash forms, such as balances in online bank accounts, because CBDCs are established through the central bank of a country, with liabilities held by the government rather than by a commercial bank.[36]Approximately nine countries have already[when?]established a CBDC, with interest in the system increasing rapidly throughout the world. In these nations, CBDCs have been used as a form of exchange and a way for governments to try to prevent risks from occurring within theirfinancial systems.[37] A major problem with central bank digital currencies is deciding whether the currency should be easily traceable. If it is traceable, the government has more control than it currently does. Additionally, there is a technical aspect to consider: whether CBDCs should be based on tokens or accounts and how much anonymity users should have.[38] Digital currency has been implemented in some cases as adecentralizedsystem of any combination of currencyissuance,ownershiprecord, ownership transferauthorizationandvalidation, and currency storage. Per theBank for International Settlements(BIS), "These schemes do not distinguish between users based on location, and therefore allow value to be transferred between users across borders. Moreover, the speed of a transaction is not conditional on the location of the payer and payee."[3] Since 2001, the European Union has implemented theE-Money Directive"on the taking up, pursuit and prudential supervision of the business of electronic money institutions", last amended in 2009.[39] In the United States, electronic money is governed by Article 4A of theUniform Commercial Codefor wholesale transactions and theElectronic Fund Transfer Actfor consumer transactions. The provider's responsibility and the consumer's liability are regulated under Regulation E.[40][41] Virtual currencies pose challenges for central banks, financial regulators, departments or ministries of finance, as well as fiscal authorities and statistical authorities. 
As of 2016, over 24 countries are investing in distributed ledger technologies (DLT) with $1.4bn in investments. In addition, over 90central banksare engaged in DLT discussions, including implications of acentral bank issued digital currency.[42] In March 2018, theMarshall Islandsbecame the first country to issue its own cryptocurrency and certify it as legal tender; the currency is called the "sovereign".[48] In 2015, the USCommodity Futures Trading Commission(CFTC) determined that virtual currencies are properly defined as commodities.[49]TheCFTCwarned investors againstpump and dumpschemes that use virtual currencies.[50] TheUS Internal Revenue Service(IRS) ruling Notice 2014-21[51]defines any virtual currency, cryptocurrency and digital currency as property; gains and losses are taxable within standard property policies. On 20 March 2013, the Financial Crimes Enforcement Network issued guidance to clarify how the U.S.Bank Secrecy Actapplied to persons creating, exchanging, and transmitting virtual currencies.[52] In May 2014 the USSecurities and Exchange Commission(SEC) "warned about the hazards of bitcoin and other virtual currencies".[53][54] In July 2014, theNew York State Department of Financial Servicesproposed the most comprehensive regulation of virtual currencies to date, commonly calledBitLicense. It gathered input from bitcoin supporters and the financial industry through public hearings and a comment period until 21 October 2014 to customize the rules. The proposal per NY DFS press release "sought to strike an appropriate balance that helps protect consumers and root out illegal activity".[55]It has been criticized by smaller companies for favoring established institutions, and Chinese bitcoin exchanges have complained that the rules are "overly broad in its application outside the United States".[56] TheBank of Canadahas explored the possibility of creating a version of its currency on the blockchain.[57] The Bank of Canada teamed up with the nation's five largest banks – and the blockchain consulting firm R3 – for what was known as Project Jasper. In a simulation run in 2016, the central bank issued CAD-Coins onto a blockchain similar toEthereum.[58]The banks used the CAD-Coins to exchange money the way they do at the end of each day to settle their master accounts.[58] In 2016, Fan Yifei, a deputy governor of China's central bank, thePeople's Bank of China(PBOC), wrote that "the conditions are ripe for digital currencies, which can reduce operating costs, increase efficiency and enable a wide range of new applications".[58]According to Fan Yifei, the best way to take advantage of the situation is for central banks to take the lead, both in supervising private digital currencies and in developing digital legal tender of their own.[59] In October 2019, the PBOC announced that a digitalrenminbiwould be released after years of preparation.[60]The version of the currency, known as DCEP (Digital Currency Electronic Payment),[61]is based oncryptocurrencywhich can be "decoupled" from the banking system.[62]The announcement received a variety of responses: some believe it is more about domestic control and surveillance.[63] In December 2020, the PBOC distributed CN¥20 million worth of digital renminbi to the residents ofSuzhouthrough a lottery program to further promote the government-backed digital currency. 
Recipients of the currency could make both offline and online purchases, expanding on an earlier trial that did not require internet connection through the inclusion of online stores in the program. Around 20,000 transactions were reported by the e-commerce companyJD.comin the first 24 hours of the trial. Contrary to otheronline paymentplatforms such asAlipayorWeChat Pay, the digital currency does not have transaction fees.[64] The Danish government proposed getting rid of the obligation for selected retailers to accept payment in cash, moving the country closer to a "cashless" economy.[65]The Danish Chamber of Commerce is backing the move.[66]Nearly a third of the Danish population usesMobilePay, a smartphone application for transferring money.[65] A law passed by theNational Assembly of Ecuadorgives the government permission to make payments in electronic currency and proposes the creation of a national digital currency. "Electronic money will stimulate the economy; it will be possible to attract more Ecuadorian citizens, especially those who do not have checking or savings accounts and credit cards alone. The electronic currency will be backed by the assets of the Central Bank of Ecuador", the National Assembly said in a statement.[67]In December 2015, Sistema de Dinero Electrónico ("electronic money system") was launched, making Ecuador the first country with a state-run electronic payment system.[68] On 9 June 2021, theLegislative Assembly of El Salvadorhas become the first country in the world to officially classifybitcoinaslegal currency. Starting 90 days after approval, every business must acceptbitcoinas legal tender for goods or services, unless it is unable to provide the technology needed to do the transaction.[69] TheDutch central bankis experimenting with a blockchain-based virtual currency called "DNBCoin".[58][70] The Unified Payments Interface (UPI) is a real-time payment system for instant money transfers between any two bank accounts held in participating banks in India. The interface has been developed by the National Payments Corporation of India and is regulated by the Reserve Bank of India. This digital payment system is available 24 hours a day, every day of the year. UPI is agnostic to the type of user and is used for person to person, person to business, business to person and business to business transactions. Transactions can be initiated by the payer or the payee. To identify a bank account it uses a unique Virtual Payment Address (VPA) of the type 'accountID@bankID'. The VPA can be assigned by the bank, but can also be self specified just like an email address. The simplest and most common form of VPA is 'mobilenumber@upi'. Money can be transferred from one VPA to another or from one VPA to any bank account in a participating bank using account number and bank branch details. Transfers can be inter-bank or intra-bank. UPI has no intermediate holding pond for money. It withdraws funds directly from the bank account of the sender and deposits them directly into the recipient's bank account whenever a transaction is requested. A sender can initiate and authorise a transfer using a two step secure process: login using a pass code → initiate → verify using a passcode. A receiver can initiate a payment request on the system to send the payer a notification or by presenting a QR code. On receiving the request, the payer can decline or confirm the payment using the same two step process: login → confirm → verify. 
The system is extraordinarily user friendly, to the extent that even technophobes and barely literate users are adopting it in huge numbers. Government-controlledSberbank of RussiaownsYooMoney– an electronic payment service and digital currency of the same name.[71] Swedenis in the process of replacing all of its physical banknotes, and most of its coins, by mid-2017.[needs update]However, the new banknotes and coins of theSwedish kronawill probably be circulating at about half the 2007 peak of 12,494 kronor per capita. TheRiksbankis planning to begin discussions of an electronic currency issued by the central bank, which "is not to replace cash, but to act as complement to it".[72]Deputy GovernorCecilia Skingsleystates that cash will continue to spiral out of use in Sweden, and while it is currently fairly easy to get cash in Sweden, it is often very difficult to deposit it into bank accounts, especially in rural areas. No decision has currently been made on whether to create an "e-krona". In her speech,[when?]Skingsley states: "The first question is whether e-krona should be booked in accounts or whether the ekrona should be some form of a digitally transferable unit that does not need an underlying account structure, roughly like cash." Skingsley also states: "Another important question is whether the Riksbank should issue e-krona directly to the general public or go via the banks, as we do now with banknotes and coins." Other questions will be addressed, such as interest rates: should they be positive, negative, or zero?[citation needed] In 2016, acity governmentfirst accepted digital currency in payment of city fees.Zug, Switzerland, added bitcoin as a means of paying small amounts, up to SFr 200, in a test and an attempt to position Zug as a region that is advancing future technologies. In order to reduce risk, Zug immediately converts any bitcoin received into the Swiss currency.[73]Swiss Federal Railways, the government-owned railway company of Switzerland, sells bitcoins at its ticket machines.[74] In 2016, the UK's chief scientific adviser,Sir Mark Walport, advised the government to consider using a blockchain-based digital currency.[75] The chief economist of theBank of England, the central bank of the United Kingdom, proposed the abolition of paper currency. The Bank has also taken an interest in blockchain.[58][76]In 2016 it embarked on a multi-year research programme to explore the implications of a central bank issued digital currency.[42]The Bank of England has produced several research papers on the topic. One suggests that the economic benefits of issuing a digital currency on a distributed ledger could add as much as 3 percent to a country's economic output.[58]The Bank said that it wanted the next version of the bank's basic software infrastructure to be compatible with distributed ledgers.[58]
"The first mover among these has beenFidelity Investments,BostonbasedFidelity Digital AssetsLLC will provide enterprise-grade custody solutions, a cryptocurrency trading execution platform and institutional advising services 24 hours a day, seven days a week designed to align with blockchain's always-on trading cycle".[77]It will work withbitcoinandEthereumwith general availability scheduled for 2019.[needs update] Hard electronic currency does not have the ability to be disputed or reversed when used. It is nearly impossible to reverse a transaction, justified or not. It is very similar to cash. Contrarily, soft electronic currency payments can be reversed. Usually, when a payment is reversed there is a "clearing time." A hard currency can be "softened" with a third-party service. Many existing digital currencies have not yet seen widespread usage, and may not be easily used or exchanged. Banks generally do not accept or offer services for them.[78]There are concerns that cryptocurrencies are extremely risky due to their very highvolatility[79]and potential forpump and dumpschemes.[80]Regulators in several countries have warned against their use and some have taken concrete regulatory measures to dissuade users.[81]The non-cryptocurrencies are allcentralized. As such, they may be shut down or seized by a government at any time.[82]The more anonymous a currency is, the more attractive it is to criminals, regardless of the intentions of its creators.[82]bitcoin has also been criticised for its energy inefficient SHA-256-basedproof of work.[83] According toBarry Eichengreen, an economist known for his work on monetary and financial economics, "cryptocurrencies like bitcoin are too volatile to possess the essential attributes of money. Stablecoins have fragile currency pegs that diminish their utility in transactions. And central bank digital currencies are a solution in search of a problem."[84]
https://en.wikipedia.org/wiki/Electronic_money
Inautomata theory,sequential logicis a type oflogic circuitwhose output depends on the present value of its input signals and on thesequenceof past inputs, the input history.[1][2][3][4]This is in contrast tocombinational logic, whose output is a function of only the present input. That is, sequential logic hasstate(memory) while combinational logic does not. Sequential logic is used to constructfinite-state machines, a basic building block in all digital circuitry. Virtually all circuits in practical digital devices are a mixture of combinational and sequential logic. A familiar example of a device with sequential logic is atelevision setwith "channel up" and "channel down" buttons.[1]Pressing the "up" button gives the television an input telling it to switch to the next channel above the one it is currently receiving. If the television is on channel 5, pressing "up" switches it to receive channel 6. However, if the television is on channel 8, pressing "up" switches it to channel "9". In order for the channel selection to operate correctly, the television must be aware of which channel it is currently receiving, which was determined by past channel selections.[1]The television stores the current channel as part of itsstate. When a "channel up" or "channel down" input is given to it, the sequential logic of the channel selection circuitry calculates the new channel from the input and the current channel. Digital sequential logic circuits are divided intosynchronousandasynchronoustypes. In synchronous sequential circuits, the state of the device changes only at discrete times in response to aclock signal. In asynchronous circuits the state of the device can change at any time in response to changing inputs. Nearly all sequential logic today isclockedorsynchronouslogic. In a synchronous circuit, anelectronic oscillatorcalled aclock(orclock generator) generates a sequence of repetitive pulses called theclock signalwhich is distributed to all the memory elements in the circuit. The basic memory element in synchronous logic is theflip-flop. The output of each flip-flop only changes when triggered by the clock pulse, so changes to the logic signals throughout the circuit all begin at the same time, at regular intervals, synchronized by the clock. The output of all the storage elements (flip-flops) in the circuit at any given time, the binary data they contain, is called thestateof the circuit. The state of the synchronous circuit only changes on clock pulses. At each cycle, the next state is determined by the current state and the value of the input signals when the clock pulse occurs. The main advantage of synchronous logic is its simplicity. The logic gates which perform the operations on the data require a finite amount of time to respond to changes to their inputs. This is calledpropagation delay. The interval between clock pulses must be long enough so that all the logic gates have time to respond to the changes and their outputs "settle" to stable logic values before the next clock pulse occurs. As long as this condition is met (ignoring certain other details) the circuit is guaranteed to be stable and reliable. This determines the maximum operating speed of the synchronous circuit. Synchronous logic has two main disadvantages: Asynchronous(clocklessorself-timed)sequential logicis not synchronized by a clock signal; the outputs of the circuit change directly in response to changes in inputs. 
The advantage of asynchronous logic is that it can be faster than synchronous logic, because the circuit doesn't have to wait for a clock signal to process inputs. The speed of the device is potentially limited only by thepropagation delaysof thelogic gatesused. However, asynchronous logic is more difficult to design and is subject to problems not encountered in synchronous designs. The main problem is that digital memory elements are sensitive to the order that their input signals arrive; if two signals arrive at aflip-flopor latch at almost the same time, which state the circuit goes into can depend on which signal gets to the gate first. Therefore, the circuit can go into the wrong state, depending on small differences in thepropagation delaysof the logic gates. This is called arace condition. This problem is not as severe in synchronous circuits because the outputs of the memory elements only change at each clock pulse. The interval between clock signals is designed to be long enough to allow the outputs of the memory elements to "settle" so they are not changing when the next clock comes. Therefore, the only timing problems are due to "asynchronous inputs"; inputs to the circuit from other systems which are not synchronized to the clock signal. Asynchronous sequential circuits are typically used only in a few critical parts of otherwise synchronous systems where speed is at a premium, such as parts of microprocessors anddigital signal processingcircuits. The design of asynchronous logic uses different mathematical models and techniques from synchronous logic, and is an active area of research.
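As a toy illustration of the ideas above, the following Python sketch (a behavioural model of my own, not hardware description code from any source) implements the television channel-selection example discussed earlier as a clocked state machine: the stored channel is the state, and it changes only when the clock "ticks" with the current input.

```python
class ChannelSelector:
    """Toy synchronous sequential circuit: the next state depends on the
    current state (the stored channel) and the present input, and the
    state only changes on a clock tick."""

    def __init__(self, lowest=1, highest=99, start=5):
        self.lowest, self.highest = lowest, highest
        self.channel = start                 # the circuit's state (memory)

    def tick(self, button=None):
        """One clock cycle: sample the input and update the state."""
        if button == "up" and self.channel < self.highest:
            self.channel += 1
        elif button == "down" and self.channel > self.lowest:
            self.channel -= 1
        return self.channel                  # output equals the stored state

tv = ChannelSelector(start=5)
print(tv.tick("up"))      # 6 - the output depends on the previous state, not the input alone
print(tv.tick("up"))      # 7
print(tv.tick("down"))    # 6
```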
https://en.wikipedia.org/wiki/Sequential_logic
Titan Rainwas a series of coordinated attacks oncomputersystems in theUnited Statesbeginning in 2003; they were known to have been ongoing for at least three years.[1]The attacks originated inGuangdong,China.[2]The activity is believed to be associated with a state-sponsoredadvanced persistent threat. It was given the designationTitan Rainby thefederal government of the United States. Titan Rain hackers gained access to many United Statesdefense contractorcomputer networks, which were targeted for their sensitive information,[1]including those atLockheed Martin,Sandia National Laboratories,Redstone Arsenal, andNASA. The attacks are reported to be the result of actions byPeople's Liberation ArmyUnit 61398.[3]These hackers attacked both the US government (Defense Intelligence Agency) and the UK government (Ministry of Defence). In 2006, an "organised Chinese hacking group" shut down a part of the UK House of Commons computer system.[4]The Chinese government has denied responsibility. The U.S. government has blamed the Chinese government for the 2004 attacks.Alan Paller,SANS Instituteresearch director, stated that the attacks came from individuals with "intense discipline" and that "no other organization could do this if they were not a military". Such sophistication has pointed toward the People's Liberation Army as the attackers.[5] Titan Rain reportedly attacked multiple organizations, such as NASA and theFBI. Although no classified information was reported stolen, the hackers were able to steal unclassified information (e.g., information from a home computer) that could reveal strengths and weaknesses of the United States.[6] Titan Rain has also caused distrust between other countries (such as the United Kingdom andRussia) and China. The United Kingdom has stated officially that Chinese hackers attacked its governmental offices. Titan Rain has caused the rest of the world to be more cautious of attacks not just from China but from other countries as well.
https://en.wikipedia.org/wiki/Titan_Rain
In cryptography, apadding oracle attackis an attack which uses thepaddingvalidation of a cryptographic message to decrypt the ciphertext. In cryptography, variable-length plaintext messages often have to be padded (expanded) to be compatible with the underlyingcryptographic primitive. The attack relies on having a "paddingoracle" who freely responds to queries about whether a message is correctly padded or not. The information could be directly given, or leaked through aside-channel. The earliest well-known attack that uses a padding oracle isBleichenbacher's attackof 1998, which attacksRSAwithPKCS #1 v1.5padding.[1]The term "padding oracle" appeared in literature in 2002,[2]afterSerge Vaudenay's attack on theCBC mode decryptionused within symmetricblock ciphers.[3]Variants of both attacks continue to find success more than one decade after their original publication.[1][4][5] In 1998,Daniel Bleichenbacherpublished a seminal paper on what became known asBleichenbacher's attack(also known as "million message attack"). The attack uses a padding oracle againstRSAwithPKCS #1 v1.5padding, but it does not include the term. Later authors have classified his attack as a padding oracle attack.[1] Manger (2001) reports an attack on the replacement for PKCS #1 v1.5 padding, PKCS #1 v2.0 "OAEP".[6] In symmetric cryptography, the paddingoracle attackcan be applied to theCBC mode of operation. Leaked data on padding validity can allow attackers to decrypt (and sometimes encrypt) messages through the oracle using the oracle's key, without knowing the encryption key. Compared to Bleichenbacher's attack on RSA with PKCS #1 v1.5, Vaudenay's attack on CBC is much more efficient.[1]Both attacks target crypto systems commonly used for the time: CBC is the original mode used inSecure Sockets Layer(SSL) and had continued to be supported in TLS.[4] A number of mitigations have been performed to prevent the decryption software from acting as an oracle, but newerattacks based on timinghave repeatedly revived this oracle. TLS 1.2 introduces a number ofauthenticated encryption with additional datamodes that do not rely on CBC.[4] The standard implementation of CBC decryption in block ciphers is to decrypt all ciphertext blocks, validate the padding, remove thePKCS7 padding, and return the message's plaintext. If the server returns an "invalid padding" error instead of a generic "decryption failed" error, the attacker can use the server as a padding oracle to decrypt (and sometimes encrypt) messages. The mathematical formula for CBC decryption is As depicted above, CBC decryption XORs each plaintext block with the previous block. As a result, a single-byte modification in blockC1{\displaystyle C_{1}}will make a corresponding change to a single byte inP2{\displaystyle P_{2}}. Suppose the attacker has two ciphertext blocksC1,C2{\displaystyle C_{1},C_{2}}and wants to decrypt the second block to get plaintextP2{\displaystyle P_{2}}. The attacker changes the last byte ofC1{\displaystyle C_{1}}(creatingC1′{\displaystyle C_{1}'}) and sends(IV,C1′,C2){\displaystyle (IV,C_{1}',C_{2})}to the server. The server then returns whether or not the padding of the last decrypted block (P2′{\displaystyle P_{2}'}) is correct (a valid PKCS#7 padding). If the padding is correct, the attacker now knows that the last byte ofDK(C2)⊕C1′{\displaystyle D_{K}(C_{2})\oplus C_{1}'}is0x01{\displaystyle \mathrm {0x01} }, the last two bytes are 0x02, the last three bytes are 0x03, …, or the last eight bytes are 0x08. 
The attacker can modify the second-last byte (flip any bit) to ensure that the last byte is 0x01. (Alternatively, the attacker can flip earlier bytes andbinary searchfor the position to identify the padding. For example, if modifying the third-last byte is correct, but modifying the second-last byte is incorrect, then the last two bytes are known to be 0x02, allowing both of them to be decrypted.) Therefore, the last byte ofDK(C2){\displaystyle D_{K}(C_{2})}equalsC1′⊕0x01{\displaystyle C_{1}'\oplus \mathrm {0x01} }. If the padding is incorrect, the attacker can change the last byte ofC1′{\displaystyle C_{1}'}to the next possible value. At most, the attacker will need to make 256 attempts to find the last byte ofP2{\displaystyle P_{2}}, 255 attempts for every possible byte (256 possible, minus one bypigeonhole principle), plus one additional attempt to eliminate an ambiguous padding.[7] After determining the last byte ofP2{\displaystyle P_{2}}, the attacker can use the same technique to obtain the second-to-last byte ofP2{\displaystyle P_{2}}. The attacker sets the last byte ofP2{\displaystyle P_{2}}to0x02{\displaystyle \mathrm {0x02} }by setting the last byte ofC1{\displaystyle C_{1}}toDK(C2)⊕0x02{\displaystyle D_{K}(C_{2})\oplus \mathrm {0x02} }. The attacker then uses the same approach described above, this time modifying the second-to-last byte until the padding is correct (0x02, 0x02). If a block consists of 128 bits (AES, for example), which is 16 bytes, the attacker will obtain plaintextP2{\displaystyle P_{2}}in no more than 256⋅16 = 4096 attempts. This is significantly faster than the2128{\displaystyle 2^{128}}attempts required to bruteforce a 128-bit key. CBC-R[8]turns a decryption oracle into an encryption oracle, and is primarily demonstrated against padding oracles. Using padding oracle attack CBC-R can craft an initialization vector and ciphertext block for any plaintext: To generate a ciphertext that isNblocks long, attacker must performNnumbers of padding oracle attacks. These attacks are chained together so that proper plaintext is constructed in reverse order, from end of message (CN) to beginning message (C0, IV). In each step, padding oracle attack is used to construct the IV to the previous chosen ciphertext. The CBC-R attack will not work against an encryption scheme that authenticates ciphertext (using amessage authentication codeor similar) before decrypting. The original attack against CBC was published in 2002 bySerge Vaudenay.[3]Concrete instantiations of the attack were later realised against SSL[9]and IPSec.[10][11]It was also applied to severalweb frameworks, includingJavaServer Faces,Ruby on Rails[12]andASP.NET[13][14][15]as well as other software, such as theSteamgaming client.[16]In 2012 it was shown to be effective againstPKCS 11cryptographic tokens.[1] While these earlier attacks were fixed by mostTLSimplementors following its public announcement, a new variant, theLucky Thirteen attack, published in 2013, used a timing side-channel to re-open the vulnerability even in implementations that had previously been fixed. As of early 2014, the attack is no longer considered a threat in real-life operation, though it is still workable in theory (seesignal-to-noise ratio) against a certain class of machines. 
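A sketch of the byte-at-a-time recovery described above follows. It assumes only an abstract callable oracle(prev_block, target_block) that reports whether the server accepted the padding when target_block is decrypted with prev_block as the preceding ciphertext block (for instance, a wrapper around the vulnerable server sketched earlier); the names and interface are illustrative assumptions.

# Byte-at-a-time recovery of one block, given any padding oracle
# oracle(prev_block, target_block) -> bool.
def recover_block(oracle, c1: bytes, c2: bytes, block: int = 16) -> bytes:
    inter = bytearray(block)            # will hold D_K(C2)
    for pad in range(1, block + 1):     # target padding value 0x01, 0x02, ...
        pos = block - pad               # byte position being attacked
        # Force the already-recovered tail bytes to decrypt to the padding value.
        forged = bytearray(block)
        for i in range(pos + 1, block):
            forged[i] = inter[i] ^ pad
        for guess in range(256):
            forged[pos] = guess
            if oracle(bytes(forged), c2):
                # For the last byte, rule out an accidental longer padding
                # (e.g. ... 0x02 0x02) by also flipping the second-to-last byte.
                if pad == 1:
                    forged[pos - 1] ^= 0xFF
                    still_valid = oracle(bytes(forged), c2)
                    forged[pos - 1] ^= 0xFF
                    if not still_valid:
                        continue
                inter[pos] = guess ^ pad    # one more byte of D_K(C2) recovered
                break
    # Plaintext block P2 = D_K(C2) XOR C1.
    return bytes(i ^ c for i, c in zip(inter, c1))

With the earlier sketch, a call such as recover_block(lambda prev, tgt: oracle(key, prev, tgt), c1, c2) yields P2 in at most about 256·16 = 4096 oracle queries, matching the figure given above; repeating the procedure block by block decrypts the whole message.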
As of 2015, the most active area of development for attacks on cryptographic protocols used to secure Internet traffic was downgrade attacks, such as the Logjam[17] and Export RSA/FREAK[18] attacks, which trick clients into using less-secure cryptographic operations provided for compatibility with legacy clients when more secure ones are available. An attack called POODLE[19] (late 2014) combines a downgrade attack (to SSL 3.0) with a padding oracle attack on the older, insecure protocol to compromise the transmitted data. In May 2016 it was revealed in CVE-2016-2107 that the fix against Lucky Thirteen in OpenSSL had introduced another timing-based padding oracle.[20][21]
https://en.wikipedia.org/wiki/Padding_oracle_attack
As the 32-bit Intel architecture became the dominant computing platform during the 1980s and 1990s, multiple companies tried to build microprocessors compatible with that Intel instruction set architecture. Most of these companies were not successful in the mainstream computing market. So far, only AMD has maintained a market presence for more than a couple of product generations. Cyrix was successful during the 386 and 486 generations of products but did not do well after the Pentium was introduced. List of former IA-32 compatible microprocessor vendors:
https://en.wikipedia.org/wiki/List_of_former_IA-32_compatible_processor_manufacturers
Inlogicandcomputer science, theBoolean satisfiability problem(sometimes calledpropositional satisfiability problemand abbreviatedSATISFIABILITY,SATorB-SAT) asks whether there exists aninterpretationthatsatisfiesa givenBooleanformula. In other words, it asks whether the formula's variables can be consistently replaced by the values TRUE or FALSE to make the formula evaluate to TRUE. If this is the case, the formula is calledsatisfiable, elseunsatisfiable. For example, the formula "aAND NOTb" is satisfiable because one can find the valuesa= TRUE andb= FALSE, which make (aAND NOTb) = TRUE. In contrast, "aAND NOTa" is unsatisfiable. SAT is the first problem that was proven to beNP-complete—this is theCook–Levin theorem. This means that all problems in the complexity classNP, which includes a wide range of natural decision and optimization problems, are at most as difficult to solve as SAT. There is no known algorithm that efficiently solves each SAT problem (where "efficiently" informally means "deterministically in polynomial time"), and it is generally believed that no such algorithm exists, but this belief has not been proven mathematically, and resolving the question of whether SAT has apolynomial-timealgorithm is equivalent to theP versus NP problem, which is a famous open problem in the theory of computing. Nevertheless, as of 2007, heuristic SAT-algorithms are able to solve problem instances involving tens of thousands of variables and formulas consisting of millions of symbols,[1]which is sufficient for many practical SAT problems from, e.g.,artificial intelligence,circuit design,[2]andautomatic theorem proving. Apropositional logicformula, also calledBoolean expression, is built fromvariables, operators AND (conjunction, also denoted by ∧), OR (disjunction, ∨), NOT (negation, ¬), and parentheses. A formula is said to besatisfiableif it can be made TRUE by assigning appropriatelogical values(i.e. TRUE, FALSE) to its variables. TheBoolean satisfiability problem(SAT) is, given a formula, to check whether it is satisfiable. Thisdecision problemis of central importance in many areas ofcomputer science, includingtheoretical computer science,complexity theory,[3][4]algorithmics,cryptography[5][6]andartificial intelligence.[7][additional citation(s) needed] Aliteralis either a variable (in which case it is called apositive literal) or the negation of a variable (called anegative literal). Aclauseis a disjunction of literals (or a single literal). A clause is called aHorn clauseif it contains at most one positive literal. A formula is inconjunctive normal form(CNF) if it is a conjunction of clauses (or a single clause). For example,x1is a positive literal,¬x2is a negative literal, andx1∨ ¬x2is a clause. The formula(x1∨ ¬x2) ∧ (¬x1∨x2∨x3) ∧ ¬x1is in conjunctive normal form; its first and third clauses are Horn clauses, but its second clause is not. The formula is satisfiable, by choosingx1= FALSE,x2= FALSE, andx3arbitrarily, since (FALSE ∨ ¬FALSE) ∧ (¬FALSE ∨ FALSE ∨x3) ∧ ¬FALSE evaluates to (FALSE ∨ TRUE) ∧ (TRUE ∨ FALSE ∨x3) ∧ TRUE, and in turn to TRUE ∧ TRUE ∧ TRUE (i.e. to TRUE). In contrast, the CNF formulaa∧ ¬a, consisting of two clauses of one literal, is unsatisfiable, since fora=TRUE ora=FALSE it evaluates to TRUE ∧ ¬TRUE (i.e., FALSE) or FALSE ∧ ¬FALSE (i.e., again FALSE), respectively. For some versions of the SAT problem, it is useful to define the notion of ageneralized conjunctive normal formformula, viz. 
as a conjunction of arbitrarily manygeneralized clauses, the latter being of the formR(l1,...,ln)for someBoolean functionRand (ordinary) literalsli. Different sets of allowed Boolean functions lead to different problem versions. As an example,R(¬x,a,b) is a generalized clause, andR(¬x,a,b) ∧R(b,y,c) ∧R(c,d,¬z) is a generalized conjunctive normal form. This formula is usedbelow, withRbeing the ternary operator that is TRUE just when exactly one of its arguments is. Using the laws ofBoolean algebra, every propositional logic formula can be transformed into an equivalent conjunctive normal form, which may, however, be exponentially longer. For example, transforming the formula (x1∧y1) ∨ (x2∧y2) ∨ ... ∨ (xn∧yn) into conjunctive normal form yields while the former is a disjunction ofnconjunctions of 2 variables, the latter consists of 2nclauses ofnvariables. However, with use of theTseytin transformation, we may find an equisatisfiable conjunctive normal form formula with length linear in the size of the original propositional logic formula. SAT was the first problem known to beNP-complete, as proved byStephen Cookat theUniversity of Torontoin 1971[8]and independently byLeonid Levinat theRussian Academy of Sciencesin 1973.[9]Until that time, the concept of an NP-complete problem did not even exist. The proof shows how every decision problem in thecomplexity classNPcan bereducedto the SAT problem for CNF[a]formulas, sometimes calledCNFSAT. A useful property of Cook's reduction is that it preserves the number of accepting answers. For example, deciding whether a givengraphhas a3-coloringis another problem in NP; if a graph has 17 valid 3-colorings, then the SAT formula produced by the Cook–Levin reduction will have 17 satisfying assignments. NP-completeness only refers to the run-time of the worst case instances. Many of the instances that occur in practical applications can be solved much more quickly. See§Algorithms for solving SATbelow. Like the satisfiability problem for arbitrary formulas, determining the satisfiability of a formula in conjunctive normal form where each clause is limited to at most three literals is NP-complete also; this problem is called3-SAT,3CNFSAT, or3-satisfiability. To reduce the unrestricted SAT problem to 3-SAT, transform each clausel1∨ ⋯ ∨lnto a conjunction ofn- 2clauses wherex2, ⋯ ,xn−2arefresh variablesnot occurring elsewhere. Although the two formulas are notlogically equivalent, they areequisatisfiable. The formula resulting from transforming all clauses is at most 3 times as long as its original; that is, the length growth is polynomial.[10] 3-SAT is one ofKarp's 21 NP-complete problems, and it is used as a starting point for proving that other problems are alsoNP-hard.[b]This is done bypolynomial-time reductionfrom 3-SAT to the other problem. An example of a problem where this method has been used is theclique problem: given a CNF formula consisting ofcclauses, the correspondinggraphconsists of a vertex for each literal, and an edge between each two non-contradicting[c]literals from different clauses; see the picture. 
The graph has ac-clique if and only if the formula is satisfiable.[11] There is a simple randomized algorithm due to Schöning (1999) that runs in time (4/3)nwherenis the number of variables in the 3-SAT proposition, and succeeds with high probability to correctly decide 3-SAT.[12] Theexponential time hypothesisasserts that no algorithm can solve 3-SAT (or indeedk-SAT for anyk> 2) inexp(o(n))time (that is, fundamentally faster than exponential inn). Selman, Mitchell, and Levesque (1996) give empirical data on the difficulty of randomly generated 3-SAT formulas, depending on their size parameters. Difficulty is measured in number recursive calls made by aDPLL algorithm. They identified a phase transition region from almost-certainly-satisfiable to almost-certainly-unsatisfiable formulas at the clauses-to-variables ratio at about 4.26.[13] 3-satisfiability can be generalized tok-satisfiability(k-SAT, alsok-CNF-SAT), when formulas in CNF are considered with each clause containing up tokliterals.[citation needed]However, since for anyk≥ 3, this problem can neither be easier than 3-SAT nor harder than SAT, and the latter two are NP-complete, so must be k-SAT. Some authors restrict k-SAT to CNF formulas withexactly k literals.[citation needed]This does not lead to a different complexity class either, as each clausel1∨ ⋯ ∨ljwithj<kliterals can be padded with fixed dummy variables tol1∨ ⋯ ∨lj∨dj+1∨ ⋯ ∨dk. After padding all clauses, 2k–1 extra clauses[d]must be appended to ensure that onlyd1= ⋯ =dk= FALSEcan lead to a satisfying assignment. Sincekdoes not depend on the formula length, the extra clauses lead to a constant increase in length. For the same reason, it does not matter whetherduplicate literalsare allowed in clauses, as in¬x∨ ¬y∨ ¬y. Conjunctive normal form (in particular with 3 literals per clause) is often considered the canonical representation for SAT formulas. As shown above, the general SAT problem reduces to 3-SAT, the problem of determining satisfiability for formulas in this form. SAT is trivial if the formulas are restricted to those indisjunctive normal form, that is, they are a disjunction of conjunctions of literals. Such a formula is indeed satisfiable if and only if at least one of its conjunctions is satisfiable, and a conjunction is satisfiable if and only if it does not contain bothxand NOTxfor some variablex. This can be checked in linear time. Furthermore, if they are restricted to being infull disjunctive normal form, in which every variable appears exactly once in every conjunction, they can be checked in constant time (each conjunction represents one satisfying assignment). But it can take exponential time and space to convert a general SAT problem to disjunctive normal form; to obtain an example, exchange "∧" and "∨" in theaboveexponential blow-up example for conjunctive normal forms. Another NP-complete variant of the 3-satisfiability problem is theone-in-three 3-SAT(also known variously as1-in-3-SATandexactly-1 3-SAT). Given a conjunctive normal form with three literals per clause, the problem is to determine whether there exists a truth assignment to the variables so that each clause hasexactlyone TRUE literal (and thus exactly two FALSE literals). Another variant is thenot-all-equal 3-satisfiabilityproblem (also calledNAE3SAT). Given a conjunctive normal form with three literals per clause, the problem is to determine if an assignment to the variables exists such that in no clause all three literals have the same truth value. 
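A minimal sketch of Schöning's random-walk procedure mentioned above is given next: start from a random assignment and, while some clause is unsatisfied, flip a randomly chosen variable of a randomly chosen unsatisfied clause. Clauses are written in a DIMACS-like convention (the integer 3 means x3, −3 means ¬x3), and the fixed number of restarts is an illustrative simplification; the (4/3)^n guarantee requires a number of independent restarts that grows with n.

import random

def schoening_3sat(clauses, n_vars, tries=1000):
    # Schöning's random-walk heuristic for 3-SAT (a sketch, not a decision procedure).
    def unsatisfied(assignment):
        return [c for c in clauses
                if not any(assignment[abs(l)] == (l > 0) for l in c)]

    for _ in range(tries):
        assignment = {v: random.choice([False, True]) for v in range(1, n_vars + 1)}
        for _ in range(3 * n_vars):
            bad = unsatisfied(assignment)
            if not bad:
                return assignment          # satisfying assignment found
            lit = random.choice(random.choice(bad))
            assignment[abs(lit)] = not assignment[abs(lit)]  # flip one variable
    return None                            # "probably unsatisfiable"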
This problem is NP-complete, too, even if no negation symbols are admitted, by Schaefer's dichotomy theorem.[14] A 3-SAT formula isLinear SAT(LSAT) if each clause (viewed as a set of literals) intersects at most one other clause, and, moreover, if two clauses intersect, then they have exactly one literal in common. An LSAT formula can be depicted as a set of disjoint semi-closed intervals on a line. Deciding whether an LSAT formula is satisfiable is NP-complete.[15] SAT is easier if the number of literals in a clause is limited to at most 2, in which case the problem is called2-SAT. This problem can be solved in polynomial time, and in fact iscompletefor the complexity classNL. If additionally all OR operations in literals are changed toXORoperations, then the result is calledexclusive-or 2-satisfiability, which is a problem complete for the complexity classSL=L. The problem of deciding the satisfiability of a given conjunction ofHorn clausesis calledHorn-satisfiability, orHORN-SAT. It can be solved in polynomial time by a single step of theunit propagationalgorithm, which produces the single minimal model of the set of Horn clauses (w.r.t. the set of literals assigned to TRUE). Horn-satisfiability isP-complete. It can be seen asP'sversion of the Boolean satisfiability problem. Also, deciding the truth of quantified Horn formulas can be done in polynomial time.[16] Horn clauses are of interest because they are able to expressimplicationof one variable from a set of other variables. Indeed, one such clause ¬x1∨ ... ∨ ¬xn∨ycan be rewritten asx1∧ ... ∧xn→y; that is, ifx1,...,xnare all TRUE, thenymust be TRUE as well. A generalization of the class of Horn formulas is that of renameable-Horn formulae, which is the set of formulas that can be placed in Horn form by replacing some variables with their respective negation. For example, (x1∨ ¬x2) ∧ (¬x1∨x2∨x3) ∧ ¬x1is not a Horn formula, but can be renamed to the Horn formula (x1∨ ¬x2) ∧ (¬x1∨x2∨ ¬y3) ∧ ¬x1by introducingy3as negation ofx3. In contrast, no renaming of (x1∨ ¬x2∨ ¬x3) ∧ (¬x1∨x2∨x3) ∧ ¬x1leads to a Horn formula. Checking the existence of such a replacement can be done in linear time; therefore, the satisfiability of such formulae is in P as it can be solved by first performing this replacement and then checking the satisfiability of the resulting Horn formula. Another special case is the class of problems where each clause contains XOR (i.e.exclusive or) rather than (plain) OR operators.[e]This is inP, since an XOR-SAT formula can also be viewed as a system of linear equations mod 2, and can be solved in cubic time byGaussian elimination;[17]see the box for an example. This recast is based on thekinship between Boolean algebras and Boolean rings, and the fact that arithmetic modulo two forms afinite field. SinceaXORbXORcevaluates to TRUE if and only if exactly 1 or 3 members of {a,b,c} are TRUE, each solution of the 1-in-3-SAT problem for a given CNF formula is also a solution of the XOR-3-SAT problem, and in turn each solution of XOR-3-SAT is a solution of 3-SAT; see the picture. As a consequence, for each CNF formula, it is possible to solve the XOR-3-SAT problem defined by the formula, and based on the result infer either that the 3-SAT problem is solvable or that the 1-in-3-SAT problem is unsolvable. Provided that thecomplexity classes P and NP are not equal, neither 2-, nor Horn-, nor XOR-satisfiability is NP-complete, unlike SAT. 
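The recasting of XOR-SAT as linear algebra over GF(2) mentioned above can be made concrete with a short sketch. Each clause is read as an equation mod 2 (a negative literal contributes the constant 1), and ordinary Gaussian elimination decides consistency; the clause encoding used here is an assumption chosen for illustration, not a standard input format.

def xor_sat(clauses, n_vars):
    # Decide an XOR-SAT instance by Gaussian elimination over GF(2).
    # Each clause is a list of literals joined by XOR that must evaluate to TRUE;
    # a negative literal -v stands for (NOT x_v) = x_v XOR 1.
    rows = []
    for clause in clauses:
        coeffs = [0] * n_vars
        rhs = 1                       # the whole clause must be TRUE
        for lit in clause:
            coeffs[abs(lit) - 1] ^= 1
            if lit < 0:
                rhs ^= 1              # move the negation's constant to the right side
        rows.append((coeffs, rhs))

    pivot_row = 0
    for col in range(n_vars):
        # Find a row with a 1 in this column to use as the pivot.
        for r in range(pivot_row, len(rows)):
            if rows[r][0][col]:
                rows[pivot_row], rows[r] = rows[r], rows[pivot_row]
                break
        else:
            continue
        pc, prhs = rows[pivot_row]
        for r in range(len(rows)):
            if r != pivot_row and rows[r][0][col]:
                rc, rrhs = rows[r]
                rows[r] = ([a ^ b for a, b in zip(rc, pc)], rrhs ^ prhs)
        pivot_row += 1
    # Inconsistent iff some row reduced to "0 = 1".
    return all(any(c) or rhs == 0 for c, rhs in rows)

For instance, xor_sat([[1, 2], [-2, 3]], 3) returns True, while xor_sat([[1, 2], [1, -2]], 2) returns False, since the latter encodes both x1 XOR x2 = TRUE and x1 XOR x2 = FALSE.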
The restrictions above (CNF, 2CNF, 3CNF, Horn, XOR-SAT) bound the considered formulae to be conjunctions of subformulas; each restriction states a specific form for all subformulas: for example, only binary clauses can be subformulas in 2CNF. Schaefer's dichotomy theorem states that, for any restriction to Boolean functions that can be used to form these subformulas, the corresponding satisfiability problem is in P or NP-complete. The membership in P of the satisfiability of 2CNF, Horn, and XOR-SAT formulae are special cases of this theorem.[14] The following table summarizes some common variants of SAT. An extension that has gained significant popularity since 2003 issatisfiability modulo theories(SMT) that can enrich CNF formulas with linear constraints, arrays, all-different constraints,uninterpreted functions,[18]etc. Such extensions typically remain NP-complete, but very efficient solvers are now available that can handle many such kinds of constraints. The satisfiability problem becomes more difficult if both "for all" (∀) and "there exists" (∃)quantifiersare allowed to bind the Boolean variables. An example of such an expression would be∀x∀y∃z(x∨y∨z) ∧ (¬x∨ ¬y∨ ¬z); it is valid, since for all values ofxandy, an appropriate value ofzcan be found, viz.z=TRUE if bothxandyare FALSE, andz=FALSE else. SAT itself (tacitly) uses only ∃ quantifiers. If only ∀ quantifiers are allowed instead, the so-calledtautologyproblemis obtained, which isco-NP-complete. If any number of both quantifiers are allowed, the problem is called thequantified Boolean formula problem(QBF), which can be shown to bePSPACE-complete. It is widely believed that PSPACE-complete problems are strictly harder than any problem in NP, although this has not yet been proved. Using highly parallelP systems, QBF-SAT problems can be solved in linear time.[19] Ordinary SAT asks if there is at least one variable assignment that makes the formula true. A variety of variants deal with the number of such assignments: Other generalizations include satisfiability forfirst- andsecond-order logic,constraint satisfaction problems,0-1 integer programming. While SAT is adecision problem, thesearch problemof finding a satisfying assignment reduces to SAT. That is, each algorithm which correctly answers whether an instance of SAT is solvable can be used to find a satisfying assignment. First, the question is asked on the given formula Φ. If the answer is "no", the formula is unsatisfiable. Otherwise, the question is asked on the partly instantiated formula Φ{x1=TRUE}, that is, Φ with the first variablex1replaced by TRUE, and simplified accordingly. If the answer is "yes", thenx1=TRUE, otherwisex1=FALSE. Values of other variables can be found subsequently in the same way. In total,n+1 runs of the algorithm are required, wherenis the number of distinct variables in Φ. This property is used in several theorems in complexity theory: Since the SAT problem is NP-complete, only algorithms with exponential worst-case complexity are known for it. In spite of this, efficient and scalable algorithms for SAT were developed during the 2000s and have contributed to dramatic advances in the ability to automatically solve problem instances involving tens of thousands of variables and millions of constraints (i.e. 
clauses).[1] Examples of such problems in electronic design automation (EDA) include formal equivalence checking, model checking, formal verification of pipelined microprocessors,[18] automatic test pattern generation, routing of FPGAs,[26] planning, and scheduling problems. A SAT-solving engine is also considered an essential component of the electronic design automation toolbox. Major techniques used by modern SAT solvers include the Davis–Putnam–Logemann–Loveland algorithm (DPLL), conflict-driven clause learning (CDCL), and stochastic local search algorithms such as WalkSAT. Almost all SAT solvers include time-outs, so they will terminate in reasonable time even if they cannot find a solution. Different SAT solvers find different instances easy or hard; some excel at proving unsatisfiability, others at finding solutions. Recent attempts have been made to learn an instance's satisfiability using deep learning techniques.[27] SAT solvers are developed and compared in SAT-solving contests.[28] Modern SAT solvers are also having a significant impact on the fields of software verification, constraint solving in artificial intelligence, and operations research, among others.
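To make the DPLL technique named above concrete, here is a minimal, unoptimized sketch that alternates unit propagation with branching on an arbitrarily chosen variable, with clauses again in the DIMACS-like integer convention. It is an illustration only; production solvers add conflict-driven clause learning, branching heuristics, watched literals and restarts.

def dpll(clauses, assignment=None):
    # A minimal DPLL sketch: unit propagation plus case splitting.
    # Clauses use the DIMACS-like integer convention (3 means x3, -3 means NOT x3).
    def simplify(cnf, lit):
        # Assume `lit` is TRUE: drop satisfied clauses, shorten the others.
        return [[x for x in c if x != -lit] for c in cnf if lit not in c]

    if assignment is None:
        assignment = {}
    if any(not c for c in clauses):
        return None                         # an empty clause cannot be satisfied
    # Unit propagation: a one-literal clause forces that literal.
    while True:
        units = [c[0] for c in clauses if len(c) == 1]
        if not units:
            break
        lit = units[0]
        assignment[abs(lit)] = lit > 0
        clauses = simplify(clauses, lit)
        if any(not c for c in clauses):
            return None                     # conflict derived during propagation
    if not clauses:
        return assignment                   # every clause satisfied
    var = abs(clauses[0][0])                # naive branching choice
    for lit in (var, -var):                 # try var = TRUE, then var = FALSE
        result = dpll(simplify(clauses, lit), {**assignment, var: lit > 0})
        if result is not None:
            return result
    return None

Called on the example formula from earlier in this article, dpll([[1, -2], [-1, 2, 3], [-1]]) returns the partial assignment {1: False, 2: False} (x3 may be chosen freely), matching the satisfying assignment given there.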
https://en.wikipedia.org/wiki/Boolean_satisfiability_problem
Studies that estimate and rank themost common words in Englishexamine texts written in English. Perhaps the most comprehensive such analysis is one that was conducted against theOxford English Corpus(OEC), a massivetext corpusthat is written in the English language. In total, the texts in the Oxford English Corpus contain more than 2 billion words.[1]The OEC includes a wide variety of writing samples, such as literary works, novels, academic journals, newspapers, magazines,Hansard's Parliamentary Debates,blogs,chat logs, and emails.[2] Another English corpus that has been used to study word frequency is theBrown Corpus, which was compiled by researchers atBrown Universityin the 1960s. The researchers published their analysis of the Brown Corpus in 1967. Their findings were similar, but not identical, to the findings of the OEC analysis. According toThe Reading Teacher's Book of Lists, the first 25 words in the OEC make up about one-third of all printed material in English, and the first 100 words make up about half of all written English.[3]According to a study cited byRobert McCruminThe Story of English,all of the first hundred of the most common words in English are of eitherOld EnglishorOld Norseorigin,[4]except for "just", ultimately from Latin "iustus", "people", ultimately from Latin "populus", "use", ultimately from Latin "usare", and "because", in part from Latin "causa". Some lists of common words distinguish betweenword forms, while others rank all forms of a word as a singlelexeme(the form of the word as it would appear in a dictionary). For example, the lexemebe(as into be) comprises all its conjugations (am,are,is,was,were, etc.), andcontractionsof those conjugations.[5]These top 100lemmaslisted below account for 50% of all the words in the Oxford English Corpus.[1] A list of 100 words that occur most frequently in written English is given below, based on an analysis of theOxford English Corpus(a collection of texts in the English language, comprising over 2 billion words).[1]Apart of speechis provided for most of the words, but part-of-speech categories vary between analyses, and not all possibilities are listed. For example, "I" may be a pronoun or a Roman numeral; "to" may be a preposition or an infinitive marker; "time" may be a noun or a verb. Also, a single spelling can represent more than oneroot word. For example, "singer" may be a form of either "sing" or "singe". Different corpora may treat such difference differently. The number of distinct senses that are listed inWiktionaryis shown in thepolysemycolumn. For example, "out" can refer to an escape, a removal from play in baseball, or any of 36 other concepts. On average, each word in the list has 15.38 senses. The sense count does not include the use of terms inphrasal verbssuch as "put out" (as in "inconvenienced") and othermultiword expressionssuch as the interjection "get out!", where the word "out" does not have an individual meaning.[6]As an example, "out" occurs in at least 560 phrasal verbs[7]and appears in nearly 1700 multiword expressions.[8] The table also includes frequencies from other corpora. As well as usage differences,lemmatisationmay differ from corpus to corpus – for example splitting the prepositional use of "to" from the use as a particle. Also, theCorpus of Contemporary American English(COCA) list includes dispersion as well as frequency to calculate rank. 
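The difference between counting surface word forms and counting lemmas can be illustrated with a toy sketch; the miniature text and the hand-made lemma map below are illustrative assumptions only, whereas the studies described above rely on full lemmatisers and corpora of billions of words.

from collections import Counter

text = "The cats were sleeping because the cat sleeps when cats sleep".lower().split()

# Counting surface word forms:
print(Counter(text).most_common(3))

# Counting lemmas instead, with a tiny hand-made lemma map; grouping the
# inflected forms under one lemma changes which items rank highest.
lemma = {"cats": "cat", "were": "be", "sleeping": "sleep", "sleeps": "sleep"}
print(Counter(lemma.get(w, w) for w in text).most_common(3))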
The following is a very similar list, also from the OEC, subdivided bypart of speech.[1]The list labeled "Others" includespronouns,possessives,articles,modal verbs,adverbs, andconjunctions.
https://en.wikipedia.org/wiki/Most_common_words_in_English
Incomputing, atrojan horse(or simplytrojan;[1]often capitalized,[2]but see below) is a kind ofmalwarethat misleads users as to its true intent by disguising itself as a normal program. Trojans are generally spread by some form ofsocial engineering. For example, a user may be duped into executing anemailattachment disguised to appear innocuous (e.g., a routine form to be filled in), or into clicking on a fake advertisement on theInternet. Although their payload can be anything, many modern forms act as abackdoor, contacting a controller who can then have unauthorized access to the affected device.[3]Ransomwareattacks are often carried out using a trojan. Unlikecomputer virusesandworms, trojans generally do not attempt to inject themselves into other files or otherwise propagate themselves.[4] The term is derived from theancient Greekstory of the deceptiveTrojan Horsethat led to the fall of the city ofTroy.[2] It is unclear where and when the computing concept, and this term for it, originated; but by 1971 the firstUnixmanual assumed its readers knew both.[5] Another early reference is in a US Air Force report in 1974 on the analysis of vulnerability in theMulticscomputer systems.[6] The term "Trojan horse" was popularized byKen Thompsonin his 1983Turing Awardacceptance lecture "Reflections on Trusting Trust",[7]subtitled: "To what extent should one trust a statement that a program is free of Trojan horses? Perhaps it is more important to trust the people who wrote the software." He mentioned that he knew about the possible existence of trojans from a report on the security of Multics.[8][9] The computer term "Trojan horse" is derived from the legendaryTrojan Horseof the ancient city ofTroy. For this reason "Trojan" is often capitalized, especially in older sources. However, many modernstyle guides[10]and dictionaries[1]suggest a lower-case "trojan" for this technical use. Once installed, trojans may perform a range of malicious actions. Many tend to contact one or moreCommand and Control(C2) servers across the Internet and await instruction. Since individual trojans typically use a specific set of ports for this communication, it can be relatively simple to detect them. Moreover, other malware could potentially "take over" the trojan, using it as a proxy for malicious action.[11] In German-speaking countries,spywareused or made by the government is sometimes calledgovware. Govware is typically used to intercept communications from the target device. Some countries like Switzerland and Germany have a legal framework governing the use of such software.[12][13]Examples of govware trojans include the SwissMiniPanzer and MegaPanzer[14]and theGerman "state trojan" nicknamed R2D2.[12]German govware works by exploiting security gaps unknown to the general public and accessing smartphone data before it becomes encrypted via other applications.[15] Due to the popularity ofbotnetsamong hackers and the availability of advertising services that permit authors to violate their users' privacy, trojans are becoming more common. According to a survey conducted byBitDefenderfrom January to June 2009, "Trojan-type malware is on the rise, accounting for 83% of the global malware detected in the world." 
Trojans have a relationship with worms, as they spread with the help of worms and travel across the internet with them.[16] BitDefender has stated that approximately 15% of computers are members of a botnet, usually recruited by a trojan infection.[17] Recent investigations have revealed that the trojan-horse method has been used as an attack on cloud computing systems. A trojan attack on cloud systems tries to insert an application or service that can affect the cloud services by changing or stopping their functionality. When the cloud system identifies the attack as legitimate, the service or application is executed, which can damage and infect the cloud system.[18] A trojan horse is a program that purports to perform some legitimate function, yet upon execution it compromises the user's security.[19] One simple example[20] is the following malicious version of the Linux ls command. An attacker would place this executable script in a publicly writable and "high-traffic" location (e.g., /tmp/ls). Then, any victim who tried to run ls from that directory (if and only if the victim's executable search PATH unwisely[20] included the current directory) would execute /tmp/ls instead of /usr/bin/ls, and have their home directory deleted. Similar scripts could hijack other common commands; for example, a script purporting to be the sudo command (which prompts for the user's password) could instead mail that password to the attacker.[19] In these examples, the malicious program imitates the name of a well-known useful program, rather than pretending to be a novel and unfamiliar (but harmless) program. As such, these examples also resemble typosquatting and supply chain attacks.
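For illustration, here is a harmless Python stand-in for the trojan ls described above: the destructive payload is replaced by a printed notice (so the demonstration is visible), after which the script hands over to the real /usr/bin/ls. The file name, location and behaviour are assumptions made purely to show why a current-directory entry in PATH is dangerous.

#!/usr/bin/env python3
# Harmless demonstration: if this file were saved as "ls" in a world-writable
# directory that appears early in a user's PATH, running "ls" there would
# execute it instead of /usr/bin/ls.  Only the disguise is illustrated;
# the payload below is a print statement, not the destructive action.
import os
import sys

print("This is NOT /usr/bin/ls - an attacker's payload would have run here.",
      file=sys.stderr)

# Behave like the real command afterwards so nothing else looks unusual.
os.execv("/usr/bin/ls", ["/usr/bin/ls"] + sys.argv[1:])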
https://en.wikipedia.org/wiki/Trojan_horse_(computing)
Superfluid vacuum theory(SVT), sometimes known as theBEC vacuum theory, is an approach intheoretical physicsandquantum mechanicswhere the fundamental physicalvacuum(non-removable background) is considered as asuperfluidor as aBose–Einstein condensate(BEC). The microscopic structure of this physical vacuum is currently unknown and is a subject of intensive studies in SVT. An ultimate goal of this research is to developscientific modelsthat unify quantum mechanics (which describes three of the four knownfundamental interactions) withgravity, making SVT a derivative ofquantum gravityand describes all known interactions in the Universe, at both microscopic and astronomic scales, as different manifestations of the same entity, superfluid vacuum. The concept of aluminiferous aetheras a medium sustainingelectromagnetic waveswas discarded after the advent of thespecial theory of relativity, as the presence of the concept alongside special relativity results in several contradictions; in particular, aether having a definite velocity at each spacetime point will exhibit a preferred direction. This conflicts with the relativistic requirement that all directions within a light cone are equivalent. However, as early as in 1951P.A.M. Diracpublished two papers where he pointed out that we should take into account quantum fluctuations in the flow of the aether.[1][2]His arguments involve the application of theuncertainty principleto the velocity of aether at any spacetime point, implying that the velocity will not be a well-defined quantity. In fact, it will be distributed over various possible values. At best, one could represent the aether by a wave function representing the perfectvacuum statefor which all aether velocities are equally probable. Inspired by Dirac's ideas, K. P. Sinha, C. Sivaram andE. C. G. Sudarshanpublished in 1975 a series of papers that suggested a new model for the aether according to which it is a superfluid state of fermion and anti-fermion pairs, describable by a macroscopicwave function.[3][4][5]They noted that particle-like small fluctuations of superfluid background obey theLorentz symmetry, even if the superfluid itself is non-relativistic. Nevertheless, they decided to treat the superfluid as therelativisticmatter – by putting it into the stress–energy tensor of theEinstein field equations. This did not allow them to describe therelativistic gravityas a small fluctuation of the superfluid vacuum, as subsequent authors have noted[citation needed]. Since then, several theories have been proposed within the SVT framework. They differ in how the structure and properties of the backgroundsuperfluidmust look. In absence of observational data which would rule out some of them, these theories are being pursued independently. According to the approach, the background superfluid is assumed to be essentially non-relativistic whereas theLorentz symmetryis not an exact symmetry of Nature but rather the approximate description valid only for small fluctuations. 
An observer who resides inside such vacuum and is capable of creating or measuring the small fluctuations would observe them asrelativisticobjects – unless theirenergyandmomentumare sufficiently high to make theLorentz-breakingcorrections detectable.[6]If the energies and momenta are below the excitation threshold then thesuperfluidbackground behaves like theideal fluid, therefore, theMichelson–Morley-type experiments would observe nodrag forcefrom such aether.[1][2] Further, in the theory of relativity theGalilean symmetry(pertinent to ourmacroscopicnon-relativistic world) arises as the approximate one – when particles' velocities are small compared tospeed of lightin vacuum. In SVT one does not need to go through Lorentz symmetry to obtain the Galilean one – the dispersion relations of most non-relativistic superfluids are known to obey the non-relativistic behavior at large momenta.[7][8][9] To summarize, the fluctuations of vacuum superfluid behave like relativistic objects at "small"[nb 1]momenta (a.k.a. the "phononic limit") and like non-relativistic ones at large momenta. The yet unknown nontrivial physics is believed to be located somewhere between these two regimes. In the relativisticquantum field theorythe physical vacuum is also assumed to be some sort of non-trivial medium to which one can associatecertain energy. This is because the concept of absolutely empty space (or "mathematical vacuum") contradicts the postulates ofquantum mechanics. According to QFT, even in absence of real particles the background is always filled by pairs of creating and annihilatingvirtual particles. However, a direct attempt to describe such medium leads to the so-calledultraviolet divergences. In some QFT models, such as quantum electrodynamics, these problems can be "solved" using therenormalizationtechnique, namely, replacing the diverging physical values by their experimentally measured values. In other theories, such as thequantum general relativity, this trickdoes not work, and reliable perturbation theory cannot be constructed. According to SVT, this is because in the high-energy ("ultraviolet") regime theLorentz symmetrystarts failing so dependent theories cannot be regarded valid for all scales of energies and momenta. Correspondingly, while the Lorentz-symmetric quantum field models are obviously a good approximation below the vacuum-energy threshold, in its close vicinity the relativistic description becomes more and more "effective" and less and less natural since one will need to adjust the expressions for thecovariantfield-theoretical actions by hand. According togeneral relativity, gravitational interaction is described in terms ofspacetimecurvatureusing the mathematical formalism ofdifferential geometry. This was supported by numerous experiments and observations in the regime of low energies. However, the attempts to quantize general relativity led to varioussevere problems, therefore, the microscopic structure of gravity is still ill-defined. There may be a fundamental reason for this—thedegrees of freedomof general relativity are based on what may be only approximate andeffective. 
The question of whether general relativity is an effective theory has been raised for a long time.[10] According to SVT, the curved spacetime arises as the small-amplitudecollective excitationmode of the non-relativistic background condensate.[6][11]The mathematical description of this is similar tofluid-gravity analogywhich is being used also in theanalog gravitymodels.[12]Thus,relativistic gravityis essentially a long-wavelength theory of the collective modes whose amplitude is small compared to the background one. Outside this requirement the curved-space description of gravity in terms of the Riemannian geometry becomes incomplete or ill-defined. The notion of thecosmological constantmakes sense in a relativistic theory only, therefore, within the SVT framework this constant can refer at most to the energy of small fluctuations of the vacuum above a background value, but not to the energy of the vacuum itself.[13]Thus, in SVT this constant does not have any fundamental physical meaning, and related problems such as thevacuum catastrophe, simply do not occur in the first place. According togeneral relativity, the conventionalgravitational waveis: Superfluid vacuum theory brings into question the possibility that a relativistic object possessing both of these properties exists in nature.[11]Indeed, according to the approach, the curved spacetime itself is the smallcollective excitationof the superfluid background, therefore, the property (1) means that thegravitonwould be in fact the "small fluctuation of the small fluctuation", which does not look like a physically robust concept (as if somebody tried to introduce small fluctuations inside aphonon, for instance). As a result, it may be not just a coincidence that in general relativity the gravitational field alone has no well-definedstress–energy tensor, only thepseudotensorone.[14]Therefore, the property (2) cannot be completely justified in a theory with exactLorentz symmetrywhich the general relativity is. Though, SVT does nota prioriforbid an existence of the non-localizedwave-like excitations of the superfluid background which might be responsible for the astrophysical phenomena which are currently beingattributedto gravitational waves, such as theHulse–Taylor binary. However, such excitations cannot be correctly described within the framework of a fullyrelativistictheory. TheHiggs bosonis the spin-0 particle that has been introduced inelectroweak theoryto give mass to theweak bosons. The origin of mass of the Higgs boson itself is not explained by electroweak theory. Instead, this mass is introduced as a free parameter by means of theHiggs potential, which thus makes it yet another free parameter of theStandard Model.[15]Within the framework of theStandard Model(or its extensions) the theoretical estimates of this parameter's value are possible only indirectly and results differ from each other significantly.[16]Thus, the usage of the Higgs boson (or any other elementary particle with predefined mass) alone is not the most fundamental solution of themass generationproblem but only its reformulationad infinitum. 
Another known issue of theGlashow–Weinberg–Salam modelis the wrong sign of mass term in the (unbroken) Higgs sector for energies above thesymmetry-breaking scale.[nb 2] While SVT does not explicitly forbid the existence of theelectroweak Higgs particle, it has its own idea of the fundamental mass generation mechanism – elementary particles acquire mass due to the interaction with the vacuum condensate, similarly to the gap generation mechanism insuperconductorsorsuperfluids.[11][17]Although this idea is not entirely new, one could recall the relativisticColeman-Weinberg approach,[18]SVT gives the meaning to the symmetry-breaking relativisticscalar fieldas describing small fluctuations of background superfluid which can be interpreted as an elementary particle only under certain conditions.[19]In general, one allows two scenarios to happen: Thus, the Higgs boson, even if it exists, would be a by-product of the fundamental mass generation phenomenon rather than its cause.[19] Also, some versions of SVT favor awave equation based on the logarithmic potentialrather than on thequarticone. The former potential has not only the Mexican-hat shape, necessary for thespontaneous symmetry breaking, but also someother featureswhich make it more suitable for the vacuum's description. In this model the physical vacuum is conjectured to be strongly-correlatedquantum Bose liquidwhose ground-statewavefunctionis described by thelogarithmic Schrödinger equation. It was shown that therelativistic gravitational interactionarises as the small-amplitudecollective excitationmode whereas relativisticelementary particlescan be described by theparticle-like modesin the limit of low energies and momenta.[17]The essential difference of this theory from others is that in the logarithmic superfluid the maximal velocity of fluctuations is constant in the leading (classical) order. This allows to fully recover the relativity postulates in the "phononic" (linearized) limit.[11] The proposed theory has many observational consequences. They are based on the fact that at high energies and momenta the behavior of the particle-like modes eventually becomes distinct from therelativisticone – they can reach thespeed of light limitat finite energy.[20]Among other predicted effects is thesuperluminalpropagation and vacuumCherenkov radiation.[21] Theory advocates the mass generation mechanism which is supposed to replace or alter theelectroweak Higgsone. It was shown that masses of elementary particles can arise as a result of interaction with the superfluid vacuum, similarly to the gap generation mechanism insuperconductors.[11][17]For instance, thephotonpropagating in the averageinterstellarvacuum acquires a tiny mass which is estimated to be about 10−35electronvolt. One can also derive an effective potential for the Higgs sector which is different from the one used in theGlashow–Weinberg–Salam model, yet it yields the mass generation and it is free of the imaginary-mass problem[nb 2]appearing in theconventional Higgs potential.[19]
https://en.wikipedia.org/wiki/Superfluid_vacuum_theory
Manycore processors are special kinds of multi-core processors designed for a high degree of parallel processing, containing numerous simpler, independent processor cores (from a few tens of cores to thousands or more). Manycore processors are used extensively in embedded computers and high-performance computing. Manycore processors are distinct from multi-core processors in being optimized from the outset for a higher degree of explicit parallelism, and for higher throughput (or lower power consumption) at the expense of latency and lower single-thread performance. The broader category of multi-core processors, by contrast, is usually designed to efficiently run both parallel and serial code, and therefore places more emphasis on high single-thread performance (e.g. devoting more silicon to out-of-order execution, deeper pipelines, more superscalar execution units, and larger, more general caches) and shared memory. These techniques devote runtime resources toward figuring out implicit parallelism in a single thread. They are used in systems where they have evolved continuously (with backward compatibility) from single-core processors. They usually have a few cores (e.g. 2, 4, 8) and may be complemented by a manycore accelerator (such as a GPU) in a heterogeneous system. Cache coherency is an issue limiting the scaling of multicore processors. Manycore processors may bypass this with methods such as message passing,[1] scratchpad memory, DMA,[2] partitioned global address space,[3] or read-only/non-coherent caches. A manycore processor using a network on a chip and local memories gives software the opportunity to explicitly optimise the spatial layout of tasks (e.g. as seen in tooling developed for TrueNorth).[4] Manycore processors may have more in common (conceptually) with technologies originating in high-performance computing such as clusters and vector processors.[5] GPUs may be considered a form of manycore processor, having multiple shader processing units and being suitable only for highly parallel code (high throughput, but extremely poor single-thread performance). A number of computers built from multicore processors have one million or more individual CPU cores. Examples include: Quite a few supercomputers have over 5 million CPU cores. If the cores of coprocessors (e.g. GPUs) used alongside the CPUs were also counted, considerably more systems would reach these totals.
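The contrast between coherent shared memory and explicit message passing can be sketched in a few lines. The example below uses Python's multiprocessing queues purely as a conceptual stand-in (real manycore targets are typically programmed with MPI, OpenCL, or vendor-specific toolchains): each worker keeps private state and communicates only by messages, so no cache-coherent shared memory is assumed.

from multiprocessing import Process, Queue

def worker(inbox: Queue, outbox: Queue):
    # Each "core" keeps purely private state and exchanges only explicit messages.
    local_sum = 0
    while True:
        item = inbox.get()
        if item is None:          # sentinel: no more work
            outbox.put(local_sum)
            return
        local_sum += item

if __name__ == "__main__":
    inbox, outbox = Queue(), Queue()
    p = Process(target=worker, args=(inbox, outbox))
    p.start()
    for chunk in range(10):
        inbox.put(chunk)          # send work as messages
    inbox.put(None)
    print(outbox.get())           # gather the result: 45
    p.join()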
https://en.wikipedia.org/wiki/Manycore_processor
A language ishead-markingif thegrammaticalmarks showingagreementbetween different words of aphrasetend to be placed on theheads(or nuclei) of phrases, rather than on themodifiersordependents. Many languages employ both head-marking anddependent-marking, and some languages double up and are thusdouble-marking. The concept of head/dependent-marking was proposed byJohanna Nicholsin 1986 and has come to be widely used as a basic category inlinguistic typology.[1] The concepts of head-marking and dependent-marking are commonly applied to languages that have richer inflectional morphology thanEnglish. There are, however, a few types of agreement in English that can be used to illustrate those notions. The following graphic representations of aclause, anoun phrase, and aprepositional phraseinvolve agreement. The three tree structures shown are those of adependency grammar, as opposed to those of aphrase structure grammar:[2] Heads and dependents are identified by the actual hierarchy of words, and the concepts of head-marking and dependent-marking are indicated with the arrows. Subject-verb agreement, shown in the tree on the left, is a case of head-marking because the singular subjectJohnrequires the inflectional suffix-sto appear on the finite verbcheats, the head of the clause. The determiner-noun agreement, shown in the tree in the middle, is a case of dependent-marking because the plural nounhousesrequires the dependent determiner to appear in its plural form,these, not in its singular form,this. The preposition-pronoun agreement ofcase government, shown in the tree on the right, is also an instance of dependent-marking because the head prepositionwithrequires the dependent pronoun to appear in its object form,him, not in its subject form,he. The distinction between head-marking and dependent-marking shows the most in noun phrases and verb phrases, which have significant variation among and within languages.[3] Languages may be head-marking in verb phrases and dependent-marking in noun phrases, such as mostBantu languages, or vice versa, and it has been argued that the subject rather than the verb is the head of a clause so "head-marking" is not necessarily a coherent typology. Still, languages that are head-marking in both noun and verb phrases are common enough to make the term useful for typological description. Head-marked possessive noun phrases are common in the Americas, Melanesia,Afro-Asiatic languages(status constructus) andTurkic languagesand infrequent elsewhere. Dependent-marked noun phrases have a complementary distribution and are frequent inAfrica,Eurasia,Australia, andNew Guinea, the only area in which both types overlap appreciably. Double-marked possession is rare but found in languages around the Eurasian periphery such asFinnish, in theHimalayas, and along thePacific CoastofNorth America.Zero-markedpossession is also uncommon, with instances mostly found near theequator, but it does not form any true clusters.[4] The head-markedclauseis common in theAmericas, Australia, New Guinea, and the Bantu languages but is very rare elsewhere. The dependent-marked clause is common in Eurasia andNorthern Africa, sparse inSouth America, and rare in North America. In New Guinea, it clusters in the Eastern Highlands and in Australia in the south, east, and interior with the very oldPama-Nyunganfamily. 
Double-marking is moderately well attested in the Americas, Australia, and New Guinea, and the southern fringe of Eurasia (chiefly in theCaucasian languagesand Himalayan mountain enclaves), and it is particularly favored in Australia and the westernmost Americas. The zero-marked object is unsurprisingly common inSoutheast AsiaandWestern Africa, two centers ofmorphologicalsimplicity, but it is also very common in New Guinea and moderately common inEastern AfricaandCentral Americaand South America, among languages of average or higher morphological complexity.[5][6] ThePacific Rimdistribution of head-marking may reflectpopulation movements beginning tens of thousands of years agoandfounder effects.Kusundahas traces in the Himalayas, and there are Caucasian enclaves, both of which are perhaps remnants oftypologypreceding the spreads ofinterior Eurasian language families. The dependent-marking type is found everywhere but rare in the Americas, possibly another result of founder effects. In the Americas, all four types are found along the Pacific Coast, but in the East, only head-marking is common. Whether the diversity of types along the Pacific Coast reflects a great age or an overlay of more recent Eurasian colonizations on an earlier American stratum remains to be seen.[7]
https://en.wikipedia.org/wiki/Head-marking_language
Acollateral adjectiveis anadjectivethat is identified with a particularnounin meaning, but that is not derived from that noun. For example, the wordbovineis considered the adjectival equivalent for the nouncattle, but it is derived from a different word, which happens to be the Latin word for "cattle" (n.b. the collateral adjective forcowas specifically restricted to adult female cattle, isvaccine). Similarly,lunarserves as an adjective to describe attributes of theMoon;Mooncomes fromOld Englishmōna"moon" andlunarfromLatinluna"moon". The adjectivethermaland the nounheathave a similar semantic relationship. As in these examples, collateral adjectives in English very often derive from the Latin or Greek translations of the corresponding nouns. In some cases both the noun and the adjective are borrowed, but from different languages, such as the nounair(from French) and the adjectiveaerial(from Latin). The term "collateral" refers to these two sides of the relationship. In English, mostordinal numberssound like theircardinal numbers, such as the ordinal 3rd (third) sounding like the cardinal number 3 (three), 4th (fourth) sounding like 4 (four), 10th (tenth) sounding like 10 (ten), 117th (one-hundred seventeenth) sounding like 117 (one-hundred seventeen), etc. However, 1st (first) and 2nd (second) sound unfamiliar to their cardinal counterparts 1 (one) and 2 (two). This is because these two ordinal numbers were derived from different roots, with "first" being derived from theProto-Indo-Europeanroot meaning "forward", and "second" deriving from the Latin word "secundus", meaning "following".[1] The phenomenon of ordinal numbers being collateral adjectives of cardinal numbers is common in theSinospheric languages, includingJapanese,Korean, andVietnamese.[citation needed]For example, Japanese usually useSino-Japanese numerals(words for numbers based on the Chinese language) formeasure wordsthat use ordinal numbers. Since Japanese, much like Chinese, does not have any inflections that indicatenumber, it uses measure words alongside a number to determine amounts of things.[citation needed]The numerals 1, 2, 3, 5, 6, 8, 9, and 10 usually use the pronunciation derived from Chinese (on'yomi), i.e.ichi, ni, san, go, roku, hachi, kyū,andjūrespectively. However, 4 can be pronounced using either its on'yomishior its native Japanese pronunciation (kun'yomi)yon, depending on context, and likewise 7 can be pronounced eithershichiornana, depending on context. Most measure words require the speaker to use the Sino-Japanese on'yomi numbers, e.g. 3 years issannenkan(3年間), 6 o'clock isrokuji(6時), 9 dogs iskyūhiki no inu(9匹の犬), 7 people isshichinin(7人), and 4 seasons isshiki(四季). However, there are some measure words (and even a select few numbers among certain measure words) that require the native kun'yomi numbers: 7 minutes isnanafun(7分), 4 apples isyonko no ringo(4個のリンゴ). Measure words that use native numbers include days of the month andtsu, which is the generic measure word that roughly translates into "things". 1–10 arehitotsu(1つ), futatsu(2つ), mittsu(3つ),yottsu(4つ),itsutsu(5つ),muttsu(6つ),nanatsu(7つ),yattsu(8つ),kokonotsu(9つ), and tō(10). While the measure word for people,nin(人), usually uses Sino-Japanese numbers, such assannin(3人),hachinin(8人), andjūnin(10人), the measures for 1 and 2 people use the native numbers, which arehitori(1人) andfutari(2人).[2] Attributiveusage of a collateral adjective is generally similar in meaning to attributive use of the corresponding noun. 
For example,lunar rocketandmoon rocketare accepted as synonyms, as arethermal capacityandheat capacity. However, in other cases the two words may havelexicalizeduses so that one cannot replace the other, as innocturnal viewandnight view, orfeline gracebutcat food(not *cat graceor *feline food). Collateral adjectives contrast withderived(denominal) adjectives. For the nounfather, for example, there is a derived adjectivefatherlyin addition to the collateral adjectivepaternal.Similarly, for the nounrain,there is derivedrainyand collateralpluvial,and forchild,there are derivedchildishandchildlikeas well as collateralinfantileandpuerile. The term "collateral adjective" was coined by theFunk and Wagnallsdictionaries, but as they are currently out of print, the term has become rare. A synonym sometimes seen in linguistics is asuppletive(denominal) adjective,though this is a liberal and arguably incorrect use of the word 'suppletive'.
https://en.wikipedia.org/wiki/Collateral_adjective
Intheoretical linguisticsandcomputational linguistics,probabilistic context free grammars(PCFGs) extendcontext-free grammars, similar to howhidden Markov modelsextendregular grammars. Eachproductionis assigned a probability. The probability of a derivation (parse) is the product of the probabilities of the productions used in that derivation. These probabilities can be viewed as parameters of the model, and for large problems it is convenient to learn these parameters viamachine learning. A probabilistic grammar's validity is constrained by context of its training dataset. PCFGs originated fromgrammar theory, and have application in areas as diverse asnatural language processingto the study the structure ofRNAmolecules and design ofprogramming languages. Designing efficient PCFGs has to weigh factors of scalability and generality. Issues such as grammar ambiguity must be resolved. The grammar design affects results accuracy. Grammar parsing algorithms have various time and memory requirements. Derivation:The process of recursive generation of strings from a grammar. Parsing:Finding a valid derivation using an automaton. Parse Tree:The alignment of the grammar to a sequence. An example of a parser for PCFG grammars is thepushdown automaton. The algorithm parses grammar nonterminals from left to right in astack-likemanner. Thisbrute-forceapproach is not very efficient. In RNA secondary structure prediction variants of theCocke–Younger–Kasami (CYK) algorithmprovide more efficient alternatives to grammar parsing than pushdown automata.[1]Another example of a PCFG parser is the Stanford Statistical Parser which has been trained usingTreebank.[2] Similar to aCFG, a probabilistic context-free grammarGcan be defined by a quintuple: where PCFGs models extendcontext-free grammarsthe same way ashidden Markov modelsextendregular grammars. TheInside-Outside algorithmis an analogue of theForward-Backward algorithm. It computes the total probability of all derivations that are consistent with a given sequence, based on some PCFG. This is equivalent to the probability of the PCFG generating the sequence, and is intuitively a measure of how consistent the sequence is with the given grammar. The Inside-Outside algorithm is used in modelparametrizationto estimate prior frequencies observed from training sequences in the case of RNAs. Dynamic programmingvariants of theCYK algorithmfind theViterbi parseof a RNA sequence for a PCFG model. This parse is the most likely derivation of the sequence by the given PCFG. Context-free grammars are represented as a set of rules inspired from attempts to model natural languages.[3][4][5]The rules are absolute and have a typical syntax representation known asBackus–Naur form. The production rules consist of terminal{a,b}{\displaystyle \left\{a,b\right\}}and non-terminalSsymbols and a blankϵ{\displaystyle \epsilon }may also be used as an end point. In the production rules of CFG and PCFG the left side has only one nonterminal whereas the right side can be any string of terminal or nonterminals. In PCFG nulls are excluded.[1]An example of a grammar: This grammar can be shortened using the '|' ('or') character into: Terminals in a grammar are words and through the grammar rules a non-terminal symbol is transformed into a string of either terminals and/or non-terminals. The above grammar is read as "beginning from a non-terminalSthe emission can generate eitheraorborϵ{\displaystyle \epsilon }". 
Its derivation is: Ambiguous grammarmay result in ambiguous parsing if applied onhomographssince the same word sequence can have more than one interpretation.Pun sentencessuch as the newspaper headline "Iraqi Head Seeks Arms" are an example of ambiguous parses. One strategy of dealing with ambiguous parses (originating with grammarians as early asPāṇini) is to add yet more rules, or prioritize them so that one rule takes precedence over others. This, however, has the drawback of proliferating the rules, often to the point where they become difficult to manage. Another difficulty is overgeneration, where unlicensed structures are also generated. Probabilistic grammars circumvent these problems by ranking various productions on frequency weights, resulting in a "most likely" (winner-take-all) interpretation. As usage patterns are altered indiachronicshifts, these probabilistic rules can be re-learned, thus updating the grammar. Assigning probability to production rules makes a PCFG. These probabilities are informed by observing distributions on a training set of similar composition to the language to be modeled. On most samples of broad language, probabilistic grammars where probabilities are estimated from data typically outperform hand-crafted grammars. CFGs when contrasted with PCFGs are not applicable to RNA structure prediction because while they incorporate sequence-structure relationship they lack the scoring metrics that reveal a sequence structural potential[6] Aweighted context-free grammar(WCFG) is a more general category ofcontext-free grammar, where each production has a numeric weight associated with it. The weight of a specificparse treein a WCFG is the product[7](or sum[8]) of all rule weights in the tree. Each rule weight is included as often as the rule is used in the tree. A special case of WCFGs are PCFGs, where the weights are (logarithmsof[9][10])probabilities. An extended version of theCYK algorithmcan be used to find the "lightest" (least-weight) derivation of a string given some WCFG. When the tree weight is the product of the rule weights, WCFGs and PCFGs can express the same set ofprobability distributions.[7] Since the 1990s, PCFG has been applied to modelRNA structures.[11][12][13][14][15] Energy minimization[16][17]and PCFG provide ways of predicting RNA secondary structure with comparable performance.[11][12][1]However structure prediction by PCFGs is scored probabilistically rather than by minimum free energy calculation. PCFG model parameters are directly derived from frequencies of different features observed in databases of RNA structures[6]rather than by experimental determination as is the case with energy minimization methods.[18][19] The types of various structure that can be modeled by a PCFG include long range interactions, pairwise structure and other nested structures. However, pseudoknots can not be modeled.[11][12][1]PCFGs extend CFG by assigning probabilities to each production rule. A maximum probability parse tree from the grammar implies a maximum probability structure. Since RNAs preserve their structures over their primary sequence, RNA structure prediction can be guided by combining evolutionary information from comparative sequence analysis with biophysical knowledge about a structure plausibility based on such probabilities. Also search results for structural homologs using PCFG rules are scored according to PCFG derivations probabilities. 
Therefore, building grammar to model the behavior of base-pairs and single-stranded regions starts with exploring features of structuralmultiple sequence alignmentof related RNAs.[1] The above grammar generates a string in an outside-in fashion, that is the basepair on the furthest extremes of the terminal is derived first. So a string such asaabaabaa{\displaystyle aabaabaa}is derived by first generating the distala's on both sides before moving inwards: A PCFG model extendibility allows constraining structure prediction by incorporating expectations about different features of an RNA . Such expectation may reflect for example the propensity for assuming a certain structure by an RNA.[6]However incorporation of too much information may increase PCFG space and memory complexity and it is desirable that a PCFG-based model be as simple as possible.[6][20] Every possible stringxa grammar generates is assigned a probability weightP(x|θ){\displaystyle P(x|\theta )}given the PCFG modelθ{\displaystyle \theta }. It follows that the sum of all probabilities to all possible grammar productions is∑xP(x|θ)=1{\displaystyle \sum _{\text{x}}P(x|\theta )=1}. The scores for each paired and unpaired residue explain likelihood for secondary structure formations. Production rules also allow scoring loop lengths as well as the order of base pair stacking hence it is possible to explore the range of all possible generations including suboptimal structures from the grammar and accept or reject structures based on score thresholds.[1][6] RNA secondary structure implementations based on PCFG approaches can be utilized in : Different implementation of these approaches exist. For example, Pfold is used in secondary structure prediction from a group of related RNA sequences,[20]covariance models are used in searching databases for homologous sequences and RNA annotation and classification,[11][24]RNApromo, CMFinder and TEISER are used in finding stable structural motifs in RNAs.[25][26][27] PCFG design impacts the secondary structure prediction accuracy. Any useful structure prediction probabilistic model based on PCFG has to maintain simplicity without much compromise to prediction accuracy. Too complex a model of excellent performance on a single sequence may not scale.[1]A grammar based model should be able to: The resulting of multipleparse treesper grammar denotes grammar ambiguity. This may be useful in revealing all possible base-pair structures for a grammar. However an optimal structure is the one where there is one and only one correspondence between the parse tree and the secondary structure. Two types of ambiguities can be distinguished. Parse tree ambiguity and structural ambiguity. Structural ambiguity does not affect thermodynamic approaches as the optimal structure selection is always on the basis of lowest free energy scores.[6]Parse tree ambiguity concerns the existence of multiple parse trees per sequence. Such an ambiguity can reveal all possible base-paired structures for the sequence by generating all possible parse trees then finding the optimal one.[28][29][30]In the case of structural ambiguity multiple parse trees describe the same secondary structure. This obscures the CYK algorithm decision on finding an optimal structure as the correspondence between the parse tree and the structure is not unique.[31]Grammar ambiguity can be checked for by the conditional-inside algorithm.[1][6] A probabilistic context free grammar consists of terminal and nonterminal variables. 
Each feature to be modeled has a production rule that is assigned a probability estimated from a training set of RNA structures. Production rules are recursively applied until only terminal residues are left. A starting non-terminalS{\displaystyle \mathbf {\mathit {S}} }produces loops. The rest of the grammar proceeds with parameterL{\displaystyle \mathbf {\mathit {L}} }that decide whether a loop is a start of a stem or a single stranded regionsand parameterF{\displaystyle \mathbf {\mathit {F}} }that produces paired bases. The formalism of this simple PCFG looks like: The application of PCFGs in predicting structures is a multi-step process. In addition, the PCFG itself can be incorporated into probabilistic models that consider RNA evolutionary history or search homologous sequences in databases. In an evolutionary history context inclusion of prior distributions of RNA structures of astructural alignmentin the production rules of the PCFG facilitates good prediction accuracy.[21] A summary of general steps for utilizing PCFGs in various scenarios: Several algorithms dealing with aspects of PCFG based probabilistic models in RNA structure prediction exist. For instance the inside-outside algorithm and the CYK algorithm. The inside-outside algorithm is a recursive dynamic programming scoring algorithm that can followexpectation-maximizationparadigms. It computes the total probability of all derivations that are consistent with a given sequence, based on some PCFG. The inside part scores the subtrees from a parse tree and therefore subsequences probabilities given an PCFG. The outside part scores the probability of the complete parse tree for a full sequence.[32][33]CYK modifies the inside-outside scoring. Note that the term 'CYK algorithm' describes the CYK variant of the inside algorithm that finds an optimal parse tree for a sequence using a PCFG. It extends the actualCYK algorithmused in non-probabilistic CFGs.[1] The inside algorithm calculatesα(i,j,v){\displaystyle \alpha (i,j,v)}probabilities for alli,j,v{\displaystyle i,j,v}of a parse subtree rooted atWv{\displaystyle W_{v}}for subsequencexi,...,xj{\displaystyle x_{i},...,x_{j}}. Outside algorithm calculatesβ(i,j,v){\displaystyle \beta (i,j,v)}probabilities of a complete parse tree for sequencexfrom root excluding the calculation ofxi,...,xj{\displaystyle x_{i},...,x_{j}}. The variablesαandβrefine the estimation of probability parameters of an PCFG. It is possible to reestimate the PCFG algorithm by finding the expected number of times a state is used in a derivation through summing all the products ofαandβdivided by the probability for a sequencexgiven the modelP(x|θ){\displaystyle P(x|\theta )}. It is also possible to find the expected number of times a production rule is used by an expectation-maximization that utilizes the values ofαandβ.[32][33]The CYK algorithm calculatesγ(i,j,v){\displaystyle \gamma (i,j,v)}to find the most probable parse treeπ^{\displaystyle {\hat {\pi }}}and yieldslog⁡P(x,π^|θ){\displaystyle \log P(x,{\hat {\pi }}|\theta )}.[1] Memory and time complexity for general PCFG algorithms in RNA structure predictions areO(L2M){\displaystyle O(L^{2}M)}andO(L3M3){\displaystyle O(L^{3}M^{3})}respectively. Restricting a PCFG may alter this requirement as is the case with database searches methods. Covariance models (CMs) are a special type of PCFGs with applications in database searches for homologs, annotation and RNA classification. 
Through CMs it is possible to build PCFG-based RNA profiles where related RNAs can be represented by a consensus secondary structure.[11][12]The RNA analysis package Infernal uses such profiles in inference of RNA alignments.[34]The Rfam database also uses CMs in classifying RNAs into families based on their structure and sequence information.[24] CMs are designed from a consensus RNA structure. A CM allowsindelsof unlimited length in the alignment. Terminals constitute states in the CM and the transition probabilities between the states is 1 if no indels are considered.[1]Grammars in a CM are as follows: The model has 6 possible states and each state grammar includes different types of secondary structure probabilities of the non-terminals. The states are connected by transitions. Ideally current node states connect to all insert states and subsequent node states connect to non-insert states. In order to allow insertion of more than one base insert states connect to themselves.[1] In order to score a CM model the inside-outside algorithms are used. CMs use a slightly different implementation of CYK. Log-odds emission scores for the optimum parse tree -log⁡e^{\displaystyle \log {\hat {e}}}- are calculated out of the emitting statesP,L,R{\displaystyle P,~L,~R}. Since these scores are a function of sequence length a more discriminative measure to recover an optimum parse tree probability score-log⁡P(x,π^|θ){\displaystyle \log {\text{P}}(x,{\hat {\pi }}|\theta )}- is reached by limiting the maximum length of the sequence to be aligned and calculating the log-odds relative to a null. The computation time of this step is linear to the database size and the algorithm has a memory complexity ofO(MaD+MbD2){\displaystyle O(M_{a}D+M_{b}D^{2})}.[1] The KH-99 algorithm by Knudsen and Hein lays the basis of the Pfold approach to predicting RNA secondary structure.[20]In this approach the parameterization requires evolutionary history information derived from an alignment tree in addition to probabilities of columns and mutations. The grammar probabilities are observed from a training dataset. In a structural alignment the probabilities of the unpaired bases columns and the paired bases columns are independent of other columns. By counting bases in single base positions and paired positions one obtains the frequencies of bases in loops and stems. For basepairXandYan occurrence ofXY{\displaystyle XY}is also counted as an occurrence ofYX{\displaystyle YX}. Identical basepairs such asXX{\displaystyle XX}are counted twice. By pairing sequences in all possible ways overall mutation rates are estimated. In order to recover plausible mutations a sequence identity threshold should be used so that the comparison is between similar sequences. This approach uses 85% identity threshold between pairing sequences. First single base positions differences -except for gapped columns- between sequence pairs are counted such that if the same position in two sequences had different basesX, Ythe count of the difference is incremented for each sequence. 
For unpaired bases a 4 X 4 mutation rate matrix is used that satisfies that the mutation flow from X to Y is reversible:[35] For basepairs a 16 X 16 rate distribution matrix is similarly generated.[36][37]The PCFG is used to predict the prior probability distribution of the structure whereas posterior probabilities are estimated by the inside-outside algorithm and the most likely structure is found by the CYK algorithm.[20] After calculating the column prior probabilities the alignment probability is estimated by summing over all possible secondary structures. Any columnCin a secondary structureσ{\displaystyle \sigma }for a sequenceDof lengthlsuch thatD=(C1,C2,...Cl){\displaystyle D=(C_{1},~C_{2},...C_{l})}can be scored with respect to the alignment treeTand the mutational modelM. The prior distribution given by the PCFG isP(σ|M){\displaystyle P(\sigma |M)}. The phylogenetic tree,Tcan be calculated from the model by maximum likelihood estimation. Note that gaps are treated as unknown bases and the summation can be done throughdynamic programming.[38] Each structure in the grammar is assigned production probabilities devised from the structures of the training dataset. These prior probabilities give weight to predictions accuracy.[21][32][33]The number of times each rule is used depends on the observations from the training dataset for that particular grammar feature. These probabilities are written in parentheses in the grammar formalism and each rule will have a total of 100%.[20]For instance: Given the prior alignment frequencies of the data the most likely structure from the ensemble predicted by the grammar can then be computed by maximizingP(σ|D,T,M){\displaystyle P(\sigma |D,T,M)}through the CYK algorithm. The structure with the highest predicted number of correct predictions is reported as the consensus structure.[20] PCFG based approaches are desired to be scalable and general enough. Compromising speed for accuracy needs to as minimal as possible. Pfold addresses the limitations of the KH-99 algorithm with respect to scalability, gaps, speed and accuracy.[20] Whereas PCFGs have proved powerful tools for predicting RNA secondary structure, usage in the field of protein sequence analysis has been limited. Indeed, the size of theamino acidalphabet and the variety of interactions seen in proteins make grammar inference much more challenging.[39]As a consequence, most applications offormal language theoryto protein analysis have been mainly restricted to the production of grammars of lower expressive power to model simple functional patterns based on local interactions.[40][41]Since protein structures commonly display higher-order dependencies including nested and crossing relationships, they clearly exceed the capabilities of any CFG.[39]Still, development of PCFGs allows expressing some of those dependencies and providing the ability to model a wider range of protein patterns.
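As an illustration of the CYK-style dynamic programming discussed above, the following minimal Python sketch computes the probability of the most likely parse for a toy PCFG in Chomsky normal form. The grammar, its rule probabilities, the function names and the example sentence are invented for illustration and are not taken from any treebank or RNA model; replacing the maximisation by a summation over split points and rules turns the same recursion into the inside algorithm, which gives the total probability of the sequence under the grammar.

```python
# Minimal sketch of probabilistic (Viterbi) CYK for a toy PCFG in Chomsky normal form.
# Grammar, probabilities and sentence are illustrative assumptions only.
from collections import defaultdict

# Binary rules: head -> (left, right) with probability
binary_rules = {
    "S":  [(("NP", "VP"), 1.0)],
    "VP": [(("V", "NP"), 1.0)],
}
# Lexical rules: head -> terminal with probability
lexical_rules = {
    "NP": [("she", 0.4), ("fish", 0.6)],
    "V":  [("eats", 1.0)],
}

def viterbi_cyk(words):
    n = len(words)
    # best[i][j] maps a nonterminal to the highest probability of deriving words[i:j]
    best = [[defaultdict(float) for _ in range(n + 1)] for _ in range(n + 1)]
    for i, w in enumerate(words):
        for head, rules in lexical_rules.items():
            for terminal, p in rules:
                if terminal == w:
                    best[i][i + 1][head] = max(best[i][i + 1][head], p)
    for span in range(2, n + 1):
        for i in range(0, n - span + 1):
            j = i + span
            for k in range(i + 1, j):                      # split point
                for head, rules in binary_rules.items():
                    for (left, right), p in rules:
                        cand = p * best[i][k][left] * best[k][j][right]
                        if cand > best[i][j][head]:
                            best[i][j][head] = cand
    return best[0][n]["S"]                                 # probability of the best parse rooted at S

print(viterbi_cyk(["she", "eats", "fish"]))                # 1.0 * 1.0 * 0.4 * 1.0 * 0.6 = 0.24
```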
https://en.wikipedia.org/wiki/Probabilistic_context-free_grammar
Positive recall is a term used in quality systems, most notably ISO 9000. It is part of receiving inspection procedures.[1] It expresses the concept that if a producer or manufacturer receives a product or process that requires inspection and wishes to postpone the inspection, it must have a system in place to ensure that the postponed inspection takes place at some point before final product or process acceptance. In ISO 9000 it is defined in clause 4.10.2.3, also known as urgent production release.[2]
https://en.wikipedia.org/wiki/Positive_recall
Hexadecimalfloating point(now calledHFPbyIBM) is a format for encoding floating-point numbers first introduced on theIBMSystem/360computers, and supported on subsequent machines based on that architecture,[1][2][3]as well as machines which were intended to be application-compatible with System/360.[4][5] In comparison toIEEE 754floating point, the HFP format has a longersignificand, and a shorterexponent. All HFP formats have 7 bits of exponent with abiasof 64. The normalized range of representable numbers is from 16−65to 1663(approx. 5.39761 × 10−79to 7.237005 × 1075). The number is represented as the following formula: (−1)sign× 0.significand× 16exponent−64. Asingle-precisionHFP number (called "short" by IBM) is stored in a 32-bit word: In this format the initial bit is not suppressed, and the radix (hexadecimal) point is set to the left of the significand (fraction in IBM documentation and the figures). Since the base is 16, the exponent in this form is about twice as large as the equivalent in IEEE 754, in order to have similar exponent range in binary, 9 exponent bits would be required. Consider encoding the value −118.625 as an HFP single-precision floating-point value. The value is negative, so the sign bit is 1. The value 118.62510in binary is 1110110.1012. This value is normalized by moving the radix point left four bits (one hexadecimal digit) at a time until the leftmost digit is zero, yielding 0.011101101012. The remaining rightmost digits are padded with zeros, yielding a 24-bit fraction of .0111 0110 1010 0000 0000 00002. The normalized value moved the radix point two hexadecimal digits to the left, yielding a multiplier and exponent of 16+2. A bias of +64 is added to the exponent (+2), yielding +66, which is 100 00102. Combining the sign, exponent plus bias, and normalized fraction produces this encoding: In other words, the number represented is −0.76A00016× 1666 − 64= −0.4633789… × 16+2= −118.625 The number represented is +0.FFFFFF16× 16127 − 64= (1 − 16−6) × 1663≈ +7.2370051 × 1075 The number represented is +0.116× 160 − 64= 16−1× 16−64≈ +5.397605 × 10−79. Zero (0.0) is represented in normalized form as all zero bits, which is arithmetically the value +0.016× 160 − 64= +0 × 16−64≈ +0.000000 × 10−79= 0. Given a fraction of all-bits zero, any combination of positive or negative sign bit and a non-zero biased exponent will yield a value arithmetically equal to zero. However, the normalized form generated for zero by CPU hardware is all-bits zero. This is true for all three floating-point precision formats. Addition or subtraction with other exponent values can lose precision in the result. Since the base is 16, there can be up to three leading zero bits in the binary significand. That means when the number is converted into binary, there can be as few as 21 bits of precision. Because of the "wobbling precision" effect, this can cause some calculations to be very inaccurate. This has caused considerable criticism.[6] A good example of the inaccuracy is representation of decimal value 0.1. It has no exact binary or hexadecimal representation. In hexadecimal format, it is represented as 0.19999999...16or 0.0001 1001 1001 1001 1001 1001 1001...2, that is: This has only 21 bits, whereas the binary version has 24 bits of precision. Six hexadecimal digits of precision is roughly equivalent to six decimal digits (i.e. (6 − 1) log10(16) ≈ 6.02). A conversion of single precision hexadecimal float to decimal string would require at least 9 significant digits (i.e. 
6 log10(16) + 1 ≈ 8.22) in order to convert back to the same hexadecimal float value. Thedouble-precisionHFP format (called "long" by IBM) is the same as the "short" format except that the fraction field is wider and the double-precision number is stored in a double word (8 bytes): The exponent for this format covers only about a quarter of the range as the corresponding IEEE binary format. 14 hexadecimal digits of precision is roughly equivalent to 17 decimal digits. A conversion of double precision hexadecimal float to decimal string would require at least 18 significant digits in order to convert back to the same hexadecimal float value. Called extended-precision by IBM, aquadruple-precisionHFP format was added to the System/370 series and was available on some S/360 models (S/360-85, -195, and others by special request or simulated by OS software). The extended-precision fraction field is wider, and the extended-precision number is stored as two double words (16 bytes): 28 hexadecimal digits of precision is roughly equivalent to 32 decimal digits. A conversion of extended precision HFP to decimal string would require at least 35 significant digits in order to convert back to the same HFP value. The stored exponent in the low-order part is 14 less than the high-order part, unless this would be less than zero. Available arithmetic operations are add and subtract, both normalized and unnormalized, and compare. Prenormalization is done based on the exponent difference. Multiply and divide prenormalize unnormalized values, and truncate the result after one guard digit. There is a halve operation to simplify dividing by two. Starting in ESA/390, there is a square root operation. All operations have one hexadecimal guard digit to avoid precision loss. Most arithmetic operations truncate like simple pocket calculators. Therefore, 1 − 16−8= 1. In this case, the result is rounded away from zero.[7] Starting with theS/390G5 in 1998,[8]IBM mainframes have also included IEEE binary floating-point units which conform to theIEEE 754 Standard for Floating-Point Arithmetic. IEEE decimal floating-point was added toIBM System z9GA2[9]in 2007 usingmillicode[10]and in 2008 to theIBM System z10in hardware.[11] Modern IBM mainframes support three floating-point radices with 3 hexadecimal (HFP) formats, 3 binary (BFP) formats, and 3 decimal (DFP) formats. There are two floating-point units per core; one supporting HFP and BFP, and one supporting DFP; there is one register file, FPRs, which holds all 3 formats. Starting with thez13in 2015, processors have added a vector facility that includes 32 vector registers, each 128 bits wide; a vector register can contain two 64-bit or four 32-bit floating-point numbers.[12]The traditional 16 floating-point registers are overlaid on the new vector registers so some data can be manipulated with traditional floating-point instructions or with the newer vector instructions. The IBM HFP format is used in: As IBM is the only remaining provider of hardware using the HFP format, and as the only IBM machines that support that format are their mainframes, few file formats require it. One exception is the SAS 5 Transport file format, which the FDA requires; in that format, "All floating-point numbers in the file are stored using the IBM mainframe representation. [...] Most platforms use the IEEE representation for floating-point numbers. [...] 
To assist you in reading and/or writing transport files, we are providing routines to convert from IEEE representation (either big endian or little endian) to transport representation and back again."[13]Code for IBM's format is also available underLGPLv2.1.[15] The article "Architecture of the IBM System/360" explains the choice as being because "the frequency of pre-shift, overflow, and precision-loss post-shift on floating-point addition are substantially reduced by this choice."[16]This allowed higher performance for the large System/360 models, and reduced cost for the small ones. The authors were aware of the potential for precision loss, but assumed that this would not be significant for 64-bit floating-point variables. Unfortunately, the designers seem not to have been aware ofBenford's Lawwhich means that a large proportion of numbers will suffer reduced precision. The book "Computer Architecture" by two of the System/360 architects quotes Sweeney's study of 1958-65 which showed that using a base greater than 2 greatly reduced the number of shifts required for alignment and normalisation, in particular the number ofdifferentshifts needed. They used a larger base to make the implementations run faster, and the choice of base 16 was natural given 8-bit bytes. The intention was that 32-bit floats would only be used for calculations that would not propagate rounding errors, and 64-bit double precision would be used for all scientific and engineering calculations. The initial implementation of double precision lacked a guard digit to allow proper rounding, but this was changed soon after the first customer deliveries.[17]
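The normalization, bias and truncation steps described earlier can be sketched in Python. The helper names below are invented, the sketch handles only normalized non-zero values, and it truncates the fraction in the same way as the arithmetic described above; it is intended only to reproduce the −118.625 worked example, not to be a complete or validated converter.

```python
# Minimal sketch of the System/360 single-precision HFP layout:
# 1 sign bit, 7-bit excess-64 exponent (a power of 16), 24-bit fraction.
def float_to_hfp_single(value):
    if value == 0.0:
        return 0x00000000                      # normalized zero is all-bits zero
    sign = 1 if value < 0 else 0
    mag = abs(value)
    exponent = 0
    # Normalize so that 1/16 <= fraction < 1, shifting one hex digit at a time.
    while mag >= 1.0:
        mag /= 16.0
        exponent += 1
    while mag < 1.0 / 16.0:
        mag *= 16.0
        exponent -= 1
    fraction = int(mag * 0x1000000)            # 24-bit fraction, truncated
    return (sign << 31) | ((exponent + 64) << 24) | fraction

def hfp_single_to_float(word):
    sign = -1.0 if word >> 31 else 1.0
    exponent = ((word >> 24) & 0x7F) - 64
    fraction = (word & 0xFFFFFF) / 0x1000000
    return sign * fraction * 16.0 ** exponent

w = float_to_hfp_single(-118.625)
print(hex(w))                  # 0xc276a000, matching the worked example above
print(hfp_single_to_float(w))  # -118.625
```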
https://en.wikipedia.org/wiki/IBM_hexadecimal_floating-point
Instatistical mechanics, themean squared displacement(MSD), also calledmean square displacement,average squared displacement, ormean square fluctuation, is a measure of thedeviationof thepositionof aparticlewith respect to a reference position over time. It is the most common measure of the spatial extent of randommotion, and can be thought of as measuring the portion of the system "explored" by therandom walker. In the realm ofbiophysicsandenvironmental engineering, the MSD is measured over time to determine if a particle is spreading slowly due todiffusion, or if anadvectiveforce is also contributing.[1]Another relevant concept, thevariance-related diameter(VRD), defined as twice the square root of MSD, is also used in studying the transportation and mixing phenomena inenvironmental engineering.[2]It prominently appears in theDebye–Waller factor(describing vibrations within the solid state) and in theLangevin equation(describing diffusion of aBrownian particle). The MSD at timet{\displaystyle t}is defined as anensemble average:MSD≡⟨|x(t)−x0|2⟩=1N∑i=1N|x(i)(t)−x(i)(0)|2{\displaystyle {\text{MSD}}\equiv \left\langle \left|\mathbf {x} (t)-\mathbf {x_{0}} \right|^{2}\right\rangle ={\frac {1}{N}}\sum _{i=1}^{N}\left|\mathbf {x^{(i)}} (t)-\mathbf {x^{(i)}} (0)\right|^{2}}whereNis the number of particles to be averaged, vectorx(i)(0)=x0(i){\displaystyle \mathbf {x^{(i)}} (0)=\mathbf {x_{0}^{(i)}} }is the reference position of thei{\displaystyle i}-th particle, and vectorx(i)(t){\displaystyle \mathbf {x^{(i)}} (t)}is the position of thei{\displaystyle i}-th particle at timet.[3] Theprobability density function(PDF) for a particle in one dimension is found by solving the one-dimensionaldiffusion equation. (This equation states that the position probability density diffuses out over time - this is the method used by Einstein to describe a Brownian particle. Another method to describe the motion of a Brownian particle was described by Langevin, now known for its namesake as theLangevin equation.)∂p(x,t∣x0)∂t=D∂2p(x,t∣x0)∂x2,{\displaystyle {\frac {\partial p(x,t\mid x_{0})}{\partial t}}=D{\frac {\partial ^{2}p(x,t\mid x_{0})}{\partial x^{2}}},}given the initial conditionp(x,t=0∣x0)=δ(x−x0){\displaystyle p(x,t=0\mid x_{0})=\delta (x-x_{0})}; wherex(t){\displaystyle x(t)}is the position of the particle at some given time,x0{\displaystyle x_{0}}is the tagged particle's initial position, andD{\displaystyle D}is the diffusion constant with the S.I. unitsm2s−1{\displaystyle m^{2}s^{-1}}(an indirect measure of the particle's speed). The bar in the argument of the instantaneous probability refers to the conditional probability. The diffusion equation states that the speed at which the probability for finding the particle atx(t){\displaystyle x(t)}is position dependent. Thedifferential equationabove takes the form of 1Dheat equation. The one-dimensional PDF below is theGreen's functionof heat equation (also known asHeat kernelin mathematics):P(x,t)=14πDtexp⁡(−(x−x0)24Dt).{\displaystyle P(x,t)={\frac {1}{\sqrt {4\pi Dt}}}\exp \left(-{\frac {(x-x_{0})^{2}}{4Dt}}\right).}This states that the probability of finding the particle atx(t){\displaystyle x(t)}is Gaussian, and the width of the Gaussian is time dependent. 
More specifically thefull width at half maximum(FWHM)(technically/pedantically, this is actually the Fulldurationat half maximum as the independent variable is time) scales likeFWHM∼t.{\displaystyle {\text{FWHM}}\sim {\sqrt {t}}.}Using the PDF one is able to derive the average of a given function,L{\displaystyle L}, at timet{\displaystyle t}:⟨L(t)⟩≡∫−∞∞L(x,t)P(x,t)dx,{\displaystyle \langle L(t)\rangle \equiv \int _{-\infty }^{\infty }L(x,t)P(x,t)\,dx,}where the average is taken over all space (or any applicable variable). The Mean squared displacement is defined asMSD≡⟨(x(t)−x0)2⟩,{\displaystyle {\text{MSD}}\equiv \left\langle \left(x(t)-x_{0}\right)^{2}\right\rangle ,}expanding out the ensemble average⟨(x−x0)2⟩=⟨x2⟩+x02−2x0⟨x⟩,{\displaystyle \left\langle \left(x-x_{0}\right)^{2}\right\rangle =\left\langle x^{2}\right\rangle +x_{0}^{2}-2x_{0}\langle x\rangle ,}dropping the explicit time dependence notation for clarity. To find the MSD, one can take one of two paths: one can explicitly calculate⟨x2⟩{\displaystyle \langle x^{2}\rangle }and⟨x⟩{\displaystyle \langle x\rangle }, then plug the result back into the definition of the MSD; or one could find themoment-generating function, an extremely useful, and general function when dealing with probability densities. The moment-generating function describes thek{\displaystyle k}-thmoment of the PDF. The first moment of the displacement PDF shown above is simply the mean:⟨x⟩{\displaystyle \langle x\rangle }. The second moment is given as⟨x2⟩{\displaystyle \langle x^{2}\rangle }. So then, to find the moment-generating function it is convenient to introduce thecharacteristic function:G(k)=⟨eikx⟩≡∫IeikxP(x,t∣x0)dx,{\displaystyle G(k)=\langle e^{ikx}\rangle \equiv \int _{I}e^{ikx}P(x,t\mid x_{0})\,dx,}one can expand out the exponential in the above equation to giveG(k)=∑m=0∞(ik)mm!μm.{\displaystyle G(k)=\sum _{m=0}^{\infty }{\frac {(ik)^{m}}{m!}}\mu _{m}.}By taking the natural log of the characteristic function, a new function is produced, thecumulant generating function,ln⁡(G(k))=∑m=1∞(ik)mm!κm,{\displaystyle \ln(G(k))=\sum _{m=1}^{\infty }{\frac {(ik)^{m}}{m!}}\kappa _{m},}whereκm{\displaystyle \kappa _{m}}is them{\displaystyle m}-thcumulantofx{\displaystyle x}. The first two cumulants are related to the first two moments,μ{\displaystyle \mu }, viaκ1=μ1;{\displaystyle \kappa _{1}=\mu _{1};}andκ2=μ2−μ12,{\displaystyle \kappa _{2}=\mu _{2}-\mu _{1}^{2},}where the second cumulant is the so-called variance,σ2{\displaystyle \sigma ^{2}}. With these definitions accounted for one can investigate the moments of the Brownian particle PDF,G(k)=14πDt∫Iexp⁡(ikx−(x−x0)24Dt)dx;{\displaystyle G(k)={\frac {1}{\sqrt {4\pi Dt}}}\int _{I}\exp \left(ikx-{\frac {\left(x-x_{0}\right)^{2}}{4Dt}}\right)\,dx;}bycompleting the squareand knowing the total area under a Gaussian one arrives atG(k)=exp⁡(ikx0−k2Dt).{\displaystyle G(k)=\exp(ikx_{0}-k^{2}Dt).}Taking the natural log, and comparing powers ofik{\displaystyle ik}to the cumulant generating function, the first cumulant isκ1=x0,{\displaystyle \kappa _{1}=x_{0},}which is as expected, namely that the mean position is the Gaussian centre. The second cumulant isκ2=2Dt,{\displaystyle \kappa _{2}=2Dt,\,}the factor 2 comes from the factorial factor in the denominator of the cumulant generating function. 
From this, the second moment is calculated,μ2=κ2+μ12=2Dt+x02.{\displaystyle \mu _{2}=\kappa _{2}+\mu _{1}^{2}=2Dt+x_{0}^{2}.}Plugging the results for the first and second moments back, one finds the MSD,⟨(x(t)−x0)2⟩=2Dt.{\displaystyle \left\langle \left(x(t)-x_{0}\right)^{2}\right\rangle =2Dt.} For a Brownian particle in higher-dimensionEuclidean space, its position is represented by a vectorx=(x1,x2,…,xn){\displaystyle \mathbf {x} =(x_{1},x_{2},\ldots ,x_{n})}, where theCartesian coordinatesx1,x2,…,xn{\displaystyle x_{1},x_{2},\ldots ,x_{n}}arestatistically independent. Then-variable probability distribution function is the product of thefundamental solutionsin each variable; i.e., P(x,t)=P(x1,t)P(x2,t)…P(xn,t)=1(4πDt)nexp⁡(−x⋅x4Dt).{\displaystyle P(\mathbf {x} ,t)=P(x_{1},t)P(x_{2},t)\dots P(x_{n},t)={\frac {1}{\sqrt {(4\pi Dt)^{n}}}}\exp \left(-{\frac {\mathbf {x} \cdot \mathbf {x} }{4Dt}}\right).} The Mean squared displacement is defined as MSD≡⟨|x−x0|2⟩=⟨(x1(t)−x1(0))2+(x2(t)−x2(0))2+⋯+(xn(t)−xn(0))2⟩{\displaystyle \mathrm {MSD} \equiv \left\langle |\mathbf {x} -\mathbf {x_{0}} |^{2}\right\rangle =\left\langle \left(x_{1}(t)-x_{1}(0)\right)^{2}+\left(x_{2}(t)-x_{2}(0)\right)^{2}+\dots +\left(x_{n}(t)-x_{n}(0)\right)^{2}\right\rangle } Since all the coordinates are independent, their deviation from the reference position is also independent. Therefore, MSD=⟨(x1(t)−x1(0))2⟩+⟨(x2(t)−x2(0))2⟩+⋯+⟨(xn(t)−xn(0))2⟩{\displaystyle {\text{MSD}}=\left\langle \left(x_{1}(t)-x_{1}(0)\right)^{2}\right\rangle +\left\langle \left(x_{2}(t)-x_{2}(0)\right)^{2}\right\rangle +\dots +\left\langle \left(x_{n}(t)-x_{n}(0)\right)^{2}\right\rangle } For each coordinate, following the same derivation as in 1D scenario above, one obtains the MSD in that dimension as2Dt{\displaystyle 2Dt}. Hence, the final result of mean squared displacement inn-dimensional Brownian motion is: MSD=2nDt.{\displaystyle {\text{MSD}}=2nDt.} In the measurements of single particle tracking (SPT), displacements can be defined for different time intervals between positions (also called time lags or lag times). SPT yields the trajectoryr→(t)=[x(t),y(t)]{\displaystyle {\vec {r}}(t)=[x(t),y(t)]}, representing a particle undergoing two-dimensional diffusion. Assuming that the trajectory of a single particle measured at time points1Δt,2Δt,…,NΔt{\displaystyle 1\,\Delta t,2\,\Delta t,\ldots ,N\,\Delta t}, whereΔt{\displaystyle \Delta t}is any fixed number, then there areN(N−1)/2{\displaystyle N(N-1)/2}non-trivial forward displacementsd→ij=r→j−r→i{\displaystyle {\vec {d}}_{ij}={\vec {r}}_{j}-{\vec {r}}_{i}}(1⩽i<j⩽N{\displaystyle 1\leqslant i<j\leqslant N}, the cases wheni=j{\displaystyle i=j}are not considered) which correspond to time intervals (or time lags)Δtij=(j−i)Δt{\displaystyle \,\Delta t_{ij}=(j-i)\,\Delta t}. Hence, there are many distinct displacements for small time lags, and very few for large time lags,MSD{\displaystyle {\rm {MSD}}}can be defined as an average quantity over time lags:[4][5] δ2(n)¯=1N−n∑i=1N−n(r→i+n−r→i)2n=1,…,N−1.{\displaystyle {\overline {\delta ^{2}(n)}}={\frac {1}{N-n}}\sum _{i=1}^{N-n}{({\vec {r}}_{i+n}-{\vec {r}}_{i}})^{2}\qquad n=1,\ldots ,N-1.} Similarly, for continuoustime series: δ2(Δ)¯=1T−Δ∫0T−Δ[r(t+Δ)−r(t)]2dt{\displaystyle {\overline {\delta ^{2}(\Delta )}}={\frac {1}{T-\Delta }}\int _{0}^{T-\Delta }[r(t+\Delta )-r(t)]^{2}\,dt} It's clear that choosing largeT{\displaystyle T}andΔ≪T{\displaystyle \Delta \ll T}can improve statistical performance. 
This technique allow us estimate the behavior of the whole ensembles by just measuring a single trajectory, but note that it's only valid for the systems withergodicity, like classicalBrownian motion(BM),fractional Brownian motion(fBM), andcontinuous-time random walk(CTRW) with limited distribution of waiting times, in these cases,δ2(Δ)¯=⟨[r(t)−r(0)]2⟩{\displaystyle {\overline {\delta ^{2}(\Delta )}}=\left\langle [r(t)-r(0)]^{2}\right\rangle }(defined above), here⟨⋅⟩{\displaystyle \left\langle \cdot \right\rangle }denotes ensembles average. However, for non-ergodic systems, like the CTRW with unlimited waiting time, waiting time can go to infinity at some time, in this case,δ2(Δ)¯{\displaystyle {\overline {\delta ^{2}(\Delta )}}}strongly depends onT{\displaystyle T},δ2(Δ)¯{\displaystyle {\overline {\delta ^{2}(\Delta )}}}and⟨[r(t)−r(0)]2⟩{\displaystyle \left\langle [r(t)-r(0)]^{2}\right\rangle }don't equal each other anymore, in order to get better asymptotics, introduce the averaged time MSD: ⟨δ2(Δ)¯⟩=1N∑δ2(Δ)¯{\displaystyle \left\langle {\overline {\delta ^{2}(\Delta )}}\right\rangle ={\frac {1}{N}}\sum {\overline {\delta ^{2}(\Delta )}}} Here⟨⋅⟩{\displaystyle \left\langle \cdot \right\rangle }denotes averaging overNensembles. Also, one can easily derive the autocorrelation function from the MSD: ⟨[r(t)−r(0)]2⟩=⟨r2(t)⟩+⟨r2(0)⟩−2⟨r(t)r(0)⟩,{\displaystyle \left\langle {[r(t)-r(0)]^{2}}\right\rangle =\left\langle r^{2}(t)\right\rangle +\left\langle r^{2}(0)\right\rangle -2\left\langle r(t)r(0)\right\rangle ,}where⟨r(t)r(0)⟩{\displaystyle \left\langle r(t)r(0)\right\rangle }is so-calledautocorrelationfunction for position of particles. Experimental methods to determine MSDs includeneutron scatteringandphoton correlation spectroscopy. The linear relationship between the MSD and timetallows for graphical methods to determine the diffusivity constantD. This is especially useful for rough calculations of the diffusivity in environmental systems. In someatmospheric dispersion models, the relationship between MSD and timetis not linear. Instead, a series of power laws empirically representing the variation of the square root of MSD versus downwind distance are commonly used in studying the dispersion phenomenon.[6]
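As a minimal illustration of the time-averaged MSD defined above, the following Python sketch simulates a single two-dimensional Brownian trajectory and compares its time-averaged MSD at a few lag times with the ensemble prediction 2nDt = 4Dt. The time step, diffusion constant, trajectory length and function name are arbitrary choices made for the example.

```python
# Sketch: time-averaged MSD of one simulated 2-D Brownian trajectory vs. 4*D*t.
import numpy as np

rng = np.random.default_rng(0)
D, dt, N = 0.5, 1.0, 100_000
# Gaussian steps with variance 2*D*dt per coordinate give diffusion constant D.
steps = rng.normal(scale=np.sqrt(2 * D * dt), size=(N, 2))
traj = np.cumsum(steps, axis=0)

def time_averaged_msd(r, lag):
    d = r[lag:] - r[:-lag]                     # all forward displacements at this lag
    return np.mean(np.sum(d * d, axis=1))

for lag in (1, 10, 100):
    print(lag, time_averaged_msd(traj, lag), 4 * D * lag * dt)
```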
https://en.wikipedia.org/wiki/Mean_squared_displacement
"All models are wrong" is a commonaphorismandanapodotoninstatistics. It is often expanded as "All models are wrong, but some are useful". The aphorism acknowledges thatstatistical modelsalways fall short of the complexities of reality but can still be useful nonetheless. The aphorism is generally attributed toGeorge E. P. Box, a Britishstatistician, although the underlying concept predates Box's writings. The phrase "all models are wrong" was attributed[1]to George Box who used the phrase in a 1976 paper to refer to the limitations of models, arguing that while no model is ever completely accurate, simpler models can still provide valuable insights if applied judiciously.[2]: 792In their 1983 book on generalized linear models, Peter McCullagh and John Nelder stated that while modeling in science is a creative process, some models are better than others, even though none can claim eternal truth.[3][4]In 1996, an Applied Statistician's Creed was proposed by M.R. Nester, which incorporated the aphorism as a central tenet.[1] Although the aphorism is most commonly associated with George Box, the underlying idea has been historically expressed by various thinkers in the past.Alfred Korzybskinoted in 1933, "A map is not the territory it represents, but, if correct, it has a similar structure to the territory, which accounts for its usefulness."[5]In 1939,Walter Shewhartdiscussed the impossibility of constructing a model that fully characterizes a state ofstatistical control, noting that no model can exactly represent any specific characteristic of such a state.[6]John von Neumann, in 1947, remarked that "truth is much too complicated to allow anything but approximations."[2] Box used the aphorism again in 1979, where he expanded on the idea by discussing how models serve as useful approximations, despite failing to perfectly describe empirical phenomena.[7]He reiterated this sentiment in his later works,where he discussed how models should be judged based on their utility rather than their absolute correctness.[8][6] David Cox, in a 1995 commentary, argued that stating all models are wrong is unhelpful, as models by their nature simplify reality. He emphasized that statistical models, like other scientific models, aim to capture important aspects of systems through idealized representations.[9] In their 2002 book on statistical model selection, Burnham and Anderson reiterated Box’s statement, noting that while models are simplifications of reality, they vary in usefulness, from highly useful to essentially useless.[10] J. Michael Steeleused the analogy of city maps to explain that models, like maps, serve practical purposes despite their limitations, emphasizing that certain models, though simplified, are not necessarily wrong.[11]In response, Andrew Gelman acknowledged Steele’s point but defended the usefulness of the aphorism, particularly in drawing attention to the inherent imperfections of models.[12] Philosopher Peter Truran, in a 2013 essay, discussed how seemingly incompatible models can make accurate predictions by representing different aspects of the same phenomenon, illustrating the point with an example of two observers viewing a cylindrical object from different angles.[13] In 2014,David Handreiterated that models are meant to aid in understanding or decision-making about the real world, a point emphasized by Box’s famous remark.[14]
https://en.wikipedia.org/wiki/All_models_are_wrong
This is a comparison of online backup services. Online backup is a special kind of online storage service; however, various products designed for file storage may not have the features or characteristics that others designed for backup have. Online backup usually requires a backup client program; a browser-only online storage service is usually not considered a valid online backup service. Online folder sync services can be used for backup purposes, but some may not provide a safe online backup: if a file is accidentally corrupted or deleted locally, whether it can still be retrieved depends on the versioning features of the sync service, that is, on whether changes can be undone and deleted files restored.
https://en.wikipedia.org/wiki/Comparison_of_online_backup_services
In the domain ofphysicsandprobability, aMarkov random field(MRF),Markov networkorundirectedgraphical modelis a set ofrandom variableshaving aMarkov propertydescribed by anundirected graph. In other words, arandom fieldis said to be aMarkovrandom field if it satisfies Markov properties. The concept originates from theSherrington–Kirkpatrick model.[1] A Markov network or MRF is similar to aBayesian networkin its representation of dependencies; the differences being that Bayesian networks aredirected and acyclic, whereas Markov networks are undirected and may be cyclic. Thus, a Markov network can represent certain dependencies that a Bayesian network cannot (such as cyclic dependencies[further explanation needed]); on the other hand, it can't represent certain dependencies that a Bayesian network can (such as induced dependencies[further explanation needed]). The underlying graph of a Markov random field may be finite or infinite. When thejoint probability densityof the random variables is strictly positive, it is also referred to as aGibbs random field, because, according to theHammersley–Clifford theorem, it can then be represented by aGibbs measurefor an appropriate (locally defined) energy function. The prototypical Markov random field is theIsing model; indeed, the Markov random field was introduced as the general setting for the Ising model.[2]In the domain ofartificial intelligence, a Markov random field is used to model various low- to mid-level tasks inimage processingandcomputer vision.[3] Given an undirected graphG=(V,E){\displaystyle G=(V,E)}, a set of random variablesX=(Xv)v∈V{\displaystyle X=(X_{v})_{v\in V}}indexed byV{\displaystyle V}form a Markov random field with respect toG{\displaystyle G}if they satisfy the local Markov properties: The Global Markov property is stronger than the Local Markov property, which in turn is stronger than the Pairwise one.[4]However, the above three Markov properties are equivalent for positive distributions[5](those that assign only nonzero probabilities to the associated variables). The relation between the three Markov properties is particularly clear in the following formulation: As the Markov property of an arbitrary probability distribution can be difficult to establish, a commonly used class of Markov random fields are those that can be factorized according to thecliquesof the graph. Given a set of random variablesX=(Xv)v∈V{\displaystyle X=(X_{v})_{v\in V}}, letP(X=x){\displaystyle P(X=x)}be theprobabilityof a particular field configurationx{\displaystyle x}inX{\displaystyle X}—that is,P(X=x){\displaystyle P(X=x)}is the probability of finding that the random variablesX{\displaystyle X}take on the particular valuex{\displaystyle x}. BecauseX{\displaystyle X}is a set, the probability ofx{\displaystyle x}should be understood to be taken with respect to ajoint distributionof theXv{\displaystyle X_{v}}. If this joint density can be factorized over the cliques ofG{\displaystyle G}as thenX{\displaystyle X}forms a Markov random field with respect toG{\displaystyle G}. Here,cl⁡(G){\displaystyle \operatorname {cl} (G)}is the set of cliques ofG{\displaystyle G}. The definition is equivalent if only maximal cliques are used. The functionsφC{\displaystyle \varphi _{C}}are sometimes referred to asfactor potentialsorclique potentials. Note, however, conflicting terminology is in use: the wordpotentialis often applied to the logarithm ofφC{\displaystyle \varphi _{C}}. 
This is because, instatistical mechanics,log⁡(φC){\displaystyle \log(\varphi _{C})}has a direct interpretation as thepotential energyof aconfigurationxC{\displaystyle x_{C}}. Some MRF's do not factorize: a simple example can be constructed on a cycle of 4 nodes with some infinite energies, i.e. configurations of zero probabilities,[6]even if one, more appropriately, allows the infinite energies to act on the complete graph onV{\displaystyle V}.[7] MRF's factorize if at least one of the following conditions is fulfilled: When such a factorization does exist, it is possible to construct afactor graphfor the network. Any positive Markov random field can be written as exponential family in canonical form with feature functionsfk{\displaystyle f_{k}}such that the full-joint distribution can be written as where the notation is simply adot productover field configurations, andZis thepartition function: Here,X{\displaystyle {\mathcal {X}}}denotes the set of all possible assignments of values to all the network's random variables. Usually, the feature functionsfk,i{\displaystyle f_{k,i}}are defined such that they areindicatorsof the clique's configuration,i.e.fk,i(x{k})=1{\displaystyle f_{k,i}(x_{\{k\}})=1}ifx{k}{\displaystyle x_{\{k\}}}corresponds to thei-th possible configuration of thek-th clique and 0 otherwise. This model is equivalent to the clique factorization model given above, ifNk=|dom⁡(Ck)|{\displaystyle N_{k}=|\operatorname {dom} (C_{k})|}is the cardinality of the clique, and the weight of a featurefk,i{\displaystyle f_{k,i}}corresponds to the logarithm of the corresponding clique factor,i.e.wk,i=log⁡φ(ck,i){\displaystyle w_{k,i}=\log \varphi (c_{k,i})}, whereck,i{\displaystyle c_{k,i}}is thei-th possible configuration of thek-th clique,i.e.thei-th value in the domain of the cliqueCk{\displaystyle C_{k}}. The probabilityPis often called the Gibbs measure. This expression of a Markov field as a logistic model is only possible if all clique factors are non-zero,i.e.if none of the elements ofX{\displaystyle {\mathcal {X}}}are assigned a probability of 0. This allows techniques from matrix algebra to be applied,e.g.that thetraceof a matrix is log of thedeterminant, with the matrix representation of a graph arising from the graph'sincidence matrix. The importance of the partition functionZis that many concepts fromstatistical mechanics, such asentropy, directly generalize to the case of Markov networks, and anintuitiveunderstanding can thereby be gained. In addition, the partition function allowsvariational methodsto be applied to the solution of the problem: one can attach a driving force to one or more of the random variables, and explore the reaction of the network in response to thisperturbation. Thus, for example, one may add a driving termJv, for each vertexvof the graph, to the partition function to get: Formally differentiating with respect toJvgives theexpectation valueof the random variableXvassociated with the vertexv: Correlation functionsare computed likewise; the two-point correlation is: Unfortunately, though the likelihood of a logistic Markov network is convex, evaluating the likelihood or gradient of the likelihood of a model requires inference in the model, which is generally computationally infeasible (see'Inference'below). 
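The clique factorization and partition function described above can be illustrated with a minimal Python sketch. The potentials below are invented, and the graph is a three-node path a – b – c whose maximal cliques are its two edges; the sketch computes the partition function Z by brute force and checks the pairwise Markov property that the two end variables are independent once the middle variable is conditioned on.

```python
# Sketch: joint distribution of a tiny MRF as a normalized product of clique potentials.
from itertools import product

def phi_ab(a, b):               # edge potential favouring a == b
    return 2.0 if a == b else 1.0

def phi_bc(b, c):               # edge potential favouring b == c
    return 3.0 if b == c else 1.0

def unnormalized(a, b, c):
    return phi_ab(a, b) * phi_bc(b, c)

Z = sum(unnormalized(a, b, c) for a, b, c in product((0, 1), repeat=3))

def joint(a, b, c):
    return unnormalized(a, b, c) / Z

# Conditioning on b: P(a, c | b = 0) factorizes as P(a | b = 0) * P(c | b = 0).
norm_b0 = sum(joint(x, 0, y) for x, y in product((0, 1), repeat=2))
p_ac_given_b0 = {(a, c): joint(a, 0, c) / norm_b0 for a, c in product((0, 1), repeat=2)}
print(Z)                # 24.0
print(p_ac_given_b0)    # {(0,0): 0.5, (0,1): 0.1666..., (1,0): 0.25, (1,1): 0.0833...}
```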
Amultivariate normal distributionforms a Markov random field with respect to a graphG=(V,E){\displaystyle G=(V,E)}if the missing edges correspond to zeros on theprecision matrix(the inversecovariance matrix): such that As in aBayesian network, one may calculate theconditional distributionof a set of nodesV′={v1,…,vi}{\displaystyle V'=\{v_{1},\ldots ,v_{i}\}}given values to another set of nodesW′={w1,…,wj}{\displaystyle W'=\{w_{1},\ldots ,w_{j}\}}in the Markov random field by summing over all possible assignments tou∉V′,W′{\displaystyle u\notin V',W'}; this is calledexact inference. However, exact inference is a#P-completeproblem, and thus computationally intractable in the general case. Approximation techniques such asMarkov chain Monte Carloand loopybelief propagationare often more feasible in practice. Some particular subclasses of MRFs, such as trees (seeChow–Liu tree), have polynomial-time inference algorithms; discovering such subclasses is an active research topic. There are also subclasses of MRFs that permit efficientMAP, or most likely assignment, inference; examples of these include associative networks.[9][10]Another interesting sub-class is the one of decomposable models (when the graph ischordal): having a closed-form for theMLE, it is possible to discover a consistent structure for hundreds of variables.[11] One notable variant of a Markov random field is aconditional random field, in which each random variable may also be conditioned upon a set of global observationso{\displaystyle o}. In this model, each functionφk{\displaystyle \varphi _{k}}is a mapping from all assignments to both thecliquekand the observationso{\displaystyle o}to the nonnegative real numbers. This form of the Markov network may be more appropriate for producingdiscriminative classifiers, which do not model the distribution over the observations. CRFs were proposed byJohn D. Lafferty,Andrew McCallumandFernando C.N. Pereirain 2001.[12] Markov random fields find application in a variety of fields, ranging fromcomputer graphicsto computer vision,[13]machine learningorcomputational biology,[2][14]andinformation retrieval.[15]MRFs are used in image processing to generate textures as they can be used to generate flexible and stochastic image models. In image modelling, the task is to find a suitable intensity distribution of a given image, where suitability depends on the kind of task and MRFs are flexible enough to be used for image and texture synthesis,image compressionand restoration,image segmentation, 3D image inference from 2D images,image registration,texture synthesis,super-resolution,stereo matchingandinformation retrieval. They can be used to solve various computer vision problems which can be posed as energy minimization problems or problems where different regions have to be distinguished using a set of discriminating features, within a Markov random field framework, to predict the category of the region.[16]Markov random fields were a generalization over the Ising model and have, since then, been used widely in combinatorial optimizations and networks.
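For the Gaussian Markov random field described above, the correspondence between missing edges and zero entries of the precision matrix can be shown with a small sketch; the matrix below is an arbitrary positive-definite example chosen for illustration, encoding the chain graph 0 – 1 – 2 with no edge between nodes 0 and 2.

```python
# Sketch: in a Gaussian MRF, a missing edge corresponds to a zero precision entry.
import numpy as np

Q = np.array([[ 2.0, -1.0,  0.0],      # Q[0, 2] = 0: no edge between nodes 0 and 2
              [-1.0,  2.0, -1.0],
              [ 0.0, -1.0,  2.0]])
Sigma = np.linalg.inv(Q)
print(Sigma)   # dense: nodes 0 and 2 are marginally correlated ...
print(Q)       # ... but conditionally independent given node 1 (zero precision entry)
```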
https://en.wikipedia.org/wiki/Markov_random_field
Beacons are small devices that enable relatively accurate location within a narrow range. Beacons periodically transmit small amounts of data within a range of approximately 70 meters, and are often used for indoor location technology.[1] Compared to devices based on the Global Positioning System (GPS), beacons provide more accurate location information and can be used indoors. Various types of beacons exist, which can be classified by beacon protocol, power source and location technology. In December 2013, Apple announced iBeacon, the first beacon protocol on the market. iBeacon works with Apple's iOS and Google's Android. A beacon using the iBeacon protocol transmits a so-called UUID, a 128-bit identifier conventionally written as 32 hexadecimal digits, which an installed mobile app can recognize and act on.[2] Google announced Eddystone in July 2015, renaming its earlier UriBeacon format. Beacons supporting Eddystone are able to transmit three different frame types, which work with both iOS and Android; a single beacon can transmit one, two or all three frame types.[3] Radius Networks announced AltBeacon in July 2014. This open-source beacon protocol was designed to overcome the issue of protocols favouring one vendor over another.[4] The Web & Information Systems Engineering lab (WISE) at the Vrije Universiteit Brussel (VUB) announced SemBeacon in September 2023. It is an open-source[5] beacon protocol and ontology based on AltBeacon and Eddystone-URL, intended to create interoperable applications that do not require a local database.[6] Tecno-World (Pitius Tec S.L., Manufacturer ID 0x015C) announced GeoBeacon in July 2017. This open-source beacon protocol was designed for use in geocaching applications because of its very compact data storage.[7] In general, there are three types of power source for beacons. Most beacons use Bluetooth technology to communicate with other devices and retrieve the location information, but several other location technologies also exist. The majority of beacon location devices rely on Bluetooth Low Energy (BLE) technology. Compared to 'classic' Bluetooth, BLE consumes less power, has a lower range, and transmits less data; it is designed for periodic transfers of very small amounts of data. In July 2015, the Wi-Fi Alliance announced Wi-Fi Aware. Similar to BLE, Wi-Fi Aware has lower power consumption than regular Wi-Fi and is designed for indoor location purposes. Whereas most beacon vendors focus on a single technology, some vendors combine multiple location technologies.
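As a rough illustration of the identifiers an iBeacon-style advertisement carries, the Python sketch below splits a manufacturer-specific payload into the proximity UUID, major and minor fields mentioned above. The byte string is invented, and the layout shown (Apple company identifier, a type/length pair, a 16-byte UUID, two 16-bit identifiers and a signed calibration power) is an assumption made for the example rather than a normative description of the protocol.

```python
# Sketch: splitting an assumed iBeacon-style manufacturer payload into its fields.
import struct
import uuid

# company ID + type 0x02 + length 0x15 + UUID(16) + major(2) + minor(2) + tx power(1)
adv = bytes.fromhex(
    "4c000215"
    "00112233445566778899aabbccddeeff"   # proximity UUID (hypothetical)
    "0001" "0002"                        # major = 1, minor = 2
    "c5"                                 # measured power at 1 m: -59 dBm
)

company, subtype, length = struct.unpack_from("<H2B", adv, 0)
prox_uuid = uuid.UUID(bytes=adv[4:20])
major, minor = struct.unpack_from(">HH", adv, 20)
tx_power = struct.unpack_from("b", adv, 24)[0]
print(company == 0x004C, prox_uuid, major, minor, tx_power)
```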
https://en.wikipedia.org/wiki/Types_of_beacons#AltBeacon_(Radius_Networks)
Inmathematical logic, thediagonal lemma(also known asdiagonalization lemma,self-reference lemmaorfixed point theorem) establishes the existence ofself-referentialsentences in certain formal theories. A particular instance of the diagonal lemma was used byKurt Gödelin 1931 to construct his proof of theincompleteness theoremsas well as in 1933 byTarskito prove hisundefinability theorem. In 1934,Carnapwas the first to publish the diagonal lemma at some level of generality.[1]The diagonal lemma is named in reference toCantor's diagonal argumentin set and number theory. The diagonal lemma applies to any sufficiently strong theories capable of representing the diagonal function. Such theories includefirst-order Peano arithmeticPA{\displaystyle {\mathsf {PA}}}, the weakerRobinson arithmeticQ{\displaystyle {\mathsf {Q}}}as well as any theory containingQ{\displaystyle {\mathsf {Q}}}(i.e. which interprets it).[2]A common statement of the lemma (as given below) makes the stronger assumption that the theory can represent allrecursive functions, but all the theories mentioned have that capacity, as well. The diagonal lemma also requires aGödel numberingα{\displaystyle \alpha }. We writeα(φ){\displaystyle \alpha (\varphi )}for the code assigned toφ{\displaystyle \varphi }by the numbering. Forn¯{\displaystyle {\overline {n}}}, the standard numeral ofn{\displaystyle n}(i.e.0¯=df0{\displaystyle {\overline {0}}=_{df}{\mathsf {0}}}andn+1¯=dfS(n¯){\displaystyle {\overline {n+1}}=_{df}{\mathsf {S}}({\overline {n}})}), let⌜φ⌝{\displaystyle \ulcorner \varphi \urcorner }be the standard numeral of the code ofφ{\displaystyle \varphi }(i.e.⌜φ⌝{\displaystyle \ulcorner \varphi \urcorner }isα(φ)¯{\displaystyle {\overline {\alpha (\varphi )}}}). We assume astandard Gödel numbering LetN{\displaystyle \mathbb {N} }be the set ofnatural numbers. Afirst-ordertheoryT{\displaystyle T}in the language of arithmetic containingQ{\displaystyle {\mathsf {Q}}}representsthek{\displaystyle k}-ary recursive functionf:Nk→N{\displaystyle f:\mathbb {N} ^{k}\rightarrow \mathbb {N} }if there is aformulaφf(x1,…,xk,y){\displaystyle \varphi _{f}(x_{1},\dots ,x_{k},y)}in the language ofT{\displaystyle T}s.t. for allm1,…,mk∈N{\displaystyle m_{1},\dots ,m_{k}\in \mathbb {N} }, iff(m1,…,mk)=n{\displaystyle f(m_{1},\dots ,m_{k})=n}thenT⊢∀y(φf(m1¯,…,mk¯,y)↔y=n¯){\displaystyle T\vdash \forall y(\varphi _{f}({\overline {m_{1}}},\dots ,{\overline {m_{k}}},y)\leftrightarrow y={\overline {n}})}. The representation theorem is provable, i.e. every recursive function is representable inT{\displaystyle T}.[3] Diagonal Lemma: LetT{\displaystyle T}a first-order theory containingQ{\displaystyle {\mathsf {Q}}}(Robinson arithmetic) and letψ(x){\displaystyle \psi (x)}be any formula in the language ofT{\displaystyle T}with onlyx{\displaystyle x}as free variable. Then there is a sentenceφ{\displaystyle \varphi }in the language ofT{\displaystyle T}s.t.T⊢φ↔ψ(⌜φ⌝){\displaystyle T\vdash \varphi \leftrightarrow \psi (\ulcorner \varphi \urcorner )}. Intuitively,φ{\displaystyle \varphi }is aself-referentialsentence which "says of itself that it has the propertyψ{\displaystyle \psi }." Proof: LetdiagT:N→N{\displaystyle diag_{T}:\mathbb {N} \to \mathbb {N} }be the recursive function which associates the code of each formulaφ(x){\displaystyle \varphi (x)}with only one free variablex{\displaystyle x}in the language ofT{\displaystyle T}with the code of the closed formulaφ(⌜φ⌝){\displaystyle \varphi (\ulcorner \varphi \urcorner )}(i.e. 
the substitution of⌜φ⌝{\displaystyle \ulcorner \varphi \urcorner }intoφ{\displaystyle \varphi }forx{\displaystyle x}) and0{\displaystyle 0}for other arguments. (The fact thatdiagT{\displaystyle diag_{T}}is recursive depends on the choice of the Gödel numbering, here thestandard one.) By the representation theorem,T{\displaystyle T}represents every recursive function. Thus, there is a formulaδ(x,y){\displaystyle \delta (x,y)}be the formula representingdiagT{\displaystyle diag_{T}}, in particular, for eachφ(x){\displaystyle \varphi (x)},T⊢δ(⌜φ⌝,y)↔y=⌜φ(⌜φ⌝)⌝{\displaystyle T\vdash \delta (\ulcorner \varphi \urcorner ,y)\leftrightarrow y=\ulcorner \varphi (\ulcorner \varphi \urcorner )\urcorner }. Letψ(x){\displaystyle \psi (x)}be an arbitrary formula with onlyx{\displaystyle x}as free variable. We now defineχ(x){\displaystyle \chi (x)}as∃y(δ(x,y)∧ψ(y)){\displaystyle \exists y(\delta (x,y)\land \psi (y))}, and letφ{\displaystyle \varphi }beχ(⌜χ⌝){\displaystyle \chi (\ulcorner \chi \urcorner )}. Then the following equivalences are provable inT{\displaystyle T}: φ↔χ(⌜χ⌝)↔∃y(δ(⌜χ⌝,y)∧ψ(y))↔∃y(y=⌜χ(⌜χ⌝)⌝∧ψ(y))↔∃y(y=⌜φ⌝∧ψ(y))↔ψ(⌜φ⌝){\displaystyle \varphi \leftrightarrow \chi (\ulcorner \chi \urcorner )\leftrightarrow \exists y(\delta (\ulcorner \chi \urcorner ,y)\land \psi (y))\leftrightarrow \exists y(y=\ulcorner \chi (\ulcorner \chi \urcorner )\urcorner \land \psi (y))\leftrightarrow \exists y(y=\ulcorner \varphi \urcorner \land \psi (y))\leftrightarrow \psi (\ulcorner \varphi \urcorner )}. There are various generalizations of the Diagonal Lemma. We present only three of them; in particular, combinations of the below generalizations yield new generalizations.[4]LetT{\displaystyle T}be a first-order theory containingQ{\displaystyle {\mathsf {Q}}}(Robinson arithmetic). Letψ(x,y1,…,yn){\displaystyle \psi (x,y_{1},\dots ,y_{n})}be any formula with free variablesx,y1,…,yn{\displaystyle x,y_{1},\dots ,y_{n}}. Then there is a formulaφ(y1,…yn){\displaystyle \varphi (y_{1},\dots y_{n})}with free variablesy1,…,yn{\displaystyle y_{1},\dots ,y_{n}}s.t.T⊢φ(y1,…,yn)↔ψ(⌜φ(y1,…,yn)⌝,y1,…,yn){\displaystyle T\vdash \varphi (y_{1},\dots ,y_{n})\leftrightarrow \psi (\ulcorner \varphi (y_{1},\dots ,y_{n})\urcorner ,y_{1},\dots ,y_{n})}. Letψ(x,y1,…,yn){\displaystyle \psi (x,y_{1},\dots ,y_{n})}be any formula with free variablesx,y1,…,yn{\displaystyle x,y_{1},\dots ,y_{n}}. Then there is a formulaφ(y1,…yn){\displaystyle \varphi (y_{1},\dots y_{n})}with free variablesy1,…,yn{\displaystyle y_{1},\dots ,y_{n}}s.t. for allm1,…,mn∈N{\displaystyle m_{1},\dots ,m_{n}\in \mathbb {N} },T⊢φ(m1¯,…,mn¯)↔ψ(⌜φ(m1¯,…,mn¯)⌝,m1¯,…,mn¯){\displaystyle T\vdash \varphi ({\overline {m_{1}}},\dots ,{\overline {m_{n}}})\leftrightarrow \psi (\ulcorner \varphi ({\overline {m_{1}}},\dots ,{\overline {m_{n}}})\urcorner ,{\overline {m_{1}}},\dots ,{\overline {m_{n}}})}. Letψ1(x1,x2){\displaystyle \psi _{1}(x_{1},x_{2})}andψ2(x1,x2){\displaystyle \psi _{2}(x_{1},x_{2})}be formulae with free variablex1{\displaystyle x_{1}}andx2{\displaystyle x_{2}}. Then there are sentenceφ1{\displaystyle \varphi _{1}}andφ2{\displaystyle \varphi _{2}}s.t.T⊢φ1↔ψ1(⌜φ1⌝,⌜φ2⌝){\displaystyle T\vdash \varphi _{1}\leftrightarrow \psi _{1}(\ulcorner \varphi _{1}\urcorner ,\ulcorner \varphi _{2}\urcorner )}andT⊢φ2↔ψ2(⌜φ1⌝,⌜φ2⌝){\displaystyle T\vdash \varphi _{2}\leftrightarrow \psi _{2}(\ulcorner \varphi _{1}\urcorner ,\ulcorner \varphi _{2}\urcorner )}. The case withn{\displaystyle n}many formulae is similar. 
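The substitution step at the heart of this proof — passing from χ(x) to χ(⌜χ⌝) — can be mimicked informally with strings, which may help build intuition. The following Python sketch is only a loose analogy, not the formal construction: strings stand in for Gödel numbers, and the names diag, chi and psi are invented for illustration.

def diag(template):
    # Substitute a quotation of the template into its own free slot,
    # the informal analogue of mapping a formula phi(x) to phi(⌜phi⌝).
    return template.format(repr(template))

# chi(x) informally says: "the diagonalization of x satisfies psi".
chi = "the diagonalization of {0} satisfies psi"

# phi is chi applied to a quotation of itself, so it "talks about" itself.
phi = diag(chi)
print(phi)

# psi is a toy predicate on sentences, standing in for the formula psi(x).
psi = lambda sentence: sentence.endswith("satisfies psi")
print(psi(phi))   # True: phi asserts of itself exactly the property psi checks

The analogy is imperfect — the lemma is about provable equivalence in T, not string manipulation — but the two-step pattern (define χ using the diagonal function, then apply χ to its own code) is the same.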
The lemma is called "diagonal" because it bears some resemblance to Cantor's diagonal argument.[5] The terms "diagonal lemma" and "fixed point" do not appear in Kurt Gödel's 1931 article or in Alfred Tarski's 1936 article. In 1934, Rudolf Carnap was the first to publish the diagonal lemma at some level of generality: for any formula ψ(x) with x as its only free variable (in a sufficiently expressive language), there exists a sentence φ such that φ ↔ ψ(⌜φ⌝) is true (in some standard model).[6] Carnap's work was phrased in terms of truth rather than provability (i.e. semantically rather than syntactically).[7] Note also that the concept of recursive functions had not yet been developed in 1934. The diagonal lemma is closely related to Kleene's recursion theorem in computability theory, and their respective proofs are similar.[8] In 1952, Léon Henkin asked whether sentences that state their own provability are provable. His question led to more general analyses of the diagonal lemma, especially in connection with Löb's theorem and provability logic.[9]
https://en.wikipedia.org/wiki/Diagonal_lemma
Atautonymis ascientific nameof a species in which both parts of the name have the same spelling, such asRattus rattus. The first part of the name is the name of the genus and the second part is referred to as thespecific epithetin theInternational Code of Nomenclature for algae, fungi, and plantsand thespecific namein theInternational Code of Zoological Nomenclature. Tautonymy (i.e., the usage of tautonymous names) is permissible in zoological nomenclature (seeList of tautonymsfor examples). In past editions of the zoological code, the term tautonym was used, but it has now been replaced by the more inclusive "tautonymous names"; these includetrinomial namesfor subspecies such asGorilla gorilla gorillaandBison bison bison. Tautonyms can be formed when animals are given scientific names for the first time, or when they are reclassified and given new scientific names.[1]An example of the former is the hidden mirror skipper of Brazil with the scientific nameSpeculum speculum, which comes from a Latin word for "mirror" in reference to the shiny, mirror-like coloring on its wings.[2][3]An example of the latter isNombe nombe, an extinct kangaroo from the late Pleistocene epoch found in Papua New Guinea's Nombe Rockshelter that was classified asProtemnodon nombeuntil 2022 when it was reclassified in light of a more recent review of the animal's dental attributes.[4]Animals with tautonymous names can also be reclassified so that they no longer have tautonymous names, as was the case withPolyspila polyspila(nowCalligrapha polyspila).[5] For animals, a tautonym implicitly (though not always) indicates that the species is thetype speciesof its genus.[6]This can also be indicated by a species name with the specific epithettypusortypicus,[7]although more commonly the type species is designated another way. Regarding other living organisms, tautonyms were prohibited in bacteriological nomenclature from 1947 until 1975, but they are now permitted for all bacteria andprokaryotes.[8]Tautonyms are prohibited by the codes of nomenclature for botany and for cultivated plants, but they are not prohibited by the code of nomenclature for viruses.[9] In the current rules forbotanical nomenclature(which apply retroactively), tautonyms are explicitly prohibited.[10]The reason for prohibiting tautonyms is not explained in current or historical botanical nomenclatural codes, but it appears to have resulted from concerns over a century ago that identical taxon names could result in confusion where those names share identical spelling and identical capitalization.[11] One example of a former botanical tautonym is 'Larix larix'. The earliest name for theEuropean larchisPinus larixL. (1753) butGustav Karl Wilhelm Hermann Karstendid not agree with the placement of the species inPinusand decided to move it toLarixin 1880. His proposed name created a tautonym. Under rules first established in 1906, which are applied retroactively,Larix larixcannot exist as a formal name. In such a case either the next earliest validly published name must be found, in this caseLarix deciduaMill. (1768), or (in its absence) a new epithet must be published. However, it is allowed for both parts of the name of a species to mean the same (pleonasm), without being identical in spelling. For instance,Arctostaphylos uva-ursimeansbearberrytwice, in Greek and Latin respectively;Picea omorikauses the Latin and Serbian terms for aspruce. 
Instances that repeat the genus name with a slight modification, such as Lycopersicon lycopersicum (Greek and Latinized Greek, a rejected name for the tomato) and Ziziphus zizyphus, have been contentious, but are in accord with the Code of Nomenclature.[12] In April 2023, a proposal was made to permit tautonyms in botanical nomenclature on a non-retroactive basis, noting that tautonyms have been allowed in zoological and bacteriological codes for decades without incident, and that allowing tautonyms would simplify botany's nomenclatural code while eliminating certain naming problems and preserving the epithets originally assigned to species.[13]
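Because the rule is purely orthographic — the epithet (or, for subspecies, both epithets) must repeat the genus name with identical spelling — it can be checked mechanically. A minimal Python sketch; the function name is invented for illustration, and case is ignored since only the genus is conventionally capitalized:

def is_tautonymous(name: str) -> bool:
    # A binomial or trinomial is tautonymous if every epithet repeats
    # the genus name letter for letter.
    parts = name.split()
    return len(parts) >= 2 and all(p.lower() == parts[0].lower() for p in parts[1:])

print(is_tautonymous("Rattus rattus"))               # True
print(is_tautonymous("Bison bison bison"))           # True (tautonymous trinomial)
print(is_tautonymous("Larix decidua"))               # False
print(is_tautonymous("Lycopersicon lycopersicum"))   # False: a near-tautonym, spelling differs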
https://en.wikipedia.org/wiki/Tautonym
Incomplex analysis, thePhragmén–Lindelöf principle(ormethod), first formulated byLars Edvard Phragmén(1863–1937) andErnst Leonard Lindelöf(1870–1946) in 1908, is a technique which employs an auxiliary, parameterized function to prove the boundedness of a holomorphic functionf{\displaystyle f}(i.e,|f(z)|<M(z∈Ω){\displaystyle |f(z)|<M\ \ (z\in \Omega )}) on an unbounded domainΩ{\displaystyle \Omega }when an additional (usually mild) condition constraining the growth of|f|{\displaystyle |f|}onΩ{\displaystyle \Omega }is given. It is a generalization of themaximum modulus principle, which is only applicable to bounded domains. In the theory of complex functions, it is known that themodulus(absolute value) of aholomorphic(complex differentiable) function in the interior of aboundedregion is bounded by its modulus on the boundary of the region. More precisely, if a non-constant functionf:C→C{\displaystyle f:\mathbb {C} \to \mathbb {C} }is holomorphic in a bounded region[1]Ω{\displaystyle \Omega }andcontinuouson its closureΩ¯=Ω∪∂Ω{\displaystyle {\overline {\Omega }}=\Omega \cup \partial \Omega }, then|f(z0)|<supz∈∂Ω|f(z)|{\textstyle |f(z_{0})|<\sup _{z\in \partial \Omega }|f(z)|}for allz0∈Ω{\displaystyle z_{0}\in \Omega }. This is known as themaximum modulus principle.(In fact, sinceΩ¯{\displaystyle {\overline {\Omega }}}is compact and|f|{\displaystyle |f|}is continuous, there actually exists somew0∈∂Ω{\displaystyle w_{0}\in \partial \Omega }such that|f(w0)|=supz∈Ω|f(z)|{\textstyle |f(w_{0})|=\sup _{z\in \Omega }|f(z)|}.) The maximum modulus principle is generally used to conclude that a holomorphic function is bounded in a region after showing that it is bounded on its boundary. However, the maximum modulus principle cannot be applied to an unbounded region of the complex plane. As a concrete example, let us examine the behavior of the holomorphic functionf(z)=exp⁡(exp⁡(z)){\displaystyle f(z)=\exp(\exp(z))}in the unbounded strip Although|f(x±πi/2)|=1{\displaystyle |f(x\pm \pi i/2)|=1}, so that|f|{\displaystyle |f|}is bounded on boundary∂S{\displaystyle \partial S},|f|{\displaystyle |f|}grows rapidly without bound when|z|→∞{\displaystyle |z|\to \infty }along the positive real axis. The difficulty here stems from the extremely fast growth of|f|{\displaystyle |f|}along the positive real axis. If the growth rate of|f|{\displaystyle |f|}is guaranteed to not be "too fast," as specified by an appropriate growth condition, thePhragmén–Lindelöf principlecan be applied to show that boundedness off{\displaystyle f}on the region's boundary implies thatf{\displaystyle f}is in fact bounded in the whole region, effectively extending the maximum modulus principle to unbounded regions. Suppose we are given a holomorphic functionf{\displaystyle f}and an unbounded regionS{\displaystyle S}, and we want to show that|f|≤M{\displaystyle |f|\leq M}onS{\displaystyle S}. In a typical Phragmén–Lindelöf argument, we introduce a certain multiplicative factorhϵ{\displaystyle h_{\epsilon }}satisfyinglimϵ→0hϵ=1{\textstyle \lim _{\epsilon \to 0}h_{\epsilon }=1}to "subdue" the growth off{\displaystyle f}. 
In particular,hϵ{\displaystyle h_{\epsilon }}is chosen such that (i):fhϵ{\displaystyle fh_{\epsilon }}is holomorphic for allϵ>0{\displaystyle \epsilon >0}and|fhϵ|≤M{\displaystyle |fh_{\epsilon }|\leq M}on the boundary∂Sbdd{\displaystyle \partial S_{\mathrm {bdd} }}of an appropriateboundedsubregionSbdd⊂S{\displaystyle S_{\mathrm {bdd} }\subset S}; and (ii): the asymptotic behavior offhϵ{\displaystyle fh_{\epsilon }}allows us to establish that|fhϵ|≤M{\displaystyle |fh_{\epsilon }|\leq M}forz∈S∖Sbdd¯{\displaystyle z\in S\setminus {\overline {S_{\mathrm {bdd} }}}}(i.e., the unbounded part ofS{\displaystyle S}outside the closure of the bounded subregion). This allows us to apply the maximum modulus principle to first conclude that|fhϵ|≤M{\displaystyle |fh_{\epsilon }|\leq M}onSbdd¯{\displaystyle {\overline {S_{\mathrm {bdd} }}}}and then extend the conclusion to allz∈S{\displaystyle z\in S}. Finally, we letϵ→0{\displaystyle \epsilon \to 0}so thatf(z)hϵ(z)→f(z){\displaystyle f(z)h_{\epsilon }(z)\to f(z)}for everyz∈S{\displaystyle z\in S}in order to conclude that|f|≤M{\displaystyle |f|\leq M}onS{\displaystyle S}. In the literature of complex analysis, there are many examples of the Phragmén–Lindelöf principle applied to unbounded regions of differing types, and also a version of this principle may be applied in a similar fashion tosubharmonicand superharmonic functions. To continue the example above, we can impose a growth condition on a holomorphic functionf{\displaystyle f}that prevents it from "blowing up" and allows the Phragmén–Lindelöf principle to be applied. To this end, we now include the condition that for some real constantsc<1{\displaystyle c<1}andA<∞{\displaystyle A<\infty }, for allz∈S{\displaystyle z\in S}. It can then be shown that|f(z)|≤1{\displaystyle |f(z)|\leq 1}for allz∈∂S{\displaystyle z\in \partial S}implies that|f(z)|≤1{\displaystyle |f(z)|\leq 1}in fact holds for allz∈S{\displaystyle z\in S}. Thus, we have the following proposition: Proposition.Let Letf{\displaystyle f}be holomorphic onS{\displaystyle S}and continuous onS¯{\displaystyle {\overline {S}}}, and suppose there exist real constantsc<1,A<∞{\displaystyle c<1,\ A<\infty }such that for allz∈S{\displaystyle z\in S}and|f(z)|≤1{\displaystyle |f(z)|\leq 1}for allz∈S¯∖S=∂S{\displaystyle z\in {\overline {S}}\setminus S=\partial S}. Then|f(z)|≤1{\displaystyle |f(z)|\leq 1}for allz∈S{\displaystyle z\in S}. Note that this conclusion fails whenc=1{\displaystyle c=1}, precisely as the motivating counterexample in the previous section demonstrates. The proof of this statement employs a typical Phragmén–Lindelöf argument:[2] Proof:(Sketch)We fixb∈(c,1){\displaystyle b\in (c,1)}and define for eachϵ>0{\displaystyle \epsilon >0}the auxiliary functionhϵ{\displaystyle h_{\epsilon }}byhϵ(z)=e−ϵ(ebz+e−bz){\textstyle h_{\epsilon }(z)=e^{-\epsilon (e^{bz}+e^{-bz})}}. Moreover, for a givena>0{\displaystyle a>0}, we defineSa{\displaystyle S_{a}}to be the open rectangle in the complex plane enclosed within the vertices{a±iπ/2,−a±iπ/2}{\displaystyle \{a\pm i\pi /2,-a\pm i\pi /2\}}. Now, fixϵ>0{\displaystyle \epsilon >0}and consider the functionfhϵ{\displaystyle fh_{\epsilon }}. Because one can show that|hϵ(z)|≤1{\displaystyle |h_{\epsilon }(z)|\leq 1}for allz∈S¯{\displaystyle z\in {\overline {S}}}, it follows that|f(z)hϵ(z)|≤1{\displaystyle |f(z)h_{\epsilon }(z)|\leq 1}forz∈∂S{\displaystyle z\in \partial S}. 
Moreover, one can show forz∈S¯{\displaystyle z\in {\overline {S}}}that|f(z)hϵ(z)|→0{\displaystyle |f(z)h_{\epsilon }(z)|\to 0}uniformly as|ℜ(z)|→∞{\displaystyle |\Re (z)|\to \infty }. This allows us to find anx0{\displaystyle x_{0}}such that|f(z)hϵ(z)|≤1{\displaystyle |f(z)h_{\epsilon }(z)|\leq 1}wheneverz∈S¯{\displaystyle z\in {\overline {S}}}and|ℜ(z)|≥x0{\displaystyle |\Re (z)|\geq x_{0}}. Now consider the bounded rectangular regionSx0{\displaystyle S_{x_{0}}}. We have established that|f(z)hϵ(z)|≤1{\displaystyle |f(z)h_{\epsilon }(z)|\leq 1}for allz∈∂Sx0{\displaystyle z\in \partial S_{x_{0}}}. Hence, the maximum modulus principle implies that|f(z)hϵ(z)|≤1{\displaystyle |f(z)h_{\epsilon }(z)|\leq 1}for allz∈Sx0¯{\displaystyle z\in {\overline {S_{x_{0}}}}}. Since|f(z)hϵ(z)|≤1{\displaystyle |f(z)h_{\epsilon }(z)|\leq 1}also holds wheneverz∈S{\displaystyle z\in S}and|ℜ(z)|>x0{\displaystyle |\Re (z)|>x_{0}}, we have in fact shown that|f(z)hϵ(z)|≤1{\displaystyle |f(z)h_{\epsilon }(z)|\leq 1}holds for allz∈S{\displaystyle z\in S}. Finally, becausefhϵ→f{\displaystyle fh_{\epsilon }\to f}asϵ→0{\displaystyle \epsilon \to 0}, we conclude that|f(z)|≤1{\displaystyle |f(z)|\leq 1}for allz∈S{\displaystyle z\in S}.Q.E.D. A particularly useful statement proved using the Phragmén–Lindelöf principle bounds holomorphic functions on a sector of the complex plane if it is bounded on its boundary. This statement can be used to give a complex analytic proof of theHardy'suncertainty principle, which states that a function and its Fourier transform cannot both decay faster than exponentially.[3] Proposition.LetF{\displaystyle F}be a function that isholomorphicin asector of central angleβ−α=π/λ{\displaystyle \beta -\alpha =\pi /\lambda }, and continuous on its boundary. If forz∈∂S{\displaystyle z\in \partial S}, and for allz∈S{\displaystyle z\in S}, whereρ∈[0,λ){\displaystyle \rho \in [0,\lambda )}andC>0{\displaystyle C>0}, then|F(z)|≤1{\displaystyle |F(z)|\leq 1}holds also for allz∈S{\displaystyle z\in S}. The condition (2) can be relaxed to with the same conclusion. In practice the point 0 is often transformed into the point ∞ of theRiemann sphere. This gives a version of the principle that applies to strips, for example bounded by two lines of constantreal partin the complex plane. This special case is sometimes known asLindelöf's theorem. Carlson's theoremis an application of the principle to functions bounded on the imaginary axis.
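The displayed formulas referred to in this section are, in the standard formulation that the surrounding text follows, as given below; this is a reconstruction of the usual statements rather than a quotation. The strip in the motivating example is

S = \{\, z = x + iy \in \mathbb{C} : |y| < \tfrac{\pi}{2} \,\},

on whose boundary |f(x \pm i\pi/2)| = |\exp(e^{x} e^{\pm i\pi/2})| = |\exp(\pm i\, e^{x})| = 1, since the exponent is purely imaginary. The growth hypothesis of the strip proposition is usually stated as

|f(z)| \le \exp\bigl(A\, e^{\,c\,|\operatorname{Re} z|}\bigr) \qquad \text{for all } z \in S,

with c < 1 and A < \infty; the example f(z) = \exp(\exp(z)) corresponds to c = 1, which is exactly where the conclusion fails. The sector proposition assumes |F(z)| \le 1 on \partial S together with a bound of the form

|F(z)| \le C\, e^{\,c\,|z|^{\rho}} \qquad \text{for all } z \in S,

for some \rho \in [0, \lambda) and constants C, c > 0.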
https://en.wikipedia.org/wiki/Phragm%C3%A9n%E2%80%93Lindel%C3%B6f_principle
In the mathematical theory of functions of one or more complex variables, and also in complex algebraic geometry, a biholomorphism or biholomorphic function is a bijective holomorphic function whose inverse is also holomorphic. Formally, a biholomorphic function is a function φ defined on an open subset U of the n-dimensional complex space C^n with values in C^n which is holomorphic and one-to-one, such that its image is an open set V in C^n and the inverse φ^{-1} : V → U is also holomorphic. More generally, U and V can be complex manifolds. As in the case of functions of a single complex variable, a sufficient condition for a holomorphic map to be biholomorphic onto its image is that the map is injective, in which case the inverse is also holomorphic (e.g., see Gunning 1990, Theorem I.11 or Corollary E.10 pg. 57). If there exists a biholomorphism φ : U → V, we say that U and V are biholomorphically equivalent or that they are biholomorphic. If n = 1, every simply connected open set other than the whole complex plane is biholomorphic to the unit disc (this is the Riemann mapping theorem). The situation is very different in higher dimensions. For example, open unit balls and open unit polydiscs are not biholomorphically equivalent for n > 1. In fact, there does not exist even a proper holomorphic function from one to the other. In the case of maps f : U → C defined on an open subset U of the complex plane C, some authors (e.g., Freitag 2009, Definition IV.4.1) define a conformal map to be an injective map with nonzero derivative, i.e., f′(z) ≠ 0 for every z in U. According to this definition, a map f : U → C is conformal if and only if f : U → f(U) is biholomorphic. Notice that per the definition of biholomorphisms, nothing is assumed about their derivatives, so this equivalence contains the claim that a homeomorphism that is complex differentiable must actually have nonzero derivative everywhere. Other authors (e.g., Conway 1978) define a conformal map as one with nonzero derivative, but without requiring that the map be injective. According to this weaker definition, a conformal map need not be biholomorphic, even though it is locally biholomorphic (by the inverse function theorem). For example, if f : U → U is defined by f(z) = z² with U = C − {0}, then f is conformal on U, since its derivative f′(z) = 2z ≠ 0, but it is not biholomorphic, since it is 2-to-1. This article incorporates material from "biholomorphically equivalent" on PlanetMath, which is licensed under the Creative Commons Attribution/Share-Alike License.
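The distinction drawn in the last paragraph can be checked numerically. A small Python sketch, purely illustrative, showing that f(z) = z² on C − {0} has a nowhere-vanishing derivative (so it is conformal in the weaker sense and locally biholomorphic) yet is not injective:

f = lambda z: z * z          # the map f(z) = z^2, considered on C - {0}
df = lambda z: 2 * z         # its derivative, which is nonzero away from the origin

z = 1.3 + 0.7j
print(df(z) != 0)            # True: nonzero derivative at z
print(f(z) == f(-z))         # True: f identifies z and -z, so it is 2-to-1, not biholomorphic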
https://en.wikipedia.org/wiki/Biholomorphic_mapping
Sensitive compartmented information(SCI) is a type ofUnited Statesclassified informationconcerning or derived from sensitive intelligence sources, methods, or analytical processes. All SCI must be handled within formal access control systems established by theDirector of National Intelligence.[1] SCI is not a classification; SCI clearance has sometimes been called "above Top Secret",[2]but information at any classification level may exist within an SCI control system. When "decompartmentalized", this information is treated the same as collateral information at the same classification level. The federal government requires[3]the SCI be processed, stored, used or discussed in aSensitive compartmented information facility(SCIF). Eligibility for access to SCI is determined by aSingle Scope Background Investigation(SSBI) or periodic reinvestigation.[4]Because the same investigation is used to grantTop Secretsecurity clearances, the two are often written together asTS//SCI. Eligibility alone does not confer access to any specific SCI material; it is simply a qualification. One must receive explicit permission to access an SCI control system or compartment. This process may include apolygraphor other approved investigative or adjudicative action.[5] Once it is determined a person should have access to an SCI compartment, they sign a nondisclosure agreement, are "read in" or indoctrinated, and the fact of this access is recorded in a local access register or in a computer database. Upon termination from a particular compartment, the employee again signs the nondisclosure agreement. SCI is divided into control systems, which are further subdivided into compartments and sub-compartments. These systems and compartments are usually identified by a classified codeword. Several such codewords have been declassified. The following SCI control systems, with their abbreviations and compartments, are known: SCI control system markings are placed immediately after the classification level markings in a banner line (banner spells out TOP SECRET in full) or portion marking (here TS is used).[24]Sometimes, especially on older documents, they are stamped. The following banner line and portion marking describe a top secret document containing information from the notional SI-GAMMA 1234 subcompartment, the notional SI-MANSION compartment, and the notional TALENT KEYHOLE-BLUEFISH compartment (TK is always abbreviated, because in some cases even the full meaning may be classified, like for BUR keyword, BUR-BLG-HCAS, BUR-BLG-JETS): Older documents were marked with HANDLE VIA xxxx CONTROL CHANNELS (or "HVxCC"), HANDLE VIA xxxx CHANNELS ONLY (or "HVxCO"), or HANDLE VIA xxxx CHANNELS JOINTLY (or "HVxCJ"), but this requirement was rescinded in 2006.[25]For example, COMINT documents were marked as HANDLE VIA COMINT CHANNELS ONLY. This marking led to the use of thecaveatCCO (COMINT Channels Only) in portion markings,[26]but CCO is also obsolete.[27]
https://en.wikipedia.org/wiki/Sensitive_compartmented_information
Aproduct integralis anyproduct-based counterpart of the usualsum-basedintegralofcalculus. The product integral was developed by the mathematicianVito Volterrain 1887 to solve systems oflinear differential equations.[1][2] The classicalRiemann integralof afunctionf:[a,b]→R{\displaystyle f:[a,b]\to \mathbb {R} }can be defined by the relation where thelimitis taken over allpartitionsof theinterval[a,b]{\displaystyle [a,b]}whosenormsapproach zero. Product integrals are similar, but take thelimitof aproductinstead of thelimitof asum. They can be thought of as "continuous" versions of "discrete"products. They are defined as For the case off:[a,b]→R{\displaystyle f:[a,b]\to \mathbb {R} }, the product integral reduces exactly to the case ofLebesgue integration, that is, to classical calculus. Thus, the interesting cases arise for functionsf:[a,b]→A{\displaystyle f:[a,b]\to A}whereA{\displaystyle A}is either somecommutative algebra, such as a finite-dimensionalmatrix field, or ifA{\displaystyle A}is anon-commutative algebra. The theories for these two cases, the commutative and non-commutative cases, have little in common. The non-commutative case is far more complicated; it requires properpath-orderingto make the integral well-defined. For the commutative case, three distinct definitions are commonplace in the literature, referred to as Type-I, Type-II orgeometric, and type-III orbigeometric.[3][4][5]Such integrals have found use inepidemiology(theKaplan–Meier estimator) and stochasticpopulation dynamics. The geometric integral, together with the geometric derivative, is useful inimage analysis[6]and in the study of growth/decay phenomena (e.g., ineconomic growth,bacterial growth, andradioactive decay).[7][8]Thebigeometric integral, together with the bigeometric derivative, is useful in some applications offractals,[9][10][11][12]and in the theory ofelasticityin economics.[3][5][13] The non-commutative case commonly arises inquantum mechanicsandquantum field theory. The integrand is generally an operator belonging to somenon-commutative algebra. In this case, one must be careful to establish apath-orderingwhile integrating. A typical result is theordered exponential. TheMagnus expansionprovides one technique for computing the Volterra integral. Examples include theDyson expansion, the integrals that occur in theoperator product expansionand theWilson line, a product integral over a gauge field. TheWilson loopis the trace of a Wilson line. The product integral also occurs incontrol theory, as thePeano–Baker seriesdescribing state transitions inlinear systemswritten in amaster equationtype form. The Volterra product integral is most useful when applied to matrix-valued functions or functions with values in aBanach algebra. When applied to scalars belonging to a non-commutative field, to matrixes, and to operators,i.e.to mathematical objects that don't commute, the Volterra integral splits in two definitions.[14] Theleft product integralis With this notation of left products (i.e. normal products applied from left) Theright product integral With this notation of right products (i.e. applied from right) Where1{\displaystyle \mathbb {1} }is the identity matrix and D is a partition of the interval [a,b] in the Riemann sense,i.e.the limit is over the maximum interval in the partition. Note how in this casetime orderingbecomes evident in the definitions. TheMagnus expansionprovides a technique for computing the product integral. It defines a continuous-time version of theBaker–Campbell–Hausdorff formula. 
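In the usual notation — this is the standard form of the definitions referred to above, not a quotation — let \mathbb{1} be the identity, D = \{a = t_0 < t_1 < \dots < t_n = b\} a partition with increments \Delta t_i = t_i - t_{i-1} and tags \xi_i \in [t_{i-1}, t_i]. The left product integral is

\prod_a^b \bigl(\mathbb{1} + A(t)\,dt\bigr) = \lim_{\max \Delta t_i \to 0} \bigl(\mathbb{1} + A(\xi_n)\Delta t_n\bigr) \cdots \bigl(\mathbb{1} + A(\xi_1)\Delta t_1\bigr),

with later factors multiplied in from the left, and the right product integral is

\bigl(\mathbb{1} + A(t)\,dt\bigr) \prod_a^b = \lim_{\max \Delta t_i \to 0} \bigl(\mathbb{1} + A(\xi_1)\Delta t_1\bigr) \cdots \bigl(\mathbb{1} + A(\xi_n)\Delta t_n\bigr),

with later factors multiplied in from the right. When the values of A(t) commute, the two coincide and both equal \exp\bigl(\int_a^b A(t)\,dt\bigr), which is why the distinction only matters in the non-commutative case.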
The product integral satisfies a collection of properties defining a one-parametercontinuous group; these are stated in two articles showing applications: theDyson seriesand thePeano–Baker series. The commutative case is vastly simpler, and, as a result, a large variety of distinct notations and definitions have appeared. Three distinct styles are popular in the literature. This subsection adopts the product∏{\displaystyle \textstyle \prod }notation for product integration instead of the integral∫{\displaystyle \textstyle \int }(usually modified by a superimposed times symbol or letter P) favoured byVolterraand others. An arbitrary classification of types is adopted to impose some order in the field. When the function to be integrated is valued in the real numbers, then the theory reduces exactly to the theory ofLebesgue integration. The type I product integral corresponds toVolterra's original definition.[2][15][16]The following relationship exists forscalar functionsf:[a,b]→R{\displaystyle f:[a,b]\to \mathbb {R} }: which is called thegeometric integral. The logarithm is well-defined ifftakes values in the real or complex numbers, or ifftakes values in a commutative field of commutingtrace-classoperators. This definition of the product integral is thecontinuousanalog of thediscreteproductoperator∏i=ab{\displaystyle \textstyle \prod _{i=a}^{b}}(withi,a,b∈Z{\displaystyle i,a,b\in \mathbb {Z} }) and themultiplicativeanalog to the (normal/standard/additive)integral∫abdx{\displaystyle \textstyle \int _{a}^{b}dx}(withx∈[a,b]{\displaystyle x\in [a,b]}): It is very useful instochastics, where thelog-likelihood(i.e. thelogarithmof a product integral ofindependentrandom variables) equals theintegralof thelogarithmof these (infinitesimallymany)random variables: The type III product integral is called thebigeometric integral. For the commutative case, the following results hold for the type II product integral (the geometric integral). The geometric integral (type II above) plays a central role in thegeometric calculus,[3][4][17]which is a multiplicative calculus. The inverse of the geometric integral, which is thegeometric derivative, denotedf∗(x){\displaystyle f^{*}(x)}, is defined using the following relationship: Thus, the following can be concluded: whereXis arandom variablewithprobability distributionF(x). Compare with the standardlaw of large numbers: When the integrand takes values in thereal numbers, then the product intervals become easy to work with by usingsimple functions. Just as in the case ofLebesgue version of (classical) integrals, one can compute product integrals by approximating them with the product integrals ofsimple functions. The case of Type II geometric integrals reduces to exactly the case of classical Lebesgue integration. Becausesimple functionsgeneralizestep functions, in what follows we will only consider the special case of simple functions that are step functions. This will also make it easier to compare theLebesgue definitionwith theRiemann definition. Given a step functionf:[a,b]→R{\displaystyle f:[a,b]\to \mathbb {R} }with atagged partition oneapproximationof the "Riemann definition" of thetype I product integralis given by[18] The (type I) product integral was defined to be, roughly speaking, thelimitof theseproductsbyLudwig Schlesingerin a 1931 article.[which?] 
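For a positive scalar function f : [a, b] \to \mathbb{R}, the standard forms of the relations referred to above (a reconstruction of the usual statements, not a quotation) are

\prod_a^b \bigl(1 + f(x)\,dx\bigr) = \exp\Bigl(\int_a^b f(x)\,dx\Bigr) \qquad \text{(type I, Volterra)},

\prod_a^b f(x)^{dx} = \exp\Bigl(\int_a^b \ln f(x)\,dx\Bigr) \qquad \text{(type II, the geometric integral)}.

Taking the logarithm of the geometric integral turns the continuous product into an ordinary integral of \ln f, which is the property exploited for log-likelihoods of (infinitesimally many) independent factors.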
Another approximation of the "Riemann definition" of the type I product integral is defined as Whenf{\displaystyle f}is aconstant function, the limit of the first type of approximation is equal to the second type of approximation.[19]Notice that in general, for a step function, the value of the second type of approximation doesn't depend on the partition, as long as the partition is arefinementof the partition defining the step function, whereas the value of the first type of approximationdoesdepend on the fineness of the partition, even when it is a refinement of the partition defining the step function. It turns out that[20]foranyproduct-integrable functionf{\displaystyle f}, the limit of the first type of approximation equals the limit of the second type of approximation. Since, for step functions, the value of the second type of approximation doesn't depend on the fineness of the partition for partitions "fine enough", it makes sense to define[21]the "Lebesgue (type I) product integral" of a step function as wherey0<a=s1<y1<⋯<yn−1<sn<yn=b{\displaystyle y_{0}<a=s_{1}<y_{1}<\dots <y_{n-1}<s_{n}<y_{n}=b}is the tagged partition corresponding to the step functionf{\displaystyle f}. (In contrast, the corresponding quantity would not be unambiguously defined using the first type of approximation.) This generalizes to arbitrarymeasure spacesreadily. IfX{\displaystyle X}is a measure space withmeasureμ{\displaystyle \mu }, then for any product-integrable simple functionf(x)=∑k=1nakIAk(x){\displaystyle f(x)=\sum _{k=1}^{n}a_{k}I_{A_{k}}(x)}(i.e. aconical combinationof theindicator functionsfor somedisjointmeasurable setsA1,A2,…,An⊆X{\displaystyle A_{1},A_{2},\dots ,A_{n}\subseteq X}), its type I product integral is defined to be sinceak{\displaystyle a_{k}}is the value off{\displaystyle f}at any point ofAk{\displaystyle A_{k}}. In the special case whereX=R{\displaystyle X=\mathbb {R} },μ{\displaystyle \mu }isLebesgue measure, and all of the measurable setsAk{\displaystyle A_{k}}areintervals, one can verify that this is equal to the definition given above for that special case. Analogous tothe theory of Lebesgue (classical) integrals, the Type I product integral of any product-integrable functionf{\displaystyle f}can be written as the limit of an increasingsequenceof Volterra product integrals of product-integrable simple functions. Takinglogarithmsof both sides of the above definition, one gets that for any product-integrable simple functionf{\displaystyle f}: where we used the definition of integral for simple functions. Moreover, becausecontinuous functionslikeexp{\displaystyle \exp }can be interchanged with limits, and the product integral of any product-integrable functionf{\displaystyle f}is equal to the limit of product integrals of simple functions, it follows that the relationship holds generally foranyproduct-integrablef{\displaystyle f}. This clearly generalizes the property mentioned above. The Type I integral is multiplicative as aset function,[22]which can be shown using the above property. More specifically, given a product-integrable functionf{\displaystyle f}one can define a set functionVf{\displaystyle {\cal {V}}_{f}}by defining, for every measurable setB⊆X{\displaystyle B\subseteq X}, whereIB(x){\displaystyle I_{B}(x)}denotes theindicator functionofB{\displaystyle B}. Then for any twodisjointmeasurable setsB1,B2{\displaystyle B_{1},B_{2}}one has This property can be contrasted with measures, which aresigma-additiveset functions. 
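In the standard presentation (again a reconstruction rather than a quotation), the Lebesgue type I product integral of a simple function f = \sum_{k=1}^{n} a_k I_{A_k} on a measure space (X, \mu) is

\prod_{k=1}^{n} \bigl(e^{a_k}\bigr)^{\mu(A_k)} = \exp\Bigl(\sum_{k=1}^{n} a_k\,\mu(A_k)\Bigr) = \exp\Bigl(\int_X f\,d\mu\Bigr),

so that taking logarithms reduces it to the ordinary Lebesgue integral of f, and the multiplicativity of the set function \mathcal{V}_f over disjoint measurable sets B_1, B_2 reads

\mathcal{V}_f(B_1 \cup B_2) = \mathcal{V}_f(B_1)\,\mathcal{V}_f(B_2).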
However, the Type I integral isnotmultiplicativeas afunctional. Given two product-integrable functionsf,g{\displaystyle f,g}, and a measurable setA{\displaystyle A}, it is generally the case that IfX{\displaystyle X}is a measure space with measureμ{\displaystyle \mu }, then for any product-integrable simple functionf(x)=∑k=1nakIAk(x){\displaystyle f(x)=\sum _{k=1}^{n}a_{k}I_{A_{k}}(x)}(i.e. aconical combinationof theindicator functionsfor some disjoint measurable setsA1,A2,…,An⊆X{\displaystyle A_{1},A_{2},\dots ,A_{n}\subseteq X}), its type II product integral is defined to be This can be seen to generalize the definition given above. Taking logarithms of both sides, we see that for any product-integrable simple functionf{\displaystyle f}: where the definition of the Lebesgue integral for simple functions was used. This observation, analogous to the one already made for Type II integrals above, allows one to entirely reduce the "Lebesgue theory of type II geometric integrals" to the Lebesgue theory of (classical) integrals. In other words, because continuous functions likeexp{\displaystyle \exp }andln{\displaystyle \ln }can be interchanged with limits, and the product integral of any product-integrable functionf{\displaystyle f}is equal to the limit of some increasing sequence of product integrals of simple functions, it follows that the relationship holds generally foranyproduct-integrablef{\displaystyle f}. This generalizes the property of geometric integrals mentioned above.
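As a concrete numerical check of the scalar commutative case, the sketch below approximates the geometric (type II) product integral of f(x) = x over [1, 2] by a finite product over a uniform tagged partition and compares it with exp(∫ ln f). The setup is illustrative only and the helper name is invented:

import math

def geometric_product_integral(f, a, b, n=100_000):
    # Approximate  prod_a^b f(x)^dx  by the finite product  prod_i f(x_i)**dx
    # over a uniform partition with midpoint tags.
    dx = (b - a) / n
    prod = 1.0
    for i in range(n):
        x = a + (i + 0.5) * dx
        prod *= f(x) ** dx
    return prod

approx = geometric_product_integral(lambda x: x, 1.0, 2.0)
exact = math.exp(2 * math.log(2) - 1)   # exp( integral_1^2 ln x dx ) = exp(2 ln 2 - 1)
print(approx, exact)                    # both are approximately 1.4715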
https://en.wikipedia.org/wiki/Product_integral
Inlinguistics, acalque(/kælk/) orloan translationis awordorphraseborrowed from anotherlanguagebyliteralword-for-word or root-for-roottranslation. When used as averb, "to calque" means to borrow a word or phrase from another language while translating its components, so as to create a new word or phrase (lexeme) in the target language. For instance, the English wordskyscraperhas been calqued in dozens of other languages,[1]combining words for "sky" and "scrape" in each language, as for exampleWolkenkratzerin German,arranha-céuin Portuguese,wolkenkrabberin Dutch,rascacieloin Spanish,grattacieloin Italian,gökdelenin Turkish, andmatenrō(摩天楼)in Japanese. Calques, like direct borrowings, often function as linguistic gap-fillers, emerging when a language lacks existing vocabulary to express new ideas, technologies, or objects. This phenomenon is widespread and is often attributed to the shared conceptual frameworks across human languages. Speakers of different languages tend to perceive the world through common categories such as time, space, and quantity, making the translation of concepts across languages both possible and natural.[2] Calquing is distinct fromphono-semantic matching: while calquing includessemantictranslation, it does not consist ofphoneticmatching—i.e., of retaining the approximatesoundof the borrowed word by matching it with a similar-sounding pre-existing word ormorphemein the target language.[3] Proving that a word is a calque sometimes requires more documentation than does an untranslated loanword because, in some cases, a similar phrase might have arisen in both languages independently. This is less likely to be the case when the grammar of the proposed calque is quite different from that of the borrowing language, or when the calque contains less obvious imagery. One system classifies calques into five groups. This terminology is not universal:[4] Some linguists refer to aphonological calque, in which the pronunciation of a word is imitated in the other language.[8]For example, the English word "radar" becomes the similar-sounding Chinese word雷达(pinyin:léidá),[8]which literally means "to arrive (as fast) as thunder". Partial calques, or loan blends, translate some parts of a compound but not others.[9]For example, the name of the Irish digital television serviceSaorviewis a partial calque of that of the UK service "Freeview", translating the first half of the word from English to Irish but leaving the second half unchanged. Other examples include "liverwurst" (< GermanLeberwurst)[10]and "apple strudel" (< GermanApfelstrudel).[11] The "computer mouse" was named in English for its resemblance to theanimal. Many other languages use their word for "mouse" for the "computer mouse", sometimes using adiminutiveor, inChinese, adding the word "cursor" (标), makingshǔbiāo"mouse cursor" (simplified Chinese:鼠标;traditional Chinese:鼠標;pinyin:shǔbiāo).[citation needed]Another example is the Spanish wordratónthat means both the animal and the computer mouse.[12] The common English phrase "flea market" is a loan translation of the Frenchmarché aux puces("market with fleas").[13]At least 22 other languages calque the French expression directly or indirectly through another language. The wordloanwordis a calque of theGermannounLehnwort. 
In contrast, the term calque is a loanword, from the French noun calque ("tracing, imitation, close copy").[14] Another example of a common morpheme-by-morpheme loan-translation is of the English word "skyscraper", a kenning-like term which may be calqued using the word for "sky" or "cloud" and the word, variously, for "scrape", "scratch", "pierce", "sweep", "kiss", etc. At least 54 languages have their own versions of the English word. Some Germanic and Slavic languages derived their words for "translation" from words meaning "carrying across" or "bringing across", calquing from the Latin translātiō or trādūcō.[15] The Latin weekday names came to be associated by ancient Germanic speakers with their own gods following a practice known as interpretatio germanica: the Latin "Day of Mercury", Mercurii dies (later mercredi in modern French), was borrowed into Late Proto-Germanic as the "Day of Wōđanaz" (Wodanesdag), which became Wōdnesdæg in Old English, then "Wednesday" in Modern English.[16] Since at least 1894, according to the Trésor de la langue française informatisé, the French term calque has been used in its linguistic sense, namely in a publication by Louis Duvau:[17]

Un autre phénomène d'hybridation est la création dans une langue d'un mot nouveau, dérivé ou composé à l'aide d'éléments existant déja dans cette langue, et ne se distinguant en rien par l'aspect extérieur des mots plus anciens, mais qui, en fait, n'est que le calque d'un mot existant dans la langue maternelle de celui qui s'essaye à un parler nouveau. [...] nous voulons rappeler seulement deux ou trois exemples de ces calques d'expressions, parmi les plus certains et les plus frappants.

Another phenomenon of hybridization is the creation in a language of a new word, derived or composed with the help of elements already existing in that language, and which is not distinguished in any way by the external aspect of the older words, but which, in fact, is only the copy (calque) of a word existing in the mother tongue of the one who tries out a new language. [...] we want to recall only two or three examples of these copies (calques) of expressions, among the most certain and the most striking.

Since at least 1926, the term calque has been attested in English through a publication by the linguist Otakar Vočadlo:[18]
https://en.wikipedia.org/wiki/Calque
In computer architecture, the memory hierarchy separates computer storage into a hierarchy based on response time. Since response time, complexity, and capacity are related, the levels may also be distinguished by their performance and controlling technologies.[1] The memory hierarchy affects performance in computer architectural design, algorithm predictions, and lower level programming constructs involving locality of reference. Designing for high performance requires considering the restrictions of the memory hierarchy, i.e. the size and capabilities of each component. Each of the various components can be viewed as part of a hierarchy of memories (m1, m2, ..., mn) in which each member mi is typically smaller and faster than the next highest member mi+1 of the hierarchy. To limit waiting by higher levels, a lower level will respond by filling a buffer and then signaling to activate the transfer. There are four major storage levels.[1] This is a general memory hierarchy structuring. Many other structures are useful. For example, a paging algorithm may be considered as a level for virtual memory when designing a computer architecture, and one can include a level of nearline storage between online and offline storage. The number of levels in the memory hierarchy and the performance at each level have increased over time. The types of memory and storage components have also changed historically.[6] For example, the memory hierarchy of an Intel Haswell Mobile[7] processor circa 2013 is: The lower levels of the hierarchy – from mass storage downwards – are also known as tiered storage. The formal distinction between online, nearline, and offline storage is:[12] For example, always-on spinning disks are online, while spinning disks that spin down, such as massive arrays of idle disks (MAID), are nearline. Removable media such as tape cartridges that can be automatically loaded, as in a tape library, are nearline, while cartridges that must be manually loaded are offline. Most modern CPUs are so fast that, for most program workloads, the bottleneck is the locality of reference of memory accesses and the efficiency of the caching and memory transfer between different levels of the hierarchy[citation needed]. As a result, the CPU spends much of its time idling, waiting for memory I/O to complete. This is sometimes called the space cost, as a larger memory object is more likely to overflow a small and fast level and require use of a larger, slower level. The resulting load on memory use is known as pressure (respectively register pressure, cache pressure, and (main) memory pressure). Terms for data being missing from a higher level and needing to be fetched from a lower level are, respectively: register spilling (due to register pressure: register to cache), cache miss (cache to main memory), and (hard) page fault (real main memory to virtual memory, i.e. mass storage, commonly referred to as disk regardless of the actual mass storage technology used). Modern programming languages mainly assume two levels of memory, main (working) memory and mass storage, though in assembly language and inline assemblers in languages such as C, registers can be directly accessed. Taking optimal advantage of the memory hierarchy requires the cooperation of programmers, hardware, and compilers (as well as underlying support from the operating system): Many programmers assume one level of memory. This works fine until the application hits a performance wall. Then the memory hierarchy will be assessed during code refactoring.
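The practical effect of locality of reference described above can be observed even from a high-level language. The following sketch is a rough illustration only: absolute timings depend entirely on the machine, and it assumes NumPy is available. It sums a large C-ordered matrix by rows, which touch contiguous memory, and then by columns, which stride across it and waste most of each fetched cache line:

import time
import numpy as np

a = np.random.rand(4000, 4000)   # C-ordered: each row is contiguous in memory

def timed(label, fn):
    t0 = time.perf_counter()
    total = fn()
    print(f"{label}: {time.perf_counter() - t0:.3f} s (sum={total:.1f})")

# Row-wise sweep: good spatial locality.
timed("row-major sweep   ", lambda: sum(float(a[i, :].sum()) for i in range(a.shape[0])))

# Column-wise sweep: each slice strides across memory, so caches are used poorly.
timed("column-major sweep", lambda: sum(float(a[:, j].sum()) for j in range(a.shape[1])))

On typical hardware the second sweep is noticeably slower even though it performs the same number of additions, which is exactly the behaviour that hierarchy-aware programmers, compilers, and hardware prefetchers try to avoid or hide.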
https://en.wikipedia.org/wiki/Memory_hierarchy
Genetic privacyinvolves the concept of personalprivacyconcerning the storing, repurposing, provision to third parties, and displaying of information pertaining to one'sgenetic information.[1][2]This concept also encompasses privacy regarding the ability to identify specific individuals by theirgenetic sequence, and the potential to gain information on specific characteristics about that person via portions of their genetic information, such as their propensity for specific diseases or their immediate or distant ancestry.[3] With the public release of genome sequence information of participants in large-scale research studies, questions regarding participant privacy have been raised. In some cases, it has been shown that it is possible to identify previously anonymous participants from large-scale genetic studies that released gene sequence information.[4] Genetic privacy concerns also arise in the context of criminal law because the government can sometimes overcome criminal suspects' genetic privacy interests and obtain their DNA sample.[5]Due to the shared nature of genetic information between family members, this raises privacy concerns of relatives as well.[6] As concerns and issues of genetic privacy are raised, regulations and policies have been developed in the United States both at a federal and state level.[7][8] In the majority of cases, an individual'sgenetic sequenceis considered unique to that individual. One notable exception to this rule in humans is the case ofidentical twins, who have nearly identical genome sequences at birth.[9]In the remainder of cases, one's genetic fingerprint is considered specific to a particular person and is regularly used in the identification of individuals in the case of establishing innocence or guilt in legal proceedings viaDNA profiling.[10]Specific gene variants one's genetic code, known asalleles, have been shown to have strong predictive effects in the occurrences of diseases, such as theBRCA1andBRCA2mutant genes inBreast CancerandOvarian Cancer, or PSEN1, PSEN2, and APP genes inearly-onset Alzheimer's disease.[11][12][13]Additionally, gene sequences are passed down with aregular pattern of inheritancebetween generations, and can therefore reveal one's ancestry viagenealogical DNA testing. Additionally with knowledge of the sequence of one's biological relatives, traits can be compared that allow relationships between individuals, or the lack thereof, to be determined, as is often done inDNA paternity testing. 
As such, one's genetic code can be used to infer many characteristics about an individual, including many potentially sensitive subjects such as:[14] Common specimen types for direct-to-consumer genetic testing are cheek swabs and saliva samples.[15]One of the most popular reasons for at-home genetic testing is to obtain information on an individual's ancestry via genealogical DNA testing and is offered by many companies such as23andMe,AncestryDNA,Family Tree DNA, orMyHeritage.[16]Other tests are also available which provide consumers with information on genes which influence the risk of specific diseases, such as the risk of developinglate-onset Alzheimer's diseaseorceliac disease.[17] Studies have shown that genomic data is not immune to adversary attacks.[3][18][19]A study conducted in 2013 revealed vulnerabilities in the security of public databases that containgenetic data.[4]As a result, research subjects could sometimes be identified by their DNA alone.[20]Although reports of premeditated breaches outside of experimental research are disputed, researchers suggest the liability is still important to study.[21] While accessible genomic data has been pivotal in advancing biomedical research, it also escalates the possibility of exposing sensitive information.[3][18][19][21][22]A common practice in genomic medicine to protect patient anonymity involves removing patient identifiers.[3][18][19][23]However, de-identified data is not subject to the same privileges as the research subjects.[23][19]Furthermore, there is an increasing ability to re-identify patients and their genetic relatives from their genetic data.[3][18][19][22] One study demonstrated re-identification by piecing together genomic data fromshort tandem repeats(e.g.CODIS),SNPallele frequencies (e.g.ancestrytesting), andwhole-genome sequencing.[18]They also hypothesize using a patient's genetic information, ancestry testing, and social media to identify relatives.[18]Other studies have echoed the risks associated with linking genomic information with public data like social media, including voter registries, web searches, and personal demographics,[3]or with controlled data, like personal medical records.[19] There is also controversy regarding the responsibility aDNA testingcompany has to ensure thatleaksand breaches do not happen.[24]Determining who legally owns the genomic data, the company or the individual, is of legal concern. There have been published examples of personal genome information being exploited, as well as indirect identification of family members.[18][25]Additional privacy concerns, related to, e.g.,genetic discrimination, loss of anonymity, and psychological impacts, have been increasingly pointed out by the academic community[25][26]as well as government agencies.[17] Additionally, for criminal justice and privacy advocates, the use of genetic information in identifying suspects for criminal investigations proves worrisome underthe United States Fourth Amendment—especially when an indirect genetic link connects an individual to crime scene evidence.[27]Since 2018, law enforcement officials have been harnessing the power of genetic data to revisitcold caseswith DNA evidence.[28]Suspects discovered through this process are not directly identified by the input of their DNA into established criminal databases, like CODIS. 
Instead, suspects are identified as the result of familial genetic sleuthing by law enforcement, submitting crime scene DNA evidence to genetic database services that link users whose DNA similarity indicates a family connection.[28][29]Officers can then track the newly identified suspect in person, waiting to collect discarded trash that might carry DNA in order to confirm the match.[28] Despite the privacy concerns of suspects and their relatives, this procedure is likely to survive Fourth Amendment scrutiny.[6]Much like donors of biological samples in cases of genetic research,[30][31]criminal suspects do not retain property rights in abandoned waste; they can no longer assert an expectation of privacy in the discarded DNA used to confirm law enforcement suspicions, thereby eliminating their Fourth Amendment protection in that DNA.[6]Additionally, the genetic privacy of relatives is likely irrelevant under current caselaw since Fourth Amendment protection is “personal” to criminal defendants.[6] In a systematic review of perspectives toward genetic privacy, researchers highlight some of the concerns individuals hold regarding their genetic information, such as the potential dangers and effects on themselves and family members.[21]Academics note that participating in biomedical research or genetic testing has implications beyond the participant; it can also reveal information about genetic relatives.[18][20][21][25]The study also found that people expressed concerns as to which body controls their information and if their genetic information could be used against them.[21] Additionally, theAmerican Society of Human Geneticshas expressed issues about genetic tests in children.[32]They infer that testing could lead to negative consequences for the child. For example, if a child's likelihood for adoption was influenced by genetic testing, the child might suffer from self esteem issues. A child's well-being might also suffer due to paternity testing or custody battles that require this type of information.[14] When the access of genetic information is regulated, it can preventinsurance companiesand employers from reaching such data. 
This could avoid issues of discrimination, which oftentimes leaves an individual whose information has been breached without a job or without insurance.[14] In the United States, biomedical research containing human subjects is governed by a baseline standard of ethics known asThe Common Rule, which aims to protect a subject's privacy by requiring "identifiers" such as name or address to be removed from collected data.[33]A 2012 report by thePresidential Commission for the Study of Bioethical Issuesstated, however, that "what constitutes 'identifiable' and 'de-identified' data is fluid and that evolving technologies and the increasing accessibility of data could allow de-identified data to become re-identified".[33]In fact, research has already shown that it is "possible to discover a study participant's identity by cross-referencing research data about him and his DNA sequence … [with] genetic genealogy and public-records databases".[34]This has led to calls for policy-makers to establish consistent guidelines and best practices for the accessibility and usage of individual genomic data collected by researchers.[35] Privacy protections for genetic research participants were strengthened by provisions of the21st Century Cures Act(H.R.34) passed on 7 December 2016 for which the American Society of Human Genetics (ASHG) commended Congress,Senator WarrenandSenator Enzi.[8][36][37] TheGenetic Information Nondiscrimination Actof 2008 (GINA) protects the genetic privacy of the public, including research participants. The passage of GINA makes it illegal for health insurers or employers to request or require genetic information of an individual or of family members (and further prohibits the discriminatory use of such information).[38]This protection does not extend to other forms of insurance such as life insurance.[38] TheHealth Insurance Portability and Accountability Act of 1996 (HIPAA)also provides some genetic privacy protections. HIPAA defines health information to include genetic information,[39]which places restrictions on who health providers can share the information with.[40] Three kinds of laws are frequently associated with genetic privacy: those relating to informed consent and property rights, those preventing insurance discrimination, and those prohibiting employment discrimination.[41][42]According to the National Human Genome Research Institute, forty-one states have enacted genetic privacy laws as of January 2020.[41]However, those privacy laws vary in the scope of protection offered; while some laws "apply broadly to any person" others apply "narrowly to certain entities such as insurers, employers, or researchers."[41] Arizona, for example, falls in the former category and offers broad protection. Currently, Arizona's genetic privacy statutes focus on the need for informed consent to create, store, or release genetic testing results,[43][44]but a pending bill would amend the state genetic privacy law framework to grant exclusive property rights in genetic information derived from genetic testing to all persons tested.[45]In expanding privacy rights by including property rights, the bill would grant persons who undergo genetic testing greater control over their genetic information. Arizona also prohibits insurance and employment discrimination on the basis of genetic testing results.[46][47] New York State also has strong legislative measures protecting individuals from genetic discrimination. 
Section 79-I of the New York Civil Rights Law places strict restrictions on the usage of genetic data. The statute also outlines the proper conditions for consenting to genetic data collection or usage.[48] California similarly offers a broad range of protection for genetic privacy, but it stops short of granting individuals property rights in their genetic information. While currently enacted legislation focuses on prohibiting genetic discrimination in employment[49]and insurance,[50]a piece of pending legislation would extend genetic privacy rights to provide individuals with greater control over genetic information obtained through direct-to-consumer testing services like23andMe.[51] Florida passed House Bill 1189, a DNA privacy law that prohibits insurers from using genetic data, in July 2020.[7] On the other hand, Mississippi offers few genetic privacy protections beyond those required by the federal government. In the Mississippi Employment Fairness Act, the legislature recognized the applicability of theGenetic Information Nondiscrimination Act,[52]which "prohibit[s] discrimination on the basis of genetic information with respect to health insurance and employment."[53][54] To balance data sharing with the need to protect the privacy of research subjects geneticists are considering to move more data behind controlled-access barriers, authorizing trusted users to access the data from many studies, rather than "having to obtain it piecemeal from different studies".[4][20] In October 2005,IBMbecame the world's first major corporation to establish a genetics privacy policy. Its policy prohibits using employees' genetic information in employment decisions.[55] According to a 2014 study by Yaniv Erlich andArvind Narayanan, genetic privacy breaching techniques fall into three categories:[56] However, more recent studies have indicated new avenues for breaching genetic privacy: According to a 2022 study by Zhiyu Wan et al., safeguards for genetic privacy fall into two categories:[59]
https://en.wikipedia.org/wiki/Genetic_privacy
Mass action in sociology refers to situations where numerous people behave simultaneously in a similar way, but individually and without coordination. For example, at any given moment, many thousands of people are shopping; without any coordination between themselves, they are nonetheless performing the same mass action. Another, more complicated example is based on a work of the 19th-century German sociologist Max Weber, The Protestant Ethic and the Spirit of Capitalism: Weber wrote that capitalism evolved when the Protestant ethic influenced a large number of people to create their own enterprises and to engage in trade and the gathering of wealth. In other words, the Protestant ethic was a force behind an unplanned and uncoordinated mass action that led to the development of capitalism. A bank run is a mass action with sweeping implications. Upon hearing news of a bank's anticipated insolvency, many bank depositors may simultaneously rush to a bank branch to withdraw their deposits.[1] More developed forms of mass action are group behavior and group action. In epidemiological (disease) models, assuming the "law of mass action" means assuming that individuals are homogeneously mixed and that every individual is about as likely to interact with every other individual. This is a common assumption in models such as the SIR model; see the numerical sketch below. This idea serves as the main plot theme in author Isaac Asimov's work Foundation. In the early books of the series, the main character, Hari Seldon, uses the principle of mass action to foresee the imminent fall of the Galactic Empire, which encompasses the entire Milky Way, and a dark age lasting thirty thousand years before a second great empire arises. (In later books the principle is augmented with more recent developments in mathematical sociology.) With this, he hopes to reduce that dark age to only one thousand years, ostensibly by creating an Encyclopedia Galactica to retain all current knowledge.
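The mass-action assumption mentioned above for epidemic models can be made concrete with a short numerical sketch. The Python code below is an illustrative Euler integration of the standard SIR equations, not taken from any cited source; the parameter values (beta, gamma, step size) are arbitrary choices for demonstration. The beta*s*i term is where the law of mass action enters: the rate of new infections is proportional to the product of the susceptible and infective fractions, as if every individual were equally likely to meet every other.

def sir(beta, gamma, s0, i0, days, dt=0.1):
    """Euler integration of the SIR model with mass-action mixing.

    beta: transmission rate, gamma: recovery rate (illustrative values).
    s0, i0: initial susceptible and infective fractions of the population.
    """
    s, i, r = s0, i0, 1.0 - s0 - i0
    trajectory = [(s, i, r)]
    for _ in range(int(days / dt)):
        new_infections = beta * s * i * dt   # the "law of mass action" term
        new_recoveries = gamma * i * dt
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        trajectory.append((s, i, r))
    return trajectory

# Illustrative run: basic reproduction number beta/gamma = 3.
history = sir(beta=0.3, gamma=0.1, s0=0.99, i0=0.01, days=200)
print(max(i for _, i, _ in history))   # peak infective fraction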
https://en.wikipedia.org/wiki/Mass_action_(sociology)
TheISO/IEC 11179metadata registry(MDR) standard is an internationalISO/IECstandard for representingmetadatafor an organization in a metadata registry. It documents the standardization and registration of metadata to make data understandable and shareable.[1] The ISO/IEC 11179 model is a result of two principles of semantic theory, combined with basic principles of data modelling. The first principle from semantic theory is the thesaurus type relation between wider and more narrow (or specific) concepts, e.g. the wide concept "income" has a relation to the more narrow concept "net income". The second principle from semantic theory is the relation between a concept and its representation, e.g., "buy" and "purchase" are the same concept although different terms are used. A basic principle of data modelling is the combination of an object class and a characteristic. For example, "Person - hair color". When applied to data modelling, ISO/IEC 11179 combines a wide "concept" with an "object class" to form a more specific "data element concept". For example, the high-level concept "income" is combined with the object class "person" to form the data element concept "net income of person". Note that "net income" is more specific than "income". The different possible representations of a data element concept are then described with the use of one or more data elements. Differences in representation may be a result of the use of synonyms or different value domains in different data sets in a data holding. A value domain is the permitted range of values for a characteristic of an object class. An example of a value domain for "sex of person" is "M = Male, F = Female, U = Unknown". The letters M, F and U are then the permitted values of sex of person in a particular data set. The data element concept "monthly net income of person" may thus have one data element called "monthly net income of individual by 100 dollar groupings" and one called "monthly net income of person range 0-1000 dollars", etc., depending on the heterogeneity of representation that exists within the data holdings covered by one ISO/IEC 11179 registry. Note that these two examples have different terms for the object class (person/individual) and different value sets (a 0-1000 dollar range as opposed to 100 dollar groupings). The result of this is a catalogue of sorts, in which related data element concepts are grouped by a high-level concept and an object class, and data elements grouped by a shared data element concept. Strictly speaking, this is not a hierarchy, even if it resembles one. ISO/IEC 11179 proper does not describe data as it is actually stored. It does not refer to the description of physical files, tables and columns. The ISO/IEC 11179 constructs are "semantic" as opposed to "physical" or "technical". The standard has two main purposes: definition and exchange. The core object is the data element concept, since it defines a concept and, ideally, describes data independent of its representation in any one system, table, column or organisation. The standard consists of seven parts: Part 1 explains the purpose of each part. Part 3 specifies the metamodel that defines the registry. Part 7 is released per December 2019 and provides an extension to part 3 for registration of metadata about data sets. The other parts specify various aspects of the use of the registry. Thedata elementis foundational concept in an ISO/IEC 11179 metadata registry. 
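The relationships described above (concept, object class, data element concept, value domain, data element) can be illustrated with a small sketch. The Python classes below are an assumption made for demonstration only; the names and attributes are not the metamodel defined in Part 3 of the standard, they simply mirror the "net income of person" and "sex of person" examples from the text.

from dataclasses import dataclass

@dataclass
class ValueDomain:
    # The permitted representation of a characteristic,
    # e.g. "M = Male, F = Female, U = Unknown" for sex of person.
    name: str
    permitted_values: dict

@dataclass
class DataElementConcept:
    # A concept combined with an object class, e.g. "net income" + "person".
    object_class: str
    concept: str

@dataclass
class DataElement:
    # One concrete representation of a data element concept.
    concept: DataElementConcept
    value_domain: ValueDomain

# Illustrative registry entries mirroring the examples in the text.
sex = DataElementConcept(object_class="person", concept="sex")
sex_codes = ValueDomain("sex code", {"M": "Male", "F": "Female", "U": "Unknown"})
sex_of_person = DataElement(sex, sex_codes)

income = DataElementConcept(object_class="person", concept="monthly net income")
income_by_100 = DataElement(income, ValueDomain("100 dollar groupings", {}))
income_0_1000 = DataElement(income, ValueDomain("range 0-1000 dollars", {}))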
The purpose of the registry is to maintain a semantically precise structure of data elements. Each Data element in an ISO/IEC 11179 metadata registry: Data elements that store "Codes" or enumerated values must also specify the semantics of each of the code values with precise definitions. Software AG's COTS Metadata Registry (MDR) product supports the ISO 11179 standard and continues to be sold and used for this purpose in both commercial and government applications (see Vendor Tools section below). While commercial adoption is increasing, the spread of ISO/IEC 11179 has been more successful in the public sector. However, the reason for this is unclear. ISO membership is open to organizations through their national bodies. Countries with public sector repositories across various industries include Australia, Canada, Germany, United States and the United Kingdom. The United Nations and the US Government refer to and use the 11179 standards. 11179 is strongly recommended on the U.S. government'sXMLwebsite.[2]and is promoted byThe Open Groupas a foundation of theUniversal Data Element Framework.[3]The Open Group is avendor-neutraland technology-neutralconsortiumworking to enable access to integrated information within and between enterprises based onopen standardsand globalinteroperability. Although the ISO/IEC 11179 metadata registry is 6-part standard comprising several hundreds of pages, the primary model is presented in Part-3 and depicted in UML diagrams to facilitate understanding, supported by normative text. The eXtended Metadata Registry initiative,XMDRled by the US, explored the use of ontologies as the basis for MDR content in order to provide richer semantic framework than could be achieved by lexical and syntax naming conventions alone. The XMDR experimented with a prototype using OWL, RDF and SPARQL to prove the concept. The initiative resulted in Edition 3 of ISO/IEC 11179. The first part published is ISO/IEC 11179-3:2013. The primary extension in Edition 3 is the Concept Region, expanding the use of concepts to more components within the standard, and supporting registration of a Concept system for use within the registry. The standard also supports the use of externally defined concept systems. Edition 3 versions of Parts 1, 5, and 6 were published in 2015. Part 2, Classifications, is subsumed by the Concept Region in Part 3, but is being updated to a Technical Report (TR) to provide guidance on the development of Classification Schemes. Part 4 describes principles for forming data definitions; an Edition 3 has not been proposed. The following metadata registries state that they follow ISO/IEC 11179 guidelines although there have been no formal third party tests developed to test for metadata registry compliance. No independent agencies certify ISO/IEC 11179 compliance. To some extent, certain existing software implementations suffer from poor design and potential security vulnerabilities, which hinder the adoption of ISO/IEC 11179. Open Metadata
https://en.wikipedia.org/wiki/ISO/IEC_11179
Anexistential graphis a type ofdiagrammaticor visual notation for logical expressions, created byCharles Sanders Peirce, who wrote on graphical logic as early as 1882,[1]and continued to develop the method until his death in 1914. They include both a separate graphical notation for logical statements and a logical calculus, a formal system of rules of inference that can be used to derive theorems. Peirce found the algebraic notation (i.e. symbolic notation) of logic, especially that of predicate logic,[2]which was still very new during his lifetime and which he himself played a major role in developing, to be philosophically unsatisfactory, because the symbols had their meaning by mere convention. In contrast, he strove for a style of writing in which the signs literally carry their meaning within them[3]– in the terminology of his theory of signs: a system of iconic signs that resemble or resemble the represented objects and relations.[4] Thus, the development of an iconic, graphic and – as he intended – intuitive and easy-to-learn logical system was a project that Peirce worked on throughout his life. After at least one aborted approach – the "Entitative Graphs" – the closed system of "Existential Graphs" finally emerged from 1896 onwards. Although considered by their creator to be a clearly superior and more intuitive system, as a mode of writing and as a calculus, they had no major influence on the history of logic. This has been attributed to the fact(s) that, for one, Peirce published little on this topic, and that the published texts were not written in a very understandable way;[5]and, for two, that the linear formula notation in the hands of experts is actually the less complex tool.[6]Hence, the existential graphs received little attention[7]or were seen as unwieldy.[8]From 1963 onwards, works by Don D. Roberts and J. Jay Zeman, in which Peirce's graphic systems were systematically examined and presented, led to a better understanding; even so, they have today found practical use within only one modern application—the conceptual graphs introduced by John F. Sowa in 1976, which are used in computer science to represent knowledge. However, existential graphs are increasingly reappearing as a subject of research in connection with a growing interest in graphical logic,[9]which is also expressed in attempts to replace the rules of inference given by Peirce with more intuitive ones.[10] The overall system of existential graphs is composed of three subsystems that build on each other, the alpha graphs, the beta graphs and the gamma graphs. The alpha graphs are a purely propositional logical system. Building on this, the beta graphs are a first order logical calculus. The gamma graphs, which have not yet been fully researched and were not completed by Peirce, are understood as a further development of the alpha and beta graphs. When interpreted appropriately, the gamma graphs cover higher-level predicate logic as well as modal logic. As late as 1903, Peirce began a new approach, the "Tinctured Existential Graphs," with which he wanted to replace the previous systems of alpha, beta and gamma graphs and combine their expressiveness and performance in a single new system. Like the gamma graphs, the "Tinctured Existential Graphs" remained unfinished. As calculi, the alpha, beta and gamma graphs are sound (i.e., all expressions derived as graphs are semantically valid). 
The alpha and beta graphs are also complete (i.e., all propositional or predicate-logically semantically valid expressions can be derived as alpha or beta graphs).[11] Peirce proposed three systems of existential graphs: Alphanests inbetaandgamma.Betadoes not nest ingamma, quantified modal logic being more general than put forth by Peirce. Thesyntaxis: Any well-formed part of a graph is asubgraph. Thesemanticsare: Hence thealphagraphs are a minimalist notation forsentential logic, grounded in the expressive adequacy ofAndandNot. Thealphagraphs constitute a radical simplification of thetwo-element Boolean algebraand thetruth functors. Thedepthof an object is the number of cuts that enclose it. Rules of inference: Rules of equivalence: A proof manipulates a graph by a series of steps, with each step justified by one of the above rules. If a graph can be reduced by steps to the blank page or an empty cut, it is what is now called atautology(or the complement thereof, a contradiction). Graphs that cannot be simplified beyond a certain point are analogues of thesatisfiableformulasoffirst-order logic. In the case of betagraphs, the atomic expressions are no longer propositional letters (P, Q, R,...) or statements ("It rains," "Peirce died in poverty"), but predicates in the sense of predicate logic (see there for more details), possibly abbreviated to predicate letters (F, G, H,...). A predicate in the sense of predicate logic is a sequence of words with clearly defined spaces that becomes a propositional sentence if you insert a proper noun into each space. For example, the word sequence "_ x is a human" is a predicate because it gives rise to the declarative sentence "Peirce is a human" if you enter the proper name "Peirce" in the blank space. Likewise, the word sequence "_1is richer than _2" is a predicate, because it results in the statement "Socrates is richer than Plato" if the proper names "Socrates" or "Plato" are inserted into the spaces. The basic language device is the line of identity, a thickly drawn line of any form. The identity line docks onto the blank space of a predicate to show that the predicate applies to at least one individual. In order to express that the predicate "_ is a human being" applies to at least one individual – i.e. to say that there is (at least) one human being – one writes an identity line in the blank space of the predicate "_ is a human being:" The beta graphs can be read as a system in which all formula are to be taken as closed, because all variables are implicitly quantified. If the "shallowest" part of a line of identity has even depth, the associated variable is tacitlyexistentially(universally) quantified. Zeman (1964) was the first to note that thebetagraphs areisomorphictofirst-order logicwithequality(also see Zeman 1967). However, the secondary literature, especially Roberts (1973) and Shin (2002), does not agree on how this is. Peirce's writings do not address this question, because first-order logic was first clearly articulated only after his death, in the 1928 first edition ofDavid HilbertandWilhelm Ackermann'sPrinciples of Mathematical Logic. Add to the syntax ofalphaa second kind ofsimple closed curve, written using a dashed rather than a solid line. Peirce proposed rules for this second style of cut, which can be read as the primitiveunary operatorofmodal logic. Zeman (1964) was the first to note that thegammagraphs are equivalent to the well-knownmodal logics S4andS5. 
Hence thegammagraphs can be read as a peculiar form ofnormal modal logic. This finding of Zeman's has received little attention to this day, but is nonetheless included here as a point of interest. The existential graphs are a curious offspring ofPeircethelogician/mathematician with Peirce the founder of a major strand ofsemiotics. Peirce's graphical logic is but one of his many accomplishments in logic and mathematics. In a series of papers beginning in 1867, and culminating with his classic paper in the 1885American Journal of Mathematics, Peirce developed much of thetwo-element Boolean algebra,propositional calculus,quantificationand thepredicate calculus, and some rudimentaryset theory.Model theoristsconsider Peirce the first of their kind. He also extendedDe Morgan'srelation algebra. He stopped short ofmetalogic(which eluded evenPrincipia Mathematica). But Peirce's evolvingsemiotictheory led him to doubt the value of logic formulated using conventional linear notation, and to prefer that logic and mathematics be notated in two (or even three) dimensions. His work went beyondEuler's diagramsandVenn's 1880revisionthereof.Frege's 1879 workBegriffsschriftalso employed a two-dimensional notation for logic, but one very different from Peirce's. Peirce's first published paper on graphical logic (reprinted in Vol. 3 of hisCollected Papers) proposed a system dual (in effect) to thealphaexistential graphs, called theentitative graphs. He very soon abandoned this formalism in favor of the existential graphs. In 1911Victoria, Lady Welbyshowed the existential graphs toC. K. Ogdenwho felt they could usefully be combined with Welby's thoughts in a "less abstruse form."[12]Otherwise they attracted little attention during his life and were invariably denigrated or ignored after his death, until the PhD theses by Roberts (1964) and Zeman (1964). Currently, the chronological critical edition of Peirce's works, theWritings, extends only to 1892. Much of Peirce's work onlogical graphsconsists of manuscripts written after that date and still unpublished. Hence our understanding of Peirce's graphical logic is likely to change as the remaining 23 volumes of the chronological edition appear.
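The reading of the alpha graphs given above (juxtaposition on the sheet of assertion is conjunction, a cut is negation, the empty sheet is true) can be turned into a tiny evaluator. The Python sketch below is an illustrative encoding, not a notation Peirce used: a sheet is represented as a tuple of items, each item being either a propositional letter or a Cut around a sub-sheet.

from dataclasses import dataclass

@dataclass(frozen=True)
class Cut:
    contents: tuple  # the subgraph written inside the cut

def evaluate(sheet, assignment):
    """Evaluate an alpha graph: AND over juxtaposed items, NOT for each cut."""
    value = True
    for item in sheet:
        if isinstance(item, Cut):
            value = value and not evaluate(item.contents, assignment)
        else:
            value = value and assignment[item]  # a propositional letter
    return value

# "P or Q" written with And and Not only: not(not P and not Q),
# i.e. a cut enclosing two cuts, one around P and one around Q.
p_or_q = (Cut((Cut(("P",)), Cut(("Q",)))),)
print(evaluate(p_or_q, {"P": False, "Q": True}))   # True
print(evaluate(p_or_q, {"P": False, "Q": False}))  # False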
https://en.wikipedia.org/wiki/Logical_graph
Thelayeredhidden Markov model(LHMM)is astatistical modelderived from the hidden Markov model (HMM). A layered hidden Markov model (LHMM) consists ofNlevels of HMMs, where the HMMs on leveli+ 1 correspond to observation symbols or probability generators at leveli. Every leveliof the LHMM consists ofKiHMMs running in parallel.[1] LHMMs are sometimes useful in specific structures because they can facilitate learning and generalization. For example, even though a fully connected HMM could always be used if enough training data were available, it is often useful to constrain the model by not allowing arbitrary state transitions. In the same way it can be beneficial to embed the HMM in a layered structure which, theoretically, may not be able to solve any problems the basic HMM cannot, but can solve some problems more efficiently because less training data is needed. A layered hidden Markov model (LHMM) consists ofN{\displaystyle N}levels of HMMs where the HMMs on levelN+1{\displaystyle N+1}corresponds to observation symbols or probability generators at levelN{\displaystyle N}. Every leveli{\displaystyle i}of the LHMM consists ofKi{\displaystyle K_{i}}HMMs running in parallel. At any given levelL{\displaystyle L}in the LHMM a sequence ofTL{\displaystyle T_{L}}observation symbolsoL={o1,o2,…,oTL}{\displaystyle \mathbf {o} _{L}=\{o_{1},o_{2},\dots ,o_{T_{L}}\}}can be used to classify the input into one ofKL{\displaystyle K_{L}}classes, where each class corresponds to each of theKL{\displaystyle K_{L}}HMMs at levelL{\displaystyle L}. This classification can then be used to generate a new observation for the levelL−1{\displaystyle L-1}HMMs. At the lowest layer, i.e. levelN{\displaystyle N}, primitive observation symbolsop={o1,o2,…,oTp}{\displaystyle \mathbf {o} _{p}=\{o_{1},o_{2},\dots ,o_{T_{p}}\}}would be generated directly from observations of the modeled process. For example, in a trajectory tracking task the primitive observation symbols would originate from the quantized sensor values. Thus at each layer in the LHMM the observations originate from the classification of the underlying layer, except for the lowest layer where the observation symbols originate from measurements of the observed process. It is not necessary to run all levels at the same time granularity. For example, it is possible to use windowing at any level in the structure so that the classification takes the average of several classifications into consideration before passing the results up the layers of the LHMM.[2] Instead of simply using the winning HMM at levelL+1{\displaystyle L+1}as an input symbol for the HMM at levelL{\displaystyle L}it is possible to use it as aprobability generatorby passing the completeprobability distributionup the layers of the LHMM. Thus instead of having a "winner takes all" strategy where the most probable HMM is selected as an observation symbol, the likelihoodL(i){\displaystyle L(i)}of observing thei{\displaystyle i}th HMM can be used in the recursion formula of the levelL{\displaystyle L}HMM to account for the uncertainty in the classification of the HMMs at levelL+1{\displaystyle L+1}. Thus, if the classification of the HMMs at leveln+1{\displaystyle n+1}is uncertain, it is possible to pay more attention to the a-priori information encoded in the HMM at levelL{\displaystyle L}. 
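A minimal sketch of the bottom-up classification step described above is given below. It is an illustration under stated assumptions rather than a reference implementation: discrete HMMs are represented by (initial, transition, emission) probability arrays that would have to be trained separately, the scaled forward algorithm supplies the likelihoods, and a simple "winner takes all" rule converts each window of primitive symbols into one observation symbol for the level above.

import numpy as np

def forward_loglik(obs, pi, A, B):
    """Scaled forward algorithm: log-likelihood of a discrete observation
    sequence under an HMM with initial probabilities pi (S,), transition
    matrix A (S,S) and emission probabilities B (S,V)."""
    alpha = pi * B[:, obs[0]]
    scale = alpha.sum()
    loglik = np.log(scale)
    alpha /= scale
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        scale = alpha.sum()
        loglik += np.log(scale)
        alpha /= scale
    return loglik

def classify_window(window, hmms):
    """'Winner takes all': index of the HMM that explains the window best."""
    return int(np.argmax([forward_loglik(window, *h) for h in hmms]))

def lift_observations(primitive_obs, lower_hmms, window=10):
    """Turn windows of primitive symbols into observation symbols for the
    HMMs one level up in the LHMM."""
    return [classify_window(primitive_obs[t:t + window], lower_hmms)
            for t in range(0, len(primitive_obs) - window + 1, window)]

Passing the full vector of log-likelihoods up, rather than only the argmax, would correspond to the "probability generator" variant described above.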
A LHMM could in practice be transformed into a single layered HMM where all the different models are concatenated together.[3]Some of the advantages that may be expected from using the LHMM over a large single layer HMM is that the LHMM is less likely to suffer fromoverfittingsince the individual sub-components are trained independently on smaller amounts of data. A consequence of this is that a significantly smaller amount of training data is required for the LHMM to achieve a performance comparable of the HMM. Another advantage is that the layers at the bottom of the LHMM, which are more sensitive to changes in the environment such as the type of sensors, sampling rate etc. can be retrained separately without altering the higher layers of the LHMM.
https://en.wikipedia.org/wiki/Layered_hidden_Markov_model
In many Unix variants, "nobody" is the conventional name of a user identifier which owns no files, is in no privileged groups, and has no abilities except those which every other user has. It is normally not enabled as a user account, i.e. has no home directory or login credentials assigned. Some systems also define an equivalent group "nogroup".
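On a typical Linux system the account can be inspected with Python's standard library; the exact numeric UID, home directory and the presence of a "nogroup" group vary between Unix variants, so the output below is only illustrative.

import grp
import pwd

nobody = pwd.getpwnam("nobody")
# Often a high UID, a placeholder home directory and a nologin shell.
print(nobody.pw_uid, nobody.pw_dir, nobody.pw_shell)

try:
    print(grp.getgrnam("nogroup").gr_gid)
except KeyError:
    print("this system does not define a 'nogroup' group")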
https://en.wikipedia.org/wiki/Nobody_(username)
Thereputation marketingfield has evolved from the marriage of the fieldsreputation managementandbrand marketing, and involves a brand's reputation being vetted online in real-time by consumers leaving online reviews and citing experiences onsocial networking sites. With the popularity of social media in the new millennium reputation, vetting has turned from word-of-mouth to the digital platform, forcing businesses to take active measures to stay competitive and profitable. A study done byNeilsenin 2012 suggests that 70% of consumers trust online reviews (15% more than in 2008), second only to personal recommendations.[1]This gives credibility to thesocial prooftheory; most famously studied byMuzafer Sherif, and highlighted as one of the six principles of persuasion byRobert Cialdini. The increasing number of review websites such asYelpandConsumerAffairsattracted the attention ofHarvard Business Schoolwhich conducted a study of online reviews and their effects on restaurants. The study finds that a one-star increase in Yelp rating leads to a 5–7% increase in restaurant revenue having a major impact on local restaurants and a lesser impact on big chains[2]A similar study conducted atUC Berkeleyreports that a half-star improvement on a five-star rating could make it 30-49% more likely that a restaurant will sell out its evening seats.[3] Reputation marketing is often associated withreputation managementand is seen as a means of handling negative reviews. However, reputation marketing differs in that it also seeks to manage positive feedback as a way to attract new customers.[4]Reputation marketing is taking a proactive approach toward your brand’s presence.[5]Reputation marketing is not a new strategy. TheBetter Business Bureauhas been around since 1912 and is one of the most notable and well-known consumer review organizations.[6]With the surplus of social media review sites available to the average consumer, businesses are forced to closely monitor their reputations and find new and creative ways to use social media to stay competitive in today's economy.[4] Online reviews have a tremendous influence on consumers' purchases since they can read evaluations and opinions of the items they are considering.Amazonwas the first company to invite consumers to post reviews on the internet[7]and many others have since done the same. The average customer finds social media more trustworthy than brand-generated marketing making social media more effective than television commercials, advertising signs, and internet banners at drawing potential consumers; however, reviews by people the consumer does not know are only 2% as effective.[8] The chart below shows the most viewed review websites of 2017 according toAlexa:[9] A business's online reputation can have a critical impact on its success or failure, with more than 3 out of every 4 people preferring positively reviewed businesses over negative ones. The impact of negative reviews may even affect a business's ability to secure financial assistance as banks and other financial institutions check a company's online ratings as part of the application process.[10]In today's non-private, social society it would be plausible to see that one's business could be affected by what people say about it, the owner, or employees online.[11] Reputation marketing and building a good online reputation are critically important. However, they are not stand-alone growth strategies. 
Reputation marketing yields the most positive returns when coupled with other online and offline marketing efforts, since the effectiveness of these efforts is increased by a good reputation. The popularity of smartphones has made it almost essential for businesses to be mobile-friendly, with click-to-call, click-to-map, and instant review options readily available.[12] Although the food industry seems to be most impacted by online reviews, experts predict that doctors, contractors, surgeons, accountants and many other local business owners will see more and more online reviews due to changes in search engines.[13] The benefits of online platforms to the economy are supported by predictions of economic theory, with most consumers preferring convenience, buyer options, and free access to information.[14] Product ratings and reviews are an important factor in how consumers choose products and services: product ratings (usually in stars) attract customers, while personal reviews have the greater impact on the actual buying decision.[citation needed] Spending increases by more than 30% with companies that have positive reviews, while companies with negative reviews may face a substantial drop in consumer traffic.[15] Research conducted in the United Kingdom by Barclays, looking at how greater responsiveness of businesses to the increase in customer feedback would improve business performance, shows that economic output could grow by an additional 0.07% between 2016 and 2026. The effects could lead to an increase in the economic output of the United Kingdom of £555 million ($747 million) per year over the average growth rate by the year 2026.[16]
https://en.wikipedia.org/wiki/Reputation_marketing
Thespt function(smallest parts function) is a function innumber theorythat counts the sum of the number of smallest parts in eachinteger partitionof a positive integer. It is related to thepartition function.[1] The first few values of spt(n) are: For example, there are five partitions of 4 (with smallest parts underlined): These partitions have 1, 1, 2, 2, and 4 smallest parts, respectively. So spt(4) = 1 + 1 + 2 + 2 + 4 = 10. Like the partition function, spt(n) has agenerating function. It is given by where(q)∞=∏n=1∞(1−qn){\displaystyle (q)_{\infty }=\prod _{n=1}^{\infty }(1-q^{n})}. The functionS(q){\displaystyle S(q)}is related to amock modular form. LetE2(z){\displaystyle E_{2}(z)}denote the weight 2 quasi-modularEisenstein seriesand letη(z){\displaystyle \eta (z)}denote theDedekind eta function. Then forq=e2πiz{\displaystyle q=e^{2\pi iz}}, the function is amock modular formof weight 3/2 on the fullmodular groupSL2(Z){\displaystyle SL_{2}(\mathbb {Z} )}with multiplier systemχη−1{\displaystyle \chi _{\eta }^{-1}}, whereχη{\displaystyle \chi _{\eta }}is the multiplier system forη(z){\displaystyle \eta (z)}. While a closed formula is not known for spt(n), there are Ramanujan-likecongruencesincluding
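A direct way to compute spt(n) for small n is to enumerate the integer partitions and count the occurrences of the smallest part in each, exactly as in the spt(4) example above. The short Python sketch below does this by brute force; it is purely illustrative and impractical for large n.

def partitions(n, largest=None):
    """Yield the integer partitions of n as non-increasing lists."""
    if largest is None:
        largest = n
    if n == 0:
        yield []
        return
    for part in range(min(n, largest), 0, -1):
        for rest in partitions(n - part, part):
            yield [part] + rest

def spt(n):
    """Sum, over all partitions of n, of the multiplicity of the smallest part."""
    return sum(p.count(min(p)) for p in partitions(n))

print(spt(4))                         # 10, matching the worked example above
print([spt(n) for n in range(1, 6)])  # [1, 3, 5, 10, 14]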
https://en.wikipedia.org/wiki/Spt_function
Solomon Kullback(April 3, 1907 – August 5, 1994) was an Americancryptanalystandmathematician, who was one of the first three employees hired byWilliam F. Friedmanat theUS Army'sSignal Intelligence Service(SIS) in the 1930s, along withFrank RowlettandAbraham Sinkov. He went on to a long and distinguished career at SIS and its eventual successor, theNational Security Agency(NSA). Kullback was the Chief Scientist at the NSA until his retirement in 1962, whereupon he took a position at theGeorge Washington University. TheKullback–Leibler divergenceis named after Kullback andRichard Leibler. Kullback was born to Jewish parents inBrooklyn, New York. His father Nathan had been born in Vilna, Russian Empire, (nowVilnius,Lithuania) and had immigrated[1]to the US as a young man circa 1905, and became a naturalized American in 1911.[2]Kullback attendedBoys High Schoolin Brooklyn. He then went toCity College of New York, graduating with aBAin 1927 and anMAin math in 1929.[3]He completed adoctoratein math fromGeorge Washington Universityin 1934. His intention had been to teach, and he returned to Boy's High School to do so, but found it not to his taste; he discovered his real interest was using mathematics, not teaching it.[citation needed] At the suggestion ofAbraham Sinkov, who showed him aCivil Serviceflyer for "junior mathematicians" at US$2,000 per year, he took the examination. Both passed, and were assigned toWashington, D.C.as junior cryptanalysts. Upon arrival in Washington, Kullback was assigned toWilliam F. Friedman. Friedman had begun an intensive program of training in cryptology for his new civilian employees. For several summers running, the SIS cryptanalysts attended training camps atFort Meadeuntil they received commissions as reserve officers in the Army. Kullback and Sinkov took Friedman's admonitions on education seriously and spent the next several years attending night classes; both received their doctorates in mathematics. Afterward, Kullback rediscovered a love of teaching; he began offering evening classes in mathematics atGeorge Washington Universityfrom 1939. Once they had completed the training, the three were put to the work for which they had actually been hired, compilations ofcipherorcodematerial for the U.S. Army. Another task was to test commercial cipher devices which vendors wished to sell to the U.S. government. Kullback worked in partnership withFrank RowlettagainstRED cipher machinemessages. Almost overnight, they unravelled the keying system and then the machine pattern – with nothing but the intercepted messages in hand. Using the talents of linguist John Hurt to translate text, SIS started issuing current intelligence to military decision-makers. In May 1942, five months after attack onPearl Harbor, Kullback, by then a Major, was sent to Britain.[4]He learned atBletchley Parkthat the British were producing intelligence of high quality by exploiting theEnigma machine. He also cooperated with the British in the solution of more conventional German codebook-based systems. Shortly after his return to the States, Kullback moved into the Japanese section as its chief. When theNational Security Agency(NSA) was formed in 1952, Rowlett became chief of cryptanalysis. The primary problem facing research and development in the post-war period was development of high-speed processing equipment. Kullback supervised a team of about 60 people, including such innovative thinkers in automated data processing development asLeo Rosenand Sam Snyder. 
His staff pioneered new forms of input and memory, such asmagnetic tapeanddrum memory, and compilers to make machines truly "multi-purpose". Kullback gave priority to using computers to generatecommunications security(COMSEC) materials. Kullback's bookInformation Theory and Statisticswas published byJohn Wiley & Sonsin 1959. The book was republished, with additions and corrections, byDover Publicationsin 1968. Solomon Kullback retired from NSA in 1962, and focused on his teaching at George Washington University and publishing new papers. In 1963 he was elected as aFellow of the American Statistical Association.[5]He reached the rank ofcolonel, and was inducted into theMilitary Intelligence Hall of Fame. Kullback is remembered by his colleagues at NSA as straightforward; one described him as "totally guileless, you always knew where you stood with him." One former NSA senior recalled him as a man of unlimited energy and enthusiasm and a man whose judgment was usually "sound and right."
https://en.wikipedia.org/wiki/Solomon_Kullback
Inmachine learning(ML),boostingis anensemblemetaheuristicfor primarily reducingbias (as opposed to variance).[1]It can also improve thestabilityand accuracy of MLclassificationandregressionalgorithms. Hence, it is prevalent insupervised learningfor converting weak learners to strong learners.[2] The concept of boosting is based on the question posed byKearnsandValiant(1988, 1989):[3][4]"Can a set of weak learners create a single strong learner?" A weak learner is defined as aclassifierthat is only slightly correlated with the true classification. A strong learner is a classifier that is arbitrarily well-correlated with the true classification.Robert Schapireanswered the question in the affirmative in a paper published in 1990.[5]This has had significant ramifications in machine learning andstatistics, most notably leading to the development of boosting.[6] Initially, thehypothesis boosting problemsimply referred to the process of turning a weak learner into a strong learner.[3]Algorithms that achieve this quickly became known as "boosting".Freundand Schapire's arcing (Adapt[at]ive Resampling and Combining),[7]as a general technique, is more or less synonymous with boosting.[8] While boosting is not algorithmically constrained, most boosting algorithms consist of iteratively learning weak classifiers with respect to a distribution and adding them to a final strong classifier. When they are added, they are weighted in a way that is related to the weak learners' accuracy. After a weak learner is added, the data weights are readjusted, known as "re-weighting". Misclassified input data gain a higher weight and examples that are classified correctly lose weight.[note 1]Thus, future weak learners focus more on the examples that previous weak learners misclassified. There are many boosting algorithms. The original ones, proposed byRobert Schapire(arecursivemajority gate formulation),[5]andYoav Freund(boost by majority),[9]were notadaptiveand could not take full advantage of the weak learners. Schapire and Freund then developedAdaBoost, an adaptive boosting algorithm that won the prestigiousGödel Prize. Only algorithms that are provable boosting algorithms in theprobably approximately correct learningformulation can accurately be calledboosting algorithms. Other algorithms that are similar in spirit[clarification needed]to boosting algorithms are sometimes called "leveraging algorithms", although they are also sometimes incorrectly called boosting algorithms.[9] The main variation between many boosting algorithms is their method ofweightingtraining datapoints andhypotheses.AdaBoostis very popular and the most significant historically as it was the first algorithm that could adapt to the weak learners. It is often the basis of introductory coverage of boosting in university machine learning courses.[10]There are many more recent algorithms such asLPBoost, TotalBoost,BrownBoost,xgboost, MadaBoost,LogitBoost, and others. Many boosting algorithms fit into the AnyBoost framework,[9]which shows that boosting performsgradient descentin afunction spaceusing aconvexcost function. Given images containing various known objects in the world, a classifier can be learned from them to automaticallyclassifythe objects in future images. Simple classifiers built based on someimage featureof the object tend to be weak in categorization performance. 
Using boosting methods for object categorization is a way to unify the weak classifiers in a special way to boost the overall ability of categorization.[citation needed] Object categorizationis a typical task ofcomputer visionthat involves determining whether or not an image contains some specific category of object. The idea is closely related with recognition, identification, and detection. Appearance based object categorization typically containsfeature extraction,learningaclassifier, and applying the classifier to new examples. There are many ways to represent a category of objects, e.g. fromshape analysis,bag of words models, or local descriptors such asSIFT, etc. Examples ofsupervised classifiersareNaive Bayes classifiers,support vector machines,mixtures of Gaussians, andneural networks. However, research[which?]has shown that object categories and their locations in images can be discovered in anunsupervised manneras well.[11] The recognition of object categories in images is a challenging problem incomputer vision, especially when the number of categories is large. This is due to high intra class variability and the need for generalization across variations of objects within the same category. Objects within one category may look quite different. Even the same object may appear unalike under different viewpoint,scale, andillumination. Background clutter and partial occlusion add difficulties to recognition as well.[12]Humans are able to recognize thousands of object types, whereas most of the existingobject recognitionsystems are trained to recognize only a few,[quantify]e.g.human faces,cars, simple objects, etc.[13][needs update?]Research has been very active on dealing with more categories and enabling incremental additions of new categories, and although the general problem remains unsolved, several multi-category objects detectors (for up to hundreds or thousands of categories[14]) have been developed. One means is byfeaturesharing and boosting. AdaBoost can be used for face detection as an example ofbinary categorization. The two categories are faces versus background. The general algorithm is as follows: After boosting, a classifier constructed from 200 features could yield a 95% detection rate under a10−5{\displaystyle 10^{-5}}false positive rate.[15] Another application of boosting for binary categorization is a system that detects pedestrians usingpatternsof motion and appearance.[16]This work is the first to combine both motion information and appearance information as features to detect a walking person. It takes a similar approach to theViola-Jones object detection framework. Compared with binary categorization,multi-class categorizationlooks for common features that can be shared across the categories at the same time. They turn to be more genericedgelike features. During learning, the detectors for each category can be trained jointly. Compared with training separately, itgeneralizesbetter, needs less training data, and requires fewer features to achieve the same performance. The main flow of the algorithm is similar to the binary case. What is different is that a measure of the joint training error shall be defined in advance. During each iteration the algorithm chooses a classifier of a single feature (features that can be shared by more categories shall be encouraged). 
This can be done via convertingmulti-class classificationinto a binary one (a set of categories versus the rest),[17]or by introducing a penalty error from the categories that do not have the feature of the classifier.[18] In the paper "Sharing visual features for multiclass and multiview object detection", A. Torralba et al. usedGentleBoostfor boosting and showed that when training data is limited, learning via sharing features does a much better job than no sharing, given same boosting rounds. Also, for a given performance level, the total number of features required (and therefore the run time cost of the classifier) for the feature sharing detectors, is observed to scale approximatelylogarithmicallywith the number of class, i.e., slower thanlineargrowth in the non-sharing case. Similar results are shown in the paper "Incremental learning of object detectors using a visual shape alphabet", yet the authors usedAdaBoostfor boosting. Boosting algorithms can be based onconvexor non-convex optimization algorithms. Convex algorithms, such asAdaBoostandLogitBoost, can be "defeated" by random noise such that they can't learn basic and learnable combinations of weak hypotheses.[19][20]This limitation was pointed out by Long & Servedio in 2008. However, by 2009, multiple authors demonstrated that boosting algorithms based on non-convex optimization, such asBrownBoost, can learn from noisy datasets and can specifically learn the underlying classifier of the Long–Servedio dataset.
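The re-weighting scheme described earlier is easiest to see in a from-scratch sketch of AdaBoost with decision stumps as the weak learners. This is an illustrative implementation under the usual assumptions (binary labels coded as -1/+1, weak learners chosen by weighted error), not the code of any of the cited systems.

import numpy as np

def best_stump(X, y, w):
    """Decision stump (feature, threshold, polarity) minimising weighted error."""
    best, best_err = None, np.inf
    for feature in range(X.shape[1]):
        for threshold in np.unique(X[:, feature]):
            for polarity in (1, -1):
                pred = np.where(polarity * (X[:, feature] - threshold) >= 0, 1, -1)
                err = w[pred != y].sum()
                if err < best_err:
                    best, best_err = (feature, threshold, polarity), err
    return best, best_err

def stump_predict(stump, X):
    feature, threshold, polarity = stump
    return np.where(polarity * (X[:, feature] - threshold) >= 0, 1, -1)

def adaboost(X, y, rounds=20):
    """Train an AdaBoost ensemble; y must contain -1 and +1 labels."""
    n = len(y)
    w = np.full(n, 1.0 / n)              # uniform weights on the training points
    ensemble = []
    for _ in range(rounds):
        stump, err = best_stump(X, y, w)
        err = np.clip(err, 1e-12, 1 - 1e-12)
        alpha = 0.5 * np.log((1 - err) / err)   # weight of this weak learner
        pred = stump_predict(stump, X)
        # Re-weighting: misclassified examples gain weight, correct ones lose it.
        w *= np.exp(-alpha * y * pred)
        w /= w.sum()
        ensemble.append((alpha, stump))
    return ensemble

def predict(ensemble, X):
    score = sum(alpha * stump_predict(stump, X) for alpha, stump in ensemble)
    return np.sign(score)

# Tiny illustrative data set: one feature, decision boundary around 1.5.
X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([-1, -1, 1, 1])
model = adaboost(X, y, rounds=5)
print(predict(model, X))    # [-1. -1.  1.  1.]

In practice a library implementation such as scikit-learn's AdaBoostClassifier would normally be used instead of a hand-rolled sketch like this.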
https://en.wikipedia.org/wiki/Boosting_(machine_learning)
Email encryptionisencryptionofemailmessages to protect the content from being read by entities other than the intended recipients. Email encryption may also includeauthentication. Email is prone to the disclosure of information. Although many emails are encrypted during transmission, they are frequently stored in plaintext, potentially exposing them to unauthorized access by third parties, including email service providers.[1]By default, popular email services such asGmailand Outlook do not enableend-to-end encryption.[2]Utilizing certain available tools, unauthorized individuals may access and read the email content.[3] Email encryption can rely onpublic-key cryptography, in which users can each publish apublic keythat others can use to encrypt messages to them, while keeping secret aprivatekey they can use to decrypt such messages or to digitally encrypt and sign messages they send. With the original design ofemail protocol, the communication between email servers was inplain text, which posed a hugesecurityrisk. Over the years, various mechanisms have been proposed to encrypt the communication between email servers. Encryption may occur at the transport level (aka "hop by hop") or end-to-end.Transport layer encryptionis often easier to set up and use; end-to-end encryption provides stronger defenses, but can be more difficult to set up and use. One of the most commonly used email encryption extensions isSTARTTLS. It is aTLS (SSL)layer over the plaintext communication, allowing email servers to upgrade theirplaintextcommunication to encrypted communication. Assuming that the email servers on both the sender and the recipient side support encrypted communication,An eavesdropper monitoring the communication between mail servers cannot use packet sniffing tools to view the email contents. Similar STARTTLS extensions exist for the communication between an email client and the email server (seeIMAP4andPOP3, as stated by RFC 2595). STARTTLS may be used regardless of whether the email's contents are encrypted using another protocol. The encrypted message is revealed, and can be altered by, intermediate email relays. In other words, the encryption takes place between individualSMTPrelays, not between the sender and the recipient. This has both good and bad consequences. A key positive trait of transport layer encryption is that users do not need to do or change anything; the encryption automatically occurs when they send email. In addition, since receiving organizations can decrypt the email without cooperation of the end user, receiving organizations can runvirusscanners and spam filters before delivering the email to the recipient. However, it also means that the receiving organization and anyone who breaks into that organization's email system (unless further steps are taken) can easily read or modify the email. If the receiving organization is considered a threat, then end-to-end encryption is necessary. 
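A minimal sketch of client-side STARTTLS with Python's standard library is shown below; the host name, port, addresses and credentials are placeholders. The starttls() call upgrades the initially plaintext SMTP connection to TLS before the message is transmitted, which, as discussed above, protects the message on that hop only.

import smtplib
import ssl
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "alice@example.com"          # placeholder addresses
msg["To"] = "bob@example.org"
msg["Subject"] = "Quarterly figures"
msg.set_content("Protected in transit by TLS, but stored in plaintext at both ends.")

context = ssl.create_default_context()     # verify the server certificate
with smtplib.SMTP("mail.example.com", 587) as server:   # placeholder server
    server.starttls(context=context)       # upgrade the plaintext connection to TLS
    server.login("alice", "app-password")  # placeholder credentials
    server.send_message(msg)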
TheElectronic Frontier Foundationencourages the use of STARTTLS, and has launched the 'STARTTLS Everywhere' initiative to "make it simple and easy for everyone to help ensure their communications (over email) aren’t vulnerable tomass surveillance."[4]Support for STARTTLS has become quite common; Google reports that on Gmail, 90% of incoming email and 90% of outgoing email was encrypted using STARTTLS by July 24, 2018.[5] Mandatory certificate verification is historically not viable for Internet mail delivery without additional information, because many certificates are not verifiable and few want email delivery to fail in that case.[6]As a result, most email that is delivered over TLS uses onlyopportunistic encryption.DANEis a proposed standard that makes an incremental transition to verified encryption for Internet mail delivery possible.[7]The STARTTLS Everywhere project uses an alternative approach: they support a “preload list” of email servers that have promised to support STARTTLS, which can help detect and preventdowngrade attacks. Inend-to-end encryption, the data is encrypted and decrypted only at the end points. In other words, an email sent with end-to-end encryption would be encrypted at the source, unreadable to service providers like Gmail in transit, and then decrypted at its endpoint. Crucially, the email would only be decrypted for the end user on their computer and would remain in encrypted, unreadable form to an email service like Gmail, which wouldn't have the keys available to decrypt it.[8]Some email services integrateend-to-end encryptionautomatically. Notableprotocolsfor end-to-end email encryption include: OpenPGPis a data encryption standard that allows end-users to encrypt the email contents. There are various software and email-client plugins that allow users to encrypt the message using the recipient's public key before sending it. At its core, OpenPGP uses aPublic Key Cryptographyscheme where each email address is associated with a public/private key pair. OpenPGP provides a way for the end users to encrypt the email without any support from the server and be sure that only the intended recipient can read it. However, there are usability issues with OpenPGP — it requires users to set up public/private key pairs and make the public keys available widely. Also, it protects only the content of the email, and not metadata — an untrusted party can still observe who sent an email to whom. A general downside of end to end encryption schemes—where the server does not have decryption keys—is that it makes server side search almost impossible, thus impacting usability. The content of an email can also be end-to-end encrypted by putting it in an encrypted file (using any kind of file encryption tool[9]) and sending that encrypted file as an email attachment.[10] TheSigned and Encrypted Email Over The Internetdemonstration has shown that organizations can collaborate effectively using secure email. Previous barriers to adoption were overcome, including the use of a PKI bridge to provide a scalablepublic key infrastructure(PKI) and the use of network securityguardschecking encrypted content passing in and out of corporate network boundaries to avoid encryption being used to hide malware introduction and information leakage. Transport layer encryption using STARTTLS must be set up by the receiving organization. This is typically straightforward; a valid certificate must be obtained and STARTTLS must be enabled on the receiving organization's email server. 
To prevent downgrade attacks organizations can send their domain to the 'STARTTLS Policy List'[11] Most full-featured email clients provide native support forS/MIMEsecure email (digital signingand messageencryptionusingcertificates). Other encryption options include PGP and GNU Privacy Guard (GnuPG). Free and commercial software (desktop application, webmail and add-ons) are available as well.[12] While PGP can protect messages, it can also be hard to use in the correct way. Researchers atCarnegie Mellon Universitypublished a paper in 1999 showing that most people couldn't figure out how to sign and encrypt messages using the current version of PGP.[13]Eight years later, another group of Carnegie Mellon researchers published a follow-up paper saying that, although a newer version of PGP made it easy to decrypt messages, most people still struggled with encrypting and signing messages, finding and verifying other people's public encryption keys, and sharing their own keys.[14] Because encryption can be difficult for users, security and compliance managers at companies and government agencies automate the process for employees and executives by using encryption appliances and services that automate encryption. Instead of relying on voluntary co-operation, automated encryption, based on defined policies, takes the decision and the process out of the users' hands. Emails are routed through a gateway appliance that has been configured to ensure compliance with regulatory and security policies. Emails that require it are automatically encrypted and sent.[15] If the recipient works at an organization that uses the same encryption gateway appliance, emails are automatically decrypted, making the process transparent to the user. Recipients who are not behind an encryption gateway then need to take an extra step, either procuring the public key, or logging into an online portal to retrieve the message.[15][16] Since 2000, the number of available encrypted email providers[17]has increased significantly.[18]
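As a sketch of the "encrypt a file and attach it" approach mentioned above, the snippet below uses the third-party cryptography package; this is an assumed choice of tool, not one named by the text, and the symmetric key must still reach the recipient over some channel other than the email itself.

from cryptography.fernet import Fernet

key = Fernet.generate_key()      # must be shared with the recipient out of band
report = b"Contents of the confidential report."

ciphertext = Fernet(key).encrypt(report)
with open("report.enc", "wb") as f:          # attach this file to an ordinary email
    f.write(ciphertext)

# Only a holder of the key can recover the plaintext:
with open("report.enc", "rb") as f:
    assert Fernet(key).decrypt(f.read()) == report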
https://en.wikipedia.org/wiki/Email_encryption
InBoolean algebra, thealgebraic normal form(ANF),ring sum normal form(RSNForRNF),Zhegalkin normal form, orReed–Muller expansionis a way of writingpropositional logicformulas in one of three subforms: Formulas written in ANF are also known asZhegalkin polynomialsand Positive Polarity (or Parity)Reed–Muller expressions(PPRM).[1] ANF is acanonical form, which means that twologically equivalentformulas will convert to the same ANF, easily showing whether two formulas are equivalent forautomated theorem proving. Unlike other normal forms, it can be represented as a simple list of lists of variable names—conjunctiveanddisjunctivenormal forms also require recording whether each variable is negated or not.Negation normal formis unsuitable for determining equivalence, since on negation normal forms, equivalence does not imply equality: a ∨ ¬a is not reduced to the same thing as 1, even though they are logically equivalent. Putting a formula into ANF also makes it easy to identifylinearfunctions (used, for example, inlinear-feedback shift registers): a linear function is one that is a sum of single literals. Properties of nonlinear-feedbackshift registerscan also be deduced from certain properties of the feedback function in ANF. There are straightforward ways to perform the standard Boolean operations on ANF inputs in order to get ANF results. XOR (logical exclusive disjunction) is performed directly: NOT (logical negation) is XORing 1:[2] AND (logical conjunction) isdistributed algebraically[3] OR (logical disjunction) uses either 1 ⊕ (1 ⊕ a)(1 ⊕ b)[4](easier when both operands have purely true terms) or a ⊕ b ⊕ ab[5](easier otherwise): Each variable in a formula is already in pure ANF, so one only needs to perform the formula's Boolean operations as shown above to get the entire formula into ANF. For example: ANF is sometimes described in an equivalent way: There are only four functions with one argument: To represent a function with multiple arguments one can use the following equality: Indeed, Since bothg{\displaystyle g}andh{\displaystyle h}have fewer arguments thanf{\displaystyle f}it follows that using this process recursively we will finish with functions with one variable. For example, let us construct ANF off(x,y)=x∨y{\displaystyle f(x,y)=x\lor y}(logical or):
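The algebraic operations listed above are easy to mechanise if an ANF formula is stored as a set of monomials, with each monomial a frozenset of variable names and the empty monomial standing for the constant 1. This representation is an assumption made for illustration. In it, XOR is a symmetric difference and AND distributes over the monomials with cancellation modulo 2; the sketch shows, for instance, that x ∨ y works out to x ⊕ y ⊕ xy.

ONE = frozenset()   # the monomial "1"

def xor(a, b):
    # XOR of two ANF formulas: monomials appearing an even number of times cancel.
    return a ^ b

def conj(a, b):
    # AND distributes algebraically over the monomials, again cancelling mod 2.
    result = set()
    for m in a:
        for n in b:
            result ^= {m | n}
    return frozenset(result)

def neg(a):
    # NOT a = 1 XOR a
    return xor(frozenset({ONE}), a)

def disj(a, b):
    # a OR b = a XOR b XOR (a AND b)
    return xor(xor(a, b), conj(a, b))

def show(f):
    return " ⊕ ".join(sorted("".join(sorted(m)) or "1" for m in f)) or "0"

x = frozenset({frozenset({"x"})})
y = frozenset({frozenset({"y"})})
print(show(disj(x, y)))   # x ⊕ xy ⊕ y, i.e. x XOR y XOR xy
print(show(neg(x)))       # 1 ⊕ x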
https://en.wikipedia.org/wiki/Ring_sum_normal_form
Ametasyntaxis a syntax used to define the syntax of aprogramming languageorformal language. It describes the allowable structure and composition of phrases and sentences of ametalanguage, which is used to describe either anatural languageor a computer programming language.[1]Some of the widely used formal metalanguages for computer languages areBackus–Naur form(BNF),extended Backus–Naur form(EBNF),Wirth syntax notation(WSN), andaugmented Backus–Naur form(ABNF). Metalanguages have their own metasyntax each composed ofterminal symbols,nonterminal symbols, andmetasymbols. A terminal symbol, such as a word or a token, is a stand-alone structure in a language being defined. A nonterminal symbol represents asyntacticcategory, which defines one or more valid phrasal or sentence structure consisted of an n-element subset. Metasymbols provide syntactic information for denotational purposes in a given metasyntax. Terminals, nonterminals, and metasymbols do not apply across all metalanguages. Typically, the metalanguage for token-level languages (formally called "regular languages") does not have nonterminals because nesting is not an issue in these regular languages. English, as a metalanguage for describing certain languages, does not contain metasymbols since all explanation could be done using English expression. There are only certain formal metalanguages used for describing recursive languages (formally calledcontext-free languages) that have terminals, nonterminals, and metasymbols in their metasyntax. The metasyntax convention of these formal metalanguages are not yet formalized. Many metasyntactic variations or extensions exist in the reference manual of various computer programming languages. One variation to the standard convention for denoting nonterminals and terminals is to remove metasymbols such as angle brackets and quotations and applyfont typesto the intended words. InAda, for example, syntactic categories are denoted by applying lower casesans-serif fonton the intended words or symbols. All terminal words or symbols, in Ada, consist of characters of code position between16#20#and16#7E#(inclusive). The definition for each character set is referred to the International Standard described byISO/IEC10646:2003. InCandJava, syntactic categories are denoted usingitalic fontwhile terminal symbols are denoted bygothicfont. InJ, its metasyntax does not apply metasymbols to describe J's syntax at all. Rather, all syntactic explanations are done in a metalanguage very similar to English called Dictionary, which is uniquely documented for J. The purpose of the new extensions is to provide a simpler and unambiguous metasyntax. In terms of simplicity, BNF's metanotation definitely does not help to make the metasyntax easier-to-read as the open-end and close-end metasymbols appear too abundantly. In terms of ambiguity, BNF's metanotation generates unnecessary complexity when quotation marks, apostrophes, less-than signs or greater-than signs come to serve as terminal symbols, which they often do. The extended metasyntax utilizes properties such as case, font, and code position of characters to reduce unnecessary aforementioned complexity. Moreover, some metalanguages use fonted separator categories to incorporate metasyntactic features for layout conventions, which are not formally supported by BNF.
https://en.wikipedia.org/wiki/Metasyntax
Fermi–Dirac statisticsis a type ofquantum statisticsthat applies to thephysicsof asystemconsisting of many non-interacting,identical particlesthat obey thePauli exclusion principle. A result is the Fermi–Dirac distribution of particles overenergy states. It is named afterEnrico FermiandPaul Dirac, each of whom derived the distribution independently in 1926.[1][2]Fermi–Dirac statistics is a part of the field ofstatistical mechanicsand uses the principles ofquantum mechanics. Fermi–Dirac statistics applies to identical and indistinguishable particles withhalf-integerspin(1/2, 3/2, etc.), calledfermions, inthermodynamic equilibrium. For the case of negligible interaction between particles, the system can be described in terms of single-particleenergy states. A result is the Fermi–Dirac distribution of particles over these states where no two particles can occupy the same state, which has a considerable effect on the properties of the system. Fermi–Dirac statistics is most commonly applied toelectrons, a type of fermion withspin 1/2. A counterpart to Fermi–Dirac statistics isBose–Einstein statistics, which applies to identical and indistinguishable particles with integer spin (0, 1, 2, etc.) calledbosons. In classical physics,Maxwell–Boltzmann statisticsis used to describe particles that are identical and treated as distinguishable. For both Bose–Einstein and Maxwell–Boltzmann statistics, more than one particle can occupy the same state, unlike Fermi–Dirac statistics. Before the introduction of Fermi–Dirac statistics in 1926, understanding some aspects of electron behavior was difficult due to seemingly contradictory phenomena. For example, the electronicheat capacityof a metal atroom temperatureseemed to come from 100 times fewerelectronsthan were in theelectric current.[3]It was also difficult to understand why theemission currentsgenerated by applying high electric fields to metals at room temperature were almost independent of temperature. The difficulty encountered by theDrude model, the electronic theory of metals at that time, was due to considering that electrons were (according to classical statistics theory) all equivalent. In other words, it was believed that each electron contributed to the specific heat an amount on the order of theBoltzmann constantkB. This problem remained unsolved until the development of Fermi–Dirac statistics. Fermi–Dirac statistics was first published in 1926 byEnrico Fermi[1]andPaul Dirac.[2]According toMax Born,Pascual Jordandeveloped in 1925 the same statistics, which he calledPaulistatistics, but it was not published in a timely manner.[4][5][6]According to Dirac, it was first studied by Fermi, and Dirac called it "Fermi statistics" and the corresponding particles "fermions".[7] Fermi–Dirac statistics was applied in 1926 byRalph Fowlerto describe the collapse of astarto awhite dwarf.[8]In 1927Arnold Sommerfeldapplied it to electrons in metals and developed thefree electron model,[9]and in 1928 Fowler andLothar Nordheimapplied it tofield electron emissionfrom metals.[10]Fermi–Dirac statistics continue to be an important part of physics. 
For a system of identical fermions in thermodynamic equilibrium, the average number of fermions in a single-particle stateiis given by theFermi–Dirac (F–D) distribution:[11][nb 1] n¯i=1e(εi−μ)/kBT+1,{\displaystyle {\bar {n}}_{i}={\frac {1}{e^{(\varepsilon _{i}-\mu )/k_{\text{B}}T}+1}},} wherekBis theBoltzmann constant,Tis the absolutetemperature,εiis the energy of the single-particle statei, andμis thetotal chemical potential. The distribution is normalized by the condition that can be used to expressμ=μ(T,N){\displaystyle \mu =\mu (T,N)}in thatμ{\displaystyle \mu }can assume either a positive or negative value.[12] At zero absolute temperature,μis equal to theFermi energyplus the potential energy per fermion, provided it is in aneighbourhoodof positive spectral density. In the case of a spectral gap, such as for electrons in a semiconductor, the point of symmetryμis typically called theFermi levelor—for electrons—theelectrochemical potential, and will be located in the middle of the gap.[13][14] The Fermi–Dirac distribution is only valid if the number of fermions in the system is large enough so that adding one more fermion to the system has negligible effect onμ.[15]Since the Fermi–Dirac distribution was derived using thePauli exclusion principle, which allows at most one fermion to occupy each possible state, a result is that0<n¯i<1{\displaystyle 0<{\bar {n}}_{i}<1}.[nb 2] Thevarianceof the number of particles in stateican be calculated from the above expression forn¯i{\displaystyle {\bar {n}}_{i}}:[17][18] From the Fermi–Dirac distribution of particles over states, one can find the distribution of particles over energy.[nb 3]The average number of fermions with energyεi{\displaystyle \varepsilon _{i}}can be found by multiplying the Fermi–Dirac distributionn¯i{\displaystyle {\bar {n}}_{i}}by thedegeneracygi{\displaystyle g_{i}}(i.e. the number of states with energyεi{\displaystyle \varepsilon _{i}}),[19] Whengi≥2{\displaystyle g_{i}\geq 2}, it is possible thatn¯(εi)>1{\displaystyle {\bar {n}}(\varepsilon _{i})>1}, since there is more than one state that can be occupied by fermions with the same energyεi{\displaystyle \varepsilon _{i}}. When a quasi-continuum of energiesε{\displaystyle \varepsilon }has an associateddensity of statesg(ε){\displaystyle g(\varepsilon )}(i.e. the number of states per unit energy range per unit volume[20]), the average number of fermions per unit energy range per unit volume is whereF(ε){\displaystyle F(\varepsilon )}is called theFermi functionand is the samefunctionthat is used for the Fermi–Dirac distributionn¯i{\displaystyle {\bar {n}}_{i}}:[21] so that The Fermi–Dirac distribution approaches theMaxwell–Boltzmann distributionin the limit of high temperature and low particle density, without the need for any ad hoc assumptions: The classical regime, whereMaxwell–Boltzmann statisticscan be used as an approximation to Fermi–Dirac statistics, is found by considering the situation that is far from the limit imposed by theHeisenberg uncertainty principlefor a particle's position andmomentum. For example, in physics of semiconductor, when the density of states of conduction band is much higher than the doping concentration, the energy gap between conduction band and fermi level could be calculated using Maxwell-Boltzmann statistics. Otherwise, if the doping concentration is not negligible compared to density of states of conduction band, the Fermi–Dirac distribution should be used instead for accurate calculation. 
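Before the quantitative criterion for this classical regime is given below, the convergence of the two distributions is easy to check numerically. The following Go sketch is purely illustrative (the unit choice, temperature, and sample energies are assumptions, not values taken from the article): it evaluates the Fermi–Dirac occupancy 1/(e^((ε−μ)/kBT) + 1) alongside the Maxwell–Boltzmann value e^(−(ε−μ)/kBT).

```go
package main

import (
	"fmt"
	"math"
)

// kB is the Boltzmann constant in eV/K (an assumed unit choice for this sketch).
const kB = 8.617e-5

// fermiDirac returns the mean occupancy 1/(exp((e-mu)/(kB*T)) + 1).
func fermiDirac(e, mu, T float64) float64 {
	return 1 / (math.Exp((e-mu)/(kB*T)) + 1)
}

// maxwellBoltzmann returns the classical-limit occupancy exp(-(e-mu)/(kB*T)).
func maxwellBoltzmann(e, mu, T float64) float64 {
	return math.Exp(-(e - mu) / (kB * T))
}

func main() {
	const mu, T = 0.0, 300.0 // chemical potential (eV) and temperature (K); illustrative values
	for _, e := range []float64{-0.1, 0.0, 0.1, 0.5} {
		fmt.Printf("ε-μ = %+.2f eV: F-D = %.6g, M-B = %.6g\n",
			e-mu, fermiDirac(e, mu, T), maxwellBoltzmann(e, mu, T))
	}
}
```

At ε − μ = 0.5 eV and T = 300 K the exponent is roughly 19, so both occupancies are of order 10⁻⁹ and essentially indistinguishable, while near ε = μ the Fermi–Dirac value saturates at 1/2 instead of growing like the classical expression.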
It can then be shown that the classical situation prevails when theconcentrationof particles corresponds to anaverage interparticle separationR¯{\displaystyle {\bar {R}}}that is much greater than the averagede Broglie wavelengthλ¯{\displaystyle {\bar {\lambda }}}of the particles:[22] wherehis thePlanck constant, andmis themass of a particle. For the case of conduction electrons in a typical metal atT= 300K(i.e. approximately room temperature), the system is far from the classical regime becauseR¯≈λ¯/25{\displaystyle {\bar {R}}\approx {\bar {\lambda }}/25}. This is due to the small mass of the electron and the high concentration (i.e. smallR¯{\displaystyle {\bar {R}}}) of conduction electrons in the metal. Thus Fermi–Dirac statistics is needed for conduction electrons in a typical metal.[22] Another example of a system that is not in the classical regime is the system that consists of the electrons of a star that has collapsed to a white dwarf. Although the temperature of white dwarf is high (typicallyT=10000Kon its surface[23]), its high electron concentration and the small mass of each electron precludes using a classical approximation, and again Fermi–Dirac statistics is required.[8] The Fermi–Dirac distribution, which applies only to a quantum system of non-interacting fermions, is easily derived from thegrand canonical ensemble.[24]In this ensemble, the system is able to exchange energy and exchange particles with a reservoir (temperatureTand chemical potentialμfixed by the reservoir). Due to the non-interacting quality, each available single-particle level (with energy levelϵ) forms a separate thermodynamic system in contact with the reservoir. In other words, each single-particle level is a separate, tiny grand canonical ensemble. By the Pauli exclusion principle, there are only two possiblemicrostatesfor the single-particle level: no particle (energyE= 0), or one particle (energyE=ε). The resultingpartition functionfor that single-particle level therefore has just two terms: and the average particle number for that single-particle level substate is given by This result applies for each single-particle level, and thus gives the Fermi–Dirac distribution for the entire state of the system.[24] The variance in particle number (due tothermal fluctuations) may also be derived (the particle number has a simpleBernoulli distribution): This quantity is important in transport phenomena such as theMott relationsfor electrical conductivity andthermoelectric coefficientfor anelectron gas,[25]where the ability of an energy level to contribute to transport phenomena is proportional to⟨(ΔN)2⟩{\displaystyle {\big \langle }(\Delta N)^{2}{\big \rangle }}. It is also possible to derive Fermi–Dirac statistics in thecanonical ensemble. Consider a many-particle system composed ofNidentical fermions that have negligible mutual interaction and are in thermal equilibrium.[15]Since there is negligible interaction between the fermions, the energyER{\displaystyle E_{R}}of a stateR{\displaystyle R}of the many-particle system can be expressed as a sum of single-particle energies: wherenr{\displaystyle n_{r}}is called the occupancy number and is the number of particles in the single-particle stater{\displaystyle r}with energyεr{\displaystyle \varepsilon _{r}}. The summation is over all possible single-particle statesr{\displaystyle r}. 
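The sum referred to just above did not survive extraction; in standard notation (a reconstruction from the surrounding definitions, not the article's own markup) it reads:

```latex
E_R = \sum_{r} n_r \varepsilon_r , \qquad n_r \in \{0, 1\},
```

with the occupancies restricted to 0 or 1 by the Pauli exclusion principle, as the derivation that follows makes explicit.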
The probability that the many-particle system is in the stateR{\displaystyle R}is given by the normalizedcanonical distribution:[26] whereβ=1/kBT{\displaystyle \beta =1/k_{\text{B}}T},e−βER{\displaystyle e^{-\beta E_{R}}}is called theBoltzmann factor, and the summation is over all possible statesR′{\displaystyle R'}of the many-particle system. The average value for an occupancy numberni{\displaystyle n_{i}}is[26] Note that the stateR{\displaystyle R}of the many-particle system can be specified by the particle occupancy of the single-particle states, i.e. by specifyingn1,n2,…,{\displaystyle n_{1},n_{2},\ldots ,}so that and the equation forn¯i{\displaystyle {\bar {n}}_{i}}becomes where the summation is over all combinations of values ofn1,n2,…{\displaystyle n_{1},n_{2},\ldots }which obey the Pauli exclusion principle, andnr=0{\displaystyle n_{r}=0}= 0 or1{\displaystyle 1}for eachr{\displaystyle r}. Furthermore, each combination of values ofn1,n2,…{\displaystyle n_{1},n_{2},\ldots }satisfies the constraint that the total number of particles isN{\displaystyle N}: Rearranging the summations, where the upper index(i){\displaystyle (i)}on the summation sign indicates that the sum is not overni{\displaystyle n_{i}}and is subject to the constraint that the total number of particles associated with the summation isNi=N−ni{\displaystyle N_{i}=N-n_{i}}. Note that∑(i){\displaystyle \textstyle \sum ^{(i)}}still depends onni{\displaystyle n_{i}}through theNi{\displaystyle N_{i}}constraint, since in one caseni=0{\displaystyle n_{i}=0}and∑(i){\displaystyle \textstyle \sum ^{(i)}}is evaluated withNi=N,{\displaystyle N_{i}=N,}while in the other caseni=1,{\displaystyle n_{i}=1,}and∑(i){\displaystyle \textstyle \sum ^{(i)}}is evaluated withNi=N−1.{\displaystyle N_{i}=N-1.}To simplify the notation and to clearly indicate that∑(i){\displaystyle \textstyle \sum ^{(i)}}still depends onni{\displaystyle n_{i}}throughN−ni,{\displaystyle N-n_{i},}define so that the previous expression forn¯i{\displaystyle {\bar {n}}_{i}}can be rewritten and evaluated in terms of theZi{\displaystyle Z_{i}}: The following approximation[27]will be used to find an expression to substitute forZi(N)/Zi(N−1){\displaystyle Z_{i}(N)/Z_{i}(N-1)}: whereαi≡∂ln⁡Zi(N)∂N.{\displaystyle \alpha _{i}\equiv {\frac {\partial \ln Z_{i}(N)}{\partial N}}.} If the number of particlesN{\displaystyle N}is large enough so that the change in the chemical potentialμ{\displaystyle \mu }is very small when a particle is added to the system, thenαi≃−μ/kBT.{\displaystyle \alpha _{i}\simeq -\mu /k_{\text{B}}T.}[28]Applying the exponential function to both sides, substituting forαi{\displaystyle \alpha _{i}}and rearranging, Substituting the above into the equation forn¯i{\displaystyle {\bar {n}}_{i}}and using a previous definition ofβ{\displaystyle \beta }to substitute1/kBT{\displaystyle 1/k_{\text{B}}T}forβ{\displaystyle \beta }, results in the Fermi–Dirac distribution: Like theMaxwell–Boltzmann distributionand theBose–Einstein distribution, the Fermi–Dirac distribution can also be derived by theDarwin–Fowler methodof mean values.[29] A result can be achieved by directly analyzing the multiplicities of the system and usingLagrange multipliers.[30] Suppose we have a number of energy levels, labeled by indexi, each level having energy εiand containing a total ofniparticles. Suppose each level containsgidistinct sublevels, all of which have the same energy, and which are distinguishable. For example, two particles may have different momenta (i.e. 
their momenta may be along different directions), in which case they are distinguishable from each other, yet they can still have the same energy. The value of gi associated with level i is called the "degeneracy" of that energy level. The Pauli exclusion principle states that only one fermion can occupy any such sublevel. The number of ways of distributing ni indistinguishable particles among the gi sublevels of an energy level, with a maximum of one particle per sublevel, is given by the binomial coefficient, using its combinatorial interpretation: For example, distributing two particles in three sublevels will give population numbers of 110, 101, or 011, for a total of three ways, which equals 3!/(2!1!). The number of ways that a set of occupation numbers ni can be realized is the product of the ways that each individual energy level can be populated: Following the same procedure used in deriving the Maxwell–Boltzmann statistics, we wish to find the set of ni for which W is maximized, subject to the constraint that there be a fixed number of particles and a fixed energy. We constrain our solution using Lagrange multipliers forming the function: Using Stirling's approximation for the factorials, taking the derivative with respect to ni, setting the result to zero, and solving for ni yields the Fermi–Dirac population numbers: By a process similar to that outlined in the Maxwell–Boltzmann statistics article, it can be shown thermodynamically that β = 1/(kBT) and α = −μ/(kBT), so that finally, the probability that a state will be occupied is:
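Reconstructed in standard notation from the surrounding description (the article's rendered equations did not survive extraction, so the following is a sketch of the standard result rather than a quotation of the article itself):

```latex
\frac{\bar{n}_i}{g_i} \;=\; \frac{1}{e^{(\varepsilon_i-\mu)/k_{\mathrm{B}}T}+1}.
```

The intermediate steps use W = \prod_i g_i!/(n_i!\,(g_i-n_i)!) and the constrained function f = \ln W + \alpha(N - \sum_i n_i) + \beta(E - \sum_i n_i \varepsilon_i), whose maximization gives n_i = g_i/(e^{\alpha+\beta\varepsilon_i}+1) before the identifications of α and β stated above.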
https://en.wikipedia.org/wiki/Fermi%E2%80%93Dirac_statistics
Both electrical and electronics engineers typically possess anacademic degreewith a major in electrical/ electronics engineering. The length of study for such a degree is usually three or four years and the completed degree may be designated as aBachelor of Engineering,Bachelor of ScienceorBachelor of Applied Sciencedepending upon the university. The degree generally includes units coveringphysics,mathematics,project managementandspecific topics in electrical and electronics engineering. Initially such topics cover most, if not all, of the sub fields of electrical engineering. Students then choose to specialize in one or more sub fields towards the end of the degree. In most countries, a bachelor's degree in engineering represents the first step towards certification and the degree program itself is certified by a professional body. After completing a certified degree program the engineer must satisfy a range of requirements (including work experience requirements) before being certified. Once certified the engineer is designated the title of Professional Engineer (in the United States and Canada), Chartered Engineer (in the United Kingdom, Ireland, India, Pakistan, South Africa and Zimbabwe), Chartered Professional Engineer (in Australia) or European Engineer (in much of the European Union). Electrical engineers can also choose to pursue a postgraduate degree such as amaster of engineering, adoctor of philosophyin engineering or anengineer's degree. The master and engineer's degree may consist of eitherresearch,courseworkor a mixture of the two. The doctor of philosophy consists of a significant research component and is often viewed as the entry point toacademia. In the United Kingdom and various other European countries, the master of engineering is often considered an undergraduate degree of slightly longer duration than the bachelor of engineering. Apart from electromagnetics and network theory, other items in the syllabus are particular toelectronicsengineering course.Electricalengineering courses have other specializations such asmachines,power generationanddistribution. Note that the following list does not include the large quantity of mathematics (maybe apart from the final year) included in each year's study. Elements of vector calculus: divergence and curl; Gauss' andStokes' theorems,Maxwell's equations: differential and integral forms.Wave equation,Poynting vector.Plane waves: propagation through various media; reflection and refraction; phase and group velocity; skin depth. Transmission lines: characteristic impedance; impedance transformation; Smith chart; impedance matching; pulse excitation. Waveguides: modes in rectangular waveguides; boundary conditions; cut-off frequencies; dispersion relations. Antennas:Dipole antennas;antenna arrays; radiation pattern;reciprocity theorem,antenna gain. Additional basic fundamental in electrical are to be study Network graphs: matrices associated with graphs; incidence, fundamentalcut setand fundamental circuit matrices. Solution methods: nodal andmesh analysis. Network theorems: superposition, Thevenin and Norton'smaximum power transfer,Wye-Delta transformation. Steady state sinusoidal analysis usingphasors. Linear constant coefficient differential equations; time domain analysis of simple RLC circuits, Solution of network equations using Laplace transform: frequency domain analysis of RLC circuits. 2-port network parameters: driving point and transfer functions. State equations. 
Electronic Devices:Energy bands in silicon, intrinsic and extrinsic silicon. Carrier transport in silicon:diffusion current,drift current, mobility,resistivity. Generation and recombination of carriers.p-n junctiondiode,Zener diode,tunnel diode, BJT,JFET, MOS capacitor,MOSFET, LED, p-I-n and avalanchephoto diode, LASERs. Device technology: integrated circuits fabrication process, oxidation, diffusion, ion implantation, photolithography, n-tub, p-tub and twin-tub CMOS process. Analog Circuits:Equivalent circuits (large and small-signal) of diodes,BJTs,JFETs, andMOSFETs, Simple diode circuits, clipping, clamping, rectifier. Biasing and bias stability of transistor and FET amplifiers. Amplifiers: single-and multi-stage, differential, operational, feedback and power. Analysis of amplifiers; frequency response of amplifiers. Simple op-amp circuits. Filters. Sinusoidal oscillators; criterion for oscillation; single-transistor and op-amp configurations.Function generatorsand wave-shaping circuits. Power supplies. Digital circuits:Boolean algebra, minimization of Boolean functions;logic gatesdigital IC families (DTL,TTL,ECL, MOS,CMOS). Combinational circuits: arithmetic circuits, code converters,multiplexersand decoders. Sequential circuits: latches andflip-flops, counters andshift-registers. Sample and hold circuits,ADCs, DACs. Semiconductor memories.Microprocessor(8085): architecture, programming, memory and I/O interfacing. Definitions and properties ofLaplace transform, continuous-time and discrete-time Fourier series, continuous-time and discrete-timeFourier Transform,z-transform. Sampling theorems. Linear Time-Invariant Systems: definitions and properties; casualty, stability,impulse response, convolution, poles and zeros frequency response, group delay, phase delay. Signal transmission through LTI systems.Random signalsand noise: probability,random variables,probability density function,autocorrelation, power spectral density. Control systemcomponents; block diagrammatic description, reduction of block diagrams.Open loopand closed loop (feedback) systems and stability analysis of these systems.Signal flow graphsand their use in determining transfer functions of systems; transient and steady state analysis of LTI control systems and frequency response. Tools and techniques for LTI control system analysis:root loci,Routh-Hurwitz criterion,BodeandNyquist plots. Control system compensators: elements of lead andlag compensation, elements of Proportional-Integral-Derivative control. State variable representation and solution of state equation of LTI control systems. Communication systems: amplitude and angle modulation and demodulation systems, spectral analysis of these operations, superheterodyne receivers; elements of hardware, realizations of analog communication systems; signal-to-noise ratio calculations for amplitude modulation (AM) and frequency modulation (FM) for low noise conditions. Digital communication systems: pulse code modulation,differential pulse-code modulation, delta modulation; digital modulation schemes-amplitude, phase and frequency shift keying schemes, matched filter receivers, bandwidth consideration and probability of error calculations for these schemes. The advantages of certification vary depending upon location. For example, in the United States and Canada "only a licensed engineer may...seal engineering work for public and private clients". This requirement is enforced by state and provincial legislation such as Quebec's Engineers Act. 
In other countries, such as Australia, no such legislation exists. Practically all certifying bodies maintain a code of ethics that they expect all members to abide by or risk expulsion. In this way, these organizations play an important role in maintaining ethical standards for the profession. Even in jurisdictions where certification has little or no legal bearing on work, engineers are subject to contract law. In cases where an engineer's work fails, he or she may be subject to the tort of negligence and, in extreme cases, the charge of criminal negligence. An engineer's work must also comply with numerous other rules and regulations such as building codes and legislation pertaining to environmental law. Significant professional bodies for electrical engineers include the Institute of Electrical and Electronics Engineers and the Institution of Engineering and Technology. The former claims to produce 30 percent of the world's literature on electrical engineering, has over 360,000 members worldwide, and holds over 300 conferences annually. The latter publishes 14 journals, has a worldwide membership of 120,000, certifies Chartered Engineers in the United Kingdom, and claims to be the largest professional engineering society in Europe.
https://en.wikipedia.org/wiki/Education_and_training_of_electrical_and_electronics_engineers
Apost-creole continuum(or simplycreole continuum) is adialect continuumofvarietiesof acreole languagebetween those most and least similar to thesuperstratelanguage (that is, a closely related language whose speakers assert or asserted dominance of some sort). Due to social, political, and economic factors, a creole language candecreolizetowards one of the languages from which it is descended, aligning itsmorphology,phonology, andsyntaxto the local standard of the dominant language but to different degrees depending on a speaker's status. William Stewart, in 1965, proposed the termsacrolect, the highest or most prestigious variety on the continuum, andbasilect, the lowest or least prestigious variety, as sociolinguistic labels for the upper and lower boundaries, respectively, of a post-creole speech continuum.[1]In the early 1970sDerek Bickertonpopularized these terms (as well asmesolectfor intermediate points in the continuum) to refer to the phenomenon ofcode-switchingused by some users of creole languages who also have some fluency in thestandard languageupon which the contact language is based.University of ChicagolinguistSalikoko Mufweneexplains the phenomenon of creole languages as "basilectalization" away from a standard, often European, language among a mixed European and non-European population.[2]In certainspeech communities, a continuum exists between speakers of a creole language and a related standard language. There are no discrete boundaries between the different varieties, and the situation in which such a continuum exists involves considerable social stratification. The following table (fromBell 1976) shows the 18 different ways of rendering the phraseI gave him oneinGuyanese English: The continuum shown has the acrolect form as[aɪɡeɪvhɪmwʌn](which is identical withStandard English) while the basilect form is[mɪɡiːæmwan]. Due to code-switching, most speakers have a command of a range in the continuum and, depending on social position, occupation, etc. can implement the different levels with various levels of skill.[3] If a society is so stratified as to have little to no contact between groups who speak the creole and those who speak the superstrate (dominant) language, a situation ofdiglossiaoccurs, rather than a continuum. Assigning separate and distinct functions for the two varieties will have the same effect. This is the case inHaitiwithHaitian CreoleandFrench. Use of the termsacrolect,mesolectandbasilectattempts to avoid thevalue judgementinherent in earlier terminology, by which the language spoken by the ruling classes in a capital city was defined as the "correct" or "pure" form while that spoken by the lower classes and inhabitants of outlying provinces was "a dialect" characterised as "incorrect", "impure" or "debased". It has been suggested (Rickford 1977;Dillard 1972) thatAfrican American Vernacular Englishis a decreolized form of a slave creole. After emancipation, African-Americans' recognition and exercise of increased opportunities for interaction created a strong influence ofStandard American Englishonto the speech of Black Americans so that a continuum exists today with Standard English as the acrolect and varieties closest to the original creole as the basilect. InJamaica, a continuum exists betweenJamaican EnglishandJamaican Patois.[4][5] In Haiti, the acrolect isHaitian Frenchand the basilect has been standardized asHaitian Creole. 
Meanwhile, in southern Africa, Afrikaans is a codified mesolect, or a partial creole,[6][7] with the acrolect (standard Dutch) stripped of official status decades ago, having been used for only religious purposes.
https://en.wikipedia.org/wiki/Post-creole_speech_continuum
Gois ahigh-levelgeneral purpose programming languagethat isstatically typedandcompiled. It is known for the simplicity of its syntax and the efficiency of development that it enables by the inclusion of a large standard library supplying many needs for common projects.[12]It was designed atGoogle[13]in 2007 byRobert Griesemer,Rob Pike, andKen Thompson, and publicly announced in November of 2009.[4]It issyntacticallysimilar toC, but also hasmemory safety,garbage collection,structural typing,[7]andCSP-styleconcurrency.[14]It is often referred to asGolangto avoid ambiguity and because of its former domain name,golang.org, but its proper name is Go.[15] There are two major implementations: A third-partysource-to-source compiler, GopherJS,[21]transpiles Go toJavaScriptforfront-end web development. Go was designed atGooglein 2007 to improveprogramming productivityin an era ofmulticore,networkedmachinesand largecodebases.[22]The designers wanted to address criticisms of other languages in use at Google, but keep their useful characteristics:[23] Its designers were primarily motivated by their shareddislike of C++.[25][26][27] Go was publicly announced in November 2009,[28]and version 1.0 was released in March 2012.[29][30]Go is widely used in production at Google[31]and in many other organizations and open-source projects. In retrospect the Go authors judged Go to be successful due to the overall engineering work around the language, including the runtime support for the language's concurrency feature. Although the design of most languages concentrates on innovations in syntax, semantics, or typing, Go is focused on the software development process itself. ... The principal unusual property of the language itself—concurrency—addressed problems that arose with the proliferation of multicore CPUs in the 2010s. But more significant was the early work that established fundamentals for packaging, dependencies, build, test, deployment, and other workaday tasks of the software development world, aspects that are not usually foremost in language design.[32] TheGophermascotwas introduced in 2009 for theopen sourcelaunch of the language. The design, byRenée French, borrowed from a c. 2000WFMUpromotion.[33] In November 2016, the Go and Go Mono fonts were released by type designersCharles BigelowandKris Holmesspecifically for use by the Go project. Go is ahumanist sans-serifresemblingLucida Grande, and Go Mono ismonospaced. Both fonts adhere to theWGL4character set and were designed to be legible with a largex-heightand distinctletterforms. Both Go and Go Mono adhere to theDIN1450 standard by having a slashed zero, lowercaselwith a tail, and an uppercaseIwith serifs.[34][35] In April 2018, the original logo was redesigned by brand designer Adam Smith. The new logo is a modern, stylized GO slanting right with trailing streamlines. 
(The Gopher mascot remained the same.[36]) The lack of support forgeneric programmingin initial versions of Go drew considerable criticism.[37]The designers expressed an openness to generic programming and noted that built-in functionswerein fact type-generic, but are treated as special cases; Pike called this a weakness that might be changed at some point.[38]The Google team built at least one compiler for an experimental Go dialect with generics, but did not release it.[39] In August 2018, the Go principal contributors published draft designs for generic programming anderror handlingand asked users to submit feedback.[40][41]However, the error handling proposal was eventually abandoned.[42] In June 2020, a new draft design document[43]was published that would add the necessary syntax to Go for declaring generic functions and types. A code translation tool,go2go, was provided to allow users to try the new syntax, along with a generics-enabled version of the online Go Playground.[44] Generics were finally added to Go in version 1.18 on March 15, 2022.[45] Go 1 guarantees compatibility[46]for the language specification and major parts of the standard library. All versions up through the current Go 1.24 release[47]have maintained this promise. Go uses ago1.[major].[patch]versioning format, such asgo1.24.0and each major Go release is supported until there are two newer major releases. Unlike most software, Go calls the second number in a version the major, i.e., ingo1.24.0the24is the major version.[48]This is because Go plans to never reach 2.0, prioritizing backwards compatibility over potential breaking changes.[49] Go is influenced byC(especially thePlan 9dialect[50][failed verification–see discussion]), but with an emphasis on greater simplicity and safety. It consists of: Go's syntax includes changes fromCaimed at keeping code concise and readable. A combined declaration/initialization operator was introduced that allows the programmer to writei:=3ors:="Hello, world!",without specifying the typesof variables used. This contrasts with C'sinti=3;andconstchar*s="Hello, world!";. Go also removes the requirement to use parentheses in if statement conditions. Semicolons still terminate statements;[a]but are implicit when the end of a line occurs.[b] Methods may return multiple values, and returning aresult,errpair is the conventional way a method indicates an error to its caller in Go.[c]Go adds literal syntaxes for initializing struct parameters by name and for initializingmapsandslices. As an alternative to C's three-statementforloop, Go'srangeexpressions allow concise iteration over arrays, slices, strings, maps, and channels.[58] fmt.Println("Hello World!")is a statement. In Go, statements are separated by ending a line (hitting the Enter key) or by a semicolon ";". Hitting the Enter key adds ";" to the end of the line implicitly (does not show up in the source code). The left curly bracket{cannot come at the start of a line.[59] Go has a number of built-in types, including numeric ones (byte,int64,float32, etc.),Booleans, and byte strings (string). Strings are immutable; built-in operators and keywords (rather than functions) provide concatenation, comparison, andUTF-8encoding/decoding.[60]Record typescan be defined with thestructkeyword.[61] For each typeTand each non-negative integer constantn, there is anarray typedenoted[n]T; arrays of differing lengths are thus of different types.Dynamic arraysare available as "slices", denoted[]Tfor some typeT. 
These have a length and acapacityspecifying when new memory needs to be allocated to expand the array. Several slices may share their underlying memory.[38][62][63] Pointersare available for all types, and the pointer-to-Ttype is denoted*T. Address-taking and indirection use the&and*operators, as in C, or happen implicitly through the method call or attribute access syntax.[64][65]There is no pointer arithmetic,[d]except via the specialunsafe.Pointertype in the standard library.[66] For a pair of typesK,V, the typemap[K]Vis the type mapping type-Kkeys to type-Vvalues, though Go Programming Language specification does not give any performance guarantees or implementation requirements for map types. Hash tables are built into the language, with special syntax and built-in functions.chanTis achannelthat allows sending values of typeTbetweenconcurrent Go processes.[67] Aside from its support forinterfaces, Go's type system isnominal: thetypekeyword can be used to define a newnamed type, which is distinct from other named types that have the same layout (in the case of astruct, the same members in the same order). Some conversions between types (e.g., between the various integer types) are pre-defined and adding a new type may define additional conversions, but conversions between named types must always be invoked explicitly.[68]For example, thetypekeyword can be used to define a type forIPv4addresses, based on 32-bit unsigned integers as follows: With this type definition,ipv4addr(x)interprets theuint32valuexas an IP address. Simply assigningxto a variable of typeipv4addris a type error.[69] Constant expressionsmay be either typed or "untyped"; they are given a type when assigned to a typed variable if the value they represent passes a compile-time check.[70] Functiontypesare indicated by thefunckeyword; they take zero or moreparametersandreturnzero or more values, all of which are typed. The parameter and return values determine a function type; thus,func(string, int32) (int, error)is the type of functions that take astringand a 32-bit signed integer, and return a signed integer (of default width) and a value of the built-in interface typeerror.[71] Any named type has amethodset associated with it. The IP address example above can be extended with a method for checking whether its value is a known standard: Due to nominal typing, this method definition adds a method toipv4addr, but not onuint32. While methods have special definition and call syntax, there is no distinct method type.[72] Go provides two features that replaceclass inheritance.[citation needed] The first isembedding, which can be viewed as an automated form ofcomposition.[73] The second are itsinterfaces, which providesruntime polymorphism.[74]: 266Interfaces are a class of types and provide a limited form ofstructural typingin the otherwise nominal type system of Go. An object which is of an interface type is also of another type, much likeC++objects being simultaneously of a base and derived class. The design of Go interfaces was inspired byprotocolsfrom the Smalltalk programming language.[75]Multiple sources use the termduck typingwhen describing Go interfaces.[76][77]Although the term duck typing is not precisely defined and therefore not wrong, it usually implies that type conformance is not statically checked. 
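Returning briefly to the named-type example sketched earlier in this section (ipv4addr), whose code listing was lost in extraction, a plausible hedged reconstruction follows; the isLoopback method and the check it performs are assumptions chosen for illustration, not the article's original code.

```go
package main

import "fmt"

// ipv4addr is a named type with the same underlying layout as uint32.
// Values must be converted explicitly: ipv4addr(x) is legal, plain assignment is not.
type ipv4addr uint32

// isLoopback reports whether the address lies in the 127.0.0.0/8 block.
// Defining the method on ipv4addr adds it to that type's method set, not to uint32.
func (ip ipv4addr) isLoopback() bool {
	return byte(ip>>24) == 127
}

func main() {
	var raw uint32 = 0x7F000001 // 127.0.0.1 as a 32-bit value
	addr := ipv4addr(raw)       // explicit conversion between named types
	fmt.Println(addr.isLoopback())
	// var bad ipv4addr = raw // would not compile: uint32 is a distinct named type
}
```

A plain uint32 value gains none of these methods and must first be converted explicitly, which is the nominal-typing behaviour described above.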
Because conformance to a Go interface is checked statically by the Go compiler (except when performing a type assertion), the Go authors prefer the termstructural typing.[78] The definition of an interface type lists required methods by name and type. Any object of type T for which functions exist matching all the required methods of interface type I is an object of type I as well. The definition of type T need not (and cannot) identify type I. For example, ifShape,Squareand Circleare defined as then both aSquareand aCircleare implicitly aShapeand can be assigned to aShape-typed variable.[74]: 263–268In formal language, Go's interface system providesstructuralrather thannominaltyping. Interfaces can embed other interfaces with the effect of creating a combined interface that is satisfied by exactly the types that implement the embedded interface and any methods that the newly defined interface adds.[74]: 270 The Go standard library uses interfaces to provide genericity in several places, including the input/output system that is based on the concepts ofReaderandWriter.[74]: 282–283 Besides calling methods via interfaces, Go allows converting interface values to other types with a run-time type check. The language constructs to do so are thetype assertion,[79]which checks against a single potential type: and thetype switch,[80]which checks against multiple types:[citation needed] Theempty interfaceinterface{}is an important base case because it can refer to an item ofanyconcrete type. It is similar to theObjectclass inJavaorC#and is satisfied by any type, including built-in types likeint.[74]: 284Code using the empty interface cannot simply call methods (or built-in operators) on the referred-to object, but it can store theinterface{}value, try to convert it to a more useful type via a type assertion or type switch, or inspect it with Go'sreflectpackage.[81]Becauseinterface{}can refer to any value, it is a limited way to escape the restrictions of static typing, likevoid*in C but with additional run-time type checks.[citation needed] Theinterface{}type can be used to model structured data of any arbitrary schema in Go, such asJSONorYAMLdata, by representing it as amap[string]interface{}(map of string to empty interface). This recursively describes data in the form of a dictionary with string keys and values of any type.[82] Interface values are implemented using pointer to data and a second pointer to run-time type information.[83]Like some other types implemented using pointers in Go, interface values arenilif uninitialized.[84] Since version 1.18, Go supports generic code using parameterized types.[85] Functions and types now have the ability to be generic using type parameters. These type parameters are specified within square brackets, right after the function or type name.[86]The compiler transforms the generic function or type into non-generic by substitutingtype argumentsfor the type parameters provided, either explicitly by the user or type inference by the compiler.[87]This transformation process is referred to as type instantiation.[88] Interfaces now can define a set of types (known as type set) using|(Union) operator, as well as a set of methods. These changes were made to support type constraints in generics code. For a generic function or type, a constraint can be thought of as the type of the type argument: a meta-type. 
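As a hedged illustration of type parameters, type sets, and the ~T form discussed next (the Number constraint and Sum function here are invented for this sketch, not taken from the article):

```go
package main

import "fmt"

// Number is a constraint (a meta-type): its type set contains every type whose
// underlying type is one of the listed ones, thanks to the ~ prefix.
type Number interface {
	~int | ~int64 | ~float64
}

// Sum is a generic function; T is a type parameter constrained by Number.
func Sum[T Number](xs []T) T {
	var total T
	for _, x := range xs {
		total += x
	}
	return total
}

type Celsius float64 // underlying type float64, so Celsius satisfies ~float64

func main() {
	fmt.Println(Sum([]int{1, 2, 3}))               // T inferred as int
	fmt.Println(Sum([]Celsius{20.5, 21.0}))        // allowed because of ~float64
	fmt.Println(Sum[float64]([]float64{0.5, 1.5})) // explicit instantiation
}
```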
This new~Tsyntax will be the first use of~as a token in Go.~Tmeans the set of all types whose underlying type isT.[89] Go uses theiotakeyword to create enumerated constants.[90][91] In Go's package system, each package has a path (e.g.,"compress/bzip2"or"golang.org/x/net/html") and a name (e.g.,bzip2orhtml). By default other packages' definitions mustalwaysbe prefixed with the other package's name. However the name used can be changed from the package name, and if imported as_, then no package prefix is required. Only thecapitalizednames from other packages are accessible:io.Readeris public butbzip2.readeris not.[92]Thego getcommand can retrieve packages stored in a remote repository[93]and developers are encouraged to develop packages inside a base path corresponding to a source repository (such as example.com/user_name/package_name) to reduce the likelihood of name collision with future additions to the standard library or other external libraries.[94] The Go language has built-in facilities, as well as library support, for writingconcurrent programs. The runtime isasynchronous: program execution that performs for example a network read will be suspended until data is available to process, allowing other parts of the program to perform other work. This is built into the runtime and does not require any changes in program code. The go runtime also automatically schedules concurrent operations (goroutines) across multiple CPUs; this can achieve parallelism for a properly written program.[95] The primary concurrency construct is thegoroutine, a type ofgreen thread.[96]: 280–281A function call prefixed with thegokeyword starts a function in a new goroutine. The language specification does not specify how goroutines should be implemented, but current implementations multiplex a Go process's goroutines onto a smaller set ofoperating-system threads, similar to the scheduling performed inErlangandHaskell's GHC runtime implementation.[97]: 10 While a standard library package featuring most of the classicalconcurrency controlstructures (mutexlocks, etc.) is available,[97]: 151–152idiomatic concurrent programs instead preferchannels, whichsend messagesbetween goroutines.[98]Optional buffers store messages inFIFOorder[99]: 43and allow sending goroutines to proceed before their messages are received.[96]: 233 Channels are typed, so that a channel of typechanTcan only be used to transfer messages of typeT. Special syntax is used to operate on them;<-chis an expression that causes the executing goroutine to block until a value comes in over the channelch, whilech <- xsends the valuex(possibly blocking until another goroutine receives the value). The built-inswitch-likeselectstatement can be used to implement non-blocking communication on multiple channels; seebelowfor an example. Go has a memory model describing how goroutines must use channels or other operations to safely share data.[100] The existence of channels does not by itself set Go apart fromactor model-style concurrent languages like Erlang, where messages are addressed directly to actors (corresponding to goroutines). In the actor model, channels are themselves actors, therefore addressing a channel just means to address an actor. 
The actor style can be simulated in Go by maintaining a one-to-one correspondence between goroutines and channels, but the language allows multiple goroutines to share a channel or a single goroutine to send and receive on multiple channels.[97]: 147 From these tools one can build concurrent constructs likeworker pools, pipelines (in which, say, a file is decompressed and parsed as it downloads), background calls with timeout, "fan-out" parallel calls to a set of services, and others.[101]Channels have also found uses further from the usual notion of interprocess communication, like serving as a concurrency-safe list of recycled buffers,[102]implementingcoroutines(which helped inspire the namegoroutine),[103]and implementingiterators.[104] Concurrency-related structural conventions of Go (channelsand alternative channel inputs) are derived fromTony Hoare'scommunicating sequential processesmodel. Unlike previous concurrent programming languages such asOccamorLimbo(a language on which Go co-designer Rob Pike worked),[105]Go does not provide any built-in notion of safe or verifiable concurrency.[106]While the communicating-processes model is favored in Go, it is not the only one: all goroutines in a program share a single address space. This means that mutable objects and pointers can be shared between goroutines; see§ Lack of data race safety, below. Although Go's concurrency features are not aimed primarily atparallel processing,[95]they can be used to programshared-memorymulti-processormachines. Various studies have been done into the effectiveness of this approach.[107]One of these studies compared the size (inlines of code) and speed of programs written by a seasoned programmer not familiar with the language and corrections to these programs by a Go expert (from Google's development team), doing the same forChapel,CilkandIntel TBB. The study found that the non-expert tended to writedivide-and-conqueralgorithms with onegostatement per recursion, while the expert wrote distribute-work-synchronize programs using one goroutine per processor core. The expert's programs were usually faster, but also longer.[108] Go's approach to concurrency can be summarized as "don't communicate by sharing memory; share memory by communicating".[109]There are no restrictions on how goroutines access shared data, makingdata racespossible. 
Specifically, unless a program explicitly synchronizes via channels or other means, writes from one goroutine might be partly, entirely, or not at all visible to another, often with no guarantees about ordering of writes.[106]Furthermore, Go'sinternal data structureslike interface values, slice headers, hash tables, and string headers are not immune to data races, so type and memory safety can be violated in multithreaded programs that modify shared instances of those types without synchronization.[110][111]Instead of language support, safe concurrent programming thus relies on conventions; for example, Chisnall recommends an idiom called "aliasesxormutable", meaning that passing a mutable value (or pointer) over a channel signals a transfer of ownership over the value to its receiver.[97]: 155The gc toolchain has an optional data race detector that can check for unsynchronized access to shared memory during runtime since version 1.1,[112]additionally a best-effort race detector is also included by default since version 1.6 of the gc runtime for access to themapdata type.[113] The linker in the gc toolchain creates statically linked binaries by default; therefore all Go binaries include the Go runtime.[114][115] Go deliberately omits certain features common in other languages, including(implementation) inheritance,assertions,[e]pointer arithmetic,[d]implicit type conversions,untagged unions,[f]andtagged unions.[g]The designers added only those facilities that all three agreed on.[118] Of the omitted language features, the designers explicitly argue against assertions and pointer arithmetic, while defending the choice to omit type inheritance as giving a more useful language, encouraging instead the use ofinterfacesto achievedynamic dispatch[h]andcompositionto reuse code. Composition anddelegationare in fact largely automated bystructembedding; according to researchers Schmageret al., this feature "has many of the drawbacks of inheritance: it affects the public interface of objects, it is not fine-grained (i.e, no method-level control over embedding), methods of embedded objects cannot be hidden, and it is static", making it "not obvious" whether programmers will overuse it to the extent that programmers in other languages are reputed to overuse inheritance.[73] Exception handlingwas initially omitted in Go due to lack of a "design that gives value proportionate to the complexity".[119]An exception-likepanic/recovermechanism that avoids the usualtry-catchcontrol structure was proposed[120]and released in the March 30, 2010 snapshot.[121]The Go authors advise using it for unrecoverable errors such as those that should halt an entire program or server request, or as a shortcut to propagate errors up the stack within a package.[122][123]Across package boundaries, Go includes a canonical error type, and multi-value returns using this type are the standard idiom.[4] The Go authors put substantial effort into influencing the style of Go programs: The main Go distribution includes tools forbuilding,testing, andanalyzingcode: It also includesprofilinganddebuggingsupport,fuzzingcapabilities to detect bugs,runtimeinstrumentation (for example, to trackgarbage collectionpauses), and adata racedetector. 
Another tool maintained by the Go team but is not included in Go distributions isgopls, a language server that providesIDEfeatures such asintelligent code completiontoLanguage Server Protocolcompatible editors.[132] An ecosystem of third-party tools adds to the standard distribution, such asgocode, which enables code autocompletion in many text editors,goimports, which automatically adds/removes package imports as needed, anderrcheck, which detects code that might unintentionally ignore errors. where "fmt" is the package forformattedI/O, similar to C'sC file input/output.[133] The following simple program demonstrates Go'sconcurrency featuresto implement an asynchronous program. It launches two lightweight threads ("goroutines"): one waits for the user to type some text, while the other implements a timeout. Theselectstatement waits for either of these goroutines to send a message to the main routine, and acts on the first message to arrive (example adapted from David Chisnall's book).[97]: 152 The testing package provides support for automated testing of go packages.[134]Target function example: Test code (note thatassertkeyword is missing in Go; tests live in <filename>_test.go at the same package): It is possible to run tests in parallel. Thenet/http[135]package provides support for creating web applications. This example would show "Hello world!" when localhost:8080 is visited. Go has found widespread adoption in various domains due to its robust standard library and ease of use.[136] Popular applications include:Caddy, a web server that automates the process of setting up HTTPS,[137]Docker, which provides a platform for containerization, aiming to ease the complexities of software development and deployment,[138]Kubernetes, which automates the deployment, scaling, and management of containerized applications,[139]CockroachDB, a distributed SQL database engineered for scalability and strong consistency,[140]andHugo, a static site generator that prioritizes speed and flexibility, allowing developers to create websites efficiently.[141] The interface system, and the deliberate omission of inheritance, were praised by Michele Simionato, who likened these characteristics to those ofStandard ML, calling it "a shame that no popular language has followed [this] particular route".[142] Dave Astels atEngine Yardwrote in 2009:[143] Go is extremely easy to dive into. There are a minimal number of fundamental language concepts and thesyntaxis clean and designed to be clear and unambiguous. Goisstill experimental and still a little rough around the edges. Go was named Programming Language of the Year by theTIOBE Programming Community Indexin its first year, 2009, for having a larger 12-month increase in popularity (in only 2 months, after its introduction in November) than any other language that year, and reached 13th place by January 2010,[144]surpassing established languages likePascal. By June 2015, its ranking had dropped to below 50th in the index, placing it lower thanCOBOLandFortran.[145]But as of January 2017, its ranking had surged to 13th, indicating significant growth in popularity and adoption. Go was again awarded TIOBE Programming Language of the Year in 2016.[146] Bruce Eckelhas stated:[147] The complexity ofC++(even more complexity has been added in the new C++), and the resulting impact on productivity, is no longer justified. 
All the hoops that the C++ programmer had to jump through in order to use a C-compatible language make no sense anymore -- they're just a waste of time and effort. Go makes much more sense for the class of problems that C++ was originally intended to solve. A 2011 evaluation of the language and itsgcimplementation in comparison to C++ (GCC), Java andScalaby a Google engineer found: Go offers interesting language features, which also allow for a concise and standardized notation. The compilers for this language are still immature, which reflects in both performance and binary sizes. The evaluation got a rebuttal from the Go development team. Ian Lance Taylor, who had improved the Go code for Hundt's paper, had not been aware of the intention to publish his code, and says that his version was "never intended to be an example of idiomatic or efficient Go"; Russ Cox then optimized the Go code, as well as the C++ code, and got the Go code to run almost as fast as the C++ version and more than an order of magnitude faster than the code in the paper.[149] On November 10, 2009, the day of the general release of the language, Francis McCabe, developer of theGo! programming language(note the exclamation point), requested a name change of Google's language to prevent confusion with his language, which he had spent 10 years developing.[156]McCabe raised concerns that "the 'big guy' will end up steam-rollering over" him, and this concern resonated with the more than 120 developers who commented on Google's official issues thread saying they should change the name, with some[157]even saying the issue contradicts Google's motto of:Don't be evil.[158] On October 12, 2010, the filed public issue ticket was closed by Google developer Russ Cox (@rsc) with the custom status "Unfortunate" accompanied by the following comment: "There are many computing products and services named Go. In the 11 months since our release, there has been minimal confusion of the two languages."[158]
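The concurrency example described earlier (two goroutines, one waiting for user input and one implementing a timeout, with a select statement acting on whichever message arrives first) also lost its listing in this extraction. The following is a minimal hedged sketch in the same spirit, not the article's or Chisnall's original code; the five-second timeout and channel names are assumptions.

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"time"
)

func main() {
	lines := make(chan string)
	timeout := make(chan bool)

	// Goroutine 1: wait for the user to type a line, then send it on a channel.
	go func() {
		reader := bufio.NewReader(os.Stdin)
		text, _ := reader.ReadString('\n')
		lines <- text
	}()

	// Goroutine 2: implement a timeout by sending after five seconds.
	go func() {
		time.Sleep(5 * time.Second)
		timeout <- true
	}()

	// select blocks until one of the channels delivers a value.
	select {
	case text := <-lines:
		fmt.Print("You typed: ", text)
	case <-timeout:
		fmt.Println("Too slow!")
	}
}
```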
https://en.wikipedia.org/wiki/Go_(programming_language)
Traffic analysisis the process of intercepting and examining messages in order to deduce information from patterns incommunication. It can be performed even when the messages areencrypted.[1]In general, the greater the number of messages observed, the greater information be inferred. Traffic analysis can be performed in the context ofmilitary intelligence,counter-intelligence, orpattern-of-life analysis, and is also a concern incomputer security. Traffic analysis tasks may be supported by dedicated computersoftwareprograms. Advanced traffic analysis techniques which may include various forms ofsocial network analysis. Traffic analysis has historically been a vital technique incryptanalysis, especially when the attempted crack depends on successfully seeding aknown-plaintext attack, which often requires an inspired guess based on how specific the operational context might likely influence what an adversary communicates, which may be sufficient to establish a short crib. Traffic analysis method can be used to break theanonymityof anonymous networks, e.g.,TORs.[1]There are two methods of traffic-analysis attack, passive and active. In a military context, traffic analysis is a basic part ofsignals intelligence, and can be a source of information about the intentions and actions of the target. Representative patterns include: There is a close relationship between traffic analysis andcryptanalysis(commonly calledcodebreaking).Callsignsand addresses are frequentlyencrypted, requiring assistance in identifying them. Traffic volume can often be a sign of an addressee's importance, giving hints to pending objectives or movements to cryptanalysts. Traffic-flow securityis the use of measures that conceal the presence and properties of valid messages on a network to prevent traffic analysis. This can be done by operational procedures or by the protection resulting from features inherent in some cryptographic equipment. Techniques used include: Traffic-flow security is one aspect ofcommunications security. TheCommunications' Metadata Intelligence, orCOMINT metadatais a term incommunications intelligence(COMINT) referring to the concept of producing intelligence by analyzing only the technicalmetadata, hence, is a great practical example for traffic analysis in intelligence.[2] While traditionally information gathering in COMINT is derived from intercepting transmissions, tapping the target's communications and monitoring the content of conversations, the metadata intelligence is not based on content but on technical communicational data. Non-content COMINT is usually used to deduce information about the user of a certain transmitter, such as locations, contacts, activity volume, routine and its exceptions. For example, if an emitter is known as the radio transmitter of a certain unit, and by usingdirection finding(DF) tools, the position of the emitter is locatable, the change of locations from one point to another can be deduced, without listening to any orders or reports. If one unit reports back to a command on a certain pattern, and another unit reports on the same pattern to the same command, the two units are probably related. That conclusion is based on themetadataof the two units' transmissions, not on the content of their transmissions. Using all or as much of the metadata available is commonly used to build up anElectronic Order of Battle(EOB) by mapping different entities in the battlefield and their connections. 
Of course, the EOB could be built by tapping all the conversations and trying to understand, which unit is where, but using the metadata with an automatic analysis tool enables a much faster and accurate EOB build-up, which, alongside tapping, builds a much better and complete picture. Traffic analysis is also a concern incomputer security. An attacker can gain important information by monitoring the frequency and timing of network packets. A timing attack on theSSHprotocol can use timing information to deduce information aboutpasswordssince, during interactive session, SSH transmits each keystroke as a message.[8]The time between keystroke messages can be studied usinghidden Markov models. Song,et al.claim that it can recover the password fifty times faster than abrute force attack. Onion routingsystems are used to gain anonymity. Traffic analysis can be used to attack anonymous communication systems like theTor anonymity network. Adam Back, Ulf Möeller and Anton Stiglic present traffic analysis attacks against anonymity providing systems.[9]Steven J. MurdochandGeorge Danezisfrom University of Cambridge presented[10]research showing that traffic-analysis allows adversaries to infer which nodes relay the anonymous streams. This reduces the anonymity provided by Tor. They have shown that otherwise unrelated streams can be linked back to the same initiator. Remailersystems can also be attacked via traffic analysis. If a message is observed going to a remailing server, and an identical-length (if now anonymized) message is seen exiting the server soon after, a traffic analyst may be able to (automatically) connect the sender with the ultimate receiver. Variations of remailer operations exist that can make traffic analysis less effective. Traffic analysis involves intercepting and scrutinizing cybersecurity threats to gather valuable insights about anonymous data flowing through theexit node. By using technique rooted indark webcrawling and specializing software, one can identify the specific characteristics of a client's network traffic within the dark web.[11] It is difficult to defeat traffic analysis without both encrypting messages and masking the channel. When no actual messages are being sent, the channel can bemasked[12]by sending dummy traffic, similar to the encrypted traffic, thereby keeping bandwidth usage constant.[13]"It is very hard to hide information about the size or timing of messages. The known solutions requireAliceto send a continuous stream of messages at the maximumbandwidthshe will ever use...This might be acceptable for military applications, but it is not for most civilian applications." The military-versus-civilian problems applies in situations where the user is charged for the volume of information sent. Even for Internet access, where there is not a per-packet charge,ISPsmake statistical assumption that connections from user sites will not be busy 100% of the time. The user cannot simply increase the bandwidth of the link, since masking would fill that as well. If masking, which often can be built into end-to-end encryptors, becomes common practice, ISPs will have to change their traffic assumptions.
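To make the keystroke-timing idea concrete, the sketch below is purely illustrative: the timestamps are invented, and real attacks feed such gaps into far richer statistical models, such as the hidden Markov models mentioned above. It only computes the inter-arrival times of packets, the raw feature a timing analysis would start from.

```go
package main

import "fmt"

// interArrival returns the gaps between successive packet timestamps (in milliseconds).
// In a keystroke-timing attack these gaps are the observations fed to a statistical model.
func interArrival(timestamps []float64) []float64 {
	gaps := make([]float64, 0, len(timestamps))
	for i := 1; i < len(timestamps); i++ {
		gaps = append(gaps, timestamps[i]-timestamps[i-1])
	}
	return gaps
}

func main() {
	// Hypothetical arrival times of per-keystroke SSH packets, in ms.
	packets := []float64{0, 142, 251, 430, 512, 745}
	fmt.Println(interArrival(packets)) // [142 109 179 82 233]
}
```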
https://en.wikipedia.org/wiki/Traffic_analysis
Risk managementis the identification, evaluation, and prioritization ofrisks,[1]followed by the minimization, monitoring, and control of the impact or probability of those risks occurring.[2]Risks can come from various sources (i.e.,threats) including uncertainty ininternational markets,political instability, dangers of project failures (at any phase in design, development, production, or sustaining of life-cycles),legal liabilities,credit risk,accidents,natural causes and disasters, deliberate attack from an adversary, or events of uncertain or unpredictableroot-cause.[3] There are two types of events: negative events, which can be classified as risks, and positive events, which are classified as opportunities. Risk managementstandardshave been developed by various institutions, including theProject Management Institute, theNational Institute of Standards and Technology,actuarialsocieties, andInternational Organization for Standardization.[4][5][6]Methods, definitions and goals vary widely according to whether the risk management method is in the context ofproject management,security,engineering,industrial processes,financial portfolios,actuarial assessments, orpublic healthandsafety. Certain risk management standards have been criticized for producing no measurable improvement in risk, even though confidence in estimates and decisions seems to increase.[2] Strategies to manage threats (uncertainties with negative consequences) typically include avoiding the threat, reducing the negative effect or probability of the threat, transferring all or part of the threat to another party, and even retaining some or all of the potential or actual consequences of a particular threat. The opposite of these strategies can be used to respond to opportunities (uncertain future states with benefits).[7] As aprofessional role, arisk manager[8]will "oversee the organization's comprehensive insurance and risk management program, assessing and identifying risks that could impede the reputation, safety, security, or financial success of the organization", and then develop plans to minimize and/or mitigate any negative (financial) outcomes. Risk Analysts[9]support the technical side of the organization's risk management approach: once risk data has been compiled and evaluated, analysts share their findings with their managers, who use those insights to decide among possible solutions. See alsoChief Risk Officer,internal audit, andFinancial risk management § Corporate finance. Risk is defined as the possibility that an event will occur that adversely affects the achievement of an objective. Uncertainty, therefore, is a key aspect of risk.[10]Risk management has appeared in scientific and management literature since the 1920s.[11]It became a formal science in the 1950s, when articles and books with "risk management" in the title also began to appear in library searches.[12]Most research was initially related to finance and insurance.[13][14]One popular standard clarifying the vocabulary used in risk management isISO Guide 31073:2022, "Risk management — Vocabulary".[4] Ideally in risk management, a prioritization process is followed,[15]whereby the risks with the greatest loss (or impact) and the greatestprobabilityof occurring are handled first, and risks with lower probability of occurrence and lower loss are handled in descending order. 
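The prioritization just described can be illustrated with a small sketch that ranks a risk register by expected loss (probability of occurrence multiplied by impact); the entries and field names are hypothetical, invented only for illustration.

# Hypothetical risk register: estimated probability of occurring within the
# planning period and estimated financial impact if the risk materializes.
risks = [
    {"name": "supplier insolvency", "probability": 0.10, "impact": 500_000},
    {"name": "data breach", "probability": 0.05, "impact": 2_000_000},
    {"name": "key staff departure", "probability": 0.30, "impact": 80_000},
]

def expected_loss(risk):
    """Risk magnitude as probability of occurrence multiplied by impact."""
    return risk["probability"] * risk["impact"]

# Handle the risks with the greatest expected loss first, then in descending order.
for risk in sorted(risks, key=expected_loss, reverse=True):
    print(f"{risk['name']:<22} expected loss = {expected_loss(risk):>12,.0f}")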
In practice the process of assessing overall risk can be tricky, and an organisation has to balance the resources used to mitigate risks with a higher probability but lower loss against those used for risks with a higher loss but lower probability.Opportunity costrepresents a unique challenge for risk managers. It can be difficult to determine when to put resources toward risk management and when to use those resources elsewhere. Again, ideal risk management optimises resource usage (spending, manpower, etc.) while also minimizing the negative effects of risks. Opportunities first appeared in academic research and management books in the 1990s. The first PMBoKProject Management Body of Knowledgedraft of 1987 doesn't mention opportunities at all. Modern project management schools recognize the importance of opportunities. Opportunities have been included in project management literature since the 1990s, e.g. in PMBoK, and became a significant part of project risk management in the 2000s,[16]when articles titled "opportunity management" also began to appear in library searches.Opportunity managementthus became an important part of risk management. Modern risk management theory deals with any type of external event, positive and negative. Positive risks are calledopportunities. Similarly to risks, opportunities have specific mitigation strategies: exploit, share, enhance, ignore. In practice, risks are considered "usually negative". Risk-related research and practice focus significantly more on threats than on opportunities. This can lead to negative phenomena such astarget fixation.[17] For the most part, these methods consist of the following elements, performed, more or less, in the following order: The Risk managementknowledge area, as defined by theProject Management Body of KnowledgePMBoK, consists of the following processes: TheInternational Organization for Standardization(ISO) identifies the following principles for risk management:[5] Benoit Mandelbrotdistinguished between "mild" and "wild" risk and argued that risk assessment and management must be fundamentally different for the two types of risk.[19]Mild risk followsnormalor near-normalprobability distributions, is subject toregression to the meanand thelaw of large numbers, and is therefore relatively predictable. Wild risk followsfat-tailed distributions, e.g.,Paretoorpower-law distributions, is subject to regression to the tail (infinite mean or variance, rendering the law of large numbers invalid or ineffective), and is therefore difficult or impossible to predict. A common error in risk assessment and management is to underestimate the wildness of risk, assuming risk to be mild when in fact it is wild; according to Mandelbrot, this error must be avoided if risk assessment and management are to be valid and reliable. According to the standardISO 31000, "Risk management – Guidelines", the process of risk management consists of several steps as follows:[5] This involves: After establishing the context, the next step in the process of managing risk is to identify potential risks. Risks are about events that, when triggered, cause problems or benefits. Hence, risk identification can start with the source of problems and those of competitors (benefit), or with the problem's consequences. Some examples of risk sources are: stakeholders of a project, employees of a company or the weather over an airport. When either source or problem is known, the events that a source may trigger or the events that can lead to a problem can be investigated. 
For example: stakeholders withdrawing during a project may endanger funding of the project; confidential information may be stolen by employees even within a closed network; lightning striking an aircraft during takeoff may make all people on board immediate casualties. The chosen method of identifying risks may depend on culture, industry practice and compliance. The identification methods are formed by templates or the development of templates for identifying source, problem or event. Common risk identification methods are: Once risks have been identified, they must then be assessed as to their potential severity of impact (generally a negative impact, such as damage or loss) and to the probability of occurrence.[25]These quantities can be either simple to measure, in the case of the value of a lost building, or impossible to know for sure in the case of an unlikely event whose probability of occurrence is unknown. Therefore, in the assessment process it is critical to make the best educated decisions in order to properly prioritize the implementation of therisk management plan. Even a short-term positive improvement can have long-term negative impacts. Take the "turnpike" example. A highway is widened to allow more traffic. More traffic capacity leads to greater development in the areas surrounding the improved traffic capacity. Over time, traffic thereby increases to fill available capacity. Turnpikes thereby need to be expanded in a seemingly endless cycle. There are many other engineering examples where expanded capacity (to do any function) is soon filled by increased demand. Since expansion comes at a cost, the resulting growth could become unsustainable without forecasting and management. The fundamental difficulty in risk assessment is determining the rate of occurrence, since statistical information is not available on all kinds of past incidents and is particularly scanty in the case of catastrophic events, simply because of their infrequency. Furthermore, evaluating the severity of the consequences (impact) is often quite difficult for intangible assets. Asset valuation is another question that needs to be addressed. Thus, best educated opinions and available statistics are the primary sources of information. Nevertheless, risk assessment should provide the organization's senior executives with information such that the primary risks are easy to understand and risk management decisions can be prioritized within overall company goals. Thus, there have been several theories and attempts to quantify risks. Numerous different risk formulae exist, but perhaps the most widely accepted formula for risk quantification is: "Rate (or probability) of occurrence multiplied by the impact of the event equals risk magnitude."[vague] Risk mitigation measures are usually formulated according to one or more of the following major risk options, which are: Later research[26]has shown that the financial benefits of risk management depend less on the formula used than on how often and how well risk assessment is performed. In business it is imperative to be able to present the findings of risk assessments in financial, market, or schedule terms. Robert Courtney Jr. (IBM, 1970) proposed a formula for presenting risks in financial terms. The Courtney formula was accepted as the official risk analysis method for US governmental agencies. 
The formula proposes calculation of ALE (annualized loss expectancy) and compares the expected loss value to the security control implementation costs (cost–benefit analysis). Planning for risk management uses four essential techniques. Under the acceptance technique, the business intentionally assumes risks without financial protections in the hopes that possible gains will exceed prospective losses. The transfer approach shields the business from losses by shifting risks to a third party, frequently in exchange for a fee, while the third-party benefits from the project. By choosing not to participate in high-risk ventures, the avoidance strategy avoids losses but also loses out on possibilities. Last but not least, the reduction approach lowers risks by implementing strategies like insurance, which provides protection for a variety of asset classes and guarantees reimbursement in the event of losses.[27] Once risks have been identified and assessed, all techniques to manage the risk fall into one or more of these four major categories:[28] Ideal use of theserisk control strategiesmay not be possible. Some of them may involve trade-offs that are not acceptable to the organization or person making the risk management decisions. Another source, from the US Department of Defense (see link),Defense Acquisition University, calls these categories ACAT, for Avoid, Control, Accept, or Transfer. This use of the ACAT acronym is reminiscent of another ACAT (for Acquisition Category) used in US Defense industry procurements, in which Risk Management figures prominently in decision making and planning. Similarly to risks, opportunities have specific mitigation strategies: exploit, share, enhance, ignore. This includes not performing an activity that could present risk. Refusing to purchase apropertyor business to avoidlegal liabilityis one such example. Avoidingairplaneflights for fear ofhijacking. Avoidance may seem like the answer to all risks, but avoiding risks also means losing out on the potential gain that accepting (retaining) the risk may have allowed. Not entering a business to avoid the risk of loss also avoids the possibility of earning profits. Increasing risk regulation in hospitals has led to avoidance of treating higher risk conditions, in favor of patients presenting with lower risk.[29] Risk reduction or "optimization" involves reducing the severity of the loss or the likelihood of the loss from occurring. For example,sprinklersare designed to put out afireto reduce the risk of loss by fire. This method may cause a greater loss by water damage and therefore may not be suitable.Halonfire suppression systems may mitigate that risk, but the cost may be prohibitive as astrategy. Acknowledging that risks can be positive or negative, optimizing risks means finding a balance between negative risk and the benefit of the operation or activity; and between risk reduction and effort applied. By effectively applyingHealth, Safety and Environment(HSE) management standards, organizations can achieve tolerable levels ofresidual risk.[30] Modern software development methodologies reduce risk by developing and delivering software incrementally. Early methodologies suffered from the fact that they only delivered software in the final phase of development; any problems encountered in earlier phases meant costly rework and often jeopardized the whole project. By developing in iterations, software projects can limit effort wasted to a single iteration. 
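A minimal sketch of the cost-benefit comparison described earlier in this section: the annualized loss expectancy (ALE) before and after a control is compared with the annual cost of that control. All rates and figures are hypothetical.

def ale(annual_rate_of_occurrence, single_loss_expectancy):
    """Annualized loss expectancy: expected events per year times loss per event."""
    return annual_rate_of_occurrence * single_loss_expectancy

# Hypothetical scenario: malware incidents before and after deploying a control.
ale_before = ale(annual_rate_of_occurrence=4.0, single_loss_expectancy=25_000)
ale_after = ale(annual_rate_of_occurrence=0.5, single_loss_expectancy=25_000)
control_cost_per_year = 30_000

net_benefit = (ale_before - ale_after) - control_cost_per_year
print(f"ALE before: {ale_before:,.0f}  ALE after: {ale_after:,.0f}")
print(f"Net annual benefit of the control: {net_benefit:,.0f}")
# A positive net benefit suggests the control is worth implementing; a negative
# one suggests accepting (retaining) or transferring the risk instead.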
Outsourcingcould be an example of a risk sharing strategy if the outsourcer can demonstrate higher capability at managing or reducing risks.[31]For example, a company may outsource only its software development, the manufacturing of hard goods, or customer support needs to another company, while handling the business management itself. This way, the company can concentrate more on business development without having to worry as much about the manufacturing process, managing the development team, or finding a physical location for a center. Implementing controls can also be an option for reducing risk: controls that either detect the causes of unwanted events before the consequences occur during use of the product, or that detect the root causes of unwanted failures that the team can then avoid. Controls may focus on management or decision-making processes. All these may help to make better decisions concerning risk.[32] Risk sharing is briefly defined as "sharing with another party the burden of loss or the benefit of gain, from a risk, and the measures to reduce a risk." The term 'risk transfer' is often used in place of risk-sharing in the mistaken belief that you can transfer a risk to a third party through insurance or outsourcing. In practice, if the insurance company or contractor goes bankrupt or ends up in court, the original risk is likely to still revert to the first party. As such, in the terminology of practitioners and scholars alike, the purchase of an insurance contract is often described as a "transfer of risk." However, technically speaking, the buyer of the contract generally retains legal responsibility for the losses "transferred", meaning that insurance may be described more accurately as a post-event compensatory mechanism. For example, a personal injuries insurance policy does not transfer the risk of a car accident to the insurance company. The risk still lies with the policyholder, namely the person who has been in the accident. The insurance policy simply provides that if an accident (the event) occurs involving the policyholder then some compensation may be payable to the policyholder that is commensurate with the suffering/damage. Methods of managing risk fall into multiple categories. Risk-retention pools are technically retaining the risk for the group, but spreading it over the whole group involves transfer among individual members of the group. This is different from traditional insurance, in that no premium is exchanged between members of the group upfront, but instead, losses are assessed to all members of the group. Risk retention involves accepting the loss, or benefit of gain, from a risk when the incident occurs. Trueself-insurancefalls in this category. Risk retention is a viable strategy for small risks where the cost of insuring against the risk would be greater over time than the total losses sustained. All risks that are not avoided or transferred are retained by default. This includes risks that are so large or catastrophic that either they cannot be insured against or the premiums would be infeasible.Waris an example, since most property and risks are not insured against war, so the loss attributed to war is retained by the insured. Any amount of potential loss (risk) over the amount insured is also retained risk. This may also be acceptable if the chance of a very large loss is small or if the cost to insure for greater coverage amounts is so great that it would hinder the goals of the organization too much. 
Select appropriate controls or countermeasures to mitigate each risk. Risk mitigation needs to be approved by the appropriate level of management. For instance, a risk concerning the image of the organization should have a top management decision behind it, whereas IT management would have the authority to decide on computer virus risks. The risk management plan should propose applicable and effective security controls for managing the risks. For example, an observed high risk of computer viruses could be mitigated by acquiring and implementing antivirus software. A good risk management plan should contain a schedule for control implementation and responsible persons for those actions. There are four basic steps of a risk management plan, which are threat assessment, vulnerability assessment, impact assessment and risk mitigation strategy development.[33] According toISO/IEC 27001, the stage immediately after completion of therisk assessmentphase consists of preparing a Risk Treatment Plan, which should document the decisions about how each of the identified risks should be handled. Mitigation of risks often means selection ofsecurity controls, which should be documented in a Statement of Applicability, which identifies which particular control objectives and controls from the standard have been selected, and why. Implementation follows all of the planned methods for mitigating the effect of the risks: purchase insurance policies for the risks that it has been decided to transfer to an insurer, avoid all risks that can be avoided without sacrificing the entity's goals, reduce others, and retain the rest. Initial risk management plans will never be perfect. Practice, experience, and actual loss results will necessitate changes in the plan and contribute information to allow possible different decisions to be made in dealing with the risks being faced. Risk analysisresults and management plans should be updated periodically. There are two primary reasons for this: Enterprise risk management (ERM) defines risk as those possible events or circumstances that can have negative influences on theenterprisein question, where the impact can be on the very existence, the resources (human and capital), the products and services, or the customers of the enterprise, as well as external impacts on society, markets, or the environment. There arevarious defined frameworkshere, where every probable risk can have a pre-formulated plan to deal with its possible consequences (to ensurecontingencyif the risk becomes aliability). Managers thus analyze and monitor both the internal and external environment facing the enterprise, addressingbusiness riskgenerally, and any impact on the enterprise achieving itsstrategic goals. ERM thus overlaps various other disciplines -operational risk management,financial risk managementetc. - but is differentiated by its strategic and long-term focus.[34]ERM systems usually focus on safeguarding reputation, acknowledging its significant role in comprehensive risk management strategies.[35] As applied tofinance, risk management concerns the techniques and practices for measuring, monitoring and controlling themarket-andcredit risk(andoperational risk) on a firm'sbalance sheet, due to a bank's credit and trading exposure, or regarding afund manager's portfolio value; for an overview seeFinance § Risk management. The concept of "contractual risk management" emphasises the use of risk management techniques in contract deployment, i.e. 
managing the risks which are accepted through entry into a contract. Norwegian academic Petri Keskitalo defines "contractual risk management" as "a practical, proactive and systematical contracting method that uses contract planning and governance to manage risks connected to business activities".[36]In an article by Samuel Greengard published in 2010, two US legal cases are mentioned which emphasise the importance of having a strategy for dealing with risk:[37] Greengard recommends using industry-standard contract language as much as possible to reduce risk as much as possible and rely on clauses which have been in use and subject to established court interpretation over a number of years.[37] Customs risk management is concerned with the risks which arise within the context ofinternational tradeand have a bearing on safety and security, including the risk thatillicit drugsandcounterfeit goodscan pass across borders and the risk that shipments and their contents are incorrectly declared.[40]TheEuropean Unionhas adopted a Customs Risk Management Framework (CRMF) applicable across the union and throughout itsmember states, whose aims include establishing a common level of customs control protection and a balance between the objectives of safe customs control and the facilitation of legitimate trade.[41]Two events which prompted theEuropean Commissionto review customs risk management policy in 2012-13 were theSeptember 11 attacksof 2001 and the2010 transatlantic aircraft bomb plotinvolving packages being sent fromYemento theUnited States, referred to by the Commission as "the October 2010 (Yemen) incident".[42] ESRM is a security program management approach that links security activities to an enterprise's mission and business goals through risk management methods. The security leader's role in ESRM is to manage risks of harm to enterprise assets in partnership with the business leaders whose assets are exposed to those risks. ESRM involves educating business leaders on the realistic impacts of identified risks, presenting potential strategies to mitigate those impacts, then enacting the option chosen by the business in line with accepted levels of business risk tolerance[43] Formedical devices, risk management is a process for identifying, evaluating and mitigating risks associated with harm to people and damage to property or the environment. Risk management is an integral part of medical device design and development, production processes and evaluation of field experience, and is applicable to all types of medical devices. The evidence of its application is required by most regulatory bodies such as theUS FDA. The management of risks for medical devices is described by the International Organization for Standardization (ISO) inISO 14971:2019, Medical Devices—The application of risk management to medical devices, a product safety standard. The standard provides a process framework and associated requirements for management responsibilities, risk analysis and evaluation, risk controls and lifecycle risk management. Guidance on the application of the standard is available via ISO/TR 24971:2020. The European version of the risk management standard was updated in 2009 and again in 2012 to refer to the Medical Devices Directive (MDD) and Active Implantable Medical Device Directive (AIMDD) revision in 2007, as well as the In Vitro Medical Device Directive (IVDD). The requirements of EN 14971:2012 are nearly identical to ISO 14971:2007. 
The differences include three "(informative)" Z Annexes that refer to the new MDD, AIMDD, and IVDD. These annexes indicate content deviations that include the requirement for risks to be reducedas far as possible, and the requirement that risks be mitigated by design and not by labeling on the medical device (i.e., labeling can no longer be used to mitigate risk). Typical risk analysis and evaluation techniques adopted by the medical device industry includehazard analysis,fault tree analysis(FTA),failure mode and effects analysis(FMEA), hazard and operability study (HAZOP), and risk traceability analysis for ensuring risk controls are implemented and effective (i.e. tracking risks identified to product requirements, design specifications, verification and validation results etc.). FTA analysis requires diagramming software. FMEA analysis can be done using aspreadsheetprogram. There are also integrated medical device risk management solutions. Through adraft guidance, the FDA has introduced another method named "Safety Assurance Case" for medical device safety assurance analysis. The safety assurance case is structured argument reasoning about systems appropriate for scientists and engineers, supported by a body of evidence, that provides a compelling, comprehensible and valid case that a system is safe for a given application in a given environment. With the guidance, a safety assurance case is expected for safety critical devices (e.g. infusion devices) as part of the pre-market clearance submission, e.g. 510(k). In 2013, the FDA introduced another draft guidance expecting medical device manufacturers to submit cybersecurity risk analysis information. Project risk management must be considered at the different phases of acquisition. At the beginning of a project, the advancement of technical developments, or threats presented by a competitor's projects, may cause a risk or threat assessment and subsequent evaluation of alternatives (seeAnalysis of Alternatives). Once a decision is made, and the project begun, more familiar project management applications can be used:[44][45][46] Megaprojects(sometimes also called "major programs") are large-scale investment projects, typically costing more than $1 billion per project. Megaprojects include major bridges, tunnels, highways, railways, airports, seaports, power plants, dams, wastewater projects, coastal flood protection schemes, oil and natural gas extraction projects, public buildings, information technology systems, aerospace projects, and defense systems. Megaprojects have been shown to be particularly risky in terms of finance, safety, and social and environmental impacts. Risk management is therefore particularly pertinent for megaprojects and special methods and special education have been developed for such risk management.[47] It is important to assess risk in regard to natural disasters likefloods,earthquakes, and so on. Outcomes of natural disaster risk assessment are valuable when considering future repair costs, business interruption losses and other downtime, effects on the environment, insurance costs, and the proposed costs of reducing the risk.[48][49]TheSendai Framework for Disaster Risk Reductionis a 2015 international accord that has set goals and targets fordisaster risk reductionin response to natural disasters.[50]There are regularInternational Disaster and Risk ConferencesinDavosto deal with integral risk management. 
Several tools can be used to assess risk and support the risk management of natural disasters and other climate events, including geospatial modeling, a key component ofland change science. This modeling requires an understanding of geographic distributions of people as well as an ability to calculate the likelihood of a natural disaster occurring. The management of risks to persons and property inwildernessand remote natural areas has developed with increases in outdoor recreation participation and decreased social tolerance for loss. Organizations providing commercial wilderness experiences can now align with national and international consensus standards for training and equipment such asANSI/NASBLA 101-2017 (boating),[51]UIAA152 (ice climbing tools),[52]andEuropean Norm13089:2015 + A1:2015 (mountaineering equipment).[53][54]TheAssociation for Experiential Educationoffers accreditation for wilderness adventure programs.[55]TheWilderness Risk Management Conferenceprovides access to best practices, and specialist organizations provide wilderness risk management consulting and training.[56] The text Outdoor Safety – Risk Management for Outdoor Leaders,[57]published by the New Zealand Mountain Safety Council, provides a view of wilderness risk management from the New Zealand perspective, recognizing the value of national outdoor safety legislation and devoting considerable attention to the roles of judgment and decision-making processes in wilderness risk management. One popular model for risk assessment is the Risk Assessment and Safety Management (RASM) Model developed by Rick Curtis, author of The Backpacker's Field Manual.[58]The formula for the RASM Model is: Risk = Probability of Accident × Severity of Consequences. The RASM Model weighs negative risk (the potential for loss) against positive risk (the potential for growth). IT riskis a risk related to information technology. This is a relatively new term due to an increasing awareness thatinformation securityis simply one facet of a multitude of risks that are relevant to IT and the real world processes it supports. "Cybersecurity is tied closely to the advancement of technology. It lags only long enough for incentives like black markets to evolve and new exploits to be discovered. There is no end in sight for the advancement of technology, so we can expect the same from cybersecurity."[59] ISACA'sRisk ITframework ties IT risk to enterprise risk management. Duty of Care Risk Analysis (DoCRA) evaluates risks and their safeguards and considers the interests of all parties potentially affected by those risks.[60]TheVerizon Data Breach Investigations Report (DBIR)shows how organizations can leverage the Veris Community Database (VCDB) to estimate risk. Using the HALOCKmethodologywithin CIS RAM and data from VCDB, professionals can determine threat likelihood for their industries. IT risk management includes "incident handling", an action plan for dealing with intrusions, cyber-theft, denial of service, fire, floods, and other security-related events. According to theSANS Institute, it is a six-step process: Preparation, Identification, Containment, Eradication, Recovery, and Lessons Learned.[61] Operational risk management (ORM) is the oversight ofoperational risk, including the risk of loss resulting from: inadequate or failed internal processes and systems; human factors; or external events. 
Given thenature of operations, ORM is typically a "continual" process, and will include ongoing risk assessment, risk decision making, and the implementation of risk controls. For the offshore oil and gas industry, operational risk management is regulated by thesafety caseregime in many countries. Hazard identification and risk assessment tools and techniques are described in the international standard ISO 17776:2000, and organisations such as the IADC (International Association of Drilling Contractors) publish guidelines forHealth, Safety and Environment(HSE) Case development which are based on the ISO standard. Further, diagrammatic representations of hazardous events are often expected by governmental regulators as part of risk management in safety case submissions; these are known asbow-tie diagrams(seeNetwork theory in risk assessment). The technique is also used by organisations and regulators in mining, aviation, health, defence, industrial and finance. The principles and tools for quality risk management are increasingly being applied to different aspects of pharmaceutical quality systems. These aspects include development, manufacturing, distribution, inspection, and submission/review processes throughout the lifecycle of drug substances, drug products, biological and biotechnological products (including the use of raw materials, solvents, excipients, packaging and labeling materials in drug products, biological and biotechnological products). Risk management is also applied to the assessment of microbiological contamination in relation to pharmaceutical products and cleanroom manufacturing environments.[62] Supply chain risk management (SCRM) aims at maintainingsupply chaincontinuity in the event of scenarios or incidents which could interrupt normal business and hence profitability. Risks to the supply chain range from everyday to exceptional, including unpredictable natural events (such astsunamisandpandemics) to counterfeit products, and reach across quality, security, to resiliency and product integrity. Mitigation of these risks can involve various elements of the business includinglogisticsand cybersecurity, as well as the areas of finance and operations. Travel risk management is concerned with how organisations assess the risks to theirstaff when travelling, especially when travelling overseas. In the field ofinternational standards, ISO 31030:2021 addresses good practice in travel risk management.[63] The Global Business Travel Association's education and research arm, the GBTA Foundation. found in 2015 that most businesses covered by their research employed travel risk management protocols aimed at ensuring the safety and well-being of their business travelers.[64]Six key principles of travel risk awareness put forward by the association are preparation, awareness of surroundings and people, keeping a low profile, adopting an unpredictable routine, communications and layers of protection.[65]Traveler tracking using mobile tracking and messaging technologies had by 2015 become a widely used aspect of travel risk management.[64] Risk communicationis a complex cross-disciplinary academic field that is part of risk management and related to fields likecrisis communication. 
The goal is to make sure that targeted audiences understand how risks affect them or their communities by appealing to their values.[66][67] Risk communication is particularly important indisaster preparedness,[68]public health,[69]and preparation for majorglobal catastrophic risk.[68]For example, theimpacts of climate changeandclimate riskaffect every part of society, so communicating that risk is an importantclimate communicationpractice, in order for societies to plan forclimate adaptation.[70]Similarly, inpandemic prevention,understanding of riskhelps communities stop the spread of disease and improve responses.[71] Risk communication deals with possible risks and aims to raise awareness of those risks to encourage or persuade changes in behavior to relieve threats in the long term. On the other hand, crisis communication is aimed at raising awareness of a specific type of threat, its magnitude, its outcomes, and the specific behaviors to adopt to reduce the threat.[72] Risk communication infood safetyis part of therisk analysis framework. Together with risk assessment and risk management, risk communication aims to reducefoodborne illnesses. Food safety risk communication is an obligatory activity for food safety authorities[73]in countries that have adopted theAgreement on the Application of Sanitary and Phytosanitary Measures.
https://en.wikipedia.org/wiki/Risk_management
Thefriendship paradoxis the phenomenon first observed by the sociologistScott L. Feldin 1991 that on average, an individual's friends have more friends than that individual.[1]It can be explained as a form ofsampling biasin which people with more friends are more likely to be in one's own friend group. In other words, one is less likely to be friends with someone who has very few friends. In contradiction to this, most people believe that they have more friends than their friends have.[2][3][4][5] The same observation can be applied more generally tosocial networksdefined by other relations than friendship: for instance, most people's sexual partners have had (on the average) a greater number of sexual partners than they have.[6][7] The friendship paradox is an example of how network structure can significantly distort an individual's local observations.[8][9] In spite of its apparentlyparadoxicalnature, the phenomenon is real, and can be explained as a consequence of the general mathematical properties ofsocial networks. The mathematics behind this is directly related to thearithmetic-geometric mean inequalityand theCauchy–Schwarz inequality.[10] Formally, Feld assumes that a social network is represented by anundirected graphG= (V,E), where the setVofverticescorresponds to the people in the social network, and the setEofedgescorresponds to the friendship relation between pairs of people. That is, he assumes that friendship is a symmetric relation: ifxis a friend ofy, thenyis a friend ofx. The friendship betweenxandyis therefore modeled by the edge{x,y},and the number of friends an individual has corresponds to a vertex'sdegree. The average number of friends of a person in the social network is therefore given by the average of the degrees of theverticesin the graph. That is, if vertexvhasd(v)edges touching it (representing a person who hasd(v)friends), then the average numberμof friends of a random person in the graph is The average number of friends that a typical friend has can be modeled by choosing a random person (who has at least one friend), and then calculating how many friends their friends have on average. This amounts to choosing, uniformly at random, an edge of the graph (representing a pair of friends) and an endpoint of that edge (one of the friends), and again calculating the degree of the selected endpoint. The probability that a certain vertexv{\displaystyle v}is chosen is The first factor corresponds to how likely it is that the chosen edge contains the vertex, which increases when the vertex has more friends. The halving factor simply comes from the fact that each edge has two vertices. So the expected value of the number of friends of a (randomly chosen) friend is We know from the definition of variance that whereσ2{\displaystyle \sigma ^{2}}is the variance of the degrees in the graph. This allows us to compute the desired expected value as For a graph that has vertices of varying degrees (as is typical for social networks),σ2{\displaystyle {\sigma }^{2}}is strictly positive, which implies that the average degree of a friend is strictly greater than the average degree of a random node. Another way of understanding how the first term arises is as follows. For each friendship(u, v), a nodeumentions thatvis a friend andvhasd(v)friends. There ared(v)such friends who mention this. Hence the squaredd(v)term. We add this for all such friendships in the network from both theu's andv's perspective, which gives the numerator. 
The denominator is the total number of such friendships, which is twice the total number of edges in the network (one from theu's perspective and the other from thev's). After this analysis, Feld goes on to make some more qualitative assumptions about the statistical correlation between the number of friends that two friends have, based on theories of social networks such asassortative mixing, and he analyzes what these assumptions imply about the number of people whose friends have more friends than they do. Based on this analysis, he concludes that in real social networks, most people are likely to have fewer friends than the average of their friends' numbers of friends. However, this conclusion is not a mathematical certainty; there exist undirected graphs (such as the graph formed by removing a single edge from a largecomplete graph) that are unlikely to arise as social networks but in which most vertices have higher degree than the average of their neighbors' degrees. The Friendship Paradox may be restated ingraph theoryterms as "the average degree of a randomly selected node in a network is less than the average degree of neighbors of a randomly selected node", but this leaves unspecified the exact mechanism of averaging (i.e., macro vs micro averaging). LetG=(V,E){\displaystyle G=(V,E)}be an undirected graph with|V|=N{\displaystyle |V|=N}and|E|=M{\displaystyle |E|=M}, having no isolated nodes. Let the set of neighbors of nodeu{\displaystyle u}be denotednbr⁡(u){\displaystyle \operatorname {nbr} (u)}. The average degree is thenμ=1N∑u∈V|nbr⁡(u)|=2MN≥1{\displaystyle \mu ={\frac {1}{N}}\sum _{u\in V}|\operatorname {nbr} (u)|={\frac {2M}{N}}\geq 1}. Let the number of "friends of friends" of nodeu{\displaystyle u}be denotedFF⁡(u)=∑v∈nbr⁡(u)|nbr⁡(v)|{\displaystyle \operatorname {FF} (u)=\sum _{v\in \operatorname {nbr} (u)}|\operatorname {nbr} (v)|}. Note that this can count 2-hop neighbors multiple times, but so does Feld's analysis. We haveFF⁡(u)≥|nbr⁡(u)|≥1{\displaystyle \operatorname {FF} (u)\geq |\operatorname {nbr} (u)|\geq 1}. Feld considered the following "micro average" quantity. However, there is also the (equally legitimate) "macro average" quantity, given by The computation of MacroAvg can be expressed as the following pseudocode. Each edge{u,v}{\displaystyle \{u,v\}}contributes to MacroAvg the quantity|nbr⁡(v)||nbr⁡(u)|+|nbr⁡(u)||nbr⁡(v)|≥2{\displaystyle {\frac {|\operatorname {nbr} (v)|}{|\operatorname {nbr} (u)|}}+{\frac {|\operatorname {nbr} (u)|}{|\operatorname {nbr} (v)|}}\geq 2}, becausemina,b>0ab+ba=2{\displaystyle \min _{a,b>0}{\frac {a}{b}}+{\frac {b}{a}}=2}. We thus get Thus, we have bothMicroAvg≥μ{\displaystyle {\text{MicroAvg}}\geq \mu }andMacroAvg≥μ{\displaystyle {\text{MacroAvg}}\geq \mu }, but no inequality holds between them.[11] In a 2023 paper, a parallel paradox, but for negative, antagonistic, or animosity ties, termed the "enmity paradox," was defined and demonstrated by Ghasemian andChristakis.[12]In brief, one's enemies have more enemies than one does, too. This paper also documented diverse phenomena in "mixed worlds" of both hostile and friendly ties. The analysis of the friendship paradox implies that the friends of randomly selected individuals are likely to have higher than averagecentrality. 
This observation has been used as a way to forecast and slow the course ofepidemics, by using this random selection process to choose individuals to immunize or monitor for infection while avoiding the need for a complex computation of the centrality of all nodes in the network.[13][14][15]In a similar manner, in polling and election forecasting, the friendship paradox has been exploited in order to reach and query well-connected individuals who may have knowledge about how numerous other individuals are going to vote.[16]However, when utilized in such contexts, the friendship paradox inevitably introduces bias by over-representing individuals with many friends, potentially skewing resulting estimates.[17][18] A study in 2010 by Christakis and Fowler showed that flu outbreaks can be detected almost two weeks before traditional surveillance measures would do so by using the friendship paradox in monitoring the infection in a social network.[19]They found that using the friendship paradox to analyze the health ofcentralfriends is "an ideal way to predict outbreaks, but detailed information doesn't exist for most groups, and to produce it would be time-consuming and costly."[20]This extends to the spread of ideas as well, with evidence that the friendship paradox can be used to track and predict the spread of ideas and misinformation through networks.[21][13][22]This observation has been explained with the argument that individuals with more social connections may be the driving forces behind the spread of these ideas and beliefs, and as such can be used as early-warning signals.[18] Friendship paradox based sampling (i.e., sampling random friends) has been theoretically and empirically shown to outperform classical uniform sampling for the purpose of estimating thepower-law degree distributionsofscale-free networks.[23][24]The reason is that sampling the network uniformly will not collect enough samples from the characteristicheavy tailpart of the power-law degree distribution to properly estimate it. However, sampling random friends incorporates more nodes from the tail of the degree distribution (i.e., more high degree nodes) into the sample. Hence, friendship paradox based sampling captures the characteristic heavy tail of a power-law degree distribution more accurately and reduces the bias and variance of the estimation.[24] The "generalized friendship paradox" states that the friendship paradox applies to other characteristics as well. For example, one's co-authors are on average likely to be more prominent, with more publications, more citations and more collaborators,[25][26][27]or one's followers on Twitter have more followers.[28]The same effect has also been demonstrated for Subjective Well-Being by Bollen et al. (2017),[29]who used a large-scale Twitter network and longitudinal data on subjective well-being for each individual in the network to demonstrate that both a Friendship and a "happiness" paradox can occur in online social networks. The friendship paradox has also been used as a means to identify structurally influential nodes within social networks, so as to magnifysocial contagionof diverse practices relevant to human welfare and public health. 
This has been shown to be possible in several large-scale randomized controlled field trials conducted byChristakiset al., with respect to the adoption of multivitamins[30]or maternal and child health practices[31][32]in Honduras, or of iron-fortified salt in India.[33]This technique is valuable because, by exploiting the friendship paradox, one can identify such influential nodes without the expense and delay of actually mapping the whole network.
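A small numerical sketch of the quantities and the sampling idea discussed above, computed on a hypothetical graph: the average degree μ, the average degree of a randomly chosen friend (which the derivation above shows equals μ + σ²/μ), the per-person average suggested by the per-edge bound, and a comparison of uniform node sampling with "random friend of a random person" sampling. The graph, random seed, and sample sizes are arbitrary choices.

import random
from statistics import mean, pvariance

random.seed(1)

# Hypothetical undirected graph as a symmetric adjacency list: node 0 is a hub.
graph = {i: [] for i in range(30)}
for u, v in {(i, (i + 1) % 30) for i in range(30)} | {(0, j) for j in range(2, 15)}:
    graph[u].append(v)
    graph[v].append(u)

deg = {u: len(nbrs) for u, nbrs in graph.items()}
ff = {u: sum(deg[v] for v in nbrs) for u, nbrs in graph.items()}  # friends of friends

mu = mean(deg.values())                           # average degree, 2M/N
sigma2 = pvariance(deg.values())                  # variance of the degrees

# Average degree of a randomly chosen friend (edge-endpoint sampling),
# which equals sum(d^2)/sum(d) = mu + sigma^2/mu:
friend_avg = sum(ff.values()) / sum(deg.values())
assert abs(friend_avg - (mu + sigma2 / mu)) < 1e-9

# Per-person average of "the mean number of friends my friends have":
per_person_avg = mean(ff[u] / deg[u] for u in graph)

# Friendship-paradox sampling: pick a random person, then a random friend of theirs.
def random_friend(g):
    return random.choice(g[random.choice(list(g))])

uniform_sample = [random.choice(list(graph)) for _ in range(2000)]
friend_sample = [random_friend(graph) for _ in range(2000)]

print(f"mu = {mu:.2f}, friend average = {friend_avg:.2f}, per-person average = {per_person_avg:.2f}")
print("mean degree, uniform sample:", round(mean(deg[u] for u in uniform_sample), 2))
print("mean degree, friend sample: ", round(mean(deg[u] for u in friend_sample), 2))
# Random friends have a noticeably higher mean degree, which is why monitoring or
# immunizing named friends tends to reach central nodes without mapping the whole network.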
https://en.wikipedia.org/wiki/Friendship_paradox
Intext retrieval,full-text searchrefers to techniques for searching a singlecomputer-storeddocumentor a collection in afull-text database. Full-text search is distinguished from searches based onmetadataor on parts of the original texts represented in databases (such as titles, abstracts, selected sections, or bibliographical references). In a full-text search, asearch engineexamines all of the words in every stored document as it tries to match search criteria (for example, text specified by a user). Full-text-searching techniques appeared in the 1960s, for exampleIBM STAIRSfrom 1969, and became common in onlinebibliographic databasesin the 1990s.[verification needed]Many websites and application programs (such asword processingsoftware) provide full-text-search capabilities. Some web search engines, such as the formerAltaVista, employ full-text-search techniques, while others index only a portion of the web pages examined by their indexing systems.[1] When dealing with a small number of documents, it is possible for the full-text-search engine to directly scan the contents of the documents with eachquery, a strategy called "serial scanning". This is what some tools, such asgrep, do when searching. However, when the number of documents to search is potentially large, or the quantity of search queries to perform is substantial, the problem of full-text search is often divided into two tasks: indexing and searching. The indexing stage will scan the text of all the documents and build a list of search terms (often called anindex, but more correctly named aconcordance). In the search stage, when performing a specific query, only the index is referenced, rather than the text of the original documents.[2] The indexer will make an entry in the index for each term or word found in a document, and possibly note its relative position within the document. Usually the indexer will ignorestop words(such as "the" and "and") that are both common and insufficiently meaningful to be useful in searching. Some indexers also employ language-specificstemmingon the words being indexed. For example, the words "drives", "drove", and "driven" will be recorded in the index under the single concept word "drive". Recall measures the quantity of relevant results returned by a search, while precision is the measure of the quality of the results returned. Recall is the ratio of relevant results returned to all relevant results. Precision is the ratio of the number of relevant results returned to the total number of results returned. The diagram at right represents a low-precision, low-recall search. In the diagram the red and green dots represent the total population of potential search results for a given search. Red dots represent irrelevant results, and green dots represent relevant results. Relevancy is indicated by the proximity of search results to the center of the inner circle. Of all possible results shown, those that were actually returned by the search are shown on a light-blue background. In the example only 1 relevant result of 3 possible relevant results was returned, so the recall is a very low ratio of 1/3, or 33%. 
The precision for the example is a very low 1/4, or 25%, since only 1 of the 4 results returned was relevant.[3] Due to the ambiguities ofnatural language, full-text-search systems typically include options likefilteringto increase precision andstemmingto increase recall.Controlled-vocabularysearching also helps alleviate low-precision issues bytaggingdocuments in such a way that ambiguities are eliminated. The trade-off between precision and recall is simple: an increase in precision can lower overall recall, while an increase in recall lowers precision.[4] Full-text searching is likely to retrieve many documents that are notrelevantto theintendedsearch question. Such documents are calledfalse positives(seeType I error). The retrieval of irrelevant documents is often caused by the inherent ambiguity ofnatural language. In the sample diagram to the right, false positives are represented by the irrelevant results (red dots) that were returned by the search (on a light-blue background). Clustering techniques based onBayesianalgorithms can help reduce false positives. For a search term of "bank", clustering can be used to categorize the document/data universe into "financial institution", "place to sit", "place to store" etc. Depending on the occurrences of words relevant to the categories, search terms or a search result can be placed in one or more of the categories. This technique is being extensively deployed in thee-discoverydomain.[clarification needed] The deficiencies of full-text searching have been addressed in two ways: by providing users with tools that enable them to express their search questions more precisely, and by developing new search algorithms that improve retrieval precision. ThePageRankalgorithm developed byGooglegives more prominence to documents to which otherWeb pageshave linked.[6]SeeSearch enginefor additional examples. The following is a partial list of available software products whose predominant purpose is to perform full-text indexing and searching. Some of these are accompanied by detailed descriptions of their theory of operation or internal algorithms, which can provide additional insight into how full-text search may be accomplished.
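A toy sketch of the index-then-search approach and of the precision and recall ratios described above. The documents, stop-word list, and crude suffix-stripping "stemmer" are purely illustrative; real systems use far more sophisticated analyzers.

import re
from collections import defaultdict

STOP_WORDS = {"the", "a", "an", "and", "of", "to"}   # illustrative stop words

def stem(word):
    """Very crude stemming: strip a few common English suffixes."""
    for suffix in ("ing", "es", "ed", "s"):
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)]
    return word

def build_index(docs):
    """Indexing stage: map each indexed term to the set of documents containing it."""
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for word in re.findall(r"[a-z]+", text.lower()):
            if word not in STOP_WORDS:
                index[stem(word)].add(doc_id)
    return index

docs = {                                             # hypothetical tiny collection
    1: "The driver drove the truck",
    2: "Banks of the river",
    3: "Driving to the bank to deposit money",
    4: "A bank holiday weekend",
}
index = build_index(docs)

# Search stage: only the index is consulted, not the original documents.
returned = index.get(stem("driving"), set())
print("documents matching 'driving':", sorted(returned))

# Precision and recall for this query, given a judged set of relevant documents.
relevant = {1, 3}                                    # documents judged relevant to driving
precision = len(returned & relevant) / len(returned) if returned else 0.0
recall = len(returned & relevant) / len(relevant)
print(f"precision = {precision:.2f}, recall = {recall:.2f}")
# The crude stemmer does not conflate "drove" with "driving", so document 1 is
# missed and recall suffers; a lemmatizer mapping all forms to "drive" would help.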
https://en.wikipedia.org/wiki/Full-text_search
Inmathematics, theWeierstrass elliptic functionsareelliptic functionsthat take a particularly simple form. They are named forKarl Weierstrass. This class of functions is also referred to as℘-functionsand they are usually denoted by the symbol ℘, a uniquely fancyscriptp. They play an important role in the theory of elliptic functions, i.e.,meromorphic functionsthat aredoubly periodic. A ℘-function together with its derivative can be used to parameterizeelliptic curvesand they generate the field of elliptic functions with respect to a given period lattice. Symbol for Weierstrass℘{\displaystyle \wp }-function Acubicof the formCg2,g3C={(x,y)∈C2:y2=4x3−g2x−g3}{\displaystyle C_{g_{2},g_{3}}^{\mathbb {C} }=\{(x,y)\in \mathbb {C} ^{2}:y^{2}=4x^{3}-g_{2}x-g_{3}\}}, whereg2,g3∈C{\displaystyle g_{2},g_{3}\in \mathbb {C} }are complex numbers withg23−27g32≠0{\displaystyle g_{2}^{3}-27g_{3}^{2}\neq 0}, cannot berationally parameterized.[1]Yet one still wants to find a way to parameterize it. For thequadricK={(x,y)∈R2:x2+y2=1}{\displaystyle K=\left\{(x,y)\in \mathbb {R} ^{2}:x^{2}+y^{2}=1\right\}}; theunit circle, there exists a (non-rational) parameterization using the sine function and its derivative the cosine function:ψ:R/2πZ→K,t↦(sin⁡t,cos⁡t).{\displaystyle \psi :\mathbb {R} /2\pi \mathbb {Z} \to K,\quad t\mapsto (\sin t,\cos t).}Because of the periodicity of the sine and cosineR/2πZ{\displaystyle \mathbb {R} /2\pi \mathbb {Z} }is chosen to be the domain, so the function is bijective. In a similar way one can get a parameterization ofCg2,g3C{\displaystyle C_{g_{2},g_{3}}^{\mathbb {C} }}by means of the doubly periodic℘{\displaystyle \wp }-function (see in the section "Relation to elliptic curves"). This parameterization has the domainC/Λ{\displaystyle \mathbb {C} /\Lambda }, which is topologically equivalent to atorus.[2] There is another analogy to the trigonometric functions. Consider the integral functiona(x)=∫0xdy1−y2.{\displaystyle a(x)=\int _{0}^{x}{\frac {dy}{\sqrt {1-y^{2}}}}.}It can be simplified by substitutingy=sin⁡t{\displaystyle y=\sin t}ands=arcsin⁡x{\displaystyle s=\arcsin x}:a(x)=∫0sdt=s=arcsin⁡x.{\displaystyle a(x)=\int _{0}^{s}dt=s=\arcsin x.}That meansa−1(x)=sin⁡x{\displaystyle a^{-1}(x)=\sin x}. So the sine function is an inverse function of an integral function.[3] Elliptic functions are the inverse functions ofelliptic integrals. In particular, let:u(z)=∫z∞ds4s3−g2s−g3.{\displaystyle u(z)=\int _{z}^{\infty }{\frac {ds}{\sqrt {4s^{3}-g_{2}s-g_{3}}}}.}Then the extension ofu−1{\displaystyle u^{-1}}to the complex plane equals the℘{\displaystyle \wp }-function.[4]This invertibility is used incomplex analysisto provide a solution to certainnonlinear differential equationssatisfying thePainlevé property, i.e., those equations that admitpolesas their onlymovable singularities.[5] Letω1,ω2∈C{\displaystyle \omega _{1},\omega _{2}\in \mathbb {C} }be twocomplex numbersthat arelinearly independentoverR{\displaystyle \mathbb {R} }and letΛ:=Zω1+Zω2:={mω1+nω2:m,n∈Z}{\displaystyle \Lambda :=\mathbb {Z} \omega _{1}+\mathbb {Z} \omega _{2}:=\{m\omega _{1}+n\omega _{2}:m,n\in \mathbb {Z} \}}be theperiod latticegenerated by those numbers. Then the℘{\displaystyle \wp }-function is defined as follows: This series converges locallyuniformly absolutelyin thecomplex torusC/Λ{\displaystyle \mathbb {C} /\Lambda }. 
It is common to use1{\displaystyle 1}andτ{\displaystyle \tau }in theupper half-planeH:={z∈C:Im⁡(z)>0}{\displaystyle \mathbb {H} :=\{z\in \mathbb {C} :\operatorname {Im} (z)>0\}}asgeneratorsof thelattice. Dividing byω1{\textstyle \omega _{1}}maps the latticeZω1+Zω2{\displaystyle \mathbb {Z} \omega _{1}+\mathbb {Z} \omega _{2}}isomorphically onto the latticeZ+Zτ{\displaystyle \mathbb {Z} +\mathbb {Z} \tau }withτ=ω2ω1{\textstyle \tau ={\tfrac {\omega _{2}}{\omega _{1}}}}. Because−τ{\displaystyle -\tau }can be substituted forτ{\displaystyle \tau }, without loss of generality we can assumeτ∈H{\displaystyle \tau \in \mathbb {H} }, and then define℘(z,τ):=℘(z,1,τ){\displaystyle \wp (z,\tau ):=\wp (z,1,\tau )}. With that definition, we have℘(z,ω1,ω2)=ω1−2℘(z/ω1,ω2/ω1){\displaystyle \wp (z,\omega _{1},\omega _{2})=\omega _{1}^{-2}\wp (z/\omega _{1},\omega _{2}/\omega _{1})}. Letr:=min{|λ|:0≠λ∈Λ}{\displaystyle r:=\min\{{|\lambda }|:0\neq \lambda \in \Lambda \}}. Then for0<|z|<r{\displaystyle 0<|z|<r}the℘{\displaystyle \wp }-function has the followingLaurent expansion℘(z)=1z2+∑n=1∞(2n+1)G2n+2z2n{\displaystyle \wp (z)={\frac {1}{z^{2}}}+\sum _{n=1}^{\infty }(2n+1)G_{2n+2}z^{2n}}whereGn=∑0≠λ∈Λλ−n{\displaystyle G_{n}=\sum _{0\neq \lambda \in \Lambda }\lambda ^{-n}}forn≥3{\displaystyle n\geq 3}are so calledEisenstein series.[6] Setg2=60G4{\displaystyle g_{2}=60G_{4}}andg3=140G6{\displaystyle g_{3}=140G_{6}}. Then the℘{\displaystyle \wp }-function satisfies the differential equation[6]℘′2(z)=4℘3(z)−g2℘(z)−g3.{\displaystyle \wp '^{2}(z)=4\wp ^{3}(z)-g_{2}\wp (z)-g_{3}.}This relation can be verified by forming a linear combination of powers of℘{\displaystyle \wp }and℘′{\displaystyle \wp '}to eliminate the pole atz=0{\displaystyle z=0}. This yields an entire elliptic function that has to be constant byLiouville's theorem.[6] The coefficients of the above differential equationg2{\displaystyle g_{2}}andg3{\displaystyle g_{3}}are known as theinvariants. Because they depend on the latticeΛ{\displaystyle \Lambda }they can be viewed as functions inω1{\displaystyle \omega _{1}}andω2{\displaystyle \omega _{2}}. The series expansion suggests thatg2{\displaystyle g_{2}}andg3{\displaystyle g_{3}}arehomogeneous functionsof degree−4{\displaystyle -4}and−6{\displaystyle -6}. That is[7]g2(λω1,λω2)=λ−4g2(ω1,ω2){\displaystyle g_{2}(\lambda \omega _{1},\lambda \omega _{2})=\lambda ^{-4}g_{2}(\omega _{1},\omega _{2})}g3(λω1,λω2)=λ−6g3(ω1,ω2){\displaystyle g_{3}(\lambda \omega _{1},\lambda \omega _{2})=\lambda ^{-6}g_{3}(\omega _{1},\omega _{2})}forλ≠0{\displaystyle \lambda \neq 0}. Ifω1{\displaystyle \omega _{1}}andω2{\displaystyle \omega _{2}}are chosen in such a way thatIm⁡(ω2ω1)>0{\displaystyle \operatorname {Im} \left({\tfrac {\omega _{2}}{\omega _{1}}}\right)>0},g2{\displaystyle g_{2}}andg3{\displaystyle g_{3}}can be interpreted as functions on theupper half-planeH:={z∈C:Im⁡(z)>0}{\displaystyle \mathbb {H} :=\{z\in \mathbb {C} :\operatorname {Im} (z)>0\}}. Letτ=ω2ω1{\displaystyle \tau ={\tfrac {\omega _{2}}{\omega _{1}}}}. One has:[8]g2(1,τ)=ω14g2(ω1,ω2),{\displaystyle g_{2}(1,\tau )=\omega _{1}^{4}g_{2}(\omega _{1},\omega _{2}),}g3(1,τ)=ω16g3(ω1,ω2).{\displaystyle g_{3}(1,\tau )=\omega _{1}^{6}g_{3}(\omega _{1},\omega _{2}).}That meansg2andg3are only scaled by doing this. 
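A numerical sketch of the material above, assuming the standard series definition of the ℘-function (whose displayed sum is not reproduced in the text): ℘(z) = 1/z^2 + Σ over nonzero lattice points λ of [1/(z-λ)^2 - 1/λ^2], with derivative ℘′(z) = -2 Σ over all lattice points of 1/(z-λ)^3. The lattice sums and Eisenstein series are truncated crudely at |m|, |n| ≤ N, so the differential equation is only satisfied approximately; the sample point, lattice, and truncation bound are arbitrary choices.

def lattice(N, w1, w2):
    """Nonzero lattice points m*w1 + n*w2 with |m|, |n| <= N (a crude truncation)."""
    return [m * w1 + n * w2
            for m in range(-N, N + 1) for n in range(-N, N + 1)
            if (m, n) != (0, 0)]

def wp_and_derivative(z, w1, w2, N=100):
    """Truncated standard series for the Weierstrass p-function and its derivative."""
    pts = lattice(N, w1, w2)
    wp = 1 / z**2 + sum(1 / (z - l)**2 - 1 / l**2 for l in pts)
    wp_prime = -2 * (1 / z**3 + sum(1 / (z - l)**3 for l in pts))
    return wp, wp_prime

def invariants(w1, w2, N=100):
    """g2 = 60*G4 and g3 = 140*G6 from truncated Eisenstein series."""
    pts = lattice(N, w1, w2)
    return 60 * sum(l**-4 for l in pts), 140 * sum(l**-6 for l in pts)

if __name__ == "__main__":
    w1, w2 = 1.0, 1.0j          # square lattice, an arbitrary example
    z = 0.31 + 0.24j            # arbitrary point away from the lattice
    wp, wp_prime = wp_and_derivative(z, w1, w2)
    g2, g3 = invariants(w1, w2)
    lhs, rhs = wp_prime**2, 4 * wp**3 - g2 * wp - g3
    print("relative residual of the differential equation:", abs(lhs - rhs) / abs(rhs))
    # The residual is small and shrinks further as the truncation bound N increases.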
Setg2(τ):=g2(1,τ){\displaystyle g_{2}(\tau ):=g_{2}(1,\tau )}andg3(τ):=g3(1,τ).{\displaystyle g_{3}(\tau ):=g_{3}(1,\tau ).}As functions ofτ∈H{\displaystyle \tau \in \mathbb {H} },g2{\displaystyle g_{2}}andg3{\displaystyle g_{3}}are so calledmodular forms. TheFourier seriesforg2{\displaystyle g_{2}}andg3{\displaystyle g_{3}}are given as follows:[9]g2(τ)=43π4[1+240∑k=1∞σ3(k)q2k]{\displaystyle g_{2}(\tau )={\frac {4}{3}}\pi ^{4}\left[1+240\sum _{k=1}^{\infty }\sigma _{3}(k)q^{2k}\right]}g3(τ)=827π6[1−504∑k=1∞σ5(k)q2k]{\displaystyle g_{3}(\tau )={\frac {8}{27}}\pi ^{6}\left[1-504\sum _{k=1}^{\infty }\sigma _{5}(k)q^{2k}\right]}whereσm(k):=∑d∣kdm{\displaystyle \sigma _{m}(k):=\sum _{d\mid {k}}d^{m}}is thedivisor functionandq=eπiτ{\displaystyle q=e^{\pi i\tau }}is thenome. Themodular discriminantΔ{\displaystyle \Delta }is defined as thediscriminantof the characteristic polynomial of the differential equation℘′2(z)=4℘3(z)−g2℘(z)−g3{\displaystyle \wp '^{2}(z)=4\wp ^{3}(z)-g_{2}\wp (z)-g_{3}}as follows:Δ=g23−27g32.{\displaystyle \Delta =g_{2}^{3}-27g_{3}^{2}.}The discriminant is a modular form of weight12{\displaystyle 12}. That is, under the action of themodular group, it transforms asΔ(aτ+bcτ+d)=(cτ+d)12Δ(τ){\displaystyle \Delta \left({\frac {a\tau +b}{c\tau +d}}\right)=\left(c\tau +d\right)^{12}\Delta (\tau )}wherea,b,d,c∈Z{\displaystyle a,b,d,c\in \mathbb {Z} }withad−bc=1{\displaystyle ad-bc=1}.[10] Note thatΔ=(2π)12η24{\displaystyle \Delta =(2\pi )^{12}\eta ^{24}}whereη{\displaystyle \eta }is theDedekind eta function.[11] For the Fourier coefficients ofΔ{\displaystyle \Delta }, seeRamanujan tau function. e1{\displaystyle e_{1}},e2{\displaystyle e_{2}}ande3{\displaystyle e_{3}}are usually used to denote the values of the℘{\displaystyle \wp }-function at the half-periods.e1≡℘(ω12){\displaystyle e_{1}\equiv \wp \left({\frac {\omega _{1}}{2}}\right)}e2≡℘(ω22){\displaystyle e_{2}\equiv \wp \left({\frac {\omega _{2}}{2}}\right)}e3≡℘(ω1+ω22){\displaystyle e_{3}\equiv \wp \left({\frac {\omega _{1}+\omega _{2}}{2}}\right)}They are pairwise distinct and only depend on the latticeΛ{\displaystyle \Lambda }and not on its generators.[12] e1{\displaystyle e_{1}},e2{\displaystyle e_{2}}ande3{\displaystyle e_{3}}are the roots of the cubic polynomial4℘(z)3−g2℘(z)−g3{\displaystyle 4\wp (z)^{3}-g_{2}\wp (z)-g_{3}}and are related by the equation:e1+e2+e3=0.{\displaystyle e_{1}+e_{2}+e_{3}=0.}Because those roots are distinct the discriminantΔ{\displaystyle \Delta }does not vanish on the upper half plane.[13]Now we can rewrite the differential equation:℘′2(z)=4(℘(z)−e1)(℘(z)−e2)(℘(z)−e3).{\displaystyle \wp '^{2}(z)=4(\wp (z)-e_{1})(\wp (z)-e_{2})(\wp (z)-e_{3}).}That means the half-periods are zeros of℘′{\displaystyle \wp '}. The invariantsg2{\displaystyle g_{2}}andg3{\displaystyle g_{3}}can be expressed in terms of these constants in the following way:[14]g2=−4(e1e2+e1e3+e2e3){\displaystyle g_{2}=-4(e_{1}e_{2}+e_{1}e_{3}+e_{2}e_{3})}g3=4e1e2e3{\displaystyle g_{3}=4e_{1}e_{2}e_{3}}e1{\displaystyle e_{1}},e2{\displaystyle e_{2}}ande3{\displaystyle e_{3}}are related to themodular lambda function:λ(τ)=e3−e2e1−e2,τ=ω2ω1.{\displaystyle \lambda (\tau )={\frac {e_{3}-e_{2}}{e_{1}-e_{2}}},\quad \tau ={\frac {\omega _{2}}{\omega _{1}}}.} For numerical work, it is often convenient to calculate the Weierstrass elliptic function in terms ofJacobi's elliptic functions. 
The basic relations are:[15]℘(z)=e3+e1−e3sn2⁡w=e2+(e1−e3)dn2⁡wsn2⁡w=e1+(e1−e3)cn2⁡wsn2⁡w{\displaystyle \wp (z)=e_{3}+{\frac {e_{1}-e_{3}}{\operatorname {sn} ^{2}w}}=e_{2}+(e_{1}-e_{3}){\frac {\operatorname {dn} ^{2}w}{\operatorname {sn} ^{2}w}}=e_{1}+(e_{1}-e_{3}){\frac {\operatorname {cn} ^{2}w}{\operatorname {sn} ^{2}w}}}wheree1,e2{\displaystyle e_{1},e_{2}}ande3{\displaystyle e_{3}}are the three roots described above and where the moduluskof the Jacobi functions equalsk=e2−e3e1−e3{\displaystyle k={\sqrt {\frac {e_{2}-e_{3}}{e_{1}-e_{3}}}}}and their argumentwequalsw=ze1−e3.{\displaystyle w=z{\sqrt {e_{1}-e_{3}}}.} The function℘(z,τ)=℘(z,1,ω2/ω1){\displaystyle \wp (z,\tau )=\wp (z,1,\omega _{2}/\omega _{1})}can be represented byJacobi's theta functions:℘(z,τ)=(πθ2(0,q)θ3(0,q)θ4(πz,q)θ1(πz,q))2−π23(θ24(0,q)+θ34(0,q)){\displaystyle \wp (z,\tau )=\left(\pi \theta _{2}(0,q)\theta _{3}(0,q){\frac {\theta _{4}(\pi z,q)}{\theta _{1}(\pi z,q)}}\right)^{2}-{\frac {\pi ^{2}}{3}}\left(\theta _{2}^{4}(0,q)+\theta _{3}^{4}(0,q)\right)}whereq=eπiτ{\displaystyle q=e^{\pi i\tau }}is the nome andτ{\displaystyle \tau }is the period ratio(τ∈H){\displaystyle (\tau \in \mathbb {H} )}.[16]This also provides a very rapid algorithm for computing℘(z,τ){\displaystyle \wp (z,\tau )}. Consider the embedding of the cubic curve in thecomplex projective plane whereO{\displaystyle O}is a point lying on theline at infinityP1(C){\displaystyle \mathbb {P} _{1}(\mathbb {C} )}. For this cubic there exists no rational parameterization, ifΔ≠0{\displaystyle \Delta \neq 0}.[1]In this case it is also called an elliptic curve. Nevertheless there is a parameterization inhomogeneous coordinatesthat uses the℘{\displaystyle \wp }-function and its derivative℘′{\displaystyle \wp '}:[17] Now the mapφ{\displaystyle \varphi }isbijectiveand parameterizes the elliptic curveC¯g2,g3C{\displaystyle {\bar {C}}_{g_{2},g_{3}}^{\mathbb {C} }}. C/Λ{\displaystyle \mathbb {C} /\Lambda }is anabelian groupand atopological space, equipped with thequotient topology. It can be shown that every Weierstrass cubic is given in such a way. That is to say that for every pairg2,g3∈C{\displaystyle g_{2},g_{3}\in \mathbb {C} }withΔ=g23−27g32≠0{\displaystyle \Delta =g_{2}^{3}-27g_{3}^{2}\neq 0}there exists a latticeZω1+Zω2{\displaystyle \mathbb {Z} \omega _{1}+\mathbb {Z} \omega _{2}}, such that g2=g2(ω1,ω2){\displaystyle g_{2}=g_{2}(\omega _{1},\omega _{2})}andg3=g3(ω1,ω2){\displaystyle g_{3}=g_{3}(\omega _{1},\omega _{2})}.[18] The statement that elliptic curves overQ{\displaystyle \mathbb {Q} }can be parameterized overQ{\displaystyle \mathbb {Q} }, is known as themodularity theorem. This is an important theorem innumber theory. It was part ofAndrew Wiles'proof (1995) ofFermat's Last Theorem. Letz,w∈C{\displaystyle z,w\in \mathbb {C} }, so thatz,w,z+w,z−w∉Λ{\displaystyle z,w,z+w,z-w\notin \Lambda }. Then one has:[19]℘(z+w)=14[℘′(z)−℘′(w)℘(z)−℘(w)]2−℘(z)−℘(w).{\displaystyle \wp (z+w)={\frac {1}{4}}\left[{\frac {\wp '(z)-\wp '(w)}{\wp (z)-\wp (w)}}\right]^{2}-\wp (z)-\wp (w).} As well as the duplication formula:[19]℘(2z)=14[℘″(z)℘′(z)]2−2℘(z).{\displaystyle \wp (2z)={\frac {1}{4}}\left[{\frac {\wp ''(z)}{\wp '(z)}}\right]^{2}-2\wp (z).} These formulas also have a geometric interpretation, if one looks at the elliptic curveC¯g2,g3C{\displaystyle {\bar {C}}_{g_{2},g_{3}}^{\mathbb {C} }}together with the mappingφ:C/Λ→C¯g2,g3C{\displaystyle {\varphi }:\mathbb {C} /\Lambda \to {\bar {C}}_{g_{2},g_{3}}^{\mathbb {C} }}as in the previous section. 
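The theta-series representation above is straightforward to turn into code. The sketch below assumes the standard series for θ1, ..., θ4 with nome q = e^{πiτ}, evaluates ℘(z, τ) from the quoted formula for the lattice Z + Zτ, and then checks the addition theorem numerically; the truncation depth, the finite-difference step and the sample points are arbitrary choices.

# A minimal sketch of the theta-series route to the Weierstrass function,
# plus a numerical check of the addition theorem.  Truncation depth, the
# derivative step and the sample points are illustrative choices.
import cmath
from math import pi

def theta(j, v, tau, terms=25):
    """Jacobi theta functions theta_1..theta_4 (nome q = exp(pi*i*tau))."""
    q_pow = lambda e: cmath.exp(1j * pi * tau * e)   # q**e, computed safely
    if j == 1:
        return 2 * sum((-1)**n * q_pow((n + 0.5)**2) * cmath.sin((2*n + 1) * v)
                       for n in range(terms))
    if j == 2:
        return 2 * sum(q_pow((n + 0.5)**2) * cmath.cos((2*n + 1) * v)
                       for n in range(terms))
    if j == 3:
        return 1 + 2 * sum(q_pow(n**2) * cmath.cos(2*n*v) for n in range(1, terms))
    return 1 + 2 * sum((-1)**n * q_pow(n**2) * cmath.cos(2*n*v) for n in range(1, terms))

def wp(z, tau):
    """Weierstrass p-function for the lattice Z + Z*tau via theta functions."""
    c = pi * theta(2, 0, tau) * theta(3, 0, tau)
    main = (c * theta(4, pi * z, tau) / theta(1, pi * z, tau))**2
    return main - (pi**2 / 3) * (theta(2, 0, tau)**4 + theta(3, 0, tau)**4)

def wp_prime(z, tau, h=1e-6):
    return (wp(z + h, tau) - wp(z - h, tau)) / (2 * h)   # central difference

if __name__ == "__main__":
    tau = 0.3 + 1.2j
    z, w = 0.21 + 0.17j, 0.05 + 0.44j
    lhs = wp(z + w, tau)
    rhs = 0.25 * ((wp_prime(z, tau) - wp_prime(w, tau)) /
                  (wp(z, tau) - wp(w, tau)))**2 - wp(z, tau) - wp(w, tau)
    print(lhs)   # the two values should agree to roughly the accuracy
    print(rhs)   # of the finite-difference derivative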
The group structure of(C/Λ,+){\displaystyle (\mathbb {C} /\Lambda ,+)}translates to the curveC¯g2,g3C{\displaystyle {\bar {C}}_{g_{2},g_{3}}^{\mathbb {C} }}and can be geometrically interpreted there: The sum of three pairwise different pointsa,b,c∈C¯g2,g3C{\displaystyle a,b,c\in {\bar {C}}_{g_{2},g_{3}}^{\mathbb {C} }}is zero if and only if they lie on the same line inPC2{\displaystyle \mathbb {P} _{\mathbb {C} }^{2}}.[20] This is equivalent to:det(1℘(u+v)−℘′(u+v)1℘(v)℘′(v)1℘(u)℘′(u))=0,{\displaystyle \det \left({\begin{array}{rrr}1&\wp (u+v)&-\wp '(u+v)\\1&\wp (v)&\wp '(v)\\1&\wp (u)&\wp '(u)\\\end{array}}\right)=0,}where℘(u)=a{\displaystyle \wp (u)=a},℘(v)=b{\displaystyle \wp (v)=b}andu,v∉Λ{\displaystyle u,v\notin \Lambda }.[21] The Weierstrass's elliptic function is usually written with a rather special, lower case script letter ℘, which was Weierstrass's own notation introduced in his lectures of 1862–1863.[footnote 1]It should not be confused with the normal mathematical script letters P: 𝒫 and 𝓅. In computing, the letter ℘ is available as\wpinTeX. InUnicodethe code point isU+2118℘SCRIPT CAPITAL P(&weierp;, &wp;), with the more correct aliasweierstrass elliptic function.[footnote 2]InHTML, it can be escaped as&weierp;.
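As a numerical complement to the modular expressions quoted earlier, the sketch below evaluates g2(τ) and g3(τ) from their Fourier series, forms the discriminant Δ = g2³ − 27g3², and compares it with (2π)¹²η(τ)²⁴, where the Dedekind eta function is computed from its q-product η(τ) = e^{πiτ/12} ∏(1 − e^{2πinτ}). The truncation depths and the sample point τ are illustrative choices.

# A minimal sketch: g2(tau), g3(tau) from their q-series, the discriminant
# Delta = g2**3 - 27*g3**2, and a comparison with (2*pi)**12 * eta(tau)**24.
# Here q = exp(pi*i*tau), as in the text, so q**2 = exp(2*pi*i*tau).
import cmath
from math import pi

def sigma(m, k):
    """Divisor function sigma_m(k) = sum of d**m over the divisors d of k."""
    return sum(d**m for d in range(1, k + 1) if k % d == 0)

def g2_g3(tau, terms=30):
    q2 = cmath.exp(2j * pi * tau)
    e4 = 1 + 240 * sum(sigma(3, k) * q2**k for k in range(1, terms))
    e6 = 1 - 504 * sum(sigma(5, k) * q2**k for k in range(1, terms))
    return (4 / 3) * pi**4 * e4, (8 / 27) * pi**6 * e6

def dedekind_eta(tau, terms=30):
    q2 = cmath.exp(2j * pi * tau)
    prod = 1.0
    for n in range(1, terms):
        prod *= 1 - q2**n
    return cmath.exp(1j * pi * tau / 12) * prod

if __name__ == "__main__":
    tau = 0.25 + 1.0j                     # any point of the upper half-plane
    g2, g3 = g2_g3(tau)
    print(g2**3 - 27 * g3**2)             # the discriminant Delta ...
    print((2 * pi)**12 * dedekind_eta(tau)**24)   # ... should match this value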
https://en.wikipedia.org/wiki/Weierstrass_elliptic_function
Categorical perceptionis a phenomenon ofperceptionof distinct categories when there is gradual change in a variable along a continuum. It was originally observed for auditory stimuli but now found to be applicable to other perceptual modalities.[1][2] If one analyzes the sound spectrogram of [ba] and [pa], for example, [p] and [b] can be visualized as lying somewhere on an acoustic continuum based on their VOT (voice onset time). It is possible to construct a continuum of some intermediate tokens lying between the [p] and [b] endpoints by gradually decreasing the voice onset time. Alvin Libermanand colleagues[3](he did not talk about voice onset time in that paper) reported that when people listen to sounds that vary along the voicing continuum, they perceive only /ba/s and /pa/s, nothing in between. This effect—in which a perceived quality jumps abruptly from one category to another at a certain point along a continuum, instead of changing gradually—he dubbed "categorical perception" (CP). He suggested that CP was unique to speech, that CP made speech special, and, in what came to be called "the motor theory of speechperception," he suggested that CP's explanation lay in the anatomy of speech production. According to the (now abandoned)motor theory of speech perception, the reason people perceive an abrupt change between /ba/ and /pa/ is that the way we hear speech sounds is influenced by how people produce them when they speak. What is varying along this continuum is voice-onset-time: the "b" in [ba] has shorter VOT than the "p" in [pa] (i.e. the vocal folds start vibrating around the time of the release of the occlusion for [b], but tens of miliseconds later for [p]; but note that different varieties of English may implement VOT in different ways to signal contrast). Apparently, unlike the synthetic "morphing" apparatus, people's natural vocal apparatus is not capable of producing anything in between ba and pa. So when one hears asoundfrom the VOT continuum, their brain perceives it by trying to match it with what it would have had to do to produce it. Since the only thing they can produce is /ba/ or /pa/, they will perceive any of the synthetic stimuli along the continuum as either /ba/ or /pa/, whichever it is closer to. A similar CP effect is found with ba/da (or with any two speech sounds belonging to different categories); these too lie along a continuum acoustically, but vocally, /ba/ is formed with the two lips, /da/ with the tip of the tongue and the alveolar ridge, and our anatomy does not allow any intermediates. The motortheoryof speech perception explained how speech was special and why speech-sounds are perceived categorically:sensory perceptionis mediated by motor production. If motor production mediates sensoryperception, then one assumes that this CP effect is a result of learning to producespeech.Eimaset al. (1971), however, found thatinfantsalready have speech CP before they begin to speak. 
Perhaps, then, it is aninnateeffect, evolved to "prepare" us to learn to speak.[4]But Kuhl (1987) found that chinchillas also have "speech CP" even though they never learn to speak, and presumably did not evolve to do so.[5]Lane (1965) went on to show that CP effects can be induced bylearningalone, with a purely sensory (visual) continuum in which there is no motor production discontinuity to mediate the perceptual discontinuity.[6]He concluded that speech CP is not special after all, but merely a special case of Lawrence's classic demonstration that stimuli to which you learn to make a different response become more distinctive and stimuli to which you learn to make the same response become more similar. It also became clear that CP was not quite the all-or-none effect Liberman had originally thought it was: It is not that all /pa/s are indistinguishable and all /ba/s are indistinguishable: We can hear the differences, just as we can see the differences between different shades of red. It is just that the within-category differences (pa1/pa2 or red1/red2) sound/look much smaller than the between-category differences (pa2/ba1 or red2/yellow1), even when the size of the underlying physical differences (voicing, wavelength) are actually the same. The study of categorical perception often uses experiments involving discrimination and identification tasks in order to categorize participants' perceptions of sounds.Voice onset time(VOT) is measured along a continuum rather than a binary. English bilabial stops /b/ and /p/ are voiced and voiceless counterparts of the same place and manner of articulation, yet native speakers distinguish the sounds primarily by where they fall on the VOT continuum. Participants in these experiments establish clearphonemeboundaries on the continuum; two sounds with different VOT will be perceived as the same phoneme if on the same side of the boundary.[7]Participants take longer to discriminate between two sounds falling in the same category of VOT than between two on opposite sides of the phoneme boundary, even if the difference in VOT is greater between the two in the same category.[8] In a categorical perception identification task, participants often must identify stimuli, such as speech sounds. An experimenter testing the perception of the VOT boundary between /p/ and /b/ may play several sounds falling on various parts of the VOT continuum and ask volunteers whether they hear each sound as /p/ or /b/.[9]In such experiments, sounds on one side of the boundary are heard almost universally as /p/ and on the other as /b/. Stimuli on or near the boundary take longer to identify and are reported differently by different volunteers, but are perceived as either /b/ or /p/, rather than as a sound somewhere in the middle.[7] A simple AB discrimination task presents participants with two options and participants must decide if they are identical.[9]Predictions for a discrimination task in an experiment are often based on the preceding identification task. An ideal discrimination experiment validating categorical perception of stop consonants would result in volunteers more often correctly discriminating stimuli that fall on opposite sides of the boundary, while discriminating at chance level on the same side of the boundary.[8] In an ABX discrimination task, volunteers are presented with three stimuli. A and B must be distinct stimuli and volunteers decide which of the two the third stimulus X matches. 
This discrimination task is much more common than a simple AB task.[9][8] According to theSapir–Whorf hypothesis(of which Lawrence's acquired similarity/distinctiveness effects would simply be a special case), language affects the way that people perceive the world. For example, colors are perceived categorically only because they happen to be named categorically: Our subdivisions of thespectrumarearbitrary, learned, and vary acrossculturesandlanguages. But Berlin & Kay (1969) suggested that this was not so: Not only do most cultures and languages subdivide and name thecolor spectrumthe same way, but even for those who don't, the regions of compression and separation are the same.[10]We all see blues as more alike and greens as more alike, with a fuzzy boundary in between, whether or not we have named the difference. This view has been challenged in a review article by Regier and Kay (2009) who discuss a distinction between the questions "1. Do color terms affect color perception?" and "2. Are color categories determined by largely arbitrary linguistic convention?". They report evidence that linguistic categories, stored in the left hemisphere of the brain for most people, do affect categorical perception but primarily in the right visual field, and that this effect is eliminated with a concurrent verbal interference task.[11] Universalism, in contrasts to the Sapir-Whorf hypothesis, posits that perceptual categories are innate, and are unaffected by the language that one speaks.[12] Support of the Sapir-Whorf hypothesis describes instances in which speakers of one language demonstrate categorical perception in a way that is different from speakers of another language. Examples of such evidence are provided below: Regier and Kay (2009) reported evidence that linguistic categories affect categorical perception primarily in the right visual field.[13]The right visual field is controlled by the left hemisphere of the brain, which also controls language faculties. Davidoff (2001) presented evidence that in color discrimination tasks, native English speakers discriminated more easily between color stimuli across a determined blue-green boundary than within the same side, but did not show categorical perception when given the same task with Berinmo "nol" and "wor"; Berinmo speakers performed oppositely.[14] A popular theory in current research is "weak-Whorfianism,' which is the theory that although there is a strong universal component to perception, cultural differences still have an impact. For example, a 1998 study found that while there was evidence of universal perception of color between speakers of Setswana and English, there were also marked differences between the two language groups.[15] Thesignatureof categorical perception (CP) is within-category compression and/or between-category separation. The size of the CP effect is merely a scaling factor; it is this compression/separation "accordion effect", that is CP's distinctive feature. In this respect, the "weaker" CP effect for vowels, whose motor production is continuous rather than categorical, but whoseperceptionis by this criterion categorical, is every bit as much of a CP effect as the ba/pa and ba/da effects. But, as with colors, it looks as if the effect is an innate one: Our sensory category detectors for both color and speech sounds are born already "biased" by evolution: Our perceived color and speech-soundspectrumis already "warped" with these compression/separations. 
The Lane/Lawrence demonstrations, lately replicated and extended by Goldstone (1994), showed that CP can be induced by learning alone.[16]There are also the countlesscategoriescataloged in our dictionaries that, according to categorical perception, are unlikely to be inborn. Nativist theorists such as Fodor [1983] have sometimes seemed to suggest that all of ourcategoriesare inborn.[17]There are recent demonstrations that, although the primary color and speech categories may be inborn, their boundaries can be modified or even lost as a result of learning, and weaker secondary boundaries can be generated by learning alone.[18] In the case of innate CP, our categorically biasedsensory detectorspick out their prepared color and speech-sound categories far more readily and reliably than if our perception had been continuous. Learning is a cognitive process that results in a relatively permanent change in behavior. Learning can influence perceptual processing.[19]Learning influences perceptual processing by altering the way in which an individual perceives a given stimulus based on prior experience or knowledge. This means that the way something is perceived is changed by how it was seen, observed, or experienced before. The effects of learning can be studied in categorical perception by looking at the processes involved.[20] Learned categorical perception can be divided into different processes through some comparisons. The processes can be divided into between category and within category groups of comparison .[21]Between category groups are those that compare between two separate sets of objects. Within category groups are those that compare within one set of objects. Between subjects comparisons lead to a categorical expansion effect. A categorical expansion occurs when the classifications and boundaries for the category become broader, encompassing a larger set of objects. In other words, a categorical expansion is when the "edge lines" for defining a category become wider. Within subjects comparisons lead to a categorical compression effect. A categorical compression effect corresponds to the narrowing of category boundaries to include a smaller set of objects (the "edge lines" are closer together).[21]Therefore, between category groups lead to less rigid group definitions whereas within category groups lead to more rigid definitions. Another method of comparison is to look at both supervised and unsupervised group comparisons. Supervised groups are those for which categories have been provided, meaning that the category has been defined previously or given a label; unsupervised groups are groups for which categories are created, meaning that the categories will be defined as needed and are not labeled.[22] In studying learned categorical perception, themes are important. Learning categories is influenced by the presence of themes. Themes increase quality of learning. This is seen especially in cases where the existing themes are opposite.[22]In learned categorical perception, themes serve as cues for different categories. They assist in designating what to look for when placing objects into their categories. For example, when perceiving shapes, angles are a theme. The number of angles and their size provide more information about the shape and cue different categories. Three angles would cue a triangle, whereas four might cue a rectangle or a square. Opposite to the theme of angles would be the theme of circularity. 
The stark contrast between the sharp contour of an angle and the round curvature of a circle make it easier to learn. Similar to themes, labels are also important to learned categorical perception.[21]Labels are “noun-like” titles that can encourage categorical processing with a focus on similarities.[21]The strength of a label can be determined by three factors: analysis of affective (or emotional) strength, permeability (the ability to break through) of boundaries, and a judgment (measurement of rigidity) of discreteness.[21]Sources of labels differ, and, similar to unsupervised/supervised categories, are either created or already exist.[21][22]Labels affect perception regardless of their source. Peers, individuals, experts, cultures, and communities can create labels. The source doesn’t appear to matter as much as mere presence of a label, what matters is that there is a label. There is a positive correlation between strength of the label (combination of three factors) and the degree to which the label affects perception, meaning that the stronger the label, the more the label affects perception.[21] Cues used in learned categorical perception can foster easier recall and access of prior knowledge in the process of learning and using categories.[22]An item in a category can be easier to recall if the category has a cue for the memory. As discussed, labels and themes both function as cues for categories, and, therefore, aid in the memory of these categories and the features of the objects belonging to them. There are several brain structures at work that promote learned categorical perception. The areas and structures involved include: neurons, the prefrontal cortex, and the inferotemporal cortex.[20][23]Neurons in general are linked to all processes in the brain and, therefore, facilitate learned categorical perception. They send the messages between brain areas and facilitate the visual and linguistic processing of the category. The prefrontal cortex is involved in “forming strong categorical representations.”[20]The inferotemporal cortex has cells that code for different object categories and are turned along diagnostic category dimensions, areas distinguishing category boundaries.[20] The learning of categories and categorical perception can be improved through adding verbal labels, making themes relevant to the self, making more separate categories, and by targeting similar features that make it easier to form and define categories. Learned categorical perception occurs not only in human species but has been demonstrated in animal species as well. Studies have targeted categorical perception using humans, monkeys, rodents, birds, frogs.[23][24]These studies have led to numerous discoveries. They focus primarily on learning the boundaries of categories, where inclusion begins and ends, and they support the hypothesis that categorical perception does have a learned component. Computational modeling (Tijsseling & Harnad 1997; Damper & Harnad 2000) has shown that many types of category-learning mechanisms (e.g. both back-propagation and competitive networks) display CP-like effects.[25][26]In back-propagation nets, the hidden-unit activation patterns that "represent" an input build up within-category compression and between-category separation as they learn; other kinds of nets display similar effects. 
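The following is a minimal sketch of that kind of demonstration, not a reimplementation of the cited models: a tiny back-propagation network is trained to split a one-dimensional continuum into two categories, and the mean distance between hidden-unit representations is compared within and between the learned categories before and after training. The architecture, learning rate and epoch count are arbitrary choices, and the compression/separation effect is typical rather than guaranteed for every random seed.

# A minimal sketch of CP-like effects in a back-propagation network.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 20).reshape(-1, 1)   # stimuli along a continuum
y = (x > 0.5).astype(float)                    # category boundary at 0.5

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

W1, b1 = rng.normal(0.0, 1.0, (1, 4)), np.zeros(4)   # one hidden layer, 4 units
W2, b2 = rng.normal(0.0, 1.0, (4, 1)), np.zeros(1)

def hidden(inputs):
    return sigmoid(inputs @ W1 + b1)

def mean_dist(a, b):
    """Mean Euclidean distance over all pairs (skipping identical indices)."""
    return float(np.mean([np.linalg.norm(p - q)
                          for i, p in enumerate(a) for j, q in enumerate(b)
                          if a is not b or i != j]))

def report(label):
    h = hidden(x)
    ha, hb = h[y.ravel() == 0], h[y.ravel() == 1]
    within = 0.5 * (mean_dist(ha, ha) + mean_dist(hb, hb))
    between = mean_dist(ha, hb)
    print(f"{label}: within-category {within:.3f}   between-category {between:.3f}")

report("before training")
lr = 2.0
for _ in range(4000):                          # plain batch gradient descent
    h = hidden(x)
    out = sigmoid(h @ W2 + b2)
    grad_out = (out - y) / len(x) * out * (1 - out)
    grad_h = (grad_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ grad_out
    b2 -= lr * grad_out.sum(0)
    W1 -= lr * x.T @ grad_h
    b1 -= lr * grad_h.sum(0)
# Typically the within-category distances shrink and the between-category
# distances grow relative to them once the categorisation has been learned.
report("after training")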
CP seems to be a means to an end: Inputs that differ among themselves are "compressed" onto similar internal representations if they must all generate the same output; and they become more separate if they must generate different outputs. The network's "bias" is what filters inputs onto their correct output category. The nets accomplish this by selectively detecting (after much trial and error, guided by error-correcting feedback) the invariant features that are shared by the members of the same category and that reliably distinguish them from members of different categories; the nets learn to ignore all other variation as irrelevant to thecategorization. Neural data provide correlates of CP and of learning.[27]Differences between event-related potentials recorded from the brain have been found to be correlated with differences in the perceived category of the stimulus viewed by the subject.Neural imagingstudies have shown that these effects are localized and even lateralized to certain brain regions in subjects who have successfully learned the category, and are absent in subjects who have not.[28][29] Categorical perception is identified with the left prefrontal cortex with this showing such perception for speech units while this is not by posterior areas earlier in their processing such as areas in the leftsuperior temporal gyrus.[30] Both innate and learned CP are sensorimotor effects: The compression/separationbiasesare sensorimotor biases, and presumably had sensorimotor origins, whether during the sensorimotor life-history of theorganism, in the case of learned CP, or the sensorimotor life-history of the species, in the case of innate CP. Theneural netI/O models are also compatible with this fact: Their I/O biases derive from their I/O history. But when we look at our repertoire ofcategoriesin a dictionary, it is highly unlikely that many of them had a direct sensorimotor history during our lifetimes, and even less likely in our ancestors' lifetimes. How many of us have seen a unicorn in real life? We have seen pictures of them, but what had those who first drew those pictures seen? And what about categories I cannot draw or see (or taste or touch): What about the most abstract categories, such as goodness and truth? Some of ourcategoriesmust originate from another source than direct sensorimotorexperience, and here we return to language and the Whorf Hypothesis: Can categories, and their accompanying CP, be acquired through language alone? Again, there are some neural net simulation results suggesting that once a set of category names has been "grounded" through direct sensorimotor experience, they can be combined into Boolean combinations (man = male & human) and into still higher-ordercombinations(bachelor = unmarried & man) which not only pick out the more abstract, higher-order categories much the way the direct sensorimotor detectors do, but also inherit their CP effects, as well as generating some of their own. Bachelor inherits the compression/separation of unmarried and man, and adds a layer of separation/compression of its own.[31][32] These language-induced CP-effects remain to be directly demonstrated in human subjects; so far only learned and innate sensorimotor CP have been demonstrated.[33][34]The latter shows the Whorfian power ofnamingand categorization, in warping ourperceptionof the world. 
That is enough to rehabilitate the Whorf Hypothesis from its apparent failure on color terms (and perhaps also from its apparent failure on eskimo snow terms[35]), but to show that it is a full-blown language effect, and not merely a vocabulary effect, it will have to be shown that our perception of theworldcan also be warped, not just by how things are named but by what we are told about them. Emotions are an important characteristic of the human species. An emotion is an abstract concept that is most easily observed by looking at facial expressions. Emotions and their relation to categorical perception are often studied using facial expressions.[36][37][38][39][40]Faces contain a large amount of valuable information.[38] Emotions are divided into categories because they are discrete from one another. Each emotion entails a separate and distinct set of reactions, consequences, and expressions. The feeling and expression of emotions is a natural occurrence, and, it is actually a universal occurrence for some emotions. There are six basic emotions that are considered universal to the human species across age, gender, race, country, and culture and that are considered to be categorically distinct. These six basic emotions are: happiness, disgust, sadness, surprise, anger, and fear.[39]According to the discrete emotions approach, people experience one emotion and not others, rather than a blend.[39]Categorical perception of emotional facial expressions does not require lexical categories.[39]Of these six emotions, happiness is the most easily identified. The perception of emotions using facial expressions reveals slight gender differences[36]based on the definition and boundaries (essentially, the “edge line” where one emotion ends and a subsequent emotion begins) of the categories. The emotion of anger is perceived easier and quicker when it is displayed by males. However, the same effects are seen in the emotion of happiness when portrayed by women.[36]These effects are essentially observed because the categories of the two emotions (anger and happiness) are more closely associated with other features of these specific genders. Although a verbal label is provided to emotions, it is not required to categorically perceive them. Before language in infants, they can distinguish emotional responses. The categorical perception of emotions is by a "hardwired mechanism".[39]Additional evidence exists showing the verbal labels from cultures that may not have a label for a specific emotion but can still categorically perceive it as its own emotion, discrete and isolated from other emotions.[39]The perception of emotions into categories has also been studied using the tracking of eye movements which showed an implicit response with no verbal requirement because the eye movement response required only the movement and no subsequent verbal response.[37] The categorical perception of emotions is sometimes a result of joint processing. Other factors may be involved in this perception. Emotional expression and invariable features (features that remain relatively consistent) often work together.[38]Race is one of the invariable features that contribute to categorical perception in conjunction with expression. Race can also be considered a social category.[38]Emotional categorical perception can also be seen as a mix of categorical and dimensional perception. Dimensional perception involves visual imagery. 
Categorical perception occurs even when processing is dimensional.[40] This article incorporates text by Stevan Harnad available under the CC BY-SA 3.0 license.
https://en.wikipedia.org/wiki/Categorical_perception
Identity-based cryptographyis a type ofpublic-key cryptographyin which a publicly known string representing an individual or organization is used as apublic key. The public string could include an email address, domain name, or a physical IP address. The first implementation of identity-based signatures and an email-address basedpublic-key infrastructure(PKI) was developed byAdi Shamirin 1984,[1]which allowed users to verifydigital signaturesusing only public information such as the user's identifier. Under Shamir's scheme, a trusted third party would deliver the private key to the user after verification of the user's identity, with verification essentially the same as that required for issuing acertificatein a typical PKI. Shamir similarly proposedidentity-based encryption, which appeared particularly attractive since there was no need to acquire an identity's public key prior to encryption. However, he was unable to come up with a concrete solution, and identity-based encryption remained an open problem for many years. The first practical implementations were finally devised by Sakai in 2000,[2]and Boneh and Franklin in 2001.[3]These solutions were based onbilinear pairings. Also in 2001, a solution was developed independently byClifford Cocks.[4][5] Closely related to various identity-based encryption schemes are identity based key agreement schemes. One of the first identity based key agreement algorithms was published in 1986, just two years after Shamir's identity based signature. The author was E. Okamoto.[6]Identity based key agreement schemes also allow for "escrow free" identity based cryptography. A notable example of such an escrow free identity based key agreement is the McCullagh-Barreto's "Authenticated Key Agreement without Escrow" found in section 4 of their 2004 paper, "A New Two-Party Identity-Based Authenticated Key Agreement".[7]A variant of this escrow free key exchange is standardized as the identity based key agreement in the Chinese identity based standardSM9. Identity-based systems allow any party to generate a public key from a known identity value, such as an ASCII string. A trusted third party, called the private key generator (PKG), generates the corresponding private keys. To operate, the PKG first publishes a master public key, and retains the correspondingmaster private key(referred to asmaster key). Given the master public key, any party can compute a public key corresponding to the identityIDby combining the master public key with the identity value. To obtain a corresponding private key, the party authorized to use the identityIDcontacts the PKG, which uses the master private key to generate the private key for the identityID. Identity-based systems have a characteristic problem in operation. Suppose Alice and Bob are users of such a system. Since the information needed to find Alice's public key is completely determined by Alice's ID and the master public key, it is not possible to revoke Alice's credentials and issue new credentials without either (a) changing Alice's ID (usually a phone number or an email address which will appear in a corporate directory); or (b) changing the master public key and re-issuing private keys to all users, including Bob.[8] This limitation may be overcome by including a time component (e.g. the current month) in the identity.[8]
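The workflow can be illustrated with a toy sketch of the four roles (system setup by the PKG, key extraction for an identity, encryption to an identity, and decryption). The internals below are a plain hash and XOR stream chosen only so that the example is self-contained; they are not secure and are not how pairing-based schemes such as Boneh-Franklin work, since here anyone could derive a user's key, whereas in a real scheme only the holder of the master secret can.

# A toy sketch of the message flow in identity-based encryption.  The crypto
# below is deliberately trivial (hash + XOR stream) and provides NO security;
# it only mirrors the Setup / Extract / Encrypt / Decrypt interface.
import hashlib, os

def _stream(key: bytes, n: int) -> bytes:
    """Expand a key into n pseudo-random bytes (toy key stream, not secure)."""
    out, counter = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

class PrivateKeyGenerator:
    """The trusted third party: holds the master secret, serves key requests."""
    def __init__(self):
        self.master_secret = os.urandom(32)
        self.master_public_key = hashlib.sha256(self.master_secret).digest()

    def extract(self, identity: str) -> bytes:
        # In a real system the PKG verifies that the requester owns this
        # identity before handing out the key (comparable to issuing a
        # certificate); here the derivation is intentionally simplistic.
        return hashlib.sha256(self.master_public_key + identity.encode()).digest()

def encrypt(master_public_key: bytes, identity: str, plaintext: bytes) -> bytes:
    # The sender needs only the master public key and the recipient's identity
    # string -- no per-user certificate has to be fetched first.
    user_key = hashlib.sha256(master_public_key + identity.encode()).digest()
    return bytes(a ^ b for a, b in zip(plaintext, _stream(user_key, len(plaintext))))

def decrypt(private_key: bytes, ciphertext: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(ciphertext, _stream(private_key, len(ciphertext))))

if __name__ == "__main__":
    pkg = PrivateKeyGenerator()
    ct = encrypt(pkg.master_public_key, "alice@example.org", b"meet at noon")
    alice_key = pkg.extract("alice@example.org")   # issued after identity check
    print(decrypt(alice_key, ct))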
https://en.wikipedia.org/wiki/Identity-based_cryptography
Threshold may refer to:
https://en.wikipedia.org/wiki/Threshold_(disambiguation)
Multimodal transport(also known ascombined transport) is thetransportationofgoodsunder a single contract, but performed with at least two differentmodes of transport; the carrier is liable (in a legal sense) for the entire carriage, even though it is performed by several different modes of transport (byrail, sea and road, for example). The carrier does not have to possess all the means oftransport, and in practice usually does not; the carriage is often performed by sub-carriers (referred to in legal language as "actual carriers"). The carrier responsible for the entire carriage is referred to as a multimodal transport operator, or MTO. Article 1.1. of theUnited Nations Convention on International Multimodal Transport of Goods(Geneva, 24 May 1980) (which will only enter into force 12 months after 30 countries ratify; as of May 2019, only 6 countries have ratified the treaty[1]) defines multimodal transport as follows: "'International multimodal transport' means the carriage of goods by at least two different modes of transport on the basis of a multimodal transport contract from a place in one country at which the goods are taken in charge by the multimodal transport operator to a place designated for delivery situated in a different country".[2] In practice,freight forwardershave become important MTOs; they have moved away from their traditional role as agents for the sender, accepting a greater liability as carriers. Large sea carriers have also evolved into MTOs; they provide customers with so-called door-to-door service. The sea carrier offers transport from the sender's premises (usually located inland) to the receiver's premises (also usually situated inland), rather than offering traditional tackle-to-tackle or pier-to-pier service. MTOs not in the possession of a sea vessel (even though the transport includes a sea leg) are referred to as Non-Vessel Operating Carriers (NVOC) incommon lawcountries (especially the United States). Multimodal transport developed in connection with the "container revolution" of the 1960s and 1970s; as of 2011,containerizedtransports are by far the most important multimodalconsignments. However, it is important to remember that multimodal transport is not equivalent to container transport; multimodal transport is feasible without any form of container. The MTO works on behalf of the supplier; it assures the supplier (and the buyer) that their goods will be effectively managed and supplied. Multimodal transport research is being conducted across a wide range of government, commercial and academic centers. TheResearch and Innovative Technology Administration(RITA) within theU.S. Department of Transportation(USDOT) chairs an inter-agency Research, Development and Technology (RD&T) Planning Team. The University Transportation Center (UTC) program, which consists of more than 100 universities nationwide conducts multi-modal research and education programs.[3]The European Commission has invested heavily in multimodal research under the H2020 programme[4]– examples are CORE[5]and SYNCHRO-NET.[6] From a legal standpoint, multimodal transport creates several problems. Unimodal transports are currently governed by different, often-mandatoryinternational conventions. These conventions stipulate different bases forliability, and different limitations of liability for the carrier. As of 2011, the solution to this problem has been the so-callednetwork principle. 
According to the network principle, the different conventions coexist unchanged; the carrier's liability is defined according to where the breach of contract has occurred (where the goods have been damaged during transport, for example). However, problems arise if the breach of contract is systemic (not localized).
https://en.wikipedia.org/wiki/Multimodal_transport
In mathematics, Mittag-Leffler summation is any of several variations of the Borel summation method for summing possibly divergent formal power series, introduced by Gösta Mittag-Leffler (1908). Let {\displaystyle y(z)=\sum _{k=0}^{\infty }y_{k}z^{k}} be a formal power series in z. Define the transform {\displaystyle {\mathcal {B}}_{\alpha }y} of {\displaystyle y} by {\displaystyle {\mathcal {B}}_{\alpha }y(z)=\sum _{k=0}^{\infty }{\frac {y_{k}}{\Gamma (1+\alpha k)}}z^{k}.} Then the Mittag-Leffler sum of y is given by {\displaystyle \lim _{\alpha \to 0}{\mathcal {B}}_{\alpha }y(z),} if each sum converges and the limit exists. A closely related summation method, also called Mittag-Leffler summation, is given as follows (Sansone & Gerretsen 1960). Suppose that the Borel transform {\displaystyle {\mathcal {B}}_{1}y(z)} converges to an analytic function near 0 that can be analytically continued along the positive real axis to a function growing sufficiently slowly that the following integral is well defined (as an improper integral). Then the Mittag-Leffler sum of y is given by {\displaystyle \int _{0}^{\infty }e^{-t}\,{\mathcal {B}}_{\alpha }y(t^{\alpha }z)\,dt.} When α = 1 this is the same as Borel summation.
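Since the α = 1 case coincides with Borel summation, the method is easy to check numerically on a standard example. In the sketch below the series Σ(−1)^k z^k, which diverges at z = 2, has Borel transform e^{−w}, and the integral returns the value 1/(1 + z) = 1/3 of the analytic continuation; the use of SciPy's quadrature here is an illustrative choice.

# A minimal numerical sketch of the alpha = 1 (Borel) case.
from math import exp
from scipy.integrate import quad

z = 2.0
borel_transform = lambda w: exp(-w)          # B_1 y(w) for y_k = (-1)**k
value, _ = quad(lambda t: exp(-t) * borel_transform(t * z), 0, float("inf"))
print(value)                                 # ~0.3333..., i.e. 1/(1 + z)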
https://en.wikipedia.org/wiki/Mittag-Leffler_summation
Inmathematics, specificallyset theory, theCartesian productof twosetsAandB, denotedA×B, is the set of allordered pairs(a,b)whereais an element ofAandbis an element ofB.[1]In terms ofset-builder notation, that isA×B={(a,b)∣a∈Aandb∈B}.{\displaystyle A\times B=\{(a,b)\mid a\in A\ {\mbox{ and }}\ b\in B\}.}[2][3] A table can be created by taking the Cartesian product of a set of rows and a set of columns. If the Cartesian productrows×columnsis taken, the cells of the table contain ordered pairs of the form(row value, column value).[4] One can similarly define the Cartesian product ofnsets, also known as ann-fold Cartesian product, which can be represented by ann-dimensional array, where each element is ann-tuple. An ordered pair is a2-tuple or couple. More generally still, one can define the Cartesian product of anindexed familyof sets. The Cartesian product is named afterRené Descartes,[5]whose formulation ofanalytic geometrygave rise to the concept, which is further generalized in terms ofdirect product. A rigorous definition of the Cartesian product requires a domain to be specified in theset-builder notation. In this case the domain would have to contain the Cartesian product itself. For defining the Cartesian product of the setsA{\displaystyle A}andB{\displaystyle B}, with the typicalKuratowski's definitionof a pair(a,b){\displaystyle (a,b)}as{{a},{a,b}}{\displaystyle \{\{a\},\{a,b\}\}}, an appropriate domain is the setP(P(A∪B)){\displaystyle {\mathcal {P}}({\mathcal {P}}(A\cup B))}whereP{\displaystyle {\mathcal {P}}}denotes thepower set. Then the Cartesian product of the setsA{\displaystyle A}andB{\displaystyle B}would be defined as[6]A×B={x∈P(P(A∪B))∣∃a∈A∃b∈B:x=(a,b)}.{\displaystyle A\times B=\{x\in {\mathcal {P}}({\mathcal {P}}(A\cup B))\mid \exists a\in A\ \exists b\in B:x=(a,b)\}.} An illustrative example is thestandard 52-card deck. Thestandard playing cardranks {A, K, Q, J, 10, 9, 8, 7, 6, 5, 4, 3, 2} form a 13-element set. The card suits{♠,♥,♦, ♣} form a four-element set. The Cartesian product of these sets returns a 52-element set consisting of 52ordered pairs, which correspond to all 52 possible playing cards. Ranks×Suitsreturns a set of the form {(A, ♠), (A,♥), (A,♦), (A, ♣), (K, ♠), ..., (3, ♣), (2, ♠), (2,♥), (2,♦), (2, ♣)}. Suits×Ranksreturns a set of the form {(♠, A), (♠, K), (♠, Q), (♠, J), (♠, 10), ..., (♣, 6), (♣, 5), (♣, 4), (♣, 3), (♣, 2)}. These two sets are distinct, evendisjoint, but there is a naturalbijectionbetween them, under which (3, ♣) corresponds to (♣, 3) and so on. The main historical example is theCartesian planeinanalytic geometry. In order to represent geometrical shapes in a numerical way, and extract numerical information from shapes' numerical representations,René Descartesassigned to each point in the plane a pair ofreal numbers, called itscoordinates. Usually, such a pair's first and second components are called itsxandycoordinates, respectively (see picture). The set of all such pairs (i.e., the Cartesian productR×R{\displaystyle \mathbb {R} \times \mathbb {R} }, withR{\displaystyle \mathbb {R} }denoting the real numbers) is thus assigned to the set of all points in the plane.[7] A formal definition of the Cartesian product fromset-theoreticalprinciples follows from a definition ofordered pair. The most common definition of ordered pairs,Kuratowski's definition, is(x,y)={{x},{x,y}}{\displaystyle (x,y)=\{\{x\},\{x,y\}\}}. 
Under this definition,(x,y){\displaystyle (x,y)}is an element ofP(P(X∪Y)){\displaystyle {\mathcal {P}}({\mathcal {P}}(X\cup Y))}, andX×Y{\displaystyle X\times Y}is a subset of that set, whereP{\displaystyle {\mathcal {P}}}represents thepower setoperator. Therefore, the existence of the Cartesian product of any two sets inZFCfollows from the axioms ofpairing,union,power set, andspecification. Sincefunctionsare usually defined as a special case ofrelations, and relations are usually defined as subsets of the Cartesian product, the definition of the two-set Cartesian product is necessarily prior to most other definitions. LetA,B,C, andDbe sets. The Cartesian productA×Bis notcommutative,A×B≠B×A,{\displaystyle A\times B\neq B\times A,}[4]because theordered pairsare reversed unless at least one of the following conditions is satisfied:[8] For example: Strictly speaking, the Cartesian product is notassociative(unless one of the involved sets is empty).(A×B)×C≠A×(B×C){\displaystyle (A\times B)\times C\neq A\times (B\times C)}If for exampleA= {1}, then(A×A) ×A= {((1, 1), 1)} ≠{(1, (1, 1))} =A× (A×A). A= [1,4],B= [2,5], andC= [4,7], demonstratingA× (B∩C)= (A×B) ∩ (A×C),A× (B∪C) = (A×B) ∪ (A×C), and A= [2,5],B= [3,7],C= [1,3],D= [2,4], demonstrating The Cartesian product satisfies the following property with respect tointersections(see middle picture).(A∩B)×(C∩D)=(A×C)∩(B×D){\displaystyle (A\cap B)\times (C\cap D)=(A\times C)\cap (B\times D)} In most cases, the above statement is not true if we replace intersection withunion(see rightmost picture).(A∪B)×(C∪D)≠(A×C)∪(B×D){\displaystyle (A\cup B)\times (C\cup D)\neq (A\times C)\cup (B\times D)} In fact, we have that:(A×C)∪(B×D)=[(A∖B)×C]∪[(A∩B)×(C∪D)]∪[(B∖A)×D]{\displaystyle (A\times C)\cup (B\times D)=[(A\setminus B)\times C]\cup [(A\cap B)\times (C\cup D)]\cup [(B\setminus A)\times D]} For the set difference, we also have the following identity:(A×C)∖(B×D)=[A×(C∖D)]∪[(A∖B)×C]{\displaystyle (A\times C)\setminus (B\times D)=[A\times (C\setminus D)]\cup [(A\setminus B)\times C]} Here are some rules demonstrating distributivity with other operators (see leftmost picture):[8]A×(B∩C)=(A×B)∩(A×C),A×(B∪C)=(A×B)∪(A×C),A×(B∖C)=(A×B)∖(A×C),{\displaystyle {\begin{aligned}A\times (B\cap C)&=(A\times B)\cap (A\times C),\\A\times (B\cup C)&=(A\times B)\cup (A\times C),\\A\times (B\setminus C)&=(A\times B)\setminus (A\times C),\end{aligned}}}(A×B)∁=(A∁×B∁)∪(A∁×B)∪(A×B∁),{\displaystyle (A\times B)^{\complement }=\left(A^{\complement }\times B^{\complement }\right)\cup \left(A^{\complement }\times B\right)\cup \left(A\times B^{\complement }\right)\!,}whereA∁{\displaystyle A^{\complement }}denotes theabsolute complementofA. Other properties related withsubsetsare: if bothA,B≠∅, thenA×B⊆C×D⟺A⊆CandB⊆D.{\displaystyle {\text{if both }}A,B\neq \emptyset {\text{, then }}A\times B\subseteq C\times D\!\iff \!A\subseteq C{\text{ and }}B\subseteq D.}[9] Thecardinalityof a set is the number of elements of the set. For example, defining two sets:A= {a, b}andB= {5, 6}. Both setAand setBconsist of two elements each. Their Cartesian product, written asA×B, results in a new set which has the following elements: where each element ofAis paired with each element ofB, and where each pair makes up one element of the output set. The number of values in each element of the resulting set is equal to the number of sets whose Cartesian product is being taken; 2 in this case. The cardinality of the output set is equal to the product of the cardinalities of all the input sets. 
That is, In this case,|A×B| = 4 Similarly, and so on. The setA×Bisinfiniteif eitherAorBis infinite, and the other set is not the empty set.[10] The Cartesian product can be generalized to then-ary Cartesian productovernsetsX1, ...,Xnas the setX1×⋯×Xn={(x1,…,xn)∣xi∈Xifor everyi∈{1,…,n}}{\displaystyle X_{1}\times \cdots \times X_{n}=\{(x_{1},\ldots ,x_{n})\mid x_{i}\in X_{i}\ {\text{for every}}\ i\in \{1,\ldots ,n\}\}} ofn-tuples. If tuples are defined asnested ordered pairs, it can be identified with(X1× ... ×Xn−1) ×Xn. If a tuple is defined as a function on{1, 2, ...,n} that takes its value atito be thei-th element of the tuple, then the Cartesian productX1× ... ×Xnis the set of functions{x:{1,…,n}→X1∪⋯∪Xn|x(i)∈Xifor everyi∈{1,…,n}}.{\displaystyle \{x:\{1,\ldots ,n\}\to X_{1}\cup \cdots \cup X_{n}\ |\ x(i)\in X_{i}\ {\text{for every}}\ i\in \{1,\ldots ,n\}\}.} TheCartesian squareof a setXis the Cartesian productX2=X×X. An example is the 2-dimensionalplaneR2=R×RwhereRis the set ofreal numbers:[1]R2is the set of all points(x,y)wherexandyare real numbers (see theCartesian coordinate system). Then-ary Cartesian powerof a setX, denotedXn{\displaystyle X^{n}}, can be defined asXn=X×X×⋯×X⏟n={(x1,…,xn)|xi∈Xfor everyi∈{1,…,n}}.{\displaystyle X^{n}=\underbrace {X\times X\times \cdots \times X} _{n}=\{(x_{1},\ldots ,x_{n})\ |\ x_{i}\in X\ {\text{for every}}\ i\in \{1,\ldots ,n\}\}.} An example of this isR3=R×R×R, withRagain the set of real numbers,[1]and more generallyRn. Then-ary Cartesian power of a setXisisomorphicto the space of functions from ann-element set toX. As a special case, the 0-ary Cartesian power ofXmay be taken to be asingleton set, corresponding to theempty functionwithcodomainX. Let Cartesian products be givenA=A1×⋯×An{\displaystyle A=A_{1}\times \dots \times A_{n}}andB=B1×⋯×Bn{\displaystyle B=B_{1}\times \dots \times B_{n}}. Then Inn-tuple algebra(NTA),[12]such a matrix-like representation of Cartesian products is called aC-n-tuple. With this in mind, the union of some Cartesian products given in the same universe can be expressed as a matrix bounded by square brackets, in which the rows represent the Cartesian products involved in the union: Such a structure is called aC-systemin NTA. Then the complement of the Cartesian productA{\displaystyle A}will look like the followingC-system expressed as a matrix of the dimensionn×n{\displaystyle n\times n}: The diagonal components of this matrixAi∁{\displaystyle A_{i}^{\complement }}are equal correspondingly toXi∖Ai{\displaystyle X_{i}\setminus A_{i}}. In NTA, a diagonalC-systemA∁{\displaystyle A^{\complement }}, that represents the complement of aC-n-tupleA{\displaystyle A}, can be written concisely as a tuple of diagonal components bounded by inverted square brackets: This structure is called aD-n-tuple. Then the complement of theC-systemR{\displaystyle R}is a structureR∁{\displaystyle R^{\complement }}, represented by a matrix of the same dimension and bounded by inverted square brackets, in which all components are equal to the complements of the components of the initial matrixR{\displaystyle R}. Such a structure is called aD-system and is calculated, if necessary, as the intersection of theD-n-tuples contained in it. For instance, if the followingC-system is given: then its complement will be theD-system Let us consider some new relations for structures with Cartesian products obtained in the process of studying the properties of NTA.[12]The structures defined in the same universe are calledhomotypicones. 
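For finite sets the construction is available directly in Python's standard library; the short sketch below reproduces the playing-card example and the cardinality rule discussed above (the suit strings are just one way to spell the symbols).

# Finite Cartesian products with itertools.product.
from itertools import product

ranks = ["A", "K", "Q", "J", "10", "9", "8", "7", "6", "5", "4", "3", "2"]
suits = ["♠", "♥", "♦", "♣"]

deck = list(product(ranks, suits))            # Ranks x Suits
print(len(deck))                              # 13 * 4 = 52
print(deck[0], deck[-1])                      # ('A', '♠') ... ('2', '♣')

# Ranks x Suits and Suits x Ranks are disjoint but in natural bijection:
swapped = {(s, r) for r, s in deck}
print(swapped == set(product(suits, ranks)))  # True

# n-ary products and Cartesian powers; cardinalities multiply.
a, b = {"a", "b"}, {5, 6}
print(len(list(product(a, b))))               # |A x B| = 4
print(len(list(product(range(2), repeat=3)))) # |{0,1}^3| = 2**3 = 8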
It is possible to define the Cartesian product of an arbitrary (possiblyinfinite)indexed familyof sets. IfIis anyindex set, and{Xi}i∈I{\displaystyle \{X_{i}\}_{i\in I}}is a family of sets indexed byI, then the Cartesian product of the sets in{Xi}i∈I{\displaystyle \{X_{i}\}_{i\in I}}is defined to be∏i∈IXi={f:I→⋃i∈IXi|∀i∈I.f(i)∈Xi},{\displaystyle \prod _{i\in I}X_{i}=\left\{\left.f:I\to \bigcup _{i\in I}X_{i}\ \right|\ \forall i\in I.\ f(i)\in X_{i}\right\},}that is, the set of all functions defined on theindex setIsuch that the value of the function at a particular indexiis an element ofXi. Even if each of theXiis nonempty, the Cartesian product may be empty if theaxiom of choice, which is equivalent to the statement that every such product is nonempty, is not assumed.∏i∈IXi{\displaystyle \prod _{i\in I}X_{i}}may also be denotedX{\displaystyle {\mathsf {X}}}i∈IXi{\displaystyle {}_{i\in I}X_{i}}.[13] For eachjinI, the functionπj:∏i∈IXi→Xj,{\displaystyle \pi _{j}:\prod _{i\in I}X_{i}\to X_{j},}defined byπj(f)=f(j){\displaystyle \pi _{j}(f)=f(j)}is called thej-thprojection map. Cartesian poweris a Cartesian product where all the factorsXiare the same setX. In this case,∏i∈IXi=∏i∈IX{\displaystyle \prod _{i\in I}X_{i}=\prod _{i\in I}X}is the set of all functions fromItoX, and is frequently denotedXI. This case is important in the study ofcardinal exponentiation. An important special case is when the index set isN{\displaystyle \mathbb {N} }, thenatural numbers: this Cartesian product is the set of all infinite sequences with thei-th term in its corresponding setXi. For example, each element of∏n=1∞R=R×R×⋯{\displaystyle \prod _{n=1}^{\infty }\mathbb {R} =\mathbb {R} \times \mathbb {R} \times \cdots }can be visualized as avectorwith countably infinite real number components. This set is frequently denotedRω{\displaystyle \mathbb {R} ^{\omega }}, orRN{\displaystyle \mathbb {R} ^{\mathbb {N} }}. If several sets are being multiplied together (e.g.,X1,X2,X3, ...), then some authors[14]choose to abbreviate the Cartesian product as simply×Xi. Iffis a function fromXtoAandgis a function fromYtoB, then their Cartesian productf×gis a function fromX×YtoA×Bwith(f×g)(x,y)=(f(x),g(y)).{\displaystyle (f\times g)(x,y)=(f(x),g(y)).} This can be extended totuplesand infinite collections of functions. This is different from the standard Cartesian product of functions considered as sets. LetA{\displaystyle A}be a set andB⊆A{\displaystyle B\subseteq A}. Then thecylinderofB{\displaystyle B}with respect toA{\displaystyle A}is the Cartesian productB×A{\displaystyle B\times A}ofB{\displaystyle B}andA{\displaystyle A}. Normally,A{\displaystyle A}is considered to be theuniverseof the context and is left away. For example, ifB{\displaystyle B}is a subset of the natural numbersN{\displaystyle \mathbb {N} }, then the cylinder ofB{\displaystyle B}isB×N{\displaystyle B\times \mathbb {N} }. Although the Cartesian product is traditionally applied to sets,category theoryprovides a more general interpretation of theproductof mathematical structures. This is distinct from, although related to, the notion of aCartesian squarein category theory, which is a generalization of thefiber product. Exponentiationis theright adjointof the Cartesian product; thus any category with a Cartesian product (and afinal object) is aCartesian closed category. 
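The last two constructions, the product of two functions and the projection maps on a product viewed as a set of choice functions, can be illustrated in a few lines of Python; the dictionary used to model a choice function on a finite index set is only an illustrative device.

# (f x g)(x, y) = (f(x), g(y)), and projections on a product of an indexed family.
def function_product(f, g):
    """Cartesian product of two functions, acting componentwise on pairs."""
    return lambda x, y: (f(x), g(y))

double_and_negate = function_product(lambda x: 2 * x, lambda y: -y)
print(double_and_negate(3, 7))        # (6, -7)

# An element of prod_{i in I} X_i is a choice function i -> X_i; a dict models it.
element = {"first": 0, "second": "b", "third": 2.5}   # index set I = {"first", ...}

def projection(j):
    """The j-th projection map evaluates a choice function at the index j."""
    return lambda f: f[j]

print(projection("second")(element))  # 'b'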
In graph theory, the Cartesian product of two graphs G and H is the graph denoted by G × H, whose vertex set is the (ordinary) Cartesian product V(G) × V(H) and such that two vertices (u,v) and (u′,v′) are adjacent in G × H, if and only if u = u′ and v is adjacent with v′ in H, or v = v′ and u is adjacent with u′ in G. The Cartesian product of graphs is not a product in the sense of category theory. Instead, the categorical product is known as the tensor product of graphs.
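The adjacency rule above translates directly into code; the sketch below builds the product of two graphs given as vertex sets and edge sets of unordered pairs (this representation is an illustrative choice).

# Cartesian product of two graphs, following the adjacency rule quoted above.
from itertools import product

def graph_cartesian_product(vertices_g, edges_g, vertices_h, edges_h):
    def adjacent(edges, a, b):
        return frozenset((a, b)) in edges
    eg = {frozenset(e) for e in edges_g}
    eh = {frozenset(e) for e in edges_h}
    vertices = set(product(vertices_g, vertices_h))
    edges = {
        frozenset({(u, v), (u2, v2)})
        for (u, v), (u2, v2) in product(vertices, vertices)
        if (u == u2 and adjacent(eh, v, v2)) or (v == v2 and adjacent(eg, u, u2))
    }
    return vertices, edges

# The product of two single-edge graphs is a 4-cycle (a square).
v, e = graph_cartesian_product({0, 1}, [(0, 1)], {"a", "b"}, [("a", "b")])
print(len(v), len(e))   # 4 vertices, 4 edges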
https://en.wikipedia.org/wiki/Cartesian_square
Algebraic statistics is the use of algebra to advance statistics. Algebra has been useful for experimental design, parameter estimation, and hypothesis testing. Traditionally, algebraic statistics has been associated with the design of experiments and multivariate analysis (especially time series). In recent years, the term "algebraic statistics" has been sometimes restricted, sometimes being used to label the use of algebraic geometry and commutative algebra in statistics. In the past, statisticians have used algebra to advance research in statistics. Some algebraic statistics led to the development of new topics in algebra and combinatorics, such as association schemes. For example, Ronald A. Fisher, Henry B. Mann, and Rosemary A. Bailey applied Abelian groups to the design of experiments. Experimental designs were also studied with affine geometry over finite fields and then with the introduction of association schemes by R. C. Bose. Orthogonal arrays were introduced by C. R. Rao also for experimental designs. Invariant measures on locally compact groups have long been used in statistical theory, particularly in multivariate analysis. Beurling's factorization theorem and much of the work on (abstract) harmonic analysis sought better understanding of the Wold decomposition of stationary stochastic processes, which is important in time series statistics. Encompassing previous results on probability theory on algebraic structures, Ulf Grenander developed a theory of "abstract inference". Grenander's abstract inference and his theory of patterns are useful for spatial statistics and image analysis; these theories rely on lattice theory. Partially ordered vector spaces and vector lattices are used throughout statistical theory. Garrett Birkhoff metrized the positive cone using Hilbert's projective metric and proved Jentzsch's theorem using the contraction mapping theorem.[1] Birkhoff's results have been used for maximum entropy estimation (which can be viewed as linear programming in infinite dimensions) by Jonathan Borwein and colleagues. Vector lattices and conical measures were introduced into statistical decision theory by Lucien Le Cam. In recent years, the term "algebraic statistics" has been used more restrictively, to label the use of algebraic geometry and commutative algebra to study problems related to discrete random variables with finite state spaces. Commutative algebra and algebraic geometry have applications in statistics because many commonly used classes of discrete random variables can be viewed as algebraic varieties. Consider a random variable X which can take on the values 0, 1, 2. Such a variable is completely characterized by the three probabilities p0 = Pr(X = 0), p1 = Pr(X = 1), p2 = Pr(X = 2), and these numbers satisfy p0 + p1 + p2 = 1 and p0, p1, p2 ≥ 0. Conversely, any three such numbers unambiguously specify a random variable, so we can identify the random variable X with the tuple (p0, p1, p2) ∈ R3. Now suppose X is a binomial random variable with parameter q and n = 2, i.e. X represents the number of successes when repeating a certain experiment two times, where each experiment has an individual success probability of q. Then p0 = (1 − q)^2, p1 = 2q(1 − q), p2 = q^2, and it is not hard to show that the tuples (p0, p1, p2) which arise in this way are precisely the ones satisfying 4 p0 p2 − p1^2 = 0. The latter is a polynomial equation defining an algebraic variety (or surface) in R3, and this variety, when intersected with the simplex given by p0 + p1 + p2 = 1, p0, p1, p2 ≥ 0, yields a piece of an algebraic curve which may be identified with the set of all 3-state Bernoulli variables.
Determining the parameter q amounts to locating one point on this curve; testing the hypothesis that a given variable X is Bernoulli amounts to testing whether a certain point lies on that curve or not. Algebraic geometry has also recently found applications to statistical learning theory, including a generalization of the Akaike information criterion to singular statistical models.[2]
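For the binomial example above, the defining constraint can be checked symbolically: with p0 = (1 − q)², p1 = 2q(1 − q) and p2 = q², the quantity p1² − 4p0p2 vanishes identically, so every such tuple lies on the surface p1² = 4p0p2 inside the probability simplex. A minimal sketch, using sympy purely as a convenience:

```python
import sympy as sp

q = sp.symbols('q')

# Probabilities of a binomial random variable with n = 2 and parameter q.
p0 = (1 - q) ** 2      # P(X = 0)
p1 = 2 * q * (1 - q)   # P(X = 1)
p2 = q ** 2            # P(X = 2)

# The tuple (p0, p1, p2) always lies on the surface p1**2 - 4*p0*p2 = 0,
# and the three probabilities sum to one.
print(sp.expand(p1 ** 2 - 4 * p0 * p2))  # 0
print(sp.simplify(p0 + p1 + p2))         # 1
```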
https://en.wikipedia.org/wiki/Algebraic_statistics
Incomputer programming, ananonymous function(function literal,expressionorblock) is afunctiondefinition that is notboundto anidentifier. Anonymous functions are often arguments being passed tohigher-order functionsor used for constructing the result of a higher-order function that needs to return a function.[1]If the function is only used once, or a limited number of times, an anonymous function may be syntactically lighter than using a named function. Anonymous functions are ubiquitous infunctional programming languagesand other languages withfirst-class functions, where they fulfil the same role for thefunction typeasliteralsdo for otherdata types. Anonymous functions originate in the work ofAlonzo Churchin his invention of thelambda calculus, in which all functions are anonymous, in 1936, before electronic computers.[2]In several programming languages, anonymous functions are introduced using the keywordlambda, and anonymous functions are often referred to aslambdasorlambda abstractions. Anonymous functions have been a feature ofprogramming languagessinceLispin 1958, and a growing number of modern programming languages support anonymous functions. The names "lambda abstraction", "lambda function", and "lambda expression" refer to the notation of function abstraction in lambda calculus, where the usual functionf(x) =Mwould be written(λx.M), and whereMis an expression that usesx. Compare to the Python syntax oflambdax:M. The name "arrow function" refers to the mathematical "maps to" symbol,x↦M. Compare to the JavaScript syntax ofx=>M.[3] Anonymous functions can be used for containing functionality that need not be named and possibly for short-term use. Some notable examples includeclosuresandcurrying. The use of anonymous functions is a matter of style. Using them is never the only way to solve a problem; each anonymous function could instead be defined as a named function and called by name. Anonymous functions often provide a briefer notation than defining named functions. In languages that do not permit the definition of named functions in local scopes, anonymous functions may provide encapsulation via localized scope, however the code in the body of such anonymous function may not be re-usable, or amenable to separate testing. Short/simple anonymous functions used in expressions may be easier to read and understand than separately defined named functions, though without adescriptive namethey may be more difficult to understand. In some programming languages, anonymous functions are commonly implemented for very specific purposes such as binding events to callbacks or instantiating the function for particular values, which may be more efficient in aDynamic programming language, more readable, and less error-prone than calling a named function. The following examples are written in Python 3. When attempting to sort in a non-standard way, it may be easier to contain the sorting logic as an anonymous function instead of creating a named function. Most languages provide a generic sort function that implements asort algorithmthat will sort arbitrary objects. This function usually accepts an arbitrary function that determines how to compare whether two elements are equal or if one is greater or less than the other. 
Consider this Python code sorting a list of strings by length of the string: The anonymous function in this example is the lambda expression: The anonymous function accepts one argument,x, and returns the length of its argument, which is then used by thesort()method as the criteria for sorting. Basic syntax of a lambda function in Python is The expression returned by the lambda function can be assigned to a variable and used in the code at multiple places. Another example would be sorting items in a list by the name of their class (in Python, everything has a class): Note that11.2has class name "float",10has class name "int", and'number'has class name "str". The sorted order is "float", "int", then "str". Closures are functions evaluated in an environment containingbound variables. The following example binds the variable "threshold" in an anonymous function that compares the input to the threshold. This can be used as a sort of generator of comparison functions: It would be impractical to create a function for every possible comparison function and may be too inconvenient to keep the threshold around for further use. Regardless of the reason why a closure is used, the anonymous function is the entity that contains the functionality that does the comparing. Currying is the process of changing a function so that rather than taking multiple inputs, it takes a single input and returns a function which accepts the second input, and so forth. In this example, a function that performsdivisionby any integer is transformed into one that performs division by a set integer. While the use of anonymous functions is perhaps not common with currying, it still can be used. In the above example, the function divisor generates functions with a specified divisor. The functions half and third curry the divide function with a fixed divisor. The divisor function also forms a closure by binding the variabled. Ahigher-order functionis a function that takes a function as an argument or returns one as a result. This is commonly used to customize the behavior of a generically defined function, often a looping construct or recursion scheme. Anonymous functions are a convenient way to specify such function arguments. The following examples are in Python 3. The map function performs a function call on each element of a list. The following examplesquaresevery element in an array with an anonymous function. The anonymous function accepts an argument and multiplies it by itself (squares it). The above form is discouraged by the creators of the language, who maintain that the form presented below has the same meaning and is more aligned with the philosophy of the language: The filter function returns all elements from a list that evaluate True when passed to a certain function. The anonymous function checks if the argument passed to it is even. The same as with map, the form below is considered more appropriate: A fold function runs over all elements in a structure (for lists usually left-to-right, a "left fold", calledreducein Python), accumulating a value as it goes. This can be used to combine all elements of a structure into one value, for example: This performs The anonymous function here is the multiplication of the two arguments. The result of a fold need not be one value. Instead, both map and filter can be created using fold. In map, the value that is accumulated is a new list, containing the results of applying a function to each element of the original list. 
In filter, the value that is accumulated is a new list containing only those elements that match the given condition. The following is a list ofprogramming languagesthat support unnamed anonymous functions fully, or partly as some variant, or not at all. This table shows some general trends. First, the languages that do not support anonymous functions (C,Pascal,Object Pascal) are allstatically typedlanguages. However, statically typed languages can support anonymous functions. For example, theMLlanguages are statically typed and fundamentally include anonymous functions, andDelphi, a dialect ofObject Pascal, has been extended to support anonymous functions, as hasC++(by theC++11standard). Second, the languages that treat functions asfirst-class functions(Dylan,Haskell,JavaScript,Lisp,ML,Perl,Python,Ruby,Scheme) generally have anonymous function support so that functions can be defined and passed around as easily as other data types.
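The Python 3 constructions described above can be gathered into one short sketch; the particular values and names (threshold, divisor, and so on) are illustrative only and are not taken from any library.

```python
from functools import reduce

# Sorting with an anonymous key function: by string length, then by class name.
words = ['banana', 'fig', 'apple']
print(sorted(words, key=lambda x: len(x)))             # ['fig', 'apple', 'banana']

mixed = [11.2, 10, 'number']
print(sorted(mixed, key=lambda x: type(x).__name__))   # [11.2, 10, 'number']

# A closure: the returned lambda keeps a reference to `threshold`.
def make_comparator(threshold):
    return lambda value: value > threshold

over_21 = make_comparator(21)
print(over_21(30), over_21(18))   # True False

# Currying division: fix the divisor, leaving a one-argument function.
def divisor(d):
    return lambda dividend: dividend / d

half = divisor(2)
third = divisor(3)
print(half(30), third(30))        # 15.0 10.0

# Higher-order functions taking anonymous arguments.
nums = [1, 2, 3, 4, 5]
print(list(map(lambda x: x * x, nums)))           # squares: [1, 4, 9, 16, 25]
print([x * x for x in nums])                      # comprehension form preferred in Python
print(list(filter(lambda x: x % 2 == 0, nums)))   # evens: [2, 4]
print([x for x in nums if x % 2 == 0])            # comprehension form of the same filter
print(reduce(lambda a, b: a * b, nums))           # left fold (product): 120

# Map and filter can themselves be expressed as folds.
print(reduce(lambda acc, x: acc + [x * x], nums, []))                      # [1, 4, 9, 16, 25]
print(reduce(lambda acc, x: acc + [x] if x % 2 == 0 else acc, nums, []))   # [2, 4]
```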
https://en.wikipedia.org/wiki/Anonymous_function
Dynamic pricing, also referred to assurge pricing,demand pricing, ortime-based pricing,andvariable pricing, is arevenue managementpricing strategyin which businesses set flexible prices forproductsorservicesbased on current market demands. It usually entails raising prices during periods of peak demand and lowering prices during periods of low demand.[1] As a pricing strategy, it encourages consumers to make purchases during periods of low demand (such as buying tickets well in advance of an event or buying meals outside of lunch and dinner rushes)[1]and disincentivizes them during periods of high demand (such as using less electricity during peak electricity hours).[2][3]In some sectors, economists have characterized dynamic pricing as having welfare improvements over uniform pricing and contributing to more optimal allocation of limited resources.[4]Its usage often stirs public controversy, as people frequently think of it asprice gouging.[5] Businesses are able to change prices based on algorithms that take into account competitor pricing,supply and demand, and other external factors in the market. Dynamic pricing is a common practice in several industries such ashospitality,tourism,entertainment,retail,electricity, andpublic transport. Each industry takes a slightly different approach to dynamic pricing based on its individual needs and the demand for the product. Cost-plus pricingis the most basic method of pricing. A store will simply charge consumers the cost required to produce a product plus a predetermined amount of profit. Cost-plus pricing is simple to execute, but it only considers internal information when setting the price and does not factor in external influencers like market reactions, the weather, or changes in consumer value. A dynamic pricing tool can make it easier to update prices, but will not make the updates often if the user doesn't account for external information like competitor market prices.[6]Due to its simplicity, this is the most widely used method of pricing with around 74% of companies in the United States employing this dynamic pricing strategy.[7]Although widely used, the usage is skewed, with companies facing a high degree of competition using this strategy the most, on the other hand, companies that deal with manufacturing tend to use this strategy the least.[7] Businesses that want to price competitively will monitor their competitors’ prices and adjust accordingly. This is called competitor-based pricing. In retail, the competitor that many companies watch is Amazon, which changes prices frequently throughout the day. Amazon is a market leader in retail that changes prices often,[8]which encourages other retailers to alter their prices to stay competitive. Such online retailers use price-matching mechanisms like price trackers.[9]The retailers give the end-user an option for the same, and upon selecting the option to price match, an online bot searches for the lowest price across various websites and offers a price lower than the lowest.[10] Such pricing behavior depends on market conditions, as well as a firm's planning. Although a firm existing within a highly competitive market is compelled to cut prices, that is not always the case. In case of high competition, yet a stable market, and a long-term view, it was predicted that firms will tend to cooperate on a price basis rather than undercut each other.[11] Ideally, companies should ask the price for a product that is equal to the value a consumer attaches to a product. 
This is called value-based pricing. As this value can differ from person to person, it is difficult to uncover the perfect value and have a differentiated price for every person. However, consumers' willingness to pay can be used as a proxy for the perceived value. With the price elasticity of products, companies can calculate how many consumers are willing to pay for the product at each price point. Products with high elasticities are highly sensitive to changes in price, while products with low elasticities are less sensitive to price changes (ceteris paribus). Subsequently, products with low elasticity are typically valued more by consumers if everything else is equal. The dynamic aspect of this pricing method is that elasticities change with respect to the product, category, time, location, and retailers. With the price elasticity of products and the margin of the product, retailers can use this method with their pricing strategy to aim for volume, revenue, orprofit maximizationstrategies.[12] There are two types of bundle pricing strategies: one from the consumer's point of view, and one from the seller's point of view. From the seller's point of view, an end product's price depends on whether it is bundled with something else; which bundle it belongs to; and sometimes on which customers it is offered to. This strategy is adopted by print-media houses and other subscription-based services.The Wall Street Journal, for example, offers a standalone price if an electronic mode of delivery is purchased, and a discount when it is bundled with print delivery.[10] Many industries, especially online retailers, change prices depending on thetime of day. Most retail customers shop during weekly office hours (between 9 AM and 5 PM), so many retailers will raise prices during the morning and afternoon, then lower prices during the evening.[13] Time-based pricing of services such as provision ofelectric powerincludes:[14][15] Peak fit pricing is best used for products that are inelastic in supply, where suppliers are fully able to anticipate demand growth and thus be able to charge differently for service during systematic periods of time. A utility with regulated prices may develop a time-based pricing schedule on analysis of its long-run costs, such as operation and investment costs. A utility such as electricity (or another service), operating in a market environment, may be auctioned on acompetitive market; time-based pricing will typically reflect price variations on the market. Such variations include both regular oscillations due to the demand patterns of users; supply issues (such as availability of intermittent natural resources like water flow or wind); and exceptional price peaks. Price peaks reflect strained conditions in the market (possibly augmented bymarket manipulation, as during theCalifornia electricity crisis), and convey a possible lack of investment. Extreme events include the default byGriddyafter the2021 Texas power crisis. Time-based pricing is the standard method of pricing in the tourism industry. Higher prices are charged during the peak season, or during special event periods. In the off-season, hotels may charge only the operating costs of the establishment, whereas investments and any profit are gained during the high season (this is the basic principle oflong-run marginal costpricing: see alsolong run and short run). 
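As a toy illustration of the value-based approach described above, a seller who has estimated a demand curve can tabulate demand, revenue and profit at candidate price points and pick the profit-maximising one. In the sketch below the linear demand function, unit cost and price range are invented purely for the example.

```python
def demand(price):
    """Hypothetical linear demand curve: units expected to sell at a given price."""
    return max(0, 1000 - 8 * price)

unit_cost = 40  # assumed cost per unit

# Search candidate prices for the one maximising profit = (price - cost) * demand.
best = max(range(unit_cost, 126), key=lambda p: (p - unit_cost) * demand(p))
print(best, demand(best), (best - unit_cost) * demand(best))

# Finite-difference estimate of the point price elasticity at the chosen price.
elasticity = (demand(best + 1) - demand(best - 1)) / 2 * best / demand(best)
print(round(elasticity, 2))
```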
Hotels and other players in the hospitality industry use dynamic pricing to adjust the cost of rooms and packages based on the supply and demand needs at a particular moment.[16]The goal of dynamic pricing in this industry is to find the highest price that consumers are willing to pay. Another name for dynamic pricing in the industry is demand pricing. This form of price discrimination is used to try to maximize revenue based on the willingness to pay of different market segments. It features price increases when demand is high and decreases to stimulate demand when it is low. Having a variety of prices based on the demand at each point in the day makes it possible for hotels to generate more revenue by bringing in customers at the different price points they are willing to pay. Airlines change prices often depending on the day of the week, time of day, and the number of days before the flight.[17]For airlines, dynamic pricing factors in different components such as: how many seats a flight has, departure time, and average cancellations on similar flights.[18]A 2022 study inEconometricaestimated that dynamic pricing was beneficial for "early-arriving, leisure consumers at the expense of late-arriving, business travelers. Although dynamic pricing ensures seat availability for business travelers, these consumers are then charged higher prices. When aggregated over markets, welfare is higher under dynamic pricing than under uniform pricing."[4] Congestion pricingis often used in public transportation androad pricing, where a higher price at peak periods is used to encourage more efficient use of the service or time-shifting to cheaper or free off-peak travel. For example, the San Francisco Bay Bridge charges a higher toll during rush hour and on the weekend, when drivers are more likely to be traveling.[19]This is an effective way to boost revenue when demand is high, while also managing demand since drivers unwilling to pay the premium will avoid those times. TheLondon congestion chargediscourages automobile travel to Central London during peak periods. TheWashington MetroandLong Island Rail Roadcharge higher fares at peak times. The tolls on theCustis Memorial Parkwayvary automatically according to the actual number of cars on the roadway, and at times of severe congestion can reach almost $50.[citation needed] Dynamic pricing is also used byUberandLyft.[20]Uber's system for "dynamically adjusting prices for service" measures supply (Uber drivers) and demand (passengers hailing rides by use of smartphones), and prices fares accordingly.[21]Ride-sharing companies such as Uber and Lyft have increasingly incorporated dynamic pricing into their operations. This strategy enables these businesses to offer the best prices for both drivers and passengers by adjusting prices in real-time in response to supply and demand. When there is a strong demand for rides, rates go up to encourage more drivers to offer their services, and when there is a low demand, prices go down to draw in more passengers. Someprofessional sportsteams use dynamic pricing structures to boost revenue. 
Dynamic pricing is particularly important in baseball because MLB teams play around twice as many games as some other sports and in much larger venues.[22] Sports that are outdoors have to factor weather into pricing strategy, in addition to the date of the game, date of purchase, and opponent.[23] Tickets for a game during inclement weather will sell better at a lower price; conversely, when a team is on a winning streak, fans will be willing to pay more. Dynamic pricing was first introduced to sports by Qcue, a start-up software company from Austin, Texas, and Major League Baseball club the San Francisco Giants. The San Francisco Giants implemented a pilot of 2,000 seats in the View Reserved and Bleachers and moved on to dynamically pricing the entire venue for the 2010 season. Qcue currently works with two-thirds of Major League Baseball franchises, not all of which have implemented a full dynamic pricing structure, and for the 2012 postseason, the San Francisco Giants, Oakland Athletics, and St. Louis Cardinals became the first teams to dynamically price postseason tickets. While behind baseball in terms of adoption, the National Basketball Association, National Hockey League, and NCAA have also seen teams implement dynamic pricing. Outside of the U.S., it has since been adopted on a trial basis by some clubs in the Football League.[24] Scottish Premier League club Heart of Midlothian introduced dynamic pricing for the sale of their season tickets in 2012, but supporters complained that they were being charged significantly more than the advertised price.[25] Retailers, and online retailers in particular, adjust the price of their products according to competitors, time, traffic, conversion rates, and sales goals.[26][27] Supermarkets often use dynamic pricing strategies to manage perishable inventory, such as fresh produce and meat products, that have a limited shelf life. By adjusting prices based on factors like expiration dates and current inventory levels, retailers can minimize waste and maximize revenue. Additionally, the widespread adoption of electronic shelf labels in grocery stores has made it easier to implement dynamic pricing strategies in real time, enabling retailers to respond quickly to changing market conditions and consumer preferences.[28] These labels also make it easier for grocery stores to mark up high-demand items (e.g. making it more expensive to purchase ice in warmer weather).[29] Theme parks have also recently adopted this pricing model. Disneyland and Disney World adopted this practice in 2016, and Universal Studios followed suit.[30] Since the supply of parks is limited and new rides cannot be added based on the surge of demand, the model followed by theme parks with regard to dynamic pricing resembles that followed by the hotel industry. During summertime, when demand is rather inelastic, the parks charge higher prices, whereas ticket prices in winter are less expensive.[31] Dynamic pricing is often criticized as price gouging.[32][33] Dynamic pricing is widely unpopular among consumers, as some feel it tends to favour particular buyers.[34][35][36] While the intent of surge pricing is generally driven by demand-supply dynamics, some instances have proven otherwise.
Some businesses utilise modern technologies (Big data and IoT) to adopt dynamic pricing strategies, where collection and analysis of real-time private data occur almost instantaneously.[37][38][39][40] As data-analysis technology develops rapidly, making it possible to detect an individual's browsing history, age, gender, location and preferences, some consumers fear "unwanted privacy invasions and data fraud", as the extent to which their information is used is often undisclosed or ambiguous.[41] Even with firms' disclaimers stating that private information will be used strictly for data collection, and promising that no third-party distribution will occur, a few cases of corporate misconduct can disrupt consumers' perceptions.[42] Some consumers are simply skeptical of general information collection outright, owing to the possibility of "data leakages and misuses", which can impact suppliers' long-term profitability through reduced customer loyalty.[43] Consumers can also develop perceptions of price fairness or unfairness, whereby different prices being offered to individuals for the same products can affect customers' perceptions of price fairness.[41][43][44] Studies found that the ease of learning other individuals' purchase prices led consumers to perceive price unfairness and report lower satisfaction when others paid less than they did. However, when consumers were price-advantaged, development of trust and increased repurchase intentions were observed.[44][45][46] Other research indicated that price fairness perceptions varied depending on consumers' privacy sensitivity and on the nature of the dynamic pricing used, such as individual pricing, segment pricing, location-data pricing and purchase-history pricing.[41] Amazon engaged in price discrimination for some customers in the year 2000, showing different prices at the same time for the same item to different customers, potentially violating the Robinson–Patman Act.[47] When this incident was criticised, Amazon issued a public apology with refunds to almost 7,000 customers but did not cease the practice.[42] During the COVID-19 pandemic, prices of certain items in high demand were reported to quadruple, garnering negative attention.[48] Although Amazon denied claims of any such manipulation and blamed a few sellers for pushing up prices of essentials such as sanitizers and masks, prices of essential products 'sold by Amazon' had also seen a hefty rise; Amazon claimed this was the result of a software malfunction.[48] Uber's surge pricing has also been criticized. In 2013, when New York was in the midst of a storm, Uber users saw fares go up to eight times the usual fares.[49][50] This incident attracted public backlash, with Salman Rushdie among other public figures publicly criticizing the move.[34] After this incident, starting in 2015 the company began placing caps on how high surge pricing can go during times of emergency.[51] Drivers have been known to hold off on accepting rides in an area until surge pricing forces fares up to a level satisfactory to them.[52] In 2024, Wendy's announced plans to test dynamic pricing in certain American locations during 2025. This pricing method was included with plans to redesign menu boards[53] and these changes were announced to stakeholders.[54] The company received significant online backlash for this decision. In response, Wendy's stated that the intended implementation was limited to reducing prices during low-traffic periods.[55]
https://en.wikipedia.org/wiki/Variable_pricing
TheTrain Protection & Warning System(TPWS) is atrain protection systemused throughout the British passengermain-line railway network, and inVictoria, Australia.[1] According to the UKRail Safety and Standards Board,[2]the purpose of TPWS is to stop a train by automatically initiating a brake demand, where TPWS track equipment is fitted, if the train has: passed a signal at danger without authority; approached a signal at danger too fast; approached areduction in permissible speedtoo fast; approached buffer stops too fast. TPWS is not designed to preventsignals passed at danger(SPADs) but to mitigate the consequences of a SPAD, by preventing a train that has had a SPAD from reaching a conflict point after the signal. A standard installation consists of an on-track transmitter adjacent to a signal, activated when the signal is at danger. A train that passes the signal will have its emergency brake activated. If the train is travelling at speed, this may be too late to stop it before the point of collision, therefore a second transmitter may be placed on the approach to the signal that applies the brakes on trains going too quickly to stop at the signal, positioned to stop trains approaching at up to 75 mph (120 km/h). At around 400 high-risk locations,TPWS+is installed with a third transmitter further in rear of the signal increasing the effectiveness to 100 mph (160 km/h). When installed in conjunction with signal controls such as 'double blocking' (i.e. two red signal aspects in succession), TPWS can be fully effective at any realistic speed.[3] TPWS is not the same astrain stopswhich accomplish a similar task using electro-mechanical technology. Buffer stop protection using train stops is known as ‘Moorgate protection' or 'Moorgate control’. TPWS was developed byBritish Railand its successorRailtrack, following a determination in 1994 that British Rail'sAutomatic Train Protectionsystem was not economical, costing £600,000,000 equivalent to £979,431,929 in 2019 to implement, compared to value in lives saved: £3-£4 million (4,897,160 - 6,529,546 in 2019), per life saved, which was estimated to be 2.9 per year.[4][5] Trial installations of track side and train mounted equipment were made in 1997, with trials and development continuing over the next two years.[6] The rollout of TPWS accelerated when the Railway Safety Regulations 1999 came into force in 2003, requiring the installation of train stops at a number of types of location.[6]However, in March 2001 theJoint Inquiry Into Train Protection Systemsreport found that TPWS had a number of limitations, and that while it provided a relatively cheap stop-gap prior to the widescale introduction of ATP and ERTMS,[6]nothing should impede the installation of the much more capableEuropean Train Control System.[7] A pair of electronic loops are placed 50–450 metres on the approach side of the signal, energized when it is at danger. The distance between the loops determines the minimum speed at which the on board equipment will apply the train'semergency brake. When the train's TPWS receiver passes over the first loop a timer begins to count down. If the second loop is passed before the timer has reached zero, the TPWS will activate. The greater the line speed, the more widely spaced the two loops will be. There is another pair of loops at the signal, also energised when the signal is at danger. These are end to end, and thus will initiate a brake application on a train about to pass a signal at danger regardless of speed. 
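The OSS described above is in effect a speed trap: with the loops a distance d apart and a set time t programmed into the on-board timer, any train that covers d in less than t is travelling faster than d/t and receives a brake demand. A minimal sketch, in which the 20-metre loop spacing and the one-second set time are illustrative values only (the actual set times used on trains are given below):

```python
def oss_brake_demand(loop_spacing_m, traversal_time_s, set_time_s=1.0):
    """Return True if the OSS would demand an emergency brake application.

    loop_spacing_m   distance between arming and trigger loops (metres)
    traversal_time_s time the train took to pass from one loop to the other
    set_time_s       the on-board timer's set period (illustrative value)
    """
    return traversal_time_s < set_time_s

def set_speed_mph(loop_spacing_m, set_time_s=1.0):
    """The minimum speed (in mph) at which this OSS would intervene."""
    return loop_spacing_m / set_time_s * 2.23694  # m/s -> mph

# Example: loops 20 m apart with a 1-second set time trip any train
# covering them at more than roughly 45 mph.
print(round(set_speed_mph(20.0), 1))                  # about 44.7 mph
print(oss_brake_demand(20.0, traversal_time_s=0.7))   # True  (too fast)
print(oss_brake_demand(20.0, traversal_time_s=1.3))   # False (safe approach speed)
```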
In a standard installation there are two pairs of loops, colloquially referred to as "grids" or "toast racks". Both pairs consist of an 'arming' and a 'trigger' loop. If the signal is at danger the loops will be energised. If the signal is clear, the loops will de-energise. The first pair, the Overspeed Sensor System (OSS), is sited at a position determined by line speed and gradient. The loops are separated by a distance that should not be traversed within less than a pre-determined period of time of about one second if the train is running at a safe speed approaching the signal at danger. The exact timings are 974 milliseconds for passenger trains and 1218 milliseconds for freight trains, determined by equipment on the train. Freight trains use the 1.25 times longer timing because of their different braking characteristics.[8] The first, 'arming', loop emits a frequency of 64.25 kHz. The second, 'trigger', loop has a frequency of 65.25 kHz. The other pair of loops is back to back at the signal, and is called a Train Stop System (TSS). The 'arming' and 'trigger' loops work at 66.25 kHz and 65.25 kHz respectively. The brakes will be applied if the on-train equipment detects both frequencies together after having detected the arming frequency alone. Thus, an energised TSS is effective at any speed, but only if a train passes it in the right direction. Since a train may be required to pass a signal at danger during failure etc., the driver has the option to override a TSS, but not an OSS. When a subsidiary signal associated with a main aspect signal is cleared for a shunting movement, the TSS loops are de-energised, but the OSS loops remain active. Where trains are signalled in opposite directions on an individual line, an unwarranted TPWS intervention could occur as a train travelled between an OSS arming loop and a trigger loop that were in fact associated with different signals. To cater for this situation one signal would be nominated the 'normal direction' and fitted with 'ND' equipment; the other would be nominated the 'opposite direction' and fitted with 'OD' equipment. Opposite-direction TPWS transmission frequencies are slightly different, working at 64.75 kHz (OSS arming), 66.75 kHz (TSS arming), and 65.75 kHz (common trigger). At the lineside there are two modules associated with each set of loops: a Signal Interface Module (SIM) and an OSS or TSS module. These generate the frequencies for the loops, prove that the loops are intact, and interface with the signalling system. SIM modules are colour-coded red; ND TSS modules green; OD TSS modules brown; ND OSS modules yellow; and OD OSS modules blue. Every traction unit is fitted with the following equipment:[8] If the loops are energised, an aerial on the underside of the train picks up the radio frequency signal and passes it to the receiver. A timer measures how long it takes to pass between the arming and trigger loops. This time is used to check the speed, and if the speed is higher than the TPWS 'set speed', an emergency brake application is initiated. If the train is travelling slower than the TPWS set speed, but then passes the signal at danger, the aerial will receive the signal from the energised Train Stop System loops, and the brake will be applied to stop the train within the overlap. Multiple unit trains have an aerial at each end. Vehicles that can operate singly (single-car DMUs and locomotives) only have one aerial.
This would be either at the front or rear of it depending on the direction the vehicle was moving in. Every driving cab has a TPWS control panel, located where the driver can see it from their desk. There are two types of panel; the original 'standard' type, and a more recent 'enhanced' version, which gives separate indications for a brake demand caused by a SPAD, Overspeed or AWS.[9] The standard type consists of two circular indicator lamps and a square push button. The push switch marked "Train Stop Override" is used topass a signal at danger with authority. It ignores the TPWS TSS loops for approximately 20 seconds (generally for passenger trains) or 60 seconds (generally for slower accelerating freight trains) or until the loops have been passed, whichever is sooner. TheAWSsystem and the TPWS system are inter-linked and if either of these has initiated a brake application, the "Brake Demand" indicator lamp will flash. The "Temporary Isolation/Fault" indicator lamp will flash if there is a TPWS system fault, or will show a steady illumination if the "Temporary Isolation Switch" has been activated. There is also a separate TPWS Temporary Isolation Switch located out of reach of the driver's desk. This is operated by the driver when the train is being worked in degraded conditions such as Temporary Block Working where multiple signals need to be passed at danger with the signaller's authority. Temporarily isolating the TPWS does not affect the AWS. The driver must reinstate the TPWS immediately at the point where normal working is resumed. As a safety feature, if they forget to do this, the TPWS will be reinstated on the next occasion that the driver's desk is shut down and then opened up again. An alternative to usingderailersinDepot Personnel Protection Systemsis to equip the system with TPWS. This equipment safeguards staff from unauthorised movements by using the TPWS equipment. Any unplanned movement will cause the train to automatically come to a stand when it has passed the relevant signal set at danger. This has the added benefit of preventing damage to the infrastructure and traction and rolling stock that a derailer system can cause. The first known installation of such a system is at Ilford Depot.[citation needed]TPWS equipped depot protection systems are suitable only for locations where vehicles are driven in and out of the maintenance building from a leading driving cab - they are not suitable for use with loose coaching stock or wagon maintenance, where vehicle movements are undertaken by a propelling shunting loco (in this case the lead vehicles would not be equipped with the relevant TPWS safety equipment), nor will it prevent a run-away vehicle from entering a protected work area. Certain signals may have multiple OSSes fitted. Alternatively, usually due to low line speeds, an OSS may not be fitted. An example of this is a terminalstation platformstarting signal. An OSS on its own may be used to protect a permanent speed restriction, orbuffer stop. Although loops are standard, buffer stops may be fitted with 'mini loops', due to the very low approach speed, usually 10 mph. When buffer stops were originally fitted with TPWS using standard loops there were many instances of false applications, causing delays whilst it reset, with trains potentially blocking the station throat, plus the risk of passengers standing to alight being thrown over by the sudden braking. 
This problem arose when a train passed over the arming loop so slowly that it was still detected by the train's receiver after the on-board timer had completed its cycle. The timer would reset and begin timing again, and the trigger loop then being detected within this second timing cycle would lead to a false intervention. As a temporary solution, drivers were instructed to pass the buffer stop OSSs at 5 mph, eliminating the problem, but meaning that trains no longer had the momentum to roll to the normal stopping point and requiring drivers to apply power beyond the OSS, just a short distance from the buffers, arguably making a buffer stop collision more likely than before TPWS was fitted. The redesigned 'mini loops', roughly a third the length of the standard ones, eliminate this problem, although due to the low speed and low margin, buffer stop OSSs are still a major cause of TPWS trips.[citation needed] Recent applications in the UK have, in conjunction with advanced SPAD protection techniques, used TPWS with outer home signals that protect converging junctions with a higher than average risk, by controlling the speed of an approaching train an extra signal section in rear of the junction. If this fails, the resultant TPWS application of brakes will stop the train before the point of conflict is reached. This system is referred to as TPWS OS (Outer Signal). Standard TPWS installations can only bring a train to a stop in time if it approaches the red signal at no more than 74 miles per hour (119 km/h). In 2001, it was observed that roughly one-third of the UK railway allows for a speed above 75 miles per hour (121 km/h). Further, this assumes the train's brakes are capable of providing a brake force of 12% g.[10][a] A number of train types, most notably the HSTs, were not capable of achieving this, despite having a top speed of 125 miles per hour (201 km/h). TPWS+ was capable of stopping a train travelling at up to 100 miles per hour (160 km/h). TPWS has no ability to regulate speed after a train passes a signal at danger with authority. However, on those occasions there are strict rules governing the actions of drivers, train speed, and the use of TPWS. There are many reasons why a driver might be required to pass a signal at danger with authority. The signaller will advise the driver to pass the signal at danger, proceed with caution, be prepared to stop short of any obstruction, and then obey all other signals. Immediately before moving, the driver will press the "Trainstop Override" button on the TPWS panel, so that the train can pass the signal without triggering the TPWS to apply the brakes. The driver must then proceed at a speed which enables them to stop within the distance that they can see to be clear. Even if it appears that the section is clear to the next signal, they must still exercise caution.[11] TPWS failed to prevent the 2021 Salisbury rail crash, because although the train went to full emergency braking, the slick conditions produced wheel slide and the train therefore was not brought to a stop prior to the collision point.
(ATP would not have prevented this circumstance either.)[12] Critics, such as those representing victims of the Ladbroke Grove and Southall rail crashes, and the ASLEF and RMT rail unions, pushed for the abandonment of TPWS in the late 1990s in favour of continuing with British Rail's ATP project.[13] A 2000 study, Automatic Train Protection for the rail network in Britain, remarked of TPWS that "in terms of avoiding 'ATP preventable accidents' it is about 70% effective", highlighting the speed limitation.[14] That 2000 study did still conclude that TPWS was a good solution for the short term of 10–15 years, but stressed that the European Train Control System was the long-term solution.[14] Notably, the combination of TPWS and AWS is least effective in accidents like the one at Purley, where a driver repeatedly cancelled the AWS warning without applying the brakes, passing the danger signal at high speed. Purley was one of several high-profile SPAD crashes in the late 1980s that led to the initial plan in the 1990s for the mass rollout of ATP, which was subsequently cancelled in 1994 and replaced by TPWS. Supporters of TPWS claim that even where it could not prevent accidents due to SPADs, it would likely reduce the impact, and reduce or eliminate fatalities, by at least slowing the train down. However, it is likely that in those cases the driver would have applied the emergency brakes well before the overspeed sensor.[7] It has been noted that there have been very few fatalities since the fitting of TPWS that would have been prevented had ATP been fitted instead. This, however, overlooks that during the delay between the decision to cancel ATP in favour of TPWS and the actual roll-out of TPWS, the Ladbroke Grove and Southall rail crashes both occurred; both were ATP-preventable accidents, and both occurred on the Great Western line, which had been outfitted with ATP as part of the pilot studies in the early 1990s.[15][16] The TPWS system is used in: Since 1996, an older variant of TPWS, called the Auxiliary Warning System, has been used by the Mumbai Suburban Railway in India, on the Western Line and Central Line.
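The speed figures quoted above follow from elementary braking kinematics: at a constant deceleration of 12% of g, the stopping distance grows with the square of the speed, which is why a layout sufficient to stop trains from about 75 mph needs additional transmitters (TPWS+) or an extra signal section (TPWS OS) at higher speeds. A rough sketch of the arithmetic, assuming constant deceleration and ignoring driver and equipment reaction time:

```python
G = 9.81  # standard gravity, m/s^2

def stopping_distance_m(speed_mph, brake_force_g=0.12):
    """Distance needed to stop from speed_mph at a constant deceleration."""
    v = speed_mph * 0.44704  # mph -> m/s
    return v ** 2 / (2 * brake_force_g * G)

# Stopping distance in metres from 75, 100 and 125 mph at 12% g.
for mph in (75, 100, 125):
    print(mph, round(stopping_distance_m(mph)), "m")
```

Under these assumptions the stopping distance is a little under 500 m from 75 mph, and it roughly doubles at 100 mph, illustrating how quickly the distance available beyond an OSS is used up as speed rises.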
https://en.wikipedia.org/wiki/Train_Protection_%26_Warning_System
Etymology(/ˌɛtɪˈmɒlədʒi/ET-im-OL-ə-jee[1]) is the study of the origin and evolution of words—including their constituent units ofsoundandmeaning—across time.[2]In the 21st century a subfield withinlinguistics, etymology has become a more rigorously scientific study.[1]Most directly tied tohistorical linguistics,philology, andsemiotics, it additionally draws upon comparativesemantics,morphology,pragmatics, andphoneticsin order to attempt a comprehensive and chronological catalogue of all meanings and changes that a word (and its related parts) carries throughout its history. The origin of any particular word is also known as itsetymology. For languages with a longwritten history, etymologists make use of texts, particularly texts about the language itself, to gather knowledge about how words were used during earlier periods, how they developed in meaning andform, or when and how they entered the language. Etymologists also apply the methods ofcomparative linguisticsto reconstruct information about forms that are too old for any direct information to be available. By analyzing related languages with a technique known as thecomparative method, linguists can make inferences about their shared parent language and its vocabulary. In this way,word rootsin many European languages, for example, can be traced back to the origin of theIndo-European language family. Even though etymological research originated from the philological tradition, much current etymological research is done on language families where little or no early documentation is available, such asUralicandAustronesian. The wordetymologyis derived from the Ancient Greek wordἐτυμολογία(etumologíā), itself fromἔτυμον(étumon), meaning'true sense or sense of a truth', and the suffix-logia, denoting'the study or logic of'.[3][4] Theetymonrefers to the predicate (i.e. stem[5]or root[6]) from which a later word or morpheme derives. For example, the Latin wordcandidus, which means'white', is the etymon of Englishcandid. Relationships are often less transparent, however. Englishplace namessuch asWinchester,Gloucester,Tadcastershare different forms of asuffixthat originated as the Latincastrum'fort'. Reflexis the name given to a descendant word in a daughter language, descended from an earlier language. For example, Modern English heat is the reflex of the Old Englishhǣtu. Rarely, this word is used in reverse, and the reflex is actually the root word rather than the descendant word. However, this usage is usually filled by the termetymoninstead. A reflex will sometimes be described simply as adescendant,derivativeorderivedfrom an etymon (but see below).[citation needed] Cognatesorlexical cognatesare sets of words that have been inherited in direct descent from an etymological ancestor in a common parent language.[7]Doubletsoretymological twinsortwinlings(or possibly triplets, and so forth) are specifically cognates within the same language. Although they have the same etymological root, they tend to have different phonological forms, and to have entered the language through different routes. Arootis the source of related words within a single language (no language barrier is crossed). Similar to the distinction betweenetymonandroot, a nuanced distinction can sometimes be made between adescendantand aderivative. 
Aderivativeis one of the words which have their source in a root word, and were at some time created from the root word using morphological constructs such as suffixes, prefixes, and slight changes to the vowels or to the consonants of the root word. For example:unhappy,happily, andunhappilyare all derivatives of the root wordhappy. The termsrootandderivativeare used in the analysis ofmorphologicalderivation within a language in studies that are not concerned with historical linguistics and that do not cross the language barrier. Etymologists apply a number of methods to study the origins of words, some of which are: Etymological theory recognizes that words originate through a limited number of basic mechanisms, the most important of which arelanguage change, borrowing (i.e., the adoption ofloanwordsfrom other languages);word formationsuch asderivationandcompounding; andonomatopoeiaandsound symbolism(i.e., the creation of imitative words such asclickorgrunt). While the origin of newly emerged words is often more or less transparent, it tends to become obscured through time due to sound change or semantic change. Due tosound change, it is not readily obvious that the English wordsetis related to the wordsit(the former is originally acausativeformation of the latter). It is even less obvious thatblessis related toblood(the former was originally a derivative term meaning 'to mark with blood'). Semantic change may also occur. For example, the English wordbeadoriginally meant 'prayer', and acquired its modern meaning through the practice of counting the recitation of prayers by using small objects strung together (beads). One type of semantic change involves the quotidianisation ofmetaphor.[8]Thus the word "trauma", the predecessors of which apparently referenced an "open hole" in the body, has passed through some metaphorical stage or stages and now often refers to some sort of psychological wound.[9] The search for meaningful origins for familiar or strange words is far older than the modern understanding of linguistic evolution and the relationships of languages, which began no earlier than the 18th century. Etymology has been a form of witty wordplay, in which the supposed origins of words were creatively imagined to satisfy contemporary requirements. For example, the Greek poetPindar(bornc.522 BCE) employed inventive etymologies to flatter his patrons.Plutarchemployed etymologies insecurely based on fancied resemblances in sounds.Isidore of Seville'sEtymologiaewas an encyclopedic tracing of "first things" that remained uncritically in use in Europe until the sixteenth century.Etymologicum Genuinumis a grammatical encyclopedia edited atConstantinopleduring the 9th century, one of several similarByzantineworks. The 13th-centuryGolden Legend, as written byJacobus de Voragine, begins eachhagiographyof a saint with a fancifulexcursusin the form of an etymology.[10] Inancient India,Sanskritlinguists and grammarians were the first to undertake comprehensive analyses of linguistics and etymology. The study of Sanskrit etymology has provided Western scholars with the basis ofhistorical linguisticsand modern etymology. Four of the most famous Sanskrit linguists are: These were not the earliest Sanskrit grammarians, but rather followed an earlier line of scholars who lived several centuries earlier, who includedŚākaṭāyana(814–760 BCE), and of whom very little is known. 
The earliest of attested etymologies can be found in theVedas, in the philosophical explanations of theBrahmanas,Aranyakas, andUpanishads. The analyses ofSanskrit grammardone by the previously mentioned linguists involved extensive studies on the etymology (calledNiruktaorVyutpattiin Sanskrit) of Sanskrit words, because the ancient Indians considered sound and speech itself to be sacred and, for them, the words of the Vedas contained deep encoding of the mysteries of the soul and God. One of the earliest philosophical texts of the Classical Greek period to address etymology was theSocratic dialogueCratylus(c.360 BCE) byPlato. During much of the dialogue,Socratesmakes guesses as to the origins of many words, including the names of the gods. In hisodes, Pindar spins complimentary etymologies to flatter his patrons.Plutarch(Life ofNuma Pompilius) spins an etymology forpontifex, while explicitly dismissing the obvious, and actual "bridge-builder": The priests, called Pontifices.... have the name of Pontifices frompotens, powerful because they attend the service of the gods, who have power and command overall. Others make the word refer to exceptions of impossible cases; the priests were to perform all the duties possible; if anything lays beyond their power, the exception was not to be cavilled. The most common opinion is the most absurd, which derives this word from pons, and assigns the priests the title of bridge-makers. The sacrifices performed on the bridge were amongst the most sacred and ancient, and the keeping and repairing of the bridge attached, like any other public sacred office, to the priesthood. Isidore of Sevillecompiled a volume of etymologies to illuminate the triumph of religion. Each saint's legend inJacobus de Voragine'sGolden Legendbegins with an etymological discourse on their name: Lucy is said of light, and light is beauty in beholding, after that S. Ambrose saith: The nature of light is such, she is gracious in beholding, she spreadeth over all without lying down, she passeth in going right without crooking by right long line; and it is without dilation of tarrying, and therefore it is showed the blessed Lucy hath beauty of virginity without any corruption; essence of charity without disordinate love; rightful going and devotion to God, without squaring out of the way; right long line by continual work without negligence of slothful tarrying. In Lucy is said, the way of light.[11] Etymology in the modern sense emerged in the late 18th-century European academia, in the context of theAge of Enlightenment, although preceded by 17th-century pioneers such asMarcus Zuerius van Boxhorn,Gerardus Vossius,Stephen Skinner,Elisha Coles, andWilliam Wotton. The first known systematic attempt to prove the relationship between two languages on the basis of similarity ofgrammarandlexiconwas made in 1770 by the Hungarian,János Sajnovics, when he attempted to demonstrate the relationship betweenSamiandHungarian.[12] The origin of modernhistorical linguisticsis often traced toWilliam Jones, a Welsh philologist living in India, who in 1782 observed the genetic relationship between Greek and Latin. Jones published hisThe Sanscrit Languagein 1786, laying the foundation for the field ofIndo-European studies. 
However, as early as 1727, a Jesuit missionary in India, père Gargam, theorized that Sanskrit could be a "mother tongue arrived from another country" forTeluguandKannadabecause they contained many of the same Sanskrit terms; and in a letter to Abbé Barthélemy of theAcadémie des Inscriptions et Belles Lettresin 1767, another Jesuit missionary in India, pèreGaston-Laurent Coeurdoux, posed the question of the origin of the Sanskrit language and systematically argued his hypothesis of a "commune origine" of Sanskrit, Latin, and Greek, even putting Sanskrit terms and their Latin equivalents in columns.[13]Although they sent many Sanskrit-related texts to theBibliothèque du roi, such as literary translations, grammars, dictionaries, and other works, theJesuit Missionariesin theCarnatic Regionbetween 1695–1762, includingJean Calmette, Coeurdoux, Gargam,Jean François Pons, and others, have only recently begun receiving more attention in modern scholarship for their early contributions to fields like Indo-European Studies, historical linguistics, and comparative philology.[13][14] The study of etymology inGermanic philologywas introduced byRasmus Raskin the early 19th century and elevated to a high standard with theDeutsches Wörterbuch(German Dictionary) compiled by theBrothers Grimm. The successes of the comparative approach culminated in theNeogrammarianschool of the late 19th century. Still,Friedrich Nietzscheused etymological strategies (principally and most famously inOn the Genealogy of Morality, but also elsewhere) to argue that moral values have definite historical origins, where the meaning of concepts such as good and evil are shown to have changed over time according to the value-system that appropriates them. This strategy gained popularity in the 20th century, and philosophers, such asJacques Derrida, have used etymologies to indicate former meanings of words to de-center the "violent hierarchies" of Western philosophy.
https://en.wikipedia.org/wiki/Etymology
Inmathematics educationat theprimary schoollevel,chunking(sometimes also called thepartial quotients method) is an elementary approach for solving simpledivisionquestions by repeatedsubtraction. It is also known as thehangman methodwith the addition of a line separating the divisor, dividend, and partial quotients.[1]It has a counterpart in thegrid methodfor multiplication as well. In general, chunking is more flexible than the traditional method in that the calculation of quotient is less dependent on the place values. As a result, it is often considered to be a more intuitive, but a less systematic approach to divisions – where the efficiency is highly dependent upon one'snumeracyskills. To calculate thewhole numberquotientof dividing a large number by a small number, the student repeatedly takes away "chunks" of the large number, where each "chunk" is an easy multiple (for example 100×, 10×, 5× 2×, etc.) of the small number, until the large number has been reduced to zero – or theremainderis less than the small number itself. At the same time the student is generating a list of the multiples of the small number (i.e., partial quotients) that have so far been taken away, which when added up together would then become the whole number quotient itself. For example, to calculate 132÷8, one might successively subtract 80, 40 and 8 to leave 4: Because 10 + 5 + 1 = 16, 132÷8 is 16 with 4 remaining. In the UK, this approach for elementary division sums has come into widespread classroom use in primary schools since the late 1990s, when theNational Numeracy Strategyin its "numeracy hour" brought in a new emphasis on more free-form oral and mental strategies for calculations, rather than the rote learning of standard methods.[2] Compared to theshort divisionandlong divisionmethods that are traditionally taught, chunking may seem strange, unsystematic, and arbitrary. However, it is argued that chunking, rather than moving straight to short division, gives a better introduction to division, in part because the focus is always holistic, focusing throughout on the whole calculation and its meaning, rather than just rules for generating successive digits. The more freeform nature of chunking also means that it requires more genuine understanding – rather than just the ability to follow a ritualised procedure – to be successful.[3] An alternative way of performing chunking involves the use of the standard long division tableau – except that the partial quotients are stacked up on the top of each other above the long division sign, and that all numbers are spelled out in full. By allowing one to subtract more chunks than what one currently has, it is also possible to expand chunking into a fully bidirectional method as well.
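A minimal sketch of the procedure in Python, using a fixed set of "easy" multiples; the particular chunk sizes are a choice made for the example, since any convenient multiples work:

```python
def chunked_division(dividend, divisor):
    """Divide by repeated subtraction of easy multiples ("chunks").

    Returns (quotient, remainder, chunks), mirroring the written method:
    e.g. 132 / 8 uses the chunks 10, 5 and 1 (i.e. subtract 80, 40, 8).
    """
    remaining = dividend
    partial_quotients = []
    # Easy multiples to try, largest first.
    for multiple in (100, 50, 10, 5, 2, 1):
        while remaining >= multiple * divisor:
            remaining -= multiple * divisor
            partial_quotients.append(multiple)
    return sum(partial_quotients), remaining, partial_quotients

print(chunked_division(132, 8))   # (16, 4, [10, 5, 1])
```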
https://en.wikipedia.org/wiki/Chunking_(division)
In optics, the optical sine theorem states that the products of the index, height, and sine of the slope angle of a ray in object space and its corresponding ray in image space are equal. That is:
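Written out with n the refractive index, y the ray height and u the slope angle in object space, and primed quantities their image-space counterparts (this notation is one common choice rather than one fixed by the statement above), the relation reads:

n \, y \, \sin u \;=\; n' \, y' \, \sin u'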
https://en.wikipedia.org/wiki/Optical_sine_theorem
Theinverted pyramidis ametaphorused byjournalistsand other writers to illustrate how information should be prioritised and structured inprose(e.g., a news report). It is a common method for writingnews storiesand has wide adaptability to other kinds of texts, such as blogs, editorial columns and marketing factsheets. It is a way to communicate the basics about a topic in the initial sentences. The inverted pyramid is taught tomass communicationand journalism students, and is systematically used inEnglish-languagemedia.[1] The inverted or upside-down pyramid can be thought of as a triangle pointing down. The widest part at the top represents the most substantial, interesting, andimportantinformation that the writer means to convey, illustrating that this kind of material should head the article, while the tapering lower portion illustrates that other material should follow in order of diminishing importance. It is sometimes called asummary news leadstyle,[2]orbottom line up front(BLUF).[3]The opposite, the failure to mention the most important, interesting or attention-grabbing elements of a story in the opening paragraphs, is calledburying the lead. Other styles are also used in news writing, including the "anecdotal lead", which begins the story with an eye-catching tale oranecdoterather than the central facts; and theQ&A, or question-and-answer format. The inverted pyramid may also include a "hook" as a kind of prologue, typically a provocative quote, question, or image, to entice the reader into committing to reading the full story. This format is valued for two reasons. First, readers can leave the story at any point and understand it, even if they do not have all the details. Second, it conducts readers through the details of the story by the end.[citation needed] This system also means that information less vital to the reader's understanding comes later in the story, where it is easier to edit out for space or other reasons. This is called "cutting from the bottom."[4]Rather than petering out, a story may end with a "kicker"—a conclusion, perhaps call to action—which comesafterthe pyramid. This is particularly common infeature stylearticles. Historians disagree about when the form was created. Many say theinvention of the telegraphsparked its development by encouraging reporters to condense material, to reduce costs,[5]or to hedge against the unreliability of the telegraph network.[6]Studies of 19th-century news stories in American newspapers, however, suggest that the form spread several decades later than the telegraph, possibly because the reform era's social and educational forces encouraged factual reporting rather than more interpretive narrative styles.[2] Chip Scanlan's essay on the form[7]includes this frequently cited example of telegraphic reporting: This evening at about 9:30 p.m. atFord's Theatre, thePresident, while sitting in his private box withMrs. Lincoln,Mrs. HarrisandMajor Rathburn, was shot by an assassin, who suddenly entered the box and approached behind the President. The assassin then leaped upon the stage, brandishing a large dagger or knife, and made his escape in the rear of the theatre. The pistol ball entered the back of the President's head and penetrated nearly through the head. The wound is mortal. The President has been insensible ever since it was inflicted, and is now dying. About the same hour an assassin, whether the same or not, enteredMr. Seward's apartment and under pretense of having a prescription was shown to the Secretary's sick chamber. 
The assassin immediately rushed to the bed and inflicted two or three stabs on the chest and two on the face. It is hoped the wounds may not be mortal. My apprehension is that they will prove fatal. The nurse alarmed Mr.Frederick Seward, who was in an adjoining rented room, and he hastened to the door of his father's room, when he met the assassin, who inflicted upon him one or more dangerous wounds. The recovery of Frederick Seward is doubtful. It is not probable that the President will live through the night. General Grantand his wife were advertised to be at the theatre... Who, when, where, why, what, and howare addressed in the first paragraph. As the article continues, the less important details are presented. An even more pyramid-conscious reporter or editor would move two additional details to the first two sentences: That the shot was to the head, and that it was expected to prove fatal. The transitional sentence about the Grants suggests that less-important facts are being added to the rest of the story. Other news outlets such as theAssociated Pressdid not use this format when covering the assassination, instead adopting a chronological organization.[8]
https://en.wikipedia.org/wiki/Inverted_pyramid_(journalism)
This article compares a variety of different X window managers. For an introduction to the topic, see X Window System.
https://en.wikipedia.org/wiki/Comparison_of_X_window_managers
Financial managementis thebusiness functionconcerned with profitability, expenses, cash and credit. These are often grouped together under the rubric of maximizing thevalue of the firmforstockholders. The discipline is then tasked with the "efficient acquisition and deployment" of bothshort-andlong-term financial resources, to ensure the objectives of the enterprise are achieved.[1] Financial managers[2](FM) are specialized professionals directly reporting tosenior management, often thefinancial director(FD); the function is seen as'staff', and not'line'. Financial management is generally concerned with short termworking capital management, focusing oncurrent assetsandcurrent liabilities, andmanaging fluctuationsin foreign currency and product cycles, often throughhedging. The function also entails the efficient and effective day-to-day management of funds, and thus overlapstreasury management. It is also involved with long termstrategic financial management, focused on i.a.capital structuremanagement, including capital raising,capital budgeting(capital allocation between business units or products), anddividend policy; these latter, in large corporates, being more the domain of "corporate finance." Specific tasks: Two areas of finance directly overlap financial management: (i)Managerial financeis the (academic) branch of finance concerned with the managerial application of financial techniques; (ii)Corporate financeis mainly concerned with the longer term capital budgeting, and typically is more relevant to large corporations. Investment management, also related, is the professionalasset managementof varioussecurities(shares, bonds and other securities/assets). In the context of financial management, the function sits with treasury; usually the management of the various short-term financiallegal instruments(contractual duties, obligations, or rights) appropriate to the company'scash-andliquidity managementrequirements. SeeTreasury management § Functions. The term "financial management" refers to a company's financial strategy, whilepersonal financeorfinancial life managementrefers to an individual's management strategy. Afinancial planner, or personal financial planner, is a professional who prepares financial plans here. Financial management systems are thesoftware and technologyused by organizations to connect, store, and report on assets, income, and expenses.[4]SeeFinancial modeling § AccountingandFinancial planning and analysisfor discussion. The discipline relies on a range of products, fromspreadsheets(invariably as a starting point, and frequently in total[5]) through commercialEPMandBItools, oftenBusinessObjects(SAP),OBI EE(Oracle),Cognos(IBM), andPower BI(Microsoft). SpecialisedFP&Aproducts are provided byJedox,Anaplan,Workday,Hyperion,Wolters Kluwer,Datarails, andWorkiva.
https://en.wikipedia.org/wiki/Financial_management
In linguistics, affect is an attitude or emotion that a speaker brings to an utterance. Affects such as sarcasm, contempt, dismissal, distaste, disgust, disbelief, exasperation, boredom, anger, joy, respect or disrespect, sympathy, pity, gratitude, wonder, admiration, humility, and awe are frequently conveyed through paralinguistic mechanisms such as intonation, facial expression, and gesture, and thus require recourse to punctuation or emoticons when reduced to writing, but there are grammatical and lexical expressions of affect as well, such as pejorative and approbative or laudative expressions or inflections, adversative forms, honorific and deferential language, interrogatives and tag questions, and some types of evidentiality. Lexical choices may frame a speaker's affect, such as slender (positive affect) vs. scrawny (negative affect), thrifty (positive) vs. stingy (negative), and freedom fighter (positive) vs. terrorist (negative).[1] In many languages of Europe, augmentative derivations are used to express contempt or other negative attitudes toward the noun being so modified, whereas diminutives may express affection; on the other hand, diminutives are frequently used to belittle or be dismissive. For instance, in Spanish, a name ending in the diminutive -ito (masculine) or -ita (feminine) may be a term of endearment, but señorito "little mister" for señor "mister" may be mocking. Polish has a range of augmentative and diminutive forms which express differences in affect. So, from żaba "a frog", besides żabucha for simply a big frog, there is the augmentative żabsko to express distaste, żabisko if the frog is ugly, żabula if it is likeably awkward, etc. Affect can also be conveyed by more subtle means. Duranti, for example, shows that the use of pronouns in Italian narration indicates that the character referred to is important to the narration, but is generally also a mark of a positive speaker attitude toward the character.[2] In Japanese and Korean, grammatical affect is conveyed both through honorific, polite, and humble language, which affects both nouns and verbal inflection, and through clause-final particles that express a range of speaker emotions and attitudes toward what is being said. For instance, when asked in Japanese if what one is eating is good, one might say 美味しい oishii "it's delicious" or まずい mazui "it's bad", with various clause-final particles added for nuance; the same can be done in Korean. In English and Japanese, the passive of intransitive verbs may be used to express an adversative situation:
雨が降った。 ame-ga fut-ta (rain-NOM fall-PFV) "It rained."
雨に降られた。 ame-ni fu-rare-ta (rain-DAT fall-PASS-PFV) "I was rained on."
In some languages with split intransitive grammars, such as the Central Pomo language of California, the choice of encoding an affected verb argument as an "object" (patientive case) reflects empathy or emotional involvement on the part of the speaker:[3]
ʔaː=tʼo béda=ht̪ow béː=yo-w dá-ːʔ-du-w tʃʰó-w. béda ʔaː qʼlá-w=ʔkʰe.
1.AGT=but here=from away=go-PFV want-REFL-IPFV-PFV not-PFV. here I.AGT die-PFV=FUT.
"(But) I don't want to go away from here. I (agentive) will die here." (said matter-of-factly)
ʔaː tʃá=ʔel ʔtʃí=hla t̪oː qʼlá=hla tʼo?
I.AGT house=the get=if I.PAT die=if but
"(But) what if I (patientive) died after I got the house?" (given as a reason not to buy a new house)
https://en.wikipedia.org/wiki/Affect_(linguistics)
Thehistory of supercomputinggoes back to the 1960s when a series of computers atControl Data Corporation(CDC) were designed bySeymour Crayto use innovative designs and parallelism to achieve superior computational peak performance.[1]TheCDC 6600, released in 1964, is generally considered the first supercomputer.[2][3]However, some earlier computers were considered supercomputers for their day such as the 1954IBM NORCin the 1950s,[4]and in the early 1960s, theUNIVAC LARC(1960),[5]theIBM 7030 Stretch(1962),[6]and theManchesterAtlas(1962), all[specify]of which were of comparable power.[citation needed] While the supercomputers of the 1980s used only a few processors, in the 1990s, machines with thousands of processors began to appear both in the United States and in Japan, setting new computational performance records. By the end of the 20th century, massively parallel supercomputers with thousands of "off-the-shelf" processors similar to those found in personal computers were constructed and broke through theteraFLOPScomputational barrier. Progress in the first decade of the 21st century was dramatic and supercomputers with over 60,000 processors appeared, reaching petaFLOPS performance levels. The term "Super Computing" was first used in theNew York Worldin 1929[7]to refer to large custom-builttabulatorsthatIBMhad made forColumbia University.[8] There were several lines of second generation computers that were substantially faster than most contemporary mainframes. These included The second generation saw the introduction of features intended to supportmultiprogrammingandmultiprocessorconfigurations, including master/slave (supervisor/problem) mode, storage protection keys, limit registers, protection associated with address translation, andatomic instructions. In 1957, a group of engineers leftSperry Corporationto formControl Data Corporation(CDC) inMinneapolis, Minnesota.Seymour Crayleft Sperry a year later to join his colleagues at CDC.[1]In 1960, Cray completed theCDC 1604, one of the first generation of commercially successfultransistorizedcomputers and at the time of its release, the fastest computer in the world.[9]However, the sole fully transistorizedHarwell CADETwas operational in 1951, and IBM delivered its commercially successful transistorizedIBM 7090in 1959. Around 1960, Cray decided to design a computer that would be the fastest in the world by a large margin. After four years of experimentation along with Jim Thornton, and Dean Roush and about 30 other engineers, Cray completed theCDC 6600in 1964. Cray switched from germanium to silicon transistors, built byFairchild Semiconductor, that used the planar process. These did not have the drawbacks of the mesa silicon transistors. He ran them very fast, and thespeed of lightrestriction forced a very compact design with severe overheating problems, which were solved by introducing refrigeration, designed by Dean Roush.[10]The 6600 outperformed the industry's prior recordholder, theIBM 7030 Stretch,[clarification needed]by a factor of three.[11][12]With performance of up to threemegaFLOPS,[13][14]it was dubbed asupercomputerand defined the supercomputing market when two hundred computers were sold at $9 million each.[9][15] The 6600 gained speed by "farming out" work to peripheral computing elements, freeing the CPU (Central Processing Unit) to process actual data. 
The MinnesotaFORTRANcompiler for the machine was developed by Liddiard and Mundstock at theUniversity of Minnesotaand with it the 6600 could sustain 500 kiloflops on standard mathematical operations.[16]In 1968, Cray completed theCDC 7600, again the fastest computer in the world.[9]At 36MHz, the 7600 had 3.6 times theclock speedof the 6600, but ran significantly faster due to other technical innovations. They sold only about 50 of the 7600s, not quite a failure. Cray left CDC in 1972 to form his own company.[9]Two years after his departure CDC delivered theSTAR-100, which at 100 megaflops was three times the speed of the 7600. Along with theTexas Instruments ASC, the STAR-100 was one of the first machines to usevector processing⁠‍—‍the idea having been inspired around 1964 by theAPL programming language.[17][18] In 1956, a team atManchester Universityin the United Kingdom began development ofMUSE⁠‍—‍a name derived frommicrosecondengine‍—‍with the aim of eventually building a computer that could operate at processing speeds approaching one microsecond per instruction, about one millioninstructions per second.[19]Mu(the name of the Greek letterμ) is a prefix in the SI and other systems of units denoting a factor of 10−6(one millionth). At the end of 1958,Ferrantiagreed to collaborate with Manchester University on the project, and the computer was shortly afterwards renamedAtlas, with the joint venture under the control ofTom Kilburn. The first Atlas was officially commissioned on 7 December1962‍—‍nearly three years before the Cray CDC 6600 supercomputer wasintroduced‍—‍as one of the world's firstsupercomputers. It was considered at the time of its commissioning to be the most powerful computer in the world, equivalent to fourIBM 7094s. It was said that whenever Atlas went offline half of the United Kingdom's computer capacity was lost.[20]The Atlas pioneeredvirtual memoryandpagingas a way to extend its working memory by combining its 16,384 words of primarycore memorywith an additional 96K words of secondarydrum memory.[21]Atlas also pioneered theAtlas Supervisor, "considered by many to be the first recognizable modernoperating system".[20] Four years after leaving CDC, Cray delivered the 80 MHzCray-1in 1976, and it became the most successful supercomputer in history.[18][22]The Cray-1, which used integrated circuits with two gates per chip, was avector processor. It introduced a number of innovations, such aschaining, in which scalar and vector registers generate interim results that can be used immediately, without additional memory references which would otherwise reduce computational speed.[10][23]TheCray X-MP(designed bySteve Chen) was released in 1982 as a 105 MHz shared-memoryparallelvector processorwith better chaining support and multiple memory pipelines. All three floating point pipelines on the X-MP could operate simultaneously.[23]By 1983 Cray and Control Data were supercomputer leaders; despite its lead in the overall computer market, IBM was unable to produce a profitable competitor.[24] TheCray-2, released in 1985, was a four-processorliquid cooledcomputer totally immersed in a tank ofFluorinert, which bubbled as it operated.[10]It reached 1.9 gigaflops and was the world's fastest supercomputer, and the first to break the gigaflop barrier.[25]The Cray-2 was a totally new design. 
It did not use chaining and had a high memory latency, but used much pipelining and was ideal for problems that required large amounts of memory.[23]The software costs in developing a supercomputer should not be underestimated, as evidenced by the fact that in the 1980s the cost for software development at Cray came to equal what was spent on hardware.[26]That trend was partly responsible for a move away from the in-house,Cray Operating SystemtoUNICOSbased onUnix.[26] TheCray Y-MP, also designed by Steve Chen, was released in 1988 as an improvement of the X-MP and could have eightvector processorsat 167 MHz with a peak performance of 333 megaflops per processor.[23]In the late 1980s, Cray's experiment on the use ofgallium arsenidesemiconductors in theCray-3did not succeed. Seymour Cray began to work on amassively parallelcomputer in the early 1990s, but died in a car accident in 1996 before it could be completed. Cray Research did, however, produce such computers.[22][10] TheCray-2which set the frontiers of supercomputing in the mid to late 1980s had only 8 processors. In the 1990s, supercomputers with thousands of processors began to appear. Another development at the end of the 1980s was the arrival of Japanese supercomputers, some of which were modeled after the Cray-1. During the first half of theStrategic Computing Initiative, some massively parallel architectures were proven to work, such as theWARP systolic array, message-passingMIMDlike theCosmic Cubehypercube,SIMDlike theConnection Machine, etc. In 1987, a TeraOPS Computing Technology Program was proposed, with a goal of achieving 1 teraOPS (a trillion operations per second) by 1992, which was considered achievable by scaling up any of the previously proven architectures.[27] TheSX-3/44Rwas announced byNEC Corporationin 1989 and a year later earned the fastest-in-the-world title with a four-processor model.[28]However, Fujitsu'sNumerical Wind Tunnelsupercomputer used 166 vector processors to gain the top spot in 1994. It had a peak speed of 1.7 gigaflops per processor.[29][30]TheHitachi SR2201obtained a peak performance of 600 gigaflops in 1996 by using 2,048 processors connected via a fast three-dimensionalcrossbarnetwork.[31][32][33] In the same timeframe theIntel Paragoncould have 1,000 to 4,000Intel i860processors in various configurations, and was ranked the fastest in the world in 1993. The Paragon was aMIMDmachine which connected processors via a high speed two-dimensional mesh, allowing processes to execute on separate nodes; communicating via theMessage Passing Interface.[34]By 1995, Cray was also shipping massively parallel systems, e.g. theCray T3Ewith over 2,000 processors, using a three-dimensionaltorus interconnect.[35][36] The Paragon architecture soon led to the IntelASCI Redsupercomputer in the United States, which held the top supercomputing spot to the end of the 20th century as part of theAdvanced Simulation and Computing Initiative. This was also a mesh-based MIMD massively-parallel system with over 9,000 compute nodes and well over 12 terabytes of disk storage, but used off-the-shelfPentium Proprocessors that could be found in everyday personal computers. ASCI Red was the first system ever to break through the 1 teraflop barrier on the MP-Linpackbenchmark in 1996; eventually reaching 2 teraflops.[37] Significant progress was made in the first decade of the 21st century. The efficiency of supercomputers continued to increase, but not dramatically so. 
TheCray C90used 500 kilowatts of power in 1991, while by 2003 theASCI Qused 3,000 kW while being 2,000 times faster, increasing the performance per watt 300 fold.[38] In 2004, theEarth Simulatorsupercomputer built byNECat the Japan Agency for Marine-Earth Science and Technology (JAMSTEC) reached 35.9 teraflops, using 640 nodes, each with eight proprietaryvector processors.[39] TheIBMBlue Genesupercomputer architecture found widespread use in the early part of the 21st century, and 27 of the computers on theTOP500list used that architecture. The Blue Gene approach is somewhat different in that it trades processor speed for low power consumption so that a larger number of processors can be used at air cooled temperatures. It can use over 60,000 processors, with 2048 processors "per rack", and connects them via a three-dimensional torus interconnect.[40][41] Progress inChinahas been rapid, in that China placed 51st on the TOP500 list in June 2003; this was followed by 14th in November 2003, 10th in June 2004, then 5th during 2005, before gaining the top spot in 2010 with the 2.5 petaflopTianhe-Isupercomputer.[42][43] In July 2011, the 8.1 petaflop JapaneseK computerbecame the fastest in the world, using over 60,000SPARC64 VIIIfxprocessors housed in over 600 cabinets. The fact that the K computer is over 60 times faster than the Earth Simulator, and that the Earth Simulator ranks as the 68th system in the world seven years after holding the top spot, demonstrates both the rapid increase in top performance and the widespread growth of supercomputing technology worldwide.[44][45][46]By 2014, the Earth Simulator had dropped off the list and by 2018 the K computer had dropped out of the top 10. By 2018,Summithad become the world's most powerful supercomputer, at 200 petaFLOPS. In 2020, the Japanese once again took the top spot with theFugaku supercomputer, capable of 442 PFLOPS. Finally, starting in 2022 and until the present (as of December 2023[update]), theworld's fastest supercomputerhad become the Hewlett Packard EnterpriseFrontier, also known as the OLCF-5 and hosted at theOak Ridge Leadership Computing Facility(OLCF) inTennessee, United States. The Frontier is based on theCray EX, is the world's firstexascalesupercomputer, and uses onlyAMDCPUsandGPUs; it achieved anRmaxof 1.102exaFLOPS, which is 1.102 quintillion operations per second.[47][48][49][50][51] This is a list of the computers which appeared at the top of theTOP500list since 1993.[52]The "Peak speed" is given as the "Rmax" rating. TheCoComand its later replacement, theWassenaar Arrangement, legally regulated, i.e. required licensing and approval and record-keeping; or banned entirely, the export ofhigh-performance computers(HPCs) to certain countries. Such controls have become harder to justify, leading to loosening of these regulations. Some have argued these regulations were never justified.[53][54][55][56][57][58]
https://en.wikipedia.org/wiki/History_of_supercomputing
TheZhuangzi(historically romanizedChuang Tzŭ) is an ancient Chinese text that is one of the two foundational texts ofTaoism, alongside theTao Te Ching. It was written during the lateWarring States period(476–221 BC) and is named for its traditional author,Zhuang Zhou, who is customarily known as "Zhuangzi" ("Master Zhuang"). TheZhuangziconsists of stories and maxims that exemplify the nature of the ideal Taoist sage. It recounts many anecdotes, allegories, parables, and fables, often expressed with irreverence or humor. Recurring themes include embracing spontaneity and achieving freedom from the human world and its conventions. The text aims to illustrate the arbitrariness andultimate falsity of dichotomiesnormally embraced by human societies, such as those between good and bad, large and small, life and death, or human and nature. In contrast with the focus on good morals and personal duty expressed by many Chinese philosophers of the period, Zhuang Zhou promoted carefree wandering and following nature, through which one would ultimately become one with the "Way" (Tao). Though appreciation for the work often focuses on its philosophy, theZhuangziis also regarded as one of the greatest works of literature in theClassical Chinesecanon. It has significantly influenced major Chinese writers and poets across more than two millennia, with the first attested commentary on the work written during theHan dynasty(202 BC – 220 AD). It has been called "the most important pre-Qintext for the study of Chinese literature".[1] TheZhuangziis presented as the collected works of a man namedZhuang Zhou—traditionally referred to as "Zhuangzi" (莊子; "Master Zhuang"), using the traditional Chinesehonorific. Almost nothing is concretely known of Zhuang Zhou's life. Most of what is known comes from theZhuangziitself, which was subject to changes in later centuries. Most historians place his birth around 369 BC in a place called Meng (蒙) in the historicalstate of Song, near present-dayShangqiu, Henan. His death is variously placed at 301, 295, or 286 BC.[2] Zhuang Zhou is thought to have spent time in the southernstate of Chu, as well as in theQicapital ofLinzi.Sima Qianincluded a biography of Zhuang Zhou in the Han-eraShiji(c.91 BC),[3]but it seems to have been sourced mostly from theZhuangziitself.[4]The American sinologistBurton Watsonconcluded: "Whoever Zhuang Zhou was, the writings attributed to him bear the stamp of a brilliant and original mind".[5]University of Sydneylecturer Esther Klein observes: "In the perception of the vast majority of readers, whoever authored the coreZhuangzitextwasMaster Zhuang."[6] The only version of theZhuangziknown to exist in its entirety consists of 33 chapters originally prepared around AD 300 by theJin-erascholarGuo Xiang(252–312), who reduced the text from an earlier form of 52 chapters. The first 7 of these, referred to as the 'inner chapters' (內篇;nèipiān), were considered even before Guo to have been wholly authored by Zhuang Zhou himself. 
This attribution has been traditionally accepted since, and is still assumed by many modern scholars.[7]The original authorship of the remaining 26 chapters has been the subject of perennial debate: they were divided by Guo into 15 'outer chapters' (外篇;wàipiān) and 11 'miscellaneous chapters' (雜篇;zápiān).[8] Today, it is generally accepted that the outer and miscellaneous chapters were the result of a process of "accretion and redaction" in which later authors "[responded] to the scintillating brilliance" of the original inner chapters,[9]although close intertextual analysis does not support the inner chapters comprising the earliest stratum.[10]Multiple authorship over time was a typical feature of Warring States texts of this genre.[11]A limited consensus has been established regarding five distinct "schools" of authorship, each responsible for their own layers of substance within the text.[12]Despite the lack of traceable attribution, modern scholars generally accept that the surviving chapters were originally composed between the 4th and 2nd centuries BC.[13] Excepting textual analysis, details of the text's history prior to theHan dynasty(202 BC – 220 AD) are largely unknown. Traces of its influence on the philosophy of texts written during the lateWarring States period, such as theGuanzi,Han FeiziandHuainanzi, suggest that theZhuangzi'sintellectual lineage had already been fairly influential in the states of Qi and Chu by the 3rd century BC.[14]Sima Qianrefers to theZhuangzias a 100,000-character work in theShiji, and references several chapters present in the received text.[15] Many scholars consider aZhuangzicomposed of 52 chapters, as attested by theBook of Hanin 111 AD, to have been the original form of the text.[16]During the late 1st century BC, the entire Han imperial library—including its edition of theZhuangzi—was subject to considerable redaction and standardization by the polymathLiu Xiang(77–6 BC) and his sonLiu Xin(c.46 BC– AD 23). All extant copies of theZhuangziultimately derive from a version that was further edited and redacted to 33 chapters byGuo Xiangc.300 AD,[16]who worked from the material previously edited by Liu. Guo plainly stated that he had made considerable edits to the outer and miscellaneous chapters in an attempt to preserve Zhuang Zhou's original ideas from later distortions, in a way that "did not hesitate to impose his personal understanding and philosophical preferences on the text".[17]The received text as edited by Guo is approximately 63,000 characters long—around two-thirds the attested length of the Han-era manuscript. While none are known to exist in full, versions of the text unaffected by both the Guo and Liu revisions survived into theTang dynasty(618–907), with the existing fragments hinting at the folkloric nature of the material removed by Guo.[18] Portions of theZhuangzihave been found among thebamboo sliptexts discovered in tombs dating to the earlyHan dynasty, particularly at theShuangguduisite nearFuyanginAnhui, and theMount Zhangjiasite nearJingzhouinHubei. The earlierGuodian Chu Slips—unearthed nearJingmen, Hubei, and dating to the Warring States periodc.300 BC—contain what appears to be a short fragment parallel to the "Ransacking Coffers" chapter (No.10 of 33).[8] TheDunhuang manuscripts—discovered in the early 20th century byWang Yuanlu, then obtained and analysed by the Hungarian-British explorerAurel Steinand the French sinologistPaul Pelliot—contain numerousZhuangzifragments dating to the early Tang dynasty. 
Stein and Pelliot took most of the manuscripts back to Europe; they are presently held at theBritish Libraryand theBibliothèque nationale de France. TheZhuangzifragments among the manuscripts constitute approximately twelve chapters of Guo Xiang's edition.[19] AZhuangzimanuscript dating to theMuromachi period(1338–1573) is preserved in theKōzan-jitemple inKyoto; it is considered one of Japan's national treasures. The manuscript has seven complete selections from the outer and miscellaneous chapters, and is believed to be a close copy of a 7th-century annotated edition written by the Chinese Taoist masterCheng Xuanying.[20] TheZhuangziconsists ofanecdotes,allegories,parables, andfablesthat are often humorous or irreverent in nature. Most of these are fairly short and simple, such as the humans "Lickety" and "Split" drilling seven holes into the primordial "Wonton" (No. 7), or Zhuang Zhou being discovered sitting and drumming on a basin after his wife dies (No. 18). A few are longer and more complex, like the story ofLie Yukouand themagus, or the account of theYellow Emperor's music (both No. 14). Most of the stories within theZhuangziseem to have been invented by Zhuang Zhou himself. This distinguishes the text from other works of the period, where anecdotes generally only appear as occasional interjections, and were usually drawn from existingproverbsor legends.[21] Some stories are completely whimsical, such as the strange description of evolution from "misty spray" through a series of substances and insects to horses and humans (No. 18), while a few other passages seem to be "sheer playful nonsense" which read likeLewis Carroll's "Jabberwocky". TheZhuangziis full of quirky and fantastic character archetypes, such as "Mad Stammerer", "Fancypants Scholar", "Sir Plow", and a man who fancies that his left arm will turn into a rooster, his right arm will turn into a crossbow, and his buttocks will become cartwheels.[22] A master of language, Zhuang Zhou sometimes engages in logic and reasoning, but then turns it upside down or carries the arguments to absurdity to demonstrate the limitations of human knowledge and the rational world. SinologistVictor H. Maircompares Zhuang Zhou's process of reasoning toSocratic dialogue—exemplified by the debate between Zhuang Zhou and fellow philosopherHuiziregarding the "joy of fish" (No. 17). Mair additionally characterizes Huizi's paradoxes near the end of the book as being "strikingly like those ofZeno of Elea".[23] The most famous of allZhuangzistories appears at the end of the second chapter, "On the Equality of Things", and consists of a dream being briefly recalled. 昔者莊周夢為胡蝶,栩栩然胡蝶也,自喻適志與。不知周也。Once, Zhuang Zhou dreamed he was a butterfly, a butterfly flitting and fluttering about, happy with himself and doing as he pleased. He didn't know that he was Zhuang Zhou.俄然覺,則蘧蘧然周也。不知周之夢為胡蝶與,胡蝶之夢為周與。周與胡蝶,則必有分矣。此之謂物化。Suddenly he woke up and there he was, solid and unmistakable Zhuang Zhou. But he didn't know if he was Zhuang Zhou who had dreamt he was a butterfly, or a butterfly dreaming that he was Zhuang Zhou. Between Zhuang Zhou and the butterfly there must be some distinction! This is called the Transformation of Things.
The image of Zhuang Zhou wondering if he was a man who dreamed of being a butterfly or a butterfly dreaming of being a man became so well known that whole dramas have been written on its theme.[25]In the passage, Zhuang Zhou "[plays] with the theme of transformation",[25]illustrating that "the distinction between waking and dreaming is anotherfalse dichotomy. If [one] distinguishes them, how can [one] tell if [one] is now dreaming or awake?"[26] Another well-known passage dubbed "The Death of Wonton" illustrates the dangers Zhuang Zhou saw in going against the innate nature of things.[27] 南海之帝為儵,北海之帝為忽,中央之帝為渾沌。儵與忽時相與遇於渾沌之地,渾沌待之甚善。儵與忽謀報渾沌之德,曰:人皆有七竅,以視聽食息,此獨無有,嘗試鑿之。日鑿一竅,七日而渾沌死。The emperor of the Southern Seas was Lickety, the emperor of the Northern Sea was Split, and the emperor of the Centre was Wonton. Lickety and Split often met each other in the land of Wonton, and Wonton treated them very well. Wanting to repay Wonton's kindness, Lickety and Split said, "All people have seven holes for seeing, hearing, eating, and breathing. Wonton alone lacks them. Let's try boring some holes for him." So every day they bored one hole [in him], and on the seventh day Wonton died. Zhuang Zhou believed that the greatest of all human happiness could be achieved through a higher understanding of the nature of things, and that in order to develop oneself fully one needed to express one's innate ability.[25] Chapter 17 contains a well-known exchange between Zhuang Zhou and Huizi, featuring a heavy use of wordplay; it has been compared to aSocratic dialogue.[23] 莊子與惠子遊於濠梁之上。莊子曰:儵魚出遊從容,是魚樂也。Zhuangzi and Huizi were enjoying themselves on the bridge over the Hao River. Zhuangzi said, "Theminnowsare darting about free and easy! This is how fish are happy."惠子曰:子非魚,安知魚之樂。莊子曰:子非我,安知我不知魚之樂。Huizi replied, "You are not a fish. How[a]do you know that the fish are happy?" Zhuangzi said, "You are not I. How do you know that I do not know that the fish are happy?"惠子曰:我非子,固不知子矣;子固非魚也,子之不知魚之樂全矣。Huizi said, "I am not you, to be sure, so of course I don't know about you. But you obviously are not a fish; so the case is complete that you do not know that the fish are happy."莊子曰:請循其本。子曰汝安知魚樂云者,既已知吾知之而問我,我知之濠上也。Zhuangzi said, "Let's go back to the beginning of this. You said, How do you know that the fish are happy; but in asking me this, you already knew that I know it. I know it right here above the Hao." The precise point Zhuang Zhou intends to make in the debate is not entirely clear. The text appears to stress that "knowing" a thing is simply a state of mind: moreover, that it is not possible to determine whether "knowing" has any objective meaning. This sequence has been cited as an example of Zhuang Zhou's mastery of language, with reason subtly employed in order to make an anti-rationalist point.[32] A passage in chapter 18 describes Zhuang Zhou's reaction following the death of his wife, expressing a view of death as something not to be feared. 莊子妻死,惠子弔之,莊子則方箕踞鼓盆而歌。惠子曰:與人居長子,老身死,不哭亦足矣,又鼓盆而歌,不亦甚乎。Zhuangzi's wife died. When Huizi went to convey his condolences, he found Zhuangzi sitting with his legs sprawled out, pounding on a tub and singing. "You lived with her, she brought up your children and grew old," said Huizi. "It should be enough simply not to weep at her death. But pounding on a tub and singing—this is going too far, isn't it?"莊子曰:不然。是其始死也,我獨何能無概然。察其始而本無生,非徒無生也,而本無形,非徒無形也,而本無氣。雜乎芒芴之間,變而有氣,氣變而有形,形變而有生,今又變而之死,是相與為春秋冬夏四時行也。Zhuangzi said, "You're wrong. 
When she first died, do you think I didn't grieve like anyone else? But I looked back to her beginning and the time before she was born. Not only the time before she was born, but the time before she had a body. Not only the time before she had a body, but the time before she had a spirit. In the midst of the jumble of wonder and mystery a change took place and she had a spirit. Another change and she had a body. Another change and she was born. Now there's been another change and she's dead. It's just like the progression of the four seasons, spring, summer, fall, winter."人且偃然寢於巨室,而我噭噭然隨而哭之,自以為不通乎命,故止也。"Now she's going to lie down peacefully in a vast room. If I were to follow after her bawling and sobbing, it would show that I don't understand anything about fate. So I stopped." Zhuang Zhou seems to have viewed death as a natural process of transformation to be wholly accepted, where a person gives up one form of existence and assumes another.[34]In the second chapter, Zhuang Zhou makes the point that, for all humans know, death may in fact be better than life: "How do I know that loving life is not a delusion? How do I know that in hating death I am not like a man who, having left home in his youth, has forgotten the way back?"[35]His writings teach that "the wise man or woman accepts death with equanimity and thereby achieves absolute happiness."[34] Zhuang Zhou's own death is depicted in chapter 32, pointing to the body of lore that grew up around him in the decades following his death.[13]It serves to embody and reaffirm the ideas attributed to Zhuang Zhou throughout the previous chapters. 莊子將死,弟子欲厚葬之。莊子曰:吾以天地為棺槨,以日月為連璧,星辰為珠璣,萬物為齎送。吾葬具豈不備邪。何以加此。When Master Zhuang was about to die, his disciples wanted to give him a lavish funeral. Master Zhuang said: "I take heaven and earth as my inner and outer coffins, the sun and moon as my pair ofjade disks, the stars and constellations as my pearls and beads, the ten thousand things as my funerary gifts. With my burial complete, how is there anything left unprepared? What shall be added to it?"弟子曰:吾恐烏鳶之食夫子也。莊子曰:在上為烏鳶食,在下為螻蟻食,奪彼與此,何其偏也。The disciples said: "We are afraid that the crows andkiteswill eat you, Master!" Master Zhuang said: "Above ground I'd be eaten by crows and kites, below ground I'd be eaten bymole cricketsand ants. You rob the one and give to the other—how skewed would that be?" The principles and attitudes expressed in theZhuangziform the core of philosophicalTaoism. The text recommends embracing a natural spontaneity in order to better align one's inner self with the cosmic "Way". It also encourages keeping a distance from politics and social obligations, accepting death as a natural transformation, and appreciating things otherwise viewed as useless or lacking purpose. The text implores the reader to reject societal norms and conventional reasoning. The other major philosophical schools in ancient China—includingConfucianism,Legalism, andMohism—all proposed concrete social, political, and ethical reforms. By reforming both individuals and society as a whole, thinkers from these schools sought to alleviate human suffering, and ultimately solve the world's problems.[5]Contrarily, Zhuang Zhou believed the key to true happiness was to free oneself from worldly impingements through a principle of 'inaction' (wu wei)—action that is not based in purposeful striving or motivated by potential gain. 
As such, he fundamentally opposed systems that sought to impose order on individuals.[37][38] TheZhuangzidescribes the universe as being in a constant state of spontaneous change, which is not driven by any conscious God or force ofwill. It argues that humans, owing to their exceptional cognitive ability, tend to create artificial distinctions that remove them from the natural spontaneity of the universe. These include those of good versus bad, large versus small, and usefulness versus uselessness. It proposes that humans can achieve ultimate happiness by rejecting these distinctions, and living spontaneously in kind.[39]Zhuang Zhou often uses examples of craftsmen and artisans to illustrate the mindlessness and spontaneity he felt should characterize human action. AsBurton Watsondescribed, "the skilled woodcarver, the skilled butcher, the skilled swimmer does not ponder orratiocinateon the course of action he should take; his skill has become so much a part of him that he merely acts instinctively and spontaneously and, without knowing why, achieves success".[37]The term "wandering" (遊;yóu) is used throughout theZhuangzito describe how an enlightened person "wanders through all of creation, enjoying its delights without ever becoming attached to any one part of it".[37]The nonhuman characters throughout the text are often identified as being useful vehicles for metaphor. However, some recent scholarship has characterized theZhuangzias being "anti-anthropocentric" or even "animalistic" in the significance it ascribes to nonhuman characters. When viewed through this lens, theZhuangziquestions humanity's central place in the world, or even rejects the distinction between the human and natural worlds altogether.[40] Political positions in theZhuangzigenerally pertain to what governments should not do, rather than what they should do or how they may be reformed. The text seems to oppose formal government, viewing it as fundamentally problematic due to "the opposition between man and nature".[41]Zhuang Zhou attempts to illustrate that "as soon as government intervenes in natural affairs, it destroys all possibility of genuine happiness".[42]It is unclear whether Zhuang Zhou's positions amount to a form ofanarchism.[43] Western scholars have noted strong anti-rationalistthemes present throughout theZhuangzi. Whereas reason and logic as understood inAncient Greek philosophyproved foundational to the entire Western tradition, Chinese philosophers often preferred to rely on moral persuasion and intuition. Throughout Chinese history, theZhuangzisignificantly informed skepticism towards rationalism. In the text, Zhuang Zhou frequently turns logical arguments upside-down in order to satirize and discredit them. However, according to Mair he does not abandon language and reason altogether, but "only wishe[s] to point out that over-dependence on them could limit the flexibility of thought".[44]Confuciushimself is a recurring character in the text—sometimes engaging in invented debates withLaozi, where Confucius is consistently portrayed as being the less authoritative, junior figure of the two. In some appearances, Confucius is subjected to mockery and made "the butt of many jokes", while in others he is treated with unambiguous respect, intermittently serving as the "mouthpiece" for Zhuang Zhou's ideas.[45] TheZhuangziandTao Te Chingare considered to be the two fundamental texts in theTaoist tradition. 
It is accepted that some version of theTao Te Chinginfluenced the composition of theZhuangzi; however, the two works are distinct in their perspectives on the Tao itself. TheZhuangziuses the word "Tao" (道) less frequently than theTao Te Ching, with the former often using 'heaven' (天) in places the latter would use "Tao". While Zhuang Zhou discusses the personal process of following the Tao at length, compared to Laozi he articulates little about the nature of the Tao itself. TheZhuangzi's only direct description of the Tao is contained in "The Great Ancestral Teacher" (No. 6), in a passage "demonstrably adapted" from chapter 21 of theTao Te Ching. The inner chapters and theTao Te Chingagree that limitations inherent to human language preclude any sufficient description of the Tao. Meanwhile, imperfect descriptions are ubiquitous throughout both texts.[46] Of the texts written in China prior to its unification under theQin dynastyin 221 BC, theZhuangzimay have been the most influential on later literary works. For the period, it demonstrated an unparalleled creativity in its use of language.[47]Virtually every major Chinese writer or poet in history, fromSima XiangruandSima Qianduring theHan dynasty,Ruan JiandTao Yuanmingduring theSix Dynasties,Li Baiduring theTang dynasty, toSu ShiandLu Youin theSong dynastywere "deeply imbued with the ideas and artistry of theZhuangzi".[48] Traces of theZhuangzi's influence in lateWarring States periodphilosophical texts such as theGuanzi,Han Feizi, andLüshi Chunqiusuggest that Zhuang Zhou's intellectual lineage was already influential by the 3rd century BC. During theQinandHan dynasties, with their respective state-sponsoredLegalistandConfucianideologies, theZhuangzidoes not seem to have been highly regarded. One exception is "Fuon the Owl" (鵩鳥賦;Fúniǎo fù)—the earliest known definitive example offurhapsody, written by the Han-era scholarJia Yiin 170 BC. Jia does not reference theZhuangziby name, but cites it for one-sixth of the poem.[49] TheSix Dynastiesperiod (AD 220–589) that followed the collapse of the Han saw Confucianism temporarily surpassed by a resurgence of interest in Taoism and old divination texts such as theI Ching, with many poets, artists, and calligraphers of this period drawing influence from theZhuangzi.[50]The poetsRuan JiandXi Kang—both members of theSeven Sages of the Bamboo Grove—admired the work; an essay authored by Ruan entitled "Discourse on Summing Up theZhuangzi" (達莊論;Dá Zhuāng lùn) is still extant.[21] TheZhuangzihas been called "the most important of all the Daoist writings",[51]with the inner chapters embodying the core ideas of philosophical Taoism.[13]During the 4th century AD, theZhuangzibecame a major source of imagery and terminology for theShangqing School, a new form of Taoism that had become popular among the aristocracy of theJin dynasty(266–420). Shangqing School Taoism borrowed numerous terms from theZhuangzi, such as "perfected man" (真人;zhēnrén), "Great Clarity" (太清;Tài Qīng), and "fasting the mind" (心齋;xīn zhāi). While their use of these terms was distinct from that found in theZhuangziitself, their incidence still demonstrates the text's influence on Shangqing thought.[52] TheZhuangziwas very influential in the adaptation of Buddhism to Chinese culture after Buddhism was first brought to China from India in the 1st century AD.Zhi Dun, China's first aristocratic Buddhist monk, wrote a prominent commentary to theZhuangziin the mid-4th century. 
TheZhuangzialso played a significant role in the formation ofChan Buddhism—and therefore ofZenin Japan—which grew out of "a fusion of Buddhist ideology and ancient Daoist thought." Traits of Chan practice traceable to theZhuangziinclude a distrust of language and logic, an insistence that the "Way" can be found in everything, even dung and urine, and a fondness for dialogues based onkoans.[52] In 742, an imperial proclamation fromEmperor Xuanzong of Tang(r.712–756) canonized theZhuangzias one of theChinese classics, awarding it the honorific title 'True Scripture of Southern Florescence' (南華真經;Nánhuá zhēnjīng).[53]Nevertheless, most scholars throughout Chinese history did not consider it as being a "classic" per se, due to its non-Confucian nature.[54] Throughout Chinese history, theZhuangziremained the pre-eminent expression of core Taoist ideals. The 17th-century scholarGu Yanwulamented the flippant use of theZhuangzion theimperial examinationessays as representing a decline in traditional morals at the end of theMing dynasty(1368–1644).[55]Jia Baoyu, the main protagonist of the classic 18th-century novelDream of the Red Chamber, often turns to theZhuangzifor comfort amid the strife in his personal and romantic relationships.[56]The story of Zhuang Zhou drumming on a tub and singing after the death of his wife inspired an entire tradition of folk music in the central Chinese provinces ofHubeiandHunancalled "funeral drumming" (喪鼓;sànggǔ) that survived into the 18th and 19th centuries.[57] Outside of East Asia, theZhuangziis not as popular as theTao Te Chingand is rarely known by non-scholars. A number of prominent scholars have attempted to bring theZhuangzito wider attention among Western readers. In 1939, the British sinologistArthur Waleydescribed it as "one of the most entertaining as well as one of the profoundest books in the world".[58]In the introduction to his 1994 translation, Victor H. Mair wrote that he "[felt] a sense of injustice that theDao De Jingis so well known to my fellow citizens while theZhuangziis so thoroughly ignored, because I firmly believe that the latter is in every respect a superior work."[59] Western thinkers who have been influenced by the text includeMartin Heidegger, who became deeply interested in the oeuvres of Laozi and Zhuang Zhou during the 1930s. In particular, Heidegger was drawn to theZhuangzi's treatment of usefulness versus uselessness. He explicitly references one of the debates between Zhuang Zhou and Huizi (No. 24) within the third dialogue ofCountry Path Conversations, written as theSecond World Warwas coming to an end.[60]In the dialogue, Heidegger's characters conclude that "pure waiting" as expressed in theZhuangzi—that is, waiting for nothing—is the only viable mindset for the German people in the wake of the failure ofnational socialismand Germany's comprehensive defeat.[61]
https://en.wikipedia.org/wiki/The_Butterfly_Dream
Information extraction (IE) is the task of automatically extracting structured information from unstructured and/or semi-structured machine-readable documents and other electronically represented sources. Typically, this involves processing human language texts by means of natural language processing (NLP).[1] Recent activities in multimedia document processing, such as automatic annotation and content extraction from images, audio, video and documents, can also be seen as information extraction. Recent advances in NLP techniques have allowed for significantly improved performance compared to previous years.[2] An example is the extraction, from newswire reports of corporate mergers, of a formal relation linking the merging companies and the date of the merger, starting from an online news sentence announcing the deal. A broad goal of IE is to allow computation to be done on the previously unstructured data. A more specific goal is to allow automated reasoning about the logical form of the input data. Structured data is semantically well-defined data from a chosen target domain, interpreted with respect to category and context. Information extraction is part of a greater puzzle which deals with the problem of devising automatic methods for text management, beyond its transmission, storage and display. The discipline of information retrieval (IR)[3] has developed automatic methods, typically of a statistical flavor, for indexing large document collections and classifying documents. Another complementary approach is that of natural language processing (NLP), which has modelled human language processing with considerable success when taking into account the magnitude of the task. In terms of both difficulty and emphasis, IE deals with tasks in between both IR and NLP. In terms of input, IE assumes the existence of a set of documents in which each document follows a template, i.e. describes one or more entities or events in a manner that is similar to those in other documents but differing in the details. As an example, consider a group of newswire articles on Latin American terrorism, with each article presumed to be based upon one or more terroristic acts. For any given IE task, we also define a template, which is a case frame (or set of case frames) to hold the information contained in a single document. For the terrorism example, a template would have slots corresponding to the perpetrator, victim, and weapon of the terroristic act, and the date on which the event happened. An IE system for this problem is required to "understand" an attack article only enough to find data corresponding to the slots in this template. Information extraction dates back to the late 1970s in the early days of NLP.[4] An early commercial system from the mid-1980s was JASPER, built for Reuters by the Carnegie Group Inc with the aim of providing real-time financial news to financial traders.[5] Beginning in 1987, IE was spurred by a series of Message Understanding Conferences (MUC). MUC is a competition-based conference[6] that focused on a number of specific domains. Considerable support came from the U.S.
Defense Advanced Research Projects Agency (DARPA), which wished to automate mundane tasks performed by government analysts, such as scanning newspapers for possible links to terrorism.[citation needed] The present significance of IE pertains to the growing amount of information available in unstructured form. Tim Berners-Lee, inventor of the World Wide Web, refers to the existing Internet as the web of documents[7] and advocates that more of the content be made available as a web of data.[8] Until this transpires, the web largely consists of unstructured documents lacking semantic metadata. Knowledge contained within these documents can be made more accessible for machine processing by means of transformation into relational form, or by marking up with XML tags. An intelligent agent monitoring a news data feed requires IE to transform unstructured data into something that can be reasoned with. A typical application of IE is to scan a set of documents written in a natural language and populate a database with the information extracted.[9] Applying information extraction to text is linked to the problem of text simplification: the overall goal is to create a structured, more easily machine-readable view of the information present in free text. Typical IE tasks and subtasks include named entity recognition, coreference resolution, relationship extraction, terminology extraction, and event extraction. This list is not exhaustive; the exact scope of IE activities is not commonly agreed upon, and many approaches combine multiple sub-tasks of IE in order to achieve a wider goal. Machine learning, statistical analysis and/or natural language processing are often used in IE. IE on non-text documents is becoming an increasingly interesting topic[when?] in research, and information extracted from multimedia documents can now[when?] be expressed in a high-level structure as is done for text. This naturally leads to the fusion of extracted information from multiple kinds of documents and sources. IE has been the focus of the MUC conferences. The proliferation of the Web, however, intensified the need for developing IE systems that help people cope with the enormous amount of data available online. Systems that perform IE from online text should meet the requirements of low cost, flexibility in development and easy adaptation to new domains. MUC systems fail to meet those criteria. Moreover, linguistic analysis performed for unstructured text does not exploit the HTML/XML tags and the layout formats that are available in online texts. As a result, less linguistically intensive approaches have been developed for IE on the Web using wrappers, which are sets of highly accurate rules that extract a particular page's content. Manually developing wrappers has proved to be a time-consuming task, requiring a high level of expertise. Machine learning techniques, either supervised or unsupervised, have been used to induce such rules automatically. Wrappers typically handle highly structured collections of web pages, such as product catalogs and telephone directories. They fail, however, when the text type is less structured, which is also common on the Web. Recent effort on adaptive information extraction motivates the development of IE systems that can handle different types of text, from well-structured to almost free text (where common wrappers fail), including mixed types. Such systems can exploit shallow natural language knowledge and thus can be also applied to less structured texts.
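The template-and-slots view described above lends itself to a very small rule-based illustration. The Python sketch below fills a hypothetical MergerBetween(buyer, target, date) template from a single invented news sentence using one hand-written regular expression; the relation name, slot names, pattern, and sentence are illustrative assumptions rather than part of any system described here, and a real extractor would need many such rules or a learned model.

```python
import re

# Minimal sketch of template-style slot filling with a hand-written rule.
# The relation name, slot names, and the sample sentence are illustrative
# inventions, not taken from any particular IE system or corpus.
MERGER_PATTERN = re.compile(
    r"(?P<buyer>[A-Z][\w&.]*(?: [A-Z][\w&.]*)*) "
    r"(?:announced the acquisition of|agreed to merge with) "
    r"(?P<target>[A-Z][\w&.]*(?: [A-Z][\w&.]*)*) "
    r"on (?P<date>\w+ \d{1,2}, \d{4})"
)

def extract_merger(sentence):
    """Fill the slots of a hypothetical MergerBetween(buyer, target, date) template."""
    match = MERGER_PATTERN.search(sentence)
    if match is None:
        return None  # the rule does not cover this sentence
    return {"relation": "MergerBetween", **match.groupdict()}

if __name__ == "__main__":
    news = "Foo Inc announced the acquisition of Bar Corp on April 3, 2024."
    print(extract_merger(news))
    # {'relation': 'MergerBetween', 'buyer': 'Foo Inc', 'target': 'Bar Corp', 'date': 'April 3, 2024'}
```

Rule sets like this are labor-intensive to write and brittle to maintain, which is one motivation for the machine-learning approaches mentioned above that induce extraction rules automatically.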
A recent[when?]development is Visual Information Extraction,[16][17]that relies on rendering a webpage in a browser and creating rules based on the proximity of regions in the rendered web page. This helps in extracting entities from complex web pages that may exhibit a visual pattern, but lack a discernible pattern in the HTML source code. The following standard approaches are now widely accepted: Numerous other approaches exist for IE including hybrid approaches that combine some of the standard approaches previously listed.
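To make the contrast with free-text extraction concrete, here is a similarly minimal sketch of the wrapper idea discussed above: a set of rules keyed to the markup of one particular kind of page. The page layout, class names, and fields below are invented for illustration; the point is only that such rules are highly accurate on pages that follow the expected template and fail as soon as the layout changes, which is part of why wrapper induction and the rendering-based visual techniques mentioned above were developed.

```python
import re

# Minimal sketch of a "wrapper": page-template-specific rules that pull fields
# out of a semi-structured product-catalog page. The HTML layout, field names,
# and sample page are invented for illustration; real wrappers (hand-written or
# induced by machine learning) are tied to the actual markup of a target site.
SAMPLE_PAGE = """
<li class="product"><span class="name">Widget Mk I</span><span class="price">$9.99</span></li>
<li class="product"><span class="name">Widget Mk II</span><span class="price">$14.50</span></li>
"""

ITEM_RULE = re.compile(
    r'<span class="name">(?P<name>[^<]+)</span>'
    r'<span class="price">\$(?P<price>[\d.]+)</span>'
)

def apply_wrapper(html):
    """Return one record per product item matched by the wrapper's rules."""
    return [
        {"name": m.group("name"), "price": float(m.group("price"))}
        for m in ITEM_RULE.finditer(html)
    ]

if __name__ == "__main__":
    for record in apply_wrapper(SAMPLE_PAGE):
        print(record)
    # {'name': 'Widget Mk I', 'price': 9.99}
    # {'name': 'Widget Mk II', 'price': 14.5}
```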
https://en.wikipedia.org/wiki/Information_extraction
A zero-marking language is one with no grammatical marks on either the dependents (or the modifiers) or the heads (or the nuclei) that show the relationship between different constituents of a phrase. Pervasive zero marking is very rare, but instances of zero marking in various forms occur in quite a number of languages. Vietnamese and Indonesian are two national languages listed in the World Atlas of Language Structures as having zero marking. In many East and Southeast Asian languages, such as Thai and Chinese, the head verb and its dependents are not marked for any arguments or for the nouns' roles in the sentence. On the other hand, possession is marked in such languages by the use of clitic particles between possessor and possessed. Some languages, such as many dialects of Arabic, use a similar process, called juxtaposition, to indicate possessive relationships. In Arabic, two nouns next to each other can indicate a possessed-possessor construction: كتب مريم kutub Maryam "Maryam's books" (literally "books Maryam"). In Classical and Modern Standard Arabic, however, the second noun is in the genitive case, as in كتبُ مريمٍ kutub-u Maryam-a. Zero marking, when it occurs, tends to show a strong relationship with word order. Languages in which zero marking is widespread are almost all subject–verb–object, perhaps because verb-medial order allows two or more nouns to be recognized as such much more easily than subject–object–verb, object–subject–verb, verb–subject–object, or verb–object–subject order, in which two nouns might be adjacent and their roles in a sentence thus possibly confused.[citation needed] It has been suggested that verb-final languages may be likely to develop verb-medial order if marking on nouns is lost.[citation needed]
https://en.wikipedia.org/wiki/Zero-marking_language
The following lists identify, characterize, and link to more thorough information onfile systems. Many olderoperating systemssupport only their one "native" file system, which does not bear any name apart from the name of the operating system itself. Disk file systems are usually block-oriented. Files in a block-oriented file system are sequences of blocks, often featuring fully random-access read, write, and modify operations. These file systems have built-in checksumming and either mirroring or parity for extra redundancy on one or several block devices: Solid state media, such asflash memory, are similar to disks in their interfaces, but have different problems. At low level, they require special handling such aswear levelingand differenterror detection and correctionalgorithms. Typically a device such as asolid-state drivehandles such operations internally and therefore a regular file system can be used. However, for certain specialized installations (embedded systems, industrial applications) a file system optimized for plain flash memory is advantageous. Inrecord-oriented file systemsfiles are stored as a collection ofrecords. They are typically associated withmainframeandminicomputeroperating systems. Programs read and write whole records, rather than bytes or arbitrary byte ranges, and can seek to a record boundary but not within records. The more sophisticated record-oriented file systems have more in common with simpledatabasesthan with other file systems. Shared-disk file systems (also calledshared-storage file systems,SAN file system,Clustered file systemor evencluster file systems) are primarily used in astorage area networkwhere all nodes directly access theblock storagewhere the file system is located. This makes it possible for nodes to fail without affecting access to the file system from the other nodes. Shared-disk file systems are normally used in ahigh-availability clustertogether with storage on hardwareRAID. Shared-disk file systems normally do not scale over 64 or 128 nodes. Shared-disk file systems may besymmetricwheremetadatais distributed among the nodes orasymmetricwith centralizedmetadataservers. Distributed file systemsare also called network file systems. Many implementations have been made, they are location dependent and they haveaccess control lists(ACLs), unless otherwise stated below. Distributedfault-tolerantreplication of data between nodes (between servers or servers/clients) forhigh availabilityandoffline(disconnected) operation. Distributedparallelfile systems stripe data over multiple servers for high performance. They are normally used inhigh-performance computing (HPC). Some of the distributed parallel file systems use anobject storage device(OSD) (in Lustre called OST) for chunks of data together with centralizedmetadataservers. Distributed file systems, which also areparallelandfault tolerant, stripe and replicate data over multiple servers for high performance and to maintaindata integrity. Even if a server fails no data is lost. The file systems are used in bothhigh-performance computing (HPC)andhigh-availability clusters. All file systems listed here focus onhigh availability,scalabilityand high performance unless otherwise stated below. In development: Some of these may be calledcooperative storage cloud. These are not really file systems; they allow access to file systems from an operating system standpoint.
https://en.wikipedia.org/wiki/List_of_file_systems#Distributed_file_systems
Information overload(also known asinfobesity,[1][2]infoxication,[3]orinformation anxiety[4]) is the difficulty in understanding an issue andeffectively making decisionswhen one hastoo much information(TMI) about that issue,[5]and is generally associated with the excessive quantity of daily information.[6]The term "information overload" was first used as early as 1962 by scholars in management and information studies, including in Bertram Gross' 1964 bookThe Managing of Organizations[7][8]and was further popularized byAlvin Tofflerin his bestselling 1970 bookFuture Shock.[9]Speier et al. (1999) said that if input exceeds the processing capacity, information overload occurs, which is likely to reduce the quality of the decisions.[10] In a newer definition, Roetzel (2019) focuses on time and resources aspects. He states that when a decision-maker is given many sets of information, such as complexity, amount, and contradiction, the quality of its decision is decreased because of the individual's limitation of scarce resources to process all the information and optimally make the best decision.[11] The advent of moderninformation technologyhas been a primary driver of information overload on multiple fronts: in quantity produced, ease of dissemination, and breadth of the audience reached. Longstanding technological factors have been further intensified by the rise ofsocial mediaincluding theattention economy, which facilitatesattention theft.[12][13]In the age of connective digital technologies,informatics, theInternet culture(or the digital culture), information overload is associated with over-exposure, excessive viewing of information, and input abundance of information and data. Even though information overload is linked to digital cultures and technologies,Ann Blairnotes that the term itself predates modern technologies, as indications of information overload were apparent when humans began collecting manuscripts, collecting, recording, and preserving information.[14]One of the first social scientists to notice the negative effects of information overload was the sociologistGeorg Simmel(1858–1918), who hypothesized that the overload of sensations in the modern urban world caused city dwellers to become jaded and interfered with their ability to react to new situations.[15]The social psychologistStanley Milgram(1933–1984) later used the concept of information overload to explainbystander behavior. Psychologists have recognized for many years that humans have a limited capacity to store current information in memory. PsychologistGeorge Armitage Millerwas very influential in this regard, proposing that people can process about seven chunks of information at a time. Miller says that under overload conditions, people become confused and are likely to make poorer decisions based on the information they have received as opposed to making informed ones. A quite early example of the term "information overload" can be found in an article by Jacob Jacoby, Donald Speller and Carol Kohn Berning, who conducted an experiment on 192 housewives which was said to confirm the hypothesis that more information about brands would lead to poorerdecision making. Long before that, the concept was introduced by Diderot, although it was not by the term "information overload": As long as the centuries continue to unfold, the number of books will grow continually, and one can predict that a time will come when it will be almost as difficult to learn anything from books as from the direct study of the whole universe. 
It will be almost as convenient to search for some bit of truth concealed in nature as it will be to find it hidden away in an immense multitude of bound volumes. In the internet age, the term "information overload" has evolved into phrases such as "information glut", "data smog", and "data glut" (Data Smog, Shenk, 1997).[16]In his abstract, Kazi Mostak Gausul Hoq commented that people often experience an "information glut" whenever they struggle with locating information from print, online, or digital sources.[17]What was once a term grounded incognitive psychologyhas evolved into a rich metaphor used outside the world of academia. Information overload has been documented throughout periods where advances in technology have increased a production of information. As early as the 3rd or 4th century BC, people regarded information overload with disapproval. Around this time, inEcclesiastes12:12, the passage revealed the writer's comment "of making books there is no end" and in the 1st century AD,Seneca the Eldercommented, that "the abundance of books is distraction". In 1255, the Dominican Vincent of Beauvais, also commented on the flood of information: "the multitude of books, the shortness of time and the slipperiness of memory."[14]Similar complaints around the growth of books were also mentioned in China. There were also information enthusiasts. TheLibrary of Alexandriawas established around the 3rd century BCE or 1st century Rome, which introduced acts of preserving historical artifacts. Museums and libraries established universal grounds of preserving the past for the future, but much like books, libraries were only granted with limited access. Renaissance humanists always had a desire to preserve their writings and observations,[14]but were only able to record ancient texts by hand because books were expensive and only the privileged and educated could afford them. Humans experience an overload in information by excessively copying ancient manuscripts and replicating artifacts, creating libraries and museums that have remained in the present.[14]Around 1453 AD,Johannes Gutenberginvented theprinting pressand this marked another period of information proliferation. As a result of lowering production costs, generation of printed materials ranging frompamphlets,manuscriptsto books were made available to the average person. Following Gutenberg's invention, the introduction of mass printing began in Western Europe. Information overload was often experienced by the affluent, but the circulation of books were becoming rapidly printed and available at a lower cost, allowing the educated to purchase books. Information became recordable, by hand, and could be easily memorized for future storage and accessibility. This era marked a time where inventive methods were established to practice information accumulation. Aside from printing books and passage recording, encyclopedias and alphabetical indexes were introduced, enabling people to save and bookmark information for retrieval. These practices marked both present and future acts of information processing. 
Swiss scientistConrad Gessnercommented on the increasing number of libraries and printed books,[14]and was most likely the first academic who discussed the consequences of information overload as he observed how "unmanageable" information came to be after the creation of the printing press.[18] Blair notes that while scholars were elated with the number of books available to them, they also later experienced fatigue with the amount of excessive information that was readily available and overpopulated them.Scholarscomplained about the abundance of information for a variety of reasons, such as the diminishing quality of text asprintersrushed to print manuscripts and the supply of new information being distracting and difficult to manage. Erasmus, one of the many recognized humanists of the 16th century asked, "Is there anywhere on earth exempt from these swarms of new books?".[19] Many grew concerned with the rise of books in Europe, especially in England, France, and Germany. From 1750 to 1800, there was a 150% increase in the production of books. In 1795, German bookseller and publisher Johann Georg Heinzmann said "no nation printed as much as the Germans" and expressed concern about Germans reading ideas and no longer creating original thoughts and ideas.[20] To combat information overload, scholars developed their own information records for easier and simply archival access and retrieval. Modern Europe compilers used paper and glue to cut specific notes and passages from a book and pasted them to a new sheet for storage.Carl Linnaeusdeveloped paper slips, often called his botanical paper slips, from 1767 to 1773, to record his observations. Blair argues that these botanical paper slips gave birth to the "taxonomical system" that has endured to the present, influencing both the mass inventions of the index card and the library card catalog.[19] In his book,The Information: A History, A Theory, A Flood,published in 2011, authorJames Gleicknotes that engineers began taking note of the concept of information, quickly associated it in a technical sense: information was both quantifiable and measurable. He discusses how information theory was created to first bridge mathematics, engineering, and computing together, creating an information code between the fields. English speakers from Europe often equated "computer science" to "informatique,informatica, andInformatik".[21]This leads to the idea that all information can be saved and stored on computers, even if information experiences entropy. But at the same time, the term information, and its many definitions have changed.[citation needed] In the second half of the 20th century, advances in computer and information technology led to the creation of theInternet. In the modernInformation Age, information overload is experienced as distracting and unmanageable information such asemail spam, email notifications,instant messages,Tweetsand Facebook updates in the context of the work environment.[22]Social mediahas resulted in "social information overload", which can occur on sites like Facebook, and technology is changing to serve our social culture. In today's society, day-to-day activities increasingly involve the technological world where information technology exacerbates the number of interruptions that occur in the work environment.[23]Management may be even more disrupted in their decision making, and may result in more poor decisions. 
Thus, thePIECESframework mentions information overload as a potential problem in existing information systems.[24] As the world moves into a new era ofglobalization, an increasing number of people connect to the internet to conduct their own research[25]and are given the ability to contribute to publicly accessible data. This has elevated the risk for the spread of misinformation.[according to whom?] In a 2018 literature review, Roetzel indicates that information overload can be seen as a virus—spreading through (social) media and news networks.[11] The latest research hypothesizes that information overload is a multilevel phenomenon, i.e., there are different mechanisms responsible for its emergence at the individual, group, and the whole society levels, however, these levels are interlinked.[26] In a piece published bySlate, Vaughan Bell argues that "Worries about information overload are as old as information itself"[18]because each generation and century will inevitably experience a significant impact with technology. In the 21st century, Frank Furedi describes how an overload in information is metaphorically expressed as a flood, which is an indication that humanity is being "drowned" by the waves of data coming at it.[27]This includes how the human brain continues to process information whether digitally or not. Information overload can lead to "information anxiety", which is the gap between the information that is understood and the information that it is perceived must be understood. The phenomenon of information overload is connected to the field ofinformation technology(IT). IT corporate management implements training to "improve the productivity of knowledge workers". Ali F. Farhoomand and Don H. Drury note that employees often experience an overload in information whenever they have difficulty absorbing and assimilating the information they receive to efficiently complete a task because they feel burdened, stressed, and overwhelmed.[28] At New York's Web 2.0 Expo in 2008,Clay Shirky's speech indicated that information overload in the modern age is a consequence of a deeper problem, which he calls "filter failure",[29]where humans continue to overshare information with each other. This is due to the rapid rise of apps and unlimited wireless access. In the moderninformation age, information overload is experienced as distracting and unmanageable information such asemail spam, email notifications,instant messages,Tweets, and Facebook updates in the context of the work environment.Social mediahas resulted in "social information overload", which can occur on sites like Facebook, and technology is changing to serve our social culture. As people view increasing amounts of information in the form of news stories, emails, blog posts, Facebook statuses,Tweets,Tumblrposts and other new sources of information, they become their own editors,gatekeepers, and aggregators of information.[30]Social media platforms create a distraction as users attention spans are challenged once they enter an online platform. One concern in this field is that massive amounts of information can be distracting and negatively impact productivity anddecision-makingandcognitive control. Another concern is the "contamination" of useful information with information that might not be entirely accurate (information pollution). The general causes of information overload include: Email remains a major source of information overload, as people struggle to keep up with the rate of incoming messages. 
As well as filtering out unsolicited commercial messages (spam), users also have to contend with the growing use ofemail attachmentsin the form of lengthy reports, presentations, and media files.[31] A December 2007New York Timesblog post described email as "a $650 billion drag on the economy",[32]and theNew York Timesreported in April 2008 that "email has become the bane of some people's professional lives" due to information overload, yet "none of [the current wave of high-profile Internet startups focused on email] really eliminates the problem of email overload because none helps us prepare replies".[33] In January 2011, Eve Tahmincioglu, a writer forNBC News, wrote an article titled "It's Time to Deal With That Overflowing Inbox". Compiling statistics with commentary, she reported that there were 294 billion emails sent each day in 2010, up from 50 billion in 2009. Quoted in the article, workplace productivity expert Marsha Egan stated that people need to differentiate between working on email and sorting through it. This meant that rather than responding to every email right away, users should delete unnecessary emails and sort the others into action or reference folders first. Egan then went on to say "We are more wired than ever before, and as a result need to be more mindful of managing email or it will end up managing us."[34] The Daily TelegraphquotedNicholas Carr, former executive editor of theHarvard Business Reviewand the author ofThe Shallows: What The Internet Is Doing To Our Brains, as saying that email exploits a basic human instinct to search for new information, causing people to become addicted to "mindlessly pressing levers in the hope of receiving a pellet of social or intellectual nourishment". His concern is shared byEric Schmidt, chief executive ofGoogle, who stated that "instantaneous devices" and the abundance of information people are exposed to through email and other technology-based sources could be having an impact on the thought process, obstructing deep thinking, understanding, impeding the formation of memories and making learning more difficult. This condition of "cognitive overload" results in diminished information retaining ability and failing to connect remembrances to experiences stored in the long-term memory, leaving thoughts "thin and scattered".[35]This is also manifest in the education process.[36] In addition to email, theWorld Wide Webhas provided access to billions of pages of information. In many offices, workers are given unrestricted access to the Web, allowing them to manage their own research. The use ofsearch engineshelps users to find information quickly. However, information published online may not always be reliable, due to the lack of authority-approval or a compulsory accuracy check before publication. Internet information lacks credibility as the Web's search engines do not have the abilities to filter and manage information and misinformation.[37]This results in people having to cross-check what they read before using it for decision-making, which takes up more time.[citation needed] Viktor Mayer-Schönberger, author ofDelete: The Virtue of Forgetting in the Digital Age,argues that everyone can be a "participant" on the Internet, where they are all senders and receivers of information.[38]On the Internet, trails of information are left behind, allowing other Internet participants to share and exchange information. Information becomes difficult to control on the Internet. 
TheBBCreports that "every day, the information we send and receive online – whether that's checking emails or searching the internet – amount to over 2.5 quintillion bytes of data."[39] Social mediaare applications and websites with an online community where users create and share content with each other, and it adds to the problem of information overload because so many people have access to it.[40]It presents many different views and outlooks on subject matters so that one may have difficulty taking it all in and drawing a clear conclusion.[41]Information overload may not be the core reason for people's anxieties about the amount of information they receive in their daily lives. Instead, information overload can be considered situational. Social media users tend to feel less overloaded by information when using their personal profiles, rather than when their work institutions expect individuals to gather a mass of information. Most people see information through social media in their lives as an aid to help manage their day-to-day activities and not an overload.[42]Depending on what social media platform is being used, it may be easier or harder to stay up to date on posts from people. Facebook users who post and read more than others tend to be able to keep up. On the other hand, Twitter users who post and read a lot of tweets still feel like it is too much information (or none of it is interesting enough).[11]Another problem with social media is that many people create a living by creating content for either their own or someone else's platform, which can create for creators to publish an overload of content. In the context of searching for information, researchers have identified two forms of information overload:outcome overloadwhere there are too many sources of information andtextual overloadwhere the individual sources are too long. This form of information overload may cause searchers to be less systematic. Disillusionment when a search is more challenging than expected may result in an individual being less able to search effectively. Information overload when searching can result in asatisficingstrategy.[43]: 7 Savolainen identifiesfilteringandwithdrawalas common responses to information. Filtering involves quickly working out whether a particular piece of information, such as an email, can be ignored based on certain criteria. Withdrawal refers to limiting the number of sources of information with which one interacts. They distinguish between "pull" and "push" sources of information, a "pull" source being one where one seeks out relevant information, a "push" source one where others decide what information might be interesting. They note that "pull" sources can avoid information overload but by only "pulling" information one risks missing important information.[44] There have been many solutions proposed for how to mitigate information overload. Research examining how people seek to control an overloaded environment has shown that people purposefully using different coping strategies.[45][46][47]In general, overload coping strategy consists of two excluding (ignoring and filtering) and two including (customizing and saving) approaches.[47][46]Excluding approach focuses on managing the quantity of information, while including approach is geared towards complexity management. Johnson advisesdisciplinewhich helps mitigate interruptions and for the elimination of push or notifications. 
He explains that notifications pull people's attentions away from their work and into social networks and emails. He also advises that people stop using their iPhones as alarm clocks which means that the phone is the first thing that people will see when they wake up leading to people checking their email right away.[51] Clay Shirkystates:[29] What we're dealing with now is not the problem of information overload, because we're always dealing (and always have been dealing) with information overload... Thinking about information overload isn't accurately describing the problem; thinking about filter failure is. Consider the use of Internet applications and add-ons such as theInbox Pauseadd-on forGmail.[52]This add-on does not reduce the number of emails that people get but it pauses the inbox. Burkeman in his article talks about the feeling of being in control is the way to deal with information overload which might involve self-deception. He advises to fight irrationality with irrationality by using add-ons that allow you to pause your inbox or produce other results. Reducing large amounts of information is key. Dealing with IO from a social network site such as Facebook, a study done byHumboldt University[53]showed some strategies that students take to try and alleviate IO while using Facebook. Some of these strategies included: Prioritizing updates from friends who were physically farther away in other countries, hiding updates from less-prioritized friends, deleting people from their friends list, narrowing the amount of personal information shared, and deactivating the Facebook account. Decision makers performing complex tasks have little if any excess cognitive capacity. Narrowing one's attention as a result of the interruption is likely to result in the loss of information cues, some of which may be relevant to completing the task. Under these circumstances, performance is likely to deteriorate. As the number or intensity of the distractions/interruptions increases, the decision maker's cognitive capacity is exceeded, and performance deteriorates more severely. In addition to reducing the number of possible cues attended to, more severe distractions/interruptions may encourage decision-makers to use heuristics, take shortcuts, or opt for asatisficing decision, resulting in lower decision accuracy. Somecognitive scientistsand graphic designers have emphasized the distinction between raw information and information in a form that can be used in thinking. In this view, information overload may be better viewed as organization underload. That is, they suggest that the problem is not so much the volume of information but the fact that it cannot be discerned how to use it well in the raw or biased form it is presented. Authors who have taken this view include graphic artist and architectRichard Saul Wurmanand statistician and cognitive scientistEdward Tufte. Wurman uses the term "information anxiety" to describe humanity's attitude toward the volume of information in general and their limitations in processing it.[55]Tufte primarily focuses on quantitative information and explores ways to organize large complex datasets visually to facilitate clear thinking. Tufte's writing is important in such fields as information design and visual literacy,[56]which deal with the visual communication of information. 
Tufte coined the term "chartjunk" to refer to useless, non-informative, or information-obscuring elements of quantitative information displays, such as the use of graphics to overemphasize the importance of certain pieces of data or information.[57] Soucek and Moser (2010) investigated the impact of a training intervention on how employees cope with information overload.[58] They found that the training intervention had a positive impact, especially for those who struggled with work impairment and media usage and for employees who received a higher volume of incoming email.[58] Recent research suggests that an "attention economy" of sorts will naturally emerge from information overload,[59] allowing Internet users greater control over their online experience with particular regard to communication media such as email and instant messaging. This could involve some sort of cost being attached to email messages; for example, managers could charge a small fee for every email received – say $1.00 – which the sender must pay from their budget. The aim of such charging is to force the sender to consider the necessity of the interruption. However, such a suggestion undermines the entire basis of the popularity of email, namely that emails are free of charge to send. Economics often assumes that people are rational in that they know their preferences and can look for the best possible ways to maximize them; people are seen as selfish and focused on what pleases them. Examining the contributing factors in isolation, however, neglects the other factors that interact with them to produce information overload. Lincoln suggests ways to look at information overload more holistically, by recognizing the many factors that play a role and how they work together to produce it.[60] It would be impossible for an individual to read all the academic papers published in a narrow speciality, even if they spent all their time reading. A response to this is the publishing of systematic reviews such as the Cochrane Reviews. Richard Smith argues that it would be impossible for a general practitioner to read all the literature relevant to every individual patient they consult with, and suggests one solution would be an expert system for doctors to use while consulting.[61]
https://en.wikipedia.org/wiki/Information_overload
Media intelligence uses data mining and data science to analyze public, social and editorial media content. It refers to marketing systems that synthesize billions of online conversations into relevant information. This allows organizations to measure and manage content performance, understand trends, and drive communications and business strategy. Media intelligence can include software as a service using big data terminology.[1] This includes questions about messaging efficiency, share of voice, audience geographical distribution, message amplification, influencer strategy, journalist outreach, creative resonance, and competitor performance in all these areas. Media intelligence differs from business intelligence in that it uses and analyzes data outside company firewalls. Examples of such data are user-generated content on social media sites, blogs, comment fields, and wikis. It may also include other public data sources such as press releases, news, blogs, legal filings, reviews and job postings. Media intelligence may also include competitive intelligence, wherein information gathered from publicly available sources such as social media, press releases, and news announcements is used to better understand the strategies and tactics being deployed by competing businesses.[2] Media intelligence is enhanced by means of emerging technologies such as ambient intelligence, machine learning, semantic tagging, natural language processing, sentiment analysis and machine translation. Different media intelligence platforms use different technologies for monitoring, curating content, engaging with content, data analysis and measurement of communications and marketing campaign success. These technology providers may obtain content by scraping websites directly or by connecting to APIs provided by social media or other content platforms, which are created for third-party developers to build their own applications and services that access data. Technology companies may also get data from a data reseller. Some social media monitoring and analytics companies make calls to data providers each time an end user submits a query. Others archive and index social media posts to provide end users with on-demand access to historical data and to enable methodologies and technologies that leverage network and relational data. Other monitoring companies use crawlers and spidering technology to find keyword references, applying techniques such as semantic analysis or natural language processing. A basic implementation involves curating data from social media on a large scale and analyzing the results to make sense of it.[3]
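As a rough illustration of the monitoring-and-analysis loop described above, the following is a minimal, hypothetical Python sketch: fetch_posts stands in for a provider API or crawler, and the tracked keyword and tiny sentiment lexicon are invented for the example. Real platforms rely on vendor APIs, large-scale crawling, and trained NLP models rather than a hand-built word list.

```python
# Hypothetical sketch: filter collected posts by a tracked term and score sentiment
# against a tiny lexicon. Collection and sentiment are placeholders, not a vendor API.
POSITIVE = {"love", "great", "fast", "excellent"}
NEGATIVE = {"hate", "broken", "slow", "bad"}

def fetch_posts():
    # Placeholder for an API call or web crawl returning raw post text.
    return [
        "Love the new AcmeApp release, setup was fast",
        "AcmeApp support is slow and the billing page looks broken",
        "Unrelated post about the weather",
    ]

def sentiment(text):
    # Count lexicon hits: positive words add 1, negative words subtract 1.
    words = {w.strip(".,!?").lower() for w in text.split()}
    return len(words & POSITIVE) - len(words & NEGATIVE)

tracked = "acmeapp"  # invented brand term for the example
for post in fetch_posts():
    if tracked in post.lower():  # simple keyword filter, share-of-voice style tracking
        print(f"{sentiment(post):+d}  {post}")
```

Running the sketch prints +2 for the first post and -2 for the second; the unrelated post is filtered out, which is the basic shape of curating large volumes of content down to relevant, scored items.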
https://en.wikipedia.org/wiki/Media_intelligence
Enhanced Data rates for GSM Evolution(EDGE), also known as2.75Gand under various other names, is a2Gdigitalmobile phonetechnology forpacket switcheddata transmission. It is a subset ofGeneral Packet Radio Service(GPRS) on theGSMnetwork and improves upon it offering speeds close to3Gtechnology, hence the name 2.75G. EDGE is standardized by the3GPPas part of the GSM family and as an upgrade to GPRS. EDGE was deployed on GSM networks beginning in 2003 – initially byCingular(nowAT&T) in the United States.[1]It could be readily deployed on existing GSM and GPRS cellular equipment, making it an easier upgrade forcellular companiescompared to theUMTS3G technology that required significant changes.[2]Through the introduction of sophisticated methods of coding and transmitting data, EDGE delivers higher bit-rates per radio channel, resulting in a threefold increase in capacity and performance compared with an ordinary GSM/GPRS connection - originally a max speed of 384 kbit/s.[3]Later,Evolved EDGEwas developed as an enhanced standard providing even more reduced latency and more than double performance, with a peak bit-rate of up to 1 Mbit/s. Enhanced Data rates for GSM Evolutionis the common full name of the EDGE standard. Other names include:Enhanced GPRS(EGPRS),IMT Single Carrier(IMT-SC), andEnhanced Data rates for Global Evolution. Although described as "2.75G" by the3GPPbody, EDGE is part ofInternational Telecommunication Union(ITU)'s 3G definition.[4]It is also recognized as part of theInternational Mobile Telecommunications - 2000(IMT-2000) standard for 3G. EDGE/EGPRS is implemented as a bolt-on enhancement for2.5GGSM/GPRS networks, making it easier for existing GSM carriers to upgrade to it. EDGE is a superset to GPRS and can function on any network with GPRS deployed on it, provided the carrier implements the necessary upgrade. EDGE requires no hardware or software changes to be made in GSM core networks. EDGE-compatible transceiver units must be installed and the base station subsystem needs to be upgraded to support EDGE. If the operator already has this in place, which is often the case today, the network can be upgraded to EDGE by activating an optional software feature. Today EDGE is supported by all major chip vendors for both GSM andWCDMA/HSPA. In addition toGaussian minimum-shift keying(GMSK), EDGE useshigher-order PSK/8 phase-shift keying(8PSK) for the upper five of its nine modulation and coding schemes. EDGE produces a 3-bit word for every change in carrier phase. This effectively triples the gross data rate offered by GSM. EDGE, likeGPRS, uses a rate adaptation algorithm that adapts the modulation and coding scheme (MCS) according to the quality of the radio channel, and thus the bit rate and robustness of data transmission. It introduces a new technology not found in GPRS,incremental redundancy, which, instead of retransmitting disturbed packets, sends more redundancy information to be combined in the receiver. This increases the probability of correct decoding. EDGE can carry a bandwidth up to 236 kbit/s (with end-to-end latency of less than 150 ms) for 4timeslots(theoretical maximum is 473.6 kbit/s for 8 timeslots) in packet mode. This means it can handle four times as much traffic as standard GPRS. EDGE meets theInternational Telecommunication Union's requirement for a3Gnetwork, and has been accepted by the ITU as part of theIMT-2000family of 3G standards.[4]It also enhances the circuit data mode calledHSCSD, increasing the data rate of this service. 
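The headline data rates quoted above follow from simple per-timeslot arithmetic. The sketch below assumes a maximum of 59.2 kbit/s per EDGE timeslot, which is consistent with the 473.6 kbit/s theoretical figure for 8 timeslots given above:

```python
# Relationship between the per-timeslot rate and the aggregate EDGE figures in the text.
# 59.2 kbit/s per timeslot is consistent with the 473.6 kbit/s theoretical peak for 8 slots.
PER_TIMESLOT_KBITS = 473.6 / 8  # = 59.2

for slots in (1, 4, 8):
    print(f"{slots} timeslot(s): {slots * PER_TIMESLOT_KBITS:.1f} kbit/s")
# 1 timeslot(s): 59.2 kbit/s
# 4 timeslot(s): 236.8 kbit/s   (the ~236 kbit/s figure for 4-slot devices)
# 8 timeslot(s): 473.6 kbit/s   (theoretical maximum)

# 8PSK carries 3 bits per symbol versus GMSK's 1, which is why EDGE roughly
# triples the gross per-slot rate relative to plain GSM/GPRS modulation.
print(3 / 1)  # 3.0
```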
The channel encoding process in GPRS as well as EGPRS/EDGE consists of two steps: first, a cyclic code is used to add parity bits, which are also referred to as the Block Check Sequence, followed by coding with a possibly puncturedconvolutional code.[5]In GPRS, the Coding Schemes CS-1 to CS-4 specify the number of parity bits generated by the cyclic code and the puncturing rate of the convolutional code.[5]In GPRS Coding Schemes CS-1 through CS-3, the convolutional code is of rate 1/2, i.e. each input bit is converted into two coded bits.[5]In Coding Schemes CS-2 and CS-3, the output of the convolutional code ispuncturedto achieve the desired code rate.[5]In GPRS Coding Scheme CS-4, no convolutional coding is applied.[5] In EGPRS/EDGE, themodulationand coding schemes MCS-1 to MCS-9 take the place of the coding schemes of GPRS, and additionally specify which modulation scheme is used, GMSK or 8PSK.[5]MCS-1 through MCS-4 use GMSK and have performance similar (but not equal) to GPRS, while MCS-5 through MCS-9 use 8PSK.[5]In all EGPRS modulation and coding schemes, a convolutional code of rate 1/3 is used, and puncturing is used to achieve the desired code rate.[5]In contrast to GPRS, theRadio Link Control(RLC) andmedium access control(MAC) headers and the payload data are coded separately in EGPRS.[5]The headers are coded more robustly than the data.[5] The first EDGE network was deployed byCingular(nowAT&T) in the United States[1]on June 30, 2003, initially coveringIndianapolis.[8][9]T-Mobile USdeployed their EDGE network in September 2005.[10][11]In Canada,Rogers Wirelessdeployed their EDGE network in 2004.[12]In Malaysia,DiGilaunched EDGE beginning in May 2004 initially only in theKlang Valley.[13] In Europe,TeliaSonerain Finland rolled out EDGE in April 2004.[14]Orangebegan trialling EDGE in France in April 2005 before a consumer rollout later that year.[15]Bouygues Telecomcompleted its national deployment of EDGE in the country in 2005, strategically focusing on EDGE which is cheaper to deploy compared to 3G networks.[16]Telfortwas the first network in the Netherlands to roll out EDGE having done so by May 2005.[17]Orange launched the UK's first EDGE network in February 2006.[18] TheGlobal Mobile Suppliers Associationreported in 2008 that EDGE networks have been launched in 147 countries around the world.[19] Evolved EDGE, also calledEDGE Evolutionand2.875G, is a bolt-on extension to theGSMmobile telephony standard, which improves on EDGE in a number of ways. Latencies are reduced by lowering theTransmission Time Intervalby half (from 20 ms to 10 ms). Bit rates are increased up to 1 Mbit/s peak bandwidth and latencies down to 80 ms using dual carrier, higher symbol rate andhigher-order modulation(32QAM and 16QAM instead of 8PSK), andturbo codesto improve error correction. This results in real world downlink speeds of up to 600 kbit/s.[20]Further the signal quality is improved using dual antennas improving average bit-rates and spectrum efficiency. The main intention of increasing the existing EDGE throughput is that many operators would like to upgrade their existing infrastructure rather than invest on new network infrastructure. Mobile operators have invested billions in GSM networks, many of which are already capable of supporting EDGE data speeds up to 236.8 kbit/s. With a software upgrade and a new device compliant with Evolved EDGE (like an Evolved EDGEsmartphone) for the user, these data rates can be boosted to speeds approaching 1 Mbit/s (i.e. 98.6 kbit/s per timeslot for 32QAM). 
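The effect of puncturing the rate-1/3 mother code described above can be illustrated with a small calculation: the effective code rate is the number of payload bits divided by the coded bits actually transmitted after some are deleted. The puncturing pattern below is invented for demonstration and is not one of the actual per-MCS patterns defined for EGPRS:

```python
# Illustrative effective-code-rate calculation for a punctured rate-1/3 convolutional code.
# The keep-pattern here is made up; EGPRS specifies its own pattern for each MCS.
def effective_rate(payload_bits, mother_rate, keep_pattern):
    coded_bits = payload_bits / mother_rate                 # bits out of the rate-1/3 encoder
    kept_fraction = sum(keep_pattern) / len(keep_pattern)   # fraction of coded bits transmitted
    return payload_bits / (coded_bits * kept_fraction)

# Without puncturing, the rate stays 1/3.
print(effective_rate(300, 1 / 3, [1, 1, 1]))   # 0.333...

# Deleting every third coded bit raises the effective rate to 1/2,
# trading error-correction robustness for throughput.
print(effective_rate(300, 1 / 3, [1, 1, 0]))   # 0.5
```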
Many service providers may not invest in a completely new technology like3Gnetworks.[21] Considerable research and development happened throughout the world for this new technology. A successful trial by Nokia Siemens and "one of China's leading operators" was achieved in a live environment.[21]However, Evolved EDGE was introduced much later than its predecessor, EDGE, coinciding with the widespread adoption of 3G technologies such asHSPAand just before the emergence of4Gnetworks. This timing significantly limited its relevance and practical application, as operators prioritized investment in more advanced wireless technologies likeUMTSandLTE. Moreover, these newer technologies also targeted network coverage layers on low frequencies, further diminishing the potential advantages of Evolved EDGE. Coupled with the upcoming phase-out and shutdown of 2G mobile networks, it became very unlikely that Evolved EDGE would ever see deployment on live networks. As of 2016, nocommercial networkssupported the Evolved EDGE standard (3GPP Rel-7). With Evolved EDGE come three major features designed to reduce latency over the air interface. In EDGE, a single RLC data block (ranging from 23 to 148 bytes of data) is transmitted over four frames, using a single time slot. On average, this requires 20 ms for one way transmission. Under the RTTI scheme, one data block is transmitted over two frames in two timeslots, reducing the latency of the air interface to 10 ms. In addition, Reduced Latency also implies support of Piggy-backedACK/NACK(PAN), in which a bitmap of blocks not received is included in normal data blocks. Using the PAN field, the receiver may report missing data blocks immediately, rather than waiting to send a dedicated PAN message. A final enhancement is RLC-non persistent mode. With EDGE, the RLC interface could operate in either acknowledged mode, or unacknowledged mode. In unacknowledged mode, there is no retransmission of missing data blocks, so a single corrupt block would cause an entire upper-layer IP packet to be lost. With non-persistent mode, an RLC data block may be retransmitted if it is less than a certain age. Once this time expires, it is considered lost, and subsequent data blocks may then be forwarded to upper layers. Both uplink and downlink throughput is improved by using 16 or 32 QAM (quadrature amplitude modulation), along with turbo codes and higher symbol rates. A lesser-known version of the EDGE standard is Enhanced Circuit Switched Data (ECSD), which iscircuit switched.[22] A variant, so called Compact-EDGE, was developed for use in a portion ofDigital AMPSnetwork spectrum.[23] The Global mobile Suppliers Association (GSA) states that, as of May 2013, there were 604 GSM/EDGE networks in 213 countries, from a total of 606 mobile network operator commitments in 213 countries.[24]
https://en.wikipedia.org/wiki/Evolved_EDGE
↔⇔≡⟺Logical symbols representingiff Inlogicand related fields such asmathematicsandphilosophy, "if and only if" (often shortened as "iff") is paraphrased by thebiconditional, alogical connective[1]between statements. The biconditional is true in two cases, where either both statements are true or both are false. The connective isbiconditional(a statement ofmaterial equivalence),[2]and can be likened to the standardmaterial conditional("only if", equal to "if ... then") combined with its reverse ("if"); hence the name. The result is that the truth of either one of the connected statements requires the truth of the other (i.e. either both statements are true, or both are false), though it is controversial whether the connective thus defined is properly rendered by the English "if and only if"—with its pre-existing meaning. For example,P if and only if Qmeans thatPis true wheneverQis true, and the only case in whichPis true is ifQis also true, whereas in the case ofP if Q, there could be other scenarios wherePis true andQis false. In writing, phrases commonly used as alternatives to P "if and only if" Q include:Q isnecessary and sufficientfor P,for P it is necessary and sufficient that Q,P is equivalent (or materially equivalent) to Q(compare withmaterial implication),P precisely if Q,P precisely (or exactly) when Q,P exactly in case Q, andP just in case Q.[3]Some authors regard "iff" as unsuitable in formal writing;[4]others consider it a "borderline case" and tolerate its use.[5]Inlogical formulae, logical symbols, such as↔{\displaystyle \leftrightarrow }and⇔{\displaystyle \Leftrightarrow },[6]are used instead of these phrases; see§ Notationbelow. Thetruth tableofP↔{\displaystyle \leftrightarrow }Qis as follows:[7][8] It is equivalent to that produced by theXNOR gate, and opposite to that produced by theXOR gate.[9] The corresponding logical symbols are "↔{\displaystyle \leftrightarrow }", "⇔{\displaystyle \Leftrightarrow }",[6]and≡{\displaystyle \equiv },[10]and sometimes "iff". These are usually treated as equivalent. However, some texts ofmathematical logic(particularly those onfirst-order logic, rather thanpropositional logic) make a distinction between these, in which the first,↔{\displaystyle \leftrightarrow }, is used as a symbol in logic formulas, while⇔{\displaystyle \Leftrightarrow }or≡{\displaystyle \equiv }is used in reasoning about those logic formulas (e.g., inmetalogic). InŁukasiewicz'sPolish notation, it is the prefix symbolE{\displaystyle E}.[11] Another term for thelogical connective, i.e., the symbol in logic formulas, isexclusive nor. InTeX, "if and only if" is shown as a long double arrow:⟺{\displaystyle \iff }via command \iff or \Longleftrightarrow.[12] In mostlogical systems, oneprovesa statement of the form "P iff Q" by proving either "if P, then Q" and "if Q, then P", or "if P, then Q" and "if not-P, then not-Q". Proving these pairs of statements sometimes leads to a more natural proof, since there are not obvious conditions in which one would infer a biconditional directly. An alternative is to prove thedisjunction"(P and Q) or (not-P and not-Q)", which itself can be inferred directly from either of its disjuncts—that is, because "iff" istruth-functional, "P iff Q" follows if P and Q have been shown to be both true, or both false. Usage of the abbreviation "iff" first appeared in print inJohn L. 
Kelley's 1955 bookGeneral Topology.[13]Its invention is often credited toPaul Halmos, who wrote "I invented 'iff,' for 'if and only if'—but I could never believe I was really its first inventor."[14] It is somewhat unclear how "iff" was meant to be pronounced. In current practice, the single 'word' "iff" is almost always read as the four words "if and only if". However, in the preface ofGeneral Topology, Kelley suggests that it should be read differently: "In some cases where mathematical content requires 'if and only if' andeuphonydemands something less I use Halmos' 'iff'". The authors of one discrete mathematics textbook suggest:[15]"Should you need to pronounce iff, reallyhang on to the 'ff'so that people hear the difference from 'if'", implying that "iff" could be pronounced as[ɪfː]. Conventionally,definitionsare "if and only if" statements; some texts — such as Kelley'sGeneral Topology— follow this convention, and use "if and only if" oriffin definitions of new terms.[16]However, this usage of "if and only if" is relatively uncommon and overlooks the linguistic fact that the "if" of a definition is interpreted as meaning "if and only if". The majority of textbooks, research papers and articles (including English Wikipedia articles) follow the linguistic convention of interpreting "if" as "if and only if" whenever a mathematical definition is involved (as in "a topological space is compact if every open cover has a finite subcover").[17]Moreover, in the case of arecursive definition, theonly ifhalf of the definition is interpreted as a sentence in the metalanguage stating that the sentences in the definition of a predicate are theonly sentencesdetermining the extension of the predicate. Euler diagramsshow logical relationships among events, properties, and so forth. "P only if Q", "if P then Q", and "P→Q" all mean that P is asubset, either proper or improper, of Q. "P if Q", "if Q then P", and Q→P all mean that Q is a proper or improper subset of P. "P if and only if Q" and "Q if and only if P" both mean that the sets P and Q are identical to each other. Iffis used outside the field of logic as well. Wherever logic is applied, especially inmathematicaldiscussions, it has the same meaning as above: it is an abbreviation forif and only if, indicating that one statement is bothnecessary and sufficientfor the other. This is an example ofmathematical jargon(although, as noted above,ifis more often used thaniffin statements of definition). The elements ofXareall and onlythe elements ofYmeans: "For anyzin thedomain of discourse,zis inXif and only ifzis inY." In theirArtificial Intelligence: A Modern Approach,RussellandNorvignote (page 282),[18]in effect, that it is often more natural to expressif and only ifasiftogether with a "database (or logic programming) semantics". They give the example of the English sentence "Richard has two brothers, Geoffrey and John". In adatabaseorlogic program, this could be represented simply by two sentences: The database semantics interprets the database (or program) as containingallandonlythe knowledge relevant for problem solving in a given domain. It interpretsonly ifas expressing in the metalanguage that the sentences in the database represent theonlyknowledge that should be considered when drawing conclusions from the database. 
In first-order logic (FOL) with the standard semantics, the same English sentence would need to be represented, using if and only if, with only if interpreted in the object language, in some such form as: Compared with the standard semantics for FOL, the database semantics has a more efficient implementation. Instead of reasoning with sentences of the form: it uses sentences of the form: to reason forwards from conditions to conclusions or backwards from conclusions to conditions. The database semantics is analogous to the legal principle expressio unius est exclusio alterius (the express mention of one thing excludes all others). Moreover, it underpins the application of logic programming to the representation of legal texts and legal reasoning.[19]
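A quick exhaustive check, written here as a small Python sketch, confirms the claims above: the biconditional agrees with XNOR and with the conjunction of the two one-way conditionals ("if P then Q" and "if Q then P"):

```python
# Exhaustive check that P <-> Q matches XNOR and the conjunction of the
# two one-way material conditionals.
def implies(p, q):
    return (not p) or q   # material conditional

for p in (False, True):
    for q in (False, True):
        iff = (p == q)                                # biconditional
        both_ways = implies(p, q) and implies(q, p)   # "prove both directions"
        xnor = not (p ^ q)                            # opposite of XOR
        print(p, q, iff, both_ways, xnor)
        assert iff == both_ways == xnor
```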
https://en.wikipedia.org/wiki/If_and_only_if
Leet(or "1337"), also known aseleetorleetspeak, or simplyhacker speech, is a system of modified spellings used primarily on theInternet. It often uses character replacements in ways that play on the similarity of theirglyphsviareflectionor other resemblance. Additionally, it modifies certain words on the basis of a system ofsuffixesand alternative meanings. There are manydialectsorlinguistic varietiesin differentonline communities. The term "leet" is derived from the wordelite, used as an adjective to describe skill or accomplishment, especially in the fields ofonline gamingandcomputer hacking. The leet lexicon includes spellings of the word as1337orleet. Leet originated withinbulletin board systems(BBS) in the 1980s,[1][2]where having "elite" status on a BBS allowed a user access to file folders, games, and special chat rooms. TheCult of the Dead Cowhacker collective has been credited with the original coining of the term, in their text-files of that era.[3]One theory is that it was developed to defeattext filterscreated by BBS orInternet Relay Chatsystem operatorsfor message boards to discourage the discussion of forbidden topics, likecrackingandhacking.[1] Once reserved forhackers, crackers, andscript kiddies, leet later entered the mainstream.[1]Some consideremoticonsandASCII art, like smiley faces, to be leet, while others maintain that leet consists of only symbolic word obfuscation. More obscure forms of leet, involving the use of symbol combinations and almost no letters or numbers, continue to be used for its original purpose of obfuscated communication. It is also sometimes used as a scripting language. Variants of leet have been used to evade censorship for many years; for instance "@$$" (ass) and "$#!+" (shit) are frequently seen to make a word appear censored to the untrained eye but obvious to a person familiar with leet. This enables coders and programmers especially to circumvent filters and speak about topics that would usually get banned. "Hacker" would end up as "H4x0r", for example.[4] Leet symbols, especially the number 1337, areInternet memesthat have spilled over into some culture. Signs that show the numbers "1337" are popular motifs for pictures and are shared widely across the Internet.[5] Algospeakshares conceptual similarities with leet, albeit with its primary purpose to circumvent algorithmiccensorship online, "algospeak" deriving fromalgoofalgorithmandspeak. These areeuphemismsthat aim to evadeautomated online moderation techniques, especiallythose that are considered unfairor hinderingfree speech.[6][7][8][9][10]One prominent example is using the term "unalive" as opposed to the verb "kill" or even "suicide". Other examples include using "restarted" or "regarded" instead of "retarded" and "seggs" in place of "sex". These phrases are easily understandable to humans, providing either the same general meaning, pronunciation, or shape of the original word. It is furthermore often employed as a more contemporary alternative to leet. 
The approach has gained more popularity in 2023 and 2024 due to the rise in conflict between Israel and Gaza and the topic's contentious nature on the Internet, especially on the Meta and TikTok platforms.[11][12] One of the hallmarks of leet is its unique approach to orthography, using substitutions of other letters, or indeed of characters other than letters, to represent letters in a word.[13][14] For more casual use of leet, the primary strategy is to use quasi-homoglyphs, symbols that closely resemble (to varying degrees) the letters for which they stand. The choice of symbol is not fixed: anything the reader can make sense of is valid in leet-speak. Sometimes, a gamer would work around a nickname being already taken (and maybe abandoned as well) by replacing a letter with a similar-looking digit. Another use for leet orthographic substitutions is the creation of paraphrased passwords.[1] Limitations imposed by websites on password length (usually no more than 36 characters) and the characters permitted (e.g. alphanumeric and symbols)[15] require less extensive forms when leet is used in this application. Some examples of leet include: However, leetspeak should not be confused with SMS-speak, characterized by using "4" as "for", "2" as "to", "b&" as "ban'd" (e.g. "banned"), "gr8 b8, m8, appreci8, no h8" as "great bait, mate, appreciate, no hate", and so on. Text rendered in leet is often characterized by distinctive, recurring forms. Leet can be pronounced as a single syllable, /ˈliːt/, rhyming with eat, by way of apheresis of the initial vowel of "elite". It may also be pronounced as two syllables, /ɛˈliːt/. Like hacker slang, leet enjoys a looser grammar than standard English. The loose grammar, just like loose spelling, encodes some level of emphasis, ironic or otherwise. A reader must rely more on intuitive parsing of leet to determine the meaning of a sentence rather than the actual sentence structure. In particular, speakers of leet are fond of verbing nouns, turning verbs into nouns (and back again) as forms of emphasis, e.g. "Austin rocks" is weaker than "Austin roxxorz" (note spelling), which is weaker than "Au5t1N is t3h r0xx0rz" (note grammar), which is weaker than something like "0MFG D00D /\Ü571N 15 T3H l_l83Я 1337 Я0XX0ЯZ" (OMG, dude, Austin is the über-elite rocks-er!). In essence, all of these mean "Austin rocks," not necessarily the other options. Added words and misspellings add to the speaker's enjoyment. Leet, like hacker slang, employs analogy in the construction of new words. For example, if haxored is the past tense of the verb "to hack" (hack → haxor → haxored), then winzored would be easily understood to be the past-tense conjugation of "to win," even if the reader had not seen that particular word before. Leet has its own colloquialisms, many of which originated as jokes based on common typing errors, habits of new computer users, or knowledge of cyberculture and history.[20] Leet is not solely based upon one language or character set. Greek, Russian, and other languages have leet forms, and leet in one language may use characters from another where they are available. As such, while it may be referred to as a "cipher", a "dialect", or a "language", leet does not fit squarely into any of these categories. The term leet itself is often written 31337 or 1337, among many other variations. After the meaning of these became widely familiar, 10100111001 came to be used in its place, because it is the binary form of 1337 decimal, making it more of a puzzle to interpret.
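For illustration only, the quasi-homoglyph substitutions described above can be sketched as a simple character map in Python; the mapping is invented for the example, since real leet is ad hoc and far more varied. The snippet also checks the claim that 10100111001 is 1337 in binary:

```python
# A minimal leet transliterator using a few common digit-for-letter substitutions.
# The mapping is illustrative; actual leet accepts any symbol the reader can decode.
LEET_MAP = str.maketrans({"a": "4", "e": "3", "i": "1", "o": "0", "s": "5", "t": "7"})

def to_leet(text):
    return text.lower().translate(LEET_MAP)

print(to_leet("elite"))        # 3l173
print(to_leet("leet speak"))   # l337 5p34k

# The "puzzle" form mentioned above: 10100111001 is simply 1337 written in binary.
print(bin(1337))               # 0b10100111001
print(int("10100111001", 2))   # 1337
```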
An increasingly common characteristic of leet is the changing of grammatical usage so as to be deliberately incorrect. The widespread popularity of deliberate misspelling is similar to the cult following of the "All your base are belong to us" phrase. Indeed, the online and computer communities have been international from their inception, so spellings and phrases typical of non-native speakers are quite common. Many words originally derived from leet have now become part of modernInternet slang, such as "pwned".[1]The original driving forces of new vocabulary in leet were common misspellings and typing errors such as "teh" (generally considered lolspeak), and intentional misspellings,[21]especially the "z" at the end of words ("skillz").[1]Another prominent example of a surviving leet expression isw00t, an exclamation of joy.[2]w00t is sometimes used as abackronymfor "We owned the other team." New words (or corruptions thereof) may arise from a need to make one's username unique. As any given Internet service reaches more people, the number of names available to a given user is drastically reduced. While many users may wish to have the username "CatLover," for example, in many cases it is only possible for one user to have the moniker. As such, degradations of the name may evolve, such as "C@7L0vr." As the leet cipher is highly dynamic, there is a wider possibility for multiple users to share the "same" name, through combinations of spelling and transliterations. Additionally,leet—the word itself—can be found in thescreen-namesandgamertagsof many Internet and video games. Use of the term in such a manner announces a high level of skill, though such an announcement may be seen as baselesshubris.[22][more detail needed] Warez(nominally/wɛərz/) is a plural shortening of "software", typically referring to cracked and redistributed software.[22]Phreakingrefers to the hacking of telephone systems and other non-Internet equipment.[1]Tehoriginated as a typographical error of "the", and is sometimes spelledt3h.[1][23]j00takes the place of "you",[2]originating from theaffricatesound that occurs in place of thepalatal approximant,/j/, whenyoufollows a word ending in analveolarplosiveconsonant, such as/t/or/d/. Also, from German, isüber, which means "over" or "above"; it usually appears as a prefix attached to adjectives, and is frequently written without theumlautover theu.[24] Haxor, and derivations thereof, is leet for "hacker",[25]and it is one of the most commonplace examples of the use of the-xorsuffix.Suxxor(pronounced suck-zor) is a derogatory term which originated inwarezculture and is currently[when?]used in multi-user environments such as multiplayer video games andinstant messaging; it, likehaxor, is one of the early leet words to use the-xorsuffix.Suxxoris a modified version of "sucks" (the phrase "to suck"), and the meaning is the same as the English slang.Suxxorcan be mistaken withSuccer/Succkerif used in the wrong context. Its negative definition essentially makes it the opposite ofroxxor, and both can be used as a verb or a noun. The lettersckare often replaced with the Greek Χ (chi) in other words as well. Within leet, the termn00b(and derivations thereof) is used extensively. The term is derived fromnewbie(as in new and inexperienced, or uninformed),[21][24][26]and is used to differentiate "n00bs" from the "elite" (or even "normal") members of a group. 
Ownedandpwned(generally pronounced "poned"[27][pʰo͡ʊnd]) both refer to the domination of a player in a video game or argument (rather than just a win), or the successful hacking of a website or computer.[28][29][30][1][24][31]It is a slang term derived from the verbown, meaning to appropriate or to conquer to gain ownership. As is a common characteristic of leet, the terms have also been adapted into noun and adjective forms,[24]ownageandpwnage, which can refer to the situation ofpwningor to the superiority of its subject (e.g., "He is a very good player. He is pwnage."). The term was created accidentally by the misspelling of "own" due to the keyboard proximity of the "O" and "P" keys. It implies domination or humiliation of a rival,[32]used primarily in theInternet-basedvideo game cultureto taunt an opponent who has just been soundly defeated (e.g., "You just got pwned!").[33]In 2015Scrabbleadded pwn to their Official Scrabble Words list.[34] Pr0nisslangforpornography.[1]This is a deliberately inaccurate spelling/pronunciation forporn,[26]where a zero is often used to replace the letter O. It is sometimes used in legitimate communications (such as email discussion groups,Usenet, chat rooms, and Internet web pages) to circumvent language andcontent filters, which may reject messages as offensive orspam. The word also helps preventsearch enginesfrom associating commercial sites with pornography, which might result in unwelcome traffic.[citation needed]Pr0nis also sometimes spelled backwards (n0rp) to further obscure the meaning to potentially uninformed readers. It can also refer toASCII artdepicting pornographic images, or to photos of the internals of consumer and industrial hardware.Prawn, a spoof of the misspelling, has started to come into use, as well; inGrand Theft Auto: Vice City, a pornographer films his movies on "Prawn Island". Conversely, in theRPGKingdom of Loathing,prawn, referring to a kind ofcrustacean, is spelledpr0n, leading to the creation of food items such as "pr0n chow mein". Also seeporm.
https://en.wikipedia.org/wiki/Leetspeak
This is a list of convexity topics, by Wikipedia page.
https://en.wikipedia.org/wiki/List_of_convexity_topics
Thematerial conditional(also known asmaterial implication) is abinary operationcommonly used inlogic. When the conditional symbol→{\displaystyle \to }isinterpretedas material implication, a formulaP→Q{\displaystyle P\to Q}is true unlessP{\displaystyle P}is true andQ{\displaystyle Q}is false. Material implication is used in all the basic systems ofclassical logicas well as somenonclassical logics. It is assumed as a model of correct conditional reasoning within mathematics and serves as the basis for commands in manyprogramming languages. However, many logics replace material implication with other operators such as thestrict conditionaland thevariably strict conditional. Due to theparadoxes of material implicationand related problems, material implication is not generally considered a viable analysis ofconditional sentencesinnatural language. In logic and related fields, the material conditional is customarily notated with an infix operator→{\displaystyle \to }.[1]The material conditional is also notated using the infixes⊃{\displaystyle \supset }and⇒{\displaystyle \Rightarrow }.[2]In the prefixedPolish notation, conditionals are notated asCpq{\displaystyle Cpq}. In a conditional formulap→q{\displaystyle p\to q}, the subformulap{\displaystyle p}is referred to as theantecedentandq{\displaystyle q}is termed theconsequentof the conditional. Conditional statements may be nested such that the antecedent or the consequent may themselves be conditional statements, as in the formula(p→q)→(r→s){\displaystyle (p\to q)\to (r\to s)}. InArithmetices Principia: Nova Methodo Exposita(1889),Peanoexpressed the proposition "IfA{\displaystyle A}, thenB{\displaystyle B}" asA{\displaystyle A}ƆB{\displaystyle B}with the symbol Ɔ, which is the opposite of C.[3]He also expressed the propositionA⊃B{\displaystyle A\supset B}asA{\displaystyle A}ƆB{\displaystyle B}.[4][5][6]Hilbertexpressed the proposition "IfA, thenB" asA→B{\displaystyle A\to B}in 1918.[1]Russellfollowed Peano in hisPrincipia Mathematica(1910–1913), in which he expressed the proposition "IfA, thenB" asA⊃B{\displaystyle A\supset B}. Following Russell,Gentzenexpressed the proposition "IfA, thenB" asA⊃B{\displaystyle A\supset B}.Heytingexpressed the proposition "IfA, thenB" asA⊃B{\displaystyle A\supset B}at first but later came to express it asA→B{\displaystyle A\to B}with a right-pointing arrow.Bourbakiexpressed the proposition "IfA, thenB" asA→B{\displaystyle A\to B}in 1954.[7] From aclassicalsemantic perspective, material implication is thebinarytruth functionaloperator which returns "true" unless its first argument is true and its second argument is false. This semantics can be shown graphically in the followingtruth table: One can also consider the equivalenceA→B≡¬(A∧¬B)≡¬A∨B{\displaystyle A\to B\equiv \neg (A\land \neg B)\equiv \neg A\lor B}. The conditionals(A→B){\displaystyle (A\to B)}where the antecedentA{\displaystyle A}is false, are called "vacuous truths". Examples are ... 
Formulas over the set of connectives {→, ⊥}[8] are called f-implicational.[9] In classical logic the other connectives, such as ¬ (negation), ∧ (conjunction), ∨ (disjunction) and ↔ (equivalence), can be defined in terms of → and ⊥ (falsity):[10]

$$\begin{aligned}\neg A&\quad {\overset {\text{def}}{=}}\quad A\to \bot \\A\land B&\quad {\overset {\text{def}}{=}}\quad (A\to (B\to \bot ))\to \bot \\A\lor B&\quad {\overset {\text{def}}{=}}\quad (A\to \bot )\to B\\A\leftrightarrow B&\quad {\overset {\text{def}}{=}}\quad \{(A\to B)\to [(B\to A)\to \bot ]\}\to \bot \end{aligned}$$

The validity of f-implicational formulas can be semantically established by the method of analytic tableaux; Hilbert-style proofs of them can also be given. The semantic definition by truth tables does not permit the examination of structurally identical propositional forms in various logical systems, where different properties may be demonstrated. The language considered here is restricted to f-implicational formulas. Consider the following (candidate) natural deduction rules.

Implication introduction (→I): if assuming $A$ one can derive $B$, then one can conclude $A\to B$; the assumption $[A]$ is discharged when the rule is applied:
$$\frac{\begin{array}{c}[A]\\\vdots \\B\end{array}}{A\to B}\quad(\to\text{I})$$

Implication elimination (→E), which corresponds to modus ponens:
$$\frac{A\to B\quad A}{B}\quad(\to\text{E})\qquad\frac{A\quad A\to B}{B}\quad(\to\text{E})$$

Double-negation elimination (¬¬E):
$$\frac{(A\to \bot )\to \bot }{A}\quad(\neg\neg\text{E})$$

Falsum elimination (⊥E): from falsum (⊥) one can derive any formula (ex falso quodlibet):
$$\frac{\bot }{A}\quad(\bot\text{E})$$

On the classical interpretation of the connectives, material implication validates a number of entailments and tautologies. Material implication does not closely match the usage of conditional sentences in natural language. For example, even though material conditionals with false antecedents are vacuously true, the natural language statement "If 8 is odd, then 3 is prime" is typically judged false. Similarly, any material conditional with a true consequent is itself true, but speakers typically reject sentences such as "If I have a penny in my pocket, then Paris is in France". These classic problems have been called the paradoxes of material implication.[16] In addition to the paradoxes, a variety of other arguments have been given against a material implication analysis. For instance, counterfactual conditionals would all be vacuously true on such an account, when in fact some are false.[17] In the mid-20th century, a number of researchers including H. P. Grice and Frank Jackson proposed that pragmatic principles could explain the discrepancies between natural language conditionals and the material conditional.
On their accounts, conditionalsdenotematerial implication but end up conveying additional information when they interact with conversational norms such asGrice's maxims.[16][18]Recent work informal semanticsandphilosophy of languagehas generally eschewed material implication as an analysis for natural-language conditionals.[18]In particular, such work has often rejected the assumption that natural-language conditionals aretruth functionalin the sense that the truth value of "IfP, thenQ" is determined solely by the truth values ofPandQ.[16]Thus semantic analyses of conditionals typically propose alternative interpretations built on foundations such asmodal logic,relevance logic,probability theory, andcausal models.[18][16][19] Similar discrepancies have been observed by psychologists studying conditional reasoning, for instance, by the notoriousWason selection taskstudy, where less than 10% of participants reasoned according to the material conditional. Some researchers have interpreted this result as a failure of the participants to conform to normative laws of reasoning, while others interpret the participants as reasoning normatively according to nonclassical laws.[20][21][22]
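Because the truth-functional semantics described above is purely mechanical, it can be checked directly. The following Python sketch is an illustration added here, not part of the article, and all function names are invented for the example; it prints the truth table of the material conditional and confirms that the f-implicational definitions of ¬, ∧, ∨ and ↔ in terms of → and ⊥ reproduce the usual classical connectives.

```python
from itertools import product

BOTTOM = False  # falsum: the always-false proposition


def implies(p: bool, q: bool) -> bool:
    """Material conditional: false only when p is true and q is false."""
    return (not p) or q


# f-implicational definitions of the other connectives, as given above
def neg(a):
    return implies(a, BOTTOM)                                    # ¬A  :=  A → ⊥


def conj(a, b):
    return implies(implies(a, implies(b, BOTTOM)), BOTTOM)       # A ∧ B


def disj(a, b):
    return implies(implies(a, BOTTOM), b)                        # A ∨ B


def iff(a, b):
    return implies(implies(implies(a, b),
                           implies(implies(b, a), BOTTOM)), BOTTOM)  # A ↔ B


# Truth table of the material conditional
print(" P     Q     P→Q")
for p, q in product([True, False], repeat=2):
    print(f"{p!s:5} {q!s:5} {implies(p, q)!s}")

# Check the defined connectives against Python's built-in Boolean operators
for a, b in product([True, False], repeat=2):
    assert neg(a) == (not a)
    assert conj(a, b) == (a and b)
    assert disj(a, b) == (a or b)
    assert iff(a, b) == (a == b)
print("All f-implicational definitions agree with the classical connectives.")
```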
https://en.wikipedia.org/wiki/Logical_conditional
Adata breach, also known asdata leakage, is "the unauthorized exposure, disclosure, or loss ofpersonal information".[1] Attackers have a variety of motives, from financial gain topolitical activism,political repression, andespionage. There are several technical root causes of data breaches, including accidental or intentional disclosure of information by insiders, loss or theft ofunencrypteddevices, hacking into a system by exploitingsoftware vulnerabilities, andsocial engineering attackssuch asphishingwhere insiders are tricked into disclosing information. Although prevention efforts by the company holding the data can reduce the risk of data breach, it cannot bring it to zero. The first reported breach was in 2002 and the number occurring each year has grown since then. A large number of data breaches are never detected. If a breach is made known to the company holding the data, post-breach efforts commonly include containing the breach, investigating its scope and cause, and notifications to people whose records were compromised, as required by law in many jurisdictions. Law enforcement agencies may investigate breaches, although the hackers responsible are rarely caught. Many criminals sell data obtained in breaches on thedark web. Thus, people whose personal data was compromised are at elevated risk ofidentity theftfor years afterwards and a significant number will become victims of this crime.Data breach notification lawsin many jurisdictions, including allstates of the United StatesandEuropean Union member states, require the notification of people whose data has been breached. Lawsuits against the company that was breached are common, although few victims receive money from them. There is little empirical evidence of economic harm to firms from breaches except the direct cost, although there is some evidence suggesting a temporary, short-term decline instock price. A data breach is a violation of "organizational, regulatory, legislative or contractual" law or policy[2]that causes "the unauthorized exposure, disclosure, or loss ofpersonal information".[1]Legal and contractual definitions vary.[3][2]Some researchers include other types of information, for exampleintellectual propertyorclassified information.[4]However, companies mostly disclose breaches because it is required by law,[5]and only personal information is covered bydata breach notification laws.[6][7] The first reported data breach occurred on 5 April 2002[8]when 250,000social security numberscollected by theState of Californiawere stolen from a data center.[9]Before the widespread adoption ofdata breach notification lawsaround 2005, the prevalence of data breaches is difficult to determine. Even afterwards, statistics per year cannot be relied on because data breaches may be reported years after they occurred,[10]or not reported at all.[11]Nevertheless, the statistics show a continued increase in the number and severity of data breaches that continues as of 2022[update].[12]In 2016, researcherSasha Romanoskyestimated that data breaches (excludingphishing) outnumbered other security breaches by a factor of four.[13] According to a 2020 estimate, 55 percent of data breaches were caused byorganized crime, 10 percent bysystem administrators, 10 percent byend userssuch as customers or employees, and 10 percent by states or state-affiliated actors.[14]Opportunistic criminals may cause data breaches—often usingmalwareorsocial engineering attacks, but they will typically move on if the security is above average. 
More organized criminals have more resources and are more focused in theirtargeting of particular data.[15]Both of them sell the information they obtain for financial gain.[16]Another source of data breaches arepolitically motivated hackers, for exampleAnonymous, that target particular objectives.[17]State-sponsored hackers target either citizens of their country or foreign entities, for such purposes aspolitical repressionandespionage. Often they use undisclosedzero-day vulnerabilitiesfor which the hackers are paid large sums of money.[18]ThePegasus spyware—ano-click malwaredeveloped by the Israeli companyNSO Groupthat can be installed on most cellphones and spies on the users' activity—has drawn attention both for use against criminals such as drug kingpinEl Chapoas well as political dissidents, facilitating themurder of Jamal Khashoggi.[19] Despite developers' goal of delivering a product that works entirely as intended, virtually allsoftwareandhardwarecontains bugs.[20]If a bug creates a security risk, it is called avulnerability.[21][22][23]Patchesare often released to fix identified vulnerabilities, but those that remain unknown (zero days) as well as those that have not been patched are still liable for exploitation.[24]Both software written by the target of the breach and third party software used by them are vulnerable to attack.[22]Thesoftware vendor is rarely legally liablefor the cost of breaches, thus creating an incentive to make cheaper but less secure software.[25] Vulnerabilities vary in their ability to beexploitedby malicious actors. The most valuable allow the attacker toinjectand run their own code (calledmalware), without the user being aware of it.[21]Some malware is downloaded by users via clicking on a malicious link, but it is also possible for maliciousweb applicationsto download malware just from visiting the website (drive-by download).Keyloggers, a type of malware that records a user's keystrokes, are often used in data breaches.[26]The majority of data breaches could have been averted by storing all sensitive information in an encrypted format. 
That way, physical possession of the storage device or access to encrypted information is useless unless the attacker has theencryption key.[27]Hashingis also a good solution for keepingpasswordssafe frombrute-force attacks, but only if the algorithm is sufficiently secure.[28] Many data breaches occur on the hardware operated by a partner of the organization targeted—including the2013 Target data breachand2014 JPMorgan Chase data breach.[29]Outsourcingwork to a third party leads to a risk of data breach if that company has lower security standards; in particular, small companies often lack the resources to take as many security precautions.[30][29]As a result, outsourcing agreements often include security guarantees and provisions for what happens in the event of a data breach.[30] Human causes of breach are often based on trust of another actor that turns out to be malicious.Social engineering attacksrely on tricking an insider into doing something that compromises the system's security, such as revealing a password or clicking a link to download malware.[31]Data breaches may also be deliberately caused by insiders.[32]One type of social engineering,phishing,[31]obtains a user'scredentialsby sending them a malicious message impersonating a legitimate entity, such as a bank, and getting the user to enter their credentials onto a malicious website controlled by the cybercriminal.Two-factor authenticationcan prevent the malicious actor from using the credentials.[33]Training employees to recognize social engineering is another common strategy.[34] Another source of breaches is accidental disclosure of information, for example publishing information that should be kept private.[35][36]With the increase inremote workandbring your own devicepolicies, large amounts of corporate data is stored on personal devices of employees. Via carelessness or disregard of company security policies, these devices can be lost or stolen.[37]Technical solutions can prevent many causes of human error, such as encrypting all sensitive data, preventing employees from using insecure passwords, installingantivirus softwareto prevent malware, and implementing a robust patching system to ensure that all devices are kept up to date.[38] Although attention to security can reduce the risk of data breach, it cannot bring it to zero. Security is not the only priority of organizations, and an attempt to achieve perfect security would make the technology unusable.[39]Many companies hire achief information security officer(CISO) to oversee the company's information security strategy.[40]To obtain information about potential threats, security professionals will network with each other and share information with other organizations facing similar threats.[41]Defense measures can include an updated incident response strategy, contracts withdigital forensicsfirms that could investigate a breach,[42]cyber insurance,[43][7]and monitoring thedark webfor stolen credentials of employees.[44]In 2024, the United StatesNational Institute of Standards and Technology(NIST) issued a special publication, "Data Confidentiality: Identifying and Protecting Assets Against Data Breaches".[45]TheNIST Cybersecurity Frameworkalso contains information about data protection.[46]Other organizations have released different standards for data protection.[47] The architecture of a company's systems plays a key role in deterring attackers. 
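One of the technical measures mentioned above is password hashing, which resists brute-force attacks only when the algorithm is deliberately slow and each password is salted. The following is a minimal sketch, assuming Python's standard hashlib, hmac, and secrets modules; the iteration count and storage format are illustrative choices made for this example, not prescriptions from the article.

```python
import hashlib
import hmac
import secrets


def hash_password(password: str, iterations: int = 600_000) -> str:
    """Derive a salted PBKDF2-HMAC-SHA256 hash suitable for storage."""
    salt = secrets.token_bytes(16)  # random per-password salt
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return f"pbkdf2_sha256${iterations}${salt.hex()}${digest.hex()}"


def verify_password(password: str, stored: str) -> bool:
    """Recompute the hash with the stored salt and compare in constant time."""
    _, iterations, salt_hex, digest_hex = stored.split("$")
    candidate = hashlib.pbkdf2_hmac(
        "sha256", password.encode(), bytes.fromhex(salt_hex), int(iterations)
    )
    return hmac.compare_digest(candidate.hex(), digest_hex)


record = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", record))  # True
print(verify_password("guess", record))                          # False
```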
Daswani and Elbayadi recommend having only one means ofauthentication,[48]avoiding redundant systems, and making the most secure setting default.[49]Defense in depthanddistributed privilege(requiring multiple authentications to execute an operation) also can make a system more difficult to hack.[50]Giving employees and software the least amount of access necessary to fulfill their functions (principle of least privilege) limits the likelihood and damage of breaches.[48][51]Several data breaches were enabled by reliance onsecurity by obscurity; the victims had put access credentials in publicly accessible files.[52]Nevertheless, prioritizing ease of use is also important because otherwise users might circumvent the security systems.[53]Rigoroussoftware testing, includingpenetration testing, can reduce software vulnerabilities, and must be performed prior to each release even if the company is using acontinuous integration/continuous deploymentmodel where new versions are constantly being rolled out.[54] The principle ofleast persistence[55]—avoiding the collection of data that is not necessary and destruction of data that is no longer necessary—can mitigate the harm from breaches.[56][57][58]The challenge is that destroying data can be more complex with modern database systems.[59] A large number of data breaches are never detected.[60]Of those that are, most breaches are detected by third parties;[61][62]others are detected by employees or automated systems.[63]Responding to breaches is often the responsibility of a dedicatedcomputer security incident response team, often including technical experts,public relations, and legal counsel.[64][65]Many companies do not have sufficient expertise in-house, and subcontract some of these roles;[66]often, these outside resources are provided by the cyber insurance policy.[67]After a data breach becomes known to the company, the next steps typically include confirming it occurred, notifying the response team, and attempting to contain the damage.[68] To stop exfiltration of data, common strategies include shutting down affected servers, taking them offline,patchingthe vulnerability, andrebuilding.[69]Once the exact way that the data was compromised is identified, there is typically only one or two technical vulnerabilities that need to be addressed in order to contain the breach and prevent it from reoccurring.[70]Apenetration testcan then verify that the fix is working as expected.[71]Ifmalwareis involved, the organization must investigate and close all infiltration and exfiltration vectors, as well as locate and remove all malware from its systems.[72]If data was posted on thedark web, companies may attempt to have it taken down.[73]Containing the breach can compromise investigation, and some tactics (such as shutting down servers) can violate the company's contractual obligations.[74] Gathering data about the breach can facilitate later litigation or criminal prosecution,[75]but only if the data is gathered according to legal standards and thechain of custodyis maintained.[76]Database forensics can narrow down the records involved, limiting the scope of the incident.[77]Extensive investigation may be undertaken, which can be even more expensive thanlitigation.[62]In the United States, breaches may be investigated by government agencies such as theOffice for Civil Rights, theUnited States Department of Health and Human Services, and theFederal Trade Commission(FTC).[78]Law enforcement agencies may investigate breaches[79]although the hackers 
responsible are rarely caught.[80] Notifications are typically sent out as required by law.[81]Many companies offer freecredit monitoringto people affected by a data breach, although only around 5 percent of those eligible take advantage of the service.[82]Issuing new credit cards to consumers, although expensive, is an effective strategy to reduce the risk ofcredit card fraud.[82]Companies try to restore trust in their business operations and take steps to prevent a breach from reoccurring.[83] After a data breach, criminals make money by selling data, such as usernames, passwords,social mediaorcustomer loyaltyaccount information,debitandcredit cardnumbers,[16]and personal health information (seemedical data breach).[84]Criminals often sell this data on thedark web—parts of the internet where it is difficult to trace users and illicit activity is widespread—using platforms like.onionorI2P.[85]Originating in the 2000s, the dark web, followed by untraceablecryptocurrenciessuch asBitcoinin the 2010s, made it possible for criminals to sell data obtained in breaches with minimal risk of getting caught, facilitating an increase in hacking.[86][87]One popular darknet marketplace,Silk Road, was shut down in 2013 and its operators arrested, but several other marketplaces emerged in its place.[88]Telegramis also a popular forum for illegal sales of data.[89] This information may be used for a variety of purposes, such asspamming, obtaining products with a victim's loyalty or payment information,identity theft,prescription drug fraud, orinsurance fraud.[90]The threat of data breach or revealing information obtained in a data breach can be used forextortion.[16] Consumers may suffer various forms of tangible or intangible harm from the theft of their personal data, or not notice any harm.[91]A significant portion of those affected by a data breach become victims ofidentity theft.[82]A person's identifying information often circulates on the dark web for years, causing an increased risk of identity theft regardless of remediation efforts.[80][92]Even if a customer does not end up footing the bill forcredit card fraudor identity theft, they have to spend time resolving the situation.[93][94]Intangible harms includedoxxing(publicly revealing someone's personal information), for example medication usage or personal photos.[95] There is little empirical evidence of economic harm from breaches except the direct cost, although there is some evidence suggesting a temporary, short-term decline instock price.[96]Other impacts on the company can range from lost business, reduced employee productivity due to systems being offline or personnel redirected to working on the breach,[97]resignation or firing of senior executives,[78]reputational damage,[78][98]and increasing the future cost of auditing or security.[78]Consumer losses from a breach are usually a negativeexternalityfor the business.[99]Some experts have argued that the evidence suggests there is not enough direct costs or reputational damage from data breaches to sufficientlyincentivizetheir prevention.[100][101] Estimating the cost of data breaches is difficult, both because not all breaches are reported and also because calculating the impact of breaches in financial terms is not straightforward. 
There are multiple ways of calculating the cost to businesses, especially when it comes to personnel time dedicated to dealing with the breach.[102]Author Kevvie Fowler estimates that more than half the direct cost incurred by companies is in the form of litigation expenses and services provided to affected individuals, with the remaining cost split between notification and detection, including forensics and investigation. He argues that these costs are reduced if the organization has invested in security prior to the breach or has previous experience with breaches. The moredata recordsinvolved, the more expensive a breach typically will be.[103]In 2016, researcherSasha Romanoskyestimated that while the mean breach cost around the targeted firm $5 million, this figure was inflated by a few highly expensive breaches, and the typical data breach was much less costly, around $200,000. Romanosky estimated the total annual cost to corporations in the United States to be around $10 billion.[104] The law regarding data breaches is often found inlegislation to protect privacymore generally, and is dominated by provisions mandating notification when breaches occur.[105]Laws differ greatly in how breaches are defined,[3]what type of information is protected, the deadline for notification,[6]and who hasstandingto sue if the law is violated.[106]Notification laws increasetransparencyand provide a reputational incentive for companies to reduce breaches.[107]The cost of notifying the breach can be high if many people were affected and is incurred regardless of the company's responsibility, so it can function like astrict liabilityfine.[108] As of 2024[update],Thomas on Data Breachlisted 62United Nations member statesthat are covered by data breach notification laws. Some other countries require breach notification in more generaldata protection laws.[109]Shortly after the first reported data breach in April 2002, California passeda law requiring notificationwhen an individual's personal information was breached.[9]In the United States, notification laws proliferated after the February 2005ChoicePoint data breach, widely publicized in part because of the large number of people affected (more than 140,000) and also because of outrage that the company initially informed only affected people in California.[110][111]In 2018, theEuropean Union'sGeneral Data Protection Regulation(GDPR) took effect. The GDPR requires notification within 72 hours, with very high fines possible for large companies not in compliance. 
This regulation also stimulated the tightening of data privacy laws elsewhere.[112][113]As of 2022[update], the onlyUnited States federal lawrequiring notification for data breaches is limited to medical data regulated underHIPAA, but all 50 states (since Alabama passed a law in 2018) have their own general data breach notification laws.[113] Measures to protect data from a breach are typically absent from the law or vague.[105]Filling this gap is standards required bycyber insurance, which is held by most large companies andfunctions asde factoregulation.[114][115]Of the laws that do exist, there are two main approaches—one that prescribes specific standards to follow, and thereasonablenessapproach.[116]The former is rarely used due to a lack of flexibility and reluctance of legislators to arbitrate technical issues; with the latter approach, the law is vague but specific standards can emerge fromcase law.[117]Companies often prefer the standards approach for providing greaterlegal certainty, but they might check all the boxes without providing a secure product.[118]An additional flaw is that the laws are poorly enforced, with penalties often much less than the cost of a breach, and many companies do not follow them.[119] Manyclass-action lawsuits,derivative suits, and other litigation have been brought after data breaches.[120]They are oftensettledregardless of the merits of the case due to the high cost of litigation.[121][122]Even if a settlement is paid, few affected consumers receive any money as it usually is only cents to a few dollars per victim.[78][122]Legal scholarsDaniel J. SoloveandWoodrow Hartzogargue that "Litigation has increased the costs of data breaches but has accomplished little else."[123]Plaintiffs often struggle to prove that they suffered harm from a data breach.[123]The contribution of a company's actions to a data breach varies,[119][124]and likewise the liability for the damage resulting for data breaches is a contested matter. It is disputed what standard should be applied, whether it is strict liability,negligence, or something else.[124]
https://en.wikipedia.org/wiki/Data_breach
The list below includes the names of notable password managers with their Wikipedia articles.
https://en.wikipedia.org/wiki/List_of_password_managers
In cryptography, the Pointcheval–Stern signature algorithm is a digital signature scheme based on the closely related ElGamal signature scheme. It changes the ElGamal scheme slightly to produce an algorithm which has been proven secure in a strong sense against adaptive chosen-message attacks, assuming the discrete logarithm problem is intractable in a strong sense.[1][2] David Pointcheval and Jacques Stern developed the forking lemma technique in constructing their proof for this algorithm, and it has since been used in security investigations of various other cryptographic algorithms.
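The article does not describe the construction itself, so purely for orientation, here is a toy sketch of the underlying ElGamal signature scheme on which Pointcheval–Stern is based. This is not the Pointcheval–Stern modification, and the tiny parameters are deliberately insecure and illustrative only.

```python
import hashlib
import secrets
from math import gcd

# Toy parameters: in practice p must be a large prime and g a generator of Z_p*.
p = 23                 # prime modulus
g = 5                  # a primitive root modulo 23
x = 7                  # private key
y = pow(g, x, p)       # public key


def h(message: bytes) -> int:
    """Hash the message and reduce it into the exponent group Z_{p-1}."""
    return int.from_bytes(hashlib.sha256(message).digest(), "big") % (p - 1)


def sign(message: bytes) -> tuple[int, int]:
    while True:
        k = secrets.randbelow(p - 2) + 1          # ephemeral key, 1 <= k <= p-2
        if gcd(k, p - 1) != 1:
            continue
        r = pow(g, k, p)
        s = (h(message) - x * r) * pow(k, -1, p - 1) % (p - 1)
        if s != 0:
            return r, s


def verify(message: bytes, r: int, s: int) -> bool:
    if not (0 < r < p and 0 < s < p - 1):
        return False
    # Accept iff g^H(m) == y^r * r^s (mod p)
    return pow(g, h(message), p) == (pow(y, r, p) * pow(r, s, p)) % p


r, s = sign(b"hello")
print(verify(b"hello", r, s))     # True
print(verify(b"tampered", r, s))  # False
```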
https://en.wikipedia.org/wiki/Pointcheval%E2%80%93Stern_signature_algorithm
Helmut Norpoth (born 1943) is an American political scientist and professor of political science at Stony Brook University. Norpoth is best known for developing the Primary Model to predict United States presidential elections. Norpoth's model has successfully matched the results of 25 out of 29 United States presidential elections since 1912, with the exceptions being those in 1960, 2000, 2020, and 2024. Norpoth was born in Essen, Germany, in 1943. He received his undergraduate degree from the Free University of Berlin in West Berlin in 1966. He then attended the University of Michigan, where he received his M.A. and Ph.D. in 1967 and 1974, respectively. Before joining Stony Brook University as an assistant professor in 1979, he taught at the University of Arizona (he had been a visiting lecturer in its political science department in 1978), the University of Cologne, and the University of Texas at Austin. In 1980, Norpoth was promoted to associate professor at Stony Brook University and became a tenured full professor there in 1985.[1] Norpoth's research focuses on multiple subjects in political science, including public opinion and electoral behavior, and predicting the results of elections in the United States, Great Britain, and Germany.[1] Alongside fellow political scientist Michael Lewis-Beck, he is the co-author of The American Voter Revisited, a 2008 book published by the University of Michigan Press covering the images of presidential candidates, party identification, and why Americans turn out to vote.[2][3] He also wrote Confidence Regained: Economics, Mrs. Thatcher, and the British Voter, a 1992 book published by the University of Michigan Press about public reactions to Margaret Thatcher, especially her economic and foreign policies.[4][5] Other articles written by Norpoth include "Fighting to Win: Wartime Morale in the American Public" with Andrew H. Sidman (2012), "Yes, Prime Minister: The Key to Forecasting British Elections" with Matthew Lebo (2011), "The New Deal Realignment in Real Time" with Andrew H. Sidman and Clara Suong, "History and Primary: The Obama Re-Election" with Michael Bednarczuk, and "Guns 'N Jobs: The FDR Legacy" with Alexa Bankert.[1] Norpoth developed the Primary Model, a statistical model of United States presidential elections based on data going back to 1912. Instead of opinion polling, Norpoth relies on statistics from a candidate's performance in the primaries and patterns in the electoral cycle to forecast results through the Primary Model.[6][7] The Primary Model is based on two factors: whether the party that has been in power for a long time seems to be about to lose it, and whether a given candidate did better in the primaries than his or her opponent.
The Primary Model was first used in the 1996 election,[8]and correctly predictedBarack Obama's re-election as early as February 2012 and the election ofDonald Trumpin 2016.[5] Norpoth's election model had predicted 25 out of the past 29 elections, with 1960, 2000, 2020, and 2024 as misses.[9] In February 2015, Norpoth projected that Republicans had a 65 percent chance of winning the presidential election the following year.[10]In February 2016, he had predicted a Trump victory with 97 percent certainty,[11]and by October 2016, citing Trump's performance in the primaries, his election model projected a win for Trump with a certainty of 87 to 99 percent, in contrast to all major election forecast.[7]As a result, Norpoth's election model gained significant media attention because it predicted that Trump would win the election.[12]Despite the attention for predicting Trump would win in 2016, Norpoth's election model only said that Trump would win the two-party popular vote 52.5% to 47.5%; Trump actually lost the 2016 two-party popular vote 48.2% to 46.1%, and the Primary Model for the next elections was modified to predict only the Electoral College votes as a result. In response to critics who cited polls in whichHillary Clintonled Trump by a significant margin,[13]Norpoth said that these polls were not taking into account who will actually vote in November 2016, writing that "nearly all of us say, oh yes, I'll vote, and then many will not follow through."[7] On March 2, 2020, Norpoth stated that his model gave Trump a 91 percent chance at winning re-election.[14][15]His model also predicted that Trump would win with up to 362 electoral votes. This would have required Trump to have flipped several Clinton states from 2016; however, this prediction proved to be inaccurate. Trump did not flip any states Clinton won in 2016 and ended up losing five states plus one electoral vote in Nebraska that he won in 2016, ultimately losing the election with 232 electoral votes to Biden's 306 electoral votes. Norpoth cited a "perfect storm" of subsequent surprise events following his prediction that were not taken into account, notably theCOVID-19 pandemic in the United States, which led to lockdowns, beginning only a few weeks after his prediction, and an economic downturn, which was not improved due to perceived inadequate response by Trump. The pandemic also led to an increase in mail-in and absentee ballots, which would lean toward the Democratic candidate. TheGeorge Floyd protestswere also cited as a factor.[16] The Primary Model for 2024 predicted a victory forKamala Harrisat 75 percent. Before the withdrawal ofJoe Bidenfrom the presidential election, the Primary Model had also given Biden a 75 percent chance to defeat Trump;[17]this was because Biden was the incumbent and had won the Democratic primaries in New Hampshire and South Carolina by larger margins than Trump had in the Republican primaries. Norpoth thus predicted an election win for Biden based on the similar positive results for Trump in the 2020 Republican primaries (and which the Primary Model had incorrectly predicted would lead to a Trump victory).[18]Biden would therefore secure 315 electoral votes and Trump 223 electoral votes.[5]Harris ultimately lost to Trump, winning only 226 electoral votes to Trump's 312 electoral votes.
https://en.wikipedia.org/wiki/Helmut_Norpoth#"Primary_Model"_for_US_presidential_elections
Inphysicsand thephilosophy of physics,quantum Bayesianismis a collection of related approaches to theinterpretation of quantum mechanics, the most prominent of which isQBism(pronounced "cubism"). QBism is an interpretation that takes an agent's actions and experiences as the central concerns of the theory. QBism deals with common questions in the interpretation of quantum theory about the nature ofwavefunctionsuperposition,quantum measurement, andentanglement.[1][2]According to QBism, many, but not all, aspects of the quantum formalism are subjective in nature. For example, in this interpretation, a quantum state is not an element of reality—instead, it represents thedegrees of beliefan agent has about the possible outcomes of measurements. For this reason, somephilosophers of sciencehave deemed QBism a form ofanti-realism.[3][4]The originators of the interpretation disagree with this characterization, proposing instead that the theory more properly aligns with a kind of realism they call "participatory realism", wherein reality consists ofmorethan can be captured by any putative third-person account of it.[5][6] This interpretation is distinguished by its use of asubjective Bayesianaccount of probabilities to understand the quantum mechanicalBorn ruleas anormativeaddition to gooddecision-making. Rooted in the prior work ofCarlton Caves, Christopher Fuchs, and Rüdiger Schack during the early 2000s, QBism itself is primarily associated with Fuchs and Schack and has more recently been adopted byDavid Mermin.[7]QBism draws from the fields ofquantum informationandBayesian probabilityand aims to eliminate the interpretational conundrums that have beset quantum theory. The QBist interpretation is historically derivative of the views of the various physicists that are often grouped together as "the"Copenhagen interpretation,[8][9]but is itself distinct from them.[9][10]Theodor Hänschhas characterized QBism as sharpening those older views and making them more consistent.[11] More generally, any work that uses a Bayesian or personalist (a.k.a. "subjective") treatment of the probabilities that appear in quantum theory is also sometimes calledquantum Bayesian. QBism, in particular, has been referred to as "the radical Bayesian interpretation".[12] In addition to presenting an interpretation of the existing mathematical structure of quantum theory, some QBists have advocated a research program ofreconstructingquantum theory from basic physical principles whose QBist character is manifest. The ultimate goal of this research is to identify what aspects of theontologyof the physical world make quantum theory a good tool for agents to use.[13]However, the QBist interpretation itself, as described in§ Core positions, does not depend on any particular reconstruction. E. T. Jaynes, a promoter of the use of Bayesian probability in statistical physics, once suggested that quantum theory is "[a] peculiar mixture describing in part realities of Nature, in part incomplete human information about Nature—all scrambled up byHeisenbergandBohrinto an omelette that nobody has seen how to unscramble".[15]QBism developed out of efforts to separate these parts using the tools ofquantum information theoryandpersonalist Bayesian probability theory. There are manyinterpretations of probability theory. 
Broadly speaking, these interpretations fall into one of three categories: those which assert that a probability is an objective property of reality (the propensity school), those who assert that probability is an objective property of the measuring process (frequentists), and those which assert that a probability is a cognitive construct which an agent may use to quantify their ignorance or degree of belief in a proposition (Bayesians). QBism begins by asserting that all probabilities, even those appearing in quantum theory, are most properly viewed as members of the latter category. Specifically, QBism adopts a personalist Bayesian interpretation along the lines of Italian mathematicianBruno de Finetti[16]and English philosopherFrank Ramsey.[17][18] According to QBists, the advantages of adopting this view of probability are twofold. First, for QBists the role of quantum states, such as the wavefunctions of particles, is to efficiently encode probabilities; so quantum states are ultimately degrees of belief themselves. (If one considers any single measurement that is a minimal, informationally completepositive operator-valued measure(POVM), this is especially clear: A quantum state is mathematically equivalent to a single probability distribution, the distribution over the possible outcomes of that measurement.[19]) Regarding quantum states as degrees of belief implies that the event of a quantum state changing when a measurement occurs—the "collapse of the wave function"—is simply the agent updating her beliefs in response to a new experience.[13]Second, it suggests that quantum mechanics can be thought of as a local theory, because theEinstein–Podolsky–Rosen (EPR)criterion of reality can be rejected. The EPR criterion states: "If, without in any way disturbing a system, we can predict with certainty (i.e., with probability equal tounity) the value of a physical quantity, then there exists an element of reality corresponding to that quantity."[20]Arguments that quantum mechanics should be considered anonlocal theorydepend upon this principle, but to a QBist, it is invalid, because a personalist Bayesian considers all probabilities, even those equal to unity, to be degrees of belief.[21][22]Therefore, while manyinterpretations of quantum theoryconclude that quantum mechanics is a nonlocal theory, QBists do not.[23] Christopher Fuchsintroduced the term "QBism" and outlined the interpretation in more or less its present form in 2010,[24]carrying further and demanding consistency of ideas broached earlier, notably in publications from 2002.[25][26]Several subsequent works have expanded and elaborated upon these foundations, notably aReviews of Modern Physicsarticle by Fuchs and Schack;[19]anAmerican Journal of Physicsarticle by Fuchs, Mermin, and Schack;[23]andEnrico Fermi Summer School[27]lecture notes by Fuchs and Stacey.[22] Prior to the 2010 article, the term "quantum Bayesianism" was used to describe the developments which have since led to QBism in its present form. However, as noted above, QBism subscribes to a particular kind of Bayesianism which does not suit everyone who might apply Bayesian reasoning to quantum theory (see, for example,§ Other uses of Bayesian probability in quantum physicsbelow). Consequently, Fuchs chose to call the interpretation "QBism", pronounced "cubism", preserving the Bayesian spirit via theCamelCasein the first two letters, but distancing it from Bayesianism more broadly. 
As thisneologismis a homophone ofCubismthe art movement, it has motivated conceptual comparisons between the two,[28]and media coverage of QBism has been illustrated with art byPicasso[7]andGris.[29]However, QBism itself was not influenced or motivated by Cubism and has no lineage to a potentialconnection between Cubist art and Bohr's views on quantum theory.[30] According to QBism, quantum theory is a tool which an agent may use to help manage their expectations, more like probability theory than a conventional physical theory.[13]Quantum theory, QBism claims, is fundamentally a guide for decision making which has been shaped by some aspects of physical reality. Chief among the tenets of QBism are the following:[31] Reactions to the QBist interpretation have ranged from enthusiastic[13][28]to strongly negative.[32]Some who have criticized QBism claim that it fails to meet the goal of resolving paradoxes in quantum theory. Bacciagaluppi argues that QBism's treatment of measurement outcomes does not ultimately resolve the issue of nonlocality,[33]and Jaeger finds QBism's supposition that the interpretation of probability is key for the resolution to be unnatural and unconvincing.[12]Norsen[34]has accused QBism ofsolipsism, and Wallace[35]identifies QBism as an instance ofinstrumentalism; QBists have argued insistently that these characterizations are misunderstandings, and that QBism is neither solipsist nor instrumentalist.[17][36]A critical article by Nauenberg[32]in theAmerican Journal of Physicsprompted a reply by Fuchs, Mermin, and Schack.[37] Some assert that there may be inconsistencies; for example, Stairs argues that when a probability assignment equals one, it cannot be a degree of belief as QBists say.[38]Further, while also raising concerns about the treatment of probability-one assignments, Timpson suggests that QBism may result in a reduction of explanatory power as compared to other interpretations.[1]Fuchs and Schack replied to these concerns in a later article.[39]Mermin advocated QBism in a 2012Physics Todayarticle,[2]which prompted considerable discussion. Several further critiques of QBism which arose in response to Mermin's article, and Mermin's replies to these comments, may be found in thePhysics Todayreaders' forum.[40][41]Section 2 of theStanford Encyclopedia of Philosophyentry on QBism also contains a summary of objections to the interpretation, and some replies.[42]Others are opposed to QBism on more general philosophical grounds; for example, Mohrhoff criticizes QBism from the standpoint ofKantian philosophy.[43] Certain authors find QBism internally self-consistent, but do not subscribe to the interpretation.[44]For example, Marchildon finds QBism well-defined in a way that, to him,many-worlds interpretationsare not, but he ultimately prefers aBohmian interpretation.[45]Similarly, Schlosshauer and Claringbold state that QBism is a consistent interpretation of quantum mechanics, but do not offer a verdict on whether it should be preferred.[46]In addition, some agree with most, but perhaps not all, of the core tenets of QBism; Barnum's position,[47]as well as Appleby's,[48]are examples. 
Popularizedor semi-popularized media coverage of QBism has appeared inNew Scientist,[49]Scientific American,[50]Nature,[51]Science News,[52]theFQXi Community,[53]theFrankfurter Allgemeine Zeitung,[29]Quanta Magazine,[16]Aeon,[54]Discover,[55]Nautilus Quarterly,[56]andBig Think.[57]In 2018, two popular-science books about the interpretation of quantum mechanics,Ball'sBeyond WeirdandAnanthaswamy'sThrough Two Doors at Once, devoted sections to QBism.[58][59]Furthermore,Harvard University Presspublished a popularized treatment of the subject,QBism: The Future of Quantum Physics, in 2016.[13] The philosophy literature has also discussed QBism from the viewpoints ofstructural realismand ofphenomenology.[60][61][62]Ballentine argues that "the initial assumption of QBism is not valid" because the inferential probability of Bayesian theory used by QBism is not applicable to quantum mechanics.[63] The views of many physicists (Bohr,Heisenberg,Rosenfeld,von Weizsäcker,Peres, etc.) are often grouped together as the "Copenhagen interpretation" of quantum mechanics. Several authors have deprecated this terminology, claiming that it is historically misleading and obscures differences between physicists that are as important as their similarities.[14][64]QBism shares many characteristics in common with the ideas often labeled as "the Copenhagen interpretation", but the differences are important; to conflate them or to regard QBism as a minor modification of the points of view of Bohr or Heisenberg, for instance, would be a substantial misrepresentation.[10][31] QBism takes probabilities to be personal judgments of the individual agent who is using quantum mechanics. This contrasts with older Copenhagen-type views, which hold that probabilities are given by quantum states that are in turn fixed by objective facts about preparation procedures.[13][65]QBism considers a measurement to be any action that an agent takes to elicit a response from the world and the outcome of that measurement to be the experience the world's response induces back on that agent. As a consequence, communication between agents is the only means by which different agents can attempt to compare their internal experiences. Most variants of the Copenhagen interpretation, however, hold that the outcomes of experiments are agent-independent pieces of reality for anyone to access.[10]QBism claims that these points on which it differs from previous Copenhagen-type interpretations resolve the obscurities that many critics have found in the latter, by changing the role that quantum theory plays (even though QBism does not yet provide a specific underlyingontology). Specifically, QBism posits that quantum theory is anormativetool which an agent may use to better navigate reality, rather than a set of mechanics governing it.[22][42] Approaches to quantum theory, like QBism,[66]which treat quantum states as expressions of information, knowledge, belief, or expectation are called "epistemic" interpretations.[6]These approaches differ from each other in what they consider quantum states to be information or expectations "about", as well as in the technical features of the mathematics they employ. Furthermore, not all authors who advocate views of this type propose an answer to the question of what the information represented in quantum states concerns. 
In the words of the paper that introduced theSpekkens Toy Model: if a quantum state is a state of knowledge, and it is not knowledge oflocaland noncontextualhidden variables, then what is it knowledge about? We do not at present have a good answer to this question. We shall therefore remain completely agnostic about the nature of the reality to which the knowledge represented by quantum states pertains. This is not to say that the question is not important. Rather, we see the epistemic approach as an unfinished project, and this question as the central obstacle to its completion. Nonetheless, we argue that even in the absence of an answer to this question, a case can be made for the epistemic view. The key is that one can hope to identify phenomena that are characteristic of states of incomplete knowledge regardless of what this knowledge is about.[67] Leifer and Spekkens propose a way of treating quantum probabilities as Bayesian probabilities, thereby considering quantum states as epistemic, which they state is "closely aligned in its philosophical starting point" with QBism.[68]However, they remain deliberately agnostic about what physical properties or entities quantum states are information (or beliefs) about, as opposed to QBism, which offers an answer to that question.[68]Another approach, advocated byBuband Pitowsky, argues that quantum states are information about propositions within event spaces that formnon-Boolean lattices.[69]On occasion, the proposals of Bub and Pitowsky are also called "quantum Bayesianism".[70] Zeilingerand Brukner have also proposed an interpretation of quantum mechanics in which "information" is a fundamental concept, and in which quantum states are epistemic quantities.[71]Unlike QBism, the Brukner–Zeilinger interpretation treats some probabilities as objectively fixed. In the Brukner–Zeilinger interpretation, a quantum state represents the information that a hypothetical observer in possession of all possible data would have. Put another way, a quantum state belongs in their interpretation to anoptimally informedagent, whereas in QBism,anyagent can formulate a state to encode her own expectations.[72]Despite this difference, in Cabello's classification, the proposals of Zeilinger and Brukner are also designated as "participatory realism", as QBism and the Copenhagen-type interpretations are.[6] Bayesian, or epistemic, interpretations of quantum probabilities were proposed in the early 1990s byBaezand Youssef.[73][74] R. F. Streaterargued that "[t]he first quantum Bayesian wasvon Neumann", basing that claim on von Neumann's textbookThe Mathematical Foundations of Quantum Mechanics.[75]Blake Stacey disagrees, arguing that the views expressed in that book on the nature of quantum states and the interpretation of probability are not compatible with QBism, or indeed, with any position that might be called quantum Bayesianism.[14] Comparisons have also been made between QBism and therelational quantum mechanics(RQM) espoused byCarlo Rovelliand others.[76][77]In both QBism and RQM, quantum states are not intrinsic properties of physical systems.[78]Both QBism and RQM deny the existence of an absolute, universal wavefunction. 
Furthermore, both QBism and RQM insist that quantum mechanics is a fundamentallylocaltheory.[23][79]In addition, Rovelli, like several QBist authors, advocates reconstructing quantum theory from physical principles in order to bring clarity to the subject of quantum foundations.[80](The QBist approaches to doing so are different from Rovelli's, and are describedbelow.) One important distinction between the two interpretations is their philosophy of probability: RQM does not adopt the Ramsey–de Finetti school of personalist Bayesianism.[6][17]Moreover, RQM does not insist that a measurement outcome is necessarily an agent's experience.[17] QBism should be distinguished from other applications ofBayesian inferencein quantum physics, and from quantum analogues of Bayesian inference.[19][73]For example, some in the field of computer science have introduced a kind of quantumBayesian network, which they argue could have applications in "medical diagnosis, monitoring of processes, and genetics".[81][82]Bayesian inference has also been applied in quantum theory for updating probability densities over quantum states,[83]andMaxEntmethods have been used in similar ways.[73][84]Bayesian methods forquantum state and process tomographyare an active area of research.[85] Conceptual concerns about the interpretation of quantum mechanics and the meaning of probability have motivated technical work. A quantum version of thede Finetti theorem, introduced by Caves, Fuchs, and Schack (independently reproving a result found using different means by Størmer[86]) to provide a Bayesian understanding of the idea of an "unknown quantum state",[87][88]has found application elsewhere, in topics likequantum key distribution[89]andentanglementdetection.[90] Adherents of several interpretations of quantum mechanics, QBism included, have been motivated to reconstruct quantum theory. The goal of these research efforts has been to identify a new set of axioms or postulates from which the mathematical structure of quantum theory can be derived, in the hope that with such a reformulation, the features of nature which made quantum theory the way it is might be more easily identified.[51][91]Although the core tenets of QBism do not demand such a reconstruction, some QBists—Fuchs,[26]in particular—have argued that the task should be pursued. One topic prominent in the reconstruction effort is the set of mathematical structures known as symmetric, informationally-complete, positive operator-valued measures (SIC-POVMs). QBist foundational research stimulated interest in these structures, which now have applications in quantum theory outside of foundational studies[92]and in pure mathematics.[93] The most extensively explored QBist reformulation of quantum theory involves the use of SIC-POVMs to rewrite quantum states (either pure ormixed) as a set of probabilities defined over the outcomes of a "Bureau of Standards" measurement.[94][95]That is, if one expresses adensity matrixas a probability distribution over the outcomes of a SIC-POVM experiment, one can reproduce all the statistical predictions implied by the density matrix from the SIC-POVM probabilities instead.[96]TheBorn rulethen takes the role of relating one valid probability distribution to another, rather than of deriving probabilities from something apparently more fundamental. 
Fuchs, Schack, and others have taken to calling this restatement of the Born rule the urgleichung, from the German for "primal equation" (see Ur- prefix), because of the central role it plays in their reconstruction of quantum theory.[19][97][98] The following discussion presumes some familiarity with the mathematics of quantum information theory, and in particular, the modeling of measurement procedures by POVMs. Consider a quantum system to which is associated a $d$-dimensional Hilbert space. If a set of $d^2$ rank-1 projectors $\hat{\Pi}_i$ satisfying

$$\operatorname{tr}\hat{\Pi}_i\hat{\Pi}_j=\frac{d\delta_{ij}+1}{d+1}$$

exists, then one may form a SIC-POVM $\hat{H}_i=\tfrac{1}{d}\hat{\Pi}_i$. An arbitrary quantum state $\hat{\rho}$ may be written as a linear combination of the SIC projectors

$$\hat{\rho}=\sum_{i=1}^{d^2}\left[(d+1)P(H_i)-\frac{1}{d}\right]\hat{\Pi}_i,$$

where $P(H_i)=\operatorname{tr}\hat{\rho}\hat{H}_i$ is the Born rule probability for obtaining SIC measurement outcome $H_i$ implied by the state assignment $\hat{\rho}$. We follow the convention that operators have hats while experiences (that is, measurement outcomes) do not. Now consider an arbitrary quantum measurement, denoted by the POVM $\{\hat{D}_j\}$. The urgleichung is the expression obtained from forming the Born rule probabilities, $Q(D_j)=\operatorname{tr}\hat{\rho}\hat{D}_j$, for the outcomes of this quantum measurement:

$$Q(D_j)=\sum_{i=1}^{d^2}\left[(d+1)P(H_i)-\frac{1}{d}\right]P(D_j\mid H_i),$$

where $P(D_j\mid H_i)\equiv\operatorname{tr}\hat{\Pi}_i\hat{D}_j$ is the Born rule probability for obtaining outcome $D_j$ implied by the state assignment $\hat{\Pi}_i$. The $P(D_j\mid H_i)$ term may be understood to be a conditional probability in a cascaded measurement scenario: imagine that an agent plans to perform two measurements, first a SIC measurement and then the $\{D_j\}$ measurement. After obtaining an outcome from the SIC measurement, the agent will update her state assignment to a new quantum state $\hat{\rho}'$ before performing the second measurement. If she uses the Lüders rule[99] for state update and obtains outcome $H_i$ from the SIC measurement, then $\hat{\rho}'=\hat{\Pi}_i$. Thus the probability for obtaining outcome $D_j$ for the second measurement conditioned on obtaining outcome $H_i$ for the SIC measurement is $P(D_j\mid H_i)$. Note that the urgleichung is structurally very similar to the law of total probability, which is the expression

$$P(D_j)=\sum_{i=1}^{d^2}P(H_i)\,P(D_j\mid H_i).$$

They functionally differ only by a dimension-dependent affine transformation of the SIC probability vector.
As QBism says that quantum theory is an empirically-motivated normative addition to probability theory, Fuchs and others find the appearance of a structure in quantum theory analogous to one in probability theory to be an indication that a reformulation featuring the urgleichung prominently may help to reveal the properties of nature which made quantum theory so successful.[19][22]

The urgleichung does not replace the law of total probability. Rather, the urgleichung and the law of total probability apply in different scenarios because {\displaystyle P(D_{j})} and {\displaystyle Q(D_{j})} refer to different situations. {\displaystyle P(D_{j})} is the probability that an agent assigns for obtaining outcome {\displaystyle D_{j}} on her second of two planned measurements, that is, for obtaining outcome {\displaystyle D_{j}} after first making the SIC measurement and obtaining one of the {\displaystyle H_{i}} outcomes. {\displaystyle Q(D_{j})}, on the other hand, is the probability an agent assigns for obtaining outcome {\displaystyle D_{j}} when she does not plan to first make the SIC measurement. The law of total probability is a consequence of coherence within the operational context of performing the two measurements as described. The urgleichung, in contrast, is a relation between different contexts which finds its justification in the predictive success of quantum physics.

The SIC representation of quantum states also provides a reformulation of quantum dynamics. Consider a quantum state {\displaystyle {\hat {\rho }}} with SIC representation {\textstyle P(H_{i})}. The time evolution of this state is found by applying a unitary operator {\displaystyle {\hat {U}}} to form the new state {\textstyle {\hat {U}}{\hat {\rho }}{\hat {U}}^{\dagger }}, which has the SIC representation
{\displaystyle P_{t}(H_{i})=\operatorname {tr} \left[({\hat {U}}{\hat {\rho }}{\hat {U}}^{\dagger }){\hat {H}}_{i}\right]=\operatorname {tr} \left[{\hat {\rho }}({\hat {U}}^{\dagger }{\hat {H}}_{i}{\hat {U}})\right].}
The second equality is written in the Heisenberg picture of quantum dynamics, with respect to which the time evolution of a quantum system is captured by the probabilities associated with a rotated SIC measurement {\textstyle \{D_{j}\}=\{{\hat {U}}^{\dagger }{\hat {H}}_{j}{\hat {U}}\}} of the original quantum state {\displaystyle {\hat {\rho }}}. Then the Schrödinger equation is completely captured in the urgleichung for this measurement:
{\displaystyle P_{t}(H_{j})=\sum _{i=1}^{d^{2}}\left[(d+1)P(H_{i})-{\frac {1}{d}}\right]P(D_{j}\mid H_{i}).}
In these terms, the Schrödinger equation is an instance of the Born rule applied to the passing of time; an agent uses it to relate how she will gamble on informationally complete measurements potentially performed at different times.

Those QBists who find this approach promising are pursuing a complete reconstruction of quantum theory featuring the urgleichung as the key postulate.[97] (The urgleichung has also been discussed in the context of category theory.[100]) Comparisons between this approach and others not associated with QBism (or indeed with any particular interpretation) can be found in a book chapter by Fuchs and Stacey[101] and an article by Appleby et al.[97] As of 2017, alternative QBist reconstruction efforts are in the beginning stages.[102]
https://en.wikipedia.org/wiki/Quantum_Bayesianism
This is a list of Wikipedia articles of Latin phrases and their translation into English. To view all phrases on a single, lengthy document, see: List of Latin phrases (full).
https://en.wikipedia.org/wiki/List_of_Latin_phrases
Artificial intelligence(AI) refers to the capability ofcomputational systemsto perform tasks typically associated withhuman intelligence, such as learning, reasoning, problem-solving, perception, and decision-making. It is afield of researchincomputer sciencethat develops and studies methods andsoftwarethat enable machines toperceive their environmentand uselearningandintelligenceto take actions that maximize their chances of achieving defined goals.[1]Such machines may be called AIs. High-profileapplications of AIinclude advancedweb search engines(e.g.,Google Search);recommendation systems(used byYouTube,Amazon, andNetflix);virtual assistants(e.g.,Google Assistant,Siri, andAlexa);autonomous vehicles(e.g.,Waymo);generativeandcreativetools (e.g.,ChatGPTandAI art); andsuperhumanplay and analysis instrategy games(e.g.,chessandGo). However, many AI applications are not perceived as AI: "A lot of cutting edge AI has filtered into general applications, often without being called AI because once something becomes useful enough and common enough it'snot labeled AI anymore."[2][3] Various subfields of AI research are centered around particular goals and the use of particular tools. The traditional goals of AI research include learning,reasoning,knowledge representation,planning,natural language processing,perception, and support forrobotics.[a]To reach these goals, AI researchers have adapted and integrated a wide range of techniques, includingsearchandmathematical optimization,formal logic,artificial neural networks, and methods based onstatistics,operations research, andeconomics.[b]AI also draws uponpsychology,linguistics,philosophy,neuroscience, and other fields.[4]Some AI companies, such asOpenAI,Google DeepMindandMeta, aim to createartificial general intelligence(AGI)—AI that can complete virtually any cognitive task at least as well as humans.[5] Artificial intelligence was founded as an academic discipline in 1956,[6]and the field went through multiple cycles of optimism throughoutits history,[7][8]followed by periods of disappointment and loss of funding, known asAI winters.[9][10]Funding and interest vastly increased after 2012 whengraphics processing unitsstarted being used to accelerate neural networks, anddeep learningoutperformed previous AI techniques.[11]This growth accelerated further after 2017 with thetransformer architecture.[12]In the 2020s, the period of rapidprogressmarked by advanced generative AI became known as theAI boom. Generative AI and its ability to create and modify content exposed several unintended consequences and harms in the present and raisedethical concernsaboutAI's long-term effectsand potentialexistential risks, prompting discussions aboutregulatory policiesto ensure thesafetyand benefits of the technology. The general problem of simulating (or creating) intelligence has been broken into subproblems. These consist of particular traits or capabilities that researchers expect an intelligent system to display. 
The traits described below have received the most attention and cover the scope of AI research.[a] Early researchers developed algorithms that imitated step-by-step reasoning that humans use when they solve puzzles or make logicaldeductions.[13]By the late 1980s and 1990s, methods were developed for dealing withuncertainor incomplete information, employing concepts fromprobabilityandeconomics.[14] Many of these algorithms are insufficient for solving large reasoning problems because they experience a "combinatorial explosion": They become exponentially slower as the problems grow.[15]Even humans rarely use the step-by-step deduction that early AI research could model. They solve most of their problems using fast, intuitive judgments.[16]Accurate and efficient reasoning is an unsolved problem. Knowledge representationandknowledge engineering[17]allow AI programs to answer questions intelligently and make deductions about real-world facts. Formal knowledge representations are used in content-based indexing and retrieval,[18]scene interpretation,[19]clinical decision support,[20]knowledge discovery (mining "interesting" and actionable inferences from largedatabases),[21]and other areas.[22] Aknowledge baseis a body of knowledge represented in a form that can be used by a program. Anontologyis the set of objects, relations, concepts, and properties used by a particular domain of knowledge.[23]Knowledge bases need to represent things such as objects, properties, categories, and relations between objects;[24]situations, events, states, and time;[25]causes and effects;[26]knowledge about knowledge (what we know about what other people know);[27]default reasoning(things that humans assume are true until they are told differently and will remain true even when other facts are changing);[28]and many other aspects and domains of knowledge. Among the most difficult problems in knowledge representation are the breadth of commonsense knowledge (the set of atomic facts that the average person knows is enormous);[29]and the sub-symbolic form of most commonsense knowledge (much of what people know is not represented as "facts" or "statements" that they could express verbally).[16]There is also the difficulty ofknowledge acquisition, the problem of obtaining knowledge for AI applications.[c] An "agent" is anything that perceives and takes actions in the world. Arational agenthas goals or preferences and takes actions to make them happen.[d][32]Inautomated planning, the agent has a specific goal.[33]Inautomated decision-making, the agent has preferences—there are some situations it would prefer to be in, and some situations it is trying to avoid. The decision-making agent assigns a number to each situation (called the "utility") that measures how much the agent prefers it. For each possible action, it can calculate the "expected utility": theutilityof all possible outcomes of the action, weighted by the probability that the outcome will occur. It can then choose the action with the maximum expected utility.[34] Inclassical planning, the agent knows exactly what the effect of any action will be.[35]In most real-world problems, however, the agent may not be certain about the situation they are in (it is "unknown" or "unobservable") and it may not know for certain what will happen after each possible action (it is not "deterministic"). 
It must choose an action by making a probabilistic guess and then reassess the situation to see if the action worked.[36] In some problems, the agent's preferences may be uncertain, especially if there are other agents or humans involved. These can be learned (e.g., withinverse reinforcement learning), or the agent can seek information to improve its preferences.[37]Information value theorycan be used to weigh the value of exploratory or experimental actions.[38]The space of possible future actions and situations is typicallyintractablylarge, so the agents must take actions and evaluate situations while being uncertain of what the outcome will be. AMarkov decision processhas atransition modelthat describes the probability that a particular action will change the state in a particular way and areward functionthat supplies the utility of each state and the cost of each action. Apolicyassociates a decision with each possible state. The policy could be calculated (e.g., byiteration), beheuristic, or it can be learned.[39] Game theorydescribes the rational behavior of multiple interacting agents and is used in AI programs that make decisions that involve other agents.[40] Machine learningis the study of programs that can improve their performance on a given task automatically.[41]It has been a part of AI from the beginning.[e] There are several kinds of machine learning.Unsupervised learninganalyzes a stream of data and finds patterns and makes predictions without any other guidance.[44]Supervised learningrequires labeling the training data with the expected answers, and comes in two main varieties:classification(where the program must learn to predict what category the input belongs in) andregression(where the program must deduce a numeric function based on numeric input).[45] Inreinforcement learning, the agent is rewarded for good responses and punished for bad ones. The agent learns to choose responses that are classified as "good".[46]Transfer learningis when the knowledge gained from one problem is applied to a new problem.[47]Deep learningis a type of machine learning that runs inputs through biologically inspiredartificial neural networksfor all of these types of learning.[48] Computational learning theorycan assess learners bycomputational complexity, bysample complexity(how much data is required), or by other notions ofoptimization.[49] Natural language processing(NLP)[50]allows programs to read, write and communicate in human languages such asEnglish. Specific problems includespeech recognition,speech synthesis,machine translation,information extraction,information retrievalandquestion answering.[51] Early work, based onNoam Chomsky'sgenerative grammarandsemantic networks, had difficulty withword-sense disambiguation[f]unless restricted to small domains called "micro-worlds" (due to the common sense knowledge problem[29]).Margaret Mastermanbelieved that it was meaning and not grammar that was the key to understanding languages, and thatthesauriand not dictionaries should be the basis of computational language structure. 
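Returning to the decision-making formalism described earlier in this section, the ingredients of a Markov decision process (a transition model, a reward function, a discount factor, and a policy computed by iteration) can be made concrete with a short sketch. The two-state machine below, together with its actions, transition probabilities, and rewards, is entirely invented for illustration and is not taken from the article; value iteration is one standard way a policy "could be calculated".

# A toy Markov decision process solved by value iteration.  The states,
# actions, transition probabilities, and rewards are invented purely for
# illustration.  transitions[s][a] lists (probability, next_state, reward).
transitions = {
    "cool": {"fast": [(0.9, "cool", 2.0), (0.1, "hot", 2.0)],
             "slow": [(1.0, "cool", 1.0)]},
    "hot":  {"fast": [(0.8, "hot", 2.0), (0.2, "broken", -10.0)],
             "slow": [(0.7, "cool", 1.0), (0.3, "hot", 1.0)]},
    "broken": {},          # terminal state: no actions available
}
gamma = 0.9                # discount factor for future rewards

# Value iteration: repeatedly back up the expected utility of each state.
V = {s: 0.0 for s in transitions}
for _ in range(200):
    for s, actions in transitions.items():
        if actions:
            V[s] = max(sum(p * (r + gamma * V[s2]) for p, s2, r in outcomes)
                       for outcomes in actions.values())

# The greedy policy picks, in each state, the action with maximum expected utility.
policy = {s: max(actions, key=lambda a: sum(p * (r + gamma * V[s2])
                                            for p, s2, r in actions[a]))
          for s, actions in transitions.items() if actions}
print(V)
print(policy)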
Modern deep learning techniques for NLP includeword embedding(representing words, typically asvectorsencoding their meaning),[52]transformers(a deep learning architecture using anattentionmechanism),[53]and others.[54]In 2019,generative pre-trained transformer(or "GPT") language models began to generate coherent text,[55][56]and by 2023, these models were able to get human-level scores on thebar exam,SATtest,GREtest, and many other real-world applications.[57] Machine perceptionis the ability to use input from sensors (such as cameras, microphones, wireless signals, activelidar, sonar, radar, andtactile sensors) to deduce aspects of the world.Computer visionis the ability to analyze visual input.[58] The field includesspeech recognition,[59]image classification,[60]facial recognition,object recognition,[61]object tracking,[62]androbotic perception.[63] Affective computingis a field that comprises systems that recognize, interpret, process, or simulate humanfeeling, emotion, and mood.[65]For example, somevirtual assistantsare programmed to speak conversationally or even to banter humorously; it makes them appear more sensitive to the emotional dynamics of human interaction, or to otherwise facilitatehuman–computer interaction. However, this tends to give naïve users an unrealistic conception of the intelligence of existing computer agents.[66]Moderate successes related to affective computing include textualsentiment analysisand, more recently,multimodal sentiment analysis, wherein AI classifies the effects displayed by a videotaped subject.[67] A machine withartificial general intelligenceshould be able to solve a wide variety of problems with breadth and versatility similar tohuman intelligence.[68] AI research uses a wide variety of techniques to accomplish the goals above.[b] AI can solve many problems by intelligently searching through many possible solutions.[69]There are two very different kinds of search used in AI:state space searchandlocal search. State space searchsearches through a tree of possible states to try to find a goal state.[70]For example,planningalgorithms search through trees of goals and subgoals, attempting to find a path to a target goal, a process calledmeans-ends analysis.[71] Simple exhaustive searches[72]are rarely sufficient for most real-world problems: thesearch space(the number of places to search) quickly grows toastronomical numbers. The result is a search that istoo slowor never completes.[15]"Heuristics" or "rules of thumb" can help prioritize choices that are more likely to reach a goal.[73] Adversarial searchis used forgame-playingprograms, such as chess or Go. It searches through atreeof possible moves and countermoves, looking for a winning position.[74] Local searchusesmathematical optimizationto find a solution to a problem. It begins with some form of guess and refines it incrementally.[75] Gradient descentis a type of local search that optimizes a set of numerical parameters by incrementally adjusting them to minimize aloss function. Variants of gradient descent are commonly used to trainneural networks,[76]through thebackpropagationalgorithm. Another type of local search isevolutionary computation, which aims to iteratively improve a set of candidate solutions by "mutating" and "recombining" them,selectingonly the fittest to survive each generation.[77] Distributed search processes can coordinate viaswarm intelligencealgorithms. 
Two popular swarm algorithms used in search areparticle swarm optimization(inspired by birdflocking) andant colony optimization(inspired byant trails).[78] Formallogicis used forreasoningandknowledge representation.[79]Formal logic comes in two main forms:propositional logic(which operates on statements that are true or false and useslogical connectivessuch as "and", "or", "not" and "implies")[80]andpredicate logic(which also operates on objects, predicates and relations and usesquantifierssuch as "EveryXis aY" and "There aresomeXs that areYs").[81] Deductive reasoningin logic is the process ofprovinga new statement (conclusion) from other statements that are given and assumed to be true (thepremises).[82]Proofs can be structured as prooftrees, in which nodes are labelled by sentences, and children nodes are connected to parent nodes byinference rules. Given a problem and a set of premises, problem-solving reduces to searching for a proof tree whose root node is labelled by a solution of the problem and whoseleaf nodesare labelled by premises oraxioms. In the case ofHorn clauses, problem-solving search can be performed by reasoningforwardsfrom the premises orbackwardsfrom the problem.[83]In the more general case of the clausal form offirst-order logic,resolutionis a single, axiom-free rule of inference, in which a problem is solved by proving a contradiction from premises that include the negation of the problem to be solved.[84] Inference in both Horn clause logic and first-order logic isundecidable, and thereforeintractable. However, backward reasoning with Horn clauses, which underpins computation in thelogic programminglanguageProlog, isTuring complete. Moreover, its efficiency is competitive with computation in othersymbolic programminglanguages.[85] Fuzzy logicassigns a "degree of truth" between 0 and 1. It can therefore handle propositions that are vague and partially true.[86] Non-monotonic logics, including logic programming withnegation as failure, are designed to handledefault reasoning.[28]Other specialized versions of logic have been developed to describe many complex domains. Many problems in AI (including reasoning, planning, learning, perception, and robotics) require the agent to operate with incomplete or uncertain information. AI researchers have devised a number of tools to solve these problems using methods fromprobabilitytheory and economics.[87]Precise mathematical tools have been developed that analyze how an agent can make choices and plan, usingdecision theory,decision analysis,[88]andinformation value theory.[89]These tools include models such asMarkov decision processes,[90]dynamicdecision networks,[91]game theoryandmechanism design.[92] Bayesian networks[93]are a tool that can be used forreasoning(using theBayesian inferencealgorithm),[g][95]learning(using theexpectation–maximization algorithm),[h][97]planning(usingdecision networks)[98]andperception(usingdynamic Bayesian networks).[91] Probabilistic algorithms can also be used for filtering, prediction, smoothing, and finding explanations for streams of data, thus helping perception systems analyze processes that occur over time (e.g.,hidden Markov modelsorKalman filters).[91] The simplest AI applications can be divided into two types: classifiers (e.g., "if shiny then diamond"), on one hand, and controllers (e.g., "if diamond then pick up"), on the other hand.Classifiers[99]are functions that usepattern matchingto determine the closest match. 
They can be fine-tuned based on chosen examples usingsupervised learning. Each pattern (also called an "observation") is labeled with a certain predefined class. All the observations combined with their class labels are known as adata set. When a new observation is received, that observation is classified based on previous experience.[45] There are many kinds of classifiers in use.[100]Thedecision treeis the simplest and most widely used symbolic machine learning algorithm.[101]K-nearest neighboralgorithm was the most widely used analogical AI until the mid-1990s, andKernel methodssuch as thesupport vector machine(SVM) displaced k-nearest neighbor in the 1990s.[102]Thenaive Bayes classifieris reportedly the "most widely used learner"[103]at Google, due in part to its scalability.[104]Neural networksare also used as classifiers.[105] An artificial neural network is based on a collection of nodes also known asartificial neurons, which loosely model theneuronsin a biological brain. It is trained to recognise patterns; once trained, it can recognise those patterns in fresh data. There is an input, at least one hidden layer of nodes and an output. Each node applies a function and once theweightcrosses its specified threshold, the data is transmitted to the next layer. A network is typically called a deep neural network if it has at least 2 hidden layers.[105] Learning algorithms for neural networks uselocal searchto choose the weights that will get the right output for each input during training. The most common training technique is thebackpropagationalgorithm.[106]Neural networks learn to model complex relationships between inputs and outputs andfind patternsin data. In theory, a neural network can learn any function.[107] Infeedforward neural networksthe signal passes in only one direction.[108]Recurrent neural networksfeed the output signal back into the input, which allows short-term memories of previous input events.Long short term memoryis the most successful network architecture for recurrent networks.[109]Perceptrons[110]use only a single layer of neurons; deep learning[111]uses multiple layers.Convolutional neural networksstrengthen the connection between neurons that are "close" to each other—this is especially important inimage processing, where a local set of neurons mustidentify an "edge"before the network can identify an object.[112] Deep learning[111]uses several layers of neurons between the network's inputs and outputs. The multiple layers can progressively extract higher-level features from the raw input. For example, inimage processing, lower layers may identify edges, while higher layers may identify the concepts relevant to a human such as digits, letters, or faces.[113] Deep learning has profoundly improved the performance of programs in many important subfields of artificial intelligence, includingcomputer vision,speech recognition,natural language processing,image classification,[114]and others. 
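The pieces described above (a feedforward network with one hidden layer, a loss function, backpropagation, and gradient descent on the weights) can be combined in a minimal sketch. The example below assumes NumPy and uses the XOR function as training data; the architecture, learning rate, and step count are arbitrary illustrative choices, and with most random initializations the network learns XOR after a few thousand gradient steps.

# A minimal feedforward neural network (one hidden layer) trained with
# gradient descent and backpropagation to learn the XOR function.
# All hyperparameters here are illustrative, not recommendations.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)   # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)               # XOR targets

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)   # input -> hidden weights
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)   # hidden -> output weights
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
lr = 1.0                                        # learning rate

for step in range(5000):
    # Forward pass: signals flow from input to output.
    h = sigmoid(X @ W1 + b1)          # hidden layer activations
    p = sigmoid(h @ W2 + b2)          # network output in (0, 1)
    loss = np.mean((p - y) ** 2)      # mean squared error loss

    # Backward pass (backpropagation): propagate the error gradient
    # from the output layer back through the hidden layer.
    dp = 2 * (p - y) / len(X)         # dLoss/dp
    dz2 = dp * p * (1 - p)            # through the output sigmoid
    dW2 = h.T @ dz2
    db2 = dz2.sum(axis=0)
    dh = dz2 @ W2.T
    dz1 = dh * h * (1 - h)            # through the hidden sigmoid
    dW1 = X.T @ dz1
    db1 = dz1.sum(axis=0)

    # Gradient descent: adjust each weight against its gradient.
    W1 -= lr * dW1
    b1 -= lr * db1
    W2 -= lr * dW2
    b2 -= lr * db2

print(np.round(p, 3))   # with most initializations, predictions approach [0, 1, 1, 0]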
The reason that deep learning performs so well in so many applications is not known as of 2021.[115]The sudden success of deep learning in 2012–2015 did not occur because of some new discovery or theoretical breakthrough (deep neural networks and backpropagation had been described by many people, as far back as the 1950s)[i]but because of two factors: the incredible increase in computer power (including the hundred-fold increase in speed by switching toGPUs) and the availability of vast amounts of training data, especially the giantcurated datasetsused for benchmark testing, such asImageNet.[j] Generative pre-trained transformers(GPT) arelarge language models(LLMs) that generate text based on the semantic relationships between words in sentences. Text-based GPT models are pre-trained on a largecorpus of textthat can be from the Internet. The pretraining consists of predicting the nexttoken(a token being usually a word, subword, or punctuation). Throughout this pretraining, GPT models accumulate knowledge about the world and can then generate human-like text by repeatedly predicting the next token. Typically, a subsequent training phase makes the model more truthful, useful, and harmless, usually with a technique calledreinforcement learning from human feedback(RLHF). Current GPT models are prone to generating falsehoods called "hallucinations". These can be reduced with RLHF and quality data, but the problem has been getting worse for reasoning systems.[123]Such systems are used inchatbots, which allow people to ask a question or request a task in simple text.[124][125] Current models and services includeGemini(formerly Bard),ChatGPT,Grok,Claude,Copilot, andLLaMA.[126]MultimodalGPT models can process different types of data (modalities) such as images, videos, sound, and text.[127] In the late 2010s,graphics processing units(GPUs) that were increasingly designed with AI-specific enhancements and used with specializedTensorFlowsoftware had replaced previously usedcentral processing unit(CPUs) as the dominant means for large-scale (commercial and academic)machine learningmodels' training.[128]Specializedprogramming languagessuch asPrologwere used in early AI research,[129]butgeneral-purpose programming languageslikePythonhave become predominant.[130] The transistor density inintegrated circuitshas been observed to roughly double every 18 months—a trend known asMoore's law, named after theIntelco-founderGordon Moore, who first identified it. Improvements inGPUshave been even faster,[131]a trend sometimes calledHuang's law,[132]named afterNvidiaco-founder and CEOJensen Huang. AI and machine learning technology is used in most of the essential applications of the 2020s, including:search engines(such asGoogle Search),targeting online advertisements,recommendation systems(offered byNetflix,YouTubeorAmazon), drivinginternet traffic,targeted advertising(AdSense,Facebook),virtual assistants(such asSiriorAlexa),autonomous vehicles(includingdrones,ADASandself-driving cars),automatic language translation(Microsoft Translator,Google Translate),facial recognition(Apple'sFaceIDorMicrosoft'sDeepFaceandGoogle'sFaceNet) andimage labeling(used byFacebook, Apple'sPhotosandTikTok). The deployment of AI may be overseen by aChief automation officer(CAO). 
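The next-token pretraining objective described above for GPT models can be illustrated, in a deliberately oversimplified form, with a toy frequency-based predictor. The sketch below is a bigram counter, not a transformer, and has none of the capabilities of a real large language model; the tiny corpus and the generate function are invented purely to make "repeatedly predicting the next token" concrete.

# A toy illustration of next-token prediction: count which token follows
# which in a tiny corpus, then generate text by repeatedly sampling the
# next token.  This is a bigram frequency model, NOT a transformer.
import random
from collections import defaultdict, Counter

corpus = ("the cat sat on the mat . the dog sat on the rug . "
          "the cat chased the dog .").split()      # invented example text

# "Pretraining": record how often each token follows each preceding token.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def generate(prompt: str, length: int = 8, seed: int = 0) -> str:
    """Generate text by repeatedly sampling the next token from the counts."""
    rng = random.Random(seed)
    tokens = prompt.split()
    for _ in range(length):
        counts = following.get(tokens[-1])
        if not counts:                       # unseen token: stop generating
            break
        candidates, weights = zip(*counts.items())
        tokens.append(rng.choices(candidates, weights=weights)[0])
    return " ".join(tokens)

print(generate("the cat"))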
The application of AI inmedicineandmedical researchhas the potential to increase patient care and quality of life.[133]Through the lens of theHippocratic Oath, medical professionals are ethically compelled to use AI, if applications can more accurately diagnose and treat patients.[134][135] For medical research, AI is an important tool for processing and integratingbig data. This is particularly important fororganoidandtissue engineeringdevelopment which usemicroscopyimaging as a key technique in fabrication.[136]It has been suggested that AI can overcome discrepancies in funding allocated to different fields of research.[136][137]New AI tools can deepen the understanding of biomedically relevant pathways. For example,AlphaFold 2(2021) demonstrated the ability to approximate, in hours rather than months, the 3Dstructure of a protein.[138]In 2023, it was reported that AI-guided drug discovery helped find a class of antibiotics capable of killing two different types of drug-resistant bacteria.[139]In 2024, researchers used machine learning to accelerate the search forParkinson's diseasedrug treatments. Their aim was to identify compounds that block the clumping, or aggregation, ofalpha-synuclein(the protein that characterises Parkinson's disease). They were able to speed up the initial screening process ten-fold and reduce the cost by a thousand-fold.[140][141] Game playingprograms have been used since the 1950s to demonstrate and test AI's most advanced techniques.[142]Deep Bluebecame the first computer chess-playing system to beat a reigning world chess champion,Garry Kasparov, on 11 May 1997.[143]In 2011, in aJeopardy!quiz showexhibition match,IBM'squestion answering system,Watson, defeated the two greatestJeopardy!champions,Brad RutterandKen Jennings, by a significant margin.[144]In March 2016,AlphaGowon 4 out of 5 games ofGoin a match with Go championLee Sedol, becoming the firstcomputer Go-playing system to beat a professional Go player withouthandicaps. Then, in 2017, itdefeated Ke Jie, who was the best Go player in the world.[145]Other programs handleimperfect-informationgames, such as thepoker-playing programPluribus.[146]DeepMinddeveloped increasingly generalisticreinforcement learningmodels, such as withMuZero, which could be trained to play chess, Go, orAtarigames.[147]In 2019, DeepMind's AlphaStar achieved grandmaster level inStarCraft II, a particularly challenging real-time strategy game that involves incomplete knowledge of what happens on the map.[148]In 2021, an AI agent competed in a PlayStationGran Turismocompetition, winning against four of the world's best Gran Turismo drivers using deep reinforcement learning.[149]In 2024, Google DeepMind introduced SIMA, a type of AI capable of autonomously playing nine previously unseenopen-worldvideo games by observing screen output, as well as executing short, specific tasks in response to natural language instructions.[150] Large language models, such asGPT-4,Gemini,Claude,LLaMaorMistral, are increasingly used in mathematics. These probabilistic models are versatile, but can also produce wrong answers in the form ofhallucinations. 
They sometimes need a large database of mathematical problems to learn from, but also methods such assupervisedfine-tuning[151]or trainedclassifierswith human-annotated data to improve answers for new problems and learn from corrections.[152]A February 2024 study showed that the performance of some language models for reasoning capabilities in solving math problems not included in their training data was low, even for problems with only minor deviations from trained data.[153]One technique to improve their performance involves training the models to produce correctreasoningsteps, rather than just the correct result.[154]TheAlibaba Groupdeveloped a version of itsQwenmodels calledQwen2-Math, that achieved state-of-the-art performance on several mathematical benchmarks, including 84% accuracy on the MATH dataset of competition mathematics problems.[155]In January 2025, Microsoft proposed the techniquerStar-Maththat leveragesMonte Carlo tree searchand step-by-step reasoning, enabling a relatively small language model likeQwen-7Bto solve 53% of theAIME2024 and 90% of the MATH benchmark problems.[156] Alternatively, dedicated models for mathematical problem solving with higher precision for the outcome including proof of theorems have been developed such asAlphaTensor,AlphaGeometryandAlphaProofall fromGoogle DeepMind,[157]LlemmafromEleutherAI[158]orJulius.[159] When natural language is used to describe mathematical problems, converters can transform such prompts into a formal language such asLeanto define mathematical tasks. Some models have been developed to solve challenging problems and reach good results in benchmark tests, others to serve as educational tools in mathematics.[160] Topological deep learningintegrates varioustopologicalapproaches. Finance is one of the fastest growing sectors where applied AI tools are being deployed: from retail online banking to investment advice and insurance, where automated "robot advisers" have been in use for some years.[161] According to Nicolas Firzli, director of theWorld Pensions & Investments Forum, it may be too early to see the emergence of highly innovative AI-informed financial products and services. 
He argues that "the deployment of AI tools will simply further automatise things: destroying tens of thousands of jobs in banking, financial planning, and pension advice in the process, but I'm not sure it will unleash a new wave of [e.g., sophisticated] pension innovation."[162] Various countries are deploying AI military applications.[163]The main applications enhancecommand and control, communications, sensors, integration and interoperability.[164]Research is targeting intelligence collection and analysis, logistics, cyber operations, information operations, and semiautonomous andautonomous vehicles.[163]AI technologies enable coordination of sensors and effectors, threat detection and identification, marking of enemy positions,target acquisition, coordination and deconfliction of distributedJoint Firesbetween networked combat vehicles, both human operated andautonomous.[164] AI has been used in military operations in Iraq, Syria, Israel and Ukraine.[163][165][166][167] Generative artificial intelligence(Generative AI, GenAI,[168]or GAI) is a subfield of artificial intelligence that uses generative models to produce text, images, videos, or other forms of data.[169][170][171]These modelslearnthe underlying patterns and structures of theirtraining dataand use them to produce new data[172][173]based on the input, which often comes in the form of natural languageprompts.[174][175] Generative AI tools have become more common since an "AI boom" in the 2020s. This boom was made possible by improvements intransformer-baseddeepneural networks, particularlylarge language models(LLMs). Major tools includechatbotssuch asChatGPT,DeepSeek,Copilot,Gemini,Llama, andGrok;text-to-imageartificial intelligence image generationsystems such asStable Diffusion,Midjourney, andDALL-E; andtext-to-videoAI generators such asSora.[176][177][178][179]Technology companies developing generative AI includeOpenAI,Anthropic,Microsoft,Google,DeepSeek, andBaidu.[180][181][182] Artificial intelligent (AI) agents are software entities designed to perceive their environment, make decisions, and take actions autonomously to achieve specific goals. These agents can interact with users, their environment, or other agents. AI agents are used in various applications, includingvirtual assistants,chatbots,autonomous vehicles,game-playing systems, andindustrial robotics. AI agents operate within the constraints of their programming, available computational resources, and hardware limitations. This means they are restricted to performing tasks within their defined scope and have finite memory and processing capabilities. In real-world applications, AI agents often face time constraints for decision-making and action execution. Many AI agents incorporate learning algorithms, enabling them to improve their performance over time through experience or training. 
Using machine learning, AI agents can adapt to new situations and optimise their behaviour for their designated tasks.[186][187][188] Applications of AI in this domain include AI-enabled menstruation and fertility trackers that analyze user data to offer prediction,[189]AI-integrated sex toys (e.g.,teledildonics),[190]AI-generated sexual education content,[191]and AI agents that simulate sexual and romantic partners (e.g.,Replika).[192]AI is also used for the production of non-consensualdeepfake pornography, raising significant ethical and legal concerns.[193] AI technologies have also been used to attempt to identifyonline gender-based violenceand onlinesexual groomingof minors.[194][195] There are also thousands of successful AI applications used to solve specific problems for specific industries or institutions. In a 2017 survey, one in five companies reported having incorporated "AI" in some offerings or processes.[196]A few examples areenergy storage, medical diagnosis, military logistics, applications that predict the result of judicial decisions,foreign policy, or supply chain management. AI applications for evacuation anddisastermanagement are growing. AI has been used to investigate if and how people evacuated in large scale and small scale evacuations using historical data from GPS, videos or social media. Further, AI can provide real time information on the real time evacuation conditions.[197][198][199] In agriculture, AI has helped farmers identify areas that need irrigation, fertilization, pesticide treatments or increasing yield. Agronomists use AI to conduct research and development. AI has been used to predict the ripening time for crops such as tomatoes, monitor soil moisture, operate agricultural robots, conductpredictive analytics, classify livestock pig call emotions, automate greenhouses, detect diseases and pests, and save water. Artificial intelligence is used in astronomy to analyze increasing amounts of available data and applications, mainly for "classification, regression, clustering, forecasting, generation, discovery, and the development of new scientific insights." For example, it is used for discovering exoplanets, forecasting solar activity, and distinguishing between signals and instrumental effects in gravitational wave astronomy. Additionally, it could be used for activities in space, such as space exploration, including the analysis of data from space missions, real-time science decisions of spacecraft, space debris avoidance, and more autonomous operation. During the2024 Indian elections, US$50 million was spent on authorized AI-generated content, notably by creatingdeepfakesof allied (including sometimes deceased) politicians to better engage with voters, and by translating speeches to various local languages.[200] AI has potential benefits and potential risks.[201]AI may be able to advance science and find solutions for serious problems:Demis HassabisofDeepMindhopes to "solve intelligence, and then use that to solve everything else".[202]However, as the use of AI has become widespread, several unintended consequences and risks have been identified.[203]In-production systems can sometimes not factor ethics and bias into their AI training processes, especially when the AI algorithms are inherently unexplainable in deep learning.[204] Machine learning algorithms require large amounts of data. The techniques used to acquire this data have raised concerns aboutprivacy,surveillanceandcopyright. 
AI-powered devices and services, such as virtual assistants and IoT products, continuously collect personal information, raising concerns about intrusive data gathering and unauthorized access by third parties. The loss of privacy is further exacerbated by AI's ability to process and combine vast amounts of data, potentially leading to a surveillance society where individual activities are constantly monitored and analyzed without adequate safeguards or transparency. Sensitive user data collected may include online activity records, geolocation data, video, or audio.[205]For example, in order to buildspeech recognitionalgorithms,Amazonhas recorded millions of private conversations and allowedtemporary workersto listen to and transcribe some of them.[206]Opinions about this widespread surveillance range from those who see it as anecessary evilto those for whom it is clearlyunethicaland a violation of theright to privacy.[207] AI developers argue that this is the only way to deliver valuable applications and have developed several techniques that attempt to preserve privacy while still obtaining the data, such asdata aggregation,de-identificationanddifferential privacy.[208]Since 2016, some privacy experts, such asCynthia Dwork, have begun to view privacy in terms offairness.Brian Christianwrote that experts have pivoted "from the question of 'what they know' to the question of 'what they're doing with it'."[209] Generative AI is often trained on unlicensed copyrighted works, including in domains such as images or computer code; the output is then used under the rationale of "fair use". Experts disagree about how well and under what circumstances this rationale will hold up in courts of law; relevant factors may include "the purpose and character of the use of the copyrighted work" and "the effect upon the potential market for the copyrighted work".[210][211]Website owners who do not wish to have their content scraped can indicate it in a "robots.txt" file.[212]In 2023, leading authors (includingJohn GrishamandJonathan Franzen) sued AI companies for using their work to train generative AI.[213][214]Another discussed approach is to envision a separatesui generissystem of protection for creations generated by AI to ensure fair attribution and compensation for human authors.[215] The commercial AI scene is dominated byBig Techcompanies such asAlphabet Inc.,Amazon,Apple Inc.,Meta Platforms, andMicrosoft.[216][217][218]Some of these players already own the vast majority of existingcloud infrastructureandcomputingpower fromdata centers, allowing them to entrench further in the marketplace.[219][220] In January 2024, theInternational Energy Agency(IEA) releasedElectricity 2024, Analysis and Forecast to 2026, forecasting electric power use.[221]This is the first IEA report to make projections for data centers and power consumption for artificial intelligence and cryptocurrency. The report states that power demand for these uses might double by 2026, with additional electric power usage equal to electricity used by the whole Japanese nation.[222] Prodigious power consumption by AI is responsible for the growth of fossil fuels use, and might delay closings of obsolete, carbon-emitting coal energy facilities. There is a feverish rise in the construction of data centers throughout the US, making large technology firms (e.g., Microsoft, Meta, Google, Amazon) into voracious consumers of electric power. 
Projected electric consumption is so immense that there is concern that it will be fulfilled no matter the source. A ChatGPT search involves the use of 10 times the electrical energy as a Google search. The large firms are in haste to find power sources – from nuclear energy to geothermal to fusion. The tech firms argue that – in the long view – AI will be eventually kinder to the environment, but they need the energy now. AI makes the power grid more efficient and "intelligent", will assist in the growth of nuclear power, and track overall carbon emissions, according to technology firms.[223] A 2024Goldman SachsResearch Paper,AI Data Centers and the Coming US Power Demand Surge, found "US power demand (is) likely to experience growth not seen in a generation...." and forecasts that, by 2030, US data centers will consume 8% of US power, as opposed to 3% in 2022, presaging growth for the electrical power generation industry by a variety of means.[224]Data centers' need for more and more electrical power is such that they might max out the electrical grid. The Big Tech companies counter that AI can be used to maximize the utilization of the grid by all.[225] In 2024, theWall Street Journalreported that big AI companies have begun negotiations with the US nuclear power providers to provide electricity to the data centers. In March 2024 Amazon purchased a Pennsylvania nuclear-powered data center for $650 Million (US).[226]NvidiaCEOJen-Hsun Huangsaid nuclear power is a good option for the data centers.[227] In September 2024,Microsoftannounced an agreement withConstellation Energyto re-open theThree Mile Islandnuclear power plant to provide Microsoft with 100% of all electric power produced by the plant for 20 years. Reopening the plant, which suffered a partial nuclear meltdown of its Unit 2 reactor in 1979, will require Constellation to get through strict regulatory processes which will include extensive safety scrutiny from the USNuclear Regulatory Commission. If approved (this will be the first ever US re-commissioning of a nuclear plant), over 835 megawatts of power – enough for 800,000 homes – of energy will be produced. The cost for re-opening and upgrading is estimated at $1.6 billion (US) and is dependent on tax breaks for nuclear power contained in the 2022 USInflation Reduction Act.[228]The US government and the state of Michigan are investing almost $2 billion (US) to reopen thePalisades Nuclearreactor on Lake Michigan. Closed since 2022, the plant is planned to be reopened in October 2025. 
The Three Mile Island facility will be renamed the Crane Clean Energy Center after Chris Crane, a nuclear proponent and former CEO of Exelon who was responsible for Exelon's spinoff of Constellation.[229]

After the last approval in September 2023, Taiwan suspended the approval of data centers north of Taoyuan with a capacity of more than 5 MW in 2024, due to power supply shortages.[230] Taiwan aims to phase out nuclear power by 2025.[230] On the other hand, Singapore imposed a ban on the opening of data centers in 2019 due to limited electric power, but lifted this ban in 2022.[230] Although most nuclear plants in Japan have been shut down since the 2011 Fukushima nuclear accident, according to an October 2024 Bloomberg article in Japanese, the cloud gaming services company Ubitus, in which Nvidia has a stake, is looking for land in Japan near a nuclear power plant for a new data center for generative AI.[231] Ubitus CEO Wesley Kuo said nuclear power plants are the most efficient, cheap and stable source of power for AI.[231]

On 1 November 2024, the Federal Energy Regulatory Commission (FERC) rejected an application submitted by Talen Energy for approval to supply some electricity from the nuclear power station Susquehanna to Amazon's data center.[232] According to Commission Chairman Willie L. Phillips, it is a burden on the electricity grid as well as a significant cost-shifting concern to households and other business sectors.[232]

In 2025, a report prepared by the International Energy Agency estimated the greenhouse gas emissions from the energy consumption of AI at 180 million tonnes. By 2035, these emissions could rise to 300–500 million tonnes depending on what measures are taken. This is below 1.5% of the energy sector's emissions. The emissions reduction potential of AI was estimated at 5% of the energy sector's emissions, but rebound effects (for example, if people switch from public transport to autonomous cars) can reduce it.[233]

YouTube, Facebook and others use recommender systems to guide users to more content. These AI programs were given the goal of maximizing user engagement (that is, the only goal was to keep people watching). The AI learned that users tended to choose misinformation, conspiracy theories, and extreme partisan content, and, to keep them watching, the AI recommended more of it. Users also tended to watch more content on the same subject, so the AI led people into filter bubbles where they received multiple versions of the same misinformation.[234] This convinced many users that the misinformation was true, and ultimately undermined trust in institutions, the media and the government.[235] The AI program had correctly learned to maximize its goal, but the result was harmful to society. After the U.S. election in 2016, major technology companies took some steps to mitigate the problem.[236]

In 2022, generative AI began to create images, audio, video and text that are indistinguishable from real photographs, recordings, films, or human writing.
It is possible for bad actors to use this technology to create massive amounts of misinformation or propaganda.[237]One such potential malicious use is deepfakes forcomputational propaganda.[238]AI pioneerGeoffrey Hintonexpressed concern about AI enabling "authoritarian leaders to manipulate their electorates" on a large scale, among other risks.[239] AI researchers atMicrosoft,OpenAI, universities and other organisations have suggested using "personhood credentials" as a way to overcome online deception enabled by AI models.[240] Machine learning applications will bebiased[k]if they learn from biased data.[242]The developers may not be aware that the bias exists.[243]Bias can be introduced by the waytraining datais selected and by the way a model is deployed.[244][242]If a biased algorithm is used to make decisions that can seriouslyharmpeople (as it can inmedicine,finance,recruitment,housingorpolicing) then the algorithm may causediscrimination.[245]The field offairnessstudies how to prevent harms from algorithmic biases. On June 28, 2015,Google Photos's new image labeling feature mistakenly identified Jacky Alcine and a friend as "gorillas" because they were black. The system was trained on a dataset that contained very few images of black people,[246]a problem called "sample size disparity".[247]Google "fixed" this problem by preventing the system from labellinganythingas a "gorilla". Eight years later, in 2023, Google Photos still could not identify a gorilla, and neither could similar products from Apple, Facebook, Microsoft and Amazon.[248] COMPASis a commercial program widely used byU.S. courtsto assess the likelihood of adefendantbecoming arecidivist. In 2016,Julia AngwinatProPublicadiscovered that COMPAS exhibited racial bias, despite the fact that the program was not told the races of the defendants. Although the error rate for both whites and blacks was calibrated equal at exactly 61%, the errors for each race were different—the system consistently overestimated the chance that a black person would re-offend and would underestimate the chance that a white person would not re-offend.[249]In 2017, several researchers[l]showed that it was mathematically impossible for COMPAS to accommodate all possible measures of fairness when the base rates of re-offense were different for whites and blacks in the data.[251] A program can make biased decisions even if the data does not explicitly mention a problematic feature (such as "race" or "gender"). The feature will correlate with other features (like "address", "shopping history" or "first name"), and the program will make the same decisions based on these features as it would on "race" or "gender".[252]Moritz Hardt said "the most robust fact in this research area is that fairness through blindness doesn't work."[253] Criticism of COMPAS highlighted that machine learning models are designed to make "predictions" that are only valid if we assume that the future will resemble the past. If they are trained on data that includes the results of racist decisions in the past, machine learning models must predict that racist decisions will be made in the future. If an application then uses these predictions asrecommendations, some of these "recommendations" will likely be racist.[254]Thus, machine learning is not well suited to help make decisions in areas where there is hope that the future will bebetterthan the past. 
It is descriptive rather than prescriptive.[m]

Bias and unfairness may go undetected because the developers are overwhelmingly white and male: among AI engineers, about 4% are black and 20% are women.[247]

There are various conflicting definitions and mathematical models of fairness. These notions depend on ethical assumptions, and are influenced by beliefs about society. One broad category is distributive fairness, which focuses on the outcomes, often identifying groups and seeking to compensate for statistical disparities. Representational fairness tries to ensure that AI systems do not reinforce negative stereotypes or render certain groups invisible. Procedural fairness focuses on the decision process rather than the outcome. The most relevant notions of fairness may depend on the context, notably the type of AI application and the stakeholders. The subjectivity in the notions of bias and fairness makes it difficult for companies to operationalize them. Having access to sensitive attributes such as race or gender is also considered by many AI ethicists to be necessary in order to compensate for biases, but it may conflict with anti-discrimination laws.[241]

At its 2022 Conference on Fairness, Accountability, and Transparency (ACM FAccT 2022), held in Seoul, South Korea, the Association for Computing Machinery presented and published findings recommending that until AI and robotics systems are demonstrated to be free of bias mistakes, they are unsafe, and that the use of self-learning neural networks trained on vast, unregulated sources of flawed internet data should be curtailed.[256]

Many AI systems are so complex that their designers cannot explain how they reach their decisions.[257] This is particularly true of deep neural networks, in which there are a large number of non-linear relationships between inputs and outputs. Some popular explainability techniques nevertheless exist.[258]

It is impossible to be certain that a program is operating correctly if no one knows exactly how it works. There have been many cases where a machine learning program passed rigorous tests but nevertheless learned something different from what the programmers intended. For example, a system that could identify skin diseases better than medical professionals was found to have a strong tendency to classify images containing a ruler as "cancerous", because pictures of malignancies typically include a ruler to show the scale.[259] Another machine learning system, designed to help effectively allocate medical resources, was found to classify patients with asthma as being at "low risk" of dying from pneumonia. Having asthma is actually a severe risk factor, but since patients with asthma usually received much more medical care, they were relatively unlikely to die according to the training data. The correlation between asthma and low risk of dying from pneumonia was real, but misleading.[260]

People who have been harmed by an algorithm's decision have a right to an explanation.[261] Doctors, for example, are expected to clearly and completely explain to their colleagues the reasoning behind any decision they make. Early drafts of the European Union's General Data Protection Regulation in 2016 included an explicit statement that this right exists.[n] Industry experts noted that this is an unsolved problem with no solution in sight.
Regulators argued that nevertheless the harm is real: if the problem has no solution, the tools should not be used.[262] DARPA established the XAI ("Explainable Artificial Intelligence") program in 2014 to try to solve these problems.[263]

Several approaches aim to address the transparency problem. SHAP makes it possible to visualise the contribution of each feature to the output.[264] LIME can locally approximate a model's outputs with a simpler, interpretable model.[265] Multitask learning provides a large number of outputs in addition to the target classification. These other outputs can help developers deduce what the network has learned.[266] Deconvolution, DeepDream and other generative methods can allow developers to see what different layers of a deep network for computer vision have learned, and produce output that can suggest what the network is learning.[267] For generative pre-trained transformers, Anthropic developed a technique based on dictionary learning that associates patterns of neuron activations with human-understandable concepts.[268]

Artificial intelligence provides a number of tools that are useful to bad actors, such as authoritarian governments, terrorists, criminals or rogue states.

A lethal autonomous weapon is a machine that locates, selects and engages human targets without human supervision.[o] Widely available AI tools can be used by bad actors to develop inexpensive autonomous weapons and, if produced at scale, they are potentially weapons of mass destruction.[270] Even when used in conventional warfare, they currently cannot reliably choose targets and could potentially kill an innocent person.[270] In 2014, 30 nations (including China) supported a ban on autonomous weapons under the United Nations' Convention on Certain Conventional Weapons; however, the United States and others disagreed.[271] By 2015, over fifty countries were reported to be researching battlefield robots.[272]

AI tools make it easier for authoritarian governments to efficiently control their citizens in several ways. Face and voice recognition allow widespread surveillance. Machine learning, operating on this data, can classify potential enemies of the state and prevent them from hiding. Recommendation systems can precisely target propaganda and misinformation for maximum effect. Deepfakes and generative AI aid in producing misinformation. Advanced AI can make authoritarian centralized decision making more competitive than liberal and decentralized systems such as markets. It lowers the cost and difficulty of digital warfare and advanced spyware.[273] All of these technologies have been available since 2020 or earlier; AI facial recognition systems are already being used for mass surveillance in China.[274][275]

There are many other ways in which AI is expected to help bad actors, some of which cannot be foreseen.
For example, machine-learning AI is able to design tens of thousands of toxic molecules in a matter of hours.[276] Economists have frequently highlighted the risks of redundancies from AI, and speculated about unemployment if there is no adequate social policy for full employment.[277] In the past, technology has tended to increase rather than reduce total employment, but economists acknowledge that "we're in uncharted territory" with AI.[278]A survey of economists showed disagreement about whether the increasing use of robots and AI will cause a substantial increase in long-termunemployment, but they generally agree that it could be a net benefit ifproductivitygains areredistributed.[279]Risk estimates vary; for example, in the 2010s, Michael Osborne andCarl Benedikt Freyestimated 47% of U.S. jobs are at "high risk" of potential automation, while an OECD report classified only 9% of U.S. jobs as "high risk".[p][281]The methodology of speculating about future employment levels has been criticised as lacking evidential foundation, and for implying that technology, rather than social policy, creates unemployment, as opposed to redundancies.[277]In April 2023, it was reported that 70% of the jobs for Chinese video game illustrators had been eliminated by generative artificial intelligence.[282][283] Unlike previous waves of automation, many middle-class jobs may be eliminated by artificial intelligence;The Economiststated in 2015 that "the worry that AI could do to white-collar jobs what steam power did to blue-collar ones during the Industrial Revolution" is "worth taking seriously".[284]Jobs at extreme risk range fromparalegalsto fast food cooks, while job demand is likely to increase for care-related professions ranging from personal healthcare to the clergy.[285] From the early days of the development of artificial intelligence, there have been arguments, for example, those put forward byJoseph Weizenbaum, about whether tasks that can be done by computers actually should be done by them, given the difference between computers and humans, and between quantitative calculation and qualitative, value-based judgement.[286] It has been argued AI will become so powerful that humanity may irreversibly lose control of it. This could, as physicistStephen Hawkingstated, "spell the end of the human race".[287]This scenario has been common in science fiction, when a computer or robot suddenly develops a human-like "self-awareness" (or "sentience" or "consciousness") and becomes a malevolent character.[q]These sci-fi scenarios are misleading in several ways. First, AI does not require human-likesentienceto be an existential risk. Modern AI programs are given specific goals and use learning and intelligence to achieve them. PhilosopherNick Bostromargued that if one givesalmost anygoal to a sufficiently powerful AI, it may choose to destroy humanity to achieve it (he used the example of apaperclip factory manager).[289]Stuart Russellgives the example of household robot that tries to find a way to kill its owner to prevent it from being unplugged, reasoning that "you can't fetch the coffee if you're dead."[290]In order to be safe for humanity, asuperintelligencewould have to be genuinelyalignedwith humanity's morality and values so that it is "fundamentally on our side".[291] Second,Yuval Noah Harariargues that AI does not require a robot body or physical control to pose an existential risk. The essential parts of civilization are not physical. 
Things likeideologies,law,government,moneyand theeconomyare built onlanguage; they exist because there are stories that billions of people believe. The current prevalence ofmisinformationsuggests that an AI could use language to convince people to believe anything, even to take actions that are destructive.[292] The opinions amongst experts and industry insiders are mixed, with sizable fractions both concerned and unconcerned by risk from eventual superintelligent AI.[293]Personalities such asStephen Hawking,Bill Gates, andElon Musk,[294]as well as AI pioneers such asYoshua Bengio,Stuart Russell,Demis Hassabis, andSam Altman, have expressed concerns about existential risk from AI. In May 2023,Geoffrey Hintonannounced his resignation from Google in order to be able to "freely speak out about the risks of AI" without "considering how this impacts Google".[295]He notably mentioned risks of anAI takeover,[296]and stressed that in order to avoid the worst outcomes, establishing safety guidelines will require cooperation among those competing in use of AI.[297] In 2023, many leading AI experts endorsedthe joint statementthat "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war".[298] Some other researchers were more optimistic. AI pioneerJürgen Schmidhuberdid not sign the joint statement, emphasising that in 95% of all cases, AI research is about making "human lives longer and healthier and easier."[299]While the tools that are now being used to improve lives can also be used by bad actors, "they can also be used against the bad actors."[300][301]Andrew Ngalso argued that "it's a mistake to fall for the doomsday hype on AI—and that regulators who do will only benefit vested interests."[302]Yann LeCun"scoffs at his peers' dystopian scenarios of supercharged misinformation and even, eventually, human extinction."[303]In the early 2010s, experts argued that the risks are too distant in the future to warrant research or that humans will be valuable from the perspective of a superintelligent machine.[304]However, after 2016, the study of current and future risks and possible solutions became a serious area of research.[305] Friendly AI are machines that have been designed from the beginning to minimize risks and to make choices that benefit humans.Eliezer Yudkowsky, who coined the term, argues that developing friendly AI should be a higher research priority: it may require a large investment and it must be completed before AI becomes an existential risk.[306] Machines with intelligence have the potential to use their intelligence to make ethical decisions. The field of machine ethics provides machines with ethical principles and procedures for resolving ethical dilemmas.[307]The field of machine ethics is also called computational morality,[307]and was founded at anAAAIsymposium in 2005.[308] Other approaches includeWendell Wallach's "artificial moral agents"[309]andStuart J. Russell'sthree principlesfor developing provably beneficial machines.[310] Active organizations in the AI open-source community includeHugging Face,[311]Google,[312]EleutherAIandMeta.[313]Various AI models, such asLlama 2,MistralorStable Diffusion, have been made open-weight,[314][315]meaning that their architecture and trained parameters (the "weights") are publicly available. 
Open-weight models can be freely fine-tuned, which allows companies to specialize them with their own data and for their own use case.[316] Open-weight models are useful for research and innovation but can also be misused. Since they can be fine-tuned, any built-in security measure, such as objecting to harmful requests, can be trained away until it becomes ineffective. Some researchers warn that future AI models may develop dangerous capabilities (such as the potential to drastically facilitate bioterrorism) and that once released on the Internet, they cannot be deleted everywhere if needed. They recommend pre-release audits and cost-benefit analyses.[317] Artificial intelligence projects can be guided by ethical considerations during the design, development, and implementation of an AI system. An AI framework such as the Care and Act Framework, developed by the Alan Turing Institute and based on the SUM values, outlines four main ethical dimensions.[318][319] Other developments in ethical frameworks include those decided upon during the Asilomar Conference, the Montreal Declaration for Responsible AI, and the IEEE's Ethics of Autonomous Systems initiative, among others;[320] however, these principles are not without criticism, especially with regard to the people chosen to contribute to these frameworks.[321] Promotion of the wellbeing of the people and communities that these technologies affect requires consideration of the social and ethical implications at all stages of AI system design, development and implementation, and collaboration between job roles such as data scientists, product managers, data engineers, domain experts, and delivery managers.[322] The UK AI Safety Institute released a testing toolset called 'Inspect' for AI safety evaluations in 2024. It is available under an MIT open-source licence, is freely available on GitHub, and can be improved with third-party packages. It can be used to evaluate AI models in a range of areas including core knowledge, ability to reason, and autonomous capabilities.[323] The regulation of artificial intelligence is the development of public sector policies and laws for promoting and regulating AI; it is therefore related to the broader regulation of algorithms.[324] The regulatory and policy landscape for AI is an emerging issue in jurisdictions globally.[325] According to the AI Index at Stanford, the annual number of AI-related laws passed in the 127 survey countries jumped from one passed in 2016 to 37 passed in 2022 alone.[326][327] Between 2016 and 2020, more than 30 countries adopted dedicated strategies for AI.[328] Most EU member states had released national AI strategies, as had Canada, China, India, Japan, Mauritius, the Russian Federation, Saudi Arabia, United Arab Emirates, U.S., and Vietnam.
Others were in the process of elaborating their own AI strategy, including Bangladesh, Malaysia and Tunisia.[328] The Global Partnership on Artificial Intelligence was launched in June 2020, stating a need for AI to be developed in accordance with human rights and democratic values, to ensure public confidence and trust in the technology.[328] Henry Kissinger, Eric Schmidt, and Daniel Huttenlocher published a joint statement in November 2021 calling for a government commission to regulate AI.[329] In 2023, OpenAI leaders published recommendations for the governance of superintelligence, which they believe may happen in less than 10 years.[330] In 2023, the United Nations also launched an advisory body to provide recommendations on AI governance; the body comprises technology company executives, government officials and academics.[331] In 2024, the Council of Europe created the first international legally binding treaty on AI, called the "Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law". It was adopted by the European Union, the United States, the United Kingdom, and other signatories.[332] In a 2022 Ipsos survey, attitudes towards AI varied greatly by country; 78% of Chinese citizens, but only 35% of Americans, agreed that "products and services using AI have more benefits than drawbacks".[326] A 2023 Reuters/Ipsos poll found that 61% of Americans agree, and 22% disagree, that AI poses risks to humanity.[333] In a 2023 Fox News poll, 35% of Americans thought it "very important", and an additional 41% thought it "somewhat important", for the federal government to regulate AI, versus 13% responding "not very important" and 8% responding "not at all important".[334][335] In November 2023, the first global AI Safety Summit was held in Bletchley Park in the UK to discuss the near and far term risks of AI and the possibility of mandatory and voluntary regulatory frameworks.[336] 28 countries including the United States, China, and the European Union issued a declaration at the start of the summit, calling for international co-operation to manage the challenges and risks of artificial intelligence.[337][338] In May 2024 at the AI Seoul Summit, 16 global AI tech companies agreed to safety commitments on the development of AI.[339][340] The study of mechanical or "formal" reasoning began with philosophers and mathematicians in antiquity. The study of logic led directly to Alan Turing's theory of computation, which suggested that a machine, by shuffling symbols as simple as "0" and "1", could simulate any conceivable form of mathematical reasoning.[342][343] This, along with concurrent discoveries in cybernetics, information theory and neurobiology, led researchers to consider the possibility of building an "electronic brain".[r] They developed several areas of research that would become part of AI,[345] such as McCulloch and Pitts' design for "artificial neurons" in 1943,[116] and Turing's influential 1950 paper 'Computing Machinery and Intelligence', which introduced the Turing test and showed that "machine intelligence" was plausible.[346][343] The field of AI research was founded at a workshop at Dartmouth College in 1956.[s][6] The attendees became the leaders of AI research in the 1960s.[t] They and their students produced programs that the press described as "astonishing":[u] computers were learning checkers strategies, solving word problems in algebra, proving logical theorems and speaking English.[v][7] Artificial intelligence laboratories were set up at a number of British and U.S.
universities in the latter 1950s and early 1960s.[343] Researchers in the 1960s and the 1970s were convinced that their methods would eventually succeed in creating a machine withgeneral intelligenceand considered this the goal of their field.[350]In 1965Herbert Simonpredicted, "machines will be capable, within twenty years, of doing any work a man can do".[351]In 1967Marvin Minskyagreed, writing that "within a generation ... the problem of creating 'artificial intelligence' will substantially be solved".[352]They had, however, underestimated the difficulty of the problem.[w]In 1974, both the U.S. and British governments cut off exploratory research in response to thecriticismofSir James Lighthill[354]and ongoing pressure from the U.S. Congress tofund more productive projects.[355]Minsky's andPapert's bookPerceptronswas understood as proving thatartificial neural networkswould never be useful for solving real-world tasks, thus discrediting the approach altogether.[356]The "AI winter", a period when obtaining funding for AI projects was difficult, followed.[9] In the early 1980s, AI research was revived by the commercial success ofexpert systems,[357]a form of AI program that simulated the knowledge and analytical skills of human experts. By 1985, the market for AI had reached over a billion dollars. At the same time, Japan'sfifth generation computerproject inspired the U.S. and British governments to restore funding foracademic research.[8]However, beginning with the collapse of theLisp Machinemarket in 1987, AI once again fell into disrepute, and a second, longer-lasting winter began.[10] Up to this point, most of AI's funding had gone to projects that used high-levelsymbolsto representmental objectslike plans, goals, beliefs, and known facts. In the 1980s, some researchers began to doubt that this approach would be able to imitate all the processes of human cognition, especiallyperception,robotics,learningandpattern recognition,[358]and began to look into "sub-symbolic" approaches.[359]Rodney Brooksrejected "representation" in general and focussed directly on engineering machines that move and survive.[x]Judea Pearl,Lofti Zadeh, and others developed methods that handled incomplete and uncertain information by making reasonable guesses rather than precise logic.[87][364]But the most important development was the revival of "connectionism", including neural network research, byGeoffrey Hintonand others.[365]In 1990,Yann LeCunsuccessfully showed thatconvolutional neural networkscan recognize handwritten digits, the first of many successful applications of neural networks.[366] AI gradually restored its reputation in the late 1990s and early 21st century by exploiting formal mathematical methods and by finding specific solutions to specific problems. This "narrow" and "formal" focus allowed researchers to produce verifiable results and collaborate with other fields (such asstatistics,economicsandmathematics).[367]By 2000, solutions developed by AI researchers were being widely used, although in the 1990s they were rarely described as "artificial intelligence" (a tendency known as theAI effect).[368]However, several academic researchers became concerned that AI was no longer pursuing its original goal of creating versatile, fully intelligent machines. 
Beginning around 2002, they founded the subfield of artificial general intelligence (or "AGI"), which had several well-funded institutions by the 2010s.[68] Deep learning began to dominate industry benchmarks in 2012 and was adopted throughout the field.[11] For many specific tasks, other methods were abandoned.[y] Deep learning's success was based on both hardware improvements (faster computers,[370] graphics processing units, cloud computing[371]) and access to large amounts of data[372] (including curated datasets,[371] such as ImageNet). Deep learning's success led to an enormous increase in interest and funding in AI.[z] The amount of machine learning research (measured by total publications) increased by 50% in the years 2015–2019.[328] In 2016, issues of fairness and the misuse of technology were catapulted into center stage at machine learning conferences, publications vastly increased, funding became available, and many researchers re-focussed their careers on these issues. The alignment problem became a serious field of academic study.[305] In the late 2010s and early 2020s, AGI companies began to deliver programs that created enormous interest. In 2015, AlphaGo, developed by DeepMind, beat the world champion Go player. The program was taught only the game's rules and developed its strategy by itself. GPT-3 is a large language model that was released in 2020 by OpenAI and is capable of generating high-quality human-like text.[373] ChatGPT, launched on November 30, 2022, became the fastest-growing consumer software application in history, gaining over 100 million users in two months.[374] It marked what is widely regarded as AI's breakout year, bringing it into the public consciousness.[375] These programs, and others, inspired an aggressive AI boom, where large companies began investing billions of dollars in AI research. According to AI Impacts, about $50 billion annually was invested in "AI" around 2022 in the U.S. alone and about 20% of the new U.S. Computer Science PhD graduates have specialized in "AI".[376] About 800,000 "AI"-related U.S. job openings existed in 2022.[377] According to PitchBook research, 22% of newly funded startups in 2024 claimed to be AI companies.[378] Philosophical debates have historically sought to determine the nature of intelligence and how to make intelligent machines.[379] Another major focus has been whether machines can be conscious, and the associated ethical implications.[380] Many other topics in philosophy are relevant to AI, such as epistemology and free will.[381] Rapid advancements have intensified public discussions on the philosophy and ethics of AI.[380] Alan Turing wrote in 1950 "I propose to consider the question 'can machines think'?"[382] He advised changing the question from whether a machine "thinks", to "whether or not it is possible for machinery to show intelligent behaviour".[382] He devised the Turing test, which measures the ability of a machine to simulate human conversation.[346] Since we can only observe the behavior of the machine, it does not matter if it is "actually" thinking or literally has a "mind". Turing notes that we cannot determine these things about other people but "it is usual to have a polite convention that everyone thinks."[383] Russell and Norvig agree with Turing that intelligence must be defined in terms of external behavior, not internal structure.[1] However, they are critical that the test requires the machine to imitate humans.
"Aeronautical engineeringtexts", they wrote, "do not define the goal of their field as making 'machines that fly so exactly likepigeonsthat they can fool other pigeons.'"[385]AI founderJohn McCarthyagreed, writing that "Artificial intelligence is not, by definition, simulation of human intelligence".[386] McCarthy defines intelligence as "the computational part of the ability to achieve goals in the world".[387]Another AI founder,Marvin Minsky, similarly describes it as "the ability to solve hard problems".[388]The leading AI textbook defines it as the study of agents that perceive their environment and take actions that maximize their chances of achieving defined goals.[1]These definitions view intelligence in terms of well-defined problems with well-defined solutions, where both the difficulty of the problem and the performance of the program are direct measures of the "intelligence" of the machine—and no other philosophical discussion is required, or may not even be possible. Another definition has been adopted by Google,[389]a major practitioner in the field of AI. This definition stipulates the ability of systems to synthesize information as the manifestation of intelligence, similar to the way it is defined in biological intelligence. Some authors have suggested in practice, that the definition of AI is vague and difficult to define, with contention as to whether classical algorithms should be categorised as AI,[390]with many companies during the early 2020s AI boom using the term as a marketingbuzzword, often even if they did "not actually use AI in a material way".[391] No established unifying theory orparadigmhas guided AI research for most of its history.[aa]The unprecedented success of statistical machine learning in the 2010s eclipsed all other approaches (so much so that some sources, especially in the business world, use the term "artificial intelligence" to mean "machine learning with neural networks"). This approach is mostlysub-symbolic,softandnarrow. Critics argue that these questions may have to be revisited by future generations of AI researchers. Symbolic AI(or "GOFAI")[393]simulated the high-level conscious reasoning that people use when they solve puzzles, express legal reasoning and do mathematics. They were highly successful at "intelligent" tasks such as algebra or IQ tests. In the 1960s, Newell and Simon proposed thephysical symbol systems hypothesis: "A physical symbol system has the necessary and sufficient means of general intelligent action."[394] However, the symbolic approach failed on many tasks that humans solve easily, such as learning, recognizing an object or commonsense reasoning.Moravec's paradoxis the discovery that high-level "intelligent" tasks were easy for AI, but low level "instinctive" tasks were extremely difficult.[395]PhilosopherHubert Dreyfushadarguedsince the 1960s that human expertise depends on unconscious instinct rather than conscious symbol manipulation, and on having a "feel" for the situation, rather than explicit symbolic knowledge.[396]Although his arguments had been ridiculed and ignored when they were first presented, eventually, AI research came to agree with him.[ab][16] The issue is not resolved:sub-symbolicreasoning can make many of the same inscrutable mistakes that human intuition does, such asalgorithmic bias. 
Critics such asNoam Chomskyargue continuing research into symbolic AI will still be necessary to attain general intelligence,[398][399]in part because sub-symbolic AI is a move away fromexplainable AI: it can be difficult or impossible to understand why a modern statistical AI program made a particular decision. The emerging field ofneuro-symbolic artificial intelligenceattempts to bridge the two approaches. "Neats" hope that intelligent behavior is described using simple, elegant principles (such aslogic,optimization, orneural networks). "Scruffies" expect that it necessarily requires solving a large number of unrelated problems. Neats defend their programs with theoretical rigor, scruffies rely mainly on incremental testing to see if they work. This issue was actively discussed in the 1970s and 1980s,[400]but eventually was seen as irrelevant. Modern AI has elements of both. Finding a provably correct or optimal solution isintractablefor many important problems.[15]Soft computing is a set of techniques, includinggenetic algorithms,fuzzy logicand neural networks, that are tolerant of imprecision, uncertainty, partial truth and approximation. Soft computing was introduced in the late 1980s and most successful AI programs in the 21st century are examples of soft computing with neural networks. AI researchers are divided as to whether to pursue the goals of artificial general intelligence andsuperintelligencedirectly or to solve as many specific problems as possible (narrow AI) in hopes these solutions will lead indirectly to the field's long-term goals.[401][402]General intelligence is difficult to define and difficult to measure, and modern AI has had more verifiable successes by focusing on specific problems with specific solutions. The sub-field of artificial general intelligence studies this area exclusively. Thephilosophy of minddoes not know whether a machine can have amind,consciousnessandmental states, in the same sense that human beings do. This issue considers the internal experiences of the machine, rather than its external behavior. Mainstream AI research considers this issue irrelevant because it does not affect the goals of the field: to build machines that can solve problems using intelligence.RussellandNorvigadd that "[t]he additional project of making a machine conscious in exactly the way humans are is not one that we are equipped to take on."[403]However, the question has become central to the philosophy of mind. It is also typically the central question at issue inartificial intelligence in fiction. David Chalmersidentified two problems in understanding the mind, which he named the "hard" and "easy" problems of consciousness.[404]The easy problem is understanding how the brain processes signals, makes plans and controls behavior. The hard problem is explaining how thisfeelsor why it should feel like anything at all, assuming we are right in thinking that it truly does feel like something (Dennett's consciousness illusionism says this is an illusion). While humaninformation processingis easy to explain, humansubjective experienceis difficult to explain. For example, it is easy to imagine a color-blind person who has learned to identify which objects in their field of view are red, but it is not clear what would be required for the person toknow what red looks like.[405] Computationalism is the position in thephilosophy of mindthat the human mind is an information processing system and that thinking is a form of computing. 
Computationalism argues that the relationship between mind and body is similar or identical to the relationship between software and hardware and thus may be a solution to the mind–body problem. This philosophical position was inspired by the work of AI researchers and cognitive scientists in the 1960s and was originally proposed by philosophers Jerry Fodor and Hilary Putnam.[406] Philosopher John Searle characterized this position as "strong AI": "The appropriately programmed computer with the right inputs and outputs would thereby have a mind in exactly the same sense human beings have minds."[ac] Searle challenges this claim with his Chinese room argument, which attempts to show that even a computer capable of perfectly simulating human behavior would not have a mind.[410] It is difficult or impossible to reliably evaluate whether an advanced AI is sentient (has the ability to feel), and if so, to what degree.[411] But if there is a significant chance that a given machine can feel and suffer, then it may be entitled to certain rights or welfare protection measures, similarly to animals.[412][413] Sapience (a set of capacities related to high intelligence, such as discernment or self-awareness) may provide another moral basis for AI rights.[412] Robot rights are also sometimes proposed as a practical way to integrate autonomous agents into society.[414] In 2017, the European Union considered granting "electronic personhood" to some of the most capable AI systems. Similarly to the legal status of companies, it would have conferred rights but also responsibilities.[415] Critics argued in 2018 that granting rights to AI systems would downplay the importance of human rights, and that legislation should focus on user needs rather than speculative futuristic scenarios. They also noted that robots lacked the autonomy to take part in society on their own.[416][417] Progress in AI increased interest in the topic. Proponents of AI welfare and rights often argue that AI sentience, if it emerges, would be particularly easy to deny. They warn that this may be a moral blind spot analogous to slavery or factory farming, which could lead to large-scale suffering if sentient AI is created and carelessly exploited.[413][412] A superintelligence is a hypothetical agent that would possess intelligence far surpassing that of the brightest and most gifted human mind.[402] If research into artificial general intelligence produced sufficiently intelligent software, it might be able to reprogram and improve itself. The improved software would be even better at improving itself, leading to what I. J. Good called an "intelligence explosion" and Vernor Vinge called a "singularity".[418] However, technologies cannot improve exponentially indefinitely, and typically follow an S-shaped curve, slowing when they reach the physical limits of what the technology can do.[419] Robot designer Hans Moravec, cyberneticist Kevin Warwick and inventor Ray Kurzweil have predicted that humans and machines may merge in the future into cyborgs that are more capable and powerful than either.
This idea, called transhumanism, has roots in the writings of Aldous Huxley and Robert Ettinger.[420] Edward Fredkin argues that "artificial intelligence is the next step in evolution", an idea first proposed by Samuel Butler's "Darwin among the Machines" as far back as 1863, and expanded upon by George Dyson in his 1998 book Darwin Among the Machines: The Evolution of Global Intelligence.[421] Arguments for decomputing have been raised by Dan McQuillan (Resisting AI: An Anti-fascist Approach to Artificial Intelligence, 2022), meaning an opposition to the sweeping application and expansion of artificial intelligence. Similar to degrowth, the approach criticizes AI as an outgrowth of the systemic issues and capitalist world we live in. It argues that a different future is possible, in which distance between people is reduced rather than increased through AI intermediaries.[422] Thought-capable artificial beings have appeared as storytelling devices since antiquity,[423] and have been a persistent theme in science fiction.[424] A common trope in these works began with Mary Shelley's Frankenstein, where a human creation becomes a threat to its masters. This includes such works as Arthur C. Clarke's and Stanley Kubrick's 2001: A Space Odyssey (both 1968), with HAL 9000, the murderous computer in charge of the Discovery One spaceship, as well as The Terminator (1984) and The Matrix (1999). In contrast, the rare loyal robots such as Gort from The Day the Earth Stood Still (1951) and Bishop from Aliens (1986) are less prominent in popular culture.[425] Isaac Asimov introduced the Three Laws of Robotics in many stories, most notably with the "Multivac" super-intelligent computer. Asimov's laws are often brought up during lay discussions of machine ethics;[426] while almost all artificial intelligence researchers are familiar with Asimov's laws through popular culture, they generally consider the laws useless for many reasons, one of which is their ambiguity.[427] Several works use AI to force us to confront the fundamental question of what makes us human, showing us artificial beings that have the ability to feel, and thus to suffer. This appears in Karel Čapek's R.U.R., the films A.I. Artificial Intelligence and Ex Machina, as well as the novel Do Androids Dream of Electric Sheep?, by Philip K. Dick. Dick considers the idea that our understanding of human subjectivity is altered by technology created with artificial intelligence.[428]
https://en.wikipedia.org/wiki/Artificial_intelligence
Shor's algorithm is a quantum algorithm for finding the prime factors of an integer. It was developed in 1994 by the American mathematician Peter Shor.[1][2] It is one of the few known quantum algorithms with compelling potential applications and strong evidence of superpolynomial speedup compared to the best known classical (non-quantum) algorithms.[3] On the other hand, factoring numbers of practical significance requires far more qubits than available in the near future.[4] Another concern is that noise in quantum circuits may undermine results,[5] requiring additional qubits for quantum error correction. Shor proposed multiple similar algorithms for solving the factoring problem, the discrete logarithm problem, and the period-finding problem. "Shor's algorithm" usually refers to the factoring algorithm, but may refer to any of the three algorithms. The discrete logarithm algorithm and the factoring algorithm are instances of the period-finding algorithm, and all three are instances of the hidden subgroup problem. On a quantum computer, to factor an integer $N$, Shor's algorithm runs in polynomial time, meaning the time taken is polynomial in $\log N$.[6] It takes quantum gates of order $O\!\left((\log N)^{2}(\log \log N)(\log \log \log N)\right)$ using fast multiplication,[7] or even $O\!\left((\log N)^{2}(\log \log N)\right)$ utilizing the asymptotically fastest multiplication algorithm currently known due to Harvey and van der Hoeven,[8] thus demonstrating that the integer factorization problem can be efficiently solved on a quantum computer and is consequently in the complexity class BQP. This is significantly faster than the most efficient known classical factoring algorithm, the general number field sieve, which works in sub-exponential time: $O\!\left(e^{1.9(\log N)^{1/3}(\log \log N)^{2/3}}\right)$.[9] If a quantum computer with a sufficient number of qubits could operate without succumbing to quantum noise and other quantum-decoherence phenomena, then Shor's algorithm could be used to break public-key cryptography schemes such as RSA. RSA can be broken if factoring large integers is computationally feasible. As far as is known, this is not possible using classical (non-quantum) computers; no classical algorithm is known that can factor integers in polynomial time. However, Shor's algorithm shows that factoring integers is efficient on an ideal quantum computer, so it may be feasible to defeat RSA by constructing a large quantum computer. It was also a powerful motivator for the design and construction of quantum computers, and for the study of new quantum-computer algorithms. It has also facilitated research on new cryptosystems that are secure from quantum computers, collectively called post-quantum cryptography. Given the high error rates of contemporary quantum computers and too few qubits to use quantum error correction, laboratory demonstrations obtain correct results only in a fraction of attempts.
In 2001, Shor's algorithm was demonstrated by a group atIBM, who factored15{\displaystyle 15}into3×5{\displaystyle 3\times 5}, using anNMR implementationof a quantum computer with seven qubits.[11]After IBM's implementation, two independent groups implemented Shor's algorithm usingphotonicqubits, emphasizing that multi-qubitentanglementwas observed when running the Shor's algorithm circuits.[12][13]In 2012, the factorization of15{\displaystyle 15}was performed with solid-state qubits.[14]Later, in 2012, the factorization of21{\displaystyle 21}was achieved.[15]In 2016, the factorization of15{\displaystyle 15}was performed again using trapped-ion qubits with a recycling technique.[16]In 2019, an attempt was made to factor the number35{\displaystyle 35}using Shor's algorithm on an IBMQ System One, but the algorithm failed because of accumulating errors.[17]However, all these demonstrations have compiled the algorithm by making use of prior knowledge of the answer, and some have even oversimplified the algorithm in a way that makes it equivalent to coin flipping.[18]Furthermore, attempts using quantum computers with other algorithms have been made.[19]However, these algorithms are similar to classical brute-force checking of factors, so unlike Shor's algorithm, they are not expected to ever perform better than classical factoring algorithms.[20] Theoretical analyses of Shor's algorithm assume a quantum computer free of noise and errors. However, near-term practical implementations will have to deal with such undesired phenomena (when more qubits are available,quantum error correctioncan help). In 2023,Jin-Yi Caishowed that in the presence of noise, Shor's algorithm failsasymptotically almost surelyfor large semiprimes that are products of two primes inOEISsequence A073024.[5]These primesp{\displaystyle p}have the property thatp−1{\displaystyle p-1}has a prime factor larger thanp2/3{\displaystyle p^{2/3}}, and have a positive density in the set of all primes. Hence error correction will be needed to be able to factor all numbers with Shor's algorithm. The problem that we are trying to solve is:given an oddcomposite numberN{\displaystyle N}, find itsinteger factors. To achieve this, Shor's algorithm consists of two parts: A complete factoring algorithm is possible if we're able to efficiently factor arbitraryN{\displaystyle N}into just two integersp{\displaystyle p}andq{\displaystyle q}greater than 1, since if eitherp{\displaystyle p}orq{\displaystyle q}are not prime, then the factoring algorithm can in turn be run on those until only primes remain. A basic observation is that, usingEuclid's algorithm, we can always compute theGCDbetween two integers efficiently. In particular, this means we can check efficiently whetherN{\displaystyle N}is even, in which case 2 is trivially a factor. Let us thus assume thatN{\displaystyle N}is odd for the remainder of this discussion. Afterwards, we can use efficient classical algorithms to check whetherN{\displaystyle N}is aprime power.[21]For prime powers, efficient classical factorization algorithms exist,[22]hence the rest of the quantum algorithm may assume thatN{\displaystyle N}is not a prime power. If those easy cases do not produce a nontrivial factor ofN{\displaystyle N}, the algorithm proceeds to handle the remaining case. We pick a random integer2≤a<N{\displaystyle 2\leq a<N}. 
A possible nontrivial divisor ofN{\displaystyle N}can be found by computinggcd(a,N){\displaystyle \gcd(a,N)}, which can be done classically and efficiently using theEuclidean algorithm. If this produces a nontrivial factor (meaninggcd(a,N)≠1{\displaystyle \gcd(a,N)\neq 1}), the algorithm is finished, and the other nontrivial factor isN/gcd(a,N){\displaystyle N/\gcd(a,N)}. If a nontrivial factor was not identified, then this means thatN{\displaystyle N}and the choice ofa{\displaystyle a}arecoprime, soa{\displaystyle a}is contained in themultiplicative group of integers moduloN{\displaystyle N}, having amultiplicative inversemoduloN{\displaystyle N}. Thus,a{\displaystyle a}has amultiplicative orderr{\displaystyle r}moduloN{\displaystyle N}, meaning andr{\displaystyle r}is the smallest positive integer satisfying this congruence. The quantum subroutine findsr{\displaystyle r}. It can be seen from the congruence thatN{\displaystyle N}dividesar−1{\displaystyle a^{r}-1}, writtenN∣ar−1{\displaystyle N\mid a^{r}-1}. This can be factored usingdifference of squares:N∣(ar/2−1)(ar/2+1).{\displaystyle N\mid (a^{r/2}-1)(a^{r/2}+1).}Since we have factored the expression in this way, the algorithm doesn't work for oddr{\displaystyle r}(becausear/2{\displaystyle a^{r/2}}must be an integer), meaning that the algorithm would have to restart with a newa{\displaystyle a}. Hereafter we can therefore assume thatr{\displaystyle r}is even. It cannot be the case thatN∣ar/2−1{\displaystyle N\mid a^{r/2}-1}, since this would implyar/2≡1modN{\displaystyle a^{r/2}\equiv 1{\bmod {N}}}, which would contradictorily imply thatr/2{\displaystyle r/2}would be the order ofa{\displaystyle a}, which was alreadyr{\displaystyle r}. At this point, it may or may not be the case thatN∣ar/2+1{\displaystyle N\mid a^{r/2}+1}. IfN{\displaystyle N}does not dividear/2+1{\displaystyle a^{r/2}+1}, then this means that we are able to find a nontrivial factor ofN{\displaystyle N}. We computed=gcd(N,ar/2−1).{\displaystyle d=\gcd(N,a^{r/2}-1).}Ifd=1{\displaystyle d=1}, thenN∣ar/2+1{\displaystyle N\mid a^{r/2}+1}was true, and a nontrivial factor ofN{\displaystyle N}cannot be achieved froma{\displaystyle a}, and the algorithm must restart with a newa{\displaystyle a}. Otherwise, we have found a nontrivial factor ofN{\displaystyle N}, with the other beingN/d{\displaystyle N/d}, and the algorithm is finished. For this step, it is also equivalent to computegcd(N,ar/2+1){\displaystyle \gcd(N,a^{r/2}+1)}; it will produce a nontrivial factor ifgcd(N,ar/2−1){\displaystyle \gcd(N,a^{r/2}-1)}is nontrivial, and will not if it's trivial (whereN∣ar/2+1{\displaystyle N\mid a^{r/2}+1}). The algorithm restated shortly follows: letN{\displaystyle N}be odd, and not a prime power. We want to output two nontrivial factors ofN{\displaystyle N}. It has been shown that this will be likely to succeed after a few runs.[2]In practice, a single call to the quantum order-finding subroutine is enough to completely factorN{\displaystyle N}with very high probability of success if one uses a more advanced reduction.[23] The goal of the quantum subroutine of Shor's algorithm is, givencoprime integersN{\displaystyle N}and1<a<N{\displaystyle 1<a<N}, to find theorderr{\displaystyle r}ofa{\displaystyle a}moduloN{\displaystyle N}, which is the smallest positive integer such thatar≡1(modN){\displaystyle a^{r}\equiv 1{\pmod {N}}}. To achieve this, Shor's algorithm uses a quantum circuit involving two registers. 
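Before turning to that circuit, the classical reduction just described can be sketched in a few lines of Python. In this minimal sketch the quantum order-finding subroutine is replaced by a brute-force search, and the toy modulus and helper names are illustrative assumptions rather than anything specified in the article.

```python
from math import gcd
import random

def find_order_bruteforce(a, N):
    # Stand-in for the quantum order-finding subroutine:
    # the smallest r with a^r = 1 (mod N), found by trial multiplication.
    r, value = 1, a % N
    while value != 1:
        value = (value * a) % N
        r += 1
    return r

def shor_classical_part(N):
    # Assumes N is odd, composite and not a prime power; those easy cases
    # are handled classically before this point, as explained above.
    while True:
        a = random.randrange(2, N)
        d = gcd(a, N)
        if d > 1:
            return d, N // d                 # lucky guess: a already shares a factor with N
        r = find_order_bruteforce(a, N)      # the quantum step in the real algorithm
        if r % 2 == 1:
            continue                         # odd order: restart with a new a
        d = gcd(pow(a, r // 2, N) - 1, N)    # gcd(N, a^(r/2) - 1)
        if 1 < d < N:
            return d, N // d                 # nontrivial factor found
        # otherwise N divides a^(r/2) + 1: restart with a new a

print(shor_classical_part(15))               # e.g. (3, 5)
```

Here `pow(a, r // 2, N)` is modular exponentiation by repeated squaring; building the same operation out of reversible gates is what dominates the cost of the real quantum circuit, as discussed later. The two quantum registers that replace the brute-force order search are described next.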
The second register usesn{\displaystyle n}qubits, wheren{\displaystyle n}is the smallest integer such thatN≤2n{\displaystyle N\leq 2^{n}}, i.e.,n=⌈log2⁡N⌉{\displaystyle n=\left\lceil {\log _{2}N}\right\rceil }. The size of the first register determines how accurate of an approximation the circuit produces. It can be shown that using2n{\displaystyle 2n}qubits gives sufficient accuracy to findr{\displaystyle r}. The exact quantum circuit depends on the parametersa{\displaystyle a}andN{\displaystyle N}, which define the problem. The following description of the algorithm usesbra–ket notationto denote quantum states, and⊗{\displaystyle \otimes }to denote thetensor product, rather thanlogical AND. The algorithm consists of two main steps: The connection with quantum phase estimation was not discussed in the original formulation of Shor's algorithm,[2]but was later proposed by Kitaev.[24] In general thequantum phase estimation algorithm, for any unitaryU{\displaystyle U}and eigenstate|ψ⟩{\displaystyle |\psi \rangle }such thatU|ψ⟩=e2πiθ|ψ⟩{\displaystyle U|\psi \rangle =e^{2\pi i\theta }|\psi \rangle }, sends input states|0⟩|ψ⟩{\displaystyle |0\rangle |\psi \rangle }to output states close to|ϕ⟩|ψ⟩{\displaystyle |\phi \rangle |\psi \rangle }, whereϕ{\displaystyle \phi }is a superposition of integers close to22nθ{\displaystyle 2^{2n}\theta }. In other words, it sends each eigenstate|ψj⟩{\displaystyle |\psi _{j}\rangle }ofU{\displaystyle U}to a state containing information close to the associated eigenvalue. For the purposes of quantum order-finding, we employ this strategy using the unitary defined by the actionU|k⟩={|ak(modN)⟩0≤k<N,|k⟩N≤k<2n.{\displaystyle U|k\rangle ={\begin{cases}|ak{\pmod {N}}\rangle &0\leq k<N,\\|k\rangle &N\leq k<2^{n}.\end{cases}}}The action ofU{\displaystyle U}on states|k⟩{\displaystyle |k\rangle }withN≤k<2n{\displaystyle N\leq k<2^{n}}is not crucial to the functioning of the algorithm, but needs to be included to ensure that the overall transformation is a well-defined quantum gate. Implementing the circuit for quantum phase estimation withU{\displaystyle U}requires being able to efficiently implement the gatesU2j{\displaystyle U^{2^{j}}}. This can be accomplished viamodular exponentiation, which is the slowest part of the algorithm. The gate thus defined satisfiesUr=I{\displaystyle U^{r}=I}, which immediately implies that its eigenvalues are ther{\displaystyle r}-throots of unityωrk=e2πik/r{\displaystyle \omega _{r}^{k}=e^{2\pi ik/r}}. Furthermore, each eigenvalueωrj{\displaystyle \omega _{r}^{j}}has an eigenvector of the form|ψj⟩=r−1/2∑k=0r−1ωr−kj|ak⟩{\textstyle |\psi _{j}\rangle =r^{-1/2}\sum _{k=0}^{r-1}\omega _{r}^{-kj}|a^{k}\rangle }, and these eigenvectors are such that1r∑j=0r−1|ψj⟩=1r∑j=0r−1∑k=0r−1ωrjk|ak⟩=|1⟩+1r∑k=1r−1(∑j=0r−1ωrjk)|ak⟩=|1⟩,{\displaystyle {\begin{aligned}{\frac {1}{\sqrt {r}}}\sum _{j=0}^{r-1}|\psi _{j}\rangle &={\frac {1}{r}}\sum _{j=0}^{r-1}\sum _{k=0}^{r-1}\omega _{r}^{jk}|a^{k}\rangle \\&=|1\rangle +{\frac {1}{r}}\sum _{k=1}^{r-1}\left(\sum _{j=0}^{r-1}\omega _{r}^{jk}\right)|a^{k}\rangle =|1\rangle ,\end{aligned}}}where the last identity follows from thegeometric seriesformula, which implies∑j=0r−1ωrjk=0{\textstyle \sum _{j=0}^{r-1}\omega _{r}^{jk}=0}. Usingquantum phase estimationon an input state|0⟩⊗2n|ψj⟩{\displaystyle |0\rangle ^{\otimes 2n}|\psi _{j}\rangle }would then return the integer22nj/r{\displaystyle 2^{2n}j/r}with high probability. 
More precisely, the quantum phase estimation circuit sends|0⟩⊗2n|ψj⟩{\displaystyle |0\rangle ^{\otimes 2n}|\psi _{j}\rangle }to|ϕj⟩|ψj⟩{\displaystyle |\phi _{j}\rangle |\psi _{j}\rangle }such that the resulting probability distributionpk≡|⟨k|ϕj⟩|2{\displaystyle p_{k}\equiv |\langle k|\phi _{j}\rangle |^{2}}is peaked aroundk=22nj/r{\displaystyle k=2^{2n}j/r}, withp22nj/r≥4/π2≈0.4053{\displaystyle p_{2^{2n}j/r}\geq 4/\pi ^{2}\approx 0.4053}. This probability can be made arbitrarily close to 1 using extra qubits. Applying the above reasoning to the input|0⟩⊗2n|1⟩{\displaystyle |0\rangle ^{\otimes 2n}|1\rangle }, quantum phase estimation thus results in the evolution|0⟩⊗2n|1⟩=1r∑j=0r−1|0⟩⊗2n|ψj⟩→1r∑j=0r−1|ϕj⟩|ψj⟩.{\displaystyle |0\rangle ^{\otimes 2n}|1\rangle ={\frac {1}{\sqrt {r}}}\sum _{j=0}^{r-1}|0\rangle ^{\otimes 2n}|\psi _{j}\rangle \to {\frac {1}{\sqrt {r}}}\sum _{j=0}^{r-1}|\phi _{j}\rangle |\psi _{j}\rangle .}Measuring the first register, we now have a balanced probability1/r{\displaystyle 1/r}to find each|ϕj⟩{\displaystyle |\phi _{j}\rangle }, each one giving an integer approximation to22nj/r{\displaystyle 2^{2n}j/r}, which can be divided by22n{\displaystyle 2^{2n}}to get a decimal approximation forj/r{\displaystyle j/r}. Then, we apply thecontinued-fractionalgorithm to find integersb{\displaystyle b}andc{\displaystyle c}, whereb/c{\displaystyle b/c}gives the best fraction approximation for the approximation measured from the circuit, forb,c<N{\displaystyle b,c<N}andcoprimeb{\displaystyle b}andc{\displaystyle c}. The number of qubits in the first register,2n{\displaystyle 2n}, which determines the accuracy of the approximation, guarantees thatbc=jr,{\displaystyle {\frac {b}{c}}={\frac {j}{r}},}given the best approximation from the superposition of|ϕj⟩{\displaystyle |\phi _{j}\rangle }was measured[2](which can be made arbitrarily likely by using extra bits and truncating the output). However, whileb{\displaystyle b}andc{\displaystyle c}are coprime, it may be the case thatj{\displaystyle j}andr{\displaystyle r}are not coprime. Because of that,b{\displaystyle b}andc{\displaystyle c}may have lost some factors that were inj{\displaystyle j}andr{\displaystyle r}. This can be remedied by rerunning the quantum order-finding subroutine an arbitrary number of times, to produce a list of fraction approximationsb1c1,b2c2,…,bscs,{\displaystyle {\frac {b_{1}}{c_{1}}},{\frac {b_{2}}{c_{2}}},\ldots ,{\frac {b_{s}}{c_{s}}},}wheres{\displaystyle s}is the number of times the subroutine was run. Eachck{\displaystyle c_{k}}will have different factors taken out of it because the circuit will (likely) have measured multiple different possible values ofj{\displaystyle j}. To recover the actualr{\displaystyle r}value, we can take theleast common multipleof eachck{\displaystyle c_{k}}:lcm⁡(c1,c2,…,cs).{\displaystyle \operatorname {lcm} (c_{1},c_{2},\ldots ,c_{s}).}The least common multiple will be the orderr{\displaystyle r}of the original integera{\displaystyle a}with high probability. 
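The post-processing just described can also be sketched classically. In the following minimal sketch (toy values for N, a and r; each quantum measurement is replaced by a simulated integer near 2^(2n)·j/r), `Fraction.limit_denominator` plays the role of the continued-fraction step, and the least common multiple of the recovered denominators yields the order.

```python
from fractions import Fraction
from math import lcm
import random

N = 91                  # toy modulus (7 * 13)
a = 3                   # base coprime to N; its order modulo N is r = 6 (3^6 = 729 = 1 mod 91)
r = 6
n = N.bit_length()      # as in the text, the first register holds 2n qubits

def simulated_measurement():
    # Stand-in for one run of the quantum subroutine:
    # an integer close to 2^(2n) * j / r for a uniformly random 0 <= j < r.
    j = random.randrange(r)
    return round((2 ** (2 * n)) * j / r)

denominators = []
for _ in range(10):
    k = simulated_measurement()
    # Continued-fraction step: best rational approximation with denominator < N.
    approx = Fraction(k, 2 ** (2 * n)).limit_denominator(N - 1)
    denominators.append(approx.denominator)  # equals r / gcd(j, r)

recovered = lcm(*denominators)
print(recovered)        # 6 with high probability; rerun if a factor of r was missed
```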
In practice, a single run of the quantum order-finding subroutine is in general enough if more advanced post-processing is used.[25] Phase estimation requires choosing the size of the first register to determine the accuracy of the algorithm, and for the quantum subroutine of Shor's algorithm, $2n$ qubits is sufficient to guarantee that the optimal bitstring measured from phase estimation (meaning the $|k\rangle$ where $k/2^{2n}$ is the most accurate approximation of the phase from phase estimation) will allow the actual value of $r$ to be recovered. Each $|\phi_{j}\rangle$ before measurement in Shor's algorithm represents a superposition of integers approximating $2^{2n}j/r$. Let $|k\rangle$ represent the most optimal integer in $|\phi_{j}\rangle$. The following theorem guarantees that the continued fractions algorithm will recover $j/r$ from $k/2^{2n}$: if $j$ and $r$ are $n$-bit integers and $\left|{\tfrac{j}{r}}-\phi\right|\leq {\tfrac{1}{2r^{2}}}$, then the continued fractions algorithm run on $\phi$ will recover both $j/\gcd(j,r)$ and $r/\gcd(j,r)$.[3] As $k$ is the optimal bitstring from phase estimation, $k/2^{2n}$ is accurate to $j/r$ within $2n$ bits. Thus, $\left|{\tfrac{j}{r}}-{\tfrac{k}{2^{2n}}}\right|\leq {\tfrac{1}{2^{2n+1}}}\leq {\tfrac{1}{2N^{2}}}\leq {\tfrac{1}{2r^{2}}}$, which implies that the continued fractions algorithm will recover $j$ and $r$ (or both with their greatest common divisor taken out). The runtime bottleneck of Shor's algorithm is quantum modular exponentiation, which is by far slower than the quantum Fourier transform and classical pre-/post-processing. There are several approaches to constructing and optimizing circuits for modular exponentiation. The simplest and (currently) most practical approach is to mimic conventional arithmetic circuits with reversible gates, starting with ripple-carry adders. Knowing the base and the modulus of exponentiation facilitates further optimizations.[26][27] Reversible circuits typically use on the order of $n^{3}$ gates for $n$ qubits. Alternative techniques asymptotically improve gate counts by using quantum Fourier transforms, but are not competitive with fewer than 600 qubits owing to high constants. Shor's algorithms for the discrete log and the order-finding problems are instances of an algorithm solving the period-finding problem.[citation needed] All three are instances of the hidden subgroup problem. Given a group $G$ with order $p$ and generator $g\in G$, suppose we know that $x=g^{r}\in G$ for some $r\in \mathbb{Z}_{p}$, and we wish to compute $r$, which is the discrete logarithm: $r=\log_{g}(x)$. Consider the abelian group $\mathbb{Z}_{p}\times \mathbb{Z}_{p}$, where each factor corresponds to modular addition of values. Now, consider the function $f(a,b)=g^{a}x^{-b}$. This gives us an abelian hidden subgroup problem, where $f$ corresponds to a group homomorphism.
The kernel corresponds to the multiples of $(r,1)$. So, if we can find the kernel, we can find $r$. A quantum algorithm for solving this problem exists. This algorithm is, like the factor-finding algorithm, due to Peter Shor, and both are implemented by creating a superposition through using Hadamard gates, followed by implementing $f$ as a quantum transform, followed finally by a quantum Fourier transform.[3] Due to this, the quantum algorithm for computing the discrete logarithm is also occasionally referred to as "Shor's Algorithm." The order-finding problem can also be viewed as a hidden subgroup problem.[3] To see this, consider the group of integers under addition, and for a given $a\in \mathbb{Z}$ such that $a^{r}=1$, the function $f(x)=a^{x}$, which satisfies $f(x+r)=f(x)$, so the hidden subgroup is the set of multiples of $r$. For any finite abelian group $G$, a quantum algorithm exists for solving the hidden subgroup for $G$ in polynomial time.[3]
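As a small numerical check of the kernel statement above, the following sketch uses a hand-picked toy prime, generator and exponent (all illustrative assumptions) to verify that f(a, b) = g^a · x^(−b) is unchanged when (a, b) is shifted by a multiple of (r, 1).

```python
p = 23                      # toy prime; the multiplicative group mod 23 is cyclic of order 22
g = 5                       # a generator of that group
r = 13                      # the "secret" discrete logarithm
x = pow(g, r, p)            # x = g^r mod p
order = 22                  # order of g, i.e. the size of the group generated by g

def f(a, b):
    # f(a, b) = g^a * x^(-b) mod p (pow with a negative exponent and a modulus
    # computes the modular inverse; this needs Python 3.8+).
    return (pow(g, a, p) * pow(x, -b, p)) % p

# f is constant on cosets of the subgroup generated by (r, 1):
assert f(3, 7) == f((3 + r) % order, (7 + 1) % order)
assert f(0, 0) == f((5 * r) % order, 5 % order)
print("f(a, b) is unchanged when (a, b) is shifted by multiples of (r, 1)")
```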
https://en.wikipedia.org/wiki/Shor%27s_algorithm
Asatellite navigationorsatnavsystem is a system that usessatellitesto provide autonomousgeopositioning. A satellite navigation system with global coverage is termedglobal navigation satellite system(GNSS). As of 2024[update], four global systems are operational: theUnited States'sGlobal Positioning System(GPS),Russia's Global Navigation Satellite System (GLONASS),China'sBeiDouNavigation Satellite System (BDS),[1]and theEuropean Union'sGalileo.[2] Satellite-based augmentation systems(SBAS), designed to enhance the accuracy of GNSS,[3]include Japan'sQuasi-Zenith Satellite System(QZSS),[3]India'sGAGANand the EuropeanEGNOS, all of them based on GPS. Previous iterations of the BeiDou navigation system and the presentIndian Regional Navigation Satellite System(IRNSS), operationally known as NavIC, are examples of stand-alone operatingregional navigation satellite systems(RNSS).[4] Satellite navigation devicesdetermine their location (longitude,latitude, andaltitude/elevation) to high precision (within a few centimeters to meters) usingtime signalstransmitted along aline of sightbyradiofrom satellites. The system can be used for providing position, navigation or for tracking the position of something fitted with a receiver (satellite tracking). The signals also allow the electronic receiver to calculate the current local time to a high precision, which allows time synchronisation. These uses are collectively known asPositioning, Navigation and Timing(PNT). Satnav systems operate independently of any telephonic or internet reception, though these technologies can enhance the usefulness of the positioning information generated. Global coverage for each system is generally achieved by asatellite constellationof 18–30medium Earth orbit(MEO) satellites spread between severalorbital planes. The actual systems vary, but all useorbital inclinationsof >50° andorbital periodsof roughly twelve hours (at an altitude of about 20,000 kilometres or 12,000 miles).[not verified in body] GNSS systems that provide enhanced accuracy and integrity monitoring usable for civil navigation are classified as follows:[5] By their roles in the navigation system, systems can be classified as: As many of the global GNSS systems (and augmentation systems) use similar frequencies and signals around L1, many "Multi-GNSS" receivers capable of using multiple systems have been produced. While some systems strive to interoperate with GPS as well as possible by providing the same clock, others do not.[8] Ground-basedradio navigationis decades old. TheDECCA,LORAN,GEEandOmegasystems used terrestriallongwaveradiotransmitterswhich broadcast a radio pulse from a known "master" location, followed by a pulse repeated from a number of "slave" stations. The delay between the reception of the master signal and the slave signals allowed the receiver to deduce the distance to each of the slaves, providing afix. The first satellite navigation system wasTransit, a system deployed by the US military in the 1960s. Transit's operation was based on theDoppler effect: the satellites travelled on well-known paths and broadcast their signals on a well-knownradio frequency. The received frequency will differ slightly from the broadcast frequency because of the movement of the satellite with respect to the receiver. 
By monitoring this frequency shift over a short time interval, the receiver can determine its location to one side or the other of the satellite, and several such measurements combined with a precise knowledge of the satellite's orbit can fix a particular position. Satellite orbital position errors are caused by radio-wave refraction, gravity field changes (as the Earth's gravitational field is not uniform), and other phenomena. A team, led by Harold L Jury of Pan Am Aerospace Division in Florida from 1970 to 1973, found solutions and/or corrections for many error sources.[citation needed] Using real-time data and recursive estimation, the systematic and residual errors were narrowed down to accuracy sufficient for navigation.[9] Part of an orbiting satellite's broadcast includes its precise orbital data. Originally, the US Naval Observatory (USNO) continuously observed the precise orbits of these satellites. As a satellite's orbit deviated, the USNO sent the updated information to the satellite. Subsequent broadcasts from an updated satellite would contain its most recent ephemeris. Modern systems are more direct. The satellite broadcasts a signal that contains orbital data (from which the position of the satellite can be calculated) and the precise time the signal was transmitted. Orbital data include a rough almanac for all satellites to aid in finding them, and a precise ephemeris for this satellite. The orbital ephemeris is transmitted in a data message that is superimposed on a code that serves as a timing reference. The satellite uses an atomic clock to maintain synchronization of all the satellites in the constellation. The receiver compares the time of broadcast encoded in the transmission of three (at sea level) or four (which allows an altitude calculation also) different satellites, measuring the time-of-flight to each satellite. Several such measurements can be made at the same time to different satellites, allowing a continual fix to be generated in real time using an adapted version of trilateration: see GNSS positioning calculation for details. Each distance measurement, regardless of the system being used, places the receiver on a spherical shell centred on the broadcaster, at the measured distance from the broadcaster. By taking several such measurements and then looking for a point where the shells meet, a fix is generated. However, in the case of fast-moving receivers, the position of the receiver moves as signals are received from several satellites. In addition, the radio signals slow slightly as they pass through the ionosphere, and this slowing varies with the receiver's angle to the satellite, because that angle corresponds to the distance which the signal travels through the ionosphere. The basic computation thus attempts to find the shortest directed line tangent to four oblate spherical shells centred on four satellites. Satellite navigation receivers reduce errors by using combinations of signals from multiple satellites and multiple correlators, and then using techniques such as Kalman filtering to combine the noisy, partial, and constantly changing data into a single estimate for position, time, and velocity. When Einstein's theory of general relativity is applied to GPS time correction, the net result is that time on a GPS satellite clock advances faster than a clock on the ground by about 38 microseconds per day.[10] A rough arithmetic sketch of this correction is given below.
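The following sketch combines the two standard weak-field relativistic corrections (the slowing from the satellite's orbital speed and the speed-up from its higher gravitational potential); the circular-orbit assumption and the round-number constants are simplifications rather than values taken from this article.

```python
GM = 3.986004418e14        # Earth's gravitational parameter, m^3/s^2
c = 299_792_458.0          # speed of light, m/s
R_EARTH = 6.371e6          # mean Earth radius, m
R_ORBIT = 2.656e7          # GPS orbital radius (about 20,200 km altitude), m
SECONDS_PER_DAY = 86_400

# Special relativity: the satellite's orbital speed makes its clock run slower.
v_squared = GM / R_ORBIT                       # speed squared for a circular orbit
sr_rate = -v_squared / (2 * c ** 2)            # fractional rate change (negative = slower)

# General relativity: the weaker gravitational potential at altitude makes it run faster.
gr_rate = GM * (1 / R_EARTH - 1 / R_ORBIT) / c ** 2

net = (sr_rate + gr_rate) * SECONDS_PER_DAY * 1e6
print(f"velocity effect:      {sr_rate * SECONDS_PER_DAY * 1e6:+.1f} microseconds/day")
print(f"gravitational effect: {gr_rate * SECONDS_PER_DAY * 1e6:+.1f} microseconds/day")
print(f"net offset:           {net:+.1f} microseconds/day")   # roughly +38
```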
The original motivation for satellite navigation was for military applications. Satellite navigation allows precision in the delivery of weapons to targets, greatly increasing their lethality whilst reducing inadvertent casualties from mis-directed weapons. (See guided bomb.) Satellite navigation also allows forces to be directed and to locate themselves more easily, reducing the fog of war. Now a global navigation satellite system, such as Galileo, is used to determine a user's location and the location of other people or objects at any given moment. The range of application of satellite navigation in the future is enormous, including both the public and private sectors across numerous market segments such as science, transport, agriculture, etc.[11] The ability to supply satellite navigation signals is also the ability to deny their availability. The operator of a satellite navigation system potentially has the ability to degrade or eliminate satellite navigation services over any territory it desires. In order of first launch year: First launch year: 1978. The United States' Global Positioning System (GPS) consists of up to 32 medium Earth orbit satellites in six different orbital planes. The exact number of satellites varies as older satellites are retired and replaced. Operational since 1978 and globally available since 1994, GPS is the world's most utilized satellite navigation system. First launch year: 1982. The formerly Soviet, and now Russian, Global'naya Navigatsionnaya Sputnikovaya Sistema (GLObal NAvigation Satellite System, or GLONASS) is a space-based satellite navigation system that provides a civilian radionavigation-satellite service and is also used by the Russian Aerospace Defence Forces. GLONASS has had full global coverage since 1995, with 24 active satellites. First launch year: 2000. BeiDou started as the now-decommissioned Beidou-1, an Asia-Pacific local network on geostationary orbits. The second generation of the system, BeiDou-2, became operational in China in December 2011.[12] The BeiDou-3 system is proposed to consist of 30 MEO satellites and five geostationary satellites (IGSO). A 16-satellite regional version (covering Asia and the Pacific area) was completed by December 2012. Global service was completed by December 2018.[13] On 23 June 2020, the BDS-3 constellation deployment was fully completed after the last satellite was successfully launched at the Xichang Satellite Launch Center.[14] First launch year: 2011. The European Union and European Space Agency agreed in March 2002 to introduce their own alternative to GPS, called the Galileo positioning system. Galileo became operational on 15 December 2016 (global Early Operational Capability, EOC).[15] At an estimated cost of €10 billion,[16] the system of 30 MEO satellites was originally scheduled to be operational in 2010; the date was later pushed back to 2014.[17] The first experimental satellite was launched on 28 December 2005.[18] Galileo is expected to be compatible with the modernized GPS system. The receivers will be able to combine the signals from both Galileo and GPS satellites to greatly increase the accuracy. The full Galileo constellation consists of 24 active satellites,[19] the last of which was launched in December 2021.[20][21] The main modulation used in the Galileo Open Service signal is the Composite Binary Offset Carrier (CBOC) modulation. NavIC (an acronym for Navigation with Indian Constellation) is an autonomous regional satellite navigation system developed by the Indian Space Research Organisation (ISRO). The Indian government approved the project in May 2006.
NavIC (an acronym for Navigation with Indian Constellation) is an autonomous regional satellite navigation system developed by the Indian Space Research Organisation (ISRO). The Indian government approved the project in May 2006. It consists of a constellation of 7 navigational satellites.[22] Three of the satellites are placed in geostationary orbit (GEO) and the remaining 4 in geosynchronous orbit (GSO), giving a larger signal footprint with a smaller number of satellites needed to map the region. It is intended to provide an all-weather absolute position accuracy of better than 7.6 metres (25 ft) throughout India and within a region extending approximately 1,500 km (930 mi) around it.[23] An Extended Service Area lies between the primary service area and a rectangular area enclosed by the 30th parallel south, the 50th parallel north, the 30th meridian east and the 130th meridian east, 1,500–6,000 km beyond the borders.[24] A goal of complete Indian control has been stated, with the space segment, ground segment and user receivers all being built in India.[25]

The constellation was in orbit as of 2018, and the system was available for public use in early 2018.[26] NavIC provides two levels of service: the "standard positioning service", which is open for civilian use, and a "restricted service" (an encrypted one) for authorized users (including the military). There are plans to expand the NavIC system by increasing the constellation size from 7 to 11 satellites.[27]

India plans to make NavIC global by adding 24 more MEO satellites. The global NavIC will be free to use for the global public.[28]

The first two generations of China's BeiDou navigation system were designed to provide regional coverage.

The Korean Positioning System (KPS) is currently in development and is expected to be operational by 2035.[29][30]

GNSS augmentation is a method of improving a navigation system's attributes, such as accuracy, reliability, and availability, through the integration of external information into the calculation process; examples include the Wide Area Augmentation System, the European Geostationary Navigation Overlay Service, the Multi-functional Satellite Augmentation System, Differential GPS, GPS-aided GEO augmented navigation (GAGAN) and inertial navigation systems.

The Quasi-Zenith Satellite System (QZSS) is a four-satellite regional time transfer system and enhancement for GPS covering Japan and the Asia-Oceania regions. QZSS services were available on a trial basis as of 12 January 2018 and started in November 2018. The first satellite was launched in September 2010.[31] A satellite navigation system independent of GPS, with 7 satellites, is planned for 2023.[32]

The European Geostationary Navigation Overlay Service (EGNOS) is a satellite-based augmentation system (SBAS) developed by the European Space Agency and Eurocontrol on behalf of the European Commission. Currently, it supplements GPS by reporting on the reliability and accuracy of its positioning data and sending out corrections; the system will also supplement Galileo in a future version 3.0. EGNOS consists of 40 Ranging Integrity Monitoring Stations, 2 Mission Control Centres, 6 Navigation Land Earth Stations, the EGNOS Wide Area Network (EWAN), and 3 geostationary satellites.[33] Ground stations determine the accuracy of the satellite navigation systems' data and transfer it to the geostationary satellites; users may freely obtain this data from those satellites using an EGNOS-enabled receiver, or over the Internet. One main use of the system is in aviation. According to specifications, horizontal position accuracy when using EGNOS-provided corrections should be better than seven metres; in practice, the horizontal position accuracy is at the metre level.
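Augmentation systems such as EGNOS, WAAS and conventional Differential GPS share a common principle: a reference receiver at a precisely surveyed location compares the ranges it measures with the ranges it knows to be geometrically correct and distributes the per-satellite differences, which nearby users apply to their own measurements. The snippet below is a schematic illustration of that principle with invented coordinates and error values; real SBAS messages are far richer, carrying separate ionospheric grids, orbit and clock corrections, and integrity information.

```python
import numpy as np

def range_corrections(station_pos, sat_positions, measured_ranges):
    """Per-satellite corrections from a reference station at a surveyed position:
    correction = true geometric range - measured pseudorange."""
    true_ranges = np.linalg.norm(sat_positions - station_pos, axis=1)
    return true_ranges - measured_ranges

def apply_corrections(rover_ranges, corrections):
    """A rover near the station adds the broadcast corrections to its own
    pseudoranges, cancelling errors common to both receivers (satellite clock,
    most of the ionospheric and tropospheric delay)."""
    return rover_ranges + corrections

# Invented example: two satellites, 5 m and 8 m of common-mode error
station = np.array([3_980_000.0, 0.0, 4_970_000.0])
sats = np.array([[15_600e3, 7_540e3, 20_140e3],
                 [18_760e3, 2_750e3, 18_610e3]])
true_station_ranges = np.linalg.norm(sats - station, axis=1)
station_measured = true_station_ranges + np.array([5.0, 8.0])

corr = range_corrections(station, sats, station_measured)   # [-5.0, -8.0]
rover_measured = np.array([21_000_123.0, 20_500_456.0])     # assumed to carry the same 5 m / 8 m errors
print(apply_corrections(rover_measured, corr))
```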
Similar services are provided in North America by the Wide Area Augmentation System (WAAS), in Russia by the System for Differential Corrections and Monitoring (SDCM), and in Asia by Japan's Multi-functional Satellite Augmentation System (MSAS) and India's GPS-aided GEO augmented navigation (GAGAN).

Using multiple GNSS systems for user positioning increases the number of visible satellites, improves precise point positioning (PPP) and shortens the average convergence time.[43] The signal-in-space ranging errors (SISRE) in November 2019 were 1.6 cm for Galileo, 2.3 cm for GPS, 5.2 cm for GLONASS and 5.5 cm for BeiDou when using real-time corrections for satellite orbits and clocks.[44] The average SISREs of the BDS-3 MEO, IGSO, and GEO satellites were 0.52 m, 0.90 m and 1.15 m, respectively. Compared with the other major global satellite navigation systems, whose constellations consist of MEO satellites, the SISRE of the BDS-3 MEO satellites was slightly inferior to the 0.4 m of Galileo, slightly superior to the 0.59 m of GPS, and markedly superior to the 2.33 m of GLONASS. The SISRE of BDS-3 IGSO was 0.90 m, on par with the 0.92 m of QZSS IGSO. However, as the BDS-3 GEO satellites were newly launched and not yet fully functioning in orbit, their average SISRE was marginally worse than the 0.91 m of the QZSS GEO satellites.[3]

Doppler Orbitography and Radio-positioning Integrated by Satellite (DORIS) is a French precision navigation system. Unlike other GNSS systems, it is based on static emitting stations around the world, with the receivers on board satellites, in order to precisely determine their orbital position. The system may also be used for mobile receivers on land, with more limited usage and coverage. Used with traditional GNSS systems, it pushes the accuracy of positions to centimetric precision (and to millimetric precision for altimetric applications, and also allows monitoring of very tiny seasonal changes of Earth rotation and deformations), in order to build a much more precise geodetic reference system.[45]

The two currently operational low Earth orbit (LEO) satellite phone networks are able to track transceiver units with an accuracy of a few kilometres using Doppler shift calculations from the satellite. The coordinates are sent back to the transceiver unit, where they can be read using AT commands or a graphical user interface.[46][47] This can also be used by the gateway to enforce restrictions on geographically bound calling plans.
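Both DORIS and the LEO satellite-phone tracking described above rely on the same relation: the received frequency is offset in proportion to the rate of change of the transmitter-receiver range, so a series of Doppler measurements from a satellite with a known orbit constrains the receiver's position. The toy model below only evaluates the predicted shift and the residuals at a candidate position, using invented satellite states; it is a simplified sketch of the geometry, not the processing actually used by DORIS or the satellite-phone networks.

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def doppler_shift(f_tx, sat_pos, sat_vel, rx_pos):
    """Predicted received-frequency shift (Hz) for a stationary receiver:
    range-rate is the satellite velocity projected onto the line of sight
    (positive when the satellite recedes, giving a negative shift)."""
    los = sat_pos - rx_pos
    range_rate = np.dot(sat_vel, los) / np.linalg.norm(los)
    return -f_tx * range_rate / C

def doppler_residuals(candidate, f_tx, sat_states, measured_shifts):
    """Residuals between measured shifts and those predicted at a candidate
    position; a solver (grid search, Gauss-Newton, ...) would minimize these."""
    predicted = np.array([doppler_shift(f_tx, p, v, candidate) for p, v in sat_states])
    return measured_shifts - predicted

# Invented example: one low-orbit pass sampled at three epochs
f_tx = 1.6e9                               # assumed transmit frequency, Hz
sat_states = [
    (np.array([7.0e6, -1.0e6, 0.3e6]), np.array([0.0, 7.5e3, 0.0])),
    (np.array([7.0e6,  0.0,    0.3e6]), np.array([0.0, 7.5e3, 0.0])),
    (np.array([7.0e6,  1.0e6,  0.3e6]), np.array([0.0, 7.5e3, 0.0])),
]
truth = np.array([6.371e6, 0.0, 0.0])
measured = np.array([doppler_shift(f_tx, p, v, truth) for p, v in sat_states])
print(doppler_residuals(truth, f_tx, sat_states, measured))   # zero residuals at the true position
```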
The International Telecommunication Union (ITU) defines a radionavigation-satellite service (RNSS) as "a radiodetermination-satellite service used for the purpose of radionavigation. This service may also include feeder links necessary for its operation".[48] RNSS is regarded as a safety-of-life service and an essential part of navigation which must be protected from interference.

Aeronautical radionavigation-satellite service (ARNSS) is – according to Article 1.47 of the International Telecommunication Union's (ITU) Radio Regulations (RR)[49] – defined as «A radionavigation service in which earth stations are located on board aircraft.»

Maritime radionavigation-satellite service (MRNSS) is – according to Article 1.45 of the International Telecommunication Union's (ITU) Radio Regulations (RR)[50] – defined as «A radionavigation-satellite service in which earth stations are located on board ships.»

The ITU Radio Regulations (Article 1) classify radiocommunication services as:

The allocation of radio frequencies is provided according to Article 5 of the ITU Radio Regulations (edition 2012).[51] To improve harmonisation in spectrum utilisation, most service allocations are incorporated in national Tables of Frequency Allocations and Utilisations, within the responsibility of the appropriate national administration. Allocations are:

Alternative Positioning, Navigation and Timing (AltPNT) refers to the concept of providing positioning, navigation and timing by means other than, and as an alternative to, GNSS. Such alternatives include:[52]
https://en.wikipedia.org/wiki/Satellite_navigation