In mathematics, particularly in combinatorics, given a family of sets, here called a collection C, a transversal (also called a cross-section[1][2][3]) is a set containing exactly one element from each member of the collection. When the sets of the collection are mutually disjoint, each element of the transversal corresponds to exactly one member of C (the set it is a member of). If the original sets are not disjoint, there are two possibilities for the definition of a transversal: one requires an injection assigning to each set a distinct representative (a system of distinct representatives, SDR), while the other allows representatives of different sets to coincide (a system of not-necessarily-distinct representatives). In computer science, computing transversals is useful in several application domains, with the input family of sets often being described as a hypergraph. In set theory, the axiom of choice is equivalent to the statement that every partition has a transversal.[7]

A fundamental question in the study of SDRs is whether or not an SDR exists. Hall's marriage theorem gives necessary and sufficient conditions for a finite collection of sets, some possibly overlapping, to have a transversal. The condition is that, for every integer k, every collection of k sets must contain in common at least k different elements.[4]: 29

The following refinement by H. J. Ryser gives lower bounds on the number of such SDRs.[8]: 48

Theorem. Let $S_1, S_2, \ldots, S_m$ be a collection of sets such that $S_{i_1} \cup S_{i_2} \cup \dots \cup S_{i_k}$ contains at least $k$ elements for $k = 1, 2, \ldots, m$ and for all $k$-combinations $\{i_1, i_2, \ldots, i_k\}$ of the integers $1, 2, \ldots, m$, and suppose that each of these sets contains at least $t$ elements. If $t \le m$ then the collection has at least $t!$ SDRs, and if $t > m$ then the collection has at least $t!/(t-m)!$ SDRs.

One can construct a bipartite graph in which the vertices on one side are the sets, the vertices on the other side are the elements, and the edges connect a set to the elements it contains. Then, a transversal (defined as a system of distinct representatives) is equivalent to a perfect matching in this graph.

One can construct a hypergraph in which the vertices are the elements, and the hyperedges are the sets. Then, a transversal (defined as a system of not-necessarily-distinct representatives) is a vertex cover in a hypergraph.

In group theory, given a subgroup H of a group G, a right (respectively left) transversal is a set containing exactly one element from each right (respectively left) coset of H. In this case, the "sets" (cosets) are mutually disjoint, i.e. the cosets form a partition of the group. As a particular case of the previous example, given a direct product of groups $G = H \times K$, H is a transversal for the cosets of K.

In general, since any equivalence relation on an arbitrary set gives rise to a partition, picking any representative from each equivalence class results in a transversal. Another instance of a partition-based transversal occurs when one considers the equivalence relation known as the (set-theoretic) kernel of a function, defined for a function $f$ with domain $X$ as the partition of the domain $\operatorname{ker} f := \{\{y \in X \mid f(x) = f(y)\} \mid x \in X\}$, which partitions the domain of $f$ into equivalence classes such that all elements in a class map via $f$ to the same value. If $f$ is injective, there is only one transversal of $\operatorname{ker} f$. For a not-necessarily-injective $f$, fixing a transversal $T$ of $\operatorname{ker} f$ induces a one-to-one correspondence between $T$ and the image of $f$, henceforth denoted by $\operatorname{Im} f$.
Consequently, a function $g : \operatorname{Im} f \to T$ is well defined by the property that for all $z$ in $\operatorname{Im} f$, $g(z) = x$ where $x$ is the unique element in $T$ such that $f(x) = z$; furthermore, $g$ can be extended (not necessarily in a unique manner) so that it is defined on the whole codomain of $f$ by picking arbitrary values for $g(z)$ when $z$ is outside the image of $f$. It is a simple calculation to verify that $g$ thus defined has the property that $f \circ g \circ f = f$, which is the proof (when the domain and codomain of $f$ are the same set) that the full transformation semigroup is a regular semigroup. $g$ acts as a (not necessarily unique) quasi-inverse for $f$; within semigroup theory this is simply called an inverse. Note however that for an arbitrary $g$ with the aforementioned property the "dual" equation $g \circ f \circ g = g$ may not hold. However, if we set $h = g \circ f \circ g$, then $f$ is a quasi-inverse of $h$, i.e. $h \circ f \circ h = h$.

A common transversal of the collections A and B (where $|A| = |B| = n$) is a set that is a transversal of both A and B. The collections A and B have a common transversal if and only if, for all $I, J \subset \{1, \ldots, n\}$, $\left|\left(\bigcup_{i \in I} A_i\right) \cap \left(\bigcup_{j \in J} B_j\right)\right| \ge |I| + |J| - n$.

A partial transversal is a set containing at most one element from each member of the collection, or (in the stricter form of the concept) a set with an injection from the set to C. The transversals of a finite collection C of finite sets form the basis sets of a matroid, the transversal matroid of C. The independent sets of the transversal matroid are the partial transversals of C.[10]

An independent transversal (also called a rainbow-independent set or independent system of representatives) is a transversal which is also an independent set of a given graph. To explain the difference in figurative terms, consider a faculty with m departments, where the faculty dean wants to construct a committee of m members, one member per department. Such a committee is a transversal. But now, suppose that some faculty members dislike each other and do not agree to sit on the committee together. In this case, the committee must be an independent transversal, where the underlying graph describes the "dislike" relations.[11]

Another generalization of the concept of a transversal would be a set that just has a non-empty intersection with each member of C. An example of the latter would be a Bernstein set, which is defined as a set that has a non-empty intersection with each set of C, but contains no set of C, where C is the collection of all perfect sets of a topological Polish space. As another example, let C consist of all the lines of a projective plane; then a blocking set in this plane is a set of points which intersects each line but contains no line.

In the language of category theory, a transversal of a collection of mutually disjoint sets is a section of the quotient map induced by the collection.

The computational complexity of computing all transversals of an input family of sets has been studied, in particular in the framework of enumeration algorithms.
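The bipartite-matching view suggests a direct algorithm for the SDR form of transversal. Below is a minimal sketch using the classic augmenting-path method for bipartite matching; the example families are made up, and None is returned exactly when no SDR exists (i.e., when Hall's condition fails).

```python
# Find a system of distinct representatives (SDR) for a family of sets by
# building the bipartite "set vs. element" matching described above.
def find_sdr(sets: list[set]):
    match = {}                       # element -> index of the set it represents

    def try_assign(i, seen):
        for x in sets[i]:
            if x in seen:
                continue
            seen.add(x)
            # x is free, or the set currently holding x can be re-assigned
            if x not in match or try_assign(match[x], seen):
                match[x] = i
                return True
        return False

    for i in range(len(sets)):
        if not try_assign(i, set()):
            return None              # some k sets share fewer than k elements
    return {i: x for x, i in match.items()}

print(find_sdr([{1, 2}, {2, 3}, {1, 3}]))   # one SDR, a distinct element per set
print(find_sdr([{1}, {1}]))                 # None: two sets share a single element
```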
https://en.wikipedia.org/wiki/Transversal_(combinatorics)
General Problem Solver (GPS) is a computer program created in 1957 by Herbert A. Simon, J. C. Shaw, and Allen Newell (RAND Corporation) intended to work as a universal problem solver machine. In contrast to the earlier Logic Theorist project, GPS works with means–ends analysis.[1]

Any problem that can be expressed as a set of well-formed formulas (WFFs) or Horn clauses, and that constitutes a directed graph with one or more sources (that is, hypotheses) and sinks (that is, desired conclusions), can be solved, in principle, by GPS. Proofs in the predicate logic and Euclidean geometry problem spaces are prime examples of the domain of applicability of GPS. It was based on Simon and Newell's theoretical work on logic machines. GPS was the first computer program that separated its knowledge of problems (rules represented as input data) from its strategy of how to solve problems (a generic solver engine). GPS was implemented in the third-order programming language IPL.[2]

While GPS solved simple problems such as the Towers of Hanoi that could be sufficiently formalized, it could not solve any real-world problems because the search was easily lost in the combinatorial explosion. Put another way, the number of "walks" through the inferential digraph became computationally untenable. (In practice, even a straightforward state space search such as the Towers of Hanoi can become computationally infeasible, albeit judicious prunings of the state space can be achieved by such elementary AI techniques as A* and IDA*.)

The user defined objects and operations that could be done on the objects, and GPS generated heuristics by means–ends analysis in order to solve problems. It focused on the available operations, finding what inputs were acceptable and what outputs were generated. It then created subgoals to get closer and closer to the goal. The GPS paradigm eventually evolved into the Soar architecture for artificial intelligence.
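To give a flavor of means–ends analysis, here is a toy sketch of our own in Python (not Newell and Simon's IPL implementation, and the errand domain is invented): the solver picks an unmet goal, finds an operator whose effects reduce that difference, and recursively sets up the operator's preconditions as subgoals.

```python
# Operators: (name, preconditions, facts added, facts deleted) -- a made-up domain.
OPS = [
    ("walk-to-shop", set(),       {"at-shop"},   {"at-home"}),
    ("buy-milk",     {"at-shop"}, {"have-milk"}, set()),
    ("walk-home",    {"at-shop"}, {"at-home"},   {"at-shop"}),
]

def solve(state: frozenset, goals, depth=10):
    """Return a plan (list of operator names) achieving all goals, or None."""
    missing = set(goals) - state
    if not missing:
        return []
    if depth == 0:
        return None
    goal = next(iter(missing))                   # pick one unmet difference
    for name, pre, add, dele in OPS:
        if goal in add:                          # operator reduces the difference
            sub = solve(state, pre, depth - 1)   # subgoal: its preconditions
            if sub is None:
                continue
            s = set(state)
            for step in sub:                     # replay the subplan's effects
                _, _, a, d = next(o for o in OPS if o[0] == step)
                s = (s - d) | a
            s = frozenset((s - dele) | add)      # then apply this operator
            rest = solve(s, goals, depth - 1)
            if rest is not None:
                return sub + [name] + rest
    return None

# -> ['walk-to-shop', 'buy-milk', 'walk-home']
print(solve(frozenset({"at-home"}), {"have-milk", "at-home"}))
```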
https://en.wikipedia.org/wiki/General_Problem_Solver
NoScript (or NoScript Security Suite) is a free and open-source extension for Firefox- and Chromium-based web browsers,[4] written and maintained by Giorgio Maone,[5] a software developer and member of the Mozilla Security Group.[6]

By default, NoScript blocks active (executable) web content, which can be wholly or partially unblocked by allowlisting a site or domain from the extension's toolbar menu or by clicking a placeholder icon. In the default configuration, active content is globally denied, although the user may turn this around and use NoScript to block specific unwanted content. The allowlist may be permanent or temporary (until the browser closes or the user revokes permissions). Active content may consist of JavaScript, web fonts, media codecs, WebGL, Java applets, Silverlight, and Flash. The add-on also offers specific countermeasures against security exploits.[7]

Because many web browser attacks require active content that the browser normally runs without question, disabling such content by default and using it only to the degree that it is necessary reduces the chances of vulnerability exploitation. In addition, not loading this content saves significant bandwidth[8] and defeats some forms of web tracking. NoScript is useful for developers to see how well their site works with JavaScript turned off. It also can remove many irritating web elements, such as in-page pop-up messages and certain paywalls, which require JavaScript in order to function.

NoScript takes the form of a toolbar icon or status bar icon in Firefox. It displays on every website to denote whether NoScript has blocked, allowed, or partially allowed scripts to run on the web page being viewed. Clicking or hovering (since version 2.0.3rc1[9]) the mouse cursor on the NoScript icon gives the user the option to allow or forbid the script's processing. NoScript's interface, whether accessed by right-clicking on the web page or the distinctive NoScript box at the bottom of the page (by default), shows the URL of the script(s) that are blocked, but does not provide any sort of reference to look up whether or not a given script is safe to run.[10] With complex webpages, users may be faced with well over a dozen different cryptic URLs and a non-functioning webpage, with only the choice to allow a script, block it, or allow it temporarily.

On November 14, 2017, Giorgio Maone announced NoScript 10, which would be "very different" from the 5.x versions and would use WebExtension technology, making it compatible with Firefox Quantum.[11] On November 20, 2017, Maone released version 10.1.1 for Firefox 57 and above. NoScript is available for Firefox for Android.[12]

On April 11, 2007, NoScript 1.1.4.7 was publicly released,[13] introducing the first client-side protection against Type 0 and Type 1 cross-site scripting (XSS) ever delivered in a web browser. Whenever a website tries to inject HTML or JavaScript code inside a different site (a violation of the same-origin policy), NoScript filters the malicious request and neutralizes its dangerous payload.[14] Similar features were adopted years later by Microsoft Internet Explorer 8[15] and by Google Chrome.[16]

The Application Boundaries Enforcer (ABE) is a built-in NoScript module meant to harden the web application-oriented protections already provided by NoScript, by delivering a firewall-like component running inside the browser.
This "firewall" is specialized in defining and guarding the boundaries of each sensitive web application relevant to the user (e.g., plug-ins, webmail,online banking, and so on), according to policies defined directly by the user, the web developer/administrator, or a trusted third party.[17]In its default configuration, NoScript's ABE provides protection againstCSRFandDNS rebindingattacks aimed at intranet resources, such as routers and sensitive web applications.[18] NoScript's ClearClick feature,[19]released on October 8, 2008, prevents users from clicking on invisible or "redressed" page elements of embedded documents or applets, defeating all types ofclickjacking(i.e., from frames and plug-ins).[20] This makes NoScript "the only freely available product which offers a reasonable degree of protection against clickjacking attacks."[21] NoScript can force the browser to always useHTTPSwhen establishing connections to some sensitive sites, in order to prevent man-in-the-middle attacks. This behavior can be triggered either by the websites themselves, by sending theStrict Transport Securityheader, or configured by users for those websites that don't support Strict Transport Security yet.[22] NoScript's HTTPS enhancement features have been used by theElectronic Frontier Foundationas the basis of itsHTTPS Everywhereadd-on.[23] In May 2009, it was reported that an "extension war" had broken out between NoScript's developer, Giorgio Maone, and the developers of the Firefox ad-blocking extensionAdblock Plusafter Maone released a version of NoScript that circumvented a block enabled by an AdBlock Plus filter.[29][30]The code implementing this workaround was "camouflaged"[29]to avoid detection. Maone stated that he had implemented it in response to a filter that blocked his own website. After mounting criticism and a declaration by the administrators of theMozilla Add-onssite that the site would change its guidelines regarding add-on modifications,[31]Maone removed the code and issued a full apology.[29][32] In the immediate aftermath of the Adblock Plus incident,[33]a spat arose between Maone and the developers of theGhosteryadd-on after Maone implemented a change on his website that disabled the notification Ghostery used to reportweb tracking software.[34]This was interpreted as an attempt to "prevent Ghostery from reporting on trackers and ad networks on NoScript's websites".[33]In response, Maone stated that the change was made because Ghostery's notification obscured the donation button on the NoScript site.[35]This conflict was resolved when Maone changed his site's CSS to move—rather than disable—the Ghostery notification.[36]
https://en.wikipedia.org/wiki/NoScript
Cover's theorem is a statement in computational learning theory and is one of the primary theoretical motivations for the use of non-linear kernel methods in machine learning applications. It is named after the information theorist Thomas M. Cover, who stated it in 1965, referring to it as the counting function theorem.

Let the number of homogeneously linearly separable sets of $N$ points in $d$ dimensions be defined as a counting function $C(N,d)$ of the number of points $N$ and the dimensionality $d$. The theorem states that $C(N,d) = 2\sum_{k=0}^{d-1}\binom{N-1}{k}$. It requires, as a necessary and sufficient condition, that the points are in general position. Simply put, this means that the points should be as linearly independent (non-aligned) as possible. This condition is satisfied "with probability 1" or almost surely for random point sets, while it may easily be violated for real data, since these are often structured along smaller-dimensionality manifolds within the data space. The function $C(N,d)$ follows two different regimes depending on the relationship between $N$ and $d$.

A consequence of the theorem is that given a set of training data that is not linearly separable, one can with high probability transform it into a training set that is linearly separable by projecting it into a higher-dimensional space via some non-linear transformation, or: A complex pattern-classification problem, cast in a high-dimensional space nonlinearly, is more likely to be linearly separable than in a low-dimensional space, provided that the space is not densely populated.

The theorem can be proved by induction using the recursive relation $C(N+1,d) = C(N,d) + C(N,d-1)$. To show that, with fixed $N$, increasing $d$ may turn a set of points from non-separable to separable, a deterministic mapping may be used: suppose there are $N$ points. Lift them onto the vertices of the simplex in the $(N-1)$-dimensional real space. Since every partition of the samples into two sets is separable by a linear separator, the property follows.

The 1965 paper contains multiple theorems. Theorem 6: Let $X \cup \{y\} = \{x_1, x_2, \cdots, x_N, y\}$ be in $\phi$-general position in $d$-space, where $\phi = (\phi_1, \phi_2, \cdots, \phi_d)$. Then $y$ is ambiguous with respect to $C(N, d-1)$ dichotomies of $X$ relative to the class of all $\phi$-surfaces.

Corollary: If each of the $\phi$-separable dichotomies of $X$ has equal probability, then the probability $A(N,d)$ that $y$ is ambiguous with respect to a random $\phi$-separable dichotomy of $X$ is $\frac{C(N,d-1)}{C(N,d)}$. If $N/d \to \beta$, then in the limit of $N \to \infty$ this probability converges to
$\lim_N A(N,d) = \begin{cases} 1, & 0 \le \beta \le 2 \\ \frac{1}{\beta - 1}, & \beta \ge 2. \end{cases}$

This can be interpreted as a bound on the memory capacity of a single perceptron unit, where $d$ is the number of input weights into the perceptron.
The formula states that in the limit of large $d$, the perceptron would almost certainly be able to memorize up to $2d$ binary labels, but almost certainly fail to memorize any more than that. (MacKay 2003, p. 490)
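The counting function, its recursion, and the capacity transition at $N = 2d$ are easy to check numerically. A short sketch (parameter values are arbitrary):

```python
from math import comb

def C(N: int, d: int) -> int:
    """Cover's counting function: number of homogeneously linearly separable
    dichotomies of N points in general position in d dimensions."""
    return 2 * sum(comb(N - 1, k) for k in range(d))

# Sanity check against the recursion C(N+1, d) = C(N, d) + C(N, d-1).
for N in range(2, 20):
    for d in range(2, 10):
        assert C(N + 1, d) == C(N, d) + C(N, d - 1)

# Fraction of all 2^N dichotomies that are linearly separable: the two regimes
# around the perceptron "capacity" N = 2d.
d = 25
for N in (d, 2 * d, 4 * d):
    print(N, C(N, d) / 2 ** N)   # 1.0 below capacity, 0.5 at N = 2d, ~0 above
```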
https://en.wikipedia.org/wiki/Cover%27s_theorem
Website promotion is a process used by webmasters to improve content and increase exposure of a website to bring more visitors.[1]: 210 Many techniques such as search engine optimization and search engine submission are used to increase a site's traffic once content is developed.[1]: 314

With the rise in popularity of social media platforms, many webmasters have moved to platforms like Facebook, Twitter, LinkedIn and Instagram for viral marketing. By sharing interesting content, webmasters hope that some of the audience will visit the website. Examples of viral content are infographics and memes.

Webmasters often hire outsourced or offshore firms to perform website promotion for them, many of whom provide "low-quality, assembly-line link building".[2]
https://en.wikipedia.org/wiki/Website_promotion
In atomic physics and chemistry, an atomic electron transition (also called an atomic transition, quantum jump, or quantum leap) is an electron changing from one energy level to another within an atom[1] or artificial atom.[2] The time scale of a quantum jump has not been measured experimentally. However, the Franck–Condon principle binds the upper limit of this parameter to the order of attoseconds.[3]

Electrons can relax into states of lower energy by emitting electromagnetic radiation in the form of a photon. Electrons can also absorb passing photons, which excites the electron into a state of higher energy. The larger the energy separation between the electron's initial and final state, the shorter the photons' wavelength.[4]

Danish physicist Niels Bohr first theorized that electrons can perform quantum jumps in 1913.[5] Soon after, James Franck and Gustav Ludwig Hertz proved experimentally that atoms have quantized energy states.[6] The observability of quantum jumps was predicted by Hans Dehmelt in 1975, and they were first observed using trapped ions of barium at the University of Hamburg and mercury at NIST in 1986.[4]

An atom interacts with the oscillating electric field $\mathbf{E}(t) = |\mathbf{E}_0|\,\hat{\mathbf{e}}_{\mathrm{rad}}\cos(\omega t)$, with amplitude $|\mathbf{E}_0|$, angular frequency $\omega$, and polarization vector $\hat{\mathbf{e}}_{\mathrm{rad}}$.[7] Note that the actual phase is $(\omega t - \mathbf{k}\cdot\mathbf{r})$. However, in many cases, the variation of $\mathbf{k}\cdot\mathbf{r}$ is small over the atom (or equivalently, the radiation wavelength is much greater than the size of an atom) and this term can be ignored. This is called the dipole approximation. The atom can also interact with the oscillating magnetic field produced by the radiation, although much more weakly.

The Hamiltonian for this interaction, analogous to the energy of a classical dipole in an electric field, is $H_I = e\,\mathbf{r}\cdot\mathbf{E}(t)$. The stimulated transition rate can be calculated using time-dependent perturbation theory; however, the result can be summarized using Fermi's golden rule: $\text{Rate} \propto |eE_0|^2 \times |\langle 2|\mathbf{r}\cdot\hat{\mathbf{e}}_{\mathrm{rad}}|1\rangle|^2$. The dipole matrix element can be decomposed into the product of the radial integral and the angular integral. The angular integral is zero unless the selection rules for the atomic transition are satisfied.

In 2019, it was demonstrated in an experiment with a superconducting artificial atom consisting of two strongly-hybridized transmon qubits placed inside a readout resonator cavity at 15 mK, that the evolution of some jumps is continuous, coherent, deterministic, and reversible.[8] On the other hand, other quantum jumps are inherently unpredictable.[9]
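The inverse relation between energy separation and wavelength follows from the Planck relation $\Delta E = hc/\lambda$. A minimal sketch of the arithmetic, using the hydrogen 2p to 1s (Lyman-alpha) separation of about 10.2 eV as an illustrative value:

```python
# Photon wavelength from a transition's energy separation via E = h*c/lambda.
H = 6.62607015e-34      # Planck constant, J*s
C = 2.99792458e8        # speed of light, m/s
EV = 1.602176634e-19    # joules per electronvolt

def photon_wavelength_nm(delta_e_ev: float) -> float:
    """Wavelength (nm) of a photon carrying the transition energy delta_e_ev."""
    return H * C / (delta_e_ev * EV) * 1e9

print(photon_wavelength_nm(10.2))   # ~121.6 nm: larger gaps, shorter wavelengths
```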
https://en.wikipedia.org/wiki/Atomic_electron_transition
A holon is something that is simultaneously a whole in and of itself, as well as a part of a larger whole. In this way, a holon can be considered a subsystem within a larger hierarchical system.[1] The holon represents a way to overcome the dichotomy between parts and wholes, as well as a way to account for both the self-assertive and the integrative tendencies of organisms.[2] Holons are sometimes discussed in the context of self-organizing holarchic open (SOHO) systems.[2][1]

The word holon (Ancient Greek: ὅλον) is a combination of the Greek holos (ὅλος) meaning 'whole', with the suffix -on which denotes a particle or part (as in proton and neutron).

Holons are self-reliant units that possess a degree of independence and can handle contingencies without asking higher authorities for instructions (i.e., they have a degree of autonomy). These holons are also simultaneously subject to control from one or more of these higher authorities. The first property ensures that holons are stable forms that are able to withstand disturbances, while the latter property signifies that they are intermediate forms, providing a context for the proper functionality of the larger whole.

The term holon was coined by Arthur Koestler in The Ghost in the Machine (1967), though Koestler first articulated the concept in The Act of Creation (1964), in which he refers to the relationship between the searches for subjective and objective knowledge: Einstein's space is no closer to reality than Van Gogh's sky. The glory of science is not in a truth more absolute than the truth of Bach or Tolstoy, but in the act of creation itself. The scientist's discoveries impose his own order on chaos, as the composer or painter imposes his; an order that always refers to limited aspects of reality, and is based on the observer's frame of reference, which differs from period to period as a Rembrandt nude differs from a nude by Manet.[3]

Koestler would finally propose the term holon in The Ghost in the Machine (1967), using it to describe natural organisms as composed of semi-autonomous sub-wholes (or, parts) that are linked in a form of hierarchy, a holarchy, to form a whole.[2][4][5] The title of the book itself points to the notion that the entire 'machine' of life and of the universe itself is ever-evolving toward more and more complex states, as if a ghost were operating the machine.[6]

The first observation was influenced by a story told to him by Herbert A. Simon, the 'parable of the two watchmakers', in which Simon concludes that complex systems evolve from simple systems much more rapidly when there are stable intermediate forms present in the evolutionary process compared to when they are not present:[7]

There once were two watchmakers, named Bios and Mekhos, who made very fine watches. The phones in their workshops rang frequently; new customers were constantly calling them. However, Bios prospered while Mekhos became poorer and poorer. In the end, Mekhos lost his shop and worked as a mechanic for Bios. What was the reason behind this? The watches consisted of about 1000 parts each. The watches that Mekhos made were designed such that, when he had to put down a partly assembled watch (for instance, to answer the phone), it immediately fell into pieces and had to be completely reassembled from the basic elements. On the other hand, Bios designed his watches so that he could put together subassemblies of about ten components each. Ten of these subassemblies could be put together to make a larger sub-assembly. Finally, ten of the larger subassemblies constituted the whole watch.
When Bios had to put his watches down to attend to some interruption they did not break up into their elemental parts but only into their sub-assemblies. Now, the watchmakers were each disturbed at the same rate of once per hundred assembly operations. However, due to their different assembly methods, it took Mekhos four thousand times longer than Bios to complete a single watch.

The second observation was made by Koestler himself in his analysis of hierarchies and stable intermediate forms in non-living matter (atomic and molecular structure), living organisms, and social organizations.
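Simon's closing arithmetic in the parable above can be sanity-checked. The sketch below uses assumptions of ours that match the story (1000 parts, interruption probability 1/100 per operation, ten-part subassemblies, an interruption scattering only the current partial assembly) and the standard expected-waiting-time formula for a run of successes:

```python
# Expected assembly effort for the two watchmakers under simple assumptions.
p, q = 0.01, 0.99   # interruption / success probability per operation

def expected_ops(n: int) -> float:
    """Expected operations until n consecutive uninterrupted operations
    (standard result for runs of successes in Bernoulli trials)."""
    return (q ** -n - 1) / p

mekhos = expected_ops(1000)        # one flat 1000-part assembly
bios = 111 * expected_ops(10)      # 100 + 10 + 1 ten-part assembly stages
print(f"Mekhos: {mekhos:.3g} ops, Bios: {bios:.3g} ops, ratio ~{mekhos / bios:.0f}")
# The ratio comes out in the low thousands, the order of magnitude of the
# "four thousand times longer" claim (the exact factor depends on accounting).
```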
https://en.wikipedia.org/wiki/Holon_(philosophy)
This is a timeline of quantum computing.

Stephen Wiesner invents conjugate coding.[1][a]

13 June – James L. Park (Washington State University, Pullman)'s paper is received by Foundations of Physics,[6] in which he describes the impossibility of disturbance-free measurement in a quantum transition state, in the context of a disproof of quantum jumps in Bohr's concept of the atom.[7][8][b]

At the first Conference on the Physics of Computation, held at the Massachusetts Institute of Technology (MIT) in May,[25] Paul Benioff and Richard Feynman give talks on quantum computing. Benioff's talk built on his earlier 1980 work showing that a computer can operate under the laws of quantum mechanics. The talk was titled "Quantum mechanical Hamiltonian models of discrete processes that erase their own histories: application to Turing machines".[26] In Feynman's talk, he observed that it appeared to be impossible to efficiently simulate the evolution of a quantum system on a classical computer, and he proposed a basic model for a quantum computer.[27] Feynman's conjecture of a quantum simulating computer, published in 1982[d] and understood as the claim that the reality of quantum mechanics, expressed as an effective quantum system, necessitates quantum computers,[28] is conventionally accepted as the beginning of quantum computing.[29][30]

Charles Bennett and Gilles Brassard employ Wiesner's conjugate coding for the distribution of cryptographic keys.[34]

Artur Ekert, at the University of Oxford, proposes entanglement-based secure communication.[40]

Daniel R. Simon, at Université de Montréal, Quebec, Canada, invents an oracle problem, Simon's problem, for which a quantum computer would be exponentially faster than a conventional computer. This algorithm introduces the main ideas which were then developed in Peter Shor's factorization algorithm.
https://en.wikipedia.org/wiki/Timeline_of_quantum_computing_and_communication
The memory-prediction framework is a theory of brain function created by Jeff Hawkins and described in his 2004 book On Intelligence. This theory concerns the role of the mammalian neocortex and its associations with the hippocampi and the thalamus in matching sensory inputs to stored memory patterns, and how this process leads to predictions of what will happen in the future.

The theory is motivated by the observed similarities between the brain structures (especially neocortical tissue) that are used for a wide range of behaviours available to mammals. The theory posits that the remarkably uniform physical arrangement of cortical tissue reflects a single principle or algorithm which underlies all cortical information processing. The basic processing principle is hypothesized to be a feedback/recall loop which involves both cortical and extra-cortical participation (the latter from the thalamus and the hippocampi in particular).

The central concept of the memory-prediction framework is that bottom-up inputs are matched in a hierarchy of recognition, and evoke a series of top-down expectations encoded as potentiations. These expectations interact with the bottom-up signals to both analyse those inputs and generate predictions of subsequent expected inputs. Each hierarchy level remembers frequently observed temporal sequences of input patterns and generates labels or 'names' for these sequences. When an input sequence matches a memorized sequence at a given level of the hierarchy, a label or 'name' is propagated up the hierarchy, thus eliminating details at higher levels and enabling them to learn higher-order sequences. This process produces increased invariance at higher levels. Higher levels predict future input by matching partial sequences and projecting their expectations to the lower levels. However, when a mismatch between input and memorized/predicted sequences occurs, a more complete representation propagates upwards. This causes alternative 'interpretations' to be activated at higher levels, which in turn generates other predictions at lower levels.

Consider, for example, the process of vision. Bottom-up information starts as low-level retinal signals (indicating the presence of simple visual elements and contrasts). At higher levels of the hierarchy, increasingly meaningful information is extracted, regarding the presence of lines, regions, motions, etc. Even further up the hierarchy, activity corresponds to the presence of specific objects, and then to behaviours of these objects. Top-down information fills in details about the recognized objects, and also about their expected behaviour as time progresses.

The sensory hierarchy induces a number of differences between the various levels. As one moves up the hierarchy, representations have increased:

The relationship between sensory and motor processing is an important aspect of the basic theory. It is proposed that the motor areas of the cortex consist of a behavioural hierarchy similar to the sensory hierarchy, with the lowest levels consisting of explicit motor commands to musculature and the highest levels corresponding to abstract prescriptions (e.g. 'resize the browser'). The sensory and motor hierarchies are tightly coupled, with behaviour giving rise to sensory expectations and sensory perceptions driving motor processes.

Finally, it is important to note that all the memories in the cortical hierarchy have to be learnt; this information is not pre-wired in the brain.
Hence, the process of extracting this representation from the flow of inputs and behaviours is theorized as a process that happens continually during cognition.

Hawkins has extensive training as an electrical engineer. Another way to describe the theory (hinted at in his book) is as a learning hierarchy of feed-forward stochastic state machines. In this view, the brain is analyzed as an encoding problem, not too dissimilar from future-predicting error-correction codes. The hierarchy is a hierarchy of abstraction, with the higher-level machines' states representing more abstract conditions or events, and these states predisposing lower-level machines to perform certain transitions. The lower-level machines model limited domains of experience, or control or interpret sensors or effectors. The whole system actually controls the organism's behavior. Since the state machine is "feed forward", the organism responds to future events predicted from past data. Since it is hierarchical, the system exhibits behavioral flexibility, easily producing new sequences of behavior in response to new sensory data. Since the system learns, the new behavior adapts to changing conditions. That is, the evolutionary purpose of the brain is to predict the future, in admittedly limited ways, so as to change it.

The hierarchies described above are theorized to occur primarily in mammalian neocortex. In particular, neocortex is assumed to consist of a large number of columns (as surmised also by Vernon Benjamin Mountcastle from anatomical and theoretical considerations). Each column is attuned to a particular feature at a given level in a hierarchy. It receives bottom-up inputs from lower levels, and top-down inputs from higher levels. (Other columns at the same level also feed into a given column, and serve mostly to inhibit the activation of exclusive representations.) When an input is recognized, that is, when acceptable agreement is obtained between the bottom-up and top-down sources, a column generates outputs which in turn propagate to both lower and higher levels.

These processes map well to specific layers within mammalian cortex. (The cortical layers should not be confused with different levels of the processing hierarchy: all the layers in a single column participate as one element in a single hierarchical level.) Bottom-up input arrives at layer 4 (L4), whence it propagates to L2 and L3 for recognition of the invariant content. Top-down activation arrives at L2 and L3 via L1 (the mostly axonal layer that distributes activation locally across columns). L2 and L3 compare bottom-up and top-down information, and generate either the invariant 'names' when a sufficient match is achieved, or the more variable signals that occur when this fails. These signals are propagated up the hierarchy (via L5) and also down the hierarchy (via L6 and L1).

To account for storage and recognition of sequences of patterns, a combination of two processes is suggested. The nonspecific thalamus acts as a 'delay line': L5 activates this brain area, which re-activates L1 after a slight delay. Thus, the output of one column generates L1 activity, which will coincide with the input to a column which is temporally subsequent within a sequence. This time ordering operates in conjunction with the higher-level identification of the sequence, which does not change in time; hence, activation of the sequence representation causes the lower-level components to be predicted one after the other.
(Besides this role in sequencing, the thalamus is also active as a sensory waystation; these roles apparently involve distinct regions of this anatomically non-uniform structure.)

Another anatomically diverse brain structure which is hypothesized to play an important role in hierarchical cognition is the hippocampus. It is well known that damage to both hippocampi impairs the formation of long-term declarative memory; individuals with such damage are unable to form new memories of an episodic nature, although they can recall earlier memories without difficulties and can also learn new skills. In the current theory, the hippocampi are thought of as the top level of the cortical hierarchy; they are specialized to retain memories of events that propagate all the way to the top. As such events fit into predictable patterns, they become memorizable at lower levels in the hierarchy. (Such movement of memories down the hierarchy is, incidentally, a general prediction of the theory.) Thus, the hippocampi continually memorize 'unexpected' events (that is, those not predicted at lower levels); if they are damaged, the entire process of memorization through the hierarchy is compromised.

In 2016 Hawkins hypothesized that cortical columns did not just capture a sensation, but also the relative location of that sensation, in three dimensions rather than two (situated capture), in relation to what was around it.[1] "When the brain builds a model of the world, everything has a location relative to everything else," as Hawkins puts it.[1] Some neuroscience research with animals supports the idea that the hippocampus integrates new information with existing memories to form predictive models. This process enables more efficient problem-solving and adaptation to new tasks.[2][3]

The memory-prediction framework explains a number of psychologically salient aspects of cognition. For example, the ability of experts in any field to effortlessly analyze and remember complex problems within their field is a natural consequence of their formation of increasingly refined conceptual hierarchies. Also, the procession from 'perception' to 'understanding' is readily understandable as a result of the matching of top-down and bottom-up expectations. Mismatches, in contrast, generate the exquisite ability of biological cognition to detect unexpected perceptions and situations. (Deficiencies in this regard are a common characteristic of current approaches to artificial intelligence.)

Besides these subjectively satisfying explanations, the framework also makes a number of testable predictions. For example, the important role that prediction plays throughout the sensory hierarchies calls for anticipatory neural activity in certain cells throughout sensory cortex. In addition, cells that 'name' certain invariants should remain active throughout the presence of those invariants, even if the underlying inputs change. The predicted patterns of bottom-up and top-down activity, with the former being more complex when expectations are not met, may be detectable, for example by functional magnetic resonance imaging (fMRI). Although these predictions are not highly specific to the proposed theory, they are sufficiently unambiguous to make verification or rejection of its central tenets possible. See On Intelligence for details on the predictions and findings.

By design, the current theory builds on the work of numerous neurobiologists, and it may be argued that most of these ideas have already been proposed by researchers such as Grossberg and Mountcastle.
On the other hand, the novel separation of the conceptual mechanism (i.e., bidirectional processing and invariant recognition) from the biological details (i.e., neural layers, columns and structures) lays the foundation for abstract thinking about a wide range of cognitive processes.

The most significant limitation of this theory is its current[when?] lack of detail. For example, the concept of invariance plays a crucial role; Hawkins posits "name cells" for at least some of these invariants. (See also Neural ensemble#Encoding for grandmother neurons which perform this type of function, and mirror neurons for a somatosensory system viewpoint.) But it is far from obvious how to develop a mathematically rigorous definition which will carry the required conceptual load across the domains presented by Hawkins. Similarly, a complete theory will require credible details on both the short-term dynamics and the learning processes that will enable the cortical layers to behave as advertised. IBM is implementing Hawkins' model.[citation needed]

The memory-prediction theory claims a common algorithm is employed by all regions in the neocortex. The theory has given rise to a number of software models aiming to simulate this common algorithm using a hierarchical memory structure. The year in the list below indicates when the model was last updated. The following models use belief propagation or belief revision in singly connected Bayesian networks.
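To make the sequence-memory idea concrete, here is a deliberately tiny sketch of our own (not one of the referenced models): a single level that memorizes fixed-length input sequences, emits a stable 'name' upward while a known sequence is recognized, predicts the next input during a partial match, and passes the unexplained detail upward on a mismatch.

```python
# Toy single-level sequence memory in the spirit of the framework's
# "name, predict, or escalate" behaviour. Far simpler than the theory.
class Level:
    def __init__(self, length=3):
        self.length = length              # sequence length this level learns
        self.names = {}                   # tuple(sequence) -> invariant 'name'
        self.buffer = []                  # recent inputs, capped at `length`

    def learn(self, stream):
        """Memorize every length-n window seen in a training stream."""
        for i in range(len(stream) - self.length + 1):
            seq = tuple(stream[i:i + self.length])
            self.names.setdefault(seq, f"seq{len(self.names)}")

    def observe(self, x):
        """Return (upward message, top-down prediction of the next input)."""
        self.buffer = (self.buffer + [x])[-self.length:]
        key = tuple(self.buffer)
        if key in self.names:             # recognized: send invariant name up
            return self.names[key], None
        for seq in self.names:            # partial match: predict continuation
            k = len(self.buffer)
            if k < self.length and seq[:k] == key:
                return None, seq[k]
        return ("mismatch", x), None      # unexpected: pass raw detail upward

level = Level(length=3)
level.learn(list("abcabcabc"))
for ch in "abcx":                         # 'abc' is named; 'x' escalates
    print(ch, level.observe(ch))
```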
https://en.wikipedia.org/wiki/Memory-prediction_framework
In computer central processing units, micro-operations (also known as micro-ops or μops, historically also as micro-actions[2]) are detailed low-level instructions used in some designs to implement complex machine instructions (sometimes termed macro-instructions in this context).[3]: 8–9

Usually, micro-operations perform basic operations on data stored in one or more registers, including transferring data between registers or between registers and external buses of the central processing unit (CPU), and performing arithmetic or logical operations on registers. In a typical fetch-decode-execute cycle, each step of a macro-instruction is decomposed during its execution so the CPU determines and steps through a series of micro-operations. The execution of micro-operations is performed under control of the CPU's control unit, which decides on their execution while performing various optimizations such as reordering, fusion and caching.[1]

Various forms of μops have long been the basis for traditional microcode routines used to simplify the implementation of a particular CPU design or perhaps just the sequencing of certain multi-step operations or addressing modes. More recently, μops have also been employed in a different way in order to let modern CISC processors more easily handle asynchronous parallel and speculative execution: as with traditional microcode, one or more table lookups (or equivalent) are done to locate the appropriate μop-sequence based on the encoding and semantics of the machine instruction (the decoding or translation step); however, instead of having rigid μop-sequences controlling the CPU directly from a microcode ROM, μops are here dynamically buffered for rescheduling before being executed.[4]: 6–7, 9–11

This buffering means that the fetch and decode stages can be more detached from the execution units than is feasible in a more traditional microcoded (or hard-wired) design. As this allows a degree of freedom regarding execution order, it makes some extraction of instruction-level parallelism out of a normal single-threaded program possible (provided that dependencies are checked, etc.). It also opens the way to more analysis, and therefore to reordering of code sequences in order to dynamically optimize the mapping and scheduling of μops onto machine resources (such as ALUs, load/store units, etc.). As this happens on the μop level, sub-operations of different machine (macro) instructions may often intermix in a particular μop-sequence, forming partially reordered machine instructions as a direct consequence of the out-of-order dispatching of microinstructions from several macro instructions.

However, this is not the same as micro-op fusion, in which a more complex microinstruction may replace a few simpler microinstructions in certain cases, typically in order to minimize state changes and usage of the queue and re-order buffer space, thereby reducing power consumption. Micro-op fusion is used in some modern CPU designs.[3]: 89–91, 105–106[4]: 6–7, 9–15

Execution optimization has gone even further; processors not only translate many machine instructions into a series of μops, but also do the opposite when appropriate: they combine certain machine instruction sequences (such as a compare followed by a conditional jump) into a more complex μop which fits the execution model better and thus can be executed faster or with less machine resources involved.
This is also known as macro-op fusion.[3]: 106–107[4]: 12–13

Another way to try to improve performance is to cache the decoded micro-operations in a micro-operation cache, so that if the same macroinstruction is executed again, the processor can directly access the decoded micro-operations from the cache, instead of decoding them again. The execution trace cache found in the Intel NetBurst microarchitecture (Pentium 4) is a widespread example of this technique.[5] The size of this cache may be stated in terms of how many thousands (or strictly, multiples of 1024) of micro-operations it can store: Kμops.[6]
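As an illustration of the decode step described above, the sketch below uses an invented decode table (not any real ISA's actual μop breakdown): a memory-operand add splits into dependent load/add/store μops, and a compare-plus-branch pair is fused into a single μop, in the spirit of macro-op fusion.

```python
# Purely illustrative macro-instruction -> micro-op decomposition.
# Each micro-op is our own notation: (operation, destination, sources).
def decode(insn: str) -> list[tuple]:
    table = {
        # ADD [rbx], rax: load the memory operand, add, store the result.
        "add [rbx], rax": [
            ("load",  "tmp0", ("rbx",)),
            ("add",   "tmp1", ("tmp0", "rax")),
            ("store", None,   ("rbx", "tmp1")),
        ],
        # A compare followed by a conditional jump, fused into one micro-op.
        "cmp rcx, 0 ; jne loop": [
            ("cmp_jne", None, ("rcx", 0, "loop")),
        ],
    }
    return table[insn]

for insn in ("add [rbx], rax", "cmp rcx, 0 ; jne loop"):
    print(insn, "->", decode(insn))
```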
https://en.wikipedia.org/wiki/Micro-operation
In probability theory, Dirichlet processes (after the distribution associated with Peter Gustav Lejeune Dirichlet) are a family of stochastic processes whose realizations are probability distributions. In other words, a Dirichlet process is a probability distribution whose range is itself a set of probability distributions. It is often used in Bayesian inference to describe the prior knowledge about the distribution of random variables, that is, how likely it is that the random variables are distributed according to one or another particular distribution.

As an example, a bag of 100 real-world dice is a random probability mass function (random pmf): to sample this random pmf you put your hand in the bag and draw out a die, that is, you draw a pmf. A bag of dice manufactured using a crude process 100 years ago will likely have probabilities that deviate wildly from the uniform pmf, whereas a bag of state-of-the-art dice used by Las Vegas casinos may have barely perceptible imperfections. We can model the randomness of pmfs with the Dirichlet distribution.[1]

The Dirichlet process is specified by a base distribution $H$ and a positive real number $\alpha$ called the concentration parameter (also known as scaling parameter). The base distribution is the expected value of the process, i.e., the Dirichlet process draws distributions "around" the base distribution the way a normal distribution draws real numbers around its mean. However, even if the base distribution is continuous, the distributions drawn from the Dirichlet process are almost surely discrete. The scaling parameter specifies how strong this discretization is: in the limit of $\alpha \to 0$, the realizations are all concentrated at a single value, while in the limit of $\alpha \to \infty$ the realizations become continuous. Between the two extremes the realizations are discrete distributions with less and less concentration as $\alpha$ increases.

The Dirichlet process can also be seen as the infinite-dimensional generalization of the Dirichlet distribution. In the same way as the Dirichlet distribution is the conjugate prior for the categorical distribution, the Dirichlet process is the conjugate prior for infinite, nonparametric discrete distributions. A particularly important application of Dirichlet processes is as a prior probability distribution in infinite mixture models.

The Dirichlet process was formally introduced by Thomas S. Ferguson in 1973.[2] It has since been applied in data mining and machine learning, among others for natural language processing, computer vision and bioinformatics.

Dirichlet processes are usually used when modelling data that tends to repeat previous values in a so-called "rich get richer" fashion. Specifically, suppose that the generation of values $X_1, X_2, \dots$ can be simulated by the following algorithm.

a) With probability $\frac{\alpha}{\alpha + n - 1}$ draw $X_n$ from $H$.

b) With probability $\frac{n_x}{\alpha + n - 1}$ set $X_n = x$, where $n_x$ is the number of previous observations of $x$. (Formally, $n_x := |\{j : X_j = x \text{ and } j < n\}|$, where $|\cdot|$ denotes the number of elements in the set.)
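A minimal sketch of this generative scheme, with $H$ taken, purely for illustration, to be a standard normal:

```python
import random

def dp_draws(alpha: float, n: int):
    """Simulate X_1..X_n via the predictive ("rich get richer") rule above."""
    draws = []
    for i in range(n):                              # i previous observations
        if random.random() < alpha / (alpha + i):
            draws.append(random.gauss(0.0, 1.0))    # fresh draw from H
        else:
            draws.append(random.choice(draws))      # repeat a value, w.p. n_x / i
    return draws

sample = dp_draws(alpha=2.0, n=1000)
print("distinct values among 1000 draws:", len(set(sample)))
```

With alpha = 2, typically only around a dozen distinct values appear among 1000 draws, showing how strongly the scheme clusters on previously seen values.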
At the same time, another common model for data is that the observations $X_1, X_2, \dots$ are assumed to be independent and identically distributed (i.i.d.) according to some (random) distribution $P$. The goal of introducing Dirichlet processes is to be able to describe the procedure outlined above in this i.i.d. model.

The $X_1, X_2, \dots$ observations in the algorithm are not independent, since we have to consider the previous results when generating the next value. They are, however, exchangeable. This fact can be shown by calculating the joint probability distribution of the observations and noticing that the resulting formula only depends on which $x$ values occur among the observations and how many repetitions they each have. Because of this exchangeability, de Finetti's representation theorem applies and it implies that the observations $X_1, X_2, \dots$ are conditionally independent given a (latent) distribution $P$. This $P$ is a random variable itself and has a distribution. This distribution (over distributions) is called a Dirichlet process ($\operatorname{DP}$). In summary, this means that we get an equivalent procedure to the above algorithm: first draw a distribution $P$ from the Dirichlet process, then draw the observations $X_1, X_2, \dots$ independently from $P$.

In practice, however, drawing a concrete distribution $P$ is impossible, since its specification requires an infinite amount of information. This is a common phenomenon in the context of Bayesian non-parametric statistics, where a typical task is to learn distributions on function spaces, which involve effectively infinitely many parameters. The key insight is that in many applications the infinite-dimensional distributions appear only as an intermediary computational device and are not required for either the initial specification of prior beliefs or for the statement of the final inference.

Given a measurable set $S$, a base probability distribution $H$ and a positive real number $\alpha$, the Dirichlet process $\operatorname{DP}(H, \alpha)$ is a stochastic process whose sample path (or realization, i.e. an infinite sequence of random variates drawn from the process) is a probability distribution over $S$, such that the following holds. For any measurable finite partition of $S$, denoted $\{B_i\}_{i=1}^n$,

$X \sim \operatorname{DP}(H, \alpha) \implies (X(B_1), \dots, X(B_n)) \sim \operatorname{Dir}(\alpha H(B_1), \dots, \alpha H(B_n)),$

where $\operatorname{Dir}$ denotes the Dirichlet distribution and the notation $X \sim D$ means that the random variable $X$ has the distribution $D$.

There are several equivalent views of the Dirichlet process. Besides the formal definition above, the Dirichlet process can be defined implicitly through de Finetti's theorem as described in the first section; this is often called the Chinese restaurant process. A third alternative is the stick-breaking process, which defines the Dirichlet process constructively by writing a distribution sampled from the process as $f(x) = \sum_{k=1}^{\infty} \beta_k \delta_{x_k}(x)$, where $\{x_k\}_{k=1}^{\infty}$ are samples from the base distribution $H$, $\delta_{x_k}$ is an indicator function centered on $x_k$ (zero everywhere except for $\delta_{x_k}(x_k) = 1$) and the $\beta_k$ are defined by a recursive scheme that repeatedly samples from the beta distribution $\operatorname{Beta}(1, \alpha)$.
A widely employed metaphor for the Dirichlet process is based on the so-called Chinese restaurant process. The metaphor is as follows: Imagine a Chinese restaurant into which customers enter. A new customer sits down at a table with a probability proportional to the number of customers already sitting there. Additionally, a customer opens a new table with a probability proportional to the scaling parameter $\alpha$. After infinitely many customers have entered, one obtains a probability distribution over infinitely many tables to be chosen. This probability distribution over the tables is a random sample of the probabilities of observations drawn from a Dirichlet process with scaling parameter $\alpha$. If one associates draws from the base measure $H$ with every table, the resulting distribution over the sample space $S$ is a random sample of a Dirichlet process. The Chinese restaurant process is related to the Pólya urn sampling scheme, which yields samples from finite Dirichlet distributions. Because customers sit at a table with a probability proportional to the number of customers already sitting at the table, two properties of the DP can be deduced:

A third approach to the Dirichlet process is the so-called stick-breaking process view. Conceptually, this involves repeatedly breaking off and discarding a random fraction (sampled from a Beta distribution) of a "stick" that is initially of length 1. Remember that draws from a Dirichlet process are distributions over a set $S$. As noted previously, the distribution drawn is discrete with probability 1. In the stick-breaking process view, we explicitly use the discreteness and give the probability mass function of this (random) discrete distribution as

$f(\theta) = \sum_{k=1}^{\infty} \beta_k \delta_{\theta_k}(\theta),$

where $\delta_{\theta_k}$ is the indicator function which evaluates to zero everywhere, except for $\delta_{\theta_k}(\theta_k) = 1$. Since this distribution is random itself, its mass function is parameterized by two sets of random variables: the locations $\{\theta_k\}_{k=1}^{\infty}$ and the corresponding probabilities $\{\beta_k\}_{k=1}^{\infty}$. In the following, we present without proof what these random variables are.

The locations $\theta_k$ are independent and identically distributed according to $H$, the base distribution of the Dirichlet process. The probabilities $\beta_k$ are given by a procedure resembling the breaking of a unit-length stick (hence the name):

$\beta_k = \beta'_k \prod_{i=1}^{k-1} (1 - \beta'_i),$

where the $\beta'_k$ are independent random variables with the beta distribution $\operatorname{Beta}(1, \alpha)$. The resemblance to 'stick-breaking' can be seen by considering $\beta_k$ as the length of a piece of a stick. We start with a unit-length stick and in each step we break off a portion of the remaining stick according to $\beta'_k$ and assign this broken-off piece to $\beta_k$. The formula can be understood by noting that after the first $k-1$ values have their portions assigned, the length of the remainder of the stick is $\prod_{i=1}^{k-1} (1 - \beta'_i)$, and this piece is broken according to $\beta'_k$ and gets assigned to $\beta_k$.
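A truncated version of this construction is easy to sample. In the sketch below, the truncation level and the standard-normal base distribution are illustrative choices of ours; the true construction is infinite.

```python
import random

def stick_breaking(alpha: float, K: int = 1000):
    """Truncated stick-breaking draw: K (weight, location) atom pairs."""
    weights, locations, remaining = [], [], 1.0
    for _ in range(K):
        frac = random.betavariate(1.0, alpha)     # beta'_k ~ Beta(1, alpha)
        weights.append(remaining * frac)          # beta_k = beta'_k * prod(1 - beta'_i)
        remaining *= 1.0 - frac                   # what is left of the stick
        locations.append(random.gauss(0.0, 1.0))  # theta_k ~ H (illustrative)
    return weights, locations

w, theta = stick_breaking(alpha=2.0)
print(sum(w))   # close to 1; the truncated tail carries the tiny remainder
```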
The smaller $\alpha$ is, the less of the stick will be left for subsequent values (on average), yielding more concentrated distributions. The stick-breaking process is similar to the construction where one samples sequentially from marginal beta distributions in order to generate a sample from a Dirichlet distribution.[4]

Yet another way to visualize the Dirichlet process and Chinese restaurant process is as a modified Pólya urn scheme, sometimes called the Blackwell–MacQueen sampling scheme. Imagine that we start with an urn filled with $\alpha$ black balls. Then we proceed as follows: at each step we draw a ball from the urn; if the drawn ball is black, we generate a new (previously unseen) colour, label a new ball with this colour, and return both balls to the urn; if the drawn ball is coloured, we add a new ball of the same colour and return both balls to the urn.

The resulting distribution over colours is the same as the distribution over tables in the Chinese restaurant process. Furthermore, when we draw a black ball, if rather than generating a new colour we instead pick a random value from a base distribution $H$ and use that value to label the new ball, the resulting distribution over labels will be the same as the distribution over the values in a Dirichlet process.

The Dirichlet process can be used as a prior distribution to estimate the probability distribution that generates the data. In this section, we consider the model

$P \sim \operatorname{DP}(H, \alpha), \qquad X_1, \dots, X_n \mid P \overset{\text{i.i.d.}}{\sim} P.$

The Dirichlet process distribution satisfies prior conjugacy, posterior consistency, and the Bernstein–von Mises theorem.[5]

In this model, the posterior distribution is again a Dirichlet process. This means that the Dirichlet process is a conjugate prior for this model. The posterior distribution is given by

$P \mid X_1, \dots, X_n \sim \operatorname{DP}\left(\frac{\alpha}{\alpha + n} H + \frac{n}{\alpha + n} \mathbb{P}_n,\ \alpha + n\right),$

where $\mathbb{P}_n$ is defined below.

If we take the frequentist view of probability, we believe there is a true probability distribution $P_0$ that generated the data. Then it turns out that the Dirichlet process is consistent in the weak topology, which means that for every weak neighbourhood $U$ of $P_0$, the posterior probability of $U$ converges to $1$.

In order to interpret the credible sets as confidence sets, a Bernstein–von Mises theorem is needed. In the case of the Dirichlet process we compare the posterior distribution with the empirical process $\mathbb{P}_n = \frac{1}{n} \sum_{i=1}^n \delta_{X_i}$. Suppose $\mathcal{F}$ is a $P_0$-Donsker class, i.e. $\sqrt{n}(\mathbb{P}_n - P_0) \rightsquigarrow G_{P_0}$ for some Brownian bridge $G_{P_0}$. Suppose also that there exists a function $F$ such that $F(x) \ge \sup_{f \in \mathcal{F}} f(x)$ and $\int F^2 \, \mathrm{d}H < \infty$; then, $P_0$-almost surely, $\sqrt{n}\,(P \mid X_1, \dots, X_n - \mathbb{P}_n) \rightsquigarrow G_{P_0}$. This implies that the credible sets one constructs are asymptotic confidence sets, and the Bayesian inference based on the Dirichlet process is asymptotically also valid frequentist inference.

To understand what Dirichlet processes are and the problem they solve we consider the example of data clustering. It is a common situation that data points are assumed to be distributed in a hierarchical fashion where each data point belongs to a (randomly chosen) cluster and the members of a cluster are further distributed randomly within that cluster. For example, we might be interested in how people will vote on a number of questions in an upcoming election.
A reasonable model for this situation might be to classify each voter as a liberal, a conservative or a moderate and then model the event that a voter says "Yes" to any particular question as aBernoulli random variablewith the probability dependent on which political cluster they belong to. By looking at how votes were cast in previous years on similar pieces of legislation one could fit a predictive model using a simple clustering algorithm such ask-means. That algorithm, however, requires knowing in advance the number of clusters that generated the data. In many situations, it is not possible to determine this ahead of time, and even when we can reasonably assume a number of clusters we would still like to be able to check this assumption. For example, in the voting example above the division into liberal, conservative and moderate might not be finely tuned enough; attributes such as a religion, class or race could also be critical for modelling voter behaviour, resulting in more clusters in the model. As another example, we might be interested in modelling the velocities of galaxies using a simple model assuming that the velocities are clustered, for instance by assuming each velocity is distributed according to thenormal distributionvi∼N(μk,σ2){\displaystyle v_{i}\sim N(\mu _{k},\sigma ^{2})}, where thei{\displaystyle i}th observation belongs to thek{\displaystyle k}th cluster of galaxies with common expected velocity. In this case it is far from obvious how to determine a priori how many clusters (of common velocities) there should be and any model for this would be highly suspect and should be checked against the data. By using a Dirichlet process prior for the distribution of cluster means we circumvent the need to explicitly specify ahead of time how many clusters there are, although the concentration parameter still controls it implicitly. We consider this example in more detail. A first naive model is to presuppose that there areK{\displaystyle K}clusters of normally distributed velocities with common known fixedvarianceσ2{\displaystyle \sigma ^{2}}. Denoting the event that thei{\displaystyle i}th observation is in thek{\displaystyle k}th cluster aszi=k{\displaystyle z_{i}=k}we can write this model as: That is, we assume that the data belongs toK{\displaystyle K}distinct clusters with meansμk{\displaystyle \mu _{k}}and thatπk{\displaystyle \pi _{k}}is the (unknown) prior probability of a data point belonging to thek{\displaystyle k}th cluster. We assume that we have no initial information distinguishing the clusters, which is captured by the symmetric priorDir⁡(α/K⋅1K){\displaystyle \operatorname {Dir} \left(\alpha /K\cdot \mathbf {1} _{K}\right)}. HereDir{\displaystyle \operatorname {Dir} }denotes theDirichlet distributionand1K{\displaystyle \mathbf {1} _{K}}denotes a vector of lengthK{\displaystyle K}where each element is 1. We further assign independent and identical prior distributionsH(λ){\displaystyle H(\lambda )}to each of the cluster means, whereH{\displaystyle H}may be any parametric distribution with parameters denoted asλ{\displaystyle \lambda }. The hyper-parametersα{\displaystyle \alpha }andλ{\displaystyle \lambda }are taken to be known fixed constants, chosen to reflect our prior beliefs about the system. 
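To make the naive finite model concrete, the following sketch generates synthetic observations from it. It is an illustration only; the choice of a normal base distribution H(λ) with mean 0 and standard deviation 10, and the particular values of n, K, α and σ, are assumptions rather than values taken from the text.

```python
import numpy as np

def sample_finite_mixture(n, K, alpha, sigma, rng=None):
    """Generate n observations from the K-cluster model described above."""
    rng = np.random.default_rng(rng)
    pi = rng.dirichlet(np.full(K, alpha / K))        # pi ~ Dir(alpha/K * 1_K), symmetric
    mu = rng.normal(loc=0.0, scale=10.0, size=K)     # mu_k ~ H(lambda), assumed normal here
    z = rng.choice(K, size=n, p=pi)                  # cluster assignments z_i
    v = rng.normal(loc=mu[z], scale=sigma)           # v_i ~ N(mu_{z_i}, sigma^2)
    return v, z, mu, pi

v, z, mu, pi = sample_finite_mixture(n=500, K=3, alpha=1.0, sigma=1.0, rng=0)
```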
To understand the connection to Dirichlet process priors we rewrite this model in an equivalent but more suggestive form: Instead of imagining that each data point is first assigned a cluster and then drawn from the distribution associated with that cluster we now think of each observation being associated with parameterμ~i{\displaystyle {\tilde {\mu }}_{i}}drawn from some discrete distributionG{\displaystyle G}with support on theK{\displaystyle K}means. That is, we are now treating theμ~i{\displaystyle {\tilde {\mu }}_{i}}as being drawn from the random distributionG{\displaystyle G}and our prior information is incorporated into the model by the distribution over distributionsG{\displaystyle G}. We would now like to extend this model to work without pre-specifying a fixed number of clustersK{\displaystyle K}. Mathematically, this means we would like to select a random prior distributionG(μ~i)=∑k=1∞πkδμk(μ~i){\displaystyle G({\tilde {\mu }}_{i})=\sum _{k=1}^{\infty }\pi _{k}\delta _{\mu _{k}}({\tilde {\mu }}_{i})}where the values of the cluster meansμk{\displaystyle \mu _{k}}are again independently distributed according toH(λ){\displaystyle H\left(\lambda \right)}and the distribution overπk{\displaystyle \pi _{k}}is symmetric over the infinite set of clusters. This is exactly what is accomplished by the model: With this in hand we can better understand the computational merits of the Dirichlet process. Suppose that we wanted to drawn{\displaystyle n}observations from the naive model with exactlyK{\displaystyle K}clusters. A simple algorithm for doing this would be to drawK{\displaystyle K}values ofμk{\displaystyle \mu _{k}}fromH(λ){\displaystyle H(\lambda )}, a distributionπ{\displaystyle \pi }fromDir⁡(α/K⋅1K){\displaystyle \operatorname {Dir} \left(\alpha /K\cdot \mathbf {1} _{K}\right)}and then for each observation independently sample the clusterk{\displaystyle k}with probabilityπk{\displaystyle \pi _{k}}and the value of the observation according toN(μk,σ2){\displaystyle N\left(\mu _{k},\sigma ^{2}\right)}. It is easy to see that this algorithm does not work in the case where we allow infinitely many clusters, because this would require sampling an infinite-dimensional parameterπ{\displaystyle {\boldsymbol {\pi }}}. However, it is still possible to sample observationsvi{\displaystyle v_{i}}. One can, for example, use the Chinese restaurant representation described above and calculate the probabilities of the occupied clusters and of a new cluster being created. This avoids having to explicitly specifyπ{\displaystyle {\boldsymbol {\pi }}}. Other solutions are based on a truncation of clusters: a (high) upper bound on the true number of clusters is introduced, and cluster indices above this bound are treated as a single cluster. Fitting the model described above based on observed dataD{\displaystyle D}means finding theposterior distributionp(π,μ∣D){\displaystyle p\left({\boldsymbol {\pi }},{\boldsymbol {\mu }}\mid D\right)}over cluster probabilities and their associated means. In the infinite-dimensional case it is obviously impossible to write down the posterior explicitly. It is, however, possible to draw samples from this posterior using a modifiedGibbs sampler.[6]This is the critical fact that makes the Dirichlet process prior useful forinference. Dirichlet processes are frequently used inBayesiannonparametric statistics. "Nonparametric" here does not mean a parameter-less model, but rather a model whose representation grows as more data are observed.
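A sketch of the sequential sampling scheme mentioned above, using the Chinese restaurant representation so that the infinite-dimensional weight vector π is never instantiated. The normal base distribution and all constants are again illustrative assumptions.

```python
import numpy as np

def sample_dp_mixture_crp(n, alpha, sigma, base_sampler, rng=None):
    """Draw n observations from a DP mixture via the Chinese restaurant process.

    Observation i joins an existing cluster k with probability n_k / (alpha + i)
    and opens a new cluster with probability alpha / (alpha + i); each new
    cluster receives a mean drawn from the base distribution H.
    """
    rng = np.random.default_rng(rng)
    means, counts, z, v = [], [], [], []
    for i in range(n):
        probs = np.array(counts + [alpha], dtype=float)
        probs /= probs.sum()
        k = rng.choice(len(probs), p=probs)
        if k == len(means):                      # the "new table" slot was chosen
            means.append(base_sampler(rng))
            counts.append(0)
        counts[k] += 1
        z.append(k)
        v.append(rng.normal(means[k], sigma))
    return np.array(v), np.array(z), np.array(means)

v, z, means = sample_dp_mixture_crp(
    n=500, alpha=2.0, sigma=1.0, base_sampler=lambda rng: rng.normal(0.0, 10.0), rng=0)
print(len(means), "clusters were created")
```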
Bayesian nonparametric models have gained considerable popularity in the field ofmachine learningbecause of the above-mentioned flexibility, especially inunsupervised learning. In a Bayesian nonparametric model, the prior and posterior distributions are not parametric distributions, but stochastic processes.[7]The fact that the Dirichlet distribution is a probability distribution on thesimplexof sets of non-negative numbers that sum to one makes it a good candidate to model distributions over distributions or distributions over functions. Additionally, the nonparametric nature of this model makes it an ideal candidate for clustering problems where the distinct number of clusters is unknown beforehand. In addition, the Dirichlet process has also been used for developing a mixture of expert models, in the context of supervised learning algorithms (regression or classification settings). For instance, mixtures of Gaussian process experts, where the number of required experts must be inferred from the data.[8][9] As draws from a Dirichlet process are discrete, an important use is as aprior probabilityininfinite mixture models. In this case,S{\displaystyle S}is the parametric set of component distributions. The generative process is therefore that a sample is drawn from a Dirichlet process, and for each data point, in turn, a value is drawn from this sample distribution and used as the component distribution for that data point. The fact that there is no limit to the number of distinct components which may be generated makes this kind of model appropriate for the case when the number of mixture components is not well-defined in advance. For example, the infinite mixture of Gaussians model,[10]as well as associated mixture regression models, e.g.[11] The infinite nature of these models also lends them tonatural language processingapplications, where it is often desirable to treat the vocabulary as an infinite, discrete set. The Dirichlet Process can also be used for nonparametric hypothesis testing, i.e. to develop Bayesian nonparametric versions of the classical nonparametric hypothesis tests, e.g.sign test,Wilcoxon rank-sum test,Wilcoxon signed-rank test, etc. For instance, Bayesian nonparametric versions of the Wilcoxon rank-sum test and the Wilcoxon signed-rank test have been developed by using theimprecise Dirichlet process, a prior ignorance Dirichlet process.[citation needed]
https://en.wikipedia.org/wiki/Dirichlet_process
ICSA Labs (International Computer Security Association) began as NCSA (National Computer Security Association). Its mission was to increase awareness of the need for computer security and to provide education about various security products and technologies. In its early days, NCSA focused almost solely on the certification of anti-virus software. Using the consortia model, NCSA worked together with anti-virus software vendors to develop one of the first anti-virus software certification schemes. Over the following decade, the organization added certification programs for other security-related products and changed its name to ICSA Labs. Operating as an independent division of Verizon, ICSA Labs provided resources for research, intelligence, certification and testing of products, including anti-virus, firewall, IPsec VPN, cryptography, SSL VPN, network IPS, anti-spyware and PC firewall products. ICSA Labs temporarily ceased operation in April 2017, restoring operations a year later. ICSA Labs ceased operation in 2022, following its closure by its parent company Verizon. This in turn heralded the end of The WildList, a curated collection of computer virus samples, which ICSA Labs managed and distributed within the security industry for testing purposes.
https://en.wikipedia.org/wiki/International_Computer_Security_Association
Inmathematics, arational numberis anumberthat can be expressed as thequotientorfraction⁠pq{\displaystyle {\tfrac {p}{q}}}⁠of twointegers, anumeratorpand a non-zerodenominatorq.[1]For example,⁠37{\displaystyle {\tfrac {3}{7}}}⁠is a rational number, as is every integer (for example,−5=−51{\displaystyle -5={\tfrac {-5}{1}}}).Thesetof all rational numbers, also referred to as "the rationals",[2]thefield of rationals[3]or thefield of rational numbersis usually denoted by boldfaceQ, orblackboard bold⁠Q.{\displaystyle \mathbb {Q} .}⁠ A rational number is areal number. The real numbers that are rational are those whosedecimal expansioneither terminates after a finite number ofdigits(example:3/4 = 0.75), or eventually begins torepeatthe same finitesequenceof digits over and over (example:9/44 = 0.20454545...).[4]This statement is true not only inbase 10, but also in every other integerbase, such as thebinaryandhexadecimalones (seeRepeating decimal § Extension to other bases). Areal numberthat is not rational is calledirrational.[5]Irrational numbers include thesquare root of 2(⁠2{\displaystyle {\sqrt {2}}}⁠),π,e, and thegolden ratio(φ). Since the set of rational numbers iscountable, and the set of real numbers isuncountable,almost allreal numbers are irrational.[1] Rational numbers can beformallydefined asequivalence classesof pairs of integers(p, q)withq≠ 0, using theequivalence relationdefined as follows: The fraction⁠pq{\displaystyle {\tfrac {p}{q}}}⁠then denotes the equivalence class of(p, q).[6] Rational numbers together withadditionandmultiplicationform afieldwhich contains theintegers, and is contained in any field containing the integers. In other words, the field of rational numbers is aprime field, and a field hascharacteristic zeroif and only if it contains the rational numbers as a subfield. Finiteextensionsof⁠Q{\displaystyle \mathbb {Q} }⁠are calledalgebraic number fields, and thealgebraic closureof⁠Q{\displaystyle \mathbb {Q} }⁠is the field ofalgebraic numbers.[7] Inmathematical analysis, the rational numbers form adense subsetof the real numbers. The real numbers can be constructed from the rational numbers bycompletion, usingCauchy sequences,Dedekind cuts, or infinitedecimals(seeConstruction of the real numbers). In mathematics, "rational" is often used as a noun abbreviating "rational number". The adjectiverationalsometimes means that thecoefficientsare rational numbers. For example, arational pointis a point with rationalcoordinates(i.e., a point whose coordinates are rational numbers); arational matrixis amatrixof rational numbers; arational polynomialmay be a polynomial with rational coefficients, although the term "polynomial over the rationals" is generally preferred, to avoid confusion between "rational expression" and "rational function" (apolynomialis a rational expression and defines a rational function, even if its coefficients are not rational numbers). However, arational curveis nota curve defined over the rationals, but a curve which can be parameterized by rational functions. Although nowadaysrational numbersare defined in terms ofratios, the termrationalis not aderivationofratio. 
On the contrary, it isratiothat is derived fromrational: the first use ofratiowith its modern meaning was attested in English about 1660,[8]while the use ofrationalfor qualifying numbers appeared almost a century earlier, in 1570.[9]This meaning ofrationalcame from the mathematical meaning ofirrational, which was first used in 1551, and it was used in "translations of Euclid (following his peculiar use ofἄλογος)".[10][11] This unusual history originated in the fact thatancient Greeks"avoided heresy by forbidding themselves from thinking of those [irrational] lengths as numbers".[12]So such lengths wereirrational, in the sense ofillogical, that is "not to be spoken about" (ἄλογοςin Greek).[13] Every rational number may be expressed in a unique way as anirreducible fraction⁠ab,{\displaystyle {\tfrac {a}{b}},}⁠whereaandbarecoprime integersandb> 0. This is often called thecanonical formof the rational number. Starting from a rational number⁠ab,{\displaystyle {\tfrac {a}{b}},}⁠its canonical form may be obtained by dividingaandbby theirgreatest common divisor, and, ifb< 0, changing the sign of the resulting numerator and denominator. Any integerncan be expressed as the rational number⁠n1,{\displaystyle {\tfrac {n}{1}},}⁠which is its canonical form as a rational number. If both fractions are in canonical form, then: If both denominators are positive (particularly if both fractions are in canonical form): On the other hand, if either denominator is negative, then each fraction with a negative denominator must first be converted into an equivalent form with a positive denominator—by changing the signs of both its numerator and denominator.[6] Two fractions are added as follows: If both fractions are in canonical form, the result is in canonical form if and only ifb, darecoprime integers.[6][14] If both fractions are in canonical form, the result is in canonical form if and only ifb, darecoprime integers.[14] The rule for multiplication is: where the result may be areducible fraction—even if both original fractions are in canonical form.[6][14] Every rational number⁠ab{\displaystyle {\tfrac {a}{b}}}⁠has anadditive inverse, often called itsopposite, If⁠ab{\displaystyle {\tfrac {a}{b}}}⁠is in canonical form, the same is true for its opposite. A nonzero rational number⁠ab{\displaystyle {\tfrac {a}{b}}}⁠has amultiplicative inverse, also called itsreciprocal, If⁠ab{\displaystyle {\tfrac {a}{b}}}⁠is in canonical form, then the canonical form of its reciprocal is either⁠ba{\displaystyle {\tfrac {b}{a}}}⁠or⁠−b−a,{\displaystyle {\tfrac {-b}{-a}},}⁠depending on the sign ofa. Ifb, c, dare nonzero, the division rule is Thus, dividing⁠ab{\displaystyle {\tfrac {a}{b}}}⁠by⁠cd{\displaystyle {\tfrac {c}{d}}}⁠is equivalent to multiplying⁠ab{\displaystyle {\tfrac {a}{b}}}⁠by thereciprocalof⁠cd:{\displaystyle {\tfrac {c}{d}}:}⁠[14] Ifnis a non-negative integer, then The result is in canonical form if the same is true for⁠ab.{\displaystyle {\tfrac {a}{b}}.}⁠In particular, Ifa≠ 0, then If⁠ab{\displaystyle {\tfrac {a}{b}}}⁠is in canonical form, the canonical form of the result is⁠bnan{\displaystyle {\tfrac {b^{n}}{a^{n}}}}⁠ifa> 0ornis even. Otherwise, the canonical form of the result is⁠−bn−an.{\displaystyle {\tfrac {-b^{n}}{-a^{n}}}.}⁠ Afinite continued fractionis an expression such as whereanare integers. Every rational number⁠ab{\displaystyle {\tfrac {a}{b}}}⁠can be represented as a finite continued fraction, whosecoefficientsancan be determined by applying theEuclidean algorithmto(a, b). 
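Both the canonical form and the continued-fraction coefficients can be computed with a few lines of integer arithmetic. The following sketch is illustrative only; the function names are not standard.

```python
from math import gcd

def canonical_form(a, b):
    """Reduce a/b to canonical form: coprime numerator and denominator, denominator > 0."""
    if b == 0:
        raise ValueError("denominator must be non-zero")
    g = gcd(a, b)
    a, b = a // g, b // g
    if b < 0:                      # make the denominator positive
        a, b = -a, -b
    return a, b

def continued_fraction(a, b):
    """Coefficients [a0; a1, a2, ...] of a/b via the Euclidean algorithm."""
    coeffs = []
    while b != 0:
        q, r = divmod(a, b)        # a = q*b + r
        coeffs.append(q)
        a, b = b, r
    return coeffs

print(canonical_form(-12, -8))       # (3, 2)
print(continued_fraction(649, 200))  # [3, 4, 12, 4], since 649/200 = 3 + 1/(4 + 1/(12 + 1/4))
```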
Equivalent fractions such as⁠12{\displaystyle {\tfrac {1}{2}}}⁠,⁠24{\displaystyle {\tfrac {2}{4}}}⁠and⁠36{\displaystyle {\tfrac {3}{6}}}⁠are different ways to represent the same rational value. The rational numbers may be built asequivalence classesofordered pairsofintegers.[6][14] More precisely, let⁠(Z×(Z∖{0})){\displaystyle (\mathbb {Z} \times (\mathbb {Z} \setminus \{0\}))}⁠be the set of the pairs(m, n)of integers such thatn≠ 0. Anequivalence relationis defined on this set by(m1,n1)∼(m2,n2)⟺m1n2=m2n1.{\displaystyle (m_{1},n_{1})\sim (m_{2},n_{2})\iff m_{1}n_{2}=m_{2}n_{1}.}Addition and multiplication can be defined by the following rules:(m1,n1)+(m2,n2)≡(m1n2+n1m2,n1n2){\displaystyle (m_{1},n_{1})+(m_{2},n_{2})\equiv (m_{1}n_{2}+n_{1}m_{2},n_{1}n_{2})}and(m1,n1)×(m2,n2)≡(m1m2,n1n2).{\displaystyle (m_{1},n_{1})\times (m_{2},n_{2})\equiv (m_{1}m_{2},n_{1}n_{2}).}This equivalence relation is acongruence relation, which means that it is compatible with the addition and multiplication defined above; the set of rational numbers⁠Q{\displaystyle \mathbb {Q} }⁠is then defined as thequotient setby this equivalence relation,⁠(Z×(Z∖{0}))/∼,{\displaystyle (\mathbb {Z} \times (\mathbb {Z} \backslash \{0\}))/\sim ,}⁠equipped with the addition and the multiplication induced by the above operations. (This construction can be carried out with anyintegral domainand produces itsfield of fractions.)[6] The equivalence class of a pair(m, n)is denoted⁠mn.{\displaystyle {\tfrac {m}{n}}.}⁠Two pairs(m1,n1)and(m2,n2)belong to the same equivalence class (that is, are equivalent) if and only ifm1n2=m2n1.{\displaystyle m_{1}n_{2}=m_{2}n_{1}.}This means thatm1n1=m2n2{\displaystyle {\tfrac {m_{1}}{n_{1}}}={\tfrac {m_{2}}{n_{2}}}}if and only ifm1n2=m2n1.{\displaystyle m_{1}n_{2}=m_{2}n_{1}.}[6][14] Every equivalence class⁠mn{\displaystyle {\tfrac {m}{n}}}⁠may be represented by infinitely many pairs, sincemn=2m2n=3m3n=⋯.{\displaystyle {\tfrac {m}{n}}={\tfrac {2m}{2n}}={\tfrac {3m}{3n}}=\cdots .}Each equivalence class contains a uniquecanonical representative element. The canonical representative is the unique pair(m, n)in the equivalence class such thatmandnarecoprime, andn> 0. It is called therepresentation in lowest termsof the rational number. The integers may be considered to be rational numbers by identifying the integernwith the rational number⁠n1.{\displaystyle {\tfrac {n}{1}}.}⁠ Atotal ordermay be defined on the rational numbers, that extends the natural order of the integers. One hasab≤cd{\displaystyle {\tfrac {a}{b}}\leq {\tfrac {c}{d}}}if and only ifad≤bc{\displaystyle ad\leq bc}, provided the denominatorsbanddare positive. The set⁠Q{\displaystyle \mathbb {Q} }⁠of all rational numbers, together with the addition and multiplication operations shown above, forms afield.[6] ⁠Q{\displaystyle \mathbb {Q} }⁠has nofield automorphismother than the identity. (A field automorphism must fix 0 and 1; as it must fix the sum and the difference of two fixed elements, it must fix every integer; as it must fix the quotient of two fixed elements, it must fix every rational number, and is thus the identity.) ⁠Q{\displaystyle \mathbb {Q} }⁠is aprime field, which is a field that has no subfield other than itself.[15]The rationals are the smallest field withcharacteristiczero. Every field of characteristic zero contains a unique subfield isomorphic to⁠Q.{\displaystyle \mathbb {Q} .}⁠ With the order defined above,⁠Q{\displaystyle \mathbb {Q} }⁠is anordered field[14]that has no subfield other than itself, and is the smallest ordered field, in the sense that every ordered field contains a unique subfieldisomorphicto⁠Q.{\displaystyle \mathbb {Q} .}⁠ ⁠Q{\displaystyle \mathbb {Q} }⁠is thefield of fractionsof theintegers⁠Z.{\displaystyle \mathbb {Z} .}⁠[16]Thealgebraic closureof⁠Q,{\displaystyle \mathbb {Q} ,}⁠i.e. the field of roots of rational polynomials, is the field ofalgebraic numbers. The rationals are adensely orderedset: between any two rationals, there sits another one, and, therefore, infinitely many other ones.[6]For example, for any two fractionsab<cd{\displaystyle {\tfrac {a}{b}}<{\tfrac {c}{d}}}(whereb,d{\displaystyle b,d}are positive), we haveab<a+cb+d<cd.{\displaystyle {\tfrac {a}{b}}<{\tfrac {a+c}{b+d}}<{\tfrac {c}{d}}.}Anytotally orderedset which is countable, dense (in the above sense), and has no least or greatest element isorder isomorphicto the rational numbers.[17] The set of positive rational numbers iscountable, as is illustrated in the figure.
More precisely, one can sort the fractions by increasing values of the sum of the numerator and the denominator, and, for equal sums, by increasing numerator or denominator. This produces asequenceof fractions, from which one can remove the reducible fractions (in red on the figure), for getting a sequence that contains each rational number exactly once. This establishes a bijection between the rational numbers and the natural numbers, which maps each rational number to its rank in the sequence. A similar method can be used for numbering all rational numbers (positive and negative). As the set of all rational numbers is countable, and the set of all real numbers (as well as the set of irrational numbers) is uncountable, the set of rational numbers is anull set, that is,almost allreal numbers are irrational, in the sense ofLebesgue measure.[18] The rationals are adense subsetof thereal numbers; every real number has rational numbers arbitrarily close to it.[6]A related property is that rational numbers are the only numbers withfiniteexpansions asregular continued fractions.[19] In the usualtopologyof the real numbers, the rationals are neither anopen setnor aclosed set.[20] By virtue of their order, the rationals carry anorder topology. The rational numbers, as a subspace of the real numbers, also carry asubspace topology. The rational numbers form ametric spaceby using theabsolute differencemetricd(x,y)=|x−y|,{\displaystyle d(x,y)=|x-y|,}and this yields a third topology on⁠Q.{\displaystyle \mathbb {Q} .}⁠All three topologies coincide and turn the rationals into atopological field. The rational numbers are an important example of a space which is notlocally compact. The rationals are characterized topologically as the uniquecountablemetrizable spacewithoutisolated points. The space is alsototally disconnected. The rational numbers do not form acomplete metric space, and thereal numbersare the completion of⁠Q{\displaystyle \mathbb {Q} }⁠under the metricd(x,y)=|x−y|{\displaystyle d(x,y)=|x-y|}above.[14] In addition to the absolute value metric mentioned above, there are other metrics which turn⁠Q{\displaystyle \mathbb {Q} }⁠into a topological field: Letpbe aprime numberand for any non-zero integera, let|a|p=p−n,{\displaystyle |a|_{p}=p^{-n},}wherepnis the highest power ofpdividinga. In addition set|0|p=0.{\displaystyle |0|_{p}=0.}For any rational number⁠ab,{\displaystyle {\frac {a}{b}},}⁠we set Then defines ametricon⁠Q.{\displaystyle \mathbb {Q} .}⁠[21] The metric space⁠(Q,dp){\displaystyle (\mathbb {Q} ,d_{p})}⁠is not complete, and its completion is thep-adic number field⁠Qp.{\displaystyle \mathbb {Q} _{p}.}⁠Ostrowski's theoremstates that any non-trivialabsolute valueon the rational numbers⁠Q{\displaystyle \mathbb {Q} }⁠is equivalent to either the usual real absolute value or ap-adicabsolute value.
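The p-adic absolute value just defined can be written down directly with Python's Fraction type; the helper names in this illustrative sketch are our own.

```python
from fractions import Fraction

def p_valuation(n, p):
    """Largest exponent v such that p**v divides the non-zero integer n."""
    v = 0
    while n % p == 0:
        n //= p
        v += 1
    return v

def p_adic_abs(x, p):
    """|x|_p for a rational x, with |0|_p = 0 and |a/b|_p = |a|_p / |b|_p."""
    x = Fraction(x)
    if x == 0:
        return Fraction(0)
    v = p_valuation(x.numerator, p) - p_valuation(x.denominator, p)
    return Fraction(1, p) ** v

def d_p(x, y, p):
    """The p-adic metric d_p(x, y) = |x - y|_p."""
    return p_adic_abs(Fraction(x) - Fraction(y), p)

print(p_adic_abs(Fraction(63, 550), 5))   # 25, since 5**2 divides the denominator
print(d_p(1, 26, 5))                      # 1/25, since 1 - 26 = -25 = -(5**2)
```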
https://en.wikipedia.org/wiki/Rational_numbers
This is a list ofnumerical libraries, which arelibrariesused insoftware developmentfor performingnumericalcalculations. It is not a complete listing but is instead a list of numerical libraries with articles on Wikipedia, with few exceptions. The choice of a typical library depends on a range of requirements such as: desired features (e.g. large dimensional linear algebra, parallel computation, partial differential equations), licensing, readability of API, portability or platform/compiler dependence (e.g. Linux, Windows, Visual C++, GCC), performance, ease-of-use, continued support from developers, standard compliance, specialized optimization in code for specific application scenarios or even the size of the code-base to be installed.
https://en.wikipedia.org/wiki/List_of_numerical_libraries
TheWhittaker–Shannon interpolation formulaorsinc interpolationis a method to construct acontinuous-timebandlimitedfunction from a sequence of real numbers. The formula dates back to the works ofE. Borelin 1898, andE. T. Whittakerin 1915, and was cited from works ofJ. M. Whittakerin 1935, and in the formulation of theNyquist–Shannon sampling theorembyClaude Shannonin 1949. It is also commonly calledShannon's interpolation formulaandWhittaker's interpolation formula. E. T. Whittaker, who published it in 1915, called it theCardinal series. Given a sequence of real numbers,x[n]=x(nT), the continuous function (where "sinc" denotes thenormalized sinc function) has aFourier transform,X(f), whose non-zero values are confined to the region :|f|≤12T{\displaystyle |f|\leq {\frac {1}{2T}}}. When the parameterThas units of seconds, thebandlimit, 1/(2T), has units of cycles/sec (hertz). When thex[n] sequence represents time samples, at intervalT, of a continuous function, the quantityfs= 1/Tis known as thesample rate, andfs/2 is the correspondingNyquist frequency. When the sampled function has a bandlimit,B, less than the Nyquist frequency,x(t) is aperfect reconstructionof the original function. (SeeSampling theorem.) Otherwise, the frequency components above the Nyquist frequency "fold" into the sub-Nyquist region ofX(f), resulting in distortion. (SeeAliasing.) The interpolation formula is derived in theNyquist–Shannon sampling theoremarticle, which points out that it can also be expressed as theconvolutionof aninfinite impulse trainwith asinc function: This is equivalent to filtering the impulse train with an ideal (brick-wall)low-pass filterwith gain of 1 (or 0 dB) in the passband. If the sample rate is sufficiently high, this means that the baseband image (the original signal before sampling) is passed unchanged and the other images are removed by the brick-wall filter. The interpolation formula always convergesabsolutelyandlocally uniformlyas long as By theHölder inequalitythis is satisfied if the sequence(x[n])n∈Z{\displaystyle (x[n])_{n\in \mathbb {Z} }}belongs to any of theℓp(Z,C){\displaystyle \ell ^{p}(\mathbb {Z} ,\mathbb {C} )}spaceswith 1 ≤p< ∞, that is This condition is sufficient, but not necessary. For example, the sum will generally converge if the sample sequence comes from sampling almost anystationary process, in which case the sample sequence is not square summable, and is not in anyℓp(Z,C){\displaystyle \ell ^{p}(\mathbb {Z} ,\mathbb {C} )}space. Ifx[n] is an infinite sequence of samples of a sample function of a wide-sensestationary process, then it is not a member of anyℓp{\displaystyle \ell ^{p}}orLpspace, with probability 1; that is, the infinite sum of samples raised to a powerpdoes not have a finite expected value. Nevertheless, the interpolation formula converges with probability 1. Convergence can readily be shown by computing the variances of truncated terms of the summation, and showing that the variance can be made arbitrarily small by choosing a sufficient number of terms. If the process mean is nonzero, then pairs of terms need to be considered to also show that the expected value of the truncated terms converges to zero. Since a random process does not have a Fourier transform, the condition under which the sum converges to the original function must also be different. A stationary random process does have anautocorrelation functionand hence aspectral densityaccording to theWiener–Khinchin theorem. 
A suitable condition for convergence to a sample function from the process is that the spectral density of the process be zero at all frequencies equal to and above half the sample rate.
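As a numerical illustration of the interpolation formula (a sketch only; the test signal, sample rate and evaluation grid are arbitrary choices made for the example), samples of a bandlimited signal can be reconstructed as a sinc-weighted sum:

```python
import numpy as np

def sinc_interpolate(samples, T, t):
    """Whittaker–Shannon reconstruction x(t) = sum_n x[n] * sinc((t - n*T)/T).

    `samples` holds x[n] = x(nT) for n = 0..N-1, and `t` is an array of times.
    np.sinc is the normalized sinc, sin(pi*x)/(pi*x), as required here.
    """
    n = np.arange(len(samples))
    # Broadcasting: one sinc term per (evaluation time, sample index) pair.
    return np.sum(samples * np.sinc((t[:, None] - n * T) / T), axis=1)

T = 0.1                                    # sampling interval, so fs = 10 Hz
n = np.arange(64)
x_n = np.cos(2 * np.pi * 1.5 * n * T)      # a 1.5 Hz tone, well below fs/2 = 5 Hz
t = np.linspace(1.0, 5.0, 401)             # evaluate away from the edges of the finite record
x_hat = sinc_interpolate(x_n, T, t)
print(np.max(np.abs(x_hat - np.cos(2 * np.pi * 1.5 * t))))  # small truncation error
```

Because only finitely many samples are used, the reconstruction is approximate; the error shrinks as more samples are included.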
https://en.wikipedia.org/wiki/Whittaker%E2%80%93Shannon_interpolation_formula
Incombinatorics,stars and bars(also called "sticks and stones",[1]"balls and bars",[2]and "dots and dividers"[3]) is a graphical aid for deriving certaincombinatorialtheorems. It can be used to solve a variety ofcounting problems, such as how many ways there are to putnindistinguishable balls intokdistinguishable bins.[4]The solution to this particular problem is given by the binomial coefficient(n+k−1k−1){\displaystyle {\tbinom {n+k-1}{k-1}}}, which is the number of subsets of sizek− 1that can be formed from a set of sizen+k− 1. If, for example, there are two balls and three bins, then the number of ways of placing the balls is(2+3−13−1)=(42)=6{\displaystyle {\tbinom {2+3-1}{3-1}}={\tbinom {4}{2}}=6}. The table shows the six possible ways of distributing the two balls, the strings of stars and bars that represent them (with stars indicating balls and bars separating bins from one another), and the subsets that correspond to the strings. As two bars are needed to separate three bins and there are two balls, each string contains two bars and two stars. Each subset indicates which of the four symbols in the corresponding string is a bar. The stars and bars method is often introduced specifically to prove the following two theorems of elementary combinatorics concerning the number of solutions to an equation. For any pair ofpositive integersnandk, the number ofk-tuplesofpositiveintegers whose sum isnis equal to the number of(k− 1)-element subsets of a set withn− 1elements. For example, ifn= 10andk= 4, the theorem gives the number of solutions tox1+x2+x3+x4= 10(withx1,x2,x3,x4> 0) as thebinomial coefficient where(n−1k−1){\displaystyle {\tbinom {n-1}{k-1}}}is the number ofcombinationsofn− 1elements takenk− 1at a time. This corresponds tocompositionsof an integer. For any pair of positive integersnandk, the number ofk-tuplesofnon-negativeintegers whose sum isnis equal to the number ofmultisetsof sizek− 1taken from a set of sizen+ 1, or equivalently, the number of multisets of sizentaken from a set of sizek, and is given by For example, ifn= 10andk= 4, the theorem gives the number of solutions tox1+x2+x3+x4= 10(withx1,x2,x3,x4≥0{\displaystyle \geq 0}) as where themultiset coefficient((kn)){\displaystyle \left(\!\!{\binom {k}{n}}\!\!\right)}is the number of multisets of sizen, with elements taken from a set of sizek. This corresponds toweak compositionsof an integer. Withkfixed, the numbers forn= 0, 1, 2, 3, ...are those in the(k− 1)st diagonal ofPascal's triangle. For example, whenk= 3thenth number is the(n+ 1)sttriangular number, which falls on the second diagonal, 1, 3, 6, 10, .... The problem of enumeratingk-tuples whose sum isnis equivalent to the problem of counting configurations of the following kind: let there benobjects to be placed intokbins, so that all bins contain at least one object. The bins are distinguished (say they are numbered 1 tok) but thenobjects are not (so configurations are only distinguished by thenumber of objectspresent in each bin). A configuration is thus represented by ak-tuple of positive integers. Thenobjects are now represented as a row ofnstars; adjacent bins are separated by bars. The configuration will be specified by indicating the boundary between the first and second bin, the boundary between the second and third bin, and so on. Hencek− 1bars need to be placed between stars. Because no bin is allowed to be empty, there is at most one bar between any pair of stars. There aren− 1gaps between stars and hencen− 1positions in which a bar may be placed. 
A configuration is obtained by choosingk− 1of these gaps to contain a bar; therefore there are(n−1k−1){\displaystyle {\tbinom {n-1}{k-1}}}configurations. Withn= 7andk= 3, start by placing seven stars in a line: Now indicate the boundaries between the bins: In general two of the six possible bar positions must be chosen. Therefore there are(62)=15{\displaystyle {\tbinom {6}{2}}=15}such configurations. In this case, the weakened restriction of non-negativity instead of positivity means that we can place multiple bars between stars and that one or more bars also be placed before the first star and after the last star. In terms of configurations involving objects and bins, bins are now allowed to be empty. Rather than a(k− 1)-set of bar positions taken from a set of sizen− 1as in the proof of Theorem one, we now have a(k− 1)-multiset of bar positions taken from a set of sizen+ 1(since bar positions may repeat and since the ends are now allowed bar positions). An alternative interpretation in terms of multisets is the following: there is a set ofkbin labels from which a multiset of sizenis to be chosen, the multiplicity of a bin label in this multiset indicating the number of objects placed in that bin. The equality((n+1k−1))=((kn)){\displaystyle \left(\!\!{n+1 \choose k-1}\!\!\right)=\left(\!\!{k \choose n}\!\!\right)}can also be understood as an equivalence of different counting problems: the number ofk-tuples of non-negative integers whose sum isnequals the number of(n+ 1)-tuples of non-negative integers whose sum isk− 1, which follows by interchanging the roles of bars and stars in the diagrams representing configurations. To see the expression(n+k−1k−1){\displaystyle {\tbinom {n+k-1}{k-1}}}directly, observe that any arrangement of stars and bars consists of a total ofn+k− 1symbols,nof which are stars andk− 1of which are bars. Thus, we may lay outn+k− 1slots and choosek− 1of these to contain bars (or, equivalently, choosenof the slots to contain stars). Whenn= 7andk= 5, the tuple (4, 0, 1, 2, 0) may be represented by the following diagram: If possible bar positions are labeled 1, 2, 3, 4, 5, 6, 7, 8 with labeli≤7corresponding to a bar preceding theith star and following any previous star and 8 to a bar following the last star, then this configuration corresponds to the(k− 1)-multiset{5,5,6,8}, as described in the proof of Theorem two. If bins are labeled 1, 2, 3, 4, 5, then it also corresponds to then-multiset{1,1,1,1,3,4,4}, also as described in the proof of Theorem two. Theorem one can be restated in terms of Theorem two, because the requirement that each variable be positive can be imposed by shifting each variable by −1, and then requiring only that each variable be non-negative. For example: withx1,x2,x3,x4>0{\displaystyle x_{1},x_{2},x_{3},x_{4}>0} is equivalent to: withx1′,x2′,x3′,x4′≥0,{\displaystyle x'_{1},x'_{2},x'_{3},x'_{4}\geq 0,} wherexi′=xi−1{\displaystyle x'_{i}=x_{i}-1}for eachi∈{1,2,3,4}{\displaystyle i\in \{1,2,3,4\}}. If one wishes to count the number of ways to distribute seven indistinguishable one dollar coins among Amber, Ben, and Curtis so that each of them receives at least one dollar, one may observe that distributions are essentially equivalent to tuples of three positive integers whose sum is 7. (Here the first entry in the tuple is the number of coins given to Amber, and so on.) Thus Theorem 1 applies, withn= 7andk= 3, and there are(7−13−1)=15{\displaystyle {\tbinom {7-1}{3-1}}=15}ways to distribute the coins. 
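Both counts are easy to verify by brute force for small n and k. The sketch below simply enumerates tuples and compares the results with the binomial formulas; the helper names are ours.

```python
from itertools import product
from math import comb

def count_positive_solutions(n, k):
    """Number of k-tuples of positive integers summing to n (Theorem one)."""
    return sum(1 for t in product(range(1, n + 1), repeat=k) if sum(t) == n)

def count_nonnegative_solutions(n, k):
    """Number of k-tuples of non-negative integers summing to n (Theorem two)."""
    return sum(1 for t in product(range(n + 1), repeat=k) if sum(t) == n)

n, k = 10, 4
assert count_positive_solutions(n, k) == comb(n - 1, k - 1)          # 84
assert count_nonnegative_solutions(n, k) == comb(n + k - 1, k - 1)   # 286
assert count_positive_solutions(7, 3) == comb(6, 2) == 15            # the coins example
```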
Ifn= 5,k= 4, and thekbin labels area,b,c,d, then ★|★★★||★ could represent either the 4-tuple(1, 3, 0, 1), or the multiset of bar positions{2, 5, 5}, or the multiset of bin labels{a,b,b,b,d}. The solution of this problem should use Theorem 2 withn= 5stars andk– 1 = 3bars to give(5+4−14−1)=(83)=56{\displaystyle {\tbinom {5+4-1}{4-1}}={\tbinom {8}{3}}=56}configurations. In the proof of Theorem two there can be more bars than stars, which cannot happen in the proof of Theorem one. So, for example, 10 balls into 7 bins gives(166){\displaystyle {\tbinom {16}{6}}}configurations, while 7 balls into 10 bins gives(169){\displaystyle {\tbinom {16}{9}}}configurations, and 6 balls into 11 bins gives(1610)=(166){\displaystyle {\tbinom {16}{10}}={\tbinom {16}{6}}}configurations. The graphical method was used byPaul EhrenfestandHeike Kamerlingh Onnes—with symbolε(quantum energy element) in place of a star and the symbol0in place of a bar—as a simple derivation ofMax Planck's expression for the number of "complexions" for a system of "resonators" of a single frequency.[5][6] By complexions (microstates) Planck meant distributions ofPenergy elementsεoverNresonators.[7][8]The numberRof complexions is The graphical representation of each possible distribution would containPcopies of the symbolεandN– 1copies of the symbol0. In their demonstration, Ehrenfest and Kamerlingh Onnes tookN= 4andP= 7(i.e.,R= 120combinations). They chose the 4-tuple (4, 2, 0, 1) as the illustrative example for this symbolic representation:εεεε0εε00ε. The enumerations of Theorems one and two can also be found usinggenerating functionsinvolving simple rational expressions. The two cases are very similar; we will look at the case whenxi≥0{\displaystyle x_{i}\geq 0}, that is, Theorem two first. There is only one configuration for a single bin and any given number of objects (because the objects are not distinguished). This is represented by the generating function The series is a geometric series, and the last equality holds analytically for|x| < 1, but is better understood in this context as a manipulation offormal power series. The exponent ofxindicates how many objects are placed in the bin. Each additional bin is represented by another factor of11−x{\displaystyle {\frac {1}{1-x}}}; the generating function forkbins is where the multiplication is theCauchy productof formal power series. To find the number of configurations withnobjects, we want the coefficient ofxn{\displaystyle x^{n}}(denoted by prefixing the expression for the generating function with[xn]{\displaystyle [x^{n}]}), that is, This coefficient can be found usingbinomial seriesand agrees with the result of Theorem two, namely(n+k−1k−1){\displaystyle {\tbinom {n+k-1}{k-1}}}. This Cauchy product expression is justified via stars and bars: the coefficient ofxn{\displaystyle x^{n}}in the expansion of the product is the number of ways of obtaining thenth power ofxby multiplying one power ofxfrom each of thekfactors. So the stars representxs and a bar separates thexs coming from one factor from those coming from the next factor. For the case whenxi>0{\displaystyle x_{i}>0}, that is, Theorem one, no configuration has an empty bin, and so the generating function for a single bin is The Cauchy product is thereforexk(1−x)k{\displaystyle {\frac {x^{k}}{(1-x)^{k}}}}, and the coefficient ofxn{\displaystyle x^{n}}is found using binomial series to be(n−1k−1){\displaystyle {\tbinom {n-1}{k-1}}}.
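The coefficient extraction can also be checked symbolically. This sketch uses SymPy (an assumed dependency, not mentioned in the text) to expand the two generating functions and read off the coefficient of x^n.

```python
from sympy import symbols, series, binomial

x = symbols('x')
n, k = 10, 4

# Theorem two: [x^n] (1 - x)^(-k) = C(n + k - 1, k - 1)
coeff_two = series(1 / (1 - x) ** k, x, 0, n + 1).removeO().coeff(x, n)
assert coeff_two == binomial(n + k - 1, k - 1)   # 286

# Theorem one: [x^n] x^k * (1 - x)^(-k) = C(n - 1, k - 1)
coeff_one = series(x ** k / (1 - x) ** k, x, 0, n + 1).removeO().coeff(x, n)
assert coeff_one == binomial(n - 1, k - 1)       # 84
```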
https://en.wikipedia.org/wiki/Stars_and_bars_(combinatorics)
Lollipop sequence numbering is a numbering scheme used in routing protocols. In this numbering scheme, sequence numbers start at a negative value, increase until they reach zero, then cycle through a finite set of positive numbers indefinitely. When a system is rebooted, the sequence is restarted from a negative number again. This allows recently rebooted systems to be distinguished from systems which have simply looped around their numbering space. This path can be visualized as a line with a circle at the end; hence a lollipop. Lollipop sequence numbering was originally believed to resolve the ambiguity problem in cyclic sequence numbering schemes, and was used in OSPF version 1 for this reason. Later work showed that this was not the case, as in the ARPANET sequence bug, and OSPF version 2 replaced it with a linear numbering space, with special rules for what happens when the sequence numbers reach the end of the numbering space.[1]
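As a rough illustration only, a lollipop numbering space can be modelled as a negative stem followed by a circular positive region. The class name, the constants and the wrap-around rule below are our own simplifications and are not taken from OSPF or any other specific protocol.

```python
class LollipopCounter:
    """Toy lollipop sequence number generator.

    Starts on the negative "stem" at `stem`, climbs towards zero, then cycles
    forever through the positive circular space 1..modulus. A freshly rebooted
    node therefore emits negative numbers, which its peers can recognise.
    """
    def __init__(self, stem=-8, modulus=2**15):
        self.stem = stem            # starting (negative) value after a reboot
        self.modulus = modulus      # size of the circular (positive) region
        self.value = stem

    def next(self):
        if self.value < 0:
            self.value += 1                              # still climbing the stem
        else:
            self.value = self.value % self.modulus + 1   # wrap around within 1..modulus
        return self.value

c = LollipopCounter(stem=-3, modulus=5)
print([c.next() for _ in range(12)])   # [-2, -1, 0, 1, 2, 3, 4, 5, 1, 2, ...]
```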
https://en.wikipedia.org/wiki/Lollipop_sequence_numbering
Photo identificationorphoto IDis anidentity documentthat includes aphotographof the holder, usually only their face. The most commonly accepted forms of photo ID are those issued by government authorities, such asdriver's licenses, identity cards andpassports, but special-purpose photo IDs may be also produced, such as internal security oraccess controlcards. Photo identification may be used forface-to-faceauthenticationof identity of a party who either is personally unknown to the person in authority or because that person does not have access to a file, a directory, aregistryor an information service that contains or that can render a photograph of somebody on account of that person's name and other personal information. Some countries – including almost all developed nations – use a single, government-issued type of card as a proof of age or citizenship. TheUnited States,United Kingdom,Australia,New Zealand,Ireland, andCanadado not have such a single type of card. Types of photo ID used in the US include: Australianphoto IDincludes: Photo identification cards appear to have been first used at the1876 Centennial ExpositioninPhiladelphia, Pennsylvania. The Scottish-born Canadian photographerWilliam Notman, through his affiliated business, Centennial Photographic Co., which had exclusive photographic concession at the exhibition, introduced a photo identification system that was required for all exhibitors and employees of the exhibition. The innovation was known as a "photographic ticket".[3]
https://en.wikipedia.org/wiki/Photo_identification
Attentionis amachine learningmethod that determines the importance of each component in a sequence relative to the other components in that sequence. Innatural language processing, importance is represented by"soft"weights assigned to each word in a sentence. More generally, attention encodes vectors calledtokenembeddingsacross a fixed-widthsequencethat can range from tens to millions of tokens in size. Unlike "hard" weights, which are computed during the backwards training pass, "soft" weights exist only in the forward pass and therefore change with every step of the input. Earlier designs implemented the attention mechanism in a serialrecurrent neural network(RNN) language translation system, but a more recent design, namely thetransformer, removed the slower sequential RNN and relied more heavily on the faster parallel attention scheme. Inspired by ideas aboutattention in humans, the attention mechanism was developed to address the weaknesses of leveraging information from thehidden layersof recurrent neural networks. Recurrent neural networks favor more recent information contained in words at the end of a sentence, while information earlier in the sentence tends to beattenuated. Attention allows a token equal access to any part of a sentence directly, rather than only through the previous state. Academic reviews of the history of the attention mechanism are provided in Niu et al.[1]and Soydaner.[2] seq2seqwith RNN + Attention.[13]Attention mechanism was added onto RNN encoder-decoder architecture to improve language translation of long sentences. See Overview section. The modern era of machine attention was revitalized by grafting an attention mechanism (Fig 1. orange) to an Encoder-Decoder. Figure 2 shows the internal step-by-step operation of the attention block (A) in Fig 1. This attention scheme has been compared to the Query-Key analogy of relational databases. That comparison suggests anasymmetricrole for the Query and Key vectors, whereoneitem of interest (the Query vector "that") is matched againstallpossible items (the Key vectors of each word in the sentence). However, both Self and Cross Attentions' parallel calculations matches all tokens of the K matrix with all tokens of the Q matrix; therefore the roles of these vectors aresymmetric. Possibly because the simplistic database analogy is flawed, much effort has gone into understanding attention mechanisms further by studying their roles in focused settings, such as in-context learning,[20]masked language tasks,[21]stripped down transformers,[22]bigram statistics,[23]N-gram statistics,[24]pairwise convolutions,[25]and arithmetic factoring.[26] In translating between languages, alignment is the process of matching words from the source sentence to words of the translated sentence. Networks that perform verbatim translation without regard to word order would show the highest scores along the (dominant) diagonal of the matrix. The off-diagonal dominance shows that the attention mechanism is more nuanced. Consider an example of translatingI love youto French. On the first pass through the decoder, 94% of the attention weight is on the first English wordI, so the network offers the wordje. On the second pass of the decoder, 88% of the attention weight is on the third English wordyou, so it offerst'. On the last pass, 95% of the attention weight is on the second English wordlove, so it offersaime. In theI love youexample, the second wordloveis aligned with the third wordaime. 
Stacking soft row vectors together forje,t', andaimeyields analignment matrix: Sometimes, alignment can be multiple-to-multiple. For example, the English phraselook it upcorresponds tocherchez-le. Thus, "soft" attention weights work better than "hard" attention weights (setting one attention weight to 1, and the others to 0), as we would like the model to make a context vector consisting of a weighted sum of the hidden vectors, rather than "the best one", as there may not be a best hidden vector. Many variants of attention implement soft weights, such as Forconvolutional neural networks, attention mechanisms can be distinguished by the dimension on which they operate, namely: spatial attention,[30]channel attention,[31]or combinations.[32][33] These variants recombine the encoder-side inputs to redistribute those effects to each target output. Often, a correlation-style matrix of dot products provides the re-weighting coefficients. In the figures below, W is the matrix of context attention weights, similar to the formula in Core Calculations section above. The size of the attention matrix is proportional to the square of the number of input tokens. Therefore, when the input is long, calculating the attention matrix requires a lot ofGPUmemory. Flash attention is an implementation that reduces the memory needs and increases efficiency without sacrificing accuracy. It achieves this by partitioning the attention computation into smaller blocks that fit into the GPU's faster on-chip memory, reducing the need to store large intermediate matrices and thus lowering memory usage while increasing computational efficiency.[38] Flex Attention[39]is an attention kernel developed by Meta that allows users to modify attention scores prior tosoftmaxand dynamically chooses the optimal attention algorithm. The major breakthrough came with self-attention, where each element in the input sequence attends to all others, enabling the model to capture global dependencies. This idea was central to the Transformer architecture, which replaced recurrence entirely with attention mechanisms. As a result, Transformers became the foundation for models like BERT, GPT, and T5 (Vaswani et al., 2017). Attention is widely used in natural language processing, computer vision, and speech recognition. In NLP, it improves context understanding in tasks like question answering and summarization. In vision, visual attention helps models focus on relevant image regions, enhancing object detection and image captioning. For matrices:Q∈Rm×dk,K∈Rn×dk{\displaystyle \mathbf {Q} \in \mathbb {R} ^{m\times d_{k}},\mathbf {K} \in \mathbb {R} ^{n\times d_{k}}}andV∈Rn×dv{\displaystyle \mathbf {V} \in \mathbb {R} ^{n\times d_{v}}}, the scaled dot-product, orQKV attentionis defined as:Attention(Q,K,V)=softmax(QKTdk)V∈Rm×dv{\displaystyle {\text{Attention}}(\mathbf {Q} ,\mathbf {K} ,\mathbf {V} )={\text{softmax}}\left({\frac {\mathbf {Q} \mathbf {K} ^{T}}{\sqrt {d_{k}}}}\right)\mathbf {V} \in \mathbb {R} ^{m\times d_{v}}}whereT{\displaystyle {}^{T}}denotestransposeand thesoftmax functionis applied independently to every row of its argument. The matrixQ{\displaystyle \mathbf {Q} }containsm{\displaystyle m}queries, while matricesK,V{\displaystyle \mathbf {K} ,\mathbf {V} }jointly contain anunorderedset ofn{\displaystyle n}key-value pairs. 
Value vectors in matrixV{\displaystyle \mathbf {V} }are weighted using the weights resulting from the softmax operation, so that the rows of them{\displaystyle m}-by-dv{\displaystyle d_{v}}output matrix are confined to theconvex hullof the points inRdv{\displaystyle \mathbb {R} ^{d_{v}}}given by the rows ofV{\displaystyle \mathbf {V} }. To understand thepermutation invarianceandpermutation equivarianceproperties of QKV attention,[40]letA∈Rm×m{\displaystyle \mathbf {A} \in \mathbb {R} ^{m\times m}}andB∈Rn×n{\displaystyle \mathbf {B} \in \mathbb {R} ^{n\times n}}bepermutation matrices; andD∈Rm×n{\displaystyle \mathbf {D} \in \mathbb {R} ^{m\times n}}an arbitrary matrix. The softmax function ispermutation equivariantin the sense that: By noting that the transpose of a permutation matrix is also its inverse, it follows that: which shows that QKV attention isequivariantwith respect to re-ordering the queries (rows ofQ{\displaystyle \mathbf {Q} }); andinvariantto re-ordering of the key-value pairs inK,V{\displaystyle \mathbf {K} ,\mathbf {V} }. These properties are inherited when applying linear transforms to the inputs and outputs of QKV attention blocks. For example, a simpleself-attentionfunction defined as: is permutation equivariant with respect to re-ordering the rows of the input matrixX{\displaystyle X}in a non-trivial way, because every row of the output is a function of all the rows of the input. Similar properties hold formulti-head attention, which is defined below. When QKV attention is used as a building block for an autoregressive decoder, and when at training time all input and output matrices haven{\displaystyle n}rows, amasked attentionvariant is used:Attention(Q,K,V)=softmax(QKTdk+M)V{\displaystyle {\text{Attention}}(\mathbf {Q} ,\mathbf {K} ,\mathbf {V} )={\text{softmax}}\left({\frac {\mathbf {Q} \mathbf {K} ^{T}}{\sqrt {d_{k}}}}+\mathbf {M} \right)\mathbf {V} }where the mask,M∈Rn×n{\displaystyle \mathbf {M} \in \mathbb {R} ^{n\times n}}is astrictly upper triangular matrix, with zeros on and below the diagonal and−∞{\displaystyle -\infty }in every element above the diagonal. The softmax output, also inRn×n{\displaystyle \mathbb {R} ^{n\times n}}is thenlower triangular, with zeros in all elements above the diagonal. The masking ensures that for all1≤i<j≤n{\displaystyle 1\leq i<j\leq n}, rowi{\displaystyle i}of the attention output is independent of rowj{\displaystyle j}of any of the three input matrices. The permutation invariance and equivariance properties of standard QKV attention do not hold for the masked variant. Multi-head attentionMultiHead(Q,K,V)=Concat(head1,...,headh)WO{\displaystyle {\text{MultiHead}}(\mathbf {Q} ,\mathbf {K} ,\mathbf {V} )={\text{Concat}}({\text{head}}_{1},...,{\text{head}}_{h})\mathbf {W} ^{O}}where each head is computed with QKV attention as:headi=Attention(QWiQ,KWiK,VWiV){\displaystyle {\text{head}}_{i}={\text{Attention}}(\mathbf {Q} \mathbf {W} _{i}^{Q},\mathbf {K} \mathbf {W} _{i}^{K},\mathbf {V} \mathbf {W} _{i}^{V})}andWiQ,WiK,WiV{\displaystyle \mathbf {W} _{i}^{Q},\mathbf {W} _{i}^{K},\mathbf {W} _{i}^{V}}, andWO{\displaystyle \mathbf {W} ^{O}}are parameter matrices. The permutation properties of (standard, unmasked) QKV attention apply here also. For permutation matrices,A,B{\displaystyle \mathbf {A} ,\mathbf {B} }: from which we also see thatmulti-head self-attention: is equivariant with respect to re-ordering of the rows of input matrixX{\displaystyle X}. 
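A direct NumPy transcription of the scaled dot-product formula, with the optional additive mask; the shapes follow the conventions above (Q is m-by-d_k, K is n-by-d_k, V is n-by-d_v). This is an illustrative sketch, not a reference implementation.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)      # subtract the row maximum for stability
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def qkv_attention(Q, K, V, mask=None):
    """softmax(Q K^T / sqrt(d_k) + M) V, applied row-wise."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    if mask is not None:
        scores = scores + mask                   # e.g. -inf above the diagonal
    return softmax(scores) @ V

m, n, d_k, d_v = 4, 6, 8, 5
rng = np.random.default_rng(0)
Q = rng.normal(size=(m, d_k))
K = rng.normal(size=(n, d_k))
V = rng.normal(size=(n, d_v))
out = qkv_attention(Q, K, V)                     # shape (m, d_v)

# Invariance to re-ordering the key-value pairs: shuffling K and V together
# leaves the output unchanged (up to floating-point error).
perm = rng.permutation(n)
assert np.allclose(out, qkv_attention(Q, K[perm], V[perm]))
```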
Attention(Q,K,V)=softmax(tanh⁡(WQQ+WKK)V){\displaystyle {\text{Attention}}(\mathbf {Q} ,\mathbf {K} ,\mathbf {V} )={\text{softmax}}(\tanh(\mathbf {W} _{Q}\mathbf {Q} +\mathbf {W} _{K}\mathbf {K} )\mathbf {V} )}whereWQ{\displaystyle \mathbf {W} _{Q}}andWK{\displaystyle \mathbf {W} _{K}}are learnable weight matrices.[13] Attention(Q,K,V)=softmax(QWKT)V{\displaystyle {\text{Attention}}(\mathbf {Q} ,\mathbf {K} ,\mathbf {V} )={\text{softmax}}(\mathbf {Q} \mathbf {W} \mathbf {K} ^{T})\mathbf {V} }whereW{\displaystyle \mathbf {W} }is a learnable weight matrix.[27] Self-attention is essentially the same as cross-attention, except that query, key, and value vectors all come from the same model. Both encoder and decoder can use self-attention, but with subtle differences. For encoder self-attention, we can start with a simple encoder without self-attention, such as an "embedding layer", which simply converts each input word into a vector by a fixedlookup table. This gives a sequence of hidden vectorsh0,h1,…{\displaystyle h_{0},h_{1},\dots }. These can then be applied to a dot-product attention mechanism, to obtainh0′=Attention(h0WQ,HWK,HWV)h1′=Attention(h1WQ,HWK,HWV)⋯{\displaystyle {\begin{aligned}h_{0}'&=\mathrm {Attention} (h_{0}W^{Q},HW^{K},HW^{V})\\h_{1}'&=\mathrm {Attention} (h_{1}W^{Q},HW^{K},HW^{V})\\&\cdots \end{aligned}}}or more succinctly,H′=Attention(HWQ,HWK,HWV){\displaystyle H'=\mathrm {Attention} (HW^{Q},HW^{K},HW^{V})}. This can be applied repeatedly, to obtain a multilayered encoder. This is the "encoder self-attention", sometimes called the "all-to-all attention", as the vector at every position can attend to every other. For decoder self-attention, all-to-all attention is inappropriate, because during the autoregressive decoding process, the decoder cannot attend to future outputs that has yet to be decoded. This can be solved by forcing the attention weightswij=0{\displaystyle w_{ij}=0}for alli<j{\displaystyle i<j}, called "causal masking". This attention mechanism is the "causally masked self-attention".
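The encoder self-attention and the causally masked decoder variant described above can be sketched in a few lines. The block is self-contained (it re-defines a small attention helper) and uses random projection matrices purely for illustration.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def self_attention(X, Wq, Wk, Wv, causal=False):
    """H' = Attention(X Wq, X Wk, X Wv); with causal=True, token i ignores tokens j > i."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(Q.shape[-1])
    if causal:
        n = scores.shape[0]
        scores = np.where(np.triu(np.ones((n, n), dtype=bool), k=1), -np.inf, scores)
    return softmax(scores) @ V

n_tokens, d_model, d_k = 5, 16, 8
rng = np.random.default_rng(0)
X = rng.normal(size=(n_tokens, d_model))               # e.g. embedding-layer outputs
Wq, Wk, Wv = (rng.normal(size=(d_model, d_k)) for _ in range(3))

H_enc = self_attention(X, Wq, Wk, Wv)                  # all-to-all encoder self-attention
H_dec = self_attention(X, Wq, Wk, Wv, causal=True)     # causally masked decoder variant
# Row 0 of the causal output depends only on token 0, so it equals attention over X[:1].
assert np.allclose(H_dec[0], self_attention(X[:1], Wq, Wk, Wv)[0])
```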
https://en.wikipedia.org/wiki/Attention_mechanism
In theEastern Orthodox Church,Catholic Church,[1]and in the teachings of theChurch Fatherswhich undergirds thetheologyof those communions,economyoroeconomy(Greek:οἰκονομία,oikonomia) has several meanings.[2]The basic meaning of the word is "handling" or "disposition" or "management" of a thing, or more literally "housekeeping", usually assuming or implyinggoodorprudenthandling (as opposed topoorhandling) of the matter at hand. In short,economiais a discretionary deviation from the letter of the law in order to adhere to the spirit of thelawandcharity. This is in contrast tolegalism, orakribia(Greek:ακριβεια), which is strict adherence to the letter of the law of the church. The divine economy, in Eastern Orthodoxy, not only refers to God's actions to bring about the world'ssalvationandredemption, but toallof God's dealings with, and interactions with, the world, including the Creation.[3][verification needed] According toLossky,theology(literally, "words about God" or "teaching about God") was concerned with all that pertains to God alone, in himself, i.e. the teaching on theTrinity, thedivine attributes, and so on; but it was not concerned with anything pertaining to the creation or the redemption. Lossky writes: "The distinction betweenοικονομια[economy] andθεολογια[theology] [...] remains common to most of the GreekFathersand to all of theByzantinetradition.θεολογια[...] means, in the fourth century, everything which can be said of God considered in Himself, outside of His creative and redemptive economy. To reach this 'theology' properly so-called, one therefore must go beyond [...] God as Creator of the universe, in order to be able to extricate the notion of the Trinity from the cosmological implications proper to the 'economy.' "[3] TheEcumenical Patriarchateconsiders that through "extreme oikonomia [economy]", those who arebaptizedin theOriental Orthodox, Roman Catholic,Lutheran,Old Catholic,Moravian,Anglican,Methodist,Reformed,Presbyterian,Church of the Brethren,Assemblies of God, orBaptisttraditions can be received into the Eastern Orthodox Church through the sacrament ofChrismationand not throughre-baptism.[4] In thecanon law of the Eastern Orthodox Church, the notions ofakriveiaandeconomia(economy) also exist.Akriveia, which is harshness, "is the strict application (sometimes even extension) of thepenancegiven to an unrepentant and habitual offender."Economia, which is sweetness, "is a judicious relaxation of the penance when the sinner shows remorse andrepentance."[5] According to the Catechism of the Catholic Church:[6] The Fathers of the Church distinguish between theology (theologia) and economy (oikonomia). "Theology" refers to the mystery of God's inmost life within the Blessed Trinity and "economy" to all the works by which God reveals himself and communicates his life. Through the oikonomia the theologia is revealed to us; but conversely, the theologia illuminates the whole oikonomia. God's works reveal who he is in himself; the mystery of his inmost being enlightens our understanding of all his works. So it is, analogously, among human persons. A person discloses himself in his actions, and the better we know a person, the better we understand his actions.
https://en.wikipedia.org/wiki/Economy_(religion)
In functional analysis, a branch of mathematics, an operator algebra is an algebra of continuous linear operators on a topological vector space, with the multiplication given by the composition of mappings. The results obtained in the study of operator algebras are often phrased in algebraic terms, while the techniques used are often highly analytic.[1] Although the study of operator algebras is usually classified as a branch of functional analysis, it has direct applications to representation theory, differential geometry, quantum statistical mechanics, quantum information, and quantum field theory. Operator algebras can be used to study arbitrary sets of operators with little algebraic relation simultaneously. From this point of view, operator algebras can be regarded as a generalization of the spectral theory of a single operator. In general, operator algebras are non-commutative rings. An operator algebra is typically required to be closed in a specified operator topology inside the whole algebra of continuous linear operators. In particular, it is a set of operators with both algebraic and topological closure properties. In some disciplines such properties are axiomatized, and algebras with certain topological structure become the subject of the research. Though algebras of operators are studied in various contexts (for example, algebras of pseudo-differential operators acting on spaces of distributions), the term operator algebra is usually used in reference to algebras of bounded operators on a Banach space or, even more specially, in reference to algebras of operators on a separable Hilbert space, endowed with the operator norm topology. In the case of operators on a Hilbert space, the Hermitian adjoint map on operators gives a natural involution, which provides an additional algebraic structure that can be imposed on the algebra. In this context, the best studied examples are self-adjoint operator algebras, meaning that they are closed under taking adjoints. These include C*-algebras, von Neumann algebras, and AW*-algebras. C*-algebras can be easily characterized abstractly by a condition relating the norm, involution and multiplication. Such abstractly defined C*-algebras can be identified with a certain closed subalgebra of the algebra of the continuous linear operators on a suitable Hilbert space. A similar result holds for von Neumann algebras. Commutative self-adjoint operator algebras can be regarded as the algebra of complex-valued continuous functions on a locally compact space, or that of measurable functions on a standard measurable space. Thus, general operator algebras are often regarded as noncommutative generalizations of these algebras, or as capturing the structure of the base space on which the functions are defined. This point of view is elaborated as the philosophy of noncommutative geometry, which tries to study various non-classical and/or pathological objects by means of noncommutative operator algebras. Examples of operator algebras that are not self-adjoint include:
https://en.wikipedia.org/wiki/Operator_algebra
False precision (also called overprecision, fake precision, misplaced precision, excess precision, and spurious precision) occurs when numerical data are presented in a manner that implies better precision than is justified; since precision is a limit to accuracy (in the ISO definition of accuracy), this often leads to overconfidence in the accuracy, named precision bias.[1] Madsen Pirie defines the term "false precision" in a more general way: when exact numbers are used for notions that cannot be expressed in exact terms. For example, "We know that 90% of the difficulty in writing is getting started." Often false precision is abused to produce an unwarranted confidence in the claim: "our mouthwash is twice as good as our competitor's".[2] In science and engineering, convention dictates that unless a margin of error is explicitly stated, the number of significant figures used in the presentation of data should be limited to what is warranted by the precision of those data. For example, if an instrument can be read to tenths of a unit of measurement, results of calculations using data obtained from that instrument can only be confidently stated to the tenths place, regardless of what the raw calculation returns or whether other data used in the calculation are more accurate. Even outside these disciplines, there is a tendency to assume that all the non-zero digits of a number are meaningful; thus, providing excessive figures may lead the viewer to expect better precision than exists. However, in contrast, it is good practice to retain more significant figures than this in the intermediate stages of a calculation, in order to avoid accumulated rounding errors. False precision commonly arises when high-precision and low-precision data are combined, when using an electronic calculator, and in conversion of units. False precision is the gist of numerous variations of a joke which can be summarized as follows: A tour guide at a museum tells visitors that a dinosaur skeleton is 100,000,005 years old, because he was told that it was 100 million years old when he started working there 5 years ago. If a car's speedometer indicates a speed of 60 mph, converting it to 96.56064 km/h makes it seem like the measurement was very precise, when in fact it was not. Assuming the speedometer is accurate to 1 mph, a more appropriate conversion is 97 km/h. Measures that rely on statistical sampling, such as IQ tests, are often reported with false precision.[3]
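The speedometer example can be reproduced with a short Python sketch; the helper name and the significant-figure rounding scheme below are ours, chosen for illustration, not part of any standard:

```python
from math import floor, log10

def mph_to_kmh(mph, sig_figs):
    """Convert mph to km/h, rounded to a given number of significant figures."""
    kmh = mph * 1.609344          # the international mile is exactly 1.609344 km
    digits = sig_figs - 1 - floor(log10(abs(kmh)))
    return round(kmh, digits)

print(mph_to_kmh(60, 7))   # 96.56064 -- false precision
print(mph_to_kmh(60, 2))   # 97.0     -- consistent with a reading accurate to ~1 mph
```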
https://en.wikipedia.org/wiki/False_precision
Gibrat's law, sometimes called Gibrat's rule of proportionate growth or the law of proportionate effect,[1] is a rule defined by Robert Gibrat (1904–1980) in 1931 stating that the proportional rate of growth of a firm is independent of its absolute size.[2][3] The law of proportionate growth gives rise to a firm size distribution that is log-normal.[4] Gibrat's law is also applied to city size and growth rate,[5] where a proportionate growth process may give rise to a distribution of city sizes that is log-normal, as predicted by Gibrat's law. While the city size distribution is often associated with Zipf's law, this holds only in the upper tail. When considering the entire size distribution, not just the largest cities, the city size distribution is log-normal.[6] The log-normality of the distribution reconciles Gibrat's law also for cities: the law of proportionate effect implies that the logarithms of the variable are normally distributed, i.e. the variable itself follows a log-normal distribution.[2] In isolation, the upper tail (less than 1,000 out of 24,000 cities) fits both the log-normal and the Pareto distribution: the uniformly most powerful unbiased test comparing the lognormal to the power law shows that the largest 1000 cities are distinctly in the power law regime.[7] However, it has been argued that it is problematic to define cities through their fairly arbitrary legal boundaries (the places method treats Cambridge and Boston, Massachusetts, as two separate units). A clustering method to construct cities from the bottom up by clustering populated areas obtained from high-resolution data finds a power-law distribution of city size consistent with Zipf's law in almost the entire range of sizes.[8] Note that populated areas are still aggregated rather than individual based. A newer method based on individual street nodes for the clustering process leads to the concept of natural cities. It has been found that natural cities exhibit a striking Zipf's law.[9] Furthermore, the clustering method allows for a direct assessment of Gibrat's law. It is found that the growth of agglomerations is not consistent with Gibrat's law: the mean and standard deviation of the growth rates of cities follow a power law with city size.[10] In general, processes characterized by Gibrat's law converge to a limiting distribution, often proposed to be the log-normal, or a power law, depending on more specific assumptions about the stochastic growth process. However, the tail of the lognormal may fall off too quickly, and its PDF is not monotonic, but rather has a Y-intercept of zero probability at the origin. The typical power law is the Pareto I, which has a tail that cannot model fall-off at large outcome sizes, and which does not extend downwards to zero, but rather must be truncated at some positive minimum value. More recently, the Weibull distribution has been derived as the limiting distribution for Gibrat processes, by recognizing that (a) the increments of the growth process are not independent, but rather correlated, in magnitude, and (b) the increment magnitudes typically have monotonic PDFs.[11] The Weibull PDF can appear essentially log-log linear over orders of magnitude ranging from zero, while eventually falling off at unreasonably large outcome sizes. In the study of firms (business), scholars do not agree that the foundation and the outcome of Gibrat's law are empirically correct.[citation needed][12]
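The mechanism is easy to see in a simulation: if each firm's size is multiplied every period by a growth factor drawn independently of its current size, the log-sizes perform a random walk and end up approximately normal, so the sizes themselves end up approximately log-normal. A minimal NumPy sketch, with all parameters being arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(42)
n_firms, n_periods = 100_000, 50

sizes = np.ones(n_firms)                          # every firm starts at size 1
for _ in range(n_periods):
    # Growth factor drawn independently of current size: Gibrat's assumption.
    sizes *= rng.lognormal(mean=0.0, sigma=0.1, size=n_firms)

log_sizes = np.log(sizes)
# Log-sizes are a sum of 50 i.i.d. increments, hence approximately normal:
print(log_sizes.mean(), log_sizes.std())          # ~0.0 and ~0.1*sqrt(50) = ~0.707
```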
https://en.wikipedia.org/wiki/Gibrat%27s_law
The probabilistic roadmap[1] planner is a motion planning algorithm in robotics, which solves the problem of determining a path between a starting configuration of the robot and a goal configuration while avoiding collisions. The basic idea behind PRM is to take random samples from the configuration space of the robot, test them for whether they are in the free space, and use a local planner to attempt to connect these configurations to other nearby configurations. The starting and goal configurations are added in, and a graph search algorithm is applied to the resulting graph to determine a path between the starting and goal configurations. The probabilistic roadmap planner consists of two phases: a construction and a query phase. In the construction phase, a roadmap (graph) is built, approximating the motions that can be made in the environment. First, a random configuration is created. Then, it is connected to some neighbors, typically either the k nearest neighbors or all neighbors less than some predetermined distance away. Configurations and connections are added to the graph until the roadmap is dense enough. In the query phase, the start and goal configurations are connected to the graph, and the path is obtained by a Dijkstra's shortest path query. Given certain relatively weak conditions on the shape of the free space, PRM is provably probabilistically complete, meaning that as the number of sampled points increases without bound, the probability that the algorithm will not find a path if one exists approaches zero. The rate of convergence depends on certain visibility properties of the free space, where visibility is determined by the local planner. Roughly, if each point can "see" a large fraction of the space, and also if a large fraction of each subset of the space can "see" a large fraction of its complement, then the planner will find a path quickly. The invention of the PRM method is credited to Lydia E. Kavraki.[2][3] There are many variants on the basic PRM method, some quite sophisticated, that vary the sampling strategy and connection strategy to achieve faster performance. See e.g. Geraerts & Overmars (2002)[4] for a discussion.
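The two phases can be sketched in a few dozen lines of Python for a 2-D point robot. The obstacle model, parameters, and helper names below are all invented for illustration; a straight-line motion checked by dense sampling stands in for the local planner:

```python
import heapq, math, random

OBSTACLES = [((5.0, 5.0), 2.0), ((2.5, 8.0), 1.2)]   # (center, radius) discs

def is_free(p):
    """Collision test for a single point configuration."""
    return all(math.dist(p, c) > r for c, r in OBSTACLES)

def segment_free(p, q, step=0.05):
    """Local planner: straight-line motion, checked by dense sampling."""
    n = max(1, int(math.dist(p, q) / step))
    return all(is_free((p[0] + (q[0] - p[0]) * t / n,
                        p[1] + (q[1] - p[1]) * t / n)) for t in range(n + 1))

def connect(nodes, edges, i, k=8):
    """Try to link node i to its k nearest neighbours."""
    others = sorted((j for j in range(len(nodes)) if j != i),
                    key=lambda j: math.dist(nodes[i], nodes[j]))
    for j in others[:k]:
        if segment_free(nodes[i], nodes[j]):
            d = math.dist(nodes[i], nodes[j])
            edges[i].append((j, d))      # duplicate edges are harmless below
            edges[j].append((i, d))

def build_roadmap(n_samples=300):
    """Construction phase: sample free configurations, connect neighbours."""
    nodes = []
    while len(nodes) < n_samples:
        p = (random.uniform(0, 10), random.uniform(0, 10))
        if is_free(p):
            nodes.append(p)
    edges = {i: [] for i in range(len(nodes))}
    for i in range(len(nodes)):
        connect(nodes, edges, i)
    return nodes, edges

def shortest_path(edges, start, goal):
    """Query phase: Dijkstra's algorithm over the roadmap graph."""
    dist, prev, heap = {start: 0.0}, {}, [(0.0, start)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == goal:
            break
        if d > dist.get(u, math.inf):
            continue                     # stale queue entry
        for v, w in edges[u]:
            if d + w < dist.get(v, math.inf):
                dist[v], prev[v] = d + w, u
                heapq.heappush(heap, (d + w, v))
    if goal not in dist:
        return None                      # may happen if the roadmap is too sparse
    path = [goal]
    while path[-1] != start:
        path.append(prev[path[-1]])
    return path[::-1]

nodes, edges = build_roadmap()
for conf in [(0.5, 0.5), (9.5, 9.5)]:    # add start and goal, connect them too
    nodes.append(conf)
    edges[len(nodes) - 1] = []
    connect(nodes, edges, len(nodes) - 1)
path = shortest_path(edges, len(nodes) - 2, len(nodes) - 1)
```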
https://en.wikipedia.org/wiki/Probabilistic_roadmap
In the x86 architecture, the CPUID instruction (identified by a CPUID opcode) is a processor supplementary instruction (its name derived from "CPU Identification") allowing software to discover details of the processor. It was introduced by Intel in 1993 with the launch of the Pentium and SL-enhanced 486 processors.[1] A program can use the CPUID to determine processor type and whether features such as MMX/SSE are implemented. Prior to the general availability of the CPUID instruction, programmers would write esoteric machine code which exploited minor differences in CPU behavior in order to determine the processor make and model.[2][3][4][5] With the introduction of the 80386 processor, EDX on reset indicated the revision, but this was only readable after reset and there was no standard way for applications to read the value. Outside the x86 family, developers are mostly still required to use esoteric processes (involving instruction timing or CPU fault triggers) to determine the variations in CPU design that are present. For example, in the Motorola 68000 series — which never had a CPUID instruction of any kind — certain specific instructions required elevated privileges. These could be used to tell various CPU family members apart. In the Motorola 68010 the instruction MOVE from SR became privileged. Because the 68000 offered an unprivileged MOVE from SR, the two different CPUs could be told apart by a CPU error condition being triggered. While the CPUID instruction is specific to the x86 architecture, other architectures (like ARM) often provide on-chip registers which can be read in prescribed ways to obtain the same sorts of information provided by the x86 CPUID instruction. The CPUID opcode is 0F A2. In assembly language, the CPUID instruction takes no parameters, as CPUID implicitly uses the EAX register to determine the main category of information returned. In Intel's more recent terminology, this is called the CPUID leaf. CPUID should be called with EAX = 0 first, as this will store in the EAX register the highest EAX calling parameter (leaf) that the CPU implements. To obtain extended function information, CPUID should be called with the most significant bit of EAX set. To determine the highest extended function calling parameter, call CPUID with EAX = 80000000h. CPUID leaves greater than 3 but less than 80000000h are accessible only when the model-specific registers have IA32_MISC_ENABLE.BOOT_NT4 [bit 22] = 0 (which is so by default). As the name suggests, Windows NT 4.0 until SP6 did not boot properly unless this bit was set,[6] but later versions of Windows do not need it, so basic leaves greater than 4 can be assumed visible on current Windows systems. As of April 2024[update], basic valid leaves go up to 23h, but the information returned by some leaves is not disclosed in the publicly available documentation, i.e. they are "reserved". Some of the more recently added leaves also have sub-leaves, which are selected via the ECX register before calling CPUID. This returns the CPU's manufacturer ID string – a twelve-character ASCII string stored in EBX, EDX, ECX (in that order). The highest basic calling parameter (the largest value that EAX can be set to before calling CPUID) is returned in EAX. Here is a list of processors and the highest function implemented.
The following are known processor manufacturer ID strings: The following are ID strings used by open source soft CPU cores: The following are known ID strings from virtual machines: For instance, on a GenuineIntel processor, the values returned are 0x756e6547 in EBX, 0x49656e69 in EDX and 0x6c65746e in ECX. The following example code displays the vendor ID string as well as the highest calling parameter that the CPU implements. On some processors, it is possible to modify the Manufacturer ID string reported by CPUID.(EAX=0) by writing a new ID string to particular MSRs (model-specific registers) using the WRMSR instruction. This has been used on non-Intel processors to enable features and optimizations that have been disabled in software for CPUs that don't return the GenuineIntel ID string.[22] Processors that are known to possess such MSRs include: This returns the CPU's stepping, model, and family information in register EAX (also called the signature of a CPU), feature flags in registers EDX and ECX, and additional feature info in register EBX.[30] As of October 2023, the following x86 processor family IDs are known:[32] CPUID.01.EDX.CLFSH [bit 19] = 1 The nearest power-of-2 integer that is not smaller than this value is the number of unique initial APIC IDs reserved for addressing different logical processors in a physical package.[a] Former use: number of logical processors per physical processor; two for the Pentium 4 processor with Hyper-Threading Technology.[47] CPUID.01.EDX.HTT [bit 28] = 1 The processor info and feature flags are manufacturer specific, but usually the Intel values are used by other manufacturers for the sake of compatibility. Processors noted to exhibit this behavior include Cyrix MII[48] and IDT WinChip 2.[49] In older documentation, this bit is often listed as a "Hyper-threading technology"[61] flag; however, while this flag is a prerequisite for Hyper-Threading support, it does not by itself indicate support for Hyper-Threading, and it has been set on many CPUs that do not feature any form of multi-threading technology.[62] Reserved fields should be masked before using them for processor identification purposes. This returns a list of descriptors indicating cache and TLB capabilities in the EAX, EBX, ECX and EDX registers. On processors that support this leaf, calling CPUID with EAX=2 will cause the bottom byte of EAX to be set to 01h[a] and the remaining 15 bytes of EAX/EBX/ECX/EDX to be filled with 15 descriptors, one byte each. These descriptors provide information about the processor's caches, TLBs and prefetch. This is typically one cache or TLB per descriptor, but some descriptor values provide other information as well; in particular, 00h is used for an empty descriptor, FFh indicates that the leaf does not contain valid cache information and that leaf 4h should be used instead, and FEh indicates that the leaf does not contain valid TLB information and that leaf 18h should be used instead. The descriptors may appear in any order. For each of the four registers (EAX, EBX, ECX, EDX), if bit 31 is set, then the register should not be considered to contain valid descriptors (e.g. on Itanium in IA-32 mode, CPUID(EAX=2) returns 80000000h in EDX; this should be interpreted to mean that EDX contains no valid information, not that it contains a descriptor for a 512K L2 cache). The table below provides, for known descriptor values, a condensed description of the cache or TLB indicated by that descriptor value (or other information, where that applies).
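The register-to-string packing can be checked without touching the hardware: interpreting each of the three 32-bit values quoted above as four little-endian ASCII bytes reproduces the vendor string. A small illustrative Python check (no CPUID instruction is executed here):

```python
import struct

# EBX, EDX, ECX values quoted above for a GenuineIntel processor.
ebx, edx, ecx = 0x756E6547, 0x49656E69, 0x6C65746E
vendor = struct.pack("<III", ebx, edx, ecx).decode("ascii")
print(vendor)  # GenuineIntel
```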
The suffixes used in the table are: This returns the processor's serial number. The processor serial number was introduced on the Intel Pentium III, but due to privacy concerns, this feature is no longer implemented on later models (the PSN feature bit is always cleared). Transmeta's Efficeon and Crusoe processors also provide this feature. AMD CPUs, however, do not implement this feature in any CPU models. For Intel Pentium III CPUs, the serial number is returned in the EDX:ECX registers. For Transmeta Efficeon CPUs, it is returned in the EBX:EAX registers. And for Transmeta Crusoe CPUs, it is returned in the EBX register only. Note that the processor serial number feature must be enabled in the BIOS setting in order to function. These two leaves are used to provide information about the cache hierarchy levels available to the processor core on which the CPUID instruction is run. Leaf 4 is used on Intel processors and leaf 8000'001Dh is used on AMD processors; they both return data in EAX, EBX, ECX and EDX, using the same data format except that leaf 4 returns a few additional fields that are considered "reserved" for leaf 8000'001Dh. They both provide CPU cache information in a series of sub-leaves selected by ECX; to get information about all the cache levels, it is necessary to invoke CPUID repeatedly, with EAX=4 or 8000'001Dh and ECX set to increasing values starting from 0 (0, 1, 2, ...) until a sub-leaf not describing any caches (EAX[4:0]=0) is found. The sub-leaves that do return cache information may appear in any order, but all of them will appear before the first sub-leaf not describing any caches. In the table below, fields that are defined for leaf 4 but not for leaf 8000'001Dh are highlighted with yellow cell coloring and a (#4) item. For any caches that are valid and not fully associative, the value returned in ECX is the number of sets in the cache minus 1. (For fully associative caches, ECX should be treated as if it returned the value 0.) For any given cache described by a sub-leaf of CPUID leaf 4 or 8000'001Dh, the total cache size in bytes can be computed as: CacheSize = (EBX[11:0]+1) * (EBX[21:12]+1) * (EBX[31:22]+1) * (ECX+1) For example, on Intel Crystalwell CPUs, executing CPUID with EAX=4 and ECX=4 will cause the processor to return the following size information for its level-4 cache in EBX and ECX: EBX=03C0F03F and ECX=00001FFF. This should be taken to mean that this cache has a cache line size of 64 bytes (EBX[11:0]+1), has 16 cache lines per tag (EBX[21:12]+1), and is 16-way set-associative (EBX[31:22]+1) with 8192 sets (ECX+1), for a total size of 64*16*16*8192 = 134217728 bytes, or 128 binary megabytes. These two leaves are used for processor topology (thread, core, package) and cache hierarchy enumeration in Intel multi-core (and hyperthreaded) processors.[89] As of 2013[update], AMD does not use these leaves but has alternate ways of doing the core enumeration.[90] Unlike most other CPUID leaves, leaf Bh will return different values in EDX depending on which logical processor the CPUID instruction runs on; the value returned in EDX is actually the x2APIC id of the logical processor. The x2APIC id space is not continuously mapped to logical processors, however; there can be gaps in the mapping, meaning that some intermediate x2APIC ids don't necessarily correspond to any logical processor. Additional information for mapping the x2APIC ids to cores is provided in the other registers.
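As a quick check of the CacheSize formula quoted above, the Crystalwell register values can be unpacked with a few lines of Python (pure arithmetic on the quoted values; no CPUID instruction is executed):

```python
ebx, ecx = 0x03C0F03F, 0x00001FFF   # leaf 4, sub-leaf 4 on Crystalwell

line_size  = (ebx & 0xFFF) + 1            # EBX[11:0]  + 1 -> 64-byte lines
partitions = ((ebx >> 12) & 0x3FF) + 1    # EBX[21:12] + 1 -> 16 lines per tag
ways       = ((ebx >> 22) & 0x3FF) + 1    # EBX[31:22] + 1 -> 16-way set-associative
sets       = ecx + 1                      # ECX        + 1 -> 8192 sets

print(line_size * partitions * ways * sets)   # 134217728 bytes = 128 MiB
```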
Although leaf Bh has sub-leaves (selected by ECX as described further below), the value returned in EDX is only affected by the logical processor on which the instruction is running, not by the sub-leaf. The processor topology exposed by leaf Bh is a hierarchical one, but with the strange caveat that the order of (logical) levels in this hierarchy doesn't necessarily correspond to the order in the physical hierarchy (SMT/core/package). However, every logical level can be queried as an ECX sub-leaf (of the Bh leaf) for its correspondence to a "level type", which can be either SMT, core, or "invalid". The level id space starts at 0 and is continuous, meaning that if a level id is invalid, all higher level ids will also be invalid. The level type is returned in bits 15:08 of ECX, while the number of logical processors at the level queried is returned in EBX. Finally, the connection between these levels and x2APIC ids is returned in EAX[4:0] as the number of bits that the x2APIC id must be shifted in order to obtain a unique id at the next level. As an example, a dual-core Westmere processor capable of hyperthreading (thus having two cores and four threads in total) could have x2APIC ids 0, 1, 4 and 5 for its four logical processors. Leaf Bh (=EAX), sub-leaf 0 (=ECX) of CPUID could for instance return 100h in ECX, meaning that level 0 describes the SMT (hyperthreading) layer, and return 2 in EBX because there are two logical processors (SMT units) per physical core. The value returned in EAX for this 0-sub-leaf should be 1 in this case, because shifting the aforementioned x2APIC ids to the right by one bit gives a unique core number (at the next level of the level id hierarchy) and erases the SMT id bit inside each core. A simpler way to interpret this information is that the last bit (bit number 0) of the x2APIC id identifies the SMT/hyperthreading unit inside each core in our example. Advancing to sub-leaf 1 (by making another call to CPUID with EAX=Bh and ECX=1) could for instance return 201h in ECX, meaning that this is a core-type level, and 4 in EBX because there are 4 logical processors in the package; the EAX returned could be any value greater than 3, because it so happens that bit number 2 is used to identify the core in the x2APIC id. Note that bit number 1 of the x2APIC id is not used in this example. However, the EAX returned at this level could well be 4 (and it happens to be so on a Clarkdale Core i3 5x0) because that also gives a unique id at the package level (=0 obviously) when shifting the x2APIC id by 4 bits. Finally, you may wonder what the EAX=4 leaf can tell us that we didn't find out already. In EAX[31:26] it returns the APIC mask bits reserved for a package; that would be 111b in our example because bits 0 to 2 are used for identifying logical processors inside this package, but bit 1 is also reserved although not used as part of the logical processor identification scheme. In other words, APIC ids 0 to 7 are reserved for the package, even though half of these values don't map to a logical processor. The cache hierarchy of the processor is explored by looking at the sub-leaves of leaf 4. The APIC ids are also used in this hierarchy to convey information about how the different levels of cache are shared by the SMT units and cores.
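The shift arithmetic in the Westmere example can be mirrored in a few lines of Python. The helper below is an illustrative sketch, not production topology-detection code; it splits an x2APIC id using the example's shift widths of 1 (SMT) and 4 (package):

```python
def split_x2apic(x2apic_id, smt_shift=1, pkg_shift=4):
    """Split an x2APIC id into (SMT unit, core, package) fields (sketch)."""
    smt  = x2apic_id & ((1 << smt_shift) - 1)                       # bit 0
    core = (x2apic_id >> smt_shift) & ((1 << (pkg_shift - smt_shift)) - 1)
    pkg  = x2apic_id >> pkg_shift
    return smt, core, pkg

for apic_id in (0, 1, 4, 5):       # the four logical processors in the example
    print(apic_id, split_x2apic(apic_id))
# -> (0,0,0), (1,0,0), (0,2,0), (1,2,0): two SMT units on each of two cores,
#    with bit 1 of the x2APIC id reserved but unused, exactly as described.
```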
To continue our example, the L2 cache, which is shared by SMT units of the same core but not between physical cores on the Westmere, is indicated by EAX[26:14] being set to 1, while the information that the L3 cache is shared by the whole package is indicated by setting those bits to (at least) 111b. The cache details, including cache type, size, and associativity, are communicated via the other registers on leaf 4. Beware that older versions of the Intel app note 485 contain some misleading information, particularly with respect to identifying and counting cores in a multi-core processor;[91] errors from misinterpreting this information have even been incorporated in the Microsoft sample code for using CPUID, even for the 2013 edition of Visual Studio,[92] and also in the sandpile.org page for CPUID,[93] but the Intel code sample for identifying processor topology[89] has the correct interpretation, and the current Intel Software Developer's Manual uses clearer language. The (open source) cross-platform production code[94] from Wildfire Games also implements the correct interpretation of the Intel documentation. Topology detection examples involving older (pre-2010) Intel processors that lack x2APIC (and thus don't implement the EAX=Bh leaf) are given in a 2010 Intel presentation.[95] Beware that using that older detection method on 2010 and newer Intel processors may overestimate the number of cores and logical processors, because the old detection method assumes there are no gaps in the APIC id space, and this assumption is violated by some newer processors (starting with the Core i3 5x0 series); however, these newer processors also come with an x2APIC, so their topology can be correctly determined using the EAX=Bh leaf method. This returns feature information related to the MONITOR and MWAIT instructions in the EAX, EBX, ECX and EDX registers. This returns feature bits in the EAX register and additional information in the EBX, ECX and EDX registers. This returns extended feature flags in EBX, ECX, and EDX. Returns the maximum ECX value for EAX=7 in EAX. This returns extended feature flags in all four registers. This returns extended feature flags in EDX; EAX, EBX and ECX are reserved. IPRED_DIS prevents instructions at an indirect branch target from speculatively executing until the branch target address is resolved. BHI_DIS_S prevents predicted targets of indirect branches executed in ring 0/1/2 from being selected based on branch history from branches executed in ring 3. This leaf is used to enumerate XSAVE features and state components. The XSAVE instruction set extension is designed to save/restore CPU extended state (typically for the purpose of context switching) in a manner that can be extended to cover new instruction set extensions without the OS context-switching code needing to understand the specifics of the new extensions. This is done by defining a series of state-components, each with a size and offset within a given save area, and each corresponding to a subset of the state needed for one CPU extension or another. The EAX=0Dh CPUID leaf is used to provide information about which state-components the CPU supports and what their sizes/offsets are, so that the OS can reserve the proper amount of space and set the associated enable bits. The state-components can be subdivided into two groups: user-state (state items that are visible to the application, e.g. AVX-512 vector registers), and supervisor-state (state items that affect the application but are not directly user-visible, e.g.
user-mode interrupt configuration). The user-state items are enabled by setting their associated bits in the XCR0 control register, while the supervisor-state items are enabled by setting their associated bits in the IA32_XSS (0DA0h) MSR; the indicated state items then become the state-components that can be saved and restored with the XSAVE/XRSTOR family of instructions. The XSAVE mechanism can handle up to 63 state-components in this manner. State-components 0 and 1 (x87 and SSE, respectively) have fixed offsets and sizes; for state-components 2 to 62, their sizes, offsets and a few additional flags can be queried by executing CPUID with EAX=0Dh and ECX set to the index of the state-component. This will return the following items in EAX, EBX and ECX (with EDX being reserved): (This offset is 0 for supervisor state-components, since these can only be saved with the XSAVES/XRSTORS instructions, which use compaction.) If this bit is set for a state-component, then, when storing state with compaction, padding will be inserted between the preceding state-component and this state-component as needed to provide 64-byte alignment. If this bit is not set, the state-component will be stored directly after the preceding one. Attempting to query an unsupported state-component in this manner results in EAX, EBX, ECX and EDX all being set to 0. Sub-leaves 0 and 1 of CPUID leaf 0Dh are used to provide feature information: As of July 2023, the XSAVE state-components that have been architecturally defined are: This leaf provides information about the supported capabilities of the Intel Software Guard Extensions (SGX) feature. The leaf provides multiple sub-leaves, selected with ECX. Sub-leaf 0 provides information about supported SGX leaf functions in EAX and maximum supported SGX enclave sizes in EDX; ECX is reserved. EBX provides a bitmap of bits that can be set in the MISCSELECT field in the SECS (SGX Enclave Control Structure); this field is used to control information written to the MISC region of the SSA (SGX Save State Area) when an AEX (SGX Asynchronous Enclave Exit) occurs. Sub-leaf 1 provides a bitmap of which bits can be set in the 128-bit ATTRIBUTES field of SECS in EDX:ECX:EBX:EAX (this applies to the SECS copy used as input to the ENCLS[ECREATE] leaf function). The top 64 bits (given in EDX:ECX) are a bitmap of which bits can be set in the XFRM (X-feature request mask); this mask is a bitmask of which CPU state-components (see leaf 0Dh) will be saved to the SSA in case of an AEX, and it has the same layout as the XCR0 control register. The other bits are given in EAX and EBX, as follows: Sub-leaves 2 and up are used to provide information about which physical memory regions are available for use as EPC (Enclave Page Cache) sections under SGX. This leaf provides feature information for Intel Processor Trace (also known as Real Time Instruction Trace). The value returned in EAX is the index of the highest sub-leaf supported for CPUID with EAX=14h. EBX and ECX provide feature flags; EDX is reserved. These two leaves provide information about various frequencies in the CPU in EAX, EBX and ECX (EDX is reserved in both leaves). If the returned values in EBX and ECX of leaf 15h are both nonzero, then the TSC (Time Stamp Counter) frequency in Hz is given by TSCFreq = ECX*(EBX/EAX). On some processors (e.g. Intel Skylake), CPUID_15h_ECX is zero but CPUID_16h_EAX is present and not zero.
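When leaf 15h does enumerate both the ratio and the crystal frequency, the TSC computation is a one-liner. An illustrative Python sketch with invented register values (these particular numbers are not taken from any real part):

```python
# Hypothetical leaf-15h values: TSC/crystal ratio 168/2, 24 MHz crystal.
eax15, ebx15, ecx15 = 2, 168, 24_000_000

tsc_freq = ecx15 * ebx15 // eax15      # TSCFreq = ECX * (EBX / EAX)
print(tsc_freq)                        # 2016000000 Hz, i.e. 2.016 GHz
```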
On all known processors where that is the case (ECX of leaf 15h zero, but EAX of leaf 16h nonzero),[121] the TSC frequency is equal to the Processor Base Frequency, and the Core Crystal Clock frequency in Hz can be computed as CoreCrystalFreq = (CPUID_16h_EAX * 1000000) * (CPUID_15h_EAX/CPUID_15h_EBX), since leaf 16h reports the base frequency in MHz. On processors that enumerate the TSC/Core Crystal Clock ratio in CPUID leaf 15h, the APIC timer frequency will be the Core Crystal Clock frequency divided by the divisor specified by the APIC's Divide Configuration Register.[122] This leaf is present in systems where an x86 CPU IP core is implemented in an SoC (system on chip) from another vendor; whereas the other leaves of CPUID provide information about the x86 CPU core, this leaf provides information about the SoC. This leaf takes a sub-leaf index in ECX. Sub-leaf 0 returns a maximum sub-leaf index in EAX (at least 3), and SoC identification information in EBX/ECX/EDX: Sub-leaves 1 to 3 return a 48-byte SoC vendor brand string in UTF-8 format. Sub-leaf 1 returns the first 16 bytes in EAX, EBX, ECX, EDX (in that order); sub-leaf 2 returns the next 16 bytes and sub-leaf 3 returns the last 16 bytes. The string is allowed but not required to be null-terminated. This leaf provides feature information for Intel Key Locker in EAX, EBX and ECX; EDX is reserved. When ECX=0, the highest supported "palette" sub-leaf is enumerated in EAX. When ECX≥1, information on palette n is returned. This leaf returns information on the TMUL (tile multiplication) unit. This leaf returns feature flags for the TMUL unit. When Intel TDX (Trust Domain Extensions) is active, attempts to execute the CPUID instruction by a TD (Trust Domain) guest will be intercepted by the TDX module. This module will, when CPUID is invoked with EAX=21h and ECX=0 (leaf 21h, sub-leaf 0), return the index of the highest supported sub-leaf for leaf 21h in EAX and a TDX module vendor ID string as a 12-byte ASCII string in EBX, EDX, ECX (in that order). Intel's own module implementation returns the vendor ID string "IntelTDX" (with four trailing spaces)[124]; for this module, additional feature information is not available through CPUID and must instead be obtained through the TDX-specific TDCALL instruction. This leaf is reserved in hardware and will (on processors whose highest basic leaf is 21h or higher) return 0 in EAX/EBX/ECX/EDX when run directly on the CPU. This returns a maximum supported sub-leaf in EAX and AVX10 feature information in EBX.[113] (ECX and EDX are reserved.) Sub-leaf 1 is reserved for AVX10 features not bound to a version. The highest function is returned in EAX. This leaf is only present on Xeon Phi processors.[127] This function returns feature flags. When the CPUID instruction is executed under Intel VT-x or AMD-V virtualization, it will be intercepted by the hypervisor, enabling the hypervisor to return CPUID feature flags that differ from those of the underlying hardware. CPUID leaves 40000000h to 4FFFFFFFh are not implemented in hardware, and are reserved for use by hypervisors to provide hypervisor-specific identification and feature information through this interception mechanism. For leaf 40000000h, the hypervisor is expected to return the index of the highest supported hypervisor CPUID leaf in EAX, and a 12-character hypervisor ID string in EBX, ECX, EDX (in that order). For leaf 40000001h, the hypervisor may return an interface identification signature in EAX; e.g.
hypervisors that wish to advertise that they are Hyper-V compatible may return 0x31237648 ("Hv#1") in EAX.[128][129] The formats of leaves 40000001h and up to the highest supported leaf are otherwise hypervisor-specific. Hypervisors that implement these leaves will normally also set bit 31 of ECX for CPUID leaf 1 to indicate their presence. Hypervisors that expose more than one hypervisor interface may provide additional sets of CPUID leaves for the additional interfaces, at a spacing of 100h leaves per interface. For example, when QEMU is configured to provide both Hyper-V and KVM interfaces, it will provide Hyper-V information starting from CPUID leaf 40000000h and KVM information starting from leaf 40000100h.[130][131] Some hypervisors that are known to return a hypervisor ID string in leaf 40000000h include: (The lower-case string is also used in bhyve-derived hypervisors such as xhyve and HyperKit.[136]) (KGT also returns a signature in CPUID leaf 3: ECX=0x4D4D5645 "EVMM" and EDX=0x43544E49 "INTC".) The highest calling parameter is returned in EAX. EBX/ECX/EDX return the manufacturer ID string (same as EAX=0) on AMD but not Intel CPUs. This returns extended feature flags in EDX and ECX. Many of the bits in EDX (bits 0 through 9, 12 through 17, 23, and 24) are duplicates of EDX from the EAX=1 leaf; these bits are highlighted in light yellow. (These duplicated bits are present on AMD but not Intel CPUs.) AMD feature flags are as follows:[150][151] These instructions were first introduced on Model 7;[152] the CPUID bit to indicate their support was moved[153] to EDX bit 11 from Model 8 (AMD K6-2) onwards. These return the processor brand string in EAX, EBX, ECX and EDX. CPUID must be issued with each parameter in sequence to get the entire 48-byte ASCII processor brand string.[162] It is necessary to check whether the feature is present in the CPU by issuing CPUID with EAX = 80000000h first and checking if the returned value is not less than 80000004h. The string is specified in Intel/AMD documentation to be null-terminated; however, this is not always the case (e.g. the DM&P Vortex86DX3 and AMD Ryzen 7 6800HS are known to return non-null-terminated brand strings in leaves 80000002h-80000004h[163][164]), and software should not rely on it. On AMD processors, from the 180 nm Athlon onwards (AuthenticAMD Family 6 Model 2 and later), it is possible to modify the processor brand string returned by CPUID leaves 80000002h-80000004h by using the WRMSR instruction to write a 48-byte replacement string to MSRs C0010030h-C0010035h.[159][165] This can also be done on AMD Geode GX/LX, albeit using MSRs 300Ah-300Fh.[166] In some cases, determining the CPU vendor requires examining not just the Vendor ID in CPUID leaf 0 and the CPU signature in leaf 1, but also the Processor Brand String in leaves 80000002h-80000004h. Known cases include: This provides information about the processor's level-1 cache and TLB characteristics in EAX, EBX, ECX and EDX as follows:[a] Returns details of the L2 cache in ECX, including the line size in bytes (bits 07-00), the type of associativity (encoded by a 4-bit field; bits 15-12) and the cache size in KB (bits 31-16). This function provides information about power management, power reporting and RAS (reliability, availability and serviceability) capabilities of the CPU. The PPIN_CTL (C001_02F0) and PPIN (C001_02F1) MSRs are present.[174] This leaf returns information about AMD SVM (Secure Virtual Machine) features in EAX, EBX and EDX. Later AMD documentation, such as #25481 "CPUID specification" rev 2.18[179] and later, only lists the bit as reserved.
In rev 2.30[180] and later, a different bit is listed as reserved for hypervisor use: CPUID.(EAX=1):ECX[bit 31]. Rev 2.28 of #25481 lists the bit as "Ssse3Sse5Dis";[182] in rev 2.34, it is listed as having been removed from the spec at rev 2.32 under the name "SseIsa10Compat".[183] Several AMD CPU models will, for CPUID with EAX=8FFFFFFFh, return an Easter egg string in EAX, EBX, ECX and EDX.[190][191] Known Easter egg strings include: Returns the index of the highest Centaur leaf in EAX. If the returned value in EAX is less than C0000001h, then Centaur extended leaves are not supported. Present in CPUs from VIA and Zhaoxin. On IDT WinChip CPUs (CentaurHauls Family 5), the extended leaves C0000001h-C0000005h do not encode any Centaur-specific functionality but are instead aliases of leaves 80000001h-80000005h.[193] This leaf returns Centaur feature information (mainly VIA/Zhaoxin PadLock) in EDX.[194][195][196][197] (EAX, EBX and ECX are reserved.) This information is easy to access from other languages as well. For instance, the C code for gcc below prints the first five values returned by CPUID: In MSVC and Borland/Embarcadero C compilers (bcc32) flavored inline assembly, the clobbering information is implicit in the instructions: If either version was written in plain assembly language, the programmer must manually save the results of EAX, EBX, ECX, and EDX elsewhere if they want to keep using the values. GCC also provides a header called <cpuid.h> on systems that have CPUID. The __cpuid is a macro expanding to inline assembly. Typical usage would be: But if one requested an extended feature not present on this CPU, they would not notice and might get random, unexpected results. A safer version is also provided in <cpuid.h>. It checks for extended features and does some more safety checks. The output values are not passed using reference-like macro parameters, but more conventional pointers. Notice the ampersands in &a, &b, &c, &d and the conditional statement. If the __get_cpuid call receives a correct request, it will return a non-zero value; if it fails, zero.[199] The Microsoft Visual C compiler has the builtin function __cpuid(), so the CPUID instruction may be embedded without using inline assembly, which is handy since the x86-64 version of MSVC does not allow inline assembly at all. The same program for MSVC would be: Many interpreted or compiled scripting languages are capable of using CPUID via an FFI library. One such implementation shows usage of the Ruby FFI module to execute assembly language that includes the CPUID opcode. .NET 5 and later versions provide the System.Runtime.Intrinsics.X86.X86Base.CpuId method. For instance, the C# code below prints the processor brand if it supports the CPUID instruction: Some of the non-x86 CPU architectures also provide certain forms of structured information about the processor's abilities, commonly as a set of special registers: DSP and transputer-like chip families have not taken up the instruction in any noticeable way, in spite of having (in relative terms) as many variations in design. Alternate ways of silicon identification might be present; for example, DSPs from Texas Instruments contain a memory-based register set for each functional unit that starts with identifiers determining the unit type and model, its ASIC design revision and features selected at the design phase, and continues with unit-specific control and data registers.
Access to these areas is performed by simply using the existing load and store instructions; thus, for such devices, there is no need for extending the register set for device identification purposes.[citation needed]
https://en.wikipedia.org/wiki/CPUID
In complexity theory and computability theory, an oracle machine is an abstract machine used to study decision problems. It can be visualized as a black box, called an oracle, which is able to solve certain problems in a single operation. The problem can be of any complexity class. Even undecidable problems, such as the halting problem, can be used. An oracle machine can be conceived as a Turing machine connected to an oracle. The oracle, in this context, is an entity capable of solving some problem, which for example may be a decision problem or a function problem. The problem does not have to be computable; the oracle is not assumed to be a Turing machine or computer program. The oracle is simply a "black box" that is able to produce a solution for any instance of a given computational problem: an oracle machine can perform all of the usual operations of a Turing machine, and can also query the oracle to obtain a solution to any instance of the computational problem for that oracle. For example, if the problem is a decision problem for a set A of natural numbers, the oracle machine supplies the oracle with a natural number, and the oracle responds with "yes" or "no" stating whether that number is an element of A. There are many equivalent definitions of oracle Turing machines, as discussed below. The one presented here is from van Melkebeek (2003, p. 43). An oracle machine, like a Turing machine, includes: In addition to these components, an oracle machine also includes: From time to time, the oracle machine may enter the ASK state. When this happens, the following actions are performed in a single computational step: The effect of changing to the ASK state is thus to receive, in a single step, a solution to the problem instance that is written on the oracle tape. There are many alternative definitions to the one presented above. Many of these are specialized for the case where the oracle solves a decision problem. In this case: These definitions are equivalent from the point of view of Turing computability: a function is oracle-computable from a given oracle under all of these definitions if it is oracle-computable under any of them. The definitions are not equivalent, however, from the point of view of computational complexity. A definition such as the one by van Melkebeek, using an oracle tape which may have its own alphabet, is required in general. The complexity class of decision problems solvable by an algorithm in class A with an oracle for a language L is called A^L. For example, P^SAT is the class of problems solvable in polynomial time by a deterministic Turing machine with an oracle for the Boolean satisfiability problem. The notation A^B can be extended to a set of languages B (or a complexity class B), by using the following definition: When a language L is complete for some class B, then A^L = A^B, provided that machines in A can execute reductions used in the completeness definition of class B. In particular, since SAT is NP-complete with respect to polynomial time reductions, P^SAT = P^NP. However, if A = DLOGTIME, then A^SAT may not equal A^NP. (The definition of {\displaystyle A^{B}} given above is not completely standard. In some contexts, such as the proof of the time and space hierarchy theorems, it is more useful to assume that the abstract machine defining class {\displaystyle A} only has access to a single oracle for one language. In this context, {\displaystyle A^{B}} is not defined if the complexity class {\displaystyle B} does not have any complete problems with respect to the reductions available to {\displaystyle A}.)
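The P^SAT idea can be made concrete with a toy Python sketch of an "oracle machine": the oracle is just a callable black box (implemented here by brute force, since a true black box is not available), and the machine around it uses one query per variable to turn the SAT decision oracle into a search procedure. All names and the clause encoding (lists of signed integers) are invented for illustration:

```python
from itertools import product

def sat_oracle(clauses, n_vars):
    """Brute-force SAT decision procedure standing in for the oracle."""
    return any(all(any((lit > 0) == assign[abs(lit) - 1] for lit in clause)
                   for clause in clauses)
               for assign in product([False, True], repeat=n_vars))

def find_assignment(clauses, n_vars):
    """Oracle machine: recovers a satisfying assignment with n_vars+1 queries."""
    if not sat_oracle(clauses, n_vars):                    # one initial ASK
        return None
    fixed = []
    for v in range(1, n_vars + 1):
        trial = clauses + [[lit] for lit in fixed] + [[v]]
        fixed.append(v if sat_oracle(trial, n_vars) else -v)   # one ASK per variable
    return fixed

print(find_assignment([[1, 2], [-1, 2], [-2, 3]], 3))      # -> [1, 2, 3]
```

The machine never inspects how the oracle works; replacing the brute-force body with anything else that answers the same yes/no question leaves the machine's behavior unchanged, which is exactly the point of the abstraction.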
It is understood that NP ⊆ P^NP, but the question of whether NP^NP, P^NP, NP, and P are equal remains tentative at best. It is believed they are different, and this leads to the definition of the polynomial hierarchy. Oracle machines are useful for investigating the relationship between complexity classes P and NP, by considering the relationship between P^A and NP^A for an oracle A. In particular, it has been shown there exist languages A and B such that P^A = NP^A and P^B ≠ NP^B.[4] The fact that the P = NP question relativizes both ways is taken as evidence that answering this question is difficult, because a proof technique that relativizes (i.e., is unaffected by the addition of an oracle) will not answer the P = NP question.[5] Most proof techniques relativize.[6] One may consider the case where an oracle is chosen randomly from among all possible oracles (an infinite set). It has been shown that in this case, with probability 1, P^A ≠ NP^A.[7] When a question is true for almost all oracles, it is said to be true for a random oracle. This choice of terminology is justified by the fact that random oracles support a statement with probability 0 or 1 only. (This follows from Kolmogorov's zero–one law.) This is only weak evidence that P ≠ NP, since a statement may be true for a random oracle but false for ordinary Turing machines;[original research?] for example, IP^A ≠ PSPACE^A for a random oracle A but IP = PSPACE.[8] A machine with an oracle for the halting problem can determine whether particular Turing machines will halt on particular inputs, but it cannot determine, in general, whether machines equivalent to itself will halt. This creates a hierarchy of machines, each with a more powerful halting oracle and an even harder halting problem. This hierarchy of machines can be used to define the arithmetical hierarchy.[9] In cryptography, oracles are used to make arguments for the security of cryptographic protocols where a hash function is used. A security reduction (proof of security) for the protocol is given in the case where, instead of a hash function, a random oracle answers each query randomly but consistently; the oracle is assumed to be available to all parties including the attacker, as the hash function is. Such a proof shows that unless the attacker solves the hard problem at the heart of the security reduction, they must make use of some interesting property of the hash function to break the protocol; they cannot treat the hash function as a black box (i.e., as a random oracle).
https://en.wikipedia.org/wiki/Oracle_machine
Direct-attached storage (DAS) is digital storage directly attached to the computer accessing it, as opposed to storage accessed over a computer network (i.e. network-attached storage). DAS consists of one or more storage units such as hard drives, solid-state drives, or optical disc drives within an external enclosure. The term "DAS" is a retronym to contrast with storage area network (SAN) and network-attached storage (NAS). A typical DAS system is made of a data storage device (for example, enclosures holding a number of hard disk drives) connected directly to a computer through a host bus adapter (HBA). Between those two points there is no network device (such as a hub, switch, or router), and this is the main characteristic of DAS. The main protocols used for DAS connections are Parallel ATA, SATA, eSATA,[1] NVMe, Parallel SCSI, SAS, USB, and IEEE 1394. Most functions found in modern storage do not depend on whether the storage is attached directly to servers (DAS) or via a network (SAN and NAS). In enterprise environments, direct-attached storage systems can utilize storage devices that have higher endurance in terms of data workload capability, along with scalability in the amount of capacity that storage arrays can achieve, compared to consumer-grade NAS and other storage devices.[2] The key difference between DAS and NAS is that DAS does not incorporate any network hardware or a related operating environment for sharing storage resources independently of the host, and so is only available via the host to which the DAS is attached. DAS is typically considered much faster than NAS due to the lower latency of the host connection, although contemporary network and direct-connection throughput typically exceeds the raw read/write performance of the storage units themselves. A SAN (storage area network) has more in common with DAS than with NAS, the key difference being that DAS is a one-to-one relationship between storage and host, whereas a SAN is many-to-many.
https://en.wikipedia.org/wiki/Direct-attached_storage
Fuzzy mathematics is the branch of mathematics including fuzzy set theory and fuzzy logic that deals with partial inclusion of elements in a set on a spectrum, as opposed to simple binary "yes" or "no" (0 or 1) inclusion. It started in 1965 after the publication of Lotfi Asker Zadeh's seminal work Fuzzy sets.[1] Linguistics is an example of a field that utilizes fuzzy set theory. A fuzzy subset A of a set X is a function A: X → L, where L is the interval [0, 1]. This function is also called a membership function. A membership function is a generalization of an indicator function (also called a characteristic function) of a subset, defined for L = {0, 1}. More generally, one can use any complete lattice L in a definition of a fuzzy subset A.[2] The evolution of the fuzzification of mathematical concepts can be broken down into three stages:[3] Usually, a fuzzification of mathematical concepts is based on a generalization of these concepts from characteristic functions to membership functions. Let A and B be two fuzzy subsets of X. The intersection A ∩ B and union A ∪ B are defined as follows: (A ∩ B)(x) = min(A(x), B(x)), (A ∪ B)(x) = max(A(x), B(x)) for all x in X. Instead of min and max one can use a t-norm and t-conorm, respectively;[4] for example, min(a, b) can be replaced by the multiplication ab. A straightforward fuzzification is usually based on the min and max operations, because in this case more properties of traditional mathematics can be extended to the fuzzy case. An important generalization principle used in the fuzzification of algebraic operations is the closure property. Let * be a binary operation on X. The closure property for a fuzzy subset A of X is that for all x, y in X, A(x*y) ≥ min(A(x), A(y)). Let (G, *) be a group and A a fuzzy subset of G. Then A is a fuzzy subgroup of G if for all x, y in G, A(x*y^{-1}) ≥ min(A(x), A(y^{-1})). A similar generalization principle is used, for example, for the fuzzification of the transitivity property. Let R be a fuzzy relation on X, i.e. R is a fuzzy subset of X × X. Then R is (fuzzy-)transitive if for all x, y, z in X, R(x, z) ≥ min(R(x, y), R(y, z)). Fuzzy subgroupoids and fuzzy subgroups were introduced in 1971 by A. Rosenfeld.[5][6][7] Analogues of other mathematical subjects have been translated to fuzzy mathematics, such as fuzzy field theory and fuzzy Galois theory,[8] fuzzy topology,[9][10] fuzzy geometry,[11][12][13][14] fuzzy orderings,[15] and fuzzy graphs.[16][17][18]
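On a finite universe, the min/max operations above take only a few lines of Python; the membership values in this sketch are arbitrary illustrative examples:

```python
X = ["a", "b", "c"]
A = {"a": 0.2, "b": 0.7, "c": 1.0}   # membership degrees in [0, 1]
B = {"a": 0.5, "b": 0.4, "c": 0.9}

intersection  = {x: min(A[x], B[x]) for x in X}   # (A ∩ B)(x) = min(A(x), B(x))
union         = {x: max(A[x], B[x]) for x in X}   # (A ∪ B)(x) = max(A(x), B(x))
product_tnorm = {x: A[x] * B[x] for x in X}       # t-norm variant: ab instead of min

print(intersection)  # {'a': 0.2, 'b': 0.4, 'c': 0.9}
print(union)         # {'a': 0.5, 'b': 0.7, 'c': 1.0}
```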
https://en.wikipedia.org/wiki/Fuzzy_mathematics
In Internet culture, the 1% rule is a general rule of thumb pertaining to participation in an Internet community, stating that only 1% of the users of a website actively create new content, while the other 99% of the participants only lurk. Variants include the 1–9–90 rule (sometimes the 90–9–1 principle or the 89:10:1 ratio),[1] which states that in a collaborative website such as a wiki, 90% of the participants of a community only consume content, 9% of the participants change or update content, and 1% of the participants add content. Similar rules are known in information science; for instance, the 80/20 rule known as the Pareto principle states that 20 percent of a group will produce 80 percent of the activity, regardless of how the activity is defined. According to the 1% rule, about 1% of Internet users create content, while 99% are just consumers of that content. For example, for every person who posts on a forum, generally about 99 other people view that forum but do not post. The term was coined by authors and bloggers Ben McConnell and Jackie Huba,[2] although there were earlier references to this concept[3] that did not use the name. The terms lurk and lurking, in reference to online activity, are used to refer to online observation without engaging others in the Internet community.[4] A 2007 study of radical jihadist Internet forums found 87% of users had never posted on the forums, 13% had posted at least once, 5% had posted 50 or more times, and only 1% had posted 500 or more times.[5] A 2014 peer-reviewed paper entitled "The 1% Rule in Four Digital Health Social Networks: An Observational Study" empirically examined the 1% rule in health-oriented online forums. The paper concluded that the 1% rule was consistent across the four support groups, with a handful of "Superusers" generating the vast majority of content.[6] A study later that year, from a separate group of researchers, replicated the 2014 van Mierlo study in an online forum for depression.[7] Results indicated that the fitted frequency distribution followed Zipf's law, which is a specific type of power law. The "90–9–1" version of this rule states that for websites where users can both create and edit content, 1% of people create content, 9% edit or modify that content, and 90% view the content without contributing. However, the actual percentage is likely to vary depending upon the subject. For example, if a forum requires content submissions as a condition of entry, the percentage of people who participate will probably be significantly higher than 1%, but the content producers will still be a minority of users. This is validated in a study conducted by Michael Wu, who uses economics techniques to analyze the participation inequality across hundreds of communities segmented by industry, audience type, and community focus.[8] The 1% rule is often misunderstood to apply to the Internet in general, but it applies more specifically to any given Internet community. It is for this reason that one can see evidence for the 1% principle on many websites, but, aggregated together, one can see a different distribution. This latter distribution is still unknown and likely to shift, but various researchers and pundits have speculated on how to characterize the sum total of participation.
Research in late 2012 suggested that only 23% of the population (rather than 90%) could properly be classified as lurkers, while 17% of the population could be classified as intense contributors of content.[9] Several years prior, results were reported on a sample of students from Chicago where 60% of the sample created content in some form.[10] A similar concept was introduced by Will Hill of AT&T Laboratories[11] and later cited by Jakob Nielsen; this was the earliest known reference to the term "participation inequality" in an online context.[12] The term regained public attention in 2006 when it was used in a strictly quantitative context within a blog entry on the topic of marketing.[2]
https://en.wikipedia.org/wiki/1%25_rule
Inmathematics, ageneralized hypergeometric seriesis apower seriesin which the ratio of successivecoefficientsindexed bynis arational functionofn. The series, if convergent, defines ageneralized hypergeometric function, which may then be defined over a wider domain of the argument byanalytic continuation. The generalized hypergeometric series is sometimes just called the hypergeometric series, though this term also sometimes just refers to theGaussian hypergeometric series. Generalized hypergeometric functions include the (Gaussian)hypergeometric functionand theconfluent hypergeometric functionas special cases, which in turn have many particularspecial functionsas special cases, such aselementary functions,Bessel functions, and theclassical orthogonal polynomials. A hypergeometric series is formally defined as apower series in which the ratio of successive coefficients is arational functionofn. That is, whereA(n) andB(n) arepolynomialsinn. For example, in the case of the series for theexponential function, we have: So this satisfies the definition withA(n) = 1andB(n) =n+ 1. It is customary to factor out the leading term, so β0is assumed to be 1. The polynomials can be factored into linear factors of the form (aj+n) and (bk+n) respectively, where theajandbkarecomplex numbers. For historical reasons, it is assumed that (1 +n) is a factor ofB. If this is not already the case then bothAandBcan be multiplied by this factor; the factor cancels so the terms are unchanged and there is no loss of generality. The ratio between consecutive coefficients now has the form wherecanddare the leading coefficients ofAandB. The series then has the form or, by scalingzby the appropriate factor and rearranging, This has the form of anexponential generating function. This series is usually denoted by or Using the rising factorial orPochhammer symbol this can be written (Note that this use of the Pochhammer symbol is not standard; however it is the standard usage in this context.) When all the terms of the series are defined and it has a non-zeroradius of convergence, then the series defines ananalytic function. Such a function, and itsanalytic continuations, is called thehypergeometric function. The case when the radius of convergence is 0 yields many interesting series in mathematics, for example theincomplete gamma functionhas theasymptotic expansion which could be writtenza−1e−z2F0(1−a,1;;−z−1). However, the use of the termhypergeometric seriesis usually restricted to the case where the series defines an actual analytic function. The ordinary hypergeometric series should not be confused with thebasic hypergeometric series, which, despite its name, is a rather more complicated and recondite series. The "basic" series is theq-analogof the ordinary hypergeometric series. There are several such generalizations of the ordinary hypergeometric series, including the ones coming fromzonal spherical functionsonRiemannian symmetric spaces. The series without the factor ofn! in the denominator (summed over all integersn, including negative) is called thebilateral hypergeometric series. There are certain values of theajandbkfor which the numerator or the denominator of the coefficients is 0. Excluding these cases, theratio testcan be applied to determine the radius of convergence. The question of convergence forp=q+1 whenzis on the unit circle is more difficult. 
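Before turning to convergence on the unit circle, note that the defining coefficient ratio translates directly into code. The following is a minimal sketch (the function name and the truncation at a fixed number of terms are ours) that sums a generalized hypergeometric series by updating each term with the ratio ∏(a_j + n) / ∏(b_k + n) · z/(n + 1):

```python
from math import exp, prod

def pFq(a_list, b_list, z, terms=60):
    """Truncated generalized hypergeometric series; assumes z lies inside
    the radius of convergence and no b_k is a non-positive integer."""
    total, term = 0.0, 1.0          # the term for n = 0 is 1
    for n in range(terms):
        total += term
        num = prod(a + n for a in a_list)
        den = prod(b + n for b in b_list)
        term *= num / den * z / (n + 1)   # defining ratio of coefficients
    return total

print(pFq([], [], 1.0), exp(1.0))   # 0F0(;;z) = e^z
print(pFq([1.0], [], 0.5), 2.0)     # 1F0(1;;z) = 1/(1-z), a geometric series
```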
It can be shown that the series converges absolutely atz= 1 if Further, ifp=q+1,∑i=1pai≥∑j=1qbj{\displaystyle \sum _{i=1}^{p}a_{i}\geq \sum _{j=1}^{q}b_{j}}andzis real, then the following convergence result holdsQuigley et al. (2013): It is immediate from the definition that the order of the parametersaj, or the order of the parametersbkcan be changed without changing the value of the function. Also, if any of the parametersajis equal to any of the parametersbk, then the matching parameters can be "cancelled out", with certain exceptions when the parameters are non-positive integers. For example, This cancelling is a special case of a reduction formula that may be applied whenever a parameter on the top row differs from one on the bottom row by a non-negative integer.[1][2] The following basic identity is very useful as it relates the higher-order hypergeometric functions in terms of integrals over the lower order ones[3] The generalized hypergeometric function satisfies and (zddz+bk−1)pFq[a1,…,apb1,…,bk,…,bq;z]=(bk−1)pFq[a1,…,apb1,…,bk−1,…,bq;z]forbk≠1{\displaystyle {\begin{aligned}\left(z{\frac {\rm {d}}{{\rm {d}}z}}+b_{k}-1\right){}_{p}F_{q}\left[{\begin{array}{c}a_{1},\dots ,a_{p}\\b_{1},\dots ,b_{k},\dots ,b_{q}\end{array}};z\right]&=(b_{k}-1)\;{}_{p}F_{q}\left[{\begin{array}{c}a_{1},\dots ,a_{p}\\b_{1},\dots ,b_{k}-1,\dots ,b_{q}\end{array}};z\right]{\text{ for }}b_{k}\neq 1\end{aligned}}} Additionally, ddzpFq[a1,…,apb1,…,bq;z]=∏i=1pai∏j=1qbjpFq[a1+1,…,ap+1b1+1,…,bq+1;z]{\displaystyle {\begin{aligned}{\frac {\rm {d}}{{\rm {d}}z}}\;{}_{p}F_{q}\left[{\begin{array}{c}a_{1},\dots ,a_{p}\\b_{1},\dots ,b_{q}\end{array}};z\right]&={\frac {\prod _{i=1}^{p}a_{i}}{\prod _{j=1}^{q}b_{j}}}\;{}_{p}F_{q}\left[{\begin{array}{c}a_{1}+1,\dots ,a_{p}+1\\b_{1}+1,\dots ,b_{q}+1\end{array}};z\right]\end{aligned}}} Combining these gives a differential equation satisfied byw=pFq: Take the following operator: From the differentiation formulas given above, the linear space spanned by contains each of Since the space has dimension 2, any three of thesep+q+2 functions are linearly dependent:[4][5] These dependencies can be written out to generate a large number of identities involvingpFq{\displaystyle {}_{p}F_{q}}. For example, in the simplest non-trivial case, So This, and other important examples, can be used to generatecontinued fractionexpressions known asGauss's continued fraction. Similarly, by applying the differentiation formulas twice, there are(p+q+32){\displaystyle {\binom {p+q+3}{2}}}such functions contained in which has dimension three so any four are linearly dependent. This generates more identities and the process can be continued. The identities thus generated can be combined with each other to produce new ones in a different way. A function obtained by adding ±1 to exactly one of the parametersaj,bkin is calledcontiguousto Using the technique outlined above, an identity relating0F1(;a;z){\displaystyle {}_{0}F_{1}(;a;z)}and its two contiguous functions can be given, six identities relating1F1(a;b;z){\displaystyle {}_{1}F_{1}(a;b;z)}and any two of its four contiguous functions, and fifteen identities relating2F1(a,b;c;z){\displaystyle {}_{2}F_{1}(a,b;c;z)}and any two of its six contiguous functions have been found. The first one was derived in the previous paragraph. The last fifteen were given by (Gauss 1813). A number of other hypergeometric function identities were discovered in the nineteenth and twentieth centuries. 
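The differentiation formula above is easy to spot-check numerically. A minimal sketch (parameter values chosen arbitrarily) using mpmath's hyper and diff:

```python
import mpmath as mp

a, b, c = 0.3, 1.2, 2.5   # assumed parameters for a 2F1 example
z = 0.4

# d/dz 2F1(a, b; c; z) = (a b / c) 2F1(a+1, b+1; c+1; z)
lhs = mp.diff(lambda t: mp.hyper([a, b], [c], t), z)
rhs = (a * b / c) * mp.hyper([a + 1, b + 1], [c + 1], z)
print(lhs, rhs)   # the two values agree to high precision
```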
A 20th-century contribution to the methodology of proving these identities is the Egorychev method. Saalschütz's theorem[6] (Saalschütz 1890) is For an extension of this theorem, see a research paper by Rakha & Rathie. According to (Andrews, Askey & Roy 1999, p. 69), it was in fact first discovered by Pfaff in 1797.[7] Dixon's identity,[8] first proved by Dixon (1902), gives the sum of a well-poised 3F2 at 1: For a generalization of Dixon's identity, see a paper by Lavoie, et al. Dougall's formula (Dougall 1907) gives the sum of a very well-poised series that is terminating and 2-balanced. Terminating means that m is a non-negative integer and 2-balanced means that Many of the other formulas for special values of hypergeometric functions can be derived from this as special or limiting cases. It is also called the Dougall–Ramanujan identity. It is a special case of Jackson's identity, and it gives Dixon's identity and Saalschütz's theorem as special cases.[9] Identity 1. where Identity 2. which links Bessel functions to 2F2; this reduces to Kummer's second formula for b = 2a: Identity 3. Identity 4. which is a finite sum if b − d is a non-negative integer. Kummer's relation is Clausen's formula was used by de Branges to prove the Bieberbach conjecture. Many of the special functions in mathematics are special cases of the confluent hypergeometric function or the hypergeometric function; see the corresponding articles for examples. As noted earlier, 0F0(;;z) = e^z. The differential equation for this function is dw/dz = w, which has solutions w = k e^z where k is a constant. The functions of the form 0F1(;a;z) are called confluent hypergeometric limit functions and are closely related to Bessel functions. The relationship is: (a numerical check of this relation appears at the end of this section). The differential equation for this function is or When a is not a positive integer, the substitution gives a linearly independent solution, so the general solution is where k, l are constants. (If a is a positive integer, the independent solution is given by the appropriate Bessel function of the second kind.) A special case is: An important case is: The differential equation for this function is or which has solutions where k is a constant. The functions of the form 1F1(a;b;z) are called confluent hypergeometric functions of the first kind, also written M(a;b;z). The incomplete gamma function γ(a,z) is a special case. The differential equation for this function is or When b is not a positive integer, the substitution gives a linearly independent solution, so the general solution is where k, l are constants. When a is a non-positive integer, −n, 1F1(−n;b;z) is a polynomial. Up to constant factors, these are the Laguerre polynomials. This implies Hermite polynomials can be expressed in terms of 1F1 as well. Relations to other functions are known for certain parameter combinations only. The function x · 1F2(1/2; 3/2, 3/2; −x²/4) is the antiderivative of the cardinal sine.
With modified values of a_1 and b_1, one obtains the antiderivative of sin(x^β)/x^α.[10] The Lommel function is s_{μ,ν}(z) = z^{μ+1} / ((μ − ν + 1)(μ + ν + 1)) · 1F2(1; μ/2 − ν/2 + 3/2, μ/2 + ν/2 + 3/2; −z²/4).[11] The confluent hypergeometric function of the second kind can be written as:[12] Historically, the most important are the functions of the form 2F1(a,b;c;z). These are sometimes called Gauss's hypergeometric functions, classical standard hypergeometric or often simply hypergeometric functions. The term generalized hypergeometric function is used for the functions pFq if there is risk of confusion. This function was first studied in detail by Carl Friedrich Gauss, who explored the conditions for its convergence. The differential equation for this function is or It is known as the hypergeometric differential equation. When c is not a positive integer, the substitution gives a linearly independent solution, so the general solution for |z| < 1 is where k, l are constants. Different solutions can be derived for other values of z. In fact, there are 24 solutions, known as the Kummer solutions, derivable using various identities, valid in different regions of the complex plane. When a is a non-positive integer, −n, 2F1(−n,b;c;z) is a polynomial. Up to constant factors and scaling, these are the Jacobi polynomials. Several other classes of orthogonal polynomials, up to constant factors, are special cases of Jacobi polynomials, so these can be expressed using 2F1 as well. This includes Legendre polynomials and Chebyshev polynomials. A wide range of integrals of elementary functions can be expressed using the hypergeometric function, e.g.: The Mott polynomials can be written as:[13] The function is the dilogarithm[14] The function is a Hahn polynomial. The function is a Wilson polynomial. All roots of a quintic equation can be expressed in terms of radicals and the Bring radical, which is the real solution to x⁵ + x + a = 0. The Bring radical can be written as:[15] The functions for q ∈ ℕ₀ and p ∈ ℕ are the polylogarithm. For each integer n ≥ 2, the roots of the polynomial xⁿ − x + t can be expressed as a sum of at most n − 1 hypergeometric functions of type n+1Fn, which can always be reduced by eliminating at least one pair of a and b parameters.[15] The generalized hypergeometric function is linked to the Meijer G-function and the MacRobert E-function. Hypergeometric series were generalised to several variables, for example by Paul Emile Appell and Joseph Kampé de Fériet, but a comparable general theory took long to emerge. Many identities were found, some quite remarkable. A generalization, the q-series analogues, called the basic hypergeometric series, were given by Eduard Heine in the late nineteenth century. Here, the ratios of successive terms, instead of being a rational function of n, are a rational function of qⁿ. Another generalization, the elliptic hypergeometric series, are those series where the ratio of terms is an elliptic function (a doubly periodic meromorphic function) of n. During the twentieth century this was a fruitful area of combinatorial mathematics, with numerous connections to other fields.
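The confluent limit functions mentioned earlier satisfy the standard relation J_ν(z) = (z/2)^ν / Γ(ν + 1) · 0F1(; ν + 1; −z²/4). The following is a minimal check (the order and argument are chosen arbitrarily) using mpmath:

```python
import mpmath as mp

v, x = 1.5, 2.0   # arbitrary Bessel order and argument

bessel = mp.besselj(v, x)
via_0f1 = (x / 2) ** v / mp.gamma(v + 1) * mp.hyper([], [v + 1], -x**2 / 4)
print(mp.almosteq(bessel, via_0f1))   # True: J_v(x) recovered from 0F1
```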
There are a number of new definitions of general hypergeometric functions, by Aomoto, Israel Gelfand and others, and applications, for example, to the combinatorics of arranging a number of hyperplanes in complex N-space (see arrangement of hyperplanes). Special hypergeometric functions occur as zonal spherical functions on Riemannian symmetric spaces and semi-simple Lie groups. Their importance and role can be understood through the following example: the hypergeometric series 2F1 has the Legendre polynomials as a special case, and when considered in the form of spherical harmonics, these polynomials reflect, in a certain sense, the symmetry properties of the two-sphere or, equivalently, the rotations given by the Lie group SO(3). In tensor product decompositions of concrete representations of this group, Clebsch–Gordan coefficients are met, which can be written as 3F2 hypergeometric series. Bilateral hypergeometric series are a generalization of hypergeometric functions where one sums over all integers, not just the positive ones. Fox–Wright functions are a generalization of generalized hypergeometric functions where the Pochhammer symbols in the series expression are generalised to gamma functions of linear expressions in the index n.
https://en.wikipedia.org/wiki/Generalized_hypergeometric_function
The creator economy, also known as creator marketing or the influencer economy, is a software-driven economy built around creators who produce and distribute content, products, or services directly to their audience, leveraging social media platforms and AI tools.[1] These creators, who may include social media influencers, YouTubers, bloggers, artists, podcasters, and even independent professionals, generate revenue from their creations through a variety of monetization strategies, including advertising, sponsorships, product sales, crowdfunding, and subscription-based services.[2] According to Goldman Sachs Research, the ongoing growth of the creator economy will likely benefit companies that possess a combination of factors, including a large global user base, access to substantial capital, robust AI-powered recommendation engines, versatile monetization tools, comprehensive data analytics, and integrated e-commerce options.[3] Examples of creator economy software platforms include YouTube, TikTok, Instagram, Facebook, Twitch, Spotify, Substack, OnlyFans and Patreon.[4][5][6][7][8] In 1997, Stanford University's Paul Saffo suggested that the creator economy had first come into being as the "new economy". Early creators in that economy worked with animations and illustrations, but at the time there was no marketplace infrastructure available to enable them to generate revenue.[citation needed] The term "creator" was coined by YouTube in 2011 to be used instead of "YouTube star", an expression that at the time could only apply to famous individuals on the platform. The term has since become omnipresent and is used to describe anyone creating any form of online content.[9] A number of platforms such as TikTok, Snapchat, YouTube, and Facebook have set up funds with which to pay creators.[10][11][12][13][14] The large majority of content creators derive no monetary gain from their creations, with most of the benefits accruing to the platforms, which can make significant revenue from their uploads.[15] As few as 0.1% of creators are able to earn a living through their channels.[16]
https://en.wikipedia.org/wiki/Creator_economy
In logic, Peirce's law is named after the philosopher and logician Charles Sanders Peirce. It was taken as an axiom in his first axiomatisation of propositional logic. It can be thought of as the law of excluded middle written in a form that involves only one sort of connective, namely implication. In propositional calculus, Peirce's law says that ((P→Q)→P)→P. Written out, this means that P must be true if there is a proposition Q such that the truth of P follows from the truth of "if P then Q". Peirce's law does not hold in intuitionistic logic or intermediate logics and cannot be deduced from the deduction theorem alone. Under the Curry–Howard isomorphism, Peirce's law is the type of continuation operators, e.g. call/cc in Scheme.[1] Here is Peirce's own statement of the law: Peirce goes on to point out an immediate application of the law: Warning: as explained in the text, "a" here does not denote a propositional atom, but something like the quantified propositional formula ∀p p. The formula ((x→y)→a)→x would not be a tautology if a were interpreted as an atom. In intuitionistic logic, if P is proven or rejected, or if Q is proven valid, then Peirce's law for the two propositions holds. But the law's special case when Q is rejected, called consequentia mirabilis, is equivalent to excluded middle already over minimal logic. This also means that Peirce's law entails classical logic over intuitionistic logic, as is shown below. Firstly, from P→Q follows the equivalence P↔(P∧Q), and so (P→Q)→P is equivalent to (P→Q)→(P∧Q). With this, one can also establish Peirce's law by establishing the equivalent form ((P→Q)→(P∧Q))→P. Considering the case Q=⊥ likewise shows how double-negation elimination ¬¬P→P implies consequentia mirabilis, and this direction uses only minimal logic. Now in intuitionistic logic, explosion can be used for ⊥→(P∧⊥), and so here consequentia mirabilis also implies double-negation elimination. As the double-negated excluded middle is already valid even in minimal logic, double-negation elimination thus further implies excluded middle over intuitionistic logic. In the other direction, one can intuitionistically also show that excluded middle implies the full Peirce's law directly. To this end, note that using the principle of explosion, excluded middle may be expressed as P∨(P→Q). In words, this may be expressed as: "Every proposition P either holds or implies any other proposition." Now to prove the law, note that (P∨R)→((R→P)→P) is derivable from just implication introduction on the one hand and modus ponens on the other. Finally, in place of R consider P→Q. Another proof of the law in classical logic proceeds by passing through the classically valid reverse disjunctive syllogism twice: first note that ¬¬P is implied by (¬¬P∧¬Q)∨P, which is intuitionistically equivalent to ¬(¬P∨Q)∨P.
Now explosion entails that ¬A∨B implies A→B, and using excluded middle for A here entails that these two are in fact equivalent. Taken together, this means that in classical logic P is equivalent to (P→Q)→P. Intuitionistically, not even the constraint ¬Q→P always implies Peirce's law for two propositions. Postulating the latter to be valid in its propositional form results in Smetanich's intermediate logic. Peirce's law allows one to enhance the technique of using the deduction theorem to prove theorems. Suppose one is given a set of premises Γ and one wants to deduce a proposition Z from them. With Peirce's law, one can add (at no cost) additional premises of the form Z→P to Γ. For example, suppose we are given P→Z and (P→Q)→Z and we wish to deduce Z so that we can use the deduction theorem to conclude that (P→Z)→(((P→Q)→Z)→Z) is a theorem. Then we can add another premise Z→Q. From that and P→Z, we get P→Q. Then we apply modus ponens with (P→Q)→Z as the major premise to get Z. Applying the deduction theorem, we get that (Z→Q)→Z follows from the original premises. Then we use Peirce's law in the form ((Z→Q)→Z)→Z and modus ponens to derive Z from the original premises. Then we can finish off proving the theorem as we originally intended: (P→Z)→(((P→Q)→Z)→Z). One reason that Peirce's law is important is that it can substitute for the law of excluded middle in the logic which only uses implication. The sentences which can be deduced from the axiom schemas: (where P, Q, R contain only "→" as a connective) are all the tautologies which use only "→" as a connective. Since Peirce's law implies the law of the excluded middle, it must always fail in non-classical intuitionistic logics. A simple explicit counterexample is that of Gödel many-valued logics, a fuzzy logic where truth values are real numbers between 0 and 1, with material implication defined by: and where Peirce's law as a formula can be simplified to: where its always being true would be equivalent to the statement that u > v implies u = 1, which is true only if 0 and 1 are the only allowed values. At the same time, however, the expression cannot ever be equal to the bottom truth value of the logic and its double negation is always true.
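Both claims, classical validity and failure under the Gödel implication, can be checked mechanically. The following is a minimal sketch (the function names are ours; the Gödel implication is as defined above):

```python
from itertools import product

# Classical two-valued check: ((P -> Q) -> P) -> P is a tautology.
def implies(p, q):
    return (not p) or q   # classical material implication

assert all(
    implies(implies(implies(p, q), p), p)
    for p, q in product([False, True], repeat=2)
)

# Goedel fuzzy logic on [0, 1]: u -> v is 1 if u <= v, else v.
def g_implies(u, v):
    return 1.0 if u <= v else v

def peirce(u, v):
    return g_implies(g_implies(g_implies(u, v), u), u)

# Counterexample: with u = 0.5, v = 0.0 the law evaluates to 0.5, not 1,
# yet it never evaluates to 0, matching the remarks above.
print(peirce(0.5, 0.0))
```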
https://en.wikipedia.org/wiki/Peirce%27s_law
A greedy algorithm is any algorithm that follows the problem-solving heuristic of making the locally optimal choice at each stage.[1] In many problems, a greedy strategy does not produce an optimal solution, but a greedy heuristic can yield locally optimal solutions that approximate a globally optimal solution in a reasonable amount of time. For example, a greedy strategy for the travelling salesman problem (which is of high computational complexity) is the following heuristic: "At each step of the journey, visit the nearest unvisited city." This heuristic does not intend to find the best solution, but it terminates in a reasonable number of steps; finding an optimal solution to such a complex problem typically requires unreasonably many steps. In mathematical optimization, greedy algorithms optimally solve combinatorial problems having the properties of matroids and give constant-factor approximations to optimization problems with submodular structure. Greedy algorithms produce good solutions on some mathematical problems, but not on others. Most problems for which they work will have two properties: A common technique for proving the correctness of greedy algorithms uses an inductive exchange argument.[3] The exchange argument demonstrates that any solution different from the greedy solution can be transformed into the greedy solution without degrading its quality. This proof pattern typically follows these steps (by contradiction): In some cases, an additional step may be needed to prove that no optimal solution can strictly improve upon the greedy solution. Greedy algorithms fail to produce the optimal solution for many other problems and may even produce the unique worst possible solution. One example is the travelling salesman problem mentioned above: for each number of cities, there is an assignment of distances between the cities for which the nearest-neighbour heuristic produces the unique worst possible tour.[4] For other possible examples, see horizon effect. Greedy algorithms can be characterized as being 'short sighted' and 'non-recoverable'. They are ideal only for problems that have an 'optimal substructure'. Despite this, for many simple problems, the best-suited algorithms are greedy. It is important, however, to note that the greedy algorithm can be used as a selection algorithm to prioritize options within a search or branch-and-bound algorithm. There are a few variations to the greedy algorithm:[5] Greedy algorithms have a long history of study in combinatorial optimization and theoretical computer science. Greedy heuristics are known to produce suboptimal results on many problems,[6] and so natural questions are: A large body of literature exists answering these questions for general classes of problems, such as matroids, as well as for specific problems, such as set cover. A matroid is a mathematical structure that generalizes the notion of linear independence from vector spaces to arbitrary sets. If an optimization problem has the structure of a matroid, then the appropriate greedy algorithm will solve it optimally.[7] A function f defined on subsets of a set Ω is called submodular if for every S,T ⊆ Ω we have that f(S) + f(T) ≥ f(S∪T) + f(S∩T). Suppose one wants to find a set S which maximizes f.
The greedy algorithm, which builds up a set S by incrementally adding the element which increases f the most at each step, produces a set S with f(S) at least (1 − 1/e) max_{X⊆Ω} f(X).[8] That is, greedy achieves at least a constant fraction (1 − 1/e) ≈ 0.63 of the optimal value. Similar guarantees are provable when additional constraints, such as cardinality constraints,[9] are imposed on the output, though often slight variations on the greedy algorithm are required. See[10] for an overview; a code sketch of the cardinality-constrained greedy appears at the end of this section. Other problems for which the greedy algorithm gives a strong guarantee, but not an optimal solution, include Many of these problems have matching lower bounds; i.e., the greedy algorithm does not perform better than the guarantee in the worst case. Greedy algorithms typically (but not always) fail to find the globally optimal solution because they usually do not operate exhaustively on all the data. They can commit to certain choices too early, preventing them from finding the best overall solution later. For example, all known greedy coloring algorithms for the graph coloring problem and all other NP-complete problems do not consistently find optimum solutions. Nevertheless, they are useful because they are quick to think up and often give good approximations to the optimum. If a greedy algorithm can be proven to yield the global optimum for a given problem class, it typically becomes the method of choice because it is faster than other optimization methods like dynamic programming. Examples of such greedy algorithms are Kruskal's algorithm and Prim's algorithm for finding minimum spanning trees and the algorithm for finding optimum Huffman trees. Greedy algorithms appear in network routing as well. Using greedy routing, a message is forwarded to the neighbouring node which is "closest" to the destination. The notion of a node's location (and hence "closeness") may be determined by its physical location, as in geographic routing used by ad hoc networks. Location may also be an entirely artificial construct as in small world routing and distributed hash table.
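As an illustration of the submodular guarantee above, the following is a minimal sketch (the universe, the sets, and the budget k are arbitrary assumptions) of the greedy algorithm for maximizing a coverage function, a standard example of a monotone submodular function, under a cardinality constraint:

```python
def greedy_max_coverage(sets, k):
    """Pick at most k sets, each time adding the set with the largest
    marginal coverage gain; coverage is monotone submodular, so this
    achieves at least a (1 - 1/e) fraction of the optimal coverage."""
    chosen, covered = [], set()
    for _ in range(k):
        best = max(sets, key=lambda s: len(s - covered))
        if not best - covered:      # no remaining marginal gain
            break
        chosen.append(best)
        covered |= best
    return chosen, covered

# Hypothetical instance: five sets over a small universe, budget k = 2.
sets = [frozenset(s) for s in ({1, 2, 3}, {3, 4}, {4, 5, 6}, {6, 7}, {1, 7})]
chosen, covered = greedy_max_coverage(sets, k=2)
print(chosen, covered)
```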
https://en.wikipedia.org/wiki/Greedy_algorithm
In statistics, a location parameter of a probability distribution is a scalar- or vector-valued parameter x₀ which determines the "location" or shift of the distribution. In the literature of location parameter estimation, probability distributions with such a parameter are formally defined in one of the following equivalent ways: A direct example of a location parameter is the parameter μ of the normal distribution. To see this, note that the probability density function f(x | μ, σ) of a normal distribution N(μ, σ²) can have the parameter μ factored out and be written as: thus fulfilling the first of the definitions given above. The above definition indicates, in the one-dimensional case, that if x₀ is increased, the probability density or mass function shifts rigidly to the right, maintaining its exact shape. A location parameter can also be found in families having more than one parameter, such as location–scale families. In this case, the probability density function or probability mass function will be a special case of the more general form where x₀ is the location parameter, θ represents additional parameters, and f_θ is a function parametrized on the additional parameters. Source:[4] Let f(x) be any probability density function and let μ and σ > 0 be any given constants. Then the function g(x | μ, σ) = (1/σ) f((x − μ)/σ) is a probability density function. The location family is then defined as follows: let f(x) be any probability density function. Then the family of probability density functions F = { f(x − μ) : μ ∈ ℝ } is called the location family with standard probability density function f(x), where μ is called the location parameter for the family. An alternative way of thinking of location families is through the concept of additive noise. If x₀ is a constant and W is random noise with probability density f_W(w), then X = x₀ + W has probability density f_{x₀}(x) = f_W(x − x₀) and its distribution is therefore part of a location family. For the continuous univariate case, consider a probability density function f(x | θ), x ∈ [a, b] ⊂ ℝ, where θ is a vector of parameters. A location parameter x₀ can be added by defining: it can be proved that g is a p.d.f. by verifying that it satisfies the two conditions[5] g(x | θ, x₀) ≥ 0 and ∫₋∞^∞ g(x | θ, x₀) dx = 1. g integrates to 1 because: now making the variable change u = x − x₀ and updating the integration interval accordingly yields: because f(x | θ) is a p.d.f. by hypothesis. g(x | θ, x₀) ≥ 0 follows because g takes the same values as f, which is a p.d.f. and hence non-negative.
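As a numerical illustration of the additive-noise view above, the following is a minimal sketch (the choice of a standard normal density for f_W and the value of x₀ are arbitrary assumptions): shifting samples of the noise by x₀ produces exactly the location-family density f_W(x − x₀).

```python
import numpy as np
from scipy import stats

x0 = 2.5                                            # assumed location parameter
w = stats.norm.rvs(size=100_000, random_state=0)    # noise W with density f_W
x = x0 + w                                          # X = x0 + W

grid = np.linspace(-2.0, 7.0, 5)
shifted_pdf = stats.norm.pdf(grid - x0)       # f_W(x - x0), the shifted density
built_in = stats.norm.pdf(grid, loc=x0)       # scipy's own location parameter
print(np.allclose(shifted_pdf, built_in))     # True: same family member
print(abs(x.mean() - x0) < 0.05)              # sample mean sits near x0
```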
https://en.wikipedia.org/wiki/Location_parameter
Bluetooth beacons are hardware transmitters, a class of Bluetooth Low Energy (LE) devices that broadcast their identifier to nearby portable electronic devices. The technology enables smartphones, tablets and other devices to perform actions when in close proximity to a beacon. Bluetooth beacons use Bluetooth Low Energy proximity sensing to transmit a universally unique identifier[1] picked up by a compatible app or operating system. The identifier and several bytes sent with it can be used to determine the device's physical location,[2] track customers, or trigger a location-based action on the device such as a check-in on social media or a push notification. One application is distributing messages at a specific point of interest, for example a store, a bus stop, a room or a more specific location like a piece of furniture or a vending machine. This is similar to previously used geopush technology based on GPS, but with a much reduced impact on battery life and much extended precision. Another application is an indoor positioning system,[3][4][5] which helps smartphones determine their approximate location or context. With the help of a Bluetooth beacon, a smartphone's software can approximately find its position relative to a Bluetooth beacon in a store. Brick and mortar retail stores use the beacons for mobile commerce, offering customers special deals through mobile marketing,[6] and can enable mobile payments through point of sale systems. Bluetooth beacons differ from some other location-based technologies in that the broadcasting device (beacon) is only a one-way transmitter to the receiving smartphone or receiving device, and necessitates a specific app installed on the device to interact with the beacons. Thus only the installed app, and not the Bluetooth beacon transmitter, can track users. Bluetooth beacon transmitters come in a variety of form factors, including small coin cell devices, USB sticks, and generic Bluetooth 4.0 capable USB dongles.[7] The development of the "short-link" radio technology, later named Bluetooth, was initiated in 1989 by Dr. Nils Rydbeck, CTO at Ericsson Mobile in Lund, and Dr. Johan Ullman. The purpose was to develop wireless headsets, according to two inventions by Johan Ullman, SE 8902098-6, issued 1989-06-12, and SE 9202239, issued 1992-07-24. Since its creation, the Bluetooth standard has gone through many generations, each adding different features. Bluetooth 1.2 allowed for faster speeds of up to roughly 700 kbit/s. Bluetooth 2.0 improved on this with speeds up to 3 Mbit/s. Bluetooth 2.1 improved device pairing speed and security. Bluetooth 3.0 again improved transfer speed, up to 24 Mbit/s. In 2010, Bluetooth 4.0 (Low Energy) was released with its main focus being reduced power consumption. Before Bluetooth 4.0, the majority of Bluetooth connections were two-way: both devices listen and talk to each other. Although this two-way communication is still possible with Bluetooth 4.0, one-way communication is also possible, allowing a Bluetooth device to transmit information without listening for it. These one-way "beacons" do not require a paired connection like previous Bluetooth devices, so they have new useful applications. Bluetooth beacons operate using the Bluetooth 4.0 Low Energy standard, so battery-powered devices are possible. Battery life of devices varies depending on manufacturer. The Bluetooth LE protocol is significantly more power efficient than Bluetooth Classic.
Several chipset makers, including Texas Instruments[8] and Nordic Semiconductor, now supply chipsets optimized for iBeacon use. Power consumption depends on the iBeacon configuration parameters of advertising interval and transmit power. Battery life can range from 1 to 48 months. Apple's recommended setting of a 100 ms advertising interval with a coin cell battery provides for 1–3 months of life, which increases to 2–3 years as the advertising interval is increased to 900 ms.[9] Battery consumption of the phones is a factor that must be taken into account when deploying beacon-enabled apps. A recent report has shown that older phones tend to draw more battery power in the vicinity of iBeacons, while newer phones can be more efficient in the same environment.[10] In addition to the time spent by the phone scanning, the number of scans and the number of beacons in the vicinity are also significant factors for battery drain. An energy-efficient iBeacon application needs to consider these aspects in order to strike a good balance between app responsiveness and battery consumption. Bluetooth beacons can also come in the form of USB dongles. These small USB beacons can be powered by a standard USB port, which makes them ideal for long-term permanent installations. Bluetooth beacons can be used to send a packet of information that contains a universally unique identifier (UUID). This UUID is used to trigger events specific to that beacon. In the case of Apple's iBeacon, the UUID is recognized by an app on the user's device, which then triggers an event. This event is fully customizable by the app developer; in the case of advertising, the event might be a push notification with an ad. However, with a UID-based system the user's device must connect to an online server which is capable of understanding the beacon's UUID. Once the UUID is sent to the server, the appropriate message action is sent to the user's device. Other methods of advertising are also possible with beacons: URIBeacon and Google's Eddystone allow for a URI transmission mode that, unlike iBeacon's UID, does not require an outside server for recognition. URI beacons transmit a URI, which could be a link to a webpage, and the user sees that URI directly on their phone.[11] Beacons can be associated with the art pieces in a museum to encourage further interaction. For example, a notification can be sent to a user's mobile device when the user is in proximity to a particular art piece. The notification alerts the user to the nearby art piece, and if the user indicates further interest, a specific app can be installed to interact with it.[12] In general, a native app is needed for a mobile device to interact with the beacon if the beacon uses the iBeacon protocol, whereas if Eddystone is employed, the user can interact with the art piece through a physical web URL broadcast by the Eddystone beacon. Indoor positioning with beacons falls into three categories: implementations with many beacons per room, implementations with one beacon per room, and implementations with a few beacons per building. Indoor navigation with Bluetooth is still in its infancy, but attempts have been made to find a working solution. With multiple beacons per room, trilateration can be used to estimate a user's position to within about 2 meters.[13] Bluetooth beacons are capable of transmitting their Received Signal Strength Indicator (RSSI) value in addition to other data.
This RSSI value is calibrated by the manufacturer of the beacon to be the signal strength of the beacon at a known distance, typically one meter. Using the known output signal strength of the beacon and the signal strength observed by the receiving device, an approximation can be made of the distance between the beacon and the device (a sketch of this calculation appears at the end of this section). However, this approximation is not very reliable, so for more accurate position tracking other methods are preferred. Since its release in 2010, many studies have been conducted using Bluetooth beacons for tracking. A few methods have been tested to find the best way of combining the RSSI values for tracking. Neural networks have been proposed as a good way of reducing the error in estimation.[13] A stigmergic approach has also been tested; this method uses an intensity map to estimate a user's location.[14] Bluetooth LE specification 5.1 added further, more precise methods for position determination using multiple beacons. With only one beacon per room, a user can use their known room position in conjunction with a virtual map of all the rooms in a building to navigate it. A building with many separate rooms may need a different beacon configuration for navigation. With one beacon in each room, a user can use an app to know the room they are in, and a simple shortest-path algorithm can be used to give them the best route to the room they are looking for. This configuration requires a digital map of the building, but attempts have been made to make this map creation easier.[15] Beacons can be used in conjunction with pedestrian dead reckoning (PDR) techniques to add checkpoints to a large open space.[16] PDR uses a known last location in conjunction with direction and speed information provided by the user to estimate a person's location, for example as they walk through a building. Using Bluetooth beacons as checkpoints, the user's location can be recalculated to reduce error. In this way a few Bluetooth beacons can be used to cover a large area like a mall. Using the device-tracking capabilities of Bluetooth beacons, in-home patient monitoring is possible: a person's movements and activities can be tracked in their home.[17] Bluetooth beacons are a good alternative to in-house cameras due to their increased level of privacy. Additionally, Bluetooth beacons can be used in hospitals or other workplaces to ensure workers meet certain standards. For example, a beacon may be placed at a hand sanitizer dispenser in a hospital; the beacons can help ensure employees are using the station regularly. One use of beacons is as a "key finder", where a beacon is attached to, for example, a keyring and a smartphone app can be used to track the last time the device came in range. Another similar use is to track pets, objects (e.g. baggage) or people. The precision and range of BLE do not match GPS, but beacons are significantly less expensive. Several commercial and free solutions exist, which are based on proximity detection, not precise positioning. For example, Nivea launched the "kid-tracker" campaign in Brazil in 2014.[18] In mid-2013, Apple introduced iBeacons, and experts wrote about how it was designed to help the retail industry by simplifying payments and enabling on-site offers.
On December 6, 2013, Apple activated iBeacons across its 254 US retail stores.[19] McDonald's has used the devices to give special offers to consumers in its fast-food stores.[6] As of May 2014, different hardware iBeacons could be purchased for as little as $5 per device to more than $30 per device.[20] Each of these iBeacons has different default settings for transmit power and iBeacon advertisement frequency. Some hardware iBeacons advertise at as low as 1 Hz, while others can be as fast as 10 Hz.[21] AltBeacon is an open source alternative to iBeacon created by Radius Networks.[22] URIBeacons differ from iBeacons and AltBeacons because rather than broadcasting an identifier, they send a URL which can be understood immediately.[22] Eddystone is Google's standard for Bluetooth beacons. It supports three types of packets: Eddystone-UID, Eddystone-URL, and Eddystone-TLM.[11] Eddystone-UID functions in a very similar way to Apple's iBeacon; however, it supports additional telemetry data with Eddystone-TLM. The telemetry information is sent along with the UID data. The beacon information available includes battery voltage, beacon temperature, number of packets sent since last startup, and beacon uptime.[11] Using the Eddystone protocol, Google had built the now discontinued[23] Google Nearby, which allowed Android users to receive beacon notifications without an app. Although the near-field communication (NFC) environment is very different and has many non-overlapping applications, it is still compared with iBeacons.
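The RSSI-based ranging described earlier is commonly implemented with a log-distance path-loss model. The following is a minimal sketch; the path-loss exponent and the calibration value are assumptions about a typical deployment, not part of any beacon specification:

```python
def estimate_distance_m(rssi_dbm, measured_power_dbm=-59.0, path_loss_n=2.0):
    """Log-distance path-loss estimate of beacon-to-receiver distance.

    measured_power_dbm: calibrated RSSI at 1 m (broadcast by the beacon;
                        -59 dBm is an assumed typical value).
    path_loss_n: environment-dependent exponent (about 2 in free space,
                 higher indoors).
    """
    return 10 ** ((measured_power_dbm - rssi_dbm) / (10 * path_loss_n))

print(estimate_distance_m(-59.0))   # ~1.0 m, the calibration point
print(estimate_distance_m(-75.0))   # ~6.3 m, but noisy in practice
```

As the article notes, this estimate is unreliable on its own, which is why trilateration, neural networks, or checkpointing schemes are layered on top of it.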
https://en.wikipedia.org/wiki/Bluetooth_Low_Energy_beacon
Finite element method(FEM) is a popular method for numerically solvingdifferential equationsarising in engineering andmathematical modeling. Typical problem areas of interest include the traditional fields ofstructural analysis,heat transfer,fluid flow, mass transport, andelectromagnetic potential. Computers are usually used to perform the calculations required. With high-speedsupercomputers, better solutions can be achieved and are often required to solve the largest and most complex problems. FEM is a generalnumerical methodfor solvingpartial differential equationsin two- or three-space variables (i.e., someboundary value problems). There are also studies about using FEM to solve high-dimensional problems.[1]To solve a problem, FEM subdivides a large system into smaller, simpler parts calledfinite elements. This is achieved by a particular spacediscretizationin the space dimensions, which is implemented by the construction of ameshof the object: the numerical domain for the solution that has a finite number of points. FEM formulation of a boundary value problem finally results in a system ofalgebraic equations. The method approximates the unknown function over the domain.[2]The simple equations that model these finite elements are then assembled into a larger system of equations that models the entire problem. FEM then approximates a solution by minimizing an associated error function via thecalculus of variations. Studying oranalyzinga phenomenon with FEM is often referred to as finite element analysis (FEA). The subdivision of a whole domain into simpler parts has several advantages:[3] A typical approach using the method involves the following steps: The global system of equations uses known solution techniques and can be calculated from theinitial valuesof the original problem to obtain a numerical answer. In the first step above, the element equations are simple equations that locally approximate the original complex equations to be studied, where the original equations are oftenpartial differential equations(PDEs). To explain the approximation of this process, FEM is commonly introduced as a special case of theGalerkin method. The process, in mathematical language, is to construct an integral of theinner productof the residual and theweight functions; then, set the integral to zero. In simple terms, it is a procedure that minimizes the approximation error by fitting trial functions into the PDE. The residual is the error caused by the trial functions, and the weight functions arepolynomialapproximation functions that project the residual. The process eliminates all the spatial derivatives from the PDE, thus approximating the PDE locally using the following: These equation sets are element equations. They arelinearif the underlying PDE is linear and vice versa. Algebraic equation sets that arise in the steady-state problems are solved usingnumerical linear algebraicmethods. In contrast,ordinary differential equationsets that occur in the transient problems are solved by numerical integrations using standard techniques such asEuler's methodor theRunge–Kutta method. In the second step above, a global system of equations is generated from the element equations by transforming coordinates from the subdomains' local nodes to the domain's global nodes. This spatial transformation includes appropriateorientation adjustmentsas applied in relation to the referencecoordinate system. The process is often carried out using FEM software withcoordinatedata generated from the subdomains. 
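As a concrete illustration of the local-to-global assembly step just described, the following is a minimal sketch (a uniform mesh of two-node linear 1D elements and a unit material constant are assumptions) that scatters each element's contribution into a global matrix through a node-numbering map:

```python
import numpy as np

n_elements = 4                      # assumed uniform 1D mesh
n_nodes = n_elements + 1
h = 1.0 / n_elements

# Element matrix for a two-node linear element: entries are the integrals
# of products of the local basis-function derivatives over the element.
k_local = np.array([[1.0, -1.0],
                    [-1.0, 1.0]]) / h

K = np.zeros((n_nodes, n_nodes))
for e in range(n_elements):
    nodes = [e, e + 1]              # local-to-global node map for element e
    for a, i in enumerate(nodes):   # scatter-add local entries into K
        for b, j in enumerate(nodes):
            K[i, j] += k_local[a, b]

print(K)  # tridiagonal global matrix assembled from element contributions
```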
The practical application of FEM is known as finite element analysis (FEA). FEA, as applied inengineering, is a computational tool for performingengineering analysis. It includes the use ofmesh generationtechniques for dividing acomplex probleminto smaller elements, as well as the use of software coded with a FEM algorithm. When applying FEA, the complex problem is usually a physical system with the underlyingphysics, such as theEuler–Bernoulli beam equation, theheat equation, or theNavier–Stokes equations, expressed in either PDEs orintegral equations, while the divided, smaller elements of the complex problem represent different areas in the physical system. FEA may be used for analyzing problems over complicated domains (e.g., cars and oil pipelines) when the domain changes (e.g., during a solid-state reaction with a moving boundary), when the desired precision varies over the entire domain, or when the solution lacks smoothness. FEA simulations provide a valuable resource, as they remove multiple instances of creating and testing complex prototypes for various high-fidelity situations.[citation needed]For example, in a frontal crash simulation, it is possible to increase prediction accuracy in important areas, like the front of the car, and reduce it in the rear of the car, thus reducing the cost of the simulation. Another example would be innumerical weather prediction, where it is more important to have accurate predictions over developing highly nonlinear phenomena, such astropical cyclonesin the atmosphere oreddiesin the ocean, rather than relatively calm areas. A clear, detailed, and practical presentation of this approach can be found in the textbookThe Finite Element Method for Engineers.[4] While it is difficult to quote the date of the invention of FEM, the method originated from the need to solve complexelasticityandstructural analysisproblems incivilandaeronautical engineering.[5]Its development can be traced back to work byAlexander Hrennikoff[6]andRichard Courant[7]in the early 1940s. Another pioneer wasIoannis Argyris. In the USSR, the introduction of the practical application of FEM is usually connected withLeonard Oganesyan.[8]It was also independently rediscovered in China byFeng Kangin the late 1950s and early 1960s, based on the computations of dam constructions, where it was called the "finite difference method" based on variation principles. Although the approaches used by these pioneers are different, they share one essential characteristic: themeshdiscretizationof a continuous domain into a set of discrete sub-domains, usually called elements. Hrennikoff's work discretizes the domain by using alatticeanalogy, while Courant's approach divides the domain into finite triangular sub-regions to solvesecond-orderelliptic partial differential equationsthat arise from the problem of thetorsionof acylinder. Courant's contribution was evolutionary, drawing on a large body of earlier results for PDEs developed byLord Rayleigh,Walther Ritz, andBoris Galerkin. The application of FEM gained momentum in the 1960s and 1970s due to the developments ofJ. H. Argyrisand his co-workers at theUniversity of Stuttgart;R. W. Cloughand his co-workers atUniversity of California Berkeley;O. C. Zienkiewiczand his co-workersErnest Hinton,Bruce Irons,[9]and others atSwansea University;Philippe G. Ciarletat the University ofParis 6; andRichard Gallagherand his co-workers atCornell University. During this period, additional impetus was provided by the available open-source FEM programs. 
NASA sponsored the original version ofNASTRAN. University of California Berkeley made the finite element programs SAP IV[10]and, later,OpenSeeswidely available. In Norway, the ship classification society Det Norske Veritas (nowDNV GL) developedSesamin 1969 for use in the analysis of ships.[11]A rigorous mathematical basis for FEM was provided in 1973 with a publication byGilbert StrangandGeorge Fix.[12]The method has since been generalized for thenumerical modelingof physical systems in a wide variety ofengineeringdisciplines, such aselectromagnetism,heat transfer, andfluid dynamics.[13][14] A finite element method is characterized by avariational formulation, a discretization strategy, one or more solution algorithms, and post-processing procedures. Examples of the variational formulation are theGalerkin method, the discontinuous Galerkin method, mixed methods, etc. A discretization strategy is understood to mean a clearly defined set of procedures that cover (a) the creation of finite element meshes, (b) the definition of basis function on reference elements (also called shape functions), and (c) the mapping of reference elements onto the elements of the mesh. Examples of discretization strategies are the h-version,p-version,hp-version,x-FEM,isogeometric analysis, etc. Each discretization strategy has certain advantages and disadvantages. A reasonable criterion in selecting a discretization strategy is to realize nearly optimal performance for the broadest set of mathematical models in a particular model class. Various numerical solution algorithms can be classified into two broad categories; direct and iterative solvers. These algorithms are designed to exploit the sparsity of matrices that depend on the variational formulation and discretization strategy choices. Post-processing procedures are designed to extract the data of interest from a finite element solution. To meet the requirements of solution verification, postprocessors need to provide fora posteriorierror estimation in terms of the quantities of interest. When the errors of approximation are larger than what is considered acceptable, then the discretization has to be changed either by an automated adaptive process or by the action of the analyst. Some very efficient postprocessors provide for the realization ofsuperconvergence. The following two problems demonstrate the finite element method. P1 is a one-dimensional problemP1:{u″(x)=f(x)in(0,1),u(0)=u(1)=0,{\displaystyle {\text{ P1 }}:{\begin{cases}u''(x)=f(x){\text{ in }}(0,1),\\u(0)=u(1)=0,\end{cases}}}wheref{\displaystyle f}is given,u{\displaystyle u}is an unknown function ofx{\displaystyle x}, andu″{\displaystyle u''}is the second derivative ofu{\displaystyle u}with respect tox{\displaystyle x}. P2 is a two-dimensional problem (Dirichlet problem)P2:{uxx(x,y)+uyy(x,y)=f(x,y)inΩ,u=0on∂Ω,{\displaystyle {\text{P2 }}:{\begin{cases}u_{xx}(x,y)+u_{yy}(x,y)=f(x,y)&{\text{ in }}\Omega ,\\u=0&{\text{ on }}\partial \Omega ,\end{cases}}} whereΩ{\displaystyle \Omega }is a connected open region in the(x,y){\displaystyle (x,y)}plane whose boundary∂Ω{\displaystyle \partial \Omega }is nice (e.g., asmooth manifoldor apolygon), anduxx{\displaystyle u_{xx}}anduyy{\displaystyle u_{yy}}denote the second derivatives with respect tox{\displaystyle x}andy{\displaystyle y}, respectively. The problem P1 can be solved directly by computingantiderivatives. However, this method of solving theboundary value problem(BVP) works only when there is one spatial dimension. 
It does not generalize to higher-dimensional problems or problems likeu+V″=f{\displaystyle u+V''=f}. For this reason, we will develop the finite element method for P1 and outline its generalization to P2. Our explanation will proceed in two steps, which mirror two essential steps one must take to solve a boundary value problem (BVP) using the FEM. After this second step, we have concrete formulae for a large but finite-dimensional linear problem whose solution will approximately solve the original BVP. This finite-dimensional problem is then implemented on acomputer. The first step is to convert P1 and P2 into their equivalentweak formulations. Ifu{\displaystyle u}solves P1, then for any smooth functionv{\displaystyle v}that satisfies the displacement boundary conditions, i.e.v=0{\displaystyle v=0}atx=0{\displaystyle x=0}andx=1{\displaystyle x=1}, we have Conversely, ifu{\displaystyle u}withu(0)=u(1)=0{\displaystyle u(0)=u(1)=0}satisfies (1) for every smooth functionv(x){\displaystyle v(x)}then one may show that thisu{\displaystyle u}will solve P1. The proof is easier for twice continuously differentiableu{\displaystyle u}(mean value theorem) but may be proved in adistributionalsense as well. We define a new operator or mapϕ(u,v){\displaystyle \phi (u,v)}by usingintegration by partson the right-hand-side of (1): where we have used the assumption thatv(0)=v(1)=0{\displaystyle v(0)=v(1)=0}. If we integrate by parts using a form ofGreen's identities, we see that ifu{\displaystyle u}solves P2, then we may defineϕ(u,v){\displaystyle \phi (u,v)}for anyv{\displaystyle v}by∫Ωfvds=−∫Ω∇u⋅∇vds≡−ϕ(u,v),{\displaystyle \int _{\Omega }fv\,ds=-\int _{\Omega }\nabla u\cdot \nabla v\,ds\equiv -\phi (u,v),} where∇{\displaystyle \nabla }denotes thegradientand⋅{\displaystyle \cdot }denotes thedot productin the two-dimensional plane. Once moreϕ{\displaystyle \,\!\phi }can be turned into an inner product on a suitable spaceH01(Ω){\displaystyle H_{0}^{1}(\Omega )}of once differentiable functions ofΩ{\displaystyle \Omega }that are zero on∂Ω{\displaystyle \partial \Omega }. We have also assumed thatv∈H01(Ω){\displaystyle v\in H_{0}^{1}(\Omega )}(seeSobolev spaces). The existence and uniqueness of the solution can also be shown. We can loosely think ofH01(0,1){\displaystyle H_{0}^{1}(0,1)}to be theabsolutely continuousfunctions of(0,1){\displaystyle (0,1)}that are0{\displaystyle 0}atx=0{\displaystyle x=0}andx=1{\displaystyle x=1}(seeSobolev spaces). Such functions are (weakly) once differentiable, and it turns out that the symmetricbilinear mapϕ{\displaystyle \!\,\phi }then defines aninner productwhich turnsH01(0,1){\displaystyle H_{0}^{1}(0,1)}into aHilbert space(a detailed proof is nontrivial). On the other hand, the left-hand-side∫01f(x)v(x)dx{\displaystyle \int _{0}^{1}f(x)v(x)dx}is also an inner product, this time on theLp spaceL2(0,1){\displaystyle L^{2}(0,1)}. An application of theRiesz representation theoremfor Hilbert spaces shows that there is a uniqueu{\displaystyle u}solving (2) and, therefore, P1. This solution is a-priori only a member ofH01(0,1){\displaystyle H_{0}^{1}(0,1)}, but usingellipticregularity, will be smooth iff{\displaystyle f}is. P1 and P2 are ready to be discretized, which leads to a common sub-problem (3). The basic idea is to replace the infinite-dimensional linear problem: with a finite-dimensional version: whereV{\displaystyle V}is a finite-dimensionalsubspaceofH01{\displaystyle H_{0}^{1}}. 
There are many possible choices for $V$ (one possibility leads to the spectral method). However, for the finite element method we take $V$ to be a space of piecewise polynomial functions.

We take the interval $(0,1)$, choose $n$ values of $x$ with $0 = x_0 < x_1 < \cdots < x_n < x_{n+1} = 1$, and we define $V$ by

$$V = \{ v : [0,1] \to \mathbb{R} \;:\; v \text{ is continuous, } v|_{[x_k, x_{k+1}]} \text{ is linear for } k = 0, \dots, n, \text{ and } v(0) = v(1) = 0 \},$$

where we define $x_0 = 0$ and $x_{n+1} = 1$. Observe that functions in $V$ are not differentiable according to the elementary definition of calculus. Indeed, if $v \in V$ then the derivative is typically not defined at any $x = x_k$, $k = 1, \ldots, n$. However, the derivative exists at every other value of $x$, and one can use this derivative for integration by parts.

For problem P2, we need $V$ to be a set of functions of $\Omega$. In the accompanying figure, we have illustrated a triangulation of a 15-sided polygonal region $\Omega$ in the plane (below), and a piecewise linear function (above, in color) of this polygon which is linear on each triangle of the triangulation; the space $V$ would consist of functions that are linear on each triangle of the chosen triangulation.

One hopes that as the underlying triangular mesh becomes finer and finer, the solution of the discrete problem (3) will, in some sense, converge to the solution of the original boundary value problem P2. To measure this mesh fineness, the triangulation is indexed by a real-valued parameter $h > 0$ which one takes to be very small. This parameter will be related to the largest or average triangle size in the triangulation. As we refine the triangulation, the space of piecewise linear functions $V$ must also change with $h$. For this reason, one often reads $V_h$ instead of $V$ in the literature. Since we do not perform such an analysis, we will not use this notation.

To complete the discretization, we must select a basis of $V$. In the one-dimensional case, for each control point $x_k$ we will choose the piecewise linear function $v_k$ in $V$ whose value is $1$ at $x_k$ and zero at every $x_j$, $j \neq k$, i.e.,

$$v_k(x) = \begin{cases} \dfrac{x - x_{k-1}}{x_k - x_{k-1}} & \text{if } x \in [x_{k-1}, x_k], \\[4pt] \dfrac{x_{k+1} - x}{x_{k+1} - x_k} & \text{if } x \in [x_k, x_{k+1}], \\[4pt] 0 & \text{otherwise}, \end{cases}$$

for $k = 1, \dots, n$; this basis is a shifted and scaled tent function. For the two-dimensional case, we choose again one basis function $v_k$ per vertex $x_k$ of the triangulation of the planar region $\Omega$. The function $v_k$ is the unique function of $V$ whose value is $1$ at $x_k$ and zero at every $x_j$, $j \neq k$.
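The one-dimensional basis just defined is easy to write down explicitly. Here is a small sketch (the grid values and helper names are my own, chosen only for the example) of the "tent" functions $v_k$:

```python
# Piecewise linear "tent" basis functions on a grid 0 = x_0 < ... < x_{n+1} = 1.
import numpy as np

def tent(k, nodes):
    """Return v_k: equal to 1 at nodes[k], 0 at all other nodes, linear between."""
    def v(x):
        x = np.asarray(x, dtype=float)
        left = (x - nodes[k - 1]) / (nodes[k] - nodes[k - 1])
        right = (nodes[k + 1] - x) / (nodes[k + 1] - nodes[k])
        out = np.where((x >= nodes[k - 1]) & (x <= nodes[k]), left, 0.0)
        out = np.where((x > nodes[k]) & (x <= nodes[k + 1]), right, out)
        return out
    return v

nodes = np.linspace(0.0, 1.0, 6)   # x_0 = 0 < x_1 < ... < x_5 = 1
v2 = tent(2, nodes)
print(v2(nodes))                   # [0. 0. 1. 0. 0. 0.]
print(v2([0.35, 0.45]))            # [0.75 0.75]: linear ramps on either side
```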
Depending on the author, the word "element" in the "finite element method" refers to the domain's triangles, the piecewise linear basis functions, or both. So, for instance, an author interested in curved domains might replace the triangles with curved primitives and so might describe the elements as being curvilinear. On the other hand, some authors replace "piecewise linear" with "piecewise quadratic" or even "piecewise polynomial". The author might then say "higher order element" instead of "higher degree polynomial". The finite element method is not restricted to triangles (tetrahedra in 3-D, or higher-order simplexes in multidimensional spaces); it can also be defined on quadrilateral subdomains (hexahedra, prisms, or pyramids in 3-D, and so on). Higher-order shapes (curvilinear elements) can be defined with polynomial and even non-polynomial shapes (e.g., ellipse or circle). Examples of methods that use higher degree piecewise polynomial basis functions are the hp-FEM and spectral FEM.

More advanced implementations (adaptive finite element methods) utilize a method to assess the quality of the results (based on error estimation theory) and modify the mesh during the solution, aiming to achieve an approximate solution within some bounds from the exact solution of the continuum problem. Mesh adaptivity may utilize various techniques; the most popular are moving nodes (r-adaptivity), refining (and unrefining) elements (h-adaptivity), changing the order of the basis functions (p-adaptivity), and combinations of the above (hp-adaptivity).

The primary advantage of this choice of basis is that the inner products $\langle v_j, v_k \rangle = \int_0^1 v_j v_k \, dx$ and $\phi(v_j, v_k) = \int_0^1 v_j' v_k' \, dx$ will be zero for almost all $j, k$. (The matrix containing $\langle v_j, v_k \rangle$ in the $(j,k)$ location is known as the Gramian matrix.) In the one-dimensional case, the support of $v_k$ is the interval $[x_{k-1}, x_{k+1}]$. Hence, the integrands of $\langle v_j, v_k \rangle$ and $\phi(v_j, v_k)$ are identically zero whenever $|j - k| > 1$. Similarly, in the planar case, if $x_j$ and $x_k$ do not share an edge of the triangulation, then the integrals $\int_\Omega v_j v_k \, ds$ and $\int_\Omega \nabla v_j \cdot \nabla v_k \, ds$ are both zero.

If we write $u(x) = \sum_{k=1}^n u_k v_k(x)$ and $f(x) = \sum_{k=1}^n f_k v_k(x)$, then problem (3), taking $v(x) = v_j(x)$ for $j = 1, \dots, n$, becomes

$$-\sum_{k=1}^n u_k \, \phi(v_k, v_j) = \sum_{k=1}^n f_k \int_0^1 v_k v_j \, dx \quad \text{for } j = 1, \dots, n. \qquad (4)$$

If we denote by $\mathbf{u}$ and $\mathbf{f}$ the column vectors $(u_1, \dots, u_n)^t$ and $(f_1, \dots, f_n)^t$, and if we let $L = (L_{ij})$ and $M = (M_{ij})$ be matrices whose entries are $L_{ij} = \phi(v_i, v_j)$ and $M_{ij} = \int v_i v_j \, dx$, then we may rephrase (4) as

$$-L \mathbf{u} = M \mathbf{f}.$$
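On a uniform mesh the entries of $L$ and $M$ have standard closed forms, and the tridiagonal sparsity described above can be checked directly. The following sketch (my own notation; a uniform mesh is assumed) assembles both matrices:

```python
# Tent-function matrices on a uniform mesh with spacing h = 1/(n+1):
#   phi(v_k, v_k) = 2/h,   phi(v_k, v_{k+-1}) = -1/h   (stiffness L)
#   <v_k, v_k>   = 2h/3,   <v_k, v_{k+-1}>   = h/6     (mass M)
# Entries vanish whenever |j - k| > 1, so both matrices are tridiagonal.
import numpy as np

n = 5
h = 1.0 / (n + 1)

L = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h
M = (4.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)) * h / 6.0

print(np.count_nonzero(L), "nonzeros out of", L.size)  # 13 out of 25
```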
It is not necessary to assume $f(x) = \sum_{k=1}^n f_k v_k(x)$. For a general function $f(x)$, problem (3) with $v(x) = v_j(x)$ for $j = 1, \dots, n$ becomes actually simpler, since no matrix $M$ is used:

$$-L \mathbf{u} = \mathbf{b},$$

where $\mathbf{b} = (b_1, \dots, b_n)^t$ and $b_j = \int f v_j \, dx$ for $j = 1, \dots, n$.

As we have discussed before, most of the entries of $L$ and $M$ are zero because the basis functions $v_k$ have small support. So we now have to solve a linear system in the unknown $\mathbf{u}$ where most of the entries of the matrix $L$, which we need to invert, are zero. Such matrices are known as sparse matrices, and there are efficient solvers for such problems (much more efficient than actually inverting the matrix). In addition, $L$ is symmetric and positive definite, so a technique such as the conjugate gradient method is favored. For problems that are not too large, sparse LU decompositions and Cholesky decompositions still work well. For instance, MATLAB's backslash operator (which uses sparse LU, sparse Cholesky, and other factorization methods) can be sufficient for meshes with a hundred thousand vertices. The matrix $L$ is usually referred to as the stiffness matrix, while the matrix $M$ is dubbed the mass matrix. (A complete toy solve along these lines is sketched below, after the discussion of h- and p-refinement.)

In general, the finite element method is characterized by the following process: one chooses a grid for $\Omega$ (in the preceding treatment, the grid consisted of triangles, but one can also use squares or curvilinear polygons), and then one chooses basis functions.

A separate consideration is the smoothness of the basis functions. For second-order elliptic boundary value problems, piecewise polynomial basis functions that are merely continuous suffice (i.e., the derivatives are discontinuous). For higher-order partial differential equations, one must use smoother basis functions. For instance, for a fourth-order problem such as $u_{xxxx} + u_{yyyy} = f$, one may use piecewise quadratic basis functions that are $C^1$.

Another consideration is the relation of the finite-dimensional space $V$ to its infinite-dimensional counterpart in the examples above, $H_0^1$. A conforming element method is one in which the space $V$ is a subspace of the element space for the continuous problem. The example above is such a method. If this condition is not satisfied, we obtain a nonconforming element method, an example of which is the space of piecewise linear functions over the mesh that are continuous at each edge midpoint. Since these functions are generally discontinuous along the edges, this finite-dimensional space is not a subspace of the original $H_0^1$.

Typically, one has an algorithm for subdividing a given mesh. If the primary method for increasing precision is to subdivide the mesh, one has an h-method (h is customarily the diameter of the largest element in the mesh). In this manner, if one shows that the error with a grid $h$ is bounded above by $Ch^p$, for some $C < \infty$ and $p > 0$, then one has an order p method. Under specific hypotheses (for instance, if the domain is convex), a piecewise polynomial of order $d$ method will have an error of order $p = d + 1$. If instead of making h smaller one increases the degree of the polynomials used in the basis functions, one has a p-method. If one combines these two refinement types, one obtains an hp-method (hp-FEM).
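Putting the pieces together, here is the promised toy solve of P1 (a sketch of mine, not the article's code; the forcing f is an assumed example chosen so that the exact solution is known):

```python
# Solve P1 on a uniform mesh: assemble -L u = b and compare with the exact
# solution. With f(x) = -pi^2 sin(pi x), the solution of u'' = f with
# u(0) = u(1) = 0 is u(x) = sin(pi x).
import numpy as np
from scipy.integrate import quad

n = 20
h = 1.0 / (n + 1)
x = np.linspace(0.0, 1.0, n + 2)          # nodes x_0 ... x_{n+1}
f = lambda t: -np.pi**2 * np.sin(np.pi * t)

# Tridiagonal stiffness matrix L_jk = phi(v_j, v_k)
L = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h

# Load vector b_j = int f v_j dx, integrated over the support of v_j
def tent(j, t):
    return np.maximum(0.0, 1.0 - np.abs(t - x[j]) / h)

b = np.array([quad(lambda t: f(t) * tent(j, t), x[j - 1], x[j + 1])[0]
              for j in range(1, n + 1)])

u = np.linalg.solve(L, -b)                # -L u = b  =>  L u = -b
print(np.max(np.abs(u - np.sin(np.pi * x[1:-1]))))
# ~1e-10: for this 1-D model problem the Galerkin solution happens to be
# exact at the nodes; between nodes the error is O(h^2).
```

A dense solve is used here only for brevity; as noted above, production codes exploit the sparsity of L with sparse Cholesky, conjugate gradients, or similar.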
In the hp-FEM, the polynomial degrees can vary from element to element. High-order methods with large uniform p are called spectral finite element methods (SFEM). These are not to be confused with spectral methods. For vector partial differential equations, the basis functions may take values in $\mathbb{R}^n$.

The Applied Element Method, or AEM, combines features of both FEM and the discrete element method (DEM).

Yang and Liu introduced the Augmented-Finite Element Method, whose goal was to model weak and strong discontinuities without needing extra degrees of freedom (DoFs), as in the partition of unity method (PuM).

The Cut Finite Element Approach was developed in 2014.[15] The approach is "to make the discretization as independent as possible of the geometric description and minimize the complexity of mesh generation, while retaining the accuracy and robustness of a standard finite element method."[16]

The generalized finite element method (GFEM) uses local spaces consisting of functions, not necessarily polynomials, that reflect the available information on the unknown solution and thus ensure good local approximation. Then a partition of unity is used to "bond" these spaces together to form the approximating subspace. The effectiveness of GFEM has been shown when applied to problems with domains having complicated boundaries, problems with micro-scales, and problems with boundary layers.[17]

The mixed finite element method is a type of finite element method in which extra independent variables are introduced as nodal variables during the discretization of a partial differential equation problem.

The hp-FEM combines adaptively elements with variable size h and polynomial degree p to achieve exceptionally fast, exponential convergence rates.[18]

The hpk-FEM combines adaptively elements with variable size h, polynomial degree of the local approximations p, and global differentiability of the local approximations (k-1) to achieve the best convergence rates.

The extended finite element method (XFEM) is a numerical technique based on the generalized finite element method (GFEM) and the partition of unity method (PUM). It extends the classical finite element method by enriching the solution space for solutions to differential equations with discontinuous functions. Extended finite element methods enrich the approximation space so that it can naturally reproduce the challenging feature associated with the problem of interest: the discontinuity, singularity, boundary layer, etc. It has been shown that for some problems, such an embedding of the problem's feature into the approximation space can significantly improve convergence rates and accuracy. Moreover, treating problems with discontinuities with XFEMs suppresses the need to mesh and re-mesh the discontinuity surfaces, thus alleviating the computational costs and projection errors associated with conventional finite element methods, at the cost of restricting the discontinuities to mesh edges.

Several research codes implement this technique to various degrees. XFEM has also been implemented in codes like Altair Radioss, ASTER, Morfeo, and Abaqus. It is increasingly being adopted by other commercial finite element software, with a few plugins and actual core implementations available (ANSYS, SAMCEF, OOFELIE, etc.).

The introduction of the scaled boundary finite element method (SBFEM) came from Song and Wolf (1997).[19] The SBFEM has been one of the most profitable contributions in the area of numerical analysis of fracture mechanics problems.
It is a semi-analytical, fundamental-solutionless method that combines the advantages of finite element formulations and procedures with boundary element discretization. However, unlike the boundary element method, no fundamental differential solution is required.

The S-FEM, or smoothed finite element methods, is a particular class of numerical simulation algorithms for the simulation of physical phenomena. It was developed by combining mesh-free methods with the finite element method.

Spectral element methods combine the geometric flexibility of finite elements and the acute accuracy of spectral methods. Spectral methods are the approximate solution of weak-form partial differential equations based on high-order Lagrangian interpolants and used only with certain quadrature rules.[20]

Loubignac iteration is an iterative method in finite element methods.

The crystal plasticity finite element method (CPFEM) is an advanced numerical tool developed by Franz Roters. Metals can be regarded as crystal aggregates, which behave anisotropically under deformation, exhibiting, for example, abnormal stress and strain localization. CPFEM, based on the slip (shear strain rate), can calculate dislocation, crystal orientation, and other texture information to account for crystal anisotropy during the routine. It has been applied in the numerical study of material deformation, surface roughness, fractures, etc.

The virtual element method (VEM), introduced by Beirão da Veiga et al. (2013)[21] as an extension of mimetic finite difference (MFD) methods, is a generalization of the standard finite element method for arbitrary element geometries. This allows admission of general polygons (or polyhedra in 3D) that are highly irregular and non-convex in shape. The name virtual derives from the fact that knowledge of the local shape function basis is not required and is, in fact, never explicitly calculated.

Some types of finite element methods (conforming, nonconforming, mixed finite element methods) are particular cases of the gradient discretization method (GDM). Hence the convergence properties of the GDM, which are established for a series of problems (linear and nonlinear elliptic problems, linear, nonlinear, and degenerate parabolic problems), hold as well for these particular FEMs.

The finite difference method (FDM) is an alternative way of approximating solutions of PDEs. The differences between FEM and FDM are several; most notably, FEM handles complicated geometries and boundaries with relative ease, whereas FDM in its basic form is restricted to structured rectangular grids.

Generally, FEM is the method of choice in all types of analysis in structural mechanics (i.e., solving for deformation and stresses in solid bodies or dynamics of structures), while computational fluid dynamics (CFD) tends to use FDM or other methods like the finite volume method (FVM). CFD problems usually require discretization of the problem into a large number of cells/gridpoints (millions and more); therefore, the cost of the solution favors simpler, lower-order approximation within each cell. This is especially true for 'external flow' problems, like airflow around the car, airplane, or weather simulation.

Another method used for approximating solutions to a partial differential equation is the fast Fourier transform (FFT), where the solution is approximated by a Fourier series computed using the FFT. For approximating the mechanical response of materials under stress, FFT is often much faster,[24] but FEM may be more accurate.[25] One example of the respective advantages of the two methods is in simulation of rolling a sheet of aluminum (an FCC metal), and drawing a wire of tungsten (a BCC metal).
This simulation did not have a sophisticated shape update algorithm for the FFT method. In both cases, the FFT method was more than 10 times as fast as FEM, but in the wire drawing simulation, where there were large deformations in grains, the FEM method was much more accurate. In the sheet rolling simulation, the results of the two methods were similar.[25] FFT has a larger speed advantage in cases where the boundary conditions are given in terms of the material's strain, and loses some of its efficiency in cases where the stress is used to apply the boundary conditions, as more iterations of the method are needed.[26]

The FE and FFT methods can also be combined in a voxel-based method to simulate deformation in materials, where the FE method is used for the macroscale stress and deformation, and the FFT method is used on the microscale to deal with the effects of the microscale on the mechanical response.[27] Unlike FEM, the similarity of FFT methods to image-processing methods means that an actual image of the microstructure from a microscope can be input to the solver to get a more accurate stress response. Using a real image with FFT avoids meshing the microstructure, which would be required in an FEM simulation of the microstructure and might be difficult. Because Fourier approximations are inherently periodic, FFT can only be used in cases of periodic microstructure, but this is common in real materials.[27] FFT can also be combined with FEM methods by using Fourier components as the variational basis for approximating the fields inside an element, which can take advantage of the speed of FFT-based solvers.[28]
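The Fourier approach mentioned above can be illustrated in a few lines for a model problem. The sketch below (my own, under the strong simplifying assumptions of a periodic 1-D domain and zero-mean forcing) solves $u'' = f$ by dividing the Fourier coefficients by $-k^2$:

```python
# Spectral solve of u'' = f on [0, 2*pi) with periodic boundary conditions.
import numpy as np

n = 256
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
f = np.sin(3 * x)                  # zero-mean forcing; exact u = -sin(3x)/9

k = np.fft.fftfreq(n, d=1.0 / n)   # integer wavenumbers 0, 1, ..., -1
fhat = np.fft.fft(f)
uhat = np.zeros_like(fhat)
nz = k != 0                        # the k = 0 mode is fixed by zero mean
uhat[nz] = fhat[nz] / -(k[nz] ** 2)
u = np.fft.ifft(uhat).real

print(np.max(np.abs(u + np.sin(3 * x) / 9.0)))  # ~1e-16: spectrally exact
```

For smooth periodic data this converges spectrally and each solve costs O(n log n), one source of the speed advantage reported above; handling non-periodic geometry and boundary conditions is where FEM retains the edge.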
Various specializations under the umbrella of the mechanical engineering discipline (such as the aeronautical, biomechanical, and automotive industries) commonly use integrated FEM in the design and development of their products. Several modern FEM packages include specific components such as thermal, electromagnetic, fluid, and structural working environments. In a structural simulation, FEM helps tremendously in producing stiffness and strength visualizations and in minimizing weight, materials, and costs.[29]

FEM allows detailed visualization of where structures bend or twist, indicating the distribution of stresses and displacements. FEM software provides a wide range of simulation options for controlling the complexity of modeling and system analysis. Similarly, the desired level of accuracy and the associated computational time requirements can be managed simultaneously to address most engineering applications. FEM allows entire designs to be constructed, refined, and optimized before the design is manufactured. The mesh is an integral part of the model and must be controlled carefully to give the best results. Generally, the higher the number of elements in a mesh, the more accurate the solution of the discretized problem. However, there is a value at which the results converge, and further mesh refinement does not increase accuracy.[30]

This powerful design tool has significantly improved both the standard of engineering designs and the design process methodology in many industrial applications.[32] The introduction of FEM has substantially decreased the time to take products from concept to the production line.[32] Testing and development have been accelerated primarily through improved initial prototype designs using FEM.[33] In summary, the benefits of FEM include increased accuracy, enhanced design and better insight into critical design parameters, virtual prototyping, fewer hardware prototypes, a faster and less expensive design cycle, increased productivity, and increased revenue.[32]

In the 1990s FEM was proposed for use in stochastic modeling for numerically solving probability models[34] and later for reliability assessment.[35]

FEM is widely applied for approximating differential equations that describe physical systems. The method is very popular in the computational fluid dynamics community, and there are many applications for solving the Navier–Stokes equations with FEM.[36][37][38] Recently, the application of FEM has also been increasing in research on computational plasmas; promising numerical results using FEM for magnetohydrodynamics, the Vlasov equation, and the Schrödinger equation have been reported.[39][40]
https://en.wikipedia.org/wiki/Finite_element_method
In operating systems, memory management is the function responsible for managing the computer's primary memory.[1]: 105–208

The memory management function keeps track of the status of each memory location, either allocated or free. It determines how memory is allocated among competing processes, deciding which process gets memory, when it receives it, and how much it is allowed. When memory is allocated, it determines which memory locations will be assigned. It tracks when memory is freed or unallocated and updates the status. This is distinct from application memory management, which is how a process manages the memory assigned to it by the operating system.

Single allocation is the simplest memory management technique. All the computer's memory, usually with the exception of a small portion reserved for the operating system, is available to a single application. MS-DOS is an example of a system that allocates memory in this way. An embedded system running a single application might also use this technique. A system using single contiguous allocation may still multitask by swapping the contents of memory to switch among users. Early versions of the MUSIC operating system used this technique.

Partitioned allocation divides primary memory into multiple memory partitions, usually contiguous areas of memory. Each partition might contain all the information for a specific job or task. Memory management consists of allocating a partition to a job when it starts and unallocating it when the job ends. Partitioned allocation usually requires some hardware support to prevent the jobs from interfering with one another or with the operating system. The IBM System/360 uses a lock-and-key technique. The UNIVAC 1108, PDP-6 and PDP-10, and GE-600 series use base and bounds registers to indicate the ranges of accessible memory.

Partitions may be either static, that is, defined at Initial Program Load (IPL) or boot time or by the computer operator, or dynamic, that is, automatically created for a specific job. IBM System/360 Operating System Multiprogramming with a Fixed Number of Tasks (MFT) is an example of static partitioning, and Multiprogramming with a Variable Number of Tasks (MVT) is an example of dynamic partitioning. MVT and its successors use the term region to distinguish dynamic partitions from the static ones used in other systems.[2]

Partitions may be relocatable using base registers, as in the UNIVAC 1108, PDP-6 and PDP-10, and GE-600 series. Relocatable partitions can be compacted to provide larger chunks of contiguous physical memory. Compaction moves "in-use" areas of memory to eliminate "holes", or unused areas of memory caused by process termination, in order to create larger contiguous free areas.[3]

Some systems allow partitions to be swapped out to secondary storage to free additional memory. Early versions of IBM's Time Sharing Option (TSO) swapped users in and out of time-sharing partitions.[4][a]

Paged allocation divides the computer's primary memory into fixed-size units called page frames, and the program's virtual address space into pages of the same size. The hardware memory management unit maps pages to frames. The physical memory can be allocated on a page basis while the address space appears contiguous. Usually, with paged memory management, each job runs in its own address space. However, there are some single address space operating systems that run all processes within a single address space, such as IBM i, which runs all processes within a large address space, and IBM OS/VS1 and OS/VS2 (SVS), which ran all jobs in a single 16 MiB virtual address space.
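The page/frame mapping just described can be made concrete with a toy translation routine (entirely my own illustration; the page size and page-table contents are made-up values, and a real memory management unit does this in hardware):

```python
# Toy paged address translation: virtual address -> (page, offset) -> frame.
PAGE_SIZE = 4096                   # assumed 4 KiB pages

page_table = {0: 7, 1: 3, 2: 9}    # hypothetical page -> frame mapping

def translate(vaddr):
    page, offset = divmod(vaddr, PAGE_SIZE)
    if page not in page_table:
        raise MemoryError(f"page fault: page {page} not resident")
    return page_table[page] * PAGE_SIZE + offset

print(hex(translate(0x1ABC)))      # page 1, offset 0xABC -> 0x3ABC in frame 3
```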
Paged memory can be demand-paged when the system can move pages as required between primary and secondary memory.

Segmented memory is the only memory management technique that does not provide the user's program with a "linear and contiguous address space."[1]: 165 Segments are areas of memory that usually correspond to a logical grouping of information such as a code procedure or a data array. Segments require hardware support in the form of a segment table, which usually contains the physical address of the segment in memory, its size, and other data such as access protection bits and status (swapped in, swapped out, etc.). Segmentation allows better access protection than other schemes because memory references are relative to a specific segment and the hardware will not permit the application to reference memory not defined for that segment.

It is possible to implement segmentation with or without paging. Without paging support, the segment is the physical unit swapped in and out of memory when required. With paging support, the pages are usually the unit of swapping, and segmentation only adds an additional level of security. Addresses in a segmented system usually consist of the segment id and an offset relative to the segment base address, which is defined to be offset zero (a toy translation routine along these lines is sketched at the end of this article).

The Intel IA-32 (x86) architecture allows a process to have up to 16,383 segments of up to 4 GiB each. IA-32 segments are subdivisions of the computer's linear address space, the virtual address space provided by the paging hardware.[5]

The Multics operating system is probably the best-known system implementing segmented memory. Multics segments are subdivisions of the computer's physical memory of up to 256 pages, each page being 1K 36-bit words in size, resulting in a maximum segment size of 1 MiB (with 9-bit bytes, as used in Multics). A process could have up to 4046 segments.[6]

Rollout/rollin (RO/RI) is a computer operating system memory management technique where the entire non-shared code and data of a running program is swapped out to auxiliary memory (disk or drum) to free main storage for another task. Programs may be rolled out "by demand end or...when waiting for some long event."[7] Rollout/rollin was commonly used in time-sharing systems,[8] where the user's "think time" was relatively long compared to the time to do the swap. Unlike virtual storage (paging or segmentation), rollout/rollin does not require any special memory management hardware; however, unless the system has relocation hardware such as a memory map or base and bounds registers, the program must be rolled back in to its original memory locations. Rollout/rollin has been largely superseded by virtual memory.

Rollout/rollin was an optional feature of OS/360 Multiprogramming with a Variable number of Tasks (MVT). Rollout/rollin allows the temporary, dynamic expansion of a particular job beyond its originally specified region. When a job needs more space, rollout/rollin attempts to obtain unassigned storage for the job's use. If there is no such unassigned storage, another job is rolled out, i.e., transferred to auxiliary storage, so that its region may be used by the first job. When released by the first job, this additional storage is again available, either (1) as unassigned storage, if that was its source, or (2) to receive the job to be transferred back into main storage (rolled in).[9] In OS/360, rollout/rollin was used only for batch jobs, and rollin does not occur until the job step borrowing the region terminates.
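Returning to the segmented addressing described above, the following sketch (my own illustration; segment numbers, sizes, and the protection bit are invented for the example) shows the bounds and protection checks that a segment table enables:

```python
# Toy segmented address translation with bounds and protection checks.
from dataclasses import dataclass

@dataclass
class Segment:
    base: int       # physical address of the segment
    size: int       # segment length in bytes
    writable: bool  # one example of an access protection bit

segment_table = {0: Segment(base=0x10000, size=0x2000, writable=True),
                 1: Segment(base=0x40000, size=0x0800, writable=False)}

def translate(seg_id, offset, write=False):
    seg = segment_table[seg_id]
    if offset >= seg.size:
        raise MemoryError("segmentation fault: offset out of bounds")
    if write and not seg.writable:
        raise PermissionError("protection fault: segment is read-only")
    return seg.base + offset

print(hex(translate(0, 0x1FFF)))   # ok: 0x11FFF
# translate(1, 0x900) would raise: the offset exceeds segment 1's size.
```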
https://en.wikipedia.org/wiki/Memory_management_(operating_systems)
For detection systems that record discrete events, such as particle and nuclear detectors, the dead time is the time after each event during which the system is not able to record another event.[1] An everyday-life example of this is what happens when someone takes a photo using a flash: another picture cannot be taken immediately afterward because the flash needs a few seconds to recharge. In addition to lowering the detection efficiency, dead times can have other effects, such as creating possible exploits in quantum cryptography.[2]

The total dead time of a detection system is usually due to the contributions of the intrinsic dead time of the detector (for example, the ion drift time in a gaseous ionization detector), of the analog front end (for example, the shaping time of a spectroscopy amplifier) and of the data acquisition (the conversion time of the analog-to-digital converters and the readout and storage times).

The intrinsic dead time of a detector is often due to its physical characteristics; for example, a spark chamber is "dead" until the potential between the plates recovers above a high enough value. In other cases the detector, after a first event, is still "live" and does produce a signal for the successive event, but the signal is such that the detector readout is unable to discriminate and separate them, resulting in an event loss or in a so-called "pile-up" event where, for example, a (possibly partial) sum of the deposited energies from the two events is recorded instead. In some cases this can be minimized by an appropriate design, but often only at the expense of other properties like energy resolution.

The analog electronics can also introduce dead time; in particular, a shaping spectroscopy amplifier needs to integrate a fast-rise, slow-fall signal over the longest possible time (usually 0.5–10 microseconds) to attain the best possible resolution, so that the user needs to choose a compromise between event rate and resolution. Trigger logic is another possible source of dead time; beyond the proper time of the signal processing, spurious triggers caused by noise need to be taken into account.

Finally, digitization, readout, and storage of the event, especially in detection systems with a large number of channels like those used in modern high energy physics experiments, also contribute to the total dead time. To alleviate the issue, medium and large experiments use sophisticated pipelining and multi-level trigger logic to reduce the readout rates.[3]

From the total time a detection system is running, the dead time must be subtracted to obtain the live time.

A detector, or detection system, can be characterized by a paralyzable or non-paralyzable behaviour.[1] In a non-paralyzable detector, an event happening during the dead time is simply lost, so that with an increasing event rate the detector will reach a saturation rate equal to the inverse of the dead time. In a paralyzable detector, an event happening during the dead time will not just be missed, but will restart the dead time, so that with increasing rate the detector will reach a saturation point where it will be incapable of recording any event at all. A semi-paralyzable detector exhibits an intermediate behaviour, in which an event arriving during the dead time does extend it, but not by the full amount, resulting in a detection rate that decreases when the event rate approaches saturation.[4]

It will be assumed that the events occur randomly with an average frequency f, that is, that they constitute a Poisson process.
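The paralyzable and non-paralyzable behaviours described above are easy to reproduce by simulation. The sketch below (my own; the dead time and rates are assumed example values) generates Poisson events and applies each counting rule:

```python
# Measured rate vs true rate for the two dead-time behaviours.
import numpy as np

rng = np.random.default_rng(0)

def measured_rate(true_rate, tau, t_total=100.0, paralyzable=False):
    n = rng.poisson(true_rate * t_total)          # Poisson event times
    times = np.sort(rng.uniform(0.0, t_total, n))
    count, dead_until = 0, -1.0
    for t in times:
        if t >= dead_until:
            count += 1
            dead_until = t + tau
        elif paralyzable:
            dead_until = t + tau   # a lost event still extends the dead time
    return count / t_total

tau = 0.01
for f in (10, 100, 1000):
    print(f, measured_rate(f, tau), measured_rate(f, tau, paralyzable=True))
# The non-paralyzable rate saturates near 1/tau = 100 (mean f/(1 + f*tau)),
# while the paralyzable rate peaks and then collapses (mean f*exp(-f*tau)).
```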
The probability that an event will occur in an infinitesimal time interval dt is then f dt. It follows that the probability P(t) that an event will occur at time t to t + dt, with no events occurring between t = 0 and time t, is given by the exponential distribution (Lucke 1974, Meeks 2008):

$$P(t)\,dt = f e^{-ft}\,dt.$$

The expected time between events is then

$$\langle t \rangle = \int_0^\infty t\,f e^{-ft}\,dt = \frac{1}{f}.$$

For the non-paralyzable case, with a dead time of $\tau$, the probability of measuring an event between $t = 0$ and $t = \tau$ is zero. Otherwise the probabilities of measurement are the same as the event probabilities. The probability of measuring an event at time t with no intervening measurements is then given by an exponential distribution shifted by $\tau$:

$$P(t)\,dt = f e^{-f(t-\tau)}\,dt \quad \text{for } t \geq \tau \text{ (and zero otherwise)}.$$

The expected time between measurements is then

$$\langle t \rangle = \tau + \frac{1}{f}.$$

In other words, if $N_m$ counts are recorded during a particular time interval $T$ and the dead time is known, the actual number of events (N) may be estimated by[5]

$$N \approx \frac{N_m}{1 - \dfrac{N_m \tau}{T}}.$$

If the dead time is not known, a statistical analysis can yield the correct count. For example (Meeks 2008), if $t_i$ are a set of intervals between measurements, then the $t_i$ will have a shifted exponential distribution; but if a fixed value D is subtracted from each interval, with negative values discarded, the distribution will be exponential as long as D is greater than the dead time $\tau$. For an exponential distribution, the following relationship holds:

$$\langle t^n \rangle = n!\,\langle t \rangle^n,$$

where n is any non-negative integer. If the above function is estimated for many measured intervals with various values of D subtracted (and for various values of n), it should be found that for values of D above a certain threshold the above equation will be nearly true, and the count rate derived from these modified intervals will be equal to the true count rate.

With a modern microprocessor-based ratemeter, one technique for measuring field strength with detectors that have a recovery time (e.g., Geiger–Müller tubes) is time-to-count. In this technique, the detector is armed at the same time a counter is started. When a strike occurs, the counter is stopped. If this happens many times in a certain time period (e.g., two seconds), then the mean time between strikes can be determined, and thus the count rate. Live time, dead time, and total time are thus measured, not estimated. This technique is used quite widely in radiation monitoring systems used in nuclear power generating stations.
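As a worked example of the non-paralyzable correction just quoted (the numbers are made up for illustration):

```python
# Non-paralyzable dead-time correction: N = N_m / (1 - N_m * tau / T).
N_m = 9.0e4      # recorded counts (hypothetical)
T = 100.0        # measurement time in seconds
tau = 1.0e-4     # dead time in seconds

N = N_m / (1.0 - N_m * tau / T)
print(N_m / T, N / T)   # measured 900 counts/s vs corrected ~989 counts/s
```

A dead-time fraction of N_m * tau / T = 0.09 thus means that roughly 9% of the true events went unrecorded.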
https://en.wikipedia.org/wiki/Dead_time
Adversarial information retrieval (adversarial IR) is a topic in information retrieval concerned with strategies for working with a data source of which some portion has been manipulated maliciously. Tasks can include gathering, indexing, filtering, retrieving, and ranking information from such a data source. Adversarial IR includes the study of methods to detect, isolate, and defeat such manipulation.

On the Web, the predominant form of such manipulation is search engine spamming (also known as spamdexing), which involves employing various techniques to disrupt the activity of web search engines, usually for financial gain. Examples of spamdexing are link-bombing, comment or referrer spam, spam blogs (splogs), and malicious tagging. Reverse engineering of ranking algorithms, click fraud,[1] and web content filtering may also be considered forms of adversarial data manipulation.[2]

The term "adversarial information retrieval" was first coined in 2000 by Andrei Broder (then Chief Scientist at AltaVista) during the Web plenary session at the TREC-9 conference.[3]
https://en.wikipedia.org/wiki/Adversarial_information_retrieval
In mathematics, probabilistic metric spaces are a generalization of metric spaces where the distance no longer takes values in the non-negative real numbers $\mathbb{R}_{\geq 0}$, but in distribution functions.[1]

Let $D^+$ be the set of all probability distribution functions $F$ such that $F(0) = 0$ ($F$ is a nondecreasing, left-continuous mapping from $\mathbb{R}$ into $[0,1]$ such that $\max(F) = 1$). Then, given a non-empty set $S$ and a function $F: S \times S \to D^+$, where we denote $F(p,q)$ by $F_{p,q}$ for every $(p,q) \in S \times S$, the ordered pair $(S, F)$ is said to be a probabilistic metric space if, for all $u, v, w \in S$ and all $x, y > 0$: (1) $u = v$ if and only if $F_{u,v}(x) = 1$ for all $x > 0$; (2) $F_{u,v} = F_{v,u}$; and (3) $F_{u,v}(x) = 1$ and $F_{v,w}(y) = 1$ imply $F_{u,w}(x + y) = 1$.

Probabilistic metric spaces were initially introduced by Menger, who termed them statistical metrics.[3] Shortly after, Wald criticized the generalized triangle inequality and proposed an alternative one.[4] However, both authors came to the conclusion that in some respects the Wald inequality was too stringent a requirement to impose on all probabilistic metric spaces, a point taken up in the work of Schweizer and Sklar.[5] Later, probabilistic metric spaces were found to be very suitable for use with fuzzy sets[6] and were further developed into fuzzy metric spaces.[7]

A probability metric D between two random variables X and Y may be defined, for example, as

$$D(X,Y) = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} |x-y|\, F(x,y)\,dx\,dy,$$

where F(x, y) denotes the joint probability density function of the random variables X and Y. If X and Y are independent of each other, then the equation above transforms into

$$D(X,Y) = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} |x-y|\, f(x)\, g(y)\,dx\,dy,$$

where f(x) and g(y) are the probability density functions of X and Y respectively.

One may easily show that such probability metrics do not satisfy the first metric axiom, or satisfy it if, and only if, both arguments X and Y are certain events described by Dirac delta density probability distribution functions. In this case,

$$D(X,Y) = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} |x-y|\,\delta(x-\mu_x)\,\delta(y-\mu_y)\,dx\,dy = |\mu_x - \mu_y|,$$

and the probability metric simply transforms into the metric between the expected values $\mu_x$, $\mu_y$ of the variables X and Y.

For all other random variables X, Y the probability metric does not satisfy the identity of indiscernibles condition required of the metric of a metric space, that is,

$$D(X,X) > 0.$$

For example, if both probability distribution functions of the random variables X and Y are normal distributions (N) having the same standard deviation $\sigma$, integrating $D(X,Y)$ yields

$$D_{NN}(X,Y) = \mu_{xy} + \frac{2\sigma}{\sqrt{\pi}} \exp\left(-\frac{\mu_{xy}^2}{4\sigma^2}\right) - \mu_{xy}\operatorname{erfc}\left(\frac{\mu_{xy}}{2\sigma}\right),$$

where $\mu_{xy} = |\mu_x - \mu_y|$, and $\operatorname{erfc}(x)$ is the complementary error function.
In this case,

$$\lim_{\mu_{xy} \to 0} D_{NN}(X,Y) = D_{NN}(X,X) = \frac{2\sigma}{\sqrt{\pi}}.$$

The probability metric of random variables may be extended into a metric D(X, Y) of random vectors X, Y by substituting $|x-y|$ with any metric operator d(x, y):

$$D(\mathbf{X}, \mathbf{Y}) = \int_\Omega \int_\Omega d(\mathbf{x}, \mathbf{y})\, F(\mathbf{x}, \mathbf{y})\,d\Omega_x\,d\Omega_y,$$

where $F(\mathbf{X}, \mathbf{Y})$ is the joint probability density function of the random vectors X and Y. For example, substituting d(x, y) with the Euclidean metric and providing that the vectors X and Y are mutually independent would yield

$$D(\mathbf{X}, \mathbf{Y}) = \int_\Omega \int_\Omega \sqrt{\sum_i |x_i - y_i|^2}\; F(\mathbf{x})\, G(\mathbf{y})\,d\Omega_x\,d\Omega_y.$$
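The closed form $D_{NN}$ above can be cross-checked numerically: for independent normals, $X - Y$ is itself normal with mean $\mu_x - \mu_y$ and variance $2\sigma^2$, so $D(X,Y) = \mathbb{E}|X - Y|$ reduces to a one-dimensional integral. A sketch of mine (the parameter values are arbitrary assumptions):

```python
# Verify D_NN for two independent normals with equal sigma.
import math
from scipy import integrate, stats

mu_x, mu_y, sigma = 1.0, 2.5, 0.8
mu_xy = abs(mu_x - mu_y)

closed = (mu_xy
          + (2 * sigma / math.sqrt(math.pi))
          * math.exp(-mu_xy**2 / (4 * sigma**2))
          - mu_xy * math.erfc(mu_xy / (2 * sigma)))

# E|Z| for Z = X - Y ~ N(mu_xy, 2 sigma^2), by direct quadrature
z = stats.norm(loc=mu_xy, scale=math.sqrt(2) * sigma)
numeric, _ = integrate.quad(lambda t: abs(t) * z.pdf(t), -20, 20)

print(closed, numeric)  # both ~ 1.5975
```

Note that $D_{NN}(X,X) = 2\sigma/\sqrt{\pi} > 0$, consistent with the failure of the identity of indiscernibles noted above.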
https://en.wikipedia.org/wiki/Probabilistic_metric_space
The digital divide is the unequal access to digital technology, including smartphones, tablets, laptops, and the internet.[1][2] The digital divide worsens inequality in access to information and resources. In the Information Age, people without access to the Internet and other technology are at a disadvantage, for they are unable or less able to connect with others, find and apply for jobs, shop, and learn.[1][3][4][5]

People who are homeless, living in poverty, elderly, or living in rural communities may have limited access to the Internet; in contrast, urban middle-class and upper-class people have easy access to the Internet. Another divide is between producers and consumers of Internet content,[6][7] which could be a result of educational disparities.[8] While social media use varies across age groups, a 2010 US study reported no racial divide.[9]

The historical roots of the digital divide in America refer to the increasing gap that occurred during the early modern period between those who could and could not access the real-time forms of calculation, decision-making, and visualization offered via written and printed media.[10] Within this context, ethical discussions regarding the relationship between education and the free distribution of information were raised by thinkers such as Immanuel Kant, Mary Wollstonecraft, and Jean-Jacques Rousseau (1712–1778). The latter advocated that governments should intervene to ensure that any society's economic benefits be fairly and meaningfully distributed. Amid the Industrial Revolution in Great Britain, Rousseau's idea helped to justify poor laws that created a safety net for those who were harmed by new forms of production. Later, when telegraph and postal systems evolved, many used Rousseau's ideas to argue for full access to those services, even if it meant subsidizing hard-to-serve citizens. Thus, "universal services"[11] referred to innovations in regulation and taxation that would allow phone services such as AT&T in the United States to serve hard-to-serve rural users. In 1996, as telecommunications companies merged with Internet companies, the Federal Communications Commission adopted the Telecommunications Services Act of 1996 to consider regulatory strategies and taxation policies to close the digital divide.

Though the term "digital divide" was coined among consumer groups that sought to tax and regulate information and communications technology (ICT) companies to close the digital divide, the topic soon moved onto a global stage. The focus was the World Trade Organization, which passed a Telecommunications Services Act that resisted regulation of ICT companies that would have required them to serve hard-to-serve individuals and communities. In 1999, to assuage anti-globalization forces, the WTO hosted the "Financial Solutions to Digital Divide" in Seattle, US, co-organized by Craig Warren Smith of the Digital Divide Institute and Bill Gates Sr., the chairman of the Bill and Melinda Gates Foundation. It catalyzed a full-scale global movement to close the digital divide, which quickly spread to all sectors of the global economy.[12] In 2000, US president Bill Clinton mentioned the term in the State of the Union Address.

At the outset of the COVID-19 pandemic, governments worldwide issued stay-at-home orders that established lockdowns, quarantines, restrictions, and closures.
The resulting interruptions to schooling, public services, and business operations drove nearly half of the world's population to seek alternative ways to live while in isolation.[13] These methods included telemedicine, virtual classrooms, online shopping, technology-based social interactions, and remote work, all of which require access to high-speed or broadband internet access and digital technologies. A Pew Research Center study reports that 90% of Americans described the use of the Internet as "essential" during the pandemic.[14] The accelerated use of digital technologies creates a landscape where the ability, or lack thereof, to access digital spaces becomes a crucial factor in everyday life.[15]

According to the Pew Research Center, 59% of children from lower-income families were likely to face digital obstacles in completing school assignments.[14] These obstacles included the use of a cellphone to complete homework, having to use public Wi-Fi because of unreliable internet service in the home, and lack of access to a computer in the home. This difficulty, called the homework gap, affects more than 30% of K-12 students living below the poverty threshold, and disproportionately affects American Indian/Alaska Native, Black, and Hispanic students.[16][17] These types of interruptions or privilege gaps in education exemplify problems in the systemic marginalization of historically oppressed individuals in primary education. The pandemic exposed inequity that caused discrepancies in learning.[18]

A lack of "tech readiness", that is, confident and independent use of devices, was reported among the US elderly population, with more than 50% reporting an inadequate knowledge of devices and more than one-third reporting a lack of confidence.[14][19] Moreover, according to a UN research paper, similar results can be found across various Asian countries, with those above the age of 74 reporting a lower and more confused usage of digital devices.[20] This aspect of the digital divide and the elderly became salient during the pandemic as healthcare providers increasingly relied upon telemedicine to manage chronic and acute health conditions.[21]

There are various definitions of the digital divide, all with slightly different emphases, as evidenced by related concepts like digital inclusion,[22] digital participation,[23] digital skills,[24] media literacy,[25] and digital accessibility.[26]

The infrastructure by which individuals, households, businesses, and communities connect to the Internet addresses the physical mediums that people use to connect to the Internet, such as desktop computers, laptops, basic mobile phones or smartphones, iPods or other MP3 players, gaming consoles such as Xbox or PlayStation, electronic book readers, and tablets such as iPads.[27]

Traditionally, the nature of the divide has been measured in terms of the existing numbers of subscriptions and digital devices.
Given the increasing number of such devices, some have concluded that the digital divide among individuals has increasingly been closing as the result of a natural and almost automatic process.[29][30] Others point to persistent lower levels of connectivity among women, racial and ethnic minorities, people with lower incomes, rural residents, and less educated people as evidence that addressing inequalities in access to and use of the medium will require much more than the passing of time.[31][32] Recent studies have measured the digital divide not in terms of technological devices, but in terms of the existing bandwidth per individual (in kbit/s per capita).[33][28]

As shown in the accompanying figure, the digital divide in kbit/s is not monotonically decreasing but re-opens with each new innovation. For example, "the massive diffusion of narrow-band Internet and mobile phones during the late 1990s" increased digital inequality, and "the initial introduction of broadband DSL and cable modems during 2003–2004 increased levels of inequality".[33] During the mid-2000s, communication capacity was more unequally distributed than during the late 1980s, when only fixed-line phones existed. The most recent increase in digital equality stems from the massive diffusion of the latest digital innovations (i.e., fixed and mobile broadband infrastructures, e.g., 5G and fiber optics FTTH).[34]

Measurement methodologies of the digital divide, and more specifically an Integrated Iterative Approach General Framework (Integrated Contextual Iterative Approach, ICI) and the digital divide modeling theory under measurement model DDG (Digital Divide Gap), are used to analyze the gap existing between developed and developing countries, and the gap among the 27 member states of the European Union.[35][36] The Good Things Foundation, a UK non-profit organisation, collates data on the extent and impact of the digital divide in the UK[37] and lobbies the government to fix digital exclusion.[38]

Research from 2001 showed that the digital divide is more than just an access issue and cannot be alleviated merely by providing the necessary equipment. There are at least three factors at play: information accessibility, information utilization, and information receptiveness. More than just accessibility, the digital divide consists of society's lack of knowledge of how to make use of information and communication tools once they exist within a community.[39] Information professionals have the ability to help bridge the gap by providing reference and information services that help individuals learn and utilize the technologies to which they do have access, regardless of the economic status of the individual seeking help.[40]

One can connect to the internet in a variety of locations, such as homes, offices, schools, libraries, public spaces, and Internet cafes.
Levels of connectivity often vary between rural, suburban, and urban areas.[41][42]

In 2017, the Wireless Broadband Alliance published the white paper The Urban Unconnected, which highlighted that in the eight countries with the world's highest GNP about 1.75 billion people had no internet connection, and one third of them lived in major urban centers. Delhi (5.3 million, 9% of the total population), São Paulo (4.3 million, 36%), New York (1.6 million, 19%), and Moscow (2.1 million, 17%) registered the highest numbers of citizens without internet access of any type.[43]

As of 2021, only about half of the world's population had access to the internet, leaving 3.7 billion people without it. A majority of those are in developing countries, and a large portion of them are women.[44] Also, the governments of different countries have different policies about privacy, data governance, freedom of speech, and many other factors. Government restrictions make it challenging for technology companies to provide services in certain countries. This disproportionately impacts different regions of the world: Europe has the highest percentage of the population online, while Africa has the lowest. From 2010 to 2014, Europe went from 67% to 75%, and in the same time span Africa went from 10% to 19%.[45]

Network speeds play a large role in the quality of an internet connection. Large cities and towns may have better access to high-speed internet than rural areas, which may have limited or no service.[46] Households can be locked into a specific service provider, since it may be the only carrier that even offers service to the area. This applies to regions that have developed networks, like the United States, but also to developing countries, where very large areas have virtually no coverage.[47] In those areas there are very limited actions that a consumer can take, since the issue is mainly one of infrastructure. Technologies that provide an internet connection through satellite, like Starlink, are becoming more common, but they are still not available in many regions.[48]

Based on location, a connection may be so slow as to be virtually unusable, solely because a network provider has limited infrastructure in the area. For example, downloading 5 GB of data might take about 8 minutes in Taiwan, while the same download might take 30 hours in Yemen.[49]

From 2020 to 2022, average download speeds in the EU climbed from 70 Mbps to more than 120 Mbps, owing mostly to the demand for digital services during the pandemic.[50] There is still a large rural-urban disparity in internet speeds, with metropolitan areas in France and Denmark reaching rates of more than 150 Mbps, while many rural areas in Greece, Croatia, and Cyprus have speeds of less than 60 Mbps.[50][51] The EU aspires to complete gigabit coverage by 2030; however, as of 2022, only about 60% of Europe had high-speed internet infrastructure, signalling the need for further enhancements.[50][52]

Common Sense Media, a nonprofit group based in San Francisco, surveyed almost 1,400 parents and reported in 2011 that 47 percent of families with incomes of more than $75,000 had downloaded apps for their children, while only 14 percent of families earning less than $30,000 had done so.[53]

As of 2014, the gap in the digital divide was known to exist for a number of reasons.
Obtaining access to ICTs and using them actively has been linked to demographic and socio-economic characteristics including income, education, race, gender, geographic location (urban-rural), age, skills, awareness, and political, cultural, and psychological attitudes.[54][55][56][57][58][59][60] Multiple regression analysis across countries has shown that income levels and educational attainment provide the most powerful explanatory variables for ICT access and usage.[61] Evidence was found that Caucasians are much more likely than non-Caucasians to own a computer and to have access to the Internet in their homes.[62][63] As for geographic location, people living in urban centers have more access to, and show more usage of, computer services than those in rural areas.

In developing countries, a digital divide between women and men is apparent in tech usage, with men more likely to be competent tech users. Controlled statistical analysis has shown that income, education, and employment act as confounding variables, and that women with the same level of income, education, and employment actually embrace ICT more than men (see Women and ICT4D); this argues against any suggestion that women are "naturally" more technophobic or less tech-savvy.[64] However, each nation has its own set of causes for the digital divide. For example, the digital divide in Germany is unique because it is not largely due to differences in the quality of infrastructure.[65]

The correlation between income and internet use suggests that the digital divide persists at least in part due to income disparities.[66] Most commonly, a digital divide stems from poverty and the economic barriers that limit resources and prevent people from obtaining or otherwise using newer technologies. In research, while each explanation is examined, others must be controlled for to eliminate interaction effects or mediating variables,[54] but these explanations are meant to stand as general trends, not direct causes.

Measurements of the intensity of usage, such as incidence and frequency, vary by study. Some report usage as access to the Internet and ICTs, while others report usage as having previously connected to the Internet. Some studies focus on specific technologies, others on a combination (such as Infostate, proposed by Orbicom-UNESCO, the Digital Opportunity Index, or ITU's ICT Development Index).

During the mid-1990s, the United States Department of Commerce's National Telecommunications and Information Administration (NTIA) began publishing reports about the Internet and access to and usage of the resource. The first of three reports is titled "Falling Through the Net: A Survey of the 'Have Nots' in Rural and Urban America" (1995),[67] the second is "Falling Through the Net II: New Data on the Digital Divide" (1998),[68] and the final report is "Falling Through the Net: Defining the Digital Divide" (1999).[69] The NTIA's final report attempted clearly to define the term digital divide as "the divide between those with access to new technologies and those without".[69] Since the introduction of the NTIA reports, much of the early, relevant literature began to reference the NTIA's digital divide definition.
The digital divide is commonly defined as being between the "haves" and "have-nots".[69][67]

The U.S. Federal Communications Commission's (FCC) 2019 Broadband Deployment Report indicated that 21.3 million Americans do not have access to wired or wireless broadband internet.[70] As of 2020, BroadbandNow, an independent research company studying access to internet technologies, estimated that the actual number of Americans without high-speed internet is twice that number.[71] According to a 2021 Pew Research Center report, smartphone ownership and internet use have increased for all Americans; however, a significant gap still exists between those with lower incomes and those with higher incomes:[72] U.S. households earning $100K or more per year are twice as likely to own multiple devices and have home internet service as those earning between $30K and $100K, and three times as likely as those earning less than $30K per year.[72] The same research indicated that 13% of the lowest-income households had no access to the internet or digital devices at home, compared to only 1% of the highest-income households.[72]

According to a Pew Research Center survey of U.S. adults conducted from January 25 to February 8, 2021, the digital lives of Americans with high and low incomes differ considerably, even though the proportion of Americans that use home internet or cell phones remained constant between 2019 and 2021. A quarter of those with yearly earnings under $30,000 (24%) say they do not own a smartphone, and four out of every ten low-income people (43%) do not have home internet access or a computer. Furthermore, the greater part of lower-income Americans do not own a tablet device.[72]

On the other hand, each of these technologies is practically universal among people earning $100,000 or more per year. Americans with larger family incomes are also more likely to buy a variety of internet-connected products. Home Wi-Fi, a smartphone, a computer, and a tablet are all used by around six out of ten households making $100,000 or more per year, compared to 23 percent of lower-income households.[72]

Although many groups in society are affected by a lack of access to computers or the Internet, communities of color are specifically observed to be negatively affected by the digital divide.[73] Pew research shows that as of 2021, home broadband rates are 81% for White households, 71% for Black households, and 65% for Hispanic households.[74] While 63% of adults find the lack of broadband to be a disadvantage, only 49% of White adults do.[73] Smartphone and tablet ownership remains consistent, with about 8 out of 10 Black, White, and Hispanic individuals reporting owning a smartphone and half owning a tablet.[73] A 2021 survey found that a quarter of Hispanics rely on their smartphone and do not have access to broadband.[73]

Inequities in access to information technologies are present among individuals living with a physical disability in comparison to those who are not living with a disability. In 2011, according to the Pew Research Center, 54% of households with a person who had a disability had home Internet access, compared to 81% of households that did not have a person with a disability.[75] The type of disability an individual has can prevent them from interacting with computer screens and smartphone screens, for instance quadriplegia or a disability in the hands.
However, there is still a lack of access to technology and home Internet access among those who have a cognitive and auditory disability as well. There is a concern of whether or not the increase in the use of information technologies will increase equality through offering opportunities for individuals living with disabilities or whether it will only add to the present inequalities and lead to individuals living with disabilities being left behind in society.[76]Issues such as the perception of disabilities in society, national and regional government policy, corporate policy, mainstream computing technologies, and real-time online communication have been found to contribute to the impact of the digital divide on individuals with disabilities. In 2022, a survey of people in the UK with severe mental illness found that 42% lacked basic digital skills, such as changing passwords or connecting to Wi-Fi.[77][78] People with disabilities are also the targets of online abuse. Online disability hate crimes have increased by 33% across the UK between 2016–17 and 2017–18 according to a report published byLeonard Cheshire, a health and welfare charity.[79]Accounts of online hate abuse towards people with disabilities were shared during an incident in 2019 when modelKatie Price's son was the target of online abuse that was attributed to him having a disability. In response to the abuse, a campaign was launched by Price to ensure that Britain's MPs held accountable those who perpetuate online abuse towards those with disabilities.[80]Online abuse towards individuals with disabilities is a factor that can discourage people from engaging online which could prevent people from learning information that could improve their lives. Many individuals living with disabilities face online abuse in the form of accusations of benefit fraud and "faking" their disability for financial gain, which in some cases leads to unnecessary investigations. Due to the rapidly declining price of connectivity and hardware, skills deficits have eclipsed barriers of access as the primary contributor to thegender digital divide. Studies show that women are less likely to know how to leverage devices and Internet access to their full potential, even when they do use digital technologies.[81]In ruralIndia, for example, a study found that the majority of women who ownedmobile phonesonly knew how to answer calls. They could not dial numbers or read messages without assistance from their husbands, due to a lack of literacy and numeracy skills.[82]A survey of 3,000 respondents across 25 countries found that adolescent boys withmobile phonesused them for a wider range of activities, such as playing games and accessing financial services online. Adolescent girls in the same study tended to use just the basic functionalities of their phone, such as making calls and using the calculator.[83]Similar trends can be seen even in areas where Internet access is near-universal. A survey of women in nine cities around the world revealed that although 97% of women were using social media, only 48% of them were expanding their networks, and only 21% of Internet-connected women had searched online for information related to health, legal rights or transport.[83]In some cities, less than one quarter of connected women had used the Internet to look for a job.[81] Studies show that despite strong performance in computer and information literacy (CIL), girls do not have confidence in theirICTabilities. 
According to the International Computer and Information Literacy Study (ICILS) assessment, girls' self-efficacy scores (their perceived, as opposed to their actual, abilities) for advanced ICT tasks were lower than boys'.[84][81] A paper published by J. Cooper from Princeton University points out that learning technology is designed to be receptive to men instead of women. Overall, the study attributes the problem to gendered socialization patterns that frame computers as part of the male experience, since computers have traditionally been presented as toys for boys in childhood.[85]This divide persists as children grow older, and young girls are not encouraged as much to pursue degrees in IT and computer science. In 1990, the percentage of women in computing jobs was 36%; by 2016, this number had fallen to 25%. This can be seen in the underrepresentation of women in IT hubs such as Silicon Valley.[86] Algorithmic bias has also been shown in machine learning algorithms implemented by major companies. In 2015, Amazon had to abandon a recruiting algorithm that produced divergent ratings for candidates applying for software developer and other technical jobs. It was revealed that Amazon's machine learning algorithm was biased against women and favored male resumes over female resumes. This was because Amazon's computer models were trained to vet patterns in resumes over a 10-year period, and during this ten-year period the majority of the resumes belonged to men, a reflection of male dominance across the tech industry.[87] The age gap contributes to the digital divide because people born before 1983 did not grow up with the internet. According to Marc Prensky, people who fall into this age range are classified as "digital immigrants."[88]A digital immigrant is defined as "a person born or brought up before the widespread use of digital technology."[89]The internet became officially available for public use on January 1, 1983; anyone born before then has had to adapt to the new age of technology.[90]By contrast, people born after 1983 are considered "digital natives", defined as people born or brought up during the age of digital technology.[89] Across the globe, there is a 10 percentage point difference in internet usage between people aged 15–24 and people aged 25 or older. According to the International Telecommunication Union (ITU), 75% of people aged 15–24 used the internet in 2022 compared to 65% of people aged 25 or older.[91]The largest generational divide occurs in Africa, with 55% of the younger age group using the internet compared to 36% of people aged 25 or older. The smallest divide occurs in the Commonwealth of Independent States, with 91% of the younger age group using the internet compared to 83% of people aged 25 or older. In addition to being less connected to the internet, older generations are less likely to use financial technology, also known as fintech: any way of managing money via digital devices.[92]Some examples of fintech include digital payment apps such as Venmo and Apple Pay, tax services such as TurboTax, and applying for a mortgage digitally.
In data from World Bank Findex, 40% of people younger than 40 years old utilized fintech compared to less than 25% of people aged 60 years or older.[93] The divide between differing countries or regions of the world is referred to as theglobal digital divide, which examines the technological gap between developing and developed countries.[94]The divide within countries (such as thedigital divide in the United States) may refer to inequalities between individuals, households, businesses, or geographic areas, usually at differentsocioeconomiclevels or other demographic categories. In contrast, the global digital divide describes disparities in access to computing and information resources, and the opportunities derived from such access.[95]As the internet rapidly expands it is difficult for developing countries to keep up with the constant changes. In 2014 only three countries (China,US,Japan) host 50% of the globally installed bandwidth potential.[28]This concentration is not new, as historically only ten countries have hosted 70–75% of the global telecommunication capacity (see Figure). The U.S. lost its global leadership in terms of installed bandwidth in 2011, replaced by China, who hosted more than twice as much national bandwidth potential in 2014 (29% versus 13% of the global total).[28] Somezero-ratingprograms such asFacebook Zerooffer free/subsidized data access to certain websites. Critics object that this is an anti-competitive program that underminesnet neutralityand creates a "walled garden".[96]A 2015 study reported that 65% ofNigerians, 61% ofIndonesians, and 58% ofIndiansagree with the statement that "Facebook is the Internet" compared with only 5% in the US.[97] Once an individual is connected, Internet connectivity and ICTs can enhance his or her future social and cultural capital.Social capitalis acquired through repeated interactions with other individuals or groups of individuals. Connecting to the Internet creates another set of means by which to achieve repeated interactions. ICTs and Internet connectivity enable repeated interactions through access to social networks, chat rooms, and gaming sites. Once an individual has access to connectivity, obtains infrastructure by which to connect, and can understand and use the information that ICTs and connectivity provide, that individual is capable of becoming a "digital citizen."[54] In the United States, the research provided by Unguarded Availability Services notes a direct correlation between a company's access to technological advancements and its overall success in bolstering the economy.[98]The study, which includes over 2,000 IT executives and staff officers, indicates that 69 percent of employees feel they do not have access to sufficient technology to make their jobs easier, while 63 percent of them believe the lack of technological mechanisms hinders their ability to develop new work skills.[98]Additional analysis provides more evidence to show how the digital divide also affects the economy in places all over the world. 
A BEG report suggests that in countries like Sweden, Switzerland, and the U.K., digital connections among communities are easier to establish, allowing their populations to obtain a much larger share of the economy via digital business.[99]In fact, in these places, populations hold shares approximately 2.5 percentage points higher.[99]During a United Nations meeting, a Bangladeshi representative expressed concern that poor and underdeveloped countries would be left behind due to a lack of funds to bridge the digital gap.[100] The digital divide impacts children's ability to learn and grow in low-income school districts. Without Internet access, students are unable to cultivate the technological skills necessary to understand today's dynamic economy.[101]The need for the internet starts while children are in school, where it is necessary for matters such as school portal access, homework submission, and assignment research.[102]The Federal Communications Commission's Broadband Task Force created a report showing that about 70% of teachers give students homework that demands access to broadband.[103]Approximately 65% of young students use the Internet at home to complete assignments as well as connect with teachers and other students via discussion boards and shared files.[103]A recent study indicates that approximately 50% of students say that they are unable to finish their homework because they cannot connect to the Internet or, in some cases, find a computer.[103]Additionally, the Public Policy Institute of California reported in 2023 that 27% of the state's school children lack the broadband necessary to attend school remotely, and 16% have no internet connection at all.[104] This disadvantage has consequences: 42% of students say they received a lower grade because of it.[103]According to research conducted by the Center for American Progress, "if the United States were able to close the educational achievement gaps between native-born white children and black and Hispanic children, the U.S. economy would be 5.8 percent—or nearly $2.3 trillion—larger in 2050".[105] In a reverse of this idea, well-off families, especially tech-savvy parents in Silicon Valley, carefully limit their own children's screen time. The children of wealthy families attend play-based preschool programs that emphasize social interaction instead of time spent in front of computers or other digital devices, and they pay to send their children to schools that limit screen time.[106]American families that cannot afford high-quality childcare options are more likely to use tablet computers filled with apps for children as a cheap replacement for a babysitter, and their government-run schools encourage screen time during school.
Students in school are also learning about the digital divide.[106] To reduce the impact of the digital divide and increase digital literacy in young people at an early age, governments have begun to develop and focus policy on embedding digital literacies in both student and educator programs, for instance in Initial Teacher Training programs in Scotland.[107]The National Framework for Digital Literacies in Initial Teacher Education was developed by representatives from Higher Education institutions that offer Initial Teacher Education (ITE) programs, in conjunction with the Scottish Council of Deans of Education (SCDE) and with the support of the Scottish Government.[107]This policy-driven approach aims to establish an academic grounding in the exploration of learning and teaching digital literacies and their impact on pedagogy, as well as ensuring educators are equipped to teach in the rapidly evolving digital environment and to continue their own professional development. Factors such as nationality, gender, and income contribute to the digital divide across the globe, and a person's demographic characteristics can reduce their likelihood of having internet access. According to a study conducted by the ITU in 2022, Africa has the lowest share of people using the internet, at 40%; the next lowest is the Asia-Pacific region at 64%. Internet access remains a problem in Least Developed Countries and Landlocked Developing Countries: in both groups, 36% of people use the internet, compared to a 66% average around the world.[91] Men generally have more access to the internet around the world. The gender parity score across the globe is 0.92. A gender parity score is calculated as the percentage of women who use the internet divided by the percentage of men who use the internet. Ideally, countries want to have gender parity scores between 0.98 and 1.02. The region with the least gender parity is Africa, with a score of 0.75. The next lowest gender parity score belongs to the Arab States at 0.87. The Americas, the Commonwealth of Independent States, and Europe all have the highest gender parity scores, with scores that do not go below 0.98 or above 1. Gender parity scores are often impacted by class: low-income regions have a score of 0.65 while upper-middle-income and high-income regions have a score of 0.99.[91] The difference between economic classes remains a prevalent aspect of the digital divide. People in low-income groups use the internet at a 26% rate, followed by lower-middle income at 56%, upper-middle income at 79%, and high income at 92%. The staggering difference between low-income and high-income individuals can be traced to the affordability of mobile products. Products are becoming more affordable as the years pass; according to the ITU, "the global median price of mobile-broadband services dropped from 1.9 percent to 1.5 percent of average gross national income (GNI) per capita." There is still plenty of work to be done, as there is a 66 percentage point difference between low-income and high-income individuals' access to the internet.[91] The Facebook divide,[108][109][110][111]a concept derived from the "digital divide", is the phenomenon with regard to access to, use of, and impact of Facebook on society.
It was coined at the International Conference on Management Practices for the New Economy (ICMAPRANE-17) on February 10–11, 2017.[112] Additional concepts of Facebook Native and Facebook Immigrants were suggested at the conference.Facebook divide,Facebook native,Facebook immigrants, andFacebook left-behindare concepts for social and business management research. Facebook immigrants utilize Facebook for their accumulation of both bonding and bridgingsocial capital. Facebook natives, Facebook immigrants, and Facebook left-behind induced the situation of Facebook inequality. In February 2018, the Facebook Divide Index was introduced at the ICMAPRANE conference in Noida, India, to illustrate the Facebook divide phenomenon.[113] In the year 2000, theUnited Nations Volunteers(UNV) program launched its Online Volunteering service,[114]which uses ICT as a vehicle for and in support of volunteering. It constitutes an example of a volunteering initiative that effectively contributes to bridge the digital divide. ICT-enabled volunteering has a clear added value for development. If more people collaborate online with more development institutions and initiatives, this will imply an increase in person-hours dedicated to development cooperation at essentially no additional cost. This is the most visible effect of online volunteering for human development.[115] Since May 17, 2006, theUnited Nationshas raised awareness of the divide by way of theWorld Information Society Day.[116]In 2001, it set up the Information and Communications Technology (ICT) Task Force.[117]LaterUNinitiatives in this area are theWorld Summit on the Information Societysince 2003, and theInternet Governance Forum, set up in 2006. As of 2009, the borderline between ICT as anecessity goodand ICT as aluxury goodwas roughly around US$10 per person per month, or US$120 per year,[61]which means that people consider ICT expenditure of US$120 per year as a basic necessity. Since more than 40% of the world population lives on less than US$2 per day, and around 20% live on less than US$1 per day (or less than US$365 per year), these income segments would have to spend one third of their income on ICT (120/365 = 33%). The global average of ICT spending is at a mere 3% of income.[61]Potential solutions include driving down the costs of ICT, which includes low-cost technologies and shared access throughTelecentres.[118][119] In 2022, the USFederal Communications Commissionstarted a proceeding "to prevent and eliminate digital discrimination and ensure that all people of the United States benefit from equal access to broadband internet access service, consistent with Congress's direction in the Infrastructure Investment and Jobs Act.[120] Social media websites serve as both manifestations of and means by which to combat the digital divide. The former describes phenomena such as the divided users' demographics that make up sites such as Facebook,WordPressand Instagram. Each of these sites hosts communities that engage with otherwise marginalized populations. 
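The affordability arithmetic quoted earlier in this section (ICT as a necessity good costing roughly US$10 per person per month, or US$120 per year) can be restated as a minimal sketch. The income levels below follow the text; everything else is simple division.

```python
# Minimal sketch of the ICT affordability arithmetic described above.
# The US$120/year "necessity" threshold and the US$1/day and US$2/day income
# levels come from the text; the calculation itself is simple division.
ICT_NECESSITY_PER_YEAR = 120.0  # US$ per person per year (2009 estimate)

annual_incomes = {
    "living on about US$1/day": 365.0,
    "living on about US$2/day": 730.0,
}

for label, income in annual_incomes.items():
    share = ICT_NECESSITY_PER_YEAR / income
    print(f"People {label} would spend {share:.0%} of income on ICT")
    # -> roughly 33% and 16%, versus a global average ICT spend of about 3%
```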
In 2010, an "online indigenous digital library as part of public library services" was created inDurban, South Africa to narrow the digital divide by not only giving the people of the Durban area access to this digital resource, but also by incorporating the community members into the process of creating it.[121] In 2002, theGates Foundationstarted the Gates Library Initiative which provides training assistance and guidance in libraries.[122] InKenya, lack of funding, language, and technology illiteracy contributed to an overall lack of computer skills and educational advancement. This slowly began to change when foreign investment began.[123][124]In the early 2000s, theCarnegie Foundationfunded a revitalization project through theKenya National Library Service. Those resources enabled public libraries to provide information and communication technologies to their patrons. In 2012, public libraries in theBusiaandKiberiacommunities introduced technology resources to supplement curriculum for primary schools. By 2013, the program expanded into ten schools.[125] Even though individuals might be capable of accessing the Internet, many are opposed by barriers to entry, such as a lack of means to infrastructure or the inability to comprehend or limit the information that the Internet provides. Some individuals can connect, but they do not have the knowledge to use what information ICTs and Internet technologies provide them. This leads to a focus on capabilities and skills, as well as awareness to move from mere access to effective usage of ICT.[126] Community informatics(CI) focuses on issues of "use" rather than "access". CI is concerned with ensuring the opportunity not only for ICT access at the community level but also, according toMichael Gurstein, that the means for the "effective use" of ICTs for community betterment and empowerment are available.[127]Gurstein has also extended the discussion of the digital divide to include issues around access to and the use of "open data" and coined the term "data divide" to refer to this issue area.[128] Since gender, age, race, income, and educational digital divides have lessened compared to the past, some researchers suggest that the digital divide is shifting from a gap in access and connectivity to ICTs to aknowledge divide.[129]A knowledge divide concerning technology presents the possibility that the gap has moved beyond the access and having the resources to connect to ICTs to interpreting and understanding information presented once connected.[130] The second-level digital divide, also referred to as the production gap, describes the gap that separates the consumers of content on the Internet from the producers of content.[131]As the technological digital divide is decreasing between those with access to the Internet and those without, the meaning of the term digital divide is evolving.[129]Previously, digital divide research was focused on accessibility to the Internet and Internet consumption. 
However, with an increasing share of the population gaining access to the Internet, researchers are examining how people use the Internet to create content and what impact socioeconomic status has on user behavior.[132] New applications have made it possible for anyone with a computer and an Internet connection to be a creator of content, yet the majority of user-generated content available widely on the Internet, like public blogs, is created by a small portion of the Internet-using population. Web 2.0 technologies like Facebook, YouTube, Twitter, and blogs enable users to participate online and create content without having to understand how the technology actually works, leading to an ever-increasing digital divide between those who have the skills and understanding to interact more fully with the technology and those who are passive consumers of it.[131] Some of the reasons for this production gap include material factors like the type of Internet connection one has and the frequency of access to the Internet. The more frequently a person has access to the Internet and the faster the connection, the more opportunities they have to gain technology skills and the more time they have to be creative.[133] Other reasons include cultural factors often associated with class and socioeconomic status. Users of lower socioeconomic status are less likely to participate in content creation due to disadvantages in education and a lack of the necessary free time for the work involved in blog or website creation and maintenance.[133]Additionally, there is evidence to support the existence of the second-level digital divide at the K-12 level based on how educators use technology for instruction.[134]Schools' economic factors have been found to explain variation in how teachers use technology to promote higher-order thinking skills.[134] This article incorporates text from a free content work. Licensed under CC BY-SA 3.0 IGO. Text taken from I'd blush if I could: closing gender divides in digital skills through education, UNESCO, EQUALS Skills Coalition, UNESCO.
https://en.wikipedia.org/wiki/Digital_divide
As of the early 2000s, several speech recognition (SR) software packages exist for Linux. Some of them are free and open-source software and others are proprietary software. Speech recognition usually refers to software that attempts to distinguish thousands of words in a human language. Voice control may refer to software used for communicating operational commands to a computer. In the late 1990s, a Linux version of ViaVoice, created by IBM, was made available to users for no charge. In 2002, the free software development kit (SDK) was removed by the developer. In the early 2000s, there was a push to get a high-quality Linux native speech recognition engine developed. As a result, several projects dedicated to creating Linux speech recognition programs were begun, such as Mycroft, which is similar to Microsoft Cortana, but open-source. It is essential to compile a speech corpus to produce acoustic models for speech recognition projects. VoxForge is a free speech corpus and acoustic model repository that was built to collect transcribed speech to be used in speech recognition projects. VoxForge accepts crowdsourced speech samples and corrections of recognized speech sequences. It is licensed under a GNU General Public License (GPL). The first step is to begin recording an audio stream on a computer. The user then has two main processing options: processing the audio locally on the machine, or sending it to a remote server for recognition. Remote recognition was formerly used by smartphones because they lacked sufficient performance, working memory, or storage to process speech recognition within the phone. These limits have largely been overcome, although server-based SR on mobile devices remains universal. Discrete speech recognition can be performed within a web browser and works well with supported browsers. Remote SR does not require installing software on a desktop computer or mobile device, as it is mainly a server-based system, with the inherent security issues noted above. The following is a list of projects dedicated to implementing speech recognition in Linux, and major native solutions. These are not end-user applications; they are programming libraries that may be used to develop end-user applications. Speech recognition usually refers to software that attempts to distinguish thousands of words in a human language. Voice control may refer to software used for sending operational commands to a computer or appliance. Voice control typically requires a much smaller vocabulary and thus is much easier to implement. Simple software combined with keyboard shortcuts has the earliest potential for practically accurate voice control in Linux. It is possible to use programs such as Dragon NaturallySpeaking in Linux by using Wine, though some problems may arise depending on which version is used.[3] It is also possible to use Windows speech recognition software under Linux. Using no-cost virtualization software, it is possible to run Windows and NaturallySpeaking under Linux. VMware Server or VirtualBox support copy and paste to/from a virtual machine, making dictated text easily transferable to/from the virtual machine.
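As a purely illustrative sketch of the local-processing option described above (the article itself does not prescribe a toolkit), the following Python example assumes the third-party SpeechRecognition package with an offline CMU PocketSphinx backend, both installable on a Linux system; the audio file name is hypothetical.

```python
# Illustrative only: local (offline) recognition of a recorded audio file on
# Linux, assuming the third-party SpeechRecognition package and PocketSphinx
# (pip install SpeechRecognition pocketsphinx). Not a project named in the text.
import speech_recognition as sr

recognizer = sr.Recognizer()

# Load a previously recorded audio stream (hypothetical file name)
with sr.AudioFile("sample.wav") as source:
    audio = recognizer.record(source)

try:
    # recognize_sphinx() runs entirely on the local machine, in contrast to
    # the server-based (remote) recognition discussed above
    text = recognizer.recognize_sphinx(audio)
    print("Recognized:", text)
except sr.UnknownValueError:
    print("Speech was unintelligible")
```

A remote, server-based workflow would instead send the captured audio to a recognition service over the network, trading local installation effort for the dependency and security considerations noted in the article.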
https://en.wikipedia.org/wiki/Speech_recognition_software_for_Linux
Public transport(also known aspublic transportation,public transit,mass transit, or simplytransit) is a system oftransportforpassengersby group travel systems available for use by the general public unlikeprivate transport, typically managed on a schedule, operated on established routes, and that may charge a posted fee for each trip.[1][2][3]There is no rigid definition of which kinds of transport are included, and air travel is often not thought of when discussing public transport—dictionaries use wording like "buses, trains, etc."[4]Examples of public transport includecity buses,trolleybuses,trams(orlight rail) andpassenger trains,rapid transit(metro/subway/underground, etc.) andferries. Public transport between cities is dominated byairlines,coaches, andintercity rail.High-speed railnetworks are being developed in many parts of the world. Most public transport systems run along fixed routes with set embarkation/disembarkation points to a prearranged timetable, with the most frequent services running to aheadway(e.g., "every 15 minutes" as opposed to being scheduled for a specific time of the day). However, most public transport trips include other modes of travel, such as passengers walking or catching bus services to access train stations.[5]Share taxisoffer on-demand services in many parts of the world, which may compete with fixed public transport lines, or complement them, by bringing passengers to interchanges.Paratransitis sometimes used in areas of low demand and for people who need a door-to-door service.[6] Urban public transit differs distinctly among Asia, North America, and Europe. In Japan, profit-driven, privately owned and publicly traded mass transit andreal estateconglomerates predominantly operate public transit systems.[7][8][better source needed]In North America, municipaltransit authoritiesmost commonly run mass transit operations. In Europe, both state-owned and private companies predominantly operate mass transit systems. For geographical, historical and economic reasons, differences exist internationally regarding the use and extent of public transport. TheInternational Association of Public Transport(UITP) is the international network for public transport authorities and operators, policy decision-makers, scientific institutes and the public transport supply and service industry. It has over 1,900 members from more than 100 countries from all over the globe. In recent years, some high-wealth cities have seen a decline in public transport usage. A number of sources attribute this trend to the rise in popularity of remote work, ride-sharing services, and car loans being relatively cheap across many countries. Major cities such as Toronto, Paris, Chicago, and London have seen this decline and have attempted to intervene by cutting fares and encouraging new modes of transportation, such as e-scooters and e-bikes.[9]Because of the reduced emissions and other environmental impacts of using public transportation over private transportation, many experts have pointed to an increased investment in public transit as an importantclimate change mitigationtactic.[10] Conveyances designed for public hire are as old as the firstferry service. The earliest public transport waswater transport.[11]Ferries appear inGreek mythologywritings. 
The mystical ferryman Charon had to be paid and would only then take passengers to Hades.[12] Some historical forms of public transport include the stagecoaches traveling a fixed route between coaching inns, and the horse-drawn boat carrying paying passengers, which was a feature of European canals from the 17th century onwards. The canal itself as a form of infrastructure dates back to antiquity. In ancient Egypt canals were used for freight transportation to bypass the Aswan cataract. The Chinese also built canals for water transportation as far back as the Warring States period,[13]which began in the 5th century BCE. Whether or not those canals were used for for-hire public transport remains unknown; the Grand Canal in China (begun in 486 BCE) served primarily the grain trade. The bus, the first organized public transit system within a city, appears to have originated in Paris in 1662,[14]although the service in question, Carrosses à cinq sols (English: five-sol coaches), which had been developed by the mathematician and philosopher Blaise Pascal, lasted only fifteen years, until 1677.[15]Buses are known to have operated in Nantes in 1826. The public bus transport system was introduced to London in July 1829.[16] The first passenger horse-drawn railway opened in 1806; it ran along the Swansea and Mumbles Railway.[17] In 1825, George Stephenson built the Locomotion No 1 for the Stockton and Darlington Railway in northeast England, the first public steam railway in the world. The world's first steam-powered underground railway opened in London in 1863.[18] The first successful electric streetcar was built for 11 miles of track for the Union Passenger Railway in Richmond, Virginia, in 1888. Electric streetcars could carry heavier passenger loads than their predecessors, which reduced fares and stimulated greater transit use. Two years after the Richmond success, over thirty-two thousand electric streetcars were operating in America. Electric streetcars also paved the way for the first subway system in America. Before electric streetcars, steam-powered subways were considered; however, most people believed that riders would avoid the smoke-filled subway tunnels produced by steam engines. In 1894, Boston built the first subway in the United States, an electric streetcar line in a 1.5-mile tunnel under Tremont Street's retail district. Other cities quickly followed, constructing thousands of miles of subway in the following decades.[19] In March 2020, Luxembourg abolished fares for trains, trams and buses and became the first country in the world to make all public transport free.[20] The Encyclopædia Britannica specifies that public transportation is within urban areas, but does not limit its discussion of the topic to urban areas.[21] Seven criteria estimate the usability of different types of public transport and its overall appeal: speed, comfort, safety, cost, proximity, timeliness and directness.[22]Speed is calculated from total journey time including transfers. Proximity means how far passengers must walk or otherwise travel before they can begin the public transport leg of their journey and how close it leaves them to their desired destination. Timeliness is how long they must wait for the vehicle. Directness records how far a journey using public transport deviates from a passenger's ideal route. In selecting between competing modes of transport, many individuals are strongly motivated by direct cost (the travel fare or ticket price to them) and convenience, as well as being informed by habit.
The same individual may accept the lost time and statistically higher risk of accident in private transport, together with the initial, running and parking costs. Loss of control, spatial constriction, overcrowding, high speeds/accelerations, height and other phobias may discourage use of public transport. Actual travel time on public transport becomes a lesser consideration when it is predictable and when travel itself is reasonably comfortable (seats, toilets, services), and can thus be scheduled and used pleasurably, productively or for (overnight) rest. Chauffeured movement is enjoyed by many people when it is relaxing and safe, but not too monotonous. Waiting, interchanging, stops and holdups, for example due to traffic or for security, are discomforting. Jet lag is a human constraint discouraging frequent rapid long-distance east–west commuting, favoring modern telecommunications and VR technologies. An airline provides scheduled service with aircraft between airports. Air travel has high speeds, but incurs large waiting times before and after travel, and is therefore often only feasible over longer distances or in areas where a lack of surface infrastructure makes other modes of transport impossible. Bush airlines work more like bus stops; an aircraft waits for passengers and takes off when it is full. Bus services use buses on conventional roads to carry numerous passengers on shorter journeys. Buses operate with low capacity compared with trams or trains, and can operate on conventional roads with relatively inexpensive bus stops to serve passengers. Therefore, buses are commonly used in smaller cities, towns, and rural areas, and for shuttle services supplementing other means of transit in large cities. Midibuses have an even lower capacity, while double-decker buses and articulated buses have a slightly larger capacity. Intercity bus services use coaches (long-distance buses) for suburb-to-CBD or longer-distance transportation. The vehicles are normally equipped with more comfortable seating, a separate luggage compartment, video and possibly also a toilet. They have higher standards than city buses, but a limited stopping pattern. Trolleybuses are electrically powered buses that receive power from an overhead power line by way of a set of trolley poles. Online Electric Vehicles are buses that run on a conventional battery, but are recharged frequently at certain points via underground wires.[23] Certain types of buses, styled after old-style streetcars, are also called trackless trolleys, but are built on the same platforms as a typical diesel, CNG, or hybrid bus; these are more often used for tourist rides than commuting and tend to be privately owned. Electric buses can store the needed electrical energy on board, or be fed mains electricity continuously from an external source such as overhead lines. The majority of buses using on-board energy storage are battery electric buses (which is what this article mostly deals with), where the electric motor obtains energy from an onboard battery pack. Bus rapid transit (BRT) is a term used for buses operating on a dedicated right-of-way, much like light rail, resulting in higher capacity and operating speed compared to regular buses. A guided bus is capable of being steered by external means, usually on a dedicated track or roll way that excludes other traffic, permitting the maintenance of schedules even during rush hours. Passenger rail transport is the conveyance of passengers by means of wheeled vehicles specially designed to run on railways.
Trains allow high capacity at most distance scales, but requiretrack,signalling, infrastructure andstationsto be built and maintained resulting in high upfront costs. Passenger rail is used on long distances even crossing national borders, within regions and in various ways inurban environments. Inter-city railis long-haul passenger services that connect multiple urban areas. They have few stops, and aim at high average speeds, typically only making one of a few stops per city. These services may also be international. High-speed railis passenger trains operating significantly faster than conventional rail—typically defined as at least 200 kilometres per hour (120 mph). The most predominant systems have been built in Europe and East Asia, and compared with air travel, offer long-distance rail journeys as quick as air services, have lower prices to compete more effectively and use electricity instead of combustion.[24] Urban rail transitis an all-encompassing term for various types of local rail systems, such as these examplestrams,light rail,rapid transit,people movers,commuter rail,monorail,suspension railwaysandfuniculars. Commuter railis part of an urban area's public transport. It provides faster services to outersuburbsand neighboringsatellite cities. Trains stop attrain stationsthat are located to serve a smaller suburban or town center. The stations are often combined withshuttle busorpark and ridesystems. Frequency may be up to several times per hour, and commuter rail systems may either be part of the national railway or operated by local transit agencies. Common forms of commuter rail employ eitherdiesel electriclocomotives, orelectric multiple unittrains. Some commuter train lines share a railway withfreight trains.[25] AMetro rapid transit(MRT) railway system (also called a metro, underground, heavy rail, or subway) operates in an urban area with high capacity and frequency, andgrade separationfrom other traffic.[26][27]Heavy rail is a high-capacity form of rail transit, with 4 to 10 units forming a train, and can be the most expensive form of transit to build. Modern heavy rail systems are mostly driverless, which allows for higher frequencies and less maintenance cost.[25] Systems are able to transport large numbers of people quickly over short distances with little land use. Variations of rapid transit includepeople movers, small-scalelight metroand the commuter rail hybridS-Bahn. More than 160 cities have rapid transit systems, totalling more than 8,000 km (4,971 mi) of track and 7,000 stations. Twenty-five cities have systems under construction. Medium-capacity rail system(MCS) also including light metro, is light capacity rapid transit compared to typical heavy-rail rapid transit. MCS trains are usually 1 to 4 cars. Most medium-capacity rail systems are automated or use light-rail type vehicles. Automated guideway transit(AGT) system is a type of fixed guideway transit infrastructure with a riding or suspension track that supports and physically guides one or more driverless vehicles along its length. Light rail transit(LRT) is a term coined in 1972 and uses mainly tram technology. Light rail has mostly dedicated right-of-ways and less sections shared with other traffic and usually step-free access. A light rail line is generally traversed with increased speed compared to a tram line. Light rail lines are, thus, essentially modernizedinterurbans. 
Unlike trams, light rail trains are often longer and have one to four cars per train.[25]In some cases, trams are also considered part of the light rail family. Trams (also known as streetcars or trolleys) are railborne vehicles that originally ran in city streets, though over the decades more and more dedicated tracks have been used. They have higher capacity than buses, but must follow dedicated infrastructure with rails and wires either above or below the track, limiting their flexibility. In the United States, trams were commonly used prior to the 1930s, before being superseded by the bus. In modern public transport systems, they have been reintroduced in the form of light rail.[25] A rubber-tyred tram is a development of the guided bus in which a vehicle is guided by a fixed rail in the road surface and draws current from overhead electric wires (either via pantograph or trolley pole). A Translohr is a rubber-tyred tramway system, originally developed by Lohr Industrie of France and now run by a consortium of Alstom Transport and Fonds stratégique d'investissement (FSI) as newTL. Autonomous Rail Rapid Transit (ART) is a lidar (light detection and ranging) guided bus and bi-articulated bus system for urban passenger transport; it resembles a rubber-tyred tram, being as much a tram as a bus rapid transit system.[28] Somewhere between light and heavy rail in terms of carbon footprint, monorail systems usually use overhead tracks, similar to an elevated railway above other traffic. The systems are either mounted directly on the track supports or put in an overhead design with the train suspended. Monorail systems are used throughout the world (especially in Europe and East Asia, particularly Japan), but apart from public transit installations in Las Vegas and Seattle, most North American monorails are either short shuttle services or privately owned services (with 150,000 daily riders, the Disney monorail system is a successful example).[29] Personal rapid transit (PRT) is an automated cab service that runs on rails or a guideway. This is an uncommon mode of transportation (excluding elevators) due to the complexity of automation. A fully implemented system might provide most of the convenience of individual automobiles with the efficiency of public transit. The crucial innovation is that the automated vehicles carry just a few passengers, turn off the guideway to pick up passengers (permitting other PRT vehicles to continue at full speed), and drop them off at the location of their choice (rather than at a stop). Conventional transit simulations show that PRT might attract many auto users in problematic medium-density urban areas. A number of experimental systems are in progress. One might compare personal rapid transit to the more labor-intensive taxi or paratransit modes of transportation, or to the (by now automated) elevators common in many publicly accessible areas. Automated people mover (APM) is a term for grade-separated rail systems that use vehicles that are smaller and shorter in size.[25]These systems are generally used only in a small area such as a theme park or an airport. Cable-propelled transit (CPT) is a transit technology that moves people in motor-less, engine-less vehicles that are propelled by a steel cable.[30]There are two sub-groups of CPT: gondola lifts and cable cars (railway). Gondola lifts are supported and propelled from above by cables, whereas cable cars are supported and propelled from below by cables.
While historically associated with use in ski resorts, gondola lifts are now finding increased use in many urban areas, built specifically for the purposes of mass transit.[31]Many, if not all, of these systems are implemented and fully integrated within existing public transportation networks. Examples include Metrocable (Medellín), Metrocable (Caracas), Mi Teleférico in La Paz, the Portland Aerial Tram, the Roosevelt Island Tramway in New York City, and the London Cable Car. A funicular is a type of cable railway system that connects points along a railway track laid on a steep slope. The system is characterized by two counterbalanced carriages (also called cars or trains) permanently attached to opposite ends of a haulage cable, which is looped over a pulley at the upper end of the track.[32] A ferry is a boat used to carry (or ferry) passengers, and sometimes their vehicles, across a body of water. A foot-passenger ferry with many stops is sometimes called a water bus. Ferries form a part of the public transport systems of many waterside cities and islands, allowing direct transit between points at a capital cost much lower than bridges or tunnels, though at a lower speed. Ship connections over much larger distances (such as over long distances in water bodies like the Mediterranean Sea) may also be called ferry services. A report published by the UK National Infrastructure Commission in 2018 states that "cycling is mass transit and must be treated as such." Cycling infrastructure is normally provided without charge to users because it is cheaper to operate than mechanised transit systems that use sophisticated equipment and do not use human power.[33] Many cities around the world have introduced electric bikes and scooters to their public transport infrastructure. For example, in the Netherlands many individuals use e-bikes to replace their car commutes. In major American cities, start-up companies such as Uber and Lyft have implemented e-scooters as a way for people to take short trips around the city.[34] All public transport runs on infrastructure, either on roads, rail, airways or seaways. The infrastructure can be shared with other modes, freight and private transport, or it can be dedicated to public transport. The latter is especially valuable in cases where there are capacity problems for private transport. Investments in infrastructure are expensive and make up a substantial part of the total costs in systems that are new or expanding. Once built, the infrastructure will require operating and maintenance costs, adding to the total cost of public transport. Sometimes governments subsidize infrastructure by providing it free of charge, just as is common with roads for automobiles. Interchanges are locations where passengers can switch from one public transport route to another. This may be between vehicles of the same mode (like a bus interchange), or e.g. between bus and train. It can be between local and intercity transport (such as at a central station or airport). Timetables (or 'schedules' in North American English) are provided by the transport operator to allow users to plan their journeys. They are often supplemented by maps and fare schemes to help travelers coordinate their travel. Online public transport route planners help make planning easier. Mobile apps are available for many transit systems; they provide timetables and other service information, in some cases allow ticket purchase, and some allow users to plan journeys taking account of times and fare zones.
Services are often arranged to operate at regular intervals throughout the day or part of the day (known asclock-face scheduling). Often, more frequent services or even extra routes are operated during the morning and eveningrush hours. Coordination between services at interchange points is important to reduce the total travel time for passengers. This can be done by coordinating shuttle services with main routes, or by creating a fixed time (for instance twice per hour) when all bus and rail routes meet at a station and exchange passengers. There is often a potential conflict between this objective and optimising the utilisation of vehicles and drivers. The main sources of financing are ticket revenue, government subsidies and advertising. The percentage of revenue from passenger charges is known as thefarebox recovery ratio.[35]A limited amount of income may come fromland developmentand rental income from stores and vendors, parking fees, and leasing tunnels and rights-of-way to carryfiber opticcommunication lines. Most—but not all—public transport requires the purchase of aticketto generaterevenuefor the operators. Tickets may be bought either in advance, or at the time of the journey, or the carrier may allow both methods. Passengers may be issued with a paper ticket, a metal or plastictoken, or a magnetic or electronic card (smart card,contactless smart card). Sometimes a ticket has to be validated, e.g. a paper ticket has to be stamped, or anelectronic tickethas to be checked in. Tickets may be valid for a single (or return) trip, or valid within a certain area for a period of time (seetransit pass). Thefareis based on the travel class, either depending on the traveled distance, or based onzone pricing. The tickets may have to be shown or checked automatically at the station platform or when boarding, or during the ride by aconductor. Operators may choose to control all riders, allowing sale of the ticket at the time of ride. Alternatively, aproof-of-paymentsystem allows riders to enter the vehicles without showing the ticket, but riders may or may not be controlled by aticket controller; if the rider fails to show proof of payment, the operator may fine the rider at the magnitude of the fare. Multi-use tickets allow travel more than once. In addition to return tickets, this includes period cards allowing travel within a certain area (for instance month cards), or to travel a specified number of trips or number of days that can be chosen within a longer period of time (calledcarnetticket). Passes aimed at tourists, allowing free or discounted entry at many tourist attractions, typically includezero-fare public transportwithin the city. Period tickets may be for a particular route (in both directions), or for awhole network. Afree travel passallowing free and unlimited travel within a system is sometimes granted to particular social sectors, for example students, elderly, children, employees (job ticket) and the physically or mentallydisabled. Zero-fare public transportservices are funded in full by means other than collecting a fare from passengers, normally through heavysubsidyor commercialsponsorshipby businesses. Several mid-size European cities and many smaller towns around the world have converted their entire bus networks to zero-fare. Three capital cities in Europe have free public transport:Tallinn,Luxembourgand as of 2025,Belgrade. Local zero-fare shuttles or inner-city loops are far more common than city-wide systems. 
There are also zero-fare airport circulators and university transportation systems. Governments frequently opt to subsidize public transport for social, environmental or economic reasons. Common motivations include the desire to provide transport to people who are unable to use an automobile[36]and to reduce congestion, land use and automobile emissions.[36] Subsidies may take the form of direct payments for financially unprofitable services, but support may also include indirect subsidies. For example, the government may allow free or reduced-cost use of state-owned infrastructure such as railways and roads, to stimulate public transport's economic competitiveness over private transport, that normally also has free infrastructure (subsidized through such things as gas taxes). Other subsidies include tax advantages (for instanceaviation fuelis typically not taxed), bailouts if companies that are likely to collapse (often applied to airlines) and reduction of competition through licensing schemes (often applied to taxis and airlines). Private transport is normally subsidized indirectly through free roads and infrastructure,[37]as well as incentives to build car factories[38]and, on occasion, directly via bailouts of automakers.[39][40]Subsidies also may take the form of initial or increased tolls for drivers, such as theSan Francisco Bay Arearaising tolls on numerous bridges and proposing more hikes to fund theBay Area Rapid Transitsystem.[41] Land development schemes may be initialized, where operators are given the rights to use lands near stations, depots, or tracks for property development. For instance, in Hong Kong,MTR Corporation LimitedandKCR Corporationgenerate additional profits from land development to partially cover the cost of the construction of the urban rail system.[42] Some supporters of mass transit believe that use of taxpayer capital to fund mass transit will ultimately save taxpayer money in other ways, and therefore, state-funded mass transit is a benefit to the taxpayer. 
Some research has supported this position,[43]but the measurement of benefits and costs is a complex and controversial issue.[44]A lack of mass transit results in more traffic, pollution,[45][46][47]and road construction[48]to accommodate more vehicles, all costly to taxpayers;[49]providing mass transit will therefore alleviate these costs.[50] A study found that support for public transport spending is much higher amongconservativeswho have high levels of trust in government officials than those who do not.[51] Relative to other forms of transportation, public transit is safe (with a low crash risk) and secure (with low rates ofcrime).[52]The injury and death rate for public transit is roughly one-tenth that of automobile travel.[52]A 2014 study noted that "residents of transit-oriented communities have about one-fifth the per capita crash casualty rate as in automobile-oriented communities" and that "Transit also tends to have lower overall crime rates than automobile travel, and transit improvements can help reduce overall crime risk by improving surveillance and economic opportunities for at-risk populations."[52] Although relatively safe and secure, public perceptions that transit systems are dangerous endure.[52]A 2014 study stated that "Various factors contribute to the under-appreciation of transit safety benefits, including the nature of transit travel, dramatic news coverage of transit crashes and crimes, transit agency messages that unintentionally emphasize risks without providing information on its overall safety, and biased traffic safety analysis."[52] Some systems attract vagrants who use the stations or trains as sleeping shelters, though most operators have practices that discourage this.[53] Public transport is means of independent transport for individuals (without walking or bicycling) such as children too young to drive, the elderly without access to cars, those who do not hold a drivers license, and the infirm such as wheelchair users. Kneeling buses, low-floor access boarding on buses and light rail has also enabled greater access for the disabled in mobility. In recent decades low-floor access has been incorporated into modern designs for vehicles. In economically deprived areas, public transport increases individual accessibility to transport where private means are unaffordable. Although there is continuing debate as to the true efficiency of different modes of transportation, mass transit is generally regarded as significantly moreenergy efficientthan other forms of travel. A 2002 study by theBrookings Institutionand theAmerican Enterprise Institutefound that public transportation in the U.S. uses approximately half the fuel required by cars, SUVs and light trucks. 
In addition, the study noted that "private vehicles emit about 95 percent more carbon monoxide, 92 percent more volatile organic compounds and about twice as much carbon dioxide and nitrogen oxide than public vehicles for every passenger mile traveled".[55] Studies have shown that there is a strong inverse correlation between urban population density and energy consumption per capita, and that public transport could facilitate increased urban population densities, and thus reduce travel distances and fossil fuel consumption.[56] Supporters of the green movement usually advocate public transportation, because it offers decreased airborne pollution compared to automobiles transporting a single individual.[57]A study conducted in Milan, Italy, in 2004 during and after a transportation strike serves to illustrate the impact that mass transportation has on the environment. Air samples were taken between 2 and 9 January, and then tested for methane, carbon monoxide, non-methane hydrocarbons (NMHCs), and other gases identified as harmful to the environment. A computer simulation of the results showed the pattern, "with 2 January showing the lowest concentrations as a result of decreased activity in the city during the holiday season. 9 January showed the highest NMHC concentrations because of increased vehicular activity in the city due to a public transportation strike."[58] Based on the benefits of public transport, the green movement has affected public policy. For example, the state of New Jersey released Getting to Work: Reconnecting Jobs with Transit.[59]This initiative attempts to relocate new jobs into areas with higher public transportation accessibility. The initiative cites the use of public transportation as being a means of reducing traffic congestion, providing an economic boost to the areas of job relocation, and most importantly, contributing to a green environment by reducing carbon dioxide (CO2) emissions. Using public transportation can result in a reduction of an individual's carbon footprint. A single-person, 20-mile (32 km) round trip by car can be replaced using public transportation, resulting in a net CO2 emissions reduction of 4,800 pounds (2,200 kg) per year.[60]Using public transportation saves CO2 emissions in more ways than simply travel, as public transportation can help to alleviate traffic congestion as well as promote more efficient land use. When all three of these are considered, it is estimated that 37 million metric tons of CO2 will be saved annually.[60]Another study claims that using public transit instead of private transport in the U.S. in 2005 would have reduced CO2 emissions by 3.9 million metric tons and that the resulting traffic congestion reduction accounts for an additional 3.0 million metric tons of CO2 saved.[61]This is a total savings of about 6.9 million metric tons per year given the 2005 values. In order to compare the energy impact of public transportation to that of private transportation, the amount of energy per passenger mile must be calculated; comparing energy expenditure per person normalizes the data for easy comparison. Here, the units are per 100 p-km (read as person-kilometers or passenger-kilometers). In terms of energy consumption, public transportation is better than individual transport in a personal vehicle.[62]In England, bus and rail are popular methods of public transportation, especially in London.
Rail provides rapid movement into and out of the city of London while buses help to provide transport within the city itself. As of 2006–2007, the total energy cost of London's trains was 15 kWh per 100 p-km, about one-fifth that of a personal car.[63] For buses in London, it was 32 kWh per 100 p-km, about two-fifths that of a personal car.[63]This includes lighting, depots, inefficiencies due to capacity (i.e., the train or bus may not be operating at full capacity at all times), and other inefficiencies. Efficiencies of transport in Japan in 1999 were 68 kWh per 100 p-km for a personal car, 19 kWh per 100 p-km for a bus, 6 kWh per 100 p-km for rail, 51 kWh per 100 p-km for air, and 57 kWh per 100 p-km for sea.[63]These numbers from either country can be used in energy comparison calculations orlife-cycle assessmentcalculations. Public transportation also provides an arena to test environmentally friendly fuel alternatives, such as hydrogen-powered vehicles. Swapping in lighter materials to build public transportation vehicles with the same or better performance will increase their environmental friendliness while maintaining or improving current standards. Informing the public about the positive environmental effects of using public transportation, in addition to pointing out the potential economic benefit, is an important first step towards making a difference. A 2023 study titled "Subways and CO₂ Emissions: A Global Analysis with Satellite Data" found that subway systems significantly reduceCO₂ emissionsby approximately 50% in the cities they serve, contributing to an 11% global reduction. The study also explores potential expansion in 1,214 urban areas lacking subways, suggesting a potential emission cut of up to 77%. Economically, subways are viable in 794 cities under optimistic financial conditions (SCC at US$150/ton and SIC at US$140 million/km), but this figure drops to 294 cities with more pessimistic assumptions. Despite high costs—about US$200 million per kilometer for construction—subways offer substantial co-benefits, such as reduced traffic congestion and improved public health, making them a strategic investment forurban sustainabilityandclimate mitigation.[64][65] Dense areas with mixed land uses promote daily public transport use while urban sprawl is associated with sporadic public transport use. A recent European multi-city survey found that dense urban environments, reliable and affordable public transport services, and limits on motorized vehicles in high-density areas of cities help achieve the much-needed promotion of public transport use.[66] Urban space is a precious commodity and public transport utilises it more efficiently than a car-dominant society, allowing cities to be built more compactly than if they were dependent on automobile transport.[67]Ifpublic transport planningis at the core ofurban planning, it will also force cities to be built more compactly to create efficient feeds into the stations and stops of transport.[5][68]This will at the same time allow the creation of centers around the hubs, serving passengers' daily commercial needs and public services. This approach significantly reducesurban sprawl. Public land planning for public transportation can be difficult, but it is state and regional organizations that are responsible for planning and improving public transportation roads and routes. 
With public land prices booming, there must be a plan for using the land most efficiently for public transportation in order to create better transportation systems. Inefficient land use and poor planning lead to a decrease in accessibility to jobs, education, and health care.[69] A consequence for wider society and civic life is that public transport breaks down social and cultural barriers between people in public life. An important social role played by public transport is to ensure that all members of society are able to travel without walking or cycling, not just those with a driving license and access to an automobile; this includes groups such as the young, the old, the poor, those with medical conditions, and people banned from driving.Automobile dependencyis a name given by policy makers to places where those without access to a private vehicle do not have access to independent mobility.[71]This dependency contributes to thetransport divide. A 2018 study published in theJournal of Environmental Economics and Managementconcluded that expanded access to public transit has no meaningful impact on automobile volume in the long term.[72] Beyond this, public transportation gives its users the possibility of meeting other people, since no attention is diverted from interacting with fellow travelers by the task of driving. Public transport thus becomes a setting for inter-social encounters across all boundaries of social, ethnic and other types of affiliation. TheCOVID-19pandemic had a substantial effect on public transport systems, infrastructures and revenues in various cities across the world.[73]In theUnited States, the pandemic negatively impacted public transport usage through social distancing, remote work, and unemployment, causing a 79% drop in public transport ridership at the beginning of 2020. This trend continued throughout the year, with ridership 65% lower than in previous years.[74]Similarly inLondon, at the beginning of 2020, ridership in theLondon Undergroundandbusesdeclined by 95% and 85% respectively.[75]A 55% drop in public transport ridership as compared to 2019 was reported inCairo, Egyptafter a period of mandatory halt. To reduce the spread of COVID through cash contact, cashless payment systems were enforced inNairobi, Kenyaby the National Transport and Safety Authority (NTSA). Public transport was halted for three months in 2020 in Kampala,Uganda, with people resorting to walking or cycling. Post-quarantine, after public transport infrastructure was renovated, public transport such as minibus taxis was assigned specific routes. The situation was difficult in cities where people are heavily dependent on the public transport system. InKigali, Rwanda, social distancing requirements led to fifty percent occupancy restrictions, but as the pandemic situation improved, the occupancy limit was increased to meet popular demand.Addis Ababa, Ethiopiaalso had inadequate bus services relative to demand and longer wait times due to social distancing restrictions and planned to deploy more buses. Both Addis Ababa and Kampala aim to improve walking and cycling infrastructures in the future as means of commuting complementary to buses.[76]
https://en.wikipedia.org/wiki/Public_transport
MozillaPersonawas a decentralizedauthenticationsystem for the web, based on the open BrowserID protocol[1]prototyped byMozilla[2]and standardized byIETF.[3]It was launched in July 2011, but after failing to achieve traction, Mozilla announced in January 2016 plans to decommission the service by the end of the year.[4] Persona was launched in July 2011[5]and shared some of its goals with some similar authentication systems likeOpenIDorFacebook Connect, but it was different in several ways: The privacy goal was motivated by the fact that the identity provider does not know which website the user is identifying on.[6]It was fully deployed byMozillaon its own websites in January 2012.[7] In March 2014, Mozilla indicated it was dropping full-time developers from Persona and moving the project to community ownership. Mozilla indicated, however, that it had no plans to decommission Persona and would maintain some level of involvement such as in maintenance and reviewingpull requests.[8] Persona services have been shut down since November 30, 2016.[9] Persona was inspired by theVerifiedEmailProtocol[10][11]which is now known as theBrowserIDprotocol.[12]It uses any useremail addressto identify its owner. This protocol involves the browser, an identity provider, and any compliant website. The browser stores a list of user verified email addresses (certificates issued by the identity providers), and demonstrates the user's ownership of the addresses to the website usingcryptographicproof.[13] The certificates must be renewed every 24 hours by logging into the identity provider (which will usually mean entering the email and a password in a Web form on the identity provider's site). Once done, they will be usable for authenticating to websites with the same browser for the rest of the day, without entering passwords again (single sign-on).[14] The decentralization aspects of the protocol reside in the theoretical support of any identity provider service, while in practice it relied mainly on Mozilla's servers (which may in turn delegate email address verification, seeidentity bridgingbelow). However, even if the protocol heavily relies on a central identity provider, this central actor only knows when browsers renew certificates, and cannot in principle monitor where the certificates will be used. Mozilla announced "identity bridging" support for Persona in July 2013. As they describe on their blog: "Traditionally ... Mozilla would send you an email and ask you to click on the confirmation link it contained. With Identity Bridging, Persona learned a new trick; instead of sending confirmation emails, Persona can ask you to verify your identity via your email provider’s existingOpenIDorOAuthgateway."[15] This announcement included support for existing users of the Yahoo Mail service. In August 2013, Mozilla announced support for Identity Bridging with all Gmail accounts. They wrote in this additional announcement that "combined with our Identity Bridge for Yahoo, Persona now natively supports more than 700,000,000 active email users. That covers roughly 60–80% of people on most North American websites."[16] Persona relies heavily on the JavaScript client-side program running in the user's browser, making it widely usable. Support of authentication to Web applications via Persona can be implemented byCMSssuch asDrupal,[17]Serendipity,[18]WordPress,[19]Tiki,[20]orSPIP. 
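The certificate-and-assertion flow described above can be sketched in a few lines of code. The following toy Python example is illustrative only: real BrowserID used public-key signatures and JSON-based structures, whereas here HMAC with shared secrets stands in for signing so the example stays self-contained and runnable; all keys, addresses and URLs are made up.

# Toy sketch of a BrowserID-style login, assuming HMAC as a stand-in for
# public-key signatures.  In the real protocol the relying party reads the
# browser's public key out of the certificate and fetches the identity
# provider's public key from its support document.
import hmac, hashlib, json, time

def sign(key: bytes, payload: dict) -> dict:
    body = json.dumps(payload, sort_keys=True).encode()
    return {"payload": payload, "sig": hmac.new(key, body, hashlib.sha256).hexdigest()}

def verify(key: bytes, token: dict) -> bool:
    body = json.dumps(token["payload"], sort_keys=True).encode()
    return hmac.compare_digest(token["sig"], hmac.new(key, body, hashlib.sha256).hexdigest())

IDP_KEY = b"idp-secret"          # stands in for the identity provider's signing key
BROWSER_KEY = b"browser-secret"  # stands in for the user's key pair held by the browser

# 1. The identity provider certifies that a browser-held key speaks for an
#    email address; certificates are short-lived (~24 h), as noted above.
certificate = sign(IDP_KEY, {
    "email": "user@example.org",
    "browser_key": "browser-key-id",
    "expires": time.time() + 24 * 3600,
})

# 2. To log in, the browser signs an assertion naming the relying website
#    ("audience") so the certificate cannot be replayed elsewhere.
assertion = sign(BROWSER_KEY, {
    "audience": "https://example.com",
    "expires": time.time() + 120,
})

# 3. The relying party checks both signatures, the audience and the expiry
#    times; the identity provider never learns which site was visited.
def relying_party_accepts(cert, asrt, audience):
    now = time.time()
    return (verify(IDP_KEY, cert)
            and verify(BROWSER_KEY, asrt)
            and asrt["payload"]["audience"] == audience
            and cert["payload"]["expires"] > now
            and asrt["payload"]["expires"] > now)

print(relying_party_accepts(certificate, assertion, "https://example.com"))  # True

The audience check in step 3 is what allows the certificate to be reused across sites for the rest of the day without the identity provider observing where it is presented.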
There is also support for Persona in thePhonegap[21]platform (used for compilingHTML5apps into mobile apps).Mozillaprovides its own Persona server at persona.org.[22]It is also possible to set up your own Persona identity provider,[23]providingfederated identity. Notable sites implementing Persona includeTing,[24]The TimesCrossword, andVoost.[25]
https://en.wikipedia.org/wiki/Mozilla_Persona
Synchronized Multimedia Integration Language(SMIL(/smaɪl/)) is aWorld Wide Web ConsortiumrecommendedExtensible Markup Language(XML)markup languageto describemultimediapresentations. It defines markup for timing, layout, animations, visual transitions, and media embedding, among other things. SMIL allows presenting media items such as text, images, video, audio, links to other SMIL presentations, and files from multiple web servers. SMIL markup is written in XML, and has similarities toHTML. Members of theWorld Wide Web Consortium(also known as the "W3C") created SMIL forstreaming mediapresentations, and published SMIL 1.0 in June 1998. Many of these W3C members helped author several versions of SMIL specifications between 1996 (when the first multimedia workshops were hosted by the W3C) and 2008 (when SMIL 3.0 was published). SMIL is an XML-based application, and is a part of manyMultimedia Messaging Service(MMS) applications. SMIL can be combined with other XML-based specifications such as with SVG (as has been done withSVG animation) and with XHTML (as done withHTML+TIME). As of 2008, theW3C Recommendationfor SMIL isSMIL 3.0. SMIL 1.0 became a W3C Recommendation on 15 June 1998.[2][6] SMIL 2.0became a W3C Recommendation on 9 August 2001.[7]SMIL 2.0 introduced a modular language structure that facilitated integration of SMIL semantics into other XML-based languages. Basic animation and timing modules were integrated into Scalable Vector Graphics (SVG) and the SMIL modules formed a basis forTimed-Text. The modular structure made it possible to define the standard SMIL language profile and theXHTML+SMILlanguage profile with common syntax and standard semantics. SMIL 2.1became a W3C Recommendation on 13 December 2005.[4][8]SMIL 2.1 includes a small number of extensions based on practical experience gathered using SMIL in theMultimedia Messaging Systemon mobile phones. SMIL 3.0became a W3C Recommendation in December 2008.[5]It was first submitted as a W3C Working draft on December 21, 2006.[9]The last draft revision was released on October 6, 2008.[10][11] A SMIL document is similar in structure to anHTMLdocument in that they are typically divided between an optional<head>section and a required<body>section. The<head>section contains layout and metadata information. The<body>section contains the timing information, and is generally composed of combinations of three main tags—sequential ("<seq>", simple playlists), parallel ("<par>", multi-zone/multi-layer playback) and exclusive ("<excl>", event-triggered interrupts). SMIL refers to media objects byURLs, allowing them to be shared between presentations and stored on different servers forload balancing. The language can also associate different media objects with differentbandwidthrequirements. For playback scheduling, SMIL supportsISO-8601wallclock()date/time specification to define begin/end events for playlists. SMIL files take either a.smior.smilfile extension. However,SAMIfiles and Macintoshself mounting imagesalso use.smi, which creates some ambiguity at first glance. As a result, SMIL files commonly use the.smilfile extension to avoid confusion. SMIL was created at a time when structured data using XML andInternet Explorerwere both very popular. Thus "combining" SMIL with other markup languages was considered one of thebest current practicesof the day. 
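As a concrete sketch of the structure just described, the short Python program below embeds a minimal, hypothetical SMIL presentation as a string and lists its timing containers using only the standard library; the namespace, region names and media URLs are illustrative rather than taken from any real presentation.

# A minimal, hypothetical SMIL document showing the <head>/<body> split and
# the <seq>/<par> timing containers, parsed with Python's standard library.
import xml.etree.ElementTree as ET

SMIL_DOC = """\
<smil xmlns="http://www.w3.org/ns/SMIL">
  <head>
    <layout>
      <root-layout width="640" height="480"/>
      <region id="main" width="640" height="420"/>
      <region id="captions" top="420" width="640" height="60"/>
    </layout>
  </head>
  <body>
    <seq>                                  <!-- play items one after another -->
      <img src="http://example.org/title.png" region="main" dur="3s"/>
      <par>                                <!-- play video and captions together -->
        <video src="http://example.org/clip.mp4" region="main"/>
        <text  src="http://example.org/captions.txt" region="captions"/>
      </par>
    </seq>
  </body>
</smil>
"""

root = ET.fromstring(SMIL_DOC)
ns = {"s": "http://www.w3.org/ns/SMIL"}
for element in root.find("s:body", ns).iter():
    tag = element.tag.split("}")[-1]        # drop the namespace prefix
    if tag in ("seq", "par", "excl"):
        children = [child.tag.split("}")[-1] for child in element]
        print(f"<{tag}> schedules: {children}")

Here the outer <seq> acts as a simple playlist, while the nested <par> plays its two children in parallel, which is exactly the composition pattern the prose above describes.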
SMIL is one of three means by whichSVG animationcan be achieved (the others beingJavaScriptandCSS animations). WhileRSSandAtomareweb syndicationmethods, with the former being more popular as a syndication method forpodcasts, SMIL is potentially useful as a script orplaylistthat can tie sequential pieces of multimedia together and can then be syndicated through RSS or Atom.[12][13]In addition, the combination of multimedia-laden .smil files with RSS or Atom syndication would be useful for accessibility to audio-enabled podcasts by thedeafthrough Timed Text closed captions,[14]and can also turn multimedia into hypermedia that can be hyperlinked to other linkable audio and video multimedia.[15] VoiceXMLcan be combined with SMIL to provide a sequential reading of several pre-provided pages or slides in avoice browser, while combining SMIL withMusicXMLwould allow for the creation of infinitely-recombinable sequences of music sheets. Combining SMIL+VoiceXML or SMIL+MusicXML with RSS or Atom could be useful in the creation of an audible pseudo-podcast with embedded hyperlinks, while combining SMIL+SVG with VoiceXML and/or MusicXML would be useful in the creation of an automatically audio-enabledvector graphicsanimationwith embedded hyperlinks. SMIL is anticipated for use withinText Encoding Initiative(TEI) documents.[16][17] SMIL is being implemented on handheld and mobile devices and has also spawned[18]theMultimedia Messaging Service(MMS) which is a video and picture equivalent ofShort Message Service(SMS). SMIL is also one of the underlying technologies used for "Advanced Content" in the (discontinued)HD DVDformat for adding interactive content (menus etc.). The field ofDigital Signageis embracing SMIL as a means of controlling dynamic advertising in public areas.[19][20] Most commonly usedweb browsershave native support for SMIL, but it has not been implemented in Microsoft browsers. It was to be deprecated in Google Chrome,[21]but it has now been decided to suspend that intent until alternatives are sufficiently developed.[22]Various other software also implements SMIL playback. Media player boxes based on dedicated 1080p decoder chips such as the Sigma Designs 8634 processor are getting SMIL players embedded in them. A SMIL file must be embedded, then opened using a plug-in such as Apple's QuickTime or Microsoft's Windows Media Player, to be viewed by a browser that doesn't support SMIL.
https://en.wikipedia.org/wiki/Synchronized_Multimedia_Integration_Language
HTTP/3is the third major version of theHypertext Transfer Protocolused to exchange information on theWorld Wide Web, complementing the widely deployedHTTP/1.1andHTTP/2. Unlike previous versions which relied on the well-establishedTCP(published in 1974),[2]HTTP/3 usesQUIC(officially introduced in 2021),[3]amultiplexedtransport protocol built onUDP.[4] HTTP/3 uses similar semantics compared to earlier revisions of the protocol, including the samerequest methods,status codes, andmessage fields, but encodes them and maintains session state differently. However, partially due to the protocol's adoption of QUIC, HTTP/3 has lower latency and loads more quickly in real-world usage when compared with previous versions: in some cases over four times as fast as with HTTP/1.1 (which, for many websites, is the only HTTP version deployed).[5][6] As of September 2024, HTTP/3 is supported by more than 95% of major web browsers in use[7]and 34% of the top 10 million websites.[8]It has been supported byChromium(and derived projects includingGoogle Chrome,Microsoft Edge,Samsung Internet, andOpera)[9]since April 2020 and byMozilla Firefoxsince May 2021.[7][10]Safari14 implemented the protocol but it remains disabled by default.[11] HTTP/3 originates from anInternet Draftadopted by the QUIC working group. The original proposal was named "HTTP/2 Semantics Using The QUIC Transport Protocol",[12]and later renamed "Hypertext Transfer Protocol (HTTP) over QUIC".[13] On 28 October 2018 in a mailing list discussion, Mark Nottingham, Chair of theIETFHTTP and QUIC Working Groups, proposed renaming HTTP-over-QUIC to HTTP/3, to "clearly identify it as another binding of HTTP semantics to the wire protocol [...] so people understand its separation from QUIC".[14]Nottingham's proposal was accepted by fellow IETF members a few days later. The HTTP working group was chartered to assist the QUIC working group during the design of HTTP/3, then assume responsibility for maintenance after publication.[15] Support for HTTP/3 was added toChrome(Canary build) in September 2019 and then eventually reached stable builds, but was disabled by a feature flag. It was enabled by default in April 2020.[9]Firefox added support for HTTP/3 in November 2019 through a feature flag[7][16][17]and started enabling it by default in April 2021 in Firefox 88.[7][10]Experimental support for HTTP/3 was added to Safari Technology Preview on April 8, 2020[18]and was included with Safari 14 that ships withiOS 14andmacOS 11,[11][19]but it's still disabled by default as of Safari 16, on both macOS and iOS.[citation needed] On 6 June 2022,IETFpublished HTTP/3 as aProposed StandardinRFC9114.[1] HTTP semantics are consistent across versions: the samerequest methods,status codes, andmessage fieldsare typically applicable to all versions. The differences are in the mapping of these semantics to underlying transports. BothHTTP/1.1andHTTP/2useTCPas their transport. HTTP/3 usesQUIC, atransport layernetwork protocolwhich usesuser spacecongestion controlover theUser Datagram Protocol(UDP). 
The switch to QUIC aims to fix a major problem of HTTP/2 called "head-of-line blocking": because the parallel nature of HTTP/2's multiplexing is not visible to TCP'sloss recovery mechanisms, a lost or reorderedpacketcauses all activetransactionsto experience a stall regardless of whether that transaction was impacted by the lost packet. Because QUIC provides native multiplexing, lost packets only impact the streams where data has been lost. ProposedDNS resource recordsSVCB (service binding) and HTTPS would allow connecting without first receiving the Alt-Svc header via previous HTTP versions, therefore removing the 1 RTT of TCP handshaking.[20][21]Client support for HTTPS resource records has existed since Firefox 92 and iOS 14, with reported Safari 14 support, and Chromium supports it behind a flag.[22][23][24] Several open-sourcelibrariesimplement client or server logic for QUIC and HTTP/3.[28]
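To make the head-of-line-blocking contrast concrete, the toy Python simulation below delivers the same sequence of packets under a TCP-like single ordered byte stream and under a QUIC-like model with independent streams; the packet numbers and stream names are invented for illustration, and retransmission timing is ignored entirely.

# Toy model of head-of-line blocking.  Packet number 2 (stream B) is "lost";
# over one ordered byte stream nothing after the gap reaches the application,
# while per-stream delivery only holds back the stream that owned the loss.
packets = [  # (packet_number, stream_id, data)
    (1, "A", "A1"), (2, "B", "B1"), (3, "A", "A2"), (4, "C", "C1"), (5, "B", "B2"),
]
lost = {2}

def tcp_like_delivery(packets, lost):
    """Single ordered stream: a gap blocks every later packet, whatever its stream."""
    delivered = []
    for number, stream, data in packets:
        if number in lost or any(n in lost for n in range(1, number)):
            break  # the receiver must wait for the retransmission
        delivered.append(data)
    return delivered

def quic_like_delivery(packets, lost):
    """Independent streams: only the stream missing data is stalled."""
    delivered = []
    for number, stream, data in packets:
        if number in lost:
            continue
        earlier_same_stream_lost = any(
            n in lost for n, s, _ in packets if s == stream and n < number)
        if not earlier_same_stream_lost:
            delivered.append(data)
    return delivered

print("TCP-like :", tcp_like_delivery(packets, lost))   # ['A1']
print("QUIC-like:", quic_like_delivery(packets, lost))  # ['A1', 'A2', 'C1']

In the QUIC-like run only stream B waits for its retransmission, which is the behaviour the paragraph above attributes to QUIC's native multiplexing.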
https://en.wikipedia.org/wiki/HTTP/3
PageRank(PR) is analgorithmused byGoogle Searchtorankweb pagesin theirsearch engineresults. It is named after both the term "web page" and co-founderLarry Page. PageRank is a way of measuring the importance of website pages. According to Google: PageRank works by counting the number and quality of links to a page to determine a rough estimate of how important the website is. The underlying assumption is that more important websites are likely to receive more links from other websites.[1] Currently, PageRank is not the only algorithm used by Google to order search results, but it is the first algorithm that was used by the company, and it is the best known.[2][3]As of September 24, 2019, all patents associated with PageRank have expired.[4] PageRank is alink analysisalgorithm and it assigns a numericalweightingto each element of ahyperlinkedsetof documents, such as theWorld Wide Web, with the purpose of "measuring" its relative importance within the set. Thealgorithmmay be applied to any collection of entities withreciprocalquotations and references. The numerical weight that it assigns to any given elementEis referred to as thePageRank of Eand denoted byPR(E).{\displaystyle PR(E).} A PageRank results from a mathematical algorithm based on theWebgraph, created by all World Wide Web pages as nodes andhyperlinksas edges, taking into consideration authority hubs such ascnn.comormayoclinic.org. The rank value indicates an importance of a particular page. A hyperlink to a page counts as a vote of support. The PageRank of a page is definedrecursivelyand depends on the number and PageRank metric of all pages that link to it ("incoming links"). A page that is linked to by many pages with high PageRank receives a high rank itself. Numerous academic papers concerning PageRank have been published since Page and Brin's original paper.[5]In practice, the PageRank concept may be vulnerable to manipulation. Research has been conducted into identifying falsely influenced PageRank rankings. The goal is to find an effective means of ignoring links from documents with falsely influenced PageRank.[6] Other link-based ranking algorithms for Web pages include theHITS algorithminvented byJon Kleinberg(used byTeomaand nowAsk.com), the IBMCLEVER project, theTrustRankalgorithm, theHummingbirdalgorithm,[7]and theSALSA algorithm.[8] Theeigenvalueproblem behind PageRank's algorithm was independently rediscovered and reused in many scoring problems. 
In 1895,Edmund Landausuggested using it for determining the winner of a chess tournament.[9][10]The eigenvalue problem was also suggested in 1976 by Gabriel Pinski and Francis Narin, who worked onscientometricsranking scientific journals,[11]in 1977 byThomas Saatyin his concept ofAnalytic Hierarchy Processwhich weighted alternative choices,[12]and in 1995 by Bradley Love and Steven Sloman as acognitive modelfor concepts, the centrality algorithm.[13][14] A search engine called "RankDex" from IDD Information Services, designed byRobin Liin 1996, developed a strategy for site-scoring and page-ranking.[15]Li referred to his search mechanism as "link analysis," which involved ranking the popularity of a web site based on how many other sites had linked to it.[16]RankDex, the first search engine with page-ranking and site-scoring algorithms, was launched in 1996.[17]Li filed a patent for the technology in RankDex in 1997; it was granted in 1999.[18]He later used it when he foundedBaiduin China in 2000.[19][20]Google founderLarry Pagereferenced Li's work as a citation in some of his U.S. patents for PageRank.[21][17][22] Larry Page andSergey Brindeveloped PageRank atStanford Universityin 1996 as part of a research project about a new kind of search engine. An interview withHéctor García-Molina, Stanford Computer Science professor and advisor to Sergey,[23]provides background into the development of the page-rank algorithm.[24]Sergey Brin had the idea that information on the web could be ordered in a hierarchy by "link popularity": a page ranks higher as there are more links to it.[25]The system was developed with the help of Scott Hassan and Alan Steremberg, both of whom were cited by Page and Brin as being critical to the development of Google.[5]Rajeev MotwaniandTerry Winogradco-authored with Page and Brin the first paper about the project, describing PageRank and the initial prototype of theGoogle search engine, published in 1998.[5]Shortly after, Page and Brin foundedGoogle Inc., the company behind the Google search engine. While just one of many factors that determine the ranking of Google search results, PageRank continues to provide the basis for all of Google's web-search tools.[26] The name "PageRank" plays on the name of developer Larry Page, as well as of the concept of aweb page.[27][28]The word is a trademark of Google, and the PageRank process has beenpatented(U.S. patent 6,285,999). However, the patent is assigned to Stanford University and not to Google. Google has exclusive license rights on the patent from Stanford University. The university received 1.8 million shares of Google in exchange for use of the patent; it sold the shares in 2005 for $336 million.[29][30] PageRank was influenced bycitation analysis, early developed byEugene Garfieldin the 1950s at the University of Pennsylvania, and byHyper Search, developed byMassimo Marchioriat theUniversity of Padua. In the same year PageRank was introduced (1998),Jon Kleinbergpublished his work onHITS. Google's founders cite Garfield, Marchiori, and Kleinberg in their original papers.[5][31] The PageRank algorithm outputs aprobability distributionused to represent the likelihood that a person randomly clicking on links will arrive at any particular page. PageRank can be calculated for collections of documents of any size. It is assumed in several research papers that the distribution is evenly divided among all documents in the collection at the beginning of the computational process. 
The PageRank computations require several passes, called "iterations", through the collection to adjust approximate PageRank values to more closely reflect the theoretical true value. A probability is expressed as a numeric value between 0 and 1. A 0.5 probability is commonly expressed as a "50% chance" of something happening. Hence, a document with a PageRank of 0.5 means there is a 50% chance that a person clicking on a random link will be directed to said document. Assume a small universe of four web pages:A,B,C, andD. Links from a page to itself are ignored. Multiple outbound links from one page to another page are treated as a single link. PageRank is initialized to the same value for all pages. In the original form of PageRank, the sum of PageRank over all pages was the total number of pages on the web at that time, so each page in this example would have an initial value of 1. However, later versions of PageRank, and the remainder of this section, assume aprobability distributionbetween 0 and 1. Hence the initial value for each page in this example is 0.25. The PageRank transferred from a given page to the targets of its outbound links upon the next iteration is divided equally among all outbound links. If the only links in the system were from pagesB,C, andDtoA, each link would transfer 0.25 PageRank toAupon the next iteration, for a total of 0.75. Suppose instead that pageBhad a link to pagesCandA, pageChad a link to pageA, and pageDhad links to all three pages. Thus, upon the first iteration, pageBwould transfer half of its existing value (0.125) to pageAand the other half (0.125) to pageC. PageCwould transfer all of its existing value (0.25) to the only page it links to,A. SinceDhad three outbound links, it would transfer one third of its existing value, or approximately 0.083, toA. At the completion of this iteration, pageAwill have a PageRank of approximately 0.458. In other words, the PageRank conferred by an outbound link is equal to the document's own PageRank score divided by the numberLof its outbound links. In the general case, the PageRank value for any pageucan be expressed as: {\displaystyle PR(u)=\sum _{v\in B_{u}}{\frac {PR(v)}{L(v)}}} i.e. the PageRank value for a pageuis dependent on the PageRank values for each pagevcontained in the setBu(the set containing all pages linking to pageu), divided by the numberL(v) of links from pagev. The PageRank theory holds that an imaginary surfer who is randomly clicking on links will eventually stop clicking. The probability, at any step, that the person will continue following links is a damping factord. The probability that they instead jump to any random page is1 - d. Various studies have tested different damping factors, but it is generally assumed that the damping factor will be set around 0.85.[5] The damping factor is subtracted from 1 (and in some variations of the algorithm, the result is divided by the number of documents (N) in the collection) and this term is then added to the product of the damping factor and the sum of the incoming PageRank scores. That is, {\displaystyle PR(A)={\frac {1-d}{N}}+d\left({\frac {PR(B)}{L(B)}}+{\frac {PR(C)}{L(C)}}+{\frac {PR(D)}{L(D)}}+\cdots \right)} So any page's PageRank is derived in large part from the PageRanks of other pages. The damping factor adjusts the derived value downward. The original paper, however, gave the following formula, which has led to some confusion: {\displaystyle PR(A)=1-d+d\left({\frac {PR(B)}{L(B)}}+{\frac {PR(C)}{L(C)}}+{\frac {PR(D)}{L(D)}}+\cdots \right)} The difference between them is that the PageRank values in the first formula sum to one, while in the second formula each PageRank is multiplied byNand the sum becomesN. 
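A short, purely illustrative Python sketch of the computation just described follows: part 1 reproduces the hand calculation of the rank transferred into page A for the four-page example, and part 2 iterates the damped update until it settles, spreading the rank of link-less "sink" pages (like A here) over all pages as described later in the article. It is not Google's implementation, and the variable names are invented.

# Part 1 reproduces the hand calculation above; part 2 runs the damped update
# PR(p) = (1-d)/N + d * sum(PR(q)/L(q)) over pages q linking to p.
links = {"A": [], "B": ["C", "A"], "C": ["A"], "D": ["A", "B", "C"]}
pages = list(links)
N = len(pages)

# Part 1: one transfer step starting from the uniform value 0.25.
pr = {p: 1.0 / N for p in pages}
to_A = sum(pr[q] / len(links[q]) for q in pages if "A" in links[q])
print(round(to_A, 3))  # 0.458, as in the walkthrough

# Part 2: damped PageRank.  Pages without outbound links ("sinks", like A)
# are treated as linking to every page, following the convention below.
def pagerank(links, d=0.85, iterations=100):
    pr = {p: 1.0 / N for p in pages}
    for _ in range(iterations):
        sink_mass = sum(pr[q] for q in pages if not links[q])
        pr = {
            p: (1 - d) / N
               + d * (sink_mass / N
                      + sum(pr[q] / len(links[q]) for q in pages if p in links[q]))
            for p in pages
        }
    return pr

print({p: round(v, 3) for p, v in pagerank(links).items()})

With the conventional damping factor of 0.85 the values converge after a few dozen iterations, in line with the convergence behaviour reported later in this section.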
A statement in Page and Brin's paper that "the sum of all PageRanks is one"[5]and claims by other Google employees[32]support the first variant of the formula above. Page and Brin confused the two formulas in their most popular paper "The Anatomy of a Large-Scale Hypertextual Web Search Engine", where they mistakenly claimed that the latter formula formed a probability distribution over web pages.[5] Google recalculates PageRank scores each time it crawls the Web and rebuilds its index. As Google increases the number of documents in its collection, the initial approximation of PageRank decreases for all documents. The formula uses a model of arandom surferwho reaches their target site after several clicks, then switches to a random page. The PageRank value of a page reflects the chance that the random surfer will land on that page by clicking on a link. It can be understood as aMarkov chainin which the states are pages, and the transitions are the links between pages – all of which are equally probable. If a page has no links to other pages, it becomes a sink and therefore terminates the random surfing process. If the random surfer arrives at a sink page, it picks anotherURLat random and continues surfing again. When calculating PageRank, pages with no outbound links are assumed to link out to all other pages in the collection. Their PageRank scores are therefore divided evenly among all other pages. In other words, to be fair with pages that are not sinks, these random transitions are added to all nodes in the Web. This residual probability,d, is usually set to 0.85, estimated from the frequency that an average surfer uses his or her browser's bookmark feature. So, the equation is as follows: {\displaystyle PR(p_{i})={\frac {1-d}{N}}+d\sum _{p_{j}\in M(p_{i})}{\frac {PR(p_{j})}{L(p_{j})}}} wherep1,p2,...,pN{\displaystyle p_{1},p_{2},...,p_{N}}are the pages under consideration,M(pi){\displaystyle M(p_{i})}is the set of pages that link topi{\displaystyle p_{i}},L(pj){\displaystyle L(p_{j})}is the number of outbound links on pagepj{\displaystyle p_{j}}, andN{\displaystyle N}is the total number of pages. The PageRank values are the entries of the dominant righteigenvectorof the modifiedadjacency matrixrescaled so that each column adds up to one. This makes PageRank a particularly elegant metric: the eigenvector is {\displaystyle \mathbf {R} =(PR(p_{1}),PR(p_{2}),\ldots ,PR(p_{N}))^{T}} whereRis the solution of the equation {\displaystyle \mathbf {R} ={\frac {1-d}{N}}\mathbf {1} +d\,\ell \,\mathbf {R} } in which1{\displaystyle \mathbf {1} }is the column vector of ones andℓ{\displaystyle \ell }is the matrix whose entries are given by the adjacency functionℓ(pi,pj){\displaystyle \ell (p_{i},p_{j})}, the ratio between the number of links outbound from page j to page i and the total number of outbound links of page j. The adjacency function is 0 if pagepj{\displaystyle p_{j}}does not link topi{\displaystyle p_{i}}, and normalized such that, for eachj, {\displaystyle \sum _{i=1}^{N}\ell (p_{i},p_{j})=1,} i.e. the elements of each column sum up to 1, so the matrix is astochastic matrix(for more details see thecomputationsection below). Thus this is a variant of theeigenvector centralitymeasure used commonly innetwork analysis. Because of the largeeigengapof the modified adjacency matrix above,[33]the values of the PageRank eigenvector can be approximated to within a high degree of accuracy within only a few iterations. Google's founders, in their original paper,[31]reported that the PageRank algorithm for a network consisting of 322 million links (in-edges and out-edges) converges to within a tolerable limit in 52 iterations. The convergence in a network of half the above size took approximately 45 iterations. 
Through this data, they concluded the algorithm can be scaled very well and that the scaling factor for extremely large networks would be roughly linear inlog⁡n{\displaystyle \log n}, where n is the size of the network. As a result ofMarkov theory, it can be shown that the PageRank of a page is the probability of arriving at that page after a large number of clicks. This happens to equalt−1{\displaystyle t^{-1}}wheret{\displaystyle t}is theexpectationof the number of clicks (or random jumps) required to get from the page back to itself. One main disadvantage of PageRank is that it favors older pages. A new page, even a very good one, will not have many links unless it is part of an existing site (a site being a densely connected set of pages, such asWikipedia). Several strategies have been proposed to accelerate the computation of PageRank.[34] Various strategies to manipulate PageRank have been employed in concerted efforts to improve search results rankings and monetize advertising links. These strategies have severely impacted the reliability of the PageRank concept,[citation needed]which purports to determine which documents are actually highly valued by the Web community. Since December 2007, when it startedactivelypenalizing sites selling paid text links, Google has combattedlink farmsand other schemes designed to artificially inflate PageRank. How Google identifies link farms and other PageRank manipulation tools is among Google'strade secrets. PageRank can be computed either iteratively or algebraically. The iterative method can be viewed as thepower iterationmethod[35][36]or the power method. The basic mathematical operations performed are identical. Att=0{\displaystyle t=0}, an initial probability distribution is assumed, usually {\displaystyle PR(p_{i};0)={\frac {1}{N}}} where N is the total number of pages, andPR(pi;0){\displaystyle PR(p_{i};0)}is the PageRank of page i at time 0. At each time step, the computation, as detailed above, yields {\displaystyle PR(p_{i};t+1)={\frac {1-d}{N}}+d\sum _{p_{j}\in M(p_{i})}{\frac {PR(p_{j};t)}{L(p_{j})}}} (1) where d is the damping factor, or in matrix notation {\displaystyle \mathbf {R} (t+1)=d{\mathcal {M}}\mathbf {R} (t)+{\frac {1-d}{N}}\mathbf {1} } (2) whereRi(t)=PR(pi;t){\displaystyle \mathbf {R} _{i}(t)=PR(p_{i};t)}and1{\displaystyle \mathbf {1} }is the column vector of lengthN{\displaystyle N}containing only ones. The matrixM{\displaystyle {\mathcal {M}}}is defined as {\displaystyle {\mathcal {M}}_{ij}={\begin{cases}1/L(p_{j}),&{\text{if page }}j{\text{ links to page }}i\\0,&{\text{otherwise}}\end{cases}}} i.e., {\displaystyle {\mathcal {M}}=(K^{-1}A)^{T}} whereA{\displaystyle A}denotes theadjacency matrixof the graph andK{\displaystyle K}is the diagonal matrix with the outdegrees in the diagonal. The probability calculation is made for each page at a time point, then repeated for the next time point. The computation ends when for some smallϵ{\displaystyle \epsilon } {\displaystyle |\mathbf {R} (t+1)-\mathbf {R} (t)|<\epsilon ,} i.e., when convergence is assumed. If the matrixM{\displaystyle {\mathcal {M}}}is a transition probability, i.e., column-stochastic andR{\displaystyle \mathbf {R} }is a probability distribution (i.e.,|R|=1{\displaystyle |\mathbf {R} |=1},ER=1{\displaystyle \mathbf {E} \mathbf {R} =\mathbf {1} }whereE{\displaystyle \mathbf {E} }is the matrix of all ones), then equation (2) is equivalent to {\displaystyle \mathbf {R} =\left(d{\mathcal {M}}+{\frac {1-d}{N}}\mathbf {E} \right)\mathbf {R} =:{\widehat {\mathcal {M}}}\mathbf {R} } (3) Hence PageRankR{\displaystyle \mathbf {R} }is the principal eigenvector ofM^{\displaystyle {\widehat {\mathcal {M}}}}. A fast and easy way to compute this is using thepower method: starting with an arbitrary vectorx(0){\displaystyle x(0)}, the operatorM^{\displaystyle {\widehat {\mathcal {M}}}}is applied in succession, i.e., {\displaystyle x(t+1)={\widehat {\mathcal {M}}}x(t),} until {\displaystyle |x(t+1)-x(t)|<\epsilon .} Note that in equation (3) the second summand of the matrix in the parenthesis can be interpreted as {\displaystyle {\frac {1-d}{N}}\mathbf {E} =(1-d)\mathbf {P} \mathbf {1} ^{t},} whereP{\displaystyle \mathbf {P} }is an initial probability distribution. 
In the current case{\displaystyle \mathbf {P} ={\frac {1}{N}}\mathbf {1} }. Finally, ifM{\displaystyle {\mathcal {M}}}has columns with only zero values, they should be replaced with the initial probability vectorP{\displaystyle \mathbf {P} }. In other words,{\displaystyle {\mathcal {M}}':={\mathcal {M}}+{\mathcal {D}}}, where the matrixD{\displaystyle {\mathcal {D}}}is defined as{\displaystyle {\mathcal {D}}:=\mathbf {P} \mathbf {d} ^{t}}, with{\displaystyle \mathbf {d} _{i}=1}if pageihas no outbound links and{\displaystyle \mathbf {d} _{i}=0}otherwise. In this case, the above two computations usingM{\displaystyle {\mathcal {M}}}only give the same PageRank if their results are normalized. The PageRank of an undirectedgraphG{\displaystyle G}is statistically close to thedegree distributionof the graphG{\displaystyle G},[37]but they are generally not identical: IfR{\displaystyle R}is the PageRank vector defined above, andD{\displaystyle D}is the degree distribution vector{\displaystyle D={\frac {1}{2|E|}}(\deg(p_{1}),\ldots ,\deg(p_{N}))^{t}}wheredeg⁡(pi){\displaystyle \deg(p_{i})}denotes the degree of vertexpi{\displaystyle p_{i}}, andE{\displaystyle E}is the edge-set of the graph, then, withY=1N1{\displaystyle Y={1 \over N}\mathbf {1} },[38]shows that: 1−d1+d‖Y−D‖1≤‖R−D‖1≤‖Y−D‖1,{\displaystyle {1-d \over 1+d}\|Y-D\|_{1}\leq \|R-D\|_{1}\leq \|Y-D\|_{1},} that is, the PageRank of an undirected graph equals the degree distribution vector if and only if the graph is regular, i.e., every vertex has the same degree. A generalization of PageRank for the case of ranking two interacting groups of objects was described by Daugulis.[39]In applications it may be necessary to model systems having objects of two kinds where a weighted relation is defined on object pairs. This leads to consideringbipartite graphs. For such graphs two related positive or nonnegative irreducible matrices corresponding to vertex partition sets can be defined. One can compute rankings of objects in both groups as eigenvectors corresponding to the maximal positive eigenvalues of these matrices. Normed eigenvectors exist and are unique by the Perron or Perron–Frobenius theorem. Example: consumers and products. The relation weight is the product consumption rate. Sarma et al. describe tworandom walk-baseddistributed algorithmsfor computing PageRank of nodes in a network.[40]One algorithm takesO(log⁡n/ϵ){\displaystyle O(\log n/\epsilon )}rounds with high probability on any graph (directed or undirected), where n is the network size andϵ{\displaystyle \epsilon }is the reset probability (1−ϵ{\displaystyle 1-\epsilon }, which is called the damping factor) used in the PageRank computation. They also present a faster algorithm that takesO(log⁡n/ϵ){\displaystyle O({\sqrt {\log n}}/\epsilon )}rounds in undirected graphs. In both algorithms, each node processes and sends a number of bits per round that are polylogarithmic in n, the network size. TheGoogle Toolbarlong had a PageRank feature which displayed a visited page's PageRank as a whole number between 0 (least popular) and 10 (most popular). Google had not disclosed the specific method for determining a Toolbar PageRank value, which was to be considered only a rough indication of the value of a website. The "Toolbar Pagerank" was available for verified site maintainers through the Google Webmaster Tools interface. However, on October 15, 2009, a Google employee confirmed that the company had removed PageRank from itsWebmaster Toolssection, saying that "We've been telling people for a long time that they shouldn't focus on PageRank so much. Many site owners seem to think it's the most importantmetricfor them to track, which is simply not true."[41] The "Toolbar Pagerank" was updated very infrequently. It was last updated in November 2013. 
In October 2014 Matt Cutts announced that another visible pagerank update would not be coming.[42]In March 2016 Google announced it would no longer support this feature, and the underlying API would soon cease to operate.[43]On April 15, 2016, Google turned off display of PageRank Data in Google Toolbar,[44]though the PageRank continued to be used internally to rank content in search results.[45] Thesearch engine results page(SERP) is the actual result returned by a search engine in response to a keyword query. The SERP consists of a list of links to web pages with associated text snippets, paid ads, featured snippets, and Q&A. The SERP rank of a web page refers to the placement of the corresponding link on the SERP, where higher placement means higher SERP rank. The SERP rank of a web page is a function not only of its PageRank, but of a relatively large and continuously adjusted set of factors (over 200).[46][unreliable source?]Search engine optimization(SEO) is aimed at influencing the SERP rank for a website or a set of web pages. Positioning of a webpage on Google SERPs for a keyword depends on relevance and reputation, also known as authority and popularity. PageRank is Google's indication of its assessment of the reputation of a webpage: It is non-keyword specific. Google uses a combination of webpage and website authority to determine the overall authority of a webpage competing for a keyword.[47]The PageRank of the HomePage of a website is the best indication Google offers for website authority.[48] After the introduction ofGoogle Placesinto the mainstream organic SERP, numerous other factors in addition to PageRank affect ranking a business in Local Business Results.[49]When Google elaborated on the reasons for PageRank deprecation at Q&A #March 2016, they announced Links and Content as the Top Ranking Factors. RankBrain had earlier in October 2015 been announced as the #3 Ranking Factor, so the Top 3 Factors have been confirmed officially by Google.[50] TheGoogle DirectoryPageRank was an 8-unit measurement. Unlike the Google Toolbar, which shows a numeric PageRank value upon mouseover of the green bar, the Google Directory only displayed the bar, never the numeric values. Google Directory was closed on July 20, 2011.[51] It was known that the PageRank shown in the Toolbar could easily bespoofed. Redirection from one page to another, either via aHTTP 302response or a "Refresh"meta tag, caused the source page to acquire the PageRank of the destination page. Hence, a new page with PR 0 and no incoming links could have acquired PR 10 by redirecting to the Google home page. Spoofing can usually be detected by performing a Google search for a source URL; if the URL of an entirely different site is displayed in the results, the latter URL may represent the destination of a redirection. Forsearch engine optimizationpurposes, some companies offer to sell high PageRank links to webmasters.[52]As links from higher-PR pages are believed to be more valuable, they tend to be more expensive. It can be an effective and viable marketing strategy to buy link advertisements on content pages of quality and relevant sites to drive traffic and increase a webmaster's link popularity. However, Google has publicly warned webmasters that if they are or were discovered to be selling links for the purpose of conferring PageRank and reputation, their links will be devalued (ignored in the calculation of other pages' PageRanks). 
The practice of buying and selling links[53]is intensely debated across the Webmaster community. Google advised webmasters to use thenofollowHTML attributevalue on paid links. According toMatt Cutts, Google is concerned about webmasters who try togame the system, and thereby reduce the quality and relevance of Google search results.[52] In 2019, Google announced two additional link attributes providing hints about which links to consider or exclude within Search:rel="ugc"as a tag for user-generated content, such as comments; andrel="sponsored"as a tag for advertisements or other types of sponsored content. Multiplerelvalues are also allowed, for example,rel="ugc sponsored"can be used to hint that the link came from user-generated content and is sponsored.[54] Even though PageRank has become less important for SEO purposes, the existence of back-links from more popular websites continues to push a webpage higher up in search rankings.[55] A variant of PageRank models a more intelligent surfer that probabilistically hops from page to page depending on the content of the pages and the query terms the surfer is looking for. This model is based on a query-dependent PageRank score of a page which, as the name suggests, is also a function of the query. When given a multiple-term query,Q={q1,q2,⋯}{\displaystyle Q=\{q1,q2,\cdots \}}, the surfer selects aq{\displaystyle q}according to some probability distribution,P(q){\displaystyle P(q)}, and uses that term to guide its behavior for a large number of steps. It then selects another term according to the distribution to determine its behavior, and so on. The resulting distribution over visited web pages is QD-PageRank.[56] The mathematics of PageRank are entirely general and apply to any graph or network in any domain. Thus, PageRank is now regularly used in bibliometrics, social and information network analysis, and for link prediction and recommendation. It is used for systems analysis of road networks, and in biology, chemistry, neuroscience, and physics.[57] PageRank has been used to quantify the scientific impact of researchers. The underlying citation and collaboration networks are used in conjunction with the PageRank algorithm in order to come up with a ranking system for individual publications which propagates to individual authors. The new index, known as pagerank-index (Pi), is demonstrated to be fairer than the h-index in light of the many drawbacks exhibited by the h-index.[58] For the analysis of protein networks in biology PageRank is also a useful tool.[59][60] In any ecosystem, a modified version of PageRank may be used to determine species that are essential to the continuing health of the environment.[61] A similar newer use of PageRank is to rank academic doctoral programs based on their records of placing their graduates in faculty positions. In PageRank terms, academic departments link to each other by hiring their faculty from each other (and from themselves).[62] A version of PageRank has recently been proposed as a replacement for the traditionalInstitute for Scientific Information(ISI)impact factor,[63]and implemented atEigenfactoras well as atSCImago. Instead of merely counting total citations to a journal, the "importance" of each citation is determined in a PageRank fashion. 
Inneuroscience, the PageRank of aneuronin a neural network has been found to correlate with its relative firing rate.[64] Personalized PageRank is used byTwitterto present users with other accounts they may wish to follow.[65] Swiftype's site search product builds a "PageRank that's specific to individual websites" by looking at each website's signals of importance and prioritizing content based on factors such as number of links from the home page.[66] AWeb crawlermay use PageRank as one of a number of importance metrics it uses to determine which URL to visit during a crawl of the web. One of the early working papers[67]that were used in the creation of Google isEfficient crawling through URL ordering,[68]which discusses the use of a number of different importance metrics to determine how deeply, and how much of a site Google will crawl. PageRank is presented as one of a number of these importance metrics, though there are others listed such as the number of inbound and outbound links for a URL, and the distance from the root directory on a site to the URL. The PageRank may also be used as a methodology to measure the apparent impact of a community like theBlogosphereon the overall Web itself. This approach uses therefore the PageRank to measure the distribution of attention in reflection of theScale-free networkparadigm.[citation needed] In 2005, in a pilot study in Pakistan,Structural Deep Democracy, SD2[69][70]was used for leadership selection in a sustainable agriculture group called Contact Youth. SD2 usesPageRankfor the processing of the transitive proxy votes, with the additional constraints of mandating at least two initial proxies per voter, and all voters are proxy candidates. More complex variants can be built on top of SD2, such as adding specialist proxies and direct votes for specific issues, but SD2 as the underlying umbrella system, mandates that generalist proxies should always be used. In sport the PageRank algorithm has been used to rank the performance of: teams in the National Football League (NFL) in the USA;[71]individual soccer players;[72]and athletes in the Diamond League.[73] PageRank has been used to rank spaces or streets to predict how many people (pedestrians or vehicles) come to the individual spaces or streets.[74][75]Inlexical semanticsit has been used to performWord Sense Disambiguation,[76]Semantic similarity,[77]and also to automatically rankWordNetsynsetsaccording to how strongly they possess a given semantic property, such as positivity or negativity.[78] How a traffic system changes its operational mode can be described by transitions between quasi-stationary states in correlation structures of traffic flow. PageRank has been used to identify and explore the dominant states among these quasi-stationary states in traffic systems.[79] In early 2005, Google implemented a new value, "nofollow",[80]for therelattribute of HTML link and anchor elements, so that website developers andbloggerscan make links that Google will not consider for the purposes of PageRank—they are links that no longer constitute a "vote" in the PageRank system. The nofollow relationship was added in an attempt to help combatspamdexing. As an example, people could previously create many message-board posts with links to their website to artificially inflate their PageRank. 
With the nofollow value, message-board administrators can modify their code to automatically insert "rel='nofollow'" to all hyperlinks in posts, thus preventing PageRank from being affected by those particular posts. This method of avoidance, however, also has various drawbacks, such as reducing the link value of legitimate comments. (See:Spam in blogs#nofollow) In an effort to manually control the flow of PageRank among pages within a website, many webmasters practice what is known as PageRank Sculpting[81]—which is the act of strategically placing the nofollow attribute on certain internal links of a website in order to funnel PageRank towards those pages the webmaster deemed most important. This tactic had been used since the inception of the nofollow attribute, but may no longer be effective since Google announced that blocking PageRank transfer with nofollow does not redirect that PageRank to other links.[82]
https://en.wikipedia.org/wiki/PageRank#Iterative_computation
Asecure cryptoprocessoris a dedicatedcomputer-on-a-chipormicroprocessorfor carrying outcryptographicoperations, embedded in a packaging with multiplephysical securitymeasures, which give it a degree oftamper resistance. Unlike cryptographic processors that output decrypted data onto a bus in a secure environment, a secure cryptoprocessor does not output decrypted data or decrypted program instructions in an environment where security cannot always be maintained. The purpose of a secure cryptoprocessor is to act as the keystone of a security subsystem, eliminating the need to protect the rest of the subsystem with physical security measures.[1] Ahardware security module(HSM) contains one or more secure cryptoprocessorchips.[2][3][4]These devices are high grade secure cryptoprocessors used with enterprise servers. A hardware security module can have multiple levels of physical security with a single-chip cryptoprocessor as its most secure component. The cryptoprocessor does not reveal keys or executable instructions on a bus, except in encrypted form, and zeros keys by attempts at probing or scanning. The crypto chip(s) may also bepottedin the hardware security module with other processors and memory chips that store and process encrypted data. Any attempt to remove the potting will cause the keys in the crypto chip to be zeroed. A hardware security module may also be part of a computer (for example anATM) that operates inside a locked safe to deter theft, substitution, and tampering. Modernsmartcardsare probably the most widely deployed form of secure cryptoprocessor, although more complex and versatile secure cryptoprocessors are widely deployed in systems such asAutomated teller machines, TVset-top boxes, military applications, and high-security portable communication equipment.[citation needed]Some secure cryptoprocessors can even run general-purpose operating systems such asLinuxinside their security boundary. Cryptoprocessors input program instructions in encrypted form, decrypt the instructions to plain instructions which are then executed within the same cryptoprocessor chip where the decrypted instructions are inaccessibly stored. By never revealing the decrypted program instructions, the cryptoprocessor prevents tampering of programs by technicians who may have legitimate access to the sub-system data bus. This is known asbus encryption. Data processed by a cryptoprocessor is also frequently encrypted. TheTrusted Platform Module(TPM) is an implementation of a secure cryptoprocessor that brings the notion oftrusted computingto ordinaryPCsby enabling asecure environment.[citation needed]Present TPM implementations focus on providing a tamper-proof boot environment, and persistent and volatile storage encryption. Security chips for embedded systems are also available that provide the same level of physical protection for keys and other secret material as a smartcard processor or TPM but in a smaller, less complex and less expensive package.[citation needed]They are often referred to as cryptographicauthenticationdevices and are used to authenticate peripherals, accessories and/or consumables. Like TPMs, they are usually turnkey integrated circuits intended to be embedded in a system, usually soldered to a PC board. Security measures used in secure cryptoprocessors: Secure cryptoprocessors, while useful, are not invulnerable to attack, particularly for well-equipped and determined opponents (e.g. 
a government intelligence agency) who are willing to expend enough resources on the project.[5][6] One attack on a secure cryptoprocessor targeted theIBM 4758.[7]A team at the University of Cambridge reported the successful extraction of secret information from an IBM 4758, using a combination of mathematics, and special-purposecodebreakinghardware. However, this attack was not practical in real-world systems because it required the attacker to have full access to all API functions of the device. Normal and recommended practices use the integral access control system to split authority so that no one person could mount the attack.[citation needed] While the vulnerability they exploited was a flaw in the software loaded on the 4758, and not the architecture of the 4758 itself, their attack serves as a reminder that a security system is only as secure as its weakest link: the strong link of the 4758 hardware was rendered useless by flaws in the design and specification of the software loaded on it. Smartcards are significantly more vulnerable, as they are more open to physical attack. Additionally, hardware backdoors can undermine security in smartcards and other cryptoprocessors unless investment is made in anti-backdoor design methods.[8] In the case offull disk encryptionapplications, especially when implemented without abootPIN, a cryptoprocessor would not be secure against acold boot attack[9]ifdata remanencecould be exploited to dumpmemorycontents after theoperating systemhas retrieved the cryptographickeysfrom itsTPM. However, if all of the sensitive data is stored only in cryptoprocessor memory and not in external storage, and the cryptoprocessor is designed to be unable to reveal keys or decrypted or unencrypted data on chipbonding padsorsolder bumps, then such protected data would be accessible only by probing the cryptoprocessor chip after removing any packaging and metal shielding layers from the cryptoprocessor chip. This would require both physical possession of the device as well as skills and equipment beyond that of most technical personnel. Other attack methods involve carefully analyzing the timing of various operations that might vary depending on the secret value or mapping the current consumption versus time to identify differences in the way that '0' bits are handled internally vs. '1' bits. Or the attacker may apply temperature extremes, excessively high or low clock frequencies or supply voltage that exceeds the specifications in order to induce a fault. The internal design of the cryptoprocessor can be tailored to prevent these attacks. Some secure cryptoprocessors containdual processorcores and generate inaccessible encryption keys when needed so that even if the circuitry is reverse engineered, it will not reveal any keys that are necessary to securely decrypt software booted from encrypted flash memory or communicated between cores.[10] The first single-chip cryptoprocessor design was forcopy protectionof personal computer software (see US Patent 4,168,396, Sept 18, 1979) and was inspired by Bill Gates'sOpen Letter to Hobbyists. Thehardware security module(HSM), a type of secure cryptoprocessor,[3][4]was invented byEgyptian-AmericanengineerMohamed M. 
Atalla,[11] in 1972.[12] He invented a high security module dubbed the "Atalla Box" which encrypted PIN and ATM messages, and protected offline devices with an un-guessable PIN-generating key.[13] In 1972, he filed a patent for the device.[14] He founded Atalla Corporation (now Utimaco Atalla) that year,[12] and commercialized the "Atalla Box" the following year,[13] officially as the Identikey system.[15] It was a card reader and customer identification system, consisting of a card reader console, two customer PIN pads, an intelligent controller and a built-in electronic interface package.[15] It allowed the customer to type in a secret code, which is transformed by the device, using a microprocessor, into another code for the teller.[16] During a transaction, the customer's account number was read by the card reader.[15] It was a success, and led to the wide use of high security modules.[13] Fearful that Atalla would dominate the market, banks and credit card companies began working on an international standard in the 1970s.[13] The IBM 3624, launched in the late 1970s, adopted a similar PIN verification process to the earlier Atalla system.[17] Atalla was an early competitor to IBM in the banking security market.[14][18] At the National Association of Mutual Savings Banks (NAMSB) conference in January 1976, Atalla unveiled an upgrade to its Identikey system, called the Interchange Identikey. It added the capabilities of processing online transactions and dealing with network security. Designed with the focus of taking bank transactions online, the Identikey system was extended to shared-facility operations. It was consistent and compatible with various switching networks, and was capable of resetting itself electronically to any one of 64,000 irreversible nonlinear algorithms as directed by card data information. The Interchange Identikey device was released in March 1976.[16] Later in 1979, Atalla introduced the first network security processor (NSP).[19] Atalla's HSM products protect 250 million card transactions every day as of 2013,[12] and secure the majority of the world's ATM transactions as of 2014.[11]
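The timing and power-analysis attacks described above exploit data-dependent behavior. Below is a minimal Python sketch of the standard software form of the countermeasure, constant-time comparison; the function names are illustrative, and real cryptoprocessors implement such defenses in hardware and firmware rather than in application code.

```python
import hmac

def naive_equal(a: bytes, b: bytes) -> bool:
    # Returns as soon as the first mismatching byte is found, so the
    # running time depends on the length of the matching prefix --
    # exactly the data-dependent timing an attacker can measure.
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:
            return False
    return True

def constant_time_equal(a: bytes, b: bytes) -> bool:
    # hmac.compare_digest runs in time independent of where the inputs
    # differ, removing the timing signal that naive_equal leaks.
    return hmac.compare_digest(a, b)
```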
https://en.wikipedia.org/wiki/Cryptoprocessor
Irregular warfare (IW) is defined in United States joint doctrine as "a violent struggle among state and non-state actors for legitimacy and influence over the relevant populations" and in U.S. law as "Department of Defense activities not involving armed conflict that support predetermined United States policy and military objectives conducted by, with, and through regular forces, irregular forces, groups, and individuals."[1][2] In practice, control of institutions and infrastructure is also important. Concepts associated with irregular warfare are older than the term itself.[3] Irregular warfare favors indirect warfare and asymmetric warfare approaches, though it may employ the full range of military and other capabilities in order to erode the adversary's power, influence, and will. It is inherently a protracted struggle that will test the resolve of a state and its strategic partners.[4][5][6][7][8] The term "irregular warfare" in joint doctrine was settled upon in distinction from "traditional warfare" and "unconventional warfare", and to differentiate it as such; it is unrelated to the distinction between "regular" and "irregular forces".[9] One of the earliest known uses of the term irregular warfare is Charles Edward Callwell's classic 1896 publication for the United Kingdom War Office, Small Wars: Their Principles and Practice, where he noted in defining 'small wars': "Small wars include the partisan warfare which usually arises when trained soldiers are employed in the quelling of sedition and of insurrections in civilised countries; they include campaigns of conquest when a Great Power adds the territory of barbarous races to its possessions; and they include punitive expeditions against tribes bordering upon distant colonies.... Whenever a regular army finds itself engaged upon hostilities against irregular forces, or forces which in their armament, their organization, and their discipline are palpably inferior to it, the conditions of the campaign become distinct from the conditions of modern regular warfare, and it is with hostilities of this nature that this volume proposes to deal. Upon the organization of armies for irregular warfare valuable information is to be found in many instructive military works, official and non-official."[10] A similar usage appears in the 1986 English edition of Modern Irregular Warfare in Defense Policy and as a Military Phenomenon by former Nazi officer Friedrich August Freiherr von der Heydte. The original 1972 German edition of the book is titled Der Moderne Kleinkrieg als Wehrpolitisches und Militärisches Phänomen. The German word "Kleinkrieg" literally translates as "small war".[11] The word "irregular", used in the title of the English translation, seems to be a reference to forces that are not "regular armed forces" as per the Third Geneva Convention. Another early use of the term is in a 1996 Central Intelligence Agency (CIA) document by Jeffrey B. White.[12] Major military doctrine developments related to IW were made between 2004 and 2007[13] as a result of the September 11 attacks on the United States.[14][15][unreliable source?] A key proponent of IW within the US Department of Defense (DoD) is Michael G.
Vickers, a former paramilitary officer in the CIA.[16] The CIA's Special Activities Center (SAC) is the premier American paramilitary clandestine unit for creating and for combating irregular warfare units.[17][18][19] For example, SAC paramilitary officers created and led successful irregular units from the Hmong tribe during the war in Laos in the 1960s,[20] from the Northern Alliance against the Taliban during the war in Afghanistan in 2001,[21] and from the Kurdish Peshmerga against Ansar al-Islam and the forces of Saddam Hussein during the war in Iraq in 2003.[22][23][24] Nearly all modern wars include at least some element of irregular warfare. Since the time of Napoleon, approximately 80% of conflict has been irregular in nature. A number of conflicts may nonetheless be considered to exemplify irregular warfare.[3][12] As a result of DoD Directive 3000.07,[6] United States armed forces are studying[when?] irregular warfare concepts using modeling and simulation.[29][30][31] There have been several military wargames and military exercises associated with IW.
https://en.wikipedia.org/wiki/Irregular_warfare
Gpg4win is an email and file encryption package for most versions of Microsoft Windows and Microsoft Outlook, which utilises the GnuPG framework for symmetric and public-key cryptography, such as data encryption, digital signatures, hash calculations, etc. The original creation of Gpg4win was initiated and funded by Germany's Federal Office for Information Security (BSI) in 2005,[3][4] resulting in the release of Gpg4win 1.0.0 on 6 April 2006;[5] however, Gpg4win and all included tools are free and open source software, and it is typically the non-proprietary option for privacy recommended[6][7] to Windows users. As Gpg4win v1 was a much-overhauled derivative of GnuPP,[8] both used GnuPG v1 for cryptographic operations and thus only supported OpenPGP as a cryptography standard. Hence in 2007 the development of a fundamentally enhanced version was started, also with support from the German BSI; this effort culminated in the release of Gpg4win 2.0.0 on 7 August 2009 after a protracted beta testing phase.[9] It was based on GnuPG 2.0 and included S/MIME support, Kleopatra as a new certificate manager, the Explorer plug-in GpgEX for cryptography operations on files, basic support for smart cards, a full set of German dialogue texts in addition to the English ones, new manuals in English and German, plus many other enhancements.[10] In contrast to Gpg4win v2, which focused on new features and software components, the development of Gpg4win v3 focused on usability, plus consolidation of code and features.[11] This resulted in the release of Gpg4win 3.0.0 on 19 September 2017 with proper support for Elliptic Curve Cryptography (ECC) by utilising GnuPG 2.2 (instead of 2.0), broadened, stabilised and enhanced smart card support, a fundamentally overhauled Outlook plug-in GpgOL for Outlook 2010 and newer, support for 64-bit versions of Outlook 2010 and newer, dialogues in all languages which KDE supports, etc.[12] It is also distributed as GnuPG VS-Desktop with commercial support and approval for handling NATO RESTRICTED, RESTREINT UE/EU RESTRICTED and German VS-NfD documents, which in turn has become the major source of revenue for maintaining and further developing the GnuPG framework and Gpg4win.[13] Gpg4win 4.0.0, released on 21 December 2021,[14] switched to using GnuPG 2.3 (from 2.2) and continued to refine and enhance the feature set of Gpg4win v3.[15]
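As an illustration of the OpenPGP operations Gpg4win exposes, here is a hedged sketch that drives the gpg command-line tool (installed as part of Gpg4win) from Python. The file names and recipient address are hypothetical; it assumes gpg is on the PATH and that the recipient's public key is already in the local keyring.

```python
import subprocess

# Public-key encryption of a file for one recipient, producing
# ASCII-armored output (hypothetical file names):
subprocess.run(
    ["gpg", "--armor", "--output", "report.txt.asc",
     "--encrypt", "--recipient", "alice@example.org", "report.txt"],
    check=True,
)

# Symmetric (passphrase-based) encryption, the other mode mentioned above;
# gpg prompts for the passphrase interactively:
subprocess.run(
    ["gpg", "--symmetric", "--output", "notes.txt.gpg", "notes.txt"],
    check=True,
)
```

In practice, Windows users would typically perform the same operations through Kleopatra or the GpgEX Explorer plug-in rather than scripting gpg directly.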
https://en.wikipedia.org/wiki/Gpg4win
The following tables compare general and technical information for a number of file systems. Note that in addition to the tables below, block capabilities can be implemented below the file system layer in Linux (LVM, integritysetup, cryptsetup) or Windows (Volume Shadow Copy Service, SECURITY), etc. "Online" and "offline" are synonymous with "mounted" and "not mounted". While storage devices usually have their size expressed in powers of 10 (for instance a 1 TB solid-state drive will contain at least 1,000,000,000,000 (10^12, 1000^4) bytes), filesystem limits are invariably powers of 2, so they are usually expressed with IEC prefixes. For instance, a 1 TiB limit means 2^40 = 1024^4 bytes. Approximations (rounding down) using powers of 10 are also given to clarify. [The comparison tables themselves are not reproduced here; surviving fragments include "Experimental port available to 2.6.32 and later"[75][76] and, for allowed filename characters, "In POSIX namespace: any UTF-16 code unit (case-sensitive) except / as well as NUL".[115][117][118]]
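To make the decimal-versus-binary distinction above concrete, a short worked example in Python (the values are the ones stated in the text):

```python
TB = 10 ** 12   # decimal terabyte, as used by storage-device vendors
TiB = 2 ** 40   # binary tebibyte, as used for file-system limits

print(f"1 TB  = {TB:,} bytes")              # 1,000,000,000,000
print(f"1 TiB = {TiB:,} bytes")             # 1,099,511,627,776
print(f"difference: {(TiB - TB) / TB:.1%}")  # a 1 TiB limit is ~10% larger
```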
https://en.wikipedia.org/wiki/Comparison_of_file_systems
A password, sometimes called a passcode, is secret data, typically a string of characters, usually used to confirm a user's identity. Traditionally, passwords were expected to be memorized,[1] but the large number of password-protected services that a typical individual accesses can make memorization of unique passwords for each service impractical.[2] Using the terminology of the NIST Digital Identity Guidelines,[3] the secret is held by a party called the claimant while the party verifying the identity of the claimant is called the verifier. When the claimant successfully demonstrates knowledge of the password to the verifier through an established authentication protocol,[4] the verifier is able to infer the claimant's identity. In general, a password is an arbitrary string of characters including letters, digits, or other symbols. If the permissible characters are constrained to be numeric, the corresponding secret is sometimes called a personal identification number (PIN). Despite its name, a password does not need to be an actual word; indeed, a non-word (in the dictionary sense) may be harder to guess, which is a desirable property of passwords. A memorized secret consisting of a sequence of words or other text separated by spaces is sometimes called a passphrase. A passphrase is similar to a password in usage, but the former is generally longer for added security.[5] Passwords have been used since ancient times. Sentries would challenge those wishing to enter an area to supply a password or watchword, and would only allow a person or group to pass if they knew the password. Polybius describes the system for the distribution of watchwords in the Roman military as follows: The way in which they secure the passing round of the watchword for the night is as follows: from the tenth maniple of each class of infantry and cavalry, the maniple which is encamped at the lower end of the street, a man is chosen who is relieved from guard duty, and he attends every day at sunset at the tent of the tribune, and receiving from him the watchword—that is a wooden tablet with the word inscribed on it—takes his leave, and on returning to his quarters passes on the watchword and tablet before witnesses to the commander of the next maniple, who in turn passes it to the one next to him. All do the same until it reaches the first maniples, those encamped near the tents of the tribunes. These latter are obliged to deliver the tablet to the tribunes before dark. So that if all those issued are returned, the tribune knows that the watchword has been given to all the maniples, and has passed through all on its way back to him. If any one of them is missing, he makes inquiry at once, as he knows by the marks from what quarter the tablet has not returned, and whoever is responsible for the stoppage meets with the punishment he merits.[6] Passwords in military use evolved to include not just a password, but a password and a counterpassword; for example in the opening days of the Battle of Normandy, paratroopers of the U.S. 101st Airborne Division used a password—flash—which was presented as a challenge, and answered with the correct response—thunder. The challenge and response were changed every three days.
American paratroopers also famously used a device known as a "cricket" on D-Day in place of a password system as a temporarily unique method of identification; one metallic click given by the device in lieu of a password was to be met by two clicks in reply.[7] Passwords have been used with computers since the earliest days of computing. The Compatible Time-Sharing System (CTSS), an operating system introduced at MIT in 1961, was the first computer system to implement password login.[8][9] CTSS had a LOGIN command that requested a user password. "After typing PASSWORD, the system turns off the printing mechanism, if possible, so that the user may type in his password with privacy."[10] In the early 1970s, Robert Morris developed a system of storing login passwords in a hashed form as part of the Unix operating system. The system was based on a simulated Hagelin rotor crypto machine, and first appeared in 6th Edition Unix in 1974. A later version of his algorithm, known as crypt(3), used a 12-bit salt and invoked a modified form of the DES algorithm 25 times to reduce the risk of pre-computed dictionary attacks.[11] In modern times, user names and passwords are commonly used by people during a log in process that controls access to protected computer operating systems, mobile phones, cable TV decoders, automated teller machines (ATMs), etc. A typical computer user has passwords for many purposes: logging into accounts, retrieving e-mail, accessing applications, databases, networks, web sites, and even reading the morning newspaper online. The easier a password is for the owner to remember generally means it will be easier for an attacker to guess.[12] However, passwords that are difficult to remember may also reduce the security of a system because (a) users might need to write down or electronically store the password, (b) users will need frequent password resets and (c) users are more likely to re-use the same password across different accounts. Similarly, the more stringent the password requirements, such as "have a mix of uppercase and lowercase letters and digits" or "change it monthly", the greater the degree to which users will subvert the system.[13] Others argue longer passwords provide more security (e.g., entropy) than shorter passwords with a wide variety of characters.[14] In The Memorability and Security of Passwords,[15] Jeff Yan et al. examine the effect of advice given to users about a good choice of password. They found that passwords based on thinking of a phrase and taking the first letter of each word are just as memorable as naively selected passwords, and just as hard to crack as randomly generated passwords. Combining two or more unrelated words and altering some of the letters to special characters or numbers is another good method,[16] but a single dictionary word is not. Having a personally designed algorithm for generating obscure passwords is another good method.[17] However, asking users to remember a password consisting of a "mix of uppercase and lowercase characters" is similar to asking them to remember a sequence of bits: hard to remember, and only a little bit harder to crack (e.g. only 128 times harder to crack for 7-letter passwords, less if the user simply capitalises one of the letters). Asking users to use "both letters and digits" will often lead to easy-to-guess substitutions such as 'E' → '3' and 'I' → '1', substitutions that are well known to attackers.
Similarly, typing the password one keyboard row higher is a common trick known to attackers.[18] In 2013, Google released a list of the most common password types, all of which are considered insecure because they are too easy to guess (especially after researching an individual on social media).[19] Traditional advice to memorize passwords and never write them down has become a challenge because of the sheer number of passwords users of computers and the internet are expected to maintain. One survey concluded that the average user has around 100 passwords.[2] To manage the proliferation of passwords, some users employ the same password for multiple accounts, a dangerous practice since a data breach in one account could compromise the rest. Less risky alternatives include the use of password managers, single sign-on systems and simply keeping paper lists of less critical passwords.[20] Such practices can reduce the number of passwords that must be memorized, such as the password manager's master password, to a more manageable number. The security of a password-protected system depends on several factors. The overall system must be designed for sound security, with protection against computer viruses, man-in-the-middle attacks and the like. Physical security issues are also a concern, from deterring shoulder surfing to more sophisticated physical threats such as video cameras and keyboard sniffers. Passwords should be chosen so that they are hard for an attacker to guess and hard for an attacker to discover using any of the available automatic attack schemes.[21] Nowadays, it is a common practice for computer systems to hide passwords as they are typed. The purpose of this measure is to prevent bystanders from reading the password; however, some argue that this practice may lead to mistakes and stress, encouraging users to choose weak passwords. As an alternative, users should have the option to show or hide passwords as they type them.[21] Effective access control provisions may force extreme measures on criminals seeking to acquire a password or biometric token.[22] Less extreme measures include extortion, rubber hose cryptanalysis, and side channel attacks. Some specific password management issues that must be considered when thinking about, choosing, and handling a password follow. The rate at which an attacker can submit guessed passwords to the system is a key factor in determining system security. Some systems impose a time-out of several seconds after a small number (e.g., three) of failed password entry attempts, a practice also known as throttling.[3]: 63B Sec 5.2.2 In the absence of other vulnerabilities, such systems can be effectively secure with relatively simple passwords, if the passwords have been well chosen and are not easily guessed.[23] Many systems store a cryptographic hash of the password. If an attacker gets access to the file of hashed passwords, guessing can be done offline, rapidly testing candidate passwords against the true password's hash value. In the example of a web server, an online attacker can guess only at the rate at which the server will respond, while an off-line attacker (who gains access to the file) can guess at a rate limited only by the hardware on which the attack is running and the strength of the algorithm used to create the hash. Passwords that are used to generate cryptographic keys (e.g., for disk encryption or Wi-Fi security) can also be subjected to high-rate guessing, known as password cracking.
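The defenses discussed around this point, per-user salting and deliberately slow ("stretched") hashing, can be sketched in a few lines of Python using the standard library's PBKDF2. This is an illustrative outline, not a prescription; the iteration count is an assumed work factor to be tuned per deployment.

```python
import hashlib, hmac, os

ITERATIONS = 600_000  # assumed work factor; raise it as hardware gets faster

def hash_password(password: str) -> tuple[bytes, bytes]:
    salt = os.urandom(16)  # unique random salt, stored alongside the hash
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    # Constant-time comparison avoids leaking how many leading bytes matched.
    return hmac.compare_digest(candidate, digest)
```

Because the salt is random per user, identical passwords produce different stored hashes, so precomputed tables do not scale across accounts, and the iteration count multiplies the cost of every offline guess.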
Lists of common passwords are widely available and can make password attacks efficient. Security in such situations depends on using passwords or passphrases of adequate complexity, making such an attack computationally infeasible for the attacker. Some systems, such as PGP and Wi-Fi WPA, apply a computation-intensive hash to the password to slow such attacks, in a technique known as key stretching. An alternative to limiting the rate at which an attacker can make guesses on a password is to limit the total number of guesses that can be made. The password can be disabled, requiring a reset, after a small number of consecutive bad guesses (say 5); and the user may be required to change the password after a larger cumulative number of bad guesses (say 30), to prevent an attacker from making an arbitrarily large number of bad guesses by interspersing them between good guesses made by the legitimate password owner.[24] Attackers may conversely use knowledge of this mitigation to implement a denial of service attack against the user by intentionally locking the user out of their own device; this denial of service may open other avenues for the attacker to manipulate the situation to their advantage via social engineering. Some computer systems store user passwords as plaintext, against which to compare user logon attempts. If an attacker gains access to such an internal password store, all passwords—and so all user accounts—will be compromised. If some users employ the same password for accounts on different systems, those will be compromised as well. More secure systems store each password in a cryptographically protected form, so access to the actual password will still be difficult for a snooper who gains internal access to the system, while validation of user access attempts remains possible. The most secure do not store passwords at all, but a one-way derivation, such as a polynomial, modulus, or an advanced hash function.[14] Roger Needham invented the now-common approach of storing only a "hashed" form of the plaintext password.[25][26] When a user types in a password on such a system, the password handling software runs it through a cryptographic hash algorithm, and if the hash value generated from the user's entry matches the hash stored in the password database, the user is permitted access. The hash value is created by applying a cryptographic hash function to a string consisting of the submitted password and, in many implementations, another value known as a salt. A salt prevents attackers from easily building a list of hash values for common passwords and prevents password cracking efforts from scaling across all users.[27] MD5 and SHA1 are frequently used cryptographic hash functions, but they are not recommended for password hashing unless they are used as part of a larger construction such as PBKDF2.[28] The stored data—sometimes called the "password verifier" or the "password hash"—is often stored in Modular Crypt Format or RFC 2307 hash format, sometimes in the /etc/passwd file or the /etc/shadow file.[29] The main storage methods for passwords are plain text, hashed, hashed and salted, and reversibly encrypted.[30] If an attacker gains access to the password file, then if it is stored as plain text, no cracking is necessary. If it is hashed but not salted then it is vulnerable to rainbow table attacks (which are more efficient than cracking).
If it is reversibly encrypted, then if the attacker gets the decryption key along with the file no cracking is necessary, while if he fails to get the key cracking is not possible. Thus, of the common storage formats for passwords, only when passwords have been salted and hashed is cracking both necessary and possible.[30] If a cryptographic hash function is well designed, it is computationally infeasible to reverse the function to recover a plaintext password. An attacker can, however, use widely available tools to attempt to guess the passwords. These tools work by hashing possible passwords and comparing the result of each guess to the actual password hashes. If the attacker finds a match, they know that their guess is the actual password for the associated user. Password cracking tools can operate by brute force (i.e. trying every possible combination of characters) or by hashing every word from a list; large lists of possible passwords in many languages are widely available on the Internet.[14] The existence of password cracking tools allows attackers to easily recover poorly chosen passwords. In particular, attackers can quickly recover passwords that are short, dictionary words, simple variations on dictionary words, or that use easily guessable patterns.[31] A modified version of the DES algorithm was used as the basis for the password hashing algorithm in early Unix systems.[32] The crypt algorithm used a 12-bit salt value so that each user's hash was unique, and iterated the DES algorithm 25 times in order to make the hash function slower, both measures intended to frustrate automated guessing attacks.[32] The user's password was used as a key to encrypt a fixed value. More recent Unix or Unix-like systems (e.g., Linux or the various BSD systems) use more secure password hashing algorithms such as PBKDF2, bcrypt, and scrypt, which have large salts and an adjustable cost or number of iterations.[33] A poorly designed hash function can make attacks feasible even if a strong password is chosen. LM hash is a widely deployed and insecure example.[34] Passwords are vulnerable to interception (i.e., "snooping") while being transmitted to the authenticating machine or person. If the password is carried as electrical signals on unsecured physical wiring between the user access point and the central system controlling the password database, it is subject to snooping by wiretapping methods. If it is carried as packeted data over the Internet, anyone able to watch the packets containing the logon information can snoop with a low probability of detection. Email is sometimes used to distribute passwords, but this is generally an insecure method. Since most email is sent as plaintext, a message containing a password is readable without effort during transport by any eavesdropper. Further, the message will be stored as plaintext on at least two computers: the sender's and the recipient's. If it passes through intermediate systems during its travels, it will probably be stored there as well, at least for some time, and may be copied to backup, cache or history files on any of these systems. Using client-side encryption will only protect transmission from the mail handling system server to the client machine. Previous or subsequent relays of the email will not be protected and the email will probably be stored on multiple computers, certainly on the originating and receiving computers, most often in clear text.
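To make the offline guessing described above concrete, here is a toy dictionary attack against an unsalted, fast hash; the leaked hash and the four-word list are fabricated for illustration.

```python
import hashlib

# Fabricated example: a leaked, unsalted MD5 hash of a weak password.
leaked = hashlib.md5(b"letmein").hexdigest()

wordlist = ["password", "123456", "letmein", "qwerty"]  # stand-in for a real list

for word in wordlist:
    if hashlib.md5(word.encode()).hexdigest() == leaked:
        print("cracked:", word)
        break
```

Real attacks differ only in scale: wordlists run to hundreds of millions of entries, and fast hashes such as MD5 can be tested at enormous rates, which is precisely why the salted, stretched constructions described above are preferred.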
The risk of interception of passwords sent over the Internet can be reduced by, among other approaches, using cryptographic protection. The most widely used is the Transport Layer Security (TLS, previously called SSL) feature built into most current Internet browsers. Most browsers alert the user of a TLS/SSL-protected exchange with a server by displaying a closed lock icon, or some other sign, when TLS is in use. There are several other techniques in use. There is a conflict between stored hashed passwords and hash-based challenge–response authentication; the latter requires a client to prove to a server that they know what the shared secret (i.e., password) is, and to do this, the server must be able to obtain the shared secret from its stored form. On a number of systems (including Unix-type systems) doing remote authentication, the shared secret usually becomes the hashed form and has the serious limitation of exposing passwords to offline guessing attacks. In addition, when the hash is used as a shared secret, an attacker does not need the original password to authenticate remotely; they only need the hash. Rather than transmitting a password, or transmitting the hash of the password, password-authenticated key agreement systems can perform a zero-knowledge password proof, which proves knowledge of the password without exposing it. Moving a step further, augmented systems for password-authenticated key agreement (e.g., AMP, B-SPEKE, PAK-Z, SRP-6) avoid both the conflict and limitation of hash-based methods. An augmented system allows a client to prove knowledge of the password to a server, where the server knows only a (not exactly) hashed password, and where the un-hashed password is required to gain access. Usually, a system must provide a way to change a password, either because a user believes the current password has been (or might have been) compromised, or as a precautionary measure. If a new password is passed to the system in unencrypted form, security can be lost (e.g., via wiretapping) before the new password can even be installed in the password database, and if the new password is given to a compromised employee, little is gained. Some websites include the user-selected password in an unencrypted confirmation e-mail message, with the obvious increased vulnerability. Identity management systems are increasingly used to automate the issuance of replacements for lost passwords, a feature called self-service password reset. The user's identity is verified by asking questions and comparing the answers to ones previously stored (i.e., when the account was opened). Some password reset questions ask for personal information that could be found on social media, such as mother's maiden name. As a result, some security experts recommend either making up one's own questions or giving false answers.[35] "Password aging" is a feature of some operating systems which forces users to change passwords frequently (e.g., quarterly, monthly or even more often). Such policies usually provoke user protest and foot-dragging at best and hostility at worst.[36] There is often an increase in the number of people who note down the password and leave it where it can easily be found, as well as help desk calls to reset a forgotten password.
Users may use simpler passwords or develop variation patterns on a consistent theme to keep their passwords memorable.[37] Because of these issues, there is some debate as to whether password aging is effective.[38] Changing a password will not prevent abuse in most cases, since the abuse would often be immediately noticeable. However, if someone may have had access to the password through some means, such as sharing a computer or breaching a different site, changing the password limits the window for abuse.[39] Allotting separate passwords to each user of a system is preferable to having a single password shared by legitimate users of the system, certainly from a security viewpoint. This is partly because users are more willing to tell another person (who may not be authorized) a shared password than one exclusively for their use. Single passwords are also much less convenient to change because many people need to be told at the same time, and they make removal of a particular user's access more difficult, as for instance on graduation or resignation. Separate logins are also often used for accountability, for example to know who changed a piece of data. A number of techniques are commonly used to improve the security of computer systems protected by a password. Some of the more stringent policy enforcement measures, however, can pose a risk of alienating users, possibly decreasing security as a result. It is common practice amongst computer users to reuse the same password on multiple sites. This presents a substantial security risk, because an attacker needs to compromise only a single site in order to gain access to other sites the victim uses. This problem is exacerbated by also reusing usernames, and by websites requiring email logins, as it makes it easier for an attacker to track a single user across multiple sites. Password reuse can be avoided or minimized by using mnemonic techniques, writing passwords down on paper, or using a password manager.[44] It has been argued by Redmond researchers Dinei Florencio and Cormac Herley, together with Paul C. van Oorschot of Carleton University, Canada, that password reuse is inevitable, and that users should reuse passwords for low-security websites (which contain little personal data and no financial information, for example) and instead focus their efforts on remembering long, complex passwords for a few important accounts, such as bank accounts.[45] Forbes made a similar argument, advising readers not to change passwords as often as some "experts" recommend, because of the same limitations in human memory.[37] Historically, multiple security experts asked people to memorize their passwords: "Never write down a password". More recently, multiple security experts such as Bruce Schneier recommend that people use passwords that are too complicated to memorize, write them down on paper, and keep them in a wallet.[46][47][48][49][50][51][52] Password manager software can also store passwords relatively safely, in an encrypted file sealed with a single master password. To facilitate estate administration, it is helpful for people to provide a mechanism for their passwords to be communicated to the persons who will administer their affairs in the event of their death.
Should a record of accounts and passwords be prepared, care must be taken to ensure that the records are secure, to prevent theft or fraud.[53] Multi-factor authentication schemes combine passwords (as "knowledge factors") with one or more other means of authentication, to make authentication more secure and less vulnerable to compromised passwords. For example, a simple two-factor login might send a text message, e-mail, automated phone call, or similar alert whenever a login attempt is made, possibly supplying a code that must be entered in addition to a password.[54] More sophisticated factors include such things as hardware tokens and biometric security. Password rotation is a policy that is commonly implemented with the goal of enhancing computer security. In 2019, Microsoft stated that the practice is "ancient and obsolete".[55][56] Most organizations specify a password policy that sets requirements for the composition and usage of passwords, typically dictating minimum length, required categories (e.g., upper and lower case, numbers, and special characters) and prohibited elements (e.g., use of one's own name, date of birth, address, telephone number). Some governments have national authentication frameworks[57] that define requirements for user authentication to government services, including requirements for passwords. Many websites enforce standard rules such as minimum and maximum length, but also frequently include composition rules such as featuring at least one capital letter and at least one number/symbol. These latter, more specific rules were largely based on a 2003 report by the National Institute of Standards and Technology (NIST), authored by Bill Burr.[58] It originally proposed the practice of using numbers, obscure characters and capital letters, and updating regularly. In a 2017 article in The Wall Street Journal, Burr reported that he regretted these proposals and had made a mistake when he recommended them.[59] According to a 2017 rewrite of this NIST report, a number of websites have rules that actually have the opposite effect on the security of their users. This includes complex composition rules as well as forced password changes after certain periods of time. While these rules have long been widespread, they have also long been seen as annoying and ineffective by both users and cyber-security experts.[60] The NIST recommends people use longer phrases as passwords (and advises websites to raise the maximum password length) instead of hard-to-remember passwords with "illusory complexity" such as "pA55w+rd".[61] A user prevented from using the password "password" may simply choose "Password1" if required to include a number and an uppercase letter. Combined with forced periodic password changes, this can lead to passwords that are difficult to remember but easy to crack.[58] Paul Grassi, one of the 2017 NIST report's authors, further elaborated: "Everyone knows that an exclamation point is a 1, or an I, or the last character of a password. $ is an S or a 5. If we use these well-known tricks, we aren't fooling any adversary. We are simply fooling the database that stores passwords into thinking the user did something good."[60] Pieris Tsokkis and Eliana Stavrou were able to identify some bad password construction strategies through their research and development of a password generator tool. They came up with eight categories of password construction strategies based on exposed password lists, password cracking tools, and online reports citing the most used passwords.
These categories include user-related information, keyboard combinations and patterns, placement strategy, word processing, substitution, capitalization, appended dates, and combinations of the previous categories.[62] Attempting to crack passwords by trying as many possibilities as time and money permit is a brute force attack. A related method, rather more efficient in most cases, is a dictionary attack. In a dictionary attack, all words in one or more dictionaries are tested. Lists of common passwords are also typically tested. Password strength is the likelihood that a password cannot be guessed or discovered, and varies with the attack algorithm used. Cryptologists and computer scientists often refer to the strength or 'hardness' in terms of entropy.[14] Passwords easily discovered are termed weak or vulnerable; passwords difficult or impossible to discover are considered strong. There are several programs available for password attack (or even auditing and recovery by systems personnel) such as L0phtCrack, John the Ripper, and Cain, some of which use password design vulnerabilities (as found in the Microsoft LAN Manager system) to increase efficiency. These programs are sometimes used by system administrators to detect weak passwords proposed by users. Studies of production computer systems have consistently shown that a large fraction of all user-chosen passwords are readily guessed automatically.[63] For example, Columbia University found 22% of user passwords could be recovered with little effort.[64] According to Bruce Schneier, examining data from a 2006 phishing attack, 55% of MySpace passwords would have been crackable in 8 hours using a commercially available Password Recovery Toolkit capable of testing 200,000 passwords per second in 2006.[65] He also reported that the single most common password was password1, confirming yet again the general lack of informed care in choosing passwords among users. (He nevertheless maintained, based on these data, that the general quality of passwords had improved over the years—for example, average length was up to eight characters from under seven in previous surveys, and less than 4% were dictionary words.[66]) The multiple ways in which permanent or semi-permanent passwords can be compromised have prompted the development of other techniques. Some are inadequate in practice, and in any case few have become universally available for users seeking a more secure alternative.[74] A 2012 paper[75] examines why passwords have proved so hard to supplant (despite multiple predictions that they would soon be a thing of the past[76]); in examining thirty representative proposed replacements with respect to security, usability and deployability, the authors conclude that "none even retains the full set of benefits that legacy passwords already provide." "The password is dead" is a recurring idea in computer security. The reasons given often include reference to the usability as well as the security problems of passwords. It often accompanies arguments that the replacement of passwords by a more secure means of authentication is both necessary and imminent. This claim has been made by a number of people at least since 2004.[76][87][88][89][90][91][92][93] Alternatives to passwords include biometrics, two-factor authentication or single sign-on, Microsoft's Cardspace, the Higgins project, the Liberty Alliance, NSTIC, the FIDO Alliance and various Identity 2.0 proposals.[94][95] However, in spite of these predictions and efforts to replace them, passwords remain the dominant form of authentication on the web.
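The entropy measure mentioned above has a simple upper bound for uniformly random passwords: length times log2 of the alphabet size. A small illustration follows; the specific lengths and alphabets are arbitrary examples.

```python
import math

def entropy_bits(length: int, alphabet_size: int) -> float:
    # Upper-bound entropy of a uniformly random password: L * log2(N).
    return length * math.log2(alphabet_size)

print(entropy_bits(8, 26))   # ~37.6 bits: 8 lowercase letters
print(entropy_bits(8, 62))   # ~47.6 bits: adding upper case and digits
print(entropy_bits(20, 26))  # ~94.0 bits: a longer lowercase passphrase
```

This is the arithmetic behind the arguments cited earlier in favor of longer passwords: length enters the bound linearly, while enlarging the character set only grows the logarithm.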
In "The Persistence of Passwords", Cormac Herley and Paul van Oorschot suggest that every effort should be made to end the "spectacularly incorrect assumption" that passwords are dead.[96]They argue that "no other single technology matches their combination of cost, immediacy and convenience" and that "passwords are themselves the best fit for many of the scenarios in which they are currently used." Following this, Bonneau et al. systematically compared web passwords to 35 competing authentication schemes in terms of their usability, deployability, and security.[97][98]Their analysis shows that most schemes do better than passwords on security, some schemes do better and some worse with respect to usability, whileeveryscheme does worse than passwords on deployability. The authors conclude with the following observation: "Marginal gains are often not sufficient to reach the activation energy necessary to overcome significant transition costs, which may provide the best explanation of why we are likely to live considerably longer before seeing the funeral procession for passwords arrive at the cemetery."
https://en.wikipedia.org/wiki/Password#Hashing_and_salt
The lesser of two evils principle, also referred to as the lesser evil principle and lesser-evilism, is the principle that when faced with selecting from two immoral options, the least immoral one should be chosen. The principle is most often invoked in reference to binary political choices under systems that make it impossible to express a sincere preference for one's favorite. The maxim existed already in Platonic philosophy.[1] In Nicomachean Ethics, Aristotle writes: "For the lesser evil can be seen in comparison with the greater evil as a good, since this lesser evil is preferable to the greater one, and whatever is preferable is good". The modern formulation was popularized by Thomas à Kempis' devotional book The Imitation of Christ, written in the early 15th century. In part IV of his Ethics, Spinoza states the following maxim:[2] Proposition 65: "According to the guidance of reason, of two things which are good, we shall follow the greater good, and of two evils, follow the less." The concept of "lesser evil" voting (LEV) can be seen as a form of the minimax strategy ("minimize maximum loss") in which voters, faced with two or more candidates, identify the one they perceive as most likely to do harm and vote for the candidate most likely to defeat that one, the "lesser evil". To do so, "voting should not be viewed as a form of personal self-expression or moral judgement directed in retaliation towards major party candidates who fail to reflect our values, or of a corrupt system designed to limit choices to those acceptable to corporate elites", but rather as an opportunity to reduce harm or loss.[3] Hannah Arendt argued that "Those who choose the lesser evil forget very quickly that they chose evil". In contrast, Seyla Benhabib argues that politics would not exist without the necessity to choose between a greater and a lesser evil.[4] When limited to the two most likely candidates,[5] the "lesser evil" is the most likely "greater good",[6] for the "common good", as Pope Francis has said.[7] In 2012, Huffington Post columnist Sanford Jay Rosen stated that refusal to vote for the lesser of two evils became common practice for left-leaning voters in the United States due to their overwhelming disapproval of the United States government's support for the Vietnam War.[8] Rosen stated: "Beginning with the 1968 presidential election, I often have heard from liberals that they could not vote for the lesser of two evils. Some said they would not vote; some said they would vote for a third-party candidate. That mantra delivered us to Richard Nixon in 1972 until Watergate did him in. And it delivered us to George W.
Bush and Dick Cheney in 2000 until they were termed out in 2009".[8] In the 2016 United States presidential election, both candidates of the major parties — Hillary Clinton (D) and Donald Trump (R) — had disapproval ratings close to 60% by August 2016.[9] Green Party candidate Jill Stein invoked this idea in her campaign, stating, "Don't vote for the lesser evil, fight for the greater good".[10] Green Party votes hurt Democratic chances in 2000 and 2016.[11][12][13] This sentiment was repeated for the next two election cycles, both of which were between Trump and Democratic candidates: Joe Biden in 2020 and Kamala Harris in 2024.[14][15] Accordingly, the lesser evil principle should be applied to the two front-runners among many choices, after eliminating from consideration "minor party candidates (who) can be spoilers in elections by taking away enough votes from a major party candidate to influence the outcome without winning."[16] In his DarkHorse podcast, Bret Weinstein describes his Unity 2020 proposal for the 2020 presidential election as an option that, in case of failure, would not asymmetrically weaken voters' second-best choice on a single political side, thereby avoiding the lesser evil paradox.[17] In elections between only two candidates where one is mildly unpopular and the other immensely unpopular, opponents of both candidates frequently advocate a vote for the mildly unpopular candidate. For example, in the second round of the 2002 French presidential election, graffiti in Paris told people to "vote for the crook, not the fascist". The "crook" in those scribbled public messages was Jacques Chirac of Rally for the Republic and the "fascist" was Jean-Marie Le Pen of the National Front. Chirac eventually won the second round, having garnered 82% of the vote.[18] "Between Scylla and Charybdis" is an idiom derived from Homer's Odyssey. In the story, Odysseus chose to go near Scylla as the lesser of two evils. He lost six of his companions, but if he had gone near Charybdis all would be doomed. Because of such stories, having to navigate between the two hazards eventually entered idiomatic use. An equivalent English seafaring phrase is "Between a rock and a hard place".[19] The Latin line incidit in scyllam cupiens vitare charybdim ("he runs into Scylla, wishing to avoid Charybdis") had earlier become proverbial, with a meaning much the same as jumping from the frying pan into the fire. Erasmus recorded it as an ancient proverb in his Adagia, although the earliest known instance is in the Alexandreis, a 12th-century Latin epic poem by Walter of Châtillon.[20]
https://en.wikipedia.org/wiki/Lesser_of_two_evils_principle
In criminology, the broken windows theory states that visible signs of crime, antisocial behavior and civil disorder create an urban environment that encourages further crime and disorder, including serious crimes.[1] The theory suggests that policing methods that target minor crimes, such as vandalism, loitering, public drinking and fare evasion, help to create an atmosphere of order and lawfulness. The theory was introduced in a 1982 article by social scientists James Q. Wilson and George L. Kelling.[1] It was popularized in the 1990s by New York City police commissioner William Bratton, whose policing policies were influenced by the theory. The theory became subject to debate both within the social sciences and the public sphere. Broken windows policing has been enforced with controversial police practices, such as the high use of stop-and-frisk in New York City in the decade up to 2013. James Q. Wilson and George L. Kelling first introduced the broken windows theory in an article titled "Broken Windows", in the March 1982 issue of The Atlantic Monthly: Social psychologists and police officers tend to agree that if a window in a building is broken and is left unrepaired, all the rest of the windows will soon be broken. This is as true in nice neighborhoods as in rundown ones. Window-breaking does not necessarily occur on a large scale because some areas are inhabited by determined window-breakers whereas others are populated by window-lovers; rather, one unrepaired broken window is a signal that no one cares, and so breaking more windows costs nothing. (It has always been fun.)[1] The article received a great deal of attention and was very widely cited. A 1996 criminology and urban sociology book, Fixing Broken Windows: Restoring Order and Reducing Crime in Our Communities by George L. Kelling and Catharine Coles, is based on the article but develops the argument in greater detail. It discusses the theory in relation to crime and strategies to contain or eliminate crime from urban neighborhoods.[2] A successful strategy for preventing vandalism, according to the book's authors, is to address the problems when they are small. Repair the broken windows within a short time, say, a day or a week, and the tendency is that vandals are much less likely to break more windows or do further damage. Clean up the sidewalk every day, and the tendency is for litter not to accumulate (or for the rate of littering to be much less). Problems are less likely to escalate, and thus respectable residents do not flee the neighborhood. Oscar Newman introduced defensible space theory in his 1972 book Defensible Space. He argued that although police work is crucial to crime prevention, police authority is not enough to maintain a safe and crime-free city. People in the community help with crime prevention. Newman proposed that people care for and protect spaces they feel invested in, arguing that an area is eventually safer if the people feel a sense of ownership and responsibility towards it. Broken windows and vandalism are still prevalent because communities simply do not care about the damage. Regardless of how many times the windows are repaired, the community still must invest some of its time to keep the area safe. Residents' neglect of broken-window-type decay signifies a lack of concern for the community.
Newman says this is a clear sign that the society has accepted this disorder—allowing the unrepaired windows to display vulnerability and lack of defense.[3] Malcolm Gladwell also relates this theory to the reality of New York City in his book The Tipping Point.[4] Thus, the theory makes a few major claims: that improving the quality of the neighborhood environment reduces petty crime, anti-social behavior, and low-level disorder, and that major crime is also prevented as a result. Criticism of the theory has tended to focus on the latter claim.[5] The reason the state of the urban environment may affect crime consists of three factors: social norms and conformity; the presence or lack of routine monitoring; and social signaling and signal crime. In an anonymous urban environment, with few or no other people around, social norms and monitoring are not clearly known. Individuals thus look for signals within the environment as to the social norms in the setting and the risk of getting caught violating those norms; one of these signals is the area's general appearance. Under the broken windows theory, an ordered and clean environment, one that is maintained, sends the signal that the area is monitored and that criminal behavior is not tolerated. Conversely, a disordered environment, one that is not maintained (broken windows, graffiti, excessive litter), sends the signal that the area is not monitored and that criminal behavior has little risk of detection. The theory assumes that the landscape "communicates" to people. A broken window transmits to criminals the message that a community displays a lack of informal social control and so is unable or unwilling to defend itself against a criminal invasion. It is not so much the actual broken window that is important, but the message the broken window sends to people. It symbolizes the community's defenselessness and vulnerability and represents the lack of cohesiveness of the people within. Neighborhoods with a strong sense of cohesion fix broken windows and assert social responsibility on themselves, effectively giving themselves control over their space. The theory emphasizes the built environment, but must also consider human behavior.[6] Under the impression that a broken window left unfixed leads to more serious problems, residents begin to change the way they see their community. In an attempt to stay safe, a cohesive community starts to fall apart, as individuals start to spend less time in communal space to avoid potential violent attacks by strangers.[1] The slow deterioration of a community, as a result of broken windows, modifies the way people behave when it comes to their communal space, which, in turn, breaks down community control. As rowdy teenagers, panhandlers, addicts, and prostitutes slowly make their way into a community, it signifies that the community cannot assert informal social control, and citizens become afraid that worse things will happen. As a result, they spend less time in the streets to avoid these subjects and feel less and less connected to their community if the problems persist. At times, residents tolerate "broken windows" because they feel they belong in the community and "know their place". Problems, however, arise when outsiders begin to disrupt the community's cultural fabric. That is the difference between "regulars" and "strangers" in a community.
The way that "regulars" act represents the culture within, but strangers are "outsiders" who do not belong.[6] Consequently, daily activities considered "normal" for residents now become uncomfortable, as the culture of the community carries a different feel from the way that it was once. With regard to social geography, the broken windows theory is a way of explaining people and their interactions with space. The culture of a community can deteriorate and change over time, with the influence of unwanted people and behaviors changing the landscape. The theory can be seen as people shaping space, as the civility and attitude of the community create spaces used for specific purposes by residents. On the other hand, it can also be seen as space shaping people, with elements of the environment influencing and restricting day-to-day decision making. However, with policing efforts to remove unwanted disorderly people that put fear in the public's eyes, the argument would seem to be in favor of "people shaping space", as public policies are enacted and help to determine how one is supposed to behave. All spaces have their own codes of conduct, and what is considered to be right and normal will vary from place to place. The concept also takes into consideration spatial exclusion and social division, as certain people behaving in a given way are considered disruptive and therefore, unwanted. It excludes people from certain spaces because their behavior does not fit the class level of the community and its surroundings. A community has its own standards and communicates a strong message to criminals, by social control, that their neighborhood does not tolerate their behavior. If, however, a community is unable to ward off would-be criminals on their own, policing efforts help. By removing unwanted people from the streets, the residents feel safer and have a higher regard for those that protect them. People of less civility who try to make a mark in the community are removed, according to the theory.[6] Many claim thatinformal social controlcan be an effective strategy to reduce unruly behavior.Garland (2001)expresses that "community policing measures in the realization that informal social control exercised through everyday relationships and institutions is more effective than legal sanctions."[7]Informal social control methods have demonstrated a "get tough" attitude by proactive citizens, and express a sense that disorderly conduct is not tolerated. According to Wilson and Kelling, there are two types of groups involved in maintaining order, 'community watchmen' and 'vigilantes'.[1]The United States has adopted in many ways policing strategies of old European times, and at that time, informal social control was the norm, which gave rise to contemporary formal policing. Though, in earlier times, because there were no legal sanctions to follow, informal policing was primarily 'objective' driven, as stated by Wilson and Kelling (1982). Wilcox et al. 2004argue that improperland usecan cause disorder, and the larger the public land is, the more susceptible to criminal deviance.[8]Therefore, nonresidential spaces, such as businesses, may assume to the responsibility of informal social control "in the form ofsurveillance, communication, supervision, and intervention".[9]It is expected that more strangers occupying the public land creates a higher chance for disorder.Jane Jacobscan be considered one of the original pioneers of this perspective ofbroken windows. 
Much of her book,The Death and Life of Great American Cities, focuses on residents' and nonresidents' contributions to maintaining order on the street, and explains how local businesses, institutions, and convenience stores provide a sense of having "eyes on the street".[10] On the contrary, many residents feel that regulating disorder is not their responsibility. Wilson and Kelling found that studies done by psychologists suggest people often refuse to go to the aid of someone seeking help, not due to a lack of concern or selfishness "but the absence of some plausible grounds for feeling that one must personally accept responsibility".[1]On the other hand, others plainly refuse to put themselves in harm's way, depending on how grave they perceive the nuisance to be; a 2004 study observed that "most research on disorder is based on individual level perceptions decoupled from a systematic concern with the disorder-generating environment."[11]Essentially, everyone perceives disorder differently, and can contemplate seriousness of a crime based on those perceptions. However, Wilson and Kelling feel that although community involvement can make a difference, "the police are plainly the key to order maintenance."[1] Ranasinghe argues that the concept of fear is a crucial element of broken windows theory, because it is the foundation of the theory.[12]She also adds that public disorder is "... unequivocally constructed as problematic because it is a source of fear".[13]Fear is elevated as perception of disorder rises; creating a social pattern that tears the social fabric of a community and leaves the residents feeling hopeless and disconnected. Wilson and Kelling hint at the idea, but do not focus on its central importance. They indicate that fear was a product of incivility, not crime, and that people avoid one another in response to fear, weakening controls.[1]Hinkle and Weisburd found that police interventions to combat minor offenses, as per the broken windows model, "significantly increased the probability of feeling unsafe," suggesting that such interventions might offset any benefits of broken windows policing in terms of fear reduction.[14] Broken windows policing is sometimes described as a "zero tolerance" policing style,[15]including in some academic studies.[16]Bratton and Kelling have said that broken windows policing and zero tolerance are different, and that minor offenders should receive lenient punishment.[17] In an earlier publication ofThe Atlanticreleased March 1982, Wilson wrote an article indicating that police efforts had gradually shifted from maintaining order to fighting crime.[1]This indicated that order maintenance was something of the past, and soon it would seem as it has been put on the back burner. 
The shift was attributed to the rise of the social urban riots of the 1960s, and "social scientists began to explore carefully the order maintenance function of the police, and to suggest ways of improving it—not to make streets safer (its original function) but to reduce the incidence of mass violence".[1]Other criminologists argue between similar disconnections, for example, Garland argues that throughout the early and mid 20th century, police in American cities strived to keep away from the neighborhoods under their jurisdiction.[7]This is a possible indicator of the out-of-control social riots that were prevalent at that time.[citation needed]Still many would agree that reducing crime and violence begins with maintaining social control/order.[18] Jane Jacobs'The Death and Life of Great American Citiesis discussed in detail by Ranasinghe, and its importance to the early workings of broken windows, and claims that Kelling's original interest in "minor offences and disorderly behaviour and conditions" was inspired by Jacobs' work.[19]Ranasinghe includes that Jacobs' approach toward social disorganization was centralized on the "streets and their sidewalks, the main public places of a city" and that they "are its most vital organs, because they provide the principal visual scenes".[20]Wilson and Kelling, as well as Jacobs, argue on the concept of civility (or the lack thereof) and how it creates lasting distortions between crime and disorder. Ranasinghe explains that the common framework of both set of authors is to narrate the problem facing urban public places. Jacobs, according to Ranasinghe, maintains that "Civility functions as a means of informal social control, subject little to institutionalized norms and processes, such as the law" 'but rather maintained through an' "intricate, almost unconscious, network of voluntary controls and standards among people... and enforced by the people themselves".[21] Before the introduction of this theory by Wilson and Kelling,Philip Zimbardo, aStanfordpsychologist, arranged an experiment testing the broken-window theory in 1969. Zimbardo arranged for an automobile with no license plates and the hood up to be parked idle in aBronxneighbourhood and a second automobile, in the same condition, to be set up inPalo Alto, California. The car in the Bronx was attacked within minutes of its abandonment. Zimbardo noted that the first "vandals" to arrive were a family—a father, mother, and a young son—who removed the radiator and battery. Within twenty-four hours of its abandonment, everything of value had been stripped from the vehicle. After that, the car's windows were smashed in, parts torn, upholstery ripped, and children were using the car as a playground. At the same time, the vehicle sitting idle in Palo Alto sat untouched for more than a week until Zimbardo himself went up to the vehicle and deliberately smashed it with a sledgehammer. Soon after, people joined in for the destruction, although criticism has been levelled at this claim as the destruction occurred after the car was moved to the campus of Stanford university and Zimbardo's own students were the first to join him. Zimbardo observed that a majority of the adult "vandals" in both cases were primarily well dressed, Caucasian, clean-cut and seemingly respectable individuals. It is believed that, in a neighborhood such as the Bronx where the history of abandoned property and theft is more prevalent, vandalism occurs much more quickly, as the community generally seems apathetic. 
Similar events can occur in any civilized community when communal barriers—the sense of mutual regard and obligations of civility—are lowered by actions that suggest apathy.[1][22] In 1985, theNew York City Transit AuthorityhiredGeorge L. Kelling, the author ofBroken Windows, as a consultant.[23]Kelling was later hired as a consultant to theBostonand theLos Angelespolice departments. One of Kelling's adherents,David L. Gunn, implemented policies and procedures based on the Broken Windows Theory, during his tenure as President of the New York City Transit Authority. One of his major efforts was to lead a campaign from 1984 to 1990 to ridgraffitifrom New York's subway system. In 1990,William J. Brattonbecame head of theNew York City Transit Police. Bratton was influenced by Kelling, describing him as his "intellectual mentor". In his role, he implemented a tougher stance onfare evasion, fasterarrestee processingmethods, andbackground checkson all those arrested. After being electedMayor of New York Cityin 1993, as aRepublican,Rudy Giulianihired Bratton as hispolice commissionerto implement similar policies and practices throughout the city. Giuliani heavily subscribed to Kelling and Wilson's theories. Such policies emphasized addressing crimes that negatively affectquality of life. In particular, Bratton directed the police to more strictly enforce laws against subway fare evasion,public drinking,public urination, and graffiti. Bratton also revived theNew York City Cabaret Law, a previously dormant Prohibition era ban on dancing in unlicensed establishments. Throughout the late 1990s, NYPD shut down many of the city's acclaimed night spots for illegal dancing. According to a 2001 study of crime trends in New York City by Kelling and William Sousa, rates of both petty and serious crime fell significantly after the aforementioned policies were implemented. Furthermore, crime continued to decline for the following ten years. Such declines suggested that policies based on the Broken Windows Theory wereeffective.[24]Later, in 2016, Brian Jordan Jefferson used the precedent of Kelling and Sousa's study to conduct fieldwork in the 70th precinct of New York City, which corroborated that crime mitigation efforts in the area centered on "quality of life" issues, such as noise complaints and loitering.[25]The falling crime rates throughout New York City had built a mutual relationship between residents and law enforcement in vigilance of disorderly conduct.[citation needed] However, other studies do not find acause and effectrelationship between the adoption of such policies and decreases in crime.[5][26]The decrease may have been part of a broader trend across the United States. The rates of most crimes, including all categories of violent crime, made consecutive declines from their peak in 1990, under Giuliani's predecessor,David Dinkins. Other cities also experienced less crime, even though they had different police policies. Other factors, such as the 39% drop in New York City'sunemployment ratebetween 1992 and 1999,[27]could also explain the decrease reported by Kelling and Sousa.[27] A 2017 study found that when the New York Police Department (NYPD) stopped aggressively enforcing minor legal statutes in late 2014 and early 2015, civilian complaints of three major crimes (burglary, felony assault, and grand larceny) decreased (slightly, with large error bars) during and shortly after sharp reductions inproactive policing. 
There was no statistically significant effect on other major crimes such as murder, rape, robbery, or grand theft auto. These results are touted as challenging prevailing scholarship as well as conventional wisdom on authority and legal compliance by implying that aggressively enforcing minor legal statutes incites more severe criminal acts.[28] Albuquerque,New Mexico, instituted the Safe Streets Program in the late 1990s based on the Broken Windows Theory. The Safe Streets Program sought to deter and reduce unsafe driving and incidence of crime by saturating areas where high crime and crash rates were prevalent with law enforcement officers. Operating under the theory that AmericanWesternersuse roadways much in the same way that AmericanEasternersuse subways, the developers of the program reasoned that lawlessness on the roadways had much the same effect as it did on theNew York City Subway. Effects of the program were reviewed by the USNational Highway Traffic Safety Administration(NHTSA) and were published in a case study.[29]The methodology behind the program demonstrates the use ofdeterrence theoryin preventing crime.[30] In 2005,Harvard UniversityandSuffolk Universityresearchers worked with local police to identify 34 "crime hot spots" inLowell, Massachusetts. In half of the spots, authorities cleared trash, fixed streetlights, enforced building codes, discouragedloiterers, made moremisdemeanorarrests, and expandedmental health servicesand aid for thehomeless. In the other half of the identified locations, there was no change to routine police service. The areas that received additional attention experienced a 20% reduction in calls to the police. The study concluded that cleaning up the physical environment was more effective than misdemeanor arrests.[31][32] In 2007 and 2008, Kees Keizer and colleagues from theUniversity of Groningenconducted a series of controlled experiments to determine if the effect of existing visible disorder (such as litter or graffiti) increased other crime such as theft, littering, or otherantisocial behavior. They selected several urban locations, which they arranged in two different ways, at different times. In each experiment, there was a "disorder" condition in which violations of social norms as prescribed by signage or national custom, such as graffiti and littering, were clearly visible as well as a control condition where no violations of norms had taken place. The researchers then secretly monitored the locations to observe if people behaved differently when the environment was "disordered". Their observations supported the theory. The conclusion was published in the journalScience: "One example of disorder, like graffiti or littering, can indeed encourage another, like stealing."[33][34] An 18-month study by Carlos Vilalta in Mexico City showed that framework of Broken Windows Theory on homicide in suburban neighborhoods was not a direct correlation, but a "concentrated disadvantage" in the perception of fear and modes of crime prevention.[35]In areas with more social disorder (such as public intoxication), an increased perception of law-abiding citizens to feel unsafe amplified the impact of homicide occurring in the neighborhood. It was also found that it was more effective in preventing instances of violent crime among people living in areas with less physical structural decay (such asgraffiti), lending credence to the Broken Windows Theory basis that law enforcement is trusted more among those in areas with less disorder. 
Furthering this data, a 2023 study conducted by Ricardo Massa on residency near clandestinedumpsitesassociated economic disenfranchisement with high physical disorder.[36]The neighborhoods that had high concentrations of landfill waste were correlated with crimes (such as vehicle theft and robbery), and most significantly crimes related to property. In a space where property damage and neglect is normalized, a person's response to this type of environment can also greatly be affected by their perception of their surroundings. It was also concluded that non-residents of these high-concentration areas tended to fear and avoid these locations, seeing as there was typically less surveillance and lack of community efficacy surrounding clandestine dumpsites. However, despite this fear, Massa also notes that, in this case, individual targets for crime (such as homicide or rape) were unlikely compared to the vandalism of public and private property. Other side effects of better monitoring and cleaned up streets may well be desired by governments or housing agencies and the population of a neighborhood: broken windows can count as an indicator of low real estate value and may deter investors. Real estate professionals may benefit from adopting the "Broken Windows Theory", because if the number of minor transgressions is monitored in a specific area, there is likely to be a reduction in major transgressions as well. This may actually increase or decrease value in a house or apartment, depending on the area.[37]Fixing windows is, therefore, also a step ofreal estate development, which may lead, whether it is desired or not, togentrification. By reducing the number of broken windows in the community, the inner cities would appear to be attractive to consumers with more capital. Eliminating danger in spaces that are notorious for criminal activity, such as downtown New York City and Chicago, would draw in investment from consumers, increase the city's economic status, and provide a safe and pleasant image for present and future inhabitants.[26] In education, the broken windows theory is used to promote order in classrooms and school cultures. The belief is that students are signaled by disorder or rule-breaking and that they in turn imitate the disorder. Several school movements encourage strict paternalistic practices to enforce student discipline. Such practices include language codes (governing slang, curse words, or speaking out of turn), classroom etiquette (sitting up straight, tracking the speaker), personal dress (uniforms, little or no jewelry), and behavioral codes (walking in lines, specified bathroom times). From 2004 to 2006, Stephen B. Plank and colleagues fromJohns Hopkins Universityconducted a correlational study to determine the degree to which the physical appearance of the school and classroom setting influence student behavior, particularly in respect to the variables concerned in their study: fear, social disorder, and collective efficacy.[38]They collected survey data administered to 6th-8th students by 33 public schools in a largemid-Atlanticcity. From analyses of the survey data, the researchers determined that the variables in their study are statistically significant to the physical conditions of the school and classroom setting. The conclusion, published in theAmerican Journal of Education, was: ...the findings of the current study suggest that educators and researchers should be vigilant about factors that influence student perceptions of climate and safety. 
Fixing broken windows and attending to the physical appearance of a school cannot alone guarantee productive teaching and learning, but ignoring them likely greatly increases the chances of a troubling downward spiral.[38] A 2015 meta-analysis of broken windows policing implementations found that disorder policing strategies, such as "hot spots policing" orproblem-oriented policing, result in "consistent crime reduction effects across a variety of violent, property, drug, and disorder outcome measures".[39]As a caveat, the authors noted that "aggressive order maintenance strategies that target individual disorderly behaviors do not generate significant crime reductions," pointing specifically tozero tolerancepolicing models that target singular behaviors such as public intoxication and remove disorderly individuals from the street via arrest. The authors recommend that police develop "community co-production" policing strategies instead of merely committing to increasing misdemeanor arrests.[39] Several studies have argued that many of the apparent successes of broken windows policing (such as New York City in the 1990s) were the result of other factors.[40]They claim that the "broken windows theory" closely relatescorrelationwithcausality: reasoning prone tofallacy. David Thacher, assistant professor of public policy and urban planning at theUniversity of Michigan, stated in a 2004 paper:[40] [S]ocial science has not been kind to the broken windows theory. A number of scholars reanalyzed the initial studies that appeared to support it.... Others pressed forward with new, more sophisticated studies of the relationship between disorder and crime. The most prominent among them concluded that the relationship between disorder and serious crime is modest, and even that relationship is largely an artifact of more fundamental social forces. C. R. Sridhar, in his article in theEconomic and Political Weekly, also challenges the theory behind broken windows policing and the idea that the policies ofWilliam Brattonand theNew York Police Departmentwas the cause of the decrease of crime rates inNew York City.[16]The policy targeted people in areas with a significant amount of physical disorder and there appeared to be a causal relationship between the adoption of broken windows policing and the decrease in crime rate. Sridhar, however, discusses other trends (such as New York City's economic boom in the late 1990s) that created a "perfect storm" that contributed to the decrease of crime rate much more significantly than the application of the broken windows policy. Sridhar also compares this decrease in crime rate with other major cities that adopted various policies and determined that the broken windows policy is not as effective. 
In a 2007 study called "Reefer Madness" in the journalCriminology and Public Policy, Harcourt and Ludwig found further evidence confirming thatmean reversionfully explained the changes in crime rates in the different precincts in New York in the 1990s.[41]Further alternative explanations that have been put forward include the waning of thecrack epidemic,[42]unrelated growth in the prison population driven by theRockefeller drug laws,[42]and that the number of males from 16 to 24 was dropping regardless of the shape of the USpopulation pyramid.[43] It has also been argued that rates of major crimes also dropped in many other US cities during the 1990s, both those that had adopted broken windows policing and those that had not.[44]It is thought that this is due to the exposure of children to environmental lead, which leads to loss of impulse control and, when they reach young adulthood, criminal acts. There appears to be a correlation, with a roughly 25-year lag, between the addition and removal of lead from paint and gasoline and the subsequent rises and falls in murder arrests.[45][46] In his book, Baltimore criminologist Ralph B. Taylor argues that fixing windows is only a partial and short-term solution. His data supports a materialist view: changes in physical decay, superficial social disorder, and racial composition do not lead to higher crime, but economic decline does. He contends that the example shows that real, long-term reductions in crime require that urban politicians, businesses, and community leaders work together to improve the economic fortunes of residents in high-crime areas.[47] In 2015, Northeastern University assistant professor Daniel T. O'Brien criticised the broken windows model. Using hisBig Data-based research model, he argues that the broken window model fails to capture the origins of crime in a neighbourhood. He concludes that crime comes from thesocial dynamicsof communities and private spaces and spills into public spaces.[48] According to a study byRobert J. SampsonandStephen Raudenbush, the premise on which the theory operates, that social disorder and crime are connected as part of a causal chain, is faulty. They argue that a third factor, collective efficacy, "defined as cohesion among residents combined with shared expectations for the social control of public space," is the cause of varying crime rates observed in an altered neighborhood environment. They also argue that the relationship between public disorder and crime rate is weak.[49] In the winter 2006 edition of theUniversity of Chicago Law Review,Bernard Harcourtand Jens Ludwig looked at the laterDepartment of Housing and Urban Developmentprogram that rehoused inner-city project tenants in New York into more-orderly neighborhoods.[26]The broken windows theory would suggest that these tenants would commit less crime once moved because of the more stable conditions on the streets. However, Harcourt and Ludwig found that the tenants continued to commit crimes at the same rate. Another tack was taken by a 2010 study questioning the theory's legitimacy concerning the subjectivity of disorder as perceived by persons living in neighborhoods. It concentrated on whether citizens view disorder as separate from crime or identical to it. 
The study noted that crime cannot be the result of disorder if the two are identical, agreed that disorder provided evidence of "convergent validity" and concluded that broken windows theory misinterprets the relationship between disorder and crime.[50] Broken windows policing has sometimes become associated with zealotry, which has led to critics suggesting that it encourages discriminatory behaviour. Some campaigns such asBlack Lives Matterhave called for an end to broken windows policing.[51]In 2016, aDepartment of Justicereport argued that it had led theBaltimore Police Departmentto discriminate against and alienate minority groups.[52] A central argument is that the term disorder is vague, and giving the police broad discretion to decide what disorder is will lead to discrimination. InDorothy Roberts's article, "Foreword: Race, Vagueness, and the Social Meaning of Order Maintenance and Policing", she says that the broken windows theory in practice leads to the criminalization of communities of color, who are typically disfranchised.[53]She underscores the dangers of vaguely written ordinances that allow for law enforcers to determine who engages in disorderly acts, which, in turn, produces a racially skewed outcome in crime statistics.[54]Similarly, Gary Stewart wrote, "The central drawback of the approaches advanced by Wilson, Kelling, and Kennedy rests in their shared blindness to the potentially harmful impact of broad police discretion on minority communities."[55]According to Stewart, arguments for low-level police intervention, including the broken windows hypothesis, often act "as cover forracistbehavior".[55] The theory has also been criticized for its unsound methodology and its manipulation of racialized tropes. Specifically, Bench Ansfield has shown that in their 1982 article, Wilson and Kelling cited only one source to prove their central contention that disorder leads to crime: the Philip Zimbardo vandalism study (see Precursor Experiments above).[56]But Wilson and Kelling misrepresented Zimbardo's procedure and conclusions, dispensing with Zimbardo's critique of inequality and community anonymity in favor of the oversimplified claim that one broken window gives rise to "a thousand broken windows". Ansfield argues that Wilson and Kelling used the image of the crisis-ridden 1970s Bronx to stoke fears that "all cities would go the way of the Bronx if they didn't embrace their new regime of policing."[57]Wilson and Kelling manipulated the Zimbardo experiment to avail themselves of the racialized symbolism found in the broken windows of the Bronx.[56] Robert J. Sampson argues that based on common misconceptions by the masses, it is implied that those who commit disorder and crime have a clear tie to groups suffering from financial instability and may be of minority status: "The use of racial context to encode disorder does not necessarily mean that people are racially prejudiced in the sense of personal hostility." 
He notes that residents make a clear implication of who they believe is causing the disruption, which has been termed as implicit bias.[58]He further states that research conducted on implicit bias and stereotyping of cultures suggests that community members hold unrelenting beliefs of African Americans and other disadvantaged minority groups, associating them with crime, violence, disorder, welfare, and undesirability as neighbors.[58]A later study indicated that this contradicted Wilson and Kelling's proposition that disorder is an exogenous construct that has independent effects on how people feel about their neighborhoods.[50] In response, Kelling and Bratton have argued that broken windows policing does not discriminate against law-abiding communities of minority groups if implemented properly.[17]They citedDisorder and Decline: Crime and the Spiral of Decay in American Neighborhoods,[59]a study by Wesley Skogan atNorthwestern University. The study, which surveyed 13,000 residents of large cities, concluded that different ethnic groups have similar ideas as to what they would consider to be "disorder". Minority groups have tended to be targeted at higher rates by the Broken Windows style of policing. Broken Windows policies have been utilized more heavily in minority neighborhoods where low-income, poor infrastructure and social disorder were widespread, causing minority groups to perceive that they were beingracially profiledunder Broken Windows policing.[23][60] A common criticism of broken windows policing is the argument that it criminalizes the poor and homeless. That is because the physical signs that characterize a neighborhood with the "disorder" that broken windows policing targets correlate with the socio-economic conditions of its inhabitants. Many of the acts that are considered legal but "disorderly" are often targeted in public settings and are not targeted when they are conducted in private. Therefore, those without access to a private space are frequently criminalized. Critics, such asRobert J. SampsonandStephen RaudenbushofHarvard University, see the application of the broken windows theory in policing as a war against the poor, as opposed to a war against more serious crimes.[61]Since minority groups in most cities are more likely to be poorer than the rest of the population, a bias against the poor would be linked to a racial bias.[53] According to Bruce D. Johnson, Andrew Golub, and James McCabe, applying the broken windows theory in policing and policymaking can result in development projects that decrease physical disorder but promote undesiredgentrification. Often, when a city is so "improved" in this way, the development of an area can cause the cost of living to rise higher than residents can afford, which forces low-income people out of the area. As the space changes, the middle and upper classes, often white, begin to move into the area, resulting in the gentrification of urban, poor areas. The residents are affected negatively by such an application of the broken windows theory and end up evicted from their homes as if their presence indirectly contributed to the area's problem of "physical disorder".[53] InMore Guns, Less Crime(2000),economistJohn Lott, Jr.examined the use of the broken windows approach as well as community- andproblem-oriented policingprograms in cities over 10,000 in population, over two decades. He found that the impacts of these policing policies were inconsistent across different types of crime. 
Lott's book has beensubject to criticism, whileother groups supportLott's conclusions. In the 2005 bookFreakonomics, coauthorsSteven D. LevittandStephen J. Dubnerconfirm and question the notion that the broken windows theory was responsible for New York's drop in crime, saying "the pool of potential criminals had dramatically shrunk". Levitt had in theQuarterly Journal of Economicsattributed that possibility to the legalization of abortion withRoe v. Wade, which correlated with a decrease, one generation later, in the number of delinquents in the population at large.[62] In his 2012 bookUncontrolled: The Surprising Payoff of Trial-and-Error for Business, Politics, and Society,Jim Manziwrites that of the randomized field trials conducted in criminology, onlynuisance abatementper broken windows theory has been successfully replicated.[63][64]
https://en.wikipedia.org/wiki/Broken_windows_theory
Slope Oneis a family of algorithms used forcollaborative filtering, introduced in a 2005 paper by Daniel Lemire and Anna Maclachlan.[1]Arguably, it is the simplest form of non-trivialitem-based collaborative filteringbased on ratings. Their simplicity makes them especially easy to implement efficiently while their accuracy is often on par with more complicated and computationally expensive algorithms.[1][2]They have also been used as building blocks to improve other algorithms.[3][4][5][6][7][8][9]They are part of major open-source libraries such asApache MahoutandEasyrec. When ratings of items are available, such as is the case when people are given the option of rating resources (between 1 and 5, for example), collaborative filtering aims to predict the ratings of one individual based on his past ratings and on a (large) database of ratings contributed by other users. Example: Can we predict the rating an individual would give to the new Celine Dion album given that he gave the Beatles 5 out of 5? In this context, item-based collaborative filtering[10][11]predicts the ratings on one item based on the ratings on another item, typically usinglinear regression(f(x)=ax+b{\displaystyle f(x)=ax+b}). Hence, if there are 1,000 items, there could be up to 1,000,000 linear regressions to be learned, and so, up to 2,000,000 regressors. This approach may suffer from severeoverfitting[1]unless we select only the pairs of items for which several users have rated both items. A better alternative may be to learn a simpler predictor such asf(x)=x+b{\displaystyle f(x)=x+b}: experiments show that this simpler predictor (called Slope One) sometimes outperforms[1]linear regression while having half the number of regressors. This simplified approach also reduces storage requirements and latency. Item-based collaborative filtering is just one form ofcollaborative filtering. Other alternatives include user-based collaborative filtering where relationships between users are of interest, instead. However, item-based collaborative filtering is especially scalable with respect to the number of users. We are not always given ratings: when the users provide only binary data (the item was purchased or not), then Slope One and other rating-based algorithms do not apply[citation needed]. Examples of binary item-based collaborative filtering include Amazon'sitem-to-itempatented algorithm[12]which computes the cosine between binary vectors representing the purchases in a user-item matrix. Being arguably simpler than even Slope One, the Item-to-Item algorithm offers an interesting point of reference. Consider an example with three users and three items, in which the binary purchase vectors of the items are (1,0,0), (0,1,1), and (1,1,0). In this case, the cosine between items 1 and 2 is: (1,0,0)⋅(0,1,1)‖(1,0,0)‖‖(0,1,1)‖=0{\displaystyle {\frac {(1,0,0)\cdot (0,1,1)}{\Vert (1,0,0)\Vert \Vert (0,1,1)\Vert }}=0}, The cosine between items 1 and 3 is: (1,0,0)⋅(1,1,0)‖(1,0,0)‖‖(1,1,0)‖=12{\displaystyle {\frac {(1,0,0)\cdot (1,1,0)}{\Vert (1,0,0)\Vert \Vert (1,1,0)\Vert }}={\frac {1}{\sqrt {2}}}}, Whereas the cosine between items 2 and 3 is: (0,1,1)⋅(1,1,0)‖(0,1,1)‖‖(1,1,0)‖=12{\displaystyle {\frac {(0,1,1)\cdot (1,1,0)}{\Vert (0,1,1)\Vert \Vert (1,1,0)\Vert }}={\frac {1}{2}}}. Hence, a user visiting item 1 would receive item 3 as a recommendation, a user visiting item 2 would receive item 3 as a recommendation, and finally, a user visiting item 3 would receive item 1 (and then item 2) as a recommendation. The model uses a single parameter per pair of items (the cosine) to make the recommendation. 
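As a point of reference, the item-to-item cosine computation above can be reproduced in a few lines. The sketch below is illustrative only: it assumes the three binary item vectors (1,0,0), (0,1,1) and (1,1,0) from the example, arranged as the columns of a small user-item purchase matrix, and the names used are not taken from any library.

```python
import numpy as np

# Binary user-item purchase matrix (rows = users, columns = items).
# The columns reproduce the item vectors used in the cosines above:
# item 1 = (1,0,0), item 2 = (0,1,1), item 3 = (1,1,0).
purchases = np.array([
    [1, 0, 1],
    [0, 1, 1],
    [0, 1, 0],
])

def item_cosine(a, b):
    """Cosine between two binary item column vectors."""
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

n_items = purchases.shape[1]
for i in range(n_items):
    for j in range(i + 1, n_items):
        c = item_cosine(purchases[:, i], purchases[:, j])
        print(f"cos(item {i + 1}, item {j + 1}) = {c:.3f}")
# Expected output: cos(1,2) = 0.000, cos(1,3) = 0.707, cos(2,3) = 0.500
```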
Hence, if there arenitems, up ton(n-1)/2cosines need to be computed and stored. To drastically reduceoverfitting, improve performance and ease implementation, theSlope Onefamily of easily implemented Item-based Rating-Basedcollaborative filteringalgorithms was proposed. Essentially, instead of using linear regression from one item's ratings to another item's ratings (f(x)=ax+b{\displaystyle f(x)=ax+b}), it uses a simpler form of regression with a single free parameter (f(x)=x+b{\displaystyle f(x)=x+b}). The free parameter is then simply the average difference between the two items' ratings. It was shown to be much more accurate than linear regression in some instances,[1]and it takes half the storage or less. Example: For a more realistic example, consider the following table. In this case, the average difference in ratings between item B and A is (-2+1)/2 = -0.5. Hence, on average, item A is rated above item B by 0.5. Similarly, the average difference between item C and A is -3. Hence, if we attempt to predict the rating of Lucy for item A using her rating for item B, we get 2+0.5 = 2.5. Similarly, if we try to predict her rating for item A using her rating of item C, we get 5+3 = 8. If a user rated several items, the predictions are simply combined using a weighted average where a good choice for the weight is the number of users having rated both items. In the above example, both John and Mark rated items A and B, hence a weight of 2, and only John rated both items A and C, hence a weight of 1, as shown below. We would predict the following rating for Lucy on item A as: 2×2.5+1×82+1=133=4.33{\displaystyle {\frac {2\times 2.5+1\times 8}{2+1}}={\frac {13}{3}}=4.33} Hence, givennitems, to implement Slope One, all that is needed is to compute and store the average differences and the number of common ratings for each of the n² pairs of items. Suppose there arenitems,musers, andNratings. Computing the average rating differences for each pair of items requires up ton(n-1)/2units of storage, and up to mn² time steps. This computational bound may be pessimistic: if we assume that users have rated up toyitems, then it is possible to compute the differences in no more than n² + my² time steps. If a user has enteredxratings, predicting a single rating requiresxtime steps, and predicting all of his missing ratings requires up to (n-x)xtime steps. Updating the database when a user has already enteredxratings, and enters a new one, requiresxtime steps. It is possible to reduce storage requirements by partitioning the data (seePartition (database)) or by using sparse storage: pairs of items having no (or few) corating users can be omitted.
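A minimal illustrative implementation of Weighted Slope One as described above might look as follows. The ratings dictionary is an assumed reconstruction consistent with the worked example (John: A=5, B=3, C=2; Mark: A=3, B=4; Lucy: B=2, C=5); the function and variable names are illustrative only and not drawn from any library.

```python
from collections import defaultdict

# Hypothetical ratings matching the worked example above.
ratings = {
    "John": {"A": 5, "B": 3, "C": 2},
    "Mark": {"A": 3, "B": 4},
    "Lucy": {"B": 2, "C": 5},
}

def build_model(ratings):
    """Average difference diff[i][j] = mean(r_i - r_j) and co-rating counts."""
    diff = defaultdict(lambda: defaultdict(float))
    count = defaultdict(lambda: defaultdict(int))
    for user_ratings in ratings.values():
        for i, ri in user_ratings.items():
            for j, rj in user_ratings.items():
                if i == j:
                    continue
                diff[i][j] += ri - rj
                count[i][j] += 1
    for i in diff:
        for j in diff[i]:
            diff[i][j] /= count[i][j]
    return diff, count

def predict(user_ratings, target, diff, count):
    """Weighted Slope One prediction of the user's rating for `target`."""
    num, den = 0.0, 0
    for j, rj in user_ratings.items():
        if j == target or count[target].get(j, 0) == 0:
            continue
        w = count[target][j]                 # number of co-rating users
        num += (rj + diff[target][j]) * w    # predictor f(x) = x + b
        den += w
    return num / den if den else None

diff, count = build_model(ratings)
print(predict(ratings["Lucy"], "A", diff, count))  # 4.333..., as in the example
```

Only the average differences and the co-rating counts need to be stored, which matches the storage and update costs stated above.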
https://en.wikipedia.org/wiki/Slope_One
The termumbral calculushas two related but distinct meanings. Inmathematics, before the 1970s, umbral calculus referred to the surprising similarity between seemingly unrelatedpolynomial equationsand certain shadowy techniques used to prove them. These techniques were introduced in 1861 byJohn Blissardand are sometimes calledBlissard's symbolic method.[1]They are often attributed toÉdouard Lucas(orJames Joseph Sylvester), who used the technique extensively.[2]The use of shadowy techniques was put on a solid mathematical footing starting in the 1970s, and the resulting mathematical theory is also referred to as "umbral calculus". In the 1930s and 1940s,Eric Temple Bellattempted to set the umbral calculus on a rigorous footing, however his attempt in making this kind of argument logically rigorous was unsuccessful. ThecombinatorialistJohn Riordanin his bookCombinatorial Identitiespublished in the 1960s, used techniques of this sort extensively. In the 1970s,Steven Roman,Gian-Carlo Rota, and others developed the umbral calculus by means oflinear functionalson spaces of polynomials. Currently,umbral calculusrefers to the study ofSheffer sequences, including polynomial sequences ofbinomial typeandAppell sequences, but may encompass systematic correspondence techniques of thecalculus of finite differences. The method is a notational procedure used for deriving identities involving indexed sequences of numbers bypretending that the indices are exponents. Construed literally, it is absurd, and yet it is successful: identities derived via the umbral calculus can also be properly derived by more complicated methods that can be taken literally without logical difficulty. An example involves theBernoulli polynomials. Consider, for example, the ordinarybinomial expansion(which contains abinomial coefficient): and the remarkably similar-looking relation on theBernoulli polynomials: Compare also the ordinary derivative to a very similar-looking relation on the Bernoulli polynomials: These similarities allow one to constructumbralproofs, which on the surface cannot be correct, but seem to work anyway. Thus, for example, by pretending that the subscriptn−kis an exponent: and then differentiating, one gets the desired result: In the above, the variablebis an "umbra" (Latinforshadow). See alsoFaulhaber's formula. Indifferential calculus, theTaylor seriesof a function is an infinite sum of terms that are expressed in terms of the function'sderivativesat a single point. That is, arealorcomplex-valued functionf(x) that isanalyticata{\displaystyle a}can be written as: f(x)=∑n=0∞f(n)(a)n!(x−a)n{\displaystyle f(x)=\sum _{n=0}^{\infty }{\frac {f^{(n)}(a)}{n!}}(x-a)^{n}} Similar relationships were also observed in the theory offinite differences. The umbral version of the Taylor series is given by a similar expression involving thek-thforward differencesΔk[f]{\displaystyle \Delta ^{k}[f]}of apolynomialfunctionf, where is thePochhammer symbolused here for the falling sequential product. A similar relationship holds for the backward differences and rising factorial. This series is also known as theNewton seriesorNewton's forward difference expansion. The analogy to Taylor's expansion is utilized in thecalculus of finite differences. 
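The Bernoulli-polynomial identities referred to above can be written out explicitly. The following LaTeX block is a standard formulation of that example, reconstructed here rather than quoted from the original display equations.

```latex
% Ordinary binomial expansion:
(y+x)^n \;=\; \sum_{k=0}^{n} \binom{n}{k} y^{k} x^{n-k}
% The remarkably similar-looking relation on the Bernoulli polynomials:
B_n(y+x) \;=\; \sum_{k=0}^{n} \binom{n}{k} B_k(y)\, x^{n-k}
% Ordinary derivative versus the Bernoulli analogue:
\frac{d}{dx}\, x^n = n\, x^{n-1},
\qquad
\frac{d}{dx}\, B_n(x) = n\, B_{n-1}(x)
% Umbral proof: pretend the subscript n-k is an exponent of the umbra b,
% where b^k "means" the Bernoulli number B_k = B_k(0):
B_n(x) \;=\; \sum_{k=0}^{n} \binom{n}{k} b^{\,n-k} x^{k} \;=\; (b+x)^n,
\qquad\text{so}\qquad
B_n'(x) \;=\; n\,(b+x)^{n-1} \;=\; n\, B_{n-1}(x).
```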
Another combinatorialist,Gian-Carlo Rota, pointed out that the mystery vanishes if one considers thelinear functionalLon polynomials inzdefined by Then, using the definition of the Bernoulli polynomials and the definition and linearity ofL, one can write This enables one to replace occurrences ofBn(x){\displaystyle B_{n}(x)}byL((z+x)n){\displaystyle L((z+x)^{n})}, that is, move thenfrom a subscript to a superscript (the key operation of umbral calculus). For instance, we can now prove that: Rota later stated that much confusion resulted from the failure to distinguish between threeequivalence relationsthat occur frequently in this topic, all of which were denoted by "=". In a paper published in 1964, Rota used umbral methods to establish therecursionformula satisfied by theBell numbers, which enumeratepartitionsof finite sets. In the paper of Roman and Rota cited below, the umbral calculus is characterized as the study of theumbral algebra, defined as thealgebraof linear functionals on thevector spaceof polynomials in a variablex, with a productL1L2of linear functionals defined by Whenpolynomial sequencesreplace sequences of numbers as images ofynunder the linear mappingL, then the umbral method is seen to be an essential component of Rota's general theory of special polynomials, and that theory is theumbral calculusby some more modern definitions of the term.[3]A small sample of that theory can be found in the article onpolynomial sequences of binomial type. Another is the article titledSheffer sequence. Rota later applied umbral calculus extensively in his paper with Shen to study the various combinatorial properties of thecumulants.[4]
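Returning to the linear functional L introduced above, a plausible reconstruction of its definition and of the key computation (using the standard convention that L sends z^n to the Bernoulli number B_n(0)) is the following sketch; it is not quoted from the original display equations.

```latex
% Rota's linear functional on polynomials in z (standard reconstruction):
L(z^n) = B_n(0) \qquad \text{for all } n \ge 0.
% Linearity of L turns the defining sum for the Bernoulli polynomials
% into an umbral expression:
B_n(x) = \sum_{k=0}^{n} \binom{n}{k} B_k(0)\, x^{n-k}
       = \sum_{k=0}^{n} \binom{n}{k} L(z^{k})\, x^{n-k}
       = L\!\left( (z+x)^n \right).
% A typical application: the translation formula
B_n(x+y) = L\!\left( (z+x+y)^n \right)
         = \sum_{k=0}^{n} \binom{n}{k} L\!\left( (z+y)^k \right) x^{n-k}
         = \sum_{k=0}^{n} \binom{n}{k} B_k(y)\, x^{n-k}.
```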
https://en.wikipedia.org/wiki/Umbral_calculus
Inmachine learning, avariational autoencoder(VAE) is anartificial neural networkarchitecture introduced by Diederik P. Kingma andMax Welling.[1]It is part of the families ofprobabilistic graphical modelsandvariational Bayesian methods.[2] In addition to being seen as anautoencoderneural network architecture, variational autoencoders can also be studied within the mathematical formulation ofvariational Bayesian methods, connecting a neural encoder network to its decoder through a probabilisticlatent space(for example, as amultivariate Gaussian distribution) that corresponds to the parameters of a variational distribution. Thus, the encoder maps each point (such as an image) from a large complex dataset into a distribution within the latent space, rather than to a single point in that space. The decoder has the opposite function, which is to map from the latent space to the input space, again according to a distribution (although in practice, noise is rarely added during the decoding stage). By mapping a point to a distribution instead of a single point, the network can avoid overfitting the training data. Both networks are typically trained together with the usage of thereparameterization trick, although the variance of the noise model can be learned separately.[citation needed] Although this type of model was initially designed forunsupervised learning,[3][4]its effectiveness has been proven forsemi-supervised learning[5][6]andsupervised learning.[7] A variational autoencoder is a generative model with a prior and noise distribution respectively. Usually such models are trained using theexpectation-maximizationmeta-algorithm (e.g.probabilistic PCA, (spike & slab) sparse coding). Such a scheme optimizes a lower bound of the data likelihood, which is usually computationally intractable, and in doing so requires the discovery of q-distributions, or variationalposteriors. These q-distributions are normally parameterized for each individual data point in a separate optimization process. However, variational autoencoders use a neural network as an amortized approach to jointly optimize across data points. In that way, the same parameters are reused for multiple data points, which can result in massive memory savings. The first neural network takes as input the data points themselves, and outputs parameters for the variational distribution. As it maps from a known input space to the low-dimensional latent space, it is called the encoder. The decoder is the second neural network of this model. It is a function that maps from the latent space to the input space, e.g. as the means of the noise distribution. It is possible to use another neural network that maps to the variance, however this can be omitted for simplicity. In such a case, the variance can be optimized with gradient descent. To optimize this model, one needs to know two terms: the "reconstruction error", and theKullback–Leibler divergence(KL-D). Both terms are derived from the free energy expression of the probabilistic model, and therefore differ depending on the noise distribution and the assumed prior of the data, here referred to as p-distribution. For example, a standard VAE task such as IMAGENET is typically assumed to have a gaussianly distributed noise; however, tasks such as binarized MNIST require a Bernoulli noise. The KL-D from the free energy expression maximizes the probability mass of the q-distribution that overlaps with the p-distribution, which unfortunately can result in mode-seeking behaviour. 
The "reconstruction" term is the remainder of the free energy expression, and requires a sampling approximation to compute its expectation value.[8] More recent approaches replaceKullback–Leibler divergence(KL-D) withvarious statistical distances, see"Statistical distance VAE variants"below. From the point of view of probabilistic modeling, one wants to maximize the likelihood of the datax{\displaystyle x}by their chosen parameterized probability distributionpθ(x)=p(x|θ){\displaystyle p_{\theta }(x)=p(x|\theta )}. This distribution is usually chosen to be a GaussianN(x|μ,σ){\displaystyle N(x|\mu ,\sigma )}which is parameterized byμ{\displaystyle \mu }andσ{\displaystyle \sigma }respectively, and as a member of the exponential family it is easy to work with as a noise distribution. Simple distributions are easy enough to maximize, however distributions where a prior is assumed over the latentsz{\displaystyle z}results in intractable integrals. Let us findpθ(x){\displaystyle p_{\theta }(x)}viamarginalizingoverz{\displaystyle z}. wherepθ(x,z){\displaystyle p_{\theta }({x,z})}represents thejoint distributionunderpθ{\displaystyle p_{\theta }}of the observable datax{\displaystyle x}and its latent representation or encodingz{\displaystyle z}. According to thechain rule, the equation can be rewritten as In the vanilla variational autoencoder,z{\displaystyle z}is usually taken to be a finite-dimensional vector of real numbers, andpθ(x|z){\displaystyle p_{\theta }({x|z})}to be aGaussian distribution. Thenpθ(x){\displaystyle p_{\theta }(x)}is a mixture of Gaussian distributions. It is now possible to define the set of the relationships between the input data and its latent representation as Unfortunately, the computation ofpθ(z|x){\displaystyle p_{\theta }(z|x)}is expensive and in most cases intractable. To speed up the calculus to make it feasible, it is necessary to introduce a further function to approximate the posterior distribution as withϕ{\displaystyle \phi }defined as the set of real values that parametrizeq{\displaystyle q}. This is sometimes calledamortized inference, since by "investing" in finding a goodqϕ{\displaystyle q_{\phi }}, one can later inferz{\displaystyle z}fromx{\displaystyle x}quickly without doing any integrals. In this way, the problem is to find a good probabilistic autoencoder, in which the conditional likelihood distributionpθ(x|z){\displaystyle p_{\theta }(x|z)}is computed by theprobabilistic decoder, and the approximated posterior distributionqϕ(z|x){\displaystyle q_{\phi }(z|x)}is computed by theprobabilistic encoder. Parametrize the encoder asEϕ{\displaystyle E_{\phi }}, and the decoder asDθ{\displaystyle D_{\theta }}. Like manydeep learningapproaches that use gradient-based optimization, VAEs require a differentiable loss function to update the network weights throughbackpropagation. For variational autoencoders, the idea is to jointly optimize the generative model parametersθ{\displaystyle \theta }to reduce the reconstruction error between the input and the output, andϕ{\displaystyle \phi }to makeqϕ(z|x){\displaystyle q_{\phi }({z|x})}as close as possible topθ(z|x){\displaystyle p_{\theta }(z|x)}. As reconstruction loss,mean squared errorandcross entropyare often used. 
As distance loss between the two distributions the Kullback–Leibler divergenceDKL(qϕ(z|x)∥pθ(z|x)){\displaystyle D_{KL}(q_{\phi }({z|x})\parallel p_{\theta }({z|x}))}is a good choice to squeezeqϕ(z|x){\displaystyle q_{\phi }({z|x})}underpθ(z|x){\displaystyle p_{\theta }(z|x)}.[8][9] The distance loss just defined is expanded as Now define theevidence lower bound(ELBO):Lθ,ϕ(x):=Ez∼qϕ(⋅|x)[ln⁡pθ(x,z)qϕ(z|x)]=ln⁡pθ(x)−DKL(qϕ(⋅|x)∥pθ(⋅|x)){\displaystyle L_{\theta ,\phi }(x):=\mathbb {E} _{z\sim q_{\phi }(\cdot |x)}\left[\ln {\frac {p_{\theta }(x,z)}{q_{\phi }({z|x})}}\right]=\ln p_{\theta }(x)-D_{KL}(q_{\phi }({\cdot |x})\parallel p_{\theta }({\cdot |x}))}Maximizing the ELBOθ∗,ϕ∗=argmaxθ,ϕLθ,ϕ(x){\displaystyle \theta ^{*},\phi ^{*}={\underset {\theta ,\phi }{\operatorname {argmax} }}\,L_{\theta ,\phi }(x)}is equivalent to simultaneously maximizingln⁡pθ(x){\displaystyle \ln p_{\theta }(x)}and minimizingDKL(qϕ(z|x)∥pθ(z|x)){\displaystyle D_{KL}(q_{\phi }({z|x})\parallel p_{\theta }({z|x}))}. That is, maximizing the log-likelihood of the observed data, and minimizing the divergence of the approximate posteriorqϕ(⋅|x){\displaystyle q_{\phi }(\cdot |x)}from the exact posteriorpθ(⋅|x){\displaystyle p_{\theta }(\cdot |x)}. The form given is not very convenient for maximization, but the following, equivalent form, is:Lθ,ϕ(x)=Ez∼qϕ(⋅|x)[ln⁡pθ(x|z)]−DKL(qϕ(⋅|x)∥pθ(⋅)){\displaystyle L_{\theta ,\phi }(x)=\mathbb {E} _{z\sim q_{\phi }(\cdot |x)}\left[\ln p_{\theta }(x|z)\right]-D_{KL}(q_{\phi }({\cdot |x})\parallel p_{\theta }(\cdot ))}whereln⁡pθ(x|z){\displaystyle \ln p_{\theta }(x|z)}is implemented as−12‖x−Dθ(z)‖22{\displaystyle -{\frac {1}{2}}\|x-D_{\theta }(z)\|_{2}^{2}}, since that is, up to an additive constant, whatx|z∼N(Dθ(z),I){\displaystyle x|z\sim {\mathcal {N}}(D_{\theta }(z),I)}yields. That is, we model the distribution ofx{\displaystyle x}conditional onz{\displaystyle z}to be a Gaussian distribution centered onDθ(z){\displaystyle D_{\theta }(z)}. The distribution ofqϕ(z|x){\displaystyle q_{\phi }(z|x)}andpθ(z){\displaystyle p_{\theta }(z)}are often also chosen to be Gaussians asz|x∼N(Eϕ(x),σϕ(x)2I){\displaystyle z|x\sim {\mathcal {N}}(E_{\phi }(x),\sigma _{\phi }(x)^{2}I)}andz∼N(0,I){\displaystyle z\sim {\mathcal {N}}(0,I)}, with which we obtain by the formula forKL divergence of Gaussians:Lθ,ϕ(x)=−12Ez∼qϕ(⋅|x)[‖x−Dθ(z)‖22]−12(Nσϕ(x)2+‖Eϕ(x)‖22−2Nln⁡σϕ(x))+Const{\displaystyle L_{\theta ,\phi }(x)=-{\frac {1}{2}}\mathbb {E} _{z\sim q_{\phi }(\cdot |x)}\left[\|x-D_{\theta }(z)\|_{2}^{2}\right]-{\frac {1}{2}}\left(N\sigma _{\phi }(x)^{2}+\|E_{\phi }(x)\|_{2}^{2}-2N\ln \sigma _{\phi }(x)\right)+Const}HereN{\displaystyle N}is the dimension ofz{\displaystyle z}. For a more detailed derivation and more interpretations of ELBO and its maximization, seeits main page. To efficiently search forθ∗,ϕ∗=argmaxθ,ϕLθ,ϕ(x){\displaystyle \theta ^{*},\phi ^{*}={\underset {\theta ,\phi }{\operatorname {argmax} }}\,L_{\theta ,\phi }(x)}the typical method isgradient ascent. 
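As a sketch of how the Gaussian ELBO above translates into a loss computation, the following assumes a one-sample Monte Carlo estimate of the reconstruction term, toy linear maps standing in for the encoder and decoder networks, and a fixed isotropic encoder standard deviation; all of these are illustrative assumptions rather than details taken from the text.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for the encoder/decoder networks (hypothetical linear maps);
# a real VAE would use trained neural networks here.
W_enc = rng.standard_normal((2, 4)) * 0.1   # latent dim N = 2, data dim = 4
W_dec = rng.standard_normal((4, 2)) * 0.1
sigma = 0.5                                  # isotropic encoder std, as in the text

def neg_elbo(x):
    """One-sample Monte Carlo estimate of the loss -L_{theta,phi}(x),
    assuming z|x ~ N(W_enc x, sigma^2 I), prior z ~ N(0, I) and a
    unit-variance Gaussian decoder, matching the formula above."""
    mu = W_enc @ x
    n = mu.shape[0]
    eps = rng.standard_normal(n)
    z = mu + sigma * eps                     # reparameterized sample of z
    recon = 0.5 * np.sum((x - W_dec @ z) ** 2)
    # Closed-form KL( N(mu, sigma^2 I) || N(0, I) ), dropping the additive
    # constant absorbed into "Const" in the text:
    kl = 0.5 * (n * sigma**2 + np.sum(mu**2) - 2 * n * np.log(sigma))
    return recon + kl

x = rng.standard_normal(4)
print(neg_elbo(x))
```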
It is straightforward to find∇θEz∼qϕ(⋅|x)[ln⁡pθ(x,z)qϕ(z|x)]=Ez∼qϕ(⋅|x)[∇θln⁡pθ(x,z)qϕ(z|x)]{\displaystyle \nabla _{\theta }\mathbb {E} _{z\sim q_{\phi }(\cdot |x)}\left[\ln {\frac {p_{\theta }(x,z)}{q_{\phi }({z|x})}}\right]=\mathbb {E} _{z\sim q_{\phi }(\cdot |x)}\left[\nabla _{\theta }\ln {\frac {p_{\theta }(x,z)}{q_{\phi }({z|x})}}\right]}However,∇ϕEz∼qϕ(⋅|x)[ln⁡pθ(x,z)qϕ(z|x)]{\displaystyle \nabla _{\phi }\mathbb {E} _{z\sim q_{\phi }(\cdot |x)}\left[\ln {\frac {p_{\theta }(x,z)}{q_{\phi }({z|x})}}\right]}does not allow one to put the∇ϕ{\displaystyle \nabla _{\phi }}inside the expectation, sinceϕ{\displaystyle \phi }appears in the probability distribution itself. Thereparameterization trick(also known as stochastic backpropagation[10]) bypasses this difficulty.[8][11][12] The most important example is whenz∼qϕ(⋅|x){\displaystyle z\sim q_{\phi }(\cdot |x)}is normally distributed, asN(μϕ(x),Σϕ(x)){\displaystyle {\mathcal {N}}(\mu _{\phi }(x),\Sigma _{\phi }(x))}. This can be reparametrized by lettingε∼N(0,I){\displaystyle {\boldsymbol {\varepsilon }}\sim {\mathcal {N}}(0,{\boldsymbol {I}})}be a "standardrandom number generator", and constructz{\displaystyle z}asz=μϕ(x)+Lϕ(x)ϵ{\displaystyle z=\mu _{\phi }(x)+L_{\phi }(x)\epsilon }. Here,Lϕ(x){\displaystyle L_{\phi }(x)}is obtained by theCholesky decomposition:Σϕ(x)=Lϕ(x)Lϕ(x)T{\displaystyle \Sigma _{\phi }(x)=L_{\phi }(x)L_{\phi }(x)^{T}}Then we have∇ϕEz∼qϕ(⋅|x)[ln⁡pθ(x,z)qϕ(z|x)]=Eϵ[∇ϕln⁡pθ(x,μϕ(x)+Lϕ(x)ϵ)qϕ(μϕ(x)+Lϕ(x)ϵ|x)]{\displaystyle \nabla _{\phi }\mathbb {E} _{z\sim q_{\phi }(\cdot |x)}\left[\ln {\frac {p_{\theta }(x,z)}{q_{\phi }({z|x})}}\right]=\mathbb {E} _{\epsilon }\left[\nabla _{\phi }\ln {\frac {p_{\theta }(x,\mu _{\phi }(x)+L_{\phi }(x)\epsilon )}{q_{\phi }(\mu _{\phi }(x)+L_{\phi }(x)\epsilon |x)}}\right]}and so we obtained an unbiased estimator of the gradient, allowingstochastic gradient descent. Since we reparametrizedz{\displaystyle z}, we need to findqϕ(z|x){\displaystyle q_{\phi }(z|x)}. Letq0{\displaystyle q_{0}}be the probability density function forϵ{\displaystyle \epsilon }, then[clarification needed]ln⁡qϕ(z|x)=ln⁡q0(ϵ)−ln⁡|det(∂ϵz)|{\displaystyle \ln q_{\phi }(z|x)=\ln q_{0}(\epsilon )-\ln |\det(\partial _{\epsilon }z)|}where∂ϵz{\displaystyle \partial _{\epsilon }z}is the Jacobian matrix ofz{\displaystyle z}with respect toϵ{\displaystyle \epsilon }. Sincez=μϕ(x)+Lϕ(x)ϵ{\displaystyle z=\mu _{\phi }(x)+L_{\phi }(x)\epsilon }, this isln⁡qϕ(z|x)=−12‖ϵ‖2−ln⁡|detLϕ(x)|−n2ln⁡(2π){\displaystyle \ln q_{\phi }(z|x)=-{\frac {1}{2}}\|\epsilon \|^{2}-\ln |\det L_{\phi }(x)|-{\frac {n}{2}}\ln(2\pi )} Many variational autoencoders applications and extensions have been used to adapt the architecture to other domains and improve its performance. β{\displaystyle \beta }-VAE is an implementation with a weighted Kullback–Leibler divergence term to automatically discover and interpret factorised latent representations. With this implementation, it is possible to force manifold disentanglement forβ{\displaystyle \beta }values greater than one. This architecture can discover disentangled latent factors without supervision.[13][14] The conditional VAE (CVAE), inserts label information in the latent space to force a deterministic constrained representation of the learned data.[15] Some structures directly deal with the quality of the generated samples[16][17]or implement more than one latent space to further improve the representation learning. 
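The reparameterization trick can be checked on a toy example: for f(z) = ‖z‖², the gradient of the expectation of f(z) over z ~ N(μ, Σ) with respect to μ is exactly 2μ, and the estimator obtained by differentiating through z = μ + Lε matches it. The specific μ, Σ and sample count below are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)

# Estimate grad_mu E_{z ~ N(mu, Sigma)}[ ||z||^2 ] by writing z = mu + L eps,
# Sigma = L L^T, and differentiating inside the expectation.
# Exact answer: 2 * mu, since E||z||^2 = ||mu||^2 + tr(Sigma).
mu = np.array([1.0, -2.0])
Sigma = np.array([[1.0, 0.3],
                  [0.3, 0.5]])
L = np.linalg.cholesky(Sigma)

samples = 200_000
eps = rng.standard_normal((samples, 2))
z = mu + eps @ L.T                       # reparameterized samples of z
grad_estimate = (2 * z).mean(axis=0)     # grad_mu f(mu + L eps) = 2 (mu + L eps)

print(grad_estimate)   # approximately [ 2., -4.]
print(2 * mu)          # exact gradient
```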
Some architectures mix VAE andgenerative adversarial networksto obtain hybrid models.[18][19][20] It is not necessary to use gradients to update the encoder. In fact, the encoder is not necessary for the generative model.[21] After the initial work of Diederik P. Kingma andMax Welling,[22]several procedures were proposed to formulate in a more abstract way the operation of the VAE. In these approaches, the loss function is composed of two parts: a reconstruction term and a statistical distance between the prior and the distribution of the encoded data. We obtain the final formula for the loss:Lθ,ϕ=Ex∼Preal[‖x−Dθ(Eϕ(x))‖22]+d(μ(dz),Eϕ♯Preal)2{\displaystyle L_{\theta ,\phi }=\mathbb {E} _{x\sim \mathbb {P} ^{real}}\left[\|x-D_{\theta }(E_{\phi }(x))\|_{2}^{2}\right]+d\left(\mu (dz),E_{\phi }\sharp \mathbb {P} ^{real}\right)^{2}} The statistical distanced{\displaystyle d}requires special properties: for instance, it must admit a formula as an expectation, because the loss function needs to be optimized bystochastic optimization algorithms. Several distances can be chosen and this gave rise to several flavors of VAEs:
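One possible instantiation of the distance term d is the kernel maximum mean discrepancy (MMD); this is an assumption for illustration, since the text does not single out a particular distance, but it has the required property of being expressible as an expectation and hence estimable on minibatches.

```python
import numpy as np

rng = np.random.default_rng(2)

def rbf_kernel(a, b, bandwidth=1.0):
    """Gaussian RBF kernel matrix between two sample sets."""
    d2 = np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=-1)
    return np.exp(-d2 / (2.0 * bandwidth**2))

def mmd_squared(p_samples, q_samples, bandwidth=1.0):
    """Biased empirical estimate of the squared maximum mean discrepancy,
    one possible choice for d(mu(dz), E_phi # P_real) in the loss above."""
    kpp = rbf_kernel(p_samples, p_samples, bandwidth).mean()
    kqq = rbf_kernel(q_samples, q_samples, bandwidth).mean()
    kpq = rbf_kernel(p_samples, q_samples, bandwidth).mean()
    return kpp + kqq - 2.0 * kpq

# Samples from the prior mu(dz) = N(0, I) versus "encoded" data E_phi(x)
# (the encoded samples here are synthetic stand-ins):
prior_z = rng.standard_normal((500, 2))
encoded_z = rng.standard_normal((500, 2)) * 0.7 + 0.5
print(mmd_squared(prior_z, encoded_z))
```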
https://en.wikipedia.org/wiki/Variational_autoencoder
This article is a summary ofdifferentiation rules, that is, rules for computing thederivativeof afunctionincalculus. Unless otherwise stated, all functions are functions ofreal numbers(R{\textstyle \mathbb {R} }) that return real values, although, more generally, the formulas below apply wherever they arewell defined,[1][2]including the case ofcomplex numbers(C{\textstyle \mathbb {C} }).[3] For any value ofc{\textstyle c}, wherec∈R{\textstyle c\in \mathbb {R} }, iff(x){\textstyle f(x)}is the constant function given byf(x)=c{\textstyle f(x)=c}, thendfdx=0{\textstyle {\frac {df}{dx}}=0}.[4] Letc∈R{\textstyle c\in \mathbb {R} }andf(x)=c{\textstyle f(x)=c}. By the definition of the derivative:f′(x)=limh→0f(x+h)−f(x)h=limh→0(c)−(c)h=limh→00h=limh→00=0.{\displaystyle {\begin{aligned}f'(x)&=\lim _{h\to 0}{\frac {f(x+h)-f(x)}{h}}\\&=\lim _{h\to 0}{\frac {(c)-(c)}{h}}\\&=\lim _{h\to 0}{\frac {0}{h}}\\&=\lim _{h\to 0}0\\&=0.\end{aligned}}} This computation shows that the derivative of any constant function is 0. Thederivativeof the function at a point is the slope of the linetangentto the curve at the point. Theslopeof the constant function is 0, because thetangent lineto the constant function is horizontal and its angle is 0. In other words, the value of the constant function,y{\textstyle y}, will not change as the value ofx{\textstyle x}increases or decreases. For any functionsf{\textstyle f}andg{\textstyle g}and any real numbersa{\textstyle a}andb{\textstyle b}, the derivative of the functionh(x)=af(x)+bg(x){\textstyle h(x)=af(x)+bg(x)}with respect tox{\textstyle x}ish′(x)=af′(x)+bg′(x){\textstyle h'(x)=af'(x)+bg'(x)}. InLeibniz's notation, this formula is written as:d(af+bg)dx=adfdx+bdgdx.{\displaystyle {\frac {d(af+bg)}{dx}}=a{\frac {df}{dx}}+b{\frac {dg}{dx}}.} Special cases include: (af)′=af′,{\displaystyle (af)'=af',} (f+g)′=f′+g′,{\displaystyle (f+g)'=f'+g',} (f−g)′=f′−g′.{\displaystyle (f-g)'=f'-g'.} For the functionsf{\textstyle f}andg{\textstyle g}, the derivative of the functionh(x)=f(x)g(x){\textstyle h(x)=f(x)g(x)}with respect tox{\textstyle x}is:h′(x)=(fg)′(x)=f′(x)g(x)+f(x)g′(x).{\displaystyle h'(x)=(fg)'(x)=f'(x)g(x)+f(x)g'(x).} In Leibniz's notation, this formula is written:d(fg)dx=gdfdx+fdgdx.{\displaystyle {\frac {d(fg)}{dx}}=g{\frac {df}{dx}}+f{\frac {dg}{dx}}.} The derivative of the functionh(x)=f(g(x)){\textstyle h(x)=f(g(x))}is:h′(x)=f′(g(x))⋅g′(x).{\displaystyle h'(x)=f'(g(x))\cdot g'(x).} In Leibniz's notation, this formula is written as:ddxh(x)=ddzf(z)|z=g(x)⋅ddxg(x),{\displaystyle {\frac {d}{dx}}h(x)=\left.{\frac {d}{dz}}f(z)\right|_{z=g(x)}\cdot {\frac {d}{dx}}g(x),}often abridged to:dh(x)dx=df(g(x))dg(x)⋅dg(x)dx.{\displaystyle {\frac {dh(x)}{dx}}={\frac {df(g(x))}{dg(x)}}\cdot {\frac {dg(x)}{dx}}.} Focusing on the notion of maps, and the differential being a mapD{\textstyle {\text{D}}}, this formula is written in a more concise way as:[D(f∘g)]x=[Df]g(x)⋅[Dg]x.{\displaystyle [{\text{D}}(f\circ g)]_{x}=[{\text{D}}f]_{g(x)}\cdot [{\text{D}}g]_{x}.} If the functionf{\textstyle f}has aninverse functiong{\textstyle g}, meaning thatg(f(x))=x{\textstyle g(f(x))=x}andf(g(y))=y{\textstyle f(g(y))=y}, then:g′=1f′∘g.{\displaystyle g'={\frac {1}{f'\circ g}}.} In Leibniz notation, this formula is written as:dxdy=1dydx.{\displaystyle {\frac {dx}{dy}}={\frac {1}{\frac {dy}{dx}}}.} Iff(x)=xr{\textstyle f(x)=x^{r}}, for any real numberr≠0{\textstyle r\neq 0}, then:f′(x)=rxr−1.{\displaystyle f'(x)=rx^{r-1}.} Whenr=1{\textstyle r=1}, this formula becomes the special case that, 
Combining the power rule with the sum and constant multiple rules permits the computation of the derivative of any polynomial. The derivative of $h(x) = \frac{1}{f(x)}$ for any (nonvanishing) function $f$ is $$h'(x) = -\frac{f'(x)}{(f(x))^2},$$ wherever $f$ is nonzero. In Leibniz's notation, this formula is written: $$\frac{d\left(\frac{1}{f}\right)}{dx} = -\frac{1}{f^2}\frac{df}{dx}.$$ The reciprocal rule can be derived either from the quotient rule or from the combination of the power rule and the chain rule. If $f$ and $g$ are functions, then $$\left(\frac{f}{g}\right)' = \frac{f'g - g'f}{g^2},$$ wherever $g$ is nonzero. This can be derived from the product rule and the reciprocal rule. The elementary power rule generalizes considerably. The most general power rule is the functional power rule: for any functions $f$ and $g$, $$(f^g)' = \left(e^{g \ln f}\right)' = f^g\left(f'\frac{g}{f} + g'\ln f\right),$$ wherever both sides are well defined. Special cases: $\frac{d}{dx}\left(c^{ax}\right) = a c^{ax}\ln c$ for $c > 0$ (the equation is true for all $c$, but for $c < 0$ the derivative yields a complex number); $\frac{d}{dx}\left(e^{ax}\right) = a e^{ax}$; $\frac{d}{dx}\left(\log_c x\right) = \frac{1}{x \ln c}$ for $c > 1$ (also true for all $c$, but yielding a complex number if $c < 0$); $\frac{d}{dx}\left(\ln x\right) = \frac{1}{x}$ for $x > 0$; $\frac{d}{dx}\left(\ln|x|\right) = \frac{1}{x}$ for $x \neq 0$; $\frac{d}{dx}\left(W(x)\right) = \frac{1}{x + e^{W(x)}}$ for $x > -\frac{1}{e}$, where $W(x)$ is the Lambert W function; $\frac{d}{dx}\left(x^x\right) = x^x(1 + \ln x)$; $\frac{d}{dx}\left(f(x)^{g(x)}\right) = g(x)f(x)^{g(x)-1}\frac{df}{dx} + f(x)^{g(x)}\ln(f(x))\frac{dg}{dx}$, if $f(x) > 0$ and $\frac{df}{dx}$ and $\frac{dg}{dx}$ exist; and, for an iterated exponential, $$\frac{d}{dx}\left(f_1(x)^{f_2(x)^{(\dots)^{f_n(x)}}}\right) = \left[\sum_{k=1}^{n}\frac{\partial}{\partial x_k}\left(f_1(x_1)^{f_2(x_2)^{(\dots)^{f_n(x_n)}}}\right)\right]\Bigg|_{x_1 = x_2 = \dots = x_n = x},$$ if $f_{i<n}(x) > 0$ and $\frac{df_i}{dx}$ exists. The logarithmic derivative is another way of stating the rule for differentiating the logarithm of a function (using the chain rule): $(\ln f)' = \frac{f'}{f}$, wherever $f$ is positive.
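The functional power rule and two of its special cases can be checked the same way. Below is a small sketch, again assuming SymPy; the positivity assumptions keep the logarithms real, mirroring the conditions stated above.

```python
import sympy as sp

x, c, a = sp.symbols('x c a', positive=True)
f = sp.Function('f', positive=True)(x)
g = sp.Function('g')(x)

# Functional power rule: (f**g)' = f**g * (f'*g/f + g'*ln f)
lhs = sp.diff(f**g, x)
rhs = f**g * (sp.diff(f, x)*g/f + sp.diff(g, x)*sp.log(f))
assert sp.simplify(lhs - rhs) == 0

# Special cases: d/dx c**(a*x) = a*c**(a*x)*ln c  and  d/dx x**x = x**x*(1 + ln x)
assert sp.simplify(sp.diff(c**(a*x), x) - a*c**(a*x)*sp.log(c)) == 0
assert sp.simplify(sp.diff(x**x, x) - x**x*(1 + sp.log(x))) == 0
```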
Logarithmic differentiation is a technique which uses logarithms and their differentiation rules to simplify certain expressions before actually applying the derivative. Logarithms can be used to remove exponents, convert products into sums, and convert division into subtraction, each of which may lead to a simplified expression for taking derivatives. The derivatives in the table above are for when the range of the inverse secant is $[0, \pi]$ and when the range of the inverse cosecant is $\left[-\frac{\pi}{2}, \frac{\pi}{2}\right]$. It is common to additionally define an inverse tangent function with two arguments, $\arctan(y, x)$. Its value lies in the range $[-\pi, \pi]$ and reflects the quadrant of the point $(x, y)$. For the first and fourth quadrants (i.e., $x > 0$) one has $\arctan(y, x > 0) = \arctan\left(\frac{y}{x}\right)$. Its partial derivatives are $$\frac{\partial \arctan(y, x)}{\partial y} = \frac{x}{x^2 + y^2} \qquad\text{and}\qquad \frac{\partial \arctan(y, x)}{\partial x} = \frac{-y}{x^2 + y^2}.$$ For the gamma function $\Gamma(x) = \int_0^\infty t^{x-1}e^{-t}\,dt$, $$\Gamma'(x) = \int_0^\infty t^{x-1}e^{-t}\ln t\,dt = \Gamma(x)\left(\sum_{n=1}^{\infty}\left(\ln\left(1 + \frac{1}{n}\right) - \frac{1}{x+n}\right) - \frac{1}{x}\right) = \Gamma(x)\psi(x),$$ with $\psi(x)$ being the digamma function, expressed by the parenthesized expression to the right of $\Gamma(x)$ in the line above. For the Riemann zeta function $\zeta(x) = \sum_{n=1}^{\infty}\frac{1}{n^x}$, $$\zeta'(x) = -\sum_{n=1}^{\infty}\frac{\ln n}{n^x} = -\frac{\ln 2}{2^x} - \frac{\ln 3}{3^x} - \frac{\ln 4}{4^x} - \cdots = -\sum_{p\ \text{prime}}\frac{p^{-x}\ln p}{(1 - p^{-x})^2}\prod_{q\ \text{prime},\,q\neq p}\frac{1}{1 - q^{-x}}.$$ Suppose that it is required to differentiate with respect to $x$ the function $$F(x) = \int_{a(x)}^{b(x)} f(x, t)\,dt,$$ where the functions $f(x, t)$ and $\frac{\partial}{\partial x}f(x, t)$ are both continuous in both $t$ and $x$ in some region of the $(t, x)$ plane, including $a(x) \le t \le b(x)$ for $x_0 \le x \le x_1$, and the functions $a(x)$ and $b(x)$ are both continuous and both have continuous derivatives for $x_0 \le x \le x_1$. Then, for $x_0 \le x \le x_1$: $$F'(x) = f(x, b(x))\,b'(x) - f(x, a(x))\,a'(x) + \int_{a(x)}^{b(x)}\frac{\partial}{\partial x}f(x, t)\,dt.$$ This formula is the general form of the Leibniz integral rule and can be derived using the fundamental theorem of calculus.
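The Leibniz integral rule is easy to test numerically. The sketch below, assuming NumPy and SciPy are available, compares a central finite difference of F with the right-hand side of the rule for the arbitrary choice f(x, t) = sin(x·t), a(x) = x, b(x) = x².

```python
import numpy as np
from scipy.integrate import quad

def F(x):
    """F(x) = integral of sin(x*t) dt from a(x) = x to b(x) = x**2."""
    value, _ = quad(lambda t: np.sin(x*t), x, x**2)
    return value

def F_prime_leibniz(x):
    """Right-hand side of the Leibniz integral rule for this particular F."""
    boundary = np.sin(x*x**2)*(2*x) - np.sin(x*x)*1.0    # f(x,b)*b' - f(x,a)*a'
    integral, _ = quad(lambda t: t*np.cos(x*t), x, x**2)  # d/dx sin(x*t) = t*cos(x*t)
    return boundary + integral

x0, h = 1.3, 1e-6
central_difference = (F(x0 + h) - F(x0 - h)) / (2*h)
print(central_difference, F_prime_leibniz(x0))   # the two numbers should agree closely
```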
Some rules exist for computing the $n$-th derivative of functions, where $n$ is a positive integer, including the following. Faà di Bruno's formula: if $f$ and $g$ are $n$-times differentiable, then $$\frac{d^n}{dx^n}[f(g(x))] = n!\sum_{\{k_m\}} f^{(r)}(g(x)) \prod_{m=1}^{n}\frac{1}{k_m!}\left(\frac{g^{(m)}(x)}{m!}\right)^{k_m},$$ where $r = \sum_{m=1}^{n} k_m$ and the set $\{k_m\}$ consists of all non-negative integer solutions of the Diophantine equation $\sum_{m=1}^{n} m\,k_m = n$. The general Leibniz rule: if $f$ and $g$ are $n$-times differentiable, then $$\frac{d^n}{dx^n}[f(x)g(x)] = \sum_{k=0}^{n}\binom{n}{k}\frac{d^{n-k}}{dx^{n-k}}f(x)\,\frac{d^k}{dx^k}g(x).$$ These rules are given in many books, both on elementary and advanced calculus, in pure and applied mathematics. Those in this article (in addition to the above references) can be found in:
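The general Leibniz rule stated above can be verified symbolically for small n. A minimal sketch assuming SymPy, with n = 3 and unspecified functions f and g:

```python
import sympy as sp

x = sp.symbols('x')
f = sp.Function('f')(x)
g = sp.Function('g')(x)
n = 3

lhs = sp.diff(f*g, x, n)
rhs = sum(sp.binomial(n, k) * sp.diff(f, x, n - k) * sp.diff(g, x, k)
          for k in range(n + 1))
assert sp.simplify(lhs - rhs) == 0   # the two sides agree for n = 3
```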
https://en.wikipedia.org/wiki/Table_of_derivatives#Derivatives_of_trigonometric_functions
In psycholinguistics, language processing refers to the way humans use words to communicate ideas and feelings, and how such communications are processed and understood. Language processing is considered to be a uniquely human ability that is not produced with the same grammatical understanding or systematicity in even humans' closest primate relatives.[1] Throughout the 20th century the dominant model[2] for language processing in the brain was the Wernicke–Lichtheim–Geschwind model, which is based primarily on the analysis of brain-damaged patients. However, due to improvements in intra-cortical electrophysiological recordings of monkey and human brains, as well as non-invasive techniques such as fMRI, PET, MEG and EEG, an auditory pathway consisting of two parts[3][4] has been revealed and a two-streams model has been developed. In accordance with this model, there are two pathways that connect the auditory cortex to the frontal lobe, each pathway accounting for different linguistic roles. The auditory ventral stream pathway is responsible for sound recognition, and is accordingly known as the auditory 'what' pathway. The auditory dorsal stream in both humans and non-human primates is responsible for sound localization, and is accordingly known as the auditory 'where' pathway. In humans, this pathway (especially in the left hemisphere) is also responsible for speech production, speech repetition, lip-reading, and phonological working memory and long-term memory. In accordance with the 'from where to what' model of language evolution,[5][6] the reason the ADS is characterized by such a broad range of functions is that each indicates a different stage in language evolution. The division of the two streams first occurs in the auditory nerve, where the anterior branch enters the anterior cochlear nucleus in the brainstem, which gives rise to the auditory ventral stream. The posterior branch enters the dorsal and posteroventral cochlear nuclei to give rise to the auditory dorsal stream.[7]: 8 Language processing can also occur in relation to signed languages or written content. Throughout the 20th century, our knowledge of language processing in the brain was dominated by the Wernicke–Lichtheim–Geschwind model.[8][2][9] The Wernicke–Lichtheim–Geschwind model is primarily based on research conducted on brain-damaged individuals who were reported to possess a variety of language-related disorders. In accordance with this model, words are perceived via a specialized word-reception center (Wernicke's area) that is located in the left temporoparietal junction. This region then projects to a word-production center (Broca's area) that is located in the left inferior frontal gyrus. Because almost all language input was thought to funnel via Wernicke's area and all language output to funnel via Broca's area, it became extremely difficult to identify the basic properties of each region. This lack of clear definition for the contribution of Wernicke's and Broca's regions to human language rendered it extremely difficult to identify their homologues in other primates.[10] With the advent of fMRI and its application for lesion mapping, however, it was shown that this model is based on incorrect correlations between symptoms and lesions.[11][12][13][14][15][16][17] The refutation of such an influential and dominant model opened the door to new models of language processing in the brain. In the last two decades, significant advances occurred in our understanding of the neural processing of sounds in primates.
Initially by recording of neural activity in the auditory cortices of monkeys[18][19]and later elaborated via histological staining[20][21][22]andfMRIscanning studies,[23]3 auditory fields were identified in the primary auditory cortex, and 9 associative auditory fields were shown to surround them (Figure 1 top left). Anatomical tracing and lesion studies further indicated of a separation between the anterior and posterior auditory fields, with the anterior primary auditory fields (areas R-RT) projecting to the anterior associative auditory fields (areas AL-RTL), and the posterior primary auditory field (area A1) projecting to the posterior associative auditory fields (areas CL-CM).[20][24][25][26]Recently, evidence accumulated that indicates homology between the human and monkey auditory fields. In humans, histological staining studies revealed two separate auditory fields in the primary auditory region ofHeschl's gyrus,[27][28]and by mapping the tonotopic organization of the human primary auditory fields with high resolutionfMRIand comparing it to the tonotopic organization of the monkey primary auditory fields, homology was established between the human anterior primary auditory field and monkey area R (denoted in humans as area hR) and the human posterior primary auditory field and the monkey area A1 (denoted in humans as area hA1).[29][30][31][32][33]Intra-cortical recordings from the humanauditory cortexfurther demonstrated similar patterns of connectivity to the auditory cortex of the monkey. Recording from the surface of the auditory cortex (supra-temporal plane) reported that the anterior Heschl's gyrus (area hR) projects primarily to the middle-anteriorsuperior temporal gyrus(mSTG-aSTG) and the posterior Heschl's gyrus (area hA1) projects primarily to the posterior superior temporal gyrus (pSTG) and theplanum temporale(area PT; Figure 1 top right).[34][35]Consistent with connections from area hR to the aSTG and hA1 to the pSTG is anfMRIstudy of a patient with impaired sound recognition (auditory agnosia), who was shown with reduced bilateral activation in areas hR and aSTG but with spared activation in the mSTG-pSTG.[36]This connectivity pattern is also corroborated by a study that recorded activation from the lateral surface of the auditory cortex and reported of simultaneous non-overlapping activation clusters in the pSTG and mSTG-aSTG while listening to sounds.[37] Downstream to the auditory cortex, anatomical tracing studies in monkeys delineated projections from the anterior associative auditory fields (areas AL-RTL) to ventral prefrontal and premotor cortices in theinferior frontal gyrus(IFG)[38][39]andamygdala.[40]Cortical recording and functional imaging studies in macaque monkeys further elaborated on this processing stream by showing that acoustic information flows from the anterior auditory cortex to the temporal pole (TP) and then to the IFG.[41][42][43][44][45][46]This pathway is commonly referred to as the auditory ventral stream (AVS; Figure 1, bottom left-red arrows). 
In contrast to the anterior auditory fields, tracing studies reported that the posterior auditory fields (areas CL-CM) project primarily to dorsolateral prefrontal and premotor cortices (although some projections do terminate in the IFG.[47][39]Cortical recordings and anatomical tracing studies in monkeys further provided evidence that this processing stream flows from the posterior auditory fields to the frontal lobe via a relay station in the intra-parietal sulcus (IPS).[48][49][50][51][52][53]This pathway is commonly referred to as the auditory dorsal stream (ADS; Figure 1, bottom left-blue arrows). Comparing the white matter pathways involved in communication in humans and monkeys withdiffusion tensor imagingtechniques indicates of similar connections of the AVS and ADS in the two species (Monkey,[52]Human[54][55][56][57][58][59]). In humans, the pSTG was shown to project to the parietal lobe (sylvianparietal-temporal junction-inferior parietal lobule; Spt-IPL), and from there to dorsolateral prefrontal and premotor cortices (Figure 1, bottom right-blue arrows), and the aSTG was shown to project to the anterior temporal lobe (middle temporal gyrus-temporal pole; MTG-TP) and from there to the IFG (Figure 1 bottom right-red arrows). The auditory ventral stream (AVS) connects theauditory cortexwith themiddle temporal gyrusandtemporal pole, which in turn connects with theinferior frontal gyrus. This pathway is responsible for sound recognition, and is accordingly known as the auditory 'what' pathway. The functions of the AVS include the following. Accumulative converging evidence indicates that the AVS is involved in recognizing auditory objects. At the level of the primary auditory cortex, recordings from monkeys showed higher percentage of neurons selective for learned melodic sequences in area R than area A1,[60]and a study in humans demonstrated more selectivity for heard syllables in the anterior Heschl's gyrus (area hR) than posterior Heschl's gyrus (area hA1).[61]In downstream associative auditory fields, studies from both monkeys and humans reported that the border between the anterior and posterior auditory fields (Figure 1-area PC in the monkey and mSTG in the human) processes pitch attributes that are necessary for the recognition of auditory objects.[18]The anterior auditory fields of monkeys were also demonstrated with selectivity for con-specific vocalizations with intra-cortical recordings.[41][19][62]and functional imaging[63][42][43]OnefMRImonkey study further demonstrated a role of the aSTG in the recognition of individual voices.[42]The role of the human mSTG-aSTG in sound recognition was demonstrated via functional imaging studies that correlated activity in this region with isolation of auditory objects from background noise,[64][65]and with the recognition of spoken words,[66][67][68][69][70][71][72]voices,[73]melodies,[74][75]environmental sounds,[76][77][78]and non-speech communicative sounds.[79]Ameta-analysisoffMRIstudies[80]further demonstrated functional dissociation between the left mSTG and aSTG, with the former processing short speech units (phonemes) and the latter processing longer units (e.g., words, environmental sounds). 
A study that recorded neural activity directly from the left pSTG and aSTG reported that the aSTG, but not pSTG, was more active when the patient listened to speech in her native language than unfamiliar foreign language.[81]Consistently, electro stimulation to the aSTG of this patient resulted in impairedspeech perception[81](see also[82][83]for similar results). Intra-cortical recordings from the right and left aSTG further demonstrated that speech is processed laterally to music.[81]AnfMRIstudy of a patient with impaired sound recognition (auditory agnosia) due tobrainstemdamage was also shown with reduced activation in areas hR and aSTG of both hemispheres when hearing spoken words and environmental sounds.[36]Recordings from the anterior auditory cortex of monkeys while maintaining learned sounds in working memory,[46]and the debilitating effect of induced lesions to this region on working memory recall,[84][85][86]further implicate the AVS in maintaining the perceived auditory objects in working memory. In humans, area mSTG-aSTG was also reported active during rehearsal of heard syllables with MEG.[87]andfMRI[88]The latter study further demonstrated that working memory in the AVS is for the acoustic properties of spoken words and that it is independent to working memory in the ADS, which mediates inner speech. Working memory studies in monkeys also suggest that in monkeys, in contrast to humans, the AVS is the dominant working memory store.[89] In humans, downstream to the aSTG, the MTG and TP are thought to constitute thesemantic lexicon, which is a long-term memory repository of audio-visual representations that are interconnected on the basis of semantic relationships. (See also the reviews by[3][4]discussing this topic). The primary evidence for this role of the MTG-TP is that patients with damage to this region (e.g., patients withsemantic dementiaorherpes simplex virus encephalitis) are reported[90][91]with an impaired ability to describe visual and auditory objects and a tendency to commit semantic errors when naming objects (i.e.,semantic paraphasia). Semantic paraphasias were also expressed by aphasic patients with left MTG-TP damage[14][92]and were shown to occur in non-aphasic patients after electro-stimulation to this region.[93][83]or the underlying white matter pathway[94]Two meta-analyses of thefMRIliterature also reported that the anterior MTG and TP were consistently active during semantic analysis of speech and text;[66][95]and an intra-cortical recording study correlated neural discharge in the MTG with the comprehension of intelligible sentences.[96] In addition to extracting meaning from sounds, the MTG-TP region of the AVS appears to have a role in sentence comprehension, possibly by merging concepts together (e.g., merging the concept 'blue' and 'shirt' to create the concept of a 'blue shirt'). The role of the MTG in extracting meaning from sentences has been demonstrated in functional imaging studies reporting stronger activation in the anterior MTG when proper sentences are contrasted with lists of words, sentences in a foreign or nonsense language, scrambled sentences, sentences with semantic or syntactic violations and sentence-like sequences of environmental sounds.[97][98][99][100][101][102][103][104]OnefMRIstudy[105]in which participants were instructed to read a story further correlated activity in the anterior MTG with the amount of semantic and syntactic content each sentence contained. 
An EEG study[106]that contrasted cortical activity while reading sentences with and without syntactic violations in healthy participants and patients with MTG-TP damage, concluded that the MTG-TP in both hemispheres participate in the automatic (rule based) stage of syntactic analysis (ELAN component), and that the left MTG-TP is also involved in a later controlled stage of syntax analysis (P600 component). Patients with damage to the MTG-TP region have also been reported with impaired sentence comprehension.[14][107][108]See review[109]for more information on this topic. In contradiction to the Wernicke–Lichtheim–Geschwind model that implicates sound recognition to occur solely in the left hemisphere, studies that examined the properties of the right or left hemisphere in isolation via unilateral hemispheric anesthesia (i.e., the WADA procedure[110]) or intra-cortical recordings from each hemisphere[96]provided evidence thatsound recognitionis processed bilaterally. Moreover, a study that instructed patients with disconnected hemispheres (i.e.,split-brainpatients) to match spoken words to written words presented to the right or left hemifields, reported vocabulary in the right hemisphere that almost matches in size with the left hemisphere[111](The right hemisphere vocabulary was equivalent to the vocabulary of a healthy 11-years old child). This bilateral recognition of sounds is also consistent with the finding that unilateral lesion to the auditory cortex rarely results in deficit to auditory comprehension (i.e.,auditory agnosia), whereas a second lesion to the remaining hemisphere (which could occur years later) does.[112][113]Finally, as mentioned earlier, anfMRIscan of an auditory agnosia patient demonstrated bilateral reduced activation in the anterior auditory cortices,[36]and bilateral electro-stimulation to these regions in both hemispheres resulted with impaired speech recognition.[81] The auditory dorsal stream connects the auditory cortex with theparietal lobe, which in turn connects withinferior frontal gyrus. In both humans and non-human primates, the auditory dorsal stream is responsible for sound localization, and is accordingly known as the auditory 'where' pathway. In humans, this pathway (especially in the left hemisphere) is also responsible for speech production, speech repetition, lip-reading, and phonological working memory and long-term memory. Studies of present-day humans have demonstrated a role for the ADS in speech production, particularly in the vocal expression of the names of objects. For instance, in a series of studies in which sub-cortical fibers were directly stimulated[94]interference in the left pSTG andIPLresulted in errors during object-naming tasks, and interference in the left IFG resulted in speech arrest. 
Magnetic interference in the pSTG and IFG of healthy participants also produced speech errors and speech arrest, respectively[114][115]One study has also reported that electrical stimulation of the leftIPLcaused patients to believe that they had spoken when they had not and that IFG stimulation caused patients to unconsciously move their lips.[116]The contribution of the ADS to the process of articulating the names of objects could be dependent on the reception of afferents from the semantic lexicon of the AVS, as an intra-cortical recording study reported of activation in the posterior MTG prior to activation in the Spt-IPLregion when patients named objects in pictures[117]Intra-cortical electrical stimulation studies also reported that electrical interference to the posterior MTG was correlated with impaired object naming[118][82] Additionally, lesion studies of stroke patients have provided evidence supporting the dual stream model's role in speech production. Recent research using multivariate lesion/disconnectome symptom mapping has shown that lower scores in speech production tasks are associated with lesions and abnormalities in the left inferior parietal lobe and frontal lobe. These findings from stroke patients further support the involvement of the dorsal stream pathway in speech production, complementing the stimulation and interference studies in healthy participants.[119] Although sound perception is primarily ascribed with the AVS, the ADS appears associated with several aspects of speech perception. For instance, in a meta-analysis offMRIstudies[120](Turkeltaub and Coslett, 2010), in which the auditory perception ofphonemeswas contrasted with closely matching sounds, and the studies were rated for the required level of attention, the authors concluded that attention to phonemes correlates with strong activation in the pSTG-pSTS region. An intra-cortical recording study in which participants were instructed to identify syllables also correlated the hearing of each syllable with its own activation pattern in the pSTG.[121]The involvement of the ADS in both speech perception and production has been further illuminated in several pioneering functional imaging studies that contrasted speech perception with overt or covert speech production.[122][123][124]These studies demonstrated that the pSTS is active only during the perception of speech, whereas area Spt is active during both the perception and production of speech. The authors concluded that the pSTS projects to area Spt, which converts the auditory input into articulatory movements.[125][126]Similar results have been obtained in a study in which participants' temporal and parietal lobes were electrically stimulated. This study reported that electrically stimulating the pSTG region interferes with sentence comprehension and that stimulation of the IPL interferes with the ability to vocalize the names of objects.[83]The authors also reported that stimulation in area Spt and the inferior IPL induced interference during both object-naming and speech-comprehension tasks. 
The role of the ADS in speech repetition is also congruent with the results of the other functional imaging studies that have localized activation during speech repetition tasks to ADS regions.[127][128][129]An intra-cortical recording study that recorded activity throughout most of the temporal, parietal and frontal lobes also reported activation in the pSTG, Spt, IPL and IFG when speech repetition is contrasted with speech perception.[130]Neuropsychological studies have also found that individuals with speech repetition deficits but preserved auditory comprehension (i.e.,conduction aphasia) suffer from circumscribed damage to the Spt-IPL area[131][132][133][134][135][136][137]or damage to the projections that emanate from this area and target the frontal lobe[138][139][140][141]Studies have also reported a transientspeech repetitiondeficit in patients after direct intra-cortical electrical stimulation to this same region.[11][142][143]Insight into the purpose of speech repetition in the ADS is provided by longitudinal studies of children that correlated the learning of foreign vocabulary with the ability to repeat nonsense words.[144][145] In addition to repeating and producing speech, the ADS appears to have a role in monitoring the quality of the speech output. Neuroanatomical evidence suggests that the ADS is equipped with descending connections from the IFG to the pSTG that relay information about motor activity (i.e., corollary discharges) in the vocal apparatus (mouth, tongue, vocal folds). This feedback marks the sound perceived during speech production as self-produced and can be used to adjust the vocal apparatus to increase the similarity between the perceived and emitted calls. Evidence for descending connections from the IFG to the pSTG has been offered by a study that electrically stimulated the IFG during surgical operations and reported the spread of activation to the pSTG-pSTS-Spt region[146]A study[147]that compared the ability of aphasic patients with frontal, parietal or temporal lobe damage to quickly and repeatedly articulate a string of syllables reported that damage to the frontal lobe interfered with the articulation of both identical syllabic strings ("Bababa") and non-identical syllabic strings ("Badaga"), whereas patients with temporal or parietal lobe damage only exhibited impairment when articulating non-identical syllabic strings. Because the patients with temporal and parietal lobe damage were capable of repeating the syllabic string in the first task, their speech perception and production appears to be relatively preserved, and their deficit in the second task is therefore due to impaired monitoring. Demonstrating the role of the descending ADS connections in monitoring emitted calls, anfMRIstudy instructed participants to speak under normal conditions or when hearing a modified version of their own voice (delayed first formant) and reported that hearing a distorted version of one's own voice results in increased activation in the pSTG.[148]Further demonstrating that the ADS facilitates motor feedback during mimicry is an intra-cortical recording study that contrasted speech perception and repetition.[130]The authors reported that, in addition to activation in the IPL and IFG, speech repetition is characterized by stronger activation in the pSTG than during speech perception. Although sound perception is primarily ascribed with the AVS, the ADS appears associated with several aspects of speech perception. 
For instance, in a meta-analysis offMRIstudies[120]in which the auditory perception of phonemes was contrasted with closely matching sounds, and the studies were rated for the required level of attention, the authors concluded that attention to phonemes correlates with strong activation in the pSTG-pSTS region. An intra-cortical recording study in which participants were instructed to identify syllables also correlated the hearing of each syllable with its own activation pattern in the pSTG.[149]Consistent with the role of the ADS in discriminating phonemes,[120]studies have ascribed the integration of phonemes and their corresponding lip movements (i.e.,visemes) to the pSTS of the ADS. For example, anfMRIstudy[150]has correlated activation in the pSTS with theMcGurk illusion(in which hearing the syllable "ba" while seeing the viseme "ga" results in the perception of the syllable "da"). Another study has found that using magnetic stimulation to interfere with processing in this area further disrupts the McGurk illusion.[151]The association of the pSTS with the audio-visual integration of speech has also been demonstrated in a study that presented participants with pictures of faces and spoken words of varying quality. The study reported that the pSTS selects for the combined increase of the clarity of faces and spoken words.[152]Corroborating evidence has been provided by anfMRIstudy[153]that contrasted the perception of audio-visual speech with audio-visual non-speech (pictures and sounds of tools). This study reported the detection of speech-selective compartments in the pSTS. In addition, anfMRIstudy[154]that contrasted congruent audio-visual speech with incongruent speech (pictures of still faces) reported pSTS activation. For a review presenting additional converging evidence regarding the role of the pSTS and ADS in phoneme-viseme integration see.[155] Empirical research has demonstrated that visual lip movements enhance speech processing along the auditory dorsal stream, particularly in noisy conditions. Recent studies[156]discovered that the dorsal stream regions, including frontal speech motor areas and supramarginal gyrus, show improved neural representations of speech sounds when visual lip movements are available. A growing body of evidence indicates that humans, in addition to having a long-term store for word meanings located in the MTG-TP of the AVS (i.e., the semantic lexicon), also have a long-term store for the names of objects located in the Spt-IPL region of the ADS (i.e., the phonological lexicon). For example, a study[157][158]examining patients with damage to the AVS (MTG damage) or damage to the ADS (IPL damage) reported that MTG damage results in individuals incorrectly identifying objects (e.g., calling a "goat" a "sheep," an example ofsemantic paraphasia). Conversely, IPL damage results in individuals correctly identifying the object but incorrectly pronouncing its name (e.g., saying "gof" instead of "goat," an example ofphonemic paraphasia). 
Semantic paraphasia errors have also been reported in patients receiving intra-cortical electrical stimulation of the AVS (MTG), and phonemic paraphasia errors have been reported in patients whose ADS (pSTG, Spt, and IPL) received intra-cortical electrical stimulation.[83][159][94]Further supporting the role of the ADS in object naming is an MEG study that localized activity in the IPL during the learning and during the recall of object names.[160]A study that induced magnetic interference in participants' IPL while they answered questions about an object reported that the participants were capable of answering questions regarding the object's characteristics or perceptual attributes but were impaired when asked whether the word contained two or three syllables.[161]An MEG study has also correlated recovery fromanomia(a disorder characterized by an impaired ability to name objects) with changes in IPL activation.[162]Further supporting the role of the IPL in encoding the sounds of words are studies reporting that, compared to monolinguals, bilinguals have greater cortical density in the IPL but not the MTG.[163][164]Because evidence shows that, inbilinguals, different phonological representations of the same word share the same semantic representation,[165]this increase in density in the IPL verifies the existence of the phonological lexicon: the semantic lexicon of bilinguals is expected to be similar in size to the semantic lexicon of monolinguals, whereas their phonological lexicon should be twice the size. Consistent with this finding, cortical density in the IPL of monolinguals also correlates with vocabulary size.[166][167]Notably, the functional dissociation of the AVS and ADS in object-naming tasks is supported by cumulative evidence from reading research showing that semantic errors are correlated with MTG impairment and phonemic errors with IPL impairment. Based on these associations, the semantic analysis of text has been linked to the inferior-temporal gyrus and MTG, and the phonological analysis of text has been linked to the pSTG-Spt- IPL[168][169][170] Working memoryis often treated as the temporary activation of the representations stored in long-term memory that are used for speech (phonological representations). This sharing of resources between working memory and speech is evident by the finding[171][172]that speaking during rehearsal results in a significant reduction in the number of items that can be recalled from working memory (articulatory suppression). 
The involvement of the phonological lexicon in working memory is also evidenced by the tendency of individuals to make more errors when recalling words from a recently learned list of phonologically similar words than from a list of phonologically dissimilar words (thephonological similarity effect).[171]Studies have also found that speech errors committed during reading are remarkably similar to speech errors made during the recall of recently learned, phonologically similar words from working memory.[173]Patients with IPL damage have also been observed to exhibit both speech production errors and impaired working memory[174][175][176][177]Finally, the view that verbal working memory is the result of temporarily activating phonological representations in the ADS is compatible with recent models describing working memory as the combination of maintaining representations in the mechanism of attention in parallel to temporarily activating representations in long-term memory.[172][178][179][180]It has been argued that the role of the ADS in the rehearsal of lists of words is the reason this pathway is active during sentence comprehension[181]For a review of the role of the ADS in working memory, see.[182] Studies have shown that performance on phonological working memory tasks correlates with properties of the left dorsal branch of the arcuate fasciculus (AF), which connects posterior temporal language regions with attention-regulating areas in the middle frontal gyrus. The arcuate fasciculus is a white matter pathway in the brain which contains two branches: a ventral branch connecting Wernicke's area with Broca's area and a dorsal branch connecting the posterior temporal region with the middle frontal gyrus. This dorsal branch appears to be particularly important for phonological working memory processes.[183] Language-processing research informstheories of language. The primary theoretical question is whether linguistic structures follow from the brain structures or vice versa. Externalist models, such asFerdinand de Saussure'sstructuralism, argue that language as a social phenomenon is external to the brain. The individual receives the linguistic system from the outside, and the given language shapes the individual's brain.[184] This idea is opposed by internalist models includingNoam Chomsky'stransformational generative grammar,George Lakoff'sCognitive Linguistics, andJohn A. Hawkins'sefficiency hypothesis. 
According to Chomsky, language is acquired from aninnate brain structureindependently of meaning.[185]Lakoff argues that language emerges from thesensory systems.[186]Hawkins hypothesizes thatcross-linguisticallyprevalent patterns are based on the brain's natural processing preferences.[187] Additionally, models inspired byRichard Dawkins'smemetics, includingConstruction GrammarandUsage-Based Linguistics, advocate a two-way model arguing that the brain shapes language, and language shapes the brain.[188][189] Evidence fromneuroimagingstudies points towards the externalist position.ERPstudies suggest that language processing is based on the interaction of syntax and semantics, and the research does not support innate grammatical structures.[190][191]MRIstudies suggest that the structural characteristics of the child'sfirst languageshapes the processingconnectomeof the brain.[192]Processing research has failed to find support for the inverse idea that syntactic structures reflect the brain's natural processing preferences cross-linguistically.[193] The auditory dorsal stream also has non-language related functions, such as sound localization[194][195][196][197][198]and guidance of eye movements.[199][200]Recent studies also indicate a role of the ADS in localization of family/tribe members, as a study[201]that recorded from the cortex of an epileptic patient reported that the pSTG, but not aSTG, is selective for the presence of new speakers. AnfMRI[202]study of fetuses at their third trimester also demonstrated that area Spt is more selective to female speech than pure tones, and a sub-section of Spt is selective to the speech of their mother in contrast to unfamiliar female voices. It is presently unknown why so many functions are ascribed to the human ADS. An attempt to unify these functions under a single framework was conducted in the 'From where to what' model of language evolution[203][204]In accordance with this model, each function of the ADS indicates of a different intermediate phase in the evolution of language. The roles of sound localization and integration of sound location with voices and auditory objects is interpreted as evidence that the origin of speech is the exchange of contact calls (calls used to report location in cases of separation) between mothers and offspring. The role of the ADS in the perception and production of intonations is interpreted as evidence that speech began by modifying the contact calls with intonations, possibly for distinguishing alarm contact calls from safe contact calls. The role of the ADS in encoding the names of objects (phonological long-term memory) is interpreted as evidence of gradual transition from modifying calls with intonations to complete vocal control. The role of the ADS in the integration of lip movements with phonemes and in speech repetition is interpreted as evidence that spoken words were learned by infants mimicking their parents' vocalizations, initially by imitating their lip movements. The role of the ADS in phonological working memory is interpreted as evidence that the words learned through mimicry remained active in the ADS even when not spoken. This resulted with individuals capable of rehearsing a list of vocalizations, which enabled the production of words with several syllables. Further developments in the ADS enabled the rehearsal of lists of words, which provided the infra-structure for communicating with sentences. 
Neuroscientific research has provided a scientific understanding of how sign language is processed in the brain. There are over 135 distinct sign languages around the world, with different accents formed in separate areas of a country.[205] Using lesion analyses and neuroimaging, neuroscientists have found that, whether the language is spoken or signed, the human brain processes it in a broadly similar manner with respect to which areas of the brain are used.[205] Lesion analyses are used to examine the consequences of damage to specific brain regions involved in language, while neuroimaging explores the regions that are engaged in the processing of language.[205] It had previously been hypothesized that damage to Broca's area or Wernicke's area does not affect the perception of sign language; however, this is not the case. Studies have shown that damage to these areas produces results in sign language similar to those in spoken language, with sign errors present and/or repeated.[205] Both types of language are affected by damage to the left hemisphere of the brain rather than the right hemisphere, which more usually deals with the arts. There are clear patterns in how language is used and processed: in sign language, Broca's area is activated, while the processing of sign language employs Wernicke's area, similarly to spoken language.[205] There have been other hypotheses about the lateralization of the two hemispheres. Specifically, the right hemisphere was thought to contribute to the overall, global communication of a language, whereas the left hemisphere would be dominant in generating the language locally.[206] Through research on aphasia, signers with right-hemisphere damage (RHD) were found to have problems maintaining the spatial component of their signs, confusing similar signs produced at the different locations necessary to communicate properly with another person.[206] Signers with left-hemisphere damage (LHD), on the other hand, showed results similar to those of hearing patients. Furthermore, other studies have emphasized that sign language is represented bilaterally, although further research is needed to reach a conclusion.[206] There is a comparatively small body of research on the neurology of reading and writing.[207] Most of the studies performed deal with reading rather than writing or spelling, and the majority of both kinds focus solely on the English language.[208] English orthography is less transparent than that of other languages using a Latin script.[207] Another difficulty is that some studies focus on spelling words of English and omit the few logographic characters found in the script.[207] In terms of spelling, English words can be divided into three categories: regular, irregular, and "novel words" or "nonwords". Regular words are those in which there is a regular, one-to-one correspondence between grapheme and phoneme in spelling. Irregular words are those in which no such correspondence exists. Nonwords are those that exhibit the expected orthography of regular words but do not carry meaning, such as nonce words and onomatopoeia.[207] An issue in the cognitive and neurological study of reading and spelling in English is whether a single-route or dual-route model best describes how literate speakers are able to read and write all three categories of English words according to accepted standards of orthographic correctness. Single-route models posit that lexical memory is used to store all spellings of words for retrieval in a single process.
Dual-route models posit that lexical memory is employed to process irregular and high-frequency regular words, while low-frequency regular words and nonwords are processed using a sub-lexical set of phonological rules.[207] The single-route model for reading has found support in computer modelling studies, which suggest that readers identify words by their orthographic similarities to phonologically alike words.[207]However, cognitive and lesion studies lean towards the dual-route model. Cognitive spelling studies on children and adults suggest that spellers employ phonological rules in spelling regular words and nonwords, while lexical memory is accessed to spell irregular words and high-frequency words of all types.[207]Similarly, lesion studies indicate that lexical memory is used to store irregular words and certain regular words, while phonological rules are used to spell nonwords.[207] More recently, neuroimaging studies usingpositron emission tomographyandfMRIhave suggested a balanced model in which the reading of all word types begins in thevisual word form area, but subsequently branches off into different routes depending upon whether or not access to lexical memory or semantic information is needed (which would be expected with irregular words under a dual-route model).[207]A 2007fMRIstudy found that subjects asked to produce regular words in a spelling task exhibited greater activation in the left posteriorSTG, an area used for phonological processing, while the spelling of irregular words produced greater activation of areas used for lexical memory and semantic processing, such as the leftIFGand leftSMGand both hemispheres of theMTG.[207]Spelling nonwords was found to access members of both pathways, such as the left STG and bilateral MTG andITG.[207]Significantly, it was found that spelling induces activation in areas such as the leftfusiform gyrusand left SMG that are also important in reading, suggesting that a similar pathway is used for both reading and writing.[207] Far less information exists on the cognition and neurology of non-alphabetic and non-English scripts. Every language has amorphologicaland aphonologicalcomponent, either of which can be recorded by awriting system. Scripts recording words and morphemes are consideredlogographic, while those recording phonological segments, such assyllabariesandalphabets, are phonographic.[208]Most systems combine the two and have both logographic and phonographic characters.[208] In terms of complexity, writing systems can be characterized as "transparent" or "opaque" and as "shallow" or "deep". A "transparent" system exhibits an obvious correspondence between grapheme and sound, while in an "opaque" system this relationship is less obvious. The terms "shallow" and "deep" refer to the extent that a system's orthography represents morphemes as opposed to phonological segments.[208]Systems that record larger morphosyntactic or phonological segments, such as logographic systems and syllabaries put greater demand on the memory of users.[208]It would thus be expected that an opaque or deep writing system would put greater demand on areas of the brain used for lexical memory than would a system with transparent or shallow orthography.
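To make the single-route/dual-route contrast concrete, here is a toy sketch in Python. It is not a model from the literature cited above; the word list, the grapheme-to-phoneme rules and every function name are invented purely to illustrate the dual-route idea: a lexical look-up handles stored (e.g., irregular) words, and a sub-lexical rule route handles regular words and nonwords.

```python
LEXICON = {              # lexical route: stored pronunciations (irregular / familiar words)
    "yacht": "/jɒt/",
    "colonel": "/ˈkɜːnəl/",
    "the": "/ðə/",
}

GP_RULES = {             # sub-lexical route: crude grapheme-to-phoneme correspondences
    "sh": "ʃ", "ch": "tʃ", "ck": "k",
    "a": "æ", "e": "ɛ", "i": "ɪ", "o": "ɒ", "u": "ʌ",
    "b": "b", "d": "d", "f": "f", "g": "g", "k": "k", "l": "l",
    "m": "m", "n": "n", "p": "p", "r": "r", "s": "s", "t": "t",
}

def sublexical_route(word: str) -> str:
    """Assemble a pronunciation from grapheme-to-phoneme rules, longest grapheme first."""
    phonemes, i = [], 0
    while i < len(word):
        for size in (2, 1):                  # try digraphs before single letters
            chunk = word[i:i + size]
            if chunk in GP_RULES:
                phonemes.append(GP_RULES[chunk])
                i += size
                break
        else:
            i += 1                           # silently skip letters with no rule
    return "/" + "".join(phonemes) + "/"

def read_aloud(word: str) -> str:
    """Dual-route reading: prefer the lexical look-up, fall back to the rules."""
    return LEXICON.get(word, sublexical_route(word))

print(read_aloud("colonel"))   # irregular word, read via the lexical route
print(read_aloud("ship"))      # regular word, assembled by the rule route
print(read_aloud("blick"))     # nonword, readable only by the sub-lexical route
```

The fallback order mirrors the dual-route claim that a familiar irregular word is read from lexical memory, whereas a nonword can only be assembled by sub-lexical rules.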
https://en.wikipedia.org/wiki/Language_processing
Laboratory informatics is the specialized application of information technology aimed at optimizing and extending laboratory operations.[1] It encompasses data acquisition (e.g. through sensors and hardware[2] or voice[3][4][5]), instrument interfacing, laboratory networking, data processing, specialized data management systems (such as a chromatography data system), the laboratory information management system, scientific data management (including data mining and data warehousing), and knowledge management (including the use of an electronic lab notebook). It has become more prevalent with the rise of other "informatics" disciplines such as bioinformatics, cheminformatics and health informatics. Several graduate programs are focused on some form of laboratory informatics, often with a clinical emphasis.[6] A closely related (some consider subsuming) field is laboratory automation. In the context of public health laboratories, the Association of Public Health Laboratories has identified 19 areas for self-assessment of laboratory informatics in their Laboratories Efficiencies Initiative.[7] These include the following Capability Areas.
https://en.wikipedia.org/wiki/Laboratory_informatics
Linear partial information (LPI) is a method of making decisions based on insufficient or fuzzy information. LPI was introduced in 1970 by the Polish–Swiss mathematician Edward Kofler (1911–2007) to simplify decision processes. Compared to other methods, LPI-fuzziness is algorithmically simple and, particularly in decision making, more practically oriented. Instead of an indicator function, the decision maker linearizes any fuzziness by establishing linear restrictions on fuzzy probability distributions or normalized weights. In the LPI procedure the decision maker linearizes any fuzziness instead of applying a membership function; this can be done by establishing stochastic and non-stochastic LPI relations. A mixed stochastic and non-stochastic fuzzification is often the basis for the LPI procedure. Using LPI methods, any fuzziness in any decision situation can be treated on the basis of linear fuzzy logic. Any stochastic partial information SPI(p) that can be considered as a solution of a linear inequality system is called linear partial information LPI(p) about the probability p. It can be considered as an LPI-fuzzification of the probability p corresponding to the concepts of linear fuzzy logic. Despite the fuzziness of the information, it is often necessary to choose the optimal, most cautious strategy, for example in economic planning, in conflict situations or in everyday decisions. This is impossible without the concept of fuzzy equilibrium. The concept of fuzzy stability is considered as an extension over a time interval, taking into account the corresponding stability area of the decision maker. The more complex the model, the softer a choice has to be considered. The idea of fuzzy equilibrium is based on optimization principles; therefore, the MaxEmin, MaxGmin and PDP stability criteria have to be analyzed, and violating these principles often leads to wrong predictions and decisions. Considering a given LPI decision model as a convolution of the corresponding fuzzy states or a disturbance set, the fuzzy equilibrium strategy remains the most cautious one, despite the presence of fuzziness. Any deviation from this strategy can cause a loss for the decision maker.
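As a concrete illustration of choosing the most cautious strategy under an LPI, the following sketch computes a MaxEmin decision with SciPy's linear-programming routine. The payoff matrix and the linear restriction p1 ≥ p2 ≥ p3 on the unknown probabilities are invented for this example and are not taken from Kofler's work.

```python
import numpy as np
from scipy.optimize import linprog

# Payoffs: rows = strategies of the decision maker, columns = states of nature.
payoff = np.array([
    [3.0, 1.0, 4.0],
    [2.5, 2.0, 2.2],
    [0.0, 5.0, 1.0],
])

# LPI about p = (p1, p2, p3): the ordinal restriction p1 >= p2 >= p3,
# together with p >= 0 and p1 + p2 + p3 = 1, defines the feasible polytope.
A_ub = np.array([
    [-1.0,  1.0,  0.0],    # p2 - p1 <= 0
    [ 0.0, -1.0,  1.0],    # p3 - p2 <= 0
])
b_ub = np.zeros(2)
A_eq = np.ones((1, 3))
b_eq = np.array([1.0])

def worst_case_value(row):
    """Minimum expected payoff of one strategy over all p admitted by the LPI."""
    res = linprog(c=row, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, 1)] * 3, method="highs")
    return res.fun

guaranteed = [worst_case_value(row) for row in payoff]
best = int(np.argmax(guaranteed))
print(guaranteed)                        # worst-case expected payoff of each strategy
print("MaxEmin strategy:", best)         # the most cautious choice under this LPI
```

The strategy with the largest guaranteed (worst-case) expected payoff is the MaxEmin choice; here the steady middle strategy wins even though other strategies have higher payoffs in favourable states.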
https://en.wikipedia.org/wiki/Linear_partial_information
Electronic design automation(EDA), also referred to aselectronic computer-aided design(ECAD),[1]is a category ofsoftware toolsfor designingelectronic systemssuch asintegrated circuitsandprinted circuit boards. The tools work together in adesign flowthat chip designers use to design and analyze entiresemiconductorchips. Since a modernsemiconductorchip can have billions of components, EDA tools are essential for their design; this article in particular describes EDA specifically with respect tointegrated circuits(ICs). The earliest electronic design automation is attributed toIBMwith the documentation of its700 seriescomputers in the 1950s.[2] Prior to the development of EDA,integrated circuitswere designed by hand and manually laid out.[3]Some advanced shops used geometric software to generate tapes for aGerberphotoplotter, responsible for generating a monochromatic exposure image, but even those copied digital recordings of mechanically drawn components. The process was fundamentally graphic, with the translation from electronics to graphics done manually; the best-known company from this era wasCalma, whoseGDSIIformat is still in use today. By the mid-1970s, developers started to automate circuit design in addition to drafting and the firstplacement and routingtools were developed; as this occurred, the proceedings of theDesign Automation Conferencecatalogued the large majority of the developments of the time.[3] The next era began following the publication of "Introduction toVLSISystems" byCarver MeadandLynn Conwayin 1980,[4]and is considered the standard textbook for chip design.[5]The result was an increase in the complexity of the chips that could be designed, with improved access todesign verificationtools that usedlogic simulation. The chips were easier to lay out and more likely to function correctly, since their designs could be simulated more thoroughly prior to construction. Although the languages and tools have evolved, this general approach of specifying the desired behavior in a textual programming language and letting the tools derive the detailed physical design remains the basis of digital IC design today. The earliest EDA tools were produced academically. One of the most famous was the "Berkeley VLSI Tools Tarball", a set ofUNIXutilities used to design early VLSI systems. Widely used were theEspresso heuristic logic minimizer,[6]responsible for circuit complexity reductions andMagic,[7]a computer-aided design platform. Another crucial development was the formation ofMOSIS,[8]a consortium of universities and fabricators that developed an inexpensive way to train student chip designers by producing real integrated circuits. The basic concept was to use reliable, low-cost, relatively low-technology IC processes and pack a large number of projects perwafer, with several copies of chips from each project remaining preserved. Cooperating fabricators either donated the processed wafers or sold them at cost, as they saw the program as helpful to their own long-term growth. 1981 marked the beginning of EDA as an industry. For many years, the larger electronic companies, such asHewlett-Packard,TektronixandIntel, had pursued EDA internally, with managers and developers beginning to spin out of these companies to concentrate on EDA as a business.Daisy Systems,Mentor GraphicsandValid Logic Systemswere all founded around this time and collectively referred to as DMV. In 1981, theU.S. Department of Defenseadditionally began funding ofVHDLas a hardware description language. 
Within a few years, there were many companies specializing in EDA, each with a slightly different emphasis. The first trade show for EDA was held at theDesign Automation Conferencein 1984 and in 1986,Verilog, another popular high-level design language, was first introduced as a hardware description language byGateway Design Automation. Simulators quickly followed these introductions, permitting direct simulation of chip designs and executable specifications. Within several years, back-ends were developed to performlogic synthesis. Current digital flows are extremely modular, with front ends producing standardized design descriptions that compile into invocations of units similar to cells without regard to their individual technology. Cells implement logic or other electronic functions via the utilisation of a particular integrated circuit technology. Fabricators generally provide libraries of components for their production processes, with simulation models that fit standard simulation tools. Most analog circuits are still designed in a manual fashion, requiring specialist knowledge that is unique to analog design (such as matching concepts).[9]Hence, analog EDA tools are far less modular, since many more functions are required, they interact more strongly and the components are, in general, less ideal. EDA for electronics has rapidly increased in importance with the continuous scaling ofsemiconductortechnology.[10]Some users arefoundryoperators, who operate thesemiconductor fabricationfacilities ("fabs") and additional individuals responsible for utilising the technology design-service companies who use EDA software to evaluate an incoming design for manufacturing readiness. EDA tools are also used for programming design functionality intoFPGAsor field-programmable gate arrays, customisable integrated circuit designs. Design flow primarily remains characterised via several primary components; these include: Market capitalizationand company name as of March 2023: Market capitalization and company name as of December 2011[update]:[19] Many EDA companies acquire small companies with software or other technology that can be adapted to their core business.[24]Most of the market leaders are amalgamations of many smaller companies and this trend is helped by the tendency of software companies to design tools as accessories that fit naturally into a larger vendor's suite of programs ondigital circuitry; many new tools incorporate analog design and mixed systems.[25]This is happening due to a trend to placeentire electronic systems on a single chip.
https://en.wikipedia.org/wiki/Electronic_design_automation
Vaccination requirements for international travelare the aspect ofvaccination policythat concerns themovement of peopleacrossborders. Countries around the world require travellers departing to other countries, or arriving from other countries, to bevaccinatedagainst certaininfectiousdiseasesin order to preventepidemics. Atborder checks, these travellers are required to show proof of vaccination against specific diseases; the most widely used vaccination record is theInternational Certificate of Vaccination or Prophylaxis (ICVP or Carte Jaune/Yellow Card). Some countries require information about a passenger's vaccination status in apassenger locator form.[citation needed] The first International Certificate of Vaccination againstSmallpoxwas developed by the 1944 International Sanitary Convention[1](itself an amendment of the 1926 International Sanitary Convention on Maritime Navigation and the1933 International Sanitary Convention for Aerial Navigation).[2]The initial certificate was valid for a maximum of three years.[1] The policy had a few flaws: the smallpox vaccination certificates were not always checked by qualified airport personnel, or when passengers transferred at airports in smallpox-free countries. Travel agencies mistakenly provided certificates to some unvaccinated customers, and there were some instances of falsified documents. Lastly, a small number of passengers carrying valid certificates still contracted smallpox because they were improperly vaccinated. However, all experts agree that the mandatory possession of vaccination certificates significantly increased the number of travellers who were vaccinated, and thus contributed to preventing the spread of smallpox, especially when therapid expansion of air travelin the 1960s and 1970s reduced the travelling time from endemic countries to all other countries to just a few hours.[1] After smallpox was successfully eradicated in 1980, the International Certificate of Vaccination against Smallpox was cancelled in 1981, and the new 1983 form lacked any provision for smallpox vaccination.[1] Travellers who wish to enter certain countries or territories must be vaccinated against yellow fever ten days before crossing the border, and be able to present a vaccination record/certificate at the border checks.[3]: 45In most cases, this travel requirement depends on whether the country they are travelling from has been designated by the World Health Organization as being a "country with risk of yellow fever transmission". In a few countries, it does not matter which country the traveller comes from: everyone who wants to enter these countries must be vaccinated against yellow fever. 
There are exemptions for newborn children; in most cases, any child who is at least nine months or one year old needs to be vaccinated.[4] Travellers who wish to enter or leave certain countries must be vaccinated against polio, usually at most twelve months and at least four weeks before crossing the border, and be able to present a vaccination record/certificate at the border checks.[3]: 25–27Most requirements apply only to travel to or from so-called polio-endemic, polio-affected, polio-exporting, polio-transmission, or "high-risk" countries.[4]As of August 2020, Afghanistan and Pakistan are the only polio-endemic countries in the world (wherewild polio has not yet been eradicated).[5]Several countries have additional precautionary polio vaccination travel requirements, for example to and from "key at-risk countries", which as of December 2020 include China, Indonesia, Mozambique, Myanmar, and Papua New Guinea.[4][6] Travellers who wish to enter or leave certain countries or territories must be vaccinated against meningococcal meningitis, preferably 10–14 days before crossing the border, and be able to present a vaccination record/certificate at the border checks.[3]: 21–24Countries with required meningococcal vaccination for travellers includeThe Gambia,Indonesia,Lebanon,Libya, thePhilippines, and most importantly and extensivelySaudi Arabiafor Muslims visiting or working inMeccaandMedinaduring theHajjorUmrahpilgrimages.[4]For some countries inAfrican meningitis belt, vaccinations prior to entry are not required, but highly recommended.[3]: 21–24 During theCOVID-19 pandemic, severalCOVID-19 vaccineswere developed, and in December 2020 the first vaccination campaign was planned.[8] Anticipating the vaccine, on 23 November 2020,Qantasannounced that the company would ask for proof of COVID-19 vaccination from international travellers. 
According to Alan Joyce, the firm's CEO, a coronavirus vaccine would become a "necessity" when travelling, "We are looking at changing our terms and conditions to say for international travellers, we will ask people to have a vaccination before they can get on the aircraft."[9]Australian Prime Minister Scott Morrison subsequently announced that all international travellers who fly to Australia without proof of a COVID-19 vaccination will be required to quarantine at their own expense.[7]Victoria PremierDaniel Andrewsand the CEOs ofMelbourne Airport,Brisbane AirportandFlight Centreall supported the Morrison government's "no jab, no fly" policy, with onlySydney Airport's CEO suggesting advanced testing might also be sufficient to eliminate quarantine in the future.[10]TheInternational Air Transport Association(IATA) announced that it was almost finished with developing a digital health pass which states air passengers' COVID-19 testing and vaccination information to airlines and governments.[11] Korean AirandAir New Zealandwere seriously considering mandatory vaccination as well, but would negotiate it with their respective governments.[12]KLM CEO Pieter Elbers responded on 24 November that KLM does not yet have any plans for mandatory vaccination on its flights.[13]Brussels Airlines and Lufthansa said they had no plans yet on requiring passengers to present proof of vaccination before boarding, but Brussels Airport CEO Arnaud Feist agreed with Qantas' policy, stating: "Sooner or later, having proof of vaccination or a negative test will become compulsory."[14]Ryanair announced it would not require proof of vaccination for air travel within the EU, EasyJet stated it would not require any proof at all.The Irish Timescommented that a vaccination certificate for flying was quite common in countries around the world for other diseases, such as foryellow feverin many African countries.[15] On 25 November, separately from IATA's digital health pass initiative, five major airlines –United Airlines,Lufthansa,Virgin Atlantic,Swiss International Air Lines, andJetBlue– announced the 1December 2020 introduction of the CommonPass, which shows the results of passengers' COVID-19 tests. It was designed as an international standard by theWorld Economic Forumand The Commons Project, and set up in such a way that it could also be used to record vaccination results in the future. It standardises test results and aims to prevent forgery of vaccination records, while storing only limited data on a passenger's phone to safeguard their privacy. The CommonPass had already successfully undergone a trial period in October with United Airlines andCathay Pacific Airways.[16][17] On 26 November, the Danish Ministry of Health confirmed that it was working on a COVID-19 "vaccine passport" or simply Vaccination card[18]which would likely not only work as proof of vaccination for air travel, but also for other activities such as concerts, private parties and access to various businesses, a perspective welcomed by theConfederation of Danish Industry. 
The Danish College of General Practitioners also welcomed the project, saying that it doesn't force anyone to vaccinate, but encourages them to do so if they want to enjoy certain privileges in society.[19] Irish Foreign MinisterSimon Coveneysaid on 27 November 2020 that, although he "currently has no plans" for a passport vaccination stamp, his government was working on changing thepassenger locator formto include proof of PCR negative tests for the coronavirus, and that it was likely to be further adjusted to include vaccination data when a COVID-19 vaccine would become available. Coveney stressed that "We do not want, following enormous efforts and sacrifices from people, to reintroduce the virus again through international travel, which is a danger if it is not managed right."[20] TheIATA Travel Passapplication for smartphone has been developed by the International Air Transport Association (IATA) in early 2021. Themobile appstandardizes the health verification process confirming whether passengers have been vaccinated against, or tested negative for, COVID-19 prior to travel. Passengers will use the app to create a digital passport linked to their e-passport, receive test results and vaccination details from laboratories, and share that information with airlines and authorities. The application is intended to replace the existing paper-based method of providing proof of vaccination in international travel, colloquially known as theYellow Card. Trials of the application are carried out by a number of airlines includingSingapore Airlines,Emirates,Qatar Airways,EtihadandAir New Zealand.[21][22] It has been opined that many countries will increasingly consider the vaccination status of travellers[23]when deciding to allow them entry or not or require them toquarantine[24]since recently published research shows that thePfizer vaccineeffect lasts for at least six months.[25] Various vaccines are not legally required for travellers, but highly recommended by the World Health Organization.[3]For example, for areas with risk of meningococcal meningitis infection in countries inAfrican meningitis belt, vaccinations prior to entry are not required by these countries, but nevertheless highly recommended by the WHO.[3]: 21–24 As of July 2019,ebola vaccinesandmalaria vaccineswere still in development and not yet recommended for travellers.[3]: 4Instead, the WHO recommends various other means of prevention, including several forms ofchemoprophylaxis, in areas where there is a significant risk of becoming infected with malaria.[26]: 4–5
https://en.wikipedia.org/wiki/Vaccination_requirements_for_international_travel
Crimewareis a class ofmalwaredesigned specifically to automatecybercrime.[1] Crimeware (as distinct fromspywareandadware) is designed to perpetrateidentity theftthroughsocial engineeringor technical stealth in order to access a computer user's financial and retail accounts for the purpose of taking funds from those accounts or completing unauthorized transactions on behalf of the cyberthief.[citation needed]Alternatively, crimeware may stealconfidentialor sensitive corporate information. Crimeware represents a growing problem innetwork securityas many malicious code threats seek to pilfer valuable, confidential information. The cybercrime landscape has shifted from individuals developing their own tools to a market where crimeware, tools and services for illegal online activities, can be easily acquired in online marketplaces. These crimeware markets are expected to expand, especially targeting mobile devices.[2] The term crimeware was coined byDavid Jevansin February 2005 in an Anti-Phishing Working Group response to the FDIC article "Putting an End to Account-Hijacking Identity Theft".[3] Criminals use a variety of techniques to steal confidential data through crimeware, including through the following methods: Crimeware threats can be installed on victims' computers through multiple delivery vectors, including: Crimeware can have a significant economic impact due to loss of sensitive and proprietary information and associated financial losses. One survey estimates that in 2005 organizations lost in excess of $30 million due to the theft of proprietary information.[9]Thetheftof financial or confidential information from corporate networks often places the organizations in violation of government and industry-imposed regulatory requirements that attempt to ensure that financial, personal and confidential. US laws and regulations include:
https://en.wikipedia.org/wiki/Crimeware
In decision theory and machine learning, competitive regret refers to a performance measure that evaluates an algorithm's regret relative to an oracle or benchmark strategy. Unlike traditional regret, which compares against the best fixed decision in hindsight, competitive regret compares against decision-makers with different capabilities, either with greater computational resources or access to additional information. The formal definition of competitive regret typically involves a ratio or difference between the regret of an algorithm and the regret of a reference oracle. An algorithm is considered to have "good" competitive regret if this ratio remains bounded even as the problem size increases. This framework has applications in various domains including online optimization, reinforcement learning, portfolio selection, and multi-armed bandit problems. Competitive regret analysis provides researchers with a more nuanced evaluation metric than standard regret, helping them develop algorithms that can achieve near-optimal performance even under practical constraints and uncertainty. Consider estimating a discrete probability distribution p on a discrete set 𝒳 based on data X. The regret of an estimator[1] q is defined with respect to 𝒫, the set of all possible probability distributions, in terms of the Kullback–Leibler divergence D(p||q) between p and q. The oracle is restricted to partial information about the true distribution p: it knows the location of p in the parameter space only up to a partition.[1] Given a partition ℙ of the parameter space, suppose the oracle knows the subset P ∈ ℙ containing the true p; the oracle's regret, and the competitive regret of q relative to that oracle, are defined accordingly. In a second setting, the oracle knows p exactly, but can only choose its estimator among natural estimators. A natural estimator assigns equal probability to the symbols which appear the same number of times in the sample.[1] The regret of this oracle and the corresponding competitive regret are defined analogously. For the estimator q proposed in Acharya et al. (2013), explicit bounds on these competitive regrets are given.[2] Here Δ_k denotes the k-dimensional unit simplex surface. The partition ℙ_σ denotes the permutation class on Δ_k, in which p and p′ are placed in the same subset if and only if p′ is a permutation of p.
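The quantities above can be made concrete with a small simulation. The following Python sketch is illustrative only (the add-constant estimator, the sample sizes and all other parameter values are assumptions chosen for the example, not taken from the cited papers); it estimates the expected Kullback–Leibler loss E[D(p||q(X))] incurred by a simple smoothed estimator on samples drawn from a fixed distribution p, the basic quantity of which the regrets above are worst-case or oracle-relative versions.

import numpy as np

def kl_divergence(p, q):
    """D(p || q) in nats, assuming q > 0 wherever p > 0."""
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

def add_constant_estimator(counts, beta=1.0):
    """Add-beta (Laplace-style) estimate of the distribution from sample counts."""
    return (counts + beta) / (counts.sum() + beta * len(counts))

def expected_kl_loss(p, n_samples=200, n_trials=500, beta=1.0, seed=None):
    """Monte-Carlo estimate of E[D(p || q(X))] for the add-beta estimator."""
    rng = np.random.default_rng(seed)
    k = len(p)
    total = 0.0
    for _ in range(n_trials):
        sample = rng.choice(k, size=n_samples, p=p)
        counts = np.bincount(sample, minlength=k)
        total += kl_divergence(p, add_constant_estimator(counts, beta))
    return total / n_trials

# Example: a skewed distribution over k = 5 symbols.
p = np.array([0.5, 0.2, 0.15, 0.1, 0.05])
print(expected_kl_loss(p, seed=0))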
https://en.wikipedia.org/wiki/Competitive_regret
Resource exhaustion attacks are computer security exploits that crash, hang, or otherwise interfere with the targeted program or system. They are a form of denial-of-service attack but are different from distributed denial-of-service attacks, which involve overwhelming a network host such as a web server with requests from many locations.[1] Resource exhaustion attacks generally exploit a software bug or design deficiency. In software with manual memory management (most commonly written in C or C++), memory leaks are a very common bug exploited for resource exhaustion. Even if a garbage-collected programming language is used, resource exhaustion attacks are possible if the program uses memory inefficiently and does not impose limits on the amount of state used when necessary. File descriptor leaks are another common vector. Most general-purpose programming languages require the programmer to explicitly close file descriptors, so even particularly high-level languages allow the programmer to make such mistakes.
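As a minimal illustration of the file-descriptor leak pattern described above (the class and function names are invented for the example and do not come from any particular project), the following Python sketch contrasts a handler that keeps its descriptors open indefinitely with one that releases them promptly.

import os
import tempfile

class RequestHandler:
    """Toy 'server' that opens a file per request."""
    def __init__(self):
        self._logs = []

    def handle_leaky(self, path):
        # Bug: a new file object is opened per request and kept referenced,
        # so its descriptor is never released. An attacker who can trigger
        # many requests can exhaust the process's descriptor limit.
        f = open(path, "rb")
        self._logs.append(f)
        return f.read(16)

    def handle_safe(self, path):
        # The context manager closes the descriptor before returning, so the
        # number of descriptors in use stays bounded regardless of load.
        with open(path, "rb") as f:
            return f.read(16)

if __name__ == "__main__":
    fd, path = tempfile.mkstemp()
    os.write(fd, b"example data")
    os.close(fd)
    handler = RequestHandler()
    for _ in range(100):
        handler.handle_safe(path)   # at most one descriptor open at a time
    os.remove(path)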
https://en.wikipedia.org/wiki/Resource_exhaustion_attack
Canadian privacy lawis derived from thecommon law, statutes of theParliament of Canadaand the various provincial legislatures, and theCanadian Charter of Rights and Freedoms. Perhaps ironically, Canada's legal conceptualization of privacy, along with most modern legal Western conceptions of privacy, can be traced back to Warren and Brandeis’s"The Right to Privacy"published in theHarvard Law Reviewin 1890,[1]Holvast states "Almost all authors on privacy start the discussion with the famous article 'The Right to Privacy' of Samuel Warren and Louis Brandeis".[1] Canadianprivacy lawhas evolved over time into what it is today. The first instance of a formal law came when, in 1977, the Canadian government introduced data protection provisions into theCanadian Human Rights Act.[2]In 1982, theCanadian Charter of Rights and Freedomsoutlined that everyone has "the right to life, liberty and security of the person" and "the right to be free from unreasonable search or seizure",[3]but did not directly mention the concept ofprivacy. In 1983, the federalPrivacy Actregulated how federal government collects, uses and discloses personal information. Canadians' constitutional right to privacy was further confirmed in the 1984 Supreme Court case,Hunter v. Southam.[4]In this case, Section 8 of theCanadian Charter of Rights and Freedoms(1982) was found "to protect individuals from unjustified state intrusions upon their privacy" and the court stated suchCharterrights should be interpreted broadly.[5]Later, in a 1988 Supreme Court case, the right to privacy was established as "an essential component of individual freedom".[4]The court report fromR. v. Dymentstates, "From the earliest stage ofCharterinterpretation, this Court has made it clear that the rights it guarantees [including privacy rights] must be interpreted generously, and not in a narrow or legalistic fashion".[5]Throughout the late 1990s and 2000s, privacy legislation placed restrictions on the collection, use and disclosure of information by provincial and territorial governments and by companies and institutions in the private sector. The Privacy Act, passed in 1983[6]by theParliament of Canada, regulates how federal government institutions collect, use and disclose personal information. It also provides individuals with a right of access to information held about them by the federal government, and a right to request correction of any erroneous information.[2] The Act established the office of thePrivacy Commissioner of Canada, who is an Officer of Parliament. The responsibilities of the Privacy Commissioner includes supervising the application of the Act itself. Under the Act, the Privacy Commissioner has powers to audit federal government institutions to ensure their compliance with the act, and is obliged to investigate complaints by individuals about breaches of the act. The Act and its equivalent legislation in most provinces are the expression of internationally accepted principles known as "fair information practices." 
As a last resort, thePrivacy Commissioner of Canadadoes have the "power of embarrassment", which can be used in the hopes that the party being embarrassed will rectify the problem under public scrutiny[2] Although the office of the commissioner has no mandate to conduct extensive research and education under the currentPrivacy Act, the Commissioner believed that he had become a leading educator in Canada on the issue of privacy.[2] The next major change to the Canadian privacy laws came in 1985 in the form of theAccess to Information Act. The main purposes of the Act were to provide citizens with the right of access to information under the control of governmental institutions. The Act limits access to personal information under specific circumstances.[7] TheFreedom of Information Actwas enacted in 1996, and expanded upon the principles of thePrivacy ActandAccess to Information Act. It was designed to make governmental institutions more accountable to the public, and to protect individual privacy by giving the public right of access to records, as well as giving individuals right of access to and a right to request correction of personal information about themselves. It also specifies limits to the rights of access given to individuals, prevents the unauthorized collection, use or disclosure of personal information by public bodies, and redefines the role of thePrivacy Commissioner of Canada.[8] ThePersonal Information Protection and Electronic Documents Act("PIPEDA") governs the topic ofdata privacy, and how private-sector companies can collect, use and disclose personal information. The Act also contains various provisions to facilitate the use of electronic documents. PIPEDA was passed in 2000 to promote consumer trust in electronic commerce, as well as was intended to assure that Canadian privacy laws protect the personal information of citizens of other nationalities to be in compliance withEU data protection law. In recent years, there have been numerous calls for reform as PIPEDA is considered outdated and unable to address AI effectively.[9]The Canadian government responded with a comprehensive reform project under Parliamentary discussion.[10] PIPEDA includes and creates provisions of theCanadian Standards Association's Model Code for the Protection of Personal Information, developed in 1995. Like any privacy protection act, the individual must be informed of information that may be disclosed, whereby consent is given. This may be done through accepting terms, signing a document or verbal communication. In PIPEDA, "Personal Information" is specified as information about an identifiable individual, which includes both collected information and inferred information about individuals.[11] PIPEDA allows for similar provincial laws to continue to be in effect.Quebec,British ColumbiaandAlbertahave subsequently been determined to have similar legislation, and laws governing personal health information only, in Ontario and New Brunswick, have received similar recognition. They all govern: The provincial Acts that have been so recognized, and agencies responsible, are as follows: TheCivil Code of Quebeccontains provisions governing privacy rights that can be enforced in the courts.[12]In addition, the following provinces have passed similar statutes: All four Acts establish a limited right of action, whereby liability will only be found if the defendant acts wilfully (not a requirement in Manitoba) and without a claim of right. 
Moreover, the nature and degree of the plaintiff's privacy entitlement is circumscribed by what is "reasonable in the circumstances". In January 2012, the Ontario Court of Appeal declared that the common law in Canada recognizes a right to personal privacy, more specifically identified as a "tort of intrusion upon seclusion",[17] as well as considering that appropriation of personality is already recognized as a tort in Ontario law.[18] The ramifications of this decision are just beginning to be discussed.[19][20]
https://en.wikipedia.org/wiki/Canadian_privacy_law
SequenceLis a general purposefunctional programminglanguage andauto-parallelizing(Parallel computing) compiler and tool set, whose primary design objectives are performance onmulti-core processorhardware, ease of programming, platform portability/optimization, and code clarity and readability. Its main advantage is that it can be used to write straightforward code that automatically takes full advantage of all the processing power available, withoutprogrammersneeding to be concerned with identifyingparallelisms, specifyingvectorization, avoidingrace conditions, and other challenges of manualdirective-based programmingapproaches such asOpenMP. Programs written in SequenceL can be compiled tomultithreadedcode that runs in parallel, with no explicit indications from a programmer of how or what to parallelize. As of 2015[update], versions of the SequenceLcompilergenerate parallel code inC++andOpenCL, which allows it to work with most popular programming languages, includingC, C++,C#,Fortran,Java, andPython. A platform-specific runtime manages the threads safely, automatically providing parallel performance according to the number of cores available, currently supportingx86,POWER8, andARMplatforms. SequenceL was initially developed over a 20-year period starting in 1989, mostly atTexas Tech University. Primary funding was fromNASA, which originally wanted to develop a specification language which was "self-verifying"; that is, once written, the requirements could beexecuted, and the results verified against the desired outcome. The principal researcher on the project was initially Dr. Daniel Cooke,[2]who was soon joined by Dr. Nelson Rushton (another Texas Tech professor) and later Dr. Brad Nemanich (then a PhD student under Cooke). The goal of creating a language that was simple enough to be readable, but unambiguous enough to be executable, drove the inventors to settle on afunctional,declarativelanguage approach, where a programmer describes desired results, rather than the means to achieve them. The language is then free to solve the problem in the most efficient manner that it can find. As the language evolved, the researchers developed new computational approaches, includingconsume-simplify-produce(CSP).[3]In 1998, research began to apply SequenceL toparallel computing. This culminated in 2004 when it took its more complete form with the addition of thenormalize-transpose(NT) semantic,[4][5]which coincided with the major vendors ofcentral processing units(CPUs) making a major shift tomulti-core processorsrather than continuing to increase clock speeds. NT is the semantic work-horse, being used to simplify and decompose structures, based on adataflow-like execution strategy similar to GAMMA[6]and NESL.[7]The NT semantic achieves a goal similar to that of the Lämmel and Peyton-Jones' boilerplate elimination.[8][9]All other features of the language are definable from these two laws - includingrecursion, subscripting structures, function references, and evaluation of function bodies.[10][11] Though it was not the original intent, these new approaches allowed the language to parallelize a large fraction of the operations it performed, transparently to the programmer. In 2006, a prototype auto-parallelizing compiler was developed at Texas Tech University. In 2009, Texas Tech licensed the intellectual property to Texas Multicore Technologies (TMT),[12]for follow-on commercial development. 
In January 2017 TMT released v3, which includes a free Community Edition for download in addition to the commercial Professional Edition. SequenceL is designed to be as simple as possible to learn and use, focusing on algorithmic code where it adds value, e.g., the inventors chose not to reinvent I/O since C handled that well. As a result, the full language reference for SequenceL is only 40 pages, with copious examples, and its formal grammar has around 15 production rules.[13] SequenceL is strictly evaluated (like Lisp), statically typed with type inference (like Haskell), and uses a combination of infix and prefix operators that resemble standard, informal mathematical notation (like C, Pascal, Python, etc.). It is a purely declarative language, meaning that a programmer defines functions, in the mathematical sense, without giving instructions for their implementation. For example, the mathematical definition of matrix multiplication states that the (i,j)'th entry of the product is (A·B)_(i,j) = Σ_k A_(i,k)·B_(k,j). The SequenceL definition mirrors that definition more or less exactly; the subscripts following each parameter A and B on the left-hand side of the definition indicate that A and B are depth-2 structures (i.e., lists of lists of scalars), which are here thought of as matrices. From this formal definition, SequenceL infers the dimensions of the defined product from the formula for its (i,j)'th entry (as the set of pairs (i,j) for which the right-hand side is defined) and computes each entry by the same formula as in the informal definition above. Notice there are no explicit instructions for iteration in this definition, or for the order in which operations are to be carried out. Because of this, the SequenceL compiler can perform operations in any order (including parallel order) which satisfies the defining equation. In this example, computation of coordinates in the product will be parallelized in a way that, for large matrices, scales linearly with the number of processors. As noted above, SequenceL has no built-in constructs for input/output (I/O) since it was designed to work in an additive manner with other programming languages. The decision to compile to multithreaded C++ and support the 20+ Simplified Wrapper and Interface Generator (SWIG) languages (C, C++, C#, Java, Python, etc.) means it easily fits into extant design flows, training, and tools. It can be used to enhance extant applications, create multicore libraries, and even create standalone applications by linking the resulting code with other code which performs I/O tasks. SequenceL functions can also be queried from an interpreter with given inputs, like Python and other interpreted languages. The main non-scalar construct of SequenceL is the sequence, which is essentially a list. Sequences may be nested to any level. To avoid the routine use of recursion common in many purely functional languages, SequenceL uses a technique termed normalize–transpose (NT), in which scalar operations are automatically distributed over elements of a sequence.[14] For example, adding a scalar to a sequence adds it to every element of the sequence. This results not from overloading the '+' operator, but from the effect of NT, which extends to all operations, both built-in and user-defined.
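The SequenceL source itself is not reproduced here, but the behaviour just described can be approximated in Python with NumPy (an analogy only: NumPy's elementwise broadcasting is not SequenceL's normalize–transpose semantic, and this sketch makes no claim about SequenceL syntax).

import numpy as np

def matmul_from_entry_formula(A, B):
    """Compute (A·B)[i, j] = sum_k A[i, k] * B[k, j] directly from the
    entry-wise formula; each (i, j) entry is independent of the others,
    which is what lets a compiler evaluate the entries in any order,
    including in parallel."""
    A, B = np.asarray(A), np.asarray(B)
    return np.array([[sum(A[i, k] * B[k, j] for k in range(A.shape[1]))
                      for j in range(B.shape[1])]
                     for i in range(A.shape[0])])

# NT-style distribution of a scalar operation over a sequence:
xs = np.array([1, 2, 3])
print(xs + 10)                            # [11 12 13] -- '+' applied elementwise

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(matmul_from_entry_formula(A, B))    # [[19 22] [43 50]]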
As another example, if f() is a 3-argument function whose arguments are scalars, then for any appropriate x and z we will have The NT construct can be used for multiple arguments at once, as in, for example It also works when the expected argument is a non-scalar of any type T, and the actual argument is a list of objects of type T (or, in greater generality, any data structure whose coordinates are of type T). For example, ifAis a matrix andXsis a list of matrices [X1, ..., Xn], and given the above definition of matrix multiply, in SequenceL we would have As a rule, NTs eliminate the need for iteration, recursion, or high level functional operators to This tends to account for most uses of iteration and recursion. A good example that demonstrates the above concepts would be in finding prime numbers. Aprime numberis defined as So a positive integerzis prime if no numbers from 2 throughz-1, inclusive, divide evenly. SequenceL allows this problem to be programmed by literally transcribing the above definition into the language. In SequenceL, a sequence of the numbers from 2 throughz-1, inclusive, is just (2...(z-1)), so a program to find all of the primes between 100 and 200 can be written: Which, in English just says, If that condition isn't met, the function returns nothing. As a result, running this program yields The string "between 100 and 200" doesn't appear in the program. Rather, a programmer will typically pass that part in as the argument. Since the program expects a scalar as an argument, passing it a sequence of numbers instead will cause SequenceL to perform the operation on each member of the sequence automatically. Since the function returns empty for failing values, the result will be the input sequence, but filtered to return only those numbers that satisfy the criteria for primes: In addition to solving this problem with a very short and readable program, SequenceL's evaluation of the nested sequences would all be performed in parallel. The following software components are available and supported by TMT for use in writing SequenceL code. All components are available onx86platforms runningWindows,macOS, and most varieties ofLinux(includingCentOS,RedHat,openSUSE, andUbuntu), and onARMandIBM Powerplatforms running most varieties ofLinux. Acommand-lineinterpreterallows writing code directly into a command shell, or loading code from prewritten text files. This code can be executed, and the results evaluated, to assist in checking code correctness, or finding a quick answer. It is also available via the popularEclipseintegrated development environment(IDE). Code executed in the interpreter does not run in parallel; it executes in one thread. A command-linecompilerreads SequenceL code and generates highly parallelized,vectorized, C++, and optionally OpenCL, which must be linked with the SequenceL runtime library to execute. The runtime environment is a pre-compiled set of libraries which works with the compiled parallelized C++ code to execute optimally on the target platform. It builds on Intel Threaded Building Blocks (TBB)[15]and handles things such as cache optimization, memory management, work queues-stealing, and performance monitoring. AnEclipseintegrated development environmentplug-inprovides standard editing abilities (function rollup, chromacoding, etc.), and a SequenceL debugging environment. 
This plug-in runs against the SequenceL Interpreter, so cannot be used to debug the multithreaded code; however, by providing automatic parallelization, debugging of parallel SequenceL code is really verifying correctness of sequential SequenceL code. That is, if it runs correctly sequentially, it should run correctly in parallel – so debugging in the interpreter is sufficient. Various math and other standard function libraries are included as SequenceL source code to streamline the programming process and serve as best practice examples. These may be imported, in much the same way that C or C++ libraries are #included.
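For comparison with the prime-number example discussed earlier, a rough Python analogue (illustrative only; it reproduces the described filtering behaviour, not SequenceL's syntax or its automatic parallel evaluation) is:

def is_prime(z):
    # A positive integer z (> 1) is prime if no number from 2 through z - 1
    # divides it evenly, mirroring the definition transcribed in the article.
    return z > 1 and all(z % d != 0 for d in range(2, z))

# Applying the predicate across the whole range at once plays the role of
# SequenceL's automatic distribution over a sequence of inputs: values that
# fail the test are simply filtered out of the result.
primes_100_to_200 = [z for z in range(100, 201) if is_prime(z)]
print(primes_100_to_200)   # [101, 103, 107, 109, ...]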
https://en.wikipedia.org/wiki/SequenceL
Orthogonal defect classification(ODC)[1]turns semantic information in thesoftware defectstream into a measurement on the process.[2]The ideas were developed in the late 1980s and early 1990s by Ram Chillarege[3]atIBM Research. This has led to the development of new analytical methods used for software development and test process analysis. ODC is process model, language and domain independent. Applications of ODC have been reported by several corporations on a variety of platforms and development processes, ranging fromwaterfall, spiral, gated, andagile[4][5]development processes. One of the popular applications of ODC is softwareroot cause analysis. ODC is claimed to reduce the time taken to perform defect analysis by over a factor of 10[citation needed]. The gains come primarily from a different approach to defect analysis, where the ODC data is generated rapidly (in minutes, as opposed to hours per defect) and analytics used for the cause and effect analysis. This shifts the burden of analysis from a purely human method to one that is more data intensive. ODC as proposed in its original papers have specific attribute-value sets that create measurements on the development process. Two of the five more well known categories are thedefect typeanddefect trigger. The defect type captures the changes made in the code as a result of the defect. There are seven values for defect type and they have been empirically established to provide a measurement of the product through the process through their distribution. The concept is that changes in the defect type distribution is a function of the development process model, and thus provides an intrinsic measurement of progress of the product through the process. The defect trigger, similarly provides a measurement of the Testing process. The concept of the trigger is a key contribution that came through ODC and is now fairly widely used in technical and research publications.[6]The software trigger is defined as the force that surfaced the Fault to create the failure. The full set of triggers is available in ODC Documentation. The defect type and trigger collectively provide a large amount of causal information on defects. Additional information from the defect that is captured in standard ODC implementations includes "impact", "source" and "age". ODC training courses report that, once trained, an individual can categorize a defect via ODC in less than 3 minutes when performing the task retrospectively.[7]The time taken is far lower when done in-flight, or in-process. The categorization cannot be directly compared to root-cause-analysis, since ODC data is about "what-is", not "why". However,root cause analysisis very commonly performed using ODC. The analysis that studies ODC data is performing the first pass of root cause analysis, which is confirmed by discussing the results with the development team. This approach has five primary differences between the classical method and the ODC method.[8] Individual defect analysis is just one of the applications of ODC. The original design of ODC was to create a measurement system for software engineering using the defect stream as a source of intrinsic measurements. Thus, the attributes, either singularly, or in conjunction with one of the others provides specific measurements on certain aspects of the engineering process. These measurements can be used for one or more analytical methods, since they were designed with general measurement principles in mind. 
To date, several research papers have applied these measurements for a variety of purposes. More recently, research articles have used ODC to assess methods for security evaluation, expanding its scope.[9]
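As an illustration of how ODC turns a classified defect stream into a measurement, the following Python sketch tallies defect-type and trigger distributions from a small, invented defect log (the attribute values shown are common ODC-style categories, but the records and field names are hypothetical and not drawn from any specific ODC implementation).

from collections import Counter

# Hypothetical ODC-classified defect records.
defects = [
    {"type": "Assignment", "trigger": "Unit Test"},
    {"type": "Interface",  "trigger": "Function Test"},
    {"type": "Algorithm",  "trigger": "Function Test"},
    {"type": "Assignment", "trigger": "Code Review"},
    {"type": "Function",   "trigger": "System Test"},
]

type_distribution = Counter(d["type"] for d in defects)
trigger_distribution = Counter(d["trigger"] for d in defects)

# Shifts in these distributions over time are what ODC reads as a signal
# about the development and test process.
total = len(defects)
for name, count in type_distribution.most_common():
    print(f"{name:12s} {count / total:5.1%}")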
https://en.wikipedia.org/wiki/Orthogonal_Defect_Classification
Incryptography,cryptographic hash functionscan be divided into two main categories. In the first category are those functions whose designs are based on mathematical problems, and whose security thus follows from rigorous mathematical proofs,complexity theoryandformal reduction. These functions are calledprovably secure cryptographic hash functions. To construct these is very difficult, and few examples have been introduced. Their practical use is limited. In the second category are functions which are not based on mathematical problems, but on an ad-hoc constructions, in which the bits of the message are mixed to produce the hash. These are then believed to be hard to break, but no formal proof is given. Almost all hash functions in widespread use reside in this category. Some of these functions are already broken, and are no longer in use.SeeHash function security summary. Generally, thebasicsecurity ofcryptographic hash functionscan be seen from different angles: pre-image resistance, second pre-image resistance, collision resistance, and pseudo-randomness. The basic question is the meaning ofhard. There are two approaches to answer this question. First is the intuitive/practical approach: "hardmeans that it is almost certainly beyond the reach of any adversary who must be prevented from breaking the system for as long as the security of the system is deemed important." The second approach is theoretical and is based on thecomputational complexity theory: if problemAis hard, then there exists a formalsecurity reductionfrom a problem which is widely considered unsolvable inpolynomial time, such asinteger factorizationor thediscrete logarithmproblem. However, non-existence of a polynomial time algorithm does not automatically ensure that the system is secure. The difficulty of a problem also depends on its size. For example,RSA public-key cryptography(which relies on the difficulty ofinteger factorization) is considered secure only with keys that are at least 2048 bits long, whereas keys for theElGamal cryptosystem(which relies on the difficulty of thediscrete logarithmproblem) are commonly in the range of 256–512 bits. If the set of inputs to the hash is relatively small or is ordered by likelihood in some way, then a brute force search may be practical, regardless of theoretical security. The likelihood of recovering the preimage depends on the input set size and the speed or cost of computing the hash function. A common example is the use of hashes to storepasswordvalidation data. Rather than store the plaintext of user passwords, an access control system typically stores a hash of the password. When a person requests access, the password they submit is hashed and compared with the stored value. If the stored validation data is stolen, then the thief will only have the hash values, not the passwords. However, most users choose passwords in predictable ways, and passwords are often short enough so that all possible combinations can be tested if fast hashes are used.[1]Special hashes calledkey derivation functionshave been created to slow searches.SeePassword cracking. Most hash functions are built on an ad-hoc basis, where the bits of the message are nicely mixed to produce the hash. Variousbitwise operations(e.g. rotations),modular additions, andcompression functionsare used in iterative mode to ensure high complexity and pseudo-randomness of the output. In this way, the security is very hard to prove and the proof is usually not done. 
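A deliberately simplified (and insecure) Python sketch of this classical "mix the bits" style, using only rotations, modular additions and multiplications in an iterative loop, is shown below; it is a toy written for illustration and resembles no deployed hash function, and its constants are arbitrary.

def rotl32(x, r):
    """Rotate a 32-bit value left by r bits."""
    return ((x << r) | (x >> (32 - r))) & 0xFFFFFFFF

def toy_hash(data: bytes) -> int:
    """Absorb the message byte by byte with modular additions, multiplications
    and rotations. This mimics the ad-hoc style only; it offers no security."""
    state = 0x6A09E667                                      # arbitrary start value
    for i, byte in enumerate(data):
        state = (state + byte * 0x9E3779B1) & 0xFFFFFFFF    # modular addition
        state ^= rotl32(state, (i % 13) + 1)                # bitwise mixing
        state = (state * 0x85EBCA6B) & 0xFFFFFFFF
    return state

print(hex(toy_hash(b"hello world")))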
Only a few years ago,[when?] one of the most popular hash functions, SHA-1, was shown to be less secure than its length suggested: collisions could be found in only about 2^51 tests,[2] rather than the brute-force number of 2^80. In other words, most of the hash functions in use nowadays are not provably collision-resistant. These hashes are not based on purely mathematical functions. This approach generally results in more effective hash functions, but with the risk that a weakness of such a function will eventually be used to find collisions. One famous case is MD5.

In this approach, the security of a hash function is based on some hard mathematical problem, and it is proved that finding collisions of the hash function is as hard as breaking the underlying problem. This gives a somewhat stronger notion of security than just relying on complex mixing of bits as in the classical approach. A cryptographic hash function has provable security against collision attacks if finding collisions is provably polynomial-time reducible from a problem P which is supposed to be unsolvable in polynomial time. The function is then called provably secure, or just provable. It means that if finding collisions were feasible in polynomial time by algorithm A, then one could find and use a polynomial-time algorithm R (a reduction algorithm) that would use algorithm A to solve problem P, which is widely supposed to be unsolvable in polynomial time; that is a contradiction. This means that finding collisions cannot be easier than solving P. However, this only indicates that finding collisions is difficult in some cases, as not all instances of a computationally hard problem are typically hard. Indeed, very large instances of NP-hard problems are routinely solved, while only the hardest are practically impossible to solve. Examples of problems that are assumed to be not solvable in polynomial time include integer factorization and the discrete logarithm problem.

SWIFFT is an example of a hash function that circumvents these security problems. It can be shown that, for any algorithm that can break SWIFFT with probability p within an estimated time t, one can find an algorithm that solves the worst-case scenario of a certain difficult mathematical problem within time t′ depending on t and p.[citation needed]

Let hash(m) = x^m mod n, where n is a hard-to-factor composite number, and x is some prespecified base value. A collision x^(m1) ≡ x^(m2) (mod n) reveals a multiple m1 − m2 of the multiplicative order of x modulo n. This information can be used to factor n in polynomial time, assuming certain properties of x. But the algorithm is quite inefficient because it requires on average 1.5 multiplications modulo n per message bit.
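A toy Python rendering of this construction (with a deliberately tiny modulus chosen for illustration; a meaningful instance would require a large hard-to-factor n, and even then the scheme is impractical, as noted above) is:

def exp_hash(message: bytes, x: int, n: int) -> int:
    """hash(m) = x**m mod n, where the message bytes are read as the integer m."""
    m = int.from_bytes(message, "big")
    return pow(x, m, n)

n = 3233          # 61 * 53 -- a toy composite, far too small for real use
x = 5             # prespecified base value
print(exp_hash(b"abc", x, n))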
https://en.wikipedia.org/wiki/Provably_secure_cryptographic_hash_function
Incryptography,electromagnetic attacksareside-channel attacksperformed by measuring theelectromagnetic radiationemitted from adeviceand performingsignal analysison it. These attacks are a more specific type of what is sometimes referred to asVan Eck phreaking, with the intention to captureencryptionkeys. Electromagnetic attacks are typically non-invasive and passive, meaning that these attacks are able to be performed by observing the normal functioning of the target device without causing physical damage.[1]However, an attacker may get a bettersignalwith lessnoiseby depackaging the chip and collecting the signal closer to the source. These attacks are successful against cryptographicimplementationsthat perform differentoperationsbased on the data currently being processed, such as thesquare-and-multiplyimplementation ofRSA. Different operations emit different amounts of radiation and an electromagnetic trace of encryption may show the exact operations being performed, allowing an attacker to retrieve full or partialprivate keys. Like many other side-channel attacks, electromagnetic attacks are dependent on the specific implementation of thecryptographic protocoland not on thealgorithmitself. Electromagnetic attacks are often done in conjunction with other side-channel attacks, likepower analysisattacks. Allelectronic devicesemit electromagnetic radiation. Because every wire that carries current creates amagnetic field, electronic devices create some small magnetic fields when in use. These magnetic fields can unintentionally revealinformationabout the operation of a device if not properly designed. Because all electronic devices are affected by this phenomenon, the term ‘device’ can refer to anything from a desktop computer, to mobile phone, to a smart card. Electromagnetic wavesare a type of wave that originate fromcharged particles, are characterized by varyingwavelengthand are categorized along theelectromagnetic spectrum. Any device that uses electricity will emit electromagnetic radiation due to themagnetic fieldcreated by charged particles moving along amedium. For example,radio wavesare emitted byelectricitymoving along aradio transmitter, or even from asatellite. In the case of electromagnetic side-channel attacks, attackers are often looking at electromagnetic radiation emitted by computing devices, which are made up ofcircuits. Electronic circuits consist ofsemiconductingmaterials upon which billions oftransistorsare placed. When a computer performs computations, such as encryption, electricity running through the transistors create a magnetic field and electromagnetic waves are emitted.[2][3][4] Electromagnetic waves can be captured using aninduction coiland an analog to digital converter can then sample the waves at a given clock rate and convert the trace to a digital signal to be further processed by computer. The electronic device performing the computations is synced with a clock that is running at frequencies on the order ofmega-hertz(MHz) togiga-hertz(GHz). However, due to hardware pipelining, and complexity of some instructions, some operations take multiple clock cycles to complete.[5]Therefore, it is not always necessary to sample the signal at such a high clock rate. It is often possible to get information on all or most of the operations while sampling on the order ofkilo-hertz(kHz). Different devices leak information at different frequencies. 
For example,Intel's Atom processor will leak keys during RSA andAESencryption at frequencies between 50 MHz and 85 MHz.[6]Android version 4.4'sBouncy Castlelibrary implementation ofECDSAis vulnerable to key extraction side channel attacks around the 50 kHz range.[7] Every operation performed by a computer emits electromagnetic radiation and different operations emit radiation at different frequencies. In electromagnetic side-channel attacks, an attacker is only interested in a few frequencies at which encryption is occurring. Signal processing is responsible for isolating these frequencies from the vast multitude of extraneous radiation and noise. To isolate certain frequencies, abandpass filter, which blocks frequencies outside of a given range, must be applied to the electromagnetic trace. Sometimes, the attacker does not know which frequencies encryption is performed at. In this case, the trace can be represented as aspectrogram, which can help determine which frequencies are most prevalent at different points of execution. Depending on the device being attacked and the level of noise, several filters may need to be applied. Electromagnetic attacks can be broadly separated into simple electromagnetic analysis (SEMA) attacks and differential electromagnetic analysis (DEMA) attacks. In simple electromagnetic analysis (SEMA) attacks, the attacker deduces the key directly by observing the trace. It is very effective against asymmetric cryptography implementations.[8]Typically, only a few traces are needed, though the attacker needs to have a strong understanding of the cryptographic device and of the implementation of thecryptographic algorithm. An implementation vulnerable to SEMA attacks will perform a different operation depending on whether thebitof the key is 0 or 1, which will use different amounts of power and/or different chip components. This method is prevalent in many different types of side-channel attacks, in particular, power analysis attacks. Thus, the attacker can observe the entire computation of encryption and can deduce the key. For example, a common attack on asymmetric RSA relies on the fact that the encryption steps rely on the value of the key bits. Every bit is processed with a square operation and then a multiplication operation if and only if the bit is equal to 1. An attacker with a clear trace can deduce the key simply by observing where the multiplication operations are performed. In some cases, simple electromagnetic analysis is not possible or does not provide enough information. Differential electromagnetic analysis (DEMA) attacks are more complex, but are effective against symmetric cryptography implementation, against which SEMA attacks are not.[6]Additionally unlike SEMA, DEMA attacks do not require much knowledge about the device being attacked. While the fact that circuits that emit high-frequency signals may leak secret information was known since 1982 by the NSA, it was classified until 2000,[9]which was right around the time that the first electromagnetic attack against encryption was shown by researchers.[10]Since then, many more complex attacks have been introduced.[which?][citation needed] Smart cards, often colloquially referred to as “chip cards", were designed to provide a more secure financial transaction than a traditional credit card. 
They contain simple embeddedintegrated circuitsdesigned to perform cryptographic functions.[11]They connect directly to acard readerwhich provides the power necessary to perform an encryptedfinancial transaction. Many side-channel attacks have been shown to be effective against smart cards because they obtain their power supply and clock directly from the card reader. By tampering with a card reader, it is simple to collect traces and perform side-channel attacks. Other works, however, have also shown that smart cards are vulnerable to electromagnetic attacks.[12][13][14] A field-programmable gate arrays (FPGA) have been commonly used to implement cryptographic primitives in hardware to increase speed. These hardware implementations are just as vulnerable as other software based primitives. In 2005, an implementation of elliptic curve encryption was shown vulnerable to both SEMA and DEMA attacks.[15]TheARIAblock cipher is a common primitive implemented with FPGAs that has been shown to leak keys.[16] In contrast to smart cards, which are simple devices performing a single function,personal computersare doing many things at once. Thus, it is much more difficult to perform electromagnetic side-channel attacks against them, due to high levels of noise and fastclock rates. Despite these issues, researchers in 2015 and 2016 showed attacks against a laptop using anear-field magnetic probe. The resulting signal, observed for only a few seconds, was filtered, amplified, and digitized for offline key extraction. Most attacks require expensive, lab-grade equipment, and require the attacker to be extremely close to the victim computer.[17][18]However, some researchers were able to show attacks using cheaper hardware and from distances of up to half a meter.[19]These attacks, however, required the collection of more traces than the more expensive attacks. Smartphonesare of particular interest for electromagnetic side-channel attacks. Since the advent ofmobile phone payment systemssuch asApple Pay, e-commerce systems have become increasingly commonplace. Likewise, the amount of research dedicated to mobile phone security side channel attacks has also increased.[20]Currently most attacks are proofs of concept that use expensive lab-grade signal processing equipment.[21]One of these attacks demonstrated that a commercial radio receiver could detect mobile phone leakage up to three meters away.[22] However, attacks using low-end consumer grade equipment have also shown successful. By using an external USB sound card and an induction coil salvaged from a wireless charging pad, researchers were able to extract a user's signing key in Android's OpenSSL and Apple's CommonCrypto implementations of ECDSA.[20][21][22] Widely used theoretical encryption schemes aremathematically secure, yet this type of security does not consider their physical implementations, and thus, do not necessarily protect against side-channel attacks. Therefore, the vulnerability lies in the code itself, and it is the specific implementation that is shown to be insecure. Luckily, many of the vulnerabilities shown have since beenpatched. Vulnerable implementations include, but are definitely not limited to, the following: The attacks described thus far have mainly focused on the use of induction to detect unintended radiation. 
However, the use offar-field communicationtechnologies like that ofAM radioscan also be used for side-channel attacks, although no key extraction methods for far-field signal analysis have been demonstrated.[23]Therefore, a rough characterization of potential adversaries using this attack range from highly educated individuals to low to medium funded cartels. The following demonstrates a few possible scenarios: Point of sale systemsthat accept payment from mobile phones or smart cards are vulnerable. Induction coils can be hidden on these systems to record financial transactions from smart cards or mobile phone payments. With keys extracted, a malicious attacker could forge his own card or make fraudulent charges with the private key. Belgarric et al. propose a scenario where mobile payments are performed withbitcointransactions. Since theAndroidimplementation of the bitcoin client uses ECDSA, the signing key can be extracted at the point of sale.[7]These types of attacks are only slightly more complex than magnetic card stripe skimmers currently used on traditional magnetic strip cards. Many public venues such asStarbuckslocations are already offering free publicwireless chargingpads.[24]It was previously shown that the same coils used in wireless charging can be used for detection of unintended radiation. Therefore, these charging pads pose a potential hazard. Malicious charging pads might attempt to extract keys in addition to charging a user’s phone. When coupled with packet sniffing capabilities of public Wi-Fi networks, the keys extracted could be used to performman-in-the-middle attackson users. If far-field attacks are discovered, an attacker only needs to point hisantennaat a victim to perform these attacks; the victim need not be actively charging their phone on one of these public pads.[citation needed] Several countermeasures against electromagnetic attacks have been proposed, though there is no one perfect solution. Many of the following countermeasures will make electromagnetic attacks harder, not impossible. One of the most effective ways to prevent electromagnetic attacks is to make it difficult for an attacker to collect an electromagnetic signal at the physical level. Broadly, the hardware designer could design the encryption hardware to reduce signal strength[25]or to protect the chip. Circuit and wire shielding, such as aFaraday cage, are effective in reducing the signal, as well as filtering the signal or introducing extraneous noise to mask the signal. Additionally, most electromagnetic attacks require attacking equipment to be very close to the target, so distance is an effective countermeasure. Circuit designers can also use certain glues or design components in order to make it difficult or impossible to depackage the chip without destroying it. Recently, white-box modeling was utilized to develop a low-overhead generic circuit-level countermeasure[26]against both electromagnetic as well as power side-channel attacks. To minimize the effects of the higher-level metal layers in an IC acting as more efficient antennas,[27]the idea is to embed the crypto core with a signature suppression circuit,[28][29]routed locally within the lower-level metal layers, leading towards both power and electromagnetic side-channel attack immunity. 
Since many electromagnetic attacks, especially SEMA attacks, rely on asymmetric implementations of cryptographic algorithms, an effective countermeasure is to ensure that a given operation performed at a given step of the algorithm gives no information about the value of the key bit being processed. Randomization of the order of bit encryption, process interrupts, and clock-cycle randomization are all effective ways to make attacks more difficult (a simplified illustration of this desynchronization idea appears at the end of this section).[1] The classified National Security Agency program TEMPEST focuses both on spying on systems by observing their electromagnetic radiation and on securing equipment to protect against such attacks. The Federal Communications Commission outlines the rules regulating the unintended emissions of electronic devices in Part 15 of the Code of Federal Regulations Title 47. The FCC does not provide a certification that devices do not produce excess emissions, but instead relies on a self-verification procedure.[30]
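To make the randomization idea above concrete, the following is a deliberately simplified sketch, not taken from any cited implementation; process_key_bit is a hypothetical placeholder for one key-dependent step of a cipher, and the point is only that inserting a random amount of dummy work shifts the timing and electromagnetic profile of each run:

```python
import random

def process_key_bit(bit: int) -> int:
    """Hypothetical stand-in for one key-dependent step of a cipher."""
    return (bit * 0x9E3779B1) & 0xFFFFFFFF

def hardened_loop(key_bits: list[int]) -> None:
    for bit in key_bits:
        # Perform a random number of dummy operations on random bits so that
        # successive runs do not produce aligned traces, making averaging and
        # alignment harder for an attacker.
        for _ in range(random.randint(0, 7)):
            process_key_bit(random.getrandbits(1))
        process_key_bit(bit)

hardened_loop([1, 0, 1, 1, 0, 0, 1, 0])
```

Real countermeasures operate at the hardware or firmware level and must be validated against trace-alignment and averaging techniques; this sketch only illustrates why randomized ordering and dummy cycles raise the number of traces an attacker needs.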
https://en.wikipedia.org/wiki/Electromagnetic_attack
Project production management (PPM)[1][2] is the application of operations management[2][3] to the delivery of capital projects. The PPM framework is based on viewing a project as a production system,[1][2][3] in which a project transforms inputs (raw materials, information, labor, plant and machinery) into outputs (goods and services). The knowledge that forms the basis of PPM originated in the discipline of industrial engineering during the Industrial Revolution. During this time, industrial engineering matured and then found application in many areas, such as military planning and logistics for both the First and Second World Wars, and manufacturing systems. As a coherent body of knowledge began to form, industrial engineering evolved into various scientific disciplines, including operations research, operations management and queueing theory, amongst other areas of focus. Project production management is the application of this body of knowledge to the delivery of capital projects. Project management, as defined by the Project Management Institute,[1][2] specifically excludes operations management from its body of knowledge,[3] on the basis that projects are temporary endeavors with a beginning and an end, whereas operations refer to activities that are either ongoing or repetitive. However, by looking at a large capital project as a production system, such as what is encountered in construction,[4] it is possible to apply the theory and associated technical frameworks from operations research, industrial engineering and queueing theory to optimize, plan, control and improve project performance. For example, project production management applies tools and techniques typically used in manufacturing management, such as those described by Philip M. Morse[1] or in Factory Physics,[2][5] to assess the impact of variability and inventory on project performance. Although any variability in a production system degrades its performance, by understanding which variability is detrimental to the business and which is beneficial, steps can be implemented to reduce detrimental variability. After mitigation steps are put in place, the impact of any residual variability can be addressed by allocating buffers at select points in the project production system – a combination of capacity, inventory and time. Scientific and engineering disciplines have contributed many mathematical methods for project planning and scheduling, most notably linear and dynamic programming, yielding techniques such as the critical path method (CPM) and the program evaluation and review technique (PERT). The application of engineering disciplines, particularly operations research, industrial engineering and queueing theory, has found much use in the fields of manufacturing and factory production systems. Factory Physics is an example of where these scientific principles are described as forming a framework for manufacturing and production management. Just as Factory Physics is the application of scientific principles to construct a framework for manufacturing and production management, project production management is the application of the very same operations principles to the activities in a project, covering an area that has conventionally been out of scope for project management.[3] Modern project management theory and techniques started with Frederick Taylor and Taylorism/scientific management at the beginning of the 20th century, with the advent of mass manufacturing.
It was refined further in the 1950s with techniques such as the critical path method (CPM)[1][2] and the program evaluation and review technique (PERT).[5][6] Use of CPM and PERT became more common as the computer revolution progressed. As the field of project management continued to grow, the role of the project manager was created, and certifying organizations such as the Project Management Institute (PMI) emerged. Modern project management has evolved into a broad variety of knowledge areas described in the Guide to the Project Management Body of Knowledge (PMBOK).[3] Operations management[7][8][9][10] (related to the fields of production management, operations research and industrial engineering) is a field of science that emerged from the modern manufacturing industry and focuses on modeling and controlling actual work processes. The practice is based upon defining and controlling production systems, which typically consist of a series of inputs, transformational activities, inventory and outputs. Over the last 50 years, project management and operations management have been considered separate fields of study and practice. PPM applies the theory and results of the various disciplines known as operations management, operations research, queueing theory and industrial engineering to the management and execution of projects. By viewing a project as a production system, the delivery of capital projects can be analyzed for the impact of variability. The effects of variability can be summarized by the VUT equation (specifically, Kingman's formula for the G/G/1 queue). By using a combination of buffers – capacity, inventory and time – the impact of variability on project execution performance can be minimized. A set of key results used to analyze and optimize the work in projects was originally articulated by Philip Morse, considered the father of operations research in the U.S., and summarized in his seminal volume.[8] In introducing its framework for manufacturing management, Factory Physics summarizes these results. There are key mathematical models that describe the relationships between buffers and variability (see the sketch below). Little's law[11] – named after academic John Little – describes the relationship between throughput, cycle time and work-in-process (WIP) or inventory. The Cycle Time Formula[11] summarizes how much time a set of tasks at a particular point in a project takes to execute. Kingman's formula, also known as the VUT equation,[11] summarizes the impact of variability. A number of academic journals publish papers pertaining to operations management issues.
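For reference, commonly quoted single-station forms of the three relationships named above are sketched here; these are standard operations-management formulas stated for illustration rather than reproduced from the cited sources:

\mathit{WIP} = \mathit{TH}\times \mathit{CT}, \qquad \mathit{CT} = \frac{\mathit{WIP}}{\mathit{TH}}, \qquad W_{q} \;\approx\; \left(\frac{c_{a}^{2}+c_{s}^{2}}{2}\right)\left(\frac{u}{1-u}\right)t

where WIP is work in process, TH is throughput, CT is cycle time, W_q is the expected waiting time in queue, c_a and c_s are the coefficients of variation of arrivals and service (the V term), u is utilization (the U term), and t is the mean effective process time (the T term).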
https://en.wikipedia.org/wiki/Project_production_management
Collaborative intelligenceis distinguished from collective intelligence in three key ways: First, in collective intelligence there is a central controller who poses the question, collects responses from a crowd of anonymous responders, and uses an algorithm to process those responses to achieve a (typically) "better than average" consensus result, whereas collaborative intelligence focuses on gathering, and valuing, diverse input. Second, in collective intelligence the responders are anonymous, whereas in collaborative intelligence, as in social networks, participants are not anonymous. Third, in collective intelligence, as in the standard model of problem-solving, there is a beginning, when the central controller broadcasts the question, and an end, when the central controller announces the "consensus" result. In collaborative intelligence there is no central controller because the process is modeled on evolution. Distributed, autonomous agents contribute and share control, as in evolution and as manifested in the generation ofWikipediaarticles. Collaborative intelligence characterizesmulti-agent,distributed systemswhere each agent, human or machine, is autonomously contributing to aproblem solvingnetwork. Collaborative autonomy of organisms in their ecosystems makes evolution possible. Natural ecosystems, where each organism's unique signature is derived from its genetics, circumstances, behavior and position in its ecosystem, offer principles for design of next generationsocial networksto support collaborative intelligence,crowdsourcingindividual expertise, preferences, and unique contributions in a problem solving process.[1] Four related terms are complementary: Collaborative intelligence is a term used in several disciplines. In business it describes heterogeneous networks of people interacting to produce intelligent outcomes. It can also denote non-autonomousmulti-agent problem-solving systems. The term was used in 1999 to describe the behavior of an intelligent business "ecosystem"[2]where Collaborative Intelligence, or CQ, is "the ability to build, contribute to and manage power found in networks of people."[3]When the computer science community adopted the termcollective intelligenceand gave that term a specific technical denotation, a complementary term was needed to distinguish between anonymous homogeneity in collective prediction systems and non-anonymous heterogeneity in collaborative problem-solving systems. Anonymous collective intelligence was then complemented by collaborative intelligence, which acknowledged identity, viewingsocial networksas the foundation for next generation problem-solving ecosystems, modeled onevolutionary adaptationin nature's ecosystems. Although many sources warn that AI may cause the extinction of the human species,[4]humans may cause our own extinction viaclimate change,ecosystem disruption, decline of our ocean lifeline, increasingmass murdersandpolice brutality, and anarms racethat could triggerWorld War III, driving humanity extinct before AI gets a chance. The surge ofopen sourceapplications in generative AI demonstrates the power of collaborative intelligence (AI-human C-IQ) among distributed, autonomous agents, sharing achievements in collaborative partnerships and networks. 
The successes of small open source experiments in generative AI provide a model for a paradigm shift from centralized, hierarchical control to decentralized bottom-up, evolutionary development.[5]The key role of AI in collaborative intelligence was predicted in 2012 when Zann Gill wrote that collaborative intelligence (C-IQ) requires “multi-agent, distributed systems where each agent, human or machine, is autonomously contributing to a problem-solving network.”[6]Gill’s ACM paper has been cited in applications ranging from an NIH (U. S. National Institute of Health) Center for Biotechnology study of human robot collaboration,[7]to an assessment of cloud computing tradeoffs.[8]A key application domain for collaborative intelligence is risk management, where preemption is an anticipatory action taken to secure first-options in maximising future gain and/or minimising loss.[9]Prediction of gain/ loss scenarios can increasingly harness AI analytics and predictive systems designed to maximize collaborative intelligence. Other collaborative intelligence applications include the study of social media and policing, harnessing computational approaches to enhance collaborative action between residents and law enforcement.[10]In their Harvard Business Review essay, Collaborative Intelligence: Humans and AI Are Joining Forces – Humans and machines can enhance each other’s strengths, authors H. James Wilson and Paul R. Daugherty report on research involving 1,500 firms in a range of industries, showing that the biggest performance improvements occur when humans and smart machines work together, enhancing each other’s strengths.[11] Collaborative intelligence traces its roots to the Pandemonium Architecture proposed by artificial intelligence pioneerOliver Selfridgeas a paradigm forlearning.[12]His concept was a precursor for the blackboard system where an opportunistic solution space, or blackboard, draws from a range of partitioned knowledge sources, as multiple players assemble a jigsaw puzzle, each contributing a piece.Rodney Brooksnotes that the blackboard model specifies how knowledge is posted to a blackboard for generalsharing, but not how knowledge is retrieved, typically hiding from the consumer of knowledge who originally produced which knowledge,[13]so it would not qualify as a collaborative intelligence system. In the late 1980s,Eshel Ben-Jacobbegan to study bacterialself-organization, believing that bacteria hold the key to understanding larger biological systems. He developed new pattern-forming bacteria species,Paenibacillus vortexandPaenibacillus dendritiformis, and became a pioneer in the study of social behaviors of bacteria.P. dendritiformismanifests a collective faculty, which could be viewed as a precursor of collaborative intelligence, the ability to switch between different morphotypes to adapt with the environment.[14][15]Ants were first characterized by entomologistW. M. Wheeleras cells of a single "superorganism" where seemingly independent individuals can cooperate so closely as to become indistinguishable from a single organism.[16]Later research characterized some insect colonies as instances ofcollective intelligence. The concept ofant colony optimization algorithms, introduced byMarco Dorigo, became a dominant theory ofevolutionary computation. The mechanisms ofevolutionthrough which species adapt toward increased functional effectiveness in their ecosystems are the foundation for principles of collaborative intelligence. 
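As a minimal illustration of the ant-colony-optimization idea mentioned above, the following toy sketch (assumed data and parameters; not Dorigo's original algorithm) shows how pheromone deposit and evaporation let simple agents collectively converge on the shorter of two routes:

```python
import random

ROUTE_LENGTHS = {"short": 1.0, "long": 2.0}   # toy problem: two candidate routes
EVAPORATION = 0.1                             # fraction of pheromone lost per ant
pheromone = {route: 1.0 for route in ROUTE_LENGTHS}

def choose_route():
    """Pick a route with probability proportional to its pheromone level."""
    total = sum(pheromone.values())
    r = random.uniform(0, total)
    cumulative = 0.0
    for route, level in pheromone.items():
        cumulative += level
        if r <= cumulative:
            return route
    return route  # fallback for floating-point edge cases

for _ in range(200):  # 200 ants traverse the routes one after another
    route = choose_route()
    # Evaporate old pheromone, then deposit an amount inversely
    # proportional to the length of the route the ant just took.
    for k in pheromone:
        pheromone[k] *= (1 - EVAPORATION)
    pheromone[route] += 1.0 / ROUTE_LENGTHS[route]

print(pheromone)  # the "short" route typically accumulates far more pheromone
```

No agent sees the whole problem; coordination emerges from the shared pheromone trace, which is the sense in which such systems prefigure collaborative intelligence among autonomous agents.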
Artificial swarm intelligence (ASI) is a real-time technology that enables networked human groups to efficiently combine their knowledge, wisdom, insights, and intuitions into an emergent intelligence. Sometimes referred to as a "hive mind," the first real-time human swarms were deployed by Unanimous A.I. in 2014, using a cloud-based server called "UNU". It enables online groups to answer questions, reach decisions, and make predictions by thinking together as a unified intelligence. This process has been shown to produce significantly improved decisions, predictions, estimations, and forecasts, as demonstrated when predicting major events such as the Kentucky Derby, the Oscars, the Stanley Cup, presidential elections, and the World Series.[17][18] A type of collaborative AI was the focus of a DARPA Artificial Intelligence Exploration (AIE)[19] program from 2021 to 2023. Named Shared Experience Lifelong Learning,[20] the program aimed to develop a population of agents capable of sharing a growing number of machine-learned tasks without forgetting. The vision behind this initiative was later elaborated in a Perspective in Nature Machine Intelligence,[21] which proposed a synergy between lifelong learning and the sharing of machine-learned knowledge in populations of agents. The envisioned network of AI agents promises to bring about emergent properties such as faster and more efficient learning, a higher degree of open-ended learning, and a potentially more democratic society of AI agents, in contrast to monolithic, large-scale AI systems. These research developments were seen as implementing ideas reminiscent of science-fiction concepts such as the Borg from Star Trek, but with more appealing characteristics such as individuality and autonomy.[22] Crowdsourcing evolved from anonymous collective intelligence and is evolving toward credited, open-source, collaborative intelligence applications that harness social networks. Evolutionary biologist Ernst Mayr noted that competition among individuals would not contribute to species evolution if individuals were typologically identical; individual differences are a prerequisite for evolution.[23] This evolutionary principle corresponds to the principle of collaborative autonomy in collaborative intelligence, which is a prerequisite for next-generation platforms for crowdsourcing. A number of crowdsourced experiments exhibit attributes of collaborative intelligence. As crowdsourcing evolves from basic pattern-recognition tasks toward collaborative intelligence, tapping the unique expertise of individual contributors in social networks, constraints guide evolution toward increased functional effectiveness, co-evolving with systems to tag, credit, time-stamp, and sort content.[24] Collaborative intelligence requires capacity for effective search, discovery, integration, visualization, and frameworks to support collaborative problem-solving.[25] The collaborative intelligence technology category was established in 2022 by MURAL, a software provider of interactive whiteboard collaboration spaces for group ideation and problem-solving.[26] MURAL formalized the collaborative intelligence category through the acquisition of LUMA Institute,[27] an organization that trains people to be collaborative problem solvers through teaching human-centered design.[28] The collaborative intelligence technology category is described by MURAL as combining "collaboration design with collaboration spaces and emerging Collaboration Insights™️ ...
to enable and amplify the potential of the team."[29] The termcollective intelligenceoriginally encompassed both collective and collaborative intelligence, and many systems manifest attributes of both.Pierre Lévycoined the term "collective intelligence" in his book of that title, first published in French in 1994.[30]Lévy defined "collective intelligence" to encompass both collective and collaborative intelligence: "a form of universally distributed intelligence, constantly enhanced, coordinated in real time, and in the effective mobilization of skills".[31]Following publication of Lévy's book, computer scientists adopted the term collective intelligence to denote an application within the more general area to which this term now applies in computer science. Specifically, an application that processes input from a large number of discrete responders to specific, generally quantitative, questions (e.g. what will the price ofDRAMbe next year?)Algorithmshomogenize input, maintaining the traditional anonymity of survey responders to generate better-than-average predictions. Recent dependency network studies suggest links between collective and collaborative intelligence. Partial correlation-based Dependency Networks, a new class of correlation-based networks, have been shown to uncover hidden relationships between the nodes of the network. Research by Dror Y. Kenett and his Ph.D. supervisorEshel Ben-Jacobuncovered hidden information about the underlying structure of theU.S. stock marketthat was not present in the standardcorrelation networks, and published their findings in 2011.[32] Collaborative intelligence addresses problems where individual expertise, potentially conflicting priorities of stakeholders, and different interpretations of diverse experts are critical for problem-solving. Potential future applications include: Wikipedia, one of the most popular websites on the Internet, is an exemplar of an innovation network manifesting distributed collaborative intelligence that illustrates principles for experimental business laboratories and start-up accelerators.[33] A new generation of tools to support collaborative intelligence is poised to evolve from crowdsourcing platforms,recommender systems, andevolutionary computation.[25]Existing tools to facilitate group problem-solving include collaborative groupware,synchronous conferencingtechnologies such asinstant messaging,online chat, and shared white boards, which are complemented by asynchronous messaging likeelectronic mail, threaded, moderated discussionforums, web logs, and groupWikis. Managing the Intelligent Enterprise relies on these tools, as well as methods for group member interaction; promotion of creative thinking; group membership feedback; quality control and peer review; and a documented group memory or knowledge base. As groups work together, they develop a shared memory, which is accessible through the collaborative artifacts created by the group, including meeting minutes, transcripts from threaded discussions, and drawings. The shared memory (group memory) is also accessible through the memories of group members; current interest focuses on how technology can support and augment the effectiveness of shared past memory and capacity for future problem-solving. Metaknowledge characterizes how knowledge content interacts with its knowledge context in cross-disciplinary, multi-institutional, or global distributed collaboration.[34]
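For reference, the partial correlation underlying the dependency networks mentioned above is the standard statistical quantity, stated here for a single conditioning variable (the cited studies apply it across many stock-return series):

\rho_{xy\cdot z} \;=\; \frac{\rho_{xy}-\rho_{xz}\,\rho_{yz}}{\sqrt{\left(1-\rho_{xz}^{2}\right)\left(1-\rho_{yz}^{2}\right)}}

Comparing ρ_{xy} with ρ_{xy·z} indicates how much of the observed correlation between x and y can be attributed to the third variable z, which is the basic ingredient of the dependency-network construction.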
https://en.wikipedia.org/wiki/Collaborative_intelligence
Inmathematics, aspaceis aset(sometimes known as auniverse) endowed with astructuredefining the relationships among theelementsof the set. Asubspaceis asubsetof the parent space which retains the same structure. While modern mathematics uses many types of spaces, such asEuclidean spaces,linear spaces,topological spaces,Hilbert spaces, orprobability spaces, it does not define the notion of "space" itself.[1][a] A space consists of selectedmathematical objectsthat are treated aspoints, and selected relationships between these points. The nature of the points can vary widely: for example, the points can represent numbers, functions on another space, or subspaces of another space. It is the relationships that define the nature of the space. More precisely, isomorphic spaces are considered identical, where anisomorphismbetween two spaces is a one-to-one correspondence between their points that preserves the relationships. For example, the relationships between the points of a three-dimensional Euclidean space are uniquely determined by Euclid's axioms,[b]and all three-dimensional Euclidean spaces are considered identical. Topological notions such as continuity have natural definitions for every Euclidean space. However, topology does not distinguish straight lines from curved lines, and the relation between Euclidean and topological spaces is thus "forgetful". Relations of this kind are treated in more detail in the"Types of spaces"section. It is not always clear whether a given mathematical object should be considered as ageometric "space", or analgebraic "structure". A general definition of "structure", proposed byBourbaki,[2]embraces all common types of spaces, provides a general definition of isomorphism, and justifies the transfer of properties between isomorphic structures. In ancient Greek mathematics, "space" was a geometric abstraction of the three-dimensional reality observed in everyday life. About 300 BC,Euclidgave axioms for the properties of space. Euclid built all of mathematics on these geometric foundations, going so far as to define numbers by comparing the lengths of line segments to the length of a chosen reference segment. The method of coordinates (analytic geometry) was adopted byRené Descartesin 1637.[3]At that time, geometric theorems were treated as absolute objective truths knowable through intuition and reason, similar to objects of natural science;[4]: 11and axioms were treated as obvious implications of definitions.[4]: 15 Twoequivalence relationsbetween geometric figures were used:congruenceandsimilarity. Translations, rotations and reflections transform a figure into congruent figures;homotheties— into similar figures. For example, all circles are mutually similar, but ellipses are not similar to circles. A third equivalence relation, introduced byGaspard Mongein 1795, occurs inprojective geometry: not only ellipses, but also parabolas and hyperbolas, turn into circles under appropriate projective transformations; they all are projectively equivalent figures. The relation between the two geometries, Euclidean and projective,[4]: 133shows that mathematical objects are not given to uswith their structure.[4]: 21Rather, each mathematical theory describes its objects bysomeof their properties, precisely those that are put as axioms at the foundations of the theory.[4]: 20 Distances and angles cannot appear in theorems of projective geometry, since these notions are neither mentioned in the axioms of projective geometry nor defined from the notions mentioned there. 
The question "what is the sum of the three angles of a triangle" is meaningful in Euclidean geometry but meaningless in projective geometry. A different situation appeared in the 19th century: in some geometries the sum of the three angles of a triangle is well-defined but different from the classical value (180 degrees). Non-Euclideanhyperbolic geometry, introduced byNikolai Lobachevskyin 1829 andJános Bolyaiin 1832 (andCarl Friedrich Gaussin 1816, unpublished)[4]: 133stated that the sum depends on the triangle and is always less than 180 degrees.Eugenio Beltramiin 1868 andFelix Kleinin 1871 obtained Euclidean "models" of the non-Euclidean hyperbolic geometry, and thereby completely justified this theory as a logical possibility.[4]: 24[5] This discovery forced the abandonment of the pretensions to the absolute truth of Euclidean geometry. It showed that axioms are not "obvious", nor "implications of definitions". Rather, they are hypotheses. To what extent do they correspond to an experimental reality? This important physical problem no longer has anything to do with mathematics. Even if a "geometry" does not correspond to an experimental reality, its theorems remain no less "mathematical truths".[4]: 15 A Euclidean model of anon-Euclidean geometryis a choice of some objects existing in Euclidean space and some relations between these objects that satisfy all axioms (and therefore, all theorems) of the non-Euclidean geometry. These Euclidean objects and relations "play" the non-Euclidean geometry like contemporary actors playing an ancient performance. Actors can imitate a situation that never occurred in reality. Relations between the actors on the stage imitate relations between the characters in the play. Likewise, the chosen relations between the chosen objects of the Euclidean model imitate the non-Euclidean relations. It shows that relations between objects are essential in mathematics, while the nature of the objects is not. The word "geometry" (from Ancient Greek: geo- "earth", -metron "measurement") initially meant a practical way of processing lengths, regions and volumes in the space in which we live, but was then extended widely (as well as the notion of space in question here). According to Bourbaki,[4]: 131the period between 1795 (Géométrie descriptiveof Monge) and 1872 (the"Erlangen programme"of Klein) can be called "the golden age of geometry". The original space investigated by Euclid is now called three-dimensionalEuclidean space. Its axiomatization, started by Euclid 23 centuries ago, was reformed withHilbert's axioms,Tarski's axiomsandBirkhoff's axioms. These axiom systems describe the space viaprimitive notions(such as "point", "between", "congruent") constrained by a number ofaxioms. Analytic geometry made great progress and succeeded in replacing theorems of classical geometry with computations via invariants of transformation groups.[4]: 134, 5Since that time, new theorems of classical geometry have been of more interest to amateurs than to professional mathematicians.[4]: 136However, the heritage of classical geometry was not lost. According to Bourbaki,[4]: 138"passed over in its role as an autonomous and living science, classical geometry is thus transfigured into a universal language of contemporary mathematics". Simultaneously, numbers began to displace geometry as the foundation of mathematics. 
For instance, in Richard Dedekind's 1872 essayStetigkeit und irrationale Zahlen(Continuity and irrational numbers), he asserts that points on a line ought to have the properties ofDedekind cuts, and that therefore a line was the same thing as the set of real numbers. Dedekind is careful to note that this is an assumption that is incapable of being proven. In modern treatments, Dedekind's assertion is often taken to be the definition of a line, thereby reducing geometry to arithmetic. Three-dimensional Euclidean space is defined to be an affine space whose associated vector space of differences of its elements is equipped with an inner product.[6]A definition "from scratch", as in Euclid, is now not often used, since it does not reveal the relation of this space to other spaces. Also, a three-dimensionalprojective spaceis now defined as the space of all one-dimensional subspaces (that is, straight lines through the origin) of a four-dimensional vector space. This shift in foundations requires a new set of axioms, and if these axioms are adopted, the classical axioms of geometry become theorems. A space now consists of selected mathematical objects (for instance, functions on another space, or subspaces of another space, or just elements of a set) treated as points, and selected relationships between these points. Therefore, spaces are just mathematical structures of convenience. One may expect that the structures called "spaces" are perceived more geometrically than other mathematical objects, but this is not always true. According to the famous inaugural lecture given byBernhard Riemannin 1854, every mathematical object parametrized bynreal numbers may be treated as a point of then-dimensional space of all such objects.[4]: 140Contemporary mathematicians follow this idea routinely and find it extremely suggestive to use the terminology of classical geometry nearly everywhere.[4]: 138 Functionsare important mathematical objects. Usually they form infinite-dimensionalfunction spaces, as noted already by Riemann[4]: 141and elaborated in the 20th century byfunctional analysis. While each type of space has its own definition, the general idea of "space" evades formalization. Some structures are called spaces, other are not, without a formal criterion. Moreover, there is no consensus on the general idea of "structure". According to Pudlák,[7]"Mathematics [...] cannot be explained completely by a single concept such as the mathematical structure. Nevertheless, Bourbaki's structuralist approach is the best that we have." We will return to Bourbaki's structuralist approach in the last section "Spaces and structures", while we now outline a possible classification of spaces (and structures) in the spirit of Bourbaki. We classify spaces on three levels. Given that each mathematical theory describes its objects by some of their properties, the first question to ask is: which properties? This leads to the first (upper) classification level. On the second level, one takes into account answers to especially important questions (among the questions that make sense according to the first level). On the third level of classification, one takes into account answers to all possible questions. For example, theupper-level classificationdistinguishes between Euclidean andprojective spaces, since the distance between two points is defined in Euclidean spaces but undefined in projective spaces. Another example. 
The question "what is the sum of the three angles of a triangle" makes sense in a Euclidean space but not in a projective space. In a non-Euclidean space the question makes sense but is answered differently, which is not an upper-level distinction. Also, the distinction between a Euclidean plane and a Euclidean 3-dimensional space is not an upper-level distinction; the question "what is the dimension" makes sense in both cases. Thesecond-level classificationdistinguishes, for example, between Euclidean and non-Euclidean spaces; between finite-dimensional and infinite-dimensional spaces; between compact and non-compact spaces, etc. In Bourbaki's terms,[2]the second-level classification is the classification by "species". Unlike biological taxonomy, a space may belong to several species. Thethird-level classificationdistinguishes, for example, between spaces of different dimension, but does not distinguish between a plane of a three-dimensional Euclidean space, treated as a two-dimensional Euclidean space, and the set of all pairs of real numbers, also treated as a two-dimensional Euclidean space. Likewise it does not distinguish between different Euclidean models of the same non-Euclidean space. More formally, the third level classifies spaces up toisomorphism. An isomorphism between two spaces is defined as a one-to-one correspondence between the points of the first space and the points of the second space, that preserves all relations stipulated according to the first level. Mutually isomorphic spaces are thought of as copies of a single space. If one of them belongs to a given species then they all do. The notion of isomorphism sheds light on the upper-level classification. Given a one-to-one correspondence between two spaces of the same upper-level class, one may ask whether it is an isomorphism or not. This question makes no sense for two spaces of different classes. An isomorphism to itself is called an automorphism. Automorphisms of a Euclidean space are shifts, rotations, reflections and compositions of these. Euclidean space is homogeneous in the sense that every point can be transformed into every other point by some automorphism. Euclidean axioms[b]leave no freedom; they determine uniquely all geometric properties of the space. More exactly: all three-dimensional Euclidean spaces are mutually isomorphic. In this sense we have "the" three-dimensional Euclidean space. In Bourbaki's terms, the corresponding theory isunivalent. In contrast, topological spaces are generally non-isomorphic; their theory ismultivalent. A similar idea occurs in mathematical logic: a theory is called categorical if all its models of the same cardinality are mutually isomorphic. According to Bourbaki,[8]the study of multivalent theories is the most striking feature which distinguishes modern mathematics from classical mathematics. Topological notions (continuity, convergence, open sets, closed sets etc.) are defined naturally in every Euclidean space. In other words, every Euclidean space is also a topological space. Every isomorphism between two Euclidean spaces is also an isomorphism between the corresponding topological spaces (called "homeomorphism"), but the converse is wrong: a homeomorphism may distort distances. In Bourbaki's terms,[2]"topological space" is anunderlyingstructure of the "Euclidean space" structure. 
Similar ideas occur incategory theory: the category of Euclidean spaces is a concrete category over the category of topological spaces; theforgetful(or "stripping")functormaps the former category to the latter category. A three-dimensional Euclidean space is a special case of a Euclidean space. In Bourbaki's terms,[2]the species of three-dimensional Euclidean space isricherthan the species of Euclidean space. Likewise, the species of compact topological space is richer than the species of topological space. Such relations between species of spaces may be expressed diagrammatically as shown in Fig. 3. An arrow from A to B means that everyA-spaceis also aB-space,or may be treated as aB-space,or provides aB-space,etc. Treating A and B as classes of spaces one may interpret the arrow as a transition from A to B. (In Bourbaki's terms,[9]"procedure of deduction" of aB-spacefrom aA-space.Not quite a function unless theclassesA,B are sets; this nuance does not invalidate the following.) The two arrows on Fig. 3 are not invertible, but for different reasons. The transition from "Euclidean" to "topological" is forgetful. Topology distinguishes continuous from discontinuous, but does not distinguish rectilinear from curvilinear. Intuition tells us that the Euclidean structure cannot be restored from the topology. A proof uses an automorphism of the topological space (that is,self-homeomorphism) that is not an automorphism of the Euclidean space (that is, not a composition of shifts, rotations and reflections). Such transformation turns the given Euclidean structure into a (isomorphic but) different Euclidean structure; both Euclidean structures correspond to a single topological structure. In contrast, the transition from "3-dim Euclidean" to "Euclidean" is not forgetful; a Euclidean space need not be 3-dimensional, but if it happens to be 3-dimensional, it is full-fledged, no structure is lost. In other words, the latter transition isinjective(one-to-one), while the former transition is not injective (many-to-one). We denote injective transitions by an arrow with a barbed tail, "↣" rather than "→". Both transitions are notsurjective, that is, not every B-space results from some A-space. First, a 3-dim Euclidean space is a special (not general) case of a Euclidean space. Second, a topology of a Euclidean space is a special case of topology (for instance, it must be non-compact, and connected, etc). We denote surjective transitions by a two-headed arrow, "↠" rather than "→". See for example Fig. 4; there, the arrow from "real linear topological" to "real linear" is two-headed, since every real linear space admits some (at least one) topology compatible with its linear structure. Such topology is non-unique in general, but unique when the real linear space is finite-dimensional. For these spaces the transition is both injective and surjective, that is,bijective; see the arrow from "finite-dim real linear topological" to "finite-dim real linear" on Fig. 4. Theinversetransition exists (and could be shown by a second, backward arrow). The two species of structures are thus equivalent. In practice, one makes no distinction between equivalent species of structures.[10]Equivalent structures may be treated as a single structure, as shown by a large box on Fig. 4. The transitions denoted by the arrows obey isomorphisms. That is, two isomorphicA-spaceslead to two isomorphicB-spaces. The diagram on Fig. 4 iscommutative. 
That is, all directed paths in the diagram with the same start and end points lead to the same result. Other diagrams below are also commutative, except for the dashed arrows on Fig. 9. The arrow from "topological" to "measurable" is dashed for the reason explained there: "In order to turn a topological space into a measurable space one endows it with a σ-algebra. The σ-algebra of Borel sets is the most popular, but not the only choice." A solid arrow denotes a prevalent, so-called "canonical" transition that suggests itself naturally and is widely used, often implicitly, by default. For example, speaking about a continuous function on a Euclidean space, one need not specify its topology explicitly. In fact, alternative topologies exist and are sometimes used, for example, the fine topology; but these are always specified explicitly, since they are much less notable than the prevalent topology. A dashed arrow indicates that several transitions are in use and none is clearly prevalent. Two basic kinds of spaces are linear spaces (also called vector spaces) and topological spaces. Linear spaces are of algebraic nature; there are real linear spaces (over the field of real numbers), complex linear spaces (over the field of complex numbers), and more generally, linear spaces over any field. Every complex linear space is also a real linear space (the latter underlies the former), since each complex number can be specified by two real numbers. For example, the complex plane treated as a one-dimensional complex linear space may be downgraded to a two-dimensional real linear space. In contrast, the real line can be treated as a one-dimensional real linear space but not as a complex linear space. See also field extensions. More generally, a vector space over a field also has the structure of a vector space over a subfield of that field. Linear operations, given in a linear space by definition, lead to such notions as straight lines (and planes, and other linear subspaces); parallel lines; and ellipses (and ellipsoids). However, it is impossible to define orthogonal (perpendicular) lines, or to single out circles among ellipses, because in a linear space there is no structure like a scalar product that could be used for measuring angles. The dimension of a linear space is defined as the maximal number of linearly independent vectors or, equivalently, as the minimal number of vectors that span the space; it may be finite or infinite. Two linear spaces over the same field are isomorphic if and only if they are of the same dimension. An n-dimensional complex linear space is also a 2n-dimensional real linear space. Topological spaces are of analytic nature. Open sets, given in a topological space by definition, lead to such notions as continuous functions, paths, maps; convergent sequences, limits; interior, boundary, and exterior. However, uniform continuity, bounded sets, Cauchy sequences, and differentiable functions (paths, maps) remain undefined. Isomorphisms between topological spaces are traditionally called homeomorphisms; these are one-to-one correspondences continuous in both directions. The open interval (0, 1) is homeomorphic to the whole real line (−∞, ∞) but not homeomorphic to the closed interval [0, 1], nor to a circle. The surface of a cube is homeomorphic to a sphere (the surface of a ball) but not homeomorphic to a torus. Euclidean spaces of different dimensions are not homeomorphic, which seems evident, but is not easy to prove.
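To make one of the claims above explicit, a standard example (included here for illustration) of a homeomorphism between the open interval (0, 1) and the whole real line is

f(x)=\tan\!\left(\pi\left(x-\tfrac{1}{2}\right)\right),\qquad f^{-1}(y)=\tfrac{1}{2}+\tfrac{1}{\pi}\arctan(y),

both maps being continuous, so the two spaces are topologically indistinguishable even though one is a bounded subset of the line and the other is not.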
The dimension of a topological space is difficult to define;inductive dimension(based on the observation that the dimension of the boundary of a geometric figure is usually one less than the dimension of the figure itself) andLebesgue covering dimensioncan be used. In the case of an-dimensionalEuclidean space, both topological dimensions are equal ton. Every subset of a topological space is itself a topological space (in contrast, onlylinearsubsets of a linear space are linear spaces). Arbitrary topological spaces, investigated bygeneral topology(called also point-set topology) are too diverse for a complete classification up to homeomorphism.Compact topological spacesare an important class of topological spaces ("species" of this "type"). Every continuous function is bounded on such space. The closed interval [0,1] and theextended real line[−∞,∞] are compact; the open interval (0,1) and the line (−∞,∞) are not. Geometric topology investigatesmanifolds(another "species" of this "type"); these are topological spaces locally homeomorphic to Euclidean spaces (and satisfying a few extra conditions). Low-dimensional manifolds are completely classified up to homeomorphism. Both the linear and topological structures underlie thelinear topological space(in other words, topological vector space) structure. A linear topological space is both a real or complex linear space and a topological space, such that the linear operations are continuous. So a linear space that is also topological is not in general a linear topological space. Every finite-dimensional real or complex linear space is a linear topological space in the sense that it carries one and only one topology that makes it a linear topological space. The two structures, "finite-dimensional real or complex linear space" and "finite-dimensional linear topological space", are thus equivalent, that is, mutually underlying. Accordingly, every invertible linear transformation of a finite-dimensional linear topological space is a homeomorphism. The three notions of dimension (one algebraic and two topological) agree for finite-dimensional real linear spaces. In infinite-dimensional spaces, however, different topologies can conform to a given linear structure, and invertible linear transformations are generally not homeomorphisms. It is convenient to introduceaffineandprojective spacesby means of linear spaces, as follows. An-dimensionallinear subspace of a(n+1)-dimensionallinear space, being itself an-dimensionallinear space, is not homogeneous; it contains a special point, the origin. Shifting it by a vector external to it, one obtains an-dimensionalaffine subspace. It is homogeneous. An affine space need not be included into a linear space, but is isomorphic to an affine subspace of a linear space. Alln-dimensionalaffine spaces over a given field are mutually isomorphic. In the words ofJohn Baez, "an affine space is a vector space that's forgotten its origin". In particular, every linear space is also an affine space. Given ann-dimensionalaffine subspaceAin a(n+1)-dimensionallinear spaceL, a straight line inAmay be defined as the intersection ofAwith atwo-dimensionallinear subspace ofLthat intersectsA: in other words, with a plane through the origin that is not parallel toA. More generally, ak-dimensionalaffine subspace ofAis the intersection ofAwith a(k+1)-dimensionallinear subspace ofLthat intersectsA. Every point of the affine subspaceAis the intersection ofAwith aone-dimensionallinear subspace ofL. 
However, some one-dimensional subspaces of L are parallel to A; in some sense, they intersect A at infinity. The set of all one-dimensional linear subspaces of an (n+1)-dimensional linear space is, by definition, an n-dimensional projective space. And the affine subspace A is embedded into the projective space as a proper subset. However, the projective space itself is homogeneous. A straight line in the projective space corresponds to a two-dimensional linear subspace of the (n+1)-dimensional linear space. More generally, a k-dimensional projective subspace of the projective space corresponds to a (k+1)-dimensional linear subspace of the (n+1)-dimensional linear space, and is isomorphic to the k-dimensional projective space. Defined this way, affine and projective spaces are of algebraic nature; they can be real, complex, and more generally, over any field. Every real or complex affine or projective space is also a topological space. An affine space is a non-compact manifold; a projective space is a compact manifold. In a real projective space a straight line is homeomorphic to a circle, therefore compact, in contrast to a straight line in a linear or affine space. Distances between points are defined in a metric space. Isomorphisms between metric spaces are called isometries. Every metric space is also a topological space. A topological space is called metrizable if it underlies a metric space. All manifolds are metrizable. In a metric space, we can define bounded sets and Cauchy sequences. A metric space is called complete if all Cauchy sequences converge. Every incomplete space is isometrically embedded, as a dense subset, into a complete space (the completion). Every compact metric space is complete; the real line is non-compact but complete; the open interval (0, 1) is incomplete. Every Euclidean space is also a complete metric space. Moreover, all geometric notions immanent to a Euclidean space can be characterized in terms of its metric; for example, the straight segment connecting two given points A and C consists of all points B such that the distance between A and C is equal to the sum of the two distances, between A and B and between B and C. The Hausdorff dimension (related to the number of small balls that cover the given set) applies to metric spaces, and can be non-integer (especially for fractals). For an n-dimensional Euclidean space, the Hausdorff dimension is equal to n. Uniform spaces do not introduce distances, but still allow one to use uniform continuity, Cauchy sequences (or filters or nets), completeness and completion. Every uniform space is also a topological space. Every linear topological space (metrizable or not) is also a uniform space, and is complete in finite dimension but generally incomplete in infinite dimension. More generally, every commutative topological group is also a uniform space. A non-commutative topological group, however, carries two uniform structures, one left-invariant, the other right-invariant. Vectors in a Euclidean space form a linear space, but each vector x also has a length, in other words a norm, ‖x‖. A real or complex linear space endowed with a norm is a normed space. Every normed space is both a linear topological space and a metric space. A Banach space is a complete normed space. Many spaces of sequences or functions are infinite-dimensional Banach spaces. The set of all vectors of norm less than one is called the unit ball of a normed space.
It is a convex, centrally symmetric set, generally not an ellipsoid; for example, it may be a polygon (in the plane) or, more generally, a polytope (in arbitrary finite dimension). The parallelogram law (called also parallelogram identity) generally fails in normed spaces, but holds for vectors in Euclidean spaces, which follows from the fact that the squared Euclidean norm of a vector is its inner product with itself,‖x‖2=(x,x){\displaystyle \lVert x\rVert ^{2}=(x,x)}. Aninner product spaceis a real or complex linear space, endowed with a bilinear or respectively sesquilinear form, satisfying some conditions and called an inner product. Every inner product space is also a normed space. A normed space underlies an inner product space if and only if it satisfies the parallelogram law, or equivalently, if its unit ball is an ellipsoid. Angles between vectors are defined in inner product spaces. AHilbert spaceis defined as a complete inner product space. (Some authors insist that it must be complex, others admit also real Hilbert spaces.) Many spaces of sequences or functions are infinite-dimensional Hilbert spaces. Hilbert spaces are very important forquantum theory.[11] Alln-dimensionalreal inner product spaces are mutually isomorphic. One may say that then-dimensionalEuclidean space is then-dimensionalreal inner product space that forgot its origin. Smooth manifoldsare not called "spaces", but could be. Every smooth manifold is a topological manifold, and can be embedded into a finite-dimensional linear space. Smooth surfaces in a finite-dimensional linear space are smooth manifolds: for example, the surface of an ellipsoid is a smooth manifold, a polytope is not. Real or complex finite-dimensional linear, affine and projective spaces are also smooth manifolds. At each one of its points, a smooth path in a smooth manifold has a tangent vector that belongs to the manifold's tangent space at this point. Tangent spaces to ann-dimensionalsmooth manifold aren-dimensionallinear spaces. The differential of a smooth function on a smooth manifold provides a linear functional on the tangent space at each point. ARiemannian manifold, or Riemann space, is a smooth manifold whose tangent spaces are endowed with inner products satisfying some conditions. Euclidean spaces are also Riemann spaces. Smooth surfaces in Euclidean spaces are Riemann spaces. A hyperbolicnon-Euclideanspace is also a Riemann space. A curve in a Riemann space has a length, and the length of the shortest curve between two points defines a distance, such that the Riemann space is a metric space. The angle between two curves intersecting at a point is the angle between their tangent lines. Waiving positivity of inner products on tangent spaces, one obtainspseudo-Riemann spaces, including the Lorentzian spaces that are very important forgeneral relativity. Waiving distances and angles while retaining volumes (of geometric bodies) one reachesmeasure theory. Besides the volume, a measure generalizes the notions of area, length, mass (or charge) distribution, and also probability distribution, according toAndrey Kolmogorov'sapproach toprobability theory. A "geometric body" of classical mathematics is much more regular than just a set of points. The boundary of the body is of zero volume. Thus, the volume of the body is the volume of its interior, and the interior can be exhausted by an infinite sequence of cubes. 
In contrast, the boundary of an arbitrary set of points can be of non-zero volume (an example: the set of all rational points inside a given cube). Measure theory succeeded in extending the notion of volume to a vast class of sets, the so-calledmeasurable sets. Indeed, non-measurable sets almost never occur in applications. Measurable sets, given in ameasurable spaceby definition, lead to measurable functions and maps. In order to turn a topological space into a measurable space one endows it with aσ-algebra.Theσ-algebraofBorel setsis the most popular, but not the only choice. (Baire sets,universally measurable sets, etc, are also used sometimes.) The topology is not uniquely determined by the Borelσ-algebra;for example, thenorm topologyand theweak topologyon aseparableHilbert space lead to the same Borelσ-algebra. Not everyσ-algebrais the Borelσ-algebraof some topology.[c]Actually, aσ-algebracan be generated by a given collection of sets (or functions) irrespective of any topology. Every subset of a measurable space is itself a measurable space. Standard measurable spaces (also calledstandard Borel spaces) are especially useful due to some similarity to compact spaces (seeEoM). Every bijective measurable mapping between standard measurable spaces is an isomorphism; that is, the inverse mapping is also measurable. And a mapping between such spaces is measurable if and only if its graph is measurable in the product space. Similarly, every bijective continuous mapping between compact metric spaces is a homeomorphism; that is, the inverse mapping is also continuous. And a mapping between such spaces is continuous if and only if its graph is closed in the product space. Every Borel set in a Euclidean space (and more generally, in a complete separable metric space), endowed with the Borelσ-algebra,is a standard measurable space. All uncountable standard measurable spaces are mutually isomorphic. Ameasure spaceis a measurable space endowed with a measure. A Euclidean space with theLebesgue measureis a measure space.Integration theorydefines integrability and integrals of measurable functions on a measure space. Sets of measure 0, called null sets, are negligible. Accordingly, a "mod 0 isomorphism" is defined as isomorphism between subsets of full measure (that is, with negligible complement). Aprobability spaceis a measure space such that the measure of the whole space is equal to 1. The product of any family (finite or not) of probability spaces is a probability space. In contrast, for measure spaces in general, only the product of finitely many spaces is defined. Accordingly, there are many infinite-dimensional probability measures (especially,Gaussian measures), but no infinite-dimensional Lebesgue measures. Standard probability spacesareespecially useful. On a standard probability space a conditional expectation may be treated as the integral over the conditional measure (regular conditional probabilities, see alsodisintegration of measure). Given two standard probability spaces, every homomorphism of theirmeasure algebrasis induced by some measure preserving map. Every probability measure on a standard measurable space leads to a standard probability space. The product of a sequence (finite or not) of standard probability spaces is a standard probability space. All non-atomic standard probability spaces are mutually isomorphic mod 0; one of them is the interval (0,1) with the Lebesgue measure. These spaces are less geometric. 
In particular, the idea of dimension, applicable (in one form or another) to all other spaces, does not apply to measurable, measure and probability spaces. The theoretical study of calculus, known asmathematical analysis, led in the early 20th century to the consideration of linear spaces of real-valued or complex-valued functions. The earliest examples of these werefunction spaces, each one adapted to its own class of problems. These examples shared many common features, and these features were soon abstracted into Hilbert spaces, Banach spaces, and more general topological vector spaces. These were a powerful toolkit for the solution of a wide range of mathematical problems. The most detailed information was carried by a class of spaces calledBanach algebras. These are Banach spaces together with a continuous multiplication operation. An important early example was the Banach algebra of essentially bounded measurable functions on a measure spaceX. This set of functions is a Banach space under pointwise addition and scalar multiplication. With the operation of pointwise multiplication, it becomes a special type of Banach space, one now called a commutativevon Neumann algebra. Pointwise multiplication determines a representation of this algebra on the Hilbert space of square integrable functions onX. An early observation ofJohn von Neumannwas that this correspondence also worked in reverse: Given some mild technical hypotheses, a commutative von Neumann algebra together with a representation on a Hilbert space determines a measure space, and these two constructions (of a von Neumann algebra plus a representation and of a measure space) are mutually inverse. Von Neumann then proposed that non-commutative von Neumann algebras should have geometric meaning, just as commutative von Neumann algebras do. Together withFrancis Murray, he produced a classification of von Neumann algebras. Thedirect integralconstruction shows how to break any von Neumann algebra into a collection of simpler algebras calledfactors. Von Neumann and Murray classified factors into three types. Type I was nearly identical to the commutative case. Types II and III exhibited new phenomena. A type II von Neumann algebra determined a geometry with the peculiar feature that the dimension could be any non-negative real number, not just an integer. Type III algebras were those that were neither types I nor II, and after several decades of effort, these were proven to be closely related to type II factors. A slightly different approach to the geometry of function spaces developed at the same time as von Neumann and Murray's work on the classification of factors. This approach is the theory ofC*-algebras.Here, the motivating example is theC*-algebraC0(X){\displaystyle C_{0}(X)}, whereXis a locally compact Hausdorff topological space. By definition, this is the algebra of continuous complex-valued functions onXthat vanish at infinity (which loosely means that the farther you go from a chosen point, the closer the function gets to zero) with the operations of pointwise addition and multiplication. TheGelfand–Naimark theoremimplied that there is a correspondence between commutativeC*-algebrasand geometric objects: Every commutativeC*-algebrais of the formC0(X){\displaystyle C_{0}(X)}for some locally compact Hausdorff spaceX. 
Consequently it is possible to study locally compact Hausdorff spaces purely in terms of commutative C*-algebras. Non-commutative geometry takes this as inspiration for the study of non-commutative C*-algebras: if there were such a thing as a "non-commutative space X", then its $C_0(X)$ would be a non-commutative C*-algebra; if in addition the Gelfand–Naimark theorem applied to these non-existent objects, then spaces (commutative or not) would be the same as C*-algebras; so, for lack of a direct approach to the definition of a non-commutative space, a non-commutative space is defined to be a non-commutative C*-algebra. Many standard geometric tools can be restated in terms of C*-algebras, and this gives geometrically-inspired techniques for studying non-commutative C*-algebras.

Both of these examples are now cases of a field called non-commutative geometry. The specific examples of von Neumann algebras and C*-algebras are known as non-commutative measure theory and non-commutative topology, respectively.

Non-commutative geometry is not merely a pursuit of generality for its own sake and is not just a curiosity. Non-commutative spaces arise naturally, even inevitably, from some constructions. For example, consider the non-periodic Penrose tilings of the plane by kites and darts. It is a theorem that, in such a tiling, every finite patch of kites and darts appears infinitely often. As a consequence, there is no way to distinguish two Penrose tilings by looking at a finite portion. This makes it impossible to assign the set of all tilings a topology in the traditional sense. Despite this, the Penrose tilings determine a non-commutative C*-algebra, and consequently they can be studied by the techniques of non-commutative geometry. Another example, and one of great interest within differential geometry, comes from foliations of manifolds. These are ways of splitting the manifold up into smaller-dimensional submanifolds called leaves, each of which is locally parallel to others nearby. The set of all leaves can be made into a topological space. However, the example of an irrational rotation shows that this topological space can be inaccessible to the techniques of classical measure theory. But there is a non-commutative von Neumann algebra associated to the leaf space of a foliation, and once again, this gives an otherwise unintelligible space a good geometric structure.

Algebraic geometry studies the geometric properties of polynomial equations. Polynomials are a type of function defined from the basic arithmetic operations of addition and multiplication. Because of this, they are closely tied to algebra. Algebraic geometry offers a way to apply geometric techniques to questions of pure algebra, and vice versa.

Prior to the 1940s, algebraic geometry worked exclusively over the complex numbers, and the most fundamental variety was projective space. The geometry of projective space is closely related to the theory of perspective, and its algebra is described by homogeneous polynomials. All other varieties were defined as subsets of projective space. Projective varieties were subsets defined by a set of homogeneous polynomials. At each point of the projective variety, all the polynomials in the set were required to equal zero. The complement of the zero set of a linear polynomial is an affine space, and an affine variety was the intersection of a projective variety with an affine space.
André Weil saw that geometric reasoning could sometimes be applied in number-theoretic situations where the spaces in question might be discrete or even finite. In pursuit of this idea, Weil rewrote the foundations of algebraic geometry, both freeing algebraic geometry from its reliance on complex numbers and introducing abstract algebraic varieties, which were not embedded in projective space. These are now simply called varieties.

The type of space that underlies most modern algebraic geometry is even more general than Weil's abstract algebraic varieties. It was introduced by Alexander Grothendieck and is called a scheme. One of the motivations for scheme theory is that polynomials are unusually structured among functions, and algebraic varieties are consequently rigid. This presents problems when attempting to study degenerate situations. For example, almost any pair of points on a circle determines a unique line called the secant line, and as the two points move around the circle, the secant line varies continuously. However, when the two points collide, the secant line degenerates to a tangent line. The tangent line is unique, but the geometry of this configuration (a single point on a circle) is not expressive enough to determine a unique line. Studying situations like this requires a theory capable of assigning extra data to degenerate situations.

One of the building blocks of a scheme is a topological space. Topological spaces have continuous functions, but continuous functions are too general to reflect the underlying algebraic structure of interest. The other ingredient in a scheme, therefore, is a sheaf on the topological space, called the "structure sheaf". On each open subset of the topological space, the sheaf specifies a collection of functions, called "regular functions". The topological space and the structure sheaf together are required to satisfy conditions that mean the functions come from algebraic operations.

Like manifolds, schemes are defined as spaces that are locally modeled on a familiar space. In the case of manifolds, the familiar space is Euclidean space. For a scheme, the local models are called affine schemes. Affine schemes provide a direct link between algebraic geometry and commutative algebra. The fundamental objects of study in commutative algebra are commutative rings. If $R$ is a commutative ring, then there is a corresponding affine scheme $\operatorname{Spec} R$ which translates the algebraic structure of $R$ into geometry. Conversely, every affine scheme determines a commutative ring, namely the ring of global sections of its structure sheaf. These two operations are mutually inverse, so affine schemes provide a new language with which to study questions in commutative algebra. By definition, every point in a scheme has an open neighborhood which is an affine scheme.

There are many schemes that are not affine. In particular, projective spaces satisfy a condition called properness which is analogous to compactness. Affine schemes cannot be proper (except in trivial situations like when the scheme has only a single point), and hence no projective space is an affine scheme (except for zero-dimensional projective spaces).
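To make the ring-to-geometry dictionary concrete, the standard construction of the affine scheme of a commutative ring $R$ can be displayed as follows (a textbook statement, included here for orientation rather than taken from this article's sources):

\[
\operatorname{Spec} R \;=\; \{\, \mathfrak{p} \subset R \mid \mathfrak{p} \text{ is a prime ideal} \,\},
\]
with the Zariski topology, whose closed sets are
\[
V(I) \;=\; \{\, \mathfrak{p} \in \operatorname{Spec} R \mid \mathfrak{p} \supseteq I \,\} \qquad \text{for ideals } I \subseteq R,
\]
and the ring is recovered as the global sections of the structure sheaf:
\[
\Gamma\!\left(\operatorname{Spec} R,\ \mathcal{O}_{\operatorname{Spec} R}\right) \;\cong\; R.
\]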
Projective schemes, meaning those that arise as closed subschemes of a projective space, are the single most important family of schemes.[12]

Several generalizations of schemes have been introduced. Michael Artin defined an algebraic space as the quotient of a scheme by the equivalence relations that define étale morphisms. Algebraic spaces retain many of the useful properties of schemes while simultaneously being more flexible. For instance, the Keel–Mori theorem can be used to show that many moduli spaces are algebraic spaces.

More general than an algebraic space is a Deligne–Mumford stack. DM stacks are similar to schemes, but they permit singularities that cannot be described solely in terms of polynomials. They play the same role for schemes that orbifolds do for manifolds. For example, the quotient of the affine plane by a finite group of rotations around the origin yields a Deligne–Mumford stack that is not a scheme or an algebraic space. Away from the origin, the quotient by the group action identifies finite sets of equally spaced points on a circle. But at the origin, the circle consists of only a single point, the origin itself, and the group action fixes this point. In the quotient DM stack, however, this point comes with the extra data of being a quotient. This kind of refined structure is useful in the theory of moduli spaces, and in fact, it was originally introduced to describe moduli of algebraic curves.

A further generalization is given by the algebraic stacks, also called Artin stacks. DM stacks are limited to quotients by finite group actions. While this suffices for many problems in moduli theory, it is too restrictive for others, and Artin stacks permit more general quotients.

In Grothendieck's work on the Weil conjectures, he introduced a new type of topology now called a Grothendieck topology. A topological space (in the ordinary sense) axiomatizes the notion of "nearness", making two points be nearby if and only if they lie in many of the same open sets. By contrast, a Grothendieck topology axiomatizes the notion of "covering". A covering of a space is a collection of subspaces that jointly contain all the information of the ambient space. Since sheaves are defined in terms of coverings, a Grothendieck topology can also be seen as an axiomatization of the theory of sheaves.

Grothendieck's work on his topologies led him to the theory of topoi. In his memoir Récoltes et Semailles, he called them his "most vast conception".[13] A sheaf (either on a topological space or with respect to a Grothendieck topology) is used to express local data. The category of all sheaves carries all possible ways of expressing local data. Since topological spaces are constructed from points, which are themselves a kind of local data, the category of sheaves can therefore be used as a replacement for the original space. Grothendieck consequently defined a topos to be a category of sheaves and studied topoi as objects of interest in their own right. These are now called Grothendieck topoi.

Every topological space determines a topos, and vice versa. There are topological spaces where taking the associated topos loses information, but these are generally considered pathological. (A necessary and sufficient condition is that the topological space be a sober space.) Conversely, there are topoi whose associated topological spaces do not capture the original topos. But, far from being pathological, these topoi can be of great mathematical interest.
For instance, Grothendieck's theory of étale cohomology (which eventually led to the proof of the Weil conjectures) can be phrased as cohomology in the étale topos of a scheme, and this topos does not come from a topological space.

Topological spaces in fact lead to very special topoi called locales. The set of open subsets of a topological space determines a lattice. The axioms for a topological space cause these lattices to be complete Heyting algebras. The theory of locales takes this as its starting point. A locale is defined to be a complete Heyting algebra, and the elementary properties of topological spaces are re-expressed and reproved in these terms. The concept of a locale turns out to be more general than a topological space, in that every sober topological space determines a unique locale, but many interesting locales do not come from topological spaces. Because locales need not have points, the study of locales is somewhat jokingly called pointless topology.

Topoi also display deep connections to mathematical logic. Every Grothendieck topos has a special sheaf called a subobject classifier. This subobject classifier functions like the set of all possible truth values. In the topos of sets, the subobject classifier is the set $\{0,1\}$, corresponding to "False" and "True". But in other topoi, the subobject classifier can be much more complicated. Lawvere and Tierney recognized that axiomatizing the subobject classifier yielded a more general kind of topos, now known as an elementary topos, and that elementary topoi were models of intuitionistic logic. In addition to providing a powerful way to apply tools from logic to geometry, this made possible the use of geometric methods in logic.

Nevertheless, a general definition of "structure" was proposed by Bourbaki;[2] it embraces all types of spaces mentioned above, (nearly?) all types of mathematical structures used till now, and more. It provides a general definition of isomorphism, and justifies transfer of properties between isomorphic structures. However, it was never used actively in mathematical practice (not even in the mathematical treatises written by Bourbaki himself).

For more information on mathematical structures see Wikipedia: mathematical structure, equivalent definitions of mathematical structures, and transport of structure.

The distinction between geometric "spaces" and algebraic "structures" is sometimes clear, sometimes elusive. Clearly, groups are algebraic, while Euclidean spaces are geometric. Modules over rings are as algebraic as groups. In particular, when the ring happens to be a field, the module becomes a linear space; is it algebraic or geometric? And when the linear space is finite-dimensional, over the real numbers, and endowed with an inner product, it becomes a Euclidean space; now geometric. The (algebraic?) field of real numbers is the same as the (geometric?) real line. Its algebraic closure, the (algebraic?) field of complex numbers, is the same as the (geometric?) complex plane. It is first of all "a place we do analysis" (rather than algebra or geometry).

Every space treated in Section "Types of spaces" above, except for the "Non-commutative geometry", "Schemes" and "Topoi" subsections, is a set (the "principal base set" of the structure, according to Bourbaki) endowed with some additional structure; elements of the base set are usually called "points" of this space.
In contrast, elements of (the base set of) an algebraic structure usually are not called "points". However, sometimes one uses more than one principal base set. For example, two-dimensional projective geometry may be formalized via two base sets, the set of points and the set of lines. Moreover, a striking feature of projective planes is the symmetry of the roles played by points and lines. A less geometric example: a graph may be formalized via two base sets, the set of vertices (also called nodes or points) and the set of edges (also called arcs or lines). Generally, finitely many principal base sets and finitely many auxiliary base sets are stipulated by Bourbaki.

Many mathematical structures of geometric flavor treated in the "Non-commutative geometry", "Schemes" and "Topoi" subsections above do not stipulate a base set of points. For example, "pointless topology" (in other words, point-free topology, or locale theory) starts with a single base set whose elements imitate open sets in a topological space (but are not sets of points); see also mereotopology and point-free geometry.

This article was submitted to WikiJournal of Science for external academic peer review in 2017 (reviewer reports). The updated content was reintegrated into the Wikipedia page under a CC-BY-SA-3.0 license (2018). The version of record as reviewed is: Boris Tsirelson; et al. (1 June 2018). "Spaces in mathematics" (PDF). WikiJournal of Science. 1 (1): 2. doi:10.15347/WJS/2018.002. ISSN 2470-6345. Wikidata Q55120290.
https://en.wikipedia.org/wiki/Space_(mathematics)
In statistics, model validation is the task of evaluating whether a chosen statistical model is appropriate or not. Oftentimes in statistical inference, inferences from models that appear to fit their data may be flukes, resulting in a misunderstanding by researchers of the actual relevance of their model. To combat this, model validation is used to test whether a statistical model can hold up to permutations in the data. Model validation is also called model criticism or model evaluation. This topic is not to be confused with the closely related task of model selection, the process of discriminating between multiple candidate models: model validation does not concern so much the conceptual design of models as it tests only the consistency between a chosen model and its stated outputs.

There are many ways to validate a model. Residual plots plot the difference between the actual data and the model's predictions: correlations in the residual plots may indicate a flaw in the model. Cross validation is a method of model validation that iteratively refits the model, each time leaving out just a small sample and comparing whether the samples left out are predicted by the model: there are many kinds of cross validation. Predictive simulation is used to compare simulated data to actual data. External validation involves fitting the model to new data. The Akaike information criterion estimates the quality of a model.

Model validation comes in many forms, and the specific method of model validation a researcher uses is often a constraint of their research design. To emphasize, this means that there is no one-size-fits-all method to validating a model. For example, if a researcher is operating with a very limited set of data, but data they have strong prior assumptions about, they may consider validating the fit of their model by using a Bayesian framework and testing the fit of their model using various prior distributions. However, if a researcher has a lot of data and is testing multiple nested models, these conditions may lend themselves toward cross validation and possibly a leave-one-out test. These are two abstract examples, and any actual model validation will have to consider far more intricacies than described here, but these examples illustrate that model validation methods are always going to be circumstantial. In general, models can be validated using existing data or with new data; both methods are discussed more in the following subsections, and a note of caution is provided, too.

Validation based on existing data involves analyzing the goodness of fit of the model or analyzing whether the residuals seem to be random (i.e. residual diagnostics). This method involves analyzing the model's closeness to the data and trying to understand how well the model predicts its own data. One example of this method is in Figure 1, which shows a polynomial function fit to some data. We see that the polynomial function does not conform well to the data, which appears linear, and might invalidate this polynomial model.

Commonly, statistical models on existing data are validated using a validation set, which may also be referred to as a holdout set. A validation set is a set of data points that the user leaves out when fitting a statistical model. After the statistical model is fitted, the validation set is used as a measure of the model's error. If the model fits well on the initial data but has a large error on the validation set, this is a sign of overfitting.
If new data becomes available, an existing model can be validated by assessing whether the new data is predicted by the old model. If the new data is not predicted by the old model, then the model might not be valid for the researcher's goals. With this in mind, a modern approach to validating a neural network is to test its performance on domain-shifted data. This ascertains whether the model has learned domain-invariant features.[1]

A model can be validated only relative to some application area.[2][3] A model that is valid for one application might be invalid for some other applications. As an example, consider the curve in Figure 1: if the application only used inputs from the interval [0, 2], then the curve might well be an acceptable model.

When doing a validation, there are three notable causes of potential difficulty, according to the Encyclopedia of Statistical Sciences.[4] The three causes are these: lack of data; lack of control of the input variables; uncertainty about the underlying probability distributions and correlations. The usual methods for dealing with difficulties in validation include the following: checking the assumptions made in constructing the model; examining the available data and related model outputs; applying expert judgment.[2] Note that expert judgment commonly requires expertise in the application area.[2]

Expert judgment can sometimes be used to assess the validity of a prediction without obtaining real data: e.g. for the curve in Figure 1, an expert might well be able to assess that a substantial extrapolation will be invalid. Additionally, expert judgment can be used in Turing-type tests, where experts are presented with both real data and related model outputs and then asked to distinguish between the two.[5]

For some classes of statistical models, specialized methods of performing validation are available. As an example, if the statistical model was obtained via a regression, then specialized analyses for regression model validation exist and are generally employed.

Residual diagnostics comprise analyses of the residuals to determine whether the residuals seem to be effectively random. Such analyses typically require estimates of the probability distributions for the residuals. Estimates of the residuals' distributions can often be obtained by repeatedly running the model, i.e. by using repeated stochastic simulations (employing a pseudorandom number generator for random variables in the model). If the statistical model was obtained via a regression, then regression-residual diagnostics exist and may be used; such diagnostics have been well studied.

Cross validation is a method of sampling that involves leaving some parts of the data out of the fitting process and then seeing whether those left-out data are close to or far from where the model predicts they would be. In practice, cross validation techniques fit the model many times with different portions of the data and compare each model fit to the portion it did not use. If the models very rarely describe the data that they were not trained on, then the model is probably wrong.
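To make the cross-validation idea above concrete, here is a minimal k-fold cross-validation sketch in Python. Everything in it is illustrative: the synthetic data, the polynomial model, and the choice of five folds are assumptions for this example, not anything prescribed by the text.

import numpy as np

def k_fold_cv_mse(x, y, k=5, degree=1, seed=0):
    """Estimate out-of-sample MSE of a polynomial fit via k-fold cross validation."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(x))            # shuffle before splitting into folds
    folds = np.array_split(idx, k)
    errors = []
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        coeffs = np.polyfit(x[train], y[train], deg=degree)   # fit on k-1 folds
        pred = np.polyval(coeffs, x[test])                    # predict held-out fold
        errors.append(np.mean((y[test] - pred) ** 2))
    return float(np.mean(errors))

# Hypothetical data that is really linear plus noise.
rng = np.random.default_rng(1)
x = np.linspace(0, 2, 60)
y = 3.0 * x + rng.normal(scale=0.3, size=x.size)

# A high-degree polynomial fits the training folds closely but validates worse,
# which is exactly the overfitting signal described above.
print("degree 1 CV-MSE:", k_fold_cv_mse(x, y, degree=1))
print("degree 9 CV-MSE:", k_fold_cv_mse(x, y, degree=9))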
https://en.wikipedia.org/wiki/Statistical_model_validation
Similarity in network analysis occurs when two nodes (or other more elaborate structures) fall in the same equivalence class.

There are three fundamental approaches to constructing measures of network similarity: structural equivalence, automorphic equivalence, and regular equivalence.[1] There is a hierarchy of the three equivalence concepts: any set of structural equivalences are also automorphic and regular equivalences, and any set of automorphic equivalences are also regular equivalences. Not all regular equivalences are necessarily automorphic or structural, and not all automorphic equivalences are necessarily structural.[2]

Agglomerative hierarchical clustering of nodes on the basis of the similarity of their profiles of ties to other nodes provides a joining tree or dendrogram that visualizes the degree of similarity among cases, and can be used to find approximate equivalence classes.[2]

Usually, our goal in equivalence analysis is to identify and visualize "classes" or clusters of cases. In using cluster analysis, we are implicitly assuming that the similarity or distance among cases reflects a single underlying dimension. It is possible, however, that there are multiple "aspects" or "dimensions" underlying the observed similarities of cases. Factor or components analysis could be applied to correlations or covariances among cases. Alternatively, multi-dimensional scaling could be used (non-metric for data that are inherently nominal or ordinal; metric for valued data).[2]

MDS represents the patterns of similarity or dissimilarity in the tie profiles among the actors (when applied to adjacency or distances) as a "map" in multi-dimensional space. This map lets us see how "close" actors are, whether they "cluster" in multi-dimensional space, and how much variation there is along each dimension.[2]

Two vertices of a network are structurally equivalent if they share many of the same neighbors. There is no actor who has exactly the same set of ties as actor A, so actor A is in a class by itself. The same is true for actors B, C, D and G. Each of these nodes has a unique set of edges to other nodes. E and F, however, fall in the same structural equivalence class. Each has only one edge, and that tie is to B. Since E and F have exactly the same pattern of edges with all the vertices, they are structurally equivalent. The same is true in the case of H and I.[2]

Structural equivalence is the strongest form of similarity. In many real networks, exact equivalence may be rare, and it could be useful to ease the criteria and measure approximate equivalence.

A closely related concept is institutional equivalence: two actors (e.g., firms) are institutionally equivalent if they operate in the same set of institutional fields.[3] While structurally equivalent actors have identical relational patterns or network positions, institutional equivalence captures the similarity of institutional influences that actors experience from being in the same fields, regardless of how similar their network positions are. For example, two banks in Chicago might have very different patterns of ties (e.g., one may be a central node, and the other may be in a peripheral position) such that they are not structural equivalents, but because they both operate in the field of finance and banking and in the same geographically defined field (Chicago), they will be subject to some of the same institutional influences.[3]

A simple count of common neighbors for two vertices is not on its own a very good measure.
A useful measure should take into account the degrees of the vertices, and how many common neighbors other pairs of vertices have. Cosine similarity takes these considerations into account and also allows for varying degrees of vertices. Salton proposed that we regard the i-th and j-th rows/columns of the adjacency matrix as two vectors and use the cosine of the angle between them as a similarity measure. The cosine similarity of i and j is the number of common neighbors divided by the geometric mean of their degrees.[4] Its value lies in the range from 0 to 1. The value of 1 indicates that the two vertices have exactly the same neighbors, while the value of zero means that they do not have any common neighbors. Cosine similarity is technically undefined if one or both of the vertices has degree zero, but by convention we say that cosine similarity is 0 in these cases.[1]

The Pearson product-moment correlation coefficient is an alternative method of normalizing the count of common neighbors. This method compares the number of common neighbors with the expected value that count would take in a network where vertices are connected randomly. This quantity lies strictly in the range from -1 to 1.[1]

Euclidean distance is equal to the number of neighbors that differ between two vertices. It is rather a dissimilarity measure, since it is larger for vertices which differ more. It could be normalized by dividing by its maximum value. The maximum occurs when there are no common neighbors, in which case the distance is equal to the sum of the degrees of the vertices.[1]

Formally, "two vertices are automorphically equivalent if all the vertices can be re-labeled to form an isomorphic graph with the labels of u and v interchanged. Two automorphically equivalent vertices share exactly the same label-independent properties."[5]

More intuitively, actors are automorphically equivalent if we can permute the graph in such a way that exchanging the two actors has no effect on the distances among all actors in the graph.

Suppose the graph describes the organizational structure of a company. Actor A is the central headquarters; actors B, C, and D are managers. Actors E, F and H, I are workers at smaller stores; G is the lone worker at another store. Even though actor B and actor D are not structurally equivalent (they do have the same boss, but not the same workers), they do seem to be "equivalent" in a different sense. Both manager B and D have a boss (in this case, the same boss), and each has two workers. If we swapped them, and also swapped the four workers, all of the distances among all the actors in the network would be exactly identical. There are actually five automorphic equivalence classes: {A}, {B, D}, {C}, {E, F, H, I}, and {G}. Note that the less strict definition of "equivalence" has reduced the number of classes.[2]

Formally, "two actors are regularly equivalent if they are equally related to equivalent others." In other words, regularly equivalent vertices are vertices that, while they do not necessarily share neighbors, have neighbors who are themselves similar.[5]

Two mothers, for example, are equivalent, because each has a similar pattern of connections with a husband, children, etc. The two mothers do not have ties to the same husband or the same children, so they are not structurally equivalent. Because different mothers may have different numbers of husbands and children, they will not be automorphically equivalent.
But they are similar because they have the same relationships with some member or members of another set of actors (who are themselves regarded as equivalent because of the similarity of their ties to a member of the set "mother").[2]

In the graph there are three regular equivalence classes. The first is actor A; the second is composed of the three actors B, C, and D; the third consists of the remaining five actors E, F, G, H, and I. The easiest class to see is the five actors across the bottom of the diagram (E, F, G, H, and I). These actors are regularly equivalent to one another because each has an identical pattern of ties with actors in the other classes. Actors B, C, and D form a class similarly. B and D actually have ties with two members of the third class, whereas actor C has a tie to only one member of the third class, but this doesn't matter, as there is a tie to some member of the third class. Actor A is in a class by itself, defined by its unique pattern of ties to members of the other classes.
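As an illustration of the cosine similarity measure defined earlier, here is a small Python sketch. The adjacency matrix encodes the organizational example used above (headquarters A; managers B, C, D; workers E, F under B, G under C, and H, I under D); the function itself is just the common-neighbor count divided by the geometric mean of the degrees, with the zero-degree convention mentioned in the text.

import numpy as np

def cosine_similarity(adj, i, j):
    """Cosine similarity of vertices i and j: common neighbors divided by
    the geometric mean of their degrees (0 by convention if a degree is 0)."""
    common = adj[i] @ adj[j]                      # number of shared neighbors
    denom = np.sqrt(adj[i].sum() * adj[j].sum())  # geometric mean of the degrees
    return common / denom if denom > 0 else 0.0

# Adjacency matrix for the organizational graph in the text:
# edges A-B, A-C, A-D, B-E, B-F, C-G, D-H, D-I.
labels = "ABCDEFGHI"
edges = [("A","B"), ("A","C"), ("A","D"), ("B","E"),
         ("B","F"), ("C","G"), ("D","H"), ("D","I")]
n = len(labels)
adj = np.zeros((n, n))
for u, v in edges:
    a, b = labels.index(u), labels.index(v)
    adj[a, b] = adj[b, a] = 1

# E and F share their only neighbor (B), so they are structurally equivalent.
print(cosine_similarity(adj, labels.index("E"), labels.index("F")))  # 1.0
# E and G have no common neighbor (B vs. C), so the similarity is 0.
print(cosine_similarity(adj, labels.index("E"), labels.index("G")))  # 0.0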
https://en.wikipedia.org/wiki/Similarity_(network_science)
In numerical analysis, hill climbing is a mathematical optimization technique which belongs to the family of local search. It is an iterative algorithm that starts with an arbitrary solution to a problem, then attempts to find a better solution by making an incremental change to the solution. If the change produces a better solution, another incremental change is made to the new solution, and so on until no further improvements can be found.

For example, hill climbing can be applied to the travelling salesman problem. It is easy to find an initial solution that visits all the cities, but it will likely be very poor compared to the optimal solution. The algorithm starts with such a solution and makes small improvements to it, such as switching the order in which two cities are visited. Eventually, a much shorter route is likely to be obtained.

Hill climbing finds optimal solutions for convex problems; for other problems it will find only local optima (solutions that cannot be improved upon by any neighboring configurations), which are not necessarily the best possible solution (the global optimum) out of all possible solutions (the search space). Examples of algorithms that solve convex problems by hill-climbing include the simplex algorithm for linear programming and binary search.[1]: 253 To attempt to avoid getting stuck in local optima, one could use restarts (i.e. repeated local search), or more complex schemes based on iterations (like iterated local search), or on memory (like reactive search optimization and tabu search), or on memory-less stochastic modifications (like simulated annealing).

The relative simplicity of the algorithm makes it a popular first choice amongst optimizing algorithms. It is used widely in artificial intelligence, for reaching a goal state from a starting node. Different choices for next nodes and starting nodes are used in related algorithms. Although more advanced algorithms such as simulated annealing or tabu search may give better results, in some situations hill climbing works just as well. Hill climbing can often produce a better result than other algorithms when the amount of time available to perform a search is limited, such as with real-time systems, so long as a small number of increments typically converges on a good solution (the optimal solution or a close approximation). At the other extreme, bubble sort can be viewed as a hill climbing algorithm (every adjacent element exchange decreases the number of disordered element pairs), yet this approach is far from efficient for even modest N, as the number of exchanges required grows quadratically.

Hill climbing is an anytime algorithm: it can return a valid solution even if it is interrupted at any time before it ends.

Hill climbing attempts to maximize (or minimize) a target function $f(\mathbf{x})$, where $\mathbf{x}$ is a vector of continuous and/or discrete values. At each iteration, hill climbing will adjust a single element in $\mathbf{x}$ and determine whether the change improves the value of $f(\mathbf{x})$. (Note that this differs from gradient descent methods, which adjust all of the values in $\mathbf{x}$ at each iteration according to the gradient of the hill.) With hill climbing, any change that improves $f(\mathbf{x})$ is accepted, and the process continues until no change can be found to improve the value of $f(\mathbf{x})$. Then $\mathbf{x}$ is said to be "locally optimal".
In discrete vector spaces, each possible value for $\mathbf{x}$ may be visualized as a vertex in a graph. Hill climbing will follow the graph from vertex to vertex, always locally increasing (or decreasing) the value of $f(\mathbf{x})$, until a local maximum (or local minimum) $x_m$ is reached.

In simple hill climbing, the first closer node is chosen, whereas in steepest ascent hill climbing all successors are compared and the closest to the solution is chosen. Both forms fail if there is no closer node, which may happen if there are local maxima in the search space which are not solutions. Steepest ascent hill climbing is similar to best-first search, which tries all possible extensions of the current path instead of only one.[2]

Stochastic hill climbing does not examine all neighbors before deciding how to move. Rather, it selects a neighbor at random, and decides (based on the amount of improvement in that neighbor) whether to move to that neighbor or to examine another.

Coordinate descent does a line search along one coordinate direction at the current point in each iteration. Some versions of coordinate descent randomly pick a different coordinate direction each iteration.

Random-restart hill climbing is a meta-algorithm built on top of the hill climbing algorithm. It is also known as shotgun hill climbing. It iteratively does hill-climbing, each time with a random initial condition $x_0$. The best $x_m$ is kept: if a new run of hill climbing produces a better $x_m$ than the stored state, it replaces the stored state. Random-restart hill climbing is a surprisingly effective algorithm in many cases. It turns out that it is often better to spend CPU time exploring the space than to carefully optimize from an initial condition.

Hill climbing will not necessarily find the global maximum, but may instead converge on a local maximum. This problem does not occur if the heuristic is convex. However, as many functions are not convex, hill climbing may often fail to reach a global maximum. Other local search algorithms try to overcome this problem, such as stochastic hill climbing, random walks and simulated annealing.

Ridges are a challenging problem for hill climbers that optimize in continuous spaces. Because hill climbers only adjust one element in the vector at a time, each step will move in an axis-aligned direction. If the target function creates a narrow ridge that ascends in a non-axis-aligned direction (or if the goal is to minimize, a narrow alley that descends in a non-axis-aligned direction), then the hill climber can only ascend the ridge (or descend the alley) by zig-zagging. If the sides of the ridge (or alley) are very steep, then the hill climber may be forced to take very tiny steps as it zig-zags toward a better position. Thus, it may take an unreasonable length of time for it to ascend the ridge (or descend the alley).

By contrast, gradient descent methods can move in any direction that the ridge or alley may ascend or descend. Hence, gradient descent or the conjugate gradient method is generally preferred over hill climbing when the target function is differentiable. Hill climbers, however, have the advantage of not requiring the target function to be differentiable, so hill climbers may be preferred when the target function is complex.

Another problem that sometimes occurs with hill climbing is that of a plateau.
A plateau is encountered when the search space is flat, or sufficiently flat that the value returned by the target function is indistinguishable from the value returned for nearby regions, due to the precision used by the machine to represent its value. In such cases, the hill climber may not be able to determine in which direction it should step, and may wander in a direction that never leads to improvement. Contrast genetic algorithm; random optimization.
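The following Python sketch implements the simple hill climbing variant described above: it adjusts one element of the vector at a time and accepts any change that improves the target function, stopping at a locally optimal point. The step size, the iteration cap, and the concave test function are illustrative assumptions, not part of the algorithm itself.

def hill_climb(f, x, step=0.1, max_passes=10_000):
    """Simple hill climbing: adjust one coordinate at a time, keep any change
    that improves f, and stop when no single-coordinate move helps."""
    best = f(x)
    for _ in range(max_passes):
        improved = False
        for i in range(len(x)):
            for delta in (step, -step):
                candidate = list(x)
                candidate[i] += delta
                value = f(candidate)
                if value > best:            # accept any improving move
                    x, best, improved = candidate, value, True
        if not improved:                    # locally optimal: no neighbor is better
            return x, best
    return x, best

# A concave "hill" with its single peak at (3, -2); because there is only one
# optimum, hill climbing reaches it, as the convexity remark above predicts.
f = lambda p: -(p[0] - 3) ** 2 - (p[1] + 2) ** 2
x, val = hill_climb(f, [0.0, 0.0])
print([round(c, 2) for c in x], round(val, 4))   # approximately [3.0, -2.0]

On a non-convex target, the same loop would simply stop at whichever local maximum it reaches first, which is where the random-restart variant described above becomes useful.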
https://en.wikipedia.org/wiki/Hill_climbing
Data integrity is the maintenance of, and the assurance of, data accuracy and consistency over its entire life-cycle.[1] It is a critical aspect of the design, implementation, and usage of any system that stores, processes, or retrieves data. The term is broad in scope and may have widely different meanings depending on the specific context, even under the same general umbrella of computing. It is at times used as a proxy term for data quality,[2] while data validation is a prerequisite for data integrity.[3]

Data integrity is the opposite of data corruption.[4] The overall intent of any data integrity technique is the same: ensure data is recorded exactly as intended (such as a database correctly rejecting mutually exclusive possibilities). Moreover, upon later retrieval, ensure the data is the same as when it was originally recorded. In short, data integrity aims to prevent unintentional changes to information. Data integrity is not to be confused with data security, the discipline of protecting data from unauthorized parties.

Any unintended change to data as the result of a storage, retrieval or processing operation, including malicious intent, unexpected hardware failure, and human error, is a failure of data integrity. If the changes are the result of unauthorized access, it may also be a failure of data security. Depending on the data involved, this could manifest as something as benign as a single pixel in an image appearing a different color than was originally recorded, as the loss of vacation pictures or a business-critical database, or even as catastrophic loss of human life in a life-critical system.

Physical integrity deals with challenges associated with correctly storing and fetching the data itself. Challenges with physical integrity may include electromechanical faults, design flaws, material fatigue, corrosion, power outages, natural disasters, and other special environmental hazards such as ionizing radiation, extreme temperatures, pressures and g-forces. Ensuring physical integrity includes methods such as redundant hardware, an uninterruptible power supply, certain types of RAID arrays, radiation-hardened chips, error-correcting memory, use of a clustered file system, using file systems that employ block-level checksums such as ZFS, storage arrays that compute parity calculations such as exclusive or or use a cryptographic hash function, and even having a watchdog timer on critical subsystems. Physical integrity often makes extensive use of error-detecting algorithms known as error-correcting codes. Human-induced data integrity errors are often detected through the use of simpler checks and algorithms, such as the Damm algorithm or Luhn algorithm. These are used to maintain data integrity after manual transcription from one computer system to another by a human intermediary (e.g. credit card or bank routing numbers). Computer-induced transcription errors can be detected through hash functions.

In production systems, these techniques are used together to ensure various degrees of data integrity. For example, a computer file system may be configured on a fault-tolerant RAID array, but might not provide block-level checksums to detect and prevent silent data corruption. As another example, a database management system might be compliant with the ACID properties, but the RAID controller or hard disk drive's internal write cache might not be.

This type of integrity is concerned with the correctness or rationality of a piece of data, given a particular context.
This includes topics such as referential integrity and entity integrity in a relational database, or correctly ignoring impossible sensor data in robotic systems. These concerns involve ensuring that the data "makes sense" given its environment. Challenges include software bugs, design flaws, and human errors. Common methods of ensuring logical integrity include check constraints, foreign key constraints, program assertions, and other run-time sanity checks. Physical and logical integrity often share many challenges, such as human errors and design flaws, and both must appropriately deal with concurrent requests to record and retrieve data, the latter of which is entirely a subject on its own. If a data sector has only a logical error, it can be reused by overwriting it with new data. In case of a physical error, the affected data sector is permanently unusable.

Data integrity contains guidelines for data retention, specifying or guaranteeing the length of time data can be retained in a particular database (typically a relational database). To achieve data integrity, these rules are consistently and routinely applied to all data entering the system, and any relaxation of enforcement could cause errors in the data. Implementing checks on the data as close as possible to the source of input (such as human data entry) causes less erroneous data to enter the system. Strict enforcement of data integrity rules results in lower error rates, and time saved troubleshooting and tracing erroneous data and the errors it causes to algorithms.

Data integrity also includes rules defining the relations a piece of data can have to other pieces of data, such as a Customer record being allowed to link to purchased Products, but not to unrelated data such as Corporate Assets. Data integrity often includes checks and correction for invalid data, based on a fixed schema or a predefined set of rules. An example is textual data entered where a date-time value is required. Rules for data derivation are also applicable, specifying how a data value is derived based on algorithm, contributors and conditions. They also specify the conditions on how a data value could be re-derived.

Data integrity is normally enforced in a database system by a series of integrity constraints or rules. Three types of integrity constraints are an inherent part of the relational data model: entity integrity, referential integrity and domain integrity. If a database supports these features, it is the responsibility of the database to ensure data integrity as well as the consistency model for the data storage and retrieval. If a database does not support these features, it is the responsibility of the applications to ensure data integrity while the database supports the consistency model for the data storage and retrieval.

Having a single, well-controlled, and well-defined data-integrity system increases stability, performance, re-usability and maintainability. Modern databases support these features (see Comparison of relational database management systems), and it has become the de facto responsibility of the database to ensure data integrity. Companies, and indeed many database systems, offer products and services to migrate legacy systems to modern databases.

An example of a data-integrity mechanism is the parent-and-child relationship of related records.
If a parent record owns one or more related child records, all of the referential integrity processes are handled by the database itself, which automatically ensures the accuracy and integrity of the data, so that no child record can exist without a parent (also called being orphaned) and no parent loses its child records. It also ensures that no parent record can be deleted while the parent record owns any child records. All of this is handled at the database level and does not require coding integrity checks into each application.

Various research results show that neither widespread filesystems (including UFS, Ext, XFS, JFS and NTFS) nor hardware RAID solutions provide sufficient protection against data integrity problems.[5][6][7][8][9] Some filesystems (including Btrfs and ZFS) provide internal data and metadata checksumming that is used for detecting silent data corruption and improving data integrity. If a corruption is detected that way and internal RAID mechanisms provided by those filesystems are also used, such filesystems can additionally reconstruct corrupted data in a transparent way.[10] This approach allows improved data integrity protection covering the entire data path, which is usually known as end-to-end data protection.[11]
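As a concrete instance of the check-digit algorithms mentioned above, here is a short Python implementation of the Luhn check, of the kind used to catch human transcription errors in credit card numbers. The test values are standard illustrative numbers, not taken from this article.

def luhn_valid(number: str) -> bool:
    """Check-digit validation with the Luhn algorithm: detects any
    single-digit transcription error in a manually entered number."""
    digits = [int(d) for d in number if d.isdigit()]
    total = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:        # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9        # equivalent to summing the two digits of d
        total += d
    return total % 10 == 0

print(luhn_valid("79927398713"))  # True  (a standard Luhn test number)
print(luhn_valid("79927398714"))  # False (one digit corrupted)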
https://en.wikipedia.org/wiki/Database_integrity
Preference regression is a statistical technique used by marketers to determine consumers' preferred core benefits. It usually supplements product positioning techniques like multidimensional scaling or factor analysis and is used to create ideal vectors on perceptual maps.

Starting with raw data from surveys, researchers apply positioning techniques to determine important dimensions and plot the position of competing products on these dimensions. Next they regress the survey data against the dimensions. The independent variables are the data collected in the survey. The dependent variable is the preference datum. Like all regression methods, the computer fits weights to best predict the data. The resultant regression line is referred to as an ideal vector because the slope of the vector is the ratio of the preferences for the two dimensions.

If all the data are used in the regression, the program will derive a single equation and hence a single ideal vector. This tends to be a blunt instrument, so researchers refine the process with cluster analysis. This creates clusters that reflect market segments. Separate preference regressions are then done on the data within each segment. This provides an ideal vector for each segment.

The self-stated importance method is an alternative method in which direct survey data is used to determine the weightings rather than statistical imputations. A third method is conjoint analysis, in which an additive method is used.
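A minimal sketch of the regression step in Python, under assumed data: the two perceptual dimensions, their names, and the preference scores are all hypothetical values invented for this example. The ideal vector's slope is recovered as the ratio of the fitted preference weights, as described above.

import numpy as np

# Hypothetical survey data: each entry is a product's perceived position on two
# positioning dimensions, plus the respondents' stated preference for it.
dim1 = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 2.5, 3.5])   # e.g. "sportiness"
dim2 = np.array([4.0, 3.5, 3.0, 2.0, 1.0, 3.2, 2.4])   # e.g. "economy"
preference = np.array([2.1, 3.0, 3.8, 4.9, 5.8, 3.4, 4.3])

# Regress preference on the two dimensions (with an intercept term).
X = np.column_stack([np.ones_like(dim1), dim1, dim2])
coef, *_ = np.linalg.lstsq(X, preference, rcond=None)
b0, b1, b2 = coef

# The ideal vector's slope on the perceptual map is the ratio of the weights.
print("weights:", round(b1, 3), round(b2, 3))
print("ideal-vector slope (dim2 per unit of dim1):", round(b2 / b1, 3))

Running a separate regression on the respondents within each cluster, as the text describes, would yield one such ideal vector per market segment.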
https://en.wikipedia.org/wiki/Preference_regression
In computing, a plug-in (or plugin, add-in, addin, add-on, or addon) is a software component that extends the functionality of an existing software system without requiring the system to be re-built. A plug-in feature is one way that a system can be customizable.[1]

Applications support plug-ins for a variety of reasons, such as enabling third-party developers to extend an application, supporting the easy addition of new features, reducing the size of an application, and separating source code from an application because of incompatible software licenses. Plug-ins are used across many categories of applications.

The host application provides services which the plug-in can use, including a way for plug-ins to register themselves with the host application and a protocol for the exchange of data with plug-ins. Plug-ins depend on the services provided by the host application and do not usually work by themselves. Conversely, the host application operates independently of the plug-ins, making it possible for end-users to add and update plug-ins dynamically without needing to make changes to the host application.[11][12]

Programmers typically implement plug-ins as shared libraries, which get dynamically loaded at run time. HyperCard supported a similar facility, but more commonly included the plug-in code in the HyperCard documents (called stacks) themselves. Thus the HyperCard stack became a self-contained application in its own right, distributable as a single entity that end-users could run without the need for additional installation steps. Programs may also implement plug-ins by loading a directory of simple script files written in a scripting language like Python or Lua.

In the context of a web browser, a helper application is a separate program, like IrfanView or Adobe Reader, that extends the functionality of a browser.[13][14] A helper application extends the functionality of an application, but unlike the typical plug-in that is loaded into the host application's address space, a helper application is a separate application. With a separate address space, the extension cannot crash the host application, as is possible if they share an address space.[15]

In the mid-1970s, the EDT text editor ran on the Unisys VS/9 operating system for the UNIVAC Series 90 mainframe computer. It allowed a program to be run from the editor, which could access the in-memory edit buffer.[16] The plug-in executable could call the editor to inspect and change the text. The University of Waterloo Fortran compiler used this to allow interactive compilation of Fortran programs.

Early personal computer software with plug-in capability included HyperCard and QuarkXPress on the Apple Macintosh, both released in 1987. In 1988, Silicon Beach Software included plug-in capability in Digital Darkroom and SuperPaint.
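To illustrate the script-directory approach mentioned above, here is a minimal Python plug-in loader. The `register(host)` entry point is a hypothetical convention invented for this sketch, standing in for whatever registration protocol a real host application would define.

import importlib.util
import pathlib

def load_plugins(directory, host_services):
    """Load every .py file in `directory` as a plug-in module and let it
    register itself with the host via a `register(host)` entry point."""
    plugins = []
    for path in sorted(pathlib.Path(directory).glob("*.py")):
        spec = importlib.util.spec_from_file_location(path.stem, path)
        module = importlib.util.module_from_spec(spec)
        spec.loader.exec_module(module)          # run the plug-in's code
        if hasattr(module, "register"):          # the agreed-upon protocol
            plugins.append(module.register(host_services))
    return plugins

A plug-in dropped into the directory then only needs to define `register(host)`; the host discovers and loads it at run time without being re-built, which is the defining property described above.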
https://en.wikipedia.org/wiki/Plug-in_(computing)
The following tables compare enterprise bookmarking platforms. The first table provides an overview of enterprise bookmarking platforms. The platforms listed refer to an application that is installed on a web server (usually requiring MySQL or another database and PHP, Perl, Python, or some other language for web apps).

The next table lists the types of data that can be tagged. Tags and metadata can be used to enrich the previously described types of data and content.

A further table lists the default capabilities each platform provides. Enterprise bookmarking tools differ from social bookmarking tools in that they often have to meet taxonomy constraints. Tag management capabilities are the uphill (e.g. faceted classification, predefined tags) and downhill gardening (e.g. tag renaming, moving, merging) abilities that can be put in place to manage the folksonomy generated from user tagging.

Security abilities are compared at the platform level and at the application level.

In the case of web applications, the operating system listed describes the server OS. For centrally-hosted websites that are proprietary, this is not applicable. Any client OS can connect to a web service unless stated otherwise in a footnote.
https://en.wikipedia.org/wiki/Comparison_of_enterprise_bookmarking_platforms
In mathematics, localization of a category consists of adding to a category inverse morphisms for some collection of morphisms, constraining them to become isomorphisms. This is formally similar to the process of localization of a ring; in general it makes objects isomorphic that were not so before. In homotopy theory, for example, there are many examples of mappings that are invertible up to homotopy, and so correspondingly large classes of homotopy-equivalent spaces. Calculus of fractions is another name for working in a localized category.

A category C consists of objects and morphisms between these objects. The morphisms reflect relations between the objects. In many situations, it is meaningful to replace C by another category C' in which certain morphisms are forced to be isomorphisms. This process is called localization.

For example, in the category of R-modules (for some fixed commutative ring R) the multiplication by a fixed element r of R is typically (i.e., unless r is a unit) not an isomorphism:
\[
M \to M, \quad m \mapsto rm.
\]
The category that is most closely related to R-modules, but where this map is an isomorphism, turns out to be the category of $R[S^{-1}]$-modules. Here $R[S^{-1}]$ is the localization of R with respect to the (multiplicatively closed) subset S consisting of all powers of r,
\[
S = \{1, r, r^2, r^3, \dots\}.
\]
The expression "most closely related" is formalized by two conditions: first, there is a functor
\[
\varphi \colon \operatorname{Mod}_R \to \operatorname{Mod}_{R[S^{-1}]}
\]
sending any R-module to its localization with respect to S. Moreover, given any category C and any functor $F \colon \operatorname{Mod}_R \to C$ sending the multiplication map by r on any R-module (see above) to an isomorphism of C, there is a unique functor $G \colon \operatorname{Mod}_{R[S^{-1}]} \to C$ such that $F = G \circ \varphi$.

The above example of localization of R-modules is abstracted in the following definition. In this shape, it applies in many more examples, some of which are sketched below.

Given a category C and some class W of morphisms in C, the localization $C[W^{-1}]$ is another category which is obtained by inverting all the morphisms in W. More formally, it is characterized by a universal property: there is a natural localization functor $C \to C[W^{-1}]$, and given another category D, a functor $F \colon C \to D$ factors uniquely over $C[W^{-1}]$ if and only if F sends all arrows in W to isomorphisms. Thus, the localization of the category is unique up to unique isomorphism of categories, provided that it exists.

One construction of the localization is done by declaring that its objects are the same as those in C, but the morphisms are enhanced by adding a formal inverse for each morphism in W. Under suitable hypotheses on W,[1] the morphisms from object X to object Y are given by roofs
\[
X \xleftarrow{\;f\;} X' \longrightarrow Y
\]
(where X' is an arbitrary object of C and f is in the given class W of morphisms), modulo certain equivalence relations. These relations turn the map going in the "wrong" direction into an inverse of f. This "calculus of fractions" can be seen as a generalization of the construction of rational numbers as equivalence classes of pairs of integers.

This procedure, however, in general yields a proper class of morphisms between X and Y. Typically, the morphisms in a category are only allowed to form a set. Some authors simply ignore such set-theoretic issues. A rigorous construction of localization of categories, avoiding these set-theoretic issues, was one of the initial reasons for the development of the theory of model categories: a model category M is a category in which there are three classes of maps; one of these classes is the class of weak equivalences.
The homotopy category Ho(M) is then the localization with respect to the weak equivalences. The axioms of a model category ensure that this localization can be defined without set-theoretical difficulties.

Some authors also define a localization of a category C to be an idempotent and coaugmented functor. A coaugmented functor is a pair (L, l) where L: C → C is an endofunctor and l: Id → L is a natural transformation from the identity functor to L (called the coaugmentation). A coaugmented functor is idempotent if, for every X, both maps L(l_X), l_{L(X)}: L(X) → LL(X) are isomorphisms. It can be proven that in this case, both maps are equal.[2]

This definition is related to the one given above as follows: applying the first definition, there is, in many situations, not only a canonical functor $C \to C[W^{-1}]$, but also a functor in the opposite direction,
\[
C[W^{-1}] \to C.
\]
For example, modules over the localization $R[S^{-1}]$ of a ring are also modules over R itself, giving a functor
\[
\operatorname{Mod}_{R[S^{-1}]} \to \operatorname{Mod}_R.
\]
In this case, the composition
\[
C \to C[W^{-1}] \to C
\]
is a localization of C in the sense of an idempotent and coaugmented functor.

Serre introduced the idea of working in homotopy theory modulo some class C of abelian groups. This meant that groups A and B were treated as isomorphic if, for example, A/B lay in C.

In the theory of modules over a commutative ring R, when R has Krull dimension ≥ 2, it can be useful to treat modules M and N as pseudo-isomorphic if M/N has support of codimension at least two. This idea is much used in Iwasawa theory.

The derived category of an abelian category is much used in homological algebra. It is the localization of the category of chain complexes (up to homotopy) with respect to the quasi-isomorphisms.

Given an abelian category A and a Serre subcategory B, one can define the quotient category A/B, which is an abelian category equipped with an exact functor from A to A/B that is essentially surjective and has kernel B. This quotient category can be constructed as a localization of A by the class of morphisms whose kernel and cokernel are both in B.

An isogeny from an abelian variety A to another one B is a surjective morphism with finite kernel. Some theorems on abelian varieties require the idea of abelian variety up to isogeny for their convenient statement. For example, given an abelian subvariety A1 of A, there is another subvariety A2 of A such that A1 × A2 is isogenous to A (Poincaré's reducibility theorem: see for example Abelian Varieties by David Mumford). To call this a direct sum decomposition, we should work in the category of abelian varieties up to isogeny.

The localization of a topological space, introduced by Dennis Sullivan, produces another topological space whose homology is a localization of the homology of the original space.

A much more general concept from homotopical algebra, including as special cases both the localization of spaces and of categories, is the Bousfield localization of a model category. Bousfield localization forces certain maps to become weak equivalences, which is in general weaker than forcing them to become isomorphisms.[3]
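For reference, the universal property that characterizes the localization, as described in the definition above, can be written out in standard notation as follows:
\[
Q \colon C \longrightarrow C[W^{-1}], \qquad Q(w) \text{ is an isomorphism for every } w \in W,
\]
and for every functor $F \colon C \to D$ sending all arrows in W to isomorphisms,
\[
\exists!\, G \colon C[W^{-1}] \longrightarrow D \quad \text{such that} \quad F = G \circ Q.
\]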
https://en.wikipedia.org/wiki/Localization_of_a_category
Software security assurance is a process that helps design and implement software that protects the data and resources contained in and controlled by that software. Software is itself a resource and thus must be afforded appropriate security.

Software security assurance (SSA) is the process of ensuring that software is designed to operate at a level of security that is consistent with the potential harm that could result from the loss, inaccuracy, alteration, unavailability, or misuse of the data and resources that it uses, controls, and protects.[1]

The software security assurance process begins by identifying and categorizing the information that is to be contained in, or used by, the software. The information should be categorized according to its sensitivity. For example, in the lowest category, the impact of a security violation is minimal (i.e. the impact on the software owner's mission, functions, or reputation is negligible). For a top category, however, the impact may pose a threat to human life; may have an irreparable impact on the software owner's missions, functions, image, or reputation; or may result in the loss of significant assets or resources.

Once the information is categorized, security requirements can be developed. The security requirements should address access control, including network access and physical access; data management and data access; environmental controls (power, air conditioning, etc.) and off-line storage; human resource security; and audit trails and usage records.

All security vulnerabilities in software are the result of security bugs, or defects, within the software. In most cases, these defects are created by two primary causes: (1) non-conformance, or a failure to satisfy requirements; and (2) an error or omission in the software requirements. A non-conformance may be simple (the most common is a coding error or defect) or more complex (i.e., a subtle timing error or input validation error). The important point about non-conformances is that verification and validation techniques are designed to detect them, and security assurance techniques are designed to prevent them. Improvements in these methods, through a software security assurance program, can improve the security of software.

The most serious security problems with software-based systems are those that develop when the software requirements are incorrect, inappropriate, or incomplete for the system situation. Unfortunately, errors or omissions in requirements are more difficult to identify. For example, the software may perform exactly as required under normal use, but the requirements may not correctly deal with some system state. When the system enters this problem state, unexpected and undesirable behavior may result. This type of problem cannot be handled within the software discipline; it results from a failure of the system and software engineering processes which developed and allocated the system requirements to the software.

There are two basic types of software security assurance activities, and at a minimum a software security assurance program should ensure that both are addressed. Improving the software development process and building better software are ways to improve software security, by producing software with fewer defects and vulnerabilities. A first-order approach is to identify the critical software components that control security-related functions and pay special attention to them throughout the development and testing process. This approach helps to focus scarce security resources on the most critical areas.
There are many commercial off-the-shelf (COTS) software packages that are available to support software security assurance activities. However, before they are used, these tools must be carefully evaluated and their effectiveness must be assured.

One way to improve software security is to gain a better understanding of the most common weaknesses that can affect software security. With that in mind, there is a current community-based program called the Common Weakness Enumeration project,[2] which is sponsored by The Mitre Corporation to identify and describe such weaknesses. The list, which is currently in a very preliminary form, contains descriptions of common software weaknesses, faults, and flaws.

Security architecture/design analysis verifies that the software design correctly implements security requirements. Generally speaking, there are four basic techniques that are used for security architecture/design analysis.[3][4]

Logic analysis evaluates the equations, algorithms, and control logic of the software design.

Data analysis evaluates the description and intended usage of each data item used in the design of the software component. The use of interrupts and their effect on data should receive special attention to ensure interrupt handling routines do not alter critical data used by other routines.

Interface analysis verifies the proper design of a software component's interfaces with other components of the system, including computer hardware, software, and end-users.

Constraint analysis evaluates the design of a software component against restrictions imposed by requirements and real-world limitations. The design must be responsive to all known or anticipated restrictions on the software component. These restrictions may include timing, sizing, and throughput constraints, input and output data limitations, equation and algorithm limitations, and other design limitations.

Code analysis verifies that the software source code is written correctly, implements the desired design, and does not violate any security requirements. Generally speaking, the techniques used in the performance of code analysis mirror those used in design analysis.

Secure code reviews are conducted during and at the end of the development phase to determine whether established security requirements, security design concepts, and security-related specifications have been satisfied. These reviews typically consist of the presentation of material to a review group. Secure code reviews are most effective when conducted by personnel who have not been directly involved in the development of the software being reviewed.

Informal secure code reviews can be conducted on an as-needed basis. To conduct an informal review, the developer simply selects one or more reviewers and provides and/or presents the material to be reviewed. The material may be as informal as pseudo-code or hand-written documentation.

Formal secure code reviews are conducted at the end of the development phase for each software component. The client of the software appoints the formal review group, which may make or affect a "go/no-go" decision to proceed to the next step of the software development life cycle.

A secure code inspection or walkthrough is a detailed examination of a product on a step-by-step or line-by-line (of source code) basis. The purpose of conducting secure code inspections or walkthroughs is to find errors. Typically, the group that does an inspection or walkthrough is composed of peers from development, security engineering, and quality assurance.
Software security testing, which includes penetration testing, confirms the results of design and code analysis, investigates software behaviour, and verifies that the software complies with security requirements. Special security testing, conducted in accordance with a security test plan and procedures, establishes the compliance of the software with the security requirements. Security testing focuses on locating software weaknesses and identifying extreme or unexpected situations that could cause the software to fail in ways that would cause a violation of security requirements. Security testing efforts are often limited to the software requirements that are classified as "critical" security items.
https://en.wikipedia.org/wiki/Software_security_assurance
In database theory, a conjunctive query is a restricted form of first-order queries using the logical conjunction operator. Many first-order queries can be written as conjunctive queries. In particular, a large part of queries issued on relational databases can be expressed in this way. Conjunctive queries also have a number of desirable theoretical properties that larger classes of queries (e.g., the relational algebra queries) do not share.

The conjunctive queries are the fragment of (domain independent) first-order logic given by the set of formulae that can be constructed from atomic formulae using conjunction ∧ and existential quantification ∃, but not using disjunction ∨, negation ¬, or universal quantification ∀. Each such formula can be rewritten (efficiently) into an equivalent formula in prenex normal form, thus this form is usually simply assumed. Thus conjunctive queries are of the following general form:

(x1, …, xk) . ∃xk+1 … ∃xm . A1 ∧ … ∧ Ar,

with the free variables x1, …, xk being called distinguished variables, and the bound variables xk+1, …, xm being called undistinguished variables. A1, …, Ar are atomic formulae.

As an example of why the restriction to domain independent first-order logic is important, consider x1 . ∃x2 . R(x2), which is not domain independent; see Codd's theorem. This formula cannot be implemented in the select-project-join fragment of relational algebra, and hence should not be considered a conjunctive query.

Conjunctive queries can express a large proportion of queries that are frequently issued on relational databases. To give an example, imagine a relational database for storing information about students, their address, the courses they take and their gender. Finding all male students and their addresses who attend a course that is also attended by a female student is expressed by the following conjunctive query (using illustrative relation names):

(student, address) . ∃student2 ∃course . attends(student, course) ∧ gender(student, 'male') ∧ lives(student, address) ∧ attends(student2, course) ∧ gender(student2, 'female')

Note that since the only entity of interest is the male student and his address, these are the only distinguished variables, while the variables course and student2 are only existentially quantified, i.e. undistinguished.

Conjunctive queries without distinguished variables are called boolean conjunctive queries. Conjunctive queries where all variables are distinguished (and no variables are bound) are called equi-join queries,[1] because they are the equivalent, in the relational calculus, of the equi-join queries in the relational algebra (when selecting all columns of the result).

Conjunctive queries also correspond to select-project-join queries in relational algebra (i.e., relational algebra queries that do not use the operations union or difference) and to select-from-where queries in SQL in which the where-condition uses exclusively conjunctions of atomic equality conditions, i.e. conditions constructed from column names and constants using no comparison operators other than "=", combined using "and". Notably, this excludes the use of aggregation and subqueries. For example, the above query can be written as an SQL query of the conjunctive query fragment, as shown in the sketch below.

Besides their logical notation, conjunctive queries can also be written as Datalog rules. Many authors in fact prefer the following Datalog notation for the query above:

result(student, address) :- attends(student, course), gender(student, 'male'), lives(student, address), attends(student2, course), gender(student2, 'female').

Although there are no quantifiers in this notation, variables appearing in the head of the rule are still implicitly universally quantified, while variables only appearing in the body of the rule are still implicitly existentially quantified.
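The select-from-where form of the example query can be run directly. Below is a minimal sketch using Python's built-in sqlite3 module; the schema (attends, gender, lives) and the sample rows are illustrative assumptions carried over from the logical notation above, not part of the original article:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE attends(student TEXT, course TEXT);
    CREATE TABLE gender(student TEXT, gender TEXT);
    CREATE TABLE lives(student TEXT, address TEXT);
    INSERT INTO attends VALUES ('alice','db'), ('bob','db'), ('carol','ai');
    INSERT INTO gender  VALUES ('alice','female'), ('bob','male'), ('carol','female');
    INSERT INTO lives   VALUES ('alice','1 Oak St'), ('bob','2 Elm St'), ('carol','3 Ash St');
""")

# Select-from-where query in the conjunctive fragment: the WHERE clause uses
# only atomic "=" conditions combined with AND; no negation, union,
# aggregation, or subqueries.
rows = conn.execute("""
    SELECT a.student, l.address
    FROM attends a, attends a2, gender g, gender g2, lives l
    WHERE a.student = g.student  AND g.gender  = 'male'
      AND a.student = l.student
      AND a.course  = a2.course  AND a2.student = g2.student
      AND g2.gender = 'female'
""").fetchall()
print(rows)   # [('bob', '2 Elm St')]
```

The undistinguished variables course and student2 of the logical notation show up here only as join conditions, never in the SELECT list.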
While any conjunctive query can be written as a Datalog rule, not every Datalog program can be written as a conjunctive query. In fact, only single rules over extensional predicate symbols can be easily rewritten as an equivalent conjunctive query. The problem of deciding whether for a given Datalog program there is an equivalent nonrecursive program (corresponding to a positive relational algebra query, or, equivalently, a formula of positive existential first-order logic, or, as a special case, a conjunctive query) is known as the Datalog boundedness problem and is undecidable.[2]

Various extensions of conjunctive queries capture more expressive power. The formal study of all of these extensions is justified by their application in relational databases and is in the realm of database theory.

For the study of the computational complexity of evaluating conjunctive queries, two problems have to be distinguished. The first is the problem of evaluating a conjunctive query on a relational database where both the query and the database are considered part of the input. The complexity of this problem is usually referred to as combined complexity, while the complexity of the problem of evaluating a query on a relational database, where the query is assumed fixed, is called data complexity.[3]

Conjunctive queries are NP-complete with respect to combined complexity,[4] while the data complexity of conjunctive queries is very low, in the parallel complexity class AC0, which is contained in LOGSPACE and thus in polynomial time. The NP-hardness of conjunctive queries may appear surprising, since relational algebra and SQL strictly subsume the conjunctive queries and are thus at least as hard (in fact, relational algebra is PSPACE-complete with respect to combined complexity and is therefore even harder under widely held complexity-theoretic assumptions). However, in the usual application scenario, databases are large, while queries are very small, and the data complexity model may be appropriate for studying and describing their difficulty.

The problem of listing all answers to a non-Boolean conjunctive query has been studied in the context of enumeration algorithms, with a characterization (under some computational hardness assumptions) of the queries for which enumeration can be performed with linear time preprocessing and constant delay between each solution. Specifically, these are the acyclic conjunctive queries which also satisfy a free-connex condition.[5]

Conjunctive queries are one of the great success stories of database theory in that many interesting problems that are computationally hard or undecidable for larger classes of queries are feasible for conjunctive queries.[6] For example, consider the query containment problem. We write R ⊆ S for two database relations R, S of the same schema if and only if each tuple occurring in R also occurs in S. Given a query Q and a relational database instance I, we write the result relation of evaluating the query on the instance simply as Q(I). Given two queries Q1 and Q2 and a database schema, the query containment problem is the problem of deciding whether for all possible database instances I over the input database schema, Q1(I) ⊆ Q2(I).
The main application of query containment is in query optimization: deciding whether two queries are equivalent is possible by simply checking mutual containment. The query containment problem is undecidable for relational algebra and SQL but is decidable and NP-complete for conjunctive queries. In fact, it turns out that the query containment problem for conjunctive queries is exactly the same problem as the query evaluation problem.[6] Since queries tend to be small, NP-completeness here is usually considered acceptable. The query containment problem for conjunctive queries is also equivalent to the constraint satisfaction problem.[7]

An important class of conjunctive queries that have polynomial-time combined complexity are the acyclic conjunctive queries.[8] The query evaluation, and thus query containment, is LOGCFL-complete and thus in polynomial time.[9] Acyclicity of conjunctive queries is a structural property of queries that is defined with respect to the query's hypergraph:[6] a conjunctive query is acyclic if and only if it has hypertree-width 1. For the special case of conjunctive queries in which all relations used are binary, this notion corresponds to the treewidth of the dependency graph of the variables in the query (i.e., the graph having the variables of the query as nodes and an undirected edge {x, y} between two variables if and only if there is an atomic formula R(x, y) or R(y, x) in the query), and the conjunctive query is acyclic if and only if its dependency graph is acyclic.

An important generalization of acyclicity is the notion of bounded hypertree-width, which is a measure of how close to acyclic a hypergraph is, analogous to bounded treewidth in graphs. Conjunctive queries of bounded tree-width have LOGCFL combined complexity.[10]

Unrestricted conjunctive queries over tree data (i.e., a relational database consisting of a binary child relation of a tree as well as unary relations for labeling the tree nodes) have polynomial time combined complexity.[11]
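For the binary-relation case described above, acyclicity can be tested by checking the dependency graph for cycles. A minimal sketch in pure Python, with atoms given as hypothetical (relation, variable, variable) triples:

```python
def is_acyclic_binary_cq(atoms):
    """atoms: iterable of (relation_name, x, y) triples over variables.
    Returns True iff the undirected dependency graph of the variables
    (edge {x, y} for each atom R(x, y)) contains no cycle."""
    parent = {}

    def find(v):                        # union-find with path halving
        parent.setdefault(v, v)
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v

    # Deduplicate edges: repeated atoms over the same pair give one edge.
    edges = {frozenset((x, y)) for _, x, y in atoms if x != y}
    for edge in edges:
        x, y = tuple(edge)
        rx, ry = find(x), find(y)
        if rx == ry:                    # endpoints already connected: cycle
            return False
        parent[rx] = ry
    return True

print(is_acyclic_binary_cq([("R", "x", "y"), ("S", "y", "z")]))    # True
print(is_acyclic_binary_cq([("R", "x", "y"), ("S", "y", "z"),
                            ("T", "z", "x")]))                     # False
```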
https://en.wikipedia.org/wiki/Conjunctive_query
In machine learning, hyperparameter optimization[1] or tuning is the problem of choosing a set of optimal hyperparameters for a learning algorithm. A hyperparameter is a parameter whose value is used to control the learning process, and which must be configured before the process starts.[2][3]

Hyperparameter optimization determines the set of hyperparameters that yields an optimal model which minimizes a predefined loss function on a given data set.[4] The objective function takes a set of hyperparameters and returns the associated loss.[4] Cross-validation is often used to estimate this generalization performance, and therefore to choose the set of hyperparameter values that maximize it.[5]

The traditional method for hyperparameter optimization has been grid search, or a parameter sweep, which is simply an exhaustive search through a manually specified subset of the hyperparameter space of a learning algorithm. A grid search algorithm must be guided by some performance metric, typically measured by cross-validation on the training set[6] or evaluation on a held-out validation set.[7]

Since the parameter space of a machine learner may include real-valued or unbounded value spaces for certain parameters, manually set bounds and discretization may be necessary before applying grid search. For example, a typical soft-margin SVM classifier equipped with an RBF kernel has at least two hyperparameters that need to be tuned for good performance on unseen data: a regularization constant C and a kernel hyperparameter γ. Both parameters are continuous, so to perform grid search, one selects a finite set of "reasonable" values for each, say C ∈ {10, 100, 1000} and γ ∈ {0.1, 0.2, 0.5, 1.0}. Grid search then trains an SVM with each pair (C, γ) in the Cartesian product of these two sets and evaluates their performance on a held-out validation set (or by internal cross-validation on the training set, in which case multiple SVMs are trained per pair). Finally, the grid search algorithm outputs the settings that achieved the highest score in the validation procedure, as in the sketch below.

Grid search suffers from the curse of dimensionality, but is often embarrassingly parallel because the hyperparameter settings it evaluates are typically independent of each other.[5]

Random search replaces the exhaustive enumeration of all combinations by selecting them randomly. This can be simply applied to the discrete setting described above, but also generalizes to continuous and mixed spaces. A benefit over grid search is that random search can explore many more values than grid search could for continuous hyperparameters. It can outperform grid search, especially when only a small number of hyperparameters affects the final performance of the machine learning algorithm.[5] In this case, the optimization problem is said to have a low intrinsic dimensionality.[8] Random search is also embarrassingly parallel, and additionally allows the inclusion of prior knowledge by specifying the distribution from which to sample. Despite its simplicity, random search remains one of the important baselines against which to compare the performance of new hyperparameter optimization methods.
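The grid search and random search procedures described above can be sketched with scikit-learn (an assumed library choice); the grids are the illustrative values from the SVM example, and the log-uniform sampling bounds are likewise illustrative:

```python
from scipy.stats import loguniform
from sklearn import datasets, svm
from sklearn.model_selection import GridSearchCV, RandomizedSearchCV

X, y = datasets.load_iris(return_X_y=True)

# Grid search: one SVM per (C, gamma) pair in the Cartesian product,
# scored by 5-fold cross-validation on the training set.
grid = GridSearchCV(svm.SVC(kernel="rbf"),
                    {"C": [10, 100, 1000], "gamma": [0.1, 0.2, 0.5, 1.0]},
                    cv=5)
grid.fit(X, y)
print(grid.best_params_, grid.best_score_)

# Random search: 20 configurations sampled from continuous distributions
# instead of an exhaustively enumerated discrete grid.
rand = RandomizedSearchCV(svm.SVC(kernel="rbf"),
                          {"C": loguniform(1e0, 1e3),
                           "gamma": loguniform(1e-2, 1e0)},
                          n_iter=20, cv=5, random_state=0)
rand.fit(X, y)
print(rand.best_params_, rand.best_score_)
```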
Bayesian optimization is a global optimization method for noisy black-box functions. Applied to hyperparameter optimization, Bayesian optimization builds a probabilistic model of the function mapping from hyperparameter values to the objective evaluated on a validation set. By iteratively evaluating a promising hyperparameter configuration based on the current model, and then updating it, Bayesian optimization aims to gather observations revealing as much information as possible about this function and, in particular, the location of the optimum. It tries to balance exploration (hyperparameters for which the outcome is most uncertain) and exploitation (hyperparameters expected to be close to the optimum). In practice, Bayesian optimization has been shown[9][10][11][12][13] to obtain better results in fewer evaluations compared to grid search and random search, due to the ability to reason about the quality of experiments before they are run.

For specific learning algorithms, it is possible to compute the gradient with respect to hyperparameters and then optimize the hyperparameters using gradient descent. The first usage of these techniques was focused on neural networks.[14] Since then, these methods have been extended to other models such as support vector machines[15] or logistic regression.[16]

A different approach to obtaining a gradient with respect to hyperparameters consists in differentiating the steps of an iterative optimization algorithm using automatic differentiation.[17][18][19][20] A more recent work along this direction uses the implicit function theorem to calculate hypergradients and proposes a stable approximation of the inverse Hessian. The method scales to millions of hyperparameters and requires constant memory.[21]

In a different approach,[22] a hypernetwork is trained to approximate the best response function. One of the advantages of this method is that it can handle discrete hyperparameters as well. Self-tuning networks[23] offer a memory-efficient version of this approach by choosing a compact representation for the hypernetwork. More recently, Δ-STN[24] has improved this method further by a slight reparameterization of the hypernetwork which speeds up training. Δ-STN also yields a better approximation of the best-response Jacobian by linearizing the network in the weights, hence removing unnecessary nonlinear effects of large changes in the weights.

Apart from hypernetwork approaches, gradient-based methods can also be used to optimize discrete hyperparameters by adopting a continuous relaxation of the parameters.[25] Such methods have been extensively used for the optimization of architecture hyperparameters in neural architecture search.

Evolutionary optimization is a methodology for the global optimization of noisy black-box functions. In hyperparameter optimization, evolutionary optimization uses evolutionary algorithms to search the space of hyperparameters for a given algorithm.[10] Evolutionary hyperparameter optimization follows a process inspired by the biological concept of evolution. Evolutionary optimization has been used in hyperparameter optimization for statistical machine learning algorithms,[10] automated machine learning, typical neural network[26] and deep neural network architecture search,[27][28] as well as training of the weights in deep neural networks.[29]

Population Based Training (PBT) learns both hyperparameter values and network weights. Multiple learning processes operate independently, using different hyperparameters. As with evolutionary methods, poorly performing models are iteratively replaced with models that adopt modified hyperparameter values and weights based on the better performers. This replacement-model warm starting is the primary differentiator between PBT and other evolutionary methods.
PBT thus allows the hyperparameters to evolve and eliminates the need for manual hypertuning. The process makes no assumptions regarding model architecture, loss functions or training procedures.

PBT and its variants are adaptive methods: they update hyperparameters during the training of the models. On the contrary, non-adaptive methods have the sub-optimal strategy of assigning a constant set of hyperparameters for the whole training.[30]

A class of early stopping-based hyperparameter optimization algorithms is purpose-built for large search spaces of continuous and discrete hyperparameters, particularly when the computational cost to evaluate the performance of a set of hyperparameters is high. Irace implements the iterated racing algorithm, which focuses the search around the most promising configurations, using statistical tests to discard the ones that perform poorly.[31][32] Another early stopping hyperparameter optimization algorithm is successive halving (SHA),[33] which begins as a random search but periodically prunes low-performing models, thereby focusing computational resources on more promising models. Asynchronous successive halving (ASHA)[34] further improves upon SHA's resource utilization profile by removing the need to synchronously evaluate and prune low-performing models. Hyperband[35] is a higher-level early stopping-based algorithm that invokes SHA or ASHA multiple times with varying levels of pruning aggressiveness, in order to be more widely applicable and to require fewer inputs. RBF[36] and spectral[37] approaches have also been developed.

When hyperparameter optimization is done, the set of hyperparameters is often fitted on a training set and selected based on the generalization performance, or score, of a validation set. However, this procedure is at risk of overfitting the hyperparameters to the validation set. Therefore, the generalization performance score of the validation set (which can be several sets in the case of a cross-validation procedure) cannot be used to simultaneously estimate the generalization performance of the final model. In order to do so, the generalization performance has to be evaluated on a set independent of (having no intersection with) the set or sets used for the optimization of the hyperparameters; otherwise the performance might give a value which is too optimistic (too large). This can be done on a second test set, or through an outer cross-validation procedure called nested cross-validation, which allows an unbiased estimation of the generalization performance of the model, taking into account the bias due to the hyperparameter optimization.
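A minimal sketch of the nested cross-validation procedure just described, again assuming scikit-learn: the inner loop selects hyperparameters, while the outer loop scores the tuned model on folds the search never saw:

```python
from sklearn import datasets, svm
from sklearn.model_selection import GridSearchCV, cross_val_score

X, y = datasets.load_iris(return_X_y=True)

# Inner loop: hyperparameter search by 5-fold cross-validation.
inner = GridSearchCV(svm.SVC(kernel="rbf"),
                     {"C": [10, 100, 1000], "gamma": [0.1, 0.2, 0.5, 1.0]},
                     cv=5)

# Outer loop: each outer fold is held out from the inner search entirely,
# so these scores estimate generalization without the selection bias that
# reusing the validation score would introduce.
outer_scores = cross_val_score(inner, X, y, cv=5)
print(outer_scores.mean())
```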
https://en.wikipedia.org/wiki/Hyperparameter_optimization
In mathematics and statistics, deviation serves as a measure to quantify the disparity between an observed value of a variable and another designated value, frequently the mean of that variable. Deviations with respect to the sample mean and the population mean (or "true value") are called errors and residuals, respectively. The sign of the deviation reports the direction of that difference: the deviation is positive when the observed value exceeds the reference value. The absolute value of the deviation indicates the size or magnitude of the difference. In a given sample, there are as many deviations as sample points. Summary statistics can be derived from a set of deviations, such as the standard deviation and the mean absolute deviation, measures of dispersion, and the mean signed deviation, a measure of bias.[1]

The deviation of each data point is calculated by subtracting the mean of the data set from the individual data point. Mathematically, the deviation d of a data point x in a data set with respect to the mean m is given by the difference d = x − m. This calculation represents the "distance" of a data point from the mean and provides information about how much individual values vary from the average. Positive deviations indicate values above the mean, while negative deviations indicate values below the mean.[1]

The sum of squared deviations is a key component in the calculation of variance, another measure of the spread or dispersion of a data set. Variance is calculated by averaging the squared deviations. Deviation is a fundamental concept in understanding the distribution and variability of data points in statistical analysis.[1]

A deviation that is a difference between an observed value and the true value of a quantity of interest (where true value denotes the expected value, such as the population mean) is an error.[2]

A deviation that is the difference between the observed value and an estimate of the true value (e.g. the sample mean) is a residual. These concepts are applicable for data at the interval and ratio levels of measurement.[3]

The absolute deviation of an observation is Di = |xi − m(X)|, where m(X) is a chosen measure of central tendency of the data set X, usually the mean or the median.

The average absolute deviation (AAD) in statistics is a measure of the dispersion or spread of a set of data points around a central value, usually the mean or median. It is calculated by taking the average of the absolute differences between each data point and the chosen central value. AAD provides a measure of the typical magnitude of deviations from the central value in a dataset, giving insights into the overall variability of the data.[5]

Least absolute deviation (LAD) is a statistical method used in regression analysis to estimate the coefficients of a linear model. Unlike the more common least squares method, which minimizes the sum of squared vertical distances (residuals) between the observed and predicted values, the LAD method minimizes the sum of the absolute vertical distances. In the context of linear regression, if (x1, y1), (x2, y2), ... are the data points, and a and b are the coefficients to be estimated for the linear model y = b + a·x, the least absolute deviation estimates of a and b are obtained by minimizing the sum of the absolute residuals |yi − (b + a·xi)|.
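A short numerical sketch of the quantities defined above, using NumPy and SciPy as assumed tools; the data are made up, and Nelder-Mead is just one possible minimizer for the non-smooth LAD objective:

```python
import numpy as np
from scipy.optimize import minimize

x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([2.1, 3.9, 6.2, 7.8])

# Deviations from the mean, average absolute deviation, mean signed deviation.
d = y - y.mean()
print(d)                  # signed deviations from the sample mean
print(np.abs(d).mean())   # average absolute deviation (AAD) about the mean
print(d.mean())           # mean signed deviation about the mean: 0 by construction

# Least absolute deviation line y = b + a*x: minimize sum |y - (b + a*x)|.
res = minimize(lambda p: np.abs(y - (p[0] + p[1] * x)).sum(),
               x0=[0.0, 1.0], method="Nelder-Mead")
b, a = res.x
print(b, a)               # LAD estimates of the intercept and slope
```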
The LAD method is less sensitive to outliers compared to the least squares method, making it a robust regression technique in the presence of skewed or heavy-tailed residual distributions.[6]

For an unbiased estimator, the average of the signed deviations of all observations from the unobserved population parameter value is zero over an arbitrarily large number of samples. However, by construction the average of signed deviations of values from the sample mean value is always zero, though the average signed deviation from another measure of central tendency, such as the sample median, need not be zero.

Mean signed deviation is a statistical measure used to assess the average deviation of a set of values from a central point, usually the mean. It is calculated by taking the arithmetic mean of the signed differences between each data point and the mean of the dataset. The term "signed" indicates that the deviations are considered with their respective signs, meaning whether they are above or below the mean. Positive deviations (above the mean) and negative deviations (below the mean) are included in the calculation. The mean signed deviation provides a measure of the average distance and direction of data points from the mean, offering insights into the overall trend and distribution of the data.[3]

Statistics of the distribution of deviations are used as measures of statistical dispersion.

Deviations, which measure the difference between observed values and some reference point, inherently carry units corresponding to the measurement scale used. For example, if lengths are being measured, deviations would be expressed in units like meters or feet. To make deviations unitless and facilitate comparisons across different datasets, one can nondimensionalize. One common method involves dividing deviations by a measure of scale (statistical dispersion), with the population standard deviation used for standardizing or the sample standard deviation for studentizing (e.g., Studentized residual).

Another approach to nondimensionalization focuses on scaling by location rather than dispersion. The percent deviation offers an illustration of this method, calculated as the difference between the observed value and the accepted value, divided by the accepted value, and then multiplied by 100%. By scaling the deviation based on the accepted value, this technique allows for expressing deviations in percentage terms, providing a clear perspective on the relative difference between the observed and accepted values. Both methods of nondimensionalization serve the purpose of making deviations comparable and interpretable beyond the specific measurement units.[10]

In one example, a series of measurements of the speed of sound in a particular medium are taken. The accepted or expected value for the speed of sound in this medium, based on theoretical calculations, is 343 meters per second. Now, during an experiment, multiple measurements are taken by different researchers. Researcher A measures the speed of sound as 340 meters per second, resulting in a deviation of −3 meters per second from the expected value. Researcher B, on the other hand, measures the speed as 345 meters per second, resulting in a deviation of +2 meters per second, as worked out in the sketch below.
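The speed-of-sound measurements in this example, expressed both as raw deviations and as percent deviations (a direct transcription of the numbers above):

```python
accepted = 343.0                       # expected speed of sound, m/s

for researcher, observed in [("A", 340.0), ("B", 345.0)]:
    deviation = observed - accepted                    # signed, in m/s
    percent = (observed - accepted) / accepted * 100   # nondimensionalized
    print(researcher, deviation, round(percent, 2))
# A -3.0 -0.87
# B  2.0  0.58
```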
In this scientific context, deviation helps quantify how individual measurements differ from the theoretically predicted or accepted value. It provides insights into the accuracy and precision of experimental results, allowing researchers to assess the reliability of their data and potentially identify factors contributing to discrepancies.

In another example, suppose a chemical reaction is expected to yield 100 grams of a specific compound based on stoichiometry. However, in an actual laboratory experiment, several trials are conducted with different conditions. In Trial 1, the actual yield is measured to be 95 grams, resulting in a deviation of −5 grams from the expected yield. In Trial 2, the actual yield is measured to be 102 grams, resulting in a deviation of +2 grams. These deviations from the expected value provide valuable information about the efficiency and reproducibility of the chemical reaction under different conditions. Scientists can analyze these deviations to optimize reaction conditions, identify potential sources of error, and improve the overall yield and reliability of the process. The concept of deviation is crucial in assessing the accuracy of experimental results and making informed decisions to enhance the outcomes of scientific experiments.
https://en.wikipedia.org/wiki/Deviation_(statistics)
An adaptive neuro-fuzzy inference system or adaptive network-based fuzzy inference system (ANFIS) is a kind of artificial neural network that is based on the Takagi–Sugeno fuzzy inference system. The technique was developed in the early 1990s.[1][2] Since it integrates both neural networks and fuzzy logic principles, it has the potential to capture the benefits of both in a single framework. Its inference system corresponds to a set of fuzzy IF–THEN rules that have learning capability to approximate nonlinear functions.[3] Hence, ANFIS is considered to be a universal estimator.[4] To use the ANFIS in a more efficient and optimal way, one can use the best parameters obtained by a genetic algorithm.[5][6] It has uses in intelligent situational-aware energy management systems.[7]

It is possible to identify two parts in the network structure, namely the premise and consequence parts. In more detail, the architecture is composed of five layers. The first layer of an ANFIS network describes the difference from a vanilla neural network. Neural networks in general operate with a data pre-processing step, in which the features are converted into normalized values between 0 and 1. An ANFIS neural network doesn't need a sigmoid function; instead it does the preprocessing step by converting numeric values into fuzzy values.[9]

Here is an example: suppose the network gets as input the distance between two points in 2D space. The distance is measured in pixels and can have values from 0 up to 500 pixels. Converting the numerical values into fuzzy numbers is done with the membership function, which consists of semantic descriptions like near, middle and far.[10] Each possible linguistic value is given by an individual neuron. The neuron "near" fires with a value between 0 and 1 if the distance falls in the category "near", while the neuron "middle" fires if the distance falls in that category. The input value "distance in pixels" is thus split into three different neurons for near, middle and far.
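The fuzzification step in this example can be sketched with triangular membership functions; the breakpoints at 0, 250 and 500 pixels are illustrative assumptions, not taken from the article:

```python
def triangular(x, left, center, right):
    """Triangular membership function: 0 outside (left, right), 1 at center."""
    if x <= left or x >= right:
        return 0.0
    if x <= center:
        return (x - left) / (center - left)
    return (right - x) / (right - center)

def fuzzify_distance(pixels):
    """Layer 1 of an ANFIS: one neuron per linguistic value of the input."""
    return {
        "near":   triangular(pixels, -1, 0, 250),     # fires fully at 0 px
        "middle": triangular(pixels, 0, 250, 500),
        "far":    triangular(pixels, 250, 500, 501),  # fires fully at 500 px
    }

print(fuzzify_distance(100))   # {'near': 0.6, 'middle': 0.4, 'far': 0.0}
```

Each numeric distance thus activates the three linguistic neurons to different degrees, which is the fuzzy replacement for the usual 0-to-1 normalization step.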
https://en.wikipedia.org/wiki/Adaptive_neuro_fuzzy_inference_system
In algebra, a unit or invertible element[a] of a ring is an invertible element for the multiplication of the ring. That is, an element u of a ring R is a unit if there exists v in R such that vu = uv = 1, where 1 is the multiplicative identity; the element v is unique for this property and is called the multiplicative inverse of u.[1][2] The set of units of R forms a group R× under multiplication, called the group of units or unit group of R.[b] Other notations for the unit group are R∗, U(R), and E(R) (from the German term Einheit).

Less commonly, the term unit is sometimes used to refer to the element 1 of the ring, in expressions like ring with a unit or unit ring, and also unit matrix. Because of this ambiguity, 1 is more commonly called the "unity" or the "identity" of the ring, and the phrases "ring with unity" or "ring with identity" may be used to emphasize that one is considering a ring instead of a rng.

The multiplicative identity 1 and its additive inverse −1 are always units. More generally, any root of unity in a ring R is a unit: if r^n = 1, then r^(n−1) is a multiplicative inverse of r. In a nonzero ring, the element 0 is not a unit, so R× is not closed under addition. A nonzero ring R in which every nonzero element is a unit (that is, R× = R ∖ {0}) is called a division ring (or a skew-field). A commutative division ring is called a field. For example, the unit group of the field of real numbers R is R ∖ {0}.

In the ring of integers Z, the only units are 1 and −1.

In the ring Z/nZ of integers modulo n, the units are the congruence classes (mod n) represented by integers coprime to n. They constitute the multiplicative group of integers modulo n.

In the ring Z[√3] obtained by adjoining the quadratic integer √3 to Z, one has (2 + √3)(2 − √3) = 1, so 2 + √3 is a unit, and so are its powers, so Z[√3] has infinitely many units.

More generally, for the ring of integers R in a number field F, Dirichlet's unit theorem states that R× is isomorphic to the group Z^n × μ_R, where μ_R is the (finite, cyclic) group of roots of unity in R and n, the rank of the unit group, is n = r1 + r2 − 1, where r1 and r2 are the number of real embeddings and the number of pairs of complex embeddings of F, respectively. This recovers the Z[√3] example: the unit group of (the ring of integers of) a real quadratic field is infinite of rank 1, since r1 = 2, r2 = 0.

For a commutative ring R, the units of the polynomial ring R[x] are the polynomials p(x) = a0 + a1·x + ⋯ + an·x^n such that a0 is a unit in R and the remaining coefficients a1, …, an are nilpotent, i.e., satisfy ai^N = 0 for some N.[4] In particular, if R is a domain (or more generally reduced), then the units of R[x] are the units of R. The units of the power series ring R[[x]] are the power series p(x) = Σ ai·x^i such that a0 is a unit in R.[5]

The unit group of the ring Mn(R) of n × n matrices over a ring R is the group GLn(R) of invertible matrices. For a commutative ring R, an element A of Mn(R) is invertible if and only if the determinant of A is invertible in R. In that case, A^−1 can be given explicitly in terms of the adjugate matrix.
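Returning to the ring Z/nZ discussed above, its unit group is easy to enumerate; a minimal sketch in Python:

```python
from math import gcd

def unit_group(n):
    """Units of Z/nZ: residues coprime to n. Their count is Euler's phi(n)."""
    return [a for a in range(1, n) if gcd(a, n) == 1]

def inverse_mod(a, n):
    """Multiplicative inverse of a unit a modulo n (Python 3.8+)."""
    return pow(a, -1, n)

units = unit_group(9)
print(units)                               # [1, 2, 4, 5, 7, 8]
print([(a, inverse_mod(a, 9)) for a in units])
# [(1, 1), (2, 5), (4, 7), (5, 2), (7, 4), (8, 8)]
```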
For elements x and y in a ring R, if 1 − xy is invertible, then 1 − yx is invertible with inverse 1 + y(1 − xy)^−1 x;[6] this formula can be guessed, but not proved, by the following calculation in a ring of noncommutative power series:

(1 − yx)^−1 = Σ (yx)^n = 1 + y (Σ (xy)^n) x = 1 + y(1 − xy)^−1 x,

with both sums taken over n ≥ 0. See Hua's identity for similar results.

A commutative ring is a local ring if R ∖ R× is a maximal ideal. As it turns out, if R ∖ R× is an ideal, then it is necessarily a maximal ideal and R is local, since a maximal ideal is disjoint from R×.

If R is a finite field, then R× is a cyclic group of order |R| − 1.

Every ring homomorphism f: R → S induces a group homomorphism R× → S×, since f maps units to units. In fact, the formation of the unit group defines a functor from the category of rings to the category of groups. This functor has a left adjoint, which is the integral group ring construction.[7]

The group scheme GL1 is isomorphic to the multiplicative group scheme Gm over any base, so for any commutative ring R, the groups GL1(R) and Gm(R) are canonically isomorphic to U(R). Note that the functor Gm (that is, R ↦ U(R)) is representable in the sense that Gm(R) ≃ Hom(Z[t, t^−1], R) for commutative rings R (this for instance follows from the aforementioned adjoint relation with the group ring construction). Explicitly this means that there is a natural bijection between the set of ring homomorphisms Z[t, t^−1] → R and the set of unit elements of R (in contrast, Z[t] represents the additive group Ga, i.e., the forgetful functor from the category of commutative rings to the category of abelian groups).

Suppose that R is commutative. Elements r and s of R are called associate if there exists a unit u in R such that r = us; then write r ~ s. In any ring, pairs of additive inverse elements[c] x and −x are associate, since any ring includes the unit −1. For example, 6 and −6 are associate in Z. In general, ~ is an equivalence relation on R.

Associatedness can also be described in terms of the action of R× on R via multiplication: two elements of R are associate if they are in the same R×-orbit.

In an integral domain, the set of associates of a given nonzero element has the same cardinality as R×.

The equivalence relation ~ can be viewed as any one of Green's semigroup relations specialized to the multiplicative semigroup of a commutative ring R.
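The inverse formula for 1 − yx quoted earlier in this section can be verified directly, rather than merely guessed from the power-series calculation; writing u = (1 − xy)^−1, so that (1 − xy)u = 1:

```latex
% Right inverse check for 1 - yx, with u = (1 - xy)^{-1}:
\[
(1 - yx)\,(1 + yux)
  = 1 + yux - yx - yxyux
  = 1 - yx + y\,(1 - xy)\,u\,x
  = 1 - yx + yx
  = 1.
\]
% The computation showing it is also a left inverse is symmetric.
```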
https://en.wikipedia.org/wiki/Group_of_units#Finite_groups_of_units
ADALINE (Adaptive Linear Neuron, or later Adaptive Linear Element) is an early single-layer artificial neural network and the name of the physical device that implemented it.[2][3][1][4][5] It was developed by professor Bernard Widrow and his doctoral student Marcian Hoff at Stanford University in 1960. It is based on the perceptron and consists of weights, a bias, and a summation function. The weights and biases were implemented by rheostats (as seen in the "knobby ADALINE") and, later, memistors.

The difference between Adaline and the standard (Rosenblatt) perceptron is in how they learn. Adaline unit weights are adjusted to match a teacher signal before applying the Heaviside function (see figure), but the standard perceptron unit weights are adjusted to match the correct output after applying the Heaviside function.

A multilayer network of ADALINE units is known as a MADALINE.

Adaline is a single-layer neural network with multiple nodes, where each node accepts multiple inputs and generates one output. Given an input vector x, a weight vector w and a bias θ, the output is y = Σ wj·xj + θ. If we further assume that x0 = 1 and w0 = θ, then the output further reduces to y = Σ wj·xj.

The learning rule used by ADALINE is the LMS ("least mean squares") algorithm, a special case of gradient descent. Given a learning rate η, a desired output d and the actual output y, the LMS algorithm updates the weights as w ← w + η(d − y)x. This update rule minimizes E, the square of the error,[6] and is in fact the stochastic gradient descent update for linear regression.[7]

MADALINE (Many ADALINE[8]) is a three-layer (input, hidden, output), fully connected, feedforward neural network architecture for classification that uses ADALINE units in its hidden and output layers, i.e., its activation function is the sign function.[9] The three-layer network uses memistors. As the sign function is non-differentiable, backpropagation cannot be used to train MADALINE networks. Hence, three different training algorithms have been suggested, called Rule I, Rule II and Rule III. Despite many attempts, researchers never succeeded in training more than a single layer of weights in a MADALINE model. This was until Widrow saw the backpropagation algorithm at a 1985 conference in Snowbird, Utah.[10]

MADALINE Rule 1 (MRI) - The first of these dates back to 1962.[11] It consists of two layers: the first is made of ADALINE units (let the output of the i-th ADALINE unit be oi); the second layer has two units. One is a majority-voting unit that takes in all oi and, if there are more positives than negatives, outputs +1, and vice versa. The other is a "job assigner": suppose the desired output is −1 and different from the majority-voted output; then the job assigner calculates the minimal number of ADALINE units that must change their outputs from positive to negative, picks those ADALINE units that are closest to being negative, and makes them update their weights according to the ADALINE learning rule. It was thought of as a form of "minimal disturbance principle".[12]

The largest MADALINE machine built had 1000 weights, each implemented by a memistor. It was built in 1963 and used MRI for learning.[12][13]

Some MADALINE machines were demonstrated to perform tasks including inverted pendulum balancing, weather forecasting, and speech recognition.[3]

MADALINE Rule 2 (MRII) - The second training algorithm, described in 1988, improved on Rule I.[8] The Rule II training algorithm is based on a principle called "minimal disturbance".
It proceeds by looping over training examples; for each example, it flips the signs of the units whose outputs are closest to the decision boundary, keeping a flip when it reduces the error (the minimal disturbance principle). When flipping single units' signs does not drive the error to zero for a particular example, the training algorithm starts flipping pairs of units' signs, then triples of units, etc.[8]

MADALINE Rule 3 - The third "Rule" applied to a modified network with sigmoid activations instead of sign; it was later found to be equivalent to backpropagation.[12]
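A minimal sketch of a single ADALINE unit trained with the LMS rule described earlier; the data, learning rate and epoch count are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = 2.0 * X[:, 0] - 1.0 * X[:, 1] + 0.5           # teacher signal (linear target)

eta = 0.01                                        # learning rate
w = np.zeros(3)                                   # w[0] is the bias (x0 = 1)

for epoch in range(50):
    for xi, target in zip(X, y):
        x = np.concatenate(([1.0], xi))           # prepend the constant input x0 = 1
        output = w @ x                            # linear output, before any threshold
        w += eta * (target - output) * x          # LMS update: SGD step on the squared error

print(w)   # approaches [0.5, 2.0, -1.0]
```

Note that the weights are fitted to the teacher signal before any Heaviside thresholding, which is exactly the distinction from the Rosenblatt perceptron drawn above.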
https://en.wikipedia.org/wiki/ADALINE
The Rabin fingerprinting scheme (also known as polynomial fingerprinting) is a method for implementing fingerprints using polynomials over a finite field. It was proposed by Michael O. Rabin.[1]

Given an n-bit message m0, ..., m(n−1), we view it as a polynomial f(x) = m0 + m1·x + ⋯ + m(n−1)·x^(n−1) of degree n − 1 over the finite field GF(2). We then pick a random irreducible polynomial p(x) of degree k over GF(2), and we define the fingerprint of the message m to be the remainder r(x) after division of f(x) by p(x) over GF(2), which can be viewed as a polynomial of degree k − 1 or as a k-bit number.

Many implementations of the Rabin–Karp algorithm internally use Rabin fingerprints.

The Low Bandwidth Network Filesystem (LBFS) from MIT uses Rabin fingerprints to implement variable-size shift-resistant blocks.[2] The basic idea is that the filesystem computes the cryptographic hash of each block in a file. To save on transfers between the client and server, they compare their checksums and only transfer blocks whose checksums differ. But one problem with this scheme is that a single insertion at the beginning of the file will cause every checksum to change if fixed-size (e.g. 4 KB) blocks are used. So the idea is to select blocks not based on a specific offset but rather by some property of the block contents. LBFS does this by sliding a 48-byte window over the file and computing the Rabin fingerprint of each window. When the low 13 bits of the fingerprint are zero, LBFS calls those 48 bytes a breakpoint, ends the current block, and begins a new one. Since the output of Rabin fingerprints is pseudo-random, the probability of any given 48 bytes being a breakpoint is 2^−13 (1 in 8192). This has the effect of shift-resistant variable-size blocks.

Any hash function could be used to divide a long file into blocks (as long as a cryptographic hash function is then used to find the checksum of each block), but the Rabin fingerprint is an efficient rolling hash, since the computation of the Rabin fingerprint of region B can reuse some of the computation of the Rabin fingerprint of region A when regions A and B overlap. Note that this is a problem similar to that faced by rsync.
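A minimal sketch of the fingerprint itself, treating the message bits as a polynomial over GF(2) and reducing modulo p(x), where subtraction of p(x) is XOR; the particular polynomial below is illustrative only, and a real implementation must pick a random irreducible polynomial:

```python
def rabin_fingerprint(data: bytes, poly: int, degree: int) -> int:
    """Remainder of the message polynomial modulo p(x) over GF(2).

    `poly` encodes p(x) as a bitmask including its leading x^degree term,
    e.g. degree=16 with poly=0x1100B for x^16 + x^12 + x^3 + x + 1
    (an illustrative choice, not guaranteed irreducible).
    """
    f = 0
    for byte in data:
        for i in range(7, -1, -1):      # feed message bits in, MSB first
            f = (f << 1) | ((byte >> i) & 1)
            if (f >> degree) & 1:       # degree overflowed: subtract p(x),
                f ^= poly               # which over GF(2) is XOR
    return f                            # a `degree`-bit fingerprint

fp = rabin_fingerprint(b"hello world", poly=0x1100B, degree=16)
print(hex(fp))

# LBFS-style breakpoint test: the low 13 bits of a window's fingerprint
# are zero with probability 2^-13 for pseudo-random fingerprints.
is_breakpoint = (fp & 0x1FFF) == 0
```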
https://en.wikipedia.org/wiki/Rabin_fingerprint
Transport Layer Security (TLS) is a cryptographic protocol designed to provide communications security over a computer network, such as the Internet. The protocol is widely used in applications such as email, instant messaging, and voice over IP, but its use in securing HTTPS remains the most publicly visible.

The TLS protocol aims primarily to provide security, including privacy (confidentiality), integrity, and authenticity through the use of cryptography, such as the use of certificates, between two or more communicating computer applications. It runs in the presentation layer and is itself composed of two layers: the TLS record protocol and the TLS handshake protocol. The closely related Datagram Transport Layer Security (DTLS) is a communications protocol that provides security to datagram-based applications. In technical writing, references to "(D)TLS" are often seen when it applies to both versions.[1]

TLS is a proposed Internet Engineering Task Force (IETF) standard, first defined in 1999; the current version is TLS 1.3, defined in August 2018. TLS builds on the now-deprecated SSL (Secure Sockets Layer) specifications (1994, 1995, 1996) developed by Netscape Communications for adding the HTTPS protocol to their Netscape Navigator web browser.

Client-server applications use the TLS protocol to communicate across a network in a way designed to prevent eavesdropping and tampering. Since applications can communicate either with or without TLS (or SSL), it is necessary for the client to request that the server set up a TLS connection.[2] One of the main ways of achieving this is to use a different port number for TLS connections. Port 80 is typically used for unencrypted HTTP traffic, while port 443 is the common port used for encrypted HTTPS traffic. Another mechanism is to make a protocol-specific STARTTLS request to the server to switch the connection to TLS, for example when using the mail and news protocols.

Once the client and server have agreed to use TLS, they negotiate a stateful connection by using a handshaking procedure (see § TLS handshake).[3] The protocols use a handshake with an asymmetric cipher to establish not only cipher settings but also a session-specific shared key with which further communication is encrypted using a symmetric cipher. During this handshake, the client and server agree on various parameters used to establish the connection's security. This concludes the handshake and begins the secured connection, which is encrypted and decrypted with the session key until the connection closes. If any one of the above steps fails, then the TLS handshake fails and the connection is not created.

TLS and SSL do not fit neatly into any single layer of the OSI model or the TCP/IP model.[4][5] TLS runs "on top of some reliable transport protocol (e.g., TCP),"[6]: §1 which would imply that it is above the transport layer. It serves encryption to higher layers, which is normally the function of the presentation layer. However, applications generally use TLS as if it were a transport layer,[4][5] even though applications using TLS must actively control initiating TLS handshakes and handling of exchanged authentication certificates.[6]: §1

When secured by TLS, connections between a client (e.g., a web browser) and a server (e.g., wikipedia.org) have the privacy, integrity, and authenticity properties outlined above.[6]: §1
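The handshake and parameter negotiation described above can be observed with Python's standard ssl module, which performs the TLS handshake when wrapping a TCP socket:

```python
import socket
import ssl

context = ssl.create_default_context()      # system CA store, secure defaults

# TCP connect on port 443, then the TLS handshake over that connection.
with socket.create_connection(("wikipedia.org", 443)) as tcp:
    with context.wrap_socket(tcp, server_hostname="wikipedia.org") as tls:
        print(tls.version())                # negotiated protocol, e.g. 'TLSv1.3'
        print(tls.cipher())                 # (cipher suite, protocol, key bits)
        cert = tls.getpeercert()            # server certificate, already validated
        print(cert["subject"])
```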
TLS supports many different methods for exchanging keys, encrypting data, and authenticating message integrity. As a result, secure configuration of TLS involves many configurable parameters, and not all choices provide all of the privacy-related properties described above (see the tables in § Key exchange, § Cipher security, and § Data integrity).

Attempts have been made to subvert aspects of the communications security that TLS seeks to provide, and the protocol has been revised several times to address these security threats. Developers of web browsers have repeatedly revised their products to defend against potential security weaknesses after these were discovered (see TLS/SSL support history of web browsers).

Datagram Transport Layer Security, abbreviated DTLS, is a related communications protocol providing security to datagram-based applications by allowing them to communicate in a way designed[7][8] to prevent eavesdropping, tampering, or message forgery. The DTLS protocol is based on the stream-oriented Transport Layer Security (TLS) protocol and is intended to provide similar security guarantees. However, unlike TLS, it can be used with most datagram-oriented protocols including User Datagram Protocol (UDP), Datagram Congestion Control Protocol (DCCP), Control And Provisioning of Wireless Access Points (CAPWAP), Stream Control Transmission Protocol (SCTP) encapsulation, and Secure Real-time Transport Protocol (SRTP).

As the DTLS protocol datagram preserves the semantics of the underlying transport, the application does not suffer from the delays associated with stream protocols. However, the application has to deal with packet reordering, loss of datagrams, and data larger than the size of a datagram network packet. Because DTLS uses UDP or SCTP rather than TCP, it avoids the TCP meltdown problem[9][10] when being used to create a VPN tunnel.

The original 2006 release of DTLS version 1.0 was not a standalone document. It was given as a series of deltas to TLS 1.1.[11] Similarly, the follow-up 2012 release of DTLS is a delta to TLS 1.2. It was given the version number of DTLS 1.2 to match its TLS version. Lastly, the 2022 DTLS 1.3 is a delta to TLS 1.3. Like the two previous versions, DTLS 1.3 is intended to provide "equivalent security guarantees [to TLS 1.3] with the exception of order protection/non-replayability".[12]

Many VPN clients, including Cisco AnyConnect[13] and InterCloud Fabric,[14] OpenConnect,[15] Zscaler tunnel,[16] F5 Networks Edge VPN Client,[17] and Citrix Systems NetScaler,[18] use DTLS to secure UDP traffic. In addition, all modern web browsers support DTLS-SRTP[19] for WebRTC.

In August 1986, the National Security Agency, the National Bureau of Standards, and the Defense Communications Agency launched a project, called the Secure Data Network System (SDNS), with the intent of designing the next generation of secure computer communications network and product specifications to be implemented for applications on public and private internets. It was intended to complement the rapidly emerging new OSI internet standards moving forward both in the U.S. government's GOSIP profiles and in the huge ITU-ISO JTC1 internet effort internationally.[26]

As part of the project, researchers designed a protocol called SP4 (security protocol in layer 4 of the OSI system). This was later renamed the Transport Layer Security Protocol (TLSP) and subsequently published in 1995 as international standard ITU-T X.274 | ISO/IEC 10736:1995.[27] Despite the name similarity, this is distinct from today's TLS.
Other efforts towards transport layer security included the Secure Network Programming (SNP) application programming interface (API), which in 1993 explored the approach of having a secure transport layer API closely resembling Berkeley sockets, to facilitate retrofitting pre-existing network applications with security measures. SNP was published and presented at the 1994 USENIX Summer Technical Conference.[28][29] The SNP project was funded by a grant from the NSA to Professor Simon Lam at UT-Austin in 1991.[30] Secure Network Programming won the 2004 ACM Software System Award.[31][32] Simon Lam was inducted into the Internet Hall of Fame for "inventing secure sockets and implementing the first secure sockets layer, named SNP, in 1993."[33][34]

Netscape developed the original SSL protocols, and Taher Elgamal, chief scientist at Netscape Communications from 1995 to 1998, has been described as the "father of SSL".[35][36][37][38] SSL version 1.0 was never publicly released because of serious security flaws in the protocol. Version 2.0, after being released in February 1995, was quickly found to contain a number of security and usability flaws. It used the same cryptographic keys for message authentication and encryption. It had a weak MAC construction that used the MD5 hash function with a secret prefix, making it vulnerable to length extension attacks. It also provided no protection for either the opening handshake or an explicit message close, both of which meant man-in-the-middle attacks could go undetected. Moreover, SSL 2.0 assumed a single service and a fixed domain certificate, conflicting with the widely used feature of virtual hosting in web servers, so most websites were effectively impaired from using SSL.

These flaws necessitated the complete redesign of the protocol to SSL version 3.0.[39][37] Released in 1996, it was produced by Paul Kocher working with Netscape engineers Phil Karlton and Alan Freier, with a reference implementation by Christopher Allen and Tim Dierks of Certicom. Newer versions of SSL/TLS are based on SSL 3.0. The 1996 draft of SSL 3.0 was published by the IETF as a historical document in RFC 6101.

SSL 2.0 was deprecated in 2011 by RFC 6176. In 2014, SSL 3.0 was found to be vulnerable to the POODLE attack that affects all block ciphers in SSL; RC4, the only non-block cipher supported by SSL 3.0, is also feasibly broken as used in SSL 3.0.[40] SSL 3.0 was deprecated in June 2015 by RFC 7568.

TLS 1.0 was first defined in RFC 2246 in January 1999 as an upgrade of SSL version 3.0, and was written by Christopher Allen and Tim Dierks of Certicom. As stated in the RFC, "the differences between this protocol and SSL 3.0 are not dramatic, but they are significant enough to preclude interoperability between TLS 1.0 and SSL 3.0". Tim Dierks later wrote that these changes, and the renaming from "SSL" to "TLS", were a face-saving gesture to Microsoft, "so it wouldn't look [like] the IETF was just rubberstamping Netscape's protocol".[41]

The PCI Council suggested that organizations migrate from TLS 1.0 to TLS 1.1 or higher before June 30, 2018.[42][43] In October 2018, Apple, Google, Microsoft, and Mozilla jointly announced they would deprecate TLS 1.0 and 1.1 in March 2020.[20] TLS 1.0 and 1.1 were formally deprecated in RFC 8996 in March 2021.

TLS 1.1 was defined in RFC 4346 in April 2006.[44] It is an update from TLS version 1.0.
Significant differences in this version include:

Support for TLS versions 1.0 and 1.1 was widely deprecated by web sites around 2020,[46] disabling access to Firefox versions before 24 and Chromium-based browsers before 29,[47] though third-party fixes can be applied to Netscape Navigator and older versions of Firefox to add TLS 1.2 support.[48]

TLS 1.2 was defined in RFC 5246 in August 2008.[23] It is based on the earlier TLS 1.1 specification. Major differences include:

All TLS versions were further refined in RFC 6176 in March 2011, removing their backward compatibility with SSL such that TLS sessions never negotiate the use of Secure Sockets Layer (SSL) version 2.0. As of April 2025, there is no formal date for TLS 1.2 to be deprecated. The specification for TLS 1.2 was redefined by the Standards Track document RFC 8446 to keep it as secure as possible; it is now to be treated as a fallback protocol, negotiated only with clients that are unable to talk TLS 1.3 (the original RFC 5246 definition of TLS 1.2 has since been obsoleted).

TLS 1.3 was defined in RFC 8446 in August 2018.[6] It is based on the earlier TLS 1.2 specification. Major differences from TLS 1.2 include:[49]

Network Security Services (NSS), the cryptography library developed by Mozilla and used by its web browser Firefox, enabled TLS 1.3 by default in February 2017.[51] TLS 1.3 support was subsequently added to Firefox 52.0, released in March 2017, but it was not automatically enabled, owing to compatibility issues for a small number of users.[52] TLS 1.3 was enabled by default in May 2018 with the release of Firefox 60.0.[53]

Google Chrome set TLS 1.3 as the default version for a short time in 2017. It then removed it as the default because of incompatible middleboxes such as Blue Coat web proxies.[54]

The intolerance of the new version of TLS was protocol ossification; middleboxes had ossified the protocol's version parameter. As a result, version 1.3 mimics the wire image of version 1.2. This change occurred very late in the design process, having been discovered only during browser deployment.[55] The discovery of this intolerance also led to the abandonment of the prior version-negotiation strategy, in which the highest matching version was picked, due to unworkable levels of ossification.[56] 'Greasing' an extension point, in which one protocol participant claims support for non-existent extensions so that unrecognised-but-actually-existent extensions are tolerated, thereby resisting ossification, was originally designed for TLS, but has since been adopted elsewhere.[56]

During the IETF 100 Hackathon, which took place in Singapore in 2017, the TLS Group worked on adapting open-source applications to use TLS 1.3.[57][58] The TLS group was made up of individuals from Japan, the United Kingdom, and Mauritius, via the cyberstorm.mu team.[58] This work was continued at the IETF 101 Hackathon in London[59] and the IETF 102 Hackathon in Montreal.[60]

wolfSSL enabled the use of TLS 1.3 as of version 3.11.1, released in May 2017.[61] As the first commercial TLS 1.3 implementation, wolfSSL 3.11.1 supported Draft 18 and now supports Draft 28,[62] the final version, as well as many older versions.
A series of blogs were published on the performance difference between TLS 1.2 and 1.3.[63]

In September 2018, the popular OpenSSL project released version 1.1.1 of its library, in which support for TLS 1.3 was "the headline new feature".[64]

Support for TLS 1.3 was added to Secure Channel (schannel) for the GA releases of Windows 11 and Windows Server 2022.[65]

The Electronic Frontier Foundation praised TLS 1.3 and expressed concern about the variant protocol Enterprise Transport Security (ETS), which intentionally disables important security measures in TLS 1.3.[66] Originally called Enterprise TLS (eTLS), ETS is a published standard known as ETSI TS 103523-3, "Middlebox Security Protocol, Part 3: Enterprise Transport Security". It is intended for use entirely within proprietary networks such as banking systems. ETS does not support forward secrecy, so as to allow third-party organizations connected to the proprietary networks to use their private key to monitor network traffic for the detection of malware and to make it easier to conduct audits.[67][68] Despite the claimed benefits, the EFF warned that the loss of forward secrecy could make it easier for data to be exposed, adding that there are better ways to analyze traffic.[66]

A digital certificate certifies the ownership of a public key by the named subject of the certificate, and indicates certain expected usages of that key. This allows others (relying parties) to rely upon signatures or on assertions made by the private key that corresponds to the certified public key. Keystores and trust stores can be in various formats, such as .pem, .crt, .pfx, and .jks.

TLS typically relies on a set of trusted third-party certificate authorities to establish the authenticity of certificates. Trust is usually anchored in a list of certificates distributed with user agent software,[69] and can be modified by the relying party.

According to Netcraft, which monitors active TLS certificates, the market-leading certificate authority (CA) had been Symantec since the beginning of its survey (or VeriSign, before the authentication services business unit was purchased by Symantec). As of 2015, Symantec accounted for just under a third of all certificates and 44% of the valid certificates used by the 1 million busiest websites, as counted by Netcraft.[70] In 2017, Symantec sold its TLS/SSL business to DigiCert.[71] In an updated report, it was shown that IdenTrust, DigiCert, and Sectigo have been the top three certificate authorities in terms of market share since May 2019.[72]

As a consequence of choosing X.509 certificates, certificate authorities and a public key infrastructure are necessary to verify the relation between a certificate and its owner, as well as to generate, sign, and administer the validity of certificates. While this can be more convenient than verifying identities via a web of trust, the 2013 mass surveillance disclosures made it more widely known that certificate authorities are a weak point from a security standpoint, allowing man-in-the-middle attacks (MITM) if the certificate authority cooperates (or is compromised).[73][74]

Before a client and server can begin to exchange information protected by TLS, they must securely exchange or agree upon an encryption key and a cipher to use when encrypting data (see § Cipher).
Among the methods used for key exchange/agreement are: public and private keys generated with RSA (denoted TLS_RSA in the TLS handshake protocol), Diffie–Hellman (TLS_DH), ephemeral Diffie–Hellman (TLS_DHE), elliptic-curve Diffie–Hellman (TLS_ECDH), ephemeral elliptic-curve Diffie–Hellman (TLS_ECDHE), anonymous Diffie–Hellman (TLS_DH_anon),[23] pre-shared key (TLS_PSK)[75] and Secure Remote Password (TLS_SRP).[76]

The TLS_DH_anon and TLS_ECDH_anon key agreement methods do not authenticate the server or the user and hence are rarely used, because they are vulnerable to man-in-the-middle attacks. Only TLS_DHE and TLS_ECDHE provide forward secrecy (a client-side sketch of insisting on this appears at the end of this section).

Public key certificates used during exchange/agreement also vary in the size of the public/private encryption keys used during the exchange, and hence in the robustness of the security provided. In July 2013, Google announced that it would no longer use 1024-bit public keys and would switch instead to 2048-bit keys to increase the security of the TLS encryption it provides to its users, because the encryption strength is directly related to the key size.[77][78]

A message authentication code (MAC) is used for data integrity. HMAC is used for the CBC mode of block ciphers. Authenticated encryption (AEAD), such as GCM and CCM mode, uses an AEAD-integrated MAC and does not use HMAC.[6]: §8.4 An HMAC-based PRF, or HKDF, is used for the TLS handshake.

In application design, TLS is usually implemented on top of transport-layer protocols, encrypting all of the protocol-related data of protocols such as HTTP, FTP, SMTP, NNTP and XMPP. Historically, TLS has been used primarily with reliable transport protocols such as the Transmission Control Protocol (TCP). However, it has also been implemented with datagram-oriented transport protocols, such as the User Datagram Protocol (UDP) and the Datagram Congestion Control Protocol (DCCP), usage of which has been standardized independently using the term Datagram Transport Layer Security (DTLS).

A primary use of TLS is to secure World Wide Web traffic between a website and a web browser encoded with the HTTP protocol. This use of TLS to secure HTTP traffic constitutes the HTTPS protocol.[93]

As of March 2025, the latest versions of all major web browsers support TLS 1.2 and 1.3 and have them enabled by default, with the exception of IE 11. TLS 1.0 and 1.1 are disabled by default on the latest versions of all major browsers. Mitigations against known attacks are not yet sufficient:

Most SSL and TLS programming libraries are free and open-source software. A paper presented at the 2012 ACM conference on computer and communications security[98] showed that many applications used some of these SSL libraries incorrectly, leading to vulnerabilities. According to the authors: "The root cause of most of these vulnerabilities is the terrible design of the APIs to the underlying SSL libraries. Instead of expressing high-level security properties of network tunnels such as confidentiality and authentication, these APIs expose low-level details of the SSL protocol to application developers. As a consequence, developers often use SSL APIs incorrectly, misinterpreting and misunderstanding their manifold parameters, options, side effects, and return values."

The Simple Mail Transfer Protocol (SMTP) can also be protected by TLS. These applications use public key certificates to verify the identity of endpoints.

TLS can also be used to tunnel an entire network stack to create a VPN, which is the case with OpenVPN and OpenConnect.
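As noted above, only the ephemeral TLS_DHE and TLS_ECDHE exchanges provide forward secrecy, and in TLS 1.3 the certificate-based handshake always performs an ephemeral (EC)DHE exchange. One way for a client to avoid the non-forward-secret key exchanges is therefore to refuse anything below TLS 1.3. The following is a minimal sketch using Python's standard ssl module; the host name is a placeholder, not a recommendation.

import socket
import ssl

HOST = "example.org"  # placeholder server name; substitute your own

context = ssl.create_default_context()            # loads the system trust store
context.minimum_version = ssl.TLSVersion.TLSv1_3  # refuse TLS 1.2 and below

with socket.create_connection((HOST, 443)) as sock:
    with context.wrap_socket(sock, server_hostname=HOST) as tls:
        print(tls.version())  # e.g. 'TLSv1.3'
        print(tls.cipher())   # e.g. ('TLS_AES_256_GCM_SHA384', 'TLSv1.3', 256)

On servers that cannot speak TLS 1.3, the handshake simply fails rather than silently falling back to a non-ephemeral key exchange.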
Many vendors have by now married TLS's encryption and authentication capabilities with authorization. There has also been substantial development since the late 1990s in creating client technology outside of web browsers, in order to enable support for client/server applications. Compared to traditional IPsec VPN technologies, TLS has some inherent advantages in firewall and NAT traversal that make it easier to administer for large remote-access populations.

TLS is also a standard method for protecting Session Initiation Protocol (SIP) application signaling. TLS can be used to provide authentication and encryption of the SIP signaling associated with VoIP and other SIP-based applications.[99]

Significant attacks against TLS/SSL are listed below. In February 2015, IETF issued an informational RFC[100] summarizing the various known attacks against TLS/SSL.

A vulnerability of the renegotiation procedure was discovered in August 2009 that can lead to plaintext injection attacks against SSL 3.0 and all current versions of TLS.[101] For example, it allows an attacker who can hijack an https connection to splice their own requests into the beginning of the conversation the client has with the web server. The attacker cannot actually decrypt the client–server communication, so it is different from a typical man-in-the-middle attack. A short-term fix is for web servers to stop allowing renegotiation, which typically will not require other changes unless client certificate authentication is used. To fix the vulnerability, a renegotiation indication extension was proposed for TLS. It requires the client and server to include and verify information about previous handshakes in any renegotiation handshakes.[102] This extension has become a proposed standard and has been assigned the number RFC 5746. The RFC has been implemented by several libraries.[103][104][105]

A protocol downgrade attack (also called a version rollback attack) tricks a web server into negotiating connections with previous versions of TLS (such as SSLv2) that have long since been abandoned as insecure. Previous modifications to the original protocols, like False Start[106] (adopted and enabled by Google Chrome[107]) or Snap Start, reportedly introduced limited TLS protocol downgrade attacks[108] or allowed modifications to the cipher suite list sent by the client to the server. In doing so, an attacker might succeed in influencing the cipher suite selection in an attempt to downgrade the cipher suite negotiated to use either a weaker symmetric encryption algorithm or a weaker key exchange.[109] A paper presented at an ACM conference on computer and communications security in 2012 demonstrated that the False Start extension was at risk: in certain circumstances it could allow an attacker to recover the encryption keys offline and to access the encrypted data.[110]

Encryption downgrade attacks can force servers and clients to negotiate a connection using cryptographically weak keys. In 2014, a man-in-the-middle attack called FREAK was discovered affecting the OpenSSL stack, the default Android web browser, and some Safari browsers.[111] The attack involved tricking servers into negotiating a TLS connection using cryptographically weak 512-bit encryption keys.

Logjam is a security exploit discovered in May 2015 that exploits the option of using legacy "export-grade" 512-bit Diffie–Hellman groups dating back to the 1990s.[112] It forces susceptible servers to downgrade to cryptographically weak 512-bit Diffie–Hellman groups.
An attacker can then deduce the keys the client and server determine using the Diffie–Hellman key exchange.

The DROWN attack is an exploit that attacks servers supporting contemporary SSL/TLS protocol suites by exploiting their support for the obsolete, insecure SSLv2 protocol to leverage an attack on connections using up-to-date protocols that would otherwise be secure.[113][114] DROWN exploits a vulnerability in the protocols used and the configuration of the server, rather than any specific implementation error. Full details of DROWN were announced in March 2016, together with a patch for the exploit. At that time, more than 81,000 of the top 1 million most popular websites were among the TLS-protected websites that were vulnerable to the DROWN attack.[114]

On September 23, 2011, researchers Thai Duong and Juliano Rizzo demonstrated a proof of concept called BEAST (Browser Exploit Against SSL/TLS)[115] using a Java applet to violate same-origin policy constraints, for a long-known cipher block chaining (CBC) vulnerability in TLS 1.0:[116][117] an attacker observing two consecutive ciphertext blocks C0, C1 can test whether the plaintext block P1 is equal to x by choosing the next plaintext block P2 = x ⊕ C0 ⊕ C1; by the CBC operation, C2 = E(C1 ⊕ P2) = E(C1 ⊕ x ⊕ C0 ⊕ C1) = E(C0 ⊕ x), which will be equal to C1 if x = P1. Practical exploits had not been previously demonstrated for this vulnerability, which was originally discovered by Phillip Rogaway[118] in 2002. The vulnerability of the attack had been fixed with TLS 1.1 in 2006, but TLS 1.1 had not seen wide adoption prior to this attack demonstration.

RC4, as a stream cipher, is immune to the BEAST attack. Therefore, RC4 was widely used as a way to mitigate the BEAST attack on the server side. However, in 2013, researchers found more weaknesses in RC4. Thereafter, enabling RC4 on the server side was no longer recommended.[119]

Chrome and Firefox themselves are not vulnerable to the BEAST attack;[120][121] however, Mozilla updated its NSS libraries to mitigate BEAST-like attacks. NSS is used by Mozilla Firefox and Google Chrome to implement SSL. Some web servers that have a broken implementation of the SSL specification may stop working as a result.[122]

Microsoft released Security Bulletin MS12-006 on January 10, 2012, which fixed the BEAST vulnerability by changing the way that the Windows Secure Channel (Schannel) component transmits encrypted network packets from the server end.[123] Users of Internet Explorer (prior to version 11) that run on older versions of Windows (Windows 7, Windows 8 and Windows Server 2008 R2) can restrict use of TLS to 1.1 or higher.

Apple fixed the BEAST vulnerability by implementing a 1/n-1 split and turning it on by default in OS X Mavericks, released on October 22, 2013.[124]

The authors of the BEAST attack are also the creators of the later CRIME attack, which can allow an attacker to recover the content of web cookies when data compression is used along with TLS.[125][126] When used to recover the content of secret authentication cookies, it allows an attacker to perform session hijacking on an authenticated web session. While the CRIME attack was presented as a general attack that could work effectively against a large number of protocols, including but not limited to TLS and application-layer protocols such as SPDY or HTTP, only exploits against TLS and SPDY were demonstrated and largely mitigated in browsers and servers.
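The XOR algebra behind the BEAST observation above can be checked with a toy model. In the sketch below, a truncated SHA-256 stands in for the block cipher E under a fixed key; this is not real encryption, it only demonstrates that C2 = E(C1 ⊕ P2) = E(C0 ⊕ x) collides with C1 exactly when the guess x equals P1.

import hashlib
import os

BLOCK = 16

def E(block: bytes) -> bytes:
    # Deterministic stand-in for block encryption under a fixed key.
    return hashlib.sha256(b"fixed-key" + block).digest()[:BLOCK]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

# The victim encrypts P1 in CBC mode: C1 = E(C0 xor P1).
C0 = os.urandom(BLOCK)
P1 = b"secret cookie!!!"  # 16 bytes, unknown to the attacker
C1 = E(xor(C0, P1))

# The attacker chooses the next plaintext block P2 = x xor C0 xor C1, so that
# C2 = E(C1 xor P2) = E(C0 xor x), which equals C1 exactly when x = P1.
for x in (b"wrong guess here", P1):
    C2 = E(xor(C1, xor(xor(x, C0), C1)))
    print(x, "->", C2 == C1)  # prints False, then True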
The CRIME exploit against HTTP compression has not been mitigated at all, even though the authors of CRIME have warned that this vulnerability might be even more widespread than SPDY and TLS compression combined. In 2013 a new instance of the CRIME attack against HTTP compression, dubbed BREACH, was announced. Based on the CRIME attack, a BREACH attack can extract login tokens, email addresses or other sensitive information from TLS-encrypted web traffic in as little as 30 seconds (depending on the number of bytes to be extracted), provided the attacker tricks the victim into visiting a malicious web link or is able to inject content into valid pages the user is visiting (e.g., via a wireless network under the control of the attacker).[127] All versions of TLS and SSL are at risk from BREACH regardless of the encryption algorithm or cipher used.[128] Unlike previous instances of CRIME, which can be successfully defended against by turning off TLS compression or SPDY header compression, BREACH exploits HTTP compression, which cannot realistically be turned off, as virtually all web servers rely upon it to improve data transmission speeds for users.[127] This is a known limitation of TLS, as it is susceptible to chosen-plaintext attacks against the application-layer data it was meant to protect.

Earlier TLS versions were vulnerable to the padding oracle attack discovered in 2002. A novel variant, called the Lucky Thirteen attack, was published in 2013. Some experts[90] also recommended avoiding Triple-DES CBC. Since the last supported ciphers developed for programs using Windows XP's SSL/TLS library (such as Internet Explorer on Windows XP) are RC4 and Triple-DES, and since RC4 is now deprecated (see the discussion of RC4 attacks), this makes it difficult to support any version of SSL for any program using this library on XP. A fix was released as the Encrypt-then-MAC extension to the TLS specification, published as RFC 7366.[129] The Lucky Thirteen attack can be mitigated in TLS 1.2 by using only AES_GCM ciphers; AES_CBC remains vulnerable.

SSL may safeguard email, VoIP, and other types of communications over insecure networks, in addition to its primary use case of secure data transmission between a client and a server.[2]

On October 14, 2014, Google researchers published a vulnerability in the design of SSL 3.0 that makes the CBC mode of operation with SSL 3.0 vulnerable to a padding attack (CVE-2014-3566). They named this attack POODLE (Padding Oracle On Downgraded Legacy Encryption). On average, attackers only need to make 256 SSL 3.0 requests to reveal one byte of encrypted messages.[96]

Although this vulnerability only exists in SSL 3.0, and most clients and servers support TLS 1.0 and above, all major browsers voluntarily downgrade to SSL 3.0 if the handshakes with newer versions of TLS fail, unless they provide the option for a user or administrator to disable SSL 3.0 and the user or administrator does so[citation needed]. Therefore, a man-in-the-middle can first conduct a version rollback attack and then exploit this vulnerability.[96]

On December 8, 2014, a variant of POODLE was announced that impacts TLS implementations that do not properly enforce padding byte requirements.[130]

Despite the existence of attacks on RC4 that broke its security, cipher suites in SSL and TLS that were based on RC4 were still considered secure prior to 2013, based on the way in which they were used in SSL and TLS.
In 2011, the RC4 suite was actually recommended as a workaround for the BEAST attack.[131] New forms of attack disclosed in March 2013 conclusively demonstrated the feasibility of breaking RC4 in TLS, suggesting it was not a good workaround for BEAST.[95] An attack scenario was proposed by AlFardan, Bernstein, Paterson, Poettering and Schuldt that used newly discovered statistical biases in the RC4 key table[132] to recover parts of the plaintext with a large number of TLS encryptions.[133][134] An attack on RC4 in TLS and SSL that requires 13 × 2²⁰ encryptions to break RC4 was unveiled on 8 July 2013 and later described as "feasible" in the accompanying presentation at a USENIX Security Symposium in August 2013.[135][136] In July 2015, subsequent improvements in the attack made it increasingly practical to defeat the security of RC4-encrypted TLS.[137]

As many modern browsers have been designed to defeat BEAST attacks (except Safari for Mac OS X 10.7 or earlier, for iOS 6 or earlier, and for Windows; see § Web browsers), RC4 is no longer a good choice for TLS 1.0. The CBC ciphers that were affected by the BEAST attack in the past have become a more popular choice for protection.[90] Mozilla and Microsoft recommend disabling RC4 where possible.[138][139] RFC 7465 prohibits the use of RC4 cipher suites in all versions of TLS.

On September 1, 2015, Microsoft, Google, and Mozilla announced that RC4 cipher suites would be disabled by default in their browsers (Microsoft Edge [Legacy], Internet Explorer 11 on Windows 7/8.1/10, Firefox, and Chrome) in early 2016.[140][141][142]

A TLS (logout) truncation attack blocks a victim's account logout requests so that the user unknowingly remains logged into a web service. When the request to sign out is sent, the attacker injects an unencrypted TCP FIN message (no more data from sender) to close the connection. The server therefore does not receive the logout request and is unaware of the abnormal termination.[143]

Published in July 2013,[144][145] the attack causes web services such as Gmail and Hotmail to display a page that informs the user that they have successfully signed out, while ensuring that the user's browser maintains authorization with the service, allowing an attacker with subsequent access to the browser to access and take over control of the user's logged-in account. The attack does not rely on installing malware on the victim's computer; attackers need only place themselves between the victim and the web server (e.g., by setting up a rogue wireless hotspot).[143] This vulnerability also requires access to the victim's computer. Another possibility arises when using FTP: the data connection can have a false FIN injected into the data stream, and if the protocol rules for exchanging close_notify alerts are not adhered to, a file can be truncated.

In February 2013, two researchers from Royal Holloway, University of London discovered a timing attack[146] which allowed them to recover (parts of) the plaintext from a DTLS connection using the OpenSSL or GnuTLS implementation of DTLS when cipher block chaining mode encryption was used.

This attack, discovered in mid-2016, exploits weaknesses in the Web Proxy Autodiscovery Protocol (WPAD) to expose the URL that a web user is attempting to reach via a TLS-enabled web link.[147] Disclosure of a URL can violate a user's privacy, not only because of the website accessed, but also because URLs are sometimes used to authenticate users.
Document-sharing services, such as those offered by Google and Dropbox, also work by sending a user a security token that is included in the URL. An attacker who obtains such URLs may be able to gain full access to a victim's account or data. The exploit works against almost all browsers and operating systems.

The Sweet32 attack breaks all 64-bit block ciphers used in CBC mode as used in TLS by exploiting a birthday attack and either a man-in-the-middle attack or injection of malicious JavaScript into a web page. The purpose of the man-in-the-middle attack or the JavaScript injection is to allow the attacker to capture enough traffic to mount a birthday attack.[148]

The Heartbleed bug is a serious vulnerability specific to the implementation of SSL/TLS in the popular OpenSSL cryptographic software library, affecting versions 1.0.1 to 1.0.1f. This weakness, reported in April 2014, allows attackers to steal private keys from servers that should normally be protected.[149] The Heartbleed bug allows anyone on the Internet to read the memory of the systems protected by the vulnerable versions of the OpenSSL software. This compromises the secret private keys associated with the public certificates used to identify the service providers and to encrypt the traffic, the names and passwords of the users, and the actual content. This allows attackers to eavesdrop on communications, steal data directly from the services and users, and impersonate services and users.[150] The vulnerability is caused by a buffer over-read bug in the OpenSSL software, rather than a defect in the SSL or TLS protocol specification.

In September 2014, a variant of Daniel Bleichenbacher's PKCS#1 v1.5 RSA signature forgery vulnerability[151] was announced by Intel Security Advanced Threat Research. This attack, dubbed BERserk, is a result of incomplete ASN.1 length decoding of public key signatures in some SSL implementations, and allows a man-in-the-middle attack by forging a public key signature.[152]

In February 2015, after media reported the hidden pre-installation of Superfish adware on some Lenovo notebooks,[153] a researcher found a trusted root certificate on affected Lenovo machines to be insecure, as the keys could easily be accessed using the company name, Komodia, as a passphrase.[154] The Komodia library was designed to intercept client-side TLS/SSL traffic for parental control and surveillance, but it was also used in numerous adware programs, including Superfish, that were often surreptitiously installed unbeknownst to the computer user. In turn, these potentially unwanted programs installed the corrupt root certificate, allowing attackers to completely control web traffic and confirm false websites as authentic.

In May 2016, it was reported that dozens of Danish HTTPS-protected websites belonging to Visa Inc. were vulnerable to attacks allowing hackers to inject malicious code and forged content into the browsers of visitors.[155] The attacks worked because the TLS implementation used on the affected servers incorrectly reused random numbers (nonces), which are intended to be used only once so that each TLS handshake is unique.[155]

In February 2017, an implementation error caused by a single mistyped character in code used to parse HTML created a buffer overflow error on Cloudflare servers.
Similar in its effects to the Heartbleed bug discovered in 2014, this overflow error, widely known as Cloudbleed, allowed unauthorized third parties to read data in the memory of programs running on the servers, data that should otherwise have been protected by TLS.[156]

As of July 2021, the Trustworthy Internet Movement estimated the ratio of websites that are vulnerable to TLS attacks.[94]

Forward secrecy is a property of cryptographic systems which ensures that a session key derived from a set of public and private keys will not be compromised if one of the private keys is compromised in the future.[157] Without forward secrecy, if the server's private key is compromised, not only will all future TLS-encrypted sessions using that server certificate be compromised, but also any past sessions that used it (provided that these past sessions were intercepted and stored at the time of transmission).[158] An implementation of TLS can provide forward secrecy by requiring the use of ephemeral Diffie–Hellman key exchange to establish session keys, and some notable TLS implementations do so exclusively: e.g., Gmail and other Google HTTPS services that use OpenSSL.[159] However, many clients and servers supporting TLS (including browsers and web servers) are not configured to implement such restrictions.[160][161] In practice, unless a web service uses Diffie–Hellman key exchange to implement forward secrecy, all of the encrypted web traffic to and from that service can be decrypted by a third party if it obtains the server's master (private) key; e.g., by means of a court order.[162]

Even where Diffie–Hellman key exchange is implemented, server-side session management mechanisms can impact forward secrecy. The use of TLS session tickets (a TLS extension) causes the session to be protected by AES128-CBC-SHA256 regardless of any other negotiated TLS parameters, including forward secrecy ciphersuites, and the long-lived TLS session ticket keys defeat the attempt to implement forward secrecy.[163][164][165] Stanford University research in 2014 also found that of 473,802 TLS servers surveyed, 82.9% of the servers deploying ephemeral Diffie–Hellman (DHE) key exchange to support forward secrecy were using weak Diffie–Hellman parameters. These weak parameter choices could potentially compromise the effectiveness of the forward secrecy that the servers sought to provide.[166]

Since late 2011, Google has provided forward secrecy with TLS by default to users of its Gmail service, along with Google Docs and encrypted search, among other services.[167] Since November 2013, Twitter has provided forward secrecy with TLS to users of its service.[168] As of August 2019, about 80% of TLS-enabled websites are configured to use cipher suites that provide forward secrecy to most web browsers.[94]

TLS interception (or HTTPS interception if applied particularly to that protocol) is the practice of intercepting an encrypted data stream in order to decrypt it, read and possibly manipulate it, and then re-encrypt it and send the data on its way again.
This is done by way of a "transparent proxy": the interception software terminates the incoming TLS connection, inspects the HTTP plaintext, and then creates a new TLS connection to the destination.[169]

TLS/HTTPS interception is used as an information security measure by network operators in order to be able to scan for and protect against the intrusion of malicious content into the network, such as computer viruses and other malware.[169] Such content could otherwise not be detected as long as it is protected by encryption, which is increasingly the case as a result of the routine use of HTTPS and other secure protocols.

A significant drawback of TLS/HTTPS interception is that it introduces new security risks of its own. One notable limitation is that it provides a point where network traffic is available unencrypted, giving attackers an incentive to attack this point in particular in order to gain access to otherwise secure content. The interception also allows the network operator, or persons who gain access to its interception system, to perform man-in-the-middle attacks against network users. A 2017 study found that "HTTPS interception has become startlingly widespread, and that interception products as a class have a dramatically negative impact on connection security".[169]

The TLS protocol exchanges records, which encapsulate the data to be exchanged in a specific format (see below). Each record can be compressed, padded, appended with a message authentication code (MAC), or encrypted, all depending on the state of the connection. Each record has a content type field that designates the type of data encapsulated, a length field, and a TLS version field. The data encapsulated may be control or procedural messages of TLS itself, or simply the application data needed to be transferred by TLS. The specifications (cipher suite, keys, etc.) required to exchange application data by TLS are agreed upon in the "TLS handshake" between the client requesting the data and the server responding to requests. The protocol therefore defines both the structure of payloads transferred in TLS and the procedure to establish and monitor the transfer.

When the connection starts, the record encapsulates a "control" protocol – the handshake messaging protocol (content type 22). This protocol is used to exchange all the information required by both sides for the exchange of the actual application data by TLS. It defines the format of messages and the order of their exchange. These may vary according to the demands of the client and server, i.e., there are several possible procedures to set up the connection. This initial exchange results either in a successful TLS connection (both parties ready to transfer application data with TLS) or in an alert message (as specified below).

A typical connection example follows, illustrating a handshake where the server (but not the client) is authenticated by its certificate:

The following full example shows a client being authenticated (in addition to the server, as in the example above; see mutual authentication) via TLS using certificates exchanged between both peers.

Public key operations (e.g., RSA) are relatively expensive in terms of computational power. TLS provides a secure shortcut in the handshake mechanism to avoid these operations: resumed sessions. Resumed sessions are implemented using session IDs or session tickets.
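A client-side sketch of the resumption shortcut, again with Python's standard ssl module and a placeholder host name: the opaque session object saved from a full handshake is handed to a second connection. Whether the server issues an ID-based or ticket-based session is its choice, and under TLS 1.3 the ticket may only arrive after some application data has been exchanged, so resumption here is best-effort.

import socket
import ssl

HOST = "example.org"  # placeholder server name
context = ssl.create_default_context()

def connect(session=None):
    sock = socket.create_connection((HOST, 443))
    return context.wrap_socket(sock, server_hostname=HOST, session=session)

first = connect()
saved = first.session         # opaque state from the full handshake
first.close()

second = connect(session=saved)
print(second.session_reused)  # True if the server accepted the shortcut
second.close()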
Apart from the performance benefit, resumed sessions can also be used for single sign-on, as it is guaranteed that both the original session and any resumed session originate from the same client. This is of particular importance for the FTP over TLS/SSL protocol, which would otherwise suffer from a man-in-the-middle attack in which an attacker could intercept the contents of the secondary data connections.[172]

The TLS 1.3 handshake was condensed to only one round trip, compared to the two round trips required in previous versions of TLS/SSL. To start the handshake, the client guesses which key exchange algorithm will be selected by the server and sends a ClientHello message to the server containing a list of supported ciphers (in order of the client's preference) and public keys for some or all of its key exchange guesses. If the client successfully guesses the key exchange algorithm, one round trip is eliminated from the handshake. After receiving the ClientHello, the server selects a cipher and sends back a ServerHello with its own public key, followed by server Certificate and Finished messages.[173] After the client receives the server's Finished message, it is now coordinated with the server on which cipher suite to use.[174]

In an ordinary full handshake, the server sends a session id as part of the ServerHello message. The client associates this session id with the server's IP address and TCP port, so that when the client connects again to that server, it can use the session id to shortcut the handshake. In the server, the session id maps to the cryptographic parameters previously negotiated, specifically the "master secret". Both sides must have the same "master secret", or the resumed handshake will fail (this prevents an eavesdropper from using a session id). The random data in the ClientHello and ServerHello messages virtually guarantee that the generated connection keys will be different from those in the previous connection. In the RFCs, this type of handshake is called an abbreviated handshake. It is also described in the literature as a restart handshake.

RFC 5077 extends TLS via use of session tickets, instead of session IDs. It defines a way to resume a TLS session without requiring that session-specific state be stored at the TLS server. When using session tickets, the TLS server stores its session-specific state in a session ticket and sends the session ticket to the TLS client for storing. The client resumes a TLS session by sending the session ticket to the server, and the server resumes the TLS session according to the session-specific state in the ticket. The session ticket is encrypted and authenticated by the server, and the server verifies its validity before using its contents.

One particular weakness of this method with OpenSSL is that it always limits the encryption and authentication security of the transmitted TLS session ticket to AES128-CBC-SHA256, no matter what other TLS parameters were negotiated for the actual TLS session.[164] This means that the state information (the TLS session ticket) is not as well protected as the TLS session itself. Of particular concern is OpenSSL's storage of the keys in an application-wide context (SSL_CTX), i.e., for the life of the application, and not allowing for re-keying of the AES128-CBC-SHA256 TLS session tickets without resetting the application-wide OpenSSL context (which is uncommon, error-prone, and often requires manual administrative intervention).[165][163]

This is the general format of all TLS records.
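The record-format table of the original article is not reproduced here, but the fixed five-byte record header described above (one byte of content type, two bytes of legacy protocol version, two bytes of fragment length) is straightforward to decode, as the following sketch shows.

import struct

CONTENT_TYPES = {20: "change_cipher_spec", 21: "alert",
                 22: "handshake", 23: "application_data"}

def parse_record_header(header: bytes) -> dict:
    # One byte content type, two bytes version, two bytes fragment length.
    ctype, major, minor, length = struct.unpack("!BBBH", header)
    return {"content_type": CONTENT_TYPES.get(ctype, "unknown"),
            "wire_version": f"{major}.{minor}",  # e.g. 3.3 for TLS 1.2
            "fragment_length": length}

# A handshake record (content type 22) announcing a 512-byte fragment:
print(parse_record_header(bytes([22, 3, 1]) + struct.pack("!H", 512)))
# {'content_type': 'handshake', 'wire_version': '3.1', 'fragment_length': 512}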
Most messages exchanged during the setup of the TLS session are based on this record, unless an error or warning occurs and needs to be signaled by an Alert protocol record (see below), or the encryption mode of the session is modified by another record (see ChangeCipherSpec protocol below). Note that multiple handshake messages may be combined within one record.

This record should normally not be sent during normal handshaking or application exchanges. However, this message can be sent at any time during the handshake and up to the closure of the session. If this is used to signal a fatal error, the session will be closed immediately after sending this record, so this record is used to give a reason for the closure. If the alert level is flagged as a warning, the remote end can decide to close the session if it decides that the session is not reliable enough for its needs (before doing so, the remote end may also send its own signal).

From the application protocol point of view, TLS belongs to a lower layer, although the TCP/IP model is too coarse to show it. This means that the TLS handshake is usually (except in the STARTTLS case) performed before the application protocol can start. In the name-based virtual server feature provided by the application layer, all co-hosted virtual servers share the same certificate, because the server has to select and send a certificate immediately after the ClientHello message. This is a big problem in hosting environments because it means either sharing the same certificate among all customers or using a different IP address for each of them. There are two known workarounds provided by X.509:

To provide the server name, RFC 4366 Transport Layer Security (TLS) Extensions allow clients to include a Server Name Indication extension (SNI) in the extended ClientHello message. This extension hints to the server immediately which name the client wishes to connect to, so the server can select the appropriate certificate to send to the client.

RFC 2817 also documents a method to implement name-based virtual hosting by upgrading HTTP to TLS via an HTTP/1.1 Upgrade header. Normally this is done to securely implement HTTP over TLS within the main "http" URI scheme (which avoids forking the URI space and reduces the number of used ports); however, few implementations currently support this.[citation needed]

The current approved version of (D)TLS is version 1.3, which is specified in:

The current standard replaces these former versions, which are now considered obsolete:

Other RFCs subsequently extended (D)TLS. Extensions to (D)TLS 1.3 include:

Extensions to (D)TLS 1.2 include:

Extensions to (D)TLS 1.1 include:

Extensions to TLS 1.0 include:
https://en.wikipedia.org/wiki/Transport_Layer_Security#Cipher
Control theory is a field of control engineering and applied mathematics that deals with the control of dynamical systems in engineered processes and machines. The objective is to develop a model or algorithm governing the application of system inputs to drive the system to a desired state, while minimizing any delay, overshoot, or steady-state error and ensuring a level of control stability, often with the aim of achieving a degree of optimality.

To do this, a controller with the requisite corrective behavior is required. This controller monitors the controlled process variable (PV) and compares it with the reference or set point (SP). The difference between the actual and desired value of the process variable, called the error signal, or SP-PV error, is applied as feedback to generate a control action to bring the controlled process variable to the same value as the set point. Other aspects which are also studied are controllability and observability. Control theory is used in control system engineering to design automation that has revolutionized manufacturing, aircraft, communications and other industries, and created new fields such as robotics.

Extensive use is usually made of a diagrammatic style known as the block diagram. In it the transfer function, also known as the system function or network function, is a mathematical model of the relation between the input and output based on the differential equations describing the system.

Control theory dates from the 19th century, when the theoretical basis for the operation of governors was first described by James Clerk Maxwell.[1] Control theory was further advanced by Edward Routh in 1874, Charles Sturm, and in 1895, Adolf Hurwitz, all of whom contributed to the establishment of control stability criteria; and from 1922 onwards, by the development of PID control theory by Nicolas Minorsky.[2] Although a major application of mathematical control theory is in control systems engineering, which deals with the design of process control systems for industry, other applications range far beyond this. As the general theory of feedback systems, control theory is useful wherever feedback occurs; thus control theory also has applications in life sciences, computer engineering, sociology and operations research.[3]

Although control systems of various types date back to antiquity, a more formal analysis of the field began with a dynamics analysis of the centrifugal governor, conducted by the physicist James Clerk Maxwell in 1868, entitled On Governors.[4] A centrifugal governor was already used to regulate the velocity of windmills.[5] Maxwell described and analyzed the phenomenon of self-oscillation, in which lags in the system may lead to overcompensation and unstable behavior. This generated a flurry of interest in the topic, during which Maxwell's classmate, Edward John Routh, abstracted Maxwell's results for the general class of linear systems.[6] Independently, Adolf Hurwitz analyzed system stability using differential equations in 1877, resulting in what is now known as the Routh–Hurwitz theorem.[7][8]

A notable application of dynamic control was in the area of crewed flight. The Wright brothers made their first successful test flights on December 17, 1903, and were distinguished by their ability to control their flights for substantial periods (more so than the ability to produce lift from an airfoil, which was known). Continuous, reliable control of the airplane was necessary for flights lasting longer than a few seconds.
By World War II, control theory was becoming an important area of research. Irmgard Flügge-Lotz developed the theory of discontinuous automatic control systems and applied the bang-bang principle to the development of automatic flight control equipment for aircraft.[9][10] Other areas of application for discontinuous controls included fire-control systems, guidance systems and electronics.

Sometimes, mechanical methods are used to improve the stability of systems. For example, ship stabilizers are fins mounted beneath the waterline and emerging laterally. In contemporary vessels, they may be gyroscopically controlled active fins, which have the capacity to change their angle of attack to counteract roll caused by wind or waves acting on the ship.

The Space Race also depended on accurate spacecraft control, and control theory has also seen increasing use in fields such as economics and artificial intelligence. Here, one might say that the goal is to find an internal model that obeys the good regulator theorem. So, for example, in economics, the more accurately a (stock or commodities) trading model represents the actions of the market, the more easily it can control that market (and extract "useful work" (profits) from it). In AI, an example might be a chatbot modelling the discourse state of humans: the more accurately it can model the human state (e.g. on a telephone voice-support hotline), the better it can manipulate the human (e.g. into performing the corrective actions to resolve the problem that caused the phone call to the help-line). These last two examples take the narrow historical interpretation of control theory as a set of differential equations modeling and regulating kinetic motion, and broaden it into a vast generalization of a regulator interacting with a plant.

Fundamentally, there are two types of control loop: open-loop control (feedforward) and closed-loop control (feedback). The definition of a closed-loop control system according to the British Standards Institution is "a control system possessing monitoring feedback, the deviation signal formed as a result of this feedback being used to control the action of a final control element in such a way as to tend to reduce the deviation to zero."[12]

A closed-loop controller or feedback controller is a control loop which incorporates feedback, in contrast to an open-loop controller or non-feedback controller. A closed-loop controller uses feedback to control states or outputs of a dynamical system. Its name comes from the information path in the system: process inputs (e.g., voltage applied to an electric motor) have an effect on the process outputs (e.g., speed or torque of the motor), which is measured with sensors and processed by the controller; the result (the control signal) is "fed back" as input to the process, closing the loop.[14]

In the case of linear feedback systems, a control loop including sensors, control algorithms, and actuators is arranged in an attempt to regulate a variable at a setpoint (SP). An everyday example is the cruise control on a road vehicle, where external influences such as hills would cause speed changes, and the driver has the ability to alter the desired set speed. The PID algorithm in the controller restores the actual speed to the desired speed in an optimal way, with minimal delay or overshoot, by controlling the power output of the vehicle's engine; a minimal sketch of such a loop appears below.
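The sketch below closes such a loop in discrete time around a toy first-order "vehicle" model; the plant, the gains and the time step are invented purely to illustrate the SP - PV error feedback described above.

KP, KI, KD = 0.8, 0.3, 0.05   # hypothetical PID tuning
DT = 0.1                      # control period, seconds

def simulate(setpoint=25.0, steps=200):
    speed = 0.0               # process variable (PV)
    integral, prev_error = 0.0, 0.0
    for _ in range(steps):
        error = setpoint - speed                          # SP - PV error
        integral += error * DT
        derivative = (error - prev_error) / DT
        u = KP * error + KI * integral + KD * derivative  # control action
        prev_error = error
        # Toy plant: speed rises with engine command u, decays with drag.
        speed += (u - 0.2 * speed) * DT
    return speed

print(round(simulate(), 2))   # settles near the 25.0 set point

The integral term is what removes the steady-state error here: a constant engine command is needed to balance drag at the set speed, and only the accumulated integral can supply it once the error has shrunk.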
Control systems that include some sensing of the results they are trying to achieve are making use of feedback and can adapt to varying circumstances to some extent. Open-loop control systems do not make use of feedback, and run only in pre-arranged ways.

Closed-loop controllers have the following advantages over open-loop controllers:

In some systems, closed-loop and open-loop control are used simultaneously. In such systems, the open-loop control is termed feedforward and serves to further improve reference-tracking performance. A common closed-loop controller architecture is the PID controller.

The field of control theory can be divided into two branches:

Mathematical techniques for analyzing and designing control systems fall into two different categories:

In contrast to the frequency-domain analysis of classical control theory, modern control theory utilizes the time-domain state-space representation,[citation needed] a mathematical model of a physical system as a set of input, output and state variables related by first-order differential equations. To abstract from the number of inputs, outputs, and states, the variables are expressed as vectors, and the differential and algebraic equations are written in matrix form (the latter only being possible when the dynamical system is linear). The state-space representation (also known as the "time-domain approach") provides a convenient and compact way to model and analyze systems with multiple inputs and outputs; with multiple inputs and outputs, we would otherwise have to write down a Laplace transform for every input–output pair to encode all the information about a system. Unlike the frequency-domain approach, the use of the state-space representation is not limited to systems with linear components and zero initial conditions. "State space" refers to the space whose axes are the state variables. The state of the system can be represented as a point within that space.[17][18] A small numerical sketch of this representation appears at the end of this subsection.

Control systems can be divided into different categories depending on the number of inputs and outputs.

The scope of classical control theory is limited to single-input and single-output (SISO) system design, except when analyzing for disturbance rejection using a second input. The system analysis is carried out in the time domain using differential equations, in the complex-s domain with the Laplace transform, or in the frequency domain by transforming from the complex-s domain. Many systems may be assumed to have a second-order, single-variable system response in the time domain. A controller designed using classical theory often requires on-site tuning due to incorrect design approximations. Yet, due to the easier physical implementation of classical controller designs as compared to systems designed using modern control theory, these controllers are preferred in most industrial applications. The most common controllers designed using classical control theory are PID controllers. A less common implementation may include either or both a lead or lag filter. The ultimate end goal is to meet requirements typically provided in the time domain, called the step response, or at times in the frequency domain, called the open-loop response. The step response characteristics applied in a specification are typically percent overshoot, settling time, etc. The open-loop response characteristics applied in a specification are typically gain and phase margin and bandwidth. These characteristics may be evaluated through simulation, including a dynamic model of the system under control coupled with the compensation model.
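The promised sketch of the state-space representation: a discrete-time update x[k+1] = A x[k] + B u[k], y[k] = C x[k], with matrices made up for illustration. Because state, input and output are plain vectors, extra inputs or outputs simply mean wider matrices, which is the MIMO convenience discussed above.

import numpy as np

A = np.array([[0.9, 0.1],
              [0.0, 0.8]])   # state dynamics; eigenvalues inside the unit circle
B = np.array([[0.0],
              [0.5]])        # how the (single) input enters the state
C = np.array([[1.0, 0.0]])   # the (single) output reads the first state

x = np.zeros((2, 1))
for k in range(50):
    u = np.array([[1.0]])    # a unit step input
    x = A @ x + B @ u
y = C @ x
print(y[0, 0])               # approaches C (I - A)^-1 B = 2.5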
Modern control theory is carried out in the state space, and can deal with multiple-input and multiple-output (MIMO) systems. This overcomes the limitations of classical control theory in more sophisticated design problems, such as fighter aircraft control, with the limitation that no frequency-domain analysis is possible. In modern design, a system is represented to the greatest advantage as a set of decoupled first-order differential equations defined using state variables. Nonlinear, multivariable, adaptive and robust control theories come under this division. Being fairly new, modern control theory has many areas yet to be explored. Scholars like Rudolf E. Kálmán and Aleksandr Lyapunov are well known among the people who have shaped modern control theory.

The stability of a general dynamical system with no input can be described with Lyapunov stability criteria. For simplicity, the following descriptions focus on continuous-time and discrete-time linear systems. Mathematically, this means that for a causal linear system to be stable, all of the poles of its transfer function must have negative real parts, i.e. the real part of each pole must be less than zero. Practically speaking, stability requires that the transfer function's complex poles reside in the open left half of the complex plane (for continuous time) or inside the unit circle (for discrete time). The difference between the two cases is simply due to the traditional method of plotting continuous-time versus discrete-time transfer functions: the continuous Laplace transform is in Cartesian coordinates, where the x axis is the real axis, and the discrete Z-transform is in circular coordinates, where the ρ axis is the real axis.

When the appropriate conditions above are satisfied, a system is said to be asymptotically stable; the variables of an asymptotically stable control system always decrease from their initial value and do not show permanent oscillations. Permanent oscillations occur when a pole has a real part exactly equal to zero (in the continuous-time case) or a modulus equal to one (in the discrete-time case). If a simply stable system response neither decays nor grows over time, and has no oscillations, it is marginally stable; in this case the system transfer function has non-repeated poles at the complex-plane origin (i.e. their real and complex components are zero in the continuous-time case). Oscillations are present when poles with real part equal to zero have an imaginary part not equal to zero.

If a system in question has an impulse response of x[n] = 0.5^n u[n], then the Z-transform is X(z) = 1/(1 − 0.5 z⁻¹), which has a pole at z = 0.5 (zero imaginary part). This system is BIBO (asymptotically) stable, since the pole is inside the unit circle. However, if the impulse response were x[n] = 1.5^n u[n], then the Z-transform is X(z) = 1/(1 − 1.5 z⁻¹), which has a pole at z = 1.5 and is not BIBO stable, since the pole has a modulus strictly greater than one. (A short numeric check of these two examples follows below.)

Numerous tools exist for the analysis of the poles of a system. These include graphical systems like the root locus, Bode plots or the Nyquist plots.

Mechanical changes can make equipment (and control systems) more stable. Sailors add ballast to improve the stability of ships. Cruise ships use antiroll fins that extend transversely from the side of the ship for perhaps 30 feet (10 m) and are continuously rotated about their axes to develop forces that oppose the roll.

Controllability and observability are main issues in the analysis of a system before deciding the best control strategy to be applied, or whether it is even possible to control or stabilize the system.
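The numeric check promised above: for a discrete-time LTI system, BIBO stability is equivalent to the impulse response being absolutely summable, and partial sums of the two impulse responses behave accordingly.

# Partial sums of |x[n]| for the two impulse responses above.
for pole in (0.5, 1.5):
    total = sum(pole ** n for n in range(60))
    print(f"pole {pole}: sum of first 60 terms = {total:.6g}")
# pole 0.5: converges toward 1 / (1 - 0.5) = 2
# pole 1.5: already about 7e10 after 60 terms and unbounded in the limit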
Controllability is related to the possibility of forcing the system into a particular state by using an appropriate control signal. If a state is not controllable, then no signal will ever be able to control the state. If a state is not controllable, but its dynamics are stable, then the state is termed stabilizable. Observability instead is related to the possibility of observing, through output measurements, the state of a system. If a state is not observable, the controller will never be able to determine the behavior of an unobservable state and hence cannot use it to stabilize the system. However, similar to the stabilizability condition above, if a state cannot be observed it might still be detectable.

From a geometrical point of view, looking at the states of each variable of the system to be controlled, every "bad" state of these variables must be controllable and observable to ensure good behavior in the closed-loop system. That is, if one of the eigenvalues of the system is not both controllable and observable, this part of the dynamics will remain untouched in the closed-loop system. If such an eigenvalue is not stable, the dynamics of this eigenvalue will be present in the closed-loop system, which therefore will be unstable. Unobservable poles are not present in the transfer-function realization of a state-space representation, which is why the latter is sometimes preferred in dynamical systems analysis. Solutions to problems of an uncontrollable or unobservable system include adding actuators and sensors.

Several different control strategies have been devised in the past years. These vary from extremely general ones (PID controller) to others devoted to very particular classes of systems (especially robotics or aircraft cruise control).

A control problem can have several specifications. Stability, of course, is always present. The controller must ensure that the closed-loop system is stable, regardless of the open-loop stability. A poor choice of controller can even worsen the stability of the open-loop system, which must normally be avoided. Sometimes it is desired to obtain particular dynamics in the closed loop: i.e. that the poles have Re[λ] < −λ̄, where λ̄ is a fixed value strictly greater than zero, instead of simply asking that Re[λ] < 0.

Another typical specification is the rejection of a step disturbance; including an integrator in the open-loop chain (i.e. directly before the system under control) easily achieves this. Other classes of disturbances need different types of sub-systems to be included.

Other "classical" control theory specifications regard the time response of the closed-loop system. These include the rise time (the time needed by the control system to reach the desired value after a perturbation), peak overshoot (the highest value reached by the response before reaching the desired value) and others (settling time, quarter-decay). Frequency-domain specifications are usually related to robustness (see below). Modern performance assessments use some variation of integrated tracking error (IAE, ISA, CQI).

A control system must always have some robustness property. A robust controller is such that its properties do not change much if applied to a system slightly different from the mathematical one used for its synthesis.
This requirement is important, as no real physical system truly behaves like the series of differential equations used to represent it mathematically. Typically a simpler mathematical model is chosen in order to simplify calculations, otherwise, the truesystem dynamicscan be so complicated that a complete model is impossible. The process of determining the equations that govern the model's dynamics is calledsystem identification. This can be done off-line: for example, executing a series of measures from which to calculate an approximated mathematical model, typically itstransfer functionor matrix. Such identification from the output, however, cannot take account of unobservable dynamics. Sometimes the model is built directly starting from known physical equations, for example, in the case of amass-spring-dampersystem we know thatmx¨(t)=−Kx(t)−Bx˙(t){\displaystyle m{\ddot {x}}(t)=-Kx(t)-\mathrm {B} {\dot {x}}(t)}. Even assuming that a "complete" model is used in designing the controller, all the parameters included in these equations (called "nominal parameters") are never known with absolute precision; the control system will have to behave correctly even when connected to a physical system with true parameter values away from nominal. Some advanced control techniques include an "on-line" identification process (see later). The parameters of the model are calculated ("identified") while the controller itself is running. In this way, if a drastic variation of the parameters ensues, for example, if the robot's arm releases a weight, the controller will adjust itself consequently in order to ensure the correct performance. Analysis of the robustness of a SISO (single input single output) control system can be performed in the frequency domain, considering the system's transfer function and usingNyquistandBode diagrams. Topics includegain and phase marginand amplitude margin. For MIMO (multi-input multi output) and, in general, more complicated control systems, one must consider the theoretical results devised for each control technique (see next section). I.e., if particular robustness qualities are needed, the engineer must shift their attention to a control technique by including these qualities in its properties. A particular robustness issue is the requirement for a control system to perform properly in the presence of input and state constraints. In the physical world every signal is limited. It could happen that a controller will send control signals that cannot be followed by the physical system, for example, trying to rotate a valve at excessive speed. This can produce undesired behavior of the closed-loop system, or even damage or break actuators or other subsystems. Specific control techniques are available to solve the problem:model predictive control(see later), andanti-wind up systems. The latter consists of an additional control block that ensures that the control signal never exceeds a given threshold. For MIMO systems, pole placement can be performed mathematically using astate space representationof the open-loop system and calculating a feedback matrix assigning poles in the desired positions. In complicated systems this can require computer-assisted calculation capabilities, and cannot always ensure robustness. Furthermore, all system states are not in general measured and so observers must be included and incorporated in pole placement design. Processes in industries likeroboticsand theaerospace industrytypically have strong nonlinear dynamics. 
In control theory it is sometimes possible to linearize such classes of systems and apply linear techniques, but in many cases it can be necessary to devise from scratch theories permitting control of nonlinear systems. These techniques, e.g. feedback linearization, backstepping, sliding mode control and trajectory linearization control, normally take advantage of results based on Lyapunov's theory. Differential geometry has been widely used as a tool for generalizing well-known linear control concepts to the nonlinear case, as well as showing the subtleties that make it a more challenging problem. Control theory has also been used to decipher the neural mechanism that directs cognitive states.[19] When the system is controlled by multiple controllers, the problem is one of decentralized control. Decentralization is helpful in many ways, for instance, it helps control systems to operate over a larger geographical area. The agents in decentralized control systems can interact using communication channels and coordinate their actions. A stochastic control problem is one in which the evolution of the state variables is subjected to random shocks from outside the system. A deterministic control problem is not subject to external random shocks. Every control system must guarantee first the stability of the closed-loop behavior. For linear systems, this can be obtained by directly placing the poles. Nonlinear control systems use specific theories (normally based on Aleksandr Lyapunov's theory) to ensure stability without regard to the inner dynamics of the system. The possibility of fulfilling different specifications varies with the model considered and the control strategy chosen. Many active and historical figures have made significant contributions to control theory.
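As a brief illustration of the pole-placement idea mentioned above for linear systems, here is a minimal sketch assuming SciPy's place_poles routine is available. The system matrices and the desired closed-loop pole locations are illustrative values only, not taken from the text.

```python
import numpy as np
from scipy.signal import place_poles

# Illustrative open-loop system x' = Ax + Bu (values chosen only for the demo).
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])     # double integrator: open-loop poles at 0, 0
B = np.array([[0.0],
              [1.0]])

# Desired closed-loop poles, both with Re[lambda] < 0.
desired = np.array([-2.0, -3.0])

# Compute a state-feedback gain K so that A - B*K has the desired poles.
K = place_poles(A, B, desired).gain_matrix

closed_loop = A - B @ K
print("closed-loop poles:", np.linalg.eigvals(closed_loop))  # approx. -2 and -3
```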
https://en.wikipedia.org/wiki/Control_theory#Stability
A spatial light modulator (SLM) is a device that can control the intensity, phase, or polarization of light in a spatially varying manner. A simple example is an overhead projector transparency. Usually when the term SLM is used, it means that the transparency can be controlled by a computer. SLMs are primarily marketed for image projection, display devices,[1] and maskless lithography.[citation needed] SLMs are also used in optical computing and holographic optical tweezers. Usually, an SLM modulates the intensity of the light beam. However, it is also possible to produce devices that modulate the phase of the beam, or both the intensity and the phase simultaneously. It is also possible to produce devices that modulate the polarization of the beam, or the polarization, phase, and intensity simultaneously.[2] SLMs are used extensively in holographic data storage setups to encode information into a laser beam, similarly to the way a transparency does for an overhead projector. They can also be used as part of a holographic display technology. In the 1980s, large SLMs were placed on overhead projectors to project computer monitor contents to the screen. Since then, more modern projectors have been developed where the SLM is built inside the projector. These are commonly used in meetings for presentations. Liquid crystal SLMs can help solve problems related to laser microparticle manipulation; in this case spiral beam parameters can be changed dynamically.[3] As its name implies, the image on an electrically addressed spatial light modulator (EASLM) is created and changed electronically, as in most electronic displays. EASLMs usually receive input via a conventional interface such as VGA or DVI input. They are available at resolutions up to QXGA (2048 × 1536). Unlike ordinary displays, they are usually much smaller (having an active area of about 2 cm²) as they are not normally meant to be viewed directly. An example of an EASLM is the digital micromirror device (DMD) at the heart of DLP displays, or LCoS displays using ferroelectric liquid crystals (FLCoS) or nematic liquid crystals (electrically controlled birefringence effect). Spatial light modulators can be either reflective or transmissive depending on their design and purpose.[4] DMDs, short for digital micromirror devices, are spatial light modulators that specifically work with binary amplitude-only modulation.[5][6] Each pixel on the SLM can only be in one of two states: "on" or "off". The main purpose of the SLM is to control and adjust the amplitude of the light. Phase modulation can be achieved using a DMD by using Lee holography techniques, or by using the superpixel method.[7][6] The image on an optically addressed spatial light modulator (OASLM), also known as a light valve, is created and changed by shining light encoded with an image on its front or back surface. A photosensor allows the OASLM to sense the brightness of each pixel and replicate the image using liquid crystals. As long as the OASLM is powered, the image is retained even after the light is extinguished. An electrical signal is used to clear the whole OASLM at once. They are often used as the second stage of a very-high-resolution display, such as one for a computer-generated holographic display. In a process called active tiling, images displayed on an EASLM are sequentially transferred to different parts of an OASLM, before the whole image on the OASLM is presented to the viewer.
As EASLMs can run as fast as 2500 frames per second, it is possible to tile around 100 copies of the image on the EASLM onto an OASLM while still displaying full-motion video on the OASLM. This potentially gives images with resolutions of above 100 megapixels. Multiphoton intrapulse interference phase scan (MIIPS) is a technique based on the computer-controlled phase scan of a linear-array spatial light modulator. By applying the phase scan to an ultrashort pulse, MIIPS can not only characterize but also manipulate the ultrashort pulse to obtain the needed pulse shape at the target spot (such as a transform-limited pulse for optimized peak power, or other specific pulse shapes). This technique offers full calibration and control of the ultrashort pulse, with no moving parts and a simple optical setup. Linear-array SLMs that use nematic liquid crystal elements are available that can modulate amplitude, phase, or both simultaneously.[8][9]
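A minimal scalar Fourier-optics sketch of the phase-only modulation discussed above: the SLM is modelled simply as multiplying an incident field by exp(iφ), and the far field is approximated with a Fourier transform. The grid size, grating period, and plane-wave illumination are illustrative assumptions, not a model of any particular device.

```python
import numpy as np

# Minimal scalar Fourier-optics sketch: a phase-only SLM multiplies the
# incident field by exp(i*phi); the far field is approximated by an FFT.
N = 256
x = np.arange(N)
X, Y = np.meshgrid(x, x)

incident = np.ones((N, N), dtype=complex)        # uniform plane-wave illumination

# Blazed-grating phase pattern: in the far field this steers the beam
# sideways instead of leaving it in the central (zero-order) spot.
period = 16                                       # pixels per 2*pi phase ramp
phi = 2 * np.pi * (X % period) / period
field_after_slm = incident * np.exp(1j * phi)

far_field = np.fft.fftshift(np.fft.fft2(field_after_slm))
intensity = np.abs(far_field) ** 2

peak = np.unravel_index(np.argmax(intensity), intensity.shape)
print("brightest far-field pixel:", peak)         # displaced from the centre (128, 128)
```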
https://en.wikipedia.org/wiki/Spatial_light_modulator
Incomputability theory, anundecidable problemis adecision problemfor which aneffective method(algorithm) to derive the correct answer does not exist. More formally, an undecidable problem is a problem whose language is not arecursive set; see the articleDecidable language. There areuncountablymany undecidable problems, so the list below is necessarily incomplete. Though undecidable languages are not recursive languages, they may besubsetsofTuringrecognizable languages: i.e., such undecidable languages may be recursively enumerable. Many, if not most, undecidable problems in mathematics can be posed asword problems: determining when two distinct strings of symbols (encoding some mathematical concept or object) represent the same object or not. For undecidability in axiomatic mathematics, seeList of statements undecidable in ZFC.
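As a brief illustration of why such problems resist any effective method, the following sketch restates the classical diagonal argument for the halting problem, the archetypal undecidable problem, in code form. The halts function here is hypothetical; the point of the argument is precisely that no such total, always-correct function can exist.

```python
def halts(program_source: str, input_data: str) -> bool:
    """Hypothetical decider: returns True iff running `program_source`
    on `input_data` would eventually halt. Assumed, not implementable."""
    raise NotImplementedError

def paradox(program_source: str) -> None:
    """Diagonal construction: do the opposite of what the decider predicts."""
    if halts(program_source, program_source):
        while True:          # if the decider says "halts", loop forever
            pass
    # otherwise halt immediately

# Feeding `paradox` its own source to `halts` yields a contradiction either way:
# if halts(paradox, paradox) were True, paradox would loop forever (not halt);
# if it were False, paradox would halt immediately. Hence no correct `halts` exists.
```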
https://en.wikipedia.org/wiki/List_of_undecidable_problems
XML External Entity attack, or simply XXE attack, is a type of attack against an application that parses XML input. This attack occurs when XML input containing a reference to an external entity is processed by a weakly configured XML parser. This attack may lead to the disclosure of confidential data, DoS attacks, server-side request forgery, port scanning from the perspective of the machine where the parser is located, and other[which?] system impacts.[1] The XML 1.0 standard defines the structure of an XML document. The standard defines a concept called an entity, which is a term that refers to multiple types of data units. One of those types of entities is an external general/parameter parsed entity, often shortened to external entity, that can access local or remote content via a declared system identifier. The system identifier is assumed to be a URI that can be accessed by the XML processor when processing the entity. The XML processor then replaces occurrences of the named external entity with the content referenced by the system identifier. If the system identifier contains tainted data and the XML processor dereferences this tainted data, the XML processor may disclose confidential information normally not accessible by the application. Similar attack vectors apply to the usage of external DTDs, external style sheets, external schemas, etc., which, when included, allow similar external-resource-inclusion attacks. Attacks can include disclosing local files, which may contain sensitive data such as passwords or private user data, using file:// schemes or relative paths in the system identifier. Since the attack occurs relative to the application processing the XML document, an attacker may use this trusted application to pivot to other internal systems, possibly disclosing other internal content via HTTP requests or launching an SSRF attack against any unprotected internal services. In some situations, an XML processor library that is vulnerable to client-side memory corruption issues may be exploited by dereferencing a malicious URI, possibly allowing arbitrary code execution under the application account. Other attacks can access local resources that may not stop returning data, possibly impacting application availability if too many threads or processes are not released. The application does not need to explicitly return the response to the attacker for it to be vulnerable to information disclosures. An attacker can leverage DNS information to exfiltrate data through subdomain names to a DNS server under their control.[2] Example payloads are given in OWASP's Testing for XML Injection (WSTG-INPV-07).[3] When the PHP "expect" module is loaded, remote code execution may be possible with a modified payload. Since the entire XML document is communicated from an untrusted client, it is not usually possible to selectively validate or escape tainted data within the system identifier in the DTD. The XML processor could be configured to use a local static DTD and disallow any declared DTD included in the XML document.
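A minimal sketch of the hardening just described, assuming the third-party lxml library is available: the parser is configured not to resolve entities, not to load external DTDs, and not to touch the network. The payload and file path are illustrative, and exact behaviour varies by parser and version; purpose-built wrappers such as defusedxml take a similar approach.

```python
from lxml import etree

# Illustrative payload: an external general entity whose system identifier
# points at a local file (the classic XXE disclosure pattern).
malicious = b"""<?xml version="1.0"?>
<!DOCTYPE data [
  <!ENTITY xxe SYSTEM "file:///etc/passwd">
]>
<data>&xxe;</data>"""

# Hardened parser configuration: do not resolve entities, do not load
# external DTDs, and do not fetch anything over the network.
safe_parser = etree.XMLParser(
    resolve_entities=False,
    load_dtd=False,
    no_network=True,
)

doc = etree.fromstring(malicious, parser=safe_parser)
# The &xxe; reference should remain unexpanded instead of being replaced
# with the contents of the referenced file.
print(etree.tostring(doc))
```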
https://en.wikipedia.org/wiki/XML_external_entity_attack
Aneural Turing machine(NTM) is arecurrent neural networkmodel of aTuring machine. The approach was published byAlex Graveset al. in 2014.[1]NTMs combine the fuzzypattern matchingcapabilities ofneural networkswith thealgorithmicpower ofprogrammable computers. An NTM has a neural network controller coupled toexternal memoryresources, which it interacts with through attentional mechanisms. The memory interactions are differentiable end-to-end, making it possible to optimize them usinggradient descent.[2]An NTM with along short-term memory(LSTM) network controller can infer simple algorithms such as copying, sorting, and associative recall from examples alone.[1] The authors of the original NTM paper did not publish theirsource code.[1]The first stable open-source implementation was published in 2018 at the 27th International Conference on Artificial Neural Networks, receiving a best-paper award.[3][4][5]Other open source implementations of NTMs exist but as of 2018 they are not sufficiently stable for production use.[6][7][8][9][10][11][12]The developers either report that thegradientsof their implementation sometimes becomeNaNduring training for unknown reasons and cause training to fail;[10][11][9]report slow convergence;[7][6]or do not report the speed of learning of their implementation.[12][8] Differentiable neural computersare an outgrowth of Neural Turing machines, withattention mechanismsthat control where the memory is active, and improve performance.[13]
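A rough numpy sketch of the content-based addressing step at the core of the NTM's read mechanism: the controller's key vector is compared against every memory row by cosine similarity, sharpened by a parameter beta, normalised with a softmax, and used to take a weighted sum of memory rows. This is a simplification of the published model; the array sizes, random contents, and sharpening value are illustrative only.

```python
import numpy as np

def content_addressing(memory, key, beta):
    """Cosine-similarity content addressing: compare the controller's key
    vector against every memory row, sharpen with `beta`, softmax-normalise."""
    eps = 1e-8
    sims = memory @ key / (
        np.linalg.norm(memory, axis=1) * np.linalg.norm(key) + eps)
    weights = np.exp(beta * sims)
    return weights / weights.sum()

def read(memory, weights):
    """Differentiable read: a weighted sum of memory rows under the attention weights."""
    return weights @ memory

rng = np.random.default_rng(0)
memory = rng.normal(size=(8, 4))               # 8 memory locations, 4-dimensional contents
key = memory[3] + 0.05 * rng.normal(size=4)    # a slightly noisy copy of row 3

w = content_addressing(memory, key, beta=10.0)
print("read weights:", np.round(w, 3))         # concentrated on location 3
print("read vector :", np.round(read(memory, w), 3))
```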
https://en.wikipedia.org/wiki/Neural_Turing_machine
Invideo games,artificial intelligence(AI) is used to generate responsive, adaptive orintelligentbehaviors primarily innon-playable characters(NPCs) similar tohuman-like intelligence. Artificial intelligence has been an integral part of video games since their inception in 1948, first seen in the gameNim.[1]AI in video games is a distinct subfield and differs from academic AI. It serves to improve the game-player experience rather thanmachine learningor decision making. During thegolden age of arcade video gamesthe idea of AI opponents was largely popularized in the form of graduated difficulty levels, distinct movement patterns, and in-game events dependent on the player's input. Modern games often implement existing techniques such aspathfindinganddecision treesto guide the actions of NPCs. AI is often used in mechanisms which are not immediately visible to the user, such asdata miningandprocedural-content generation.[2]One of the most infamous examples of this NPC technology and gradual difficulty levels can be found in the gameMike Tyson's Punch-Out!!(1987).[3] In general, game AI does not, as might be thought and sometimes is depicted to be the case, mean a realization of an artificial person corresponding to an NPC in the manner of theTuring testor anartificial general intelligence. The termgame AIis used to refer to a broad set ofalgorithmsthat also include techniques fromcontrol theory,robotics,computer graphicsandcomputer sciencein general, and so video game AI may often not constitute "true AI" in that such techniques do not necessarily facilitate computer learning or other standard criteria, only constituting "automated computation" or a predetermined and limited set of responses to a predetermined and limited set of inputs.[4][5][6] Many industries and corporate voices[who?]argue that game AI has come a long way in the sense that it has revolutionized the way humans interact with all forms of technology, although many[who?]expert researchers are skeptical of such claims, and particularly of the notion that such technologies fit the definition of "intelligence" standardly used in thecognitive sciences.[4][5][6][7]Industry voices[who?]make the argument that AI has become more versatile in the way we use all technological devices for more than their intended purpose because the AI allows the technology to operate in multiple ways, allegedly developing their own personalities and carrying out complex instructions of the user.[8][9] People[who?]in the field of AI have argued that video game AI is not true intelligence, but an advertising buzzword used to describe computer programs that use simple sorting and matching algorithms to create the illusion of intelligent behavior while bestowing software with a misleading aura of scientific or technological complexity and advancement.[4][5][6][10]Since game AI for NPCs is centered on appearance of intelligence and good gameplay within environment restrictions, its approach is very different from that of traditional AI. Game playing was an area of research in AI from its inception. One of the first examples of AI is the computerized game ofNimmade in 1951 and published in 1952. 
Despite being advanced technology in the year it was made, 20 years beforePong, the game took the form of a relatively small box and was able to regularly win games even against highly skilled players of the game.[1]In 1951, using theFerranti Mark 1machine of theUniversity of Manchester,Christopher Stracheywrote acheckersprogram andDietrich Prinzwrote one forchess.[11]These were among the first computer programs ever written.Arthur Samuel's checkers program, developed in the middle 1950s and early 1960s, eventually achieved sufficient skill to challenge a respectable amateur.[12]Work on checkers and chess would culminate in the defeat ofGarry KasparovbyIBM'sDeep Bluecomputer in 1997.[13]The firstvideo gamesdeveloped in the 1960s and early 1970s, likeSpacewar!,Pong, andGotcha(1973), were games implemented ondiscrete logicand strictly based on the competition of two players, without AI. Games that featured asingle playermode with enemies started appearing in the 1970s. The first notable ones for thearcadeappeared in 1974: theTaitogameSpeed Race(racing video game) and theAtarigamesQwak(duck huntinglight gun shooter) andPursuit(fighter aircraft dogfighting simulator). Two text-based computer games,Star Trek(1971) andHunt the Wumpus(1973), also had enemies. Enemy movement was based on stored patterns. The incorporation ofmicroprocessorswould allow more computation and random elements overlaid into movement patterns. It was during thegolden age of video arcade gamesthat the idea of AI opponents was largely popularized, due to the success ofSpace Invaders(1978), which sported an increasing difficulty level, distinct movement patterns, and in-game events dependent onhash functionsbased on the player's input.Galaxian(1979) added more complex and varied enemy movements, including maneuvers by individual enemies who break out of formation.Pac-Man(1980) introduced AI patterns tomaze games, with the added quirk of different personalities for each enemy.Karate Champ(1984) later introduced AI patterns tofighting games.First Queen(1988) was atacticalaction RPGwhich featured characters that can be controlled by the computer's AI in following the leader.[14][15]Therole-playing video gameDragon Quest IV(1990) introduced a "Tactics" system, where the user can adjust the AI routines ofnon-player charactersduring battle, a concept later introduced to theaction role-playing gamegenre bySecret of Mana(1993). Games likeMadden Football,Earl Weaver BaseballandTony La Russa Baseballall based their AI in an attempt to duplicate on the computer the coaching or managerial style of the selected celebrity. Madden, Weaver and La Russa all did extensive work with these game development teams to maximize the accuracy of the games.[citation needed]Later sports titles allowed users to "tune" variables in the AI to produce a player-defined managerial or coaching strategy. The emergence of new game genres in the 1990s prompted the use of formal AI tools likefinite-state machines.Real-time strategygames taxed the AI with many objects, incomplete information, pathfinding problems, real-time decisions and economic planning, among other things.[16]The first games of the genre had notorious problems.Herzog Zwei(1989), for example, had almost broken pathfinding and very basic three-state state machines for unit control, andDune II(1992) attacked the players' base in a beeline and used numerous cheats.[17]Later games in the genre exhibited more sophisticated AI. 
Later games have usedbottom-upAI methods, such as theemergent behaviourand evaluation of player actions in games likeCreaturesorBlack & White.Façade (interactive story)was released in 2005 and used interactive multiple way dialogs and AI as the main aspect of game. Games have provided an environment for developing artificial intelligence with potential applications beyond gameplay. Examples includeWatson, aJeopardy!-playing computer; and theRoboCuptournament, where robots are trained to compete in soccer.[18] Many experts[who?]complain that the "AI" in the termgame AIoverstates its worth, as game AI is not aboutintelligence, and shares few of the objectives of the academic field of AI. Whereas "real AI" addresses fields of machine learning, decision making based on arbitrary data input, and even the ultimate goal ofstrong AIthat can reason, "game AI" often consists of a half-dozen rules of thumb, orheuristics, that are just enough to give a good gameplay experience.[citation needed]Historically, academic game-AI projects have been relatively separate from commercial products because the academic approaches tended to be simple and non-scalable. Commercial game AI has developed its own set of tools, which have been sufficient to give good performance in many cases.[2] Game developers' increasing awareness of academic AI and a growing interest in computer games by the academic community is causing the definition of what counts as AI in a game to become lessidiosyncratic. Nevertheless, significant differences between different application domains of AI mean that game AI can still be viewed as a distinct subfield of AI. In particular, the ability to legitimately solve some AI problems in games bycheatingcreates an important distinction. For example, inferring the position of an unseen object from past observations can be a difficult problem when AI is applied to robotics, but in a computer game a NPC can simply look up the position in the game'sscene graph. Such cheating can lead to unrealistic behavior and so is not always desirable. But its possibility serves to distinguish game AI and leads to new problems to solve, such as when and how to cheat.[citation needed] The major limitation to strong AI is the inherent depth of thinking and the extreme complexity of the decision-making process. This means that although it would be then theoretically possible to make "smart" AI the problem would take considerable processing power.[citation needed] Game AI/heuristic algorithms are used in a wide variety of quite disparate fields inside a game. The most obvious is in the control of any NPCs in the game, although "scripting" (decision tree) is currently the most common means of control.[19]These handwritten decision trees often result in "artificial stupidity" such as repetitive behavior, loss of immersion, or abnormal behavior in situations the developers did not plan for.[20] Pathfinding, another common use for AI, is widely seen inreal-time strategygames. 
Pathfinding is the method for determining how to get a NPC from one point on a map to another, taking into consideration the terrain, obstacles and possibly "fog of war".[21][22]Commercial videogames often use fast and simple "grid-based pathfinding", wherein the terrain is mapped onto a rigid grid of uniform squares and a pathfinding algorithm such asA*orIDA*is applied to the grid.[23][24][25]Instead of just a rigid grid, some games use irregular polygons and assemble anavigation meshout of the areas of the map that NPCs can walk to.[23][26]As a third method, it is sometimes convenient for developers to manually select "waypoints" that NPCs should use to navigate; the cost is that such waypoints can create unnatural-looking movement. In addition, waypoints tend to perform worse than navigation meshes in complex environments.[27][28]Beyond static pathfinding,navigationis a sub-field of Game AI focusing on giving NPCs the capability to navigate in a dynamic environment, finding a path to a target while avoiding collisions with other entities (other NPC, players...) or collaborating with them (group navigation).[citation needed]Navigation in dynamic strategy games with large numbers of units, such asAge of Empires(1997) orCivilization V(2010), often performs poorly; units often get in the way of other units.[28] Rather than improve the Game AI to properly solve a difficult problem in the virtual environment, it is often more cost-effective to just modify the scenario to be more tractable. If pathfinding gets bogged down over a specific obstacle, a developer may just end up moving or deleting the obstacle.[29]InHalf-Life(1998), the pathfinding algorithm sometimes failed to find a reasonable way for all the NPCs to evade a thrown grenade; rather than allow the NPCs to attempt to bumble out of the way and risk appearing stupid, the developers instead scripted the NPCs to crouch down and cover in place in that situation.[30] Many contemporary video games fall under the category of action,first-person shooter, or adventure. In most of these types of games, there is some level of combat that takes place. The AI's ability to be efficient in combat is important in these genres. A common goal today is to make the AI more human or at least appear so. One of the more positive and efficient features found in modern-day video game AI is the ability to hunt. AI originally reacted in a very black and white manner. If the player were in a specific area then the AI would react in either a complete offensive manner or be entirely defensive. In recent years, the idea of "hunting" has been introduced; in this 'hunting' state the AI will look for realistic markers, such as sounds made by the character or footprints they may have left behind.[31]These developments ultimately allow for a more complex form of play. With this feature, the player can actually consider how to approach or avoid an enemy. This is a feature that is particularly prevalent in thestealthgenre. Another development in recent game AI has been the development of "survival instinct". In-game computers can recognize different objects in an environment and determine whether it is beneficial or detrimental to its survival. Like a user, the AI can look for cover in a firefight before taking actions that would leave it otherwise vulnerable, such as reloading a weapon or throwing a grenade. There can be set markers that tell it when to react in a certain way. 
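As a concrete illustration of the grid-based pathfinding described earlier, here is a minimal A* sketch. The grid, uniform movement cost, 4-connected neighbourhood, and Manhattan heuristic are illustrative assumptions; production games typically layer navigation meshes, dynamic obstacles, and path smoothing on top of something like this.

```python
import heapq

def astar(grid, start, goal):
    """Minimal A* on a 4-connected grid. grid[r][c] == 1 marks an obstacle.
    Uses the Manhattan distance as an admissible heuristic."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    open_heap = [(h(start), 0, start, None)]
    came_from, g_best = {}, {start: 0}
    while open_heap:
        _, g, node, parent = heapq.heappop(open_heap)
        if node in came_from:
            continue                      # already expanded with a better cost
        came_from[node] = parent
        if node == goal:                  # reconstruct the path back to start
            path = []
            while node is not None:
                path.append(node)
                node = came_from[node]
            return path[::-1]
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1
                if ng < g_best.get((nr, nc), float("inf")):
                    g_best[(nr, nc)] = ng
                    heapq.heappush(open_heap, (ng + h((nr, nc)), ng, (nr, nc), node))
    return None                           # no path exists

grid = [[0, 0, 0, 0],
        [1, 1, 1, 0],
        [0, 0, 0, 0]]
print(astar(grid, (0, 0), (2, 0)))        # routes around the wall of obstacles
```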
For example, if the AI is given a command to check its health throughout a game then further commands can be set so that it reacts a specific way at a certain percentage of health. If the health is below a certain threshold then the AI can be set to run away from the player and avoid it until another function is triggered. Another example could be if the AI notices it is out of bullets, it will find a cover object and hide behind it until it has reloaded. Actions like these make the AI seem more human. However, there is still a need for improvement in this area. Another side-effect of combat AI occurs when two AI-controlled characters encounter each other; first popularized in theid SoftwaregameDoom, so-called 'monster infighting' can break out in certain situations. Specifically, AI agents that are programmed to respond to hostile attacks will sometimes attackeach otherif their cohort's attacks land too close to them.[citation needed]In the case ofDoom, published gameplay manuals even suggest taking advantage of monster infighting in order to survive certain levels and difficulty settings. Procedural content generation(PCG) is an AI technique to autonomously create ingame content throughalgorithmswith minimal input from designers.[32]PCG is typically used to dynamically generate game features such as levels, NPC dialogue, and sounds. Developers input specific parameters to guide the algorithms into making content for them. PCG offers numerous advantages from both a developmental and player experience standpoint. Game studios are able to spend less money on artists and save time on production.[33]Players are given a fresh, highly replayable experience as the game generates new content each time they play. PCG allows game content to adapt in real time to the player's actions.[34] Generative algorithms (a rudimentary form of AI) have been used for level creation for decades. The iconic 1980dungeon crawlercomputer gameRogueis a foundational example. Players are tasked with descending through the increasingly difficult levels of a dungeon to retrieve the Amulet of Yendor. The dungeon levels are algorithmically generated at the start of each game. The save file is deleted every time the player dies.[35]The algorithmic dungeon generation creates unique gameplay that would not otherwise be there as the goal of retrieving the amulet is the same each time. Opinions on total level generation as seen in games likeRoguecan vary. Some developers can be skeptical of the quality of generated content and desire to create a world with a more "human" feel so they will use PCG more sparingly.[32]Consequently, they will only use PCG to generate specific components of an otherwise handcrafted level. A notable example of this isUbisoft's2017tactical shooterTom Clancy's Ghost Recon Wildlands. Developers used apathfinding algorithmtrained with adata setof real maps to create road networks that would weave through handcrafted villages within the game world.[34]This is an intelligent use of PCG as the AI would have a large amount of real world data to work with and roads are straightforward to create. However, the AI would likely miss nuances and subtleties if it was tasked with creating a village where people live. As AI has become more advanced, developer goals are shifting to create massive repositories of levels from data sets. In 2023, researchers fromNew York Universityand theUniversity of the Witwatersrandtrained alarge language modelto generate levels in the style of the 1981puzzle gameSokoban. 
They found that the model excelled at generating levels with specifically requested characteristics such as difficulty level or layout.[32]However, current models such as the one used in the study require large datasets of levels to be effective. They concluded that, while promising, the high data cost of large language models currently outweighs the benefits for this application.[32]Continued advancements in the field will likely lead to more mainstream use in the future. Themusical scoreof a video game is an important expression of the emotional tone of a scene to the player.Sound effectssuch as the noise of a weapon hitting an enemy help indicate the effect of the player's actions. Generating these in real time creates an engaging experience for the player because the game is more responsive to their input.[32]An example is the 2013adventure gameProteuswhere an algorithm dynamically adapts the music based on the angle the player is viewing the ingame landscape from.[35] Recent breakthroughs in AI have resulted in the creation of advanced tools that are capable of creating music and sound based on evolving factors with minimal developer input. One such example is the MetaComposure music generator. MetaComposure is anevolutionary algorithmdesigned to generate original music compositions during real time gameplay to match the current mood of the environment.[36]The algorithm is able to assess the current mood of the game state through "mood tagging". Research indicates that there is a significantpositive statistical correlationregarding player rated game engagement and the dynamically generated musical compositions when they accurately match their current emotions.[37] Game AI often amounts to pathfinding and finite-state machines. Pathfinding gets the AI from point A to point B, usually in the most direct way possible. State machines permit transitioning between different behaviors. TheMonte Carlo tree searchmethod[38]provides a more engaging game experience by creating additional obstacles for the player to overcome. The MCTS consists of a tree diagram in which the AI essentially playstic-tac-toe. Depending on the outcome, it selects a pathway yielding the next obstacle for the player. In complex video games, these trees may have more branches, provided that the player can come up with several strategies to surpass the obstacle. Academic AI may play a role within game AI, outside the traditional concern of controlling NPC behavior.Georgios N. Yannakakishighlighted four potential application areas:[2] Rather than procedural generation, some researchers have usedgenerative adversarial networks(GANs) to create new content. In 2018 researchers at Cornwall University trained a GAN on a thousand human-created levels forDoom; following training, the neural net prototype was able to design new playable levels on its own. Similarly, researchers at theUniversity of Californiaprototyped a GAN to generate levels forSuper Mario.[39]In 2020 Nvidia displayed a GAN-created clone ofPac-Man; the GAN learned how to recreate the game by watching 50,000 (mostly bot-generated) playthroughs.[40] Non-player charactersare entities within video games that are not controlled by players, but instead are managed by AI systems. NPCs contribute to the immersion, storytelling, and the mechanics of a game. They often serve as companions, quest-givers, merchants and much more. Their realism has advanced significantly in the past few years, thanks to improvements in AI technologies. 
NPCs are essential in both narrative-driven and open-world games. They help convey the lore and context of the game, making them pivotal to world-building and narrative progression. For instance, an NPC can provide critical information, offer quests, or simply populate the world to add a sense of realism to the game. Additionally, their role as quest-givers or merchants makes them integral to the gameplay loop, giving players access to resources, missions, or services that enable further progression. NPCs can also be designed to serve functional roles in games, such as acting as a merchant or providing a service to the player. These characters are central to facilitating game mechanics by acting as intermediaries between the player and in-game systems. Academics[who?] say the interactions between players and NPCs are often designed to be straightforward but contextually relevant, ensuring that the player receives necessary feedback or resources for gameplay continuity. Recent advancements[as of?] in artificial intelligence have significantly enhanced the complexity and realism of NPCs. Before these advancements, AI operated on pre-programmed behaviors, making NPCs predictable and repeatable. As AI has developed, NPCs have become more adaptive and able to respond dynamically to players. Experts[who?] think the integration of deep learning and reinforcement learning techniques has enabled NPCs to adjust their behavior in response to player actions, creating a more interactive and personalized gameplay experience. One such development is the use of adaptive behavior models, which allow NPCs to analyze and learn from players' decisions in real time. This allows for a much more engaging experience. For example, according to experts in the field,[who?] NPCs in modern video games can now react to player actions with increased sophistication, such as adjusting their tactics in combat or changing their dialogue based on past interactions. By using deep learning algorithms these systems emulate human-like decision-making, making NPCs feel more like real people rather than static game elements. Another advancement in NPC AI is the use of natural language processing, which allows NPCs to engage in more realistic conversations with players. Before this, NPC dialogue was limited to a fixed set of responses. It is said[by whom?] that NLP has improved the fluidity of NPC conversations, allowing them to respond more contextually to player inputs. This development has increased the depth and immersion of player-NPC interactions, as players can now engage in more complex dialogues that affect the storyline and gameplay outcomes. Additionally, deep learning models have made NPCs more capable of predicting player behavior. Deep learning allows NPCs to process large amounts of data and adapt to player strategies, making interactions with them less predictable and more varied. This creates a more immersive experience, as NPCs are now able to "learn" from player behavior, which provides a greater sense of realism within the game. Despite all of these advancements in NPC AI, there are still significant challenges that developers face in designing NPCs. They need to balance realism, functionality, and player expectations. The key challenge is to make sure that NPCs enhance the player's experience rather than disturb the gameplay. Overly realistic NPCs that behave unpredictably can frustrate players by hindering progression or breaking immersion.
Conversely, NPCs that are too predictable or simplistic may fail to engage players, reducing the overall effectiveness of the game's narrative and mechanics. Another factor that needs to be accounted for is the computational cost of implementing advanced AI for NPCs. These advanced AI techniques require a large amount of processing power, which can limit their usage. Balancing the performance of AI-driven NPCs with the game's overall technical limitations is crucial for ensuring smooth gameplay. Experts[who?] note that developers must allocate resources efficiently to avoid overburdening the game's systems, particularly in large, open-world games where numerous NPCs must interact with the player simultaneously. Finally, creating NPCs that can respond dynamically to a wide range of player behaviors remains a difficult task. NPCs must be able to handle both scripted interactions and unscripted scenarios where players may behave in unexpected ways. Designing NPCs capable of adapting to such variability requires complex AI models that can account for numerous possible interactions, which can be resource-intensive and time-consuming for developers. Players often ask whether the AI cheats (presumably so they can complain if they lose). In the context of artificial intelligence in video games, cheating refers to the programmer giving agents actions and access to information that would be unavailable to the player in the same situation.[42] Believing that the Atari 8-bit could not compete against a human player, Chris Crawford did not fix a bug in Eastern Front (1941) that benefited the computer-controlled Russian side.[43] Computer Gaming World in 1994 reported that "It is a well-known fact that many AIs 'cheat' (or, at least, 'fudge') in order to be able to keep up with human players".[44] For example, if the agents want to know if the player is nearby they can either be given complex, human-like sensors (seeing, hearing, etc.), or they can cheat by simply asking the game engine for the player's position. Common variations include giving AIs higher speeds in racing games to catch up to the player or spawning them in advantageous positions in first-person shooters. The use of cheating in AI shows the limitations of the "intelligence" achievable artificially; generally speaking, in games where strategic creativity is important, humans could easily beat the AI after a minimum of trial and error if it were not for this advantage. Cheating is often implemented for performance reasons, and in many cases it may be considered acceptable as long as the effect is not obvious to the player. While cheating refers only to privileges given specifically to the AI (it does not include the inhuman swiftness and precision natural to a computer), a player might call the computer's inherent advantages "cheating" if they result in the agent acting unlike a human player.[42] Sid Meier stated that he omitted multiplayer alliances in Civilization because he found that the computer was almost as good as humans in using them, which caused players to think that the computer was cheating.[45] Developers say that most game AIs are honest but they dislike players erroneously complaining about "cheating" AI. In addition, humans use tactics against computers that they would not against other people.[43] In the 1996 game Creatures, the user "hatches" small furry animals and teaches them how to behave. These "Norns" can talk, feed themselves, and protect themselves against vicious creatures.
It was the first popular application of machine learning in an interactive simulation.Neural networksare used by the creatures to learn what to do. The game is regarded as a breakthrough inartificial liferesearch, which aims to model the behavior of creatures interacting with their environment.[46] In the 2001first-person shooterHalo: Combat Evolvedthe player assumes the role of the Master Chief, battling various aliens on foot or in vehicles. Enemies use cover very wisely, and employ suppressing fire and grenades. The squad situation affects the individuals, so certain enemies flee when their leader dies. Attention is paid to the little details, with enemies notably throwing back grenades or team-members responding to being bothered. The underlying "behavior tree" technology has become very popular in the games industry sinceHalo 2.[46] The 2005psychological horrorfirst-person shooterF.E.A.R.has player characters engage abattalionof clonedsuper-soldiers, robots andparanormal creatures. The AI uses a planner to generate context-sensitive behaviors, the first time in a mainstream game. This technology is still used as a reference for many studios. The Replicas are capable of utilizing the game environment to their advantage, such as overturning tables and shelves to create cover, opening doors, crashing through windows, or even noticing (and alerting the rest of their comrades to) the player's flashlight. In addition, the AI is also capable of performing flanking maneuvers, using suppressing fire, throwing grenades to flush the player out of cover, and even playing dead. Most of these actions, in particular the flanking, is the result of emergent behavior.[47][48] Thesurvival horrorseriesS.T.A.L.K.E.R.(2007–) confronts the player with man-made experiments, military soldiers, and mercenaries known as Stalkers. The various encountered enemies (if the difficulty level is set to its highest) use combat tactics and behaviors such as healing wounded allies, giving orders, out-flanking the player and using weapons with pinpoint accuracy.[citation needed] The 2010real-time strategygameStarCraft II: Wings of Libertygives the player control of one of three factions in a 1v1, 2v2, or 3v3 battle arena. The player must defeat their opponents by destroying all their units and bases. This is accomplished by creating units that are effective at countering opponents' units. Players can play against multiple different levels of AI difficulty ranging from very easy to Cheater 3 (insane). The AI is able to cheat at the difficulty Cheater 1 (vision), where it can see units and bases when a player in the same situation could not. Cheater 2 gives the AI extra resources, while Cheater 3 gives an extensive advantage over its opponent.[49] Red Dead Redemption 2, released by Rockstar Games in 2018, exemplifies the advanced use of AI in modern video games. The game incorporates a highly detailed AI system that governs the behavior of NPCs and the dynamic game world. NPCs in the game display complex and varied behaviors based on a wide range of factors including their environment, player interactions, and time of day. 
This level of AI integration creates a rich, immersive experience where characters react to players in a realistic manner, contributing to the game's reputation as one of the most advanced open-world games ever created.[50] Generative artificial intelligence, AI systems that can respond to prompts and produce text, images, and audio and video clips, arose in 2023 with systems likeChatGPTandStable Diffusion. In video games, these systems could create the potential for game assets to be created indefinitely, bypassing typical limitations on human creations. For example, the 2024browser-basedsandboxgameInfinite Craftusesgenerative AIsoftware, includingLLaMA. When two elements are being combined, a new element is generated by the AI.[51]The 2024 browser-based gameOasisuses generative AI to simulate the video gameMinecraft.Oasisis trained on millions of hours of footage fromMinecraft, and predicts how the next frame of gameplay looks using this dataset.Oasisdoes not have object permanence because it does not store any data.[52] However, there are similarconcernsin other fields particularly the potential for loss of jobs normally dedicated to the creation of these assets.[53]In January 2024,SAG-AFTRA, a United States union representing actors, signed a contract with Replica Studios that would allow Replica to capture the voicework of union actors for creating AI voice systems based on their voices for use in video games, with the contract assuring pay and rights protections. While the contract was agreed upon by a SAG-AFTRA committee, many members expressed criticism of the move, having not been told of it until it was completed and that the deal did not do enough to protect the actors.[54] Recent advancements in AI for video games have led to more complex and adaptive behaviors in non-playable characters (NPCs). For instance, AI systems now utilize sophisticated techniques such as decision trees and state machines to enhance NPC interactions and realism, as discussed in "Artificial Intelligence in Games".[55]Recent advancements in AI for video games have also focused on improving dynamic and adaptive behaviors in NPCs. For example, recent research has explored the use of complex neural networks to enable NPCs to learn and adapt their behavior based on player actions, enhancing the overall gaming experience. This approach is detailed in the IEEE paper on "AI Techniques for Interactive Game Systems".[56]
https://en.wikipedia.org/wiki/Artificial_intelligence_in_video_games
RADIX 50[1][2][3] or RAD50[3] (also referred to as RADIX50,[4] RADIX-50[5] or RAD-50), is an uppercase-only character encoding created by Digital Equipment Corporation (DEC) for use on their DECsystem, PDP, and VAX computers. RADIX 50's 40-character repertoire (050 in octal) can encode six characters plus four additional bits into one 36-bit machine word (PDP-6, PDP-10/DECsystem-10, DECSYSTEM-20), three characters plus two additional bits into one 18-bit word (PDP-9,[2] PDP-15),[6] or three characters into one 16-bit word (PDP-11, VAX).[3] The actual encoding differs between the 36-bit and 16-bit systems. In 36-bit DEC systems RADIX 50 was commonly used in symbol tables for assemblers or compilers which supported six-character symbol names from a 40-character alphabet. This left four bits to encode properties of the symbol. For its similarities to the SQUOZE character encoding scheme used in IBM's SHARE Operating System for representing object code symbols, DEC's variant was also sometimes called DEC Squoze;[7] however, IBM SQUOZE packed six characters of a 50-character alphabet plus two additional flag bits into one 36-bit word.[6] RADIX 50 was not normally used in 36-bit systems for encoding ordinary character strings; file names were normally encoded as six six-bit characters, and full ASCII strings as five seven-bit characters and one unused bit per 36-bit word. RADIX 50 (also called Radix 50₈ format[2]) was used in Digital's 18-bit PDP-9 and PDP-15 computers to store symbols in symbol tables, leaving two extra bits per 18-bit word ("symbol classification bits").[2] Some strings in DEC's 16-bit systems were encoded as 8-bit bytes, while others used RADIX 50 (then also called MOD40).[3][8] In RADIX 50, strings were encoded in successive words as needed, with the first character within each word located in the most significant position. For example, using the PDP-11 encoding, the string "ABCDEF", with character values 1, 2, 3, 4, 5, and 6, would be encoded as a word containing the value 1×40² + 2×40¹ + 3×40⁰ = 1683, followed by a second word containing the value 4×40² + 5×40¹ + 6×40⁰ = 6606. Thus, 16-bit words encoded values ranging from 0 (three spaces) to 63999 ("999"). When there were fewer than three characters in a word, the last word for the string was padded with trailing spaces.[3] There were several minor variations of this encoding with differing interpretations of the 27, 28, 29 code points. Where RADIX 50 was used for filenames stored on media, the code points represent the $, %, * characters, and will be shown as such when listing the directory with utilities such as DIR.[9] When encoding strings in the PDP-11 assembler and other PDP-11 programming languages the code points represent the $, ., % characters, and are encoded as such with the default RAD50 macro in the global macros file, and this encoding was used in the symbol tables. Some early documentation for the RT-11 operating system considered the code point 29 to be undefined.[3] The use of RADIX 50 was the source of the filename size conventions used by Digital Equipment Corporation PDP-11 operating systems. Using RADIX 50 encoding, six characters of a filename could be stored in two 16-bit words, while three more extension (file type) characters could be stored in a third 16-bit word. Similarly, a three-character device name such as "DL1" could also be stored in a 16-bit word. The period that separated the filename and its extension, and the colon separating a device name from a filename, was implied (i.e., was not stored and always assumed to be present).
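A small sketch of the 16-bit (PDP-11) packing described above, using the "$ . %" interpretation of code points 27–29 (the assembler/symbol-table variant rather than the filename variant). It reproduces the "ABCDEF" → 1683, 6606 example and the 63999 maximum.

```python
# PDP-11 (16-bit) RADIX 50 sketch, using the "$ . %" interpretation of
# code points 27-29 described above.
RAD50_CHARS = " ABCDEFGHIJKLMNOPQRSTUVWXYZ$.%0123456789"   # 40 characters

def rad50_encode(text):
    """Encode an uppercase string into 16-bit words, three characters per word,
    padding the final group with trailing spaces."""
    text = text.upper()
    words = []
    for i in range(0, len(text), 3):
        group = text[i:i + 3].ljust(3)                      # pad with spaces
        value = 0
        for ch in group:
            value = value * 40 + RAD50_CHARS.index(ch)
        words.append(value)
    return words

def rad50_decode(words):
    """Decode a sequence of 16-bit words back into characters."""
    chars = []
    for value in words:
        group = []
        for _ in range(3):
            value, code = divmod(value, 40)
            group.append(RAD50_CHARS[code])
        chars.extend(reversed(group))
    return "".join(chars)

print(rad50_encode("ABCDEF"))        # [1683, 6606], matching the example above
print(rad50_decode([1683, 6606]))    # 'ABCDEF'
print(rad50_encode("999"))           # [63999], the largest encodable word
```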
https://en.wikipedia.org/wiki/DEC_MOD40
A file inclusion vulnerability is a type of web vulnerability that is most commonly found to affect web applications that rely on a scripting run time. This issue is caused when an application builds a path to executable code using an attacker-controlled variable in a way that allows the attacker to control which file is executed at run time. A file inclusion vulnerability is distinct from a generic directory traversal attack, in that directory traversal is a way of gaining unauthorized file system access, while a file inclusion vulnerability subverts how an application loads code for execution. Successful exploitation of a file inclusion vulnerability will result in remote code execution on the web server that runs the affected web application. An attacker can use remote code execution to create a web shell on the web server, which can be used for website defacement. Remote file inclusion (RFI) occurs when the web application downloads and executes a remote file. These remote files are usually obtained in the form of an HTTP or FTP URI as a user-supplied parameter to the web application. Local file inclusion (LFI) is similar to a remote file inclusion vulnerability except that instead of including remote files, only local files (i.e. files on the current server) can be included for execution. This issue can still lead to remote code execution by including a file that contains attacker-controlled data, such as the web server's access logs. In PHP the main cause is the use of unvalidated user input with a filesystem function that includes a file for execution. Most notable are the include and require statements. Most of the vulnerabilities can be attributed to novice programmers not being familiar with all of the capabilities of the PHP programming language. The PHP language has a directive which, if enabled, allows filesystem functions to use a URL to retrieve data from remote locations.[1] The directive is allow_url_fopen in PHP versions <= 4.3.4 and allow_url_include since PHP 5.2.0. In PHP 5.x this directive is disabled by default; in prior versions it was enabled by default.[2] To exploit the vulnerability an attacker will alter a variable that is passed to one of these functions to cause it to include malicious code from a remote resource. To mitigate this vulnerability all user input needs to be validated before being used.[3][4] Consider a PHP script that includes a file chosen by a request parameter such as language. The developer intended it to read in english.php or french.php, which will alter the application's behavior to display the language of the user's choice. But it is possible to inject another path using the language parameter. The best solution in this case is to use a whitelist of accepted language parameters (a minimal sketch of this approach is given below). If a strong method of input validation such as a whitelist cannot be used, then rely upon input filtering or validation of the passed-in path to make sure it does not contain unintended characters and character patterns. However, this may require anticipating all possible problematic character combinations. A safer solution is to use a predefined switch/case statement to determine which file to include rather than use a URL or form parameter to dynamically generate the path. JavaServer Pages (JSP) is a scripting language which can include files for execution at runtime; a JSP page that builds an include path from a request parameter is vulnerable to file inclusion in the same way. A Server Side Include is very uncommon and is not typically enabled on a default web server.
A server-side include can be used to gain remote code execution on a vulnerable web server.[6] Code that echoes an unvalidated request parameter into a page that is later processed for server-side includes is vulnerable to a remote-file inclusion vulnerability: the flaw is not an XSS vulnerability, but rather the inclusion of a new file to be executed by the server.
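A rough Python analogue of the whitelist mitigation recommended above (the original article's PHP and JSP snippets are not reproduced here; the file names, directory, and function are hypothetical illustrations of the pattern, not code from the source):

```python
import os

# Hypothetical whitelist-based analogue of the mitigation described above:
# the user-supplied "language" parameter selects from a fixed set of files
# instead of being concatenated into an include path.
ALLOWED_LANGUAGE_FILES = {
    "english": "english.txt",
    "french": "french.txt",
}

def load_language_text(language_param: str, base_dir: str = "lang") -> str:
    """Return the contents of a language file chosen from a whitelist.
    Any value outside the whitelist is rejected rather than used in a path."""
    filename = ALLOWED_LANGUAGE_FILES.get(language_param)
    if filename is None:
        raise ValueError("unsupported language parameter")
    with open(os.path.join(base_dir, filename), encoding="utf-8") as fh:
        return fh.read()

# load_language_text("french")             -> reads lang/french.txt
# load_language_text("../../etc/passwd")   -> raises ValueError instead of traversing
```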
https://en.wikipedia.org/wiki/File_inclusion_vulnerability
Apseudonymous remailerornym server, as opposed to ananonymous remailer, is anInternetsoftware program designed to allow people to writepseudonymousmessages onUsenetnewsgroups and send pseudonymousemail. Unlike purely anonymous remailers, it assigns its users a user name, and it keeps a database of instructions on how to return messages to the real user. These instructions usually involve the anonymous remailer network itself, thus protecting the true identity of the user. Primordial pseudonymous remailers once recorded enough information to trace the identity of the real user, making it possible for someone to obtain the identity of the real user through legal or illegal means. This form of pseudonymous remailer is no longer common. David Chaumwrote an article in 1981 that described many of the features present in modern pseudonymous remailers.[1] ThePenet remailer, which lasted from 1993 to 1996, was a popular pseudonymous remailer. Anym server(short for "pseudonymserver") is aserverthat provides an untraceable e-mail address, such that neither the nym server operator nor the operators of the remailers involved can discover which nym corresponds to which real identity. To set up a nym, one creates aPGPkeypair and submits it to the nym server, along with instructions (called areply block) toanonymous remailers(such asCypherpunkorMixmaster) on how to send a message to one's real address. The nym server returns a confirmation through this reply block. One then sends a message to the address in the confirmation. To send a message through the nym server so that theFromaddress is the nym, one adds a few headers,[clarification needed]signs the message with one's nym key, encrypts it with the nym server key, and sends the message to the nym server, optionally routing it through some anonymous remailers. When the nym server receives the message it decrypts it and sends it on to the intended recipient, with theFromaddress indicating one's nym. When the nym server gets a message addressedtothe nym, it appends it to the nym's reply block and sends it to the first remailer in the chain, which sends it to the next and so on until it reaches your real address. It is considered good practice to include instructions to encrypt it on the way, so that someone (or some organization) doing in/outtraffic analysison the nym server cannot easily match the message received by you to the one sent by the nym server. Existing "multi-use reply block" nym servers were shown to be susceptible to passive traffic analysis with one month's worth of incomingspam(based on 2005 figures) in a paper byBram Cohen,Len Sassaman, andNick Mathewson.[2]
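A toy sketch of the layered encryption idea behind reply blocks and remailer chains: the message is wrapped in one encryption layer per hop, so each remailer can remove only its own layer. This uses symmetric Fernet keys from the third-party cryptography package purely for brevity; real nym servers and remailers use PGP-style public-key encryption, and the actual routing and header handling are not modeled here.

```python
from cryptography.fernet import Fernet

# Toy illustration of layered ("onion") wrapping for a remailer chain.
hop_keys = [Fernet.generate_key() for _ in range(3)]   # one key per remailer hop

def wrap_for_chain(message: bytes, keys) -> bytes:
    """Encrypt the message in layers, innermost layer last to be applied,
    so the first remailer in the chain removes the outermost layer."""
    for key in reversed(keys):
        message = Fernet(key).encrypt(message)
    return message

def peel_one_layer(blob: bytes, key: bytes) -> bytes:
    """What a single remailer does: remove exactly its own layer."""
    return Fernet(key).decrypt(blob)

blob = wrap_for_chain(b"mail for the nym's real address", hop_keys)
for key in hop_keys:                 # each hop peels one layer in turn
    blob = peel_one_layer(blob, key)
print(blob)                          # the original message emerges at the last hop
```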
https://en.wikipedia.org/wiki/Nym_server